out of memory

I have developed a data field and I am really on the memory limit for some of the devices. Basically as soon as I add any more code/functionality to the data field I get "out of memory" in the sim (even one line of code may tip it over). I understand that memory available for data fields is restricted and the only way forward for me is to optimize memory usage of my data field.

On the Fenix 5 sim, I get a peak usage of 28.5 kB, so I'm clearly hitting the memory limit, but when I bring up the memory usage stats in the sim, the sizes don't add up to 28.5 kB. I'm using a background process; could it be that the memory used in the background is not shown here?

  

In order to optimize memory usage, would it make a big difference to move from layouts to dc.writes? I currently have many layouts, and I load a specific layout based on the obscurity flags and device type (with a simple data field I could just let the system render with the proper font size, but this is a complex data field, so I need to take care of that myself). It would add a lot of complexity to the code if I were to change to dc.writes, but if that will save me some memory it might be worthwhile. What are your thoughts?

I am also loading a bitmap resource at runtime, from a list of approximately 10 bitmaps, each around 250 bytes. I only load one bitmap, not the entire list, to preserve as much memory as possible. Is there anything in terms of handling resources and bitmaps that I should think about in order to save memory?

I store some user settings and other variables that I need to share between the main process and the background process. Will objects stored in the object store also consume from the 28 kB memory pool? And if so, is there any other approach that would be more efficient?

Sorry for asking such open questions, but I wanted to see if some of you experienced developers have some tips and tricks for me. Many thanks in advance!

/Fredrik

  • I'm using a background process; could it be that the memory used in the background is not shown here?

    My understanding is that the background process and the main app have separate memory pools (and limits), as they are separate processes. I don't dev apps with background processes, though.

    On the Fenix 5 sim, I get a peak usage of 28.5 kB, so I'm clearly hitting the memory limit, but when I bring up the memory usage stats in the sim, the sizes don't add up to 28.5 kB.

    Peak usage != current usage. If you're creating objects (and allowing them to go out of scope), that would cause a memory spike. Loading a resource may also cause a memory spike. If your background process returns an object to the main process, seems like that could also cause a memory spike. I imagine that making any Monkey C API call at all has the potential to cause a memory spike, as you don't know what it's doing behind the scenes. I imagine that getActivityInfo(), for example, would cause a spike, as it returns a pretty big object. (This is assuming that object isn't "always" available somehow -- I doubt it is.)

    I can't say what's exactly happening in your specific case, but one of the known causes of memory spikes is if/when you push app settings from the phone to the app.

    In order to optimize memory usage, would it make a big difference to move from layouts to dc.writes? I currently have many layouts, and I load a specific layout based on the obscurity flags and device type (with a simple data field I could just let the system render with the proper font size, but this is a complex data field, so I need to take care of that myself). It would add a lot of complexity to the code if I were to change to dc.writes, but if that will save me some memory it might be worthwhile. What are your thoughts?

    Layouts are pretty memory-intensive. And I've even saved lots of memory by moving from complex dynamically calculated layouts (really dc.writes) to precalculated ones.

    You do need a function to consume your own precalculated layouts, but you get to save a lot of memory by encoding layouts as efficiently as possible (e.g. as bit-packed arrays of numbers, i.e. 32-bit integers). With CIQ 3 watches, you can store arbitrary data as JSON resources. For older watches, you could wrap your data in a function so it doesn't consume memory all the time. (This is another way you can get memory spikes, but at least it reduces the "resting" amount of memory required. It works for any kind of data that doesn't need to be accessed all the time.)
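    As a rough sketch of what that can look like (all names and values here are invented, not from any real app): each label is encoded as three numbers (x, y, font) in a flat array, one small generic function consumes it, and wrapping the arrays in a function keeps them off the heap except during the call.

    ```monkeyc
    using Toybox.Graphics;
    using Toybox.WatchUi;

    // Hypothetical: one "layout" = a flat array of integers, not a Rez layout.
    // Wrapping the arrays in a function trades permanent "resting" memory
    // for a short spike while the function runs.
    function getLayout(obscurityFlags) {
        if ((obscurityFlags & WatchUi.OBSCURE_LEFT) != 0) {
            return [10, 20, Graphics.FONT_SMALL,
                    10, 60, Graphics.FONT_NUMBER_MILD];
        }
        return [30, 20, Graphics.FONT_SMALL,
                30, 60, Graphics.FONT_NUMBER_MILD];
    }

    // One generic consumer replaces many layout-specific draw calls.
    function drawLayout(dc, layout, texts) {
        for (var i = 0; i < texts.size(); i++) {
            dc.drawText(layout[i * 3], layout[i * 3 + 1], layout[i * 3 + 2],
                        texts[i], Graphics.TEXT_JUSTIFY_LEFT);
        }
    }
    ```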

    Two other things that use up memory are application settings (properties) and resources:

    - Properties are stored in a dictionary of key/value pairs which is always present in memory, so the longer the property names and the more properties you have, the more memory you use up.

    - As long as you call loadResource() at least once, all of the resource tables are loaded into memory permanently. This means that each resource (but not its contents) takes up a fixed amount of memory. This even includes application settings strings (which are 99% likely not even used by the app itself).
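    For example (the key name here is invented), the key string itself lives in that always-resident dictionary, so shorter property names are cheaper:

    ```monkeyc
    using Toybox.Application;

    // Hypothetical: use "t" instead of "thresholdHeartRate" as the key id
    // (the id must match what's declared in your properties/settings XML).
    function getThreshold() {
        return Application.getApp().getProperty("t");
    }
    ```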

    I'm not 100% sure whether the object store consumes memory, but making an educated guess and also looking at your screenshot, I'd say the answer is "no". You do have a ton of code and data. There are a lot of threads about saving memory by refactoring code (most of which makes your code an unreadable, unmaintainable mess); the highlights are:

    - don't use switch statements

    - prefer array lookup to multiple if/else statements when possible

    - don't use dictionaries

    - avoid using classes/objects. (e.g. if you need to return more than one thing, return an array instead of an object. if you need to implement some abstract data type like a queue, use static functions instead of a class)

    - manually inline 1-liner functions

    - use functions for longer code that's repeated

    - use hardcoded constants (with comments) instead of enums. (symbols and symbol accesses consume memory for code)

    - wrap big data that doesn't need to be accessed all the time in a function. (this does create more memory spikes, but it also lowers your "resting" memory usage)
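    A few of those tips sketched in Monkey C (all names invented for illustration):

    ```monkeyc
    // Hardcoded constants with comments instead of an enum
    // (enum symbols and symbol accesses cost code memory):
    const MODE_PACE = 0;
    const MODE_SPEED = 1;
    const MODE_HR = 2;

    // Array lookup instead of an if/else chain, with the data wrapped
    // in a function so it isn't resident all the time:
    function getUnitLabel(mode) {
        return ["min/km", "km/h", "bpm"][mode];
    }

    // Return an array instead of defining a result class:
    function getMinMax(values) {
        var min = values[0];
        var max = values[0];
        for (var i = 1; i < values.size(); i++) {
            if (values[i] < min) { min = values[i]; }
            if (values[i] > max) { max = values[i]; }
        }
        return [min, max]; // caller reads [0] and [1]
    }
    ```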

  • My understanding is that the background process and main app have separate memory pools (and limits) as they are separate processes.

    Not really. When you have a 28.6 kB limit for an app, and the background service has 4 kB of code, the DF itself is left with 24.6 kB. The code for the background gets loaded with the DF, as code with the background annotation is still available to the DF itself. I use the same code in a background service and the main app for doing a makeWebRequest in some widgets, for example.

  • The two processes share some of the same *code*, but they have different memory limits, no? And they have separate heaps and stacks, right?

    For example, the Fenix 6X Pro allows 32768 bytes for a background process but 131072 bytes for a data field.

    If you had code that only runs in the foreground process which allocates 50 K of memory for some reason, that wouldn't affect the background process, right? That's what I meant.

    Sorry for being imprecise.

  • As an analogy, if I compile two executables in any language/environment which share a common statically linked library and can only communicate using OS interprocess communication, I wouldn't consider them to share the same "memory", even if adding code to the shared library would mean that the memory usage of both executables increases.

  • Actually, you'll likely see the same memory limit with a background and widget in this case.

    The general way this works is that the prg is "layered":

    0 to x-1: (:background) annotated

    x to y-1: (:glance) annotated

    y and up: no annotation

    (28.6k total)

    When the background runs, 0 to x-1 gets loaded into memory (stuff with the background annotation).

    When a glance view runs, 0 to y-1 (stuff with the background or glance annotation).

    And for the app itself, 0 to y+whatever (everything).

    The code with the background annotation impacts how much memory is available to the DF itself.
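    In code terms (class name invented here), anything marked (:background) lands in the layer that every run mode loads, so it counts against the DF's limit too:

    ```monkeyc
    using Toybox.Background;
    using Toybox.System;

    // Loaded both when the background service runs and when the
    // data field runs, so its size is paid for in both.
    (:background)
    class MyServiceDelegate extends System.ServiceDelegate {
        function onTemporalEvent() {
            Background.exit(null);
        }
    }
    ```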

  • The code with the background annotation impacts how much memory is available to the DF itself.

    I get that part, but once again, are we not still talking about separate *processes* which happen to have common *code*? Code is only one aspect of a program that takes up memory.

    Otherwise it wouldn't make sense for there to be a separate "background" memory limit in compiler.json.

    Again I will return to the example of a shared (statically linked) library in any language/OS -- just because your library gets bigger and impacts all executables which link to it, doesn't mean the executables themselves share a memory limit.

    Actually, you'll likely see the same memory limit with a background and widget in this case.

    So in this example (fenix6xpro):

    A datafield/device app/widget has the exact same memory limit as its associated background process?

  • Some devices have the same max for the DF itself as for the background service.

  • Obviously, but that doesn't mean that background process isn't separate from the datafield process. That's what I mean when I say they don't share memory. (But they do share code.)

    If I have two apps running on Windows with the same code and same virtual memory limit, do they share the same memory? If certain user input causes one app to consume more memory, does the other app (which didn't receive that input) also consume the same amount of memory?

    If one app runs out of memory, does the other app automatically run out of memory?

  • Thank you so much for your answers, I will look into each and every one of them in more detail.

  • If you free up 50 bytes of code space in the background, you have 50 bytes more for the data field.

    Also, while the background runs on its own, when it does the Background.exit(), the amount of data you pass impacts the data field. If you return 500 bytes, then in onBackgroundData the DF uses 500 bytes, so you can free up some memory by returning 100 bytes instead of 500, for example.
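    A sketch of that (the field names are invented): pass a positional array rather than a dictionary with long string keys, since everything handed to Background.exit() is held in the DF's memory when onBackgroundData() fires:

    ```monkeyc
    using Toybox.Background;

    (:background)
    function onTemporalEvent() {
        var temperature = 21;
        var humidity = 65;
        // Instead of:
        // Background.exit({"temperature" => temperature, "humidity" => humidity});
        Background.exit([temperature, humidity]); // far fewer bytes to hand over
    }

    // In the AppBase:
    function onBackgroundData(data) {
        var temperature = data[0];
        var humidity = data[1];
    }
    ```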