Memory Requirements when Storing JSON from Glance

I'm working on a glance that preloads a JSON file via a web request and caches it in storage for the widget. Since the JSON can be fairly large, I need to implement safeguards to prevent the glance from crashing. If there isn't enough memory available, the glance should simply skip caching the JSON - letting the widget handle the request instead when it's launched.

Handling large JSONs during reception is straightforward: if it's too big, I get a -403 error. But writing to storage seems to be much more memory-intensive. Here, I need to check in advance how much memory is available and try to predict whether storing the JSON will actually succeed.
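As a rough sketch of that safeguard (the function name, key, and the multiplier here are my own guesses, not documented values), one could compare the reported free memory against a generous multiple of the payload's estimated in-memory size before attempting to cache, with a try/catch as a second line of defense:

```monkeyc
import Toybox.Application.Storage;
import Toybox.Lang;
import Toybox.System;

// Hypothetical safety factor, based on the observations below
// (storing a dictionary seems to need several times its own size).
const STORE_FACTOR = 5;

// Try to cache the decoded JSON; return false if we skipped or failed,
// in which case the widget re-requests the data itself on launch.
function cacheIfSafe(key as String, data as Dictionary, approxSize as Number) as Boolean {
    var stats = System.getSystemStats();
    if (stats.freeMemory < approxSize * STORE_FACTOR) {
        return false; // not enough headroom, skip caching
    }
    try {
        Storage.setValue(key, data);
        return true;
    } catch (ex) {
        // setValue throws when storage or size limits are hit
        return false;
    }
}
```

Whether a fixed factor is good enough is exactly the open question in this thread; the try/catch is there because the pre-check alone may not be reliable.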

While experimenting, I noticed something surprising. A modest 3.5kB JSON results in a Dictionary from Communications that consumes over 12kB of memory. Then, to store that Dictionary, I need an additional 24kB of free memory. So in total, that small 3.5kB JSON ends up requiring around 36kB of memory throughout the process.

Have you seen similar behavior in your experience?

  • yeahhh, still struggling with exactly the same... So much memory is needed to save a 5 kB JSON that I've started to question my decision to support as many devices as possible. (In my case it's a data field, and I wanted to support older devices that can only make web requests from a background app, which has tight memory constraints. Now it looks like all the refactoring I spent days on to do it in the background will be thrown away, together with the devices that have CIQ < 5.0...)

  • If you have control over the server side, then it might be worth breaking the API down into smaller parts. That way you'll need more requests, but each one returns a smaller JSON that can be processed with less memory. Unfortunately this isn't an option in my case, as I use a public API.

  • Yes, unfortunately the API in my case is also outside of my control.

  • I’ve been experimenting with the relationship between the size of a JSON object in memory and the memory required to store it using Storage.setValue. To estimate the in-memory size, I compare the used memory immediately after calling makeWebRequest and again in onReceive.

    From what I can tell, there’s no linear correlation. For relatively small JSON objects (around 10–15 kB in memory), Storage.setValue works fine if there’s about 1.2× that amount of free memory—so roughly 12–18 kB. But when testing with larger JSON data in a widget context, the required free memory increases drastically. For example, a 90 kB JSON object requires about 400 kB of free memory for Storage.setValue to succeed.

    What’s puzzling is that this isn’t reflected in the simulator’s Peak Memory view. If I stay just under the threshold, the Peak Memory usage is slightly below 400 kB out of the 763 kB available (on an Epix 2 Pro). Increasing the JSON size just a bit beyond that causes Storage.setValue to fail with an out-of-memory error.

    It seems like whatever Storage.setValue is doing internally isn't fully accounted for in the simulator’s Active Memory metrics.

    Caching isn’t critical for my app—it’s mainly to speed up startup—so I can just add safeguards to skip caching when the JSON size gets too large.

    That said, I’m still a bit at a loss as to what's going on here. I’ll probably need to test this behavior on an actual device before drawing any conclusions.

    For context: my app is for home automation. Users configure a server-side view of the devices they want to display on the watch. The 90 kB JSON corresponds to roughly 100 devices, which is already plenty. Still, I want to make sure users can’t crash the app by creating views that are too large.
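    The measurement approach described at the top of this post could look something like the following sketch (class and field names are made up): snapshot usedMemory right after issuing the request, then again once the response dictionary exists, and take the difference as a rough estimate of the deserialized JSON's footprint.

    ```monkeyc
    import Toybox.Communications;
    import Toybox.Lang;
    import Toybox.System;

    class JsonSizeProbe {
        private var _usedAfterRequest as Number = 0;

        function request(url as String) as Void {
            Communications.makeWebRequest(url, null, {
                :method => Communications.HTTP_REQUEST_METHOD_GET,
                :responseType => Communications.HTTP_RESPONSE_CONTENT_TYPE_JSON
            }, method(:onReceive));
            // Baseline taken after the request is issued, before the
            // response dictionary has been allocated.
            _usedAfterRequest = System.getSystemStats().usedMemory;
        }

        function onReceive(responseCode as Number, data as Dictionary or Null) as Void {
            if (responseCode == 200 && data != null) {
                var approx = System.getSystemStats().usedMemory - _usedAfterRequest;
                System.println("JSON footprint ~" + approx + " bytes");
            }
        }
    }
    ```

    This is only an approximation, of course, since other allocations can happen between the two snapshots.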

  • There are a few reasons why there seems to be no correlation. One is easy to understand: the same JSON size (in bytes) can look very different when deserialized. To see it, you don't even need a web request; just think about the following example. Let's say the data you want to store (in memory) consists of 100 locations. The first instinct is probably to have an array of 100 location objects (each with a latitude and a longitude as Float). Do you see any problem with this?

    I'll tell you: you'll obviously have the 200 Floats; there's no easy way around that. But then you also have the overhead of 100 objects plus 1 array. It's convenient to use: you can index the array and there's your location object.

    But if you want to save a lot of memory, then try to have (almost) 100 fewer objects. (I haven't tested it, but my instinct says that an object costs more than the 2 Floats we're really interested in.)

    You could have 2 arrays: latitudes as Array<Float>, and the same for longitudes. Only 2 arrays instead of 1 array and 100 objects.

    This is why a 2 kB JSON string, when deserialized, can occupy anywhere from a few kB up to 40 kB depending on its shape (and the deserialization itself probably uses even more).

    In some cases it might be worth having a lightweight web server between the user and the actual API that acts as a proxy and repacks the data into a shape that better fits what the client needs.
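    As an untested sketch of the two layouts above (class and variable names are made up, and the coordinate values are just placeholders):

    ```monkeyc
    import Toybox.Lang;

    // Heavier layout: 1 array + 100 Location objects, each carrying
    // the per-object overhead on top of its two Floats.
    class Location {
        var lat as Float;
        var lon as Float;
        function initialize(la as Float, lo as Float) {
            lat = la;
            lon = lo;
        }
    }

    var locations = [new Location(47.4979, 19.0402), new Location(47.4980, 19.0403)];

    // Lighter layout: two parallel arrays indexed together, so the
    // only container overhead is the 2 array headers.
    var lats = [47.4979, 47.4980] as Array<Float>;
    var lons = [19.0402, 19.0403] as Array<Float>;

    // lats[i] / lons[i] replace locations[i].lat / locations[i].lon
    ```

    The trade-off is readability: you give up the convenience of a named object per item in exchange for dropping one object allocation per element.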

  • So, one aspect is the correlation between the serialized JSON and its memory footprint. But what really puzzles me is why storing the JSON dictionary using Storage.setValue seems to require four times more memory than the dictionary occupies in memory. Whatever format they’re using to serialize dictionaries, it shouldn’t need that much space—should it? If I were to write my own serialization to JSON, it would consume only a fraction of the memory that Storage.setValue does.

    That said, I can live with the current limitations. There’s no real need to optimize the web request itself, especially since it handles out-of-memory errors gracefully and simply returns an error code.

    As for storage, it’s not critical—it's just used for caching to speed up app startup. Without the cached JSON, the app just fetches a fresh one before displaying anything. The cached data is only used to render the menu structure and items; the actual item states are shown as “waiting” until fresh data is available.

    So at this point, it’s really just about figuring out a reliable rule for deciding when it’s safe to write the JSON to storage.

  • (I haven't tested it, but my instinct says that an object costs more than the 2 Floats we're really interested in)

    This came up recently (pretty sure you were in some of those threads), and it was established that instances of classes (other than "primitives" and arrays) have a huge overhead, such that even in 2025, if you're tight on memory, it helps to eliminate the use of classes and dictionaries for large amounts of data. (For a data field that runs on old devices and works with largish amounts of data such as moving averages, I saved a lot of memory by converting a queue class into a set of global functions that pass around array data, for example.)

    I did a quick test:

    class Foo {} // empty class which implicitly extends Lang.Object

    var foo = new Foo(); // 84 bytes (SDK 8.1.1, fr955, sim memory viewer)
    var object = new Object(); // 36 bytes

    var emptyDictionary = {}; // 116 bytes
    var emptyArray = []; // 16 bytes

    This old comment might shed some light:

    https://forums.garmin.com/developer/connect-iq/f/discussion/4395/memory-efficiencies

    "So, you must remember that every class you declare implicitly extends Lang.Object. This adds overhead to every class automatically. Every class/module appears to be implemented as a Dictionary. The dictionary maps from a symbol name to the object in question, so when you declare a variable x in your class, an entry for that symbol is added to the class dictionary automatically by the compiler."

    (The person who wrote that comment now works in the Connect IQ team.)

    It is interesting that the Monkey C "primitives" (Number, Long, Float, Double, Boolean, Null, String) - as well as arrays - also inherit from Lang.Object yet don't have the same overhead. I assume that they have special support in the language to avoid that overhead.

    The overhead for objects doesn't explain some of the things the-ninth is seeing, such as:

    "no linear correlation"

    Regardless of the amount of overhead for each object (100 bytes, 100 KB, 100 MB), it would still make sense for the correlation to be roughly linear (e.g. if the size of the input is doubled/tripled/quadrupled, you'd expect the size of the output to be doubled/tripled/quadrupled), assuming that the "shape" of the data remains uniform as the size increases.

    "For example, a 90 kB JSON object requires about 400 kB of free memory for Storage.setValue to succeed.

    What’s puzzling is that this isn’t reflected in the simulator’s Peak Memory view. If I stay just under the threshold, the Peak Memory usage is slightly below 400 kB out of the 763 kB available (on an Epix 2 Pro). Increasing the JSON size just a bit beyond that causes Storage.setValue to fail with an out-of-memory error."

  • It would be very helpful if functions like Storage.setValue() returned an error when there isn't enough memory—similar to how Communication.makeWebRequest() provides a response code. For operations that require a significant amount of memory, this kind of feedback is important. There's no reason the rest of the code shouldn't be able to continue running gracefully even if a storage operation fails.

    Anyway, I'll do some testing on the real device to see if it behaves the same as the simulator, or whether there are different thresholds for when Storage.setValue() begins to fail.

  • I’ve started running some tests on a real device, and I’m seeing some strange behavior.

    I set up a test where I continuously increase the size of the JSON payload received from the server. This scenario worked fine in the simulator, thanks to the safeguards I implemented.

    However, on the actual device, the app crashes — and what’s worse, it takes down other Glances as well. I had a second, completely unrelated app running, whose Glance was visible during the test. That app also performs web requests for JSON data, and it crashed too.

    This suggests that for Glances, there might be some kind of shared memory component across CIQ apps.

    Below is the CIQ_LOG output from both crashing apps. Again, the second app has no direct connection to the test scenario.

    ---
    Error: Out Of Memory Error
    Details: 'Failed invoking <symbol>'
    Time: 2025-05-22T04:40:47Z
    Part-Number: 006-B4313-00
    Firmware-Version: '20.22'
    Language-Code: eng
    ConnectIQ-Version: 5.1.1
    Store-Id: 8a6b62f1-5450-4912-8799-2eeaf9779d97
    Store-Version: 60
    Filename: F56A2548
    Appname: openHAB
    Stack:
      - pc: 0x10001161
      - pc: 0x100008e6
    ---
    Error: Out Of Memory Error
    Details: failed inside handle_json_callback
    Time: 2025-05-22T04:40:47Z
    Part-Number: 006-B4313-00
    Firmware-Version: '20.22'
    Language-Code: eng
    ConnectIQ-Version: 5.1.1
    Store-Id: 261ec5d2-bcbe-4b8b-b770-5f92f2c19ba4
    Store-Version: 74
    Filename: F3ED5327
    Appname: 'evcc-beta'
    Stack:
  • The documentation for Storage.setValue states:

    "There is a limit on the size of the Object Store that can vary between devices. If you reach this limit, the value will not be saved and an exception will be thrown. Also, values are limited to 32 KB in size."

    While an exception is defined for exceeding the overall store limit, it's unclear how the API handles the 32 KB per-value limit. What exactly is supposed to happen if a single value exceeds 32 KB?

    I’d really like to find a reliable way to safely write large, nested dictionaries that represent JSON data.
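    One possible workaround for the 32 KB per-value limit quoted above (a sketch only, assuming the top-level JSON is a dictionary; the helper name and key scheme are made up) is to split the dictionary across several storage keys, so that no single stored value carries the whole payload:

    ```monkeyc
    import Toybox.Application.Storage;
    import Toybox.Lang;

    // Hypothetical helper: store each top-level entry of a large
    // dictionary under its own storage key, plus an index of the
    // keys so the data can be reassembled on load.
    function storeChunked(prefix as String, data as Dictionary) as Boolean {
        var keys = data.keys();
        try {
            for (var i = 0; i < keys.size(); i++) {
                Storage.setValue(prefix + "." + keys[i], data[keys[i]]);
            }
            Storage.setValue(prefix + ".index", keys);
            return true;
        } catch (ex) {
            return false; // hit a storage or memory limit partway through
        }
    }
    ```

    Note this only addresses the per-value limit: each setValue call still needs working memory for serialization, so it wouldn't by itself explain or avoid the 4x overhead discussed earlier, and a nested value can still exceed 32 KB on its own.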