Under Review

Application.Storage: unreasonably high overhead for deleteValue (200% of object size) and setValue (200%-300% of object size)

When using the Application.Storage API with large objects, there's a noticeable memory overhead for get, set and delete operations.

- getValue seems to have a 100% (of object size) overhead, above and beyond the size of the actual data that's returned. This is fairly reasonable.

- deleteValue seems to have a 200% overhead. This is strange; I would expect zero (or constant) overhead here.

- setValue seems to have a 300% overhead if the key exists, but a 200% overhead if the key does not. This means that calling deleteValue before setValue can prevent your app from running out of memory! (A sketch of this workaround follows below.)
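
For illustration, the workaround I mean looks something like this (just a sketch; the helper name, the key and the surrounding function are mine, not anything from the API):

    using Toybox.Application.Storage;

    // Replacing an existing value: deleting the key first appears to reduce
    // the peak-memory spike from ~300% of the object size to ~200%.
    function replaceStoredValue(key, value) {
        Storage.deleteValue(key);       // key no longer exists...
        Storage.setValue(key, value);   // ...so this seems to take the ~200% path
    }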

These numbers were determined by:

- saving a large (e.g. 10 KB) array to storage

- running various tests in which used memory was logged at key points, and examining the peak memory reported by the simulator to estimate the overhead (given that nothing else in the program would produce such a memory spike); a sketch of the logging helper follows this list
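
The "used memory" numbers come from a trivial helper along these lines (the name is mine); peak memory is read off the simulator rather than from code:

    using Toybox.System;

    // Log the current used memory with a tag so the log shows where we are.
    function logUsedMemory(tag) {
        System.println(tag + ": used = " + System.getSystemStats().usedMemory + " B");
    }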

e.g.

0) I have a skeleton test program (a simple data field) which normally has a peak memory of ~8 KB and a "resting" used memory of ~6 KB.

Before running the test program, I save a hardcoded 10 KB array of numbers via Application.Storage.setValue(), using a modified program with the same UUID so that the test program can access the data. To avoid any confusion, the hardcoded array and the code that saves it to storage are then commented out / removed before the tests are run.
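
The seeding code is roughly the following (the key name "bigArray" is illustrative, and in my actual test the array is hardcoded rather than generated; generating it here just keeps the sketch short):

    using Toybox.Application.Storage;

    // Run once from a throwaway build sharing the test app's UUID, then
    // commented out / removed before any measurements are taken.
    function seedStorage() {
        var big = new [2500];   // adjust the count so the stored array is ~10 KB
        for (var i = 0; i < big.size(); i++) {
            big[i] = i;
        }
        Storage.setValue("bigArray", big);
    }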

1) I add code to 0) that calls getValue on the large array at init time. Before the call, the used memory is ~6 KB. After the call, the used memory is ~16 KB (as expected), and the peak memory is ~26 KB. That's a spike of ~10 KB above the final used memory (26 KB peak vs. 16 KB used), i.e. roughly equal to the object size.
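
In code, 1) amounts to something like this (the class name is illustrative, and logUsedMemory is the helper sketched earlier):

    using Toybox.Application.Storage;
    using Toybox.WatchUi;

    class TestField extends WatchUi.SimpleDataField {
        var data;

        function initialize() {
            SimpleDataField.initialize();
            label = "Test";
            logUsedMemory("before getValue");    // ~6 KB
            data = Storage.getValue("bigArray");
            logUsedMemory("after getValue");     // ~16 KB; simulator peak ~26 KB
        }

        function compute(info) {
            return 0;
        }
    }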

2) I add code to 1) that calls setValue after getValue. The used memory before and after setValue is ~16 KB (as expected), but the peak memory is ~46 KB! That's a ~30 KB spike for a 10 KB object, indicating roughly 300% overhead for setValue. (A sketch covering both 2) and 3) follows 3) below.)

3) I take 2) and insert a call to deleteValue() between getValue() and setValue(). Now the peak memory is only ~36 KB, a ~20 KB spike, which still indicates a rough overhead of 200% for setValue.
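
2) and 3) then differ by a single line inside initialize(), placed right after the getValue call above (the peaks in the comments are what I observed):

    logUsedMemory("before setValue");    // ~16 KB
    // Storage.deleteValue("bigArray");  // commented out in 2): peak ~46 KB (~300%)
                                         // uncommented in 3):   peak ~36 KB (~200%)
    Storage.setValue("bigArray", data);
    logUsedMemory("after setValue");     // ~16 KB in both cases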

4) I take 0) and add a call to deleteValue() (no getValue or setValue involved). Before the call, the used memory is ~6 KB, and after the call the used memory is slightly lower (by 16 bytes). But the peak memory is ~26 KB! That's a ~20 KB spike, indicating that simply deleting a value has an overhead of 200%.
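
And 4) is just the skeleton from 0) with the following inside initialize(), and nothing else touching storage:

    logUsedMemory("before deleteValue");   // ~6 KB
    Storage.deleteValue("bigArray");
    logUsedMemory("after deleteValue");    // ~6 KB (16 bytes lower); simulator peak ~26 KB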

In all cases, the value in question is the aforementioned 10 KB array, and the key is always the same.

Related thread:

https://forums.garmin.com/developer/connect-iq/f/discussion/419801/memory-consumed-by-storage-setvalue/1965272#1965272

I don't know if these are bugs or design issues, but it would be nice to get some clarity on whether this is expected behaviour.

In a perfect world:

- At the very least, I would expect the overhead of setValue to match that of getValue, i.e. 100%, not 200%-300%

- I would not expect the overhead of setValue to be larger if the key exists

- I would not expect the overhead of deleteValue to be 200%. (It should be constant, if anything)

  - I also think it's conceivable for the get/set overhead to be less than 100% if the data could be serialized/deserialized in chunks (I get that this may not be possible). But getting setValue down to 100%, if possible, would be greatly appreciated.