Hi all,
I just went through a round of optimizing Note2Watch for memory usage and speed and learned a few things along the way, so thought I'd share.
I was able to get memory usage down from ~58K, which was causing memory exhaustion errors on the Vivoactive HR, to ~30K.
Here's how I did it:
MEMORY OPTIMIZATIONS
1. Inline EVERYTHING!!! I originally had this idea that I would have a nice, reusable (and publishable) standard library of functions that I could link against; I called it MonkeyExtras. It turns out the overhead of maintaining the `modules`, `objects`, `enums` and `functions` associated with a nice library is just too much for an app running in 64K of code space. This is especially true in a bytecode-interpreted, dynamic language with little to no optimization performed in the compiler pass.
By inlining everything, I got quite a lot of memory back. Part of this is avoiding the overhead of objects, but I think part of it is that by using the stack more and the heap less, you're helping the garbage collector (GC) by giving it less to do.
Unfortunately, code readability suffers a bit, but I have to say it was totally worth it (first sketch after this list).
2. Avoid objects at all costs! Take any objects you're using and refactor the code into pure functions operating on native data types. Since you're explicit about the state you're passing around, you save memory, and you help the garbage collector by not having an object instance hold a strong reference to your data: the moment the data isn't used anymore, the GC can clean it up (second sketch after this list).
3. Remove any internal use of Strings! I was using String objects in a number of places: as property store keys, as setting values, and as arguments to web services. It turns out storing all of these strings was pretty inefficient, since in most of these cases the string was really just acting as an enum. By converting most of these cases to an actual enum, I was able to save a bit more memory (third sketch below).
4. Consider bit-packing. A lot of the time bit-packing is an over-optimization, but if you're trying to store a large amount of numeric data (in my case, text rendering data), it may be worth packing the data into arrays of 32-bit integers. Unfortunately, you still have to pay the overhead of a dynamic array (Garmin: can we please get a memory-efficient native integer-array data type?), but it can help. Again, code readability will suffer, so do this last (fourth sketch below).
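To make point 1 concrete, here's a rough sketch of the kind of change I mean. The `MonkeyExtras.clamp` helper and `scaleValue` function are made-up names for illustration, not actual Note2Watch code:

```
// Before: a shared utility module, called from app code.
// module MonkeyExtras {
//     function clamp(value, min, max) {
//         if (value < min) { return min; }
//         if (value > max) { return max; }
//         return value;
//     }
// }

// After: the same logic inlined at the call site, so there's no
// extra module, function object, or call overhead to pay for.
function scaleValue(x) {
    // was: var clamped = MonkeyExtras.clamp(x, 0, 100);
    var clamped = x;
    if (clamped < 0) { clamped = 0; }
    if (clamped > 100) { clamped = 100; }
    return clamped * 2;
}
```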
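For point 2, here's a minimal sketch of the kind of refactor I mean (the note/record names are hypothetical, not my actual code): a small data-holding class replaced by pure functions over a plain array.

```
// Before: a class instance just to hold a couple of fields.
// class NoteRecord {
//     var title;
//     var lines;
//     function initialize(t, l) { title = t; lines = l; }
//     function lineCount() { return lines.size(); }
// }

// After: pure functions over a native array. As soon as the caller
// drops the array, the GC is free to reclaim everything in it.
function makeNote(title, lines) {
    return [title, lines];   // index 0 = title, index 1 = array of lines
}

function noteLineCount(note) {
    return note[1].size();
}
```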
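For point 3, a sketch of swapping string keys for enum values in the object store (the property names and the default value are made up for illustration):

```
using Toybox.Application as App;

// Each enum value is just a small integer, so there's no String
// object kept around for every key.
enum {
    PROP_FONT_SIZE,    // was the string key "fontSize"
    PROP_LAST_SYNC     // was the string key "lastSyncTime"
}

function saveFontSize(size) {
    App.getApp().setProperty(PROP_FONT_SIZE, size);
}

function loadFontSize() {
    var size = App.getApp().getProperty(PROP_FONT_SIZE);
    if (size == null) {
        size = 2;   // arbitrary default for the sketch
    }
    return size;
}
```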
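And for point 4, the general idea of bit-packing. This four-values-per-Number layout is just an example, not the exact layout I use for the text rendering data:

```
// Pack four small values (each 0..255) into one 32-bit Number.
function pack4(a, b, c, d) {
    return (a << 24) | (b << 16) | (c << 8) | d;
}

// Pull value 0..3 back out of a packed Number.
function unpack4(packed, index) {
    return (packed >> ((3 - index) * 8)) & 0xFF;
}
```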
SPEED OPTIMIZATIONS
1. Cache everything! I know, I know, this goes against everything I just said above. However, if you've optimized out the unnecessary memory usage in your app, you now have room to trade memory for speed and make it perform better for the end user. Take any expensive computations and make sure you perform them only once, saving the results in memory (obvious, I know!). One thing you might want to consider is checking the amount of memory currently in use to decide dynamically how aggressive your caching policy should be (first sketch after this list).
2. Hide the computations from the user! If you can, offload computations so they don't block the user interface. In Connect IQ 1.x you can do this by starting a timer and dispatching work to its callback; in Connect IQ 2.x you should be able to use the Background framework as well. One thing to note, though: avoid doing too much work in any one pass, because otherwise the watchdog timer will kill your app. To avoid this, give yourself a time budget of approximately 30 ms and preemptively exit if you exceed it, so the watchdog never trips (second sketch below).
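Here's a sketch of the memory-aware caching idea from point 1. The `renderNote` stand-in and the 8K free-memory threshold are placeholders, not my real code or numbers:

```
using Toybox.System as Sys;

var _noteCache = {};   // dictionary used as a simple cache

function renderNote(id) {
    // stand-in for the real expensive computation
    return "note " + id;
}

function getRenderedNote(id) {
    if (_noteCache.hasKey(id)) {
        return _noteCache[id];
    }
    var rendered = renderNote(id);
    // Only keep the result if there's a comfortable amount of free
    // memory; otherwise just recompute it next time.
    if (Sys.getSystemStats().freeMemory > 8192) {
        _noteCache[id] = rendered;
    }
    return rendered;
}
```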
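And a sketch of the timer-plus-time-budget approach from point 2. The 100 ms tick, the 30 ms budget, and the `processItem` work unit are illustrative; tune them for your own app:

```
using Toybox.Timer as Timer;
using Toybox.System as Sys;

class ChunkedWorker {
    var _timer;
    var _nextItem = 0;
    var _itemCount;

    function initialize(itemCount) {
        _itemCount = itemCount;
        _timer = new Timer.Timer();
        // Run a small slice of the work every 100 ms.
        _timer.start(method(:onTick), 100, true);
    }

    function onTick() {
        var start = Sys.getTimer();
        while (_nextItem < _itemCount) {
            processItem(_nextItem);
            _nextItem++;
            // Bail out of this slice after ~30 ms so the watchdog
            // never gets a chance to kill the app.
            if (Sys.getTimer() - start > 30) {
                return;   // the next tick picks up where we left off
            }
        }
        _timer.stop();    // all items processed
    }

    function processItem(i) {
        // stand-in for one small unit of the real computation
    }
}
```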
Hope this helps!
-Rick