Ui.View.findDrawableById():
vs Speed: this is a costly call and its result should be cached in the View's onShow() method (a small sketch follows below)
vs Memory: ? are drawable resources loaded into memory at the time of the call, or initially at app start (with this call only adding another reference to the in-memory object) ?
PS: caching cannot be performed in the View's initialize(), since onLayout() must happen first.
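For what it's worth, a minimal sketch of that onShow() caching pattern (the layout name "MainLayout" and drawable id "ValueLabel" are made up for illustration):

using Toybox.WatchUi as Ui;

class MyView extends Ui.View {
    hidden var mValueLabel;   // cached drawable (assumed to be a Text drawable)

    function initialize() {
        View.initialize();
    }

    function onLayout(dc) {
        setLayout(Rez.Layouts.MainLayout(dc));
    }

    function onShow() {
        // cache the lookup once per show instead of repeating it in onUpdate()
        mValueLabel = findDrawableById("ValueLabel");
    }

    function onUpdate(dc) {
        mValueLabel.setText("123");
        View.onUpdate(dc);   // let the layout draw itself
    }
}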
Ui.loadResource():
vs Speed: ? should the result of this function be cached; if so, is View's initialize() or onShow() the better place ?
vs Memory: ? are string resources loaded into memory at the time of the call, or initially at app start (with this call only adding another reference to the in-memory object) ?
PS: I found that this should be used even for XML-defined strings (Rez.Strings.XYZ); in some cases, not doing so leads to an integer number being displayed rather than the string itself
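As an illustration of that PS (the resource id AppName is just an example): drawing Rez.Strings.AppName directly can come out as the numeric id, while loading it first yields the actual text:

using Toybox.WatchUi as Ui;
using Toybox.Graphics as Gfx;

function onUpdate(dc) {
    // Rez.Strings.AppName is only a resource id; loadResource resolves it to the String
    var title = Ui.loadResource(Rez.Strings.AppName);
    dc.drawText(dc.getWidth() / 2, 10, Gfx.FONT_SMALL, title, Gfx.TEXT_JUSTIFY_CENTER);
}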
XML resources IDs:
vs Memory: XML resource IDs are strings; to retrieve some of those resources at runtime (layout drawables, application properties), one uses those string IDs in the source code, which results in corresponding static string objects being defined in runtime memory; is there any way to optimize/avoid this (e.g. by using symbols rather than strings when addressing those XML resources)?
vs Speed: see Ui.View.findDrawableById() above
PS: I understand such optimization may not be applied to properties, which must be identifiable unequivocally across different builds of the application
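For the application-property half of the question, this is the kind of string ID involved (the key "ThemeColor" is just an example):

using Toybox.Application as App;

function readThemeColor() {
    // "ThemeColor" is a string literal, so it lives in app memory as a String
    // object; the question is whether a symbol could be used instead
    return App.getApp().getProperty("ThemeColor");
}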
Custom fonts:
vs Memory: ? it seems each custom font used consumes about as much runtime memory as the corresponding *.fnt/*.png files; right ?
vs Speed: see Ui.loadResource() above
// inside a Ui.View subclass
hidden var myFont;

function onLayout(dc) {
    myFont = Ui.loadResource(Rez.Fonts.MyFont);
}

function onUpdate(dc) {
    // this can happen if the view was hidden, and now it is shown...
    if (myFont == null) {
        onLayout(dc);
    }
    // use myFont to draw
}

function onHide() {
    // release the font so it is not kept in memory while the view is hidden
    myFont = null;
}
Interesting topic. I would ask further about the battery impact of each approach:
How do you understand the limits where one is better than the other?
And most importantly: how do you measure the battery impact if you want to compare two approaches?
Regarding nested menus, I completely agree with the memory impact. Especially since new devices keep shipping with the 59.9 KB widget memory restriction (I'm looking at you, Enduro).
I found a work-around by passing weak menu references to the menu delegates. These are then used to:
1. Remove both sub-menu and parent menu items from memory when a new view is loaded (e.g. navigate to a picker).
2. Remove the menu items from memory on "back" action from the Menu.
This greatly helps keep things under control when you want a user-friendly menu system that also looks good; a rough sketch of the pattern follows.
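Something along these lines, with illustrative names (SettingsOwner and releaseMenus() are mine, not the actual code); the relevant API pieces are Object.weak() and WeakReference.get():

using Toybox.WatchUi as Ui;

// Owns the strong references to the menus and can drop them on demand.
class SettingsOwner {
    var parentMenu;   // Ui.Menu2
    var subMenu;      // Ui.Menu2

    function releaseMenus() {
        // dropping the strong references lets the menus and their items be freed
        parentMenu = null;
        subMenu = null;
    }
}

class MainMenuDelegate extends Ui.Menu2InputDelegate {
    hidden var mOwnerRef;   // Lang.WeakReference, so the delegate keeps nothing alive

    function initialize(owner) {
        Menu2InputDelegate.initialize();
        mOwnerRef = owner.weak();
    }

    // "back" out of the menu system: drop the menu tree before popping the view
    function onBack() {
        var owner = mOwnerRef.get();
        if (owner != null) {
            owner.releaseMenus();
        }
        Ui.popView(Ui.SLIDE_RIGHT);
    }
}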
Another approach would be to use switchToView (instead of popView or pushView) to handle switching between nested menus. You do have to keep track of the "pushed" and "popped" menus with your own little stack, though.
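A bare-bones version of that bookkeeping could look like this (the MenuStack module and its contents are made up for illustration, not taken from any real app):

using Toybox.WatchUi as Ui;

module MenuStack {
    var stack = [];   // array of [view, delegate] pairs

    function push(view, delegate) {
        if (stack.size() == 0) {
            // first menu goes on top of whatever view is current
            Ui.pushView(view, delegate, Ui.SLIDE_LEFT);
        } else {
            // subsequent menus replace the one on screen instead of stacking up
            Ui.switchToView(view, delegate, Ui.SLIDE_LEFT);
        }
        stack.add([view, delegate]);
    }

    function pop() {
        if (stack.size() > 1) {
            // drop the current entry by copying all but the last element
            var shorter = new [stack.size() - 1];
            for (var i = 0; i < shorter.size(); i++) {
                shorter[i] = stack[i];
            }
            stack = shorter;
            var prev = stack[stack.size() - 1];
            Ui.switchToView(prev[0], prev[1], Ui.SLIDE_RIGHT);
        } else {
            // nothing left to switch to: leave the menu system entirely
            stack = [];
            Ui.popView(Ui.SLIDE_RIGHT);
        }
    }
}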
Unfortunately not all devices support "switchToView", but yes, I use it everywhere it's compatible (yay .jungle - it is a jungle maintaining all the different devices!).
switchToView with native view is only available on devices with CIQ 3.1 or later. From the 3.1.0 change log:
Is there a case where using layouts makes sense? The memory difference is pretty big. I used them at first because that's what you get in the tutorials, then reality hits and you need to refactor the whole application to get rid of them, for both flexibility and memory optimisation.

If I was heading the Monkey C development I would for sure HEAVILY invest in either a static code analysis plugin or a linter that optimizes your written code into the smallest possible footprint. All of the nice things like constants need to be replaced by their literal values manually to save memory; how can this not be optimised? I guess the layouts could generate similarly optimised code that would make the direct dc calls. It puzzles me why this is not done yet, given that memory usage is so crucial in Monkey C, especially with new devices coming out that still only support 59.9 KB of memory for widgets.
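To make the constants complaint concrete, a toy example (not from the post): per the complaint above, a named const costs extra object memory compared with the raw literal, which is why people end up hand-inlining values:

// readable version: the const itself takes up object memory in the app
const ARC_PEN_WIDTH = 6;

function drawArc(dc) {
    dc.setPenWidth(ARC_PEN_WIDTH);
}

// hand-"optimised" version people resort to, trading readability for bytes
function drawArcInlined(dc) {
    dc.setPenWidth(6);
}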
Ok, this kinda developed into a rant, sorry. But sometimes it gets frustrating when you want to support the platform and feel no one is listening.
I feel ya man. I've felt the same way, just as a CIQ hobbyist. It's almost like we need another language which can be transpiled to IQPL, so we can have efficient constants, etc. My biggest pet peeve is the fact that references to string resources always consume app memory (if you load at least one resource with loadResource), which means that each string you use for app settings wastes memory (which is at a premium for 16 KB or 32 KB data fields). String resources which are only used by the CIQ store (settings and FIT contributor properties) should not consume app memory, IMO. It's bad enough that for one of my data fields, 32 KB device users get a nice drop-down list for a "theme color" setting, while 16 KB device users are asked to manually enter a CSS code. :/
BTW, regarding layouts, I found that even code which dynamically lays out items (e.g. for a full-page data field which shows 6 values) is far less memory efficient than writing your own static layout system (with precalculated layouts for different devices).
I have the layout data in bit-packed arrays which are stored in JSON resources when available, and a single function which consumes the data and outputs various elements.
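Roughly what that can look like, if I understand it right (the resource id FieldLayout and the 16-bit x/y packing are my own assumptions, not the actual scheme); the jsonData resource would just hold a per-device array of packed integers:

using Toybox.WatchUi as Ui;
using Toybox.Graphics as Gfx;

// Draw an array of value strings at precalculated positions.
// Each packed entry holds x in the upper 16 bits and y in the lower 16 bits.
function drawValues(dc, values) {
    var packed = Ui.loadResource(Rez.JsonData.FieldLayout);
    for (var i = 0; i < values.size(); i++) {
        var x = (packed[i] >> 16) & 0xFFFF;
        var y = packed[i] & 0xFFFF;
        dc.drawText(x, y, Gfx.FONT_SMALL, values[i], Gfx.TEXT_JUSTIFY_CENTER);
    }
}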
Sometimes I feel like there's a lot of reinventing the wheel here...
You mean, instead of calculating device-specific data at runtime, you precalculate it for each device and just load it?
So instead of something like the following:

var x = w / 2;
var y = h / 12;
// draw line 1 at x, y
y += h / 6;
// draw line 2 at x, y
y += h / 6;
// draw line 3 at x, y
// and so on
you do something like the following:

var x = Device.POSITIONS[0][0];
var y = Device.POSITIONS[0][1];
// draw line 1 at x, y
x = Device.POSITIONS[1][0];
y = Device.POSITIONS[1][1];
// draw line 2 at x, y
// ... and so on
Is there really a noticeable difference in such a simple use case?