Some things to note:
- the default for the new compiler is -O 1 (not -O 0 !!!)
- I can hardly imagine any reason (unless we find some bug in the compiler that happens only in -O 2) why anyone would not always use -O 2 (example invocation below)
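For anyone building from the command line rather than the VS Code extension, this is roughly how the optimization level gets passed to monkeyc; only the -O flag itself is taken from this thread, and the jungle file, device id, key and output paths are placeholders from a typical setup, not anyone's actual project:

monkeyc -f monkey.jungle -d fenix6 -y developer_key.der -o bin/MyApp.prg -O 2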
Thank you for your detailed response, but I think any response will inexorably entrench us on opposite sides of the debate over the value of Object Oriented program design. And that won…
that's an interesting promo, but it looks like they intentionally didn't include it in the announcement, because I tested -O2, -O2p, -O2z, -O3, -O3p, and -O3z, and all of them produce the same code/data/…
Because your old code is already manually optimized (instead of "Graphics.FONT_XTINY" you have "0 /*Graphics.FONT_XTINY*/", etc.).
But still, 100 bytes can be the difference between a crash and succeeding. And with 150 you can almost add one more tiny feature (e.g. add a boolean setting to enable/disable some part of the code).
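To make the boolean-setting idea concrete, here is a minimal made-up sketch (the "showExtras" key is invented, and it assumes a device new enough for Toybox.Application.Properties) of gating an optional chunk of code behind one setting:

import Toybox.Application;
import Toybox.Lang;

class ToggleExample {
    // "showExtras" is a hypothetical app setting; one boolean read plus an
    // if() is all it costs to make a small feature switchable on/off.
    function maybeDoExtras() as Void {
        var showExtras = Application.Properties.getValue("showExtras") as Boolean;
        if (showExtras) {
            // ...the optional part of the code goes here...
        }
    }
}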
No, I use FONT_XTINY as I like readable code! It's more a matter of how you use it:
var tFont = Gfx.FONT_TINY;
And then I use tFont.
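Purely as an illustration of the two manual styles being contrasted here (none of this is from the actual apps discussed, and it assumes the usual Gfx alias for Toybox.Graphics):

using Toybox.Graphics as Gfx;
import Toybox.Lang;
import Toybox.WatchUi;

class FontExampleView extends WatchUi.View {
    private var tFont as Gfx.FontDefinition;

    function initialize() {
        View.initialize();
        // Style 1: keep the readable symbol, but look it up once and reuse the variable.
        tFont = Gfx.FONT_TINY;
    }

    function onUpdate(dc as Gfx.Dc) as Void {
        dc.drawText(10, 10, tFont, "cached symbol", Gfx.TEXT_JUSTIFY_LEFT);
        // Style 2: hard-code the literal value and keep the name only in a comment.
        dc.drawText(10, 30, 0 /* Gfx.FONT_XTINY */, "hand-optimized", Gfx.TEXT_JUSTIFY_LEFT);
    }
}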
There is not one of my apps where 100 bytes would make any difference, as the wise thing to do is to stay a ways back from the edge. Being at the edge can be a long-term headache in general. A simple bug fix could push you over.
The compiler now has a new optimization pass. This optimization pass will perform constant folding, constant substitution, branch elimination and a few other optimizations before generating your executable. These optimizations will make your code smaller and more performant without any code changes.
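As a rough, invented illustration of what those passes mean for Monkey C source (the announcement doesn't show the optimizer's actual output, so this is only the idea):

import Toybox.Lang;
import Toybox.System;

class OptimizerExample {
    const DEBUG = false;
    const MARGIN = 4;

    function layoutWidth(screenWidth as Number) as Number {
        // Constant substitution + folding: MARGIN * 2 can become the literal 8
        // at compile time, so no symbol lookup or multiply is left at runtime.
        var width = screenWidth - MARGIN * 2;
        // Branch elimination: DEBUG is a compile-time false, so this block
        // (and its string) can be dropped from the generated code entirely.
        if (DEBUG) {
            System.println("width=" + width);
        }
        return width;
    }
}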
Oh wow my prayers have been answered. Thanks for this post! It's a shame that it likely can't do anything that we can't already do through manual optimization tho.
There is not one of my apps where 100 bytes would make any difference, as the wise thing to do is to stay a ways back from the edge. Being at the edge can be a long-term headache in general. A simple bug fix could push you over.
Or an update of the SDK... But we only have 16 KB for data fields on old devices, and even some newish devices still only have 32 KB. Once you add resources (including properties/settings), it's really easy to hit the limit.
- I can hardly imagine any reason (unless we find some bug in the compiler that happens only in -O 2) why anyone would not always use -O 2
I think that's the reason.
Plus I think the following statement can't possibly be true 100% of the time:
These optimizations will make your code smaller and more performant without any code changes.
If we think of this as a form of compression, by the pigeonhole principle, there must be some (very rare) cases where it makes your code bigger and/or less performant. (I think...lol). I could be totally wrong about that tho.
I mean, with C compilers there's usually a trade-off for optimization (like making your program harder to debug).
Maybe one issue could be that stack traces become harder to read with optimization turned on.
Yes, that statement is totally not true: even my app that compiled and ran with -l 3 under the old compiler had to be modified. So that is an actual bug in the new compiler if you take that sentence as a promise (or just another proof that Garmin doesn't check their documentation, examples, and promises with -l 3). But even then it was worth "fixing" my code (it was mostly adding a few more ugly "as SomeType" casts, so the byte-code didn't really change, only the source code), because I still gained some bytes from -O 2.
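For readers who haven't run -l 3 themselves, a minimal made-up sketch of the kind of "as SomeType" edit meant above; the cast exists only to satisfy the type checker, so the generated byte-code stays essentially the same:

import Toybox.Lang;

class CastExample {
    // A plain Dictionary lookup comes back untyped, so at type-check level 3
    // arithmetic on it needs an explicit cast; the fix is source-only noise.
    function nextCount(data as Dictionary) as Number {
        var count = data["count"] as Number;
        return count + 1;
    }
}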
Regarding the stack traces: we'll probably see it in ERA after our next upgrade. But I hope it'll be less of a problem than using the Prettier Monkey C plugin, which, although it does a bit more, comes at the cost of changing all the line numbers.
Oh wow my prayers have been answered. Thanks for this post! It's a shame that it likely can't do anything that we can't already do through manual optimization tho.
Yup, most of my code was hand-optimized over the years, so I see very little gain.
I don't think they meant that you don't have to change your code if you update the compiler, just that turning on optimization gives you gains without any code changes. (Hopefully I interpreted your comment correctly.)
(Also, my reply got nuked when you edited your comment, which is a long-standing forum issue. Glad I didn't type a wall of text. These forums are really terrible.)
Yup, most of my code was hand-optimized over the years, so I see very little gain.
It will be good for new devs and new apps tho (much like type checking).
What would be good is if Garmin had a tutorial like "How to design and implement an efficient app" that took someone through the whole process. There is a definite learning curve to CIQ: not something you can pick up in a weekend, and much longer if this kind of coding is new to you. As time goes on, CIQ keeps getting more complex and the learning curve increases.
One problem from my POV is that you can't implement code that's maximally efficient (in terms of memory usage) if you use the standard features of the language, like dictionaries, classes, switch statements, etc. I'm sure people who haven't worked with C or C++ aren't used to writing code that's unmaintainable or unreadable just to save a few hundred bytes (which in most other contexts is meaningless).
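A deliberately ugly, invented sketch of the kind of trade-off meant here: the same lookup written with a readable Dictionary and then hand-rolled as branches and literals, which avoids keeping a Dictionary object in memory at the cost of maintainability:

import Toybox.Lang;

class LookupExample {
    // Readable version: a Dictionary kept in memory just to map zones to labels.
    private var zoneNames = { 1 => "Easy", 2 => "Steady", 3 => "Hard" } as Dictionary<Number, String>;

    function nameForZoneReadable(zone as Number) as String {
        var name = zoneNames[zone];
        return name != null ? name : "?";
    }

    // "Optimized" version: no Dictionary object at all, only branches and
    // literals; smaller at runtime, but painful to read and extend.
    function nameForZoneCramped(zone as Number) as String {
        if (zone == 1) { return "Easy"; }
        if (zone == 2) { return "Steady"; }
        if (zone == 3) { return "Hard"; }
        return "?";
    }
}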
It would be nice if there were tutorials, but it would also be nice if documentation issues and long-standing bugs were fixed.