I have a big question about optimization. I could test this in the simulator, but I don't trust it much: I've often noticed that behavior differs between the simulator and the actual device.
My AI seems to indicate that in terms of performance we have:
bitmap > drawLine > drawText
This is supposedly due to rasterization cost: bitmaps are copied more or less directly from memory to the screen, while lines and especially text have to be computed and rasterized on the fly.
So far, my reasoning has been:
- Complex shape → bitmap
- Repeated shape → font
- Basic shape → drawLine…
I want to know if I’m wrong here. If I want to optimize an application as much as possible, should I just use bitmaps for everything, even for displaying the time or a progress bar?
(I used to think that for a progress bar it would be better to prepare a string like "XXXXXXXX", where each "X" is a segment glyph in a custom font, so the whole bar could be drawn with a single drawText command. But it seems that in reality the watch still ends up performing a dozen complex rasterization operations behind that one call.)
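To make the comparison concrete, here is a rough sketch of the two approaches I'm weighing, in Connect IQ Monkey C. The resource names (`segmentFont`, `progressBitmap`) are hypothetical placeholders for a custom font and a pre-rendered bar image; I'm assuming both have already been loaded from resources in `onLayout`:

```
function onUpdate(dc) {
    // Approach 1: one drawText call for the whole bar.
    // Even though this is a single API call, the engine presumably
    // still rasterizes each "X" glyph individually.
    var segments = 8;
    var bar = "";
    for (var i = 0; i < segments; i++) {
        bar += "X";  // each "X" maps to one segment glyph in segmentFont
    }
    dc.drawText(10, 50, segmentFont, bar, Graphics.TEXT_JUSTIFY_LEFT);

    // Approach 2: a single bitmap blit of a pre-rendered bar.
    dc.drawBitmap(10, 80, progressBitmap);
}
```

The downside of approach 2 is that a variable-length bar needs either one bitmap per fill level or a clipped/partial draw, which is part of what I'm unsure about.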