Optimisation: bitmap > drawLine > drawText

I have a big question about optimization. I could try testing things in the simulator, but I don't trust it much: I've often noticed that behavior differs between the simulator and the actual device.

My AI seems to indicate that in terms of performance we have:
bitmap > drawLine > drawText

This is supposedly due to rasterization: bitmaps go straight from memory to the screen without any complex calculations.
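If this is Garmin Connect IQ (which the drawLine/drawText names suggest), one way to sidestep both the AI's guess and the simulator is to time the calls on the device itself. Below is a minimal sketch, assuming the `Toybox.Graphics.Dc` API and using `System.getTimer()` for millisecond timing; the view class, the loop counts, and the `Rez.Drawables.Segment` resource are illustrative, not from the original post:

```monkeyc
import Toybox.Graphics;
import Toybox.System;
import Toybox.WatchUi;
import Toybox.Lang;

class BenchView extends WatchUi.View {
    private var _bitmap;

    function initialize() {
        View.initialize();
    }

    function onLayout(dc as Dc) as Void {
        // Load a pre-rendered drawable once, outside the draw loop.
        // Rez.Drawables.Segment is a hypothetical resource id.
        _bitmap = WatchUi.loadResource(Rez.Drawables.Segment);
    }

    function onUpdate(dc as Dc) as Void {
        dc.setColor(Graphics.COLOR_WHITE, Graphics.COLOR_BLACK);
        dc.clear();

        // Time 100 repetitions of each primitive on the real device.
        var t0 = System.getTimer();
        for (var i = 0; i < 100; i++) {
            dc.drawBitmap(10, 10, _bitmap);
        }
        var t1 = System.getTimer();
        for (var i = 0; i < 100; i++) {
            dc.drawLine(10, 60, 110, 60);
        }
        var t2 = System.getTimer();
        for (var i = 0; i < 100; i++) {
            dc.drawText(10, 80, Graphics.FONT_SMALL, "XXXXXXXX",
                        Graphics.TEXT_JUSTIFY_LEFT);
        }
        var t3 = System.getTimer();

        System.println("bitmap: " + (t1 - t0) + "ms, line: " + (t2 - t1)
                       + "ms, text: " + (t3 - t2) + "ms");
    }
}
```

Run on a couple of target devices rather than just one, since the ranking itself may differ between hardware generations.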

So far, my reasoning has been:

  • Complex shape → bitmap

  • Repeated shape → font

  • Basic shape → drawLine

I want to know if I’m wrong here. If I want to optimize an application as much as possible, should I just use bitmaps for everything, even for displaying the time or a progress bar?

(I used to think that to display a progress bar it would be better to prepare something like "XXXXXXXX", where each "X" is a segment glyph in my font, so the whole bar could be drawn with a single command. But it seems that in reality the watch still ends up performing a dozen complex operations for that one call.)
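For the progress-bar case specifically, the two candidates are easy to put side by side. A hedged sketch, again assuming Connect IQ's `Dc` API; the custom font member `_segmentFont` (loaded once, e.g. via `WatchUi.loadResource(Rez.Fonts.SegmentFont)`) is an assumption for illustration:

```monkeyc
import Toybox.Graphics;
import Toybox.Lang;

// Option A: one drawText call, where each "X" glyph in a custom
// font (_segmentFont, assumed to be loaded in onLayout) is one
// bar segment. One API call, but the engine still lays out and
// blits one glyph bitmap per character.
function drawBarAsText(dc as Dc, segments as Number) as Void {
    var s = "";
    for (var i = 0; i < segments; i++) {
        s += "X";
    }
    dc.drawText(10, 100, _segmentFont, s, Graphics.TEXT_JUSTIFY_LEFT);
}

// Option B: a plain filled rectangle for the filled portion.
// This is often the cheapest option for a solid bar, since no
// glyph lookup or per-character blit is needed at all.
function drawBarAsRect(dc as Dc, fraction as Float) as Void {
    dc.fillRectangle(10, 100, (100 * fraction).toNumber(), 12);
}
```

If the bar needs decorated segments rather than a solid fill, a single wide pre-rendered bitmap of the full bar, clipped or drawn at the right width, is a third option worth timing.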

  • The AI might be correct, but even if it is, it's probably by luck: I'd guess its answer comes from other platforms with more data on the net. I'd also be cautious even if you tested this on your real device, because different devices (most notably older vs. newer ones) can give different results (in theory even the ranking of the best choices can change, not just their timings).

    I'd also consider other things, like which devices you want to support and what capabilities they have (screen size, display technology, number of colors, etc.). Having to support many different variants might push you toward a choice that isn't the most optimal but is easier to maintain.

  • I ran some tests, modifying part of my application, and saw my numbers double in the profiler. So I think I'll leave it as it is for now: it's better to have something maintainable than to attempt optimizations that may not be that effective.