Design pattern for drawing views in advance to improve responsiveness?

Drawing my views is relatively complex and slow. For higher-memory devices, is there a design pattern to draw views in advance, so that switching between them is more responsive?

Would BufferedBitmaps be an option? Or can layers be used to draw something before a view is put on top of the view stack?

  • I meant: test it on the slowest device you can. I would certainly pre-fetch data from the internet, for multiple reasons: the internet can be slow, Bluetooth IS slow, requests can fail and need to be retried, etc.

    My web request logic is already fairly decoupled from the display. I cache the response in storage, and when the app opens, it first displays that cached data - provided it's not too old - while waiting for an updated result from the web. If the cached data is too old, the app instead shows a “Loading…” message. The cache is only kept for a minute, but that's already helpful in scenarios like the user moving up and down the glance list or switching between the glance and the widget. In those cases, the app can often avoid making a new request entirely.

    Once the app is open, the onUpdate processing time doesn’t really matter, since the screen is only redrawn after that work completes - there’s no noticeable lag from the user’s perspective.

    The only case where performance is a concern is when switching between views, which is relatively slow on the Epix2Pro47mm, the only real device I have for testing. The oldest devices I support are the Fenix 6 and Vivoactive 3.

    I did a quick test with BufferedBitmap and it looks like it could be used for pre-rendering views in the background. However, it does come with a performance cost: my drawing routine takes about twice as long when rendering to the Dc of a BufferedBitmap compared to the Dc passed into onUpdate. The actual draw call to render the BufferedBitmap onto the main Dc is fast, though, so the approach could still work. The main tradeoff would be longer blocks of unresponsive input while the background drawing is happening.

    I like the approach with `BufferedBitmap` because it allows me to apply the functionality selectively - for devices with enough memory - without needing to change the existing drawing logic. The same code can be reused on devices where I don’t use `BufferedBitmap`, so there’s minimal duplication. In contrast, separating calculations from drawing would require a significant refactor, introducing a lot of new code. It would also make it harder to maintain a lean implementation for devices where I don’t want to cache those calculations in memory.
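
    A minimal sketch of that idea (the view class and the `drawContent` routine are placeholders I made up; assumes a CIQ 4+ device, where `Graphics.createBufferedBitmap` returns a `BufferedBitmapReference`):

    ```monkeyc
    import Toybox.Graphics;
    import Toybox.Lang;
    import Toybox.System;
    import Toybox.WatchUi;

    // Hypothetical sketch: pre-render the view into a BufferedBitmap,
    // then just blit it in onUpdate.
    class PreRenderedView extends WatchUi.View {
        private var _buffer as BufferedBitmapReference?;

        function initialize() {
            View.initialize();
        }

        // Call this before the view is shown, e.g. from a timer callback
        function preRender() as Void {
            var settings = System.getDeviceSettings();
            _buffer = Graphics.createBufferedBitmap( {
                :width => settings.screenWidth,
                :height => settings.screenHeight
            } );
            // Reuse the existing drawing code, just with a different Dc
            drawContent( ( _buffer.get() as BufferedBitmap ).getDc() );
        }

        function onUpdate( dc as Dc ) as Void {
            if( _buffer != null ) {
                dc.drawBitmap( 0, 0, _buffer ); // fast path
            } else {
                drawContent( dc ); // fallback for low-memory devices
            }
        }

        function drawContent( dc as Dc ) as Void {
            // ... the existing, slow drawing routine ...
        }
    }
    ```

    The key point is that `drawContent` stays untouched; only the Dc it receives differs.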

  • I built a proof-of-concept and it worked pretty well. The app is notably more responsive - I'd say on par with the native apps. The screen content was just dummy data; I still have to build out the logic that updates the bitmap in the background. But the result was good enough to give the approach a try.

  • I ran some further tests and hit a major roadblock: the graphics pool seems to be quite limited. My proof-of-concept used three views, and three is exactly the maximum number of full-screen BufferedBitmaps (matching the Dc size) that fit in the graphics pool on an Epix2Pro47mm.

    Does anyone know how the graphics pool is sized?

    The announcement only says it is based on available memory, without any details on limits.

    forums.garmin.com/.../a-whole-new-world-of-graphics-with-connect-iq-4

  • The way I understand it, the size can vary by device, and it's not unique to your app but shared between the apps on the device.

  • Thanks! I think that approach is a bit too unpredictable, so I'll have to rule out using BufferedBitmap as a solution for improving responsiveness.

    I'm now exploring the idea of doing pre-calculations, as @flocsy had previously suggested. However, I've hit another snag.

    One of the functions I need for pre-calculation is determining the width of text using Dc.getTextDimensions. While values like text height or display dimensions can be retrieved from sources other than Dc, it seems that text width specifically requires access to a Dc.

    I tried retaining the Dc obtained during onUpdate, and while getWidth and getHeight still work outside of onUpdate, calling getTextDimensions results in an "Invalid Value" error (Failed invoking <symbol>).

    In your opinion, what's the best way to get text width outside of onUpdate? I could create a BufferedBitmap or Layer solely to access a Dc for these calculations, but that feels a bit hacky.

  • Does the text change? You could do it in onLayout

  • dc.getWidth, dc.getHeight, and dc.getFontHeight I do in onLayout - getFontHeight maybe for a couple of fonts, so I might get it for both FONT_SMALL and FONT_MEDIUM. There's no need to do this each time onUpdate is called, as these values don't change.

    I use dc.getTextWidthInPixels if I need the width, but that's not often - an example would be placing the seconds properly after HH:MM on a watch face. In most cases, once I've determined the font to use (visually, in the sim), I just use TEXT_JUSTIFY_*.

    As far as speeding things up goes, understand how often you actually need to calculate things - it's not just about pre-calculating, but about how often you do the pre-calculation. The example I've used is sunrise/sunset: you can display it as often as you want, but you only need to calculate it once a day. And if something only changes every 5 minutes, there's no reason to calculate it every second. Only calculate things when needed, not when the last value is still fine.

    If you use data from something like a makeWebRequest, you only need to recalculate when you get new data. And if you are requesting data, say, every 30 seconds, look at whether you really need to, or whether requesting every couple of minutes would work. This really depends on your app; for me, with weather data, every 15 or 30 minutes works fine.
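
    The calculate-only-when-needed idea can be sketched like this (the class and the sunrise math are illustrative placeholders, not real code from an app):

    ```monkeyc
    import Toybox.Lang;
    import Toybox.Time;

    // Hypothetical sketch: cache a value that only changes once a day,
    // so it is recomputed at most once per day, not on every update.
    class SunriseCache {
        private var _sunrise as Time.Moment?;
        private var _calculatedOnDay as Number = -1;

        function getSunrise() as Time.Moment {
            var today = Time.Gregorian.info( Time.now(), Time.FORMAT_SHORT ).day;
            if( _sunrise == null || _calculatedOnDay != today ) {
                _sunrise = computeSunrise(); // the expensive calculation
                _calculatedOnDay = today;
            }
            return _sunrise as Time.Moment;
        }

        private function computeSunrise() as Time.Moment {
            // ... expensive astronomy math, placeholder only ...
            return Time.now();
        }
    }
    ```

    The same pattern applies to any value with a known change interval - only the "is the cache still valid" check differs.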

  • I did some refactoring today, but unfortunately the results weren’t as good as expected. It turns out that the real performance bottleneck lies in the drawing phase, not in the precalculation. Previously, my onUpdate method took about 70,000 us. After separating the drawing and precalculation (though both still ran within onUpdate), the total time increased to 80,000 us - only 20,000 us of which was due to precalculation. So even if I removed the precalculation entirely, the net gain would be just 10,000 us, or about 14%. That’s probably not worth the added complexity.

    A typical screen in my app draws around 10-12 bitmaps, 10-12 text elements, and a few basic shapes like circles and an arc. Drawing text elements in particular seems to be especially time-consuming.

    Using BufferedBitmap proved to be much more effective and elegant in terms of implementation - if not for the limitations of the graphics pool, it would likely be the best solution.

    I also experimented with using a Layer, but it seems that a Layer's Dc behaves similarly to the View's Dc: it can’t be drawn to outside of onUpdate.

    > Does the text change? You could do it in onLayout

    The text can change with every update.

    I’ve now been using the approach below: defining an interface along with a stub implementing the relevant Dc functions. This allows me to switch between the real Dc and the stub as needed, depending on what's available.

    Using a BufferedBitmap for text width calculation works well as shown in the code below.


    import Toybox.Graphics;
    import Toybox.Lang;
    import Toybox.System;

    // Structural interface: both the real Dc and the stub below satisfy it
    typedef EvccDcInterface as interface {
        function getWidth() as Number;
        function getHeight() as Number;
        function getTextWidthInPixels( text as String, font as FontType ) as Number;
    };

    (:glance) class EvccDcStub {

        private var _width as Number;
        private var _height as Number;
        private var _bufferedBitmap as BufferedBitmapReference;

        public function initialize() {
            var systemSettings = System.getDeviceSettings();
            _width = systemSettings.screenWidth;
            _height = systemSettings.screenHeight;
            // A minimal 1x1 bitmap, created solely to obtain a Dc
            // for text measurement outside of onUpdate
            _bufferedBitmap = Graphics.createBufferedBitmap( { :width => 1, :height => 1 } );
        }
        public function getWidth() as Number { return _width; }
        public function getHeight() as Number { return _height; }
        public function getTextWidthInPixels( text as String, font as FontType ) as Number {
            // getTextDimensions returns [width, height]
            return ( _bufferedBitmap.get() as BufferedBitmap ).getDc().getTextDimensions( text, font )[0];
        }
    }

    > If you use data you get from something like a makeWebRequest, you only need to calculate things when you get new data. And if you are requesting data, say every 30 seconds, look at if you really need to do that, and if requesting data every couple minutes would work.

    Yes, of course - onUpdate is only triggered when new data arrives. In any case, my concern is only the first onUpdate when a view is shown; that's where the user really feels the lag.
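
    To illustrate how the interface gets used (the `calculateLabelX` function and the label are made-up examples): calculation code depends only on `EvccDcInterface`, so it can run against the real Dc inside onUpdate and against the stub everywhere else:

    ```monkeyc
    import Toybox.Graphics;
    import Toybox.Lang;

    // Hypothetical usage sketch: the real Dc structurally satisfies
    // EvccDcInterface, so the same calculation works with either.
    function calculateLabelX( dc as EvccDcInterface, label as String ) as Number {
        // Center the label horizontally using only interface functions
        return ( dc.getWidth() - dc.getTextWidthInPixels( label, Graphics.FONT_MEDIUM ) ) / 2;
    }

    // In a background pre-calculation step, before the view is shown:
    var x = calculateLabelX( new EvccDcStub(), "12.3 kW" );

    // Inside onUpdate, the real Dc can be passed instead:
    // var x = calculateLabelX( dc, "12.3 kW" );
    ```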

  • Another dead end is Storage. I had hoped to store pre-rendered views there, but while you can store a BitmapResource, you cannot store a BufferedBitmap.

    That raises the question - what’s the point of storing a BitmapResource in Storage at all? Since it's already part of the app’s resources and can't be modified, it seems redundant to store it separately. What use case did Garmin have in mind for that?

  • Also, understand that Storage uses the file system, which can slow things down. The graphics pool will use the file system when something is first loaded (or re-loaded), but it will often be in memory.