Design pattern for drawing views in advance to improve responsiveness?

Drawing my views is relatively complex and slow. For higher-memory devices, is there a design pattern to draw views in advance, so that switching between them is more responsive?

Would BufferedBitmaps be an option? Or can layers be used to draw something before a view is put on the top of the view stack?

  • 1. Assuming there's some linear flow (the user clicks next, and occasionally previous), I would try to pre-draw the next view and keep the previous one (in case the user clicks previous).

    2. However, you'll need to take into account that everything happens in one thread, so whatever you do to pre-calculate the next view means other things will have to wait. For example, if you start pre-calculating the next view right after displaying the current one (the view the user just landed on), the time it takes the user to click again is usually enough to finish. But if the user clicks back instead, they'll have to wait until you finish calculating the next view before they see the back button take effect. Even that is probably OK: the back action happens after some lag (reaction time) anyway, and since until now you haven't cached the previous view either, now it will already be calculated, so it will actually be faster than before.

  • I would try to pre-draw the next view and keep the previous one

    How would you technically implement pre-drawing a view or keeping one? As far as I understand, the Dc for a view is only available once the view is shown, right? Are layers or BufferedBitmaps something that could be used to draw before the view is shown?

  • I don't know your code, but it might be enough to separate the calculations from the actual drawing in onUpdate into two functions: do the calculations up front, save the results in an object, and use that object in onUpdate to do the drawing. The caching would then just mean keeping prevPageObj, currentPageObj and nextPageObj, shifting them as the user navigates, and recalculating only the one that became empty.

    The next step could be something like what you asked about, but I'll leave that to others who have experience with buffered bitmaps.
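
    For the first part (separating calculation from drawing and caching the page objects), roughly something like this (just a sketch; the names and the shape of the page object are made up, and the input delegate that calls moveToNext is left out):

    ```
    using Toybox.WatchUi;
    using Toybox.Graphics;

    class PageView extends WatchUi.View {
        // Cached results of the expensive calculations, one slot per page
        var prevPageObj = null;
        var currentPageObj = null;
        var nextPageObj = null;

        function initialize(startIndex) {
            View.initialize();
            currentPageObj = calculatePage(startIndex);
            nextPageObj = calculatePage(startIndex + 1);
        }

        // Expensive part: build a plain object holding everything onUpdate
        // needs, without touching any Dc
        function calculatePage(pageIndex) {
            // ... the heavy calculations go here ...
            return { :title => "Page " + pageIndex };
        }

        // Cheap part: onUpdate only draws from the cached object
        function onUpdate(dc) {
            dc.setColor(Graphics.COLOR_WHITE, Graphics.COLOR_BLACK);
            dc.clear();
            dc.drawText(dc.getWidth() / 2, dc.getHeight() / 2,
                Graphics.FONT_SMALL, currentPageObj[:title],
                Graphics.TEXT_JUSTIFY_CENTER | Graphics.TEXT_JUSTIFY_VCENTER);
        }

        // On "next": shift the cached objects and recalculate only the slot
        // that became empty (this still runs in the same event; deferring it
        // with a timer is discussed further down)
        function moveToNext(newNextIndex) {
            prevPageObj = currentPageObj;
            currentPageObj = nextPageObj;
            nextPageObj = calculatePage(newNextIndex);
            WatchUi.requestUpdate();
        }
    }
    ```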

  • TL;DR

    - it might be beneficial to schedule any precomputation step using a timer, to avoid blocking the response to the current event (e.g. onUpdate, onKey)

    - human reaction time / lag != threshold for perceived UI/display sluggishness (humans can perceive changes much more quickly than they can react to them, otherwise nobody would need a display faster than 4 Hz)

    --

    Speaking of precomputing views and any perceived lag on the user side, my understanding of CIQ is that it operates on a single-threaded event-based model (kind of like javascript), such that while your app is responding to one "event" (like a call to onUpdate() [*]) it will not be able to handle another event (like user input).

    [*] whether it's via requestUpdate() for device apps, or it's scheduled as in watch faces or data fields

    (Yeah I know this was pretty much covered by "everything happens in one thread")

    So to avoid the situation that flocsy described, where the act of precomputing the next view may actually cause the UI to become less responsive (whether that means the screen is drawn more slowly, or the app responds to input more slowly), you might want to use a timer to schedule the next precomputation step. (The expiration time could be 0 milliseconds - the main idea is to avoid blocking the current event response.)

    That way the precomputation step at least won't delay what the app is *currently* responding to. Although it will still block future events, so you could still have problems if the user is pressing buttons rapidly or if your current view has live data (and updates continuously on a timer).
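
    For illustration, a minimal sketch of that deferral, assuming some precalculateNextPage() of your own holds the heavy work (the 50 ms value is just a small placeholder delay):

    ```
    using Toybox.WatchUi;
    using Toybox.Timer;

    class PrecomputeScheduler {
        var mTimer = new Timer.Timer();
        var mPages;   // whatever object owns your page cache and calculations

        function initialize(pages) {
            mPages = pages;
        }

        // Call this from the event handler (onKey, onUpdate, ...) instead of
        // precalculating inline: the handler returns immediately, and the
        // heavy work runs later as its own event
        function schedulePrecompute() {
            // a short delay; the point is only that the work happens after
            // the current event response has finished
            mTimer.start(method(:runPrecompute), 50, false);
        }

        function runPrecompute() {
            mPages.precalculateNextPage();   // placeholder for the heavy calculation
            WatchUi.requestUpdate();         // redraw once the result is cached
        }
    }
    ```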

    But if the user clicks back instead, they'll have to wait until you finish calculating the next view before they see the back button take effect. Even that is probably OK: the back action happens after some lag (reaction time) anyway

    Speaking of (human) reaction time, this sort of thing comes up all the time when discussing video games and input lag (which actually refers to how fast the game/display reacts after receiving an input). The general (incorrect) argument is that trying to reduce the input lag below X ms is not beneficial/important, where X ms is the typical human reaction time. This argument is incorrect because input lag determines how fast the game reacts to you, and not vice versa. And humans notice how fast a display/UI/game reacts to them. The point is for the game to be as responsive as possible to the user's inputs, not for the game to update at least as fast as the user can physically press a button. This can especially be seen in rhythm-based games, where the user is pressing buttons in anticipation of an event, not in reaction to an event.

    For a concrete example, the average human reaction time is about 200-250 ms (1/4 of a second). But if your computer display only updated at 4 Hz, you would definitely notice. I once worked at a place where the USB 3 docks for the external monitors needlessly set the refresh rate to 30 Hz, and it was immediately obvious - the mouse cursor felt like it was underwater.

    This is also why 120 Hz displays are a marketing point for phones and tablets.

    As a concrete Garmin example, I have definitely swiped or pressed buttons faster than I could react to the display changing, when I know the info I want is a few screens away. As an end user, I still want the display to react as fast as possible. And when you're swiping, you expect the content to track your fingers perfectly, otherwise it feels weird. That's why it was considered a bug when swiping on a CIQ watchface would cause the watchface to just disappear and be replaced with the next page of content (with no transition), instead of smoothly scrolling under the user's finger (like a native watchface). And Garmin agreed it was a bug, as they fixed it.

  • could be 0 milliseconds

    Actually no, there's a minimum. It seems to be 50 ms, though it's unclear whether that's a global minimum or device dependent:

    "The number of available timers (default 3) and the minimum time value (default 50 ms) depends on the host system"

    It's also possible that the 50 ms minimum is only enforced when the repeat parameter is true.

  • Oops sorry haha. I should've just said use the minimum possible time.

  • So to avoid the situation that flocsy described, where the act of precomputing the next view may actually cause the UI to become less responsive (whether that means the screen is drawn more slowly, or the app responds to input more slowly), you might want to use a timer to schedule the next precomputation step. (The expiration time could be 0 milliseconds - the main idea is to avoid blocking the current event response.)

    That way the precomputation step at least won't delay what the app is *currently* responding to. Although it will still block future events, so you could still have problems if the user is pressing buttons rapidly or if your current view has live data (and updates continuously on a timer).

    Thanks for the interesting observations.

    The precomputation would be triggered by a timer. Right now, I have a timer firing a web request every 10 seconds, and the incoming response would initiate the precomputation. Currently, there’s only one active timer, but once I implement precomputation, I might have several running in parallel. To manage load, I could stagger their timings to avoid too many operations happening simultaneously.

    Even now, it’s possible that event processing delays user input responsiveness. With precomputation added - especially as it involves calculations for multiple views in the background - this risk would increase.

    One idea to ease this is showing a small loading indicator in a corner whenever background tasks are running. It could help users understand why a swipe or other input might feel sluggish at that moment, and make the experience feel more intentional.
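
    Roughly, the timer-plus-web-request wiring I described above would look like this (a sketch only; the URL, the response handling and precomputeViews are placeholders):

    ```
    using Toybox.Communications;
    using Toybox.Timer;
    using Toybox.WatchUi;

    class DataFetcher {
        var mTimer = new Timer.Timer();

        function start() {
            // fire a web request every 10 seconds; several of these timers
            // could be staggered to spread the load
            mTimer.start(method(:fetch), 10000, true);
        }

        function fetch() {
            Communications.makeWebRequest(
                "https://example.com/data",   // placeholder URL
                null,                         // no request parameters
                { :method => Communications.HTTP_REQUEST_METHOD_GET },
                method(:onResponse));
        }

        // the incoming response is what kicks off the precomputation
        function onResponse(responseCode, data) {
            if (responseCode == 200) {
                precomputeViews(data);    // placeholder: build the cached page objects
                WatchUi.requestUpdate();
            }
        }

        function precomputeViews(data) {
            // ... calculations for the current and neighbouring views ...
        }
    }
    ```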

    And when you're swiping, you expect the content to track your fingers perfectly, otherwise it feels weird.

    Absolutely. While button navigation in my app feels reasonably responsive, swipe-based view switching highlights the slowness more clearly.

  • How long does each step take (I know it can vary depending on internet and device): the download (how many pages do you download in one request?), the calculation, the drawing?

  • In the simulator/profiler, the onUpdate() method for a relatively simple or standard site takes around 70,000 µs (about 70 ms). Interestingly, on older devices, the profiler reports significantly lower times - around 25,000 µs (25 ms), for example. Some users have larger sites with more data to process and display, which could increase onUpdate() times further.

    Is it correct to assume that the profiler reflects actual execution times on the PC-based simulator, which are considerably faster than those on the real device? My app feels very responsive in the simulator, but noticeably less so on the actual device.

  • I also don't think the absolute times in the simulator are an indicator of speed on the real device. You can use the profiler to find the slowest part or to compare before/after a change, but not to estimate timing on the actual device.

    I meant to ask about the slowest device you can test on. I would definitely pre-fetch the data from the internet, for multiple reasons: the internet can be slow, Bluetooth IS slow, a request can fail and need a retry, etc.

    But if the calculation plus rendering only takes about 100 ms, then I wouldn't start the refactoring there, especially if most of the time is spent drawing rather than on calculations that could be done in advance. Instead, what I would do is something like you also wrote: right when the user clicks, display a loading... icon, call requestUpdate so the user sees it immediately, then call the calculation from a timer, and call another requestUpdate at the end of the calculation.

    But then, if the whole calculation only takes 100 ms or even 200 ms, as a user I'd probably prefer not to see a flash of something meaningless (the loading... icon), and in fact using the timer with 50 ms just adds to the delay.
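
    If you do go the loading-icon route, the flow would look roughly like this (just a sketch; LoadingView, onNextClicked and calculatePage are made-up names, and the input delegate that calls onNextClicked is left out):

    ```
    using Toybox.WatchUi;
    using Toybox.Graphics;
    using Toybox.Timer;

    class LoadingView extends WatchUi.View {
        var mLoading = false;
        var mPageObj = null;
        var mTimer = new Timer.Timer();

        function initialize() {
            View.initialize();
        }

        // called from the input delegate when the user clicks "next"
        function onNextClicked() {
            mLoading = true;
            WatchUi.requestUpdate();                          // loading... is drawn first
            mTimer.start(method(:doCalculation), 50, false);  // heavy work is deferred
        }

        function doCalculation() {
            mPageObj = calculatePage();   // placeholder for the expensive part
            mLoading = false;
            WatchUi.requestUpdate();      // now draw the real content
        }

        function calculatePage() {
            return { :title => "..." };
        }

        function onUpdate(dc) {
            dc.setColor(Graphics.COLOR_WHITE, Graphics.COLOR_BLACK);
            dc.clear();
            var text = (mLoading || mPageObj == null) ? "loading..." : mPageObj[:title];
            dc.drawText(dc.getWidth() / 2, dc.getHeight() / 2,
                Graphics.FONT_SMALL, text,
                Graphics.TEXT_JUSTIFY_CENTER | Graphics.TEXT_JUSTIFY_VCENTER);
        }
    }
    ```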

  • Maybe the strategy could depend on which device it is. I don't know which devices you support or which ones you have for testing, but as a theoretical idea, the approach might differ between old and new devices, e.g. devices with a graphics pool, or wherever you decide to draw the line between old and new.