How to avoid heavy API calls each second in a datafield?

I'm developing a complex data field that pulls from Activity.Info and also Weather in compute(). In onUpdate() the dc is redrawn every second (using direct draws only).

1. What's a clean, preferred way to avoid overloading Weather (or other heavy calls) when the data field naturally refreshes every second?

Direction, speed, and other navigational data are also read every second from Activity.Info, but at least these don't require an external API call, whereas Weather has to pull its data from the mobile phone (cached and updated every 20 minutes, I read). Still, it seems taxing to make such a heavy API call every second.

2. Am I correct in thinking that pulling direction and speed every second is reasonable for a data field, or should some pause be introduced here as well? If so, what's the cleanest approach?

New to this, and getting results is relatively easy, but knowing whether the design and approach are appropriate is something very different!

  • Sounds right. For weather data you can refresh it once every half hour or so. I'm pretty sure the weather API would rate-limit you if you called it every second.

  • Assuming the API doesn't do rate limiting, what would be a clean, structured way to call this API, say, once every 10 minutes, or whatever a reasonable rate would be?

  • Keep track of the timestamp of when you last called the weather API and don’t call it again for 10 minutes. Another way is to round the elapsed time to seconds, then perform modular arithmetic, dividing by 600; whenever the remainder is 0, make the call.
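    A minimal sketch of the timestamp approach in Monkey C (the member names, the 600-second interval, and the cached-conditions variable are illustrative, not the only way to structure it):

    ```monkeyc
    using Toybox.Time;
    using Toybox.Weather;
    using Toybox.WatchUi;

    class MyDataField extends WatchUi.SimpleDataField {
        private var _lastWeatherFetch = null; // Time.Moment of last Weather read
        private var _conditions = null;       // cached CurrentConditions

        function initialize() {
            SimpleDataField.initialize();
        }

        function compute(info) {
            var now = Time.now();
            // Only touch the Weather API if at least 600 s have passed
            if (_lastWeatherFetch == null
                    || now.subtract(_lastWeatherFetch).value() >= 600) {
                _conditions = Weather.getCurrentConditions();
                _lastWeatherFetch = now;
            }
            // ... use _conditions and info to produce the field value ...
            return 0;
        }
    }
    ```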

  • When you call the Garmin Weather API, it doesn't actually make a request. The device does that, and right now it seems to happen about once an hour. So you won't save much, as you're already just reading the info that's cached on the device.

    If you look at observationTime, you'll see it really doesn't change that often.
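    This is easy to verify; a small sketch that logs observationTime (assuming the device supports Toybox.Weather):

    ```monkeyc
    using Toybox.System;
    using Toybox.Weather;

    function logObservationTime() {
        var conditions = Weather.getCurrentConditions();
        if (conditions != null && conditions.observationTime != null) {
            // observationTime is a Time.Moment; value() is seconds since the epoch.
            // Print this from compute() and you'll see it only changes when the
            // device itself refreshes its cached weather data.
            System.println("obs: " + conditions.observationTime.value());
        }
    }
    ```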

  • Another way is to round the elapsed time to seconds, then perform modular arithmetic, dividing by 600. Whenever the remainder is 0, make the call.

    This isn’t the best way to schedule recurring stuff in a CIQ datafield. compute() is called roughly once per second, not exactly every second on the dot. (I’m not addressing the question of whether scheduling anything at all is appropriate in this case.) I realize that rounding is supposed to handle this, but there are edge cases.

    I’ve tested a data field of mine that output rounded seconds (timerTime), and more than once I’ve seen the value appear to “skip” a second (e.g. go directly from 59 to 61).

    I assume one of two things happened:

    - compute() was called at times that were close to 1 second apart, but which rounded such that they appeared to be 2 seconds apart, e.g.

    59.49 (rounds to 59)
    60.51 (rounds to 61)

    - compute() was delayed so much (due to other stuff going on in the activity) that the subsequent call was actually ~1.5 seconds after the previous one, e.g.

    59.01 (rounds to 59)
    60.52 (rounds to 61)

    Sure, this may be a very unlikely scenario on modern Garmins (I tested on a 935, which is pretty old now), but why bother taking that chance? In general, it’s better to write defensive code which has as few unnecessary assumptions as possible.

    TL;DR: as flocsy mentioned the last time this was suggested, there’s a better solution if you wish to use elapsed time: record elapsed time mod 600 seconds on every call, and execute your recurring code when the current value is less than the previous value.

    Or simply note the elapsed time at the beginning of the activity (0) and at each recurring call. Whenever the current time minus the previous time is greater than or equal to 600 seconds, execute your recurring call (and record the elapsed time).

    Both solutions are simple in concept and in code, but they don’t have the problem where a scheduled task can be missed because compute() doesn’t fire exactly every second.
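    The first (mod-and-wraparound) variant might look like this inside the data field class; the wraparound check fires once per window even when individual seconds are skipped (the interval and the action name are illustrative):

    ```monkeyc
    // Inside your DataField class:
    private var _prevPhase = 0;

    function compute(info) {
        if (info.elapsedTime != null) {
            // elapsedTime is in milliseconds; work in whole seconds
            var phase = (info.elapsedTime / 1000) % 600;
            // phase wraps from ~599 back toward 0 once per 600 s window, so
            // "current < previous" is true exactly once per window, even if
            // compute() never lands on the exact second where phase == 0.
            if (phase < _prevPhase) {
                refreshWeather(); // hypothetical recurring action
            }
            _prevPhase = phase;
        }
        return 0;
    }
    ```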

  • As I'm learning best practices in this ecosystem, I appreciate that the Garmin Weather data is cached and updated infrequently (whether it's every 20 minutes or every hour). However, calling Weather every second still seems excessive, even if the API is throttled?

    What if another weather API were used (like OWM or Stormglass), where the number of calls is limited? What would be the preferred way of coding this then: just implementing a timer, or tracking elapsed time as Ultra suggests?

  • The way I do it for things like OWM is to use a background service with a temporal event scheduled for every X minutes (I default to 15 minutes). When the background service runs, it requests the data and returns the results to the main app. You could do something like this for Garmin Weather, but I think it would be overkill.
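    In outline, the background-service pattern looks like this (the delegate name and the 15-minute interval are illustrative; the web-request details are omitted):

    ```monkeyc
    using Toybox.Background;
    using Toybox.System;
    using Toybox.Time;

    // In your AppBase subclass:
    // function getServiceDelegate() {
    //     return [new WeatherServiceDelegate()];
    // }
    //
    // function onBackgroundData(data) {
    //     // Runs in the main app with whatever the service passed to
    //     // Background.exit(); store it for compute()/onUpdate() to use.
    // }
    //
    // Registered once, e.g. in onStart():
    // Background.registerForTemporalEvent(new Time.Duration(15 * 60));

    (:background)
    class WeatherServiceDelegate extends System.ServiceDelegate {
        function initialize() {
            ServiceDelegate.initialize();
        }

        function onTemporalEvent() {
            // Make the web request here (e.g. Communications.makeWebRequest
            // to OWM) and hand the parsed result back to the main app:
            Background.exit(null /* replace with the response data */);
        }
    }
    ```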

    As of System 7 you can call makeWebRequest in the main app of a data field, but you'd still need a background service for pre-System 7 devices.

  • It’s weather data; it really doesn’t matter if the odd 10-minute gap appears very rarely in the requests. Personally I’ve never seen it skip, but then I don’t write CPU-intensive code in my data fields. Your TL;DR is a minor adaptation to avoid a problem you’ve seen.

  • It’s weather data; it really doesn’t matter if the odd 10-minute gap appears very rarely in the requests. Personally I’ve never seen it skip, but then I don’t write CPU-intensive code in my data fields. Your TL;DR is a minor adaptation to avoid a problem you’ve seen.

    For practical purposes pertaining to this specific use case: agreed on all points, except I would never use the “easy” approach in the first place, regardless of whether some weird edge case exists or not. The easy approach doesn’t really model the problem statement properly and it has a hidden assumption, which is why edge cases exist.

    For pedagogical and idealistic purposes relating to best practices in general: again, there’s no reason to make unnecessary assumptions that don’t even result in significantly simpler code, when the trade-off is at worst that your code sometimes doesn’t work under certain circumstances, and at best that you’re not thinking about problem solving the right way (imo).

    Given this problem statement…

    “I want to execute a recurring “action” every 600 seconds in a Connect IQ data field”

    …compare these approaches:

    - [everyone’s favorite approach] on every call to compute(), calculate elapsedTime (converted to seconds) mod 600 and perform the action when the result is 0

    - [alternative approach] initialize a member variable lastEventElapsedTime to 0. On every call to compute(), calculate elapsedTime (converted to seconds) - lastEventElapsedTime, and if the result is greater than or equal to 600, update lastEventElapsedTime with the current elapsedTime (converted to seconds) and perform the action
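    The second bullet, sketched inside the data field class (names illustrative):

    ```monkeyc
    // Inside your DataField class:
    private var _lastEventElapsedTime = 0; // seconds

    function compute(info) {
        if (info.elapsedTime != null) {
            var t = info.elapsedTime / 1000; // ms -> whole seconds
            // Fires as soon as at least 600 s have passed since the last
            // event, regardless of whether compute() hit any particular
            // second exactly.
            if (t - _lastEventElapsedTime >= 600) {
                _lastEventElapsedTime = t;
                performAction(); // hypothetical recurring action
            }
        }
        return 0;
    }
    ```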

    Yes, the first approach is much easier to describe in a forum (unless you just go with the problem statement of “do X every 600 seconds”). But it doesn’t truly capture the problem statement, which is why it doesn’t work in all cases.

    I’ve worked at both an old school company (much like garmin) and a modern startup, and the very common line of thinking where “I’ll take this shortcut that isn’t even really a shortcut bc it doesn’t matter in this case” invariably leads to predictable bugs. Saving 5-30 minutes today leads to hours or days of wasted time tomorrow. In the worst case, you’re stuck with a suboptimal design decision for years bc backwards compatibility requirements prevent undoing or fixing it.

    It’s especially significant in this case bc this is the 2nd recent instance where someone has mentioned calculating elapsedTime mod someTimeout and either implicitly or explicitly stated that it’s perfectly fine to compare the result to 0. The first one didn’t even mention converting elapsedTime to seconds and rounding, which made the approach even more broken: there is no way that elapsedTime will be a perfect multiple of 1000 ms on each call to compute(), so if you’re just calculating elapsedTime (in ms) mod (x * 1000) and comparing the result straight to 0, your action will never happen.

  • TL;DR: thinking about defensive coding (and problem solving) in all cases (when feasible) will serve you well when it actually comes to problems that have real edge cases not covered by the easy solution.

    Going for the easy solution most or all of the time leads to bugs when you work on the harder problems. 

    Ofc this is CIQ and most or all of us are just doing this for fun, so it doesn’t matter that much.