WatchDog and Sys.getTimer() in the Simulator and in real devices... how do they really work?

I have a loop that can execute for a fairly long time and was triggering the WatchDog timer in the Simulator.

I've made changes so that the operations are executed in multiple passes, limiting the number of times through the loop on each pass: I take the starting time from Sys.getTimer(), then before each successive iteration I call Sys.getTimer() again and check the number of ms that have passed.

My successive passes are run on a 1 second interval using a timer callback.
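The pattern described above can be sketched roughly like this (a minimal sketch; processOneItem and workRemaining are hypothetical placeholders for the real work):

```monkeyc
using Toybox.System as Sys;
using Toybox.Timer;

class ChunkedWorker {
    const PASS_BUDGET_MS = 200;        // stop each pass after ~200 ms
    var _timer = new Timer.Timer();

    function start() {
        // Run one pass per second until the work is done.
        _timer.start(method(:runPass), 1000, true);
    }

    function runPass() {
        var startMs = Sys.getTimer();
        while (workRemaining()) {
            processOneItem();          // hypothetical unit of work
            if (Sys.getTimer() - startMs >= PASS_BUDGET_MS) {
                return;                // yield back to the VM for this pass
            }
        }
        _timer.stop();                 // all work finished
    }
}
```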

From other posts in the forums I read that the WatchDog is set to something around 5 seconds. I also saw something indicating that the WatchDog isn't time based; it counts opcodes executed, or some equivalent.

I figured that with each pass starting on 1 second intervals, if I exit the loop after 200 ms I should be really safe.

That's not how it works. The WatchDog gets triggered at various times before my loop reaches 100 ms according to Sys.getTimer().

If you understand the relationship between the WatchDog and the values returned from Sys.getTimer() in the simulator, please explain so I can choose a bulletproof way to let my code execute in the Simulator and on real devices without being terminated by the WatchDog and without sitting idle a high percentage of the time.

Thanks

  • The watchdog is based on the number of bytecodes executed without returning to the system (actually the VM).

    In a watch face, onUpdate is called at most once per second for 10 seconds, and timers can only be used when in high power.

  • Hi Jim,

    I'm working on an App rather than a watch face.

    Is there any relationship that you know of between the number of bytecodes that triggers the Watchdog and the values returned by Sys.getTimer()? Any number of ms of execution that should be safe, in the Sim and on a device?

  • The watchdog is the same for all app types, but does vary by device.  Not sure why you are even using Sys.getTimer.  In a device app you can use a timer that fires every 300 ms or something

  • The code is in a loop. At the bottom of each iteration I read Sys.getTimer() and compare it with the start time reported by Sys.getTimer() to control whether I'll go round the loop again. It seemed straightforward and easy to just read the system ms timer.

  • You could do it that way, but remember, it's not the time but the bytecodes.

  • It seems like that puts me (and others) in a bind unless there's some way to read the number of byte codes executed since the watchdog was last reset.

    Is there any known relationship between the watchdog setting on a given device and actual time?

  • I think another approach would be to ensure that every execution pass only executes a fixed number of statements (which would be the case for most code, unless the amount of executed code per pass is determined dynamically, as in your case, or based on user input / outside data).

    Then you just run your one pass of your code, note that it doesn't trip your watchdog, and call it a day.

    IOW, why make something dynamic (and prone to run-time unpredictability), when you can make it static and test it once? (This of course assumes that the watchdog limit is the same in the simulator and across all devices. If we can't make that assumption, then you could also test it once on a real device. EDIT: Now I see above that it's different across devices, which is unfortunate. But assuming that the limit in the sim reflects the real device, you could test your app in the sim for every supported device.)

    If your code needs to parse outside data which can change in size, then you test it on the largest possible (or practical) input. (And if you need to handle input that's too large for 1 pass, then you have to split up input parsing into multiple pieces.)
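    A sketch of that fixed-size approach, assuming the work can be split into roughly equal-cost items (ITEMS_PER_PASS is a number you'd tune once against the watchdog; processOneItem and workRemaining are hypothetical):

    ```monkeyc
    using Toybox.Timer;

    class FixedChunkWorker {
        // Tuned once so a full pass never trips the watchdog on any target.
        const ITEMS_PER_PASS = 50;
        var _timer = new Timer.Timer();

        function start() {
            _timer.start(method(:runPass), 1000, true);
        }

        function runPass() {
            // Do at most a fixed number of equal-cost items per pass,
            // so the bytecode count per pass is effectively constant.
            for (var i = 0; i < ITEMS_PER_PASS && workRemaining(); i += 1) {
                processOneItem();
            }
            if (!workRemaining()) {
                _timer.stop();
            }
        }
    }
    ```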

  • It seems like that puts me (and others) in a bind unless there's some way to read the number of byte codes executed since the watchdog was last reset.

    It only puts you in a bind if you're trying to dynamically maximize the amount of code that you run per pass.

  • I'm hoping to find a way to get the loop's job done fairly quickly in each device that it may run on.

    With devices having different processor speeds, I expect that the time to execute N bytecodes on a modern high-end watch will be quite different from the time it takes on an older FR or VA-HR, so having some way to know what limit applies on each device seems useful and important; otherwise, in order to run on both a VA-HR and an FR-945, the code would likely sit idle a very high percentage of the time on the FR-945. The only way I can see to sort this out is if there is some real-world relationship between the watchdog's limit on a given device and actual time.

  • Yeah, I get that your goal is to optimize execution efficiency on every device.

    I don't know if this helps, but the Devices library (%APPDATA%\ConnectIQ\Devices on Windows) has a file called simulator.json for each device. e.g. %APPDATA%\ConnectIQ\Devices\f245\simulator.json.

    This file has a key watchdogCount (at the top level of the JSON object) whose value is the device-specific watchdog timeout. Looks like it's 120000 for most modern devices (such as FR245).

    What you could do is initially optimize your loop for devices with watchdogCount = 120000, since that's by far the most common case.

    Then you could look at other devices and take the ratio between their specific timeout and your baseline timeout.

    e.g. FR920XT has a watchdogCount of 80000. Therefore, you can hypothetically run your loop for 2/3 of the time that you'd run it on an FR245, in a single pass.

    You'd probably still want to test on all sets of devices that have different watchdogCounts, but at least this approach would be somewhat data-driven. Maybe you could even use Run No Evil tests to automate the testing.
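    One way to apply that ratio (a sketch; the baseline counts are illustrative, and how you select the per-device value at build time is left out):

    ```monkeyc
    // Baseline chunk size tuned on a device with watchdogCount = 120000.
    const BASE_ITEMS_PER_PASS = 60;
    const BASE_WATCHDOG_COUNT = 120000;

    // Scale the per-pass workload by the device's watchdogCount,
    // taken from its simulator.json in the Devices library.
    function itemsPerPass(deviceWatchdogCount) {
        return BASE_ITEMS_PER_PASS * deviceWatchdogCount / BASE_WATCHDOG_COUNT;
    }

    // e.g. FR920XT (watchdogCount 80000): 60 * 80000 / 120000 = 40 items per pass
    ```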