Monkey business in data types used by Moment internals

Sorry if this is confusing; I haven't slept in a few days. Long story short, I'm writing my first datafield and I'm pulling in SunCalc. For reasons too silly to get into, I'm seeing divergent behavior in the precision of the data types used internally by the Moment class when running on the simulator vs. on real hardware.

Consider the following (simplified) code, which originated in SunCalc:

    function toMoment(epoch) {
        return new Time.Moment(epoch);
    }

    function onUpdate(dc) {
        var moment0 = toMoment(1702900000);
        var moment1 = toMoment(1702902000);
        System.println("Delta = " + moment1.compare(moment0));
    }

This constructs two Moment objects (via a wrapper function - this is important), subtracts them, and prints the delta. On real hardware and on the sim (venu3 for both), the delta printed as 2000, just like we'd expect.

Now consider the following minor modification:

    function toMoment(epoch) {
        return new Time.Moment(epoch);
    }

    function onUpdate(dc) {
        var moment0 = toMoment(1702900000.5);
        var moment1 = toMoment(1702902000.5);
        System.println("Delta = " + moment1.compare(moment0));
    }

We're now constructing Moments from fractional seconds (which is what happens inside SunCalc). I believe this will still print 2000 on real hardware (though I haven't re-tested the simplified version), while the simulator prints 2048.

There seems to be some monkey business going on. I'm guessing the simulator uses the underlying duck types to emulate a Moment, whereas real hardware implements Moments in native code with stricter typing? The wrapper function matters here: if we simply call new Time.Moment(1234.5) directly, we get a type error, but routing the call through an intermediate function lets us create a Moment from fractional seconds (at least on the sim), whereas on real hardware the Moment presumably truncates the argument to an int?

My guess is we're getting 2048 on the sim because we end up with a Moment whose timestamp is backed by a float instead of an int, so we're hitting a loss of precision. But this is just a guess. Any input would be appreciated.

The trivial fix is to call .toNumber() in the wrapper function above (I'll send a PR to SunCalc later), but I suspect I've stumbled on something more interesting here. I'm on Linux, CIQ SDK 6.4.1, Venu 3 for both sim and real HW.
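
For reference, the fixed wrapper looks roughly like this (a sketch of my change, not SunCalc's actual code):

    import Toybox.Lang;
    import Toybox.Time;

    function toMoment(epoch) {
        // Truncate any fractional seconds so the Moment is backed by a
        // 32-bit Number rather than a single-precision Float.
        return new Time.Moment(epoch.toNumber());
    }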

Am I completely losing my mind here?

Thank you!

  • Why do you try to use sub-second precision? Does it really make sense? Isn't it easier to round (I would even say floor) the "timestamp" to seconds?

  • This is very interesting, thanks for the post!

    I can reproduce this on Windows (x64).

    Looks like what's happening here is:

    - 1702900000.5 as a single precision float is 1702899968.000000. (Confirmed with a C program)

    - 1702902000.5 as a float is 1702902016.000000. (Also confirmed via C)

    - At least on the simulator, Moment will use *whatever datatype you passed in*.

    Sample code (built with type-checking disabled):

        function toMoment(epoch) {
            System.println("epoch = " + epoch);
            var moment = new Time.Moment(epoch);
            System.println("moment.value() = " + moment.value());
        }
    
        function testMoment() {
            toMoment(1702900000);
            toMoment(1702902000);
    
            toMoment(1702900000.5);
            toMoment(1702902000.5);
    
            toMoment(1702900000.5d);
            toMoment(1702902000.5d);
            
            toMoment("a");
        }

    Output:

        epoch = 1702900000
        moment.value() = 1702900000
        epoch = 1702902000
        moment.value() = 1702902000
        epoch = 1702899968.000000
        moment.value() = 1702899968.000000
        epoch = 1702902016.000000
        moment.value() = 1702902016.000000
        epoch = 1702900000.500000
        moment.value() = 1702900000.500000
        epoch = 1702902000.500000
        moment.value() = 1702902000.500000
        epoch = a
        moment.value() = a

    I haven't tried it, but I assume if I try to do any operations with the "a" Moment, the app will crash.

    Why do you try to use sub-second precision? Does it really make sense? Isn't it easier to round (I would even say floor) the "timestamp" to seconds?

    I think the question here is why the API doesn't truncate/convert float input, given that the constructor is documented as taking a Lang.Number. And the answer is that the Moment constructor doesn't actually check the type of its input at runtime.

    The correct behavior would probably be to throw an error immediately, since there's no implicit conversion between Number and Float in Monkey C afaik.
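
    If the constructor did validate its input, it could do something like this (purely illustrative; not the actual Toybox implementation):

        import Toybox.Lang;
        import Toybox.System;
        import Toybox.Time;

        // Illustrative guard: reject anything that isn't a Number at runtime
        // instead of silently storing a Float/Double/String.
        function makeMoment(epoch) as Time.Moment {
            if (!(epoch instanceof Lang.Number)) {
                System.error("Moment expects a Lang.Number, got " + epoch);
            }
            return new Time.Moment(epoch);
        }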

  • I believe this will still print 2000 on real hardware (though I haven't re-tested the simplified version)

    You should probably try it. I bet it would print 2048 on real hardware, but I'm just guessing.

  • Actually now that I looked at the docs, Time.Moment's constructor is not in the docs. So it looks like the way you use it is not as it's supposed to be used.

  • Actually now that I looked at the docs, Time.Moment's constructor is not in the docs. So it looks like the way you use it is not as it's supposed to be used.

    https://developer.garmin.com/connect-iq/api-docs/Toybox/Time/Moment.html#initialize-instance_function

  • Yes, I've used it for years. In some of my apps that call makeWebRequest(), the response data includes a timestamp in Unix time, so I use

        new Time.Moment(epoch);

    where epoch is in "Unix time".
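
    A minimal sketch of that pattern (the "timestamp" key and the callback wiring are my own, not from any particular app):

        import Toybox.Lang;
        import Toybox.System;
        import Toybox.Time;

        // Response callback passed to Communications.makeWebRequest()
        function onReceive(responseCode as Number, data as Dictionary or String or Null) as Void {
            if (responseCode == 200 && data instanceof Dictionary) {
                var epoch = data["timestamp"] as Number; // server sends integer Unix time
                var moment = new Time.Moment(epoch);
                System.println("server time = " + moment.value());
            }
        }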

  • I've re-run the simplified example on HW and I'm seeing 2048 there as well. I think the earlier difference came from different rounding caused by the sim and the real device using different locations. Sorry about that. Regardless, I feel Moments should convert the provided Unix time to an integer, or we'll end up with a Moment that's internally backed by a single-precision float (which is why the delta in my example produces 2048 and not the expected 2000).

    I realize that passing a floating-point value to a Moment is silly - this was a side effect of how SunCalc handles fractional days. I've fixed that separately (and opened a ticket in the original project). Fun problem, this.

  • Regardless, I feel Moments should convert the provided Unix time to an integer, or we'll end up with a Moment that's internally backed by a single-precision float (which is why the delta in my example produces 2048 and not the expected 2000).

    Yeah, I agree that in an ideal world, passing a non-Number to the Moment initializer should either throw a runtime type error or convert the argument to a Number. In the latter case the argument should be typed as Lang.Numeric, otherwise the compile-time type checker would reject non-Number arguments anyway, since it's currently typed as Lang.Number.

    It's pretty ridiculous that new Moment("a") doesn't crash at runtime, for example. (It will certainly crash if you try to do anything with that Moment, though.)

    I do think this problem would be impossible if you enabled strict type checking (in the absence of casts), since all types would have to be specified and you wouldn't be able to pass a value of unknown type into Moment.initialize().
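
    As a concrete illustration (a sketch, assuming strict type checking is enabled for the project):

        import Toybox.Lang;
        import Toybox.Time;

        // Rejected by the strict type checker: Moment.initialize() is declared
        // as taking a Lang.Number, and a Float is not implicitly converted.
        // var m = new Time.Moment(1702900000.5);

        // Accepted: a Numeric parameter takes Number/Long/Float/Double, and
        // toNumber() truncates it to the 32-bit integer the Moment expects.
        function toMoment(epoch as Numeric) as Time.Moment {
            return new Time.Moment(epoch.toNumber());
        }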

  • There are a few problems here. A 32-bit float cannot represent integers larger than 2^24 exactly (the mantissa simply doesn't have enough bits), so converting between a 32-bit float and a 32-bit integer loses precision for large values. This works with 64-bit doubles because there are more bits available.

    The other problem should be pretty obvious: there have been over 8000 days since the beginning of the J2000.0 epoch, which leaves about three decimal digits to represent the fractional part of the day. The best case is accuracy to the third decimal place, which translates to about 1.4 minutes of resolution. You probably won't get even that, so your best hope is maybe 3-5 minutes. The reality is that if you want to calculate anything not of this world, you need double precision. That gives about 16 digits, which is sufficient even for the IERS. Of course, if all you want to know is that sunrise is sometime in the morning, then 32 bits is plenty.
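
    To make that concrete, here's a quick sketch in Monkey C (the day value is made up) showing the same fractional-day quantity stored as a Float vs. a Double:

        import Toybox.Lang;
        import Toybox.System;

        function precisionDemo() as Void {
            // A day count since J2000.0 plus a made-up fractional part
            var asFloat = 8766.7654321;      // Float literal: ~7 significant digits
            var asDouble = 8766.7654321d;    // Double literal: ~16 significant digits
            System.println("Float : " + asFloat.format("%.7f"));
            System.println("Double: " + asDouble.format("%.7f"));
        }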