(VIRB) 3D sensor calibration, orientation_matrix is +/- 65 535?

EDIT: I forgot to check Profile.xlsx... Always check Profile.xlsx.

For calibration of 3D sensor values, the old PDF for the FIT SDK stated that the orientation matrix can contain values between -√3 and √3. This has since been removed, and the doc now only states that it "can support values from + and –".

In the SDK example, the orientation_matrix values are specified as +/-1 (or 0) for the corresponding axis. For the VIRB Ultra 30, however, they are so far +/-65535 (or 0) for all sensors. The values are stored as sint32, but look suspiciously like a maxed-out uint16 with a sign?

Is it calibration_factor and calibration_divisor that adjust this? Or am I completely misunderstanding? I have an implementation that seems to work with the SDK example (if orientation_matrix is assumed to be in the range +/-1), but I need some input on whether the VIRB's orientation_matrix values need further adjustment before use.
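
For context, this is roughly how my current implementation applies the SDK example's matrix (a minimal sketch only; the function name, the row-major layout and the matrix-times-vector order are my own assumptions, not anything from the SDK):

```rust
/// Apply a 3x3 orientation matrix (row-major, entries already in the
/// range -1.0..=1.0) to a calibrated x/y/z sample. This mirrors my own
/// parser, not the official SDK.
fn apply_orientation(matrix: &[f64; 9], sample: [f64; 3]) -> [f64; 3] {
    let mut out = [0.0; 3];
    for row in 0..3 {
        out[row] = matrix[row * 3] * sample[0]
            + matrix[row * 3 + 1] * sample[1]
            + matrix[row * 3 + 2] * sample[2];
    }
    out
}

fn main() {
    // SDK-example style matrix: +/-1 (or 0) entries that pick and flip axes.
    let matrix = [0.0, 1.0, 0.0,
                  1.0, 0.0, 0.0,
                  0.0, 0.0, -1.0];
    println!("{:?}", apply_orientation(&matrix, [1.0, 2.0, 3.0]));
}
```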

Here's an example of a calibration message for the gyroscope:

Global ID: 167 | Message type: three_d_sensor_calibration | Header: 5/0b00000101
     253 timestamp                        UINT32([6841])
       1 calibration_factor               UINT32([5])
       2 calibration_divisor              UINT32([82])
       3 level_shift                      UINT32([32768])
       4 offset_cal                       SINT32([-23, 19, -12])
       5 orientation_matrix               SINT32([0, -65535, 0, 0, 0, -65535, -65535, 0, 0])
       0 sensor_type                      ENUM([1])
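
In my parser that message roughly ends up in a struct like this (my own naming, only mirroring the FIT field names; not taken from any of the SDKs):

```rust
/// My representation of a three_d_sensor_calibration message (global ID 167),
/// holding the raw values exactly as stored in the file.
#[allow(dead_code)]
struct ThreeDSensorCalibration {
    timestamp: u32,               // field 253
    calibration_factor: u32,      // field 1
    calibration_divisor: u32,     // field 2
    level_shift: u32,             // field 3
    offset_cal: [i32; 3],         // field 4, one offset per axis
    orientation_matrix: [i32; 9], // field 5, raw sint32 values
    sensor_type: u8,              // field 0 (enum), 1 = gyroscope here
}

fn main() {
    // The gyroscope calibration message from the dump above.
    let _cal = ThreeDSensorCalibration {
        timestamp: 6841,
        calibration_factor: 5,
        calibration_divisor: 82,
        level_shift: 32768,
        offset_cal: [-23, 19, -12],
        orientation_matrix: [0, -65535, 0, 0, 0, -65535, -65535, 0, 0],
        sensor_type: 1,
    };
}
```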

EDIT: If I adjust orientation_matrix for the magnetometer data (208) and then convert the calibrated data to degrees (i.e. some raw-ish heading value), the end result is the same either way, since the change is linear for all values: the relative proportions do not change. The intermediate values change, of course.
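
A toy check of what I mean (not VIRB data, just the same linear factor applied to both components):

```rust
fn main() {
    // Heading from two horizontal magnetometer components.
    let heading = |x: f64, y: f64| y.atan2(x).to_degrees();

    let (x, y) = (123.0_f64, -456.0_f64);
    let k = 65535.0; // the same linear factor on every axis

    // Scaling both components by the same positive constant leaves
    // atan2 (and therefore the heading) unchanged.
    println!("{:.6} vs {:.6}", heading(x, y), heading(x * k, y * k));
}
```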

  • The missing √3 is a cut-and-paste error. It was an embedded formula in the original document that was lost in translation to the new markdown version of the doc. We are keeping a list of errata and plan to update the docs mid-year.

    65535 is the scale for this field. The scale and offset for all fields are documented in Profile.xlsx. Most floats are stored as integers in the file, and the SDK then automatically applies the scale and offset. If you look at that message in the Java, C#, or C++ SDK you will see that the field is a float. If you are working with the C SDK or writing your own decoder, then you need to manually apply the scale and offset.

    The ThreeDSensorAdjustmentPlugin will be your best source of information for working with the calibration data. 
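
    If you do apply it manually, the scale is just a division by 65535. Something along these lines (a sketch only, with my own names; the exact order of the level_shift/offset_cal/factor steps should be taken from the plugin, not from this):

    ```rust
    /// Sketch of manually applying the Profile.xlsx scale and the other
    /// calibration fields. The order of operations (level_shift, offset_cal,
    /// factor/divisor, then the rotation) is an assumption here; check
    /// ThreeDSensorAdjustmentPlugin for the authoritative sequence.
    fn calibrate(
        raw: [f64; 3],
        level_shift: f64,
        offset_cal: [f64; 3],
        factor: f64,
        divisor: f64,
        orientation_matrix_raw: [i32; 9],
    ) -> [f64; 3] {
        // Scale of 65535: stored sint32 -> float in roughly -1.0..=1.0.
        let m: Vec<f64> = orientation_matrix_raw.iter().map(|&v| v as f64 / 65535.0).collect();

        // Per-axis calibration (assumed order, see note above).
        let mut cal = [0.0; 3];
        for i in 0..3 {
            cal[i] = (raw[i] - level_shift - offset_cal[i]) * factor / divisor;
        }

        // Rotate with the scaled orientation matrix (row-major assumed).
        let mut out = [0.0; 3];
        for row in 0..3 {
            out[row] = m[row * 3] * cal[0] + m[row * 3 + 1] * cal[1] + m[row * 3 + 2] * cal[2];
        }
        out
    }

    fn main() {
        // Values from the gyroscope message above, with a made-up raw sample.
        let out = calibrate(
            [33000.0, 32000.0, 32768.0],
            32768.0,
            [-23.0, 19.0, -12.0],
            5.0,
            82.0,
            [0, -65535, 0, 0, 0, -65535, -65535, 0, 0],
        );
        println!("{:?}", out);
    }
    ```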

  • I completely forgot to check scale, thank you! Which is embarrassing since I'm applying those adjustments all over the place for other kinds of output data.

    This is for my own parser in [badly written] Rust, but I'll be sure to check the SDK for other languages. Thanks for the tip about ThreeDSensorAdjustmentPlugin.

  • I realised I have a related question. Is it correct to assume that the relevant calibration message is the last one logged before the sensor data message in question (assuming the sensor type matches)?

    Basically I read the FIT file first, parsing data as "generic" data messages, so that I can filter and process them only when needed. No fancy "live" listener. Not sure if I'm making sense; I'm just trying to compare my code with the 3D adjustment C++ code (I have no experience of C++, even if Rust is said to be more C++-like).
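
    Roughly what I mean, as a sketch with my own placeholder types (not from any SDK):

    ```rust
    use std::collections::HashMap;

    // Placeholder types for the sketch; mine, not from any SDK.
    #[derive(Clone)]
    struct Calibration; // factor, divisor, level_shift, offset_cal, matrix, ...

    struct SensorData {
        sensor_type: u8,
        // raw samples ...
    }

    enum Message {
        Calibration(u8, Calibration), // sensor_type + calibration fields
        SensorData(SensorData),
    }

    /// Walk the already-parsed messages in file order, remembering only the
    /// most recent calibration per sensor_type and applying it to any later
    /// sensor data of the same type. This is the assumption I want to confirm.
    fn process(messages: &[Message]) {
        let mut latest: HashMap<u8, Calibration> = HashMap::new();
        for msg in messages {
            match msg {
                Message::Calibration(sensor_type, cal) => {
                    latest.insert(*sensor_type, cal.clone());
                }
                Message::SensorData(data) => {
                    if let Some(_cal) = latest.get(&data.sensor_type) {
                        // calibrate `data` with `_cal` here
                    }
                }
            }
        }
    }

    fn main() {
        process(&[]);
    }
    ```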