I recognize that this post belongs in a 'Port Java to VBA' or 'Port C to VBA' forum, but I couldn't find one, so I'm gambling that programmers will generally try to help (often after a bit of criticism or challenging of a premise). I have spent many hours going over and over my logic, and I can't see how my calculation produces a different [wrong] checksum from established implementations such as runanalyze.com. It has to be related to the precise way the two languages handle [unsigned] integers in bitwise operations.
I'm checksumming all bytes of all Definition and Data messages; the 14-byte FIT header is excluded, as is the final two-byte checksum. My FIT file uploads to Garmin fine once I force in the checksum computed by runanalyze.com.
The published SDK code in C and Java deals in unsigned 16-bit integers and updates the running CRC via its return value rather than updating the CRC in place.
    FIT_UINT16 FitCRC_Get16(FIT_UINT16 crc, FIT_UINT8 byte)
    {
        static const FIT_UINT16 crc_table[16] =
        {
            0x0000, 0xCC01, 0xD801, 0x1400, 0xF001, 0x3C00, 0x2800, 0xE401,
            0xA001, 0x6C00, 0x7800, 0xB401, 0x5000, 0x9C01, 0x8801, 0x4400
        };
        FIT_UINT16 tmp;

        // compute checksum of lower four bits of byte
        tmp = crc_table[crc & 0xF];
        crc = (crc >> 4) & 0x0FFF;
        crc = crc ^ tmp ^ crc_table[byte & 0xF];

        // now compute checksum of upper four bits of byte
        tmp = crc_table[crc & 0xF];
        crc = (crc >> 4) & 0x0FFF;
        crc = crc ^ tmp ^ crc_table[(byte >> 4) & 0xF];

        return crc;
    }
VBA is weak at bitwise manipulation, so I resorted to 32-bit integers with the top 16 bits always clear, using division to shift the bits toward the LSB and Xor as the replacement for the C '^' operator. My crc_tbl consists of 32-bit integers that I have verified [multiple times] contain the exact values given in FitCRC_Get16() above.
    Dim crc As Long   ' global variable initialized to zero

    Private Sub Compute_Checksum(b As Byte)
        ' [crc_tbl(), crc, and tmp are 32-bit Longs, but high 2 bytes are always zero]
        Dim tmp As Long, idx As Integer

        idx = crc And &HF
        tmp = crc_tbl(idx)
        crc = (crc And &HFFF0) / 16   ' high 3 nibbles of USHORT >> 4
        crc = crc Xor tmp
        idx = b And &HF               ' low nibble of data byte as an index
        crc = crc Xor crc_tbl(idx)

        idx = crc And &HF
        tmp = crc_tbl(idx)
        crc = (crc And &HFFF0) / 16   ' high nibbles of USHORT >> 4
        crc = crc Xor tmp
        idx = (b And &HF0) / 16       ' high nibble of data byte as an index
        crc = crc Xor crc_tbl(idx)
    End Sub
Casual observers will see that I mask the crc before I shift it, instead of shifting and then masking, but I believe this ensures the VBA division by 16 produces the desired integral value. IOW, an input crc of 25 (&H19) divided by &H10 would otherwise produce an intermediate floating-point value of 1.5625, which would get rounded to 2 (&H2) when assigned to the integer variable, instead of truncated to the desired &H1. Debug stepping appears to confirm this is doing what is needed. Numerous debug-stepping cycles also confirm that the high 16 bits of the 32-bit Long are always zero (no hidden sign extension when the CRC already has bit 15 on).
I feel I should be able to spot the invisible differences in the way numbers are treated by these two languages, but it's been a ridiculous number of hours without a breakthrough, and it's time to ask for help. Absent anyone seeing the flaw, I guess I'm going to have to write a Java program that reads my FIT file and displays the checksum it computes after each byte. [sigh]