Processing BLE sensor data incoming via onCharacteristicChanged - what is best practice?

My data field app reads the data stream of a BLE sensor, interprets the received data, and generates display values for the data field. The data arrives as a byte array via the onCharacteristicChanged callback, in 20-byte blocks.

The sensor sends between 80 and 120 bytes per second, so the callback is invoked approximately 4-6 times per second.

Which is the better strategy for processing the incoming data: immediately, in the callback function? Or buffering it in a separate byte array, which is evaluated once per second in the compute() function?
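For illustration, the buffered strategy can be sketched like this. This is a minimal, language-agnostic Python sketch; on a Garmin device the real code would be Monkey C, and every name here (the class, the cap, the callbacks) is hypothetical:

```python
# Sketch of the "buffer in the callback, process once per second" strategy.
# All names are hypothetical; on a real device this logic would live in a
# Monkey C BleDelegate / data field, not in Python.

class SensorField:
    MAX_BUFFER = 300  # cap, comfortably above the ~80-120 bytes/s payload

    def __init__(self):
        self.buffer = bytearray()

    def on_characteristic_changed(self, chunk: bytes):
        # Called 4-6 times per second with ~20-byte BLE notifications:
        # just append, keeping the callback as cheap as possible.
        self.buffer += chunk
        if len(self.buffer) > self.MAX_BUFFER:
            # keep only the newest bytes so the buffer stays bounded
            self.buffer = self.buffer[-self.MAX_BUFFER:]

    def compute(self):
        # Called once per second: take everything buffered so far
        # and hand it to the parser in one go.
        data, self.buffer = bytes(self.buffer), bytearray()
        return self.parse(data)

    def parse(self, data: bytes):
        return len(data)  # placeholder for the real sentence parsing
```

The alternative (unbuffered) strategy simply calls parse() on each chunk directly inside on_characteristic_changed.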

  • It depends on the data, but the data field's display will only be refreshed once per second.

  • Once-per-second display is fully okay for this application. I am wondering whether it is good or unnecessary practice to use a separate ByteArray (which consumes RAM) to process the data buffered, or whether to just process it directly from the ByteArray in the callback function.

  • Do you have the necessary amount of memory available to buffer it?

    Is there a difference (in memory and CPU usage) whether you process the data as you get it or once per second?

    Do you really have to process all the data? Or is it enough to process only the last chunk, the one that will actually be displayed, so that processing all the previous packets is a waste of resources?

  • I already tried both strategies and watched with the profiler and memory viewer. I did not see any noticeable difference, whether I put the processing in the callback (unbuffered) or do it once per second (buffered). That's why I'm asking.

    The buffer consumes up to 300 bytes, because I slice the incoming byte array to 300 bytes. During regular operation it holds 80-120 bytes, which is the sensor's payload per second.

    I must process the whole payload; lost chunks would cause missing data. Not wrong data, though, since the payload is checksum protected (XOR over all characters in a sentence, as in NMEA).
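The NMEA-style XOR checksum mentioned above can be sketched as follows (a Python illustration; the sensor's exact framing and the real Monkey C implementation may differ):

```python
# NMEA-style checksum: XOR over all characters between the leading '$'
# and the '*' separator of a sentence. Sketch only; the sensor's actual
# sentence format is assumed, not known.

def nmea_checksum(sentence: str) -> str:
    payload = sentence[sentence.find("$") + 1 : sentence.find("*")]
    cs = 0
    for ch in payload:
        cs ^= ord(ch)  # XOR each character into the running checksum
    return f"{cs:02X}"  # two uppercase hex digits, as in NMEA
```

A received sentence would be accepted when nmea_checksum(sentence) matches the two hex digits following the '*'.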

  • There are things here that depend on the data itself. Is it all 8-bit unsigned, or some 16-bit, some 32-bit, etc.?

    If it's a mix of data, I'd only pull it apart when it is going to be used (in compute()?). Otherwise you'd be pulling it apart when that might not be needed. A byte array that's 300 bytes long really isn't that much memory; moving that into a normal array would take more than 300 bytes.
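The "pull it apart only when needed" idea can be sketched like this. The field layout (one unsigned 8-bit flag, one 16-bit and one 32-bit value, little-endian) is entirely hypothetical, chosen just to show mixed-width decoding from a raw byte array:

```python
import struct

# Sketch: keep the raw bytes and decode mixed-width fields lazily.
# The layout "<BHI" (uint8, uint16, uint32, little-endian) is an
# assumption for illustration, not the real sensor format.

class Record:
    def __init__(self, raw: bytes):
        self.raw = raw  # raw bytes cost ~1 byte each while stored

    def decode(self):
        # Decoding produces individual number objects, which cost more
        # RAM than the packed bytes -- so decode only the record that
        # is actually about to be displayed.
        flag, val16, val32 = struct.unpack("<BHI", self.raw[:7])
        return flag, val16, val32
```

This mirrors the point above: storing the 300-byte ByteArray is cheap, while unpacking everything into a normal array of numbers would cost considerably more than 300 bytes.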

  • Then I would process it as the data comes in. Another advantage: if you also write the data to a FIT file, you'll have less lag in the FIT file.