Efficient Data Logging under Memory Constraints: Dynamic Buffering and Merge-Based Downsampling on Embedded Devices
Many developers run into the same problem: a Garmin watch, like any embedded device, has very limited memory, yet you want to log data continuously for hours - often offline, with no way to upload the data for a long time.
If you simply store every sample in memory, you'll quickly run out of space.
The solution: use a fixed-size buffer, and when it fills up, automatically merge (downsample) the samples so that older data is always retained - just at lower temporal resolution - while the newest data is stored at the highest possible detail.
The Core Idea

- Collect samples in a fixed-size buffer (e.g., maxSamples = 60), typically adding a new sample every tick (e.g., every second).
- When the buffer is full (overflow), merge pairs of samples (downsampling):
  - Each sample now covers twice as much time as before.
  - After the merge, every two samples become one (e.g., 60 → 30), freeing up half the buffer.
- Crucial note: after each merge, every sample in the buffer - including the newest and the oldest - covers exactly the same time interval (the "sampling interval" set by the latest merge, e.g., 2s, 4s, 8s, etc.).
- All new incoming samples are also stored at this "coarser" (enlarged) interval until the next merge - see the sketch below.
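To make this concrete, here is a minimal sketch in C. The `Sample` layout, `MAX_SAMPLES`, `add_sample()`, and `merge_samples()` are illustrative names I've chosen for this post, not a fixed API; `merge_samples()` is spelled out in the code section at the end.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch - names and field layout are examples, not a fixed API. */
#define MAX_SAMPLES 60

/* One logged record; the fields mirror the merge table in the next section. */
typedef struct {
    uint32_t timestamp;   /* start of the time window this sample covers (s) */
    float    max_value;   /* e.g., max depth in the window                   */
    float    min_value;   /* e.g., lowest temperature in the window          */
    float    avg_value;   /* e.g., average heart rate in the window          */
    float    last_value;  /* e.g., latest battery level                      */
} Sample;

/* Pairwise merge per the table below; implemented at the end of this post. */
Sample merge_samples(Sample a, Sample b);

static Sample   buffer[MAX_SAMPLES];
static size_t   count    = 0;   /* samples currently stored      */
static uint32_t interval = 1;   /* seconds covered by one sample */

/* Call once per completed sampling window with the aggregated sample. */
void add_sample(Sample s)
{
    if (count == MAX_SAMPLES) {
        /* Overflow: merge neighbouring pairs in place, e.g., 60 -> 30. */
        for (size_t i = 0; i < MAX_SAMPLES / 2; i++)
            buffer[i] = merge_samples(buffer[2 * i], buffer[2 * i + 1]);
        count = MAX_SAMPLES / 2;
        /* Every sample now covers twice the time; the caller must also
           aggregate incoming ticks over this longer window from now on. */
        interval *= 2;
    }
    buffer[count++] = s;
}
```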
General Rules for Merging All Data Types
Data type | Merge logic | Explanation |
---|---|---|
Timestamp | Always take from the first sample | Marks the start of the merged time window |
Maximum value | Take the max of the pair | E.g., max depth, max speed |
Minimum value | Take the min of the pair | E.g., lowest temperature |
Averaged value | Take the average of the pair | E.g., avg heart rate, avg temperature, avg speed |
Monotonically increasing/decreasing | Always take from the last sample | E.g., total distance, remaining battery |
Latest/current value | Always take from the last sample | E.g., GPS position, current sensor state |
Aggregate/trend | Type-dependent: average, min, max, or difference | Compress however fits the data type |
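As a quick sanity check of the table, here is one merged pair with invented values, reusing the hypothetical `Sample` and `merge_samples()` from the sketch above:

```c
void example_merge(void)
{
    /* Two adjacent 1 s samples (illustrative values only). */
    Sample a = { .timestamp = 100, .max_value = 131.0f, .min_value = 117.0f,
                 .avg_value = 124.0f, .last_value = 118.0f };
    Sample b = { .timestamp = 101, .max_value = 135.0f, .min_value = 121.0f,
                 .avg_value = 128.0f, .last_value = 135.0f };

    /* Per the table: timestamp 100 (from a), max 135, min 117,
     * avg (124 + 128) / 2 = 126, last 135 (from b).            */
    Sample m = merge_samples(a, b);
    (void)m;
}
```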
Changing Resolution Over Long Offline Periods

- After each merge, all buffer samples cover the same time interval:
  - E.g., 1s → 2s → 4s → 8s → 16s, etc.
- All new samples are stored only at this interval (until the next merge).
- There will never be finer-resolution samples at the end of the buffer than at the start.
Formula:

If your total offline period is T seconds and your buffer holds N samples, the sampling interval should be:

- the smallest power of 2 for which N × interval ≥ T
- e.g., 6 hours = 21,600 s: with N = 60, interval = 512 gives 60 × 512 = 30,720 s ≥ 21,600 s, while 256 would only cover 15,360 s
Example Table:

Offline period | Sampling interval (s) | Each sample covers | Buffer covers (h:mm) |
---|---|---|---|
2 hours | 128 | 2 m 8 s | 2:08 |
4 hours | 256 | 4 m 16 s | 4:16 |
6 hours | 512 | 8 m 32 s | 8:32 |
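Continuing the C sketch from above, the formula could be computed like this; `required_interval()` is an illustrative helper name, not part of the original scheme:

```c
/* Smallest power-of-two sampling interval (in seconds) such that
 * n_samples windows of that length span at least total_seconds.  */
uint32_t required_interval(uint32_t total_seconds, uint32_t n_samples)
{
    uint32_t iv = 1;
    while ((uint64_t)n_samples * iv < total_seconds)
        iv *= 2;
    return iv;
}

/* required_interval(21600, 60) == 512  -- the 6-hour row above */
```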
Why Is This Good?

- You can log data for hours or days without running out of RAM - memory usage stays fixed.
- You never actually lose data, only temporal resolution on the oldest records (the information is still retained, just in coarser "chunks").
- It's always easy to reconstruct the real time interval covered by each sample - just use the timestamps (see the helper below).
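For completeness, a small helper (again continuing the hypothetical sketch above, not code from the original scheme) that recovers each sample's covered duration from consecutive timestamps:

```c
/* Seconds covered by buffer[i]: the gap to the next sample's start
 * timestamp, or the current sampling interval for the newest one.  */
uint32_t sample_duration(size_t i)
{
    return (i + 1 < count) ? buffer[i + 1].timestamp - buffer[i].timestamp
                           : interval;
}
```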
Short "merge" algorithm (pseudo-code):
function mergeSamples(a, b): ts = a.timestamp // Take the timestamp from the first sample maxValue = max(a.maxValue, b.maxValue) minValue = min(a.minValue, b.minValue) avgValue = (a.avgValue + b.avgValue) / 2 lastValue = b.lastValue // Always use the last sample for “latest” values // For other fields, apply type-specific logic! return [ts, maxValue, minValue, avgValue, lastValue, ...]
Key Takeaway
With this method, you can log indefinitely, regardless of offline duration, and never lose data - only the resolution drops for the oldest samples.
After every buffer merge, all samples in the buffer cover the same time interval.
There are never finer-resolution samples at the end than at the start.
This approach is universal - it works on Garmin devices, any embedded system, IoT device, or custom health/activity tracker.