problem with makeWebRequest w/ json response

Sorry if this has been covered, but I don't think it has.

I have a background service that's pulling data from a REST API. The API was returning a dictionary as json data. This works as long as the json data is very small - around 400 bytes. But once I got to around 600 bytes, it fails with -403 (NETWORK_RESPONSE_OUT_OF_MEMORY).

If I change the request to expect 'text/plain', and change the response to match, it will succeed. Of course, now I have a blob of unparsed json that I have to deal with.
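
For reference, the request is set up roughly like this (the class name, endpoint, and callback are placeholders, not my exact code):

    using Toybox.Background;
    using Toybox.Communications as Comm;
    using Toybox.System;

    (:background)
    class TrackServiceDelegate extends System.ServiceDelegate {
        function initialize() {
            ServiceDelegate.initialize();
        }

        function onTemporalEvent() {
            var options = {
                :method => Comm.HTTP_REQUEST_METHOD_GET,
                // HTTP_RESPONSE_CONTENT_TYPE_JSON makes the SDK parse the body
                // into a dictionary; ..._TEXT_PLAIN hands back the raw string.
                :responseType => Comm.HTTP_RESPONSE_CONTENT_TYPE_JSON
            };
            Comm.makeWebRequest("https://example.com/track", null, options,
                                method(:onReceive));
        }

        function onReceive(responseCode, data) {
            System.println("code=" + responseCode);
            Background.exit(data);
        }
    }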

It seems to me that the background process is running out of memory when parsing the json response and turning it into a dictionary object. Has anyone else run into this?

The dictionary has 38 key/value pairs - not an unreasonable amount. Obviously, none of the values are too big if the entire json blob is only 600 characters.

The server-side code is mine, so I can certainly change it to something easy to parse - like a comma-separated list. Seems like a shame to do that when the SDK is supposed to take care of converting data types to/from json. But if it's doing it in a really memory-inefficient way, I may not have a choice.

This is a data field, so perhaps there are very tight memory constraints (as opposed to apps/widgets). I'm testing on a Fenix 5X in the simulator, but I know the SDK documentation says the background process memory is even more limited than what the data field has available.

Are the background process limits documented anywhere? Anyone else run into this yet? Thanks...
  • With a DF, the background process can be limited to as little as 16 KB for code and data. Also, when you get JSON data, that will take more memory than text/plain, as the JSON data must be turned into a dictionary - so right there, you'll be using more than twice the amount of memory of the raw response.

    400 or 600 bytes seems small, but I recall that the complexity of the data also comes into play, and it could be that the 38 pairs are what's getting you, as a single dictionary pair uses more memory than the json text it came from. I suspect you'd have no problem with 600 bytes split across, say, 5 pairs.

    Since you control the data, maybe find a different way to organize it so there are fewer pairs.
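
    Purely as an illustration (the key name and field order here are made up), the server could pack the values into a single array under one key, and the background code would then unpack them by position:

        using Toybox.Lang;
        using Toybox.System;

        // Server sends {"v": [lat, lon, speed, heartRate, ...]} instead of 38
        // separately named pairs - one dictionary entry plus one array.
        function onReceive(responseCode, data) {
            if (responseCode == 200 && data instanceof Lang.Dictionary) {
                var v = data["v"];
                var lat = v[0];
                var lon = v[1];
                var speed = v[2];
                var heartRate = v[3];
                System.println(lat + "," + lon + " " + speed + " " + heartRate);
            }
        }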
  • The background process is the same size for all application types on a device, and the smallest this has ever been is 32 KB.

    I think the space taken up by a hash container with 38 pairs would be about 1200 bytes.

    I think the overhead on a string type is 8 bytes per string, so if all 38 pairs are string->string, this is an additional 608 bytes of overhead. If some of the entries are just numbers, then this might be smaller.

    With the 600 bytes of actual payload, you are looking at something in the vicinity of 2.5 KB to represent that JSON as a Monkey C Dictionary. You can of course find the exact size by putting together an application that loads the payload into its memory directly and checking it in the Memory Viewer in the simulator.

    Are you using the :background annotation to include only your background code in your background process? If you aren't, your full data field application could be getting loaded when the background process spawns, which could put you close to the limit. If fetching this JSON is the only thing your background process does, I don't think there is any reason it shouldn't be able to receive it.

    You can certainly reduce the overhead by reducing the number of key/value pairs as Jim indicated. The hash container approximately doubles in size every time it crosses the packing threshold for its entry count, so dropping the number of pairs could drastically reduce the 1200-byte container overhead. Again, creating a device app that loads this payload and looking at it in the Memory Viewer is going to be pretty helpful here.
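
    The general shape is something like this (class names are just for illustration) - only the annotated classes get linked into the background image:

        using Toybox.Application;
        using Toybox.Background;
        using Toybox.System;
        using Toybox.WatchUi;

        // Annotated so the background process can instantiate the app.
        (:background)
        class TrackApp extends Application.AppBase {
            function initialize() {
                AppBase.initialize();
            }

            function getServiceDelegate() {
                return [new TrackServiceDelegate()];
            }

            function getInitialView() {
                return [new TrackField()];
            }
        }

        // Annotated: included in the background process image.
        (:background)
        class TrackServiceDelegate extends System.ServiceDelegate {
            function initialize() {
                ServiceDelegate.initialize();
            }

            function onTemporalEvent() {
                // makeWebRequest() / Background.exit() would go here
                Background.exit(null);
            }
        }

        // Not annotated: the data field view stays out of the background image.
        class TrackField extends WatchUi.SimpleDataField {
            function initialize() {
                SimpleDataField.initialize();
                label = "Live";
            }

            function compute(info) {
                return 0;
            }
        }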
  • Thanks. I suspect the SDK just isn't very efficient at converting the json into a dictionary, and it's running into whatever memory limit the background process has. I'll probably just simplify the response into easily-parseable csv text and pass that back to the main process via Background.exit(). Then I can parse the data myself into whatever data structure I want. Still frustrating to run into limitations like this even on a more powerful device like the F5X. I could maybe understand it on an older (16KB) device like the Fenix 3. Oh well...
  • Hmm. For some reason I thought it was smaller in DFs on some devices, but yes, 32 KB is the min. I stand corrected! :)
  • Thank you for the very thorough explanation!!! :-D

    I probably am loading more code into the background process than needs to be there. The simulator says I'm using about 12KB if I recall. I think it ought to be well below the 32KB limit, but I'll try reducing the code in the bg process and see if it makes a difference. I'll also look to see how much memory that dictionary is actually consuming.

  • Well, I made sure only the code required by my background process has the (:background) annotation. The simulator says I'm using 15.9KB for the data field.

    Memory usage: 15.9/124.7 kB
    Peak Memory: 21.1 kB
    Object Usage: 117/65535
    Peak objects: 132

    A json blob of 603 characters succeeded - it appears the resulting dictionary was using 1728 bytes of memory.

    A json blob of 729 characters failed.

    Now here is where it gets weird...

    My callback from makeWebRequest gets called, and I get a 200 return code. I printed the dictionary to the console and it looks fine. My callback calls Background.exit(dict) to return the data to the main process.

    But my App.onBackgroundData() method is never called. If I reduce the json size again, the data makes it all the way through to App.onBackgroundData().

    I added a try/catch block around Background.exit() to see whether ExitDataSizeLimitException was being thrown (the exit path is sketched at the end of this post). And indeed, sometimes it does get caught - but sometimes the data makes it through. (No other code changes were made, and the data is roughly the same size each time.) Sometimes I'll get a -403 responseCode. Sometimes the simulator just hangs and I have to kill the process. I tried running it under the debugger, and that didn't give me any useful information - usually the simulator just hung.

    So the behavior is very inconsistent. And it *seems* like I should be nowhere close to the 32KB limit. It also seems like I should be well under the 8KB limit for the data size that Background.exit() can return.

    So maybe this is a bug in the SDK or simulator? Maybe it's a bug in my code. (It seems not - but I won't rule it out.) Still, I think I'll probably just return 'text/plain' from the web service and parse that in my main application process instead. It seems like that is going to be a more robust approach for now. Too bad there's nothing exposed in the SDK to turn a json string into a dictionary - especially since you can do it when loading json resources.

    Anyway - thanks for the ideas to try. Looks like it was a bust - but it was a learning experience if nothing else.
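
    For reference, the exit path looks roughly like this (simplified from my actual code - the callback lives in the (:background) service delegate, onBackgroundData() in my AppBase):

        using Toybox.Application;
        using Toybox.Background;
        using Toybox.System;

        // makeWebRequest callback in the background service delegate.
        function onReceive(responseCode, data) {
            if (responseCode == 200) {
                try {
                    // Hand the parsed dictionary back to the main process.
                    Background.exit(data);
                } catch (e instanceof Background.ExitDataSizeLimitException) {
                    System.println("data too big for Background.exit()");
                    Background.exit(null);
                }
            } else {
                Background.exit(responseCode);
            }
        }

        // In the AppBase subclass, running in the main data field process.
        function onBackgroundData(data) {
            System.println("background data: " + data);
            // stash the data for the view to use
        }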
  • Oh - here's an example of the json. Basically, I'm posting the Activity.Info data to implement something akin to 'livetrack'. It's in the early stages, so I'm just echoing everything back to the client right now to prove the communication piece works. In the end, it'll be different data coming back - but it could potentially be the same number of key/value pairs or more. I could certainly change the web service to return the data in chunks (and require multiple calls to retrieve it). But I think I'll just bite the bullet and write some simple parsing code - something like the sketch below. Plus, I don't know if Polar/Suunto support app development on their devices, but I thought I might make this cross-platform someday. So perhaps a plaintext response would be easier to deal with on other platforms...

    community.garmin.com/.../1424567.png
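
    The parsing code itself should be pretty small - a minimal split helper along these lines (untested sketch; the field order is just whatever the service and the watch agree on):

        using Toybox.Lang;
        using Toybox.System;

        // Split a comma-separated response like "12.34,56.78,180,92".
        function splitCsv(text) {
            var fields = [];
            var idx = text.find(",");
            while (idx != null) {
                fields = fields.add(text.substring(0, idx));
                text = text.substring(idx + 1, text.length());
                idx = text.find(",");
            }
            fields = fields.add(text);
            return fields;
        }

        // Usage: pick fields out by the agreed position.
        function parseResponse(body) {
            var parts = splitCsv(body);
            var heartRate = parts[3].toNumber();
            System.println("hr=" + heartRate);
        }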
  • Have you looked at the web traffic? I am not sure how you pull this up, but I believe it was added recently. Perhaps your service is occasionally returning a much larger response or a malformed payload. If you are occasionally getting ExitDataSizeLimitException when the payload should only be about 1-2 KB, then this seems possible.

    If data is not successfully making it to onBackgroundData, that seems like something the team would be interested in investigating if you can put together a sample that reproduces the error.
  • Brian - you mean "File>View HTTP Traffic" in the sim?
  • Checked the web traffic, and it's definitely not any bigger than expected when it fails in the simulator. I'll see if I can put a sample together that reproduces the problem.