makeWebRequest returns responseCode -403 although traffic log shows 200

I'm making a web request and the weird thing is that I get a different responseCode in my code than in the traffic log.

I make a simple request like the following:

function makeWebRequest(url, params, callback) {
	var options = {
		:method => Communications.HTTP_REQUEST_METHOD_GET,
		:headers => {
			"Content-Type" => Communications.REQUEST_CONTENT_TYPE_URL_ENCODED
		},
		:responseType => Communications.HTTP_RESPONSE_CONTENT_TYPE_JSON
	};
	Communications.makeWebRequest(url, params, options, callback);
}

function loadWeatherPreview(callback) {
	BackgroundUtil.makeWebRequest(
		"https://api.openweathermap.org/data/2.5/forecast",
		{
			"lat" => Props.get(Constants.PROP_LAST_LOC_LAT),
			"lon" => Props.get(Constants.PROP_LAST_LOC_LONG),
			"appid" => Props.get(Props.OWM_KEY),
			"cnt" => 7,
			"units" => "metric" // Celcius
		},
		callback
	);
}

In my callback I get -403 as result code and an empty data value, but as you can see in the screenshot, the traffic log shows that I get result code 200.

How can this be?

  • I'm currently at 9336 bytes of free memory and the request is working again. I just wonder how to explain this: if the downloaded data is only 0.9 kB, how can it fill up about 9 kB?

    Now that it works I can see that I have 2096 bytes of free memory AFTER processing the response. Again this is confusing: why does the request fail on runs where the background process starts with only about 0.5 kB less free memory?

    My AppBase is already very lean: I only have a constants file with some constant definitions and a second class with the functions for my background service.

  • Without seeing the actual response payload, I can only make some estimations of what you're seeing.

    The API docs for OpenWeatherMap provide links to sample data. If I take the sample data for the forecast endpoint (here) and cut the number of entries in data["list"] down to just 7, the resulting JSON is 6,487B. If I load that ~6KB JSON document into an app as a JsonResource, the memory used is 11,635B (~180% of the original size).

    If you assume that the system holds the entire JSON document in memory while it is converted to MonkeyC objects, the total memory required when loading is 18122B (~284% of the original size).

    There are other allocations happening under the covers that aren't obvious... For example, the strings in the JSON data are deduplicated as they are converted. This functionality requires additional memory during the conversion (the exact amount depends on the number of unique strings), but reduces the amount of memory required after the conversion has completed. In the JSON I had, there were 50 unique strings and the dedup table is 2082B. With this the total memory required is 20240B (~311% of the original size).

    I can see that I have 2096 bytes of free memory AFTER processing the response.

    If you have ~2KB free AFTER processing the response then it is likely that the memory available while processing the response would have been ~2KB. As I pointed out above, this is probably very close to the amount of memory the system would require to transform the JSON.

    I'm currently at 9336 bytes of free memory and the request is working again.

    I'm not sure how you went from ~9KB free down to ~2KB free after making/receiving a web request and doing nothing else. Once the web request is complete, the system should have released all resources related to the request/response. If you don't cache the result, the memory used for the request should all have been returned to the system, and any memory that is still in use should be taken up by your application (the logging sketch at the end of this reply may help you see where it goes).

    You could easily swallow up 7KB of memory if you are changing views (pushView/switchToView) given the new response. If you are loading image data this is very easy (a 32x32 bitmap with 16 colors is more than 512 bytes).
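
    If you want to see where the memory actually goes in your case, one low-tech option is to log the free memory at a couple of points inside the background callback. This is only a rough, untested sketch (the function name and the trimming step are placeholders, not your actual code) using Toybox.System.getSystemStats():

    using Toybox.Background;
    using Toybox.System;

    function onForecastResponse(code, data) {
      // how much is free right after the JSON has been turned into objects
      System.println("on entry:       " + System.getSystemStats().freeMemory);

      var trimmed = null;
      if (code == 200 && data != null) {
        trimmed = data["list"];   // or whatever small subset you actually keep
        data = null;              // drop the reference to the full document
      }

      // how much is free once the full response has been released
      System.println("after trimming: " + System.getSystemStats().freeMemory);
      Background.exit(trimmed);
    }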

  • Based on your example numbers, can I deduce that I need approximately 3 times as much free memory as the size of the JSON response to be able to handle it? Is there a way to check this with some debugging tool?

    Regarding my unclear mentions of free memory, let me explain with my current numbers:

    • in BackgroundService::initialize I have 10928 bytes of free memory
    • in BackgroundService::onTemporalEvent I make a web request
    • in the web request's callback function I check the response, create an array of 9 elements out of it, and check the memory after creating this array, so "after" was meant as "after getting the response" - I'm still inside the background process. Here I now have 4544 bytes of free memory

    Still, in this scenario I see that the background process fails every now and then - not in my emulator anymore, but on my real device - even though the JSON response varies very little here; it contains only a few strings that vary, and those are limited in size.

    Btw, I only use the OWM API for the weather preview with a cnt value of 3, so I limit the weather preview entries in data["list"] to 3... This seems to be the maximum that I can handle.

  • One way to get a better handle on the response is to first just enter the request you are sending into a browser and look at what you get back. It's not only the size, but the complexity that matters. And things like a "0" may look like 1 character, but when it's converted, it takes more memory.

    The next thing is to maybe throw together a simple widget or device app and make the call in the foreground. It's much easier to look at things like peak memory that way.
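
    Something along these lines should be enough to start with. It's only an untested sketch: the class names, coordinates and API key are placeholders, and the app needs the Communications permission in its manifest.

    using Toybox.Application;
    using Toybox.Communications;
    using Toybox.System;
    using Toybox.WatchUi;

    // Minimal view so the app has something to display while we poke at the request.
    class ForecastTestView extends WatchUi.View {
      function initialize() {
        View.initialize();
      }
    }

    class ForecastTestApp extends Application.AppBase {
      function initialize() {
        AppBase.initialize();
      }

      // Fire the same forecast request in the foreground so the simulator's
      // HTTP Traffic Viewer and Memory Viewer can be pointed at it.
      function onStart(state) {
        Communications.makeWebRequest(
          "https://api.openweathermap.org/data/2.5/forecast",
          {
            "lat" => 48.2,             // placeholder coordinates
            "lon" => 16.4,
            "appid" => "YOUR_OWM_KEY", // placeholder API key
            "cnt" => 7,
            "units" => "metric"
          },
          {
            :method => Communications.HTTP_REQUEST_METHOD_GET,
            :responseType => Communications.HTTP_RESPONSE_CONTENT_TYPE_JSON
          },
          method(:onForecast)
        );
      }

      function onForecast(code, data) {
        System.println("code=" + code + " free=" + System.getSystemStats().freeMemory);
      }

      function getInitialView() {
        return [ new ForecastTestView() ];
      }
    }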

  • Is there a way to check this with some debugging tool?

    Background apps are a little tough to debug, but if you make a simple foreground application that makes the request, you can use the HTTP Traffic Viewer to look at the Content-Length header. That is the size of the raw JSON document coming from the server. When you get the response, keep a reference to the data you receive, and use the Memory Viewer to see the size of that object and its children (a small sketch of this is at the end of this reply).

    As for how much memory is used internally by the system to transform the raw JSON into MonkeyC objects, you can't really see that directly. As I mentioned above, we deduplicate the strings so you can find all the unique strings in the data and assume they are packed into a collection like a Dictionary.

    Keep in mind that there will be additional memory use when you pass the data back to the application via Background.exit(). We have to serialize the data so that the foreground app can read it back when it starts up.
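
    To make the Memory Viewer part concrete, here is an untested sketch (the class and variable names are mine, not from your app) of holding on to the parsed response so it stays alive long enough to be inspected:

    class ForecastDebugger {
      // Held response, so it isn't released when the callback returns and the
      // Memory Viewer can show the object and its children.
      var lastResponse = null;

      function initialize() {
      }

      function onForecastResponse(code, data) {
        lastResponse = data;
      }
    }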

  • Just a note here... There are things you can do when receiving the payload to reduce the memory used by the app. You say that in your response handler, you create an array of 9 elements, but you don't go into much depth about what is in that array. Are you keeping the entire structure around and passing that back to the foreground application like this...

    function onForecastResponse(code, data) {
    
        var array = new [9];
        array[0] = code;
        
        // pass the entire response back to the foreground app via the array
        array[1] = data;
        
        // ...
        
        Background.exit(array);
    }

    Do you really need all of the data in that data structure? If not, you can trim out the stuff that you don't need. As an example:

    function onForecastResponse(code, data) {
      var array = new [9];
      
      var forecasts = data["list"];
      for (var i = 0; i < forecasts.size(); ++i) {
        var forecast = forecasts[i];
    
        var time = forecast["dt"];
        var main = forecast["main"];
        var wind = forecast["wind"];
        
        // release the forecast data that we aren't holding references to. this
        // should free up some memory for the allocation below
        forecasts[i] = null;
    
        array[i] = [ time, main["temp"], main["temp_min"], main["temp_max"], wind["speed"], wind["deg"] ];
      }
      
      // release the response data as we don't need it any more
      data = null;
      
      // pass the trimmed data back to the foreground app
      Background.exit(array);
    }

    Ideally you would have done the data trimming before the data gets to the watch because the BLE connection on these devices is so very slow. Any reduction in the size of the data will be an increase in battery life and a boost in performance, not to mention having to worry less about memory limits...

    The OpenWeatherMap service API doesn't really allow you to filter out the stuff you don't want, but you could always set up a simple web service proxy (in Google App Engine or in Amazon Cloud) that would take the full response and trim it down to just the stuff your app needs.

  • And with Background.exit() the limit is 8 kB, and the calling app must have enough free memory to handle the data passed back.
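
    If you want to be defensive about that limit, something like this should work (an untested sketch; ExitDataSizeLimitException comes from Toybox.Background and is thrown when the serialized data is too large):

    using Toybox.Background;

    function exitWithData(array) {
      try {
        Background.exit(array);
      } catch (ex instanceof Background.ExitDataSizeLimitException) {
        // payload too big for the 8 kB limit - fall back to returning nothing
        Background.exit(null);
      }
    }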

  • Thanks for all the information.

    The tip about making a small example app that runs the code in the foreground for testing is very helpful.

    I already do what you suggest, but thanks for pointing it out. There is an interesting detail in your code - setting the data to null - isn't this memory released by the background process itself? Is that only there to tell the GC that it can clean up this memory already if needed, which may result in more free memory for the background service a little earlier?

  • There is an interesting detail in your code - setting the data to null - isn't this memory released by the background process itself?

    I somehow confused myself into thinking you had got past that -403 error passed to your callback and were running into further memory issues inside the callback.

    Yes, that memory will be released automatically when the background process exits. If you were running into memory issues *after* you received the response, you could trim bits that you don't need out of the incoming data, freeing up memory for things that you do need.

    MonkeyC doesn't use a garbage collector. Objects that are larger than 4 bytes are allocated from the heap. All heap allocated objects are reference counted, and when the last handle to such an object goes out of scope, the object is cleaned up and the associated memory is returned to the system *immediately*.
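
    You can see this behaviour directly if you want - here is a tiny, untested sketch (nothing app-specific, just Toybox.System.getSystemStats()) that prints the free memory before and after the last reference to an object is cleared:

    using Toybox.System;

    function demoRefCounting() {
      var big = { "list" => new [50] };   // reference-counted heap object
      System.println("free with object: " + System.getSystemStats().freeMemory);
      big = null;                         // last reference dropped, memory reclaimed right away
      System.println("free after null:  " + System.getSystemStats().freeMemory);
    }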

  • The OpenWeatherMap service API doesn't really allow you to filter out the stuff you don't want, but you could always set up a simple web service proxy (in Google App Engine or in Amazon Cloud) that would take the full response and trim it down to just the stuff your app needs.

    If you're still getting -403 errors, I think your best option is to write a web service proxy to trim out the data from the response that you do not need in your app.