Rez vs extendedCode

I tried to load a JSON dictionary with 1284 elements from Rez in a function annotated with :extendedCode, but it crashed with an out of memory error anyway. All was well until it got to this function. The compiler warned the array would be >8k. I don't know its actual size, but not surprisingly it made the prg file about 1 MB bigger. Is there a trick to make Rez work with :extendedCode, or is this one of the new Garmin "features" that sounds great on paper but isn't really usable in the real world?

(:extendedCode)
	public function getTides() as Void {
		var tide_rez = Application.loadResource(Rez.JsonData.NOAA) as Dictionary;
		// ... crashes with an out of memory error here
	}

  • I wouldn't expect something called extendedCode to work with data, just code. 

  • By the way, there is a sample for this in the SDK.

  • I saw that. I also saw that it just initialized an array with a loop. Not a particularly useful example.

  • Big data is more of a thing than big code, so I was hoping that all that extra space could be used for a million rows of data instead of a million lines of code.

  • The way I understand it, it's not millions of lines of code either. What it allows is for the same space in memory to be used for different code at different times. So after initialize is complete, its space can be reused by code needed at runtime, for example.

    Here's what the doc says

    :extendedCode (API Level 5.1.0)
    Code in extended code space is paged in during runtime on demand. For supported devices, this provides 16 megabytes of code space beyond what is loaded into the heap. When building for supported products, the compiler will move those functions into extended code space. Code in extended code space is paged in memory using a “most recently used” strategy, but there can be a performance penalty if code must be paged in. Code that is performance-dependent should be kept out of extended code space to avoid the additional performance hit.
  • I read that too, thanks, but I haven't seen any documentation about extendedCode beyond that.

    Way back in the 1980's when computers had about as much memory as our watches do now, I had to segment the programs I wrote and load in code overlays as they were needed. I get the idea. 

    Right now the code size for my watchface is about 22k on a F7X from about 3600 lines of code. Some of those are comments and syntactic sugar, but that works out to roughly 6 bytes per line. Dividing 16 MB by 6 gives well over 2 million lines. So what if we can have virtually unlimited code in our watch; code works on data.

    Back to the original question, when I load a relatively small part of the data it appears to work. After the function is called I can look at the dictionary in the memory viewer. The memory usage is about the same as always, but what is different is that the peak memory and objects is much higher. This implies that the Rez data is pulled into the app memory space before it is moved into extendedCode space. If so then this really limits the usefulness of this whole idea.

    One solution would be to have a huge extendedCode block which would load all the little Rez data chunks one at a time. In the end we might need all those lines of code after all. Besides the obvious problem with that, all the little Rez chunks would each take their own little bit of memory in the Rez portion of app memory space.
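    To make the chunked idea concrete, here's a rough sketch of what that could look like. The split resource IDs (Rez.JsonData.NOAA_0 through NOAA_3) are hypothetical names I made up for illustration; you'd have to split the JSON yourself at build time:

```monkeyc
import Toybox.Application;
import Toybox.Lang;

// Hypothetical: one big JSON resource split into four smaller ones.
// Rez.JsonData.NOAA_0 .. NOAA_3 are assumed names, not real SDK symbols.
(:extendedCode)
function loadTidesChunked() as Dictionary {
    var chunks = [Rez.JsonData.NOAA_0, Rez.JsonData.NOAA_1,
                  Rez.JsonData.NOAA_2, Rez.JsonData.NOAA_3];
    var tides = {} as Dictionary;
    for (var i = 0; i < chunks.size(); i++) {
        var part = Application.loadResource(chunks[i]) as Dictionary;
        // Merge each part; the loading peak is now roughly one chunk's
        // worth of overhead instead of the whole resource at once.
        var keys = part.keys();
        for (var j = 0; j < keys.size(); j++) {
            tides[keys[j]] = part[keys[j]];
        }
    }
    return tides;
}
```

    Note that the merged dictionary still lives in regular app memory, so at best this lowers the peak during loading, not the steady-state footprint.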

    Maybe this is a good thing in theory, but without a few tweaks it doesn't seem so useful in practice.

  • Back to the original question, when I load a relatively small part of the data it appears to work. After the function is called I can look at the dictionary in the memory viewer. The memory usage is about the same as always, but what is different is that the peak memory and objects is much higher.

    "about the same".

    Wouldn't you expect it to be exactly the same? Or different by some fixed size - the size of a theoretical object that points to the extended code space? But I don't see any such object in the memory viewer and I don't think you do, either.

    On the other hand, when the memory viewer displays something like a BitmapReference, which points to the graphics pool (outside of the regular app memory), it's 100% clear that this is not the same thing as a Bitmap that lives in app memory.

    This implies that the Rez data is pulled into the app memory space before it is moved into extendedCode space.

    Are you 100% sure about that? It could just be that there's a huge fixed overhead for converting JSON resources to a Monkey C dictionary (in addition to an overhead that's proportional to the resource/dictionary size).

    Maybe you should do the same test without using extendedCode at all, and see if you get similar results.

    Not sure how it makes sense that only small amounts of data would be moved to extendedCode but not large amounts.

    And what good would it be if the process of loading / creating data still causes a peak in regular app memory before the data is supposedly moved to extended code space? In that case you would still be limited by the available app memory (unless you loaded data in small chunks, as you suggested).

    btw the SDK sample also shows that peak memory is affected by allocating a new array within an extendedCode function. As I'll argue below, it could be modified to show that the data created by an extendedCode function always resides in app memory, and actually isn't moved into extended code space.

    So I don't think the sample is as useless as you think.

    I saw that. I also saw that it just initialized an array with a loop. Not a particularly useful example.

    If you modify the SDK sample so that the array is returned and stored somewhere in memory (e.g. global variable, member variable of the view), I think you'll see that data created by an extendedCode function indeed resides in regular app memory, not extended code space.

    I think it's very easy to see this by varying the array size. To address your theory about small amounts of data somehow being moved to extended code space where large amounts would not, I tested the following two scenarios:

    - array of size 1 is created by extendedCode function and stored in a global variable

    - array of size 2 is created by extendedCode function and stored in a global variable

    Each scenario produces a different value for System.getSystemStats().usedMemory (which is displayed on the device screen in the sample). (When I did this test, I found a difference of exactly 8 bytes [*]) To me, this indicates that data created by an extendedCode function is in regular app memory. If the data was moved to extended code space, I would expect usedMemory to be exactly the same in both scenarios.

    [*] If I increase the size further (e.g. to 5 or 10), then I see a difference of about 4 bytes per additional array element, but the memory usage is always an integer multiple of 8.

    If I run the same tests as above, except with the function not annotated as extendedCode, then I still see a difference of about 4 bytes per array element in usedMemory (and the memory usage is always an integer multiple of 8).
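    For reference, here's roughly what my modified version of the sample looks like. It's a sketch, not the SDK sample verbatim; the names (gData, makeArray, runTest) are mine:

```monkeyc
import Toybox.Lang;
import Toybox.System;

var gData as Array<Number>?; // global, so the array survives the call

(:extendedCode)
function makeArray(n as Number) as Array<Number> {
    var a = new [n] as Array<Number>;
    for (var i = 0; i < n; i++) { a[i] = i; }
    return a;
}

function runTest(n as Number) as Void {
    gData = makeArray(n);
    // If the array were moved to extended code space, usedMemory should
    // not change with n; in practice it grows ~4 bytes per element
    // (rounded up to a multiple of 8).
    System.println(System.getSystemStats().usedMemory);
}
```

    Calling runTest with different sizes and comparing the printed values is all it takes to see the effect.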

    Ofc you really didn't say exactly what you did to persist the dictionary in memory after loading it in your extendedCode function, so it's *possible* you did something differently than I did, which somehow moves the data into extended code space.

    I also tried wrapping the data in a class annotated with extendedCode, and that doesn't seem to do the trick (not that I expected it to).

    After the function is called I can look at the dictionary in the memory viewer.

    Doesn't the fact that you can look at the dictionary in the memory viewer, and that it just shows up as a normal object, kind of suggest that it's in the regular app memory space?

    If what I said above doesn't convince you, try making your dictionary just a *little* bit bigger and see if there's an impact on current memory usage.

    Use System.getSystemStats().usedMemory so you don't have to worry about the fact that the memory viewer only shows differences of 100 bytes for current memory usage.
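    Something like this would give you an exact number for what the dictionary actually costs in app memory (the function name and println are mine; the loadResource call is from your snippet):

```monkeyc
import Toybox.Application;
import Toybox.Lang;
import Toybox.System;

(:extendedCode)
function getTidesMeasured() as Void {
    var before = System.getSystemStats().usedMemory;
    var tide_rez = Application.loadResource(Rez.JsonData.NOAA) as Dictionary;
    var after = System.getSystemStats().usedMemory;
    // If the dictionary really lived in extended code space, this delta
    // would be near zero; if it's in app memory, it scales with the data.
    System.println("dictionary cost: " + (after - before) + " bytes");
}
```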

    Big data is more of a thing than big code, so I was hoping that all that extra space could be used for a million rows of data instead of a million lines of code.

    I agree that this would be nice, but it doesn't seem possible.

    What you *could* do is rewrite a lookup to a huge dictionary as a huge amount of pure code (e.g. nested if statements). It would be terribly inefficient in terms of code size ofc but you *might* be able to encode more data in this way than would normally be supported.

    You could even write a script to auto-generate the process of converting a dictionary lookup to a function.
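    To illustrate the data-as-code idea, generated output might look like the sketch below. The station IDs and names are just example values, and tideStation is a made-up function name:

```monkeyc
import Toybox.Lang;

// Auto-generated lookup: each dictionary entry becomes a branch, so the
// "data" lives in (pageable) extended code space instead of the heap.
(:extendedCode)
function tideStation(id as Number) as String? {
    if (id == 8443970) { return "Boston"; }
    if (id == 8418150) { return "Portland"; }
    if (id == 8454000) { return "Providence"; }
    // ... thousands more generated lines ...
    return null;
}
```

    A linear chain of ifs is O(n) per lookup, and extended code is paged in on demand, so this trades both speed and code size for heap space. It only makes sense for data that's looked up rarely.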

    Maybe a better option would be to try to find a way to split up one huge JSON resource into multiple smaller ones.

  • Maybe this is a good thing in theory

    But your theory is that data will be moved into the extended code space under some circumstances, yet I don't see any evidence for that (either in theory or in practice).

    i.e.

    - Garmin never said you could use this feature to load data outside of normal app memory

    - I don't see any evidence that you can use this feature to load data outside of normal app memory

    Is there a trick to make Rez work with :extendedCode, or is this one of the new Garmin "features" that sounds great on paper but isn't really usable in the real world?

    So to be clear, you're complaining that extendedCode doesn't do something that Garmin never promised it would?

    "Sounds great on paper" suggests that there would be some reasonable expectation that this feature would do what you want, but I'm not seeing it.

  • But the data will be disposed of as soon as it is swapped out of memory, so what use is that? I suspect that you are still limited to the same app memory limits. It's just that extended code is paged in and out of that memory as required. Which means you need to be careful with dependencies so you don't still exceed those memory limits. You can't just load up something humongous; you can't have 16 MB in memory, only a subset which fits within the existing limits.