Big update to prettier-extension-monkeyc

I've posted about prettier-extension-monkeyc before, but I've added a bunch of new features that developers will probably like (well, I've been missing them, so maybe you have too).

The new features it implements for VSCode include:

  • Goto Definition. Point at a symbol, Ctrl/Cmd click, and it will take you to the definition. Or F12
  • Goto References. Right click on a symbol and select "Goto References". It will show you all the references. Or Shift-F12
  • Peek Definition/Peek References. Same as above, but in a popup window so you don't lose your place in the original document.
  • Rename Symbol. Right click on a local, function, class or module name, and select "Rename Symbol". It will rename all the references. It doesn't yet work for class members/methods.
  • Goto Symbol. Type Ctrl/Cmd-Shift-O and pick a symbol from the drop down (which has a hierarchical view of all symbols in the current file). This also appears as an outline across the top of the file.
  • Open Symbol By Name. Type Ctrl/Cmd-T, then start typing letters from a symbol name. A drop down will be populated with all matching symbols from anywhere in your project.

Older features include a prettier-based formatter for monkeyc, and a monkeyc optimizer that will build/run/export an optimized version of your project.

[edit: My last couple of replies seem to have just disappeared, and the whole conversation seems to be in a jumbled order, so tldr: there's a new test-release at https://github.com/markw65/prettier-extension-monkeyc/releases/tag/v2.0.9 which seems to work for me on linux. I'll do more verification tomorrow, and push a proper update to the vscode store once I'm sure everything is working]

  • v2.0.53 is out.

    More improvements to the post-build optimizer. For my 11k project, it now removes nearly 300 bytes from a -O2 release build. And just for completeness, that's on top of the just over 700 bytes the source-to-source optimizer already removes from it.

  • I found a bug that was driving me crazy for 2 days: after some of my builds the simulator told me there were no settings for my app, even though I could "see" the file in bin/optimized-MyApp.original-settings.json. After some looooong hours I finally understood the problem: you should either rename or copy optimized-MyApp.original-settings.json to optimized-MyApp-settings.json. I did not check, but I can imagine that the same applies to the fit_contributions.json (though I'm not sure if that's ever used from the sim, but maybe it is when people look in the monkeygraph app)
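Until the extension handles this itself, the workaround described above can be scripted as a post-build step. Here's a minimal sketch in Python (the file-naming convention comes from the post; the function name and arguments are made up for illustration):

```python
import shutil
from pathlib import Path

def fix_settings_name(bin_dir, app_name):
    """Copy the optimizer's settings file to the name the simulator expects.

    The optimizer writes optimized-<App>.original-settings.json, but the
    simulator looks for optimized-<App>-settings.json.
    """
    src = Path(bin_dir) / ("optimized-%s.original-settings.json" % app_name)
    dst = Path(bin_dir) / ("optimized-%s-settings.json" % app_name)
    if src.exists():
        shutil.copyfile(src, dst)
        return dst
    return None
```

The same copy could be repeated for fit_contributions.json if it turns out the simulator reads that too.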

  • I found a bug

    Yes... I'd been half noticing that too; but it looks like the simulator remembers settings based on the App name, and if it doesn't find a settings file, it assumes they haven't changed since the last time. And I do enough builds with and without the optimizer, and with and without the post optimizer that any weirdness sorted itself out pretty quickly...

    Anyway, thanks for figuring that out, and sorry for not thinking about it myself. I'm running tests now, and should have a new version out soon...

  • Yes, it used to work more or less, except for a bug, supposedly in the simulator, where it sometimes loads the settings of the previous device (when I try devices with different capabilities that should also have different settings). But I think that's a real sim bug, because deleting all apps usually fixed it.

    This time I totally rewrote the settings (also generated by Python together with the monkey file), so I noticed because I had no settings at all (it even gave me that error).

    Still it took a while, but at least I found it :)

  • but I think that's a real sim bug, because deleting all apps fixed it usually

    Yes, I get bitten by that one from time to time. So when I saw the occasional weird result, I just assumed it was that.

    Anyway - just published v2.0.54 which will copy the files to the correct names.

    This release also speeds up export a little. I'm seeing a 2.5x speed-up for *my* part of the build - but that's only a small fraction of the build time anyway; most of the time for export is spent running Garmin's compiler, and there's no change for that.

  • I use Rez.Strings in a few places, so I naively thought I'd pass Rez.Strings to those functions and save some bytes. It kind of works, except for some errors I get from the optimizer, because I can't find the right "type" for the passed "variable":

    function a() {
        var strings = Rez.Strings;
        var foo = loadResource(strings.foo as Symbol) as String;
        b(strings);
    }
    
    (:inline)
    function b(strings as ???Module<Rez.Strings>???) {
        var bar = loadResource(strings.bar as Symbol) as String;
    }

    ERROR> fenix3: X.mc:95:45: Unable to resolve type Module

    Do you have any idea how to make this work without the warning/error?

    update: I tried to solve the warning by adding:

    typedef MyStrings as interface {
        var foo as Symbol;
        var bar as Symbol;
    };
    

    and use MyStrings as the type for strings. And it does remove the warning; however, it also removes the optimization :( so the code size is exactly what it was before the refactoring (when I just used Rez.Strings.foo and Rez.Strings.bar directly).

  • I've hit this myself, and the problem is that Garmin's type checker doesn't support Module or Class types at all; so I would need to invent a syntax to represent them, and then it wouldn't work when the optimizer is turned off.

    In fact, part of the reason I don't do this particular optimization in PRE is because Garmin's type checker really seems to dislike it - and I didn't want to have to force their type checker off when compiling the optimized code. But that was a while ago - maybe the newer compilers don't complain?

    I'm leaning towards just special casing "Object?" so that anything can be cast to that without a diagnostic. But that would still not be a full solution.

    Your interface approach may be best (but right now *my* type checker doesn't really work with interfaces; it just treats them as a generic unknown type). But you said it undid the optimization... did you look at the source code the optimizer produced after inlining? I suspect that "singleUseCopyProp" is converting "var strings = Rez.Strings; strings.foo" into "Rez.Strings.foo". Normally that's the right thing to do, because PRE will undo it again if that turns out to be profitable; but as I said above, I explicitly prevent PRE from doing this (so if that's what's going on, the solution is to prevent copy prop from doing it in this situation too).

    Meanwhile, here's a slightly different approach that I ended up adopting:

    (:typecheck(false)) // garmin's type checker hates this function
    function loadString(string_id as Symbol) as String {
        return WatchUi.loadResource(Rez.Strings[string_id] as Symbol) as String;
    }

    Now instead of "WatchUi.loadResource(Rez.Strings.foo);" you say "loadString(:foo)". The latter is *much* shorter than the former.

    In fact "loadString(:foo)" is actually shorter than "Rez.Strings.foo" - so it's even profitable to use it in cases where the api accepts a Symbol or a String.

    E.g. it's better to do "new MenuItem(loadString(:foo), ...)" than "new MenuItem(Rez.Strings.foo, ...)".

    And the savings are huge if you need to put them in an array. Now you can just say [:foo, :bar, :baz] instead of [Rez.Strings.foo, Rez.Strings.bar, Rez.Strings.baz] (which saves about 16 bytes per array element).

    As a result, I only have that one mention of Rez.Strings in my entire app.

  • No, it didn't convert them back to Rez.Strings.foo. It looks like it should:

    loadResource(name /*>strings<*/.fieldNoChannel as Symbol) as String;

  • Well :) 

    I tried loadString, but it's not as good as passing Rez.Strings, even though now I replaced all occurrences of loadResource(Rez.Strings.x) with loadString(:x), whereas previously I only replaced some of them - the ones inside one big function (after inlining).

    Originally I had:

    > Sizes for optimized-X.original-fenix3: code: 5921 data: 1751 <
    > Sizes for optimized-X-fenix3: code: 5750 data: 1751 <

    When I passed Rez.Strings between the inlined functions it went down to:

    > Sizes for optimized-X.original-fenix3: code: 5905 data: 1751 <
    > Sizes for optimized-X-fenix3: code: 5734 data: 1751 <

    But replacing all those (and the rest) with a call to loadString increased the size:

    > Sizes for optimized-X.original-fenix3: code: 5910 data: 1759 <
    > Sizes for optimized-X-fenix3: code: 5731 data: 1759 <

    (It's still slightly better than the original though, but 30% worse than passing Rez.Strings: -11 bytes instead of -16)
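The net numbers quoted above can be double-checked with a little arithmetic (sizes copied from the build output; the dictionary names are just labels for this sketch):

```python
# Post-optimizer sizes for optimized-X-fenix3, from the build logs above.
base = {"code": 5750, "data": 1751}          # original, before the refactoring
pass_rez = {"code": 5734, "data": 1751}      # passing Rez.Strings between functions
load_string = {"code": 5731, "data": 1759}   # calling loadString everywhere

def delta(new, old):
    # Net size change: code delta plus data delta.
    return (new["code"] - old["code"]) + (new["data"] - old["data"])

print(delta(pass_rez, base))     # -16 bytes
print(delta(load_string, base))  # -11 bytes: code shrinks by 19, but data grows by 8
```

So the loadString version actually has the smallest code, and only loses overall because of the extra 8 bytes of data.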

  • I tried loadString, but it's not as good as passing Rez.Strings

    Oh. I know what's changed.

    With minimizeModules enabled, Rez.Strings.x is smaller than it was when I did my testing a long time ago.

    As we discovered a few weeks ago, adding "using Rez;" - which the optimizer now does - and making no other changes saves 6 (or was it 8?) bytes per Rez.Strings.x. And of course, if it wasn't for the bug I reported, it could do "using Rez.Strings;" and save another 6 bytes. So yet again, new optimizations change the dynamics a little...

    In my case, I'm pretty sure it's still a win, because I also use the string symbols elsewhere. E.g. I have an addMenuItem function that takes a symbol, and creates a menu item whose title is Rez.Strings.symbol, whose id is symbol, and whose handler is a method on the delegate named symbol. But disentangling all that to find out is more work than I want to take on; especially as we're only talking about 5 bytes!

    EDIT: On the subject of new optimizations changing the dynamics a little...

    I've been looking at a potential optimization in the post-build optimizer. Normally, method calls involve looking up the symbol, pushing the appropriate "self" for the call, then the arguments, then invoking it. That's typically 11 bytes (plus the code to push the arguments). In addition, there's an 8 byte data entry for the method (so that it can be looked up).

    But Garmin's bytecode includes a jsr (jump to subroutine) instruction that takes 3 bytes. Currently (as far as I can tell) it's only used to call "finally" blocks in try/catch/finally - basically the finally block is a "subroutine" that gets called from the end of both the try block and the catch block. It's an easy way to avoid duplicating the code in the finally.

    But I think that for certain simple methods, if I can prove they're only ever called directly, and they meet certain fairly common criteria, I could remove the method from the method table (saving 8 bytes of data), and call it directly using jsr. Unfortunately, there's some awkward stack manipulation involved, and I think it will end up needing a couple of extra (one byte) instructions in the caller. But that's still 5 bytes per call instead of 11, and the method itself would be exactly the same size.

    As I said, there are some restrictions on the callee too - but my loadString function is certainly a candidate. So if I can get that to work, the loadString version would end up with the same data size as the other two, and save an additional 6 code bytes per call to loadString...
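Under the assumed costs stated above (roughly 11 bytes of call code plus one 8-byte method-table entry per method today, versus roughly 5 bytes per jsr-based call with no table entry), the potential saving works out like this:

```python
def direct_call_saving(num_calls):
    # Assumed costs, taken from the discussion above:
    #  - normal method call: ~11 bytes of code per call site,
    #    plus one 8-byte method-table data entry per method
    #  - jsr-based call: ~5 bytes of code per call site, no table entry
    normal = 11 * num_calls + 8
    jsr = 5 * num_calls
    return normal - jsr

print(direct_call_saving(1))  # 14 bytes saved for a single call site
print(direct_call_saving(5))  # 38 bytes saved across five call sites
```

In other words, 6 code bytes per call site plus a one-time 8 data bytes per eligible method - which is why even a single-call helper like loadString would come out ahead.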