Big update to prettier-extension-monkeyc

I've posted about prettier-extension-monkeyc before, but I've added a bunch of new features that developers will probably like (well, I've been missing them, so maybe you have too).

The new features it implements for VSCode include:

  • Goto Definition. Point at a symbol, Ctrl/Cmd click, and it will take you to the definition. Or F12
  • Goto References. Right click on a symbol and select "Goto References". It will show you all the references. Or Shift-F12
  • Peek Definition/Peek References. Same as above, but in a popup window so you don't lose your place in the original document.
  • Rename Symbol. Right click on a local, function, class or module name, and select "Rename Symbol". It will rename all the references. It doesn't yet work for class members/methods.
  • Goto Symbol. Type Ctrl/Cmd-Shift-O and pick a symbol from the drop down (which has a hierarchical view of all symbols in the current file). This also appears as an outline across the top of the file.
  • Open Symbol By Name. Type Ctrl/Cmd-T, then start typing letters from a symbol name. A drop down will be populated with all matching symbols from anywhere in your project.

Older features include a prettier-based formatter for monkeyc, and a monkeyc optimizer that will build/run/export an optimized version of your project.

[edit: My last couple of replies seem to have just disappeared, and the whole conversation seems to be in a jumbled order, so tldr: there's a new test-release at https://github.com/markw65/prettier-extension-monkeyc/releases/tag/v2.0.9 which seems to work for me on linux. I'll do more verification tomorrow, and push a proper update to the vscode store once I'm sure everything is working]

  • It's a pity you spam this thread with off-topic stuff. Maybe you should open a new thread where programmers can explain to you the basics.

  • I think optimization, especially when it comes to potentially new opcodes, is very on topic for this thread. Having been programming for more than four decades, it seems to me that when a person resorts to personal attacks it's a sign I might be on to something.

  • I gave a pretty thorough description of the changes to the opcodes, and how they affect code size. The most likely ones here are the ones that avoid manually constructing arrays.

    If you have a literal array, such as var x = [1,2,3];, then with the old bytecodes that was just syntactic sugar for

    var x = new [3];

    x[0] = 1;

    x[1] = 2;

    x[2] = 3;

    So there was a lot of code involved. There's a new opcode to construct a literal array, so all those instructions are avoided.

    When an array is constructed this way, it does have to be copied at some point: if the runtime just handed out a reference to the shared literal array, you could modify it, and the next time someone asked for the same array they'd get the modified copy.

    So either they've introduced copy on write for this one special case (not impossible), or they just copy the data at the point the instruction executes.

    Either way, arrays are not copied when you assign them to variables...
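    A minimal sketch of what that means in practice (variable names are illustrative):

    ```monkeyc
    // Monkey C arrays are assigned by reference, not copied.
    var a = [1, 2, 3];
    var b = a;        // b refers to the same array object as a
    b[0] = 99;        // a[0] now reads as 99 too, since a and b share storage
    // An explicit copy needs something like a.slice(0, null), or a manual loop.
    ```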

  • I don't have any hard coded arrays; I build them starting with [] and using .add(someVal) in loops in a function. You can see from the screenshots that the number of objects and the data size are pretty much the same. So maybe what changed isn't array related, but still it seems big to cut code size by a quarter. If we knew what was driving this, I can't help but think it would be helpful to a lot of people, but right now "I stumbled on something in the dark and I can't see what it is" is not a lot to go on. I'm willing to help figure it out however I can if anyone is interested in pursuing this.
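    For reference, the build-up pattern described above looks something like this (names are illustrative):

    ```monkeyc
    // Building an array incrementally instead of from a literal:
    var xs = [];
    for (var i = 0; i < 10; i++) {
        xs.add(i * i);   // Array.add appends the value to the array
    }
    ```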

  • I don't have any hard coded arrays

    In that case, the next most likely are the reduced size literals.

    In the old bytecode, a literal Number or Float took 5 bytes to materialize, and a literal Long or Double took 9.

    In the new bytecode, materializing a zero of any type takes 1 byte; Numbers between -128 and 127 take 2 bytes, those between -32768 and 32767 take 3 bytes, and 24-bit ints take 4 bytes.

    "self" has gone from 2 bytes to 1 byte, and get-a-symbol-relative-to-self has gone from 8 bytes and 3 instructions to 5 bytes and one instruction.
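    Concretely, a member access like the following (mCount is an illustrative field name) goes from three instructions to one:

    ```monkeyc
    class Counter {
        var mCount = 0;
        function bump() {
            // Old bytecode: push self, push the :mCount symbol, getv
            // (3 instructions, 8 bytes). New bytecode: a single
            // self-relative load (1 instruction, 5 bytes).
            mCount = mCount + 1;
        }
    }
    ```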

    There has always been a 1-byte isnull instruction, but it never worked (it always returned false), so a null test was actually a 2-instruction, 2-byte sequence. Now it works, so it's 1 byte and 1 instruction, and there's a new isnotnull instruction.
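    In source terms, that's what plain null comparisons compile to (a sketch; the function names are illustrative):

    ```monkeyc
    function isMissing(x) {
        return x == null;    // now a single 1-byte isnull instruction
    }

    function isPresent(x) {
        return x != null;    // uses the new isnotnull instruction
    }
    ```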

    There are also a lot of compound instructions that can save a few instructions, and a few bytes...

  • Having been programming for more than four decades

    If you’ve been programming for even one year (to be generous), you should be able to defend your assertion that at some point, Monkey C copied arrays on assignment.

    Why won’t you run the sample code I provided to confirm that arrays were copied on assignment? Or provide code of your own (with the same kind of evidence)?

    You’d just prefer to say over and over again that Garmin said arrays used to be copied on assignment, but now they aren’t.

    it seems to me that when a person resorts to personal attacks it's a sign I might be on to something.

    No offense, but the alternative explanation is that the person is just annoyed bc they feel like you aren’t listening to reason. Just because someone makes a personal attack, it doesn’t mean you’re wrong, but it also doesn’t prove that you’re right.

    I think optimization, especially when it comes to potentially new opcodes, is very on topic for this thread.

    So your argument is that Garmin has changed the semantics of array assignment from “copy on assignment” to “reference on assignment” for the purposes of optimization.

    You seriously believe that Garmin would:

    - introduce a change like this which would have the potential to break every single existing CIQ application? Even the chance of breaking > 0 apps seems unacceptable to me.

    - introduce said change *conditionally*, depending on whether optimization is enabled or not? IOW, enabling or disabling optimization would have the potential to drastically change the behavior of the program to be compiled. Has anyone ever implemented optimization this way?

    Does any of that sound reasonable to you?

  • So maybe what changed isn't array related, but still it seems big to cut code size by a quarter.

    Again, even if arrays were copied on assignment (spoiler: they weren’t), I fail to see how the code for copying arrays could possibly take up so much memory.

    Again, all Garmin would have to do is define a helper function to copy an array (like memcpy()), and call that function every time an array is assigned. I don’t think the overhead of function calls is that high, even in Monkey C, unless you are literally assigning thousands of arrays in your code.
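    For illustration, a hypothetical helper along those lines (nothing like this exists in the Garmin API; the name is made up):

    ```monkeyc
    // Hypothetical: if arrays really were copied on assignment, the
    // compiler could call one shared helper like this at each assignment
    // site, paying a function call rather than inline copy code.
    function copyArray(src) {
        var dst = new [src.size()];
        for (var i = 0; i < src.size(); i++) {
            dst[i] = src[i];
        }
        return dst;
    }
    ```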

  • In the new bytecode

    This is great stuff, thanks! Too bad, as flocsy pointed out, none of this applies to old devices, which would need it the most. Oh well, Garmin doesn’t care about the FR235 even if lots of runners still wear one. To be fair, none of those runners care about CIQ in my experience.

  • Yes, your trivial example does what you say it does. So what, if that's what it takes to make you feel better. I'm not sure why you feel the need to make this personal. When I posted about something I discovered by chance, by making a small change to the way variables were passed and referenced and the way the compiler handled them, the go-to reply was that it must be wrong. Whatever. I'm not going to revert the changes I've made since then just to repost them. But the fact is, something is different, and YES, Garmin did introduce breaking syntax changes between SDK 6 and 7, and every language I've ever worked on has at some point introduced breaking changes as it moved forward.

  • In that case, the next most likely are the reduced size literals.

    Could be; I have a pretty decent number of those, but most of them are 7-digit floats and 16-digit doubles for calculating sun, moon, and tide times. Integers are mostly array indices or limits.

    The optimizer now only manages to squeeze a little over 900 bytes out of the code.

    I haven't looked at the latest optimized source yet, but I'm guessing a lot of the savings come from optimizations (like the literals) that I haven't incorporated from earlier optimized source into mainline, to keep some level of readability/maintainability, so I'm OK with that.

    It's a shame some of these couldn't be backported to CIQ 4 or 3; I'm sure there are still a lot of those devices in the wild, but Garmin is in the business of selling new watches.