Big update to prettier-extension-monkeyc

I've posted about prettier-extension-monkeyc before, but I've added a bunch of new features that developers will probably like (well, I've been missing them, so maybe you have too).

The new features it implements for VSCode include:

  • Goto Definition. Ctrl/Cmd-click a symbol (or point at it and press F12) and it will take you to the definition.
  • Goto References. Right-click a symbol and select "Goto References" (or press Shift-F12). It will show you all the references.
  • Peek Definition/Peek References. Same as above, but in a popup window so you don't lose your place in the original document.
  • Rename Symbol. Right-click a local variable, function, class, or module name and select "Rename Symbol". It will rename all the references. It doesn't yet work for class members/methods.
  • Goto Symbol. Type Ctrl/Cmd-Shift-O and pick a symbol from the drop down (which has a hierarchical view of all symbols in the current file). This also appears as an outline across the top of the file.
  • Open Symbol By Name. Type Ctrl/Cmd-T, then start typing letters from a symbol name. A drop down will be populated with all matching symbols from anywhere in your project.

Older features include a prettier based formatter for monkeyc, and a monkeyc optimizer that will build/run/export an optimized version of your project.

[edit: My last couple of replies seem to have just disappeared, and the whole conversation seems to be in a jumbled order, so tldr: there's a new test-release at https://github.com/markw65/prettier-extension-monkeyc/releases/tag/v2.0.9 which seems to work for me on linux. I'll do more verification tomorrow, and push a proper update to the vscode store once I'm sure everything is working]

  • I'm not sure why, or how to diagnose it, but with this extension enabled editing is painfully slow.

    Can you tell me what OS and hardware you're running on? Also the VSCode version?

    This is something I've been concerned about, but for me (MacBook Pro, 2019, macOS Ventura) I've never had any issues at all. I also routinely test on a 2014 Windows laptop, and don't see any issues there...

    I suspect that the extension is synchronously blocking VSCode's extension host process/thread

    Yes, this is something the VSCode extension documentation warns about. Everything the extension does is async, but portions of the analysis do run without yielding (although for the projects I test with, never more than a tiny fraction of a second). I've considered just adding more yield points, but since I don't see any issues myself (and this is the first report of this issue), it hasn't seemed worth it.
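    For anyone curious what "adding more yield points" means in practice, here's a minimal sketch (the `analyzeFile` callback and the 16ms budget are illustrative, not the extension's actual code): the idea is to periodically hand control back to the event loop so keystrokes and other extension requests stay responsive.

```typescript
// Hypothetical sketch of yield points in a long-running analysis pass.
// analyzeFile is a stand-in for whatever synchronous per-file work is done.
async function analyzeAll(
  files: string[],
  analyzeFile: (f: string) => void
): Promise<void> {
  let lastYield = Date.now();
  for (const file of files) {
    analyzeFile(file); // synchronous, potentially slow work
    // If we've held the event loop longer than ~16ms, yield so the
    // extension host can service other events before continuing.
    if (Date.now() - lastYield > 16) {
      await new Promise<void>((resolve) => setTimeout(resolve, 0));
      lastYield = Date.now();
    }
  }
}
```

    The trade-off is pure overhead when the work is already fast, which is why it only becomes worth doing once someone hits a genuinely slow case.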

    I disabled real-time checking, but that hasn't helped

    I think the problem is that since adding that option, I've added various features (a Hover provider, a Completion provider and a Signature provider, amongst others) that don't respect it, and don't have their own options. There's also the document links feature, which also ignores it (but which has its own setting).

    So as a quick fix, I can make sure that the live analysis option really does turn everything off; and add sub-options for hover, completion, signatures and document links to turn off the various other features selectively.

    I suppose this could also be related to your specific project. A while back I noticed that while my own project (which is quite large) analyzed in a fraction of a second, a few of the open source projects I use to test the optimizer were taking up to a minute. I then found that if I opened one of those in VSCode, editing was, as you say, painfully slow. I found a number of pathological issues, fixed my algorithms, and got everything down to no more than a quarter of a second (with the maximum blocking time being much shorter).

    So it's possible that there is something about your project that makes my analysis really slow. To rule that out, could you try cloning https://github.com/matco/badminton (a random open source project with quite a few files), open that, and see if you have the same issue?

    In any case, I should probably also ensure that the analysis yields more often, regardless of how long it takes overall; and the real fix is going to be to move it off the main thread altogether. I just need to figure out whether it's enough to use a worker thread, or if I need to turn it into a language server in a separate process.
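    The worker-thread option would look roughly like this (a hypothetical sketch: `analyzeInWorker` and the token-counting body are placeholders for the real analysis, and a real extension would point `Worker` at a compiled .js file rather than an eval'd string):

```typescript
import { Worker } from "node:worker_threads";

// Sketch: run CPU-heavy analysis in a worker thread so the extension
// host's event loop never blocks, no matter how long the analysis takes.
function analyzeInWorker(source: string): Promise<number> {
  // Inlined worker body for brevity; the "analysis" here is just a
  // stand-in that counts identifier-like tokens in the input.
  const workerBody = `
    const { parentPort, workerData } = require("node:worker_threads");
    const count = (workerData.match(/[A-Za-z_]\\w*/g) || []).length;
    parentPort.postMessage(count);
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerBody, { eval: true, workerData: source });
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}
```

    The language-server route is heavier to set up but sidesteps the cost of serializing state back and forth between the extension and a worker.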

  • Can you tell me what OS and hardware you're running on? Also the VSCode version?

    About dialog:

    Version: 1.81.1 (system setup)
    Commit: 6c3e3dba23e8fadc360aed75ce363ba185c49794
    Date: 2023-08-09T22:22:42.175Z
    Electron: 22.3.18
    ElectronBuildId: 22689846
    Chromium: 108.0.5359.215
    Node.js: 16.17.1
    V8: 10.8.168.25-electron.0
    OS: Windows_NT x64 10.0.19045

    In a VMplayer 17 VM on Slackware Linux. The only other enabled extension is Vim, my editor of choice.

  • In a VMplayer 17 VM on Slackware Linux

    OK, to make sure I'm understanding correctly: you're running Windows in a VM on Linux, and running VSCode in that VM? And by VMplayer, you mean VMware Workstation Player?

    If that's correct, can you try installing VSCode, the Garmin tools, and my extension directly in Linux and see if the problem is still there? I'm guessing this is some artifact of the VM - the last time I tried something like that (to run Windows on my Mac) the result was pretty much unusable, no matter what I tried to run. But if VSCode runs OK without my extension, I'd still like to get to the bottom of it...

    Also, how many cores is the VM configured to use (and how many does your actual hardware have)?

    Finally, when you do the build, you should get output like:

    > Optimization step completed successfully in 322ms

    Can you tell me how long it's taking for you?

  • On that subject, I have noticed that the code is reduced by around 4k, but the data is increased by about 2k

    This is almost certainly an artifact of debug builds. Debug builds contain tables of debug info (I'm not sure why, because the exact same info is also available in the separate debug.xml file) that include full filenames, repeated over and over (e.g. there's a table mapping bytecode ranges to line numbers, and instead of saying "here's the filename, and here are all the associated ranges", it includes the full file name for each range). It's also not clear why the debug info had to be included in the data section, rather than a separate section of the file...

    Since the generated files are (by default) generated at bin/optimized/groupNNN-debug/source/<original-path-from-project-root>, the new paths are going to be much longer than the originals.
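    A toy calculation (all sizes made up, just to illustrate the shape of the problem) shows why repeating the full path per entry matters, and why the longer optimized-output paths make it worse:

```typescript
// Toy illustration (made-up sizes) of why repeating the filename for
// every bytecode-range entry inflates the debug tables.
const originalPath = "source/MyView.mc";
const optimizedPath = "bin/optimized/group001-debug/source/MyView.mc";
const entries = 500; // hypothetical line-number entries for one file
const bytesPerRange = 8; // hypothetical fixed cost per range entry

// Filename repeated in every entry (what the debug tables do):
const repeated = (p: string) => entries * (p.length + bytesPerRange);
// Filename stored once, with the ranges grouped under it:
const grouped = (p: string) => p.length + entries * bytesPerRange;

console.log(repeated(originalPath), repeated(optimizedPath));
console.log(grouped(originalPath), grouped(optimizedPath));
```

    With the repeated-per-entry layout, every extra character of path length costs one byte per entry; with grouping it would cost one byte total.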

    To really compare, you need to compare release builds. For the unoptimized build, you can do that by just adding "-r" to monkeyC.compilerOptions. That will work for the optimized build too, unless you use the (:release) or (:debug) exclusions in your code - in which case this issue describes how to do optimized release builds. I need to fix that bug...
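    For reference, the settings.json change would look something like this (if you already have other compiler options set, append -r to the existing value rather than replacing it):

```json
{
  "monkeyC.compilerOptions": "-r"
}
```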

    No, but not for lack of trying. Slackware is missing a few of the libraries for the SDK manager. I'll have to track them down or build them myself at some point. VSCode seems to be working.

    Also, how many cores is the vm configured to use (and how many does your actual hardware have)?

    The VM has two cores, 4GB RAM, the host has four cores, 16GB RAM.

    can you tell me how long its taking for you?

    Optimization step completed successfully in 21480ms

    To really compare, you need to compare release builds.

    Adding the -r option cut the data by 2k. It's about the same as the unoptimized build now.

  • Slackware is missing a few of the libraries for the SDK manager

    This thread suggests you can use a Docker container, and even provides a link to a pre-configured one. I've not tried it, but maybe that would be better than a Windows VM?

    Adding the -r option cut the data by 2k. It's about the same as the unoptimized build now

    Yes, my optimizer generally doesn't save much data these days. Before Garmin implemented their optimizer, it was a big win - but on the data side of things, their optimizer seems to do most of the things that mine does.

    Optimization step completed successfully in 21480ms

    OK - that's definitely the issue then. Analysis should be faster than optimization, since it skips the actual transformation step; but we're still probably looking at 10s for the analysis. It's hard to guess what the longest blocking step in there would be, but it could easily be over a second, which would certainly account for the sluggishness.

    That still leaves us with the question of whether it's something specific to your project, or just an artifact of running Windows inside a VM. I'm working on setting up a Windows VM right now to see if I get similar results.

    The good news is that with VSCode in Linux the optimizer is much faster:

    Optimization step completed successfully in 6520ms

    The bad news is editing is still unusable with this extension enabled. Then it gets worse. I'm getting this in a dialog when the simulator starts in Linux:

    ASSERT INFO:
    ../src/common/menucmn.cpp(308): assert "wxIsStockID(GetId())" failed in SetItemLabel(): A non-stock menu item with an empty label?

    BACKTRACE:
    [1] g_main_context_dispatch
    [2] g_main_loop_run
    [3] gtk_main
    [4] __libc_start_main

    I can't get it to work for any device, and none of the other solutions for this message have had any effect. What's the command line usage for the simulator? Maybe I can use strace to see what it's looking for.

  • Optimization step completed successfully in 6520ms

    That's still really slow - so I'd like to figure out whether it's something to do with your project, or you just have a very slow machine. How long does the Garmin portion of the build take? The extension doesn't report that, but it should be easy to time roughly. For all the projects I currently build, Garmin's compiler takes much longer to run than my optimizer - but there could be something strange going on with yours...

    But also, could you try the project I suggested earlier - https://github.com/matco/badminton - and see how long that one takes? On my mac it just took 2387ms.

    What's the command line usage for the simulator

    You start the simulator itself via <sdk-path>/bin/connectiq, which is a shell script that on Linux runs <sdk-path>/bin/simulator. I assume that's what's crashing. Once you've started the simulator, you use "<sdk-path>/bin/monkeydo path-to-prg deviceId" to actually simulate your program.

  • v2.0.70 is out.

    This version

    • Improves type analysis for array accesses and assignments
    • Speeds up parsing of .xml files
    • Adds options to turn off incremental analysis, in the event that it's causing things to be too sluggish.
  • you just have a very slow machine

    It's possible, I built it in January 2014.

    But also, could you try the project I suggested earlier - https://github.com/matco/badminton - and see how long that one takes? On my mac it just took 2387ms.

    Optimization step completed successfully in 6278ms

    > Sizes for optimized-badminton-fenix7x: code: 23321 data: 6025

    If your Mac is barely three times faster, I won't ask how old it is. The projects sound like they're comparable, both just under 30k code+data:

    > Sizes for optimized-Annulus-fenix7x: code: 26778 data: 2922

    Following your instructions, I'm now starting the simulation the right way. The simulator starts and waits with a grey window; when I start monkeydo, that's when the trouble begins. I haven't found anything in the strace log yet that would point me in a useful direction, though I did get this last time:

    /home/andy/.Garmin/ConnectIQ/Sdks/connectiq-sdk-lin-6.2.2-2023-08-02-a0afa25e0/bin/connectiq: line 8: 16772 Segmentation fault      "$MB_HOME"/simulator