Automating the Connect IQ Simulator for Testing

Hello,

I am trying to set up an automation framework around a Garmin watch app that we have developed. I am NOT concerned with unit tests; I am further down the line from there, focused on integration tests and UI tests.

The question:

Is there a way to control the Connect IQ simulator?

Can you attach a debugger to a currently running app?

Can you send actions (press a button, change GPS location, etc.) to the simulator once it's started?

Can you read the state of the app while it's running?

Basically, I want to do this:

Compile a PRG file.

Start the simulator with that PRG file.

Run automated tests against the simulator running that PRG file.
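For the compile/launch/sideload part of that loop, the Connect IQ SDK's command-line tools (monkeyc, connectiq, monkeydo) can be driven from a script. A minimal Python sketch, under the assumption of a standard SDK install; the SDK path, jungle file, key file, and device id are placeholders you would swap for your own:

```python
import subprocess
from pathlib import Path

# Assumption: adjust to wherever your Connect IQ SDK lives.
SDK_BIN = Path("~/connectiq-sdk/bin").expanduser()

def build_cmd(jungle: str, key: str, device: str, out: str = "app.prg") -> list[str]:
    """Assemble the monkeyc compile command for one target device."""
    return [
        str(SDK_BIN / "monkeyc"),
        "-f", jungle,   # jungle file describing the project
        "-y", key,      # developer key
        "-d", device,   # target device id, e.g. "fenix7"
        "-o", out,      # output PRG file
    ]

def run_on_sim(prg: str, device: str) -> None:
    """Start the simulator, then sideload the PRG with monkeydo."""
    subprocess.Popen([str(SDK_BIN / "connectiq")])  # launches the simulator GUI
    subprocess.run([str(SDK_BIN / "monkeydo"), prg, device], check=True)

# Usage (invokes the real tools, so only on a machine with the SDK installed):
#   subprocess.run(build_cmd("monkey.jungle", "developer_key.der", "fenix7"), check=True)
#   run_on_sim("app.prg", "fenix7")
```

This covers steps one and two; the missing piece, as the replies below discuss, is driving and inspecting the app once it is running.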

The problem I have is that no window-inspection tool (Windows Inspect, Appium, various GitHub inspector tools) can detect anything inside the simulator's panel window to "click" on the buttons or read the text on the screen. It is just a "blank" panel object.

Anyone done this?

Any resources you can point me to?

Thanks for reading and I appreciate any help you can give me.

All Replies

  • You want to have one or two real devices to test with; two if you support both MIP and AMOLED displays. Some things work differently in the sim than on a watch.

    Then as far as resolutions go, it depends on how you do some things. For example, if you use custom fonts, you can use the same font files for all devices with the same resolution, see how it looks on one device of a given resolution, and be fairly sure it will look the same on other devices with that resolution.

    Are you doing a data field or a device app? Even with native fonts, it really depends on how you code things...

  • My primary concern is checking that the screens look correct with the different display resolutions and fonts across all of the watch models.

    Unfortunately it's possible for native fonts to look different in the simulator vs. the real device :/

    e.g.

    [https://forums.garmin.com/developer/connect-iq/i/bug-reports/font-positioning-different-between-simulator-and-real-device-for-edge-540-840]

    [https://forums.garmin.com/developer/connect-iq/f/discussion/256346/handling-differences-between-system-fonts-in-simulators-and-devices/]

  • Unfortunately scripting is not possible.

    What you can try to do is make a custom build (maybe with some jungle magic or a constant) and have it display one screen with dummy values. This way you'll see the screen when the app starts. But you'll need a new build for capturing every screen.

    And the capturing isn't scriptable either... So it's a lot of manual work.

    Considering what people wrote before me: you have no choice but to go with whatever the simulator shows (even if we know it's not pixel perfect) and check one device from each "category". The meaning of category depends on your app, but it can be: display size, shape, technology, memory size, existence of a feature, etc. Sometimes you also need to check multiple "dimensions", i.e. AMOLED small screen, AMOLED big screen, MIP small screen, MIP big screen, etc.
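The "dimensions" idea above is just a cross product, and enumerating it up front makes it harder to forget a combination. A tiny sketch; the dimension names here are examples, not a definitive list:

```python
from itertools import product

# Example dimensions; replace with whatever matters for your app
# (memory size, feature presence, shape, ...).
display_tech = ["MIP", "AMOLED"]
screen_size = ["small", "big"]

def test_matrix() -> list[tuple[str, str]]:
    """Every (technology, size) combination you want a representative device for."""
    return list(product(display_tech, screen_size))

# 2 technologies x 2 sizes = 4 categories,
# each covered by one real or simulated device.
```

Adding a third dimension (say, memory size) just multiplies the matrix, which is exactly why picking one representative device per cell matters.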

  • Thanks. That manual testing across a variety of watch resolutions is what I've been doing, but it's highly error prone. Hopefully Garmin can consider a scriptable Connect IQ simulator in the future.

  • I absolutely love the idea of having a scriptable sim to run automated visual tests on.

    Unfortunately the sim is way too different from real devices to guarantee an update won't break on a real device when all the tests pass. That's my main issue with this approach. But it would sure be better than nothing.

    I'm running unit tests and manually loading debug versions onto all my 9 watches (I keep buying watches when issues arise that I can't reproduce on another device...). I use a function that loops through all the settings of the face, loading a new config every second, and then I stare at all the watches for a few minutes until the loop is done; at that point I'm fairly confident nothing broke. It's unfortunately very time consuming, so I tend to skip this process for smaller updates.

    Being able to run visual regression on real devices would be ideal, or an emulator instead of a simulator. Or simply a way to dump screenshots, so something can be coded to compare all the screenshots that were made during a debug run.
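On the "dump screenshots and compare" idea: once you have screenshot files (captured however you manage it), the comparison step itself is easy to script. A sketch using Pillow (a third-party imaging library; the `tolerance` knob is my own addition, not anything from the SDK):

```python
from PIL import Image, ImageChops

def screens_differ(path_a: str, path_b: str, tolerance: int = 0) -> bool:
    """Return True if two screenshots differ beyond `tolerance` (0-255 per channel)."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    if a.size != b.size:
        return True  # different resolutions can never match
    diff = ImageChops.difference(a, b)
    if diff.getbbox() is None:
        return False  # pixel-identical images
    # Otherwise check the largest per-channel delta against the tolerance.
    return max(ch for px in diff.getdata() for ch in px) > tolerance
```

Run against a directory of "golden" screenshots from the last release, this gives a crude visual regression check, with all the caveats about sim-vs-device rendering that this thread already covers.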

  • Exactly. I hope you make a good living out of your CIQ projects that can afford you to buy 9+ devices...

    What I do is, if a user complains about some alignment issue, I ask them to send me a screenshot, then I try to fix it for that specific device or maybe the relevant family of devices. To tell you the truth, this has not happened yet :)

    However, when I encounter similar problems in other developers' apps, I contact them and tell them that I have some screenshots from whatever device I used the app on, and that if they are interested they can reply to my email and I'll send them the screenshots. This has worked a couple of times so far. Usually active developers are happy to get this kind of feedback.

  • I hope you make a good living out of your CIQ projects that can afford you to buy 9+ devices...

    Yeah, I think we all know that anyone who tried to make a living off of CIQ would starve very quickly, haha.

    In fact, CIQ development seems to be a net negative in time and money. At least most jobs pay you money instead of the other way around.

  • When asking for money for a face, I think I should do what I can to make sure it works properly on the devices I support. Asking a person to help me figure out why my app is broken when they paid for it is not the end of the world, and it's fully understandable given the tools we have, but I'd rather avoid that situation.

    I have a job that pays the bills and partly supports this hobby. I try to buy watches used or on sale with the money I made from the IQ store. But the watches I have sure are very valuable when it comes to fixing issues; the Fenix 7, FR965 (beta software has been fun), FR165, and Epix Pro have been the most valuable. They all have their quirks and handle things slightly differently (AOD, transparent images, loading resources, flashlight, etc.).

  • The numpad on your keyboard can mimic some inputs (like swiping), if I remember correctly. If you can automate those you might be able to do something.

  • Maybe not directly related to testing, but I built https://github.com/bombsimon/garmin-screenshot to automate the process of compiling and screenshotting every device. It made it much quicker to get a quick visual inspection of simple things like colors and icons across all devices.

    As mentioned by others, the simulator supports shortcuts such as M for menu and arrows for navigation, so it would be fairly easy to patch the code to send keystrokes.

    I'm fully aware this is a fragile and slow hack and not a real test framework, but I'm posting it in case it solves someone else's problem similar to mine.
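Tying together the keyboard-shortcut replies above: if the simulator window has focus, a script can drive those shortcuts. A sketch using pyautogui (third-party); the "m" and arrow bindings are the ones mentioned in this thread, while the select/back keys are my assumption and should be verified against your simulator version:

```python
import time

# Key bindings the simulator responds to. "m" for menu and the arrows come
# from this thread; "enter"/"escape" are assumptions to verify in your sim.
SIM_KEYS = {
    "menu": "m",
    "up": "up",
    "down": "down",
    "select": "enter",
    "back": "escape",
}

def to_keys(actions: list[str]) -> list[str]:
    """Translate abstract UI actions into simulator keystrokes."""
    return [SIM_KEYS[a] for a in actions]

def replay(actions: list[str], delay: float = 0.3) -> None:
    """Send the keystrokes to whatever window has focus (the simulator)."""
    import pyautogui  # third-party; pip install pyautogui
    for key in to_keys(actions):
        pyautogui.press(key)
        time.sleep(delay)  # give the app time to redraw between inputs

# Usage, with the simulator focused:
#   replay(["menu", "down", "down", "select"])
```

It is blind automation (no way to read the app's state back, which matches the "blank panel" problem in the original question), but combined with a screenshot diff it gets you partway to scripted UI tests.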