My legit experience with run configs....

So, since everyone else seems to be doing it, I tried using run configs again, this time with a project of mine that is "conditionally compiled" (using annotations) for CIQ1 vs. CIQ2. Sure, it's only been a few months since I last tried them and hated them, but maybe things would be different this time.
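
For anyone wondering what that conditional compilation looks like, it's roughly the sketch below; the annotation names, numbers and device IDs are illustrative, not my project's actual ones.

    // In the source, tag the per-tier implementations; only one survives
    // into each device's binary.
    (:ciq1) function maxLaps() { return 10; }   // lean CIQ 1 build
    (:ciq2) function maxLaps() { return 50; }   // roomier CIQ 2 devices

    // In monkey.jungle, exclude the tier that doesn't apply:
    //   fr235.excludeAnnotations = ciq2
    //   fr935.excludeAnnotations = ciq1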

Creating the first run config (935) was no problem. When I wanted to test 235 (with fewer features), I created another run config.

Then I wanted to test Fenix 3 because it takes slightly more code space than 235 (for reasons I don't want to go into). Yet another run config.

Of course, Approach S60 has less free memory at run-time -- for whatever reason, the VM seems to use more memory compared to other devices in the same class. So if I want to test that, I have to either change an existing run config or create a new one.

Maybe everyone else enjoys editing run configs, but I don't. I feel like there's a lot of unnecessary clicking.

I guess I'm unique in that I like to squeeze every last byte out of every single Garmin platform, so I worry about crashes a lot.

But even if I didn't do that, for some apps I would have to test a significant subset of the following visual / input / CIQ / feature configurations:
- VA3/VA3M
- Fenix 5X / etc.
- 645M
- Fenix 5 / 5S / 935
- 645
- Approach S60 (consumes more memory and has crashed in situations where no other platform did)
- Fenix 3 HR
- 235
- 735
- 630
- VA
- 920XT
- VAHR

Each of those devices has something that's different, whether it's touchscreen, available memory, fonts, screen size, screen shape, multisport support or CIQ support. VA3 has a completely different input scheme than any other device, too. It's the only one that uses Right Swipe for "Back" and the only one that won't accept START/ENTER/KEY in a widget. So if my app/widget has special handling for VA3, I wonder how I can test that without running it on a simulated VA3?
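
To make that concrete, the special handling is roughly the sketch below; toggleTimer() is a stand-in for the real action, not actual code from my app:

    using Toybox.WatchUi;

    // Stand-in for the real start/stop logic.
    function toggleTimer() {
    }

    class StopwatchDelegate extends WatchUi.BehaviorDelegate {
        function initialize() {
            BehaviorDelegate.initialize();
        }

        // START/ENTER on button devices.
        function onSelect() {
            toggleTimer();
            return true;
        }

        // Touch path, since a VA3 widget never sees those keys.
        function onTap(clickEvent) {
            toggleTimer();
            return true;
        }
    }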

So maybe I'm just "doing it wrong", but I would like to know how I should be running my app, in a way that makes it easy to switch between devices and test everything I want.

Or maybe I shouldn't care what the app looks like or whether it'll crash. Maybe if I have 6 different device feature sets (CIQ 1 / 2 / 3 X Multisport), I shouldn't test all 6 tiers.

How do I do system / integration testing, especially when I don't own all those devices? Just not test every possibility?

I've seen the official Garmin stopwatch app, which does the "tiny milliseconds/hours" thing that I really liked and copied. Except the tiny milliseconds are not vertically aligned with the huge seconds on FR230, so it looks kinda funny. I'm pretty sure this is because getFontHeight() on FR230 returns a much larger value than the big fonts actually take up on the screen. So the tiny milliseconds are aligned with the way the fonts are reported to be, not the way they actually are.

This is the kind of thing you can never discover without testing. Yes, I'm sure it was coded the "right way" which is to respect what getFontHeight returns. But it looks funny.
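
For reference, the "right way" is roughly the sketch below -- bottom-align the reported font boxes (the fonts and coordinates are made up):

    using Toybox.Graphics;

    // Bottom-align tiny ms against huge seconds using reported heights.
    // On FR230 the big font's reported box has so much padding that the
    // ms ends up visually floating.
    function drawTime(dc, secsText, msText) {
        var bigFont  = Graphics.FONT_NUMBER_THAI_HOT;
        var tinyFont = Graphics.FONT_XTINY;
        var y = 30;
        var dy = dc.getFontHeight(bigFont) - dc.getFontHeight(tinyFont);

        dc.drawText(20, y, bigFont, secsText, Graphics.TEXT_JUSTIFY_LEFT);
        dc.drawText(160, y + dy, tinyFont, msText, Graphics.TEXT_JUSTIFY_LEFT);
    }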

Same goes for VA3 -- getFontHeight returns too-large values for big fonts, making the app think they are taller than they really are.

Sure, no big deal -- except I actually had user complaints (on a different "full-screen run" app) that the fonts were too small on 230, because of code that uses getFontHeight() to determine whether text will fit and shrinks it accordingly.

So on my full-screen run app and my stopwatch, I put various hacks in to hardcode the font height for VA3, 230 and other platforms. Is that nice coding practice? No. But it allowed me to display bigger numbers like my (tiny) userbase wants. And it allowed me to align the tiny milliseconds with the large seconds on my stopwatch app, so I could make it look as nice as possible.
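
If anyone's curious, the hacks look roughly like this; the device test and the hardcoded number are placeholders, not my real tables:

    using Toybox.Graphics;
    using Toybox.System;

    // Reported height, overridden on platforms known to pad it.
    function correctedFontHeight(dc, font) {
        var h = dc.getFontHeight(font);
        var s = System.getDeviceSettings();
        // e.g. semi-round (230-class) big numbers: hand-measured height.
        if (s.screenShape == System.SCREEN_SHAPE_SEMI_ROUND
                && font == Graphics.FONT_NUMBER_THAI_HOT) {
            h = 70; // placeholder value
        }
        return h;
    }

    // Pick the biggest font whose corrected height still fits.
    function pickFont(dc, maxHeight) {
        var fonts = [Graphics.FONT_NUMBER_THAI_HOT,
                     Graphics.FONT_NUMBER_HOT,
                     Graphics.FONT_NUMBER_MEDIUM];
        for (var i = 0; i < fonts.size(); i++) {
            if (correctedFontHeight(dc, fonts[i]) <= maxHeight) {
                return fonts[i];
            }
        }
        return Graphics.FONT_NUMBER_MILD;
    }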

So I'd like to hear anyone's advice on how this kind of thing can actually be tested without having 10 different run configs or constantly editing a 2nd or 3rd config.

It would be amazing if we could have the option for 2 or 3 frequently used fixed run configs and a spare "dynamic" run config which would just prompt us for a device. Basically the best of both worlds.

But I guess I am alone on this.

I won't bring it up again, but thanks for reading anyway.
  • As a counterpoint, I've seen apps in the store from big, well-known companies where users leave reviews like:

    Does not work properly on my VAHR.

    I wonder how that happened? Could it be that they just didn't test that platform? Maybe they just wrote beautiful generic code with no hardcoded device-specific behaviour or hacks, and only tested on their one or two base devices.....

    I've also seen community code that doesn't work properly on old touchscreen devices, especially Vivoactive, because they assumed they could just write generic code that runs everywhere.

    Well, maybe I am alone, but I don't think you can write truly generic code, especially when you deal with input and output. Maybe for a simple data field, but not much else.

    I thought I could write generic code too, but then I wrote a full-screen data field (*) and a stopwatch widget/app and learned the hard way that you can't, unless you want a subpar experience on certain devices.

    (*) With device-specific layouts (**) that are hardcoded and painstakingly aligned by hand, not computed, to save precious code/data space. How do I test what the layouts look like without running the app on each device family? If I want to support all watches, there are at least 5 families: 920XT/VA, VAHR, 230/etc., Fenix 5S, and 645/935/Fenix 5/etc. (a sketch of how I pick the family follows the footnotes). Nobody here is creating 5 run configs, right? So how would you make changes that affect each layout and test each family, without editing your second run config 3 times?

    (**) Not CIQ layouts
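
    The promised sketch: picking the layout family once, by screen geometry. The shapes/sizes are from memory of the simulator, so treat them as illustrative.

        using Toybox.System;

        // Map screen geometry to one of the five hand-tuned families.
        function layoutFamily() {
            var s = System.getDeviceSettings();
            if (s.screenShape == System.SCREEN_SHAPE_SEMI_ROUND) {
                return :family230;                  // 230/235/630/735...
            } else if (s.screenShape == System.SCREEN_SHAPE_RECTANGLE) {
                return (s.screenWidth > s.screenHeight)
                    ? :family920xtVa                // landscape: 920XT/VA
                    : :familyVahr;                  // portrait: VAHR
            } else if (s.screenWidth <= 218) {
                return :familyFenix5s;              // small round
            }
            return :family645;                      // 240x240 round: 645/935/F5
        }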

    Oh, and if your full-screen data field app supports Edge (one of mine does), then that's 4 more layouts, if you want to support both portrait and landscape. That's 9 different families of devices to test, just to see if you are drawing the correct layouts.

    I'm really curious how one could get away with supporting 9 different families of screen shapes, drawing some sort of full-screen data field layout for each one, and not testing each of the 9 families (at least once).

    Of course it would be great if the API could just draw full-screen data fields for us (i.e. mimic or augment native layout), but it can't, so we have to recreate that stuff by hand.

    That's all I'll have to say about that.
    Creating the first run config (935) was no problem. When I wanted to test 235 (with fewer features), I created another run config.

    This is not what I would do, and it is not how many developers use run configs.

    Maybe everyone else enjoys editing run configs, but I don't. I feel like there's a lot of unnecessary clicking.

    As discussed previously, I only create one run config. When I want to work on an app, I set the config to run that app on a suitable device. I repeat the modify/build/run loop until I have it working satisfactorily on that device, then I modify the run config to test another device. I repeat the modify/build/run loop until it works properly there. If I've made any changes that might affect previously verified devices, I loop back and start verifying all of the devices again.

    I guess I'm unique in that I like to squeeze every last byte out of every single Garmin platform, so I worry about crashes a lot.

    You are not unique in this respect.

    So if my app/widget has special handling for VA3, I wonder how I can test that without running it on a simulated VA3?

    There is no shortcut. If you want to know that something works on the simulated VA3, you have to test it on the simulated VA3.

    Maybe if I have 6 different device feature sets (CIQ 1 / 2 / 3 X Multisport), I shouldn't test all 6 tiers.

    It wouldn't be wise, but you could do that. It would probably be better to just not support configurations that you don't care to test.

    How do I do system / integration testing, especially when I don't own all those devices? Just not test every possibility?

    We try to make the simulator behave as closely as possible to the actual devices, but there are definitely some discrepancies. Unfortunately, the *only* way to know exactly how an application looks/behaves on a device is to test it on a physical device. If you don't have a particular device, someone else will. It doesn't hurt to seek out others to help with app verification.

    It would be amazing if we could have the option for 2 or 3 frequently used fixed run configs and a spare "dynamic" run config which would just prompt us for a device.

    I've filed an enhancement request.

    I wonder how that happened? Could it be that they just didn't test that platform?

    This can happen for any number of reasons. The app could have been fully tested and worked perfectly with a given firmware version, only to be silently broken by a more recent firmware version. Or, the application works perfectly in testing but breaks because the user has configured their device to use a different time zone than the one they are physically in...

    So how would you make changes that affect each layout and test each family, without editing your second run config 3 times?

    I would modify my run config every time I wanted to change the application or device to test. Having multiple run configs just doesn't scale. Having 25 device configurations for 5 apps that work with 5 devices/families is unmanageable.
  • I'm the one with multiple run configs -- one for each of the 50 apps I have in the store. And one thing I find really handy is that when a "Contact developer" message comes in, it's really easy to switch to that run config, set the target, check things out, and fix a bug if one is found. I can then easily switch back to the app/target I was working on before, right where I left off. Same if I'm working on a few things at the same time: I may want to test on a device for the 1st one, move to changing code on the 2nd, and keep the state I was using with both apps.

    As far as targets, I almost never check them all after a change. Sometimes I can verify the fix in a single target; sometimes, if it's font related, I just check one of the similar targets (so VA3-like, F5-like, semi-round, etc.); sometimes it's feature related -- no sense testing a bunch of targets without maps when the change was to the mapping API code :) For a new device, when I add that as a target, it could be as simple as just running the app with the new target to make sure it works.

    I think it comes down to: do what works best for you.
  • I always use run configs, but generally do as Travis does and change the target when I need to. In a couple of cases I have multiple devices I use regularly, and I'll create multiple run configs for that app, but not for most. And I'll do as Jim does: I won't test every device, depending on my changes. Obviously, when developing the app, you need to make sure it is working on each device. But after that, I only spot check, mostly when I'm changing something with the display.
  • Travis.ConnectIQ thanks for the enhancement request! :) I agree that multiple run configs don’t scale, which is why I don’t use them for some projects. I obviously want to test all my different “families” and I don’t like constantly editing configs, so I just don’t use run configs for projects with more than a couple of families.

    jim_m_58 ekutter thanks for reading and for the responses :).