Hi, where can I find "Connect IQ Simulator 8.x"? I'm trying to make a watch face but I'm stuck with the simulator.exe found in the SDK zip. Thank you
Hi All, thanks for the feedback. I'm new and not a developer. I'm just trying to make my own watch face using VS Code with ChatGPT's support. From what I understand there should be an installation process, but I only have a file "simulator.exe". VS Code didn't launch the simulator automatically. I think it is more complicated than I thought. I made my own watch face on a Samsung watch 6 years ago and it was simpler than on Garmin watches. Even ChatGPT cannot write correct code (in Visual Studio Code). I give up on this project for now, too much time consumed with 0 results. Thanks again to all of you! Have a great day ;)
When it comes to CIQ, expect things like ChatGPT to give you complete nonsense many times. Use the actual CIQ documentation to get you started.
It works for me just as you describe. But from a Windows usability point of view I'd expect a background process to be installed handling the updates.
Yeah like I said the UX is not great. Like with a lot of CIQ things, it seems that the bare minimum was done. Sometimes this applies to Garmin stuff outside of CIQ, too.
expect things like ChatGPT to give you complete nonsense many times
This applies to anything you could possibly ask ChatGPT. It's just spicy autocomplete that tells you what you want to hear. If a real human being acted like that - confidently making up bs, they would be fired from their job immediately and/or socially ostracized for being an untrustworthy person.
LLMs can be useful - if you are already an expert on the subject of inquiry.
I give up on this project for now, too much time consumed with 0 results
You may wish to give this 3rd-party graphical watch face builder for Garmin a try:
https://garmin.watchfacebuilder.com/
No coding required. The site generates a built CIQ app for you, which you can then sideload to your watch.
(It's not mine, I have no affiliation with the site or creator.)
Google Gemini gives good results because it integrates recent search data
Using recent training data doesn't prevent LLMs from confidently hallucinating. Actually, all LLMs ever do is hallucinate, but it just so happens that pretty often, their hallucinations correspond with reality.
What LLMs almost never do is say:
- that they don't know something
- that something you are asking for is impossible (when it's something obscure or technical)
Again the problem is that if you're not an expert in the subject matter, you won't be equipped to figure out whether an LLM is giving you useful information or not.
That's why so many tech help posts start with "chatgpt/gemini said X but I tried it and it doesn't work!!!!" (As if LLMs are expected to tell the truth all the time)
Other times you can just tell OP used an LLM before asking people for help because what they initially tried is something that absolutely nobody would do otherwise.
I understand that AI isn't going anywhere, but a lot of times the use of LLMs for stuff like this actually ends up wasting more time and effort than not using LLMs.
Imo, people have to stop:
- anthropomorphizing llms like chatgpt, perplexity, gemini, etc as if they're expert human beings with a personality, knowledge, critical thinking abilities, and reasoning skills
- acting like llms actually *know* anything. They don't, they're just spicy autocomplete that always tells you exactly what you want to hear. That's why ppl get so mad / confused when an llm tells them complete nonsense, like X is possible when it isn't
The big difference between an LLM and a non-psychopath/sociopath human is that humans typically *know* when they don't know something, and they will *usually* admit that they don't know something.
If you ask a human about [impossible tech task X], typically they will say "I don't know" or "X is impossible".
If you ask ChatGPT, it will often happily give you detailed instructions on how to perform [impossible tech task X]. If you're lucky, it will say X is impossible. What ChatGPT will almost never do is say "I don't know". This is by design, for 2 reasons:
- the AI bros designed LLMs this way because they realized that people will place greater trust in confident chatbots
- chatbots don't actually know anything. They don't know what they know, and they don't know what they don't know. They just output the most likely sequence of tokens based on the prompt text. (Ofc there are those who will claim that humans do exactly the same thing. All I can say is if that's really the case, there's no hope for humanity.)
Philosophically speaking, to know X, you have to both believe X and X has to be true. But chatbots don't believe anything, since they're just spicy autocomplete, and not autonomous, thinking beings. From a more practical POV, chatbots will say any number of false things with just as much confidence as when they say stuff which happens to be true, just like a psychopathic human or a bs artist who doesn't actually know what they're talking about. But chatbots are still worse than humans in this regard, because humans still at least (usually) know what they don't know (even if we don't always admit it).
Gemini isn't only an LLM, and recent internet data aren't integrated via training but via grounding. There's a layer of analytical logic in Gemini that makes the difference.
Keep us posted on the apps you write with Gemini (without asking the forum :)
I didn't say I write apps with it, but it was of great help in getting my watch face done and optimizing it: apps.garmin.com/.../d21b73a7-69fb-4199-b60a-347062509530 . G**i explained how to use bitmap buffering, gave precise hints on how to optimize my code, and replaced deprecated function calls with updated versions...
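For anyone wondering what "bitmap buffering" means in a watch face: you draw the static parts (ticks, labels, background) into an off-screen buffer once, then just blit that buffer every update and only redraw the dynamic parts on top. A minimal Monkey C sketch of the idea — class and variable names are mine, and it assumes CIQ 4.x where `Graphics.createBufferedBitmap` replaces the deprecated `new Graphics.BufferedBitmap(...)` constructor:

```monkeyc
import Toybox.Graphics;
import Toybox.Lang;
import Toybox.WatchUi;

class BufferedFaceView extends WatchUi.WatchFace {
    // Off-screen buffer holding the static background, drawn once
    private var _buffer;

    function initialize() {
        WatchFace.initialize();
    }

    function onLayout(dc as Dc) as Void {
        // CIQ 4.x API; returns a BufferedBitmapReference
        _buffer = Graphics.createBufferedBitmap({
            :width => dc.getWidth(),
            :height => dc.getHeight()
        });
        var bufDc = _buffer.get().getDc();
        bufDc.setColor(Graphics.COLOR_WHITE, Graphics.COLOR_BLACK);
        bufDc.clear();
        // ...draw static elements (ticks, labels) into bufDc here...
    }

    function onUpdate(dc as Dc) as Void {
        // Blit the cached background, then draw only the dynamic parts
        dc.drawBitmap(0, 0, _buffer);
        // ...draw hands / time string on top here...
    }
}
```

This cuts per-update drawing work, which is one of the standard optimizations the CIQ docs suggest for watch faces.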