How to encourage users to give meaningful feedback

First, I have to admit: I do give negative feedback to other apps, both in Android's Play Store and in Garmin's. In the Play Store I do edit my reviews as the developers improve their apps, and here's the big confession:

In the Garmin store I stopped giving even 4-star reviews, because now (as a developer) I see them as a punishment, similar to 1-star reviews. This is because any rating lower than the current average decreases the average, and it stays that way forever (once a new version is released, the review can no longer be edited or removed). If I don't receive a response to a "Contact Developer" message after a week, I will leave a negative review with few stars.

So today I added the following paragraph at the beginning of one of my app descriptions:

"If you give less than 5 stars, it's your choice, but know that I don't get any useful feedback on how to improve the app. So if you really care, then please use "Contact Developer" and leave me your email address so I can get back to you, or at least write a short sentence in the review that I can relate to."

Unfortunately I have a feeling that this paragraph (being at the beginning of the description) will have a negative effect on the app's downloads, but I won't be able to confirm it, because we can no longer see how many times the app is downloaded each day.

Unfortunately I also have a feeling that not many users will read even the first sentence of the description before they leave a review.

What is your experience?

All Replies

  • Wrong. If exactly 50% of the users feel it's a good app and the other half feel it's crap (and they all vote 1 or 5), the "limit" will be 3. Similarly, if the split of voters (who in this theoretical case vote only 1 or 5) were different, like 1/3 vs. 2/3, you'd get a different average, but it would not be 1 or 5. And even counting all those who vote 2, 3, or 4, you'll always end up with an average that is exactly what you have now.
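    The arithmetic in this reply is easy to check with a short Python snippet (a minimal sketch; the vote counts are invented for illustration):

```python
# Polarized votes: half 1-star, half 5-star -> the average lands at 3,
# not at either extreme.
votes = [1] * 50 + [5] * 50
print(sum(votes) / len(votes))  # 3.0

# A 1/3 vs. 2/3 split also lands strictly between the extremes.
votes = [1] * 33 + [5] * 67
print(sum(votes) / len(votes))  # 3.68
```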

  • Don't they have a similar 1 to 5 star rating for many things where you live?  Amazon products, books, movies, tv shows, restaurants, etc?

    And in all those cases a 4 star review isn't considered a bad review.

  • Where I was brought up we have a 1-5 grade system. It's horrible. Depending on the family/child/school 4 could be not good enough (5 is best)

  • So getting a "B" in school is really awful, even when it's considered a good grade?

    And like school, apps are compared against other apps.  Let's say you have a device app to record an activity, but it only shows one screen's worth of data, while similar apps in the store show multiple screens and much more data.  As a user, which might you rank higher, even if no bugs were involved?

    For me, I'd go with more data

  • I understand your mentality here regarding the 4-star rating, but as has been alluded to, most users don't read the description. The other main thing, from a Garmin Store perspective, is the rubbishness (yeah, I know it's not a word) of the trending or ordering of 'apps' in a category and how that is determined. So even lots of 1-star ratings will elevate your app to a higher ranking. If by some miracle they also leave a reason why it's 1 star, then I think that serves you well, because the small number who care about reviews will read them and either see that the user has not read the description / is an idiot / or was correct and you have responded well saying 'thank you, we have fixed.....'.

    I get users hitting Contact Developer asking questions that are clearly answered in the description, or asking extremely simple things like how to download the watch face / app, which I never understand. But I respond nicely and politely asking how I can help, and when sarcasm will not let me respond nicely, I don't respond at all.

    I think the (my) conclusion is you are damned if you do and damned if you don't.

    As a side note, I have one very niche app that has only ever received 1-star ratings (because they don't read the description), and yet it is still one of my most downloaded. Go figure.

  • I have an app that has a specific requirement to use it, as defined by the data provider.  This was stated in the very first line of the description.

    I got 1-star reviews from people who didn't meet the requirement.  I'd reply by pointing them to the first line of the description.

    Then it kind of got interesting! 

    Others started giving me 5-star reviews where they pointed out the people who gave me a 1-star review after failing to read the first line!  Kind of to level the score.

  • Wrong. If exactly 50% of the users feel it's a good app and the other half feel it's crap (and they all vote 1 or 5), the "limit" will be 3. Similarly, if the split of voters (who in this theoretical case vote only 1 or 5) were different, like 1/3 vs. 2/3, you'd get a different average, but it would not be 1 or 5. And even counting all those who vote 2, 3, or 4, you'll always end up with an average that is exactly what you have now.

    I only meant that if an app was "objectively" good or bad (meaning nearly all users agreed on this "fact"), and users bought into your system where the goal is to move the average, then the average rating would *approach* 1 or 5. (Not be exactly equal to 1 or 5).

    If you think it's ridiculous to say that apps are either good or bad (as a binary system), I would say the same thing about assuming that ratings are only good or bad (e.g. 1-4 are all bad, only 5 is good).

    An example of why I think looking at any individual rating in terms of its effect on the average is absurd:

    Let's say dwMap is an objectively "good" app (I happen to love it). The first user, who likes the app, rates it as 3 (maybe they haven't heard that 5 is the only good rating.) The 2nd user, who also likes the app, sees the average score of 3 and thinks this is an outrage! Clearly the goal is to raise the average, so they vote 4.

    As the average rating goes higher and higher, eventually every user who likes the app will be "forced" to vote 5 stars (as you'd prefer), since the goal is to raise the average rating of an app that you like. And as more and more users vote 5 stars, the average will *approach* 5. Again I am assuming that the *vast* majority of users like the app.

    Except ofc nobody actually thinks that the goal of rating an app you like is to raise the average rating, except maybe for app devs?

    Similarly, if there's an app that users tend to dislike just a little bit, they will all be "forced" to give rating of 1 star, since the "goal" is to lower the average rating of apps that you dislike.

    To be clear:

    - in a normal system, if an app is slightly better than average, I would expect the average rating to be slightly over 3.0. And if the app is slightly worse than average, I would expect the average rating to be slightly under 3.0.

    - in a system where the goal is to lower the average rating of an app you dislike (to any degree), and to raise the average rating of an app you like (to any degree), I would expect a slightly good app's rating to approach 5 and a slightly bad app's rating to approach 1. Luckily nobody rates stuff by trying to lower or raise the average rating.
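    The contrast between these two systems can be made concrete with a toy example in Python (the vote distribution is invented for illustration):

```python
# A "slightly better than average" app: honest voters rate their actual
# experience, mostly 3s and 4s.
honest = [4] * 60 + [3] * 40

# Under the "move the average" system, the same users collapse to the
# extremes: anyone who likes the app at all votes 5, everyone else votes 1.
strategic = [5 if v >= 3 else 1 for v in honest]

print(sum(honest) / len(honest))        # 3.6 (slightly over 3.0)
print(sum(strategic) / len(strategic))  # 5.0 (pinned at the extreme)
```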

    Where I was brought up we have a 1-5 grade system. It's horrible. Depending on the family/child/school 4 could be not good enough (5 is best)

    Sure and when I was growing up, I had a math teacher who refused to give 100% even for a test with 0 wrong answers, out of some weird personal principle that "nobody deserves 100%", which was never fully explained.

    So when my parents saw the 99% mark on a "perfect" math test, they were like "what happened to the 1%?????" as if 99% was "bad".

    And actually, perhaps partly due to this general perception that any mark lower than 80%/90%/95% sucks, North America has seen significant grade inflation, leading to a self-fulfilling prophecy where in truth, anything lower than 90% sucks.

    If you insist that anything lower than 5 stars is objectively bad, and all users agree with you and rate accordingly, then in fact a 5-star rating will be objectively meaningless, except to tell you that the app is "not bad". It could also be good or even great, but you won't be able to tell from the rating.

    It's kinda like one of my old jobs, where the highest rating a manager could give for any given category in the yearly employee review was "meets expectations". The design of the review system gave people no incentive to try harder, since you could never get a better rating than "meets expectations". (I am not claiming there weren't other ways to incentivize people to work harder, or that ppl could not be internally motivated to do so.)

    Similarly, if all "good" apps in the store have a perfect 5 star rating (because users all agree with you that anything less than 5 is "bad"), how can I tell the difference between any of them?

  • To expand on the school analogy:

    When you write an exam for a university course, your mark on that exam is based on the exam alone, not on its effect on your average for that course (which again would be absurd).

    Let's say a student is averaging 95% for the year, and their final exam is multiple choice, where each question is weighted equally. If that student answers 90% of the questions correctly, then the mark for their final exam should be 90%. The teaching assistant who marks the exam doesn't say to themselves: "Well, this mark of 90% can't be fair to the student, because it's lower than their current course average of 95%, which means that the average will decrease. In this case, 90% is an objectively bad mark. In light of this fact, I will adjust their exam mark to 96%, which will raise their average and achieve the goal of rewarding the student for a good final exam."

    No, the mark for the final exam stands alone, regardless of marks earned for other course work and tests during the year.

    Similarly, when a user rates an app, their rating stands alone. They shouldn't base their rating on the current average rating of the app (in the hopes of lowering or raising it), they should base their rating on their personal experience of the app. (This ofc ignores the possibility that the current average may *bias* or predispose the user into thinking that the app is better or worse.)

  • Where I was brought up we have a 1-5 grade system. It's horrible. Depending on the family/child/school 4 could be not good enough (5 is best)

    So to be clear you're looking to replicate this exact situation in the CIQ store by insisting anything less than 5 stars is awful? As I said above, if everyone agreed with you, it would only serve to make 5 star ratings completely meaningless.

    Same as if everyone agrees that anything less than 90% in school is bad, then a 90% mark will also be worthless.

  • As usual, the conversation (or rather monologue :) went in a different direction than intended. So I'll only respond to one point, where you make the same mistake again and again: assuming all users used "my" rating system would not cause every app's average to converge to either 1 or 5. This is because each user who votes has a subjective value they feel the app is worth, and they will only vote higher than the current average if their own value is higher. So I claim that the average rating of an app in either of these two systems will end up being the same.