• Star rating systems don’t accurately convey opinions. The majority of reviews are either 5* or 1*, with only a few wannabe critics voting in between, each applying their own arbitrary scale.

    If Amazon are going to change things, why not adopt something more meaningful? Simple up/down votes for things that actually matter:

    Was this product as described? 👍/👎

    Are you satisfied with the quality? 👍/👎

    Are you satisfied with the value for money? 👍/👎

    Then a few optional questions for things that aren’t intrinsic to the product, such as postage and packaging.

    • Star ratings are very broken, because everyone seems to think of the rating differently. IMO the criteria should be like this:

      ⭐️ ⭐️ ⭐️ ⭐️ ⭐️ = The best thing ever.

      ⭐️ ⭐️ ⭐️ ⭐️ ⚫️ = Above average.

      ⭐️ ⭐️ ⭐️ ⚫️ ⚫️ = It’s ok. Gets the job done. Not great, not terrible. The usual. Nothing special. Totally average.

      ⭐️ ⭐️ ⚫️ ⚫️ ⚫️ = Below average. I’m disappointed.

      ⭐️ ⚫️ ⚫️ ⚫️ ⚫️ = Worst thing ever. Crime against humanity. Ban this product and burn the remaining stock immediately.

      However, in reality people tend to use it like this:

      ⭐️ ⭐️ ⭐️ ⭐️ ⭐️ = It’s ok. I don’t have anything to complain about. Could be ok, could be great, or anything in between.

      ⭐️ ⚫️ ⚫️ ⚫️ ⚫️ = I’m not happy. Minor complaints, big complaints and anything in between.

      When it comes to book reviews, the five-star reviews tend to be useless. Especially with self-help books, it seems those reviews were written by people completely incapable of noticing any flaw in the book. I’m inclined to think such people shouldn’t review a book at all if they can’t think about it critically. On Amazon there are always lots of fake reviews produced in click farms, but elsewhere you’ll also find genuinely incompetent reviewers.

      • I think the problem is partly the fault of companies that insist, at least where rated interactions with employees are concerned, that every interaction should be five stars. In a system where the stars are all meaningful like this, that is simply not realistic. It gives people the general sense that rating anything you don’t completely dislike at less than 5 stars is a bad thing to do, because it risks hurting some employee somewhere who doesn’t deserve it.

      • Tbh, that difference in understanding doesn’t matter much if you have enough reviews, since it averages out.

        If you compare two products with one review each, then yes, it hugely matters whether the one reviewer considered 5 stars as “expectations fulfilled” or “the best thing that ever happened to me”.

        But if you have >1k reviews, both products will get equal shares of both reviewer groups, and it will average out so that the products are comparable based on their stars.

        That’s a big misunderstanding many people have with regard to reviews. Many people are also afraid that unfair reviewers will skew the score. But since those unfair reviewers are usually spread evenly across all products, it averages out once you have a couple hundred reviews.

        And that’s also what that article criticises. How many reviews a product has matters much more than the exact value. It’s easy to get a straight 5-star rating with only a single review; it’s much harder to do so with 10k reviews.

        So the information value is roughly: with <100 or so reviews, the rating means little to nothing; with >1000 reviews it can usually be trusted.
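The averaging argument above can be checked with a quick simulation. This is a minimal sketch under illustrative assumptions: the 50/50 split between “binary” reviewers (5 stars if satisfied, 1 if not) and “calibrated” reviewers (quality mapped onto 1–5), and the rating formulas themselves, are made up for the demonstration, not taken from any real data.

```python
import random

random.seed(42)

def simulate(n_reviews, quality):
    """Average star rating from n_reviews of a product with true quality in [0, 1].

    Half the reviewers use a binary scale (5 if satisfied, else 1); the
    other half map quality onto 1..5. Both scales and the 50/50 split
    are illustrative assumptions.
    """
    total = 0
    for _ in range(n_reviews):
        if random.random() < 0.5:  # "binary" reviewer: all or nothing
            total += 5 if random.random() < quality else 1
        else:                      # "calibrated" reviewer: quality mapped to 1..5
            total += 1 + round(4 * quality + random.uniform(-0.5, 0.5))
    return total / n_reviews

# Two products of identical underlying quality, rated by the same mixed population:
for n in (5, 100, 10_000):
    a, b = simulate(n, 0.8), simulate(n, 0.8)
    print(f"{n:>6} reviews: product A averages {a:.2f}, product B averages {b:.2f}")
```

With a handful of reviews, two equal-quality products can land on noticeably different averages; at thousands of reviews, the two rating styles wash out and the means converge, which is the point being made about comparing products only once they have enough reviews.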