We all do it, whether we're planning a holiday or just choosing a movie to watch – we check the reviews.

From TripAdvisor to YouTube and Amazon, ratings are everywhere. Yet whether it's oddly ecstatic praise for ho-hum products or catastrophically bad ratings for renowned hotels, it's long been clear that many online reviews aren't what they seem.

Now the online film review site <em>Rotten Tomatoes</em> has become the latest to take action to combat ratings abuse.

This year the site killed off its "want to see" scores, where movie fans could show how excited they were about an upcoming release. Now it has gone further. If you want your opinion of a film taken seriously, you will first have to prove you actually saw it.

These moves come after evidence revealing that some movies are deliberately targeted – "review bombed" – by trolls.

In the run-up to the world premiere in March of <em>Captain Marvel</em>, the movie generated such a negative response that its "want to see" rating plunged to a dismal 28 per cent. The suspicion was that the movie provoked a backlash among internet trolls who took exception to the idea of a female superhero.

Similar campaigns are believed to lie behind the tsunamis of negative reviews of <em>The Last Jedi</em>, <em>Ghostbusters: Answer the Call</em> and the TV debut of the first female <em>Dr Who</em>.

In the end, <em>Captain Marvel</em> was praised by critics and the public alike on its release, netting more than $1 billion (Dh3.67bn) to date and becoming the second-biggest movie by box-office gross this year.

Although welcomed by many, the action taken by <em>Rotten Tomatoes</em> is not without problems. For a start, the verification system means only US moviegoers can influence the audience score, at least for the time being. The rest of us can still post reviews, but they will not count for anything.

By opting for verification, <em>Rotten Tomatoes</em> is using the same approach to fake reviews as that used by Amazon. But as the global online shopping service has found, it's not perfect.

In April the UK-based consumer group Which? found that reviews of hundreds of gadgets such as dashcams and smart watches showed signs of being faked – for example, with hundreds of five-star ratings appearing on the same day.

Most of the suspicious reviews were from unverified purchasers, but it's known that some sellers instantly contact anyone posting bad reviews, offering inducements to change or delete them. It is also becoming common for sellers to solicit reviews in return for freebies.

In any case, verified purchase reviews are getting harder to find. The Which? study referred to evidence that while such reviews made up 94 per cent of all monthly reviews on Amazon in the first quarter of last year, the proportion has now dropped to less than 70 per cent.

Amazon insists it works hard to protect the integrity of reviews, using both human and artificial intelligence-based techniques to weed out fakes.
But the battle for consumer trust is triggering some ugly spats between websites and technology companies claiming to be able to spot dodgy reviews.

Recent allegations that up to a third of the reviews on TripAdvisor were fake led the hotel comparison site to lambast Fakespot, the company whose algorithms were used as the basis of the claim.

According to Fakespot, its algorithms look for clues in the spelling, grammar, timing and quantity of reviews to assess reliability. TripAdvisor insists, however, that its own tests showed these methods are unreliable.

Fuelling the controversy, the respected tech review website <em>CNET</em> reported that Fakespot's verdicts often disagree with those from <em>ReviewMeta</em>, another review-checking website – which hardly boosts confidence in the reliability of either.

Despite all the confusion, claims and counterclaims, at least the online world is taking concerns about ratings reliability seriously.

That cannot be said for many global companies that still use spurious methods to rate how they are performing, who should get bonuses – and who should get the sack.

For years they have measured customer loyalty using the now-familiar question: on a scale of 0 to 10, how likely are you to recommend this product or service to a friend?

Customers giving scores of 0 to 6 are deemed unhappy, while those giving scores of 9 or 10 are seen as "promoters". The difference in the percentage of customers falling into the two groups then gives the so-called net promoter score, or NPS.

Launched in 2003 by Frederick Reichheld of global management consultancy Bain & Company, this scoring system was originally touted as a useful predictor of customer behaviour.

Yet as early as 2007, researchers were casting doubt on the validity and reliability of the method. Despite this, a recent study by <em>The Wall Street Journal</em> showed that NPS ratings have "cult-like status" among chief executives, with 50 S&P 500 companies citing the metric in earnings conference calls last year – nearly triple the number in 2012.

Worse, <em>The Wall Street Journal</em> reported that the NPS has now morphed into a metric used in assessing executive performance and compensation in some leading companies – a role Mr Reichheld himself described as "completely bogus".

It is hardly the first ratings method to be pushed too hard and too far by big business. During the 1980s, so-called "rank and yank" methods emerged that assumed employee performance followed a bell-shaped curve that could be divided into top, middle and bottom ranks.

About 70 per cent of employees were deemed to be middle rank. Managers then rewarded the top 20 per cent, and cautioned or sacked the bottom 10 per cent. Despite lacking any real justification, many corporations relied on "rank and yank" for years. Then, in about 2010, accusations of bias and "gaming" surfaced along with lawsuits – and the method suddenly fell out of favour.
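For readers who want to see the arithmetic, here is a minimal sketch of the NPS calculation described above, written in Python. The function name and the sample scores are purely illustrative and do not come from Bain & Company or from the studies cited in this column.

```python
# Minimal sketch of the net promoter score (NPS) arithmetic described above.
# Scores of 0-6 count as detractors, 9-10 as promoters; 7-8 ("passives") are
# ignored. NPS is the percentage of promoters minus the percentage of
# detractors, so it ranges from -100 to +100.

def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives and 2 detractors out of 10 responses
# gives 50% - 20%, i.e. an NPS of 30.
print(net_promoter_score([10, 10, 9, 9, 9, 8, 8, 7, 5, 2]))  # 30.0
```

Because passives are ignored and only the two percentages matter, very different spreads of responses can collapse to the same single number – part of what researchers have questioned about the method's reliability.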
In the end, all ratings systems are about one thing: trying to boil down complex decisions to just one number. But as anyone who has snoozed through a top-rated movie knows, just because something is possible doesn't mean it works.

<em>Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK</em>