Fake reviews can be seen as an instance of Goodhart's law, where the metric is the rating or score of the business. Initially, those ratings may correlate strongly with something real, say the "quality" of the business. But the more people rely on those scores and the reviews underlying them, the more incentive businesses have to game the system, which destroys the original correlation between ratings and quality.
A big part of the problem with review systems is the one-to-many nature of nearly all of them: when a person posts a review, that review and its score can be seen by everyone. This leverage makes it very efficient for businesses to game the system, as a small amount of fake information can "infect" the purchasing decisions of a large number of users.
So, one alternative might be a many-to-many review system where you only see reviews and ratings from your network of friends/follows (and maybe friends-of-friends, to increase coverage). Essentially Twitter, but with tools and UI that focus on reviews and ratings. That way, fake reviews could only affect a limited number of people, making the cost/benefit calculus much less attractive for would-be astroturfers and shills.
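To make the idea concrete, here's a minimal sketch of network-scoped visibility. Everything in it is hypothetical (the data shapes, the function names, the depth-2 cutoff): the point is just that a rating is computed only from reviewers reachable within your follow graph, so a shill account outside your network contributes nothing.

```python
def visible_reviewers(follows: dict[str, set[str]], user: str) -> set[str]:
    """Accounts whose reviews `user` can see: direct follows
    plus follows-of-follows (depth 2), excluding the user."""
    direct = follows.get(user, set())
    second: set[str] = set()
    for friend in direct:
        second |= follows.get(friend, set())
    return (direct | second) - {user}

def visible_rating(reviews: dict[str, dict[str, int]],
                   follows: dict[str, set[str]],
                   user: str, business: str):
    """Average rating of `business`, computed only from reviews
    posted by accounts visible to `user`; None if there are none."""
    reviewers = visible_reviewers(follows, user)
    scores = [reviews[r][business] for r in reviewers
              if r in reviews and business in reviews[r]]
    return sum(scores) / len(scores) if scores else None

# A shill account ("spam1") rates the cafe 5 stars, but it sits
# outside alice's network, so her computed rating ignores it.
follows = {"alice": {"bob"}, "bob": {"carol"}}
reviews = {"carol": {"cafe": 4}, "spam1": {"cafe": 5}}
print(visible_rating(reviews, follows, "alice", "cafe"))  # 4.0
```

Note the trade-off the post hints at: widening the horizon (depth 3, depth 4, ...) increases coverage but also re-creates the one-to-many leverage that makes fake reviews profitable in the first place.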