“Elegant almost middle-aged red. Reminds one of herbs, complex and stunning shallot and traces of dried berry. Drink now through 2020.” Real wine description or fake? How about this one: “Verbena, aloe vera, melisse, lemon-balm, and finally the usual apple; the palate as always is shady and cool, though more overtly mineral than usual, but the finish crescendos into a salty tide that clings and doesn’t quit.” If you’ve ever read what wine experts write about wine, you might wonder just how much of this sort of mumbo jumbo is science and how much is snobbery.
Turns out, a lot of what wine experts “know” isn’t really based on fact. Pacific Standard has a breakdown of the standard wino talking points, and where they come from.
First, professional tasters often don’t have the same palates as the average person:
Customers rating Bordeaux at cellartracker.com consistently diverged from the opinions of a trio of experts on the same wines, according to a 2011 study. A separate study that collected opinions on unpriced wines found that average drinkers rated expensive wines lower, while the pros liked them more.
Second, professional tasters don’t have the same palates as each other, either. A 20-point test many critics use to grade wine never seems to produce consistent results from one taster to the next. The price of a wine also seems to have a lot to do with how good it tastes. Pacific Standard writes that when drinkers were aware the wine they were drinking cost more, they derived a whole new kind of enjoyment from it:
Knowing the price fired up the brain areas that registered pleasure, but it didn’t change the activity in the parts that process sensory information about taste. The drinkers reported enjoying the same wine more when they thought it cost more—and brain scans showed they actually did.
Slate argued last year that wine descriptions tell consumers far less about the wine’s taste than about the wine’s price.
Using descriptions of 3,000 bottles, ranging from $5 to $200 in price from an online aggregator of reviews, I first derived a weight for every word, based on the frequency with which it appeared on cheap versus expensive bottles. I then looked at the combination of words used for each bottle, and calculated the probability that the wine would fall into a given price range. The result was, essentially, a Bayesian classifier for wine. In the same way that a spam filter considers the combination of words in an e-mail to predict the legitimacy of the message, the classifier estimates the price of a bottle using its descriptors.
The analysis revealed, first off, that “cheap” and “expensive” words are used differently. Cheap words are more likely to be recycled, while words correlated with expensive wines tend to be in the tail of the distribution. That is, reviewers are more likely to create new vocabulary for top-end wines. The classifier also showed that it’s possible to guess the price range of a wine based on the words in the review.
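The approach Slate describes is essentially a naive Bayes classifier over review vocabulary. Here is a minimal sketch of the idea in Python, using a handful of invented training reviews (the real analysis used descriptions of 3,000 bottles; the words and price bands below are illustrative assumptions, not data from the study):

```python
import math
from collections import Counter

# Invented toy reviews paired with a price band -- stand-ins for the
# 3,000 real review/price pairs used in the Slate analysis.
TRAIN = [
    ("fruity easy smooth cherry pleasing", "cheap"),
    ("crisp clean fruity apple smooth", "cheap"),
    ("smooth easy berry pleasing clean", "cheap"),
    ("tannic structured cassis graphite saline", "expensive"),
    ("minerality saline crescendo tension graphite", "expensive"),
    ("structured brooding graphite length tension", "expensive"),
]

def train(examples):
    """Count how often each word appears in each price band."""
    counts = {band: Counter() for _, band in examples}
    priors = Counter()
    for text, band in examples:
        priors[band] += 1
        counts[band].update(text.split())
    return counts, priors

def classify(text, counts, priors):
    """Pick the most probable price band for a review, naive-Bayes style."""
    vocab = set().union(*counts.values())
    scores = {}
    for band, ctr in counts.items():
        total = sum(ctr.values())
        # Log prior plus log likelihood of each word,
        # with add-one smoothing for unseen words.
        score = math.log(priors[band] / sum(priors.values()))
        for word in text.split():
            score += math.log((ctr[word] + 1) / (total + len(vocab)))
        scores[band] = score
    return max(scores, key=scores.get)
```

This mirrors the spam-filter analogy in the quote: each word carries a weight based on how often it shows up on cheap versus expensive bottles, and the combination of weights yields a price estimate. A review full of “graphite” and “saline” would score as expensive under this toy model, while “fruity” and “smooth” would score as cheap.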
So when you’re reading a wine’s description, you might actually want to pay attention to how expensive it sounds, since that might be the most rewarding part of the tasting anyway.
More from Smithsonian.com: