Reading Guide Goldstein's Book Answers


Merilyn Mardis
Aug 5, 2024, 2:05:24 AM
to torodnihy
However, this selection was not representative of the quality of the complete list that he submitted to our program. Goldstein posted reviews for 15 wines, but the submitted list contained a total of 256 wines, and only 15 of those scored below 80 points.

I admit that I haven't followed this closely in the months since. Perhaps what Matthews says here was a lie? But if not, did Goldstein ever explain his decision to publish only the partial list, when most of the full list was a standard award-winning list?


Obviously I like your blog, because I read it all the time, and I talk to plenty of people who do, too. But I'm curious how you forgot that you once thought Goldstein was "shadowy." I'd have liked to see you ask him about it.


Excellent interview. A perfect balance of wit, insight, and journalism. I score it an absolute 94 (you would have scored higher had you compared the revolutionary book to the brilliance of the Beatles).


IMO wine magazines have their place in the same way that Consumer Reports has its place. The difference is that CR ratings are usually based on analytical facts and data that can actually be measured scientifically. I don't think I've ever seen them score something lower because they didn't like the color.


I love the fact that he validated wine bloggers. The $15 market is what new wine consumers are interested in. The trouble with the lower price range is that you get greater variance in quality. It seems that once you cross a certain threshold, the trustworthiness of a bottle greatly increases. Our only hope is that WS and WA do NOT review these lower-priced wines, because once they do, they'll no longer be lower-priced wines. ;)


Also, for what it's worth, I'd be stoked to see a Soldera on a wine list. Any Soldera. And that's one of his "bad wines." Soldera is a controversial name, but many wine lovers regard it as one of the best.


He's very good at manipulating statistics. I don't care that half his list consisted of unrated wines. And I agree that the average diner does not know that an "Award of Excellence" is only the lowest-level award, achievable with banal wine selections. But Goldstein should never have misled people about the full list. I've yet to hear any good explanation for his deception on that point.


Don't misunderstand my point. I think Goldstein had a point in poking at the awards program. My understanding is that it started out much smaller and has since grown enormously. It is so unwieldy that the base-level award should probably be scrapped or reworked. I've eaten at plenty of establishments where the award seems pointless. And certainly restaurants use the base-level award to frame themselves as something special, which they typically are not.


Like Alder, I've changed my views as well. And while I'm not Robin's biggest fan when it comes to the piece on the WS awards, I'm not an investigative journalist, and I wanted to keep the focus of the interview on the Wine Trails book, because there are elements in the first 50 pages of that book with (in my opinion) some profound implications for how we should view criticism in general.


What percentage of total reviews do those 3,000 comprise? What percentage of the average issue of WS (cover stories, exposés, etc.) is devoted to wines at $15? (I am not being accusatory; I am genuinely curious, as I am not what you would call an avid reader.)


As I've explained above in response to the first thread, Evan's account was inaccurate; as you know, the 15 or so wines that I posted in my article were not a random selection from Osteria L'Intrepido's entire list, but rather the entirety of its high-priced "reserve list." It is generally understood in the industry that a "reserve wine list" is meant to showcase the very best and most expensive wines from a restaurant's cellar.


Do you taste wines blind before you decide which bottles to buy? While I see that it removes some of the fake marketing influences on our judgments of wine, I have another question about the practice (for a consumer), and it has nothing to do with the practical difficulty of arranging blind tastings before buying.


If the label, the image we have of where a wine comes from, the philosophy of the winemaker, and whether the winery is run "sustainably" are all so-called outside influences to be removed in a blind tasting, is that really what you want to achieve?


Here's what I mean. When I drink a wine at home, I'm not drinking it blind. I see the label. I know things about it. If it's from a part of the world I'd like to visit, I like to think about being there. All of this adds to the pleasure of drinking the wine. Choosing wines based on blind tastings deliberately misses that and calls it a good thing. But these biases actually increase my enjoyment of the wine. So why try to eliminate them?


I would agree up to a point. The big thing, of course, is not to fall into the New Coke trap. But if you are trying to evaluate a wine (or soft drink) for quality, not for how well it is going to sell, I don't see a problem with blind tasting, particularly the way Wine Spectator does it: scoring the wine blind and then adding some context by revealing the wine before submitting the completed note.


One comment on the advertiser question: I think there are pretty serious liberties being taken if he's using that study from last month that basically found a one-point ratings difference, with the control group being ratings from The Wine Advocate. The guy who wrote the study said it himself at the end (and good for him for not going for the big headline): he has no idea what is causing this difference. He has a bunch of guesses, of course, but the biggest problem in the study is the lack of a control. Now, I know The Wine Advocate is supposed to be the control, but because we are talking about subjective viewpoints on the same topic, having The Wine Advocate be the control group is inherently problematic. The correlation between the ratings of the two magazines is less than 0.5, which to me means you have less than a 50-50 shot that the two publications will agree on a wine, regardless of ads.
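For anyone curious what that sub-0.5 correlation does and doesn't measure, here is a minimal Python sketch; the scores below are invented for illustration and are not the study's data:

```python
# Minimal sketch: Pearson correlation between two publications' scores.
# All numbers are hypothetical; none come from Reuter's study.
import numpy as np

rng = np.random.default_rng(0)

# Pretend both magazines rate the same 200 wines on the 100-point scale,
# each seeing the wine's "true" quality through its own tasting noise.
true_quality = rng.normal(90, 3, size=200)
spectator = np.clip(np.round(true_quality + rng.normal(0, 3, 200)), 50, 100)
advocate = np.clip(np.round(true_quality + rng.normal(0, 3, 200)), 50, 100)

# Pearson correlation between the two sets of scores.
r = np.corrcoef(spectator, advocate)[0, 1]
print(f"correlation: {r:.2f}")

# Caveat: r < 0.5 does not literally mean a "less than 50-50 shot" of
# agreement; correlation measures how strongly the two scores move
# together, not the probability that they match.
```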


The real test, of course, is whether advertisers get higher scores in Wine Spectator than non-advertisers. The answer depends on how you want to slice the data: US-only, advertisers score 0.3 points higher; the entire world, advertisers score 0.25 higher; or his own "panel C" creation (which is weighted by production amount), non-advertisers score 0.21 higher. The bottom line, to me, is an entirely meaningless difference of less than half a point, which, once you consider that Wine Spectator doesn't deal in anything finer than whole numbers, is almost identical. In fact, I would bet that the difference between being an advertiser and a non-advertiser is smaller than the sheer amount of error in the 100-point scale.
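As a rough illustration of what those three slices look like computationally, here is a hedged sketch; the wines, advertiser flags, and production weights are all made up:

```python
# Hypothetical illustration of slicing advertiser vs. non-advertiser
# score gaps three ways (US-only, entire world, production-weighted).
# All data below is invented; none of it comes from the actual study.
import numpy as np

rng = np.random.default_rng(1)
n = 500
scores = np.round(rng.normal(88, 3, n))      # 100-point-scale scores
advertiser = rng.random(n) < 0.3             # does the producer advertise?
us_wine = rng.random(n) < 0.5                # is it a US wine?
production = rng.lognormal(10, 1, n)         # cases produced (weights)

def gap(mask, weights=None):
    """Mean advertiser score minus mean non-advertiser score in a slice."""
    w = np.ones(n) if weights is None else weights
    adv, non = mask & advertiser, mask & ~advertiser
    return (np.average(scores[adv], weights=w[adv])
            - np.average(scores[non], weights=w[non]))

everything = np.ones(n, dtype=bool)
print(f"US-only gap:             {gap(us_wine):+.2f}")
print(f"entire-world gap:        {gap(everything):+.2f}")
print(f"production-weighted gap: {gap(everything, production):+.2f}")
```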


I don't like the 100-point scale, and I think there are a lot of things Spectator could do better. But Goldstein's take on this strikes me as way too close to his "sting" operation with the Restaurant Program. As always, there are lies, damned lies, and statistics.


Thanks for clarifying the results of this study of Wine Spectator ratings for advertisers' wines versus wines from non-advertisers. In fact, the author, Jonathan Reuter, concludes that there is basically no bias. Here is an excerpt from his blog post on the subject:


Our loyalty is to our readers, not the wine industry, and our goal is to give every wine a fair and equal chance to show its best, in a methodology that prevents bias, conscious or not, from affecting the taster's judgment, so that we can deliver credible, reliable wine reviews to consumers.


As a side note, I despise the use of OK/Fair as the mildest negative point on a "good" scale. Find me one person who thinks "Okay" is as negative as "Good" is positive. Sorry about the OT; that is perhaps my biggest research pet peeve.


The problem is that the whole process leading to this "one point" difference is flawed beyond belief, because the Wine Advocate is not a proper control group. A control group, as I'm sure you know, is a group that behaves the same as the test group except for the one thing being tested. But Wine Spectator tastes blind, and mostly in its offices, while the Wine Advocate does not, and wine is an inherently subjective subject that leads to disagreement no matter what the variables are (witness the less-than-0.5 correlation between the two publications' scores). About all you can say is that both magazines score wines on a 100-point scale and publish the results. Once you throw out the efficacy of the Wine Advocate as a control group, you're left with the differentials between advertisers and non-advertisers within Wine Spectator itself, and that difference is less than half a point.
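If you wanted to check whether a sub-half-point gap is distinguishable from the noise of the scale at all, a permutation test is the standard move. Here is a sketch on made-up data (the scores and advertiser flags are invented):

```python
# Permutation test sketch: shuffle the advertiser labels and see how
# often chance alone produces a score gap at least as large as the
# observed one. All data is hypothetical.
import numpy as np

rng = np.random.default_rng(2)
scores = np.round(rng.normal(88, 3, 500))
advertiser = rng.random(500) < 0.3

observed = scores[advertiser].mean() - scores[~advertiser].mean()

extreme = 0
for _ in range(10_000):
    shuffled = rng.permutation(advertiser)  # break any label/score link
    gap = scores[shuffled].mean() - scores[~shuffled].mean()
    if abs(gap) >= abs(observed):
        extreme += 1

print(f"observed gap: {observed:+.2f} points")
print(f"two-sided p-value: {extreme / 10_000:.3f}")
```

A large p-value here would say exactly what this comment argues: the gap is indistinguishable from the scale's own error.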
