Wine scores will be with us as long as human beings drink wine, but is more context needed, and how much can one score be compared to another? Andrew Jefford reports on a still pressing issue in this column re-published from 2013 as part of Decanter's 40th anniversary build-up.


Scores for wines are philosophically untenable, aesthetically noxious – but have great practical value. Wine scores will, therefore, be with us for as long as human beings drink wine. A shockingly beautiful recent bottle made me think about a little-discussed aspect of this analytical tool.

It’s there in the small print on the cover of Robert Parker’s Wine Advocate: “The numerical rating given is a guide to what I think of the wine vis-à-vis its peer group.” This little sentence should be written in red ink, in bold type, and possibly embossed, too. These are the most important 20 words of the preamble.

I assume all of those who score wine for a living would agree with Mr Parker. I assume, in other words, that no scorer of wines claims that he or she is using a universally applicable scale. (Does anyone claim this?)

It matters, because I suspect most casual ‘users’ of scores don’t appreciate the nuance, though it changes everything. For them, 97 points is 97 points, regardless of the wine’s origins. But it isn’t, and it never will be — as long as human mouths and human minds rather than machines are used to gauge wine quality.

Should any professional scorer of wines want to suggest universality, then he or she should issue a suite of correction factors based on regional origin (i.e. ‘peer group’) for their scores, derived from their own palate pantheon. Scores for the regions whose wines they regard most highly should be multiplied by some factor of more than 1, and scores for those whose wines they consider less universally impressive should be multiplied by some factor of less than 1.

As this will sound a bit mad, I think I’d better tell you about the bottle that set me off down this path. When I was tasting in Bordeaux last year, I had the chance to taste samples of Montrose 2009 and 2010. As I wrote in Decanter’s 2012 Bordeaux supplement, they struck me as two of the most beautiful young clarets I had ever tasted. Their price is beyond my drinking budget, and in any case it would be a shame to broach them prematurely – but I did, in March this year, buy a case of the 2010 La Dame de Montrose. Second wines are always best in great vintages, and it seemed logical that a little of the stardust would have rubbed off. I’m now on my third bottle of this wine.

It’s gorgeous. I’m enraptured by it. Magnificent scents of chocolate, leather, plums and currants, with the oak a whispered perfume. Tapered power on the palate; a shapely silhouette. But it just gets better; the wonderful Médocain austerity and sobriety grows as the beguiling front palate fades. The quality of the tannins is astonishing: so fine yet so firm. It already has a bucketload of gratifying sediment. Balance, richness, poise, vivacity, concentration: all perfectly judged. Of course I’m drinking it too soon, but too bad. I find it hard not to exclaim after each sip. Not many wines make me do this.

Its RP score is 94 (his sensible advice is “to buy in abundant quantities”, the only drawback being that it costs €47 a bottle). I don’t doubt that the score of 94 is spot-on against its 2010 left-bank cohort – but I’m also sure that it is much, much better than any other 94-point wine I’ve drunk in the last five years.

It thus needs, I judge, a correction factor of something like 1.11 – or, put differently, if all the other 94-point wines I’ve sampled have been correctly assessed, then La Dame de Montrose 2010 deserves 104 points. Indeed whenever I taste great Bordeaux, it often seems to me that it should be scored in a different way to other wines – it’s just better. (Great Bordeaux, note – not all the dry, dreary, over-oaked and under-wined ones.) This is obviously a function of my personal pantheon and you may feel the same way about Burgundy, or Mosel Riesling, or Argentine Malbec or whatever your bag is – though market prices suggest that more people probably feel this way about Bordeaux than about any other wine.
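The arithmetic behind that “correction factor” can be sketched in a few lines of code. This is a minimal illustration only, assuming a hypothetical table of per-region factors drawn from one taster’s personal pantheon – the region names and numbers below are invented examples, not anyone’s published system; only the 1.11 factor for the 94-point Montrose comes from the column itself.

```python
# A sketch of the "correction factor" idea: scale a peer-group score
# by a factor reflecting how highly the taster rates that peer group.
# The regions and factors here are illustrative assumptions.
CORRECTION_FACTORS = {
    "Left Bank Bordeaux": 1.11,    # regarded most highly: boosted
    "Generic Region": 1.00,        # neutral baseline
    "Less Favoured Region": 0.95,  # considered less impressive: discounted
}

def corrected_score(peer_group_score: float, region: str) -> int:
    """Convert a within-peer-group score to a cross-region score."""
    factor = CORRECTION_FACTORS.get(region, 1.0)  # unknown regions unchanged
    return round(peer_group_score * factor)

# La Dame de Montrose 2010: 94 points against its 2010 left-bank cohort
print(corrected_score(94, "Left Bank Bordeaux"))  # 94 x 1.11 = 104.34 -> 104
```

The same 94-point score thus lands differently depending on the peer group, which is exactly the column’s point: the number alone is meaningless without it.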

Remember, folks, the score is not the score – without the peer group.

This article was originally published on 23 September 2013 and has been re-published and updated on 10 August 2015 as part of a celebration of Decanter’s archives in its 40th anniversary year. Andrew Jefford is away.

  • Kent Benson

    Another big problem with the 100-point system is the lack of consistent use from
    reviewer to reviewer.

    Parker’s system claims to allocate the potential 100 points
    (effectively only 50 points, since every wine starts with 50) to ranges of
    points assigned to specific elements (e.g. color, aroma, taste). In other
    words, the score is an aggregation of component scores. Other wine critics have
    assured me that Parker does not follow his own design, but assigns a score
    without going to the bother of scoring each prescribed component part. (I tend
    to believe them.) If Parker does adhere to his claimed method, I know of no one
    else who does so. I like the component approach, as it adds a more objective
    discipline to the scoring process. Sadly, I suspect most critics score wines
    much in the same manner as James Suckling, who glibly quips, “Oh, I’m 92 on that,” one second after tasting a wine.

    The Parker statement you quoted makes it quite clear that
    Parker claims to score wines relative to their peers. I know of no other critic
    who makes this claim. In a recent interview he went even
    further, saying he is compelled to assign a 100-point score to wines that
    represent the best he has ever tasted from a particular producer! Surely, he
    doesn’t apply this standard to every producer! But, even if he only applies it
    to top-tier producers, it means that a 100-point score does not mean the same
    thing from one top producer to another.

    To make matters worse, Parker says he reserves the last 10
    of his 50 points for his estimation of the wine’s ability to improve with age
    and his overall impression. I don’t know of any other critic making this
    precise claim. I can only assume aging is a factor for most critics, since
    wines not expected to improve with age (e.g. Sauvignon Blanc) are not given
    scores much above the low 90s.

    I once argued with the editor Michael Franz in favour of scoring all wines
    relative to their variety and/or appellation, as it seemed more logical to me that the absolute best Sauvignon
    Blanc, for example, should receive 100 points. After offering many compelling
    reasons against this approach, he fully persuaded me to embrace the idea of scoring all wines relative to all other wines. I think this is the approach most
    critics rightly take, ostensibly eliminating the need for your “Bordeaux
    factor.” Another problem arises, though, when a critic makes allowances for the
    tastes of others. Many critics assign higher scores than they otherwise would,
    knowing that many of their readers appreciate styles they themselves dislike.

    Needless to say, scoring wine is a tricky business, made
    trickier by the fact that too few critics provide adequate explanations of their scoring.

  • David newman

    For those who never venture beneath the surface, scores will mirror the shiny reflection of their interest. And for the few who dare to immerse themselves in the pleasure of wine, a vast ocean of connection and enjoyment awaits.