The other night I spent an hour adding some of my records to Rate Your Music (http://rateyourmusic.com) and dutifully grading them before suddenly thinking “what the hell am I doing wasting time like this?”
It got me thinking again, though, about that age-old chestnut (and a bugbear of mine since I began to read the music press): the star ratings system. There are so many reasons why I detest it, and its variants. There’s Robert Christgau’s pompous A to E system, treating albums as if they were GCSE exam papers (C-, must try harder), marks out of ten and even (most ludicrously of all) marks out of 100. So what differentiates a 68/100 album from a 69/100 album then?
On sites like Rate Your Music and Discogs there is a logic to it. By amassing individual scores, some kind of consensus is arrived at (although it does throw up some really weird results – apparently the second best album of 1982 is the soundtrack to Conan The Barbarian. What the fuck happened there?). This is useless when comparing band A with band B, but has some function in assessing the relative merits of stuff within an act’s catalogue. However, populist works will (by their very nature) score better than more esoteric ones that sit outside the artist’s usual oeuvre. Then you have the fourteen-year-old metalheads from Ohio who spend all day giving one star to anything that isn’t Kiss or Motley Crue. But I concede that if you want to build a body of mass opinion, this is probably the only real way to do it.
For magazines, newspapers, e-zines and the like, though, there’s no such excuse. I remember when Sounds was the only paper (in the UK at least) that used such a system. Now there’s hardly a publication that doesn’t. It’s lazy, patronising and next to useless as a guide to whether something is potentially of interest to the reader. Are Jo or Joe Bloggs going to rush out to HMV to buy a CD simply because some hack has awarded it five stars? Aren’t they going to be more interested in whether it appeals to them as an individual?
Fundamentally, a review should be a balance of fact and opinion. It should inform the reader what the music’s like as well as give the writer’s take on its artistic worth. The latter is always a matter of opinion and not a statement of fact – something a lot of writers forget (and yes, I include myself). A few weeks ago, Dave Simpson wrote a live review of Autechre in the Guardian. He gave them one measly star and obviously had a terrible evening. To give him his due, though, he did a reasonable job of describing what went on, and I found myself wishing I’d been there. The things he hated sounded intriguing to me – by the end of it I was cursing myself for having failed to see them in Glasgow. My point here is that the one star at the top conveyed absolutely nothing about the gig except that Simpson thought it was pants.
You’d think that a magazine’s editor would like you to read the damn thing. In the Wire’s Soundcheck section, you need to read the reviews to get an idea of what the records are about. In Mojo, the eyes glaze over when confronted by a sea of three or four star ratings. You end up reading none of them because the headline says it all – “good / quite good”. It’s bland and actively discourages further reading. I don’t even bother reading the reviews of acts I like, unless they have a proper full page piece. I end up skipping through looking for the one star hatchet jobs for a giggle. Alas, there are precious few of those.
Ultimately, I just fundamentally object to reducing art to numbers as if everything were a school project. Imagine Beethoven turning around after conducting the premiere of the Eroica to see punters holding up scorecards as if they were at a gymnastics final. It’s ludicrous, lazy and plain dumb.
Rant over. It’s off my chest, although it will no doubt continue to bug me. If you would like to rate this piece, please do so in the comment box below :)