By the Numbers

I am a consumer of both literature and video games, as well as of reviews of both media. I’m also a producer of literary and video game reviews, both formal and informal (including a blog entry here in Full Stop about Kentucky Route Zero). My habits around the two media and their respective reviews, however, differ in one significant way. Lately I’ve been thinking a lot about that difference, partly because my own work is in a nascent phase filled with decisions and indecision, and partly because of the myriad conversations going on about the relationship between book reviews and literary citizenship (the best summation of the issue comes from The Review Review’s Becky Tuch, writing in Beyond the Margins).

Here’s the difference, in its simplest form: a good portion of video game reviews have a quantified score attached. Book reviews, outside of user-generated ones like those on Goodreads or Amazon, typically do not. I know what you’re thinking: apples and oranges, that the two media are too far apart in how they’re consumed and in the cultures around them for a comparison to be valid at all. Hear me out, and we’ll go from there. An open dialogue between video games and books, and between their respective creators and critics, is not totally new. This Full Stop piece from last year, for example, argued that writers should play video games.

When I want to pick my next video game to play, I read and watch video game reviews, usually found via the aggregation site Metacritic. Metacritic, owned by CBS, collects reviews and uses a corporate-secret algorithm to transform them into a score out of 100, much like Rotten Tomatoes. However, whereas Rotten Tomatoes focuses solely on movies, Metacritic provides aggregate scores for movies, television, music, and video games. When you click on a single game, you get the score, a link to and excerpt from every review that makes up that score, and a few user-generated reviews. I hop around these to make my decision.
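Metacritic keeps its actual formula and per-outlet weights secret, but the mechanics of this kind of aggregation are simple enough to sketch. Here is a minimal Python illustration; the outlets, scores, and weights below are entirely made up, and nothing here reflects Metacritic’s real numbers, only the general idea of a weighted average rounded to a 0–100 metascore.

```python
# A minimal sketch of score aggregation. Metacritic's real algorithm and
# per-outlet weights are proprietary; every value here is hypothetical.

reviews = [
    {"outlet": "Outlet A", "score": 90, "weight": 1.5},  # made-up weight
    {"outlet": "Outlet B", "score": 75, "weight": 1.0},
    {"outlet": "Outlet C", "score": 60, "weight": 0.8},
]

def metascore(reviews):
    """Weighted average of 0-100 scores, rounded to an integer."""
    total_weight = sum(r["weight"] for r in reviews)
    weighted_sum = sum(r["score"] * r["weight"] for r in reviews)
    return round(weighted_sum / total_weight)

print(metascore(reviews))  # 78 with the invented numbers above
```

The interesting design decision isn’t the average itself but the weights: who counts for more, and by how much, is exactly the part Metacritic doesn’t disclose.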

By contrast, when I want to pick the next book I’ll read, I completely ignore “professional” reviews and usually go straight to Goodreads’ user-generated score system. Don’t get me wrong, I still read book reviews, but I read them because they’re well written and enjoyable, or because I want to know what other people think about a book I’ve already read, not to decide what to read next. And therein lies the dodged question: what do we read and write reviews for? If it’s to decide what to read, then the current book review paradigm seems to fail at that purpose. Book reviews are overwhelmingly positive, and it’s become something of a shock to read anything with solid criticism, as Francine Prose pointed out in The New York Times. This is one of those truths we pretend doesn’t exist, though, as seen in the drama when a new BuzzFeed books editor announced he didn’t want to publish negative reviews (along with the accompanying New Yorker satire). When reviewers do have something negative to say, they couch it between positives, like a manager trying to make sure criticism isn’t taken the wrong way. Some reviewers, and I’m guilty of this as well, take it a step further and seem to apply the praise-to-criticism ratio researchers in American Behavioral Scientist found to be ideal: 5.6:1. It’s scary to criticize a book as someone who wants to make a living writing: your mind fills with images of burning bridges, and it’s unclear what the payoff for truth and bluntness would be.

Not that video game reviews don’t have this problem. They do. Laura Parker discusses how video game reviews too often move in lockstep with one another, using BioShock Infinite’s high ratings as a focal point. Metacritic itself uses a different scale for games than for movies, television, and music, with “favorable” running from 75–100 for games and from 61–100 for the others. And, as in the literary world, there’s a lot of overlap between the producers of content and their reviewers, which means a negative review can come back to affect a reviewer’s career later. Reviewers rely on solid relationships with game publishers in order to maintain access, and the venues that run the reviews rely on advertising cash from those same publishers. In fact, some game publishers even stipulate that a review below a certain score threshold can’t be published until a later date than the positive ones. These conflicts lead to situations like the now-infamous firing of Jeff Gerstmann following a negative review of Kane & Lynch: Dead Men in 2007.
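That scale difference amounts to nothing more than a pair of cutoffs, which makes it easy to see how the same number reads differently across media. A small sketch follows, using only the favorable cutoffs cited above; collapsing everything below them into “not favorable” is my simplification, since Metacritic’s full banding isn’t at issue here.

```python
# Only the favorable cutoffs (75 for games, 61 for other media) come from
# the text above; lumping everything below them together is a simplification.

FAVORABLE_CUTOFF = {"game": 75, "movie": 61, "tv": 61, "music": 61}

def is_favorable(score: int, medium: str) -> bool:
    """True if the score clears the 'favorable' bar for that medium."""
    return score >= FAVORABLE_CUTOFF[medium]

print(is_favorable(70, "game"))   # False: a 70 falls short for a game...
print(is_favorable(70, "movie"))  # True: ...but counts as favorable for a movie
```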

But game reviews are able to account for this problem, to a certain extent, via quantification. With a number attached to reviews, even if the reviews are overly positive, you still have a basis for comparison between two games, or between two reviewers of the same game. The difference between an 85 and a 90 may seem small, but if a significant portion of the games you’re looking at are rated above a 75, then the difference can still be meaningful. And while it might be impossible to compare a heavily archetypal first-person shooter like the latest Call of Duty with an experimental narrative-based game like Kentucky Route Zero (or David Baldacci with Karen Russell?), you can compare apples to apples, finding similar products and checking their scores from the same reviewer. If two scores for a single game vary significantly, I can quickly find those two reviews and see where they differed. Validity through relativity.

Quantification of professional reviews in the video game industry has led to its own set of problems. Not all reviewers use numbers, and some use scales that don’t fit a percentage model, which leads aggregation sites either to ignore them or to assign a number based on the tone of the review. Some sites, like Ars Technica, choose not to be part of the Metacritic process because of the loss of control over how their reviews are presented. Metacritic scores are also taken so seriously by publishers that in some cases developers’ bonuses are tied to the review score, independent of the game’s commercial success. This year, Amazon began listing Metacritic scores alongside its own user-generated reviews for video games, despite not using similar systems for other media (Amazon also lists Metacritic’s user-generated scores, which are entirely separate from the professional review aggregate). Metacritic itself has come under fire for weighting reviews differently depending on what’s being reviewed and who’s doing the reviewing, and for keeping that process opaque. In addition, many game journalists have decried the overemphasis on aggregate scores, arguing that it further pressures reviewers to give certain scores and incentivizes shallow, quick reviews.

Despite these problems, numbers allow for ease of use. If a new game comes out as part of a series and I want to see how it compares to a previous entry, I can compare the two scores on Metacritic and then browse through to individual reviewers or review sites if I want more detail. Compare this with the book review world. If it’s a big-name writer, there’s a good chance two of their books will have been reviewed by the same publication, but if the two reviews were written by two different people, it’s difficult to get a sense of comparison from prose descriptions alone. If you want to venture into books that aren’t from the Big Four publishing houses, or read a review from a publication besides The New York Times, the LA or New York Review of Books, or The Guardian, it takes a decent amount of work.

There does appear to be a call for quantified reviews in the literary world. Look at the popularity of Goodreads, which Amazon bought back in March of 2013. From 2012 to 2013 alone, the site doubled its membership, rising to 20 million users. Goodreads draws exclusive author interviews, creates significant interaction between authors and readers in the form of Q&As and message board conversations, and hosts large book club discussions. Goodreads also draws a lot of advertising dollars from publishers. And I know: for a lot of people in the lit scene, the first reaction is to be upset at a site that rates The Hunger Games a full point higher on a five-point scale than Moby-Dick; Lee Klein addressed that anger in his piece in Full Stop from 2012. The same disdain goes for Amazon’s user-generated reviews: the radio program WITS features dramatic readings of one-star reviews. But again, relativity. If you’re looking to read Bolaño for the first time and are unsure where to start, Goodreads lets you see that The Savage Detectives and 2666 have each been reviewed about the same number of times, and that 2666 comes out on top by a thin margin.

Adding numbers to book reviews would lead to problems. Maybe they would look like the problems video game reviews have faced: publishers offering authors incentives for hitting certain review marks, review publication dates being limited by the score, or any of a myriad of others. There would be reviewers who assign an outlier score to a book just for the publicity it brings (although this seems to happen to some degree already). If an aggregation site popped up to collect review scores, there would inevitably be drama about whose scores were included, and about where the line falls between amateur and professional, between a casual review on a blog and a serious venue for consideration. However, adding some quantification to book reviews could get them used more for actual decisions, both by readers and by publishers. More use could lead to book reviews being taken more seriously. And if book reviews were taken more seriously, that could bring in advertising dollars to pay reviewers. Paid reviews, in turn, would reduce the reliance on the exploitation of literary citizenship and allow for more risk-taking in review writing. Maybe not the whole answer, but a solution to consider.

Of course, here I am: a person who has published book reviews without numbers, who has made my Goodreads shelves private out of a mild, unfounded paranoia about being passed over for an opportunity because I wasn’t a huge fan of Washington Square, and who is writing this essay for a site that runs reviews without scores. I could brush these things aside on the grounds that my contribution to the review world is too small to have an impact, but I think there’s more to it. As readers, it’s scary to think that our experiences with books could be translated into a simple number, and I’d be the first to argue that they can’t. Not entirely. I’m fairly certain that, for me, assigning a digit or two to a book would be far more difficult than writing a thousand-word review. And books differ from the other media found on Metacritic in meaningful ways. Unlike video games, length relative to price is typically not a consideration for books. Unlike television shows, a book doesn’t get cancelled if a bunch of people stop reading after chapter three. Unlike movies, the cost and ease of consuming a book doesn’t drop significantly if you wait a year. All of these things and more matter when it comes to reviewing each medium, but they also shape how each medium’s audience and producers relate to its reviews.

Maybe the score isn’t the answer. Maybe it’s what having an aggregate score admits. Aggregate scores like those on Metacritic and Goodreads draw the reviewer out of isolation and place them in a bigger conversation about the work. They argue, prominently, that each individual piece of that aggregate matters, and not just that each piece matters: that the pieces matter in relation to one another. If we, as a literary community, started thinking about reviews more in terms of what they provide the reader and what their place is in the literary conversation, I think the whole book review ecology would benefit. Maybe this would take the form of scores and aggregation sites, but it doesn’t have to. It could be something as simple as venues adding links to reviews of the same book or author at other venues.

Reviews of art ask something of us. They ask us to agree that while we might enjoy art in isolation, bringing our personal histories to bear on each new work we consume, other people still have something to say about that art that will be meaningful to us. If we’re choosing those other people, I’d ask that we think about how and why we’re making that choice. If we’re hoping to be those other people, those reviewers, I’d ask that we think more about how and why we plan to do that.

 
Illustration by Eliza Koch. See more of Eliza’s work here.


 
 