Before you continue, a small experiment to see if I’m the only one for whom this is the case. It’ll probably take about 5 minutes.
First, pull up any old random list of books (not a “best-of” or “worst-of” list), or, even better, your bookshelves, if you keep all the books you read there rather than just the ones you particularly like. It’s important that this be a truly random set of books. Glance over it and quickly sort the books by how much you liked them into three groups: great, average, and lousy. Don’t count sequels. Keep track of how many land in each group. Do this for maybe 50 books, just to get an idea of how often you consider a random book you read of your own choice (as opposed to required reading for whatever reason) great vs. average vs. lousy.
In my case, of the first 51, I counted 15 as great, 27 as average, and 9 as lousy. This step was mildly interesting by itself, just for what it told me about how accurately I can judge a book before reading it. It isn’t the point of this little experiment, though.
Now go to http://www.goodreads.com/list. Poke around. You’re looking for a list that satisfies three criteria:
- At least 200 books on the list.
- At least 1000 voters.
- A title suggesting the list probably includes a bunch of books you’ve already read of your own choice.
Browse through this list until you’ve run into 50 books you’ve read (or you reach the end of the list), rating and counting those books as before. Again, don’t count sequels.
My count was 23 great, 14 average, and 13 lousy.* The odd thing is that, based on the books I chose for myself, I’d expect about 1.7 great books per lousy book. With 13 lousy books, that predicts about 22 great books – and I got 23. What apparently happened is that a lot of average books got winnowed out by the mass-voting process, while the lousy books were just as likely as the great ones to make the list.
It’s not an unreasonable result, but it is an interesting one. It implies that if I’m looking to avoid disappointment, I should stick with books I choose for myself (82% of those were at least average, versus 74% of the recommendations). If I’m looking to be truly excited about a book, I should seek recommendations (46% great, versus 29% from my own selection).
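The percentages above follow directly from the tallies. A short script makes the arithmetic explicit (the counts are my own from this experiment, so the exact numbers are illustrative, not general):

```python
# Tallies from the experiment: (great, average, lousy) counts.
own = {"great": 15, "average": 27, "lousy": 9}      # 51 books I picked myself
recs = {"great": 23, "average": 14, "lousy": 13}    # 50 books from the Goodreads list

def rates(counts):
    """Return the at-least-average rate, the great rate, and the great:lousy ratio."""
    total = sum(counts.values())
    return {
        "not_lousy": (counts["great"] + counts["average"]) / total,
        "great": counts["great"] / total,
        "great_per_lousy": counts["great"] / counts["lousy"],
    }

own_r, recs_r = rates(own), rates(recs)
print(f"own picks: {own_r['not_lousy']:.0%} at least average, {own_r['great']:.0%} great")
print(f"recs:      {recs_r['not_lousy']:.0%} at least average, {recs_r['great']:.0%} great")

# My own picks run ~1.7 great books per lousy one, so 13 lousy books on the
# recommended list predict about 22 great ones; the actual count was 23.
predicted_great = own_r["great_per_lousy"] * recs["lousy"]
print(f"predicted great on list: {predicted_great:.0f}")
```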
It also makes me wonder what, if anything, this would imply about books themselves: perhaps people tend to agree when a given book is unusual, but not on whether its unusual nature helps or hurts its quality.
One final side note: now rate books in series, and books by the same authors, the same way you just rated the others, in cases where you considered the first book “great.” In my case, of the 20 books I checked that way, I found 17 great, 2 average, and 1 lousy. That’s a fairly clear quantitative explanation of why I tend to read authors serially – roughly three times as likely to find a great book, and one-third as likely to find a lousy one.
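The “three times” and “one-third” figures come from comparing the series tallies against my self-chosen baseline; a sketch of that comparison, using the counts from the text:

```python
# Counts from the text: (great, average, lousy).
own = {"great": 15, "average": 27, "lousy": 9}    # 51 self-chosen books (baseline)
series = {"great": 17, "average": 2, "lousy": 1}  # 20 follow-ups where book one was great

def rate(counts, key):
    """Fraction of books in this tally that fall into the given group."""
    return counts[key] / sum(counts.values())

# Relative likelihoods of series books vs. the self-chosen baseline.
great_ratio = rate(series, "great") / rate(own, "great")   # ~2.9x as likely to be great
lousy_ratio = rate(series, "lousy") / rate(own, "lousy")   # ~0.28x as likely to be lousy
print(f"great: {great_ratio:.1f}x as likely, lousy: {lousy_ratio:.2f}x as likely")
```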
*To be fair, there may be observer bias here: I noticed this trend first and only then tried to check it precisely with numbers. So another possible explanation for these numbers is that I was unintentionally deceiving myself in the rating process to produce this result. That’s another reason I wanted other people to try it before knowing what pattern I thought I saw.