peer-review, what is it good for? (peer-reviewed Monday)


In a lot of disciplines, the peer-reviewed literature is all about the new, but while the stories may be new, they’re usually told in the same same same old ways.  This is a genre that definitely has its generic conventions.  So while the what of the articles is new, it’s pretty unusual to see someone try to find a new way to share the what.  I’ll admit it, that was part of the attraction here.

Also attractive: it’s available.  Not openly available exactly, but it is in the free “sample” issue of the journal Perspectives on Psychological Science.  I’m pulling this one thing out of that issue, but there are seriously LOTS of articles that look interesting and relevant if you think about scholarship, research, or teaching/learning those things — articles about peer review, IRB, research methods, evidence-based practice, and more.

Trafimow, D., & Rice, S. (2009). What If Social Scientists Had Reviewed Great Scientific Works of the Past? Perspectives on Psychological Science, 4(1), 65-78. DOI: 10.1111/j.1745-6924.2009.01107.x

So here’s the conceit – the authors take several key scientific discoveries, pretend they had been submitted to social science/psychology journals, and write up some typical social-science-y editors’ decisions.  The obvious argument of the paper is that reviewers in social science journals are harsher than those in the sciences, and as a result they are less likely to publish genius research.

[Image: No Matter Project (flickr)]

I think that argument is a little bit of a red herring; the real argument of the paper is more nuanced.  The analogy I kept thinking about was the search committee with 120 application packets to go through – that first pass through, you have to look for reasons to take people out of the running, right?  That’s what they argue is going on with too many reviewers – they’re looking for reasons to reject.  They further argue that any reviewer can find things to criticize in any manuscript, and that just because an article can be criticized doesn’t mean it shouldn’t be published:

A major goal is to dramatize that a reviewer who wishes to find fault is always able to do so.  Therefore, the mere fact that a manuscript can be criticized provides insufficient reason to evaluate it negatively.

So, according to their little dramatizations, Eratosthenes, Galileo, Newton, Harvey, Stahl, Michelson and Morley, Einstein, and Borlaug would each have been rejected by social science reviewers, or at least by some social science reviewers.  I won’t get into the specifics of the rejection letters – Einstein is called “insane” (though also a genius – reviewers disagree, you know) and Harvey “fanciful” – but beyond these obviously amusing conclusions are some deeper ideas about peer review and epistemology.

In their analysis section, Trafimow and Rice come up with 9 reasons why manuscripts are frequently rejected:

  • it’s implausible
  • there’s nothing new here
  • there are alternative explanations
  • it’s too complex
  • there’s a problem with methodology (or statistics)
  • it’s incapable of falsification
  • the reasoning is circular
  • I have different questions I ask about applied work
  • I am making value judgments

A few of these relate to the inherent conservatism of science and peer review, which has been well established (and which was brought up here a few months ago).  For example, the plausibility objection reflects the way reviewers are inclined to accept what is already “known” as plausible, and to treat challenges to that received knowledge as implausible, no matter how strong the reasoning behind the challenging interpretation.

A few get at that “trying to find fault” thing I mentioned above.  You can always come up with some “alternative explanation” for a researcher’s results, and you can always suggest some other test or some other measurement a researcher “should have” done.  The trick is to suggest rejection only when you can show that the missing test or alternative explanation really matters, but they suggest that a lot of reviewers don’t do this.

Interestingly, Female Science Professor had a similar post today, about reviewers who claim that things are not new but who do not provide citations to verify that claim.  Trafimow and Rice spend a bit of time themselves on the “nothing new” reason for rejection.  They suggest that there are five levels at which research or knowledge can make a new contribution:

  • new experimental paradigm
  • new finding
  • new hypothesis
  • new theory
  • new unifying principle

They posit that few articles will be “new” in all of these ways, and that reviewers who want to reject an article can focus on the dimension where the research isn’t new, while ignoring what is.

Which relates to the value judgments, or at least to the value judgment they spend the most time on – the idea that social science reviewers value data, empirical data, more than anything else, even at the expense of potentially groundbreaking new theory that might push the discourse in that field forward.  They suggest that a really brilliant theory should be published in advance of the data – that other, subsequent researchers can work on that part.

And that piece is really interesting to me because the central conceit of this article focuses our attention, with hindsight, on rejections of stuff that would fairly routinely be considered genius.  And even the most knee-jerk, die-hard advocate of the peer review process would not make the argument that most of the knowledge reported in peer-reviewed journals is also genius.  So what they’re really getting at here isn’t “does the process work for most stuff” so much as “are most reviewers in this field able to recognize genius when they see it, and are our accepted practices likely to help them or hinder them?”

[Image: More Revision, Djenan (flickr)]

And here’s the thing – I am almost thinking that they think that recognizing genius isn’t up to the reviewers.  I know!  Crazytalk.  But one of the clearest advantages to peer review is that revision based on thoughtful, critical, constructive commentary by experts in the field will, inherently, make a paper better.  That’s an absolute statement but one I’m pretty comfortable making.

What I found striking about Trafimow and Rice’s piece is that over and over again I kept thinking that the problem with the reviewing practices they were identifying was that they led to reviews that weren’t helpful to the authors.  They criticize suggestions that won’t make the paper better, conventions that shouldn’t apply to all research, and the like.  They focus more on bad reviews than good ones, and they don’t really talk explicitly about the value of peer review, but if I had to point at the implicit value of peer review as suggested by this paper, that would be it.

There are two response pieces alongside this article, and the first one picks up this theme.  Raymond Nickerson does spend some time talking about one purpose of reviews being to ensure that published research meets some standard of quality, but he talks more about what works in peer review and what authors want from reviewers – and in this part of his response he talks about reviews that help authors make papers better.  In a small survey he did:

Ninety percent of the respondents expected reviewers to do substantially more than advise an editor regarding a manuscript’s publishability.  A majority (77%) expressed preferences for an editorial decision with detailed substantive feedback regarding problems and suggestions for improvement…

(Nickerson also takes issue with the other argument implied by the paper’s title – that the natural and physical sciences have been so much kinder to their geniuses.  And in my limited knowledge of this area, that is a point well taken.  That inherent conservatism of peer review certainly attaches in other fields – there’s a reason why Einstein’s Theory of Special Relativity is so often put forward as the example of the theory published in advance of the data.  It’s not the only one, but it is not like there are zillions of examples to choose from.)

Nickerson does agree with Trafimow and Rice’s central idea – that just because criticisms exist doesn’t mean new knowledge should be rejected.  M. Lynne Cooper, in the second response piece, also agrees with this premise but spends most of her time talking about the gatekeeper, or quality control, aspects of the peer review process.  And as a result, her argument, at least to me, is less compelling.

She seems too worried that potential reviewers will read Trafimow and Rice and conclude that they should never question methodology, or whether something is new — that because Trafimow and Rice argue these lines of evaluation can be misused, potential reviewers will assume they cannot be properly used.  That seems far-fetched to me, but what do I know?  This isn’t my field.

Cooper focuses on what Trafimow and Rice don’t: what makes a good review.  A good review should:

  • Be evaluative and balanced between positives and negatives
  • Evaluate connections to the literature
  • Be factually accurate and provide examples and citations where criticisms are made
  • Be fair and unbiased
  • Be tactful
  • Treat the author as an equal

But I’m less convinced by Cooper’s suggestions for making this happen.  She rejects the idea of open peer review in two sentences, but argues that letting authors give (still anonymous) reviewers written feedback at the end of the process might cause reviewers to be more careful with their work.  She does call, as does Nickerson, for formal training.  She also suggests that the reviewers’ burden needs to decrease to give them time to do a good job, but other things I have read make me wonder about her suggestion that there be fewer reviewers per paper.

In any event, these seem at best like band-aid solutions for a much bigger problem.  See, what none of these papers do (and it’s not their intent to do this) is talk about the bigger picture of scholarly communication and peer review.  And that’s relevant, particularly when you start looking at these solutions.  I was just at a presentation recently where someone argued that peer review was on its way out – not for any of the usual reasons, but because they were being asked to review more stuff and had time to review less.  Can limiting reviewing gigs to the best reviewers really work; can the burden on those reviewers be lightened enough?

The paper’s framing device includes science that pre-dates peer review, that pre-dates editorial peer review as we know it, that didn’t go through the full peer-review process – which raises the question: do we need editorial peer review to make this kind of knowledge creation happen?  Because the examples they’re putting out there aren’t Normal Science examples.  These are the breaks and the shifts and the genius that the Normal Science process, kind of by definition, has trouble dealing with.

And I’m not saying that editors and reviewers and traditional practices don’t work for genius – that would be ridiculous.  But I’m wondering if the peer-reviewed article is really the only way to get at all of the kinds of knowledge creation, of innovation, that the authors talk about in this article – is this process really the best way for scholars to communicate all five of the levels/kinds of new knowledge outlined above?  I don’t want to lose the idealistic picture of expert, mentor scholars lending their expertise and knowledge to help make others’ contributions stronger.  I don’t want to lose what extended reflection, revision, and collaboration can create.

I am really not sure that all kinds of scholarly communication or scholarly knowledge creation benefit from the iterative, lengthy, revision-based process of peer review.  I guess what I’m saying is that I don’t think problems with peer review by themselves are why genius sometimes gets stifled, and I don’t think fixing peer review will mean genius gets shared.  I don’t think the authors of any of these pieces think that either, but these pieces do raise that question – what else is there?

2 thoughts on “peer-review, what is it good for? (peer-reviewed Monday)”

  1. To build on the “alternative explanation”-type reasons for rejection: in my experience with peer review, defining the relevant peers for a given article is one reason why arguably innovative scholarship gets rejected. An editor asks someone who works in the same general area as the paper – say the research is about some region of the world – but the author and the reviewer approach that area from radically different perspectives: say the reviewer really is a “specialist” in the region, while the author’s interests are more topical and the region in question is just a useful case. The reviewer rejects or heavily criticizes the article for missing this or that piece of the literature on the region, even though said literature has no relevance to how the author of the paper is looking at that place. The problem here is that the paper is not being looked at on its own terms, but has become a pawn in a professional chess game over what it means to do research on a particular subject, with the reviewer, essentially, asserting their primacy over the definition of “their” field.

    You’d like to think that editors would referee these kinds of situations, but, also in my experience, you can’t count on that. I don’t even know how many editors bother to read the papers themselves in many cases.

    As to your concluding thoughts, I agree that peer-reviewed journals are not the only places where meaningful scholarly dialogue and inquiry happens. One of Doreen Massey’s most influential papers, republished, cited countless times, given its own chapter and section in textbooks, was originally published in a political magazine, not a scientific journal.

  2. I’m wondering how to connect this notion of “peer review” to my spring term class in “Critical Reviewing” – books, films, food, art, etc. Of course the goals are somewhat different – with peer review being a preliminary to publication – a guide and judge both. For the critics and reviewers in my class, their work is post-publication (extending that word to mean the cooking of a meal, installation of an art exhibit, etc) but the same expectations of knowledge and generosity along with experienced criteria and service to readers (of the reviews) are involved, I think.
