Teaching undergraduates about peer review – how and why, and did I mention how?

Lately I’ve noticed a number of different conversations I’ve been having coalescing around the question of evaluation – how can students evaluate the information they find? Some of the conversations have been versions of the standard “information on the web can be bad” discussion and aren’t very interesting, but more of them have been about the much more interesting and much trickier question of how students can evaluate scholarly information they find on the web when they are neither content experts (like their classroom teachers are) nor format/scholarly communication experts (like librarians are).

Which is why the title of this post jumped out and hit me over the head when I saw it today: Can you tell a good article from a bad based on the abstract and title alone?

(the post is a couple of months old and had quite a bit of discussion in the science blogs, but I haven’t seen much about it in library discussions)

So – what do you think? Can you? I sure can. And can’t. I mean, it depends, right? But when students are looking at something like this — that’s kind of what we’re asking them to do.

[Image: a typical result list in EBSCO]

And the thing about the story linked above is that it also shows that the default we sometimes turn to – peer review – isn’t good enough. A lot of the comments on that post, and on related posts by P.Z. Myers and at the Nature blogs, focus on the suggestion that this paper was written from a creationist/intelligent design perspective and on the implications of this for peer review —

  • The potential that an author can choose/target politically friendly reviewers for a paper
  • The suggestion that this paper’s publication might allow an affirmative answer to the question “can you find one peer reviewed article supporting intelligent design?” – and what that might mean for science.

The article was retracted by the journal – not because of its politics, but because of plagiarism. That is also something one would hope the peer review process would catch; it seems like the least we should expect.

So on the one hand, you have the science blogs – you have someone reading the title and abstract for this article, seeing some red flags, using the dynamic web to point them out. This generates discussion, which spreads to other dynamic sites and eventually results in the article in question being pulled down. On the other hand, you have the peer reviewers, working in isolation, who didn’t seem to catch any of the red flags. On one level, it reads like a fairly straightforward Web 2.0 Makes Good story.

But on another level, what does this mean for students, especially undergraduate students? Here’s the sentence that raised the red flags for most of these scholars:

These data are presented with other novel proteomics evidence to disprove the endosymbiotic hypothesis of mitochondrial evolution that is replaced in this work by a more realistic alternative.

I can’t say this raises the same questions for me. “Novel… evidence” might be a little odd, and “a more realistic alternative” is an interesting turn of phrase. But the thing is, you have to know something about the “endosymbiotic hypothesis” to be able to contextualize, or criticize, the idea expressed here. How many students are going to have the content knowledge to do either of those things? And the other thing is – if this had become the one peer reviewed article supporting intelligent design, there’s a really good chance that even my beginning composition students would come across it.

I don’t have any really good answers for how to help students make sense of this – except I don’t think librarians and composition instructors can do this alone. And I don’t think we can make any decent stab at figuring out an answer to this question without engaging with the question of what the participatory web means for scholarship – and engaging with the related question of what the limitations of traditional peer review are as well.

And this is where the “wisdom of crowds” vs. “cult of the amateur” story that gets played out so much in the popular media really fails us. Because if this story shows anything, it shows that we still need experts to help us evaluate, contextualize and make sense of information. And at the same time it shows that trusting those experts blindly doesn’t work out so well. Adding the transparency of the participatory web to the opaque processes of traditional scholarly publication – I think part of the answer is in that grey area somewhere.

A long post at the Bench Marks blog examines the question of Why Web 2.0 is failing in Biology. It would make this post crazy long to engage with everything there today, but I do want to pull out a bit from the end. After talking about how life scientists aren’t reading or contributing content to blogs, he closes by looking at who is reading science blogs and what that might mean.

Two of the groups he pulls out are really relevant here, I think — science journalists and non-scientists. If blogs are a good way to get scientific ideas out to a more general public — people who aren’t reading the scholarly journals or going to the conferences — then they’re also a way for that general public to get access to the kind of experts who can help them make sense of the research literature. More on this later, maybe.

Full disclosure – some of this thinking is to prepare for this presentation.
