Kicking off Peer Reviewed Mondays

I am apparently not the only theory geek out there.  But I realized that I haven’t been doing a great job of putting my money where my mouth is when it comes to the value of peer-reviewed articles.  So my Monday evenings this term look like they are, for a variety of reasons, going to lend themselves to this experiment – Peer Reviewed Mondays.

Which means – Mondays I’ll try to at least point to a peer-reviewed article that I think is worth talking about.  I’ll also try to explain why.  That seems do-able.

So I’ve had the idea for this for a while, which of course means that I haven’t come across anything peer reviewed that really inspired me to write.  But today, I was inspired by a presentation I heard on campus to go poking around into journals I haven’t looked at before and I came across this article (PDF) from 2004, which takes a different kind of look at issues of evaluation when it comes to science and research and peer reviewed papers.  The meta of that delights me, and I think this paper is a useful addition to the stuff in my brain.

The article is called The Rhetorical Situation of the Scientific Paper and the “Appearance” of Objectivity.

(Useless aside – when the presenter today was being introduced I was unreasonably amused because to fully express the meaning of each journal article title, the professor doing the introducing had to read out all of the punctuation – like the quotation marks above.  That doesn’t happen in library papers enough.)

Allen’s method is simple.  He articulates ideas and theories from the rhetoric/composition discourse, and then analyzes a single scholarly article –

in this essay, I analyze Renske Wassenberg, Jeffrey E. Max, Scott D. Lindgren, and Amy Schatz’s article, “Sustained Attention in Children and Adolescents after Traumatic Brain Injury: Relation to Severity of Injury, Adaptive Functioning, ADHD and Social Background” (herein referred to as “Sustained Attention in Children and Adolescents”), recently published in Brain Injury, to illustrate that the writer of an  experimental report in effect creates an exigence and then addresses it through rhetorical strategies that contribute to the appearance of objectivity.

Allen’s initial discussions – that scientific rhetoric is rhetoric, that scholarly objectivity has limits, and that specific rhetorical decisions (like the use, or what might be considered overuse in any other context, of the passive voice) are employed to enhance the impression of objectivity – are interesting but not earth-shattering.  Where I found the nuggets of real interest was in the concluding sections.  Allen draws upon John Swales, who examined closely how scientists establish the idea that their research question is important.

Swales called the strategy Create a Research Space (CARS)  and identified three main rhetorical moves that most scholars make.  As Allen describes them:

  • Establish centrality within the research.
  • Establish a niche within the research.
  • Occupy the niche.

So what’s interesting about this to me is Allen’s conclusion that scientific authors seek to establish their study’s relevance through the implication that they share the same central assumptions and information base.

The reason this is interesting is that it dovetails neatly with Thomas Kuhn’s idea of normal science – the idea that a shared set of first principles is central to the idea of scholarly discourse, particularly a discourse that advances knowledge.   Where that intersects with the idea of peer review is in the idea of what makes it through peer review – the idea that an article or a piece of research needs to be more than just good or interesting research, it also needs to be a good example of what research is in a particular discipline, or discourse community.

Allen compares the relatively ordinary piece of research that he is analyzing with a far more influential piece and finds that some of the rhetorical strategies used to establish the science-ness and the objectivity of the former are present in the latter, but far less so.  This – especially the idea that authors proposing truly ground-breaking research might be less likely to use the passive, “objective” voice and more likely to refer to themselves as active agents – is simply fascinating.

Allen points out that the very language we use to talk about scholarly research – creating meaning, knowledge, verifying truth claims – implies that the situation in which scientists communicate their findings is rhetorical.  He also points out that it is not just scientists – but the rest of us who rely in many ways on the meaning and knowledge scientists create – who need to understand these rhetorical practices.  His last sentence, in fact, is as good a justification for information literacy in higher ed as I’ve seen:

Certainly, scientists and researchers should be aware of embedded rhetorical strategies. But given the profound and pervasive influence of science in Western culture, we should all––scientist or not––be attentive to how our knowledge is shaped.

And now on to that delicious meta I love so much.

I love the idea of using this article to talk about what scholarship really is with undergraduates, particularly undergrads working on understanding academic writing.  But here’s the thing – the author of this article is himself an undergraduate writer.


The article appears in Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric – a peer-reviewed journal, with an editorial board made up of faculty in rhetoric and composition.  The content of the journal is all by undergraduate researchers, and the peer reviewers themselves are undergraduates who have also published in the journal.

The journal reflects some of the practices of open peer review that fascinate me – especially as information literacy resources for students as they learn the practice of scholarly knowledge creation.  The review process itself is not made public, but each issue of the journal has a Comments and Responses section where student writers write 2-5 page responses to papers that have been published in the journal.

Which gets to the last piece of meta.  There is one overwhelming benefit of the peer-review process that is rarely discussed by us librarians in information literacy classes when we have to talk to students about finding peer-reviewed articles – and that is rarely discussed by the faculty who require their students to find peer-reviewed articles.  It’s the one thing that is pretty much unanimously agreed upon: the process of peer review makes the paper better.  Maybe not all better, maybe not better in the same way as it would be if it were reviewed by other people.  But better than it would have been had it not gone through that process.  This paper is beautifully written – clear and accessible and smart.

I love the idea of digging into the idea of peer review, using student engagement with the peer review process as an entry point — but to be able to do that with a paper that itself should spark new ideas about the value of scholarship, and how to evaluate scholarship – that looks kind of like a gift.

Matthew Allen (2004). “The Rhetorical Situation of the Scientific Paper and the ‘Appearance’ of Objectivity.” Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric, 2.

“Peer reviewed” might not be code for awesome but hey! it’s not code for useless either

So I’ve spent a lot of time in the last year talking about how we need to understand what peer review really is. Most of that time it leads to posts about the limitations of the system. Today, not so much.

I keep going back and forth about whether to write this this morning because while I’ve been thinking about it since reading this post at Information Wants to be Free yesterday, it really isn’t just about that post.

And it really isn’t even about the one snippet of the post that got me worked up. Seriously, the snippet isn’t even about the main point of the post, and it isn’t expressing any sentiment I haven’t heard a million times before, starting in library school and again, and again and again since.

And it feels like piling on to just pull out one throwaway line and write a whole post about it, especially by someone who has been dealing with kidney stones, who I have never met in person, on whose blog I do not regularly comment, and who may have not even meant this just as it sounds. It’s like “nice to meet you, way to totally miss the point.” I did get the point of the post, and I realize this snippet isn’t it. But it’s a snippet from an academic library blog and it is expressing a sentiment that I have heard a million times, and I think it’s a problematic sentiment, especially from academic librarians. And my blog is also a blog and I need to have something to link to to respond to so here we go.

Here’s the snippet:

I don’t write for peer reviewed journals since I’m not tenure-track and I actually want my work to be read. So this doesn’t make me particularly annoyed. To me, it’s just another reminder that peer-reviewed journals are completely irrelevant to me. So many peer-reviewed journals publish absolutely useless studies that were primarily done for the sake of getting the authors tenure. But at least I felt they had some sort of quality standards.

Do you see what I mean? Maybe not. Here’s what I mean – how can we as academic librarians pretend to have any relevance at all when it comes to helping students find, use and create their own scholarship — to helping students be successful in college — when so many of us have absolutely no respect for what it is that scholars actually do?

Now, the first time I wrote that sentence, I wrote “for the scholarship in our own discipline.” I get that she’s probably talking here about the library literature, not articles in Science or Nature or The William and Mary Quarterly or Physical Review Letters or The Journal of Modern Literature or The American Journal of Sociology, though there’s nothing there to really indicate that distinction. But it really doesn’t matter – I do think this goes deeper than saying the library literature blows.

I mean, there is an issue with the do-as-I-say-not-as-I-do thing that must be going on when academic librarians who disdain what is in peer-reviewed journals in library science tell students that they should care about the scholarly literature in their own disciplines. But most of the time, even when it is articulated as the library literature isn’t timely enough / cutting edge enough / rigorous enough to meet my needs – I don’t think that’s the whole picture. The perception that there is an academic/real-world gulf is so ingrained in our culture that it’s okay to state it as kind of a truism. For an example of this kind of thing, look at the comments on this piece from the Librarian in Black last summer.

And that’s the deeper issue that I think is there. I think there’s a perception that academic studies not directly and deliberately intended to inform practice can have no relevance for practice. That knowledge for knowledge’s sake has no value or relevance to the real world and that in a field like ours that is dominated by practitioners that means the academic research going on is hopelessly, inherently useless to the vast majority of the field. The work being done in other fields might be valuable to those fields, but only because those fields are academic and not about practitioners — it’s okay for them to be useless to practice, it’s okay for them to be academic and theoretical. Knowledge for knowledge’s sake is useful in those fields because those whole fields are somewhere other than the real world.

Which could be read as librarian self-deprecation or self-hatred – “we’re just not real academics – they’re good and we’re bad.” But I think this cuts deeper than saying the library literature could be better. I tried to parse this snippet that way, and I think the other people being quoted in the post are thinking of the issue that way – but I don’t think this statement can be read to mean only that the library literature needs to be better. There’s no way that the peer-review model can be the go-to place for practitioners who want cutting-edge answers to current problems, who want what they get from blogs and other dynamic information sources – that’s not a matter of better or worse.

The truth of the matter is even if academic research in library science were as cutting-edge, current, and rigorous as any academic research could ever be – a lot of it would still not be intended to inform practice.

When I hear people talking about how useless or stultifying or hard to understand or badly written they find the peer-reviewed literature – there’s a pride there in being a practitioner instead of an academic. There’s a sense that we are doing the real work in a way the academics never can. There’s nothing wrong with being proud of practice, don’t get me wrong – I am absolutely not saying that. It’s the “instead of an academic” piece that I have issues with because theory/practice isn’t a zero-sum thing. There’s no need to do either/or. There is something wrong with cutting yourself off from something that can and does inform practice in ways that nothing else can simply because it wasn’t created specifically to do so.

And there’s especially something wrong with academic librarians cutting themselves off from that because a huge part of our job, from collection building to information literacy, is all about connecting students to scholarship. There’s no way to compartmentalize that to the library literature – there’s no way to say “well, I think the scholarly study in librarianship is useless but in sociology, or social work, it’s totally awesome.”

Because here’s another thing – when I hear people talking about how useless or stultifying or hard to understand or badly written they find the peer-reviewed literature, they sound just like year after year of students I’ve heard complaining about their classroom reading. Classroom reading not found by a keyword search in Library Literature, but carefully selected and assigned by experts in the field who are saying “this, this is an important thing you need to understand to understand what knowledge is in our discipline.”

Yes, a lot of what is in the peer-reviewed literature, in all fields, is not well written. A lot of it is not well researched. A lot of it is published only because it needed to be for the author’s tenure hit. This isn’t just true for us – it’s true across the board. It might be more true for some fields and less true for others but it’s true on some level for all of them. And not recognizing that it is not ALL like that – that sometimes the language is hard because the concepts are hard, that sometimes you have to read a piece three, four or five times not because it’s badly written but because it’s talking about really complex things that take three, four or five readings to understand – means closing yourself off from a type of knowledge and a way of understanding that can absolutely inform practice. Not understanding that will keep a student from being successful in college. More than that, I think it hurts the practitioner as well.

Most of our students are going to be practitioners, not academics. We can’t just assume that they will magically understand the value of knowledge for knowledge’s sake because they start taking 400 level classes. It takes a particular skill set to apply theory to practice – it takes practice to apply theory to practice. Our students don’t come to us knowing how to do those things. They need help understanding not just how to find scholarly sources – but how to read and use them. One of our writing faculty was telling me the other day that the students in her class, when they are required to find “speaker sources” – sources that take a stand on issues – almost never use academic sources even though they are required to cite them somewhere in the paper. They use the academic sources as background sources instead of speaker sources. See, the point is that they don’t have the skills or the knowledge yet to see the academic sources as speakers. They can easily identify a policy agenda, but they don’t know yet how to identify the scholarly argument, agenda or point of view. Just like we can easily identify a practical problem that needs solving, but we think that academic problems are pointless.

Our students will be better at what they do – whether that is working, voting, or heck, even dieting, if they have the skills to be informed by what the research says, what the science says – even though that research will almost never have been created simply to inform them. But I don’t know that they can learn that skill set or gain that understanding from librarians who don’t have it, or at least who don’t use or practice it, themselves.

The peer-reviewed literature is what it is. It can be a whole lot better – but that doesn’t mean more obviously and immediately practical. As someone who spends an awful lot of time going on and on and on and on about the problems with the library literature, I still have to say if you can’t find any research in that literature to inform your practice, you aren’t trying very hard.

Will you find stuff on “how can I troubleshoot this problem I’m having today?” Probably not. Can you find stuff on “how can I deal with this issue in a really cutting-edge and awesomely new way?” Probably definitely not. Can you find stuff that gets you thinking about how to frame the problem in a new way, how to understand potential solutions in a new way, how to understand the root of the problem in a new way? I would certainly hope so.

See, here’s my last thing – sometimes the questions that scholars are interested in ARE different than the questions that arise out of daily practice. Sometimes the problems that they are passionate about solving are not the same problems that keep practitioners up at night. But the questions they ask and the answers they come up with are still valuable to practitioners if those practitioners are willing to accept the research for what it is instead of focusing entirely on what it is not.

There’s going to be a little feature in an OSU publication about undergraduate information literacy instruction at my library and I was looking at the most recent draft just before I went to read my feeds. The author came to watch one of my instruction sessions to get a feel for what that kind of teaching was like – and she told me that she saw my interactions with the students more as interactions between peers than traditional teacher/student. I thought about that and realized that what she was seeing was that, to me, the purpose of most undergraduate instruction — across the disciplines but especially in the library — is to bring these new college students into an existing community of scholars, and to give them the skills, concepts, data and knowledge that will let them find their own place within that community.

To do that, we don’t all need to be scholars ourselves in the same way – we are practitioners, and for most of us that is one of the wonderful things about librarianship. But we need to respect, value and celebrate those who are and what they do.

oh no – is this serious enough?

This morning, on my drive in to work I was thinking about ScienceBlogs (tagline – “the world’s largest conversation about science”) and in particular the “blogging on peer-reviewed research icon” you see on those posts.

I really like the ScienceBlogs project – the whole idea is to get experts to comment on research, but in a way that’s accessible to the general public. As a teacher and as a non-scientist myself, I love this project. Really, this might be the most important way the participatory web connects to scholarship – not in helping scholars communicate with each other, but in helping scholars communicate with the rest of us.

If you want to check it out, I’d recommend Cognitive Daily – cognitive science, just about every post is blogging on peer-reviewed research, and there’s lots and lots of connections to teaching and learning.

Connected to that is this broader effort – ResearchBlogging dot org — trying to make it easy for readers to find serious, thoughtful posts about peer-reviewed research in a whole variety of disciplines. They’re reorganizing right now, and on a kind of hiatus. When they come back, bloggers will be able to register their blogs that (at least sometimes) analyze peer-reviewed research. And library science is one of the subtopics (under the main topic Research/Scholarship) bloggers can claim. They mean to aggregate such posts, to provide a registry of blogs dealing with research-related topics, and to provide an icon bloggers can use to mark their analytical, research-focused posts.

And that’s what I was thinking about this morning – we should be using that icon when we talk about peer-reviewed research from our field. At least I should be using that icon, right? For posts like this one, and this one over at ⌘f.

But then I started reading the conversations over there and I started to get nervous! Is the peer-reviewed research I write about peer-reviewed enough? I don’t know. More important, though, do I provide the kind of commentary that counts as thoughtful commentary? I don’t mean this as a call for people to tell me I’m thoughtful – it’s more a rumination on what I am thinking about when I write about research.

I usually say something about the research methods, but not always. I rarely contextualize the research itself within the larger discipline. I raise questions that I have about the method or the conclusions, but I don’t usually go into my analysis thinking “I am going to comment on whether or not I think this is good research.” And when I read the science bloggers – I want them to do these things. I want them to help me evaluate the research and contextualize it.

But I don’t do that myself. More often, I want to share how the research informed or sparked my thinking about the work we all do – the connecting of the theory and the research to my practice as a librarian. I’m influenced a lot by Donald Schon and reflective practice here, and I’m thinking I might write about that sometime soon. But not now. Now I’ll just say that I think there’s value there. Deeply oversimplifying – apologies to Schon – value in the idea that theory is useful when it’s useful, when it’s connected to knowledge generated by lived experience. There’s a lot of people out there who don’t have time or inclination to reflect on research and what it means for practice, but who find it useful to read what others have to say.

So I think that’s kind of interesting, and worth thinking about what the theory we generate, the research we do, means for our discipline, which, let’s face it, is not the same as biology or sociology or physics.  I’ll be registering when the site comes back up, and I’ll be using the icon.  I hope others do too.  I’m interested in the idea of what peer-reviewed research means to our field, and this is one way to think about that.

pointing out those giants, there with the shoulders

So back in April, gg at Skulls in the Stars challenged science bloggers across the disciplines to read and research some classic article in their discipline, and then write a blog post about it.  The results are in, and they’re awesome.  Not just fascinating – this is a potential time suck (with none of the guilt I feel wasting time with old sports clips on YouTube – I mean, it’s reading about science.  Important science!) – but also a really intriguing way to think about introducing a lot of overlapping ideas about scholarship to students.

One – we all know that context is one of the hardest things to figure out when you’re taking your first steps into understanding a new topic or discipline. Which things to read, what do they mean, why were they important, why are they still important  – answers to these questions aren’t immediately apparent to an outsider and scholarship written for other experts takes a lot of the keys to unlocking this discourse for granted.  Each of these posts lifts some disciplinary curtain aside, telling us what to read and why – in language written not for experts but for smart, motivated people who don’t already have that contextual knowledge.

And by showing the significance of a work in a discourse, these bloggers also (in both text and subtext) show us something about what discourse is and how it works in science or scholarship or research.  My hands-down most favorite entry in this series is from the person who issued the initial challenge – the Gallery of Failed Atomic Models – and this entry really gets at what I’m talking about here.  From gg:

It is often said that history is “written by the victors”. While this statement is usually referring to the winners of a military or political conflict, a similar effect occurs in the history of science. Physics textbooks, for instance, often describe the development of a theory in a highly abbreviated manner, omitting many of the false starts and wrong turns that were taken before the correct answer was found. While this is perfectly understandable in a textbook (it is rather inefficient to teach students all of the wrong answers before teaching them the right answer), it can lead to an inaccurate and somewhat sterile view of how science actually works.

And that might be my favorite piece of this project – the view of how science actually works that you get from these articles is anything but sterile.  They’re planning a second go-round of this project, which will be hosted here in about a month.  I’m marking my calendar.  Well not literally.  But I’m glad this will be an ongoing thing.

There’s another version of the first set of posts up at A Blog Around the Clock – organized chronologically, with some great excerpts highlighting what makes each post good.

Thanks to Cognitive Daily for the pointer.

Why we should read it before we cite it — no, really!

Last week, Female Science Professor wrote a lovely pair of posts about scholars and scholarship, what it feels like when your work has an impact on someone and what it feels like to meet the people who have influenced you in that particular undefinable way where it’s hard to even express what they’ve meant to you. I shared one, saved the other and generally felt very good about being a small part of this world where rock star crushes on ideas and the people who share them are understood and embraced.

Way to ruin everything, Inside Higher Ed.

Okay, not really. But seriously, it’s a lot harder to feel like a rock star because someone has read and used your work if, as Malcolm Wright and J. Scott Armstrong suggest, they probably didn’t read it and if they did, they probably read it wrong.

That might be a little bit strong, but not by much. So what does it mean when a published, peer-reviewed article in a real-life journal kicks off its final, concluding paragraph with this sentence: “Authors should read the papers they cite.”


This isn’t a library tutorial aimed at fifth-graders writing their first research paper, after all. This is a paper talking about what professional scholars, people responsible for the continued development of knowledge in disciplines, should do. It can’t mean anything good. Here’s the original article:

Article at Interfaces – requires subscription

Article at Dr. Armstrong’s faculty page – does NOT require a subscription – (opens in PDF)

Nutshell – Dr. Armstrong wrote one of the more impact-heavy articles in his discipline, and the only article that analyzes and explains how to correct for nonresponse bias in mail surveys (that’s bias caused by people who do not respond at all to the survey). By analyzing (1) how often research based on mail surveys includes a citation to this article, and (2) how often the later researchers seem to interpret and apply the original article correctly, the authors conclude that many, many researchers are not reading all of the relevant literature. More disturbingly, many, many researchers aren’t even reading all of the articles they themselves cite.
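Since “correcting for nonresponse bias” is abstract, here’s a toy sketch of one common correction idea – extrapolation, where late respondents are treated as a stand-in for the people who never answered at all. This is my own illustration, with invented numbers and a hypothetical function, not the procedure from the article itself:

```python
# Toy sketch of the extrapolation idea: late respondents are assumed
# to resemble nonrespondents, so their mean fills in for the people
# who never answered. All numbers here are invented.

def extrapolated_estimate(early, late, response_rate):
    """Estimate the full-population mean from early and late respondents.

    Assumes late respondents resemble nonrespondents, so their mean
    serves as a proxy for the missing fraction of the population.
    """
    late_mean = sum(late) / len(late)
    # Everyone who actually responded, pooled together.
    respondent_mean = (sum(early) + sum(late)) / (len(early) + len(late))
    # Weight respondents by the response rate; fill in the rest
    # with the late-respondent proxy for nonrespondents.
    return response_rate * respondent_mean + (1 - response_rate) * late_mean

# Invented example: early repliers report higher satisfaction than
# late ones, so ignoring nonresponse would overstate the true mean.
early = [8, 9, 7, 8]   # satisfaction scores, first mailing wave
late = [5, 6, 5]       # scores from the follow-up wave
print(round(extrapolated_estimate(early, late, 0.4), 2))  # → 5.94
```

The point of the sketch is just that the corrected estimate (5.94) sits well below the naive respondent mean (about 6.86), because the late-wave proxy pulls it toward what the silent majority probably would have said.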

Now, on one level this isn’t a shocker – anyone who has read moderately deeply in any body of literature has probably looked at at least one bloated literature review and said “hey – this person probably didn’t really read all of these books and articles.” This article suggests that it’s more complex than just lit-review padding, that scholarly authors also mis-cite and mis-use the resources they use to support the methods they use and the conclusions they draw.

Working on the assumption that if your research uses a mail survey, you should at least be considering the possibility of nonresponse bias, they found that:

…far less than one in a thousand mail surveys consider evidence-based findings related to nonresponse bias. This has occurred even though the paper was published in 1977 and has been available in full text on the Internet for many years.

Working on the further assumption that someone who makes a claim about nonresponse bias, and who reads, understands and cites an article that outlines a particular method for correcting nonresponse bias to support that claim, will follow the method outlined in the article they cited, the authors conclude that many authors are either not reading or are not understanding the articles they cite:

The net result is that whereas evidence-based procedures for dealing with nonresponse bias have been available since 1977, they are properly applied only about once every 50 times that they are mentioned, and they are mentioned in only about one out of every 80 academic mail surveys.

Most of the research that seriously digs into how well researchers use the sources they cite has come out of the sciences, particularly the medical sciences. This is one of the first articles I’ve seen dealing with the social sciences, and I think it’s worth reading more closely because this very rough and brief summary doesn’t really do justice to the issues it raises. But right now, I want to turn to the authors’ conclusions because I think they get at some of the things we’ve been talking about around here about how new technologies and the read/write web might have an impact on scholarship.

The first two outline author responsibilities:

  • First – read the sources you cite. I think we can take that as a given – a bare-minimum practice not a best practice.
  • Second, “authors should use the verification of citations procedure.” Here they’re calling for authors to contact all of the researchers whose work they want to cite to make sure that they’re citing it correctly. I’m going to come back to this one.

The second two put some of the burden on the journals:

  • Journals should require authors to attest that they have in fact read the work they cite and that they have performed due diligence to make sure their citations are correct. That seems a sad, largely symbolic, but not unreasonable precaution.
  • Finally, journals should provide easily accessible webspaces for other people to post additional work and additional research that is relevant to research that has been published in the journal. Going to come back to this one too because I think it’s actually related to the one above.

Basically – both of these recommendations suggest that more communication and more transparency would be better for knowledge creation. And what is the read/write web about if not communication and transparency, networking and openness?

Some of the commenters on the IHE article expressed, shall we say, polite skepticism that an author should be obligated to contact every person they cite before citing them. These concerns were also raised by one of the formal comment pieces attached to the Interfaces article. And I have to say I agree with these concerns for a few reasons. Armstrong made the claim more than once that he does this as an author, with good results, and that the process is not too onerous. But that doesn’t really address the question of how onerous it would be for a prolific or influential author to have to field all of those requests.

And I’ll also admit to having some “death of the author” reactions to this. What if I contact Author A and say I’m planning to use your work in this way, and they say “well, I didn’t intend it to be used in that way, so you shouldn’t”? Does that really mean I shouldn’t? Really? It’s hard to see this kind of thing not devolving quickly into something that actually hinders the development of new knowledge, because it hinders new researchers’ ability to push at and find new connections in the work that has come before.

But not to throw the baby out with this bathwater – the idea that more and better and faster communication between scholars (more and better and faster than can be provided within journals and the citation-as-communication tradition) makes for better scholarly conversations and better scholarship – that’s something I think we need to hold on to. Anderson points out how talking to the researcher who really knows the area described in the thing you’re citing can point you to other, less cited but more useful resources – how they can expand your knowledge of the field you’re talking about:

We checked with Franke to verify that we cited his work correctly. He directed us to a broader literature, and noted that Franke (1980) provided a longer and more technically sophisticated criticism; this later paper has been cited in the ISI Citation Index just nine times as of August 2006.

This is an area where the transparency, speed and networking aspects of the emerging web might have a real impact on the quality of scholarship even if there are no material changes in the practice of producing journal articles. I might not be sure about the idea of making this communication a part of citation verification, but it should be a part of knowledge creation. And it’s tied as well to the final recommendation – that journals should provide webspaces for some, not all but some, of this communication to happen.

The types of conversations between similarly interested scholars that Anderson is describing are nothing new – but the emerging web offers some opportunities for those conversations to move off the backchannel. Or maybe it’s still a backchannel, but a backchannel made visible, which is interesting. Whether the journal hosts its own backchannel where errors, additions, omissions and new ideas can be posted, or whether that backchannel exists on blogs, in online knowledge communities, or in networking spaces matters less than that it can exist at all. We certainly have the technology.

And the journal Interfaces itself, I think, provides a suggestion as to why this kind of additional discourse and conversation is valuable. You may have noticed that what looks like a fifteen-page article is really an eight-page article with six pages of response pieces, followed by an authors’ response. The responses challenge parts of the original article and enrich other parts with additional information and examples. They illustrate the collaborative nature of knowledge production in the disciplines in a way that citations alone cannot. I couldn’t find anything on the journal’s website about this practice – whether it’s a regular thing, how responses are solicited, or more. These responses are a spot of openness in a fairly closed publication.

And that points to the last thing to say here, because this is far too long already – I don’t think we have to change everything to fix the problems raised here, and I don’t think that if we did change everything it would fix all of the problems raised here. There’s that scene in Bull Durham where Ebby Calvin gets his guitar taken away because he won’t get the lyrics right. And that’s the connection between FemaleScienceProfessor and Anderson and Wight – who can feel like a rock star when people are singing your songs but getting them wrong?

There will always be Ebby Calvins out there, inside and outside of academia – for them, women will always be woolly because of the stress. Injecting just some openness, making some communication visible, won’t stop Ebby Calvin, but it might keep the next person from replicating his mistakes. And that’s a good thing.

LOTW follow-up – from people who weren’t there!

Kate and I are still buzzing from the great conversation we had with the people who came to our session at LOEX of the West. It’s always an amazing and kind of surreal experience when you find out that other people are excited by the same ideas you are.

And it seems that other people really are. Almost the second we stopped talking, we started finding other people who were. All over the web.

At ACRLog, Barbara Fister brings up the issue of promotion and tenure, and how many committees find it difficult to evaluate the significance of publications that don’t fit into the traditional scholarly formats — particularly when they are trying to evaluate the impact of scholars from other disciplines. This connects strongly to what we said about distributing professional rewards; we only just got started on the questions of expertise and of evaluating work outside your discipline at the end of our session, so it’s good to see and think more about them.

Dorothea Salo talks about the differences between informal writing on the participatory web (like blog posts) and scholarly journal writing. She brings up one benefit to scholarly journals that we only hinted at – the way that the lengthy give and take between author and editor in the traditional publication process can make an individual article better. Not bring it up to some objective standard of quality, but make it better than it was. She also talks about something we did spend a lot of time talking about – the archive of knowledge, or the scholarly record. But she goes a lot further than we did talking about the role academic libraries play in that process.

Then today, I saw Tenured Radical’s discussion of the Social Science Research Network. She’s asking why historians aren’t participating in this project, and looking at some of the implications of that lack of participation. The SSRN is a digital archive that has as its goal the rapid dissemination of research in the social sciences. It includes an abstract database (of scholarly working papers and forthcoming papers) and an e-library of downloadable papers. These resources are available to registered members for free; there are also entry points into some proprietary database holdings, for a fee.

Tenured Radical highlights one of the reasons we think it’s so important that we are all having these conversations – not to replace traditional forms of publication, but to make them accessible. Not to encourage scholars to write for the public instead of for each other, but to leverage technological change in ways that can keep that scholarly discourse available to those who want to find it —

An insistence that the only good work has been heavily vetted through our current refereeing practices may be a mistake, much as soliciting the criticisms of others does contribute to producing good work (although it doesn’t always, I’m afraid, as cases where flawed research has slipped through to publication or to a prize demonstrates.) In its current form, it may be a fetish that is doing us more harm than good, and may be something that our professional associations need to review to take advantage of an atmosphere of intellectual vigor offered by electronic and other forms of mass publication.

LOEX of the West presentation, 2008

Peer Review 2.0: Tomorrow’s Scholarship for Today’s Students

LOEX of the West, Las Vegas

Anne-Marie Deitering & Kate Gronemyer


Five Web 2.0 themes — from the ACRL Instruction Section’s Current Issues Discussion Forum, Research Instruction in a Web 2.0 World (Annual, 2006).


{Edit: These didn’t make it into the presentation, but they are examples of some discussions on the web over the last year that started our thinking on this topic.}

danah boyd – open-access is the future: boycott locked-down academic journals

danah boyd – Viewing American Class Divisions through Facebook and MySpace

danah boyd – editing a special issue of JCMC


Current Anthropology

Expressive Processing: An Experiment in Blog-Based Peer Review – Noah Wardrip-Fruin on Grand Text Auto

Cognitive Daily blog

ScienceBlogs – The World’s Largest Conversation about Science

BPR3: Bloggers for Peer-Reviewed Research Reporting — Icons


UsefulChem Wiki

Radiology Wiki

Open Notebook Science Using Blogs and Wikis (Jean-Claude Bradley, at Nature Precedings)

Rrresearch Blog

“The Academic Manuscript” — Wicked Anomie: Sociology Run Amok

Welcome to Nature Precedings


Barbara Fister points to this article in the Chronicle:  Certifying Online Research (Gary Olson), about the challenges of evaluating online publications.  See also Barbara’s post at ACRLog: Peer (to Peer) Review.

All about the intersection of scholarship and peer review around here

all the time.  That’s because Kate and I are deep in preparation for our LOEX of the West talk and it’s hard to think about anything else.  A few things that have come out of my work in the last few days:

This video at Kairos – This is Scholarship

This video cuts across a lot of the things I’ve been thinking about lately – the connections between new scholarly forms and traditional reward systems based on peer review.

The context for the video work done (and released in 2006) by the MLA Task Force on Evaluating Scholarship for Tenure and Promotion

Some related reading —  Kathleen Fitzpatrick, at The Valve, argues that the conversation about the future of scholarship can’t end with journals but must be extended to a conversation about books, in her essay – On the Future of Academic Publishing, Peer Review and Tenure Requirements.

At Inside Higher Ed, Catharine Stimpson provides a dean’s perspective on the MLA report – significant because the MLA task force left the specifics of how their recommendations should be implemented up to individual institutions.

And Alex Reid’s post today talking about last week’s Computers in Writing conference – Experimentation and Expertise in Web-based Scholarship looks at some of these questions through the lens of the digital journal Kairos, which published the video that started this off.

Teaching undergraduates about peer review – how and why, and did I mention how?

Lately I’ve noticed a number of different conversations I’ve been having coalescing around the question of evaluation – how can students evaluate the information they find? Some of the conversations have been versions of your normal standard “information on the web can be bad” and aren’t very interesting, but more of them have been about the much more interesting and much trickier question of how students evaluate scholarly information they find on the web when they are neither content experts (like their classroom teachers are) nor format/scholarly communication experts (like librarians are).

Which is why the title of this post jumped out and hit me over the head when I saw it today: Can you tell a good article from a bad based on the abstract and title alone?

(the post is a couple of months old and had quite a bit of discussion in the science blogs, but I haven’t seen much about it in library discussions)

So – what do you think? Can you? I sure can. And can’t. I mean, it depends, right? But when students are looking at something like this — that’s kind of what we’re asking them to do.

[image: a typical result list in EBSCO]

And the thing about the story linked above is that it also shows that the default we sometimes turn to – peer review – isn’t good enough. A lot of the comments on this post and on the related posts at P.Z. Myers and the Nature blogs focus on the suggestion that this paper is written from a creationist/intelligent design perspective, and on the implications of this for peer review —

  • The potential that an author can choose/target politically friendly reviewers for a paper
  • The suggestion that this paper’s publication might allow an affirmative answer to the question “can you find one peer reviewed article supporting intelligent design” – and what that might mean for science.

The article was retracted by the journal – not because of its politics, but because of plagiarism. Which is also something one would hope would be caught by the peer review process; it seems like the least we should expect.

So on the one hand, you have the science blogs – you have someone reading the title and abstract for this article, seeing some red flags, using the dynamic web to point them out. This generates discussion, which spreads to other dynamic sites and eventually results in the article in question being pulled down. On the other hand, you have the peer reviewers, working in isolation, who didn’t seem to catch any of the red flags. On one level, it reads like a fairly straightforward Web 2.0 Makes Good story.

But on another level, what does this mean for students, especially undergraduate students? Here’s the sentence that raised the red flags for most of these scholars:

These data are presented with other novel proteomics evidence to disprove the endosymbiotic hypothesis of mitochondrial evolution that is replaced in this work by a more realistic alternative.

I can’t say this raises the same questions for me. “Novel… evidence” might be a little odd, and “a more realistic alternative” is an interesting turn of phrase. But the thing is, you have to know something about the “endosymbiotic hypothesis” to be able to contextualize, or criticize, the idea expressed here. How many students are going to have the content knowledge to do either of those things? And the other thing is – if this had become the one peer reviewed article supporting intelligent design, there’s a really good chance that even my beginning composition students would come across it.

I don’t have any really good answers for how to help students make sense of this – except I don’t think librarians and composition instructors can do this alone. And I don’t think we can make any decent stab at figuring out an answer to this question without engaging with the question of what the participatory web means for scholarship – and engaging with the related question of what the limitations of traditional peer review are as well.

And this is where the “wisdom of crowds” vs. “cult of the amateur” story that gets played out so much in the popular media really fails us. Because if this story shows anything, it shows that we still need experts to help us evaluate, contextualize and make sense of information. And at the same time it shows that trusting those experts blindly doesn’t work out so well. Adding the transparency of the participatory web to the opaque processes of traditional scholarly publication – I think part of the answer is in that grey area somewhere.

A long post at the Bench Marks blog examines the question of Why Web 2.0 is failing in Biology. It would make this post too crazy long to engage with everything there today, but I do want to pull out a bit from the end. After talking about how life scientists aren’t reading or contributing content to blogs, he looks at who is reading science blogs and what that might mean.

Two of the groups he pulls out are really relevant here, I think — science journalists and non-scientists. If blogging is a good way to get scientific ideas out to a more general public — people who aren’t reading the scholarly journals or going to the conferences — then it’s also a way for that general public to get access to the kind of experts who can help them make sense of the research literature. More on this later, maybe.

Full disclosure – some of this thinking is to prepare for this presentation.

writing for publication – on getting it done (with a little why we do it)

Do academic librarians have a love-hate relationship with academic writing?  I mean, we spend a lot of time explaining how to find scholarly articles to our students, and hopefully we also get to spend some time telling them why they should want to.  But that doesn’t always translate into “scholarly writing YAY” when it comes to carving out time to do some of that academic writing ourselves.

In the two-birds-one-stone department — this post from Wicked Anomie on The Academic Manuscript rocks.  Basically, it’s a template for an academic article (specifically, quantitative research in the social sciences).  It’s funny.  It’s useful.  And best of all, amid the funny there’s a fair dose of what each of the sections we tell students to look for (abstract, methods, etc.) is really supposed to be doing in a good academic article.

I’m saving this for my own use — I find these types of things useful when I need a kickstart to just start writing the damned article — but I’m also thinking that this kind of approach might lead to better discussions with students about the value of peer reviewed articles than I’ve been having lately.

(plus there’s a bonus link in the post to a book on academic productivity.  Which leads to the question – how is it that I just discovered Wicked Anomie?  She’s awesome)

Which reminds me of some other stuff on the academic productivity scale —

Scatterplot has a similar template for the research proposal.

The sociologists have also started a wiki to share their experiences with scholarly journal publishing.  It doesn’t have a lot of content now, but it does have some potential.  The organization is beautiful in its simplicity.  It’ll be interesting to see where this ends up — there are some job hunt wiki examples out there that can get really speculative and negative – so much so that you wonder if it’s healthy to read them while you’re on the market.  Tenured Radical has a substantive discussion of this idea here.  But for now, I’m thinking moving towards transparency in scholarly publishing is a good thing.

And a recommendation for those interested in scholarly productivity — the Getting Things Done in Academia blog.  This one comes from the sciences (Ecology and Evolutionary Biology).