Last week, Female Science Professor wrote a lovely pair of posts about scholars and scholarship, what it feels like when your work has an impact on someone and what it feels like to meet the people who have influenced you in that particular undefinable way where it’s hard to even express what they’ve meant to you. I shared one, saved the other and generally felt very good about being a small part of this world where rock star crushes on ideas and the people who share them are understood and embraced.
Way to ruin everything, Inside Higher Ed.
Okay, not really. But seriously, it’s a lot harder to feel like a rock star because someone has read and used your work if, as Malcolm Wright and J. Scott Armstrong suggest, they probably didn’t read it and if they did, they probably read it wrong.
That might be a little bit strong, but not by much. So what does it mean when a published, peer-reviewed article in a real-life journal kicks off its final, concluding paragraph with this sentence – "Authors should read the papers they cite."
This isn’t a library tutorial aimed at fifth-graders writing their first research paper, after all. This is a paper talking about what professional scholars, people responsible for the continued development of knowledge in disciplines, should do. It can’t mean anything good. Here’s the original article:
Article at Interfaces – requires subscription
Article at Dr. Anderson’s faculty page – does NOT require a subscription – (opens in PDF)
Nutshell – Dr. Anderson wrote one of the higher-impact articles in his discipline, and the only article that analyzes and explains how to correct for nonresponse bias in mail surveys (the bias caused by people who do not respond to the survey at all). By analyzing (1) how often research based on mail surveys includes a citation to this article, and (2) how often the later researchers seem to interpret and apply the original article correctly, the authors conclude that many, many researchers are not reading all of the relevant literature. More disturbingly, many, many researchers aren’t even reading all of the articles they themselves cite.
Now, on one level this isn’t a shocker – anyone who has read moderately deeply in any body of literature has probably looked at at least one bloated literature review and thought “hey – this person probably didn’t really read all of these books and articles.” This article suggests that it’s more complex than just lit-review padding – that scholarly authors also mis-cite and misuse the sources they lean on to support the methods they use and the conclusions they draw.
Working on the assumption that if your research uses a mail survey, you should at least be considering the possibility of nonresponse bias, they found that:
…far less than one in a thousand mail surveys consider evidence-based findings related to nonresponse bias. This has occurred even though the paper was published in 1977 and has been available in full text on the Internet for many years.
Working on the further assumption that someone who makes a claim about nonresponse bias, and who reads, understands and cites an article that outlines a particular method for correcting nonresponse bias to support that claim, will follow the method outlined in the article they cited, the authors conclude that many authors are either not reading or are not understanding the articles they cite:
The net result is that whereas evidence-based procedures for dealing with nonresponse bias have been available since 1977, they are properly applied only about once every 50 times that they are mentioned, and they are mentioned in only about one out of every 80 academic mail surveys.
Most of the research that seriously digs into how well researchers use the sources they cite has come out of the sciences, particularly the medical sciences. This is one of the first articles I’ve seen dealing with the social sciences, and I think it’s worth reading more closely because this very rough and brief summary doesn’t really do justice to the issues it raises. But right now, I want to turn to the authors’ conclusions because I think they get at some of the things we’ve been talking about around here about how new technologies and the read/write web might have an impact on scholarship.
The first two outline author responsibilities:
- First – read the sources you cite. I think we can take that as a given – a bare-minimum practice, not a best practice.
- Second – “authors should use the verification of citations procedure.” Here they’re calling for authors to contact all of the researchers whose work they want to cite to make sure that they’re citing it correctly. I’m going to come back to this one.
The second two put some of the burden on the journals:
- Journals should require authors to attest that they have in fact read the work they cite and that they have performed due diligence to make sure their citations are correct. That seems a sad, largely symbolic, but not unreasonable precaution.
- Finally, journals should provide easily accessible webspaces for other people to post additional work and additional research that is relevant to research that has been published in the journal. Going to come back to this one too because I think it’s actually related to the one above.
Basically – both of these recommendations suggest that more communication and more transparency would be better for knowledge creation. And what is the read/write web about if not communication and transparency, networking and openness?
Some of the commenters on the IHE article expressed, shall we say, polite skepticism that an author should be obligated to contact every person they cite before citing them. These concerns were also raised by one of the formal comment pieces attached to the Interfaces article. And I have to say I agree with these concerns for a few reasons. Anderson made the claim more than once that he does this as an author, with good results, and that the process is not too onerous. But that doesn’t really address the question of how onerous it would be for a prolific or influential author to have to field all of those requests.
And I’ll also admit to having some “the author is dead” reactions to this. What if I contact Author A and say I’m planning to use your work in this way, and they say “well, I didn’t intend it to be used in that way, so you shouldn’t”? Does that really mean I shouldn’t? Really? It’s hard to see this kind of thing not devolving quickly into something that actually hinders the development of new knowledge, because it hinders new researchers’ ability to push at and find new connections in work that has come before.
But not to throw everything out with this bathwater – the idea that more and better and faster communication between scholars (more and better and faster than can be provided within journals and the citation-as-communication tradition) makes better scholarly conversations and better scholarship – that’s something I think we need to hold on to. Anderson points out how talking to the researcher who really knows the area described in the thing you’re citing can point you to other, less cited but more useful resources – how they can expand your knowledge of the field you’re talking about:
We checked with Franke to verify that we cited his work correctly. He directed us to a broader literature, and noted that Franke (1980) provided a longer and more technically sophisticated criticism; this later paper has been cited in the ISI Citation Index just nine times as of August 2006.
This is an area where the transparency, speed and networking aspects of the emerging web might have a real impact on the quality of scholarship even if there are no material changes in the practice of producing journal articles. I might not be sure about the idea of making this communication a part of citation verification, but it should be a part of knowledge creation. And it’s tied as well to the final recommendation – that journals should provide webspaces for some – not all, but some – of this communication to happen.
The types of conversations between similarly interested scholars that Anderson is describing are nothing new – the emerging web offers some opportunities for those conversations to move off the backchannel. Or maybe it’s still a backchannel, but a backchannel that is visible – and that’s what’s interesting. Whether the journal hosts its own backchannel where errors, additions, omissions and new ideas can be posted, or whether that backchannel exists on blogs, in online knowledge communities, or in networking spaces doesn’t matter so much as that it can exist. We certainly have the technology.
And the journal Interfaces itself, I think, provides a suggestion as to why this kind of additional discourse and conversation is valuable. You may have noticed that what looks like a fifteen-page article is really an eight-page article with six pages of response pieces, followed by an authors’ response. The responses challenge parts of the original article and enrich other parts with additional information and examples. They illustrate the collaborative nature of knowledge production in the disciplines in a way that citations alone cannot. I couldn’t find anything on the journal’s website about this practice – whether it’s a regular thing, how responses are solicited, and so on. These responses are a spot of openness in a fairly closed publication.
And that also points to the last point to make here, because this is far too long already – I don’t think we have to change everything to fix the problems raised here, and I don’t think that if we did change everything it would fix all of the problems raised here. There’s that scene in Bull Durham where Ebby Calvin gets his guitar taken away because he won’t get the lyrics right. And that’s the connection between FemaleScienceProfessor and Anderson and Wright – who can feel like a rock star if they’re singing your songs but getting them wrong?
There will always be Ebby Calvins out there, inside and outside of academia – for them, women are woolly because of the stress. Injecting just some openness, making some communication visible, won’t stop Ebby Calvin – but it might keep the next person from replicating his mistakes. And that’s a good thing.