it’s the math

I’m not sure that even my tendency to see information literacy connections everywhere will explain why I’m posting this, but I just thought it was really interesting.  This morning, I got pointed to this article (via a delicious network) which argues that hands-on, unstructured, discovery-based learning doesn’t do the trick for many science students at the secondary level.  Using preparedness for college science as their definition of success, the researchers found that most students are more successful if their high school science learning is significantly structured for them by their teachers.

Structure More Effective in High School Science Classes, Study Reveals

What jumped out at me here was that the reason seemed to be linked to the math – students with good preparation in the math did benefit from unstructured, discovery-based learning.  And then there was another “similar articles to this one” link at the bottom of the page, pointing to another study, making another point – which supports this idea too (which is not hugely surprising because both items point to different papers by the same researchers).

You do better in college chemistry if you’ve taken high school chemistry, better in physics if you’ve taken physics – but the one big exception to the success-in-one-doesn’t-generalize argument?  You do better in everything if you’re well-prepared in math.

College Science Success Linked to Math and Same-Subject Preparation

After that there are more “articles like this one” links, leading to articles about middle-school math teachers in the US being really ill-prepared, or things about gender and math and science, which really got me thinking about further implications of those findings – if math is such a linchpin.  So there is something there about how this dynamic, browsable environment makes your brain work in ways that make research better.

There’s also something there about context – getting the “math teachers aren’t prepared” article in the context of the “math is key” research made the significance of the former clearer, made how I could *use* that research much clearer than it would have been if I came upon it alone.  There’s also something there about the power of sites like ScienceDaily (and ScienceBlogs, and others) to pull together research and present it in an accessible way in spaces where researchers/readers can make those connections.

And there might even be something there about foundational, cognitive skills that undergird other learning. But mostly, I just found it interesting.


Studies referenced were reported on here:

Sadler, Philip M. & Tai, Robert H. (2007). The two high-school pillars supporting college science (Education Forum). Science, 27 July 2007: Vol. 317, no. 5837, pp. 457–458. DOI: 10.1126/science.1144214 (paywall)
Tai, R. H. & Sadler, P. M. (2009). Same science for all? Interactive association of structure in learning activities and academic attainment background on college science performance in the USA. International Journal of Science Education, Volume 31, Issue 5, March 2009, pages 675–696. DOI: 10.1080/09500690701750600

what do huge numbers + powerful computers tell us about scholarly literature? (peer-reviewed Monday)

A little more than a month ago, I saw a reference to an article called Complexity and Social Science (by a LOT of authors).  The title intrigued me, but when I clicked through I found out that it was about a different kind of complexity than I had been expecting.

Still, because the authors had made the pre-print available, I started to read it anyway and found myself making my way through the whole thing. The article is about what might be possible with powerful computers able to crunch lots of data – what might be possible for the social sciences, not just the life sciences or the physical sciences. The reason it grabbed me was this bit here:

Computational social science could easily become the almost exclusive domain of private companies and government agencies. Alternatively, there might emerge a “Dead Sea Scrolls” model, with a privileged set of academic researchers sitting on private data from which they produce papers that cannot be critiqued or replicated. Neither scenario will serve the long-term public interest in the accumulation, verification, and dissemination of knowledge.

See, the paper opens by making the point that research in fields like biology and physics has been incontrovertibly transformed by the “capacity to collect and analyze massive amounts of data,” but while lots and lots of people are doing stuff online every day – stuff that leaves “breadcrumbs” that can be noticed, counted, tracked, and analyzed – the literature in the social sciences includes precious few examples of that kind of data analysis.  Which isn’t to say that it isn’t happening – it is and we know it is, but it’s the Googles and the Facebooks and the NSAs that are doing it. The quotation above gets at the implications of that.

The article is brief and well worth a scan even if you, like me, need a primer to really understand the kind of analysis they are talking about.  I read it, bookmarked it, briefly thought about writing about it here but couldn’t really come up with the information literacy connection I wanted (there is definitely stuff there – if nowhere else it’s in the discussion of privacy, but the connection I was looking for wasn’t there for me at that moment) so I didn’t.

But then last week, I saw this article, Clickstream Data Yields High-Resolution Maps of Science, linked in the ResearchBlogs twitter feed (and since then at Visual Complexity, elearnspace, Stephen’s Web, and EcoTone).

And they connect – because while this specific type of inquiry isn’t one of the examples listed in the Science article, this is exactly what happens when you turn the huge amounts of data available, all of those digital breadcrumbs, into a big picture of what people are doing on the web — in this case what they are doing when they work with the scholarly literature. And it’s a really cool picture:

The research is based on data gathered from “scholarly web portals” – from publishers, journals, aggregators and institutions.  The researchers collected nearly 1 billion interactions from these portals, and used them to develop a journal clickstream model, which was then visualized as a network.
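To give a rough sense of the mechanics (this is my own toy sketch, with invented journal names and sessions – not the researchers’ data or model), turning clickstream sessions into a weighted journal network looks something like this:

```python
from collections import Counter

# Hypothetical toy sessions: each is the sequence of journal sites a
# user visited in one session, in order.
sessions = [
    ["Nature", "Science", "PLoS ONE"],
    ["Science", "PLoS ONE"],
    ["Cell", "Nature", "Science"],
]

# Count how often each pair of journals is visited back-to-back --
# the transition counts behind a simple journal clickstream model.
edges = Counter()
for s in sessions:
    for a, b in zip(s, s[1:]):
        edges[tuple(sorted((a, b)))] += 1

# The weighted edge list is what gets visualized as a network.
for (a, b), weight in sorted(edges.items()):
    print(a, "--", b, "weight:", weight)
```

At a billion interactions the bookkeeping obviously gets more sophisticated, but the underlying move – sequences of visits become weighted edges, weighted edges become a map – is the same.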

For librarians, this is interesting because it adds richness to our picture of how people – scholars – engage with the scholarly literature, dimensions not captured by traditional measures of impact.  For example, what people cite and what they actually access on the web aren’t necessarily the same thing, and a focus on citation as the only measure of significance has always provided only a part of whatever picture there is out there.  Beyond this, as the authors point out, clickstream data allows analysis of scholarly activity in real time, while to do citation analysis one has to wait out the months-and-years delay of the publication cycle.

It’s also interesting in that it includes data not just from the physical or natural sciences, but from the social sciences and humanities as well.

What I also like about this, as an instruction librarian, is the picture that it provides of how scholarship connects.  It’s another way of providing context to students who don’t really know what disciplines are, don’t really know that there are a lot of different scholarly discourses, and who don’t really have the tools yet to contextualize the scholarly literature they are required to use in their work.  Presenting it as a visual network only highlights the potential of this kind of research.

And finally – pulling this back to the Science article mentioned at the top – this article is open, published in an open-access journal, and I have to think that the big flurry of attention it has received in the blogs I read, blogs with no inherent disciplinary or topical connection to each other, is in part because of that.


Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabasi, A., Brewer, D., Christakis, N., Contractor, N., Fowler, J., Gutmann, M., Jebara, T., King, G., Macy, M., Roy, D., & Van Alstyne, M. (2009). Computational Social Science. Science, 323 (5915), 721–723. DOI: 10.1126/science.1167742

Bollen, J., Van de Sompel, H., Hagberg, A., Bettencourt, L., Chute, R., Rodriguez, M., & Balakireva, L. (2009). Clickstream Data Yields High-Resolution Maps of Science. PLoS ONE, 4 (3). DOI: 10.1371/journal.pone.0004803

peer review, what is it good for? (peer-reviewed Monday)

In a lot of disciplines, the peer reviewed literature is all about the new, but while the stories may be new, they’re usually told in the same same same old ways.  This is a genre that definitely has its generic conventions.  So while the what of the articles is new, it’s pretty unusual to see someone try to find a new way to share the what.  I’ll admit it, that was part of the attraction here.

And also attractive is that it is available.  Not really openly available, but it is in the free “sample” issue of the journal Perspectives on Psychological Science.  I’m pulling this one thing out of that issue, but there are seriously LOTS of articles that look interesting and relevant if you think about scholarship, research, or teaching/learning those things — articles about peer review, IRB, research methods, evidence-based practice, and more.

Trafimow, D., & Rice, S. (2009). What If Social Scientists Had Reviewed Great Scientific Works of the Past? Perspectives on Psychological Science, 4 (1), 65-78 DOI: 10.1111/j.1745-6924.2009.01107.x

So here’s the conceit – the authors take several key scientific discoveries, pretend they have been submitted to social science/ psychology journals, and write up some typical social-science-y editors’ decisions.  The obvious argument of the paper is that reviewers in social science journals are harsher than those in the sciences, and as a result they are less likely to publish genius research.

No Matter Project (flickr)

I think that argument is a little bit of a red herring; the real argument of the paper is more nuanced.  The analogy I kept thinking about was the search committee with 120 application packets to go through – that first pass through, you have to look for reasons to take people out of the running, right?  That’s what they argue is going on with too many reviewers – they’re looking for reasons to reject.  They further argue that any reviewer can find things to criticize in any manuscript, and that just because an article can be criticized doesn’t mean it shouldn’t be published:

A major goal is to dramatize that a reviewer who wishes to find fault is always able to do so.  Therefore, the mere fact that a manuscript can be criticized provides insufficient reason to evaluate it negatively.

So, according to their little dramatizations, Eratosthenes, Galileo, Newton, Harvey, Stahl, Michelson and Morley, Einstein, and Borlaug would each have been rejected by social science reviewers, or at least some social science reviewers.  I won’t get into the specifics of the rejection letters – Einstein is called “insane” (though also genius – reviewers disagree, you know) and Harvey “fanciful” but beyond these obviously amusing conclusions are some deeper ideas about peer review and epistemology.

In their analysis section, Trafimow and Rice come up with 9 reasons why manuscripts are frequently rejected:

  • it’s implausible
  • there’s nothing new here
  • there are alternative explanations
  • it’s too complex
  • there’s a problem with methodology (or statistics)
  • incapable of falsification
  • the reasoning is circular
  • I have different questions I ask about applied work
  • I am making value judgments

A few of these relate to the inherent conservatism of science and peer review, which has been well established (and which was brought up here a few months ago).  For example, the plausibility objection reflects the way reviewers are inclined to accept what is already “known” as plausible, and to treat challenges to that received knowledge as implausible, no matter how strong the reasoning behind the challenging interpretation.

A few get at that “trying to find fault” thing I mentioned above.  You can always come up with some “alternative explanation” for a researcher’s results, and you can always suggest some other test or some other measurement a researcher “should have” done.  The trick is to suggest rejection only when you can show how that missing test or alternative explanation really matters – but they suggest that a lot of reviewers don’t do this.

Interestingly, Female Science Professor had a similar post today, about reviewers who claim that things are not new, but who do not provide citations to verify that claim.  Trafimow and Rice spend a bit of time themselves on the “nothing new” reason for rejection.  They suggest that there are five levels at which new research or knowledge can make a contribution:

  • new experimental paradigm
  • new finding
  • new hypothesis
  • new theory
  • new unifying principle

They posit that few articles will be “new” in all of these ways, and that reviewers who want to reject an article can focus on the dimension where the research isn’t new, while ignoring what is.

Which relates to the value judgments, or at least to the value judgment they spend the most time on – the idea that social science reviewers value data, empirical data, more than anything else, even at the expense of potentially groundbreaking new theory that might push the discourse in that field forward.  They suggest that a really brilliant theory should be published in advance of the data – that other, subsequent researchers can work on that part.

And that piece is really interesting to me because the central conceit of this article focuses our attention, with hindsight, on rejections of stuff that would fairly routinely be considered genius.  And even the most knee-jerk, die-hard advocate of the peer review process would not make the argument that most of the knowledge reported in peer-reviewed journals is also genius.  So what they’re really getting at here isn’t “does the process work for most stuff” so much as “are most reviewers in this field able to recognize genius when they see it, and are our accepted practices likely to help them or hinder them?”

More Revision, Djenan (flickr)

And here’s the thing – I am almost thinking that they think that recognizing genius isn’t up to the reviewers.  I know!  Crazytalk.  But one of the clearest advantages to peer review is that revision based on thoughtful, critical, constructive commentary by experts in the field will, inherently, make a paper better.  That’s an absolute statement but one I’m pretty comfortable making.

What I found striking about Trafimow and Rice’s piece is that over and over again I kept thinking that the problem with the practices they were identifying was that they led to reviews that weren’t helpful to the authors.  They criticize suggestions that won’t make the paper better, conventions that shouldn’t apply to all research, and the like.  They focus more on bad reviews than good, and they don’t really talk explicitly about the value of peer review, but if I had to point at the implicit value of peer review as suggested by this paper, that would be it.

There are two response pieces alongside this article, and the first one picks up this theme.  Raymond Nickerson does spend some time talking about one purpose of reviews being to ensure that published research meets some standard of quality, but he talks more about what works in peer review and what authors want from reviewers – and in this part of his response he talks about reviews that help authors make papers better.  In a small survey he did:

Ninety percent of the respondents expected reviewers to do substantially more than advise an editor regarding a manuscript’s publishability.  A majority (77%) expressed preferences for an editorial decision with detailed substantive feedback regarding problems and suggestions for improvement…

(Nickerson also takes issue with the other argument implied by the paper’s title – that the natural and physical sciences have been so much kinder to their geniuses.  And in my limited knowledge of this area, that is a point well taken.  That inherent conservatism of peer review certainly attaches in other fields – there’s a reason why Einstein’s theory of special relativity is so often put forward as the example of the theory published in advance of the data.  It’s not the only one, but it is not like there are zillions of examples to choose from.)

Nickerson does agree with Trafimow and Rice’s central idea – that just because criticisms exist doesn’t mean new knowledge should be rejected.  M. Lynne Cooper, in the second response piece, also agrees with this premise but spends most of her time talking about the gatekeeper, or quality control, aspects of the peer review process.  And as a result, her argument, at least to me, is less compelling.

She seems too worried that potential reviewers will read Trafimow and Rice and conclude that they should never question methodology, or whether something is new – that just because Trafimow and Rice argue that these lines of evaluation can be misused, potential reviewers will assume that they cannot be properly used.  That seems far-fetched to me, but what do I know?  This isn’t my field.

Cooper focuses on what Trafimow and Rice don’t: what makes a good review.  A good review should:

  • Be evaluative and balanced between positives and negatives
  • Evaluate connections to the literature
  • Be factually accurate and provide examples and citations where criticisms are made
  • Be fair and unbiased
  • Be tactful
  • Treat the author as an equal

But I’m less convinced by Cooper’s suggestions for making this happen.  She rejects the idea of open peer review in two sentences, but argues that the idea of authors giving (still anonymous) reviewers written feedback at the end of the process might cause reviewers to be more careful with their work.  She does call, as does Nickerson, for formal training.  She also suggests that the reviewers’ burden needs to decrease to give them time to do a good job, but other things I have read make me wonder about her suggestion that there be fewer reviewers per paper.

In any event, these seem at best like band-aid solutions for a much bigger problem.   See, what none of these papers do (and it’s not their intent to do this) is talk about the bigger picture of scholarly communication and peer review.  And that’s relevant, particularly when you start looking at these solutions.  I was just at a presentation recently where someone argued that peer review was on its way out, not for any of the usual reasons but because they were being asked to review more stuff, and they had time to review less.  Can limiting reviewing gigs to the best reviewers really work; can the burden on those reviewers be lightened enough?

The paper’s framing device includes science that pre-dates peer review, that pre-dates editorial peer review as we know it, that didn’t go through the full peer-review process, which begs the question – do we need editorial peer review to make this knowledge creation happen?  Because the examples they’re putting out there aren’t Normal Science examples.  These are the breaks and the shifts and the genius that Normal Science process kind of by definition has trouble dealing with.

And I’m not saying that editors and reviewers and traditional practices don’t work for genius – that would be ridiculous.  But I’m wondering if the peer-reviewed article is really the only way to get at all of the kinds of knowledge creation, of innovation, that the authors talk about in this article – is this process really the best way for scholars to communicate all of those five levels/kinds of new knowledge outlined above?  I don’t want to lose the idealistic picture of expert, mentor scholars lending their expertise and knowledge to help make others’ contributions stronger.  I don’t want to lose that which extended reflection, revision, and collaboration can create.

I am really not sure that all kinds of scholarly communication or scholarly knowledge creation benefit from the iterative, lengthy, revision-based process of peer review.  I guess what I’m saying is that I don’t think problems with peer review by themselves are why genius sometimes gets stifled, and I don’t think fixing peer review will mean genius gets shared.  I don’t think the authors of any of these pieces think that either, but these pieces do beg that question – what else is there?

doodling as pedagogy

This one has been all over the news in the last two days, but if you haven’t seen it, it’s an Early View article in the journal Applied Cognitive Psychology. The article suggests that people who doodle while they are listening to stuff retain more of what they hear than non-doodlers do.

As an unabashed doodler (for me it’s usually fancy, typography-like versions of my dog’s name), this isn’t all that surprising. But my brain keeps going back to it – should we be figuring out ways to encourage our students to doodle in library sessions?

See, the article doesn’t say definitively why the doodling works.  But the author, Jackie Andrade, does suggest that it might have something to do with keeping the brain engaged just enough to prevent daydreaming, but not enough to be truly distracting:

A more specific hypothesis is that doodling aids concentration by reducing daydreaming, in situations where daydreaming might be more detrimental to performance than doodling itself.

So you’ve got an information literacy session in the library, with a librarian-teacher you have no relationship with at all, about a topic about which you may or may not think you need instruction.  That sounds like a perfect situation for daydreaming.

And it’s not too hard to think of ways to encourage doodling.  Handouts with screenshots of the stuff you’re talking about – encourage them to draw on the handouts.  Maybe even provide pencils?  I don’t know – it’s not an idea where I’ve fully figured out the execution, but I’m interested.

My students, most of the time, don’t take notes while I’m talking.  Part of this is my style, I talk fast and I don’t talk for very long in any one stretch before switching to hands-on.  But I don’t think that’s all of it – most of them don’t even take out note-taking materials unless they are told to do so by their professor (and then they ALL do) or unless I say “you should make a note of this” (then most of them do).   And this isn’t something I’ve worried about.  I have course pages they can look at if they need to return to something, and I’m confident that most of them know how to get help after the fact if they need it.

But the no-notetaking thing means that they aren’t even in a position to do any doodling.  And as someone who needs that constant hands/part of the brain occupation to stay focused, I wonder why I’ve never thought about that as a problem before.

This study specifically tried to make sure that the subjects were prone to boredom.  They had them do this task right after they had just finished another colleague’s experiment, thinking that would increase the chance that they would be bored.  And they gave them a boring task – monitoring a voice message.  Half doodled, half did not, and then they were tested on their recall of the voice message.

I don’t mean to suggest that information literacy sessions are inherently boring; I don’t actually think they are.  But I think some of the conditions for boredom are there, particularly in the one-shot setting, and I don’t think there’s stuff that we can do about all of those conditions.  Some of them are inherent.  The idea of using the brain research that’s out there to figure out some strategies for dealing with that interests me a lot.

Jackie Andrade (2009). What does doodling do? Applied Cognitive Psychology DOI: 10.1002/acp.1561

Peer-reviewed Monday (plus 24 hours) – has anyone tried out this Delphi method?

So this is a little different for peer-reviewed Monday, even though it is a peer-reviewed article about information literacy. It’s different in that I chose the article because of the research method – the infolit topic was just a bonus. I’m going to be involved in a project for the Oregon Library Association that is going to be using this same method, so I wanted to check it out.

(It’s the Delphi method, if you’re curious. And I talked about another project that used it about halfway through this post.)

In the January issue of portal: Libraries and the Academy, Laura Saunders from Simmons College uses the Delphi method to do some forecasting about information literacy and academic libraries. The Delphi method is frequently used for forecasting, which is actually how my project will be using it, so we’re off to a good start.

So, in short, the Delphi method involves identifying a set of experts on a topic. Then these experts are asked to complete a survey – an open-ended survey with lots of room for them to talk about what they think is important. The researcher collects the surveys and synthesizes the responses, and then sends out another set of questions (which may be new and which may be repeats) for another set of responses. This goes on with the goal being consensus – expert consensus on the topic in question. In this case, the experts were asked to examine some potential scenarios for the future of information literacy:

This study develops possible scenarios for the future of library instruction services and offers practitioners, administrators, and library users a sense of how existing technologies, resources, and skills can best be employed to meet this vision.

Saunders identified her experts by their participation in information literacy organizations, publishing, presenting, and research. She identified 27; 14 agreed to participate and 13 eventually did. She did two rounds of surveys. She pulled some potential futures for information literacy out of the literature (things stay the same, librarians get replaced by faculty who take over all information literacy instruction, and librarians and faculty collaborate) and asked the expert panel to talk about four things:

  1. the scenario they thought was most reasonable or likely and why
  2. any obstacles they could see getting in the way of realizing these scenarios
  3. alternative possibilities or scenarios
  4. other comments

After the first round, she kept one scenario (the collaborative one) and created another composite scenario based on responses from the experts (this last scenario posited a reduced need to teach information literacy because of improvements in the technology). Participants could reiterate their initial choice or choose the new scenario. It isn’t clear from the methods section whether or not the participants were given the same four questions again, nor is it clear what information they were given from the first round – did they see everyone’s responses, a synthesis or summary of those responses, or just the new questions?
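For what it’s worth, the basic shape of the process – rounds of responses, synthesized feedback, a check for consensus – can be sketched like this (a toy model of my own; the panel, the 75% consensus threshold, and the rule that minority voters simply reconsider are all illustrative assumptions, not Saunders’ actual protocol):

```python
from collections import Counter

def run_delphi(initial_votes, max_rounds=4, threshold=0.75):
    """Repeat survey rounds until enough of the panel agrees."""
    votes = dict(initial_votes)
    for round_no in range(1, max_rounds + 1):
        tally = Counter(votes.values())
        leader, count = tally.most_common(1)[0]
        if count / len(votes) >= threshold:
            return leader, round_no          # consensus reached
        # The researcher synthesizes the round and sends it back;
        # in this toy rule, everyone adopts the front-runner.
        votes = {expert: leader for expert in votes}
    return None, max_rounds                  # no consensus

# Invented five-expert panel choosing among candidate scenarios
panel = {"A": "collaboration", "B": "collaboration", "C": "status quo",
         "D": "collaboration", "E": "technology"}
scenario, rounds_needed = run_delphi(panel)
```

In a real Delphi study, of course, the interesting part is exactly what this sketch flattens away: what the synthesized feedback contains, and whether experts genuinely revise in response to it.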

The research showed that these experts were largely optimistic about the future of information literacy, and that they overwhelmingly thought the collaborative scenario was the most likely. They identified faculty resistance as a major obstacle, and also mentioned staff and money issues as obstacles. They believed that librarians should leverage their expertise to play a stronger role establishing information literacy goals at the institutional level.

They saw partnerships on instructional design and assignment design as a place where librarians would continue to play a role in information literacy instruction, even if classroom faculty took on more responsibility for teaching, but they expressed concern that librarians aren’t ready to take on those roles. They were also not sure that library schools were providing new practitioners with these skills and that knowledge:

For librarians to be truly integrated into the curriculum rather than offering one-shot sessions, they must have much more pedagogical and theoretical knowledge. Although practicing librarians might have experience with library instruction, few have the background to transition easily to the [consulting] roles being described. Furthermore, respondents were unsure that library school programs were developing courses to adequately prepare future graduates for these responsibilities.

The experts also argued that assessment needs to be a concern, and they also raised the age-old question of what do we really mean by information literacy anyway. Following along the lines of the researchers discussed in two recent peer reviewed Mondays, they generally agreed that context is significant when it comes to information literacy, and that information literacy must be understood more broadly than “library research skills.” At the same time, some argued for the more reductionist, standards-based definitions because they are easier to assess.

On a methods level, I found this study compelling, though there were two things that nagged at me. First was the idea that 13 is just not enough experts. Too often Saunders was forced to spend significant time on a point only to undercut its significance when the reader realizes that it had been articulated by only one person on the panel. Some of them were necessary correctives or added important subtleties to the conversation, so I don’t fault her for including them. But when it’s just one voice it is just not possible to entirely dismiss the idea that that voice is alone because it’s wacky.

The second thing, related to this, is that I didn’t get any sense from the article that any kind of real consensus had been reached, or that the experts involved had changed or refined their views as a result of the process. Those who were outliers after round one of the surveys remained outliers. As it was described to us, the Delphi process offers the interactivity and social learning benefits of a focus group, while allowing the participants to provide individual, thoughtful, reflective feedback. That may have gone on in this study, but I don’t feel like I really saw it if it did.

On a content level, I found the argument that instruction librarians needed more pedagogical and theoretical knowledge intriguing. I was struck by the extent to which the expert panel focused on teaching knowledge as the thing separating the faculty and the librarians, as shown by this quote from one of the panelists: “faculty ‘view librarians as having no pedagogic understanding.'”

Librarians frequently talk about teaching as something faculty don’t think we can do, but in my experience it isn’t teaching but research that causes this gap. I haven’t run across many faculty members who don’t think librarians can teach, though some certainly don’t know that they do, but I regularly run across faculty members who are surprised to hear that there is information science research. And when it comes to what would make me feel comfortable approaching faculty and saying “this is what your students need” – it is research on what students actually do need, on what they do and don’t know, that would help me do that, not better teaching techniques. This may be a function of spending a lot of time at research-focused institutions, but I’m not sure. And it may be that this is exactly what these experts mean by knowledge that is more pedagogical and theoretical, but again, I’m not sure.

And this relates to the assessment piece and the definitions of information literacy piece as well. Because these are both examples of places where there is a danger of following the path of least resistance – of defining information literacy like faculty understand it, or of assessing what other people think is important. Not that that has to be what happens, but the danger exists.

And I do wonder how these experts have themselves avoided the gaps they see in others – how they have developed the knowledge of theory and pedagogy that they think librarians need. Or maybe they haven’t – maybe they are including themselves in the number of librarians who aren’t ready to be faculty partners in this way. I’m not sure. But I have been thinking lately that the Delphi method might be useful for getting at that question as well – how do expert instruction librarians develop the knowledge they need to do what they do?

Laura Saunders (2008). The Future of Information Literacy in Academic Libraries: A Delphi Study. portal: Libraries and the Academy, 9(1), 99-114. DOI: 10.1353/pla.0.0030

Peer-reviewed Monday – Reflective Pedagogy
When I wrote that one theory and practice post last November I was thinking about reflective practice, but I didn’t really talk about it.  Luckily, Kirsten at Into the Stacks picked up that thought for me.  The whole post is great, but here’s the reflection piece:

But the purpose of theory, it seems to me, is as much to cause us to reflect on our practice as it is to inform our practice.

In my own post, I over-used the term “inform,” because while that is important, I think that reflection is just as, if not more, important.  Reflection is the point where the practice part of the job mixes with the theory part, with the writing part and the presenting part and the reading-outside-the-discipline part.  It’s not just a matter of taking what someone else has done and saying “I could do that.”  It’s taking what someone else has said and saying “wow, this makes me think/feel/understand something about what I do.”

So this article from last year’s Journal of Academic Librarianship jumped out at me – as it brings together the ideas of praxis and information literacy:

H. Jacobs (2008). Information Literacy and Reflective Pedagogical Praxis. The Journal of Academic Librarianship, 34(3), 256-262. DOI: 10.1016/j.acalib.2008.03.009

The article is well-done, and I recommend it if you’re interested in the why of reflective practice, particularly where it comes to teaching and information literacy, but for me it felt a little like that one song in Dirty Dancing – the one they dance to at the end that sounds like it is going to be classic 80’s overwrought pop and you keep thinking it’s going to take off into the saxophones and dance beat and it never does because in that last scene they’re doing the mambo that Jennifer Grey’s character learned as a novice and it can never really deviate from its initial beat as much as it sounds like it is going to?

The whole thing is the why of reflective practice, with no how – not even a “here’s how I do it.”  Which is fine, and important, but when you’ve already drunk that particular kool-aid it lacks a certain punch.

Anyway, quick summary.  Jacobs argues that librarians need to think more about pedagogy and not just about teaching.  She briefly touches on the lack of teaching/pedagogy training in library school, and argues that even if one has had a teaching methods class, that isn’t enough.  Because so much of the teaching/learning work we do happens outside of the classroom setting, teaching methods alone won’t give us the coherence or the big picture we need to be effective.

She also argues for a broad, inclusive definition of information literacy.  Based on the UN’s Alexandria Proclamation on Information Literacy and Lifelong Learning, the definition she favors includes that stuff in the standards-based definitions, but “goes on to make explicit what is implied in the other definitions by emphasizing the democratizing and social justice elements inherent in information literacy.”  This broad definition, she says, forces an understanding of information literacy that has to extend beyond the classroom.

Which brings us to the crux of the paper’s argument:

What I am suggesting is that the dialogues we have surrounding information literacy instruction strive to find a balance in the daily and the visionary, the local and the global, the practices and the theories, the ideal and the possible. One of the ways we can begin to do this in our daily teaching lives is to work toward creating habits of mind that prioritize reflective discussions about what it is that we are doing when we ‘do’ information literacy. This means thinking about pedagogy and talking about how we might work toward making the global local, the visionary concrete, the theoretical practicable, and, perhaps, the ideal possible. But how can we, as individual librarians, begin to work toward making information literacy ideals possible?

She argues that letting external standards, or quantitative measures, or standards-based rubrics define what we “do” when we do information literacy is not the way to go.  Not only will that keep us from understanding IL in the broad way advocated here, but it also reinforces an old-school, disempowering vision of education itself – Paulo Freire’s banking metaphor, where the teacher deposits knowledge into the students.

Finally, she gets to the argument mentioned in the abstract, that composition and rhetoric offer a lot to librarians trying to figure out how to understand information literacy in broader contexts.  She points out that the rhet/comp literature pushes back on the idea of standards-based assessment or pedagogy.  For one thing, this kind of approach makes it that much harder to really critically interrogate the assumptions underlying the standards or models themselves.

Which brings her to praxis:

Praxis — the interplay of theory and practice — is vital to information literacy since it simultaneously strives to ground theoretical ideas into practicable activities and use experiential knowledge to rethink and re-envision theoretical concepts.

She points to a particular article from the rhet/comp field, Shari Stenberg and Amy Lee’s College English article Developing Pedagogies: Learning the Teaching of English.  Drawing on Stenberg and Lee, Jacobs argues that we must develop ways to study our actual practice as texts, our teaching as texts.  She further argues that most of what we do when it comes to pedagogy is articulate different visions of it – visions that are not grounded in what practitioners actually do.

Beyond this, she argues that we need to study these things together, have critical, reflective conversations together about what it is that we do.  At the heart of this is the idea that teaching can’t be mastered, that developing our understanding of what we do is an inherently ongoing process.

And here’s the thing – I really like all of what she has to say here.  I do find it interesting that, given the large body of literature on reflective practice, she doesn’t draw from it, but what she says is consistent with the parts of that literature I like, so overall I don’t mind.  But here is where she ends.  She’s made the case for reflective practice and reflective conversations, for reading our practice like texts – but she doesn’t go on to the how of things.

Partly, because she refuses to do so:

For these reasons, I resist offering answers, solutions, or methods to questions about how to engage theory and practice within information literacy initiatives.

But recognizes that this is frustrating:

At the same time, I acknowledge that refusing to provide answers to questions such as “how do I teach information literacy” or “how do I become a reflective pedagogue” or “how might I foster a reflective pedagogical environment in my library” often seems evasive and counter productive.

She argues that librarians should engage in reflective dialogue, and that they should in effect walk the walk in front of their students – that the best way to get students engaged in the learning process is for teachers to be engaged in it as well.  That teachers should interrogate their assumptions about their own learning process, examine why they set the problems they set, and be engaged in their own learning as they would want their students to be.  To encourage students to develop their curiosity and to set meaningful problems for themselves to investigate – librarians should do that too.  Especially when it comes to their own practice.

But again, no how.  And I will admit I find the “articulating this for you would be against what I am arguing” move unsatisfying.  Because I don’t really think that Jacobs is letting us see her process – I don’t think that she is letting us see her walk the walk.  I don’t see her problem-setting on a personal, engaged level – instead I see her telling us that there is a problem, arguing that in very traditional, very objective scholarly language, and then positing a solution to the problem that doesn’t fit in that rhetorical structure.  It’s late, and I’m tired, and I will definitely accept that I may have missed something that is here – but I don’t think that personal engagement is here.

Don’t get me wrong – one of the things I like about the article is what I do see of Jacobs’ passion for this subject, her ability to draw connections and connect the dots.  But I want to see her reflecting on her practice – as a teacher, maybe, and as a scholar, certainly.  I think that would have allowed her to be true to her “no prescriptive reflection recipes” principles, while still offering something more satisfying than “creative, reflective dialogue.”

Perhaps my perspective is skewed, though, because I am increasingly starting to believe that showing students how we use the tools we describe in our own research and scholarship is the best way to communicate their value.  I do think that modeling what we preach is crucial.   So I may be glomming onto what is a less important part of her overall argument than I would have you believe.

Still, my favorite part of this article is buried in footnote 59 – where she can’t resist weighing in with some ideas.  And I find the peek into the reviewing process entirely charming:

59. The question of how to go about enacting this creative, reflective dialogue is undeniably pressing. In response to this piece, an anonymous reviewer asked a crucial question: “am I simply to include more problem based learning into my teaching of information literacy, or do I need to start from scratch and sit alongside the classes I work with, understanding how they think, and walking with them on their path to critical thinking and information literacy. God please give me the time for this.” The reviewer concludes, “However, this is perhaps the nature of the reflective activity the author is recommending.” Indeed, the answer the reviewer provides to his or her question is the answer I too would offer. The act of asking questions such as the ones quoted above is precisely the kind of reflective activity I am advocating. Pedagogical reflection does not mean we need to dismantle and rebuild our information literacy classes, programs, and initiatives from the ground up (though we may, after reflection, choose to do so). Instead pedagogical reflection means that we ask questions like the ones quoted above of ourselves and our teaching and that we think critically and creatively about the small and large pedagogical choices we make.

Peer-reviewed Monday post-conference-drive-by

Oh who am I kidding.  It probably won’t be short.  But it might be disjointed.  My good intentions were foiled by intermittent Internet access at the Super Conference, which was not that unexpected.  And by a seriously limited amount of power for my computer, which was totally unexpected except for my expected ability to do boneheaded things like leaving my power adapter at home.

{FYI – the Canadians, they know how to treat their speakers.  It’s been great.}

I do have something to say about peer reviewed research today though – it’s about this 2005 Library Quarterly article by Kimmo Tuominen, Reijo Savolainen and Sanna Talja.

Fair warning, I really liked this article.  I first read something by Savolainen when I was working on an annotated bibliography in library school (I think the topic was genre) and I’ve been something of a fan ever since.  Like AnneMaree Lloyd who was discussed here two weeks ago, these authors argue that we need to expand our definitions of information literacy.  And the expansion they’re arguing for is similar to Lloyd’s.  I find more food for thought here – more connections between the different things I’m thinking about and working on.

Perhaps this is because this is not a research article – these authors are not bound by their own sample, questions, or data.  Perhaps it is because they do a better job of placing their vision of information literacy in its theoretical context, or at least of explaining what that context is and why we should care about it.  Or perhaps it is just because their vision is broader.

In any event, their starting point is similar to Lloyd’s –

The predominant view of information literacy tends to conceive of IL as a set of attributes – or personal fluencies – that can be taught, evaluated, and measured independently of the practical tasks and contexts in which they are used.

And they have similar conclusions –

We argue that understanding the interplay between knowledge formation, workplace learning, and information technologies is crucial for the success of IL initiatives.  Needs for information and information skills are embedded in work practice and domain-dependent tasks.

So from here the authors look back at the IL discussion over time. They locate its start in the 1960’s and early 1970’s, trace the initial involvement of professional associations in the 1980’s, touch on the Big 6 model in the 1990’s and then argue that the concept of information literacy began to be associated with the broader concept of lifelong learning in the 1990’s.  They conclude this history section with the argument that since the 1990’s there have been many attempts to define competency standards for information literacy.

From here, they move to talking about challenges to the idea of information literacy.  Interestingly, they place the argument that IL instruction requires cooperation with faculty, integration into the curriculum, and a grounding in content-focused classroom assignments as one such challenge.  Given that that model has been presented to me as the norm (with the separate, credit-course instruction idea as the exception) since I was in library school, this rang a little strange to me.

The authors dismiss the challenges to IL.  They argue that so long as definitions of IL take the individual as subject, and outline a set of generic, transferable skills that individual can master – there is broad agreement as to what the potentially vague concept of “information literacy” means.  They argue that the ACRL IL Standards for Higher Education, for example, define a set of generic skills that are supposed to have relevance across the disciplines and across contexts.

This is very interesting to me, because we spent a long time on my campus defining just that kind of generic, transferable information literacy standards.  We did so in conjunction with faculty across the disciplines – from all of our colleges, teaching all levels of undergraduates.  The thing is, this was a really invigorating process.  We held focus groups with faculty, had conversations with a lot of programs and units across campus, and I’m really, really proud of the document we came up with.  As a model for objectives/goal-writing, this document is not bad.  Look at the action verbs!

And more than that, the repeated conversations with faculty were really morale-boosting.  Getting faculty to come over to the library and talk, and talk in-depth and really, really intelligently about information literacy wasn’t a challenge – it was easy.  And the faculty had such useful and smart things to say about the stuff we all cared about.  It was a good process.

Since then, we’ve been wondering what to do with the document.  Our campus doesn’t have any institution-wide learning goals; we don’t have a structure where our competencies could fit in or be adopted on a campus-wide level.  So that’s an issue.  But even within the library, we’ve struggled with where to go next.

And I think that the factors mentioned in this article may have something to do with it.  We use the course-integrated, there-should-be-an-assignment, IL-has-more-meaning-when-taught-in-the-context-of-an-actual-information-need model.  And we thought and still think that we *could* define disciplinary or context-specific versions of our competencies (or at least of the examples), but we haven’t done that.  Our one attempt to do so got bogged down in a nightmare mire of granularity.

We want to define a program that integrates the branch campuses, the archives/special collections, faculty programs, and all levels of graduate/undergraduate student instruction, and I’m not sure that the competency document is that helpful in doing that.  It was a useful reflective exercise for us, and the process of creating it collaboratively with faculty was very useful.  But beyond that, I’m not sure how to make it useful for us as we try to structure a doing-more-with-less type of instruction program.

And it might be because of what is articulated in this article.  When I think of how to create a document like this for beginning composition, for example – which, while multidisciplinary, has a clearly articulated goal of introducing students to academic writing and knowledge creation – it seems far more tractable.  And that goal is a context – academic writing provides a context.  Context is even easier to conceptualize in different fields.

The authors argue that these standards-based ideas of IL are based on an assumption of information as something factual and knowable (I think our collaborative process with faculty undercut this in our case).  They also suggest that the standards are too focused on the individual as agent seeking and using information.  This piece I have a little bit of a problem with.  It’s kind of a typical criticism of constructivism – arguing that it’s too individual, too grounded in individual cognitive processes:

Most of the published IL literature draws from constructivist theories of learning stressing that individuals not only absorb the messages carried by information but are also active builders of sense and meaning.

What they’re missing here – or probably not giving the emphasis that I would give, more than actually ignoring – is Vygotsky.  Kuhlthau, whom they acknowledge as influential, deliberately focuses on Vygotsky’s brand of constructivism, which was a deliberate effort to integrate the social and the cultural back IN to constructivism.  Still, much as I love Vygotsky, and much as I respect Kuhlthau for going that route – I have to agree that the *image* of the solitary scholar undergirds the picture painted by most IL competency standards.

Beyond this, the authors’ idea of the social in information literacy is very specific – grounded in the idea of sociotechnical practice.  As they suggest, “the most important aspects of IL may be those that cannot be measured at the level of the individual alone.”  By this they mean that it is not the individual but the community that decides what kinds of sources are useful, and valued, and important – which things you have to master to be successful within the community:

Groups and communities read and evaluate texts collaboratively.  Interpretation and evaluation in scientific and other knowledge domains is undertaken in specialized “communities of practice,” or “epistemic communities.”

Which is why I think the “academic writing” context in beginning composition is not too broad to be useful.  For new or neophyte scholars, the idea that there are practices of communicating knowledge, that there are types of knowledge more valued than others – these ideas are new enough that they deserve an introduction all their own.  Expecting students to jump into the epistemic community of a discipline before they really understand that there is such a thing as epistemology… that seems unreasonable to me.

The authors here argue that IL misses the communities-of-practice aspect because it is too grounded in school.  I think I would probably argue (though this just occurred to me, and I’m a classic introvert, which means I need more processing time and thus must reserve the right to argue the opposite of this later) that the problem isn’t that we focus on generic academic writing skills instead of grounding things in context – it’s that we present generic academic writing skills without really grounding them in their context.  I agree that we have an assumption that these skills, mastered in any context, will be useful and valuable.  That we don’t have to explain their significance across contexts because students will be able to draw those connections.  I’m not sure that’s true unless we specifically, and deliberately, explain the academic context in the first place.

And I really like the idea of grounding those pieces of information literacy – that what you will even be looking for is determined by the discourse, by the practice-standards of a particular discourse community — in the community or the context.  So, to these authors, “sociotechnical practice” means identifying where the community determines what it means to be information literate.    And that’s really, really valuable.

Beyond this, I think we need to start deliberately teaching our students how to figure out what those community standards are.  Not teaching them what they are – but how to figure that out.  I wondered the other day if students are using search engines to figure out how to enter the scholarly discourse even if they aren’t taught specifically what “peer reviewed” means, or anything like that.  Looking at my referral logs, I don’t think they are.  That kind of bothers me – they should know how to go looking for the how-to information they need.  And I suspect we should start teaching it.

Kimmo Tuominen, Reijo Savolainen, & Sanna Talja (2005). Information Literacy as a Sociotechnical Practice. The Library Quarterly, 75(3), 329-345. DOI: 10.1086/497311

Kicking off Peer Reviewed Mondays

I am apparently not the only theory geek out there.  But I realized that I haven’t been doing a great job of putting my money where my mouth is when it comes to the value of peer-reviewed articles.  So my Monday evenings this term look like they are, for a variety of reasons, going to lend themselves to this experiment – Peer Reviewed Mondays.

Which means – Mondays I’ll try to at least point to a peer-reviewed article that I think is worth talking about.  I’ll also try to explain why.  That seems do-able.

So I’ve had the idea for this for a while, which of course means that I haven’t come across anything peer reviewed that really inspired me to write.  But today, I was inspired by a presentation I heard on campus to go poking around into journals I haven’t looked at before and I came across this article (PDF) from 2004, which takes a different kind of look at issues of evaluation when it comes to science and research and peer reviewed papers.  The meta of that delights me, and I think this paper is a useful addition to the stuff in my brain.

The article is called The Rhetorical Situation of the Scientific Paper and the “Appearance” of Objectivity.

(Useless aside – when the presenter today was being introduced I was unreasonably amused because to fully express the meaning of each journal article, the professor doing the introducing had to read out all of the punctuation – like the quotation marks above.  That doesn’t happen in library papers enough. )

Allen’s method is simple.  He articulates ideas and theories from the rhetoric/composition discourse, and then analyzes a single scholarly article –

in this essay, I analyze Renske Wassenberg, Jeffrey E. Max, Scott D. Lindgren, and Amy Schatz’s article, “Sustained Attention in Children and Adolescents after Traumatic Brain Injury: Relation to Severity of Injury, Adaptive Functioning, ADHD and Social Background” (herein referred to as “Sustained Attention in Children and Adolescents”), recently published in Brain Injury, to illustrate that the writer of an  experimental report in effect creates an exigence and then addresses it through rhetorical strategies that contribute to the appearance of objectivity.

Allen’s initial discussions – that scientific rhetoric is rhetoric, that scholarly objectivity has limits, and that specific rhetorical decisions (like the use, or what might be considered overuse in any other context, of the passive voice) are employed to enhance the impression of objectivity – are interesting but not earth-shattering.  The nuggets of real interest, for me, were in the concluding sections.  Allen draws upon John Swales, who examined closely how scientists establish the idea that their research question is important.

Swales called the strategy Create a Research Space (CARS)  and identified three main rhetorical moves that most scholars make.  As Allen describes them:

  • Establish centrality within the research.
  • Establish a niche within the research.
  • Occupy the niche.

So what’s interesting about this to me is Allen’s conclusion that scientific authors seek to establish their study’s relevance through the implication that they share the same central assumptions and information base.

The reason this is interesting is that it dovetails neatly with Thomas Kuhn’s idea of normal science – the idea that a shared set of first principles is central to the idea of scholarly discourse, particularly a discourse that advances knowledge.   Where that intersects with the idea of peer review is in the idea of what makes it through peer review – the idea that an article or a piece of research needs to be more than just good or interesting research, it also needs to be a good example of what research is in a particular discipline, or discourse community.

Allen compares the relatively ordinary piece of research that he is analyzing with a far more influential piece, and finds that some of the rhetorical strategies used to establish the science-ness and the objectivity of the former are present, but far less prominent, in the latter.  This – especially the idea that authors proposing truly ground-breaking research might be less likely to use the passive, “objective” voice, and more likely to refer to themselves as active agents – is simply fascinating.

Allen points out that the very language we use to talk about scholarly research – creating meaning, knowledge, verifying truth claims – implies that the situation in which scientists communicate their findings is rhetorical.  He also points out that it is not just scientists – but the rest of us who rely in many ways on the meaning and knowledge scientists create – who need to understand these rhetorical practices.  His last sentence, in fact, is as good a justification for information literacy in higher ed as I’ve seen:

Certainly, scientists and researchers should be aware of embedded rhetorical strategies. But given the profound and pervasive influence of science in Western culture, we should all––scientist or not––be attentive to how our knowledge is shaped.

And now on to that delicious meta I love so much.

I love the idea of using this article to talk about what scholarship really is with undergraduates, particularly undergrads working on understanding academic writing.  But here’s the thing – the author of this article is himself an undergraduate writer.


The article appears in Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric – a peer-reviewed journal, with an editorial board made up of faculty in rhetoric and composition.  The content of the journal is all by undergraduate researchers, and the peer reviewers themselves are undergraduates who have also published in the journal.

The journal reflects some of the practices of open peer review that fascinate me – especially as information literacy resources for students as they learn the practice of scholarly knowledge creation.  The review process itself is not made public, but each issue of the journal has a Comments and Responses section where student writers write 2-5 page responses to papers that have been published in the journal.

Which gets to the last piece of meta.  The one overwhelming benefit of the peer-review process – rarely discussed by us librarians in information literacy classes when we have to talk to students about finding peer-reviewed articles, and rarely discussed by the faculty who require their students to find them – the one thing that is pretty much unanimously agreed upon, is that the process of peer review makes the paper better.  Maybe not all better, maybe not better in the same way as it would be if it were reviewed by different people.  But better than it would have been had it not gone through that process.  This paper is beautifully written – clear and accessible and smart.

I love the idea of digging into the idea of peer review, using student engagement with the peer review process as an entry point — but to be able to do that with a paper that itself should spark new ideas about the value of scholarship, and how to evaluate scholarship – that looks kind of like a gift.

Matthew Allen (2004). The Rhetorical Situation of the Scientific Paper and the “Appearance” of Objectivity. Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric, 2.

Why we should read it before we cite it — no, really!

Last week, Female Science Professor wrote a lovely pair of posts about scholars and scholarship, what it feels like when your work has an impact on someone and what it feels like to meet the people who have influenced you in that particular undefinable way where it’s hard to even express what they’ve meant to you. I shared one, saved the other and generally felt very good about being a small part of this world where rock star crushes on ideas and the people who share them are understood and embraced.

Way to ruin everything, Inside Higher Ed.

Okay, not really. But seriously, it’s a lot harder to feel like a rock star because someone has read and used your work if, as Malcolm Wright and J. Scott Armstrong suggest, they probably didn’t read it and if they did, they probably read it wrong.

That might be a little bit strong, but not by much. So what does it mean when a published, peer-reviewed article in a real life journal kicks off its final paragraph with this sentence: “Authors should read the papers they cite.”


This isn’t a library tutorial aimed at fifth-graders writing their first research paper, after all. This is a paper talking about what professional scholars, people responsible for the continued development of knowledge in disciplines, should do. It can’t mean anything good. Here’s the original article:

Article at Interfaces – requires subscription

Article at Dr. Armstrong’s faculty page – does NOT require a subscription – (opens in PDF)

Nutshell – Dr. Armstrong wrote one of the more impact-heavy articles in his discipline, and the only article that analyzes and explains how to correct for non-response bias in mail surveys (that’s bias caused by people who do not respond at all to the survey). By analyzing (1) how often research based on mail surveys includes a citation to this article, and (2) how often the later researchers seem to interpret and apply the original article correctly, the authors conclude that many, many researchers are not reading all of the relevant literature. More disturbingly, many, many researchers aren’t even reading all of the articles they themselves cite.

Now, on one level this isn’t a shocker – anyone who has read moderately deeply in any body of literature has probably looked at at least one bloated literature review and said “hey – this person probably didn’t really read all of these books and articles.” This article suggests that it’s more complex than just lit-review padding, that scholarly authors also mis-cite and mis-use the resources they use to support the methods they use and the conclusions they draw.

Working on the assumption that if your research uses a mail survey, you should at least be considering the possibility of nonresponse bias, they found that:

…far less than one in a thousand mail surveys consider evidence-based findings related to nonresponse bias. This has occurred even though the paper was published in 1977 and has been available in full text on the Internet for many years.

Working on the further assumption that someone who makes a claim about nonresponse bias – and who reads, understands, and cites an article outlining a particular method for correcting it – will actually follow the method in the article they cited, the authors conclude that many authors are either not reading or not understanding the articles they cite:

The net result is that whereas evidence-based procedures for dealing with nonresponse bias have been available since 1977, they are properly applied only about once every 50 times that they are mentioned, and they are mentioned in only about one out of every 80 academic mail surveys.

Most of the research that seriously digs into how well researchers use the sources they cite has come out of the sciences, particularly the medical sciences. This is one of the first articles I’ve seen dealing with the social sciences, and I think it’s worth reading more closely, because this very rough and brief summary doesn’t really do justice to the issues it raises. But right now, I want to turn to the authors’ conclusions, because I think they get at some of the things we’ve been talking about around here – how new technologies and the read/write web might have an impact on scholarship.

The first two outline author responsibilities:

  • First – read the sources you cite. I think we can take that as a given – a bare-minimum practice, not a best practice.
  • Second, “authors should use the verification of citations procedure.” Here they’re calling for authors to contact all of the researchers whose work they want to cite, to make sure that they’re citing it correctly. I’m going to come back to this one.

The second two put some of the burden on the journals:

  • Journals should require authors to attest that they have in fact read the work they cite and that they have performed due diligence to make sure their citations are correct. That seems a sad, largely symbolic, but not unreasonable precaution.
  • Finally, journals should provide easily accessible webspaces for other people to post additional work and additional research that is relevant to research that has been published in the journal. Going to come back to this one too because I think it’s actually related to the one above.

Basically – both of these recommendations suggest that more communication and more transparency would be better for knowledge creation. And what is the read/write web about, if not communication and transparency, networking and openness?

Some of the commenters on the IHE article expressed, shall we say, polite skepticism that an author should be obligated to contact every person they cite before citing them. These concerns were also raised by one of the formal comment pieces attached to the Interfaces article. And I have to say I agree with these concerns, for a few reasons. Armstrong claimed more than once that he does this as an author, with good results, and that the process is not too onerous. But that doesn’t really address the question of how onerous it would be for a prolific or influential author to have to field all of those requests.

And I’ll also admit to having some “the author is dead” reactions to this. What if I contact Author A to say I’m planning to use your work in this way, and they say “well, I didn’t intend it to be used in that way, so you shouldn’t”? Does that really mean I shouldn’t? Really? It’s hard to see this kind of thing not devolving quickly into something that actually hinders the development of new knowledge, because it hinders new researchers’ ability to push at, and find new connections in, the work that has come before.

But not to throw everything out with this bathwater – the idea that more and better and faster communication between scholars (more and better and faster than can be provided within journals and the citation-as-communication tradition) makes for better scholarly conversations and better scholarship – that’s something I think we need to hold on to. Armstrong points out how talking to the researcher who really knows the area described in the thing you’re citing can point you to other, less-cited but more useful resources – how they can expand your knowledge of the field you’re talking about:

We checked with Franke to verify that we cited his work correctly. He directed us to a broader literature, and noted that Franke (1980) provided a longer and more technically sophisticated criticism; this later paper has been cited in the ISI Citation Index just nine times as of August 2006.

This is an area where the transparency, speed, and networking aspects of the emerging web might have a real impact on the quality of scholarship, even if there are no material changes in the practice of producing journal articles. I might not be sure about making this communication a part of citation verification, but it should be a part of knowledge creation. And it’s tied as well to the final recommendation – that journals should provide webspaces for some, not all but some, of this communication to happen.

The types of conversations between similarly interested scholars that Armstrong describes are nothing new – the emerging web offers some opportunities for those conversations to move off the backchannel. Or maybe it’s still a backchannel, but a backchannel made visible is interesting in itself. Whether the journal hosts its own backchannel for errors, additions, omissions, and new ideas, or whether that backchannel exists on blogs, in online knowledge communities, or in networking spaces matters less than that it can exist. We certainly have the technology.

And the journal Interfaces itself, I think, suggests why this kind of additional discourse and conversation is valuable. You may have noticed that what looks like a fifteen-page article is really an eight-page article followed by six pages of response pieces and an authors’ reply. The responses challenge parts of the original article, and enrich other parts with additional information and examples. They illustrate the collaborative nature of knowledge production in the disciplines in a way that citations alone cannot. I couldn’t find anything on the journal’s website about this practice – whether it’s a regular thing, how responses are solicited, or anything more. These responses are a spot of openness in a fairly closed publication.

And that points to the last thing to say here, because this is far too long already – I don’t think we have to change everything to fix the problems raised here, and I don’t think that changing everything would fix all of them, either. There’s that scene in Bull Durham where Ebby Calvin gets his guitar taken away because he won’t get the lyrics right. And that’s the connection between FemaleScienceProfessor and Armstrong and Wright – who can feel like a rock star if they’re singing your songs but getting them wrong?

There will always be Ebby Calvins out there, inside and outside of academia – for them, women are wooly because of the stress. But injecting just some openness, making some communication visible, won’t stop Ebby Calvin – it might, though, keep the next person from replicating his mistakes. And that’s a good thing.