it is too much, let me sum up

There was a little flurry of conversation in my social networks about Mark Bauerlein’s recent offering on the Brainstorm blog (at the Chronicle), and I just realized that it was almost all in the rhet/comp corners of those networks – so in case library friends haven’t seen it, it’s worth looking at:

All Summary, No Critical Thinking 

Pull Quote:

From now on, my syllabus will require no research papers, no analytical tasks, no thesis, no argument, no conclusion.  No critical thinking and no higher-order thinking skills.  Instead, the semester will run up 14 two-page summaries (plus the homework exercises).

Students will read the great books and assignments will ask them to summarize designated parts.

A soft description of the conversations I saw would be “skeptical.” There were those who thought this was an April Fool’s joke, until they noticed the byline. I *think* it reads like an effort to solve a problem that’s not really about summary, but about reading. I italicize “think” there because I don’t really get the summary idea – it seems to me that people who only engage enough with argumentative writing to cherry-pick quotes from source texts will be just as able to create “summaries” that don’t reflect any more than a superficial understanding of those source texts.

Michael Faris pointed out Alex Reid’s excellent response, which does a much better job of problematizing the summary than I could:

The Role of Summary in Composition (digital digs)

I believe we misidentify the challenges of first-year composition when we focus on student lack and specifically on the lack of “skills.” Our challenge is to take students who do not believe they are writers (despite all the writing they do outside school), who do not value writing, who do not believe they have the capacity to succeed as writers, and who simply wish to get done with this course and give them a path to developing a lasting writing practice that will extend beyond the end of the semester.

Isn’t that a great, um, summary of why writing teaching matters?

Can we substitute “researchers” for “writers” here? I kind of like the resulting statement, but it makes me uncomfortable as well, because – can we do that, are we doing that, with our current models?

it has been a while, yes

Wow, that was kind of an unplanned hiatus. Since I last posted: my library has hired a new University Librarian, I received tenure, I gave some talks, almost all of Spring term has gone by, I was surprised by a completely unexpected but lovely award, I finally finished the IS executive committee minutes from Midwinter, and I submitted an epic proposal for IRB approval.

I am also almost done with an actual blog post.  Until that’s done, though, here’s something awesome:  a news website article (from The Guardian UK) entitled This is a news website article about a scientific paper.

A sneak preview -

This paragraph elaborates on the claim, adding weasel-words like “the scientists say” to shift responsibility for establishing the likely truth or accuracy of the research findings on to absolutely anybody else but me, the journalist.

If I could summarize one of my goals for library instruction it would be – to make sure OSU students understand the scholarly article better than this.

so whatever happened with that fear factor book?

Or, sort of peer-reviewed Monday!  Not quite, but a book review.

I didn’t want to list the name of the book in question before because I hadn’t read it yet, and didn’t want to answer questions from people who might have found the post by googling the book title.  Especially if they were people who really liked the book, because I didn’t know yet if I liked it.   And there are people who really, really, really like it –

Best book ever on how to prepare students for college (Jay Mathews, Washington Post blogs)

I don’t really agree with the title there – the point of this book didn’t seem to me to be about preparing students for college so much as it is about preparing college for students.

Citation: Cox, Rebecca D. (2009). The College Fear Factor. Cambridge, MA: Harvard University Press.

The book is based on five years of data gathered from community college students. The author herself did two studies examining community college classrooms: one followed a basic writing class and the other looked at six sections of an English composition class. Each lasted a semester and gathered qualitative data about the classroom conditions that had an impact on students’ successful completion of the course. She also participated in a large-scale field study of 15 community colleges in six states, and in another national study of technological education.

The argument in the book comes from research on community college students, but it is still of interest to those of us who work with students managing the transition to college at any institution.  It is perhaps more relevant to those of us at institutions that attract a significant number of first-generation college students.

I am not entirely sure what I think of the book – on the one hand, it was a quick, easy read and I enjoyed it as I usually enjoy well-reported stories drawn from qualitative investigation. On the other hand, it struck me as one of those books that reports an important conclusion, but one that could have been well covered in an article-length treatment. The conclusion is drawn over and over again in this longer work, so sometimes chapters went by without my feeling that I had really encountered anything new.

What is that conclusion? I said to Caleb earlier that I wasn’t sure where the fear factor part of the title came into the book (because at that point I was on about page 17), and I have to say now that the title is good insofar as the real point of this book is fear, and how that emotional state affects student success. (Insofar as it evokes a really awful reality show, on the other hand, not so good.)

And in this, I think the book is valuable.  We don’t think and talk about the role of affect enough in higher ed – at least not on the academic side – nor about the intersections between affect and cognition and affect and everything else we do, and this book is an important corrective to that.  Basically, Cox argues that students can be scared away from completing college – not because they are not capable of doing college-level work, but because they have not been prepared to do it before they get to college, and they are not helped to do it once they arrive.

The many students who seriously doubted their ability to succeed, however, were anxiously waiting for their shortcomings to be exposed, at which point they would be stopped from pursuing their goals.  Fragile and fearful, these students expressed their concern in several ways: in reference to college professors, particular courses or subject matter, and the entire notion of college itself — whether at the two- or the four-year level.  At the core of different expressions of fear, however, were the same feelings of dread and the apprehension that success in college would prove to be an unrealizable dream.

Cox argues that these fears are exacerbated when one doesn’t come into college knowing how to DO college. And that most first-generation, non-traditional, and other groups of our students don’t come to college knowing what the culture and mores of academia are. They have expectations, but those aren’t based on experience (theirs or others’), and when those expectations are challenged, their reaction is to think they can’t do it at all or to convince themselves that it is not worth doing.

Professors, too, have their own set of expectations about how good students approach their education, and when faced with student behaviors that differ from those expectations, they make some faulty assumptions about why students are behaving the way they are. A student who attends class every day but never turns anything in — that’s incomprehensible behavior to the professor, who doesn’t understand how that student could possibly think they are going to pass. After reading Cox’s book, you consider the possibility that that student doesn’t think they are going to pass, but is just playing out the semester in a kind of depressed resignation.

I still feel like I am missing from this book much of a sense of why professors have these expectations — besides “that’s the way we’ve always done things.” In other words, it doesn’t really work for me (nor do I think Cox is really claiming) that there is no value at all to the way that professors were trained, and that they are hanging on to methods that don’t work simply because they went through it, so others should have to. Yeah, yeah, there are professors like that. But my sense as a person engaged in higher ed is that a lot of professors think there is value in the way they look at learning, meaning-making, and knowledge creation, and that the joy they get from teaching comes from working with students who can share that joy.

Cox does a good job arguing that many of the students those professors have are not ready to do that, but I don’t get the sense from her book that she sees the value in that view of education. I have a much clearer vision from this book of what the students Cox interviewed value — mostly the economic benefit they connect to the credential — but because her research didn’t extend to the teachers, I don’t have that same sense of them.

Here’s the thing – universities aren’t just about the teaching. They’re not going to be just about the teaching, and it’s not a really hard argument to make that they shouldn’t be. A lot of professors were hired for their research, and the research they do makes the world better, and connecting students to that kind of knowledge creation is cool. And even when universities are about teaching, they’re not just about the teaching of first-year undergraduates making the transition into college. Even those students, in a few years, immersed in a major, are going to need something different than they need when they first hit campus.

It is not useful to sit in meetings about teaching and spend all your time discussing the students you should have (and yeah, we’ve all heard those discussions). I’m sure I’m not the only one to say that at some point you have to put your energy into the students you have. But when I say that, and when a lot of people say that, we don’t mean that the students we have can’t learn how to participate in academic culture. We don’t mean that academic culture has no value to these students. Which is the really valuable point in this book – unprepared does not equal incapable. I don’t want to say the book offers no solutions, because it does. I guess what I do have to say is that I don’t find those solutions convincing in a research university environment.

All of this, of course, goes well beyond the scope of Cox’s book and Cox’s research, which is about particular students in a particular setting where teaching, and the transition to college, is paramount.

It’s a long way of saying that while the book has value to those outside the community college setting, that value only goes so far. There is more work to be done figuring out answers to the questions she raises in other environments.

Which is why the chapter that was probably the most interesting to me is chapter 5, which examines the work being done by two composition instructors – instructors who by most accounts are doing everything “right” in their classrooms: right by the Chickering-type standards of active learning and engagement, and right by what we are constantly told today’s hands-on, tech-savvy, experiential-learning-wanting students need. In other words, they’re doing the things we think we should be doing in the research university to connect students to what it is that scholars do – and they’re failing.

The idea that students have to be forced to be free is not a new one, but it is a point that gets lost sometimes in discussions about what is wrong with higher ed. We hear that lectures are dead, that students can’t learn that way, that they hate lectures, they tune out, they want to learn for themselves, and … it just doesn’t always reflect my experience. They may hate lectures, but that doesn’t mean lecturing isn’t what they think higher education should be. They bring their expectations with them, and professors who try to turn over some control of the classroom, and of learning, to their students can be shot down for “not doing their job.” That’s what Cox found, and I’ve certainly seen it happen. The assumptions these professors are falling victim to aren’t assumptions that students will be unprepared, or ignorant, or unwilling to learn – they’re more the opposite. They assume that students will be curious, will have a voice they want to get out there, will have learning they want to take responsibility for.

So, I’m glad I didn’t bail – but I’m also glad the book didn’t take more than a couple nights to read.

Not quite peer-reviewed Monday, but related!

So slammed, so briefly (well, for me). Via CrookedTimber, a pointer to this post by Julian Sanchez on argumentative fallacies, experts, non-experts, and debates about climate change. It’s well worth reading, especially if you are interested in the question of how non-experts can evaluate and use expert information, which is a topic that I think should be of interest to any academic librarian.

Obviously, when it comes to an argument between trained scientific specialists, they ought to ignore the consensus and deal directly with the argument on its merits. But most of us are not actually in any position to deal with the arguments on the merits.

Sanchez argues that most of us have to rely upon the credibility of the author — which is a strategy many librarians also espouse — in part because someone who truly wants to confuse them can do so, and sound very plausible.

Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic.

Further, he suggests that the person who wants to confuse a complex issue actually has an advantage over those who want to talk about the complexity:

Actually, I have a plausible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience:

Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument.

And that’s where we get to the evaluation piece.  We need to know how much we know to know whether it even makes sense to try and evaluate the arguments.  Because if we don’t know enough, trying to evaluate the quality of the actual argument will probably steer us astray more often than using credibility as our evaluation metric.

If we don’t sometimes defer to the expert consensus, we’ll systematically tend to go wrong in the face of one-way-hash arguments, at least our own necessarily limited domains of knowledge.

(Note:  I skipped most of the paragraph where he really explains the one-way hash argument – you should read it there)

The thing I really want to focus on is this – that one word, consensus.  Because I don’t think we do much with that idea in beginning composition courses, or beginning communication courses, or many other examples of “beginning” courses which often serve as a student’s first introduction to scholarly discourse.

And by “we” here, I’m talking about higher ed in general, not OSU in particular. I think we ask students in these beginning classes to find sources related to their argument; their own argument or interest is the thing that organizes the research they find. They work with each article outside of any context except what might be presented in the literature review – they don’t know if it’s steadily mainstream, a freakish outlier, or suggesting something really new.

So they go out and find their required scholarly sources, and they read them and they think about the argument in the scholarly paper and how it relates to the argument they are making in their own paper and try to evaluate it – and of course, they evaluate mostly on the question of how well it fits into their paper.   And what other option do they have?

Sanchez argues, and it rings true to me, that we usually don’t have the skills to evaluate the quality of the argument or research ourselves.  And I know that I am not at all comfortable with the “it was in a scholarly journal so it is good” method of evaluation.  Even if they find the author’s bona fides, I’m not sure that helps unless they can find out what their reputation is in the field, and isn’t that just another form of figuring out consensus?

In some fields meta-analyses would be helpful here, review essays in others, but so many students choose topics where neither of those tools is available that it’s hard to figure out how to use them in the non-disciplinary curriculum.

And perhaps it doesn’t matter – maybe just learning that there are scholarly journals and that there are disciplinary conventions is enough at the beginning level. But if that’s the case, then maybe we should let that question of evaluation, when it comes to scholarly arguments, go at that level too?

peer review, what is it good for? (peer-reviewed Monday)

ResearchBlogging.org

In a lot of disciplines, the peer reviewed literature is all about the new, but while the stories may be new, they’re usually told in the same same same old ways.  This is a genre that definitely has its generic conventions.  So while the what of the articles is new, it’s pretty unusual to see someone try to find a new way to share the what.  I’ll admit it, that was part of the attraction here.

And also attractive is that it is available.  Not really openly available, but it is in the free “sample” issue of the journal Perspectives on Psychological Science.  I’m pulling this one thing out of that issue, but there are seriously LOTS of articles that look interesting and relevant if you think about scholarship, research, or teaching/learning those things — articles about peer review, IRB, research methods, evidence-based practice, and more.

Trafimow, D., & Rice, S. (2009). What if social scientists had reviewed great scientific works of the past? Perspectives on Psychological Science, 4(1), 65-78. DOI: 10.1111/j.1745-6924.2009.01107.x

So here’s the conceit – the authors take several key scientific discoveries, pretend they have been submitted to social science/ psychology journals, and write up some typical social-science-y editors’ decisions.  The obvious argument of the paper is that reviewers in social science journals are harsher than those in the sciences, and as a result they are less likely to publish genius research.

No Matter Project (flickr)

I think that argument is a little bit of a red herring; the real argument of the paper is more nuanced.  The analogy I kept thinking about was the search committee with 120 application packets to go through – that first pass through, you have to look for reasons to take people out of the running, right?  That’s what they argue is going on with too many reviewers – they’re looking for reasons to reject.  They further argue that any reviewer can find things to criticize in any manuscript, and that just because an article can be criticized doesn’t mean it shouldn’t be published:

A major goal is to dramatize that a reviewer who wishes to find fault is always able to do so.  Therefore, the mere fact that a manuscript can be criticized provides insufficient reason to evaluate it negatively.

So, according to their little dramatizations, Eratosthenes, Galileo, Newton, Harvey, Stahl, Michelson and Morley, Einstein, and Borlaug would each have been rejected by social science reviewers, or at least some social science reviewers.  I won’t get into the specifics of the rejection letters – Einstein is called “insane” (though also genius – reviewers disagree, you know) and Harvey “fanciful” but beyond these obviously amusing conclusions are some deeper ideas about peer review and epistemology.

In their analysis section, Trafimow and Rice come up with 9 reasons why manuscripts are frequently rejected:

  • it’s implausible
  • there’s nothing new here
  • there are alternative explanations
  • it’s too complex
  • there’s a problem with methodology (or statistics)
  • incapable of falsification
  • the reasoning is circular
  • I have different questions I ask about applied work
  • I am making value judgments

A few of these relate to the inherent conservatism of science and peer review, which has been well established (and which was brought up here a few months ago). For example, plausibility refers to the way reviewers are inclined to accept what is already “known” as plausible, and to treat challenges to that received knowledge as implausible, no matter how strong the reasoning behind the challenging interpretation.

A few get at that “trying to find fault” thing I mentioned above. You can always come up with some “alternative explanation” for a researcher’s results, and you can always suggest some other test or some other measurement a researcher “should have” done. The trick is to suggest rejection only when you can show how that missing test or alternative explanation really matters, but they suggest that a lot of reviewers don’t do this.

Interestingly, Female Science Professor had a similar post today, about reviewers who claim that things are not new but do not provide citations to verify that claim. Trafimow and Rice spend a bit of time themselves on the “nothing new” reason for rejection. They suggest that there are five levels at which research can make a new contribution:

  • new experimental paradigm
  • new finding
  • new hypothesis
  • new theory
  • new unifying principle

They posit that few articles will be “new” in all of these ways, and that reviewers who want to reject an article can focus on the dimension where the research isn’t new, while ignoring what is.

Which relates to the value judgments, or at least to the value judgment they spend the most time on – the idea that social science reviewers value data, empirical data, more than anything else, even at the expense of potentially groundbreaking new theory that might push the discourse in that field forward.  They suggest that a really brilliant theory should be published in advance of the data – that other, subsequent researchers can work on that part.

And that piece is really interesting to me because the central conceit of this article focuses our attention, with hindsight, on rejections of work that would now fairly routinely be considered genius. And even the most knee-jerk, die-hard advocate of the peer review process would not argue that most of the knowledge reported in peer-reviewed journals is also genius. So what they’re really getting at here isn’t “does the process work for most stuff” so much as “are most reviewers in this field able to recognize genius when they see it, and are our accepted practices likely to help them or hinder them?”

More Revision, Djenan (flickr)

And here’s the thing – I am almost thinking that they think that recognizing genius isn’t up to the reviewers.  I know!  Crazytalk.  But one of the clearest advantages to peer review is that revision based on thoughtful, critical, constructive commentary by experts in the field will, inherently, make a paper better.  That’s an absolute statement but one I’m pretty comfortable making.

What I found striking about Trafimow and Rice’s piece is that over and over again I kept thinking that the problem with the practices they were identifying was that they led to reviews that weren’t helpful to the authors. They criticize suggestions that won’t make the paper better, conventions that shouldn’t apply to all research, and the like. They focus more on bad reviews than good, and they don’t really talk explicitly about the value of peer review, but if I had to point at the implicit value of peer review as suggested by this paper, that would be it.

There are two response pieces alongside this article, and the first one picks up this theme.  Raymond Nickerson does spend some time talking about one purpose of reviews being to ensure that published research meets some standard of quality, but he talks more about what works in peer review and what authors want from reviewers – and in this part of his response he talks about reviews that help authors make papers better.  In a small survey he did:

Ninety percent of the respondents expected reviewers to do substantially more than advise an editor regarding a manuscript’s publishability.  A majority (77%) expressed preferences for an editorial decision with detailed substantive feedback regarding problems and suggestions for improvement…

(Nickerson also takes issue with the other argument implied by the paper’s title – that the natural and physical sciences have been so much kinder to their geniuses. And in my limited knowledge of this area, that is a point well taken. That inherent conservatism of peer review certainly attaches in other fields – there’s a reason why Einstein’s theory of special relativity is so often put forward as the example of the theory published in advance of the data. It’s not the only one, but it is not like there are zillions of examples to choose from.)

Nickerson does agree with Trafimow and Rice’s central idea – that just because criticisms exist doesn’t mean new knowledge should be rejected.  M. Lynne Cooper, in the second response piece, also agrees with this premise but spends most of her time talking about the gatekeeper, or quality control, aspects of the peer review process.  And as a result, her argument, at least to me, is less compelling.

She seems too worried that potential reviewers will read Trafimow and Rice and conclude that they should never question methodology, or whether something is new — that just because Trafimow and Rice argue that these lines of evaluation can be misused, potential reviewers will assume that they cannot be properly used. That seems far-fetched to me, but what do I know? This isn’t my field.

Cooper focuses on what Trafimow and Rice don’t: what makes a good review.  A good review should:

  • Be evaluative and balanced between positives and negatives
  • Evaluate connections to the literature
  • Be factually accurate and provide examples and citations where criticisms are made
  • Be fair and unbiased
  • Be tactful
  • Treat the author as an equal

But I’m less convinced by Cooper’s suggestions for making this happen. She rejects the idea of open peer review in two sentences, but argues that having authors give (still anonymous) reviewers written feedback at the end of the process might cause reviewers to be more careful with their work. She does call, as does Nickerson, for formal training. She also suggests that the reviewers’ burden needs to decrease to give them time to do a good job, but other things I have read make me wonder about her suggestion that there be fewer reviewers per paper.

In any event, these seem at best like band-aid solutions for a much bigger problem. See, what none of these papers does (and it’s not their intent to do this) is talk about the bigger picture of scholarly communication and peer review. And that’s relevant, particularly when you start looking at these solutions. I was just at a presentation recently where someone argued that peer review was on its way out, not for any of the usual reasons but because they were being asked to review more and more, and had time to review less and less. Can limiting reviewing gigs to the best reviewers really work; can the burden on those reviewers be lightened enough?

The paper’s framing device includes science that pre-dates peer review, that pre-dates editorial peer review as we know it, or that didn’t go through the full peer-review process, which raises the question – do we need editorial peer review to make this kind of knowledge creation happen? Because the examples they’re putting out there aren’t Normal Science examples. These are the breaks and the shifts and the genius that the Normal Science process, kind of by definition, has trouble dealing with.

And I’m not saying that editors and reviewers and traditional practices don’t work for genius; that would be ridiculous. But I’m wondering if the peer-reviewed article is really the only way to get at all of the kinds of knowledge creation, of innovation, that the authors talk about in this article – is this process really the best way for scholars to communicate all five of the levels/kinds of new knowledge outlined above? I don’t want to lose the idealistic picture of expert, mentor scholars lending their expertise and knowledge to help make others’ contributions stronger. I don’t want to lose what extended reflection, revision, and collaboration can create.

I am really not sure that all kinds of scholarly communication or scholarly knowledge creation benefit from the iterative, lengthy, revision-based process of peer review. I guess what I’m saying is that I don’t think problems with peer review by themselves are why genius sometimes gets stifled, and I don’t think fixing peer review will mean genius gets shared. I don’t think the authors of any of these pieces think that either, but these pieces do raise the question – what else is there?

discovery and creation and… lies!

I’ve never really understood the whole pirate thing. Talk like a pirate day can come and go without my noticing, and despite the presence of Johnny Depp, I didn’t make it through the whole Pirates of the Caribbean trilogy.

So even if I had seen the mentions of the Last American Pirate hoax on the blogs I read all the time, I’m not sure that I would have bothered to follow the links. But maybe I would have. This story does combine two of my favorite things – scholarly uses of social media and history. Still, amidst holiday preparations and Oregon-style snowapocalypses I totally missed the initial stories on the topic.

Which is relevant in that I’m not a disgruntled blog reader feeling taken in. I was not personally hurt in any way by the deliberate historical hoax created by the students of History 389 at George Mason University last term.

And yet.  I keep thinking about it and I’m not sure I can really articulate why.

So, a quick recap. Professor Mills Kelly of eduwired.org fame taught a class on historical hoaxes last term. Early in the term, he gave advance notice that his class would be perpetrating its own historical hoax. The class created a fake story about the search for the Last American Pirate, a guy named Edward Owens. The search was chronicled by fake student Jane on a fake blog, discussed in fake interviews on YouTube, and finally reported as fact in a fake Wikipedia article. Some people were taken in by said hoax, most notably a pop culture blogger at USA Today. Kelly reportedly pulled the plug on the hoax when some of his real-world colleagues were taken in, and the whole thing was revealed in the December 19th Chronicle of Higher Education, in an article only available behind the paywall.

So why do I keep thinking about it? There has been a fair amount of discussion about it – some of it I really like, some of it talking about things I really don’t care about. There are some people who love the experiment. I’m not really moved by any of those arguments. They seem to be mostly focused on the idea that kids today can’t get into traditional historical research, so this is a good, creative alternative.

The criticisms I find most compelling are found here, where Michael Feldstein explains why vandalizing Wikipedia for the sake of a lesson is uncool, and here in the comments on Dr. Kelly’s reveal post.  Commenter Martha, in particular, talks about the impact of this kind of project on trust networks.  Given that trust networks are, I think, a crucial part of meaningful information evaluation on the social web – and thus a tool any information-literate student should know how to use in this context – an assignment that deliberately devalues and damages those networks strikes me as problematic, even if there is some small benefit on the cautionary-tale scale.

But that’s not what I keep thinking about except in a tangential way.  No, what’s got me thinking is what this project means for teaching information literacy and research — first in terms of the evaluation skills that are an overt, intended outcome articulated in the syllabus but also, and more deeply, in terms of research itself – why we do it and why we want students to do it.   These are, I suspect, related, but I’m not sure how.  Maybe if I write about it they’ll come together.  Maybe this will be in two parts.

Dr. Kelly says at the top that he is hoping for an information-literacy, information evaluation benefit to this assignment.

I’m hoping that this will mean that my students dig in and do some excellent historical research. I’m also hoping that they’ll learn a number of technical skills, will learn to work in a group, and will develop greater “information literacy” as we like to call it here. And, of course, I’m hoping they’ll have fun.

Specifically (from the syllabus – opens in PDF):

I do have some specific learning goals for this course. I hope that you’ll improve your research and analytical skills and that you’ll become a much better consumer of historical information. I hope you’ll become more skeptical without becoming too skeptical for your own good. I hope you’ll learn some new skills in the digital realm that can translate to other courses you take or to your eventual career. And, I hope you’ll be at least a little sneakier than you were before you started the course.

So the quick issue I have with this is that I just don’t see where the information literacy skills here translate into what most students need in their real work with online information sources.  Increasingly, I just think that a focus on deliberate hoaxes isn’t a very good way to teach students how to evaluate information.

Now I get that the work done to create the hoax might give the students in this class a greater appreciation for stuff that could make them more information literate, and that knowing specifically what they did to create a fake site might give them some stuff to look for in other sites.  But I don’t really see the larger benefit here beyond the reminder that stuff on the Internet can be fake – and I honestly think our students know that full well already.

Because here’s the first thing – helping students learn that there is stuff on the wild, wild web that was put there just to trick them,  to punk them or to prank them – well, there’s not a lot of value in that.  The punker or the pranker will either be really good at it, in which case all of the abstract stuff we might teach them about how to identify bad information won’t help them because the good pranker isn’t going to do any of that stuff.  Or, and this is more likely, the prank won’t be all that good.  And our students – I really think they’re very able to identify the obvious crap that exists online.

They don’t need help identifying stuff that is fake or wrong just for the sake of being fake or wrong because there’s not a ton of stuff like that out there.  Honestly, our ability to identify stuff that exists for no other reason than to trick us is not a real-world problem that keeps me up at night. Most people who put fake or wrong or misleading information out there on the Internet have an agenda beyond April Fool’s – they’re trying to do more than trick us and what our students need is help identifying those agendas. They need help identifying the information that isn’t flat out lies, but that is a particular kind of truth.

There’s not a lot of historical information TO evaluate in the pieces of this hoax that are available to the public – the blog talks a lot (I mean, a LOT) about how painful and difficult research in archives and microfilm collections is – but the details about the sources themselves are pretty light.  Most sources are presented as transcripts (“once I found the articles, there was no way to get a copy of them, apparently the machine is broken, so I had to transcribe them by hand,” that kind of thing).  The main thing that is presented as a digitized image is a will, not found in any archive or collection that could be investigated further – it is from the private attic-type collection of one of Edward Owens’ “descendants.”

Very clever.

No, what we have to consider here if we are evaluating information is not the quality of the historical sources in question (for the most part).  We don’t have the information to evaluate most of the fake sources, and beyond that – most historical sources in the world aren’t on blogs or YouTube so the skills that would help us evaluate them there wouldn’t necessarily translate to evaluating sources in archives.  What we really have to evaluate here are the classic foci of Internet evaluation: the authority of the scholar/author  herself and the nature of the digital tools used to present that scholarship.  And here is where I think it is useful to return to the criticisms mentioned above  – the tools we need to use to filter the social web are different than the tools of historical scholarship – and this project made those tools less useful for the rest of us.

Yes, we should remember that our trust networks and Wikipedia pages aren’t infallible.  Treating them as if they are is dumb and dangerous, of course.  But not starting from the assumption that someone is willing to do all this work just to fake you out?  That’s not unreasonable.  Creating a hoax like this just for its own sake, after all, isn’t fun enough to justify the work it takes to pull it off.  This one took an entire class of students working for a whole term, with the great big huge carrot of the GRADE as motivation, after all.  When someone, or a class of someones, does deliberately put false information out there – and I’m not talking here about the fake historical documents, but the fake blog posts and tweets and comments and pointers – it makes it harder for all of us to use the skills that really do help us navigate and evaluate the social web.

I think it’s pretty significant that outside of the USA Today blogger, most of the people who got excited about this story – excited enough to blog about it – weren’t excited because of the history beyond the “that’s kind of cool” level.  The excitement was about how “Jane” leveraged social media tools to present her research broadly:

This undergraduate took her research to the next level by framing the experience on her blog, full with images and details from her Library of Congress research, video interviews with scholars and her visit to Owens house, her bibliography, along with a link to the Wikipedia page she created for this little known local pirate.

Or stated more directly, after the reveal:

But I want to concentrate on something else. Amidst all the fiction, alternate and virtual realities, hoaxes and pranks, one thing jumps out at me as utterly real, wholly genuine, honest. Read Jim’s post on this when he first came across the project. Here is passion and excitement, a celebration of what a student might be able to achieve with the tools now available, given the right puzzle to work on and a supportive network and intellectual environment.

And I agree with all of this in theory, but in terms of this specific hoax there is still something missing to me, and it’s an important something.  It’s research – and inquiry – and discovery.

I know I am only seeing a tiny portion of what is going on in this classroom – and from the syllabus just the idea that one of the goals of the class is to show that hoaxes can themselves be the topic of serious historical research, just like wars or elections, is something I find fairly awesome.  I have no idea how the process of discovery was inculcated in the other projects the students did.  All I have is the public pieces of the course – the blogs, the videos, and the rest.

And that’s a piece of this discussion that shouldn’t be missed.  By putting this material up on the real web, on the public web, and by consciously trying to get people to access and engage with it, the class makes it fair to ask what kind of learning experience this material provides for those of us NOT in the class.  Is our learning experience supposed to be related to information literacy as well?  To history?  Or is it just a clever, creative prank?

Because here’s the next thing – I don’t think that there is much of a learning experience for the rest of us in this project – at least not in terms of information literacy.

Don’t get me wrong, I value creation and creativity.  I value world-building and imagination.  And I don’t think those things are separate from academic research.  There is definitely creativity and imagination in scholarly inquiry, in looking at sources and seeing what might have been or what could be and re-searching based on that new potential meaning.  Watching a class of students using the social web to extend and communicate such a learning process would itself be valuable in that information literacy context.

And I think there’s room in that picture for fiction as well – in telling a story that you know in your bones to be a kind of truth even though you can’t prove it, at least not in a way that would be recognized as proof, epistemologically speaking.  I think there are truths and stories and voices that can only be captured with fiction.  So it’s not the made up or false part that gives me pause.

But in the case of this project, as it is laid out for us to see, the public pieces of this class project combine to celebrate what a truly information-literate student can do to take control of their own learning – while that information literacy only exists on the surface.

This is why I have problems thinking about the pirate hoax as a great new way to talk about or teach information literacy. Because beyond the fact that I don’t think hoaxes are a great way to teach evaluation, I’m also not sure they are a great way to talk about research and scholarly creativity. At its heart, I think information literacy is inherently linked to inquiry, and discovery.   It’s about the ability to learn from information – not just to find the sources worth learning from but to use that new information to change the way you understand things, and change the way you approach the next question.

“Jane” talks endlessly about the physical pain she feels as a result of days of looking at microfilm:

But, I have no idea how I am functioning right now…I can barely look at the screen without wanting to throw up, my eyes are in so much pain.

And she goes on about how frustrating it is not to find that evidence in the documents that will prove that her pirate existed:

After my failed trip to the town, I was really discouraged. I found out enough information to keep me going, but nothing really substantial. I have not gotten any closer to figuring out a name, and my trips to the library that last four hours at a time to look through the microfilm (I’m convinced I’m causing permanent damage to my eyes), have yielded absolutely no results.

But she never talks about that other kind of pain and frustration that comes with research and learning – one of the big things that makes research hard – feeling stupid, or having to question what you thought you knew before.   That’s what I mean when I say “Jane’s” process is all surface-level.  She never finds anything in her research that leads her in a new direction. She finds additional things she can use on the path she’s already on, but that’s not the same.

In the end, it is a lucky break that brings Jane’s process to a close.  The lucky break isn’t the issue – the real issue is that at the end of the research process described in the blog she finds exactly the single, perfect source she had been looking for from the start.  The perfect source she imagined might exist, the one that would answer the narrow question she formulated before she even knew much about her topic at all.  That’s not how research usually works.  You could argue that’s not how good research ever works.

And that’s the last and main thing.  At no point does Jane really engage with something that leads her to change her mind about anything, to reevaluate her process, to go back over the same ground with a new understanding or a new set of questions.  It’s needle in the haystack searching she does – she has to be creative to find different ways into the haystacks but at the same time she’s not going into the haystacks to find out what’s there.  She’s going in to look for that one needle that she thinks/hopes must be there.

And yes, I get that she’s pretend, but the fictional process the real class came up with does suggest that historical research is difficult and tedious, and that one doesn’t make the great discovery by engaging with sources in an open-minded way.  If the class had been engaged in a discovery-based research process, I would hope that would have come through in their fictional avatar’s narrative.  It doesn’t.  There is no doubt that this group of students was truly engaged – playing with history, creating a new world and the characters to fill it.

I can’t find it now, but when I was reading about this project earlier I was struck by the description of how the topic was selected in the first place – all of the considerations were practical – not too well known, not too likely to inspire a lawsuit if the hoax was discovered, and so on.  The reasons for piracy were practical as well – a topic of broad popular interest, local, not likely to be something anyone would already be an expert on, etc.  They didn’t talk about discovering the space in the historical record for their hoax to exist, they talked about creating it.

And if it’s mainly about creativity – about the class’s engagement around creating this alternate reality, around engaging with each other, and about engaging with others on the social web – then I’m not sure I see the value in making it a hoax.  Except that the hoax was the topic of the rest of the class, to which we were not privy.  If the skills they were learning were about creativity and world-building, it seems like the resulting project could have taken the form of an ARG or a similar project, where those creative muscles could be flexed in the service of creating a world for the rest of us to play in, too.