it is too much, let me sum up

There was a little flurry of conversation in my social networks about Mark Bauerlein’s recent offering on the Brainstorm blog (at the Chronicle), and I just realized that it was almost all in the rhet/comp corners of those networks – so in case library friends haven’t seen it, it’s worth looking at:

All Summary, No Critical Thinking 

Pull Quote:

From now on, my syllabus will require no research papers, no analytical tasks, no thesis, no argument, no conclusion.  No critical thinking and no higher-order thinking skills.  Instead, the semester will run up 14 two-page summaries (plus the homework exercises).

Students will read the great books and assignments will ask them to summarize designated parts.

A soft description of the conversations I saw would be “skeptical.” There were those who thought this was an April Fool’s joke, until they noticed the byline.  I think it reads like an effort to solve a problem that’s not really about summary, but about reading.  I italicize “think” there, because I don’t really get the summary idea – it seems to me that people who only engage enough with argumentative writing to cherry-pick quotes from source texts will be just as able to create “summaries” that don’t reflect any more than a superficial understanding of those source texts.

Michael Faris pointed out Alex Reid’s excellent response, which does a much better job of problematizing the summary than I could:

The Role of Summary in Composition (digital digs)

I believe we misidentify the challenges of first-year composition when we focus on student lack and specifically on the lack of “skills.” Our challenge is to take students who do not believe they are writers (despite all the writing they do outside school), who do not value writing, who do not believe they have the capacity to succeed as writers, and who simply wish to get done with this course and give them a path to developing a lasting writing practice that will extend beyond the end of the semester.

Isn’t that a great, um, summary of why writing teaching matters?

Can we substitute “researchers” for “writers” here?  I kind of like the resulting statement, but it makes me uncomfortable as well, because – can we do, are we doing that with our current models?

it has been a while, yes

Wow, that was kind of an unplanned hiatus.  Since I last posted: my library has hired a new University Librarian, I received tenure, I gave some talks, almost all of Spring term has gone by, I was surprised by a completely unexpected but lovely award, I finally finished the IS executive committee minutes from Midwinter, and I submitted an epic proposal for IRB approval.

I am also almost done with an actual blog post.  Until that’s done, though, here’s something awesome:  a news website article (from The Guardian UK) entitled This is a news website article about a scientific paper.

A sneak preview -

This paragraph elaborates on the claim, adding weasel-words like “the scientists say” to shift responsibility for establishing the likely truth or accuracy of the research findings on to absolutely anybody else but me, the journalist.

If I could summarize one of my goals for library instruction it would be – to make sure OSU students understand the scholarly article better than this.

so whatever happened with that fear factor book?

Or, sort of peer-reviewed Monday!  Not quite, but a book review.

I didn’t want to list the name of the book in question before because I hadn’t read it yet, and didn’t want to answer questions from people who might have found the post by googling the book title.  Especially if they were people who really liked the book, because I didn’t know yet if I liked it.   And there are people who really, really, really like it –

Best book ever on how to prepare students for college (Jay Mathews, Washington Post blogs)

I don’t really agree with the title there – the point of this book didn’t seem to me to be about preparing students for college so much as it is about preparing college for students.

Citation:  Cox, Rebecca D. (2009).  The College Fear Factor.  Cambridge, MA: Harvard University Press.

The book is based on five years of data gathered from community college students.  The author herself did two studies examining community college classrooms: one looked at a basic writing class, the other at six sections of an English composition class.  Each lasted a semester and gathered qualitative data about the classroom conditions that had an impact on students’ successful completion of the course.  She also participated in a large-scale field study of 15 community colleges in 6 states, and in another national study of technological education.

The argument in the book comes from research on community college students, but it is still of interest to those of us who work with students managing the transition to college at any institution.  It is perhaps more relevant to those of us at institutions that attract a significant number of first-generation college students.

I am not sure entirely what I think of the book – on the one hand, it was a quick, easy read and I enjoyed it as I usually enjoy well-reported stories drawn from qualitative investigation.  On the other hand, it struck me as one of those books that reports on an important conclusion, but one that could have been well covered in an article-length treatment.  The conclusion is drawn over and over again in this longer work, so sometimes chapters went by without my feeling that I had really encountered anything new.

What is that conclusion?  I said to Caleb earlier that I wasn’t sure where the fear factor part of the title came into the book (because at that point I was only on about page 17), and I have to say now that the title is good insofar as the real point of this book is fear, and how that emotional state affects student success.  (Insofar as it evokes a really awful reality show, on the other hand, not so good.)

And in this, I think the book is valuable.  We don’t think and talk enough about the role of affect in higher ed – at least not on the academic side – nor about the intersections between affect and cognition, and between affect and everything else we do; this book is an important corrective to that.  Basically, Cox argues that students can be scared away from completing college – not because they are not capable of doing college-level work, but because they have not been prepared to do it before they get to college, and they are not helped to do it once they arrive.

The many students who seriously doubted their ability to succeed, however, were anxiously waiting for their shortcomings to be exposed, at which point they would be stopped from pursuing their goals.  Fragile and fearful, these students expressed their concern in several ways: in reference to college professors, particular courses or subject matter, and the entire notion of college itself — whether at the two- or the four-year level.  At the core of different expressions of fear, however, were the same feelings of dread and the apprehension that success in college would prove to be an unrealizable dream.

Cox argues that these fears are exacerbated when one doesn’t come into college knowing how to DO college.  And that most first-generation, non-traditional and other groups of our students don’t come to college knowing what the culture and mores of academia are.  They have expectations, but those aren’t based on experience (theirs or others’), and when those expectations are challenged, their reaction is to think they can’t do it at all or to convince themselves that it is not worth doing.

Professors too have their own set of expectations about how good students approach their education, and when faced with student behaviors that are different from what those expectations would suggest, they make some faulty assumptions about why students are behaving the way they are.  A student who attends class every day but never turns anything in — that’s incomprehensible behavior to the professor, who doesn’t understand how that student could possibly think they are going to pass.  After reading Cox’s book, you consider the possibility that that student doesn’t think they are going to pass, but is just playing out the semester in a depressed resignation.

I still feel like I am missing from this book much of a sense of why professors have these expectations – besides “that’s the way we’ve always done things.”  In other words, it doesn’t really work for me (nor do I think Cox is really claiming) that there is no value at all to the way that professors were trained, and that they are hanging on to methods that don’t work simply because they went through it so others should have to.  Yeah, yeah, there are professors like that.  But my sense as a person engaged in higher ed is that a lot of professors think there is value in the way they look at learning, meaning-making and knowledge creation, and that the joy they get from teaching comes from working with students who can share that joy.

Cox does a good job arguing that many of the students professors have are not ready to do that, but I don’t get the sense from her book that she doesn’t see the value in that view of education.  I have a much clearer vision from this book of what the students Cox interviewed value – mostly the economic benefit they connect to the credential – but because her research didn’t extend to the teachers, I don’t have that same sense from them.

Here’s the thing – universities aren’t just about the teaching.  They’re not going to be just about teaching, and it’s not a really hard argument to make that they shouldn’t be just about the teaching.  A lot of professors were hired for their research, and the research they do makes the world better, and connecting students to that kind of knowledge creation is cool.  And even when universities are about teaching, they’re not just about the teaching of first-year undergraduates making the transition into college.  Even those students, in a few years, immersed in a major, are going to need something different from what they need when they first hit campus.

It is not useful to sit in meetings about teaching and spend all your time discussing the students you should have (and yeah, we’ve all heard those discussions).  I’m sure I’m not the only one to say that at some point you have to put your energy into the students you have.  But when I say that, and when a lot of people say that, we don’t mean that the students we have can’t learn how to participate in academic culture.  We don’t mean that academic culture has no value to these students.  Which is the really valuable point in this book – unprepared does not equal incapable.  I don’t want to say the book offers no solutions, because it does.  I guess what I do have to say is that I don’t find those solutions convincing in a research university environment.

All of this, of course, goes well beyond the scope of Cox’s book and Cox’s research, which is about particular students in a particular setting where teaching, and the transition to college, is paramount.

It’s a long way of saying that while the book has value to those outside the community college setting, that value only goes so far.  There is more work to be done figuring out answers to the questions she raises in other environments.

Which is why the chapter that was probably the most interesting to me is chapter 5, which examines the work being done by two composition instructors – instructors who by most accounts are doing everything “right” in their classrooms – right by Chickering-type standards of active learning and engagement, and right by what we are constantly told today’s hands-on, tech-savvy, experiential-learning-wanting students need.  In other words, they’re doing the things we think we should be doing in the research university to connect students to what it is that scholars do – and they’re failing.

The idea that students have to be forced to be free is not a new one, but it is a point that gets lost sometimes in discussions about what is wrong with higher ed.  We hear that lectures are dead, that students can’t learn that way, that they hate lecturing, they tune out, they want to learn for themselves, and … it just doesn’t always reflect my experience. They may hate lectures, but that doesn’t mean that’s not what they think higher education should be.  They have their expectations that they bring with them, and professors that try to turn some control over the classroom and over learning to their students can be shot down for “not doing their job.”  That’s what Cox found, and I’ve certainly seen it happen.  The assumptions that these professors are falling victim to aren’t assumptions that students are going to be unprepared, or ignorant, or unwilling to learn – they’re more the opposite.  They assume that students will be curious, will have a voice they want to get out there, will have learning they want to take responsibility for.

So, I’m glad I didn’t bail – but I’m also glad the book didn’t take more than a couple nights to read.

Not quite peer-reviewed Monday, but related!

So slammed, so briefly (well, for me).  Via CrookedTimber, a pointer to this post by Julian Sanchez on argumentative fallacies, experts, non-experts and debates about climate change.  It’s well worth reading, especially if you are interested in the question of how non-experts can evaluate and use expert information, a topic that I think should be of interest to any academic librarian.

Obviously, when it comes to an argument between trained scientific specialists, they ought to ignore the consensus and deal directly with the argument on its merits. But most of us are not actually in any position to deal with the arguments on the merits.

Sanchez argues that most of us have to rely upon the credibility of the author — which is a strategy many librarians also espouse — in part because someone who truly wants to confuse them can do so, and sound very plausible.

Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic.

Further, he suggests that the person who wants to confuse a complex issue actually has an advantage over those who want to talk about the complexity:

Actually, I have a plausible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience:

Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument.

And that’s where we get to the evaluation piece.  We need to know how much we know, to know whether it even makes sense to try to evaluate the arguments.  Because if we don’t know enough, trying to evaluate the quality of the actual argument will probably steer us astray more often than using credibility as our evaluation metric.

If we don’t sometimes defer to the expert consensus, we’ll systematically tend to go wrong in the face of one-way-hash arguments, at least outside our own necessarily limited domains of knowledge.

(Note:  I skipped most of the paragraph where he really explains the one-way hash argument – you should read it there)

The thing I really want to focus on is this – that one word, consensus.  Because I don’t think we do much with that idea in beginning composition courses, or beginning communication courses, or many other examples of “beginning” courses which often serve as a student’s first introduction to scholarly discourse.

And by “we” here, I’m talking about higher ed in general, not OSU in particular.  I think we ask students in these beginning classes to find sources related to their argument; their own argument or interest is the thing that organizes the research they find.  They work with each article outside of any context, except what might be presented in the literature review – they don’t know if it’s solidly mainstream, a freakish outlier, or suggesting something really new.

So they go out and find their required scholarly sources, and they read them and they think about the argument in the scholarly paper and how it relates to the argument they are making in their own paper and try to evaluate it – and of course, they evaluate mostly on the question of how well it fits into their paper.   And what other option do they have?

Sanchez argues, and it rings true to me, that we usually don’t have the skills to evaluate the quality of the argument or research ourselves.  And I know that I am not at all comfortable with the “it was in a scholarly journal so it is good” method of evaluation.  Even if they find the author’s bona fides, I’m not sure that helps unless they can find out what their reputation is in the field, and isn’t that just another form of figuring out consensus?

In some fields meta-analyses would be helpful here, and in others review essays, but so many students choose topics where neither of those tools is available that it’s hard to figure out how to use them in the non-disciplinary curriculum.

And perhaps it doesn’t matter – maybe just learning that there are scholarly journals and that there are disciplinary conventions, is enough at the beginning level.  But if that’s the case, then maybe we should let that question of evaluation, when it comes to scholarly arguments, go at that level too?

peer review, what is it good for? (peer-reviewed Monday)

ResearchBlogging.org

In a lot of disciplines, the peer reviewed literature is all about the new, but while the stories may be new, they’re usually told in the same same same old ways.  This is a genre that definitely has its generic conventions.  So while the what of the articles is new, it’s pretty unusual to see someone try to find a new way to share the what.  I’ll admit it, that was part of the attraction here.

And also attractive is that it is available.  Not really openly available, but it is in the free “sample” issue of the journal Perspectives on Psychological Science.  I’m pulling this one thing out of that issue, but there are seriously LOTS of articles that look interesting and relevant if you think about scholarship, research, or teaching/learning those things — articles about peer review, IRB, research methods, evidence-based practice, and more.

Trafimow, D., & Rice, S. (2009). What If Social Scientists Had Reviewed Great Scientific Works of the Past? Perspectives on Psychological Science, 4(1), 65-78. DOI: 10.1111/j.1745-6924.2009.01107.x

So here’s the conceit – the authors take several key scientific discoveries, pretend they have been submitted to social science/ psychology journals, and write up some typical social-science-y editors’ decisions.  The obvious argument of the paper is that reviewers in social science journals are harsher than those in the sciences, and as a result they are less likely to publish genius research.

No Matter Project (flickr)

I think that argument is a little bit of a red herring; the real argument of the paper is more nuanced.  The analogy I kept thinking about was the search committee with 120 application packets to go through – that first pass through, you have to look for reasons to take people out of the running, right?  That’s what they argue is going on with too many reviewers – they’re looking for reasons to reject.  They further argue that any reviewer can find things to criticize in any manuscript, and that just because an article can be criticized doesn’t mean it shouldn’t be published:

A major goal is to dramatize that a reviewer who wishes to find fault is always able to do so.  Therefore, the mere fact that a manuscript can be criticized provides insufficient reason to evaluate it negatively.

So, according to their little dramatizations, Eratosthenes, Galileo, Newton, Harvey, Stahl, Michelson and Morley, Einstein, and Borlaug would each have been rejected by social science reviewers, or at least some social science reviewers.  I won’t get into the specifics of the rejection letters – Einstein is called “insane” (though also genius – reviewers disagree, you know) and Harvey “fanciful” but beyond these obviously amusing conclusions are some deeper ideas about peer review and epistemology.

In their analysis section, Trafimow and Rice come up with 9 reasons why manuscripts are frequently rejected:

  • it’s implausible
  • there’s nothing new here
  • there are alternative explanations
  • it’s too complex
  • there’s a problem with methodology (or statistics)
  • incapable of falsification
  • the reasoning is circular
  • I have different questions I ask about applied work
  • I am making value judgments

A few of these relate to the inherent conservatism of science and peer review, which has been well established (and which was brought up here a few months ago).  For example, the plausibility objection: reviewers are inclined to accept what is already “known” as plausible, and to treat challenges to that received knowledge as implausible, no matter how strong the reasoning behind the challenging interpretation.

A few get at that “trying to find fault” thing I mentioned above.  You can always come up with some “alternative explanation” for a researcher’s results, and you can always suggest some other test or some other measurement a researcher “should have” done.  The trick is to suggest rejection only when you can show that the missing test or alternative explanation really matters, and they suggest that a lot of reviewers don’t do this.

Interestingly, Female Science Professor had a similar post today, about reviewers who claim that things are not new, but who do not provide citations to verify that claim.  Trafimow and Rice spend a bit of time themselves on the “nothing new” reason for rejection.  They suggest that there are five levels at which new research or knowledge can make a contribution:

  • new experimental paradigm
  • new finding
  • new hypothesis
  • new theory
  • new unifying principle

They posit that few articles will be “new” in all of these ways, and that reviewers who want to reject an article can focus on the dimension where the research isn’t new, while ignoring what is.

Which relates to the value judgments, or at least to the value judgment they spend the most time on – the idea that social science reviewers value data, empirical data, more than anything else, even at the expense of potentially groundbreaking new theory that might push the discourse in that field forward.  They suggest that a really brilliant theory should be published in advance of the data – that other, subsequent researchers can work on that part.

And that piece is really interesting to me because the central conceit of this article focuses our attention, with hindsight, on rejections of stuff that would fairly routinely be considered genius.  And even the most knee-jerk, die-hard advocate of the peer review process would not argue that most of the knowledge reported in peer-reviewed journals is also genius.  So what they’re really getting at here isn’t “does the process work for most stuff” so much as “are most reviewers in this field able to recognize genius when they see it, and are our accepted practices likely to help them or hinder them?”

More Revision, Djenan (flickr)

And here’s the thing – I am almost thinking that they think that recognizing genius isn’t up to the reviewers.  I know!  Crazytalk.  But one of the clearest advantages to peer review is that revision based on thoughtful, critical, constructive commentary by experts in the field will, inherently, make a paper better.  That’s an absolute statement but one I’m pretty comfortable making.

What I found striking about Trafimow and Rice’s piece is that over and over again I kept thinking that the problem with the practices they were identifying was that those practices led to reviews that weren’t helpful to the authors.  They criticize suggestions that won’t make the paper better, conventions applied to research they shouldn’t apply to, and the like.  They focus more on bad reviews than good, and they don’t really talk explicitly about the value of peer review – but if I had to point at the implicit value of peer review as suggested by this paper, that would be it.

There are two response pieces alongside this article, and the first one picks up this theme.  Raymond Nickerson does spend some time talking about one purpose of reviews being to ensure that published research meets some standard of quality, but he talks more about what works in peer review and what authors want from reviewers – and in this part of his response he talks about reviews that help authors make papers better.  In a small survey he did:

Ninety percent of the respondents expected reviewers to do substantially more than advise an editor regarding a manuscript’s publishability.  A majority (77%) expressed preferences for an editorial decision with detailed substantive feedback regarding problems and suggestions for improvement…

(Nickerson also takes issue with the other argument implied by the paper’s title – that the natural and physical sciences have been so much kinder to their geniuses.  And in my limited knowledge of this area, that is a point well taken.  That inherent conservatism of peer review certainly attaches in other fields – there’s a reason why Einstein’s theory of special relativity is so often put forward as the example of the theory published in advance of the data.  It’s not the only one, but it is not like there are zillions of examples to choose from.)

Nickerson does agree with Trafimow and Rice’s central idea – that just because criticisms exist doesn’t mean new knowledge should be rejected.  M. Lynne Cooper, in the second response piece, also agrees with this premise but spends most of her time talking about the gatekeeper, or quality control, aspects of the peer review process.  And as a result, her argument, at least to me, is less compelling.

She seems too worried that potential reviewers will read Trafimow and Rice and conclude that they should never question methodology, or whether something is new – that just because Trafimow and Rice argue that these lines of evaluation can be misused, potential reviewers will assume that they cannot be properly used.  That seems far-fetched to me, but what do I know?  This isn’t my field.

Cooper focuses on what Trafimow and Rice don’t: what makes a good review.  A good review should:

  • Be evaluative and balanced between positives and negatives
  • Evaluate connections to the literature
  • Be factually accurate and provide examples and citations where criticisms are made
  • Be fair and unbiased
  • Be tactful
  • Treat the author as an equal

But I’m less convinced by Cooper’s suggestions for making this happen.  She rejects the idea of open peer review in two sentences, but argues that the idea of authors giving (still anonymous) reviewers written feedback at the end of the process might cause reviewers to be more careful with their work.  She does call, as does Nickerson, for formal training.  She also suggests that the reviewers’ burden needs to decrease to give them time to do a good job, but other things I have read make me wonder about her suggestion that there be fewer reviewers per paper.

In any event, these seem at best like band-aid solutions for a much bigger problem.  See, what none of these papers do (and it’s not their intent to do this) is talk about the bigger picture of scholarly communication and peer review.  And that’s relevant, particularly when you start looking at these solutions.  I was just at a presentation recently where someone argued that peer review was on its way out – not for any of the usual reasons, but because they were being asked to review more stuff and had time to review less.  Can limiting reviewing gigs to the best reviewers really work; can the burden on those reviewers be lightened enough?

The paper’s framing device includes science that pre-dates peer review, that pre-dates editorial peer review as we know it, that didn’t go through the full peer-review process – which raises the question: do we need editorial peer review to make this knowledge creation happen?  Because the examples they’re putting out there aren’t Normal Science examples.  These are the breaks and the shifts and the genius that the Normal Science process, kind of by definition, has trouble dealing with.

And I’m not saying that editors and reviewers and traditional practices don’t work for genius – that would be ridiculous.  But I’m wondering if the peer-reviewed article is really the only way to get at all of the kinds of knowledge creation, of innovation, that the authors talk about in this article – is this process really the best way for scholars to communicate all five of the levels/kinds of new knowledge outlined above?  I don’t want to lose the idealistic picture of expert, mentor scholars lending their expertise and knowledge to help make others’ contributions stronger.  I don’t want to lose that which extended reflection, revision and collaboration can create.

I am really not sure that all kinds of scholarly communication or scholarly knowledge creation benefit from the iterative, lengthy, revision-based process of peer review.  I guess what I’m saying is that I don’t think problems with peer review by themselves are why genius sometimes gets stifled, and I don’t think fixing peer review will mean genius gets shared.  I don’t think the authors of any of these pieces think that either, but these pieces do raise that question – what else is there?

discovery and creation and… lies!

I’ve never really understood the whole pirate thing.  Talk Like a Pirate Day can come and go without my noticing, and despite the presence of Johnny Depp, I didn’t make it through the whole Pirates of the Caribbean trilogy.

So even if I had seen the mentions of the Last American Pirate hoax on the blogs I read all the time, I’m not sure that I would have bothered to follow the links. But maybe I would have. This story does combine two of my favorite things – scholarly uses of social media and history. Still, amidst holiday preparations and Oregon-style snowapocalypses I totally missed the initial stories on the topic.

Which is relevant in that I’m not a disgruntled blog reader feeling taken in. I was not personally hurt in any way by the deliberate historical hoax created by the students of History 389 at George Mason University last term.

And yet.  I keep thinking about it and I’m not sure I can really articulate why.

So, quick recap.  Professor Mills Kelly of eduwired.org fame taught a class on historical hoaxes last term.  Early in the term, he gave advance notice that his class would be perpetrating their own historical hoax.  The class created a fake story about the search for the Last American Pirate, a guy named Edward Owens.  The search was chronicled by fake student Jane on this fake blog, discussed in these fake interviews on YouTube, and finally reported as fact in this fake Wikipedia article.  Some people were taken in by said hoax, most notably a pop culture blogger at USA Today.  Kelly reportedly pulled the plug on the hoax when some of his real-world colleagues were taken in, and the whole thing was revealed in the December 19th Chronicle of Higher Education, in an article found only behind the paywall.

So why do I keep thinking about it?  There has been a fair amount of discussion about it – some I really like, some talking about things I really don’t care about.  There are some people who love the experiment.  I’m not really moved by any of those arguments.  They seem to be mostly focused on the idea that kids today can’t get into traditional historical research, so this is a good, creative alternative.

The criticisms I find most compelling are found here, where Michael Feldstein explains why vandalizing Wikipedia for the sake of a lesson is uncool, and here in the comments on Dr. Kelly’s reveal post.  Commenter Martha, in particular, talks about the impact of this kind of project on trust networks.  Given that trust networks are, I think, a crucial part of meaningful information evaluation on the social web and thus a tool any information literate student should know how to use in this context, an assignment that deliberately devalues and damages those networks strikes me as problematic, even if there is some small benefit on the cautionary tale scale.

But that’s not what I keep thinking about except in a tangential way.  No, what’s got me thinking is what this project means for teaching information literacy and research — first in terms of the evaluation skills that are an overt, intended outcome articulated in the syllabus but also, and more deeply, in terms of research itself – why we do it and why we want students to do it.   These are, I suspect, related, but I’m not sure how.  Maybe if I write about it they’ll come together.  Maybe this will be in two parts.

Dr. Kelly says at the top that he is hoping for an information-literacy, information evaluation benefit to this assignment.

I’m hoping that this will mean that my students dig in and do some excellent historical research. I’m also hoping that they’ll learn a number of technical skills, will learn to work in a group, and will develop greater “information literacy” as we like to call it here. And, of course, I’m hoping they’ll have fun.

Specifically (from the syllabus – opens in PDF):

I do have some specific learning goals for this course. I hope that you’ll improve your research and analytical skills and that you’ll become a much better consumer of historical information. I hope you’ll become more skeptical without becoming too skeptical for your own good. I hope you’ll learn some new skills in the digital realm that can translate to other courses you take or to your eventual career. And, I hope you’ll be at least a little sneakier than you were before you started the course.

So the quick issue I have with this is that I just don’t see where the information literacy skills here translate into what most students need in their real work with online information sources.  Increasingly, I just think that a focus on deliberate hoaxes isn’t a very good way to teach students how to evaluate information.

Now I get that the work done to create the hoax might give the students in this class a greater appreciation for stuff that could make them more information literate, and that knowing specifically what they did to create a fake site might give them some stuff to look for in other sites, but I don’t really see the larger benefit here beyond the reminder that stuff on the Internet can be fake – and honestly, I think our students know that full well already.

Because here’s the first thing – helping students learn that there is stuff on the wild, wild web that was put there just to trick them,  to punk them or to prank them – well, there’s not a lot of value in that.  The punker or the pranker will either be really good at it, in which case all of the abstract stuff we might teach them about how to identify bad information won’t help them because the good pranker isn’t going to do any of that stuff.  Or, and this is more likely, the prank won’t be all that good.  And our students – I really think they’re very able to identify the obvious crap that exists online.

They don’t need help identifying stuff that is fake or wrong just for the sake of being fake or wrong because there’s not a ton of stuff like that out there.  Honestly, our ability to identify stuff that exists for no other reason than to trick us is not a real-world problem that keeps me up at night. Most people who put fake or wrong or misleading information out there on the Internet have an agenda beyond April Fool’s – they’re trying to do more than trick us and what our students need is help identifying those agendas. They need help identifying the information that isn’t flat out lies, but that is a particular kind of truth.

There’s not a lot of historical information TO evaluate on the pieces of this hoax that are available to the public – the blog talks a lot (I mean, a LOT) about how painful and difficult research in archives and microfilm collections is – but the details about the sources themselves are pretty light.  Most sources are presented as transcripts (“once I found the articles, there was no way to get a copy of them, apparently the machine is broken, so I had to transcribe them by hand,” that kind of thing).  The main thing that is presented as a digitized image is a will, not found in any archive or collection that could be investigated further – it is from the private attic-type collection of one of Edward Owens’ “descendants.”

Very clever.

No, what we have to consider here if we are evaluating information is not the quality of the historical sources in question (for the most part).  We don’t have the information to evaluate most of the fake sources, and beyond that – most historical sources in the world aren’t on blogs or YouTube so the skills that would help us evaluate them there wouldn’t necessarily translate to evaluating sources in archives.  What we really have to evaluate here are the classic foci of Internet evaluation: the authority of the scholar/author  herself and the nature of the digital tools used to present that scholarship.  And here is where I think it is useful to return to the criticisms mentioned above  – the tools we need to use to filter the social web are different than the tools of historical scholarship – and this project made those tools less useful for the rest of us.

Yes, we should remember that our trust networks and Wikipedia pages aren’t infallible.  Treating them as if they are is dumb and dangerous, of course.  But not starting from the assumption that someone is willing to do all this work just to fake you out? That’s not unreasonable.  Creating a hoax like this just for its own sake, after all, isn’t fun enough to justify the work it takes to pull off.  This one took an entire class of students working for a whole term with the great big huge carrot of the GRADE as motivation, after all.  When someone, or a class of someones, does deliberately put false information out there – and I’m not talking here about the fake historical documents, but the fake blog posts and tweets and comments and pointers – it makes it harder for all of us to use the skills that really do help us navigate and evaluate the social web.

I think it’s pretty significant that outside of the USA Today blogger, most of the people who got excited about this story – excited enough to blog about it – weren’t excited because of the history beyond the “that’s kind of cool” level.  The excitement was about how “Jane” leveraged social media tools to present her research broadly:

This undergraduate took her research to the next level by framing the experience on her blog, full with images and details from her Library of Congress research, video interviews with scholars and her visit to Owens house, her bibliography, along with a link to the Wikipedia page she created for this little known local pirate.

Or stated more directly, after the reveal:

But I want to concentrate on something else. Amidst all the fiction, alternate and virtual realities, hoaxes and pranks, one thing jumps out at me as utterly real, wholly genuine, honest. Read Jim’s post on this when he first came across the project. Here is passion and excitement, a celebration of what a student might be able to achieve with the tools now available, given the right puzzle to work on and a supportive network and intellectual environment.

And I agree with all of this in theory, but in terms of this specific hoax there is still something missing to me, and it’s an important something.  It’s research – and inquiry – and discovery.

I know I am only seeing a tiny portion of what is going on in this classroom – and from the syllabus just the idea that one of the goals of the class is to show that hoaxes can themselves be the topic of serious historical research, just like wars or elections, is something I find fairly awesome.  I have no idea how the process of discovery was inculcated in the other projects the students did.  All I have is the public pieces of the course – the blogs, the videos, and the rest.

And that’s a piece of this discussion that shouldn’t be missed.  By putting this material up on the real web, on the public web, by consciously trying to get people to access and engage with it, the question of what kind of learning experience this material provides for those of us NOT in the class becomes a valid one.  Is our learning experience supposed to be related to information literacy as well?  To history? Or is it just a clever, creative prank?

Because here’s the next thing – I don’t think that there is much of a learning experience for the rest of us in this project – at least not in terms of information literacy.

Don’t get me wrong, I value creation and creativity.  I value world-building and imagination.  And I don’t think those things are separate from academic research.  There is definitely creativity and imagination in scholarly inquiry, in looking at sources and seeing what might have been or what could be and re-searching based on that new potential meaning.  Watching a class of students using the social web to extend and communicate such a learning process would itself be valuable in that information literacy context.

And I think there’s room in that picture for fiction as well – in telling a story that you know in your bones to be a kind of truth even though you can’t prove it, at least not in a way that would be recognized as proof, epistemologically speaking.  I think there are truths and stories and voices that can only be captured with fiction.  So it’s not the made up or false part that gives me pause.

But in the case of this project, as it is laid out for us to see, the public pieces of this class project combine to celebrate what a truly information-literate student can do to take control of their own learning – but that information literacy exists only on the surface.

This is why I have problems thinking about the pirate hoax as a great new way to talk about or teach information literacy. Because beyond the fact that I don’t think hoaxes are a great way to teach evaluation, I’m also not sure they are a great way to talk about research and scholarly creativity. At its heart, I think information literacy is inherently linked to inquiry, and discovery.   It’s about the ability to learn from information – not just to find the sources worth learning from but to use that new information to change the way you understand things, and change the way you approach the next question.

“Jane” talks endlessly about the physical pain she feels as a result of days of looking at microfilm:

But, I have no idea how I am functioning right now…I can barely look at the screen without wanting to throw up, my eyes are in so much pain.

And she goes on about how frustrating it is not to find that evidence in the documents that will prove that her pirate existed:

After my failed trip to the town, I was really discouraged. I found out enough information to keep me going, but nothing really substantial. I have not gotten any closer to figuring out a name, and my trips to the library that last four hours at a time to look through the microfilm (I’m convinced I’m causing permanent damage to my eyes), have yielded absolutely no results.

But she never talks about that other kind of pain and frustration that comes with research and learning – one of the big things that makes research hard – feeling stupid, or having to question what you thought you knew before.   That’s what I mean when I say “Jane’s” process is all surface-level.  She never finds anything in her research that leads her in a new direction. She finds additional things she can use on the path she’s already on, but that’s not the same.

In the end, it is a lucky break that brings Jane’s process to a close.  The lucky break isn’t the issue – the real issue is that at the end of the research process described in the blog she finds exactly the single, perfect, right source she had been looking for from the start.  The perfect right source she imagined might exist, the one that would answer the narrow question she formulated before she even knew much about her topic at all.  That’s not how research usually works.  You could argue that that’s not how good research ever works.

And that’s the last and main thing.  At no point does Jane really engage with something that leads her to change her mind about anything, to reevaluate her process, to go back over the same ground with a new understanding or a new set of questions.  It’s needle in the haystack searching she does – she has to be creative to find different ways into the haystacks but at the same time she’s not going into the haystacks to find out what’s there.  She’s going in to look for that one needle that she thinks/hopes must be there.

And yes, I get that she’s pretend, but the fictional process the real class came up with does suggest that historical research is difficult and tedious, and that one doesn’t make the great discovery by engaging with sources in an open-minded way.   If the class had been engaged in a discovery-based research process, I would hope that that would have come through in their fictional avatar’s narrative.  It doesn’t.  There is no doubt that this group of students was truly engaged – playing with history, creating a new world and the characters to fill it.

I can’t find it now, but when I was reading about this project earlier I was struck by the description of how the topic was selected in the first place – all of the considerations were practical – not too well known, not too likely to inspire a lawsuit if the hoax was discovered, and so on.  The reasons for piracy were practical as well – a topic of broad popular interest, local, not likely to be something anyone would already be an expert on, etc.  They didn’t talk about discovering the space in the historical record for their hoax to exist, they talked about creating it.

And if it’s mainly about creativity, about the class’ engagement around creating this alternate reality, around engaging with each other, and about engaging with others on the social web, then I’m not sure I see the value in making it a hoax.  Except that that was the topic of the rest of the class to which we were not privy.  If the skills they were learning were about creativity and world-building it seems like the resulting project could have taken the form of an ARG or a similar project where those creative muscles could be flexed in the service of creating a world for the rest of us to play in, too.

“Peer reviewed” might not be code for awesome but hey! it’s not code for useless either

So I’ve spent a lot of time in the last year talking about how we need to understand what peer review really is. Most of that time it leads to posts about the limitations of the system. Today, not so much.

I keep going back and forth about whether to write this this morning because while I’ve been thinking about it since reading this post at Information Wants to be Free yesterday, it really isn’t just about that post.

And it really isn’t even about the one snippet of the post that got me worked up. Seriously, the snippet isn’t even about the main point of the post, and it isn’t expressing any sentiment I haven’t heard a million times before, starting in library school and again, and again and again since.

And it feels like piling on to just pull out one throwaway line and write a whole post about it, especially on someone who has been dealing with kidney stones, whom I have never met in person, on whose blog I do not regularly comment, and who may not have even meant this just as it sounds. It’s like “nice to meet you, way to totally miss the point.” I did get the point of the post, and I realize this snippet isn’t it. But it’s a snippet from an academic library blog, and it’s expressing a sentiment that I have heard a million times, and I think it’s a problematic sentiment, especially from academic librarians. And my blog is also a blog, and I need something to link to in order to respond, so here we go.

Here’s the snippet:

I don’t write for peer reviewed journals since I’m not tenure-track and I actually want my work to be read. So this doesn’t make me particularly annoyed. To me, it’s just another reminder that peer-reviewed journals are completely irrelevant to me. So many peer-reviewed journals publish absolutely useless studies that were primarily done for the sake of getting the authors tenure. But at least I felt they had some sort of quality standards.

Do you see what I mean? Maybe not. Here’s what I mean – how can we as academic librarians pretend to have any relevance at all when it comes to helping students find, use and create their own scholarship — to helping students be successful in college — when so many of us have absolutely no respect for what it is that scholars actually do?

Now, the first time I wrote that sentence, it ended with “for the scholarship in our own discipline.” I get that she’s probably talking here about the library literature, not articles in Science or Nature or The William and Mary Quarterly or Physical Review Letters or The Journal of Modern Literature or The American Journal of Sociology, though there’s nothing there to really indicate that distinction. But it really doesn’t matter – I do think this goes deeper than saying the library literature blows.

I mean, there is an issue with the do as I say not as I do thing that must be going on when academic librarians who disdain what is in peer reviewed journals in library science tell students that they should care about the scholarly literature in their own disciplines. But most of the time, even when it is articulated as the library literature isn’t timely enough/ cutting edge enough/ rigorous enough to meet my needs – I don’t think that’s the whole picture. The perception that there is an academic/real world gulf is so ingrained in our culture that it’s okay to state it as kind of a truism. This kind of thing – look at the comments in this piece from the Librarian in Black last summer.

And that’s the deeper issue that I think is there. I think there’s a perception that academic studies not directly and deliberately intended to inform practice can have no relevance for practice. That knowledge for knowledge’s sake has no value or relevance to the real world, and that in a field like ours that is dominated by practitioners, that means the academic research going on is hopelessly, inherently useless to the vast majority of the field. The work being done in other fields might be valuable to those fields, but only because those fields are academic and not about practitioners – it’s okay for them to be useless to practice, it’s okay for them to be academic and theoretical. Knowledge for knowledge’s sake is useful in those fields because those whole fields are somewhere other than the real world.

Which could be read as librarian self-deprecation or self-hatred – “we’re just not real academics – they’re good and we’re bad.” But I think this cuts deeper than saying the library literature could be better – I tried to parse this snippet this way, and I think the other people being quoted in the post are thinking of the issue that way – but I don’t think this statement can be read to mean the library literature needs to be better. There’s no way that the peer-review model can be the go-to place for practitioners who want cutting edge answers to current problems, who want what they get from blogs and other dynamic information sources – that’s not a matter of better or worse.

The truth of the matter is even if academic research in library science was as cutting-edge, current, and rigorous as any academic research could ever be – a lot of it would still not be intended to inform practice.

When I hear people talking about how useless or stultifying or hard to understand or badly written they find the peer-reviewed literature – there’s a pride there in being a practitioner instead of an academic. There’s a sense that we are doing the real work in a way the academics never can. There’s nothing wrong with being proud of practice, don’t get me wrong – I am absolutely not saying that. It’s the “instead of an academic” piece that I have issues with because theory/practice isn’t a zero-sum thing. There’s no need to do either/or. There is something wrong with cutting yourself off from something that can and does inform practice in ways that nothing else can simply because it wasn’t created specifically to do so.

And there’s especially something wrong with academic librarians cutting themselves off from that because a huge part of our job, from collection building to information literacy, is all about connecting students to scholarship. There’s no way to compartmentalize that to the library literature – there’s no way to say “well, I think the scholarly study in librarianship is useless but in sociology, or social work, it’s totally awesome.”

Because here’s another thing – when I hear people talking about how useless or stultifying or hard to understand or badly written they find the peer-reviewed literature, they sound just like year after year of students I’ve heard complaining about their classroom reading. Classroom reading not found by a keyword search in Library Literature, but carefully selected and assigned by experts in the field who are saying “this, this is an important thing you need to understand to understand what knowledge is in our discipline.” Yes, a lot of what is in the peer-reviewed literature, in all fields, is not well written. A lot of it is not well researched. A lot of it is published only because it needed to be for the author’s tenure case. This isn’t just true for us – it’s true across the board. It might be more true for some fields and less true for others, but it’s true on some level for all of them. But it is not ALL like that. Sometimes the language is hard because the concepts are hard; sometimes you have to read a piece three, four or five times not because it’s badly written but because it’s talking about really complex things that take three, four or five readings to understand. Not recognizing that means closing yourself off from a type of knowledge and a way of understanding that can absolutely inform practice – and not understanding it will keep a student from being successful in college. More than that, I think it hurts the practitioner as well.

Most of our students are going to be practitioners, not academics. We can’t just assume that they will magically understand the value of knowledge for knowledge’s sake because they start taking 400 level classes. It takes a particular skill set to apply theory to practice – it takes practice to apply theory to practice. Our students don’t come to us knowing how to do those things. They need help understanding not just how to find scholarly sources – but how to read and use them. One of our writing faculty was telling me the other day that the students in her class, when they are required to find “speaker sources” – sources that take a stand on issues – almost never use academic sources even though they are required to cite them somewhere in the paper. They use the academic sources as background sources instead of speaker sources. See, the point is that they don’t have the skills or the knowledge yet to see the academic sources as speakers. They can easily identify a policy agenda, but they don’t know yet how to identify the scholarly argument, agenda or point of view. Just like we can easily identify a practical problem that needs solving, but we think that academic problems are pointless.

Our students will be better at what they do – whether that is working, voting, or heck, even dieting, if they have the skills to be informed by what the research says, what the science says – even though that research will almost never have been created simply to inform them. But I don’t know that they can learn that skill set or gain that understanding from librarians who don’t have it, or at least who don’t use or practice it, themselves.

The peer-reviewed literature is what it is. It can be a whole lot better – but that doesn’t mean more obviously and immediately practical. As someone who spends an awful lot of time going on and on and on and on about the problems with the library literature, I still have to say if you can’t find any research in that literature to inform your practice, you aren’t trying very hard.

Will you find stuff on “how can I troubleshoot this problem I’m having today?” Probably not. Can you find stuff on “how can I deal with this issue in a really cutting-edge and awesomely new way?” Probably definitely not. Can you find stuff that gets you thinking about how to frame the problem in a new way, how to understand potential solutions in a new way, how to understand the root of the problem in a new way? I would certainly hope so.

See, here’s my last thing – sometimes the questions that scholars are interested in ARE different than the questions that arise out of daily practice. Sometimes the problems that they are passionate about solving are not the same problems that keep practitioners up at night. But the questions they ask and the answers they come up with are still valuable to practitioners if those practitioners are willing to accept the research for what it is instead of focusing entirely on what it is not.

There’s going to be a little feature in an OSU publication about undergraduate information literacy instruction at my library, and I was looking at the most recent draft just before I went to read my feeds. The author came to watch one of my instruction sessions to get a feel for what that kind of teaching was like – and she told me that she saw my interactions with the students more as interactions between peers than as traditional teacher/student. I thought about that and realized that what she was seeing was that, to me, the purpose of most undergraduate instruction – across the disciplines, but especially in the library – is to bring these new college students into an existing community of scholars, and to give them the skills, concepts, data and knowledge that will let them find their own place within that community.

To do that, we don’t all need to be scholars ourselves in the same way – we are practitioners, and for most of us that is one of the wonderful things about librarianship. But we need to respect, value and celebrate those who are scholars and what they do.

critically thinking about comment threads

So this study, the one from Science suggesting that gender isn’t such a useful variable when trying to predict whether an individual will be good at math or not, is all over my feeds and my del.icio.us network. And it’s got me thinking about critical thinking, perception, and the really big thing we’re trying to support with our talk and our teaching about information literacy.

So the study in question basically analyzed NCLB data from a lot of states, comparing how students performed on the math sections by gender. The differences they found were statistically insignificant at every level, from the primary through the secondary grades. They concluded that,

for grades 2 to 11, the general population no longer shows a gender difference in math skills….There is evidence of slightly greater male variability in scores, although the causes remain unexplained. Gender differences in math performance, even among high scorers, are insufficient to explain lopsided gender patterns in participation in some STEM fields.
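Since the study’s claims are statistical, it may help to see concretely what “no mean difference, but slightly greater male variability” looks like. Here is a minimal Python sketch of the two statistics involved – Cohen’s d for the standardized mean gap, and a simple variance ratio. The score samples are made up for illustration; they are not the study’s data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference, using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def variance_ratio(group_a, group_b):
    """Ratio of score variances -- the 'greater variability' measure."""
    return statistics.variance(group_a) / statistics.variance(group_b)

# Made-up score samples for illustration only -- NOT the study's data.
boys = [61, 72, 55, 88, 70, 43, 95, 66, 74, 58]
girls = [65, 70, 60, 80, 72, 52, 85, 68, 71, 63]

print("Cohen's d:", round(cohens_d(boys, girls), 3))
print("variance ratio:", round(variance_ratio(boys, girls), 3))
```

The point the authors make is visible in this framing: the mean gap (d) can sit near zero even while the spread of one group’s scores is modestly wider than the other’s.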

So what does this have to do with critical thinking? The study itself isn’t really what I’m interested in here so much as the reaction to it. Because one of the things critical thinking is about is how we react when we come across information that challenges what we thought to be true. And for every math teacher that reacted to this study with a “duh,” there are a lot of people out there who have some ingrained assumptions about how boys are better at math and girls are better at reading.

One of the most common narratives about boys and girls and math goes like this – boys and girls show similar aptitude so long as the math is easy. But when it gets complex, boys are better. That’s the line that used to explain why girls stopped taking math in high school, and now it’s used to explain why they don’t do math as much in college. So it’s not like I was surprised to see that a whole bunch of commenters go Right There.

But the thing is – the study’s authors deal with this. They talk about the complexity question (they used a different data set to get at that) and they talk about the SAT scores thing. It’s not buried – it’s a whole section with a heading and everything.

And we can’t blame bad science reporting or Science’s paywall for this – the posts and linked stories mention the complexity question because the authors didn’t just mention it in the study – they emphasized it. This is a super-short article, and they spend some of their very limited space to say that our NCLB tests kind of suck – they don’t test for much, at least not for what they should be testing for. I mean really – that topic is their big finish, the last line -

An unexpected finding was that state assessments designed to meet NCLB requirements fail to test complex problem-solving of the kind needed for success in STEM careers, a lacuna that should be fixed.

Now I’m not saying that these commenters should automatically buy the analysis presented, but they should notice it. They should engage with it. Not to do so suggests, well, a lack of a disposition to think critically.

In the very late ’80s the APA engaged in a Delphi project to define critical thinking in a way that would be useful for higher education and for educational assessment. A panel of experts on critical thinking instruction, assessment and theory was convened, and together they developed an influential consensus* description of the ideal critical thinker as -

1. Someone who has a set of critical thinking skills, including: interpretation, analysis, evaluation, inference, explanation and self-evaluation. This skill dimension is an essential part of critical thinking.

2. Someone with the disposition to use those skills, to learn. Critical thinkers are sensitive about their own biases. They are open-minded. They are inquisitive, questioning people. They have an eagerness for knowledge and learning.

(An aside – the Delphi Method of research that grounded this project is pretty cool itself if you’re geeky like me)
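For the equally geeky, the core of the Delphi Method – repeated rounds in which panelists see the group’s aggregate answer and then revise their own – can be caricatured in a few lines. This is a toy simulation with invented numbers and an invented “pull toward the median” revision rule, not the APA project’s actual procedure:

```python
import random
import statistics

def delphi_rounds(initial_estimates, rounds=3, pull=0.5, seed=0):
    """Toy Delphi simulation: each round, every panelist sees the group
    median and revises their estimate partway toward it (plus a little
    noise standing in for independent judgment)."""
    rng = random.Random(seed)
    estimates = list(initial_estimates)
    history = [list(estimates)]
    for _ in range(rounds):
        median = statistics.median(estimates)
        estimates = [e + pull * (median - e) + rng.gauss(0, 0.1)
                     for e in estimates]
        history.append(list(estimates))
    return history

panel = [2.0, 5.0, 9.0, 4.0, 7.0]  # five panelists' initial estimates
history = delphi_rounds(panel)
spread_before = max(history[0]) - min(history[0])
spread_after = max(history[-1]) - min(history[-1])
print("spread before:", spread_before, "after:", round(spread_after, 2))
```

The shrinking spread across rounds is the whole idea: structured, anonymous iteration nudges a panel of experts toward a defensible consensus without the loudest voice in the room dominating.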

Some definitive examples of lacking the disposition to think critically can be found at ABC News’ coverage of the gender/math study (ETA – in the comments, not ABC’s report) — there’s this:

The fact remains, boys tend to do better in math than girls. And there’s no shame in that. Just like girls tend to do better in languages. I wonder who skewed these figures?

And there’s this:

That doesn’t make any sense. There is no rational reason for this gap to disappear. It is a fact that men are better then women at certain tasks and worse then them at others. I think that the disappearance of this gap speaks more to our educators doing a better job of “teaching the tests” then to students actually understanding the material better.

In other words, “I read this thing. It contradicts what I believe. So I will simply restate my previously held beliefs and perhaps suggest a conspiracy.”

Now, these people obviously aren’t worth engaging with – I mean, they’re commenting on a story at ABC News dot com, and they’re not doing so especially well. But the thing is – I’ve read things just like this from my students before.
We have them write a bunch of stuff about the things they encounter in their early exploratory research stages and so we get a lot of information about how they’re reacting to the ideas they encounter. Sometimes their reaction is exactly this – “this article says X which is wrong because I believe Y.”

And that’s not a slam on my students – learning how to think critically, and developing the disposition to think critically, are things we should expect people to do in the college years. But that aspect of it – that willingness to examine your own biases and to accept new information that challenges your world view as potentially valid – that’s the critical thinking big leagues. It’s not easy stuff. Not for anyone.

And what the many, many online discussions of this study have got me thinking about is how many different ways one can resist thinking critically. The discussions on Slashdot and at the Chronicle, for example, are at an obviously higher level than the one at ABC News – they’re discussions, for one, and the arguments raised are more complex and mostly subtler than “nuh uh.” But I think there are still a lot of knee-jerk refusals going on in those comment threads – refusals to consider information that challenges worldviews, mental models, belief structures, or whatever you want to call all of the cognitive and affective baggage we bring along with us when we encounter new information.

With some others, Peter Facione (the guy who wrote the executive summary for the Delphi project) explores the disposition to think critically more thoroughly,** and in particular this article discusses what we might expect from new college students. There’s a lot of good stuff here, but I’m going to engage in some super-simplistic summary and say that the authors show that college students are positively disposed to think critically in many ways – but the one that hangs them up some is this “truth-seeking” aspect.

I’m not in love with the phrase “truth-seeking” here but I’m fine with what they mean by that phrase – someone with a positive disposition towards truth seeking is “eager to seek the best knowledge in a given context, courageous about asking questions and honest and objective about pursuing inquiry if the findings do not support one’s self-interests or one’s preconceived opinions.”

Just as interesting is the related finding – that, for the most part, these students were rewarded more in their first year of college for showing positive dispositions along other scales (most notably “analyticity,” or the ability to evaluate and create reasoned arguments) than for truth-seeking. That piece feels true to me, at least so far as my experience with argument papers and comm 111 speeches extends.

While we encourage students to choose a topic they want to learn about, not just one they feel strongly about (and in this our composition faculty are taking a different tack than the one in most of the books I’ve seen), many students still choose to write on topics they “already know.” Sometimes it is clear from the start that they feel so strongly about their chosen topic that they will not be able to learn from their research process. And, of course, some of them can craft beautifully-reasoned arguments without ever really engaging with sources in a way that leaves them open to changing their minds on a topic. I know I’ve done it.

We do focus on their argument-building ability more than their truth-seeking, and perhaps that isn’t where they need the most help to become critical thinkers. Over the years, we have added some dimensions of the latter into their work, asking them to reflect on their own biases and preconceptions, for example, but I suspect we could do more. Something to think about – hopefully critically and open-mindedly.

___________

*Facione, Peter A. (1990). Executive Summary, “The Delphi Report” (opens in PDF).

**Facione, P.A., Sanchez (Giancarlo), C.A., Facione, N.C. & Gainen, J. (1995) The disposition toward critical thinking. Journal of General Education, 44:1, 1-25.