untangled thoughts

The Set-Up

I don’t know what it is about Veronica’s posts*, but they always include a thought, or a line, or even a snippet of something that either unlocks a new thought, or clarifies something I’ve been thinking and couldn’t articulate.

In this one today, it was this line:
Our emails to our colleagues always start with, “This week is CRAZY busy,” or “I have so much to do,” or “I have meeting after meeting; class after class.” I recognize that some of these statements might be genuine venting. People are tired and they sometimes need to share their woes.

Now, I’m sure some of you are thinking — um, yeah, what’s surprising about that?  And I get that.  I am kind of saying it too.  See, this is about me. This line hit me right where I needed to be hit in the moment.  I’ve read and thought a lot about these issues and questions, and about related issues and questions as these conversations have emerged and re-emerged over the years (in many fields and professions).  I have a lot of thoughts, is what I am saying.

In this moment, something about this line and the way it was expressed made me step back and think about all of those thoughts and conversations together.  Thoughts about the cultural issues that valorize overwork, and the ways that the messages we send, implicitly and explicitly, shape those cultures.  Thoughts about performative or competitive busyness, and the ways that we reinforce that without (and with) intent to do so.  Thoughts about structural and resource issues, about doing more with less. Thoughts about rigor and gatekeeping and mission and values, and about hegemonic assumptions and vocational awe (though that last not as usefully before Fobazi named it as after).

In all of the years that I’ve been thinking about these interlocking questions, I have focused on the parts, not the whole. I’ve been struggling with the parts a lot (where “a lot” means both frequently and intensely) this past year as a relatively new administrator, and especially as someone doing that work in a constantly under-resourced and over-performing place. I mean, it’s a true fact that these questions are many of the reasons that I decided to continue in administration after my stint as a (rotating) department head was done — not because I had answers, but because I thought the questions needed answering and I wanted to be in a position to act on answers that, at least in some small ways, pushed beyond the individual solutions we often end up with.

In the reality of management, though, all of these intersecting issues and questions create a tangled mess.  I pull one thread, and the tangle unravels for a few yards and then reemerges and intensifies in another spot.  And when I read Veronica’s post, it unsnarled some of those thoughts and that’s what I really want to talk about here. This is going to be interesting (to me at least).  I’m talking about looking at a big issue more broadly, but about looking more broadly at my narrow experience.  These issues matter for lots of reasons, they’re big and they’re snarly and they extend beyond libraries and beyond academia.  They intersect with other big issues. And I’m going to be ignoring most of that for something that is specific, and grounded, and situated and about me and my relationship to these questions in this moment. 

Finally, The Point

I realized recently, though I didn’t have the words, that one of the things I’ve been struggling with is seeing and feeling the difference between these different forms and drivers of busy.  I’m experiencing these things in a space where trying-to-survive busy coexists with enthusiastic busy and switches off with overwhelmed and unhealthy busy, and those all sit next to performative busy, social busy, and competitive busy. See — and this is important — the point is not that there is some busy that is real and some that is not.

All of these types of busy are real. They are. And all of them matter to the organization and all of them matter to me. But some of these issues are cultural and some are structural, and there’s also a healthy intersection between cultural and structural, and as a manager I can have power and ability and positioning to change things, but only if I understand what it is that I am changing. Because changing structures isn’t the same as changing culture, and it’s unlikely that anything is wholly one or the other.

The types of busy that we might think of as performative or competitive, that are driven by a desire to meet expectations or align with norms — tacit, overt, assumed, experienced — get tangled together with the resource issues, the unexamined inequalities, the problematic reward structures.  I’ve written here before about how helpful I find this idea: culture is what people do. When busy is what we do, when it’s the de facto answer to “how are you?” then it can be really hard to untangle those pieces that make the solutions way more complicated than “so do something else.”  

I wouldn’t have thought this to be true, and maybe it’s down to being a beginner manager, but the issue is this — when the culture pushes us to busy and overwork, we can’t effect real change unless we deal with the underlying structures that shape the culture.  But when the culture is like that, it makes it super hard to see those structures in order to change them — to see what they’re doing, what they do, to tease out and draw those connections in a useful way.  It does. It really, really does.  And here’s the other thing.  I’m part of the problem, and I’m more of the problem as a manager and part of it because I am a manager.

I understand feeling the need to match the busy I see around me — to keep up with the accomplishment and productivity and — really — with the busy. Weirdly — or maybe not so weirdly — this feeling has only gotten stronger as I’ve moved into positions with more privilege and more power within the organization. Surrounded by people who do so much and succeed so hard and shine so bright, well, if I have a named professorship, or become department head or AUL, shouldn’t I be the busiest one of all? And I feel that and it comes out in the answers I give when people ask how I am. I emphasize the busy, the deadlines, the meetings and all of the tangible output of the work I do. 

For sure, I do have those times when I am up against it. Sometimes it’s self-imposed and the result of too much yes. Sometimes it’s drop everything all hands on deck because the university says we have a fortnight to do three months’ worth of work. But a lot of the time, I’m not the busiest one in the room. I usually have a solid to-do list, but I also schedule time to read, to write, and to think. I don’t usually miss lunch. I go home and I have hours where I can knit, cook, eat, do laundry, and watch the Olympics. And I see almost fifty movies a year. In the theater. With my phone turned off.

That’s me, reading, knitting and hanging with my dog.

So I became a manager in order to have the power to effect change, but having that power has pushed me to behave in ways that entrench the culture that needs to be changed, while it also obscures those structures that need to be changed to change the culture.

That’s a fun conundrum.

And it makes me wonder, when I talk to people about not trying to do more with less, or when the university talks about work-life balance, while at the same time we all answer every “how’s it going?” with “I have so much to do,” are we just giving people something else to feel inadequate about, or creating some other set of expectations that they have to perform to? I’ve noticed, as these issues have been raised by more and more people in more and more contexts (both in my library and out) that people are starting to put qualifiers on their statements of busy: “I know we’re all overworked,” “I don’t have more to do than _______, she’s REALLY busy,” or “I have four deadlines tomorrow, but it’s self-imposed, so that’s okay.”

(That last one is me. I said that.)

Of course, I am not saying we shouldn’t have those conversations, or that there’s no way to have those conversations without just swinging that pendulum wildly from one side to the other. I guess I am just musing on the challenges of moving these types of cultural conversations (conversations I value) in the workplace to where they need to be if the result is going to be meaningful, to dig into the structures, and to create the kind of change that requires trust.

Which brings me back to those feelings. I can start with myself, right? I can start when I’m not under the gun, under the wire, and needing to vent. I can start talking about the things that Veronica talks about when people ask me how things are going. I can be honest about not being the busiest person in the room. I can talk about reading and writing, even when I feel guilty that I have time for those things. I can talk about meeting deadlines and projects I enjoy and saying no and saying yes. And then, when the busy times inevitably come, I can talk about being busy in a way that maybe doesn’t send the implicit message that everyone else had better be busy too.

*So, obviously, I can’t wait to hear more about Veronica’s book deal.

it is too much, let me sum up

There was a little flurry of conversation in my social networks about Mark Bauerlein’s recent offering on the Brainstorm blog (at the Chronicle), and I just realized that it was almost all in the rhet/comp corners of those networks – so in case library friends haven’t seen it – it’s worth looking at:

All Summary, No Critical Thinking 

Pull Quote:

From now on, my syllabus will require no research papers, no analytical tasks, no thesis, no argument, no conclusion.  No critical thinking and no higher-order thinking skills.  Instead, the semester will run up 14 two-page summaries (plus the homework exercises).

Students will read the great books and assignments will ask them to summarize designated parts.

A soft description of the conversations I saw would be “skeptical.” There were those who thought this was an April Fool’s joke, until they noticed the byline.  I think it reads like an effort to solve a problem that’s not really about summary, but about reading.  I italicize “think” there, because I don’t really get the summary idea – it seems to me that people who only engage enough with argumentative writing to cherry-pick quotes from source texts will be just as able to create “summaries” that don’t reflect any more than a superficial understanding of those source texts.

Michael Faris pointed out Alex Reid’s excellent response, which does a much better job of problematizing the summary than I could:

The Role of Summary in Composition (digital digs)

I believe we misidentify the challenges of first-year composition when we focus on student lack and specifically on the lack of “skills.” Our challenge is to take students who do not believe they are writers (despite all the writing they do outside school), who do not value writing, who do not believe they have the capacity to succeed as writers, and who simply wish to get done with this course and give them a path to developing a lasting writing practice that will extend beyond the end of the semester.

Isn’t that a great, um, summary of why writing teaching matters?

Can we substitute “researchers” for “writers” here?  I kind of like the resulting statement, but it makes me uncomfortable as well, because – can we do that, and are we doing that, with our current models?

it has been a while, yes

Wow, that was kind of an unplanned hiatus.  Since I last posted:  my library has hired a new University Librarian, I received tenure, I gave some talks, almost all of Spring term has gone by, I was surprised by a completely unexpected but lovely award, I finally finished the IS executive committee minutes from Midwinter, and I submitted an epic proposal for IRB approval.

I am also almost done with an actual blog post.  Until that’s done, though, here’s something awesome:  a news website article (from The Guardian UK) entitled This is a news website article about a scientific paper.

A sneak preview –

This paragraph elaborates on the claim, adding weasel-words like “the scientists say” to shift responsibility for establishing the likely truth or accuracy of the research findings on to absolutely anybody else but me, the journalist.

If I could summarize one of my goals for library instruction it would be – to make sure OSU students understand the scholarly article better than this.

so whatever happened with that fear factor book?

Or, sort of peer-reviewed Monday!  Not quite, but a book review.

I didn’t want to list the name of the book in question before because I hadn’t read it yet, and didn’t want to answer questions from people who might have found the post by googling the book title.  Especially if they were people who really liked the book, because I didn’t know yet if I liked it.   And there are people who really, really, really like it —

Best book ever on how to prepare students for college (Jay Mathews, Washington Post blogs)

I don’t really agree with the title there – the point of this book didn’t seem to me to be about preparing students for college so much as it is about preparing college for students.

Citation:  Cox, Rebecca D. (2009). The College Fear Factor. Cambridge, MA: Harvard University Press.

The book is based on 5 years of data gathered from community college students.  The author herself did two studies examining community college classrooms.  One was a basic writing class and the other looked at 6 sections of an English composition class.  Each lasted a semester, and gathered qualitative data about the classroom conditions that had an impact on a student’s successful completion of the course.  She also participated in a large-scale field study of 15 community colleges in 6 states, and another national study of technological education.

The argument in the book comes from research on community college students, but it is still of interest to those of us who work with students managing the transition to college at any institution.  It is perhaps more relevant to those of us at institutions that attract a significant number of first-generation college students.

I am not sure entirely what I think of the book – on the one hand, it was a quick, easy read and I enjoyed it as I usually enjoy well-reported stories drawn from qualitative investigation.  On the other hand, it struck me as one of those books that reports on an important conclusion, but one that could have been well-covered in an article-length treatment.  The conclusion is drawn over and over again in this longer work, so sometimes chapters went by without my feeling like I had really encountered anything new.

What is that conclusion?  I said to Caleb earlier that I wasn’t sure where the fear factor part of the title came into the book (because at that point I was on about page 17) and I have to say now that the title is good insofar as the real point of this book is fear, and how that emotional state affects student success.  (Insofar as it evokes a really awful reality show, on the other hand, not so good.)

And in this, I think the book is valuable.  We don’t think and talk about the role of affect enough in higher ed – at least not on the academic side – nor about the intersections between affect and cognition and affect and everything else we do, and this book is an important corrective to that.  Basically, Cox argues that students can be scared away from completing college – not because they are not capable of doing college-level work, but because they have not been prepared to do it before they get to college, and they are not helped to do it once they arrive.

The many students who seriously doubted their ability to succeed, however, were anxiously waiting for their shortcomings to be exposed, at which point they would be stopped from pursuing their goals.  Fragile and fearful, these students expressed their concern in several ways: in reference to college professors, particular courses or subject matter, and the entire notion of college itself — whether at the two- or the four-year level.  At the core of different expressions of fear, however, were the same feelings of dread and the apprehension that success in college would prove to be an unrealizable dream.

Cox argues that these fears are exacerbated when one doesn’t come into college knowing how to DO college.  And that most first-generation, non-traditional and other groups of our students don’t come to college knowing what the culture and mores of academia are.  They have expectations, but those aren’t based on experience (theirs or others’) and when those expectations are challenged, their reaction is to think they can’t do it at all or to convince themselves that it is not worth doing.

Professors too have their own set of expectations about how good students approach their education, and when faced with student behaviors that are different than those expectations would suggest, they make some faulty assumptions about why students are behaving the way they are. A student who attends class every day but never turns anything in — that’s incomprehensible behavior to the professor who doesn’t understand how that student possibly thinks they are going to pass.  After reading Cox’s book, you consider the possibility that that student doesn’t think they are going to pass, but is just playing out the semester in a kind of depressed resignation.

I still feel like I am missing, from this book, much of a sense of why professors have these expectations — besides “that’s the way we’ve always done things.”  In other words, it doesn’t really work for me (nor do I think Cox is really claiming) that there is no value at all to the way that professors were trained, and that they are hanging on to methods that don’t work simply because they went through it so others should have to.  Yeah, yeah, there are professors like that.  But my sense as a person engaged in higher ed is that a lot of professors think that there is value in the way they look at learning, meaning-making and knowledge creation, and that the joy they get from teaching comes from working with students who can share that joy.

Cox does a good job arguing that many of the students they have are not ready to do that, but I don’t get the sense from her book that she doesn’t see the value in that view of education.  I have a much clearer vision from this book what the students Cox interviewed value —  mostly the economic benefit they connect to the credential — but because her research didn’t extend to the teachers, I don’t have that same sense from them.

Here’s the thing – universities aren’t just about the teaching. They’re not going to be just about teaching and it’s not a really hard argument to make that they shouldn’t be just about the teaching.  A lot of professors were hired for their research, and the research they do makes the world better, and connecting students to that kind of knowledge creation is cool.  And even when they are about teaching they’re not just about the teaching of first-year undergraduates making the transition into college.  Even those students in a few years, immersed in a major, are going to need something different than they need when they first hit campus.

It’s not useful to sit in meetings about teaching and spend all your time discussing the students you should have (and yeah, we’ve all heard those discussions).  I’m sure I’m not the only one to say that at some point you have to put your energy into the students you have.  But when I say that, and when a lot of people say that, we don’t mean – the students we have can’t learn how to participate in academic culture.  We don’t mean that – academic culture has no value to these students.  Which is the really valuable point in this book – unprepared does not equal incapable.   I don’t want to say the book offers no solutions, because it does.  I guess what I do have to say is that I don’t find those solutions convincing in a research university environment.

All of this, of course, goes well beyond the scope of Cox’s book and Cox’s research, which is about particular students in a particular setting where teaching, and the transition to college, is paramount.

It’s a long way of saying that while the book has value to those outside the community college setting, that value only goes so far.  There is more work to be done figuring out answers to the questions she raises in other environments.

Which is why the chapter that was probably the most interesting to me is chapter 5 – which examines the work being done by two composition instructors – instructors who by most accounts are doing everything “right” in their classrooms  — right by the Chickering-type standards of active learning and engagement and right by what we are constantly told these hands-on, tech-savvy experiential-learning-wanting students today need.  In other words, they’re doing the things we think we should be doing in the research university to connect the students to what it is that scholars do – and they’re failing.

The idea that students have to be forced to be free is not a new one, but it is a point that gets lost sometimes in discussions about what is wrong with higher ed.  We hear that lectures are dead, that students can’t learn that way, that they hate lectures, they tune out, they want to learn for themselves, and … it just doesn’t always reflect my experience. They may hate lectures, but that doesn’t mean that’s not what they think higher education should be.  They have their expectations that they bring with them, and professors who try to turn over some control of the classroom, and of learning, to their students can be shot down for “not doing their job.”  That’s what Cox found, and I’ve certainly seen it happen.  The assumptions that these professors are falling victim to aren’t assumptions that students are going to be unprepared, or ignorant, or unwilling to learn – they’re more the opposite.  They assume that students will be curious, will have a voice they want to get out there, will have learning they want to take responsibility for.

So, I’m glad I didn’t bail – but I’m also glad the book didn’t take more than a couple nights to read.

Not quite peer-reviewed Monday, but related!

So slammed, so briefly (well, for me).  Via CrookedTimber, a pointer to this post by Julian Sanchez on argumentative fallacies, experts, non-experts and debates about climate change. It’s well worth reading, especially if you are interested in the question of how non-experts can evaluate and use expert information, which is a topic that I think should be of interest to any academic librarian.

Obviously, when it comes to an argument between trained scientific specialists, they ought to ignore the consensus and deal directly with the argument on its merits. But most of us are not actually in any position to deal with the arguments on the merits.

Sanchez argues that most of us have to rely upon the credibility of the author — which is a strategy many librarians also espouse — in part because someone who truly wants to confuse them can do so, and sound very plausible.

Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic.

Further, he suggests that the person who wants to confuse a complex issue actually has an advantage over those who want to talk about the complexity:

Actually, I have a plausible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience:

Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument.

And that’s where we get to the evaluation piece.  We need to know how much we know to know whether it even makes sense to try and evaluate the arguments.  Because if we don’t know enough, trying to evaluate the quality of the actual argument will probably steer us astray more often than using credibility as our evaluation metric.

If we don’t sometimes defer to the expert consensus, we’ll systematically tend to go wrong in the face of one-way-hash arguments, at least our own necessarily limited domains of knowledge.

(Note:  I skipped most of the paragraph where he really explains the one-way hash argument – you should read it there)

The thing I really want to focus on is this – that one word, consensus.  Because I don’t think we do much with that idea in beginning composition courses, or beginning communication courses, or many other examples of “beginning” courses which often serve as a student’s first introduction to scholarly discourse.

And by “we” here, I’m talking about higher ed in general, not OSU in particular.  I think we ask students in these beginning classes to find sources related to their argument; their own argument or interest is the thing that organizes the research they find.  They work with that article outside of any context, except whatever might be presented in the literature review – they don’t know if it’s solidly mainstream, a freakish outlier, or suggesting something really new.

So they go out and find their required scholarly sources, and they read them and they think about the argument in the scholarly paper and how it relates to the argument they are making in their own paper and try to evaluate it – and of course, they evaluate mostly on the question of how well it fits into their paper.   And what other option do they have?

Sanchez argues, and it rings true to me, that we usually don’t have the skills to evaluate the quality of the argument or research ourselves.  And I know that I am not at all comfortable with the “it was in a scholarly journal so it is good” method of evaluation.  Even if they find the author’s bona fides, I’m not sure that helps unless they can find out what their reputation is in the field, and isn’t that just another form of figuring out consensus?

In some fields, meta-analyses would be helpful here, and review essays in others, but so many students choose topics where neither of those tools is available that it’s hard to figure out how to use them in the non-disciplinary curriculum.

And perhaps it doesn’t matter – maybe just learning that there are scholarly journals and that there are disciplinary conventions is enough at the beginning level.  But if that’s the case, then maybe we should let that question of evaluation, when it comes to scholarly arguments, go at that level too?

peer review, what is it good for? (peer-reviewed Monday)


In a lot of disciplines, the peer reviewed literature is all about the new, but while the stories may be new, they’re usually told in the same same same old ways.  This is a genre that definitely has its generic conventions.  So while the what of the articles is new, it’s pretty unusual to see someone try to find a new way to share the what.  I’ll admit it, that was part of the attraction here.

And also attractive is that it is available.  Not really openly available, but it is in the free “sample” issue of the journal Perspectives on Psychological Science.  I’m pulling this one thing out of that issue, but there are seriously LOTS of articles that look interesting and relevant if you think about scholarship, research, or teaching/learning those things — articles about peer review, IRB, research methods, evidence-based practice, and more.

Trafimow, D., & Rice, S. (2009). What If Social Scientists Had Reviewed Great Scientific Works of the Past? Perspectives on Psychological Science, 4(1), 65-78. DOI: 10.1111/j.1745-6924.2009.01107.x

So here’s the conceit – the authors take several key scientific discoveries, pretend they have been submitted to social science/ psychology journals, and write up some typical social-science-y editors’ decisions.  The obvious argument of the paper is that reviewers in social science journals are harsher than those in the sciences, and as a result they are less likely to publish genius research.

No Matter Project (flickr)

I think that argument is a little bit of a red herring; the real argument of the paper is more nuanced.  The analogy I kept thinking about was the search committee with 120 application packets to go through – that first pass through, you have to look for reasons to take people out of the running, right?  That’s what they argue is going on with too many reviewers – they’re looking for reasons to reject.  They further argue that any reviewer can find things to criticize in any manuscript, and that just because an article can be criticized doesn’t mean it shouldn’t be published:

A major goal is to dramatize that a reviewer who wishes to find fault is always able to do so.  Therefore, the mere fact that a manuscript can be criticized provides insufficient reason to evaluate it negatively.

So, according to their little dramatizations, Eratosthenes, Galileo, Newton, Harvey, Stahl, Michelson and Morley, Einstein, and Borlaug would each have been rejected by social science reviewers, or at least some social science reviewers.  I won’t get into the specifics of the rejection letters – Einstein is called “insane” (though also genius – reviewers disagree, you know) and Harvey “fanciful” but beyond these obviously amusing conclusions are some deeper ideas about peer review and epistemology.

In their analysis section, Trafimow and Rice come up with 9 reasons why manuscripts are frequently rejected:

  • it’s implausible
  • there’s nothing new here
  • there are alternative explanations
  • it’s too complex
  • there’s a problem with methodology (or statistics)
  • incapable of falsification
  • the reasoning is circular
  • I have different questions I ask about applied work
  • I am making value judgments

A few of these relate to the inherent conservatism of science and peer review, which has been well established (and which was brought up here a few months ago).  For example, plausibility refers to the tendency of reviewers to accept what is already “known” as plausible, and to treat challenges to that received knowledge as implausible, no matter how strong the reasoning behind the challenging interpretation.

A few get at that “trying to find fault” thing I mentioned above.  You can always come up with some “alternative explanation” for a researcher’s results, and you can always suggest some other test or some other measurement a researcher “should have” done.  The trick is to suggest rejection only when you can show how that missing test or alternative explanation really matters, but they suggest that a lot of reviewers don’t do this.

Interestingly, Female Science Professor had a similar post today, about reviewers who claim that things are not new, but who do not provide citations to verify that claim.  Trafimow and Rice spend a bit of time themselves on the “nothing new” reason for rejection.  They suggest that there are five levels at which new research or knowledge can make a new contribution:

  • new experimental paradigm
  • new finding
  • new hypothesis
  • new theory
  • new unifying principle

They posit that few articles will be “new” in all of these ways, and that reviewers who want to reject an article can focus on the dimension where the research isn’t new, while ignoring what is.

Which relates to the value judgments, or at least to the value judgment they spend the most time on – the idea that social science reviewers value data, empirical data, more than anything else, even at the expense of potentially groundbreaking new theory that might push the discourse in that field forward.  They suggest that a really brilliant theory should be published in advance of the data – that other, subsequent researchers can work on that part.

And that piece is really interesting to me because the central conceit of this article focuses our attention, with hindsight, on rejections of stuff that would fairly routinely be considered genius.  And even the most knee-jerk, die-hard advocate of the peer review process would not make the argument that most of the knowledge reported in peer-reviewed journals is also genius.  So what they’re really getting at here isn’t “does the process work for most stuff” so much as “are most reviewers in this field able to recognize genius when they see it, and are our accepted practices likely to help them or hinder them?”

More Revision, Djenan (flickr)

And here’s the thing – I am almost thinking that they think that recognizing genius isn’t up to the reviewers.  I know!  Crazytalk.  But one of the clearest advantages to peer review is that revision based on thoughtful, critical, constructive commentary by experts in the field will, inherently, make a paper better.  That’s an absolute statement but one I’m pretty comfortable making.

What I found striking about Trafimow and Rice’s piece is that over and over again I kept thinking that the problem with the problems they were identifying was that they led to reviews that weren’t helpful to the authors.  They criticize suggestions that won’t make the paper better, conventions that shouldn’t apply to all research, and the like.  They focus more on bad reviews than good, and they don’t really talk explicitly about the value of peer review, but if I had to point at the implicit value of peer review as suggested by this paper, that would be it.

There are two response pieces alongside this article, and the first one picks up this theme.  Raymond Nickerson does spend some time talking about one purpose of reviews being to ensure that published research meets some standard of quality, but he talks more about what works in peer review and what authors want from reviewers – and in this part of his response he talks about reviews that help authors make papers better.  In a small survey he did:

Ninety percent of the respondents expected reviewers to do substantially more than advise an editor regarding a manuscript’s publishability.  A majority (77%) expressed preferences for an editorial decision with detailed substantive feedback regarding problems and suggestions for improvement…

(Nickerson also takes issue with the other argument implied by the paper’s title – that the natural and physical sciences have been so much kinder to their geniuses.  And in my limited knowledge of this area, that is a point well taken.  That inherent conservatism of peer review certainly attaches in other fields – there’s a reason why Einstein’s Theory of Special Relativity is so often put forward as the example of the theory published in advance of the data.  It’s not the only one, but it is not like there are zillions of examples to choose from.)

Nickerson does agree with Trafimow and Rice’s central idea – that just because criticisms exist doesn’t mean new knowledge should be rejected.  M. Lynne Cooper, in the second response piece, also agrees with this premise but spends most of her time talking about the gatekeeper, or quality control, aspects of the peer review process.  And as a result, her argument, at least to me, is less compelling.

She seems too worried that potential reviewers will read Trafimow and Rice and conclude that they should not ever question methodology, or whether something is new — that because Trafimow and Rice argue that these lines of evaluation can be misused, potential reviewers will assume that they cannot be properly used.  That seems far-fetched to me, but what do I know?  This isn’t my field.

Cooper focuses on what Trafimow and Rice don’t: what makes a good review.  A good review should:

  • Be evaluative and balanced between positives and negatives
  • Evaluate connections to the literature
  • Be factually accurate and provide examples and citations where criticisms are made
  • Be fair and unbiased
  • Be tactful
  • Treat the author as an equal

But I’m less convinced by Cooper’s suggestions for making this happen.  She rejects the idea of open peer review in two sentences, but argues that the idea of authors giving (still anonymous) reviewers written feedback at the end of the process might cause reviewers to be more careful with their work.  She does call, as does Nickerson, for formal training.  She also suggests that the reviewers’ burden needs to decrease to give them time to do a good job, but given other things I have read, I wonder about her suggestion that there be fewer reviewers per paper.

In any event, these seem at best like band-aid solutions for a much bigger problem.   See, what none of these papers do (and it’s not their intent to do this) is talk about the bigger picture of scholarly communication and peer review.  And that’s relevant, particularly when you start looking at these solutions.  I was just at a presentation recently where someone argued that peer review was on its way out, not for any of the usual reasons, but because they were being asked to review more stuff, and they had time to review less.  Can limiting reviewing gigs to the best reviewers really work; can the burden on those reviewers be lightened enough?

The paper’s framing device includes science that pre-dates peer review, that pre-dates editorial peer review as we know it, that didn’t go through the full peer-review process, which raises the question – do we need editorial peer review to make this knowledge creation happen?  Because the examples they’re putting out there aren’t Normal Science examples.  These are the breaks and the shifts and the genius that the Normal Science process kind of by definition has trouble dealing with.

And I’m not saying that editors and reviewers and traditional practices don’t work for genius; that would be ridiculous.  But I’m wondering if the peer-reviewed article is really the only way to get at all of the kinds of knowledge creation, of innovation, that the authors talk about in this article – is this process really the best way for scholars to communicate all of those five levels/kinds of new knowledge outlined above?  I don’t want to lose the idealistic picture of expert, mentor scholars lending their expertise and knowledge to help make others’ contributions stronger.  I don’t want to lose that which extended reflection, revision and collaboration can create.

I am really not sure that all kinds of scholarly communication or scholarly knowledge creation benefit from the iterative, lengthy, revision-based process of peer review.  I guess what I’m saying is that I don’t think problems with peer review by themselves are why genius sometimes gets stifled, and I don’t think fixing peer review will mean genius gets shared.  I don’t think the authors of any of these pieces think that either, but these pieces do raise that question – what else is there?