Making one-shots better – what the research says (Peer Reviewed Monday, part 2)


And now, on to Peer-Reviewed Monday, part two but still not Monday.

Mesmer-Magnus, J., & Viswesvaran, C. (2010). The role of pre-training interventions in learning: A meta-analysis and integrative review. Human Resource Management Review, 20(4), 261-282. DOI: 10.1016/j.hrmr.2010.05.001

As I said earlier this week, this was started by a link to this article, a meta-analysis trying to dig deeper into the questions: which of the pre-practice interventions examined in the Cannon-Bowers, et al study are most effective?  For what type of learning outcomes?  And under what conditions?

The first part of the paper reviews what each of the pre-training interventions is, and presents hypotheses about what the research will reveal about their effectiveness.

METHOD

They reviewed 159 studies, reported in 128 manuscripts.  For this work, they considered only studies that met all of the following conditions:

  • they involved the administration of a pre-training intervention
  • the study population included adult learners
  • the intervention was part of a training program
  • the study measured at least one learning outcome
  • the study provided enough information to compute effect sizes.
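
(The paper doesn't spell out the computation in this summary, but a common effect-size metric for comparing learners who received an intervention with those who didn't is a standardized mean difference such as Cohen's d. Purely as an illustration, with made-up numbers and my own function name, not the authors' actual procedure:)

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference between an intervention group and a
    control group, using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical post-test scores: learners who got a pre-training intervention vs. those who didn't.
print(round(cohens_d(82.0, 75.0, 10.0, 11.0, 30, 30), 2))  # ~0.67, a medium-to-large effect
```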

The studies were coded for: the type of pre-practice intervention; the type of learning outcome; the method of training delivery; and the content of the training.

The codes for pre-practice intervention were drawn from Cannon-Bowers, et al.: attentional advice, metacognitive strategies, advance organizers, goal orientation, and preparatory information.

The codes for learning outcomes were drawn from the Kraiger, et al. (1993) taxonomy:

  • Cognitive learning (can be assessed at 3 stages: verbal knowledge, knowledge organization and cognitive strategies)
  • Skill-based learning (also assessed at 3 stages: skill acquisition, skill compilation, and skill automaticity)
  • Affective learning (attitudinal outcomes, self-efficacy outcomes and disposition outcomes)

Training methods coded were very relevant to information literacy instruction: traditional classroom; self-directed or distance learning; or simulations, such as role-plays or virtual reality.

Training content was coded as: intellectual, interpersonal, task-related or attitude.

RESULTS & DISCUSSION — so, what does the research say:

For attentional advice — this was one that I was able to immediately think of one-shot related applications for, so it was particularly interesting to me that medium to large positive effects were found for both skill-based and cognitive outcomes, with the largest gains found for skill-based outcomes.  That matters because so much of what is taught in one-shots is skill-based, intended to promote success on particular assignments.  These effects are strongest when general, not specific, advice is given.

Metacognitive strategies –

The authors identified two main forms of meta-cognitive strategies that were studied: strategies that involved the learner asking why questions, and strategies where the learner was prompted to think aloud during learning activities.

The research shows that meta-cognitive strategies seem to promote all levels of cognitive and skill-based learning.  Why-based strategies had more consistent effects for all levels of cognitive learning, which supports the authors’ initial hypothesis — but think-aloud strategies do a better job of supporting skill-based outcomes, which does not.

Advance organizers —

Positive results were found for these for both cognitive and skill-based outcomes.  Of particular note for instruction librarians is this finding:  “stronger results were found for graphic organizers than text-based ones across all levels of skill-based outcomes.”

Goal orientation —

When compared with situations where no overt goal was provided to the learners, goal orientations seem to support all types of learning: cognitive, skill-based and affective, with the strongest effects (just by a little bit) in the affective domain.

The authors also hypothesized that mastery goals would be better than performance goals.  The findings suggest this hypothesis is true for skill-based learning and for affective learning.  They were not able to test it for cognitive learning.  They did find something odd with regards to affective learning – when they compared performance goals and mastery goals separately against no-goal situations, then performance goals showed greater effects.  But when they compared mastery goals and performance goals, stronger effects were found for mastery goals.

Preparatory information –

This showed positive effects for skill-based and affective learning, but they weren’t able to test it for cognitive learning outcomes.

SO WHAT ELSE COULD HAVE AN EFFECT?

The training conditions and content were coded to see if those things had an effect on which pre-practice interventions were most effective.  Of particular interest to me was the finding that stronger effects for cognitive learning were found for advance organizers paired with self-directed training (e.g. tutorials) than for traditional classrooms or simulations.  (Of course, it’s important to remember that those showed positive effects too).

RESULTS BY TYPE OF OUTCOME

This turned out to be the most interesting way to think about it for me, so I’m going to include all of these, probably at some length…

For skill-based outcomes, broken down – the strategies that work best seem to be:

  • skill acquisition: mastery goals & graphic advance organizers.
  • skill compilation: think-aloud meta-cognitive strategies, attentional advice and goals.
  • skill automaticity: graphic organizers and pre-training goals.

This seems to suggest pretty strongly that librarians should find a way to communicate goals to students prior to the one-shot.  Obviously, the best way to do this would probably be via the classroom faculty member, which is why this also makes me think about the implicit message in the goals we do send to students – most specifically, the implicit message sent by requirements like "find one of these, two of these, three of these and use them in your paper."  That does seem like it could be considered a performance goal more than a mastery goal, and even if the main impact on students is added stress to perform, is that stress serving any purpose, or should it be eliminated?

For cognitive outcomes, also broken down – these strategies emerged from the literature:

  • verbal knowledge: specific attentional advice, why-based meta-cognitive strategies, and graphic advance organizers had the largest effect.
  • knowledge organization: general attentional advice and think-aloud metacognitive strategies
  • development of cognitive strategies: why-based strategies and attentional advice.

This is interesting, of course, because while we know that teaching on this cognitive-outcome level is pretty hard in 50 minutes, a lot of the topics we’re asked to address in the one-shot are really asking students to perform in that domain.  Ideas like information ethics, intellectual honesty, scholarly communication, identifying a good research article – these all require more than a set of skills, but also require a way of thinking.  So in this area, I am thinking okay, we can’t teach this in 50 minutes, but if we can prep them in advance, maybe we have a better chance of getting to something meaningful in that time.

For affective outcomes –

  • Overall, a pre-training goal orientation and attentional advice were most effective in this domain.

These might not seem relevant in the one-shot, but they really are.  In many cases we’re teaching students something with the hope that they’ll use it later, when they really get to that stage of their research process; their confidence and self-efficacy at that point is clearly relevant, as is their disposition to believe that you’re teaching them something valuable!  In fact, I think this might be as worth focusing on as cognitive outcomes, or more so.  So that makes these findings particularly interesting:

  • post-training self-efficacy AND disposition toward future use of the training material were most influenced when a performance goal orientation was used.
  • Attentional advice, mastery goals and preparatory information are also promising here.

Prepping for the one-shot (Peer Review Wednesday)


Via the Research Blogging Twitter stream, I came across an article the other day that seemed like it would be of particular interest to practitioners of the one-shot, but as I was reading it I realized that it drew so heavily on an earlier model that I should read that one too – so this week’s Peer Review Monday (on Wednesday) is going to be a two-parter.

Today’s article presents a Framework for Understanding Pre-Practice Conditions and their Impact on Learning. In other words — is there stuff we can do with students before a training session that will make for better learning in the training session? The authors say yes, that there are six categories of things we can do, which raises the related question – are all of the things we can do created equal?

Cannon-Bowers, J., Rhodenizer, L., Salas, E., & Bowers, C. (1998). A framework for understanding pre-practice conditions and their impact on learning. Personnel Psychology, 51(2), 291-320. DOI: 10.1111/j.1744-6570.1998.tb00727.x

This article also reviews the existing literature on each category, but I’m really not going to recap that piece here, because that is also the focus of the other article, which was published this year – so why look at it in both posts?

So I have started to feel very strongly that instruction in typical one-shots much more closely resembles training than teaching – at least how I think about teaching.  I’ve had some experiences this year where I have had to do the kind of “what does teaching mean to you” reflective writing that put into focus that there are some serious disconnects between some of the things that are important to me about teaching and the one-shot format, and it makes me wonder if some of the frustration I feel with instruction at times – and that others might be feeling as well – comes from fighting against that disconnect.  Instead of thinking about what I think about teaching (thoughts that started developing a long time ago, when I was teaching different content in credit courses that met several times a week over the course of several weeks) and trying to fit it into the one-shot model, perhaps it makes more sense to spend some time thinking about the model we have and what it could mean.

So, the training literature. Can a deep-learning loving, constructivism believing, sensemaking fangirl like me find inspiration there?  Well, apparently yes.

In their first section…

…the authors define what they mean by “practice.”  Practice in the context of this paper means the “physical or mental rehearsal of a task, skill or knowledge,” and it is done for the specific purpose of getting better at it (or, in the case of knowledge, getting better at applying or explaining it).  It is not, itself, learning, but it does facilitate learning.

They distinguish between practice conditions that exist before the training, during the training, and after it is done.  This article focuses on the before-the-training group – which I think is what makes it really interesting in that one-shot context.

In the second section…

…they dig into the six different types of pre-practice conditions that they categorized out of the available literature on the subject.  In their review of the literature, they tried to limit the studies included to empirical studies that focused on adult learners, but they were not always able to do so.

Attentional Advice

Attentional advice refers to providing attendees with information that they can use to get the most out of the training.  This information should not be information about how to do the thing you are going to be teaching — but information about the thing you are teaching.  This information should allow the learner to create a mental model that will help them make sense of what is being learned, and which will help connect what is being learned to what they already know.

The example they give describes a training for potential call-center employees.  The attentional advice given before the training includes information about the types of calls that come in and how to recognize and categorize them – not information about how to answer the calls directly.

This one got me thinking a lot about the possibilities of providing information about the types of sources students will find in an academic research process (as simple as scholarly articles/popular articles/books, or more complex – review articles/empirical research/meta-analyses, and so on) as attentional advice before a session, instead of trying to teach it in a one-shot session where you have two options – spend five minutes quickly describing it yourself, or spend half of the session having the students do something active like examining the sources themselves and then teaching each other.

Metacognitive Strategies

Most instruction librarians can probably figure out what this one is – metacognitive strategies refer to strategies that help the learner manage their own learning.  These are not about the content of the session directly, but instead information about how the learner can be aware of and troubleshoot their own learning process.  The examples provided take the form of questions that learners can ask themselves during the training or instruction session.

Advance Organizers

I am sure the metacognitive strategies will spark some ideas for me, but it didn’t happen immediately – I think because I was distracted by this next category.  Advance organizers give learners, in advance, a way to organize the information they will be learning in the session.  So a really obvious example of this would be – if you want students to learn the content of a website, you could provide information in advance about the website’s navigational structure, and how that structure organizes the information.

This one really got me thinking too.  Another piece of information literacy instruction that is really, really important, and about which we have a bunch of research and information backing us up, is the research process – the iterative, recursive, back-and-forth learning process that is academic research.  We even have some useful and interesting models describing the process.  But in a one-shot, you’re working with students during a moment of that process, and it’s really, really hard to push that session beyond the piece of the process that is relevant to where they are at the moment.  What about providing advance information about the process – does that require rethinking the content of the session, or the learning activities of the session?  Probably.  But would it provide a way for students to contextualize what you teach in the session?  I’m not sure, but I’m going to be thinking about this one more.

Goal Orientation

This one is pretty interesting in the more recent article.  There are two types of goals – mastery goals and performance goals.  Mastery goals focus attention on the learning process itself, while performance goals focus on achieving specific learning outcomes.  As a pre-practice condition, this means giving learners information about what they should focus on during the session.  As an example, they say that a performance goal orientation tells students in a training for emergency dispatchers to focus on dispatching the correct unit to an emergency in a particular amount of time.  A mastery goal orientation, on the other hand, tells the students to focus on identifying the characteristics they should consider when deciding which unit to dispatch to a particular emergency.

So – a performance goal orientation in the information literacy context might tell students to focus on retrieving a certain number of peer-reviewed articles during the session.  A mastery goal tells them to focus on identifying the characteristics of a peer-reviewed article.

Preparatory Information

This seems like it would be pretty much the same as Attentional Advice, but it’s not.  In this one you focus on letting the learner know about the session environment itself — the examples they gave were situations where the training was likely to be very stressful, or physically or emotionally difficult.

Pre-Practice Briefs

Finally, there’s this one, which refers specifically to team or group training.  In this one, you give the group information about performance expectations.  You establish the group members’ roles and responsibilities before the team training begins.

In the third and fourth sections…

The authors attempt to develop an integrated model for understanding all of these conditions, but they’re not able to do it.  Instead, they present a series of propositions drawn from the existing research.  Finally, they examine implications for day-to-day trainers and identify questions for future research.  The most essential takeaway here is that not all preparation for practice is equal, and that we should do more research figuring out what works best, with what kind of tasks, and for what kind of learners.

Stay tuned for the second installment, where current-day researchers examine the last 12 years of research to see if this has happened – and where it has, they tell us what was found.


it’s the math

I’m not sure that even my tendency to see information literacy connections everywhere will explain why I’m posting this, but I just thought it was really interesting.  This morning, I got pointed to this article (via a delicious network), which argues that hands-on, unstructured, discovery-based learning doesn’t do the trick for many science students at the secondary level.  Using preparedness for college science as the definition of success, the study finds that most students are more successful if their high school science learning is significantly structured for them by their teachers.

Structure More Effective in High School Science Classes, Study Reveals

What jumped out at me here was that the reason seemed to be linked to the math – students with good preparation in math did benefit from unstructured, discovery-based learning.  And then there was another "similar articles to this one" link at the bottom of the page, pointing to another study making another point – which supports this idea too (which is not hugely surprising, because both items point to different papers by the same researchers).

You do better in college chemistry if you’ve taken high school chemistry, better in physics if you’ve taken physics – but the one big exception to the "success in one subject doesn’t generalize" argument?  You do better in everything if you’re well-prepared in math.

College Science Success Linked to Math and Same-Subject Preparation

After that there are more "articles like this one" links, leading to articles about middle-school math teachers in the US being really ill-prepared, or things about gender and math and science, which really got me thinking about further implications of those findings – if math is such a lynchpin.  So there is something there about how this dynamic, browsable environment makes your brain work in ways that make research better.

There’s also something there about context – getting the “math teachers aren’t prepared” article in the context of the “math is key” research made the significance of the former clearer, made how I could *use* that research much clearer than it would have been if I came upon it alone.  There’s also something there about the power of sites like ScienceDaily (and ScienceBlogs, and ResearchBlogging.org and others) to pull together research, present it in an accessible way in spaces where researchers/readers can make those connections.

And there might even be something there about foundational, cognitive skills that undergird other learning. But mostly, I just found it interesting.

—————

Studies referenced were reported on here:

Sadler, P. M., & Tai, R. H. (2007). The two high-school pillars supporting college science (Education Forum). Science, 317(5837), 457-458. DOI: 10.1126/science.1144214 (paywall)
Tai, R. H., & Sadler, P. M. (2009). Same science for all? Interactive association of structure in learning activities and academic attainment background on college science performance in the USA. International Journal of Science Education, 31(5), 675-696. DOI: 10.1080/09500690701750600

what do huge numbers + powerful computers tell us about scholarly literature? (peer-reviewed Monday)

A little more than a month ago, I saw a reference to an article called Complexity and Social Science (by a LOT of authors).  The title intrigued me, but when I clicked through I found out that it was about a different kind of complexity than I had been expecting.

Still, because the authors had made the pre-print available, I started to read it anyway and found myself making my way through the whole thing. The article is about what might be possible with lots of data and powerful computers able to crunch that data – what might be possible for the social sciences, not just the life sciences or the physical sciences. The reason it grabbed me was this:

Computational social science could easily become the almost exclusive domain of private companies and government agencies. Alternatively, there might emerge a “Dead Sea Scrolls” model, with a privileged set of academic researchers sitting on private data from which they produce papers that cannot be critiqued or replicated. Neither scenario will serve the long-term public interest in the accumulation, verification, and dissemination of knowledge.

See, the paper opens by making the point that research in fields like biology and physics has been incontrovertibly transformed by the “capacity to collect and analyze massive amounts of data,” but while lots and lots of people are doing stuff online every day – stuff that leaves “breadcrumbs” that can be noticed, counted, tracked and analyzed – the literature in the social sciences includes precious few examples of that kind of data analysis.  Which isn’t to say that it isn’t happening – it is, and we know it is, but it’s the googles and the facebooks and the NSAs that are doing it. The quotation above gets at the implications of that.

The article is brief and well worth a scan even if you, like me, need a primer to really understand the kind of analysis they are talking about.  I read it, bookmarked it, briefly thought about writing about it here, but couldn’t really come up with the information literacy connection I wanted (there is definitely stuff there – if nowhere else it’s in the discussion of privacy, but the connection I was looking for wasn’t there for me at that moment), so I didn’t.

But then last week, I saw this article, Clickstream Data Yields High-Resolution Maps of Science, linked in the ResearchBlogs twitter feed (and since then at Visual Complexity, elearnspace, Stephen’s Web, Orgtheory.net, and EcoTone).

And they connect – because while this specific type of inquiry isn’t one of the examples listed in the Science article, this is exactly what happens when you turn the huge amounts of data available, all of those digital breadcrumbs, into a big picture of what people are doing on the web — in this case what they are doing when they work with the scholarly literature. And it’s a really cool picture:

The research is based on data gathered from “scholarly web portals” – from publishers, journals, aggregators and institutions.  The researchers collected nearly 1 billion interactions from these portals, and used them to develop a journal clickstream model, which was then visualized as a network.
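
The paper describes the pipeline only at a high level, but the core idea is countable: within each user session, consecutive journal accesses become weighted journal-to-journal edges. Here is a minimal sketch of that idea, using invented session data and my own field names; it is an illustration of the concept, not the authors' actual model.

```python
from collections import Counter

# Invented example sessions: each is the ordered list of journals a user
# accessed during one visit to a scholarly web portal (assumed format).
sessions = [
    ["J Neurosci", "Nat Neurosci", "Neuron"],
    ["Am Sociol Rev", "Soc Forces", "Am Sociol Rev"],
    ["Nat Neurosci", "Neuron"],
]

# Count journal-to-journal transitions (a click from one journal to the
# next within a session); the counts become edge weights in the network.
edges = Counter()
for session in sessions:
    for a, b in zip(session, session[1:]):
        if a != b:                          # skip self-loops
            edges[frozenset((a, b))] += 1   # treat edges as undirected

for pair, weight in edges.most_common():
    print(" -- ".join(sorted(pair)), "| weight:", weight)
```

Visualizing those weighted edges as a network (with a tool like Gephi or networkx) is what produces the kind of "map of science" picture the article shows.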

For librarians, this is interesting because it adds richness to our picture of how people, scholars, engage with the scholarly literature – dimensions not captured by traditional measures of impact data.  For example, what people cite and what they actually access on the web aren’t necessarily the same thing, and a focus on citation as the only measure of significance has always provided only a part of whatever picture there is out there.  Beyond this, as the authors point out, clickstream data allows analysis of scholarly activity in real-time, while to do citation analysis one has to wait out the months-and-years delay of the publication cycle.

It’s also interesting in that it includes data not just from the physical or natural sciences, but from the social sciences and humanities as well.

What I also like about this, as an instruction librarian, is the picture that it provides of how scholarship connects.  It’s another way of providing context to students who don’t really know what disciplines are, don’t really know that there are a lot of different scholarly discourses, and who don’t really have the tools yet to contextualize the scholarly literature they are required to use in their work.  Presenting it as a visual network only highlights the potential of this kind of research.

And finally – pulling this back to the Science article mentioned at the top – this article is open, published in an open-access journal, and I have to think that the big flurry of attention it has received in the blogs I read, blogs with no inherent disciplinary or topical connection to each other, is in part because of that.

———————-

Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabasi, A., Brewer, D., Christakis, N., Contractor, N., Fowler, J., Gutmann, M., Jebara, T., King, G., Macy, M., Roy, D., & Van Alstyne, M. (2009). Computational social science. Science, 323(5915), 721-723. DOI: 10.1126/science.1167742

Bollen, J., Van de Sompel, H., Hagberg, A., Bettencourt, L., Chute, R., Rodriguez, M., & Balakireva, L. (2009). Clickstream data yields high-resolution maps of science. PLoS ONE, 4(3). DOI: 10.1371/journal.pone.0004803

peer-review, what it is good for? (peer-reviewed Monday)


In a lot of disciplines, the peer reviewed literature is all about the new, but while the stories may be new, they’re usually told in the same same same old ways.  This is a genre that definitely has its generic conventions.  So while the what of the articles is new, it’s pretty unusual to see someone try to find a new way to share the what.  I’ll admit it, that was part of the attraction here.

And also attractive is that it is available.  Not really openly available, but it is in the free “sample” issue of the journal Perspectives on Psychological Science.  I’m pulling this one thing out of that issue, but there are seriously LOTS of articles that look interesting and relevant if you think about scholarship, research, or teaching/learning those things — articles about peer review, IRB, research methods, evidence-based practice, and more.

Trafimow, D., & Rice, S. (2009). What if social scientists had reviewed great scientific works of the past? Perspectives on Psychological Science, 4(1), 65-78. DOI: 10.1111/j.1745-6924.2009.01107.x

So here’s the conceit – the authors take several key scientific discoveries, pretend they have been submitted to social science/ psychology journals, and write up some typical social-science-y editors’ decisions.  The obvious argument of the paper is that reviewers in social science journals are harsher than those in the sciences, and as a result they are less likely to publish genius research.

[Image: No Matter Project (flickr)]

I think that argument is a little bit of a red herring; the real argument of the paper is more nuanced.  The analogy I kept thinking about was the search committee with 120 application packets to go through – that first pass through, you have to look for reasons to take people out of the running, right?  That’s what they argue is going on with too many reviewers – they’re looking for reasons to reject.  They further argue that any reviewer can find things to criticize in any manuscript, and that just because an article can be criticized doesn’t mean it shouldn’t be published:

A major goal is to dramatize that a reviewer who wishes to find fault is always able to do so.  Therefore, the mere fact that a manuscript can be criticized provides insufficient reason to evaluate it negatively.

So, according to their little dramatizations, Eratosthenes, Galileo, Newton, Harvey, Stahl, Michelson and Morley, Einstein, and Borlaug would each have been rejected by social science reviewers, or at least some social science reviewers.  I won’t get into the specifics of the rejection letters – Einstein is called “insane” (though also genius – reviewers disagree, you know) and Harvey “fanciful” but beyond these obviously amusing conclusions are some deeper ideas about peer review and epistemology.

In their analysis section, Trafimow and Rice come up with 9 reasons why manuscripts are frequently rejected:

  • it’s implausible
  • there’s nothing new here
  • there are alternative explanations
  • it’s too complex
  • there’s a problem with methodology (or statistics)
  • incapable of falsification
  • the reasoning is circular
  • I have different questions I ask about applied work
  • I am making value judgments

A few of these relate to the inherent conservatism of science and peer review, which has been well established (and which was brought up here a few months ago).  For example, plausibility refers to the way reviewers are inclined to accept what is already “known” as plausible, and to treat challenges to that received knowledge as implausible, no matter how strong the reasoning behind the challenging interpretation.

A few get at that “trying to find fault” thing I mentioned above.  You can always come up with some “alternative explanation” for a researcher’s results, and you can always suggest some other test or some other measurement a researcher “should have” done.  The trick is to suggest rejection only when you can show how that missing test, or alternative explanation really matters, but they suggest that a lot of reviewers don’t do this.

Interestingly, Female Science Professor had a similar post today, about reviewers who claim that things are not new, but who do not provide citations to verify that claim.  Trafimow and Rice spend a bit of time themselves on the “nothing new” reason for rejection.  They suggest that there are five levels at which new research or knowledge can make a new contribution:

  • new experimental paradigm
  • new finding
  • new hypothesis
  • new theory
  • new unifying principle

They posit that few articles will be “new” in all of these ways, and that reviewers who want to reject an article can focus on the dimension where the research isn’t new, while ignoring what is.

Which relates to the value judgments, or at least to the value judgment they spend the most time on – the idea that social science reviewers value data, empirical data, more than anything else, even at the expense of potentially groundbreaking new theory that might push the discourse in that field forward.  They suggest that a really brilliant theory should be published in advance of the data – that other, subsequent researchers can work on that part.

And that piece is really interesting to me because the central conceit of this article focuses our attention, with hindsight, on rejections of stuff that would fairly routinely be considered genius.  And even the most knee-jerk, die-hard advocate of the peer review process would not make the argument that most of the knowledge reported in peer-reviewed journals is also genius.  So what they’re really getting at here isn’t “does the process work for most stuff” so much as “are most reviewers in this field able to recognize genius when they see it, and are our accepted practices likely to help them or hinder them?”

[Image: More Revision, Djenan (flickr)]

And here’s the thing – I am almost thinking that they think that recognizing genius isn’t up to the reviewers.  I know!  Crazytalk.  But one of the clearest advantages to peer review is that revision based on thoughtful, critical, constructive commentary by experts in the field will, inherently, make a paper better.  That’s an absolute statement but one I’m pretty comfortable making.

What I found striking about Trafimow and Rice’s piece is that over and over again I kept thinking that the problem with the reviewer practices they were identifying was that they led to reviews that weren’t helpful to the authors.  They criticize suggestions that won’t make the paper better, conventions that shouldn’t apply to all research, and the like.  They focus more on bad reviews than good, and they don’t really talk explicitly about the value of peer review, but if I had to point at the implicit value of peer review as suggested by this paper, that would be it.

There are two response pieces alongside this article, and the first one picks up this theme.  Raymond Nickerson does spend some time talking about one purpose of reviews being to ensure that published research meets some standard of quality, but he talks more about what works in peer review and what authors want from reviewers – and in this part of his response he talks about reviews that help authors make papers better.  In a small survey he did:

Ninety percent of the respondents expected reviewers to do substantially more than advise an editor regarding a manuscript’s publishability.  A majority (77%) expressed preferences for an editorial decision with detailed substantive feedback regarding problems and suggestions for improvement…

(Nickerson also takes issue with the other argument implied by the paper’s title – that the natural and physical sciences have been so much kinder to their geniuses.  And in my limited knowledge of this area, that is a point well taken.  That inherent conservatism of peer review certainly shows up in other fields – there’s a reason why Einstein’s Theory of Special Relativity is so often put forward as the example of the theory published in advance of the data.  It’s not the only one, but it is not like there are zillions of examples to choose from.)

Nickerson does agree with Trafimow and Rice’s central idea – that just because criticisms exist doesn’t mean new knowledge should be rejected.  M. Lynne Cooper, in the second response piece, also agrees with this premise but spends most of her time talking about the gatekeeper, or quality control, aspects of the peer review process.  And as a result, her argument, at least to me, is less compelling.

She seems too worried that potential reviewers will read Trafimow and Rice and conclude that they should not ever question methodology, or whether something is new — that because Trafimow and Rice argue these lines of evaluation can be misused, potential reviewers will assume they cannot be properly used.  That seems far-fetched to me, but what do I know?  This isn’t my field.

Cooper focuses on what Trafimow and Rice don’t: what makes a good review.  A good review should:

  • Be evaluative and balanced between positives and negatives
  • Evaluate connections to the literature
  • Be factually accurate and provide examples and citations where criticisms are made
  • Be fair and unbiased
  • Be tactful
  • Treat the author as an equal

But I’m less convinced by Cooper’s suggestions for making this happen.  She rejects the idea of open peer review in two sentences, but argues that the idea of authors giving (still anonymous) reviewers written feedback at the end of the process might cause reviewers to be more careful with their work.  She does call, as does Nickerson, for formal training.  She also suggests that the reviewers’ burden needs to decrease to give them time to do a good job, but other things I have read make me wonder about her suggestion that there be fewer reviewers per paper.

In any event, these seem at best like band-aid solutions for a much bigger problem.  See, what none of these papers do (and it’s not their intent to do this) is talk about the bigger picture of scholarly communication and peer review.  And that’s relevant, particularly when you start looking at these solutions.  I was just at a presentation recently where someone argued that peer review was on its way out, not for any of the usual reasons, but because they were being asked to review more stuff and had time to review less.  Can limiting reviewing gigs to the best reviewers really work; can the burden on those reviewers be lightened enough?

The paper’s framing device includes science that pre-dates peer review, that pre-dates editorial peer review as we know it, that didn’t go through the full peer-review process, which raises the question – do we need editorial peer review to make this knowledge creation happen?  Because the examples they’re putting out there aren’t Normal Science examples.  These are the breaks and the shifts and the genius that the Normal Science process, kind of by definition, has trouble dealing with.

And I’m not saying that editors and reviewers and traditional practices don’t work for genius; that would be ridiculous.  But I’m wondering if the peer-reviewed article is really the only way to get at all of the kinds of knowledge creation, of innovation, that the authors talk about in this article – is this process really the best way for scholars to communicate all five of those levels/kinds of new knowledge outlined above?  I don’t want to lose the idealistic picture of expert, mentor scholars lending their expertise and knowledge to help make others’ contributions stronger.  I don’t want to lose what extended reflection, revision and collaboration can create.

I am really not sure that all kinds of scholarly communication or scholarly knowledge creation benefit from the iterative, lengthy, revision-based process of peer review.  I guess what I’m saying is that I don’t think problems with peer review by themselves are why genius sometimes gets stifled, and I don’t think fixing peer review will mean genius gets shared.  I don’t think the authors of any of these pieces think that either, but these pieces do beg that question – what else is there?

doodling as pedagogy


This one has been all over the news in the last two days, but if you haven’t seen it, it’s an Early View article in the journal Applied Cognitive Psychology. The article suggests that people who doodle while they are listening to stuff retain more of what they hear than non-doodlers do.

As an unabashed doodler (for me it’s usually fancy, typography-like versions of my dog’s name), this isn’t all that surprising. But my brain keeps going back to it — should we be figuring out ways to encourage our students to doodle in library sessions?

See, the article doesn’t say definitively why the doodling works.  But the author, Jackie Andrade, does suggest that it might have something to do with keeping the brain engaged just enough to prevent daydreaming, but not enough to be truly distracting:

A more specific hypothesis is that doodling aids concentration by reducing daydreaming, in situations where daydreaming might be more detrimental to performance than doodling itself.

So you’ve got an information literacy session in the library, with a librarian-teacher you have no relationship with at all, about a topic about which you may or may not think you need instruction.  That sounds like a perfect situation for daydreaming.

And it’s not too hard to think of ways to encourage doodling.  Handouts with screenshots of the stuff you’re talking about – encourage them to draw on the handouts.  Maybe even provide pencils?  I don’t know – it’s not an idea where I’ve fully figured out the execution, but I’m interested.

My students, most of the time, don’t take notes while I’m talking.  Part of this is my style, I talk fast and I don’t talk for very long in any one stretch before switching to hands-on.  But I don’t think that’s all of it – most of them don’t even take out note-taking materials unless they are told to do so by their professor (and then they ALL do) or unless I say “you should make a note of this” (then most of them do).   And this isn’t something I’ve worried about.  I have course pages they can look at if they need to return to something, and I’m confident that most of them know how to get help after the fact if they need it.

But the no-notetaking thing means that they aren’t even in a position to do any doodling.  And as someone who needs that constant hands/part of the brain occupation to stay focused, I wonder why I’ve never thought about that as a problem before.

This study specifically tried to make sure that the subjects were prone to boredom.  They had them do this task right after they had finished another colleague’s experiment, thinking that would increase the chance that they would be bored.  And they gave them a boring task – monitoring a voice message.  Half doodled, half did not, and then they were tested on their recall of the voice message.

I don’t mean to suggest that information literacy sessions are inherently boring; I don’t actually think they are.  But I think some of the conditions for boredom are there, particularly in the one-shot setting, and I don’t think there’s stuff that we can do about all of those conditions.  Some of them are inherent.  The idea of using the brain research that’s out there to figure out some strategies for dealing with that interests me a lot.

——————–
Andrade, J. (2009). What does doodling do? Applied Cognitive Psychology. DOI: 10.1002/acp.1561

Peer-reviewed Monday (plus 24 hours) – has anyone tried out this Delphi method?


So this is a little different for peer-reviewed Monday, even though it is a peer-reviewed article about information literacy. It’s different in that I chose the article because of the research method – the infolit topic was just a bonus. I’m going to be involved in a project for the Oregon Library Association that is going to be using this same method, so I wanted to check it out.

(It’s the Delphi method, if you’re curious. And I talked about another project that used it about halfway through this post.)

In the January issue of portal: Libraries and the Academy, Laura Saunders from Simmons College uses the Delphi method to do some forecasting about information literacy and academic libraries. The Delphi method is frequently used for forecasting, which is actually how my project will be using it, so we’re off to a good start.

So, in short, the Delphi method involves identifying a set of experts on a topic. Then these experts are asked to complete a survey – an open-ended survey with lots of room for them to talk about what they think is important. The researcher collects the surveys and synthesizes the responses, and then sends out another set of questions (which may be new and which may be repeats) for another round of responses. This goes on with the goal being consensus – expert consensus on the topic in question. In this case, the experts were asked to examine some potential scenarios for the future of information literacy:

This study develops possible scenarios for the future of library instruction services and offers practitioners, administrators, and library users a sense of how existing technologies, resources, and skills can best be employed to meet this vision.

Saunders identified her experts by their participation in information literacy organizations, publishing, presenting, and research. She identified 27, 14 agreed to participate and 13 eventually did. She did two rounds of surveys. She pulled some potential futures for information literacy out of the literature (things stay the same, librarians get replaced by faculty who take over all information literacy instruction, and librarians and faculty collaborate) and asked the expert panel to talk about four things:

  1. the scenario they thought was most reasonable or likely and why
  2. any obstacles they could see getting in the way of realizing these scenarios
  3. alternative possibilities or scenarios
  4. other comments

After the first round, she kept one scenario (the collaborative one) and created another composite scenario based on responses from the experts (this last scenario posited a reduced need to teach information literacy because of improvements in the technology). Participants could reiterate their initial choice or choose the new scenario. It isn’t clear from the methods section whether or not the participants were given the same four questions again, nor is it clear what information they were given from the first round – did they see everyone’s responses, a synthesis or summary of those responses, or just the new questions?

The research showed that these experts were largely optimistic about the future of information literacy, and that they overwhelmingly thought the collaborative scenario was the most likely. They identified faculty resistance as a major obstacle, and also mentioned staff and money issues as obstacles. They believed that librarians should leverage their expertise to play a stronger role establishing information literacy goals at the institutional level.

They saw partnerships on instructional design and assignment design as a place where librarians would continue to play a role in information literacy instruction, even if classroom faculty took on more responsibility for teaching, but they expressed concern that librarians aren’t ready to take on those roles. They were also not sure that library schools were providing new practitioners with these skills and that knowledge:

For librarians to be truly integrated into the curriculum rather than offering one-shot sessions, they must have much more pedagogical and theoretical knowledge. Although practicing librarians might have experience with library instruction, few have the background to transition easily to the [consulting] roles being described. Furthermore, respondents were unsure that library school programs were developing courses to adequately prepare future graduates for these responsibilities

The experts also argued that assessment needs to be a concern, and they also raised the age-old question of what do we really mean by information literacy anyway. Following along the lines of the researchers discussed in two recent peer reviewed Mondays, they generally agreed that context is significant when it comes to information literacy, and that information literacy must be understood more broadly than “library research skills.” At the same time, some argued for the more reductionist, standards-based definitions because they are easier to assess.

On a methods level, I found this study compelling, though there were two things that nagged at me. First was the idea that 13 is just not enough experts. Too often Saunders was forced to spend significant time on a point only to undercut its significance when the reader realizes that it had been articulated by only one person on the panel. Some of them were necessary correctives or added important subtleties to the conversation, so I don’t fault her for including them. But when it’s just one voice it is just not possible to entirely dismiss the idea that that voice is alone because it’s wacky.

The second thing was related to the first: I didn’t get any sense from the article that any kind of real consensus had been reached, or that the experts involved had changed or refined their views as a result of the process. Those who were outliers after round one of the surveys remained outliers. As it was described to us, the Delphi process offers the interactivity and social learning benefits of a focus group, while allowing the participants to provide individual, thoughtful, reflective feedback. That may have gone on in this study, but I don’t feel like I really saw it if it did.

On a content level, I found the argument that instruction librarians needed more pedagogical and theoretical knowledge intriguing. I was struck by the extent to which the expert panel focused on teaching knowledge as the thing separating the faculty and the librarians, as shown by this quote from one of the panelists: “faculty ‘view librarians as having no pedagogic understanding.’”

Librarians frequently talk about teaching as something faculty don’t think we can do, but in my experience it isn’t teaching but research that causes this gap. I haven’t run across many faculty members who don’t think librarians can teach, though some certainly don’t know that they do, but I regularly run across faculty members who are surprised to hear that there is information science research. And when it comes to what would make me feel comfortable approaching faculty and saying “this is what your students need” – it is research on what students actually need, what they do and don’t know, and more of that kind of evidence that would help me do it, not better pedagogical knowledge or teaching techniques. This may be a function of spending a lot of time at research-focused institutions, but I’m not sure. And it may be that this is exactly what these experts mean by knowledge that is more pedagogical and theoretical, but again, I’m not sure.

And this relates to the assessment piece and the definitions of information literacy piece as well. Because these are both examples of places where there is a danger of following the path of least resistance – of defining information literacy like faculty understand it, or of assessing what other people think is important. Not that that has to be what happens, but the danger exists.

And I do wonder how these experts have themselves avoided the gaps they see in others – how they have developed the knowledge of theory and pedagogy that they think librarians need. Or maybe they haven’t – maybe they are including themselves in the number of librarians who aren’t ready to be faculty partners in this way. I’m not sure. But I have been thinking lately that the Delphi method might be useful for getting at that question as well – how do expert instruction librarians develop the knowledge they need to do what they do?

————-
Saunders, L. (2008). The future of information literacy in academic libraries: A Delphi study. portal: Libraries and the Academy, 9(1), 99-114. DOI: 10.1353/pla.0.0030