Yes, we did write that up

Finally!

Kate and I finally got an article related to our LOEX of the West presentation (from 2008!) finished and published.  The delay had nothing to do with peer-reviewed publishing cycles and everything to do with our writing process.  But it’s available (in pre-print) now, and I pretty much like it.

Beyond Peer-Reviewed Articles: Using Blogs to Enrich Students’ Understanding of Scholarly Work

Making one-shots better – what the research says (Peer Reviewed Monday, part 2)

ResearchBlogging.org

And now, on to Peer-Reviewed Monday, part two but still not Monday.

Mesmer-Magnus, J., & Viswesvaran, C. (2010). The role of pre-training interventions in learning: A meta-analysis and integrative review. Human Resource Management Review, 20(4), 261-282. DOI: 10.1016/j.hrmr.2010.05.001

As I said earlier this week, this was started by a link to this article, a meta-analysis trying to dig deeper into the questions: which of the pre-practice interventions examined in the Cannon-Bowers, et al study are most effective?  For what type of learning outcomes?  And under what conditions?

The first part of the paper reviews what each of the pre-training interventions are, and presents hypotheses about what the research will reveal about their effectiveness.

METHOD

They reviewed 159 studies, reported in 128 manuscripts.  For this work, they considered only studies that met all of the following conditions:

  • they involved the administration of a pre-training intervention
  • the study population included adult learners
  • the intervention was part of a training program
  • the study measured at least one learning outcome
  • the study provided enough information to compute effect sizes.

The studies were coded for: the type of pre-practice intervention; the type of learning outcome; the method of training delivery; and the content of the training.

The codes for pre-practice intervention were drawn from Cannon-Bowers, et al: attentional advice, metacognitive strategies, advance organizers, goal orientation, and preparatory information.

The codes for learning outcomes were drawn from the Kraiger, et al (1993) taxonomy:

  • Cognitive learning (can be assessed at 3 stages: verbal knowledge, knowledge organization and cognitive strategies)
  • Skill-based learning (also assessed at 3 stages: skill acquisition, skill compilation, and skill automaticity)
  • Affective learning (attitudinal outcomes, self-efficacy outcomes and disposition outcomes)

Training methods coded were very relevant to information literacy instruction: traditional classroom; self-directed or distance learning; and simulations, such as role-plays or virtual reality.

Training content was coded as: intellectual, interpersonal, task-related or attitude.

RESULTS & DISCUSSION — so, what does the research say:

Attentional advice was the intervention I could most immediately think of one-shot related applications for, so it was particularly interesting to me that medium to large positive effects were found for both skill-based and cognitive outcomes, with the largest gains found for skill-based outcomes.  That matters, given that so much of what is taught in one-shots is skill-based, intended to promote success on particular assignments.  These effects are strongest when general, not specific, advice is given.

Metacognitive strategies –

The authors identified two main forms of meta-cognitive strategies that were studied: strategies that involved the learner asking why questions, and strategies where the learner was prompted to think aloud during learning activities.

The research shows that meta-cognitive strategies seem to promote all levels of cognitive and skill-based learning.  Why-based strategies had more consistent effects for all levels of cognitive learning, which supports the authors’ initial hypothesis — but think-aloud strategies do a better job of supporting skill-based outcomes, which does not.

Advance organizers —

Positive results were found for these for both cognitive and skill-based outcomes.  Of particular note for instruction librarians is this finding:  “stronger results were found for graphic organizers than text-based ones across all levels of skill-based outcomes.”

Goal orientation —

When compared with situations where no overt goal was provided to the learners, goal orientations seem to support all types of learning: cognitive, skill-based and affective, with the strongest effects (just by a little bit) in the affective domain.

The authors also hypothesized that mastery goals would be better than performance goals.  The findings suggest this hypothesis is true for skill-based learning and for affective learning.  They were not able to test it for cognitive learning.  They did find something odd with regard to affective learning – when they compared performance goals and mastery goals separately against no-goal situations, performance goals showed greater effects.  But when they compared mastery goals and performance goals directly, stronger effects were found for mastery goals.

Preparatory information –

This showed positive effects for skill-based and affective learning, but they weren’t able to test it for cognitive learning outcomes.

SO WHAT ELSE COULD HAVE AN EFFECT?

The training conditions and content were coded to see if those things had an effect on which pre-practice interventions were most effective.  Of particular interest to me was the finding that stronger effects for cognitive learning were found for advance organizers paired with self-directed training (e.g. tutorials) than for traditional classrooms or simulations.  (Of course, it’s important to remember that those showed positive effects too.)

RESULTS BY TYPE OF OUTCOME

This turned out to be the most interesting way for me to think about the results, so I’m going to go through all of these at some length…

For skill-based outcomes, broken down – the strategies that work best seem to be:

  • skill acquisition: mastery goals & graphic advance organizers.
  • skill compilation: think-aloud meta-cognitive strategies, attentional advice and goals.
  • skill automaticity: graphic organizers and pre-training goals.

This seems to suggest pretty strongly that librarians should find a way to communicate goals to students prior to the one-shot.  Obviously, the best way to do this would probably be via the classroom faculty member, which is why this also makes me think about the implicit message in the goals we do send to students.  Most specifically, I mean the implicit message sent by requirements like “find one of these, two of these, three of these and use them in your paper.”  That does seem like it could be considered a performance goal more than a mastery goal.  And even if the main impact on students is added stress to perform, is that stress serving any purpose, or should it be eliminated?

For cognitive outcomes, also broken down – these strategies emerged from the literature:

  • verbal knowledge: specific attentional advice, why-based meta-cognitive strategies, and graphic advance organizers had the largest effect.
  • knowledge organization: general attentional advice and think-aloud metacognitive strategies
  • development of cognitive strategies: why-based strategies and attentional advice.

This is interesting, of course, because while we know that teaching at this cognitive-outcome level is pretty hard in 50 minutes, a lot of the topics we’re asked to address in the one-shot are really asking students to perform in that domain.  Ideas like information ethics, intellectual honesty, scholarly communication, and identifying a good research article all require more than a set of skills; they also require a way of thinking.  So in this area, I am thinking: okay, we can’t teach this in 50 minutes, but if we can prep students in advance, maybe we have a better chance of getting to something meaningful in that time.

For affective outcomes –

  • Overall, a pre-training goal orientation and attentional advice were most effective in this domain.

These might not seem relevant in the one-shot, but they really are.  In many cases we’re teaching students something in the hope that they’ll use it later, when they really get to that stage of their research process.  Their confidence and self-efficacy are clearly relevant then, as is their disposition to believe that you’re teaching them something valuable!  In fact, I think this might be as worth focusing on as cognitive outcomes, or more so.  So that makes these findings particularly interesting:

  • Post-training self-efficacy AND disposition toward future use of the training material were most influenced when a performance goal orientation was used.
  • Attentional advice, mastery goals and preparatory information are also promising here.

Prepping for the one-shot (Peer Review Wednesday)

ResearchBlogging.org

Via the Research Blogging Twitter stream, I came across an article the other day that seemed like it would be of particular interest to practitioners of the one-shot.  But as I was reading it, I realized that it drew so heavily on an earlier model that I should read that one too – so this week’s Peer Review Monday (on Wednesday) is going to be a two-parter.

Today’s article presents a Framework for Understanding Pre-Practice Conditions and their Impact on Learning. In other words — is there stuff we can do with students before a training session that will make for better learning in the training session? The authors say yes, that there are six categories of things we can do, which raises the related question – are all of the things we can do created equal?

Cannon-Bowers, J., Rhodenizer, L., Salas, E., & Bowers, C. (1998). A framework for understanding pre-practice conditions and their impact on learning. Personnel Psychology, 51(2), 291-320. DOI: 10.1111/j.1744-6570.1998.tb00727.x

This article also reviews the existing literature on each category, but I’m not going to recap that piece here, because that is also the focus of the other article, which was published this year – why cover the same ground twice?

So I have started to feel very strongly that instruction in typical one-shots much more closely resembles training than teaching – at least how I think about teaching.  I’ve had some experiences this year where I have had to do the kind of “what does teaching mean to you” reflective writing that put into focus that there are some serious disconnects between some of the things that are important to me about teaching and the one-shot format, and it makes me wonder if some of the frustration I feel with instruction at times – and that others might be feeling as well – comes from fighting against that disconnect.  Instead of thinking about what I think about teaching (thoughts that started developing a long time ago, when I was teaching different content in credit courses that met several times a week over the course of several weeks) and trying to fit it into the one-shot model, perhaps it makes more sense to spend some time thinking about the model we have and what it could mean.

So, the training literature. Can a deep-learning loving, constructivism believing, sensemaking fangirl like me find inspiration there?  Well, apparently yes.

In their first section…

…the authors define what they mean by “practice.”  Practice in the context of this paper means the “physical or mental rehearsal of a task, skill or knowledge,” and it is done for the specific purpose of getting better at it (or, in the case of knowledge, getting better at applying or explaining it).  It is not, itself, learning, but it does facilitate learning.

They distinguish between practice conditions that exist before the training, during the training, and after it is done.  This article focuses on the before-the-training group – which I think is what makes it really interesting in that one-shot context.

In the second section…

…they dig into the six different types of pre-practice conditions that they categorized out of the available literature on the subject.  In their review of the literature, they tried to limit the studies included to empirical studies that focused on adult learners, but they were not always able to do so.

Attentional Advice

Attentional advice refers to providing attendees with information that they can use to get the most out of the training.  This should not be information about how to do the thing you are going to be teaching, but information about the thing itself.  It should allow the learner to create a mental model that will help them make sense of what is being learned, and that will help connect what is being learned to what they already know.

The example they give describes a training for potential call-center employees.  The attentional advice given before the training includes information about the types of calls that come in and how to recognize and categorize them – not information about how to answer the calls directly.

This one got me thinking a lot about the possibilities of providing information about the types of sources students will find in an academic research process (as simple as scholarly articles/popular articles/books, or more complex – review articles/empirical research/meta-analyses, and so on) as attentional advice before a session, instead of trying to teach it in a one-shot session where you have two options: spend five minutes quickly describing it yourself, or spend half of the session having the students do something active, like examining the sources themselves and then teaching each other.

Metacognitive Strategies

Most instruction librarians can probably figure out what this one is – metacognitive strategies refer to strategies that help the learner manage their own learning.  These are not about the content of the session directly, but instead information about how the learner can be aware of and troubleshoot their own learning process.  The examples provided take the form of questions that learners can ask themselves during the training or instruction session.

Advance Organizers

I am sure the metacognitive strategies will spark some ideas for me, but that didn’t happen immediately – I think because I was distracted by this next category.  Advance organizers give learners, in advance, a way to organize the information they will be learning in the session.  So a really obvious example of this would be: if you want students to learn the content of a website, you could provide information in advance about the website’s navigational structure, and how that structure organizes the information.

This one really got me thinking too.  Another piece of information literacy instruction that is really, really important and about which we have a bunch of research and information backing us up is the research process – the iterative, recursive, back and forth learning process that is academic research.  We even have some useful and interesting models describing the process.  But in a one-shot, you’re working with students during a moment of that process, and it’s really, really hard to push that session beyond the piece of the process that is relevant to where they are at the moment.  What about providing advance information about the process?  Does that require rethinking the content of the session, or the learning activities of the session?  Probably.  But would it provide a way for students to contextualize what you teach in the session?  I’m not sure, but I’m going to be thinking about this one more.

Goal Orientation

This one is pretty interesting in the more recent article.  There are two types of goals – mastery goals and performance goals.  Mastery goals focus attention on the learning process itself, while performance goals focus on achieving specific learning outcomes.  As a pre-practice condition, this means giving learners information about what they should focus on during the session.  As an example, they say that a performance goal orientation tells students in a training for emergency dispatchers to focus on dispatching the correct unit to an emergency in a particular amount of time.  A mastery goal orientation, on the other hand, tells the students to focus on identifying the characteristics they should consider when deciding which unit to dispatch to a particular emergency.

So – a performance goal orientation in the information literacy context might tell students to focus on retrieving a certain number of peer-reviewed articles during the session.  A mastery goal tells them to focus on identifying the characteristics of a peer-reviewed article.

Preparatory Information

This seems like it would be pretty much the same as Attentional Advice, but it’s not.  Here, you focus on letting the learner know about the session environment itself – the examples they gave were situations where the training was likely to be very stressful, or physically or emotionally difficult.

Pre-Practice Briefs

Finally, there’s this one, which refers specifically to team or group training.  In this one, you give the group information about performance expectations.  You establish the group members’ roles and responsibilities before the team training begins.

In the third and fourth sections…

The authors attempt to develop an integrated model for understanding all of these conditions, but they’re not able to do it.  Instead, they present a series of propositions drawn from the existing research.  Finally, they examine implications for day-to-day trainers and identify questions for future research.  The most essential takeaway here is that not all preparation for practice is equal, and that we should do more research figuring out what works best, with what kinds of tasks, and for what kinds of learners.

Stay tuned for the second installment, where current-day researchers examine the last 12 years of research to see if this has happened – and where it has, they tell us what was found.


Word of the day: Advertorial

So, advertorial – one of those words (like “anecdata”) whose meaning is clear the first time you hear it: a piece of writing that is made to look like one thing (usually an article) but which is really another thing (an advertisement).

While the most famous example of this for instruction librarians is undoubtedly the advertisements for Big Pharma in the form of scholarly journals flying under Elsevier’s flag of convenience, they are apparently and not surprisingly a well-established tool in the public relations toolkit.  They even give awards for them.  In the last round of Bronze Anvil Awards (given by the Public Relations Society of America to “recognize outstanding public relations tactics”) there were two awards given to advertorials — one to InSinkerator for something called InSinkerator Gets Home Builders to Think Green, and one to the Florida Department of Citrus, for their Florida Grapefruit Makes Headlines.

So why am I thinking about advertorials today?  Because they are wrecking one of my favorite places to go on the Internet — ScienceBlogs.

In short, ScienceBlogs disastrously, inexplicably, weirdly agreed to allow a new nutrition blog to join ScienceBlogs – an invitation-only network of blogs about science and scientific research.  The weird, disastrous, etc. thing about this new blog, called Food Frontiers, was that it was produced by PepsiCo, and the decision to fairly radically change the type of content that was part of the ScienceBlogs network was made in an uncommunicative, opaque, closed way.

Summaries of the fallout – which bloggers are staying which are going, where the going bloggers are now – can be best found here, at Carl Zimmer’s blog (associated with Discover magazine) — Oh, Pepsi, What Hast Thou Wrought.

A Note from ScienceBlogs can be found on the former site of the Pepsi blog, explaining the decision to take it down.

So what does this all mean, and why do I care?  Lots of people know that I love ScienceBlogs, or that I have loved ScienceBlogs; as a librarian who teaches, my love for it has only grown.  So now what?

Dave Mosher provides a good summary of what this means for content from ScienceBlogs in the future – which is the issue about which I am really concerned.

That effort signals a fundamental change to the way their content is structured:

Before: Blogs.
After: Editorial blogs. | Advertorial blogs.

I type “signals” and not “is” because the transformation isn’t complete.

Because part of this is about reputation – and not reputation in that individual kind of way, but reputation in that authority/publishing/information literacy kind of way that means so much to students struggling to find their way through the scholarly landscape –

From Dave Munger (emphasis added) -

The hypocrisy in handing a nutrition podium to a company that is seriously implicated in the global obesity crisis was astonishing, and even worse, the dozens of bloggers who’ve worked for years to build ScienceBlogs’ reputation were taken completely by surprise.

Former ScienceBlogger David Dobbs nails the key irony here (again, emphasis added), arguing that PepsiCo is “buying credibility generated by others even as they damage same.”

As PalMD and others have pointed out, PepsiCo hardly lacks platform. The only value they can gain from writing here is to draw on the credibility created by a bunch of independent voices engaged in earnest, thoughtful (well, most of the time), and genuine conversation.

What these (and countless other) commentaries point out is that the reputation of the site matters – that the name ScienceBlogs is supposed to mean something, and one of the things it is supposed to mean is no corporate agendas.  The fact that not just anyone could write for ScienceBlogs, the fact that ScienceBloggers were writing independently, the fact that its creator, Seed Media, proclaims this lofty agenda (from their About page): all of it adds up to a set of expectations about what the content on the site was supposed to be.

We believe that science literacy is a pre-condition for progress in the 21st century. At a time when public interest in science is high but public understanding of science remains weak, we have set out to create innovative media ventures to improve science literacy and to advance global science culture.

While those expectations were not always reasonable – and there were ads on the site, and whatever else might also have been a little muddy or murky before – there was an idea behind the project that was an important part of why it was useful to me in the classroom, at the reference desk, and in my own work.  It is not that this content was all supposed to be good, or right, or true, or even civil – but the reasons for it being written were supposed to relate to improving the public understanding of science and science literacy.  So what does that mean in a world where that content is either editorial or advertorial?  No matter how easy it is to tell which is which on the site (and the RSS feed?  the Twitter stream?) – that changes things.

Bora Zivkovic homes in on this question of a network’s reputation in his post, explaining his reasons for leaving ScienceBlogs…

We have built an enormous reputation, and we need to keep guarding it every single day. Which is why the blurring of lines between us who are hired and paid to write (due to our own qualities and expertise which we earned), and those who are paying to have their material published here is deeply unethical. Scientists and journalists share some common ethical principles: transparency, authenticity and truth-telling. These ethical principles were breached. This ruins our reputation, undermines our work, and makes it more unpalatable for good blogger to consider joining Sb in the future.

Zivkovic goes on to discuss the ways in which the existence and influence of the ScienceBlogs network makes the people who blog there de facto science journalists – whether they are aware of (or willing to embrace) that fact or not. It is not surprising in this context (the context of how important science blogging has become to science journalism) that some of the first reactions to the Pepsi blog controversy came not from quick-on-the-draw bloggers, but from mainstream media outlets and watchdogs.

I don’t blog at ScienceBlogs (not many librarians do) and it’s not a crucial part of my everyday professional knowledge building, because most of the content on the site isn’t directly aimed at my professional needs; it’s more the idea of the project that is important to my work than the reality of what is posted there on a day-to-day basis. That’s not true for everyone.  But as a librarian, particularly a librarian working with first-year students making the transition to academic thinking, reading, and writing, ScienceBlogs was (and probably is) a go-to site for me.

A lot of the reason for this is the authority/credibility/reputation issues discussed above.  Not that my students could or should automatically trust any of the content on that or any site, but because I felt like I could tell them (quickly, in a 50-minute one-shot) why and how that information had been created in a way that could guide their critical and effective use of the site as a tool — an incredibly valuable tool – that would help them navigate expert research and academic writing.

But another part of the reason is good old-fashioned findability.  As Zivkovic says in his discussion of the network effect at ScienceBlogs, most people don’t track blogs using RSS readers or other tools – they find the content when they search for it.  And when they search for it and find it on one blog in the network, all of the blogs in the network are made stronger.  I don’t expect my first-year students to really figure out yet what pieces of the dynamic web they want to track for scholarly or professional purposes – most of them, at 18, are still figuring out what those purposes will be. They may want to track stuff for a particular class, or a particular term, but for most of them the searchability and browsability of this site was key to its being useful.  ResearchBlogging is good for that too, and there are other collections of resources that I can point individual students to – but nothing else out there does what ScienceBlogs does (did?) as a place to illustrate the importance and utility of science blogging and academic blogging.

Carl Zimmer puts his finger on one of the main issues for me – if the bloggers leave ScienceBlogs that may be (probably will be) good for the quality of the content but bad for the findability of the content, and those things are not totally unrelated.

What I find particularly galling about this whole affair is that bloggers who don’t want to associate themselves with this kind of nonsense have to go through the hassle of leaving Scienceblogs and setting up their blog elsewhere. The technical steps involved may be wonderfully easy now (export files, open account on WordPress, import), but the social steps remain tedious.

Munger picks up the theme -

If they want to continue to have the kind of influence they used to have at ScienceBlogs, I think the bloggers who have left the site need to do something more than just start or restart their old, independent blogs. They need to form a new network — perhaps built around different principles, but a network nonetheless.

I think so too – I think if they lose the network effect, individual blogs and bloggers and small groups of same will be able to connect with one type of reader, and an important type of reader, but they’ll lose the true neophyte who stumbles on to new ways of talking about evidence and knowledge coming in through a Google Search — or because a librarian says “browse here for a while” when they’re stuck looking for topics.

Ira Flatow (NPR’s Science Friday) offers to talk about hosting departing ScienceBlogs bloggers’ blogs on the Science Friday site instead. And again here.  I suspect that even the benign oversight of NPR might seem too much to the gunshy bloggers who left ScienceBlogs, but I hope they do find each other again somewhere, or that they build new somewheres elsewhere.

Google Scholar search alerts

Searching today for articles about collaborative teaching philosophies (don’t ask)  – I saw this new little icon on the Google Scholar results – how long has this been here?

new search alert icon - Google Scholar result list

I clicked it, thinking it would give me the chance to email results to myself (which is something my students sometimes ask for, though not nearly as often as they ask why Google Scholar won’t format their citations for them).  But instead, it’s a chance to set up an alert for this search.

Google Scholar search alert, with articles only set

I don’t actually know that I’ll use this because I don’t really want anything else coming to my email – an RSS feed would be nice.  But has this been around for months and I’ve just noticed it?  That could definitely be true – we’ll see how it works.

“how does the study measure up”

Here’s a great example of the way an academic blog post, written for a general audience, can be a crucial supplement or starting point for a student trying to decipher the peer-reviewed literature.  From Momma Data –

Blame Mom for High School Beer Binges: The Power of a Self-Fulfilling Prophecy

This post:

  • describes the study
  • identifies the important (and discipline-specific) concepts used by the authors
  • analyzes the study design
  • gives an opinion about the quality of the paper
  • explains the significance of the journal to the discourse

The only thing I’d like to see is a more robust comment stream – maybe some discussion/ refinement of the ideas. But all in all, a great example and on a topic students sometimes write about too!

History and libraries, but not always history of libraries.

Nicholas and I presented this afternoon at Online NW.  Presentation materials are available here, on Nicholas’ blog.  Good times!

We used Prezi to create the presentation.  This is what it looked like, all together, when it was done.  I know that some people have found it difficult to get used to, but I kind of really liked it.  Plus, I’ve used it so far on three very different computers in three very different contexts, and it’s worked smoothly every time.

Plus, no dongle drama.

I am an unscrupulous, unscrupulous formatter

Knowing about my constant and abiding interest in all things peer-review, a colleague handed me this pamphlet the other day.  Published by a project I like, Sense about Science (and funded by, among others, Elsevier, Blackwell, the Royal Pharmaceutical Society of Great Britain, the Institute of Biology and the Medical Research Council), this pamphlet provides a good summary of a lot of reasons why people should value peer-reviewed research.

I really like its focus on the reproducibility of research – the role that peer review plays in getting science out there to be acted upon by other scientists.  And this statement gets at a lot of what I have been thinking about information evaluation lately – about how important it is that we evaluate sources within contexts, not in a vacuum:

If it is peer-reviewed, you can look for more information on what other scientists say about it, the size and approach of  the study and whether it is a part of a body of evidence pointing towards the same conclusions.

But this has me mystified: a callout box titled How can you tell whether reported results have been peer reviewed?  A question any academic reference librarian has struggled to answer at some point, right?

Their answer totally mystifies me.  I keep reading it and reading it, and I can’t make it make any sense.   Seriously – they say the full reference to peer-reviewed papers is likely to look like this, and then they present two formatted article citations, one from the New England Journal of Medicine and one from Science.  The Science one is APA, but I’m not even sure exactly what style the other one follows.

just formatted citations, right?

So under the citations, there’s a word balloon that says that unscrupulous people might “use this style on websites and articles to cite work that is not peer reviewed. But fortunately, this is rare.”

!

Wait, what?   So yeah, it turns out that I’m totally unscrupulous!  And so are you if you use APA to cite an article from the New Republic, or Time or The Journal of Really Lousy Non-Peer Reviewed Science!

I am so confused!  What do they mean by this?

mental debrief from WILU

There’s something about spring term that’s always crazy.  Last week was my last presentation obligation of the term – the WILU conference in Montreal.  WILU is one of my favorite conferences, based on the one time I’ve been before, and luckily we presented on Tuesday, so I was able to enjoy most of it without imminent presentation pressure looming over my head.

Kate and I presented on some very early findings from a research project we have been working on for the last several months – examining stories that instruction librarians tell.  I told Kate at the end that if I ended up blogging about this presentation at this early stage, it would be to write something up about how incredibly valuable it can be to present on research in the early stages, even in the very early stages.

Basically, the segment of the research that we presented on at WILU was drawn from an online survey where we asked instruction librarians to share some stories.  Our interest is … epistemological.  We were hoping to identify some themes that would suggest what we “know” as instruction librarians and professionals, as well as  some ideas of what we talk about, worry about, and feel proud about when it comes to  our professional practice.  This work was primarily intended to inform another round of story-gathering, done as interviews, but we were also hoping that these preliminary results would be interesting on their own.

ETA: it was brought to my attention that some more information about the kinds of stories we gathered might be useful.  This is the slide listing the prompts we used to elicit work stories.  They’re adapted from a chapter in this book.

story prompts

So beyond the obvious benefit of a deadline and potential audience forcing you into your data to figure out what it might say early on, presenting even those early findings was a really positive experience.  For one, other people are as interested in the story method as we are, which is awesome.

For another, a whole room full of other pairs of eyes is a fabulous thing.  This project grew out of this conversation between Kate and me and some others (further framed into reflective practice talk by Kirsten at Into the Stacks), though I don’t think it has stayed there.  There has definitely been research-question creep along the way.

We started the project thinking about theory/practice, as is obvious from the conversation linked above.  And we made the connection to reflective practice based on that as well – based on the idea that scholarship represents another way of knowing what we know, and thinking about ways that scholarship can inform and push our reflections on practice.

And we got a great question about whether it makes sense to conflate scholarship with theory in this context, especially when, as another commenter mentioned, much of the LIS literature isn’t clear about any theoretical frameworks the author used.  A really useful question for thinking about that scope creep – and also exactly the kind of question I can never answer on the spot.

Theory vs. practice is useful shorthand, especially in a short session like these were.  And I do think that including non-theory generating scholarship in the initial conversations that sparked the project reflected some of the ambivalence we were seeing.  As I said at that time, I really don’t think all of that ambivalence is tied up in “if the scholarship in librarianship was more useful, or more rigorous, or more scholarly, or better-written, or  more theoretically grounded, I would totally use it.”

I also think that Schon’s Reflective Practitioner allows these things to be discussed together as well, not because he conflates them, but because he sets the Reflective Practitioner in contrast to both the pure theorist and the applied scientific researcher:

As one would expect from the hierarchical model of professional knowledge, research is institutionally separate from practice, connected to it by carefully defined relationships of exchange.  Researchers are supposed to provide the basic and applied science from which to derive techniques for diagnosing and solving the problems of practice.  Practitioners are supposed to furnish researchers with problems for study and with tests of the utility of research results.

Schon argues that this hierarchical model of professional knowledge has dominated the way we understand, and teach, professional practice – and it is in both the development of grounding theory (basic, disciplinary knowledge) and the development of a body of rigorous, scientific applied knowledge for problem-solving that the practitioners, and the practitioner’s unique ways of knowing, are left out of the equation.

Which is a long way of saying that the initial connections we made still have value for me when I think about these questions, but I’m not sure we want to stop there.  All of this raises the question of whether approaching these questions, and the stories, with a clearer distinction between theory and practice in mind might be more useful.  I think maybe it would.  Clarity is good, and a lack of clarity in prior discussions might itself suggest the need for more.

But the conversation brought a couple of additional thoughts to the forefront, neither of which were really clear until the mental presenting-dust settled.

Here and there along the way, I’ve been thinking about the real-world information literacy literature and its connection to this discussion.  One reason to not discuss it in our 30 minutes was the fact that some of what I have read in that literature recently (as relates to real-world information literacy in professional contexts) examines the differences between the ideal knowing captured in our professional texts/ training/ theories and the real-world/ tacit/ experiential knowing that comes with actually dealing with the uncertainty of practice.  The connections to our original questions probably seem clear, but I wasn’t comfortable calling the peer-reviewed literature our abstract, ideal text-based knowing in the same way as the firefighter’s manuals were understood in this article, for example.

Which on the one level is part of the subject of our next steps with this project – figuring out what our abstract, ideal, text-based knowing IS in instruction librarianship.  But on another level points to the problems with conflating theory and scholarship – parsing them out more clearly I think would make the connections to this body of literature more useful.

Related to this comes the question of our training (or lack thereof) as instruction librarians, in LIS education and after.  Between us, we saw several sessions about professional development for new librarians, which dovetailed with conversations we’d had about the distinction between the stuff we read related to information literacy in grad school and most of the stuff in the literature today.

Kate mentioned that the articles she read in library school instruction classes weren’t the articles about practice, but about theory.  I didn’t take a specific instruction class, but I would say the same was true at my school, and was definitely true in the learning theory class that I took.  I think to follow up on that question usefully will also require parsing that discussion more clearly.

So thanks to all of the people who participated in this great (for us) conversation, and we’ll be contacting people soon for the next round of work on the project.

Final lesson from WILU?  I’m still useless when I try to speak from notes.  Not necessarily the speaking part, though it is definitely not natural for me, but more the actually using the information in my notes part.  I tried in this talk, not throughout but just in one moment at the end, and I still made a total mess of the process.  I walk away from them, get lost, talk past where I am in the notes, and leave things out anyway.  It’s weird that speaking from notes is as much a learned skill as speaking without them is, but it totally is.  I think I blame high school debate, and I suspect it’s too late for me now.

pay no attention to all that money behind the curtain

I give up.

You know that there is an intersection between science and marketing  – 4 of 5 doctors agree that X works for Y?

Most of the marketing goes on below public radar; it’s not directed at us, but at other medical professionals.   This 2005 article at PLoS Medicine couldn’t state it more strongly:  Medical Journals are an Extension of the Marketing Arm of Pharmaceutical Companies.

This article is talking about sponsored trials – research that is sponsored by drug companies, that finds that the drug in question works:

Overall, studies funded by a company were four times more likely to have results favourable to the company than studies funded from other sources. In the case of the five studies that looked at economic evaluations, the results were favourable to the sponsoring company in every case.  The evidence is strong that companies are getting the results they want, and this is especially worrisome because between two-thirds and three-quarters of the trials published in the major journals—Annals of Internal Medicine, JAMA, Lancet, and New England Journal of Medicine—are funded by the industry (citation here, Egger M, Bartlett C, Juni P. Are randomised controlled trials in the BMJ different? BMJ. 2001;323:1253.)

Which has been a topic of conversation for a while, but why stop there?  If the drug companies can create a bunch of the research, why don’t they create the journals too?  Just create a journal.  Don’t pretend that it’s reporting knowledge for the public good, don’t make it so the public can even find it, don’t make it so the doctors can even find it – don’t index it in Medline, don’t even put a website up.

That’s apparently what Merck and Elsevier did.

The full original story is behind The Scientist’s registration-wall, so here’s a good summary with extra added TOC analysis from Mitch André Garcia at Chemistry Blog.

See, I talked briefly here a while back about my frustration with people like Andrew Keen and Michael Gorman when they accept uncritically the idea of traditional media gatekeepers serving a quality-control or talent-identifying role, without acknowledging that the corporate media makes many decisions that are not based on a mission of guaranteeing quality or identifying genius.

And Kate and I talk frequently about how traditional methods of scholarly publishing are not intended to guarantee quality in terms of identifying the best articles, or even the most true or accurate articles, but that those methods are instead intended to create a body of knowledge that supports further knowledge creation.

We’ve managed to fill presentations about peer review pretty easily without focusing on the corporatization of scholarly publishing — there’s a lot of discussion of this corporatization in open access conversations already and a lot of confusion that comes up about the implications of open access for peer review.  Sometimes it seems like every open access conversation in the broader higher education world gets bogged down by misunderstandings about peer review.  So it has seemed true that drawing this artificial, but workable, line between what we are talking about and what we’re not just makes it easier to keep our focus on peer review itself.

But man – it might be just too artificial.  Maybe we can’t talk about peer review at all anymore without talking about the future of a system of knowledge reporting that is almost entirely dependent upon the volunteer efforts of scholars and researchers, almost entirely dependent upon their professionalism and commitment to the quality of their disciplines, in a world where ultimate control is passing away from those scholars’ and researchers’ professional societies and into the hands of corporate entities whose decisions are not driven by commitment to quality, knowledge creation, or disciplinary integrity.

We’ve been focusing on “why pay attention to scholarly work and conversations going on on the participatory web” mostly in terms of how these things help us give our students access to scholarly material, how they help our students contextualize and understand scholarly debates, how they lay bare the processes of knowledge creation that lie under the surface of the perfect, final-product article you see in scholarly journals.  And all of those things are important.  But I think we’re going to have to add that “whistleblower” aspect — we need to pay attention to scholars on the participatory web so they can point out where the traditional processes are corrupt, and where the gatekeepers are making decisions that aren’t in the interests of the rest of us.

I was pointed to the story by friends on Twitter and Facebook:

Here’s the article at BoingBoing

blog.bioethics.net (American Journal of Bioethics)

Drug Injury watch blog has links to reports of the Australian court case where the story was noticed earlier.