So, according to TechCrunch in 2010, Bill Gates predicted that by 2015 people won’t have to go away to college anymore because

Five years from now on the web for free you’ll be able to find the best lectures in the world…. It will be better than any single university.

Fast forward to this year and Harvard and MIT launch edX, designed to bring an interactive course experience to anyone with an Internet connection (so, not just lectures) – building a “global community of learners” and strengthening programs back on campus as well.

Online education and its potential to disrupt college as we know it is a talked-about thing, is what I am saying.

But despite that, I have never really thought about this.

(via Walking Paper)

It’s kind of a longish video with a measured, or slow, pace – so if you didn’t watch it, the basic idea seems to be a platform that manages online course offerings – potential teachers can upload their classes, potential students can find and sign up for classes.  There’s some consistency in offerings – they’re all one-day, in-person workshops that cost $20.

Here’s the thing: I can see this working with enough critical mass — but I’m not sure I can see it working on a college campus.  But I think it should work on a college campus – like, I can see it working on a campus that’s not all that different from the ones we have.  Why?  Well, reasons…

  1. We have a lot of really smart students who know how to do stuff.  We also have a lot of really smart faculty and staff who know how to do stuff, but I haven’t figured out yet whether it works better in my head as something bringing the whole community together – building a learning community that encompasses the physical community — or as a student-teaching-students thing.
  2. We have students (and faculty and staff) who have a lot of interests – who want to learn how to do stuff.
  3. We talk a lot about high impact educational practices — those practices that increase student success and engagement.  What’s important about these practices isn’t so much “if students get these experiences then school will be easier for our students” so much as “if students get these experiences then they’ll develop the networks, resources and resilience to get through the tough parts, stay in school and ultimately figure out how to succeed.”  Taking on the teaching role doesn’t directly fit any of these practices, but it seems to fit in spirit — basically, if the teaching feels like it’s part of what makes the community the community, then participating would increase attachment to the community.

But on the other hand, other reasons …

When school pressures hit, there’s very little that survives. Which is what I mean when I say I can see this working at a college that is similar to, but not exactly the same as, what I see outside my window (or what I would see if I had a window).  Basically, I find it hard to see our students finding time for this kind of, well, dabbling a lot of the time — they can use working out or even parties as a legit reason not to study — one keeps you healthy and the other keeps you in friends – but taking a class on fixing your bike?  No, I can’t see that being treated as a legit reason not to focus on the classes and learning you’re actually paying for.

And I’m not sure what that means – I can easily see something like this working with my students just after they leave college.  Well, not easily, but realistically, I can imagine this kind of ecosystem taking root.  In college, on the other hand, it’s a lot harder.  I’m not sure what I think about that.

But here’s the thing – this seems like a great thing for libraries to manage.  This is information literacy, browsing, exploration and curiosity.  Exactly the kind of thing we are all about in college – but think about the ecosystem we build to support it.  What’s missing?  This kind of collaborative sharing of expertise — the people networks.

Which brings it back to the discussions of online learning I started with — see, I’m pulling it back around.  Seriously, I’m as surprised as you are.

One thing that got me (and really, almost everyone else) two years ago when that Bill Gates quote appeared was just what a top-down, boring view of education it suggested — sitting in front of lectures, absorbing the knowledge =/= education.

And I’m a known lecture defender, but seriously – what made college worth it for me was the people.  And not just the faculty, though they were important, but my peers as well.

Which is why I think, on one level, that I couldn’t stop thinking about this community-teaching model after seeing it this morning.  Because it’s using technology to develop the community, but it gets at something that could only work on campus – that reflects part of why I love our campus community (and all of the campuses and communities I’ve been a part of).  It gets at part of the reason why, even though I had to do a distance library degree, I chose a program where I had classmates.

Of course, I learned a lot from my classmates, and of course I learned a lot from my interactions with faculty.  But even more than that – those relationships (especially with peers) are what created the culture of learning that existed in my college experience — the expectations, the standards, the ideas about what was worth your time and what wasn’t — those things were all social, shared values that we gave each other.  Some campuses did it really well, building a culture that really pushed me beyond where I would have been on my own.  Some, well, showed me how great I used to have it.

Even though I think it wouldn’t work – I keep trying to think about why it would.  Because a college that developed the kind of culture where that kind of sharing and learning was possible, was rewarded, was considered important enough to do even alongside the classes you’re paying for — that would be really cool.

Peer Reviewed Monday – Scaffolding Evaluation Skills

ResearchBlogging.org
So this week we’re also behind a paywall, I think.  Someday I will have time to actually go looking for Peer Reviewed Monday articles that meet a set of standards, but right now we’re still in the “something I read in real life this week” phase.

And this one was interesting – so far, when I have found articles that are specifically about deliberate interventions designed to teach something about peer review or about research articles, it has almost always been in this literature – the literature on teaching science.  Not surprising, but it does raise the question of disciplinary differences.  Still, the overarching takeaway of this article isn’t that everyone should teach about evaluating scientific evidence the way the authors did so much as that everyone should be teaching this on purpose, and over and over.

Which is a message I can get behind.  And one, I suspect, that is true across disciplines.

So the article has two parts.  One is a presentation of the model the authors used to teach students to evaluate evidence, and the second is a report on their research assessing the use of the model in a class.  Their students are not college students, but advanced high school students.

The authors open by arguing for the significance of evaluation skills in science –

Students, more frequently now than before, are faced with important socio-scientific dilemmas and they are asked or they will be asked in the future to take action on them.  They should be in position to have reflective discussions on such debates and not accept data at face value.

They further argue that students are not being taught this now – that most problem-based or inquiry-based curricula take the data as a given, and don’t include “question the data” as part of the lesson.  (I almost think they are arguing that this is even more important now than it had been before because of the current emphasis on active, experiential learning.  That they’re suggesting that this type of pedagogy requires evaluation skills that the old lecture model didn’t, but that teaching evaluation skills hasn’t been built into these curricula.  That’s an interesting idea.)

In the lit review, the authors spend some time on the question of what “credibility” means.  For the purposes of this paper, they argue that there are two main components to assessing the credibility of evidence: the source of the evidence and the method by which it was constructed.  This interpretation is heavily influenced by Driver et al. (2001).

Questions to ask of the source:

  • Is there evident bias or not?
  • Was it peer-reviewed?
  • Who is the author? What is their reason for producing the evidence? What is their background?
  • What is the funding source?

Questions to ask of the methodology:

  • Does the evidence refer to a comparison of two different groups?
  • Is there any control of variables?
  • Were the results replicated?

The review of the literature suggests that there is ample evidence to support the claim that students are uncertain about how to evaluate evidence and assess claims.  This holds true across grade levels and disciplines.  They also suggest that there is very little research on whether these skills can be improved.


Credibility Assessment Framework

The authors then turn their attention to the Credibility Assessment Framework, which they believe will help high school students build the skills they need to assess evidence in inquiry situations.  The framework is based on two specific theoretical concepts: the learning-for-use framework (Edelson, 2001) and the scaffolding design framework (Quintana et al., 2004).  The framework is intended to help designers create good learning activities that include:

  • authentic contexts
  • authentic activities
  • multiple perspectives
  • coaching and scaffolding by the teacher at critical times
  • authentic assessment of learning within the tasks
  • support for the collaborative construction of knowledge
  • support for reflection about and articulation of learning

What they did

The team spent eight months building the learning environment for a class of secondary school science students.  They built their evaluation learning activities around a project where students were supposed to be doing hands-on work on an ill-structured and complex problem (food and GMOs) — a context where their work should naturally and authentically benefit from the critical evaluation of multiple sources of evidence.

One thing that is significant here is that the authors supplied the research for the students to evaluate — they didn’t include a “finding stuff” piece in this work.  But they also modified the sources that the students were going to use, when they felt it was important to do so to decrease the cognitive load on students.  What was really interesting to me was what they added in – context: why the study was done and where it fit.  This is exactly what I feel (feel, because I haven’t got data) my students are missing when they’re just assigned “peer-reviewed articles.”

This information was put in a database in the students’ online learning space.  This space includes both an “inquiry” environment and a reflective “WorkSpace” environment; the project used both.

Scaffolding was built in, using both human-provided information (from the teacher) and computer-supported information (available online for the duration of the unit).  The unit as a whole lasted eleven weeks, with eleven 90-minute lesson plans.  The students started out doing hands-on experiments, and then spent the remainder of the unit doing groupwork which included data evaluation.  Then at the end the groups presented their findings.

In the first four lessons, the students were evaluating the provided sources without direct instruction. In the fifth lesson, they did a specific exercise where they evaluated the credibility of two sources unrelated to the class’ topic — this was done to reveal the criteria that the students had been unconsciously using as they attempted to evaluate provided sources in the first four weeks.

What they found out

The authors gathered pre- and post-test data using two instruments.  One measured the mastery of concepts and the other the evaluation skills.  They also videotaped the class sessions and used data captured from the online learning environment.  There was a control class as well, which did not have any of the specific evaluation lessons.  The authors found that for the study group, there was a statistically significant difference between the pre- and post-tests for both conceptual understanding and evaluation skills.  For the control group there was no significant difference.

Two findings I found particularly interesting:

  • Including the qualitative data gave more insight.  In the pre-tests students were able to identify the more credible sources, but they were not able to articulate WHY those sources were more credible.
  • Within the particular components of credibility that the authors identified (source and method), the students did fine on author and author background by themselves, but needed help with type of publication and funding source.

The students needed scaffolding help on methodological criteria, and even with it, many students didn’t get it (though they got more of it than they had coming in – this was a totally new concept for most of them).

Here’s the piece that I found the most interesting.  The impact of the study, as interpreted by me, was not so much on the students’ ability to identify the really good or the really bad sources.  It sounded to me like the real impact was that the students were able to do more meaningful navigation of the sources in the middle.  And I think that’s really important — and something that most students don’t know they need to know on their own.  Related to this – the students were likely to mistrust ALL “internet” sources at the beginning, but by the end they were able to identify a journal article, even if that journal article was published online.  That’s significant to me too – that shows the start of the more sophisticated understanding of evaluation that I think is necessary to really evaluate the scholarly literature.

Finally, the authors found that most of the conversations the students had about evaluation came as a result of instruction – not on their own – which they took as evidence that instruction was needed.

As I said before, the point of the paper seemed to me to be more about the fact that this kind of direct intervention is needed, not that this specific intervention is the be-all and end-all of instruction in this area.  Beyond this, I think the paper is interesting because it illustrates how big a job “evaluation” is to teach – that it includes not only a set of skills but a related set of epistemological ideas — that the students need to know something about knowledge and why and how it’s created.  That’s a big job, and I’m not surprised it took three months to do here.

Nicolaidou, I., Kyza, E., Terzian, F., Hadjichambis, A., & Kafouris, D. (2011). A framework for scaffolding students’ assessment of the credibility of evidence. Journal of Research in Science Teaching. DOI: 10.1002/tea.20420

I wrote a book chapter!

When I first started at OSU, I was browsing through some composition texts because I knew that part of my job was going to involve working closely with the writing program on the beginning composition class. While I was doing that, I came across some descriptions of different writing styles outlined by OSU professor Lisa Ede in her book Work in Progress and immediately recognized myself in her description of the “heavy reviser.”

(Seriously, she could have included my picture)

Reading that really had an impact on me – not that it changed how I write, at all, but it changed how I felt about it.  And most of all, it made it easier for me to write collaboratively.  Knowing my style as one of many meant knowing what to warn people about – knowing that my willingness to slash and burn through a draft just might freak the heck out of someone who writes a draft more deliberately than I do.

So it is especially wonderful that now I have collaborated with Lisa herself.  A year-plus ago she told me that she was substantially revising her textbook The Academic Writer, and asked if I would collaborate with her on the research chapter.  Chapter 6 and I spent many hours together over the next several months, and I am pretty happy with the results – even if the scope of the whole made it difficult for me to see the forest for all the trees while I was immersed in creation.

Doing Research: Joining the Scholarly conversation is available here, in OSU Libraries’ Scholars’ Archive – I hope it’s of interest and ultimately of use!

Zotero assignment revisions

So, in the end the Zotero assignment worked very well on the Zotero side, and less well on the information literacy side.  So I’m spending this week revising it and designing some new activities.  A few quick takeaways:

The assignment was trying to do too much.  It was the main way to assess:

  • Students’ ability to recognize different source types and explain where they fit into the scholarly process.
  • Students’ ability to track down those different source types.
  • Students’ understanding of the scholarly and creative output of their department (and, by extension, the scope of intellectual activity within their discipline).
  • Students’ ability to use research tools to organize and manage their sources.

Way too much.  Illustrated mainly by the fact that there were a few students who managed to do all of those things in their work.  That made it very clear what the others were missing and made me want to figure out a way for all students to be able to get to where those few did in this class.

So here’s the thing – the first two outcomes up there were the problem, not the technology or logistics of syncing libraries and the like.  The bibliography project should really be about the third and fourth outcomes.  The collaborative nature of the bibliography (and the ability to see the breadth of what our faculty produces) was lost on students who had to work too hard to meet all of the format requirements that were in place to measure the first two outcomes.  Those requirements took away from the authenticity of the experience, and from the evaluation and contextualization I had hoped the students would be able to do.

So this term, I am planning to get at those first two outcomes in different ways, and then make some changes to the bibliography assignment:

  1. drop the number of sources required in the annotated bibliography from 5 to 3.
  2. increase the emphasis on evaluation (and multiple methods of evaluation) in the annotations.
  3. change the workflow a bit – have students create a broad, pre-evaluated body of resources in a personal library and then have them select their 3 sources from that larger pool, annotate them and add them to the collaborative bibliography.
  4. build in a required conference so that I talk directly to each student about the process fairly early on.
  5. drop the format requirements altogether and allow students to add any 3 resources they want (while increasing their responsibility to justify those choices in multiple ways in their annotations).
  6. push the due date for the sources up a week, add a week between the final sources due date and the final reflection due date, and significantly target and focus the scope of the final reflection essay.

(Big hat tip to my students.  Many of these changes were also articulated by them when I asked them to help – in some cases their input was what really allowed me to put my finger on the problems).

What about the tech?

In the end, syncing did cause problems for a few, and Zotero hurdles did cause problems for a few.  Students who were, for whatever reason, not able to spend a focused amount of time at some point earlier in the term learning the mechanics of Zotero found it very challenging to manage finding sources and figuring out Zotero in the context of a last-minute scramble.

I had thought that my students would have to do the bulk of their Zotero work at home because of having to re-download and sync Zotero every time in the classroom.  My Zotero library was still very difficult to sync in the classroom (I assume the hugeness is a factor) but the students rarely had to wait more than 2-3 minutes.  Clearly, I can and should rely a lot more on classroom time as a place where students can work with Zotero.

Most students were very positive about Zotero.  A few found it cumbersome.  There was a clear pattern, though, that I found interesting but troubling, in that there is nothing I can do about it.  The pattern was this — those students who had reason to use Zotero for real, for a real research project, during the term were much, much clearer in their evaluation of its value.  And by extension, I believe they are the ones most likely to keep using it.

My class is a 1-credit class.  I can’t assign an authentic, student-scale scholarly research project that would fit in that little work.  But whether or not they have reason to use it in another class is nothing I can control.  It’s troubling because it points to a deeper issue about this class’ place within the major – issues we all know about but aren’t sure how to fix.

Yes, we did write that up

Finally!

Kate and I finally got an article related to our LOEX of the West presentation (from 2008!) finished and published.  This peer-reviewed article delay had nothing to do with publishing cycles and everything to do with writing process.  But it’s available (in pre-print) now, and I pretty much like it.

Beyond Peer-Reviewed Articles: Using Blogs to Enrich Students’ Understanding of Scholarly Work

Critical Literacy for Research – Sort of Peer-Reviewed Friday

Unexpectedly, it’s Peer Reviewed Friday.  Well, sort of.  Harvard Educational Review is a student-run journal, with an editorial board made up of graduate students deciding which articles get published.

I was teaching a class in our small classroom – where I never teach – so I went up early to make sure that I still knew how to work the tech.  It’s on the 5th floor, where the L’s are shelved, so I was flipping through the Fall 2009 issue while I waited for the students to show up.  This article caught my eye — well worth reading, both for the content/ideas and because it is very enjoyably written.

Harouni, Houman (Fall 2009). High School Research and Critical Literacy: Social Studies with and Despite Wikipedia. Harvard Educational Review, 79(3), 473-493.

It’s a reflective, case-study type description of the author’s experiences reworking his research assignments in high school social studies classes. There’s a ton here to talk about – the specific exercises he developed and describes, the way the piece works as an example of critical reflective practice — but mainly I want to unpack this bit, which I think is the central theme of the work:

If students do not engage in the process of research inside the classroom, then it is natural for them to view the assignment in a results-oriented manner — the only manifestation of their work being their final paper and presentation.  It is not surprising then, that they are willing to quickly accept the most easily accessible and seemingly accurate information that satisfies the assignment and spares them the anxiety of questioning their data.  And when their final products did not meet my expectations, the students responded not by rethinking the research process itself but by simply attempting to adjust the product in light of what they perceived to be personal preferences. (476-77)

(emphasis mine)

Basically, the narrative he lays out says that his research projects had been unsuccessful for a while, but it wasn’t until he noticed his students’ heavy and consistent reliance on Wikipedia as a source that he started digging into why, what that meant, what he really wanted to teach, and what he really wanted students to learn.  And he changed stuff based on those reflections.

Harouni’s thinking about information literacy (which he calls “critical literacy for research”) was initially sparked by students who were not evaluating sources or showing any sign of curiosity as they researched, but it was further sparked when his first attempts at addressing student gaps didn’t work, sparked by students who were trying, and failing, to evaluate texts they weren’t yet ready to evaluate.

Along the way, he talks about the limitations of a checklist, or “algorithmic,” approach to evaluation — limitations he discovered when he reflected on what his students actually did when he tried to use that approach in his classroom:

Two observations confirmed the shallowness of the learning experience created through the exercise: first, the students did not apply their learning unless I asked them to do so; second, they remained dependent on the list of rules and questions to guide their inquiry. (480)

In other words, they could do the thing he asked them to do (apply the checklist to information sources) but it didn’t affect their actual practice as researchers, nor did it change how they viewed the information they were getting from Wikipedia.

It also shows why it is important to help students understand the openness and dynamism of Wikipedia, but that that by itself is not enough: “knowledge of the uncertainties of a source does not automatically translate into an awareness of one’s relationship with the information” (477).

This piece is, I think, essential to getting at the real value of his insights and experience — many of our students want to find certainty in their research processes.  They want to know that a source is good or bad.  Wikipedia bans feed that.  Checklists feed that too, especially when they are not taught as an initial step in an evaluation process, but as the process itself.  What we really want students to be able to do when they research is to manage uncertainty — to say I know this is uncertain and I can figure out what it means for me as I try to answer my real, important, and complex question.

Harouni’s process is an excellent reminder of how teachers want clarity too – and how they have to be willing to embrace uncertainty themselves if they are to guide students through a process of authentic inquiry:

In teaching critical literacy for research, I have had to separate research from its dry, academic context and consider it as an everyday practice of becoming informed about issues that have an impact on students’ lives.  I must value not answers but instead questions that represent the continued renewal of the search.  I must value uncertainty and admit complexity in the study of all things. (490)

In this, he knocks on the door of a question that I frequently have as an instruction librarian (one which I think many instruction librarians have — how much can I really accomplish as a teacher on my own?).  If the classroom instructor – the person who creates, assigns, explains, and evaluates the research assignment – isn’t actively engaged with the students’ research process, are there limits to what I can do?  I do think there are.  I don’t think those limits mean that I should do nothing, far from it – but I do think they affect what I should be trying to accomplish on my own, and the other ways I should be thinking about furthering my goals for students, inquiry and learning.

At the end of the day, one of Harouni’s basic assumptions is that it is part of his job as a social studies teacher to foster inquiry and curiosity in his students: “[f]or two semesters, research projects remained a part of my curriculum — not because they were wonderful learning experiences, but because I could not justify, to myself, a social studies class that did not work to improve the way students navigated the ocean of available information” (474-5).  In other words, he believes that teaching information literacy is an essential part of what he does.  And that is key.  You can’t have that perspective and also value coverage – of content information – above all else.  It’s one or the other.  (Is it?  Yeah, I think it is.)

Not every faculty member is going to have that idea of what their job is.  And not every librarian will either – but I think maybe for instruction librarians it should be.  It is true that rules and clarity make coverage easier.  There was a question on ILI-L yesterday from someone (responding to an ongoing discussion about teaching web evaluation) asking “how do you even have time to talk about web evaluation when you have to cover all this other stuff?”

Rules make it easier to “cover” web evaluation.  Faculty want us to “cover” lots of different tools.  WE want to “cover” lots of different tools.

(N.B. I am not suggesting that everyone who engaged in the “web evaluation” discussion just “covers” it and doesn’t teach it.  Nor am I suggesting that the people who worry about covering what the faculty want them to cover are only interested in coverage.  I do think, though, that the pressure to “cover” is as real for us as it is for people in the disciplines, and these discussions spark reminders of that.)

But if we want students to think about research as a process, if we want research to BE a learning process, then we have to engage in teaching the process.  And that’s extra hard for us – we can’t do that in the one-shot by ourselves.  And we can’t do it if we’re worried about coverage — about covering everything the library has to offer.  And I’m not just saying that about “we can’t teach everything about the library in a one-shot” — I think we all know that.  I think I am saying that it can’t be about that at all – that the point has to be about the process, about authenticity, about this –

I now understand that whatever research strategies students use in their day-to-day lives, which no doubt will vary depending on who the learners are, must be investigated and taken into account by their teacher.  Neither this goal nor the goal of improving these strategies can be attained unless students have time to engage in research while they are in the classroom.  And inviting students to the computer lab and remaining attentive to their interaction with online sources is as important as accompanying students to the library. (490)

And maybe this means not worrying about teaching research as a recursive learning process in the one-shot.  Maybe this means rethinking what and where we teach and maybe it’s work with faculty that gets at that overarching goal.  I don’t know.  I do know, though, that I have some great ideas for rethinking my credit class next term.

Classroom activities to promote critical literacy for research:

1. A (relatively innocuous) vandalism example demonstrated in class.  He didn’t change the content of pages, just the accompanying photo to illustrate the process of editing.

2. Students work in pairs to evaluate a Wikipedia article on a topic they know a lot about (for example, one student used the article about her former high school). Through this exercise he was able to teach about:  skepticism & its place in the research process, identifying controversial claims in a text, citations and footnotes, and verifying claims by checking outside sources.

3. Judging a book by its first sentence. He brought in 5 history textbooks, showed the covers and provided the first sentence.  Then he asked students to describe what they could figure out about the book from that first sentence.  With this exercise he was able to teach: authorial bias or point of view; finding the author’s voice.

4. Research beyond the first sentence.  When they tried to apply these critical skills to the texts they found in their research projects, though, they still had trouble because they didn’t know enough about the stuff they were researching.  So he looked for a way through this problem. Enter Wikipedia.  He provided a list of pages identified by Wikipedia editors as biased or lacking a neutral point of view, and asked the students to choose an article on a somewhat familiar topic and write a brief essay, with specific references to the text, with suggestions for improving the piece to meet Wikipedia’s neutrality standard.

5. Contributing as an author.  Similar to other projects like this, it was one option for his students as a final project.  Interesting in that he collaboratively developed the assignment and rubric with interested students.

Making one-shots better – what the research says (Peer Reviewed Monday, part 2)


And now, on to Peer-Reviewed Monday, part two but still not Monday.

Mesmer-Magnus, J., & Viswesvaran, C. (2010). The role of pre-training interventions in learning: A meta-analysis and integrative review. Human Resource Management Review, 20(4), 261-282. DOI: 10.1016/j.hrmr.2010.05.001

As I said earlier this week, this was started by a link to this article, a meta-analysis trying to dig deeper into the questions: which of the pre-practice interventions examined in the Cannon-Bowers, et al study are most effective?  For what type of learning outcomes?  And under what conditions?

The first part of the paper reviews what each of the pre-training interventions are, and presents hypotheses about what the research will reveal about their effectiveness.

METHOD

They reviewed 159 studies, reported in 128 manuscripts.  For this work, they considered only studies that met all of the following conditions:

  • they involved the administration of a pre-training intervention
  • the study population included adult learners
  • the intervention was part of a training program
  • the study measured at least one learning outcome
  • the study provided enough information to compute effect sizes.
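The “effect sizes” in that last criterion are, in meta-analyses of this kind, usually standardized mean differences.  As an illustration (my own sketch, not anything from the paper), here is why a study has to report each group’s mean, standard deviation, and sample size before it can be included:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: standardized difference between a trained and an untrained group."""
    # Pool the two groups' standard deviations, weighting by degrees of freedom.
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical numbers: trained group scored 78 (sd 10, n 30),
# control group scored 72 (sd 12, n 30).
d = cohens_d(78, 10, 30, 72, 12, 30)  # roughly 0.54, a "medium" effect
```

A study that reports only “the trained group did significantly better” gives the meta-analyst none of these six numbers, which is why such studies had to be excluded.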

The studies were coded for: the type of pre-practice intervention; the type of learning outcome; the method of training delivery; and the content of the training.

The codes for pre-practice intervention were drawn from Cannon-Bowers, et al: attentional advice, metacognitive strategies, advance organizers, goal orientation, and preparatory information.

The codes for learning outcomes were drawn from the Kraiger, et al (1993) taxonomy:

  • Cognitive learning (can be assessed at 3 stages: verbal knowledge, knowledge organization and cognitive strategies)
  • Skill-based learning (also assessed at 3 stages: skill acquisition, skill compilation, and skill automaticity)
  • Affective learning (attitudinal outcomes, self-efficacy outcomes and disposition outcomes)

Training methods coded were very relevant to information literacy instruction: traditional classroom; self-directed or distance learning; or simulations, such as role-plays or virtual reality.

Training content was coded as: intellectual, interpersonal, task-related or attitude.

RESULTS & DISCUSSION — so, what does the research say:

For attentional advice — this was one I could immediately think of one-shot applications for, so it was particularly interesting to me that medium-to-large positive effects were found for both skill-based and cognitive outcomes, with the largest gains for skill-based outcomes.  That matters because so much of what is taught in one-shots is skill-based, intended to promote success on particular assignments.  These effects are strongest when general, not specific, advice is given.

Metacognitive strategies —

The authors identified two main forms of meta-cognitive strategies that were studied: strategies that involved the learner asking why questions, and strategies where the learner was prompted to think aloud during learning activities.

The research shows that meta-cognitive strategies seem to promote all levels of cognitive and skill-based learning.  Why-based strategies had more consistent effects for all levels of cognitive learning, which supports the authors’ initial hypothesis — but think-aloud strategies do a better job of supporting skill-based outcomes, which does not.

Advance organizers —

Positive results were found for these for both cognitive and skill-based outcomes.  Of particular note for instruction librarians is this finding:  “stronger results were found for graphic organizers than text-based ones across all levels of skill-based outcomes.”

Goal orientation —

When compared with situations where no overt goal was provided to the learners, goal orientations seem to support all types of learning: cognitive, skill-based and affective, with the strongest effects (just by a little bit) in the affective domain.

The authors also hypothesized that mastery goals would be better than performance goals.  The findings suggest this hypothesis is true for skill-based learning and for affective learning.  They were not able to test it for cognitive learning.  They did find something odd with regard to affective learning: when they compared performance goals and mastery goals separately against no-goal situations, performance goals showed greater effects.  But when they compared mastery goals and performance goals directly, stronger effects were found for mastery goals.

Preparatory information —

This showed positive effects for skill-based and affective learning, but they weren’t able to test it for cognitive learning outcomes.

SO WHAT ELSE COULD HAVE AN EFFECT?

The training conditions and content were coded to see if those things had an effect on which pre-practice interventions were most effective.  Of particular interest to me was the finding that stronger effects for cognitive learning were found for advance organizers paired with self-directed training (e.g. tutorials) than for traditional classrooms or simulations.  (Of course, it’s important to remember that those showed positive effects too.)

RESULTS BY TYPE OF OUTCOME

This turned out to be the most interesting way to think about it for me, so I’m going to go through all of these at some length…

For skill-based outcomes, broken down – the strategies that work best seem to be:

  • skill acquisition: mastery goals & graphic advance organizers.
  • skill compilation: think-aloud meta-cognitive strategies, attentional advice and goals.
  • skill automaticity: graphic organizers and pre-training goals.

This seems to suggest pretty strongly that librarians should find a way to communicate goals to students prior to the one-shot.  Obviously, the best way to do this would probably be via the classroom faculty member, which is why this also makes me think about the implicit message in the goals we do send to students — most specifically, the message sent by requirements like “find one of these, two of these, three of these and use them in your paper.”  That reads more like a performance goal than a mastery goal, and even if its main impact on students is added stress to perform, is that stress serving any purpose, or should it be eliminated?

For cognitive outcomes, also broken down – these strategies emerged from the literature:

  • verbal knowledge: specific attentional advice, why-based meta-cognitive strategies, and graphic advance organizers had the largest effect.
  • knowledge organization: general attentional advice and think-aloud metacognitive strategies
  • development of cognitive strategies: why-based strategies and attentional advice.

This is interesting, of course, because while we know that teaching at this cognitive-outcome level is pretty hard in 50 minutes, a lot of the topics we’re asked to address in the one-shot are really asking students to perform in that domain.  Ideas like information ethics, intellectual honesty, scholarly communication, identifying a good research article: these all require more than a set of skills; they also require a way of thinking.  So in this area, I am thinking okay, we can’t teach this in 50 minutes, but if we can prep them in advance, maybe we have a better chance of getting to something meaningful in that time.

For affective outcomes —

  • Overall, a pre-training goal orientation and attentional advice were most effective in this domain.

These might not seem relevant in the one-shot, but they really are.  In many cases we’re teaching students something in the hope that they’ll use it later; when they really get to that stage of their research process, their confidence and self-efficacy are clearly relevant, as is their disposition to believe that you’re teaching them something valuable!  In fact, I think this might be as worth focusing on as cognitive outcomes, or more so.  So that makes these findings particularly interesting:

  • Post-training self-efficacy AND disposition toward future use of the training material were most influenced when a performance goal orientation was used.
  • Attentional advice, mastery goals and preparatory information are also promising here.

Prepping for the one-shot (Peer Review Wednesday)


Via the Research Blogging Twitter stream, I came across an article the other day that seemed like it would be of particular interest to practitioners of the one-shot, but as I was reading it I realized that it drew so heavily on an earlier model that I should read that one too.  So this week’s Peer Review Monday (on Wednesday) is going to be a two-parter.

Today’s article presents a Framework for Understanding Pre-Practice Conditions and their Impact on Learning. In other words — is there stuff we can do with students before a training session that will make for better learning in the training session? The authors say yes, that there are six categories of things we can do, which raises the related question – are all of the things we can do created equal?

Cannon-Bowers, J., Rhodenizer, L., Salas, E., & Bowers, C. (1998). A framework for understanding pre-practice conditions and their impact on learning. Personnel Psychology, 51(2), 291-320. DOI: 10.1111/j.1744-6570.1998.tb00727.x

This article also reviews the existing literature on each category, but I’m not going to recap that piece here because it is also the focus of the other article, which was published this year; why cover the same ground twice?

So I have started to feel very strongly that instruction in typical one-shots much more closely resembles training than teaching – at least how I think about teaching.  I’ve had some experiences this year where I have had to do the kind of “what does teaching mean to you” reflective writing that put into focus that there are some serious disconnects between some of the things that are important to me about teaching and the one-shot format, and it makes me wonder if some of the frustration I feel with instruction at times – and that others might be feeling as well – comes from fighting against that disconnect.  Instead of thinking about what I think about teaching (thoughts that started developing a long time ago, when I was teaching different content in credit courses that met several times a week over the course of several weeks) and trying to fit it into the one-shot model, perhaps it makes more sense to spend some time thinking about the model we have and what it could mean.

So, the training literature. Can a deep-learning loving, constructivism believing, sensemaking fangirl like me find inspiration there?  Well, apparently yes.

In their first section…

…the authors define what they mean by “practice.”  Practice in the context of this paper means the “physical or mental rehearsal of a task, skill or knowledge,” and it is done for the specific purpose of getting better at it (or, in the case of knowledge, getting better at applying or explaining it).  It is not, itself, learning, but it does facilitate learning.

They distinguish between practice conditions that exist before the training, during the training, and after it is done.  This article focuses on the before-the-training group – which I think is what makes it really interesting in that one-shot context.

In the second section…

…they dig into the six different types of pre-practice conditions that they categorized out of the available literature on the subject.  In their review of the literature, they tried to limit the studies included to empirical studies that focused on adult learners, but they were not always able to do so.

Attentional Advice

Attentional advice refers to providing attendees with information that they can use to get the most out of the training.  This information should not be information about how to do the thing you are going to be teaching — but information about the thing you are teaching.  This information should allow the learner to create a mental model that will help them make sense of what is being learned, and which will help connect what is being learned to what they already know.

The example they give describes a training for potential call-center employees.  The attentional advice given before the training includes information about the types of calls that come in and how to recognize and categorize them, not information about how to answer the calls directly.

This one got me thinking a lot about the possibilities of providing information about the types of sources students will find in an academic research process (as simple as scholarly articles/popular articles/books, or more complex: review articles/empirical research/meta-analyses, and so on) as attentional advice before a session.  That would avoid trying to teach it in a one-shot session, where you have two options: spend five minutes quickly describing it yourself, or spend half of the session having the students do something active, like examining the sources themselves and then teaching each other.

Metacognitive Strategies

Most instruction librarians can probably figure out what this one is – metacognitive strategies refer to strategies that help the learner manage their own learning.  These are not about the content of the session directly, but instead information about how the learner can be aware of and troubleshoot their own learning process.  The examples provided take the form of questions that learners can ask themselves during the training or instruction session.

Advance Organizers

I am sure the metacognitive strategies will spark some ideas for me, but it didn’t happen immediately – I think because I was distracted by this next category.  Advance organizers give learners, in advance, a way to organize the information they will be learning in the session.  So a really obvious example of this would be – if you want students to learn the content of a website, you could provide information in advance about the website’s navigational structure, and how that structure organizes the information.

This one really got me thinking too.  Another piece of information literacy instruction that is really, really important, and about which we have a bunch of research and information backing us up, is the research process – the iterative, recursive, back and forth learning process that is academic research.  We even have some useful and interesting models describing the process.  But in a one-shot, you’re working with students during a moment of that process, and it’s really, really hard to push that session beyond the piece of the process that is relevant where they are at the moment.  What about providing advance information about the process?  Does that require rethinking the content of the session or the learning activities of the session?  Probably.  But would it provide a way for students to contextualize what you teach in the session?  I’m not sure, but I’m going to be thinking about this one more.

Goal Orientation

This one is pretty interesting in the more recent article.  There are two types of goals – mastery goals and performance goals.  Mastery goals focus attention on the learning process itself, while performance goals focus on achieving specific learning outcomes.  As a pre-practice condition, this means giving learners information about what they should focus on during the session.  As an example, they say that a performance goal orientation tells students in a training for emergency dispatchers to focus on dispatching the correct unit to an emergency in a particular amount of time.  A mastery goal orientation, on the other hand, tells the students to focus on identifying the characteristics they should consider when deciding which unit to dispatch to a particular emergency.

So – a performance goal orientation in the information literacy context might tell students to focus on retrieving a certain number of peer-reviewed articles during the session.  A mastery goal tells them to focus on identifying the characteristics of a peer-reviewed article.

Preparatory Information

This seems like it would be pretty much the same as Attentional Advice, but it’s not.  In this one you focus on letting the learner know stuff about the session environment itself — the examples they gave were situations where the training was likely to be very stressful, physically or emotionally difficult.

Pre-Practice Briefs

Finally, there’s this one, which refers specifically to team or group training.  In this one, you give the group information about performance expectations.  You establish the group members’ roles and responsibilities before the team training begins.

In the third and fourth sections…

The authors attempt to develop an integrated model for understanding all of these conditions, but they’re not able to do it.  Instead, they present a series of propositions drawn from the existing research.  Finally, they examine implications for day-to-day trainers and identify questions for future research.  The most essential takeaway here is that not all preparation for practice is equal, and that we should do more research figuring out what works best, with what kinds of tasks, and for what kinds of learners.

Stay tuned for the second installment, where current-day researchers examine the last 12 years of research to see if this has happened – and where it has, they tell us what was found.


cream colored ponies and crisp apple strudel

Another post that’s essentially no more than bullet points — I have a lot of formal writing I have to be doing right now, so this will end at some point.  So, cool stuff…

via Dave Munger (twitter) Alyssa Milano pushing peer-reviewed research — see, it is relevant after you leave school!

via A Collage of Citations (blog).  Former OSU grad student/ writing instructor turned Penn State PhD candidate Michael Faris’ First-Year Composition assignment using archival sources to spark inquiry and curiosity.  Note especially the research-as-learning-process focus of the learning goals.

via Erin Ellis (facebook), plus then via a bunch of other people — proof that, in the age of social media, an awesome title can boost your impact factor.  But the content stands on its own as well – I’ve been thinking a lot about different information-seeking styles, and how different people gravitate naturally toward different approaches.  By Karen Janke and Emily Dill: “New shit has come to light”: Information seeking behavior in The Big Lebowski

via @0rb (twitter) Journalism warning labels

via Cool Tools (blog): Longform to Instapaper.  Longform by itself is pretty cool: it aggregates some of the best long-form (mostly magazine) writing on all kinds of topics.  But what makes it really cool is that it integrates seamlessly with Instapaper, meaning that I can find something there, push a button, and have it available on my iPad to read offline the next time I am stuck somewhere boring.

Related – Cool Tools’ post on the best magazine articles ever.

via Cliopatria (blog).  Obligatory history-related resource — London Lives: 1690-1800.  Pulling together documents from 8 archives & 15 datasets, this online archive asks “What was it like to live in the world’s first million person city?”