Behind the Paywall: Grading, Bias and Class Participation

This article was recommended all over my twitter the other day, and the topic looks pretty interesting.  So let’s launch a new (hopefully) regular feature, Behind the Paywall. 

Citation

Alanna Gillis. “Reconceptualizing Participation Grading as Skill Building.” Teaching Sociology, 47:1, 10-21. January 2019.  DOI: 10.1177/0092055X18798006

The Access Experience

Paywall: Sage, accessed at my library.

I had a lot of trouble loading the PDF, which I blamed on my local wifi for a while. Seriously, for a college town, we have terrible wifi options in Corvallis.  But when the same thing happened a few days later, and everything else around it loaded okay, I decided it was clearly a problem with Sage.

TL;DR

Untangling everything that is wrong with how we measure and reward class participation would take forever. Not only do our dominant methods rely on instructors to be free from bias and have perfect recall, but they rest on assumptions about students’ willingness, ability and preparedness to participate in class that are deeply problematic. By continuing to reward participation in these ways, teachers — even when they do not want to — are replicating and reinforcing inequalities. Reframing class participation as a skill-building opportunity and building in robust opportunities for students to reflect on their performance is a better way to go.

Here we go…

So we start off by situating this paper within the context of teaching and learning in the classroom. We know that students who are engaged and participating in class learn more.  Knowing this, professors have an interest in motivating students to participate, so many of them grade class participation.

I am liking this problem statement in its recognition that the target audience has spent many years in school, knows that participation grading is a thing, and doesn’t need eight different citations showing that to be true.  The author goes on to say, yes, I haven’t done any systematic inquiry to nail down objective participation grading themes, but I also don’t have to pretend we don’t all know what we know.  And based on living in the world as both students and teachers, we know that there are two basic ways that participation grading works:

  • Teacher gives grades based on recall. A few times a term (or once at the end) they remember how many times each student talked, and assign a grade based on that.  
  • Teacher gives grades based on actually counting how many times students talk during the term. More complex applications of this method might count specific types of participation (asking questions, answering questions, etc.).

There are issues with both of these methods. Teachers do not have perfect recall. Teachers are human and subject to bias in all the ways humans are biased. And, finally, more talking does not necessarily mean more learning.

OH. I think this next bit though is why this paper is getting so much love.  It’s because it goes down to the next level and points out that the deeper problem with all of this participation grading is that these methods of motivating class participation are built on several problematic assumptions: that all students are equally prepared to speak in class; that students all understand class participation in the same way; that students have all been rewarded (or not) for classroom behaviors in the same way; that all students are bringing the same skill set to the classroom.

There’s truly no reason to believe that those things are true. And there are a lot of good reasons to believe that they are not.

SO.

Gillis has three intersecting goals in this paper:

  1. Unpack the assumptions behind participation grading as it happens most frequently now.
  2. Re-frame participation grading as an opportunity for skill development, and re-focus it on more meaningful goals.
  3. Show the evidence that says this new framework is worth implementing in real classrooms.

Literature Review

Let’s unpack some assumptions.

We know student evals of teachers are super biased.  We acknowledge and understand that that bias works in both directions:  students’ biases affect their evaluations of teachers AND teachers’ biases affect their evaluations  of students.  However, when it comes to participation grading, we have a tendency to acknowledge that bias as a reality without really understanding or unpacking its dynamics.

(I’m going to summarize the lit review pretty significantly, and link to some key sources)

The research documents general biases that affect student evaluation: teachers tend to reward students they like,  squishy factors like attitude affect evaluations, and factors like race, gender, ability, and socioeconomic class definitely affect assessment in many ways.

We also know that we work in a world where teachers don’t always remember their students’ names, so systems that rely on accurate recall are inherently suspect.  But the issues with memory go beyond this.  Teachers are more likely to remember extreme situations (outbursts, falling asleep in class) than mundane normalcy. Teachers tend to remember giving students more chances to participate than students remember getting.  

There is also a ton of research that challenges the idea that all students are equally ready and willing to participate in class.  There are  a ton of things going into how students are socialized to understand their role in the classroom, or what appropriate interactions with teachers look like.  Some come from outside school — parents’ messages to children are shaped by their own experiences with school or authority structures, for example. Some come from the lived experience of being in school. Students bring very different experiences with consequences and rewards when it comes to asking questions, offering opinions, sharing stories, suggesting counternarratives, and classroom behavior.  And all of these dynamics — inside the school and out — are shaped by factors (including race, gender, class, ability, language and more) that create and reinforce inequality, and which also need to be analyzed and understood intersectionally. 

Then, we have one of the most pervasive dynamics in the teaching literature, at least in the literature that focuses on motivation and learning.  There is a lot of work in this area coming from psychology, using lenses that focus inquiry on personality. This shapes the discourse and produces research (and policy) that frames behaviors like class participation as the result of hardwired personality traits — shyness, introversion or extraversion — and not as behaviors that are built out of skills that can be learned. 


New Framework, Different from the Old Framework

“Instead, I propose that instructors conceptualize participation grades in undergraduate classrooms as opportunities to incentivize and reward skill building” (13).

This framework:

  • Conceptualizes participation as a set of learnable, interconnected skills
  • Recognizes and rewards skills that students already believe reflect their engagement in class (peer editing, prepping with classmates in study groups, active listening, coming to office hours) but which are usually not captured by “class participation” grades.
  • Encourages students to work on different skills simultaneously, and to start to understand these skills as interconnected.

Application (Methods)

This is how the author applied the framework in class.

  • 2 sociology classes. 1 400-level and 1 100-level.
  • 45 students per class.
  • Class participation is broken into 5 dimensions:
    • Attendance and tardiness
    • Preparation for each class meeting
    • Participation in small group discussions
    • Participation in full class discussions
    • Participation in other ways (office hours, writing center visits, study groups, and more)
  • Evaluation is conducted using a “self-reporting goal-centered approach.”

Start of the term:

  • Students use a 5-point Likert scale to self-rate along each of the 5 dimensions (“How well do you usually do with this behavior in classes like this?”).  They also write 1 sentence justifying their numerical rating.
  • Students identify 3 concrete, measurable goals for themselves during the term and write out a plan to achieve these goals.
  • Teacher reads and gives feedback on the goals and plan.

During the term:

  • Periodic, informal check-ins.
  • At least one formal self-reflection.  Students re-rate themselves along each dimension and submit a reflection justifying their rating, reporting on progress towards goals, and adjusting goals/plan as needed.
  • Instructor gives feedback on goals and plan, and if there is a disconnect between the students’ self-rating and the instructor’s perception, meets to calibrate this.
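That calibration step can be sketched as a simple check. This is a hypothetical illustration, not code from the paper: the five dimension names follow Gillis's framework, but the 2-point disconnect threshold is my own assumption.

```python
# Hypothetical sketch: flag dimensions where a student's Likert self-rating
# and the instructor's rating diverge enough to warrant a calibration
# meeting. The 2-point threshold is an assumption, not from Gillis (2019).

DIMENSIONS = [
    "attendance",
    "preparation",
    "small_group_discussion",
    "full_class_discussion",
    "other_participation",
]

def flag_disconnects(self_ratings, instructor_ratings, threshold=2):
    """Return dimensions where 1-5 Likert ratings differ by >= threshold."""
    return [
        dim for dim in DIMENSIONS
        if abs(self_ratings[dim] - instructor_ratings[dim]) >= threshold
    ]

student = {"attendance": 5, "preparation": 4, "small_group_discussion": 2,
           "full_class_discussion": 1, "other_participation": 3}
instructor = {"attendance": 5, "preparation": 4, "small_group_discussion": 4,
              "full_class_discussion": 3, "other_participation": 3}

print(flag_disconnects(student, instructor))
# ['small_group_discussion', 'full_class_discussion']
```

A student who rates themselves well below the instructor's perception (the "too harsh on themselves" group I wonder about below) would be flagged the same way, which is part of why the midterm check-in seems like the right place to re-calibrate.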

End of the term: 

  • Student submits a self-report that is similar to the mid-term report, but in which they assign themselves a participation grade and justify it.
  • Instructor reviews the reflective material from throughout the term, and the students’ progress towards goals, assigns a grade, and explains it with written feedback.

The instructor reports that there was rarely a disconnect between the students’ self-reported grades and the instructor’s perception.

(Note, I would expect from my experience that there would be a group of students who would grade themselves too harshly, describing similar activities and evidence to other students but assigning themselves a lower grade than I would, or than those other students would. I wonder if that happened.  It would be pretty easy to re-calibrate at midterm).

Goals/benefits of this approach:

  • Reward a fuller range of behaviors.
  • Reward something more than quantity.

Analysis:

  • Analyzed the numerical self-evaluations and counted how many students achieved their goals.
  • Also inductively distilled themes from the written reflections.

Results

Skill building: Students came to see speaking in class as a skill.  Related to this — they were able to articulate progress even when they were still feeling nervous about participation, or still identified as “shy” or “introverted.”

Starting is the hard part, and then it gets easier.

(Note: most students focused on participating more, but some students worked on skills around participating less, or participating intentionally. These themes cut across both of these goals.)

Connections: The five dimensions of participation are interconnected.

Transfer: Some students reported that they practiced their participation skills in other classes too.

Discussion

I am going to skip most of the discussion because this is super long, and I feel like many of the insights are grounded really well in the rest of the paper.  But I will tell you what the author identified as limitations:

  • Having to rely on self-reporting.  This is the big one.  They tried some forms of triangulation, but most came up short for obvious reasons, like, “I am trying to evaluate things that happen both inside and outside the classroom.”   So far, in the rare cases when there was a significant teacher/student mismatch, course correcting at the midterm check in addressed the problem.
  • The experience so far demonstrates the need to do more, intentionally and formally, to train students how to participate in class.

Final thought

“Sociologists must take issues of inequality as seriously in our grading as we do in our instructional content, and moving toward a skill development participation assessment system is a good step in that direction.” (20)

 

I wrote a thing. Over there –>

I haven’t written here in a super long while.  But Hannah and I wrote a thing, published in that In the Library with the Lead Pipe place.

Sparking Curiosity — Librarians’ Role in Encouraging Exploration

Lots of you know this has been a long time coming — we first talked about it in public, I think, at Online Northwest in the 2014 Snowpocalypse.  I thought it might be fun to pull together all of the posts about it here:

Online Northwest 2014 (with Chad Iwertz)

(Which also led to the Curiosity Self Assessment Scoring Guide)

Library Instruction West 2014

Oregon Library Association Annual Conference 2016

LILAC 2016

European Conference on Information Literacy 2016

This work has also been a part of a lot of the professional development workshops I’ve taught, and Hannah and I have taught, in the last five years.  In fact, we’ll be teaching on this topic (and others) the day after tomorrow at Colorado Mountain College, and we’re very excited.

AMICAL 2015 — It Takes a Campus: Creating Research Assignments that Spark Curiosity and Collaboration.

University of San Francisco 2015 — Fostering Curiosity and Inquiry with First-Year Students

 

 

Again with curiosity (Library Instruction West 2014)

So, not only was this conference in Portland but it was also awesome.  Thanks one more time to Joan Petit, Sara Thompson, and the rest of the conference committee who put on such a great event.

Marijuana Legalization Papers Got You Down?  You Won’t Believe What We Did About It!

Hannah Gascho Rempel & Anne-Marie Deitering (OSU Libraries & Press)

Title slide for a presentation. The word curiosity is displayed across the top. Several images of sparks are below.

Download the slides (PDF)

Download the slides + presenter notes (PDF)

Session handout

Take the Curiosity Self Assessment

Scoring Guide to the Curiosity Self Assessment

 

thoughts about learning sparked by that note taking study

Remember a couple of weeks ago when news articles like this, or this or this were all over your social media?  Mine too.  I’m a little late to replying, but I didn’t want to do it until I’d read the actual study.  I read a couple of the news articles and something about the coverage was bugging me.  Me, an avowed taking-notes-by-hand-notetaker!  

Today, I read it, and I think I know what’s been bugging me.  It’s that “when we didn’t have the tools that make things easy, we learned a lot, so the tools are bad” narrative.

In other words, the technology (in this case, a pen) puts up a barrier, and what we have to do to get around that barrier turns out to be a useful learning experience.  We learn new skills because we’re motivated to get around the barrier, and we don’t even really notice we’re learning them because we have our eyes on the prize.

So when a new technology comes along that removes the barrier, we love it and adopt it, but worry about everyone who isn’t going to have the important experience of getting over the barrier.  Or worse, we look at those who grew up without the barrier and decide that they’re deficient in some way.

Does this sound familiar?  Of course it does.  How many times have we heard variations of it in libraries?  A million?  A zillion?

The problem with ____________ is that students don’t learn how to _____________ anymore.

First, a quick recap of the study

(I crack myself up, it won’t be all that quick)

Context:  There are 2 main theories about the value of notetaking that were considered here:

  • External storage — this is the idea that notes give you something to study later.
  • Encoding — this is the idea that the cognitive work you do to turn information into notes improves your learning, even if you don’t review them again.

Since laptops enable a more transcription-like type of notetaking, the authors hypothesize that they will find benefits to pen-and-paper notetaking over laptop-supported notetaking, and they designed 3 related studies to test that:

Study 1 — let’s compare laptop notetaking to paper notetaking, without doing much else.

2 groups of students were asked to take notes on the same material, with no instruction on how to take notes.  They were randomly assigned laptops or pen/paper to do the task. Afterwards, they answered both factual/recall and conceptual/application questions about the material.  In addition, their notes were coded and analyzed by the researchers.

Both groups of students did about the same on the factual/recall questions, but the students who took notes by hand did significantly better on the conceptual/application questions.

Those who took notes in longhand wrote fewer words, and had fewer examples of direct transcription in their notes.

Study 2 — let’s do pretty much the same thing, but this time we’ll tell them not to transcribe.

So this time the students essentially did the same thing, but the students who got laptops were split into two groups.  One group was told to take notes as they usually do; the other was also told that studies show transcription doesn’t work, and that they shouldn’t transcribe.

In this case, the differences between the groups were less significant, but the handwritten notes group still did better.  There was no difference in the laptop groups — inserting a paragraph telling students “don’t transcribe” didn’t have an effect.

Study 3 — this time, we’ll have them study the notes again later.

Instead of TED talks, four prose paragraphs were selected and then read by a grad student from a teleprompter to simulate a lecture. The paragraphs included 2 “seductive details” — information that is interesting but not useful. Students were told they’d be tested later before they took their notes.  Again, some were given laptops and some were given pen and paper.  A week later they came back, half were given the chance to study their notes for 10 minutes, half weren’t.

The results here were more complicated.  You have to look at the interaction between notetaking medium and study time to find significant differences.  Those who took longhand notes and studied did better than any other condition. Additionally, among those who studied, verbatim notetaking and transcription negatively affected performance.

Okay, enough recap, on to my thoughts:

(For more details about the study — see the end of the post. It’s paywalled, so I’m feeling responsible for making sure you have the details the news articles don’t include)

I want to start off by saying that I don’t have a problem with this study – I think it’s useful, I think it’s interesting, and I am fairly certain I will come back to it again and use it in real life.  My issue is with the conclusions that have been drawn from it — mostly in all of those news stories, but also by most of the people who tweeted, facebooked or tumblr-ed those articles.

Reading the actual study – there’s nothing in there that says much about the medium.  Beyond the fact that most people type faster than they write, and therefore can get closer to transcription on a laptop, there’s really nothing at all.  What the study found was that if you transcribe, you don’t learn as well and, as they point out themselves, we knew that already.

See, I don’t think the takeaway is “don’t take notes with laptops.”  I think the takeaway is — we have to start teaching people how to take notes. Better yet, we have to start teaching people how to use the information they gain from lectures, videos, infographics, textbooks, readings and learning objects.

There’s definitely no way one could consider the Just Say No to Transcription intervention in Study #2 “teaching” — this study surely did not prove that people can’t take good notes with laptops, it only suggested that they don’t.

There’s nothing magical about taking notes by hand that makes people process and think and be cognitively aware of what they’re doing — if that’s all you have and you want  good notes, over time you will figure that out because you can’t write fast enough to transcribe.  But that’s not magic, it’s motivation.  It’s still a learned behavior, even if the teacher could remain blissfully unaware of that learning.

And when we learned how subject headings worked, or that we could find more sources by using the bibliography at the end of the book, or that the whole section where that one book was had interesting stuff, or that both the article title and the journal title were important — learning that stuff wasn’t the point and we might not have noticed that learning.  But we learned how to think like the people who organized and used the information because learning that was the fastest and easiest way to getting our papers done.

(Hey, do you think that when copy machines were invented, and we could just make a copy of the article instead of having to read, digest and take notes on it in the library people argued for No Copy Machines?)

Even if we take laptops out of the classroom, I don’t think that students will feel like they have to learn how to think about, digest, remix and capture their thoughts about a lecture in order to function.  I think that ship has probably sailed, that horse is out of the barn, that genie’s out of the bottle.

If a student knows they can record the lectures on their phone, or if the slidedeck and lecture notes are posted before every class, they’re not going to feel like they have to get it down or risk failure.  And if the lectures are already recorded and re-watchable in a flipped or online class — they’re not going to suddenly think they need to be flexing their best cognitive muscles because they have a pen in their hand.

I don’t hear the “put the barriers back up” when it comes to digital information from instruction librarians much anymore.  And I think it’s fair to say that I’m hearing it less from faculty too.  But I still worry when I see things like the coverage of this study — because it’s not like I disagree that things are getting lost when these barriers come down.  Skill type things, tacit knowledge type things and also habits of mind type things — the tools I had to work with as a young learner left me with a lot that still serves me well now, when I have better tools. If my students can’t learn those things the way I did – and they can’t — how will they?  I don’t think answers like “ban laptops,” or “just use a pen” are going to get them what they need.

Study details

Mueller, P.A., & Oppenheimer, D.M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science. doi:10.1177/0956797614524581

Study 1

  • Princeton students.  n=67 (33 men, 33 women & 1 other).
  • Laptops had no internet connection.
  • Students watched 3 TED talks and took notes.  No instruction on taking notes.
  • Taken to another room to provide data:
    • complete 2 distractor tasks
    • complete 1 taxing working memory task
    • answer factual/recall questions
    • answer conceptual/application questions
    • provide demographic data
  • Notes were coded and analyzed.
  • Results:
    • factual/recall = both groups the same
    • conceptual/application = laptops significantly worse
    • more notes = positive predictor
    • less verbatim notes = positive predictor

Study 2

  • UCLA students
  • Laptop groups = 1 control (take notes as you normally would), 1 intervention (“studies show that students who take notes verbatim don’t do as well on tests. Don’t do that.”)
  • Data:
    • Complete a typing test
    • Complete the Need for Cognition Scale
    • Complete Academic self-efficacy scales
    • Complete a shorter version of the reading span task
    • Complete the same dependent measures (questions) as study 1.
    • Demographic data
    • Notes were coded and analyzed
  • Longhand students did better, but not significantly.
  • None of the other measures had an effect
  • Longhand students took fewer notes than any of the laptop groups and took fewer verbatim notes.
  • Telling people not to take notes verbatim had no effect.

Study 3

  • UCLA students
  • 4 prose passages were read from a teleprompter by a grad student standing at a lectern simulating a lecture.
  • Students saw the lectures in big groups, wearing headphones
  • 2 “seductive details” — interesting, but not important information — were inserted into the prose passages.
  • Students were told they would be tested on the material before taking notes.
  • Tests were 1 week later.
  • Study group was given 10 minutes to study notes in advance of taking the tests.
  • Data:
    • 40 questions, 10 per lecture, 2 in each of five categories:  seductive details, concepts, facts, inferences, applications
    • notes were analyzed
  • No main effects of note taking medium or chance to study.
  • Significant interaction between note taking medium and chance to study.
  • Longhand notes + study = significantly better than any other condition.
  • For those who studied, verbatim negatively predicted performance.
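The medium-by-study interaction in that bullet list can be illustrated with a quick contrast over cell means. The numbers here are made up; only the pattern (longhand plus study beating every other condition) comes from the study.

```python
# Sketch of Study 3's 2x2 interaction, using hypothetical cell means
# (mean test scores). The paper reports the pattern of results, not
# these numbers. The interaction contrast asks: does the benefit of
# getting to study differ by notetaking medium?

means = {
    ("longhand", "study"):    0.70,  # hypothetical
    ("longhand", "no_study"): 0.45,
    ("laptop",   "study"):    0.50,
    ("laptop",   "no_study"): 0.48,
}

def interaction_contrast(means):
    """(study benefit for longhand) minus (study benefit for laptop)."""
    longhand_gain = means[("longhand", "study")] - means[("longhand", "no_study")]
    laptop_gain = means[("laptop", "study")] - means[("laptop", "no_study")]
    return longhand_gain - laptop_gain

print(round(interaction_contrast(means), 2))  # 0.23
```

A contrast near zero would mean studying helped both groups equally (no interaction, just main effects); a large positive value like this is what "longhand notes were worth studying, laptop notes weren't" looks like numerically.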

Images

The new way of taking lecture notes. Some rights reserved by Natalie Downe (flickr) https://www.flickr.com/photos/nataliedowne/1558297/

reading my notes. Some rights reserved by gordonr (flickr) https://www.flickr.com/photos/gordonr/430546423/

pen. Some rights reserved by Walwyn (flickr) https://www.flickr.com/photos/overton_cat/2267349191/

Copycard. Some rights reserved by reedinglessons (flickr). https://www.flickr.com/photos/reedinglessons/5909073392/

Sailboat. Some rights reserved by jordaneileenlucas (flickr) https://www.flickr.com/photos/jordanlucas/4027830675/

Before you tell me not to take notes

Don’t.

I mean it.  Please don’t. Just don’t.

You’re not encouraging me to engage with your talk; you’re not making your class more fun or easier for me.

hand writing math notes with a green stylus on a tablet computer
some rights reserved by Viking Photography (flickr)

I need to take notes, preferably by hand. These days that means with a tablet and stylus.   I use a tablet and keyboard when I forget and bring the bad stylus, and in meetings. And in some situations, I post notes on Twitter.

When you tell me not to do any or all of those things, you’re actually alienating me. You’re making me feel unwelcome. And you’re stressing me out.

(And if any part of your talk has to do with reaching all learners – you’ve lost me already)

Don’t misunderstand.  I’m not saying that everyone should take notes.  I’m not saying that anyone but me should take notes.  I’m not going to project my preferences and my learning habits on to you — I’m just asking that you don’t project yours on to me.

Here’s a secret.  My brain is a super busy place. Not always a productive or focused place. Seriously, say one interesting thing and I am off to the races. It doesn’t even have to be interesting, really. Even something that just reminds me of something that’s interesting will do.

handwritten mindmap describing faceted classification including circles squares arrows and text
some rights reserved by Jason-Morrison (flickr)

(Okay, that probably isn’t much of a secret)

And I’m not complaining about this. I spend a lot of time in my brain and most of the time, I like it there. I like to think. I get excited by ideas and connections. I get an almost visceral thrill when thoughts snap into place.

And don’t take this the wrong way, but there’s almost nothing you can do, no amount of humor or engaging activities you can build in, that will be more fun or compelling to me than thinking about what you say. The more awesome you are? The more I want to play with your ideas.

Taking notes is how I stay grounded in your thoughts. Taking notes is how I stay present. Taking notes keeps me from chasing my thoughts down those intellectual rabbit holes right now – I wrote a note, I drew a star and a circle and an arrow to the other thing, I can relax now and go back to it later.

And I know you’ve given me a handout or put up a website with all your references on it. I really appreciate it – I do! I do this too. Who wants to be scrambling to write down sources and links? I don’t, but I’m going to write down the why, and draw the circles and the arrows to show how they fit in and work for me.

(And if I ever gave you the impression I didn’t want you to take notes when I pointed out the URL for one of those resource lists – I’m sorry. That’s not what I meant!)

Man with wedding ring  scanning a handwritten notebook page into Evernote with his cell phone
some rights reserved by Evernote (flickr)

If it makes you feel better, I even take notes when I’m alone. I couldn’t start reading on my tablet until I figured out a note taking workflow.

For marginalia and highlighting, that’s PDF + stylus + Notability, if you’re interested. But there’s also my Evernote moleskine, which I use to create my holding pen notes — a writing trick I learned from Vicki Tolar Burton that I also use now for reading.

The holding pen is basically a place to put all of those questions and thoughts I don’t want to lose, but which will keep me from reading to the end of the article (or writing this paragraph or section) in the time I have if I don’t put them somewhere —

This might explain that theme we pulled out of the interviews, but I can’t remember exactly what she said. Argh, didn’t that Juarez paper I read last year deal with this trait? Hey, Laurie’d be interested in this to help turn that one project into a paper idea. Oh, maybe that term will work better in PsycINFO. OMG that’s a good example to use in class. Wait, no, I don’t think that’s what she was really arguing in that book. Ooh, that methodology might work for me with the other study.

Basically, I’ve been doing this a long time – learning in classes, in workshops, from books and texts, in lectures and presentations. I’ve had decades at this point to figure out how to make learning work for me, and while there’s always more to learn, I need you to trust me that I know what I’m doing, and to remember that for some of us, engagement looks a little different.

What? So What? Now What?

So I was at the First-Year Experience conference in San Diego a couple of weeks ago.  There were many highlights — starting with a conference that is actually in my time zone, to my excellent walking commute —

View of the Little Italy sign in San Diego, California
Walking commute from Little Italy to the conference hotel

— to the views from the conference hotel.

View towards the harbor from the Manchester Grand Hyatt in San Diego
trust me, this wasn’t even one of the best ones

Another highlight came in a late session by Catherine Sale Green and Kevin Clarke from the University 101 program at the University of South Carolina.  I wasn’t the only OSU person at this conference (far from it).  After I got back to campus, I was helping Ruth, who coordinates our FYE, with an info session for faculty thinking of applying to teach FYS next year and she started to say “what? so what….” and I finished with “now what” – because while it was a content-rich session, that short phrase was probably the most memorable part of it.

What?

It’s a guide to help students with reflective writing. Three simple questions to answer.

So what?

It probably won’t shock anyone to know that I find reflective writing pretty easy. It’s a reason this blog exists, and definitely a reason for the tagline. While the actual writing of some reflective documents (teaching philosophies, anyone?) kills me as dead as anyone, the how and the why of reflective writing has never been difficult for me.

Honestly, when I realized that it doesn’t come easily for everyone (or even for most people) I started to feel more than a little narcissistic.  I realized that pretty quickly once I started teaching — I’d assign the kinds of reflective writing prompts I used to see in classes, and I’d get back papers where the students really struggled with trying to figure out the right answers, or what I wanted to hear, but that lacked any real reflection of their own thinking.  The problem is, when you’ve never had to (ahem) reflect on how to do something or why to do it — it’s super hard to figure out how to help people who are struggling.

What I like about these three questions is how they start with something relatively simple — description is usually straightforward — what happened, what did you do, what did you notice, what did you learn, and so forth.  But they don’t let students end there.  They push to more complex analysis — why does that thing matter?  And then they push beyond that to something equally challenging (what does it mean for you) that, if students do it successfully, will also demonstrate the value of reflection or metathinking itself.

Now what?


Well, here's the thing – I will undoubtedly teach credit courses again, and when I do I will undoubtedly assign reflective writing.  So this is going to help me in its intended context, I have no doubt.

But I also think this is a fantastic way to think about the process of analyzing and evaluating information.  We all know I don't like checklists when it comes to teaching evaluation.  Truthfully, I'll argue against any tool that tries to make a complex thing like evaluation simple (seriously – it's at the top of some versions of Bloom's! The top!)

And I’ll argue against any tool or trick that suggests you can evaluate all types of information the same way without context and without… yes… reflection, on your own needs, your own message, and your own rhetorical situation.  That’s my problem with checklists.  At best, they are useful tools to help you describe a thing.

An example — the checklist asks, “who’s the author?”  The student answers – William Ripple.  That’s descriptive, nothing more.  But think about it with all three questions.


What?  The author of this article is William Ripple.

So what? Pushed to answer this question – the student will have to do some additional research.  They will find that William Ripple is on the faculty of OSU’s College of Forestry, and the director of the Trophic Cascades program.  He has conducted original research and authored or co-authored dozens of articles examining the role of large predators in ecological communities.

Now what? This question pushes the student to consider their own needs — what they’re trying to say, who they’re trying to convince and what type of evidence that audience will find convincing.

Now, move away from that fairly obvious checklist item and let’s consider a more complicated one, bias.

I’ve linked here before to this old but still excellent post explaining why identifying bias is not evaluation.  And yet, we all know that this is still where a lot of students are in their analysis — they want facts, bias is a reason to reject a source. But bias is no different than author – identifying it, being able to describe it, that’s not evaluation.

What?  I actually think this one could be a step forward in itself — instead of just saying a source is biased, a good answer will specify what that bias is, and what the evidence for it is.

So what? This could push a student to consider how that bias affects the message/argument/validity of the piece.

Now what? And this is the real benefit — what does this mean for me? How does this bias affect my use of the source, how will my audience read it, how might it help or hinder me as I communicate my message?

Now, of course, a student could answer the questions “this source is biased, that matters because I need facts, so I will throw it out and look for something that says what I already believe.”  That could still happen.  And probably will sometimes.  But I like the idea of teaching evaluation as a reflective process, grounded in a rigorous description and examination of a source.

all mistakes are not created equal

I try my best to keep up with Inside Higher Ed bloggers, but I don’t always succeed.  Monday’s post from the Community College Dean jumped out at me (probably because of the title – The Ballad of the Red Pen) and then once it had jumped out at me, it got me thinking.

red pen lying on a page of black-and-white text
some rights reserved by Cellar Door Films (flickr)

So the post isn’t really about using the red pen so much as not using it.

(BTW, the only thing I clearly remember from the award-winning one week of training I got before heading into the classroom as a graduate Teaching Assistant was this advice: Never Use a Red Pen.

The argument was that the red pen had become so stigmatized that just the sight of red ink could send students into panic mode.  To this day, I use something else)

Anyway, at the heart of this post (according to me) lies the concept of “stretch errors.”  These are those errors that happen when someone is trying to grow and develop — when they’re trying new things.  The suggestion is that one should be “thoughtful” about using the red pen too much when the errors you see fall into that category – too much discouragement to a student taking a risk and trying something new = problems.

This got me thinking about information literacy and research instruction and what I was saying in the Good Library Assignments posts.  If a big part of what we’re doing with college level research instruction is helping students grow, try new things, expand their repertoire — then we must be seeing “stretch errors,” right?  I mean, unless we’re totally failing.

But I'm a little stuck on what those would look like in the research context.  I have a whole stack of metathinking research narratives that I'm using for another project, and I'm thinking I might go through them to see if anything comes to me.

(Please share if something came to you!)

As a starting point, it would probably be useful to think about where they're likely to stretch.  Choosing sources has to be one of those areas.  It's one of the areas where we're really pushing students to expand their toolbox, to try something new. There must be situations where students are trying to choose something scholarly, complex, expert and failing — but failing in a stretch-error way, because they are trying something new.

Citing sources correctly is definitely something new, something they've not done before, but it's hard for me to think of the formatting aspect of this as leading to stretch errors.  The question of when and where to cite, though — the question of paraphrasing and summarizing and using sources in ways other than Quote Then Cite — there, yes, I think we may be seeing some.

colorful patchwork sewn in a crazy quilt pattern
some rights reserved by marylouisemain (flickr)

In fact, the very first thing that came to mind when reading this post was the Citation Project and its discussion of patchwriting.

Patchwriting kind of blew me away when I first read about it because it was one of those concepts that explained so much.

WordPress tells me I have cited TCP a LOT, so I probably don’t need to say, but patchwriting is a kind of almost-plagiarism — defined as “restating a phrase, clause, or one or more sentences while staying close to the language or syntax of the source.” 

The piece that really grabbed me when I first read about patchwriting in what is (I think) the first Citation Project paper was the idea that this happens when students are trying to do the right thing.  That they’re looking at the examples of academic writing we’re making them use – peer reviewed articles — and trying to mimic what they see.  They don’t have the domain knowledge, the vocabulary, or the experience yet to write this way for themselves, so they end up veering too close to their original sources in an attempt to mimic that genre of writing.  That just made so much sense to me, and now seems like a classic example of a stretch error.

Now, to find some more.

Good library assignments, part final

So we left off with the idea that research is scary and difficult, that it’s much easier to follow a familiar path than to try something new. I think the last two truisms really get at the place where all three of those factors that students need to be research-brave converge: affect, skills and practicalities.

Students won’t automatically understand the connections between research assignments and course outcomes.

Part of this, I think, is because many students don't come to college with the idea that research is a learning process – in their experience, it's been more of a stringing-together-quotes process. But to really get the learning-process idea, I think, you have to think about knowledge as something that is constructed, not discovered, and you also have to think you have the capacity to construct it yourself. That's a pretty advanced way of thinking about knowledge — it's where we want them to get as they become information literate.

A lot of courses have objectives that fall into the “learn about X” category — if you think that “learning” means “find out the truth from an authority,” then it can be hard to see a research paper as a part of that. But even with smaller concepts – a lot of what we require for academic research writing can seem to be more of a hoop you jump through within the boundaries of a class, not something you’ll carry forward out of the academic environment.

Here’s an example. I do a guest bit in a class for beginner engineers every year (and every year I panic about it because I am not an engineer and every year it turns out to be delightful — you’d think I’d learn). This year, though, I had some legit reasons to panic because the faculty member asked me to spend 10 minutes or so teaching them about citations and plagiarism.

(She didn't put that time limit on it — that was just the extra time I had compared to last year — and she also didn't mind when I spent more time on it. This isn't a war story — just a note about where my head was.)

So anyway, I had just read Project Information Literacy's great report on the First Year Out data — explaining how new graduates face information problems in the workplace. I was very struck by their finding that a lot of new employees know they were hired with an expectation that people their age are good at technology, and that they therefore feel they should be doing things quickly and online.

So to do this plagiarism thing, I broke the students into groups of 3 and had them do a think-trio-share thing. I told them to imagine that they were in an internship at a company they really wanted to work for. They’d just been given their first task — something like researching a new scheduling software tool for the team to use — and they were going to be expected to write a report in a week with a recommendation.

I asked them if they agreed with my assumption that their new boss would draw some conclusions about them from the results of this – the first major project they delivered — they agreed. So then, I asked them to think about how they’d like their new boss to describe them, based on their work on this project. I told them each to come up with 5 adjectives. And then in groups I asked them to come to consensus on 3 that they thought were really important. Then I asked them to do it again – but this time think of what they would like their new boss to know about their process – about how they approach a task. Then they came up and wrote their words on the board – if someone else had the same one, they wrote over it. Kind of a low-tech tag cloud.

Unfortunately, I am disorganized and did not take a photo. But the words were pretty great – a combination of: articulate, decisive, open-minded, out-of-the-box thinker, creative, comprehensive, critical, concise, thorough, efficient, resourceful, smart, intelligent and so on.

(“technology savvy” and “fast on the Internet” did not come up – which I do not think undercuts PIL’s finding at all — I think in the safe confines of the classroom, they didn’t think those things mattered – which is not the same thing at all as being in a job where you know you’re expected to be a technological whiz-kid)

So then we talked about how the sources they chose to consult would/could communicate these things about them as an employee, and about their work process. I said that’s a major reason we cite – to present a particular picture of ourselves. And then we shifted into a conversation about what types of sources would help them do this for the assignment they had in that class.

So how does this connect to anything? Well, one of the major outcomes of this particular class is that students will develop basic skills they need to work as a professional in the field of environmental engineering. Now, think about the plagiarism thing. The professor wasn't asking me to talk about that as it connected to that outcome. Her main focus was good citations in her class projects, right? And there's nothing wrong with that. But taught that way, citations (and implicitly, the sources you choose) become just another hoop you have to navigate in school projects – one that is totally disconnected from anything that might extend beyond.

A lot of our courses have an explicit connection to beyond — they’re intended to teach people to think and communicate like an historian, a rangeland ecologist, a soil scientist, an environmental engineer, and so on. And in libraries we think (I believe) that most of what we have to teach should support our students in what they do in the classroom and beyond. So, lay those connections bare, is what I’m saying.

(I was talking about this activity in a workshop for faculty in another context and one small group started talking about how they could take this premise for talking about citations and build on it – how they could bring in examples of professional writing that students could analyze to see what types of sources are used in the field – or to include that concept in questions to guest speakers.)

Research freedom isn’t all it’s cracked up to be.

One of our learning technology people told me years and years ago when we were chatting about teaching that he believes we shouldn’t force students to make too many choices to be successful — that if you want to give them freedom to choose a topic, then you should provide a lot of structure in terms of form – and so on. That’s kind of like a rule, but it has stuck with me.

See, I'm pretty good at interpreting assignments – actually, I'm pretty great at it. I didn't stress out much when it came to predicting what teachers were really looking for, what would make them happy — I knew what they wanted to see. I actually enjoyed the unstructured "I can't wait to see what you all come up with" types of assignments. But I realized in library school that I'm way in the minority there – that for others, these free-for-alls are incredibly stressful.

Here's the thing – a lot of people who go into academia are pretty good at school. And a huge part of being good at school is knowing what's really being asked for. I am guessing that a lot of professors probably loved getting to play with ideas and sources and concepts when they were students, and were good at it. And then we become professors and we want to design the exciting, enriching assignments we would have wanted as students. But in many cases we weren't typical students – what we wanted wasn't what everyone else wanted or needed.

I read an article years ago about the writing classroom where the teacher (I think she was a middle school teacher) asked the class to re-write a short story they’d just read from a different character’s perspective. I am pretty sure that I would have adored this assignment in the sixth grade — that’s just how my brain works. But the class pretty much crashed and burned. Instead of giving up on the assignment, or on them, she broke it down into a series of smaller exercises that helped the students re-frame the story, empathize with different characters and – and this is important – develop the confidence to create something themselves that was going to stand alongside (in their minds) the original story by a “real author.”

It is important to remember what a huge step it is to feel confident enough to say “no one else seems to be interpreting these facts this way, but this is what makes sense to me and I’m confident in my analysis and evidence.” Talk about unpacking – that’s a career’s worth of information literacy development embedded in that one sentence. And this brings us back to where we ended yesterday — that a huge part of what we do is give students the courage to take risks. Is it a good idea to ask them to do that in every stage of a multilayered project?

One concrete place where I really think this all comes together is the topic selection phase — a place where many students don't get much guidance — and a place where many research projects fail. Not only do the affective dimensions loom really large at this stage, but topic selection is also a skill (that requires domain knowledge). And at the same time, there's a hefty dose of practicality in play — you're going to be judged by someone else, and that means figuring out their rules.

For this, I’m going to turn to Project Information Literacy again – their 2010 paper on how students use information in the digital age has a great section on barriers students face and for many of those students (like, easily most) the biggest barrier is “getting started.” The finding here is that students approach topic selection extremely aware of the fact that they are navigating a host of unstated expectations on the part of their teacher — not just in terms of “that’s interesting” (or not) but from a much deeper and more complex level — “that’s a topic that will (or won’t) let you do the kind of analysis and use the kinds of sources I expect to see here.” It says they think of this as a gamble:

Instead, for many students we interviewed, course-related research was difficult because it was more akin to gambling than completing college-level work. Yes, gambling. The beginning of research is when the first bets were placed. Choosing a topic is fraught with risk for many students. As one student acknowledged in interviews: either a topic worked well or it failed when it was too late to change it.

In the last couple of terms a colleague and I have been experimenting with the information literacy models in our FYC class to see if we can't improve them. We started out looking at delivery platforms, but something we saw during our assessment that term led us down the rabbit hole of curiosity and getting started. So this last term, we took five sections and built in a set of activities where they browsed for topics. Their course instructors sent them to ScienceDaily, and then led them through a process of topic selection. I wouldn't say this was an unqualified success — there are things we want to tweak — but successful it definitely was. But one of the most striking things about the process was actually the conversations we had with the instructors beforehand, where they confirmed, from their experience, that yes – topic selection is super scary and stressful for students, and for some, it's a barrier they can't overcome.


I think activities and assignments that focus entirely on that crucial first step — what kinds of questions do people ask in this field – would be fantastic. But if you want to do a more fully-fledged research project in a class, then building in activities that provide structure, feedback and hopefully spark interest during the topic-selection stage are crucial. Browsing is a great way to get started with this — structured, guided, useful browsing that will expose students to sources and ideas they haven’t seen before. This is a map that some colleagues and I created for a workshop – we wanted a visual that would help students start to understand the scope and extent of research happening on our campus. We started the workshop with a browsing activity – and I think a lot of students would have stayed there the whole time if we’d let them.

Conclusion

I wouldn't say I have any strong, definitive conclusions here — the closest thing to a big-C Conclusion is, I think, the idea that helping students take risks is what we need to do — and that our assignments should be authentic enough to make them take those cognitive or affective risks, but structured enough to give them what they need to be successful in their risk-taking.

But the workshop this was in service of happened, and the conversations were great. And I just checked back on my three strains of thought and while they may not have fully cohered — they’re all here in some way. So I’m calling this a win. Thanks for coming along with me.

Good Library Assignments, part 2

So if bad assignments are not better than nothing – what makes them good? Not what are the rules of good assignments, because I'm tired of rules, but yes, there are some principles, or maxims, or truisms that come to mind.

I bet these aren’t all of them either, but they are the ones I’ve synthesized from my thinking:

  1. Saying “use the library” doesn’t make the library useful.
  2. The best way to encourage students to use a research tool or collection is to design a task that is legitimately easier when one uses that tool.
  3. The library is not a shortcut. People who use the library can’t end-run thinking or evaluating.
  4. Requiring something is not the same as teaching it.
  5. Students won’t automatically understand the connections between research assignments and course outcomes.
  6. Research freedom isn’t all it’s cracked up to be.

Right now, I'm thinking about the first four. In fact, I would say the things on this list are a little bit apples and oranges. The first two are obviously coming from those assignments that throw in a "use the library" requirement, or a "use peer reviewed sources" requirement, or a "you must use print journal articles" requirement, or even a "you must use ERIC" requirement.

(Though that print articles thing is getting a little long in the tooth. I know, I know, it still happens but not like it used to)

The next two are getting at some reasons why I think that faculty add those requirements.

So let’s dig in a little more and think about how these themes mesh with what we know about how students use information, go about research, and approach assignments.

Saying “use the library” doesn’t make the library useful.

The best way to encourage students to use a research tool or collection is to design a task that is legitimately easier when one uses that tool.

As I said, these are mostly about requirements within assignments, and I think the more interesting place to examine them is in the reasons why. But I also think they cover those "I just want them to go to the library and touch the books" assignments. And here's the thing – those assignments don't work either.

A couple of years ago, I spent a lot of time reading about library anxiety, a topic that I find resonates well with faculty audiences. At least a little of this is because of the Library Anxiety Scale – because that scale has been tested, validated, and used in many circumstances, when I say "we know" it gives me a familiar type of expertise — we know, because in my field, we have done this research.

The two features of library anxiety that I tend to emphasize are these:

  1. It’s situational – like white coat hypertension – it only kicks in in certain situations. And those situations? When students actually need to use the library to complete a task or solve a problem. On my campus, everyone studies in the library (no, not really, but we’re packed most of the time). But the way library anxiety works means that a student could come to the library every single night, could have “their” own chair, or carrel or study room and still, as soon as they actually had to use the library to write a research paper, destructive anxiety could kick in.
  2. It’s characterized by a sense of “I should know this” – accompanied by a sense of “everyone else does know this.”

Given these realities, it’s pretty easy to see why an assignment that is designed to get students into the library to touch the resources isn’t going to help. And if it’s an ill-designed assignment, where they’re not going to find the thing they need to touch – then it’s going to do damage.

And even if we have the stuff, if the assignment is written in such a way that it assumes students have had experiences with information that they have not had (reading paper newspapers), or that they know things they don’t know (research is published in things called journals) — it will make things worse.

When students already think “everyone else knows this but me” then an unfamiliar term like “peer review” or “LC” will send them over the edge. Barbara Fister’s recent post on Inside Higher Ed gets at this point in a much more practical and detailed way.

Feelings matter. In particular, how we feel about our ability to solve problems — our confidence — matters.

The library is not a shortcut. People who use the library can’t end-run thinking or evaluating.

I was working on a book chapter earlier this year – a textbook chapter for composition students. And one of the things that the editor and I had a lot of back and forth about was just this. She was bringing me information from the composition faculty who had reviewed the book about how they wanted this to be simpler, or that to be simpler.

And I would say back, yes I know that they would like X, where X = whatever shortcut we were talking about here: evaluation checklists, peer-reviewed journals ticky boxes, callout boxes explaining why library databases were better — I get these requests too.

I get why people want shortcuts. I really do. Especially in composition, where the topics come from across several disciplines and you're dealing with a whole bunch of discourses that you have no particular experience with — teaching how to find, recognize, use and choose information sources is really hard. I get why they don't want to fall down the rabbit holes I fall into when I try to teach "what is peer review and why should you care" quickly and efficiently. But still, at the end of the day, suggesting that there are shortcuts around thinking, evaluating and choosing doesn't do students any favors.

I have a few short slideshows I use when I want to "show" people how difficult it is to navigate our information landscape as a student.

  • One shows the first page of four different articles. I lead off this one with the question: "which articles were peer-reviewed?"
  • One shows five screenshots of newspaper websites. For this one, the question is "what type of source is this?"

Both of those exercises are designed to illustrate how much we (faculty) already know about information and publishing and how we use that knowledge to make these calls — we’re bringing tacit knowledge to the table that many of our students don’t have.

The last one is a little different. It pulls out a set of sources easily found in library databases — it includes a partisan blog, a news aggregator, a newsletter, a small newspaper and some others. This one is designed to illustrate the no-shortcuts piece.

When I hear faculty complain that "my students just went to Google," I actually wonder how often their students went straight to the library databases they were told to use. Given that they can easily find Google-like sources using Summon (and Lexis-Nexis, and Academic Search Premier, and so on), it has to be that some of these maligned students actually did use the library. The issue isn't that they went to Google instead of the library – the issue is that they didn't know what to do with what they found – and that's an issue in both contexts.

Requiring something isn’t the same as teaching it

It would be great if we could just require what we wanted and know that students would be able to go out and figure out what we meant, what we wanted, how to deliver it — and find the whole process enriching and interesting enough to carry into the future. We all know that's not realistic.

When it comes to research, though, what needs to be taught, and how much time and effort it takes to teach it, can come as a surprise. I've linked this old post from Dr. Crazy's excellent blog here more than once – but I think it does such a great job of communicating just how deep the rabbit holes go when you start teaching students about research and information. There are so many unwritten rules that define good practice in academic communication, and so many things we can easily assume are common knowledge — once you start unpacking those things for students, though, you can quickly find yourself lost in a web of "but to understand that, you need to know this." A full day just to teach MLA style? Yeah, that sounds about right.

Library anxiety is one reason why there's a problem when we don't unpack the requirements in our assignments, but it's not the only one. This one looms especially large in those "bad assignments" that are characterized by mismatches — between the requirements and the students' ability levels, or between the requirements and the point of the assignments themselves.

I've talked about student development before, at length, and I won't do so here, but tl;dr – students don't come to college thinking about knowledge and knowledge creation the same way their teachers do. They're not supposed to – they're supposed to develop that way while they're here. So when we require sources that have one set of epistemological assumptions embedded within them (like peer-reviewed articles) and we don't unpack those assumptions, then students will try to fit the new sources into their current way(s) of knowing. When the sources don't fit (as they inherently won't), then they think the sources are just a series of hoops they have to navigate to make teachers happy.

If you, like me, think there’s value in the work scholars do, this should be worrying.

The thing is, unpacking those assumptions is a huge job — let’s look at the “you must use a peer reviewed article” requirement. This rabbit hole will take you almost all the way to China. To really understand and use these articles you need to know:

  • Scholars do research. Not “research paper” research but other types of original research.
  • Scholars frequently write articles about individual studies, which examine specific things – not every dimension of a topic.
  • Research is usually (but not always) reported in things called journals.
  • Scholars argue, but in a particular way. They aren’t necessarily trying to win (and end) a conversation when they argue — there’s always another question and that’s not a flaw.
  • The same scholars who write the articles in journals also review other people’s articles for quality.
  • When scholars review for quality they don’t repeat the experiment to see if it’s true.
  • Scholars continue examining and evaluating the quality of an article after it's published.
  • Scholars belong to professional communities called disciplines.
  • Disciplines develop rules or best practices about conducting and reporting on research. They're not all the same.

That's a huge amount to unpack, and you can't really expect students to "get it" if you just mention it once (even if you do so at length).  And it doesn't even get at the fact that most students don't have the domain knowledge to read these articles critically.

So a huge part of "good library assignments" is figuring out what you, as the teacher, actually have the capacity to support. Can you devote a full day to teaching MLA citations? Can you spend a week on scholarly knowledge creation?

And there’s still another level to “teaching it” that’s equally important, and just as labor-intensive: feedback. Students need feedback on the choices they make when it comes to information sources and their research process. And they need the opportunity to apply that feedback and try again. Some colleagues and I did a small research-process study last summer (soon to be published in portal, if you’re interested) and our students reported that they rarely get feedback on the sources they choose. And this finding wasn’t a surprise.

Students know how to do school. It’s not hard for them to figure out what really matters — when teachers don’t invest time on the front end explaining a requirement, and don’t give meaningful feedback on the result – they’re quickly going to realize that they don’t need to put any real effort into meeting that requirement. That’s why we hear “as long as you put the web sources fourth or fifth in the bibliography, and the EBSCO sources on top you’ll be fine.”

It’s almost like teachers and students have silently agreed that library databases are going to be shorthand for quality. As long as students go through the motions of using them, then we’ll consider that requirement checked off and focus on other things.

But it doesn’t help them when they actually need information to solve problems or make decisions, and it doesn’t do us any good if they ultimately decide the work that scholars do and that librarians preserve, repackage and make useful is useless.

I was talking to a faculty member who teaches a class for first-years called science myth busters, and he told me about an approach he uses that I think has a lot of potential across a lot of disciplines. He spends a full day teaching about the concepts of correlation and causation before he has students read research articles (and news reports about research). Then, when they read the articles, they analyze them — just on that concept. They consider how the news reporters understand it, and how the scholars talk about it.

What I love about this is that it gives the students a structure they can use to start to approach these sources like someone engaged in knowledge creation would — it gives them language they can use, and a concrete task to complete. It’s manageable for the instructor, and it’s meaningful for the student. And many fields or areas of study have key concepts that could be used in a similar way.

See, Project Information Literacy (and about a million other studies) tell us that students tend to stick with what they know. Once they have a research-process hammer, then they’ll try and turn every research problem into a nail. They’ll stick with the same type of sources, with the same research tool, with the same processes and methods. They port them from high school and will only adapt them as they need to.

I think a huge part of what we (the big we – the higher ed we) are about is getting them to expand beyond what they’ve done before – to consider different types of evidence and more complex processes, and to build a bigger toolbox. But trying something new is scary. Feelings matter – and we have to create an environment that makes them feel they can do it. Skills matter – we have to give them the tools to do it. And practicalities matter – it has to be worth their while to do it too.

There will be one more part – hopefully tomorrow — but I’m heading out for some Oregon Shakespeare Festival in a few hours so it might be Monday.

Good library assignments, part 1

I’m putting together a workshop tomorrow for teaching librarians about good research assignments — so I went looking to see what else has been written on the topic. I found lots of good stuff (I’ll talk about that later) but mostly what I found were rules — do’s and don’ts — embedded into pages about “when to ask for library instruction.”

(I bet you can predict what the rules are).

But here’s the thing – I break the rules all the time. In the last five years I have:

  • Taught classes without the faculty member present!
  • Said, “okay, sure!” when I was asked for a scavenger hunt activity.
  • Scheduled workshops for classes that don’t have research assignments, and which aren’t going to have research assignments.
  • And in one memorable case – integrated a scavenger hunt into a workshop for a class that was in the library without their instructor, that was a third again too big for every student to have a hands-on computer AND that didn’t have any kind of research assignment.

I mean, I don’t break rules for the thrill of breaking rules. And it’s not like we have anything so structured as “rules” here anyway. But I know them, just like we all know them, which means that even though I had good reasons for doing all of those things, I felt I had to figure those reasons out and justify those choices.

But I realized this morning that … I’m tired of rules. Or, maybe it’s more that rules make me tired. The effort to control and regulate a bunch of external conditions to make the one-shot — which has a bunch of moving parts that are uncontrollable — work is really tiring.

(And the rules have a nasty little unstated flip side — the one that says if all of the rules are followed, then the only reason why the one-shot isn’t awesome is librarian failure. That exhausts me even more.)

So in thinking about “good library assignments” the last thing I feel like doing is coming up with more rules. That’s right, not even “no scavenger hunts.”

I’m trying to pull together 3 pieces of interconnected thinking here. I don’t think I’ll talk about them all today – but I am hoping they’ll cohere if I talk about them. Here they are:

War stories: Thinking over “bad library assignments” I have seen – what are the broader categories?

  1. Assignments that require students to use, locate or manipulate a thing that my library does not have.
  2. Assignments that require students to do a thing in an outdated or inefficient way.
  3. Assignments with no immediate payoff – that serve only an unknown future need.
  4. Mis-matches — between assignment requirements and students’ cognitive development.
  5. Mis-matches — between the assignment requirements and the audience/rhetorical purpose of the assignment.

Truisms: What are some things that are usually true (from my experience) about research assignments and teaching research?

  1. Saying “use the library” doesn’t make the library useful.
  2. The best way to encourage students to use a research tool or collection is to design a task that is legitimately easier when one uses that tool.
  3. The library is not a shortcut. People who use the library can’t end-run thinking or evaluating.
  4. Requiring something is not the same as teaching it.
  5. Students won’t automatically understand the connections between research assignments and course outcomes.
  6. Research freedom isn’t all it’s cracked up to be.

Expertise: What do we know about how students interact with research assignments that many others on campus do not?

  1. Library anxiety is real, has cognitive consequences, and can’t be fixed by requiring students to enter the building or touch the books.
  2. There are a lot of terrible sources available in library databases and on library shelves.
  3. Students will stick with what they know.
  4. Topic selection is difficult and stressful, and can be a barrier to student success on research assignments.
  5. Sometimes, it’s trying to do the right thing that leads students to do the wrong thing.
  6. Teachers and librarians have had experiences with (and built up a body of knowledge about) research and information that their students have not.

I’m going to dig into this more tomorrow, I think, but for now – what do these things have to do with the rules above?

The faculty member present thing – probably nothing.  I agree that an active, involved faculty member makes my sessions better.  But I also have a lot of faculty at this point I’ve been working with for a long time — if someone I’ve assignment-designed with, taught with and published with needs to go to a conference the same week that her students need the library, I’m going to say yes.

But the rest – the rest do relate.  Because basically, I don’t think that a thrown-together research assignment, a mediocre research assignment, or a research assignment that’s separate from the class and will never be talked about again is going to make my session better.

And when we’re thinking beyond my individual session — then, a bad research assignment is going to make things worse.  So at that point, I have a couple of options – do the session without one (which I’ve done) or say, “no thanks, not this term” (which I’ve also done).

Why do I think they make things worse?  Because there are implicit messages buried in each of those “bad assignment” characteristics — let’s revisit?

Assignments that require students to use, locate or manipulate a thing to be successful — and my library does not have that thing (or enough of that thing).

Subtext:  Libraries don’t have what you need.  And perhaps even worse – librarians don’t know what you need and cannot help you.

Assignments that require students to do a thing in an outdated or inefficient way.

Subtext: People who use libraries do so because they don’t know the best way to do things.

Or, as a colleague and I used to say “let’s teach them – whatever you do, DON’T use library resources!”  This actually came from an assignment that never happened.  We wanted students to get an overview of the topic before going to scholarly sources (as you do) and we thought we might be able to embed a discussion about the differences between traditional encyclopedias and Wikipedia in the unit (yeah, yeah, it was 2005.  It was how we thought then).

We opened up our online Encyclopedia Britannica, took a stack of student research logs, and started plugging in the words and phrases that they’d used in their initial searches.  And OMG were the results ever terrible.  We compared twenty-five student searches (because rigor) but we knew after five that we were never going to send people to the Britannica because we’d be sending the implicit message – “whatever you do, DON’T use library resources.”

Assignments with no immediate payoff – that serve only an unknown future need.

Subtext: 

Mis-matches — between assignment requirements and students’ cognitive development.
Mis-matches — between the assignment requirements and the audience/rhetorical purpose of the assignment.

These are two different things, but the subtext I’m worried about is the same:  You have to use these sources, processes, and tools here in school, but once you graduate you’ll never use them again.

So what did I miss?  Plus, more to come.