Behind the Paywall: Grading, Bias and Class Participation

This article was recommended all over my twitter the other day, and the topic looks pretty interesting.  So let’s launch a new (hopefully) regular feature, Behind the Paywall. 

Citation

Alanna Gillis. “Reconceptualizing Participation Grading as Skill Building.” Teaching Sociology, 47:1, 10-21. January 2019.  DOI: 10.1177/0092055X18798006

The Access Experience

Paywall: Sage, accessed at my library.

I had a lot of trouble loading the PDF, which I blamed on my local wifi for a while. Seriously, for a college town, we have terrible wifi options in Corvallis.  But when the same thing happened a few days later, and everything else around it loaded okay?  I think it was clearly a problem with Sage.

TL;DR

Untangling everything that is wrong with how we measure and reward class participation would take forever. Not only do our dominant methods rely on instructors to be free from bias and have perfect recall, but they rest on assumptions about students’ willingness, ability and preparedness to participate in class that are deeply problematic. By continuing to reward participation in these ways, teachers — even when they do not want to — are replicating and reinforcing inequalities. Reframing class participation as a skill-building opportunity and building in robust opportunities for students to reflect on their performance is a better way to go.

Here we go…

So we start off by situating this paper within the context of teaching and learning in the classroom. We know that students who are engaged and participating in class learn more.  Knowing this, professors have an interest in motivating students to participate, so many of them grade class participation.

I am liking this problem statement in its recognition that the target audience has spent many years in school, knows that participation grading is a thing, and doesn’t need eight different citations showing that to be true.  The author goes on to say, yes, I haven’t done any systematic inquiry to nail down objective participation grading themes, but I also don’t have to pretend we don’t all know what we know.  And based on living in the world as both students and teachers, we know that there are two basic ways that participation grading works:

  • Teacher gives grades based on recall. A few times a term (or once at the end) they remember how many times each student talked, and assign a grade based on that.  
  • Teacher gives grades based on actually counting how many times students talk during the term. More complex applications of this method might count specific types of participation (asking questions, answering questions, etc.).

There are issues with both of these methods. Teachers do not have perfect recall. Teachers are human and subject to bias in all the ways humans are biased. And, finally, more talking does not necessarily mean more learning.

OH. I think this next bit though is why this paper is getting so much love.  It’s because it goes down to the next level and points out that the deeper problem with all of this participation grading is that these methods of motivating class participation are built on several problematic assumptions: that all students are equally prepared to speak in class; that students all understand class participation in the same way; that students have all been rewarded (or not) for classroom behaviors in the same way; that all students are bringing the same skill set to the classroom.

There’s truly no reason to believe that those things are true. And there are a lot of good reasons to believe that they are not.

SO.

Gillis has three intersecting goals in this paper:

  1. Unpack the assumptions behind participation grading as it happens most frequently now.
  2. Re-frame participation grading as an opportunity for skill development, and re-focus it on more meaningful goals.
  3. Show the evidence that says this new framework is worth implementing in real classrooms.

Literature Review

Let’s unpack some assumptions.

We know student evals of teachers are super biased.  We acknowledge and understand that that bias works in both directions:  students’ biases affect their evaluations of teachers AND teachers’ biases affect their evaluations  of students.  However, when it comes to participation grading, we have a tendency to acknowledge that bias as a reality without really understanding or unpacking its dynamics.

(I’m going to summarize the lit review pretty significantly, and link to some key sources)

The research documents general biases that affect student evaluation: teachers tend to reward students they like,  squishy factors like attitude affect evaluations, and factors like race, gender, ability, and socioeconomic class definitely affect assessment in many ways.

We also know that we work in a world where teachers don’t always remember their students’ names, so systems that rely on accurate recall are inherently suspect.  But the issues with memory go beyond this.  Teachers are more likely to remember extreme situations (outbursts, falling asleep in class) than mundane normalcy. Teachers tend to remember giving students more chances to participate than students remember getting.  

There is also a ton of research that challenges the idea that all students are equally ready and willing to participate in class.  There are  a ton of things going into how students are socialized to understand their role in the classroom, or what appropriate interactions with teachers look like.  Some come from outside school — parents’ messages to children are shaped by their own experiences with school or authority structures, for example. Some come from the lived experience of being in school. Students bring very different experiences with consequences and rewards when it comes to asking questions, offering opinions, sharing stories, suggesting counternarratives, and classroom behavior.  And all of these dynamics — inside the school and out — are shaped by factors (including race, gender, class, ability, language and more) that create and reinforce inequality, and which also need to be analyzed and understood intersectionally. 

Then, we have one of the most pervasive dynamics in the teaching literature, at least in the literature that focuses on motivation and learning.  There is a lot of work in this area coming from psychology, using lenses that focus inquiry on personality. This shapes the discourse and produces research (and policy) that frames behaviors like class participation as the result of hardwired personality traits — shyness, introversion or extraversion — and not as behaviors that are built out of skills that can be learned. 


New Framework, Different from the Old Framework

“Instead, I propose that instructors conceptualize participation grades in undergraduate classrooms as opportunities to incentivize and reward skill building” (13).

This framework:

  • Conceptualizes participation as a set of learnable, interconnected skills
  • Recognizes and rewards skills that students already believe reflect their engagement in class (peer editing, prepping with classmates in study groups, active listening, coming to office hours) but which are usually not captured by “class participation” grades.
  • Encourages students to work on different skills simultaneously, and to start to understand these skills as interconnected.

Application (Methods)

This is how the author applied the framework in class.

  • 2 sociology classes. 1 400-level and 1 100-level.
  • 45 students per class.
  • Class participation is broken into 5 dimensions:
    • Attendance and tardiness
    • Preparation for each class meeting
    • Participation in small group discussions
    • Participation in full class discussions
    • Participation in other ways (office hours, writing center visits, study groups, and more)
  • Evaluation is conducted using a “self-reporting goal-centered approach.”

Start of the term:

  • Students use a 5-point Likert scale to self-rate along each of the 5 dimensions: how well do you usually do with this behavior in classes like this?  They also write 1 sentence justifying their numerical rating.
  • Students identify 3 concrete, measurable goals for themselves during the term and write out a plan to achieve these goals.
  • Teacher reads and gives feedback on the goals and plan.

During the term:

  • Periodic, informal check-ins.
  • At least one formal self-reflection.  Students re-rate themselves along each dimension and submit a reflection justifying their rating, reporting on progress towards goals, and adjusting goals/plan as needed.
  • Instructor gives feedback on goals and plan, and if there is a disconnect between the students’ self-rating and the instructor’s perception, meets to calibrate this.

End of the term: 

  • Student submits a self-report that is similar to the mid-term report, but in which they assign themselves a participation grade and justify it.
  • Instructor reviews the reflective material from throughout the term, and the students’ progress towards goals, assigns a grade, and explains it with written feedback.

The instructor reports that there was rarely a disconnect between the students’ self-reported grades and the instructor’s perception.

(Note, I would expect from my experience that there would be a group of students who would grade themselves too harshly, describing similar activities and evidence to other students but assigning themselves a lower grade than I would, or than those other students would. I wonder if that happened.  It would be pretty easy to re-calibrate at midterm).
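That midterm re-calibration would be easy to systematize. Here is a minimal sketch of the mismatch check in Python, with entirely invented names, ratings, and threshold (none of this comes from the paper); note that it flags divergence in either direction, so harsh self-graders get caught too, not just generous ones:

```python
# Hypothetical sketch: flag students whose self-assigned participation rating
# diverges from the instructor's perception by more than a threshold.
# All names, numbers, and the threshold below are illustrative assumptions.

def flag_for_checkin(ratings, threshold=1):
    """Return students whose self-rating and instructor rating differ by
    more than `threshold` points on the 5-point scale, in either direction."""
    flagged = []
    for student, (self_rating, instructor_rating) in ratings.items():
        if abs(self_rating - instructor_rating) > threshold:
            flagged.append(student)
    return flagged

# Invented midterm data: {student: (self-rating, instructor rating)}
midterm = {
    "A": (4, 4),   # calibrated
    "B": (2, 4),   # harsh self-grader -- worth a conversation
    "C": (5, 3),   # generous self-grader
}
print(flag_for_checkin(midterm))  # -> ['B', 'C']
```

A check like this doesn’t replace the instructor’s judgment; it just surfaces who to invite to the midterm conversation.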

Goals/benefits of this approach:

  • Reward a fuller range of behaviors.
  • Reward something more than quantity.

Analysis:

  • Did math with their numerical evaluations and counted how many students achieved their goals.
  • Also inductively distilled themes from the written reflections.

Results

Skill building: Students came to see speaking in class as a skill.  Related to this — they were able to articulate progress even when they were still feeling nervous about participation, or still identified as “shy” or “introverted.”

Starting is the hard part, and then it gets easier.

(Note: most students focused on participating more, but some students worked on skills around participating less, or participating intentionally. These themes cut across both of these goals.)

Connections: The five dimensions of participation are interconnected.

Transfer: Some students reported that they practiced their participation skills in other classes too.

Discussion

I am going to skip most of the discussion because this is super long, and I feel like many of the insights are grounded really well in the rest of the paper.  But I will tell you what the author identified as limitations:

  • Having to rely on self-reporting.  This is the big one.  They tried some forms of triangulation, but most came up short for obvious reasons, like, “I am trying to evaluate things that happen both inside and outside the classroom.”  So far, in the rare cases when there was a significant teacher/student mismatch, course correcting at the midterm check-in addressed the problem.
  • The experience so far demonstrates the need to do more, intentionally and formally, to train students how to participate in class.

Final thought

“Sociologists must take issues of inequality as seriously in our grading as we do in our instructional content, and moving toward a skill development participation assessment system is a good step in that direction.” (20)

 

I wrote a thing. Over there –>

I haven’t written here in a super long while.  But Hannah and I wrote a thing, published in that In the Library with the Lead Pipe place.

Sparking Curiosity — Librarians’ Role in Encouraging Exploration

Lots of you know this has been a long time coming — we first talked about it in public, I think, at Online Northwest in the 2014 Snowpocalypse.  I thought it might be fun to pull together all of the posts about it here:

Online Northwest 2014 (with Chad Iwertz)

(Which also led to the Curiosity Self Assessment Scoring Guide)

Library Instruction West 2014

Oregon Library Association Annual Conference 2016

LILAC 2016

European Conference on Information Literacy 2016

This work has also been a part of a lot of the professional development workshops I’ve taught, and Hannah and I have taught, in the last five years.  In fact, we’ll be teaching on this topic (and others) the day after tomorrow at Colorado Mountain College, and we’re very excited.

AMICAL 2015 — It Takes a Campus: Creating Research Assignments that Spark Curiosity and Collaboration.

University of San Francisco 2015 — Fostering Curiosity and Inquiry with First-Year Students

 

 

Again with curiosity (Library Instruction West 2014)

So, not only was this conference in Portland but it was also awesome.  Thanks one more time to Joan Petit, Sara Thompson, and the rest of the conference committee who put on such a great event.

Marijuana Legalization Papers Got You Down?  You Won’t Believe What We Did About It!

Hannah Gascho Rempel & Anne-Marie Deitering (OSU Libraries & Press)

Title slide for a presentation. The word curiosity is displayed across the top. Several images of sparks are below.

Download the slides (PDF)

Download the slides + presenter notes (PDF)

Session handout

Take the Curiosity Self Assessment

Scoring Guide to the Curiosity Self Assessment

 

thoughts about learning sparked by that note taking study

Remember a couple of weeks ago when news articles like this, or this or this were all over your social media?  Mine too.  I’m a little late to replying, but I didn’t want to do it until I’d read the actual study.  I read a couple of the news articles and something about the coverage was bugging me.  Me, an avowed taking-notes-by-hand-notetaker!  

Today, I read it, and I think I know what’s been bugging me.  It’s that “when we didn’t have the tools that make things easy, we learned a lot, so the tools are bad” narrative.

In other words, the technology (in this case, a pen) puts up a barrier, and what we have to do to get around that barrier turns out to be a useful learning experience.  We learn new skills because we’re motivated to get around the barrier, and we don’t even really notice we’re learning them because we have our eyes on the prize.

So when a new technology comes along that removes the barrier, we love it and adopt it, but worry about everyone who isn’t going to have the important experience of getting over the barrier.  Or worse, we look at those who grew up without the barrier and decide that they’re deficient in some way.

Does this sound familiar?  Of course it does.  How many times have we heard variations of it in libraries?  A million?  A zillion?

The problem with ____________ is that students don’t learn how to _____________ anymore.

First, a quick recap of the study

(I crack myself up, it won’t be all that quick)

Context:  There are 2 main theories about the value of notetaking that were considered here:

  • External storage — this is the idea that notes give you something to study later.
  • Encoding — this is the idea that the cognitive work you do to turn information into notes improves your learning, even if you don’t review them again.

Since laptops enable a more transcription-like type of  notetaking, the authors hypothesize that they will find benefits to pen-and-paper notetaking over laptop-supported notetaking and they designed 3 related studies to test that:

Study 1 — let’s compare laptop notetaking to paper notetaking, without doing much else.

2 groups of students were asked to take notes on the same material, with no instruction on how to take notes.  They were randomly assigned laptops or pen/paper to do the task. Afterwards, they answered both factual/recall and conceptual/application questions about the material.  In addition, their notes were coded and analyzed by the researchers.

Both groups of students did about the same on the factual/recall questions, but the students who took notes by hand did significantly better on the conceptual/application questions.

Those who took notes in longhand wrote fewer words, and had fewer examples of direct transcription in their notes.

Study 2 — let’s do pretty much the same thing, but this time we’ll tell them not to transcribe.

So this time the students essentially did the same thing, but the students who got laptops were split into two groups.  One of those groups was told to take notes as they usually do; the other was also told that studies show transcription doesn’t work, and that they shouldn’t transcribe.

In this case, the differences between the groups were smaller, but the handwritten notes group still did better.  There was no difference between the laptop groups — inserting a paragraph telling students “don’t transcribe” didn’t have an effect.

Study 3 — this time, we’ll have them study the notes again later.

Instead of TED talks, four prose paragraphs were selected and then read by a grad student from a teleprompter to simulate a lecture. The paragraphs included 2 “seductive details” — information that is interesting but not useful. Students were told they’d be tested later before they took their notes.  Again, some were given laptops and some were given pen and paper.  A week later they came back, half were given the chance to study their notes for 10 minutes, half weren’t.

The results here were more complicated.  You have to look at the interaction between notetaking medium and study time to find significant differences.  Those who took longhand notes and studied did better than any other condition. Additionally, among those who studied, verbatim notetaking and transcription negatively affected performance.
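If the “interaction” idea is unfamiliar: with four cell means (medium × study), the interaction is a difference of differences — does studying help longhand notetakers more than laptop ones? The numbers below are invented purely for illustration; they are not the paper’s actual results:

```python
# What "interaction between medium and study time" means, as plain arithmetic.
# The cell means are made up for illustration -- not data from the study.

# Mean test score by (medium, studied?)
means = {
    ("longhand", True):  0.70,
    ("longhand", False): 0.45,
    ("laptop",   True):  0.50,
    ("laptop",   False): 0.48,
}

# Main effect of studying: the benefit of studying, averaged over medium.
study_effect = ((means[("longhand", True)] - means[("longhand", False)]) +
                (means[("laptop", True)] - means[("laptop", False)])) / 2

# Interaction: the difference of differences -- how much MORE studying
# helps the longhand group than the laptop group.
interaction = ((means[("longhand", True)] - means[("longhand", False)]) -
               (means[("laptop", True)] - means[("laptop", False)]))

print(round(study_effect, 3))  # studying helps on average...
print(round(interaction, 3))   # ...but mostly for the longhand group
```

With made-up numbers like these, a big positive interaction alongside a modest average effect is exactly the “longhand + study beats everything else” pattern the study reports.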

Okay, enough recap, on to my thoughts:

(For more details about the study — see the end of the post. It’s paywalled, so I’m feeling responsible for making sure you have the details the news articles don’t include)

I want to start off by saying that I don’t have a problem with this study – I think it’s useful, I think it’s interesting, and I am fairly certain I will come back to it again and use it in real life.  My issue is with the conclusions that have been drawn from it — mostly in all of those news stories, but also by most of the people who tweeted, facebooked or tumblr-ed those articles.

Reading the actual study – there’s nothing in there that says much about the medium.  Beyond the fact that most people type faster than they write, and therefore can get closer to transcription on a laptop, there’s really nothing at all.  What the study found was that if you transcribe, you don’t learn as well and, as they point out themselves, we knew that already.

See, I don’t think the takeaway is “don’t take notes with laptops.”  I think the takeaway is — we have to start teaching people how to take notes. Better yet, we have to start teaching people how to use the information they gain from lectures, videos, infographics, textbooks, readings and learning objects.

There’s definitely no way one could consider the Just Say No to Transcription intervention in Study #2 “teaching” — this study surely did not prove that people can’t take good notes with laptops; it only suggested that they don’t.

There’s nothing magical about taking notes by hand that makes people process and think and be cognitively aware of what they’re doing — if that’s all you have and you want  good notes, over time you will figure that out because you can’t write fast enough to transcribe.  But that’s not magic, it’s motivation.  It’s still a learned behavior, even if the teacher could remain blissfully unaware of that learning.

And when we learned how subject headings worked, or that we could find more sources by using the bibliography at the end of the book, or that the whole section where that one book was had interesting stuff, or that both the article title and the journal title were important — learning that stuff wasn’t the point and we might not have noticed that learning.  But we learned how to think like the people who organized and used the information because learning that was the fastest and easiest way to getting our papers done.

(Hey, do you think that when copy machines were invented, and we could just make a copy of the article instead of having to read, digest and take notes on it in the library people argued for No Copy Machines?)

Even if we take laptops out of the classroom, I don’t think that students will feel like they have to learn how to think about, digest, remix and capture their thoughts about a lecture in order to function.  I think that ship has probably sailed, that horse is out of the barn, that genie’s out of the bottle.

If a student knows they can record the lectures on their phone, or if the slidedeck and lecture notes are posted before every class, they’re not going to feel like they have to get it down or risk failure.  And if the lectures are already recorded and re-watchable in a flipped or online class — they’re not going to suddenly think they need to be flexing their best cognitive muscles because they have a pen in their hand.

I don’t hear the “put the barriers back up” when it comes to digital information from instruction librarians much anymore.  And I think it’s fair to say that I’m hearing it less from faculty too.  But I still worry when I see things like the coverage of this study — because it’s not like I disagree that things are getting lost when these barriers come down.  Skill type things, tacit knowledge type things and also habits of mind type things — the tools I had to work with as a young learner left me with a lot that still serves me well now, when I have better tools. If my students can’t learn those things the way I did – and they can’t — how will they?  I don’t think answers like “ban laptops,” or “just use a pen” are going to get them what they need.

Study details

Mueller, P.A. & Oppenheimer, D.M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science. doi:10.1177/0956797614524581

Study 1

  • Princeton students.  n=67 (33 men, 33 women & 1 other).
  • Laptops had no internet connection.
  • Students watched 3 TED talks and took notes.  No instruction on taking notes.
  • Taken to another room to provide data:
    • complete 2 distractor tasks
    • complete 1 taxing working memory task
    • answer factual/recall questions
    • answer conceptual/application questions
    • provide demographic data
  • Notes were coded and analyzed.
  • Results:
    • factual/recall = both groups the same
    • conceptual/application = laptops significantly worse
    • more notes = positive predictor
    • less verbatim notes = positive predictor

Study 2

  • UCLA students
  • Laptop groups = 1 control (take notes as you normally would), 1 intervention (told that studies show students who take notes verbatim don’t do as well on tests, and not to do that).
  • Data:
    • Complete a typing test
    • Complete the Need for Cognition Scale
    • Complete Academic self-efficacy scales
    • Complete a shorter version of the reading span task
    • Complete the same dependent measures (questions) as study 1.
    • Demographic data
    • Notes were coded and analyzed
  • Longhand students did better, but not significantly.
  • None of the other measures had an effect
  • Longhand students took fewer notes than any of the laptop groups and took fewer verbatim notes.
  • Telling people not to take notes verbatim had no effect.

Study 3

  • UCLA students
  • 4 prose passages were read from a teleprompter by a grad student standing at a lectern simulating a lecture.
  • Students saw the lectures in big groups, wearing headphones.
  • 2 “seductive details” — interesting, but not important information — were inserted into the prose passages.
  • Students were told they would be tested on the material before taking notes.
  • Tests were 1 week later.
  • Study group was given 10 minutes to study notes in advance of taking the tests.
  • Data:
    • 40 questions, 10 per lecture, 2 in each of five categories:  seductive details, concepts, facts, inferences, applications
    • notes were analyzed
  • No main effects of note taking medium or chance to study.
  • Significant interaction between note taking medium and chance to study.
  • Longhand notes + study = significantly better than any other condition.
  • For those who studied, verbatim negatively predicted performance.

Images

The new way of taking lecture notes. Some rights reserved by Natalie Downe (flickr) https://www.flickr.com/photos/nataliedowne/1558297/

reading my notes. Some rights reserved by gordonr (flickr) https://www.flickr.com/photos/gordonr/430546423/

pen. Some rights reserved by Walwyn (flickr) https://www.flickr.com/photos/overton_cat/2267349191/

Copycard. Some rights reserved by reedinglessons (flickr). https://www.flickr.com/photos/reedinglessons/5909073392/

Sailboat. Some rights reserved by jordaneileenlucas (flickr) https://www.flickr.com/photos/jordanlucas/4027830675/

Before you tell me not to take notes

Don’t.

I mean it.  Please don’t. Just don’t.

You’re not encouraging me to engage with your talk; you’re not making your class more fun or easier for me.

hand writing math notes with a green stylus on a tablet computer

some rights reserved by Viking Photography (flickr)

I need to take notes, preferably by hand. These days that means with a tablet and stylus.  I use a tablet and keyboard when I forget and bring the bad stylus, and in meetings. And in some situations, I post notes on Twitter.

When you tell me not to do any or all of those things, you’re actually alienating me. You’re making me feel unwelcome. And you’re stressing me out.

(And if any part of your talk has to do with reaching all learners – you’ve lost me already)

Don’t misunderstand.  I’m not saying that everyone should take notes.  I’m not saying that anyone but me should take notes.  I’m not going to project my preferences and my learning habits on to you — I’m just asking that you don’t project yours on to me.

Here’s a secret.  My brain is a super busy place. Not always a productive or focused place. Seriously, say one interesting thing and I am off to the races. It doesn’t even have to be interesting, really. Even something that just reminds me of something that’s interesting will do.

handwritten mindmap describing faceted classification including circles squares arrows and text

some rights reserved by Jason-Morrison (flickr)

(Okay, that probably isn’t much of a secret)

And I’m not complaining about this. I spend a lot of time in my brain and most of the time, I like it there. I like to think. I get excited by ideas and connections. I get an almost visceral thrill when thoughts snap into place.

And don’t take this the wrong way, but there’s almost nothing you can do, no amount of humor or engaging activities you can build in, that will be more fun or compelling to me than thinking about what you say. The more awesome you are? The more I want to play with your ideas.

Taking notes is how I stay grounded in your thoughts. Taking notes is how I stay present. Taking notes keeps me from chasing my thoughts down those intellectual rabbit holes right now – I wrote a note, I drew a star and a circle and an arrow to the other thing, I can relax now and go back to it later.

And I know you’ve given me a handout or put up a website with all your references on it. I really appreciate it – I do! I do this too. Who wants to be scrambling to write down sources and links? I don’t, but I’m going to write down the why, and draw the circles and the arrows to show how they fit in and work for me.

(And if I ever gave you the impression I didn’t want you to take notes when I pointed out the URL for one of those resource lists – I’m sorry. That’s not what I meant!)

Man with wedding ring  scanning a handwritten notebook page into Evernote with his cell phone

some rights reserved by Evernote (flickr)

If it makes you feel better, I even take notes when I’m alone. I couldn’t start reading on my tablet until I figured out a note taking workflow.

For marginalia and highlighting, that’s PDF + stylus + Notability, if you’re interested. But there’s also my Evernote moleskine, which I use to create my holding pen notes — a writing trick I learned from Vicki Tolar Burton that I also use now for reading.

The holding pen is basically a place to put all of those questions and thoughts I don’t want to lose, but which will keep me from reading to the end of the article (or writing this paragraph or section) in the time I have if I don’t put them somewhere —

This might explain that theme we pulled out of the interviews, but I can’t remember exactly what she said. Argh, didn’t that Juarez paper I read last year deal with this trait? Hey, Laurie’d be interested in this to help turn that one project into a paper idea. Oh, maybe that term will work better in PsycINFO. OMG that’s a good example to use in class. Wait, no, I don’t think that’s what she was really arguing in that book. Ooh, that methodology might work for me with the other study.

Basically, I’ve been doing this a long time – learning in classes, in workshops, from books and texts, in lectures and presentations. I’ve had decades at this point to figure out how to make learning work for me, and while there’s always more to learn, I need you to trust me that I know what I’m doing, and to remember that for some of us, engagement looks a little different.

What? So What? Now What?

So I was at the First-Year Experience conference in San Diego a couple of weeks ago.  There were many highlights — from a conference that is actually in my time zone, to my excellent walking commute —

View of the Little Italy sign in San Diego, California

Walking commute from Little Italy to the conference hotel

— to the views from the conference hotel.

View towards the harbor from the Manchester Grand Hyatt in San Diego

trust me, this wasn’t even one of the best ones

Another highlight came in a late session by Catherine Sale Green and Kevin Clarke from the University 101 program at the University of South Carolina.  I wasn’t the only OSU person at this conference (far from it).  After I got back to campus, I was helping Ruth, who coordinates our FYE, with an info session for faculty thinking of applying to teach FYS next year and she started to say “what? so what….” and I finished with “now what” – because while it was a content-rich session, that short phrase was probably the most memorable part of it.

What?

It’s a guide to help students with reflective writing. Three simple questions to answer.

So what?

It probably won’t shock anyone to know that I find reflective writing pretty easy. It’s a reason this blog exists, and definitely a reason for the tagline. While the actual writing of some reflective documents (teaching philosophies, anyone?) kills me as dead as anyone, the how and the why of reflective writing has never been difficult for me.

Honestly, when I realized that it doesn’t come easily for everyone (or even for most people) I started to feel more than a little narcissistic.  I realized that pretty quickly once I started teaching — I’d assign the kinds of reflective writing prompts I used to see in classes, and I’d get back papers where the students really struggled with trying to figure out the right answers, or what I wanted to hear, but that lacked any real reflection of their own thinking.  The problem is, when you’ve never had to (ahem) reflect on how to do something or why to do it — it’s super hard to figure out how to help people who are struggling.

What I like about these three questions is how they start with something relatively simple — description is usually straightforward — what happened, what did you do, what did you notice, what did you learn, and so forth.  But they don’t let students end there.  They push to more complex analysis — why does that thing matter?  And then they push beyond that to something equally challenging (what does it mean for you) that, if students do it successfully, will also demonstrate the value of reflection or metathinking itself.

Now what?

Well, here’s the thing – I will undoubtedly teach credit courses again, and when I do I will undoubtedly assign reflective writing.  So this is going to help me there, in its intended context, I have no doubt.

But I also think this is a fantastic way to think about the process of analyzing and evaluating information.  We all know I don’t like checklists when it comes to teaching evaluation.  Truthfully, I’ll argue against any tool that tries to make a complex thing like evaluation simple (seriously – it’s at the top of some versions of Bloom’s! The top!).

And I’ll argue against any tool or trick that suggests you can evaluate all types of information the same way without context and without… yes… reflection, on your own needs, your own message, and your own rhetorical situation.  That’s my problem with checklists.  At best, they are useful tools to help you describe a thing.

An example — the checklist asks, “who’s the author?”  The student answers – William Ripple.  That’s descriptive, nothing more.  But think about it with all three questions.

What?  The author of this article is William Ripple.

So what? Pushed to answer this question – the student will have to do some additional research.  They will find that William Ripple is on the faculty of OSU’s College of Forestry, and the director of the Trophic Cascades program.  He has conducted original research and authored or co-authored dozens of articles examining the role of large predators in ecological communities.

Now what? This question pushes the student to consider their own needs — what they’re trying to say, who they’re trying to convince and what type of evidence that audience will find convincing.

Now, move away from that fairly obvious checklist item and let’s consider a more complicated one: bias.

I’ve linked here before to this old but still excellent post explaining why identifying bias is not evaluation.  And yet, we all know that this is still where a lot of students are in their analysis — they want facts; bias is a reason to reject a source. But bias is no different from author – identifying it, being able to describe it, that’s not evaluation.

What?  I actually think this one could be a step forward in itself — instead of just saying a source is biased, a good answer will specify what that bias is, and what the evidence for it is.

So what? This could push a student to consider how that bias affects the message/argument/validity of the piece.

Now what? And this is the real benefit — what does this mean for me? How does this bias affect my use of the source, how will my audience read it, how might it help me/ hinder me as I communicate my message?

Now, of course, a student could answer the questions “this source is biased, that matters because I need facts, so I will throw it out and look for something that says what I already believe.”  That could still happen.  And probably will sometimes.  But I like the idea of teaching evaluation as a reflective process, grounded in a rigorous description and examination of a source.

all mistakes are not created equal

I try my best to keep up with Inside Higher Ed bloggers, but I don’t always succeed.  Monday’s post from the Community College Dean jumped out at me (probably because of the title – The Ballad of the Red Pen) and then once it had jumped out at me, it got me thinking.


So the post isn’t really about using the red pen so much as not using it.

(BTW, the only thing I clearly remember from the award-winning one week of training I got before heading into the classroom as a graduate Teaching Assistant was this advice – Never Use a Red Pen.

The argument was that the red pen had become so stigmatized that just the sight of red ink could send students into panic mode.  To this day, I use something else)

Anyway, at the heart of this post (according to me) lies the concept of “stretch errors.”  These are those errors that happen when someone is trying to grow and develop — when they’re trying new things.  The suggestion is that one should be “thoughtful” about using the red pen too much when the errors you see fall into that category – too much discouragement to a student taking a risk and trying something new = problems.

This got me thinking about information literacy and research instruction and what I was saying in the Good Library Assignments posts.  If a big part of what we’re doing with college level research instruction is helping students grow, try new things, expand their repertoire — then we must be seeing “stretch errors,” right?  I mean, unless we’re totally failing.

But I’m a little stuck on what those would look like in the research context.  I have a whole stack of metathinking research narratives that I’m using for another project, and I’m thinking I might go through them to see if anything comes to me.

(Please share if something comes to you!)

As a starting point, it would probably be useful to think about where they’re likely to stretch.  Choosing sources has to be one of those areas.  It’s one of the areas where we’re really pushing students to expand their toolbox, to try something new. There must be situations where students are trying to choose something scholarly, complex, expert — and failing, but failing in a stretch-error way, because they are trying something new.

Citing sources correctly is definitely something new, something they’ve not done before, but it’s hard for me to think about the formatting aspect of this as leading to stretch errors.  The question of when and where to cite though, the question of paraphrasing and summarizing and using sources in ways other than Quote Then Cite — then yes, I think we may be seeing some there.


In fact, the very first thing that came to mind when reading this post was the Citation Project and its discussion of patchwriting.

Patchwriting kind of blew me away when I first read about it because it was one of those concepts that explained so much.

WordPress tells me I have cited TCP a LOT, so I probably don’t need to say, but patchwriting is a kind of almost-plagiarism — defined as “restating a phrase, clause, or one or more sentences while staying close to the language or syntax of the source.” 

The piece that really grabbed me when I first read about patchwriting in what is (I think) the first Citation Project paper was the idea that this happens when students are trying to do the right thing.  That they’re looking at the examples of academic writing we’re making them use – peer reviewed articles — and trying to mimic what they see.  They don’t have the domain knowledge, the vocabulary, or the experience yet to write this way for themselves, so they end up veering too close to their original sources in an attempt to mimic that genre of writing.  That just made so much sense to me, and now seems like a classic example of a stretch error.

Now, to find some more.