Behind the Paywall: Grading, Bias and Class Participation

This article was recommended all over my Twitter the other day, and the topic looks pretty interesting. So let’s launch a new (hopefully) regular feature: Behind the Paywall.

Citation

Alanna Gillis. “Reconceptualizing Participation Grading as Skill Building.” Teaching Sociology, 47:1, 10-21. January 2019.  DOI: 10.1177/0092055X18798006

The Access Experience

Paywall: Sage, accessed at my library.

I had a lot of trouble loading the PDF, which I blamed on my local wifi for a while. Seriously, for a college town, we have terrible wifi options in Corvallis. But when the same thing happened a few days later, while everything else around it loaded fine? Clearly a problem with Sage.

TL;DR

Untangling everything that is wrong with how we measure and reward class participation would take forever. Not only do our dominant methods rely on instructors to be free from bias and have perfect recall, but they rest on assumptions about students’ willingness, ability and preparedness to participate in class that are deeply problematic. By continuing to reward participation in these ways, teachers — even when they do not want to — are replicating and reinforcing inequalities. Reframing class participation as a skill-building opportunity and building in robust opportunities for students to reflect on their performance is a better way to go.

Here we go…

So we start off by situating this paper within the context of teaching and learning in the classroom. We know that students who are engaged and participating in class learn more.  Knowing this, professors have an interest in motivating students to participate, so many of them grade class participation.

I am liking this problem statement in its recognition that the target audience has spent many years in school, knows that participation grading is a thing, and doesn’t need eight different citations showing that to be true.  The author goes on to say, yes, I haven’t done any systematic inquiry to nail down objective participation grading themes, but I also don’t have to pretend we don’t all know what we know.  And based on living in the world as both students and teachers, we know that there are two basic ways that participation grading works:

  • Teacher gives grades based on recall. A few times a term (or once at the end) they remember how many times each student talked, and assign a grade based on that.  
  • Teacher gives grades based on actually counting how many times students talk during the term. More complex applications of this method might count specific types of participation (asking questions, answering questions, etc.).

There are issues with both of these methods. Teachers do not have perfect recall. Teachers are human and subject to bias in all the ways humans are biased. And, finally, more talking does not necessarily mean more learning.

OH. I think this next bit, though, is why this paper is getting so much love. It’s because it goes down to the next level and points out that the deeper problem with all of this participation grading is that these methods of motivating class participation are built on several problematic assumptions: that all students are equally prepared to speak in class; that students all understand class participation in the same way; that students have all been rewarded (or not) for classroom behaviors in the same way; that all students are bringing the same skill set to the classroom.

There’s truly no reason to believe that those things are true. And there are a lot of good reasons to believe that they are not.

SO.

Gillis has three intersecting goals in this paper:

  1. Unpack the assumptions behind participation grading as it happens most frequently now.
  2. Re-frame participation grading as an opportunity for skill development, and re-focus it on more meaningful goals.
  3. Show the evidence that says this new framework is worth implementing in real classrooms.

Literature Review

Let’s unpack some assumptions.

We know student evals of teachers are super biased.  We acknowledge and understand that that bias works in both directions:  students’ biases affect their evaluations of teachers AND teachers’ biases affect their evaluations  of students.  However, when it comes to participation grading, we have a tendency to acknowledge that bias as a reality without really understanding or unpacking its dynamics.

(I’m going to summarize the lit review pretty significantly, and link to some key sources.)

The research documents general biases that affect student evaluation: teachers tend to reward students they like,  squishy factors like attitude affect evaluations, and factors like race, gender, ability, and socioeconomic class definitely affect assessment in many ways.

We also know that we work in a world where teachers don’t always remember their students’ names, so systems that rely on accurate recall are inherently suspect.  But the issues with memory go beyond this.  Teachers are more likely to remember extreme situations (outbursts, falling asleep in class) than mundane normalcy. Teachers tend to remember giving students more chances to participate than students remember getting.  

There is also a ton of research that challenges the idea that all students are equally ready and willing to participate in class.  There are  a ton of things going into how students are socialized to understand their role in the classroom, or what appropriate interactions with teachers look like.  Some come from outside school — parents’ messages to children are shaped by their own experiences with school or authority structures, for example. Some come from the lived experience of being in school. Students bring very different experiences with consequences and rewards when it comes to asking questions, offering opinions, sharing stories, suggesting counternarratives, and classroom behavior.  And all of these dynamics — inside the school and out — are shaped by factors (including race, gender, class, ability, language and more) that create and reinforce inequality, and which also need to be analyzed and understood intersectionally. 

Then, we have one of the most pervasive dynamics in the teaching literature, at least in the literature that focuses on motivation and learning.  There is a lot of work in this area coming from psychology, using lenses that focus inquiry on personality. This shapes the discourse and produces research (and policy) that frames behaviors like class participation as the result of hardwired personality traits — shyness, introversion or extraversion — and not as behaviors that are built out of skills that can be learned. 


New Framework, Different from the Old Framework

“Instead, I propose that instructors conceptualize participation grades in undergraduate classrooms as opportunities to incentivize and reward skill building” (13).

This framework:

  • Conceptualizes participation as a set of learnable, interconnected skills
  • Recognizes and rewards skills that students already believe reflect their engagement in class (peer editing, prepping with classmates in study groups, active listening, coming to office hours) but which are usually not captured by “class participation” grades.
  • Encourages students to work on different skills simultaneously, and to start to understand these skills as interconnected.

Application (Methods)

This is how the author applied the framework in class.

  • Two sociology classes: one 400-level and one 100-level.
  • 45 students per class.
  • Class participation is broken into 5 dimensions:
    • Attendance and tardiness
    • Preparation for each class meeting
    • Participation in small group discussions
    • Participation in full class discussions
    • Participation in other ways (office hours, writing center visits, study groups, and more)
  • Evaluation is conducted using a “self-reporting goal-centered approach.”

Start of the term:

  • Students use a 5-point Likert scale to self-rate along each of the 5 dimensions: how well do you usually do with this behavior in classes like this? They also write 1 sentence justifying their numerical rating.
  • Students identify 3 concrete, measurable goals for themselves during the term and write out a plan to achieve these goals.
  • Teacher reads and gives feedback on the goals and plan.

During the term:

  • Periodic, informal check-ins.
  • At least one formal self-reflection. Students re-rate themselves along each dimension and submit a reflection justifying their rating, reporting on progress towards goals, and adjusting goals/plan as needed.
  • Instructor gives feedback on goals and plan, and if there is a disconnect between the students’ self-rating and the instructor’s perception, meets to calibrate this.

End of the term: 

  • Student submits a self-report that is similar to the mid-term report, but in which they assign themselves a participation grade and justify it.
  • Instructor reviews the reflective material from throughout the term, and the students’ progress towards goals, assigns a grade, and explains it with written feedback.

The instructor reports that there was rarely a disconnect between the students’ self-reported grades and the instructor’s perception.

(Note: based on my own experience, I would expect a group of students to grade themselves too harshly, describing similar activities and evidence to other students but assigning themselves a lower grade than I would, or than those other students would. I wonder if that happened. It would be pretty easy to re-calibrate at midterm.)

Goals/benefits of this approach:

  • Reward a fuller range of behaviors.
  • Reward something more than quantity.

Analysis:

  • Ran the numbers on the students’ numerical self-evaluations and counted how many achieved their goals.
  • Also inductively distilled themes from the written reflections.

Results

Skill building: Students came to see speaking in class as a skill.  Related to this — they were able to articulate progress even when they were still feeling nervous about participation, or still identified as “shy” or “introverted.”

Starting is the hard part, and then it gets easier.

(Note: most students focused on participating more, but some students worked on skills around participating less, or participating intentionally. These themes cut across both of these goals.)

Connections: The five dimensions of participation are interconnected.

Transfer: Some students reported that they practiced their participation skills in other classes too.

Discussion

I am going to skip most of the discussion because this is super long, and I feel like many of the insights are grounded really well in the rest of the paper.  But I will tell you what the author identified as limitations:

  • Having to rely on self-reporting. This is the big one. They tried some forms of triangulation, but most came up short for obvious reasons, like, “I am trying to evaluate things that happen both inside and outside the classroom.” So far, in the rare cases when there was a significant teacher/student mismatch, course correcting at the midterm check-in addressed the problem.
  • The experience so far demonstrates the need to do more, intentionally and formally, to train students how to participate in class.

Final thought

“Sociologists must take issues of inequality as seriously in our grading as we do in our instructional content, and moving toward a skill development participation assessment system is a good step in that direction.” (20)

 
