So yesterday I noticed that Emily had posed a really good question/insight about assessment —
Just wish assessment acknowledged roles race/class/etc play in student learning outcomes. Not being able to pay for books affects learning.
— Emily Drabinski (@edrabinski) May 9, 2014
I responded (and let’s just pretend that it wasn’t me responding to a week-old tweet without noticing the date) because I saw a really great session at AERA last month that was pretty on-point.
Then today I was pulling together the information on the presenters to send on and thinking “great, this is going to be a real pain to post at 140 characters per” when I remembered I have a blog.
It’s been a rough few days.
So AERA is an academic conference that follows the Chair – 4-6 papers – Discussant format. That’s why the 140 characters thing was going to be a challenge. It is a really researchy conference, where you’d expect that the papers would be published soon, but upon reflection, I’m not sure that’s true in this case. These felt more like researchers who’d been invited to make some more big-picture arguments about best practice in culturally relevant assessment and research, and they were mostly drawing on a body of work.
The discussion wasn’t limited to assessment and evaluation – there was a heavy focus on research methods too. I’m interested in both, so that didn’t bother me though it does take us beyond the scope of the initial question here.
One thing that did poke at me during this session though was just how far away we are from this kind of analysis in information literacy learning assessment, research and evaluation. To problematize standard models, you have to have standard models, and we’re not there yet. I don’t feel like we’ve got that community definition of what learning in this field looks like – though we have lots of great people thinking and working on it — that would serve as the baseline for this. In some ways, that’s probably good, but in others it makes it harder to have these shared conversations.
Francesca López (Arizona) – Teaching and learning outcomes for English learners: Contextual considerations for researchers
As you can tell from the title, this was not a research report, but a synthesis and distillation of best practices — part of her research agenda is examining the way that educational policy and reform efforts affect standardized (or standards-based) assessments. So it's about assessment, but not so much classroom-level learning outcomes assessment as ASSESSMENTS. Anyway, the insights in this talk were clearly informed by her dissertation, which is available in UA's repository.
Akane Zusho (Fordham) – Promoting a universalistic approach to the study of culture and learning
Again, this talk didn’t focus on a specific study but it was drawing more generally on her research and expertise to talk about improving research practice. She talks more in depth about what the “universalistic approach” means in her chapter in this book, and that chapter draws heavily on this article. I really enjoyed her talk — a lot of the discussion focused on why broad categories (like Latino/a or Asian) aren’t culturally meaningful enough to inform research and the insights that come out when you look at these categories in more complex ways.
(That was actually a theme in some of the sessions I attended at the FYE conference, including one by some of my colleagues here in the College of Ed at OSU — Felisha Herrera and Lucy Arellano, joined by PhD candidate Janet Rocha from UCLA. Their research hasn't been published yet, but the paper at the FYE conference was called Disaggregating the Complexities: Exploring Latino Postsecondary Pathways.)
Linda Tillman (UNC Chapel Hill) – Telling their stories: Conducting and reporting research in African American communities
Dr. Tillman said that she was invited as the qualitative researcher (I think it’s fair to say that there was general agreement that the qualitative community has come farther in the area of culturally relevant research methods, but is still very far from central in the overall conversation about educational research and assessment). Anyway, her 2002 article is clearly very important for understanding the topic of culturally relevant research, and this 2006 article is also widely-cited.
Stafford Hood (UIUC) – Assessment of racial-minority students
This was one of those “I was asked to update something I wrote in the past that was really important, but instead I’m going to talk about something else” talks. So that title doesn’t really matter. Dr. Hood edited a special issue of this journal back in 1998 which was focused on this question. And this would turn into a really long paragraph if I linked everything else. But he’s also the director of the Center for Culturally Responsive Evaluation and Assessment (CREA) which has a great publications page.
In his abstract, he also listed Gwyneth Boodoo, Richard Duran and Audrey Qualls as early leaders in this field.
Sadly, a fair amount of this part of the session was about how little has changed since that 1998/1999 conversation. One of the interesting sub-topics brought up in the Q&A was the role of IRBs — that they should be demanding that research is culturally meaningful (or, as CREA puts it, "defensible").
The discussant was Cynthia Hudley from UCSB, and she was also great. She definitely synthesized the content and offered commentary — by far my favorite type of discussant.
My full Zotero folder related to this session (exported to Evernote)
4 thoughts on “So much more than you wanted to know about that one session on culturally relevant assessment”
My favorite topics! Thanks for the notes and Zotero folder, also. We don’t get to travel much at all right now, and you’re making it easier to participate vicariously :) I’m going to Sloan next month, and maybe it’s an occasion to consider blogging again, you’re inspiring!
I would love it if you blogged or tweeted Sloan. Dooooooo eeeeeeeeeeet.
I had a similar realization that there are many cultural blindspots in library research and assessment methodologies from a presentation by some OMSI evaluators at the Oregon Program Evaluation Network meeting last year. They talked about the continuum of cultural competency and how no one really reaches the end of it. It made me think that our methodologies need to be more robust to capture the true experiences of our patrons in libraries, and that we often choose to gather evidence that is easiest rather than putting in the extra effort to gather evidence that is inclusive and diverse.
So I see a workshop on cultural competence in evaluation is happening in Oregon at the end of the month. https://www.eventbrite.com/e/cultivating-cultural-competence-in-evaluation-because-evaluations-are-not-culture-free-tickets-11813381141
Asking the questions: How does culture shape evaluation practice? How do cultural perspectives influence how we approach and conduct evaluations and share their results?