Peer Reviewed Monday – Scaffolding Evaluation Skills

So this week we’re also behind a paywall, I think.  Someday I will have time to actually go looking for Peer Reviewed Monday articles that meet a set of standards, but right now we’re still in the “something I read in real life this week” phase.

And this one was interesting – so far, when I have found articles specifically about deliberate interventions designed to teach something about peer review or about research articles, they have almost always been in this literature, the literature on teaching science.  Not surprising, but it does raise the question of disciplinary differences.  Still, the overarching takeaway of this article isn’t that everyone should teach about evaluating scientific evidence exactly the way these authors did, so much as that everyone should be teaching it on purpose, and over and over.

Which is a message I can get behind.  And one, I suspect, that is true across disciplines.

So the article has two parts.  One is a presentation of the model the authors used to teach students to evaluate evidence, and the second is a report on their research assessing the use of the model in a class.  Their students are not college students, but advanced high school students.

The authors open by arguing for the significance of evaluation skills in science –

Students, more frequently now than before, are faced with important socio-scientific dilemmas and they are asked or they will be asked in the future to take action on them.  They should be in position to have reflective discussions on such debates and not accept data at face value.

They further argue that students are not being taught this now – that most problem-based or inquiry-based curricula take the data as a given, and don’t include “question the data” as part of the lesson.  (I almost think they are arguing that this is even more important now than it was before, because of the current emphasis on active, experiential learning.  That they’re suggesting this type of pedagogy requires evaluation skills the old lecture model didn’t, but that teaching those skills hasn’t been built into these curricula.  That’s an interesting idea.)

In the lit review, the authors spend some time on the question of what “credibility” means.  For the purposes of this paper, they argue that there are two main components to the assessment of the credibility of evidence: the source of the evidence and the method by which it was constructed.  This interpretation is heavily influenced by Driver et al. (2001).

Questions to ask of the source:

  • Is there evident bias or not?
  • Was it peer-reviewed?
  • Who is the author? What is their reason for producing the evidence? What is their background?
  • What is the funding source?

Questions to ask of the methodology:

  • Does the evidence refer to a comparison of two different groups?
  • Is there any control of variables?
  • Were the results replicated?

The review of the literature suggests that there is ample evidence to support the claim that students are uncertain about how to evaluate evidence and assess claims.  This holds true across grade levels and disciplines.  They also suggest that there is very little research on whether these skills can be improved.


Credibility Assessment Framework

The authors then turn their attention to the Credibility Assessment Framework, which they believe will help high school students build the skills they need to assess evidence in inquiry situations.  The framework is based on two specific theoretical concepts: the learning-for-use framework (Edelson, 2001) and the scaffolding design framework (Quintana et al., 2004).  It is intended to help designers create good learning activities that include:

  • authentic contexts
  • authentic activities
  • multiple perspectives
  • coaching and scaffolding by the teacher at critical times
  • authentic assessment of learning within the tasks
  • support for the collaborative construction of knowledge
  • support for reflection about and articulation of learning

What they did

The team spent eight months building the learning environment for a class of secondary school science students.  They built their evaluation learning activities around a project where students were supposed to be doing hands-on work on an ill-structured and complex problem (food and GMOs) — a context where their work should naturally and authentically benefit from the critical evaluation of multiple sources of evidence.

One thing that is significant here is that the authors supplied the research for the students to evaluate – they didn’t include a “finding stuff” piece in this work.  But they also modified the sources the students were going to use when they felt it was important to do so to decrease the cognitive load on students.  What was really interesting to me about this was what they added in – context, why the study was done and where it fit.  This is exactly what I feel (feel, because I haven’t got data) my students are missing when they’re just assigned “peer-reviewed articles.”

This information was put in a database in the students’ online learning space.  This space includes both an “inquiry” environment and a reflective “WorkSpace” environment; the project used both.

Scaffolding was built in, using both human-provided information (from the teacher) and computer-supported information (available online for the duration of the unit).  The unit as a whole lasted eleven weeks, with eleven 90-minute lesson plans.  The students started out doing hands-on experiments, then spent the remainder of the unit doing groupwork that included data evaluation.  At the end, the groups presented their findings.

In the first four lessons, the students were evaluating the provided sources without direct instruction. In the fifth lesson, they did a specific exercise where they evaluated the credibility of two sources unrelated to the class’ topic — this was done to reveal the criteria that the students had been unconsciously using as they attempted to evaluate provided sources in the first four weeks.

What they found out

The authors gathered pre- and post-test data using two instruments: one measured mastery of concepts, the other evaluation skills.  They also videotaped the class sessions and used data captured from the online learning environment.  There was a control class as well, which did not get any of the specific evaluation lessons.  The authors found that for the study group there was a statistically significant difference between pre- and post-tests for both conceptual understanding and evaluation skills.  For the control group there was no significant difference.

Two findings I found particularly interesting:

  • Including the qualitative data gave more insight.  In the pre-tests students were able to identify the more credible sources, but they were not able to articulate WHY those sources were more credible.
  • Within the particular components of credibility that the authors identified (source and method), the students did fine on author and author background on their own, but needed help with type of publication and funding source.

The students needed scaffolding help on methodological criteria, and even with it, many students didn’t get it (though they got more of it than they had coming in – this was a totally new concept for most of them).

Here’s the piece that I found the most interesting.  The impact of the study, as I interpret it, was not so much on the students’ ability to pick out the really good or the really bad sources.  It sounded to me like the real impact was that the students were able to navigate the sources in the middle more meaningfully.  And I think that’s really important – and something most students don’t know they need to know on their own.  Related to this – the students were likely to mistrust ALL “internet” sources at the beginning, but by the end they were able to identify a journal article, even if that journal article was published online.  That’s significant to me too – it shows the start of the more sophisticated understanding of evaluation that I think is necessary to really evaluate the scholarly literature.

Finally, the authors found that most of the conversations the students did have about evaluation came as a result of instruction – not spontaneously – which they took as evidence that direct instruction is needed.

As I said before, the point of the paper seemed to me to be more that this kind of direct intervention is needed, not that this specific intervention is the be-all and end-all of instruction in this area.  Beyond this, I think the paper is interesting because it illustrates how big a job “evaluation” is to teach – that it includes not only a set of skills but a related set of epistemological ideas – that students need to know something about knowledge and why and how it’s created.  That’s a big job, and I’m not surprised it took three months to do here.

Nicolaidou, I., Kyza, E., Terzian, F., Hadjichambis, A., & Kafouris, D. (2011). A framework for scaffolding students’ assessment of the credibility of evidence. Journal of Research in Science Teaching. DOI: 10.1002/tea.20420
