So slammed, so briefly (well, for me). Via CrookedTimber, a pointer to this post by Julian Sanchez on argumentative fallacies, experts, non-experts and debates about climate change. It's well worth reading, especially if you are interested in the question of how non-experts can evaluate and use expert information, which is a topic that I think should be of interest to any academic librarian. Sanchez writes:
Obviously, when it comes to an argument between trained scientific specialists, they ought to ignore the consensus and deal directly with the argument on its merits. But most of us are not actually in any position to deal with the arguments on the merits.
Sanchez argues that most of us have to rely upon the credibility of the author (a strategy many librarians also espouse), in part because someone who truly wants to confuse us can do so and sound very plausible while doing it.
Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic.
Further, he suggests that the person who wants to confuse a complex issue actually has an advantage over those who want to talk about the complexity:
Actually, I have a plausible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience:
Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument.
And that's where we get to the evaluation piece. We need to know how much we know before we can tell whether it even makes sense to try to evaluate the arguments ourselves. Because if we don't know enough, trying to judge the quality of the actual argument will probably steer us astray more often than using credibility as our evaluation metric. As Sanchez puts it:
If we don't sometimes defer to the expert consensus, we'll systematically tend to go wrong in the face of one-way-hash arguments, at least outside our own necessarily limited domains of knowledge.
(Note: I skipped most of the paragraph where he really explains the one-way hash argument – you should read it there)
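Since the full explanation is skipped here, a quick gloss on the term Sanchez is borrowing may help: a cryptographic one-way hash is trivial to compute in one direction and effectively impossible to reverse. A minimal Python sketch (the claim string is just a placeholder, not anything from Sanchez's post) shows that asymmetry:

```python
import hashlib

# Forward direction: producing the digest is one cheap function call.
claim = b"It sounds plausible, so it must be true."
digest = hashlib.sha256(claim).hexdigest()
print(digest)  # 64 hex characters, computed in microseconds

# Reverse direction: there is no shortcut for recovering the input from
# the digest; you would have to brute-force the space of possible inputs.
```

The rhetorical analogy: like the digest, the misleading claim is cheap to produce, while unwinding it for a lay audience takes disproportionate effort.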
The thing I really want to focus on is this – that one word, consensus. Because I don’t think we do much with that idea in beginning composition courses, or beginning communication courses, or many other examples of “beginning” courses which often serve as a student’s first introduction to scholarly discourse.
And by “we” here, I'm talking about higher ed in general, not OSU in particular. I think we ask students in these beginning classes to find sources related to their argument; their own argument or interest is the thing that organizes the research they find. They work with each article outside of any context, except whatever might be presented in its literature review – they don't know if it's solidly mainstream, a freakish outlier, or suggesting something really new.
So they go out and find their required scholarly sources, they read them, they think about how the argument in each scholarly paper relates to the argument they are making in their own paper, and they try to evaluate it – and of course, they evaluate mostly on the question of how well it fits into their paper. And what other option do they have?
Sanchez argues, and it rings true to me, that we usually don't have the skills to evaluate the quality of the argument or research ourselves. And I know that I am not at all comfortable with the “it was in a scholarly journal so it is good” method of evaluation. Even if students track down the author's bona fides, I'm not sure that helps unless they can also find out what the author's reputation is in the field – and isn't that just another form of figuring out consensus?
In some fields meta-analyses would be helpful here, and in others review essays, but so many students choose topics where neither of those tools is available that it's hard to figure out how to build them into the non-disciplinary curriculum.
And perhaps it doesn't matter – maybe just learning that there are scholarly journals and that there are disciplinary conventions is enough at the beginning level. But if that's the case, then maybe we should let the question of evaluating scholarly arguments go at that level too?