So, I saw how Stephen Francoeur is using Tumblr as a commonplace book, and thought that might be a way to solve a problem I was having with my iPad-dominated workflow — how to corral and find the stuff I come across serendipitously, and the stuff I come across more intentionally in Google Reader. So I am on Tumblr, but I am really bad at being on Tumblr for real – I haven’t even found anyone to follow yet.
I think that workflow issue might be a topic for another post.
Today, though, I want to make good on my promise over there that some of that stuff might show up here. One common thread in the things I’ve saved over there is examples that show how complicated evaluation really is – especially without the kind of disciplinary expertise that most first-year students haven’t yet had the chance to develop.
I’ve been tagging those “10 minutes in a one-shot won’t do it.”
One example digs into Politifact – a resource that the composition faculty and I talk about in WR 222, an advanced composition class that focuses more on public than academic discourse.
While the course we use it in isn’t focused on academic discourse, this discussion at the American Historical Association blog is — particularly on the different ways that legal scholars and historians approach the same question, which makes the task of assigning a single, simple “true” or “false” rating to a political claim more complicated than it seems.
The claims in question relate to recent laws and measures that regulate voting — more specifically, do claims that use historically specific terms like “Jim Crow” and “poll tax” as analogies for current measures stand up to scrutiny? Politifact has evaluated three such claims in the last two years.
First, the AHA argues that Politifact “did its homework” in each of these cases —
Each time, Politifact editors called on historians to help them judge. Each time, their analysis and resulting judgments raised important questions about how historians, journalists, and politicians evaluate the nature of truth and how the past can best be mined for constructive analogy.
The list of historians and legal scholars consulted is lengthy and impressive. The AHA points out some of the ways that historians and legal scholars differ in their approach(es) to the question – historians may be more likely to take a broad view of the question, while legal scholars examined questions of results and intent in a more focused way. Overall, the message seems to be this – that the question “is this a suitable comparison” isn’t simple – and isn’t well served by the truth-o-meter approach. Many of the scholars questioned brought up subtleties that individually could tip the meter either way, but that, taken together, point most of all to the conclusion that “it’s more complicated than that.”
And perhaps this is the issue. Politifact admirably works to educate the public on the accuracy of politicians’ references to the past. Sometimes this is a straightforward task; often it is not. Politifact generally seeks to confirm or disprove one-for-one correspondences between the present and the past. The historians cited by Politifact appear more willing to allow for comprehensive thinking; recognize that categories like “Jim Crow” aren’t cut-and-dried; and accept the idea that intent matters. Historians, less attached to the tyranny of the Truth-o-Meter™, are more willing to engage questions by explaining issues of continuity and change, and greatly enlarging the context. Though Politifact has made a concerted effort to include historians in its analysis, the Truth-o-Meter™ might not be readily calibrated to measure their responses.
This doesn’t mean that I’m going to stop using Politifact in WR 222 – like it or not, the discourse that class examines does reflect the assumptions of the meter of truth, and it’s a useful addition to the boatload of resources I throw at them. But I’m also sending this discussion to the faculty who teach that class. Because this is just one example of what I am sure are many situations where “it’s more complicated than that” seems to be the best response to the truth-o-meter (and I’m sure some of those examples come up in class).
And just as all the subtleties the historians bring up show the limitations of the truth-o-meter for adjudicating complex questions, all of these examples show the limitations of any kind of list, tool, or crutch that can be used to “teach” evaluation in 10 minutes in a one-shot.