History and libraries, but not always history of libraries.

Nicholas and I presented this afternoon at Online NW.  Presentation materials are available here, on Nicholas’ blog.  Good times!

We used Prezi to create the presentation.  This is what it looked like, all together, when it was done.  Some people I know have found it difficult to get used to, but I really liked it.  Plus, I’ve used it so far on three very different computers in three very different contexts and it’s worked smoothly every time.

Plus, no dongle drama.

from the composition literature, old-school quotation

“Every university or public library centers in its reference room. This is the room for the first task of research, the bringing of books together. Among the thousands of books on the shelves, perhaps a score are needed for the particular topic. Which are these? Perhaps half of them require but a few minutes and the others demand extensive reading. Which is which? The object of the reference room is to answer these questions so far as possible in advance. (emphasis added)….

Its function then is two-fold: first to facilitate that preliminary survey which gives general notions and boundaries and shows how a particular investigation might be limited; secondly to facilitate comparison on particular points.”

– Charles Sears Baldwin, College Composition (New York: Longmans, Green & Co., 1922).

I am an unscrupulous, unscrupulous formatter

Knowing about my constant and abiding interest in all things peer-review, a colleague handed me this pamphlet the other day.  Published by a project I like, Sense about Science (and funded by, among others, Elsevier, Blackwell, the Royal Pharmaceutical Society of Great Britain, the Institute of Biology and the Medical Research Council), this pamphlet provides a good summary of a lot of reasons why people should value peer-reviewed research.

I really like its focus on the reproducibility of research – the role that peer review plays in getting science out there to be acted upon by other scientists.  And this statement here gets at a lot of what I have been thinking lately about information evaluation – about how important it is that we evaluate sources within contexts, not in a vacuum:

If it is peer-reviewed, you can look for more information on what other scientists say about it, the size and approach of  the study and whether it is a part of a body of evidence pointing towards the same conclusions.

But this has me mystified.  A callout box titled “How can you tell whether reported results have been peer reviewed?”  That’s a question any academic reference librarian has struggled to answer at some point, right?

Their answer totally mystifies me.  I keep reading it and reading it and I can’t make it make any sense.  Seriously – they say the full reference to peer-reviewed papers is likely to look like this, and then they present two formatted article citations, one from the New England Journal of Medicine and one from Science.  The Science one is APA, but I’m not even sure exactly what style the other one follows.

just formatted citations, right?

So under the citations, there’s a word balloon that says that unscrupulous people might “use this style on websites and articles to cite work that is not peer reviewed. But fortunately, this is rare.”

!

Wait, what?   So yeah, it turns out that I’m totally unscrupulous!  And so are you if you use APA to cite an article from the New Republic, or Time or The Journal of Really Lousy Non-Peer Reviewed Science!

I am so confused!  What do they mean by this?

identity, information literacy and professors as celebrities

So, in the infamous checklists, we tell students to “make sure you can tell who the author is, check their credentials, are they expert, are they scholarly?” as a necessary part of scholarly information evaluation, right?  Well, wouldn’t you say that an assistant professor of history at Southwest Baptist University would be well-qualified to talk about the historical implications of the current presidency?

Apparently, someone thought so, and a lot of other someones thought so enough that they started forwarding emails based on that assumption.  Via Historiann, in today’s Inside Higher Ed, professor of history Tim Wood tells the story of how his name was attached to a document with a clear political agenda, with which he did not agree — and how the professional identity that he had built was clearly lending credibility to the essay in question.

Wood also tells the story of what  he did, and what others might do, to prevent and contain similar situations.  This line really jumped out at me –

Moreover, this incident has led me to reconsider my somewhat adversarial relationship with technology. (I’m the guy who still refuses to buy a cell phone.) But one of the greatest difficulties I encountered in all of this was finding a platform from which to launch a rebuttal.

He suggests that actively building, policing and maintaining an online professional identity is a good way to protect that identity.  This, I think, is an important information literacy skill – and one we don’t talk about a lot.  In Wood’s case, his university gave him a space to post a rebuttal that could then be pointed to and linked to in order to expose the lies.

Using that space, Wood directly links this back to the information literacy skills we do talk about a lot as well –

To navigate those potential pitfalls, historians check facts and look for other documents that confirm (or contradict) the information found in our source. We seek to identify the author and understand his or her motives for writing. We try to understand the larger historical and cultural context surrounding a document. By doing our homework, we’re better able to judge when something or someone deserves to be “taken at their word.”

This episode has taught me that these skills have an important place even outside this history classroom.

I’ve said before that I don’t think we should focus our evaluation teaching on those situations where someone is actively trying to trick us, but that doesn’t mean we should pretend that possibility doesn’t exist, either.  The checklist doesn’t get us where we need to go when it comes to information evaluation – at best it gets us to where we need to be to start doing our real evaluation.  When you figure out what the author’s agenda is, you’re not done evaluating, right?  When you figure out whether it’s peer-reviewed, that just tells you what you need to be looking for as you evaluate, right?

(Full disclosure: I haven’t read the essay in question, so I’m not sure how closely its content aligns with the professor’s own scholarly research, but I do think that for most students brand-new to thinking about what academic expertise means, “professor of history” would probably be enough to establish credibility.)

And in this case, when you figure out who the author is, you’re not done evaluating either.  Does the work match other work by the author – does it fit within their normal research agenda – is it part of a scholarly/expert consensus, or is the interpretation more on the whacked-out side?  That’s what this story has me thinking about – how to get students from “professor of history” to “there’s something seriously wrong here.”

ETA – apparently, Professor Wood isn’t the only one to be dealing with this.

the other kind of peer review

I think a lot about peer review, but it’s almost all about the journals side of things – the related-but-not-the-same issues of open access and peer review.  And by that I mean what is called “editorial peer review,” as distinguished from peer review in the grants/funding world – a kind of peer review that is probably much more important to a lot of people than the journals-specific kind.

But a couple of recent notes about the other kind of peer review jumped out at me and connected – what do these, taken together, suggest about how we – beyond higher ed, as a society and a culture – value knowledge creation?  Or maybe what I really mean is: what do they suggest about how we should value knowledge creation?

First, there’s this note today from Female Science Professor.  She’s responding to an article in Slate, and it’s that Slate piece I’m interested in here as well – the amount of time that faculty in different disciplines (and in different environments) spend writing proposals to get funding for their research.  The Slate article includes a quote suggesting that med school faculty at Penn spend half their time writing grant proposals.  That number has increased, it goes on to suggest, because of the effort to get in on stimulus funding.

The comments, with a few exceptions, suggest that the 50% number is not out of line in that environment.

So that, connected with this item from EcoTone last month, has to make you think, right?

(quoting the abstract of an article in Accountability in Research)  Using Natural Science and Engineering Research Council Canada (NSERC) statistics, we show that the $40,000 (Canadian) cost of preparation for a grant application and rejection by peer review in 2007 exceeded that of giving every qualified investigator a direct baseline discovery grant of $30,000 (average grant).
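(In case the arithmetic there isn’t obvious, here’s a back-of-envelope restatement of the comparison the abstract is making – this is my framing, not the authors’ actual accounting, and the applicant-pool sizes are purely illustrative: if running the competition costs more per investigator than the baseline grant itself, the gap only grows with the size of the pool.)

```python
# Back-of-envelope sketch of the NSERC comparison quoted above.
# The $40,000 and $30,000 figures come from the abstract; the
# applicant-pool sizes (n) are invented for illustration.
cost_per_application = 40_000  # CAD: preparing + peer-reviewing one proposal
average_grant = 30_000         # CAD: average baseline discovery grant

for n in (100, 1_000, 10_000):  # n = number of qualified investigators
    competition_cost = n * cost_per_application
    baseline_cost = n * average_grant
    print(f"n={n:>6,}: competition ${competition_cost:>12,} "
          f"vs. baseline grants ${baseline_cost:>12,}")
```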

Obviously, there are stark differences in scope and scale between these disciplines.  Also obviously, the process of writing grant proposals isn’t entirely divorced from the goal of knowledge creation – the researcher undoubtedly benefits from going through the process, and the project benefits from the work done on the proposals – in some ways.

In other ways, though, the proposals are undoubtedly a distraction, and the process becomes more about the process than about the knowledge creation.  No solutions offered here – not even a coherent articulation of a problem.  It just makes you wonder what it says about us when, within the knowledge creation process itself, the problems and issues of getting funding take precedence over the problems and issues connected to the direct experience of creating new knowledge.

Peer Review 2.0, revised and updated

Watch this space – we may be able to put up a link to the actual talk at some point.  This version is being presented online, to librarians and faculty members from Seattle-area community colleges.

Sneak preview:

[Image: Surprise, by flickr user Jeremy Brooks]

Why 2.0?

Michael Gorman (Britannica Blog) Jabberwiki: The Educational Response, parts one and two

Shifting perspective – why journals?

Ann Schaffner (1994). The Future of Scientific Journals: A View from the Past (ERIC)

Archive of knowledge

(Skulls in the Stars), Classic Science Papers: The 2008 “Challenge”!

(Female Science Professor) Everyone knows that already

Community building

Sisyphus (Academic Cog), MMAP Update April 13: Publishing Advice from the Professionals

(Historiann), Peer review: Editors versus authors smackdown edition

Clickstream Data Yields High Resolution Maps of Science (PLoS ONE)

Quality control

BBC TV and Radio Follow-Up:  The Dark Secret of Hendrik Schön

Bell Labs, press release.  Bell Labs announces results of inquiry into research misconduct.

Fiona Godlee, et al.  Effect on the Quality of Peer Review of Blinding Reviewers and Asking them to Sign their Reports (paywall)

Willy Aspinall (Nature Blogs: Peer-to-Peer), A metric for measuring peer-review performance

(Lounge of the Lab Lemming), What to do when you review?

Distributing rewards

Undine (Not of General Interest), From the Chronicle, Are Senior Scholars Abandoning Journal Publication (also includes a link to the original article behind the Chronicle’s paywall)

(PhD Comics) How professors spend their time

Report of the MLA Task Force on Evaluating Scholarship for Tenure

Openness – access

Directory of Open Access Journals

Openness – scope

ScienceBlogs

Fill His Head First with a Thousand Questions blog

Landes Bioscience Journals, RNA Biology, Guidelines for Authors (requires authors to submit a Wikipedia article)

(Crooked Timber) Seminar on Steve Teles’ The Rise of the Conservative Legal Movement

Henry (Crooked Timber), Are blogs ruining economic debate?

Collaborative

WikiBooks – Human Physiology

Re-mixed

ResearchBlogging

ResearchBlogging on Twitter

Iterative

Nature Precedings

(sometimes) Digital

Current Anthropology

Stevan Harnad. Creative Disagreement: Open Peer Commentary Adds a Vital Dimension to Review Procedures.

(Peer-to-Peer) Nature Precedings and Open Peer Review, One Year On

Sara Kearns (Talking in the Library), Mind the Gap: Peer Review Opens Up

Miscellaneous

The awesome font we used on the slides is available for free from Typographica:

http://new.typographica.org/2007/type_commentary/saul-bass-website-and-hitchcock-font-are-back/

Photo credit (because it is tiny here) —  Surprise.  flickr user Jeremy Brooks.  http://www.flickr.com/photos/jeremybrooks/3330306480/

Not quite peer-reviewed Monday, but related!

So slammed, so briefly (well, for me).  Via Crooked Timber, a pointer to this post by Julian Sanchez on argumentative fallacies, experts, non-experts and debates about climate change.  It’s well worth reading, especially if you are interested in the question of how non-experts can evaluate and use expert information, a topic that I think should be of interest to any academic librarian.

Obviously, when it comes to an argument between trained scientific specialists, they ought to ignore the consensus and deal directly with the argument on its merits. But most of us are not actually in any position to deal with the arguments on the merits.

Sanchez argues that most of us have to rely upon the credibility of the author — which is a strategy many librarians also espouse — in part because someone who truly wants to confuse us can do so, and sound very plausible.

Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic.

Further, he suggests that the person who wants to confuse a complex issue actually has an advantage over those who want to talk about the complexity:

Actually, I have a plausible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience:

Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument.

And that’s where we get to the evaluation piece.  We need to know how much we know in order to know whether it even makes sense to try to evaluate the arguments.  Because if we don’t know enough, trying to evaluate the quality of the actual argument will probably steer us astray more often than using credibility as our evaluation metric.

If we don’t sometimes defer to the expert consensus, we’ll systematically tend to go wrong in the face of one-way-hash arguments, at least outside our own necessarily limited domains of knowledge.

(Note:  I skipped most of the paragraph where he really explains the one-way hash argument – you should read it there)

The thing I really want to focus on is this – that one word, consensus.  Because I don’t think we do much with that idea in beginning composition courses, or beginning communication courses, or many other examples of “beginning” courses which often serve as a student’s first introduction to scholarly discourse.

And by “we” here, I’m talking about higher ed in general, not OSU in particular.  I think we ask students in these beginning classes to find sources related to their argument; their own argument or interest is the thing that organizes the research they find.  They work with each article outside of any context, except what might be presented in the literature review – they don’t know if it’s solidly mainstream, a freakish outlier, or suggesting something really new.

So they go out and find their required scholarly sources, and they read them and they think about the argument in the scholarly paper and how it relates to the argument they are making in their own paper and try to evaluate it – and of course, they evaluate mostly on the question of how well it fits into their paper.   And what other option do they have?

Sanchez argues, and it rings true to me, that we usually don’t have the skills to evaluate the quality of the argument or research ourselves.  And I know that I am not at all comfortable with the “it was in a scholarly journal so it is good” method of evaluation.  Even if they find the author’s bona fides, I’m not sure that helps unless they can find out what their reputation is in the field, and isn’t that just another form of figuring out consensus?

In some fields, meta-analyses would be helpful here; in others, review essays.  But so many students choose topics where neither of those tools is available that it’s hard to figure out how to use them in the non-disciplinary curriculum.

And perhaps it doesn’t matter – maybe just learning that there are scholarly journals and that there are disciplinary conventions is enough at the beginning level.  But if that’s the case, then maybe we should let that question of evaluation, when it comes to scholarly arguments, go at that level too?

what do huge numbers + powerful computers tell us about scholarly literature? (peer-reviewed Monday)

ResearchBlogging.org

A little more than a month ago, I saw a reference to an article called Complexity and Social Science (by a LOT of authors).  The title intrigued me, but when I clicked through I found out that it was about a different kind of complexity than I had been expecting.

Still, because the authors had made the pre-print available, I started to read it anyway and found myself making my way through the whole thing.  The article is about what might be possible for the social sciences – not just the life sciences or the physical sciences – with powerful computers able to crunch lots and lots of data.  The reason it grabbed me was this bit here –

Computational social science could easily become the almost exclusive domain of private companies and government agencies. Alternatively, there might emerge a “Dead Sea Scrolls” model, with a privileged set of academic researchers sitting on private data from which they produce papers that cannot be critiqued or replicated. Neither scenario will serve the long-term public interest in the accumulation, verification, and dissemination of knowledge.

See, the paper opens by making the point that research in fields like biology and physics has been incontrovertibly transformed by the “capacity to collect and analyze massive amounts of data,” but while lots and lots of people are doing stuff online every day – stuff that leaves “breadcrumbs” that can be noticed, counted, tracked and analyzed – the literature in the social sciences includes precious few examples of that kind of data analysis.  Which isn’t to say that it isn’t happening – it is, and we know it is, but it’s the Googles and the Facebooks and the NSAs that are doing it.  The quotation above gets at the implications of that.

The article is brief and well worth a scan even if you, like me, need a primer to really understand the kind of analysis they are talking about.  I read it, bookmarked it, and briefly thought about writing about it here, but couldn’t really come up with the information literacy connection I wanted (there is definitely stuff there – if nowhere else, it’s in the discussion of privacy – but the connection I was looking for wasn’t there for me at that moment), so I didn’t.

But then last week, I saw this article, Clickstream Data Yields High-Resolution Maps of Science, linked in the ResearchBlogging twitter feed (and since then at Visual Complexity, elearnspace, Stephen’s Web, Orgtheory.net, and EcoTone).

And they connect – because while this specific type of inquiry isn’t one of the examples listed in the Science article, this is exactly what happens when you turn the huge amounts of data available, all of those digital breadcrumbs, into a big picture of what people are doing on the web — in this case what they are doing when they work with the scholarly literature. And it’s a really cool picture:

The research is based on data gathered from “scholarly web portals” – from publishers, journals, aggregators and institutions.  The researchers collected nearly 1 billion interactions from these portals, and used them to develop a journal clickstream model, which was then visualized as a network.
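(If it helps to see the shape of that model: here’s a minimal sketch of the core idea, with invented session data and journal names – the actual Bollen et al. pipeline, with its normalization steps and its nearly one billion log entries, is of course far more involved than this.)

```python
# Minimal sketch of a journal clickstream model: within each user
# session, count transitions between the journals of consecutively
# requested articles, yielding a weighted, directed journal network.
# Sessions and journal names here are invented for illustration.
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Each session lists the journals of the articles one user viewed, in order.
sessions = [
    ["Science", "Nature", "PLoS ONE"],
    ["PLoS ONE", "Science", "Science"],
    ["Nature", "PLoS ONE"],
]

edges = Counter()
for session in sessions:
    for src, dst in pairwise(session):
        if src != dst:  # skip repeat views within the same journal
            edges[(src, dst)] += 1

# The weighted edge list is what you'd hand to a network layout tool
# (Gephi, Graphviz, etc.) to draw the actual map.
for (src, dst), weight in edges.most_common():
    print(f"{src} -> {dst}: {weight}")
```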

For librarians, this is interesting because it adds richness to our picture of how people – scholars – engage with the scholarly literature, dimensions not captured by traditional impact measures.  For example, what people cite and what they actually access on the web aren’t necessarily the same thing, and a focus on citation as the only measure of significance has always provided only part of whatever picture is out there.  Beyond this, as the authors point out, clickstream data allows analysis of scholarly activity in real time, while citation analysis has to wait out the months-and-years delay of the publication cycle.

It’s also interesting in that it includes data not just from the physical or natural sciences, but from the social sciences and humanities as well.

What I also like about this, as an instruction librarian, is the picture it provides of how scholarship connects.  It’s another way of providing context to students who don’t really know what disciplines are, don’t really know that there are a lot of different scholarly discourses, and don’t really have the tools yet to contextualize the scholarly literature they are required to use in their work.  Presenting it as a visual network only highlights the potential of this kind of research.

And finally – pulling this back to the Science article mentioned at the top – this article is open, published in an open-access journal, and I have to think that the big flurry of attention it has received in the blogs I read, blogs with no inherent disciplinary or topical connection to each other, is in part because of that.

———————-

Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabasi, A., Brewer, D., Christakis, N., Contractor, N., Fowler, J., Gutmann, M., Jebara, T., King, G., Macy, M., Roy, D., & Van Alstyne, M. (2009). Computational Social Science. Science, 323(5915), 721-723. DOI: 10.1126/science.1167742

Bollen, J., Van de Sompel, H., Hagberg, A., Bettencourt, L., Chute, R., Rodriguez, M., & Balakireva, L. (2009). Clickstream Data Yields High-Resolution Maps of Science. PLoS ONE, 4(3). DOI: 10.1371/journal.pone.0004803

What we did at our last library faculty meeting

Which I missed, because I was off campus that day.

But I totally, 100% supported this action from afar — we adopted an open access mandate!

Appropriately, it lives in the institutional repository.

Comment from Peter Suber at Open Access News.

More comments from my colleague Terry Reese.

This was made a little more meaningful to me because just in these last two weeks I’ve had a weird flurry of interest from different people/groups in this paper, which my co-author and I archived in the IR last year.  Pointing these people to the archived copy wasn’t just convenient – it actually provided me with golden opportunities to talk to people (who might themselves be in a position to assert authors’ rights someday) about open access and authors’ rights in a very organic way.

monday morning drive-by

I have been reading peer-reviewed articles.  LOTS and LOTS of them.  But the last two left me unclear on the concept – as in, I thought I understood the value of reflection and revision, and I thought I liked thoughtful, academic writing, but these went through the process and provided none of those things so far as I could see.  So is it the concept I’m not getting, or just these articles?  I just need to come across another one that leads me to go YES or NO, but that makes me think of something new.  That’ll happen later today, I’m sure.

For now, though, doesn’t it seem like this — Specifics on Newspapers from the “State of the News” Report (Editor & Publisher).

They don’t bury the lead here: “The business of journalism is quickly running out of time to transform its model, a new study from the Project for Excellence in Journalism found.”

and this — News You Can Endow (New York Times) — Yale’s chief investment officer David Swensen argues that newspapers should be endowed like colleges and universities, freeing them from a business model that won’t work and protecting them as necessary to the public good.  (This one is a couple months old, but I just saw it today):

By endowing our most valued sources of news we would free them from the strictures of an obsolete business model and offer them a permanent place in society, like that of America’s colleges and universities. Endowments would transform newspapers into unshakable fixtures of American life, with greater stability and enhanced independence that would allow them to serve the public good more effectively.

ETA – related to this: How to Fix American Journalism, Part II (Sara Catania at Huffington Post)

and this — Daily News Habit Doubles among U.S. Mobile Users (TechCrunch)

The number of people who access news and information daily on their mobile phones doubled from 10.8 million in January, 2008 to 22.4 million in January, 2009.

are connected?

And connected as well to scholarly publishing – at least in the way that Alex Reid posits after getting back from the CCCC conference — the crisis of scholarly publication: a regurgitating choreography of CCCC 2009 (Digital Digs):

I think it is fair to say that we are in a related situation in terms of scholarly journals and books. Arguably the old system must break before a new one will have a chance to emerge. In the interim, and already, we can see a variety of measures and experiments–from blogs to online journals like Kairos to WAC Clearinghouse and Parlor Press to open source textbooks.

If you only click one of the links, I’d suggest the Digital Digs piece – it’s got a lot you’ve probably thought about before, but presented in a way that shook some things loose in my head.