making Google Scholar work harder

This was weird. I followed the link from Catherine’s excellent comment on my last post over to her excellent instruction-focused blog, and while I was browsing the archives to check it out I came across this post, which I knew I was going to share.

It’s a nifty tip on how to force Google Scholar to add a “cited by [insert scholarly work of choice here]” to a regular keyword search.  Or, put another way, it’s a way to search within a list of “cited by” results within Google Scholar.  For anyone at an institution that doesn’t subscribe to a lot of databases that support cited-reference searching, the value is obvious, but I would suspect most of us have wanted to do just that from time to time, access to Web of Science or no.
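For the link-rot averse, the gist as I understand it: when you click “Cited by N” under a result, Google Scholar puts a cites= identifier in the URL, and adding your own keywords as the q= parameter searches within just those citing works. A minimal sketch of the URL construction, in Python – the parameter names are the ones Scholar uses as of this writing (no guarantees they stay that way), and the ID below is made up:

    # Build a Google Scholar search restricted to works that cite a given item.
    # The "cites" value is the ID that appears in the URL after you click
    # "Cited by N" on the work you care about (the one below is made up).
    from urllib.parse import urlencode

    def search_within_citing(cites_id, keywords):
        params = {"cites": cites_id, "q": keywords}
        return "https://scholar.google.com/scholar?" + urlencode(params)

    print(search_within_citing("1234567890", "information literacy"))
    # -> https://scholar.google.com/scholar?cites=1234567890&q=information+literacy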

Then today, less than fifteen minutes after I browsed that post, I opened Google Reader to see that Fred Stutzman had posted on his (yes, excellent) blog exactly the same solution to exactly the same problem.  His post lays it out in step-by-step format, if that’s what your brain likes.

That’s some crazy synchronicity.

health research using Google – maybe not what you think when you hear that

This is really interesting.  In a letter to Nature a few days ago (paywall – natch), six researchers (one from the CDC and five from Google, including one named “Brilliant” – awesome) reported on their work using five years of query data to track the presence of flu-like symptoms within the population –

By processing hundreds of billions of individual searches from 5 years of Google web search logs, our system generates more comprehensive models for use in influenza surveillance, with regional and state-level estimates of ILI activity in the United States. Widespread global usage of online search engines may eventually enable models to be developed in international settings.

Information seeking behavior within a population as a way to figure out what’s going on in the population – I don’t know why I enjoy that concept so much, but I do.

Here’s where I saw this story, on the Mystery Rays from Outer Space blog (by an MSU virologist) – linked from the ResearchBlogging twitter feed.

Here’s more about the “Google Flu Trends” tool, including more about how it works and a place to download the raw data for yourself.
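If you download that data and want to play along at home, the underlying model is (as far as I can tell from the letter and the Flu Trends site) surprisingly simple – a linear fit between the log-odds of the flu-related fraction of all queries and the log-odds of the ILI physician-visit percentage. Here’s a toy version with invented numbers, just to make the shape of the thing concrete – my gloss, not the researchers’ code:

    import math

    def logit(p):
        """Log-odds of a proportion."""
        return math.log(p / (1.0 - p))

    # (ILI physician-visit fraction, flu-related query fraction) by week.
    # These numbers are invented; the real model was fit on years of logs.
    history = [(0.010, 0.0004), (0.025, 0.0011), (0.060, 0.0030), (0.030, 0.0014)]

    xs = [logit(q) for _, q in history]
    ys = [logit(i) for i, _ in history]

    # ordinary least squares for y = b0 + b1 * x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx

    # estimate this week's ILI activity from this week's query fraction
    q_now = 0.0020
    ili_now = 1.0 / (1.0 + math.exp(-(b0 + b1 * logit(q_now))))
    print(round(ili_now, 4))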

And here’s the graph from the letter that sure makes it look like this works -

Peer-reviewed Monday – Making Ignorance

I meant to come up with the perfect, open-access peer-reviewed article on something different – something besides media literacy, publics, critical thinking or reflective practice.

But I failed.  I’ve read a few articles this week, but nothing really compelling or interesting.  So here’s a quick thing – the article I read is behind Sage’s paywall:

ResearchBlogging.org

S. H. Stocking, L. W. Holstein (2008). Manufacturing doubt: journalists’ roles and the construction of ignorance in a scientific controversy. Public Understanding of Science, 18(1), 23-42. DOI: 10.1177/0963662507079373

It does look like there is an earlier conference paper version (from the International Communication Association annual meeting in 2006) here.

Short recap – epidemiologist Steven Wing (UNC) published two studies suggesting that living next to industrial, factory-farming hog operations is hazardous to one’s health, and that these facilities are more likely to be located in poor, African-American communities.

The statewide hog industry trade association, the North Carolina Pork Council, responded to both studies by characterizing the scientist as politically motivated and the studies as “pseudo-science” filled with holes and errors, charges they swiftly brought to the attention of the news media with rapidly issued news releases and follow-up interviews with reporters.

Informed by sociologist Michael Smithson’s work on the construction of scientific ignorance, Stocking and Holstein go into their study working from the research assumption that the industry implicated in Wing’s research would respond by using strategies based on uncertainty and ignorance – that they would “claim to journalists that the research was tainted by error, incompleteness, irrelevance and uncertainty and was therefore unsound, untrustworthy and open to doubt.”  They, like Smithson, argue that we need to look as closely at how scientific ignorance is constructed as we do at how scientific knowledge is created.

Stocking and Holstein were interested in looking at how science journalists would report on these industry tactics – would they accept the industry’s assertions or challenge them, and how?  They suggested that the prior research on the construction of scientific ignorance provided little guidance on this question.

Unsurprisingly, they found that the industry did employ the predicted tactics – “[i]ndustry’s claims were sweeping and sometimes vitriolic.”

What’s interesting to me about this study is the finding that journalists’ responses to these industry claims were most strongly related to how the individual journalists perceived their own roles.  Their own understanding of the science they were reporting was related to how they reported it, but not as strongly as how they identified their roles as journalists.  To discuss this finding, they drew on Weaver and Wilhoit (1996), who articulated four such roles: disseminators, interpretive/investigative reporters, populist mobilizers, and adversaries.

Disseminators put forward all points of view equally and trust the public to make up their own minds.

Interpretive/investigative reporters have trouble when it comes to science stories.  Most reporters aren’t really qualified to do their own research on the topic.  In this study, only one reporter – described as a “sophisticated consumer of science” – did so.  It is perhaps not surprising that this reporter was an editorial writer who felt that he had a certain amount of freedom to build his own case and present his own interpretation of the issue.

The populist mobilizer role is a relatively new one in Weaver and Wilhoit’s scheme, but it is based on an old muckraking tradition.  A few reporters fit this profile.  These reporters gave time to the industry claims and to the science.  And they also pulled in other voices – lay people and local people who they believed had a stake in the narrative – voices that might otherwise have been left out.  These stories tended, for different reasons, to skew towards verifying the science.

Adversarial – this role is fairly rare.  Most journalists tend to be conservative when it comes to established power structures.  In this case, the adversarial reporter attacked the institution behind the scientific research (UNC).

The authors concluded that even though the research in question came out of a respected institution, was conducted by a respected scholar and was reported in respectable outlets – “ignorance claims became a viable tool for an economically powerful but threatened industry that sought to follow the tobacco industry’s lead and manufacture doubt about science that jeopardized their interests.”

What’s interesting to me about this is that the industry poked not just at the conclusions but at everything else – the research method, Wing himself, UNC and more – and that every one of those jabs was recorded somewhere in the press.  These attacks weren’t designed to suggest any positive alternative, or to make any substantive criticisms – they were designed to suggest uncertainty and manufacture doubt – “we just don’t know.”  This isn’t a debate or a discussion – it’s a game.  The research Stocking and Holstein draw upon predicted this – it is an established and identifiable rhetorical strategy.

Stocking and Holstein acknowledge that the pool of journalists analyzed here is too small to support any strong conclusions – but their suggestion – that science journalists, just as much as other more obviously interested parties, construct ignorance when it serves their interests (when it supports their own professional identity) – is a provocative one.

On one level, this is interesting because of the information literacy implications – people need to know how to find scientific research, to use it and to understand it, because they can’t always trust science reporting to do those things for them.  But on another level, which I think is more interesting, there’s the why this article suggests – why the science reporting might not be doing what we need it to.

It’s not enough to say “so-and-so wants to prove that hog farming is awesome because they have X or Y interests in hog farming.”  This article suggests that Reporter X might have no personal or financial or political interest at all in how his or her article skews on the story.  But still, that reporter’s interest in presenting as an Investigative Journalist or as an Adversary Speaking Truth to Power skews the story anyway.  That’s a more interesting way of thinking about evaluation – rhetoric isn’t always obvious, after all, and there are a lot of dimensions to it.

—-

for more on the models in this article:

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms.  New York: Springer-Verlag.

Weaver, D.H. and Wilhoit, G.C. (1996). The American Journalist in the 1990s: U.S. News People at the End of an Era.  Mahwah, NJ: Erlbaum.

I know what paleophidiothermometry means because of scholarly blogging

Way back when we first started talking about talking about peer review, Kate made a point that has stuck with me ever since – that we talk about being accessible a lot in libraries, but we usually talk about it in only one sense of the word. To be fair, it is the first sense listed by the OED:

1. Capable of being used as an access; affording entrance; open, practicable. Const. to.

This meaning gets at access to information – our ability to physically get our hands (or our eyes) on the information we want or need, our ability to get past technical barriers, bad interfaces, or paywalls.  It also gets at our accessibility in terms of open hours, our availability to answer questions, and maybe even a little bit our openness in terms of friendliness.

What Kate pointed out is that for our students – actually, for a lot of us – the scholarly or scientific discourse is inaccessible in another way (the OED’s 3rd):

c. Able to be (readily) understood or appreciated. Freq. applied to academic or creative work.

How many times do we teach students how to find scholarly articles by showing them the physical access points – the databases (or results-limiting options) that will bring back articles that have been peer-reviewed, that will meet their professor’s requirement for one, three or five “scholarly articles” – while all the time we know that they will struggle with reading, understanding, and really USING these articles in their work?

How often do all of us, poking around on a new topic, find scholarly articles that are too narrowly focused, that assume too much about what we know about context and significance, that are full of technical terms – just plain inaccessible to us, at least early on in our investigation?

And it’s not the articles’ fault.  The authors of peer reviewed articles have an audience to consider, and it’s not us.  Which is why I love the idea of these same authors writing for a different audience – and academic or science blogging is a great way to do that.  I know I’ve made this point here before, and I’ll probably do it again, but I thought this post today at Dracovenator (by Adam Yates, an Australian palaeontologist) was such a great example of it that I wanted to put it out there.

I only clicked on the link today (out of ResearchBlogging’s Twitter feed) because the title of the post was SO inaccessible to me.  I was just delighted by the post though – look how accessible it is, on every level.  Quick explanations of technical terms, a short summary of the research, an explanation of the context.  That context piece is one of my favorite parts, actually.  But then also a critique of the research.

And you can tell from the first line of the post that this isn’t a dumbed-down explanation written for the uninformed – the author assumes we’ve all heard about the study. I think there is so much positive potential in scholars and experts simply showing how they interact with the work in their field, how they understand it, how they read it, and how they talk about it.

Kicking off Peer Reviewed Mondays

ResearchBlogging.org

I am apparently not the only theory geek out there.  But I realized that I haven’t been doing a great job of putting my money where my mouth is when it comes to the value of peer-reviewed articles.  So my Monday evenings this term look like they are, for a variety of reasons, going to lend themselves to this experiment – Peer Reviewed Mondays.

Which means – Mondays I’ll try to at least point to a peer-reviewed article that I think is worth talking about.  I’ll also try to explain why.  That seems do-able.

So I’ve had the idea for this for a while, which of course means that I haven’t come across anything peer reviewed that really inspired me to write.  But today, I was inspired by a presentation I heard on campus to go poking around into journals I haven’t looked at before and I came across this article (PDF) from 2004, which takes a different kind of look at issues of evaluation when it comes to science and research and peer reviewed papers.  The meta of that delights me, and I think this paper is a useful addition to the stuff in my brain.

The article is called The Rhetorical Situation of the Scientific Paper and the “Appearance” of Objectivity.

(Useless aside – when the presenter today was being introduced, I was unreasonably amused because, to fully express the meaning of each journal article, the professor doing the introducing had to read out all of the punctuation – like the quotation marks above.  That doesn’t happen in library papers enough.)

Allen’s method is simple.  He articulates ideas and theories from the rhetoric/composition discourse, and then analyzes a single scholarly article -

in this essay, I analyze Renske Wassenberg, Jeffrey E. Max, Scott D. Lindgren, and Amy Schatz’s article, “Sustained Attention in Children and Adolescents after Traumatic Brain Injury: Relation to Severity of Injury, Adaptive Functioning, ADHD and Social Background” (herein referred to as “Sustained Attention in Children and Adolescents”), recently published in Brain Injury, to illustrate that the writer of an experimental report in effect creates an exigence and then addresses it through rhetorical strategies that contribute to the appearance of objectivity.

Allen’s initial discussions – that scientific rhetoric is rhetoric, that scholarly objectivity has limits, and that specific rhetorical decisions (like the use, or what might be considered overuse in any other context, of the passive voice) are employed to enhance the impression of objectivity – are interesting but not earth-shattering.  Where I found the nuggets of real interest was in the concluding sections.  Allen draws upon John Swales, who examined closely how scientists establish the idea that their research question is important.

Swales called the strategy Create a Research Space (CARS) and identified three main rhetorical moves that most scholars make.  As Allen describes them:

  • Establish centrality within the research.
  • Establish a niche within the research.
  • Occupy the niche.

So what’s interesting about this to me is Allen’s conclusion that scientific authors seek to establish their study’s relevance through the implication that they share the same central assumptions and information base as the rest of their discourse community.

The reason this is interesting is that it dovetails neatly with Thomas Kuhn’s idea of normal science – the idea that a shared set of first principles is central to scholarly discourse, particularly a discourse that advances knowledge.  Where that intersects with peer review is in what makes it through peer review – an article or a piece of research needs to be more than just good or interesting research; it also needs to be a good example of what research is in a particular discipline, or discourse community.

Allen compares the relatively ordinary piece of research that he is analyzing with a far more influential piece and finds that some of the rhetorical strategies used to establish the science-ness and the objectivity of the former are present, but far less present, in the latter.  This – especially the idea that authors proposing truly ground-breaking research might be less likely to use the passive, “objective” voice and more likely to refer to themselves as active agents – is simply fascinating.

Allen points out that the very language we use to talk about scholarly research – creating meaning, knowledge, verifying truth claims – implies that the situation in which scientists communicate their findings is rhetorical.  He also points out that it is not just scientists – but the rest of us who rely in many ways on the meaning and knowledge scientists create – who need to understand these rhetorical practices.  His last sentence, in fact, is as good a justification for information literacy in higher ed as I’ve seen:

Certainly, scientists and researchers should be aware of embedded rhetorical strategies. But given the profound and pervasive influence of science in Western culture, we should all––scientist or not––be attentive to how our knowledge is shaped.

And now on to that delicious meta I love so much.

I love the idea of using this article to talk about what scholarship really is with undergraduates, particularly undergrads working on understanding academic writing.  But here’s the thing – the author of this article is himself an undergraduate writer.

Seriously!

The article appears in Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric – a peer-reviewed journal, with an editorial board made up of faculty in rhetoric and composition.  The content of the journal is all by undergraduate researchers, and the peer reviewers themselves are undergraduates who have also published in the journal.

The journal reflects some of the practices of open peer review that fascinate me – especially as information literacy resources for students as they learn the practice of scholarly knowledge creation.  The review process itself is not made public, but each issue of the journal has a Comments and Responses section where student writers write 2-5 page responses to papers that have been published in the journal.

Which gets to the last piece of meta.  The one overwhelming benefit of the peer-review process – the one that is rarely discussed by us librarians in information literacy classes when we have to talk to students about finding peer reviewed articles, and that is rarely discussed by the faculty who require their students to find peer reviewed articles, the one thing that is pretty much unanimously agreed upon – is that the process of peer review makes the paper better.  Maybe not all better, maybe not better in the same way as it would be if it were reviewed by other people.  But better than it would have been had it not gone through that process.  This paper is beautifully written – clear and accessible and smart.

I love the idea of digging into peer review, using student engagement with the peer-review process as an entry point – but to be able to do that with a paper that should itself spark new ideas about the value of scholarship, and how to evaluate scholarship – that looks kind of like a gift.

*************
Matthew Allen (2004). The Rhetorical Situation of the Scientific Paper and the “Appearance” of Objectivity. Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric, 2.

pointing out those giants, there with the shoulders

So back in April, gg at Skulls in the Stars challenged science bloggers across the disciplines to read and research some classic article in their discipline, and then write a blog post about it.  The results are in, and they’re awesome.  Not just fascinating – this is a potential time suck (with none of the guilt I feel wasting time with old sports clips on YouTube – I mean, it’s reading about science.  Important science!) – but also a really intriguing way to think about introducing a lot of overlapping ideas about scholarship to students.

One – we all know that context is one of the hardest things to figure out when you’re taking your first steps into understanding a new topic or discipline. Which things to read, what do they mean, why were they important, why are they still important – answers to these questions aren’t immediately apparent to an outsider, and scholarship written for other experts takes a lot of the keys to unlocking this discourse for granted. Each of these posts lifts some disciplinary curtain aside, telling us what to read and why – in language written not for experts but for smart, motivated people who don’t already have that contextual knowledge.

And by showing the significance of a work in a discourse, these bloggers also (in both text and subtext) show us something about what discourse is and how it works in science or scholarship or research.  My hands-down favorite entry in this series is from the person who issued the initial challenge – the Gallery of Failed Atomic Models – and it really gets at what I’m talking about here.  From gg:

It is often said that history is “written by the victors”. While this statement is usually referring to the winners of a military or political conflict, a similar effect occurs in the history of science. Physics textbooks, for instance, often describe the development of a theory in a highly abbreviated manner, omitting many of the false starts and wrong turns that were taken before the correct answer was found. While this is perfectly understandable in a textbook (it is rather inefficient to teach students all of the wrong answers before teaching them the right answer), it can lead to an inaccurate and somewhat sterile view of how science actually works.

And that might be my favorite piece of this project – the view of how science actually works that you get from these articles is anything but sterile.  They’re planning a second go-round of this project, which will be hosted here in about a month.  I’m marking my calendar.  Well not literally.  But I’m glad this will be an ongoing thing.

There’s another version of the first set of posts up at A Blog Around the Clock – organized chronologically, with some great excerpts highlighting what makes each post good.

Thanks to Cognitive Daily for the pointer.

Why we should read it before we cite it — no, really!

Last week, Female Science Professor wrote a lovely pair of posts about scholars and scholarship, what it feels like when your work has an impact on someone and what it feels like to meet the people who have influenced you in that particular undefinable way where it’s hard to even express what they’ve meant to you. I shared one, saved the other and generally felt very good about being a small part of this world where rock star crushes on ideas and the people who share them are understood and embraced.

Way to ruin everything, Inside Higher Ed.

Okay, not really. But seriously, it’s a lot harder to feel like a rock star because someone has read and used your work if, as Malcolm Wright and J. Scott Armstrong suggest, they probably didn’t read it and if they did, they probably read it wrong.

That might be a little bit strong, but not by much. So what does it mean when a published, peer-reviewed article in a real-life journal kicks off its final, concluding paragraph with this sentence – “Authors should read the papers they cite.”

!

This isn’t a library tutorial aimed at fifth-graders writing their first research paper, after all. This is a paper talking about what professional scholars, people responsible for the continued development of knowledge in disciplines, should do. It can’t mean anything good. Here’s the original article:

Article at Interfaces – requires subscription

Article at Dr. Armstrong’s faculty page – does NOT require a subscription – (opens as PDF)

Nutshell – Dr. Armstrong wrote one of the more impact-heavy articles in his discipline, and the only article that analyzes and explains how to correct for non-response bias in mail surveys (that’s bias caused by people who do not respond to the survey at all). By analyzing (1) how often research based on mail surveys includes a citation to this article, and (2) how often the later researchers seem to interpret and apply the original article correctly, the authors conclude that many, many researchers are not reading all of the relevant literature. More disturbingly, many, many researchers aren’t even reading all of the articles they themselves cite.

Now, on one level this isn’t a shocker – anyone who has read moderately deeply in any body of literature has probably looked at at least one bloated literature review and said “hey – this person probably didn’t really read all of these books and articles.” This article suggests that it’s more complex than just lit-review padding: scholarly authors also mis-cite and mis-use the sources they rely on to support the methods they use and the conclusions they draw.

Working on the assumption that if your research uses a mail survey, you should at least be considering the possibility of nonresponse bias, they found that:

…far less than one in a thousand mail surveys consider evidence-based findings related to nonresponse bias. This has occurred even though the paper was published in 1977 and has been available in full text on the Internet for many years.

Working on the further assumption that someone who makes a claim about nonresponse bias, and who reads, understands and cites an article that outlines a particular method for correcting nonresponse bias to support that claim, will follow the method outlined in the article they cited, the authors conclude that many authors are either not reading or are not understanding the articles they cite:

The net result is that whereas evidence-based procedures for dealing with nonresponse bias have been available since 1977, they are properly applied only about once every 50 times that they are mentioned, and they are mentioned in only about one out of every 80 academic mail surveys.
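(Multiply those two fractions out, by the way, and you get proper application in roughly one out of every 4,000 academic mail surveys.) For anyone who, like me, had to look up what that 1977 correction actually involves: as I understand it, the core idea is extrapolation – people who respond late, after reminders, are treated as the best available proxy for the people who never respond at all. A toy sketch of that idea, with invented numbers – my gloss, not code from any of the articles:

    # Extrapolation idea for nonresponse bias: assume nonrespondents look like
    # an extension of the trend across successive response waves.
    def extrapolate_estimate(wave_means, wave_sizes, population_size):
        respondents = sum(wave_sizes)
        nonrespondents = population_size - respondents
        # project the trend from the last two waves onto the nonrespondents
        trend = wave_means[-1] - wave_means[-2]
        nonrespondent_mean = wave_means[-1] + trend
        respondent_mean = sum(m * s for m, s in zip(wave_means, wave_sizes)) / respondents
        return (respondent_mean * respondents
                + nonrespondent_mean * nonrespondents) / population_size

    # First mailing averaged 6.0 on some survey item, the reminder wave 5.2,
    # so we guess the 500 people who never answered would average about 4.4.
    print(extrapolate_estimate([6.0, 5.2], [400, 100], 1000))  # -> 5.12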

Most of the research that seriously digs into how well researchers use the sources they cite has come out of the sciences, particularly the medical sciences. This is one of the first articles I’ve seen dealing with the social sciences, and I think it’s worth reading more closely because this very rough and brief summary doesn’t really do justice to the issues it raises. But right now, I want to turn to the authors’ conclusions because I think they get at some of the things we’ve been talking about around here about how new technologies and the read/write web might have an impact on scholarship.

The first two outline author responsibilities:

  • First – read the sources you cite. I think we can take that as a given – a bare-minimum practice, not a best practice.
  • Second – “authors should use the verification of citations procedure.” Here they’re calling for authors to contact all of the researchers whose work they want to cite, to make sure that they’re citing it correctly. I’m going to come back to this one.

The second two put some of the burden on the journals:

  • Journals should require authors to attest that they have in fact read the work they cite and that they have performed due diligence to make sure their citations are correct. That seems a sad, largely symbolic, but not unreasonable precaution.
  • Finally, journals should provide easily accessible webspaces for other people to post additional work and additional research that is relevant to research that has been published in the journal. Going to come back to this one too because I think it’s actually related to the one above.

Basically – both of these recommendations suggest that more communication and more transparency would be more better for knowledge creation. And what is the read/write web about if not communication and transparency, networking and openness?

Some of the commenters on the IHE article expressed, shall we say, polite skepticism that an author should be obligated to contact every person they cite before citing them. These concerns were also raised by one of the formal comment pieces attached to the Interfaces article. And I have to say I agree with these concerns, for a few reasons. Armstrong made the claim more than once that he does this as an author, with good results, and that the process is not too onerous. But that doesn’t really address the question of how onerous it would be for a prolific or influential author to have to field all of those requests.

And I’ll also admit to having some “author is dead” reactions to this. What if I contact Author A and say I’m planning to use their work in this way, and they say “well, I didn’t intend it to be used in that way, so you shouldn’t”? Does that really mean I shouldn’t? Really? It’s hard to see this kind of thing not devolving quickly into something that actually hinders the development of new knowledge, because it hinders new researchers’ ability to push at and find new connections in work that has come before.

But not to throw everything out with this bathwater – the idea that more and better and faster communication between scholars (more and better and faster than can be provided within journals and the citation-as-communication tradition) makes for better scholarly conversations and better scholarship – that’s something I think we need to hold on to. Armstrong points out how talking to the researcher who really knows the area described in the thing you’re citing can point you to other, less cited but more useful resources – how they can expand your knowledge of the field you’re talking about:

We checked with Franke to verify that we cited his work correctly. He directed us to a broader literature, and noted that Franke (1980) provided a longer and more technically sophisticated criticism; this later paper has been cited in the ISI Citation Index just nine times as of August 2006.

This is an area where the transparency, speed and networking aspects of the emerging web might have a real impact on the quality of scholarship even if there are no material changes in the practice of producing journal articles. I might not be sure about the idea of making this communication a part of citation verification but it should be a part of knowledge creation. And it’s tied as well with the final recommendation – that journals should provide webspaces for some, not all but some, of this communication to happen.

The types of conversations between similarly interested scholars that Armstrong is describing are nothing new – the emerging web offers some opportunities for those conversations to move off the backchannel. Or maybe it’s still a backchannel, but a backchannel that is visible – and that’s what is interesting. Whether the journal has its own backchannel for errors, additions, omissions and new ideas to be posted, or whether that backchannel exists on blogs, in online knowledge communities, or in networking spaces doesn’t matter so much as that it can exist. We certainly have the technology.

And the journal Interfaces itself, I think, provides a suggestion as to why this kind of additional discourse and conversation is valuable. You may have noticed that what looks like a fifteen-page article is really an eight-page article with six pages of response pieces, followed by an authors’ response. The responses challenge parts of the original article and enrich other parts with additional information and examples. They illustrate the collaborative nature of knowledge production in the disciplines in a way that citations alone cannot. I couldn’t find anything on the journal’s website about this practice – whether it’s a regular thing, how responses are solicited, or more. These responses are a spot of openness in a fairly closed publication.

And that points to the last point to make here, because this is far too long already – I don’t think we have to change everything to fix the problems raised here, and I don’t think that if we did change everything it would fix all of the problems raised here. There’s that scene in Bull Durham where Ebby Calvin gets his guitar taken away because he won’t get the lyrics right. And that’s the connection between FemaleScienceProfessor and Armstrong and Wright – who can feel like a rock star if they’re singing your songs but getting them wrong?

There will always be Ebby Calvins out there, inside and outside of academia – for them, women are wooly because of the stress. But injecting just some openness, making some communication visible, won’t stop Ebby Calvin – but it might keep the next person from replicating his mistakes. And that’s a good thing.

LOTW follow-up – from people who weren’t there!

Kate and I are still buzzing from the great conversation we had with the people who came to our session at LOEX of the West. It’s always an amazing and kind of surreal experience when you find out that other people are excited by the same ideas you are.

And it seems that other people really are. Almost the second we stopped talking, we started finding other people who were. All over the web.

At ACRLog, Barbara Fister brings up the issue of promotion and tenure, and how many committees find it difficult to evaluate the significance of publications that don’t fit into the traditional scholarly formats – particularly when they are trying to evaluate the impact of scholars from other disciplines. These ideas are strongly connected to our ideas about distributing professional rewards – and we had really just gotten started on the questions of expertise and of evaluating work outside your discipline at the end there, so it’s good to see and think more about them.

Dorothea Salo talks about the differences between informal writing on the participatory web (like blog posts) and scholarly journal writing. She brings up one benefit to scholarly journals that we only hinted at – the way that the lengthy give and take between author and editor in the traditional publication process can make an individual article better. Not bring it up to some objective standard of quality, but make it better than it was. She also talks about something we did spend a lot of time talking about – the archive of knowledge, or the scholarly record. But she goes a lot further than we did talking about the role academic libraries play in that process.

Then today, I saw Tenured Radical’s discussion of the Social Science Research Network. She’s asking why historians aren’t participating in this project, and looking at some of the implications of that lack of participation. The SSRN is a digital archive that has as its goal the rapid dissemination of research in the social sciences. It includes an abstract database (of scholarly working papers and forthcoming papers) and an e-library of downloadable papers. These resources are available to registered members for free; there are also entry points into some proprietary database holdings, for a fee.

Tenured Radical highlights one of the reasons we think it’s so important that we are all having these conversations – not to replace traditional forms of publication, but to make them accessible. Not to encourage scholars to write for the public instead of for each other, but to leverage technological change in ways that can keep that scholarly discourse available to those who want to find it –

An insistence that the only good work has been heavily vetted through our current refereeing practices may be a mistake, much as soliciting the criticisms of others does contribute to producing good work (although it doesn’t always, I’m afraid, as cases where flawed research has slipped through to publication or to a prize demonstrates.) In its current form, it may be a fetish that is doing us more harm than good, and may be something that our professional associations need to review to take advantage of an atmosphere of intellectual vigor offered by electronic and other forms of mass publication.

LOEX of the West presentation, 2008

Peer Review 2.0: Tomorrow’s Scholarship for Today’s Students

LOEX of the West, Las Vegas

Anne-Marie Deitering & Kate Gronemyer

WEB 2.0 BACKGROUND

Five Web 2.0 themes — from the ACRL Instruction Section’s Current Issues Discussion Forum, Research Instruction in a Web 2.0 World (Annual, 2006).

DANAH BOYD EXAMPLES

{Edit: These didn’t make it into the presentation, but they are examples of some discussions on the web over the last year that started our thinking on this topic.}

danah boyd – open-access is the future: boycott locked-down academic journals

danah boyd – Viewing American Class Divisions through Facebook and MySpace

danah boyd – editing a special issue of JCMC

NORMAL SCIENCE & INNOVATIONS

PEER REVIEW, QUALITY CONTROL & FRAUD

DISTRIBUTING PROFESSIONAL REWARDS

WHAT IF WE IGNORE NEW MODELS?

NEW MODELS

Current Anthropology

Expressive Processing: An Experiment in Blog-Based Peer Review – Noah Wardrip-Fruin on Grand Text Auto

Cognitive Daily blog

ScienceBlogs – The World’s Largest Conversation about Science

BPR3: Bloggers for Peer-Reviewed Research Reporting — Icons

CiteULike

UsefulChem Wiki

Radiology Wiki

Open Notebook Science Using Blogs and Wikis (Jean-Claude Bradley, at Nature Precedings)

Rrresearch Blog

“The Academic Manuscript” — Wicked Anomie: Sociology Run Amok

Welcome to Nature Precedings

EDITED TO ADD:

Barbara Fister points to this article in the Chronicle: Certifying Online Research (Gary Olson), about the challenges of evaluating online publications.  See also Barbara’s post at ACRLog: Peer (to Peer) Review.