This week is kind of like the week before Christmas for me. But a Christmas where I am super excited and at the same time super nervous because I really like the presents I made for everyone and I hope they like them too.
(That might be a little too much truth about how I feel around gift-giving holidays. And about how much of myself, and of others, is present in these pages.)
So for those of you who like to shake the box or peek inside the bags — here’s a teaser.
That’s the introduction to the book that I’ve been working on (with lots of other amazing people) for almost three years now. It’s my attempt to capture what I think about the method and about our need for it (and methods like it) in LIS.
The book itself will be available sometime near the end of next week. It’s strange for something that’s been three years in the making to finally become real. I’m having a bit of a believe-it-when-I-see-it reaction. But there are some hints that it is going to be real — see, isn’t it pretty?
Stay tuned – more to follow.
Reading this article next to #overlyhonestmethods – well it’s not all rosy.
One reason I like the hashtag is that it humanizes a process that isn’t humanized very often. Finding out that, yeah, we ran that for this many hours so we wouldn’t have to get up at 2:00 am — that’s a nice reminder that scientists and scholars are real people.
And some of the rest, well it humanizes the process too, but in a different way. Instead of a reminder that scientists and scholars are real people who need to eat and sleep and interact with others and have fun and the rest of it – some of it shows scientists and scholars as real people who know exactly where their professional rewards are coming from, and who (no matter what Forbes may think) feel pressure to do the things that will earn those rewards. And there are consequences there, and no bright line to separate the shades of grey.
Simonsohn stressed that there’s a world of difference between data techniques that generate false positives, and fraud, but he said some academic psychologists have, until recently, been dangerously indifferent to both. Outright fraud is probably rare. Data manipulation is undoubtedly more common—and surely extends to other subjects dependent on statistical study, including biomedicine. Worse, sloppy statistics are “like steroids in baseball”: Throughout the affected fields, researchers who are too intellectually honest to use these tricks will publish less, and may perish. Meanwhile, the less fastidious flourish.
Christopher Shea (December 2012). “The Data Vigilante.” The Atlantic.
This is one of my favorite hashtags ever. As I told a colleague before — there’s a great undergraduate learning experience buried in here somewhere, but mostly… it’s just funny.