Reading this article next to #overlyhonestmethods – well it’s not all rosy.
One reason I like the hashtag is that it humanizes a process that isn't humanized very often. Finding out that, yeah, we ran that for this many hours so we could avoid getting up at 2:00 am is a nice reminder that scientists and scholars are real people.
And some of the rest, well, it humanizes the process too, but in a different way. Instead of reminding us that scientists and scholars are real people who need to eat, sleep, have fun, and interact with others, it shows them as real people who know exactly where their professional rewards come from, and who (whatever Forbes may think) feel pressure to do the things that will earn those rewards. There are consequences to that, and no bright line separates the shades of grey.
Simonsohn stressed that there’s a world of difference between data techniques that generate false positives, and fraud, but he said some academic psychologists have, until recently, been dangerously indifferent to both. Outright fraud is probably rare. Data manipulation is undoubtedly more common—and surely extends to other subjects dependent on statistical study, including biomedicine. Worse, sloppy statistics are “like steroids in baseball”: Throughout the affected fields, researchers who are too intellectually honest to use these tricks will publish less, and may perish. Meanwhile, the less fastidious flourish.
Christopher Shea, "The Data Vigilante," The Atlantic, December 2012.