[2b2k] Pyramid-shaped publishing model results in cheating in science?
Carl Zimmer has a fascinating article in the NYTimes, which is worth 1/10th of your NYT allotment. (Thank you for ironically illustrating the problem with trying to maintain knowledge as a scarce resource, NYT!)
Carl reports on what may be a growing phenomenon (or perhaps, as the article suggests, the bugs of the old system are just now becoming more apparent): scientists fudging results in order to get published in the top journals. From my perspective, the article provides yet another illustration of how the old paper-based strictures on scientific knowledge, caused by the scarcity of publishing outlets, result not only in a reduction in the flow of knowledge but also in a degradation of its quality.
Unfortunately, the availability of online journals (many of which are peer-reviewed) may not reduce the problem much, even though they open up the ol’ knowledge nozzle to 11 on the firehose dial. As we saw when the blogosphere first emerged, networked ecosystems have something like a natural tendency to create hubs with a lot of traffic, along with a very long tail. So, even with higher-capacity hubs, there may still be pressure to fudge results in order to get noticed by those hubs, especially since tenure decisions continue to place such high value on a narrow understanding of “impact.”
But: (1) with a larger aperture, there may be less pressure; and (2) when readers are also commentators and raters, bad science may be uncovered faster and more often. Or so we can hope.
(There are the very beginnings of a Reddit discussion of Carl’s article here.)
Categories: science, too big to know