Post by Bozur on Jan 18, 2006 1:38:23 GMT -5
Idea Lab
Trial and Error
(The scientific-publishing system does little to prevent scientific fraud. Is there a better way?)
By DAVID DOBBS
Published: January 15, 2006
Many of us consider science the most reliable, accountable way of explaining how the world works. We trust it. Should we? John Ioannidis, an epidemiologist, recently concluded that most articles published by biomedical journals are flat-out wrong. The sources of error, he found, are numerous: the small size of many studies, for instance, often leads to mistakes, as does the fact that emerging disciplines, which lately abound, may employ standards and methods that are still evolving. Finally, there is bias, which Ioannidis says he believes to be ubiquitous. Bias can take the form of a broadly held but dubious assumption, a partisan position in a longstanding debate (e.g., whether depression is mostly biological or environmental) or (especially slippery) a belief in a hypothesis that can blind a scientist to evidence contradicting it. These factors, Ioannidis argues, weigh especially heavily these days and together make it less than likely that any given published finding is true.
[Image credit: Catherine Wagner/Stephen Wirtz Gallery]
Ioannidis's argument induces skepticism about science ... and a certain awe. Even getting half its findings wrong, science in the long run gets most things right - or, as Paul Grobstein, a biologist, puts it, "progressively less wrong." Falsities pose no great problem. Science will out them and move on.
Yet not all falsities are equal. This shows plainly in the current outrage over the revelation that the South Korean researcher Hwang Woo Suk faked the existence of the stem-cell colonies he claimed to have cloned. When Hwang published his results last June in Science, they promised to open the way to revolutionary therapies - and perhaps fetch Hwang a Nobel Prize. The news that he had cooked the whole thing dismayed scientists everywhere and refueled an angst-filled debate: how can the scientific community prevent fraud and serious error from entering journals and thereby becoming part of the scientific record?
Journal editors say they can't prevent fraud. In an absolute sense, they're right. But they could make fraud harder to commit. Some critics, including some journal editors, argue that it would help to open up the typically closed peer-review system, in which anonymous scientists review a submitted paper and suggest revisions. Developed after World War II, closed peer review was meant to ensure candid evaluations and elevate merit over personal connections. But its anonymity allows reviewers to do sloppy work, steal ideas or delay competitors' publication by asking for elaborate revisions (it happens) without fearing exposure. And it catches error and fraud no better than good editors do. "The evidence against peer review keeps getting stronger," says Richard Smith, former editor of the British Medical Journal, "while the evidence on the upside is weak." Yet peer review has become a sacred cow, largely because passing peer review confers great prestige - and often tenure.
Lately a couple of alternatives have emerged. In open peer review, reviewers are known and thus accountable to both author and public; the journal might also publish the reviewers' critiques as well as reader comments. A more radical alternative amounts to open-source reviewing. Here the journal posts a submitted paper online and allows not just assigned reviewers but anyone to critique it. After a few weeks, the author revises, the editors accept or reject and the journal posts all, including the editors' rationale.
Some worry that such changes will invite a cacophony of contentious discussion. Yet the few journals using these methods find them an orderly way to produce good papers. The prestigious British Medical Journal switched to nonanonymous reviewing in 1999 and publishes reader responses at each paper's end. "We do get a few bores" among the reader responses, says Tony Delamothe, the deputy editor, but no chaos, and the journal, he says, is richer for the exchange: "Dialogue is much better than monologue." Atmospheric Chemistry and Physics goes a step further, using an open-source model in which any scientist who registers at the Web site can critique the submitted paper. The papers' review-and-response sections make fascinating reading - science being made - and the papers more informative.
The public, meanwhile, has its own, even more radical open-source review experiment under way at the online encyclopedia Wikipedia, where anyone can edit any entry. Wikipedia has lately suffered some embarrassing errors and a taste of fraud. But last month Nature found Wikipedia's science entries to be almost as accurate as the Encyclopaedia Britannica's.
Open, collaborative review may seem a scary departure. But scientists might find it salutary. It stands to maintain rigor, turn review processes into productive forums and make publication less a proprietary claim to knowledge than the spark of a fruitful exchange. And if collaborative review can't prevent fraud, it seems certain to discourage it, since shady scientists would have to tell their stretchers in public. Hwang's fabrications, as it happens, were first uncovered in Web exchanges among scientists who found his data suspicious. Might that have happened faster if such examination were built into the publishing process? "Never underestimate competitors," Delamothe says, for they are motivated. Science the journal - and science the enterprise - might have dodged quite a headache by opening Hwang's work to wider prepublication scrutiny.
In any case, collaborative review, by forcing scientists to read their reviews every time they publish, would surely encourage humility - a tonic, you have to suspect, for a venture that gets things right only half the time.
David Dobbs is the author of "Reef Madness: Charles Darwin, Alexander Agassiz and the Meaning of Coral."