Year: 2011 | Volume: 24 | Issue: 3 | Page: 754
In the News! An Opinion: Scientific Conduct?
J van Dalen
Associate Editor, Education for Health
Date of Submission: 01-Dec-2011
Date of Web Publication: 07-Dec-2011
Source of Support: None, Conflict of Interest: None
How to cite this article: van Dalen J. In the News! An Opinion: Scientific Conduct?. Educ Health 2011;24:754.
Do you still have those cardboard boxes with the raw data from your studies of four years ago? Are they gathering dust on top of your cupboard? How often have you considered throwing them away?
Maybe it is wise not to do that yet. Good scientific practice dictates that we keep our raw data for a minimum of five years.
In the autumn of 2011, the scientific world was startled by the news that Diederik Stapel, a Dutch social psychologist of international renown, had committed fraud by fabricating or embellishing the data in at least 30 of his publications. His case does not stand alone: in the month after this news broke, two more cases reached the newspapers, and the investigative reports (in Dutch only) are available to all on the Internet. Stapel is not even the worst example of a famous scientist who committed fraud. In a table in the leading Dutch newspaper De Volkskrant [1], he occupies fourth place in a top ten of ‘megafrauds’ (Table 1).
Table 1: Top 10 of scientific fraud (so far?)
The attempts to interpret these cases range along a continuum between two explanations: the psychopathological actions of individuals at one end, and a system with too many pressures and temptations to break codes of conduct at the other. Many of the fraudsters seem to explain their behavior with: 'I know this [my study result] to be true, even without the study.' An example can be found in the explanation given by the Canadian Roger Poisson (eighth place in the top ten), who falsified 14 medical papers and at least 115 patient files between 1977 and 1990. He claimed that he made up the 115 patient files so that the real patients would benefit from his study.
However, there is also some truth to the explanation that points to the pressure to publish. Tenure and careers depend on the number of publications, while news media pursue scientists with demands for ‘sound bites’, or findings that translate easily into practical benefits for the public. Nuances and subtleties are not popular in newspaper articles about scientific findings. Moreover, the same can be said of our own journal, where we see a preference for papers that report positive, significant findings. Fewer papers are submitted that found no relationship between variables. Apparently there is some self-censorship: authors judge a paper ‘not worthy of publication’ when an expected or hypothesized relationship was not found.
Fortunately, not the whole of the scientific community is corrupt. But it is another shocking finding that in the three latest cases of fraudulent research in my country, it was not the colleagues of the professors involved but their supervisees who had the courage to report their suspicions of fraud.
The recommendations of the committees that investigated two of the most recent cases in The Netherlands are:
- New scientific staff members must sign a contract stating that they will respect a code of conduct.
- All studies must be open for replication, and replication must become ‘the basic instrumentarium’.
- Data on which (psychological) publications are based must remain archived for at least five years and must be made available to other scientists on demand. This applies not only to raw laboratory data but also to completed questionnaires, audio and video recordings, etc. Publications should state where the raw data are located and how they can be accessed.
It remains to be seen how realistic these recommendations are, especially the call for replication studies. We all know that it is much more difficult to publish findings of replication studies, and the number of publications is crucial for our careers and professional status.
The recommendations make no mention of the process by which the scientific community as a whole could have discovered such fraud. When a paper is submitted to a scientific journal, neither the editor nor the reviewers have any insight into how the data were gathered, or indeed whether they were gathered at all. The Methods section is all they can go by. The review process of scientific journals is usually carried out by an editor and two or three reviewers. The reviewers, usually experts in the field of study, are not compensated. Most journals have trouble recruiting and retaining enough reviewers who give constructive feedback on papers. Moreover, to be frank, reviewers often do not agree about the quality of a paper.
When this procedure is compared with current ‘Web 2.0’ practices, where knowledge is shared, amended, completed and commented upon by many active members of the community, it seems clear to me which is to be preferred. It is high time to thoroughly revise our way of thinking and our current practices in the scientific community. We deserve the suspicion of cynics, and we do not do enough to counter it.
Or will Diederik Stapel surprise us in half a year’s time and reveal that this has all been one of his brilliant social-psychological experiments? It is the optimist in me that still hopes he will, even though I am afraid I know better…
Jan van Dalen
Associate Editor Education for Health
1. Keulemans M. De serieleugenaars [The serial liars]. De Volkskrant, 5 November 2011.