“Did you know that koalas get high on eucalyptus leaves?” This question was posed to me at a flat party last night, after I was introduced to an (admittedly drunk) guy as a zoologist. Even when I explained that eucalyptus leaves are full of almost indigestible toxins, which force koalas to spend the rest of their time (about 20 hours a day) sleeping while their highly adapted digestive system breaks the leaves down, the guy stuck to his reasoning, which was based on some news article on the phenomenon.
As our generation becomes more and more tech-savvy, editing Wikipedia and creating our own websites, it is increasingly easy for this kind of myth to propagate over the internet. Science articles should reference the relevant published journal articles, both to support the claims they make and to credit the scientists’ work. But when even some major media outlets, such as the Daily Mail, do not reference their sources, neither do their smaller counterparts, and fake facts begin to spread.
Fair enough. Sometimes it is just comical to watch the naive ones fall into these traps and believe the ‘facts’ that aren’t supported by feasibility, research or even common sense. But a bigger problem is arising in the scientific community.
Despite the barrage of criticism every piece of research has to bear before reaching publication, cases of scientific misconduct are rising within the research community. These issues are to be discussed at the 3rd World Conference on Research Integrity in May; the conference will focus on the problem of trust in science in the current political and media climate, responses to misconduct, and correction of the scientific record. Journals such as Nature have suggested solutions including improved training and tutoring, agreeing to publish negative results, pre-registering studies, a greater focus on research ethics, and stiffer penalties.
So what kind of attacks is science battling from the inside?
The biggest problems are fabrication and falsification of data. To receive grants and recognition, scientists need to get their work published in journals, so their hypotheses have to be proved right. This puts huge pressure on scientists to make up or tweak their data so they can ensure further funding for their work. The worst case of this to date was that of Diederik Stapel, former psychology professor at Tilburg University in the Netherlands and an extremely prolific researcher. His eye-catching studies (including work on stereotypes, discrimination and the effectiveness of advertising) turned out to be too good to be true: three junior researchers reported scientific misconduct, and it was discovered that over 30 of his publications contained fabricated data.
This well-publicised exposé was a worst-case scenario, but many smaller occurrences of fabrication and falsification slip through the net, as anyone who notes the number of contradictions in scientific papers can see. In Stapel’s case, three factors worked in his favour: he collected data alone; researchers who replicated his work lacked the detailed information needed to explain the inconsistencies; and Stapel never gave method reports thorough enough for the falsehoods and improbabilities to become apparent.
This ‘cherry-picking’ method of reporting science is a rapidly expanding problem in wider science reporting and journalism. Many of Edinburgh’s science undergraduates will have had the lecture by Professor Richard Milne covering the ways in which pseudoscientists and politicians manipulate scientific results to suit their purposes (chiefly on the global warming issue, but that’s an argument for another time). By selecting only certain parts of the data, or even leaving out key parts of the explanation, people can be persuaded of correlations that do not exist and explanations that are completely erroneous.
But even that assumes the underlying research is correct in the first place. When the foundational research that science offers up is wrong due to falsification and fabrication, what happens when that work is interpreted by politicians, policy makers and journalists?
Originally published in the Student, March 2013.