This is a reprint of a piece I wrote for the Times Higher Education magazine last month; it ran as the cover story on 22 July.
With the benefit of hindsight, the firestorm of controversy created by "Climategate" - the illegal release of emails from the University of East Anglia's Climatic Research Unit (CRU) at the end of 2009 - had been brewing for a very long time. In the highly politicised world of climate science, the accusatory chorus of sceptical voices and the increasingly exasperated statements of defence from beleaguered climate scientists had become a deafening cacophony.
Initial media reports talked excitedly of the emails as a "smoking gun" showing climate change to be an elaborate hoax, but these were quickly exposed as completely unfounded. A House of Commons inquiry in March found no evidence of systematic deception by CRU researchers. A science panel led by Lord Oxburgh found no evidence of scientific malpractice. And finally on 7 July, after many months of gathering information, the independent inquiry led by Sir Muir Russell reported its long-awaited findings.
The inquiry examined the conduct of the scientists at the CRU and concluded that their rigour and honesty were not in doubt. Concerns were raised over the openness of CRU researchers (and university officials), and reforms of practices and procedures were identified. No evidence of subversion of the peer review or editorial process was unearthed, but the report did include a lengthy reflection on Climategate's implications for peer review by Richard Horton, editor of The Lancet. Horton argued that much of the confusion about what took place at the CRU stemmed from a misunderstanding of what the peer review process can - and cannot - do.
Drawing on Horton's analysis, the Russell report concluded: "Many who are far from the reality of the peer review process would like to believe that peer review is a firewall between truth on the one hand and error or dishonesty on the other. It is not. It is a means of sieving out evident error, currently unacceptable practices, repetition of previously published work without acknowledgement, and trivial contributions that add little to knowledge."
Reacting to the unedifying sight of science's sock drawer emptied on to the floor, however, many commentators have sought to pass judgement on peer review. The processes and practices of science are now in the dock, and non-scientists observing the private correspondence of the peers behind the peer review have found it difficult to escape the conclusion that science is not what it seems.
But for Harry Collins, distinguished research professor of sociology at the School of Social Sciences, Cardiff University, who has studied scientific practice for decades, Climategate revealed nothing he did not already know.
"The message that a lot of people seem to have taken from Climategate is far more damaging than it ought to be, because the normal to and fro of scientific practice looks like that. Most of what happened in Climategate was business as usual. People have a misconception of what science is because they are exposed to hero worship about science - stories about Newton, stories about Einstein - it's a sort of fairy tale. But it's disadvantageous to scientists to have science presented in this way, because politicians and journalists ask them for exact answers. Even if science is exact, it's exact only in the long term."
According to Collins, the romantic idealisation of science as a neat and tidy linear path towards greater knowledge is a myth. Science is often messy, sometimes sloppy, and always more complicated than it seems. Tensions can easily arise. Policymakers schooled in the canonical view of science (and battling an electoral cycle that privileges rapid responses over considered contemplation) face enormous pressures to transform the uncertainties of science into political soundbites. Confronted with a politically filtered version of science - clear, certain and precise - it is no surprise that people sense a scandal when things do not turn out to be quite as they expected.
Scientific objectivity goes no further than the circles of expertise that comprise fields of scientific endeavour. Of course, there are any number of "facts" to be objectively recorded in the natural world through experiment or observation. In climate science, the facts unambiguously point to the influence of human activity on the climate. But as the science and environmental journalist David Adam has suggested, the process by which scientists judge each other's work as fit for publication has always been where objective science dashes on the rocks of subjective human opinion. Short of automating the peer review process, the human fallibility of peer reviewers is simply unavoidable.
Arguments such as these are potent fuel for the fire of sceptical claims that climate science has become a self-regarding consensus machine, fine-tuned to keep out the outliers and reinforce the status quo. Three inquiries into Climategate have found no evidence that this is the case. But sceptics have been eager to use the emails as a vehicle for attacking climate science and climate scientists' behaviour nonetheless.
Some have even sought to broaden their criticism to science in general. A.W. Montford, author of The Hockey Stick Illusion: Climategate and the Corruption of Science, has grandly suggested that peer review achieves very little for society and is "not up to the job". The response of some high-profile environmental commentators has also been surprisingly visceral. Pre-empting all the inquiries, the campaigner and writer George Monbiot called for Phil Jones, who was then head of the CRU, to resign (a call he much later retracted).
The science journalist Fred Pearce - despite doing an enormous amount of work in challenging the initial media response to Climategate - has repeatedly criticised CRU researchers. "I think the emails raise questions about conflicts of interest apparently tolerated in science that would surely not be tolerated in most other professions," he said. In one email, Phil Jones expresses a desire to "keep out" two papers critical of his work from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). The papers were not, in fact, excluded. But according to Pearce, "Phil Jones seemed to relish the chance to 'go to town' against those questioning his work."
In fact, it is common practice for journal editors to send papers challenging a body of work to the author responsible for that work - as experts in increasingly atomised fields, they are often in the best position to review it. The process hinges on honesty: faulty methodology is a reason to reject a paper; a personal dislike of another scientist, however, is clearly out of bounds. The appropriate criteria for making peer review judgements about another's work could - in some circumstances - include inferences about the author. "If some group of activists invent a journal, peer review it themselves, and have no intention of doing the job honestly, then of course this is relevant," states Collins, "but you'd have to set out the reasons - not just make up your mind."
Robert Evans - a colleague of Collins' at Cardiff and an expert in the sociology of knowledge and expertise - points out that these criteria are not fixed and may vary from discipline to discipline. "It depends what kind of publishing culture there is - in some fields, journals publish what are really quite daft ideas, because they feel that people have the right to take these ideas down in public; whereas in other fields, a lot of that work is done in private, in the selection process."
Of course, it is precisely these private selection processes that have come under scrutiny. There's no doubt that science up close bears little resemblance to the brave and noble empiricism of Newton and Einstein. But to claim - as Pearce and others have repeatedly done - that the CRU email exchanges revealed some previously unacknowledged fault with the scientific method is hyperbole. "It might have been a good thing," suggests Evans. "Maybe all people found out was what science was actually like. It only seems as if scientists were behaving badly if you had a very idealised view of what scientists were like in the first place."
It's a strange kind of defence - innocence by appeal to mass guilt - but if Collins and Evans' analysis of what science is "really" like is accurate, then it is important to consider the implications. An uncomfortable light has been shone into the inner chambers of science's castle, and outside observers have not been impressed by what has been revealed. So is there an argument for radical reform of the institutional culture of science?
There is a movement in science towards publicly accessible data, and the archiving of databases is now common practice in many subjects. The digitisation of data and the ubiquity of the internet have ushered in a new level of expectation around public access to information - not only in science, but in global society more generally. Reflecting this, the aftermath of Climategate has seen repeated calls for climate scientists' raw data to be made available to the general public.
"I think people should be open about their data and about their methods wherever possible," says Ben Goldacre, the doctor, columnist and author of Bad Science. "If someone is making a public claim about a conclusion they have drawn from a piece of scientific research, they should be ready to be meticulous about showing their work. If someone doesn't, I find it hard to take them seriously."
The move towards open access is not only reasonable but inevitable. But for highly politicised areas of science such as climate change, there may be hidden dangers. "I'm not sure it would solve things in the way people would like," says Evans, "because the data themselves would still need to be analysed in the context of the scientific theories that give them meaning."
Collins is even more direct: "That would be a complete disaster. Analysing data and making sense of it is a very subtle business. Analysing data and getting something out of it is very easy - you can get out of it more or less what you want. There are an infinite number of ways to analyse data, and it would take an infinite amount of time to track down all the things that had gone wrong. If you allow that to happen, then you are saying goodbye to your science."
As with much of the Climategate debate, there is more at stake than climate change data. Although the perceived integrity of climate science seems to need a shot in the arm, it cannot come at the cost of a functioning scientific community. "Scientists would spend their whole lives trying to pick apart what other people had done, and the science would just grind to a halt," Collins suggests.
Throwing open complex climate science databases without due caution could amount to sacrificing climate change data on the altar of public opinion. Faced with dozens of well-publicised and smartly presented pseudo-analyses of climate change data, who other than the climate scientists themselves would be capable of sorting the wheat from the chaff?
Perhaps the answer is "citizen scientists". The notion that science should seek actively to engage with non-scientists is increasingly popular. At its best, public engagement with science can help shape the values that guide scientific enquiry, construct scientific knowledge and contribute to decisions about science funding. By determining the social and ethical implications of science, engaging with citizens can enhance the role of science in society. The movement towards more public engagement is a hugely positive development.
But are the legions of bloggers and auditors - often, but not always, ideologically motivated to find fault (real or imagined) with climate science data - really fulfilling the role of citizen scientists? Alice Bell, a lecturer in science communication at Imperial College London, has argued that successful citizen-science projects work because they offer collaborative relationships between scientists and the public - not an adversarial auditing of data on the assumption that scientists are dishonest. The bottom line is that open access can never be a panacea for a crisis of institutional trust.
Open access is based on the premise that there are those outside the inner circle of peer reviewers who are competent enough to provide a second opinion on the science. This is indisputably true. But while talk of throwing open the lab doors might be rhetorically satisfying, it would provide only an illusion of democracy. Certainly there are non-academics competent enough with statistics to find errors in a piece of published science. Correcting errors in science would be a valuable service for an auditor to offer. But if several auditors reached conflicting conclusions, then somehow a judgement would have to be made about their respective competence. And who should make that judgement? Presumably a group of suitably qualified, honest individuals with a proven track record in a relevant discipline - in other words, peer review.
Any argument for reform must contain more than just a critique of the existing system - it must also hold out the possibility of something better. Would broadening the group of people who are assigned the task of fact verification resolve the problems of peer review? Sadly, there is very little in the way of guidance for answering this question, as very few systematic studies have been conducted into the merits of peer review. Although its flaws are well documented (and have been for many years), critiques typically focus on the fact that peer review is not perfect, but struggle to identify serious alternatives.
Harry Collins' suggestions for reform include removing the anonymity of reviewers and ring-fencing a proportion of journal space for papers that generate significant controversy among reviewers (as these papers hold an interest of their own). In the Russell report, Richard Horton suggests that "the best one might hope for the future of peer review is to be able to foster an environment of continuous critique of research papers before and after publication".
There is no question that science needs to be proactive in engaging the public. There may be some role for Freedom of Information legislation to play in bringing this about. But processes of dialogue and participatory engagement seem much more promising ways for scientists and the public to interact. Citizens' juries and deliberative workshops are two tried-and-tested methods for achieving this aim.
Perhaps if any good is to come of the Climategate controversy, it will be a renewed interest in smoothing the rough edges of peer review and a greater awareness of the necessary fallibility of the scientific publishing process. However, no one seems to have any suggestions for a serious alternative. For now, like Winston Churchill's famous description of democracy, peer review is the worst option - except for all the others that have been tried.
AC