Nine months of COVID – what lessons for science advising?
Author(s):
Sir Peter Gluckman
Chair, International Network for Government Science Advice (INGSA)
President-Elect, International Science Council (ISC)
We are nine months into the coronavirus pandemic. While it is too soon for the inevitable and important retrospective governmental reviews, it is not too soon to reflect upon what we have learnt about the evidence-policy interface. There are many questions to ask: why were so many countries ill-prepared or slow to respond, despite many warnings by experts in recent years about the high likelihood of an impending severe viral pandemic? Why were such risk assessments ignored? Why were indicators such as those in the Global Health Security Index so misleading? What can this experience tell us about the gap between advice and action? There are many reasons why political and policy communities might defer action on predicted crises, be it about addressing pandemic risk, aging infrastructure or climate change, but when existential risks are at play, we need to understand the obstacles to action and how to cut through.
INGSA has been tracking the use of evidence in over 120 countries’ pandemic-related decisions made during the first few months of the evolving pandemic (see https://www.ingsa.org/covid/tracker-report-1/). COVID has created a classic case of urgent science-informed decisions having to be made in the face of uncertainty. Indeed, there remain many unknowns about the behavior of SARS-CoV-2 and the pathophysiological and immune responses it triggers. However, it is becoming clear that those countries that made rapid and uncompromising decisions to take the virus seriously and impose severe restrictions on travel and social interactions have done better in terms of health outcomes. What we now appear to be seeing is that the premature relaxation of such restrictions leads to a resurgence of community spread at well above any societally acceptable threshold.
But how has science really been used in making these decisions? In some countries there has been a plurality of disciplinary inputs from the outset. In others, the advice has tended to be more narrowly constrained. In choosing whether to foreground economic, behavioral, or sociological advice alongside public health and epidemiological advice in their responses, different governments are demonstrating their specific interpretations of the problem and its solution. What impact will such different interpretations and framings have as the global pandemic progresses into a more chronic phase?
Both the interpretive and institutional frameworks that countries hold are key in how they structure their responses. Here, the less technical (but no less crucial) skill of evidence brokerage can play an important role. In helping to frame decision-makers’ interpretation of the problem and its solution(s), the capacity to bridge the political/science divide is essential, but it has proved very difficult in some contexts, and both good communication and diplomatic skills may be necessary.
In some countries pre-existing mechanisms of science advice were used as the basis of the pandemic response. In others, no such mechanisms existed, and ad hoc mechanisms were rapidly developed. Indeed, the INGSA tracker shows that many developing countries rapidly embraced and deeply engaged their own scientific communities. It remains to be seen whether these newly developed mechanisms will trigger the development of institutionalized advice systems post-pandemic.
What has been the relative role of those scientists with formal appointments within the advisory system, whether preexisting or ad hoc, compared to those whose advice has come more informally or through the media? How have they each contributed to shaping a particular interpretation or response? In some cases, conflicting advice appeared and scientific debates were soon politicized. There has been much reliance in some countries on formal models – but to what extent have the data used in these models, and their interpretation, been transparent? What disciplinary expertise helped inform the models? What assumptions were built into the models’ algorithms, and have they been subjected to formal or informal peer review? Much scientific information regarding COVID has been published only in non-peer-reviewed preprints: expedited peer review is yet to become the norm for many journals.
Has the extent of uncertainty been adequately communicated to the public and governments? Models, numbers and graphs have become the mainstay of communication collateral, but these models, by their nature, embody enormous assumptions and uncertainties. Are there the right tools to communicate those uncertainties effectively and honestly when decisions must be made? It is not always commonly understood that models and raw data do not define reality, and that interpretations and judgements have been made at each step along the journey. For instance, the ongoing public debates in some places over the seriousness of COVID are a reflection of the failures of science communication on one hand and the politicization of information on the other.
We have seen how the pandemic itself has been politicized. The early exchanges between China and the USA over the origin of the virus, for instance, had knock-on effects that have impacted the WHO. But beyond these diplomatic dimensions, in several countries we have seen an extraordinary conflation of science, for example over mask-wearing, with partisan politics, to the point that science denial now appears to be a necessary test of political loyalty.
Misinformation and disinformation are exploiting this vulnerability, and the longer-term implications of this situation are extraordinarily worrisome for the contract between science and society. Sadly, in some places the science of human health is being pitched as the enemy of business and economic health. Of course, this interpretation is manifestly wrong, but my fear is that its echo will have a long legacy and will fuel political resistance to the use of science in addressing other areas of collective action, from the challenges of climate change and environmental degradation to those of human development.
Regulatory processes and standards could be compromised by politics in the ‘race’ to be the first country to have a vaccine. Together with the capacity of misinformation to fuel distrust in vaccines, this will no doubt also energize the anti-vax movement, delaying the global retreat from pandemic status. Science advice, including regulatory science and science communication, will face real challenges in this context.
It is too early in the pandemic to reach conclusions on these and other issues, but they do highlight the complexity of the interface between science and public policy, especially in times of crisis. If nothing else, the pandemic has clarified the interpretive and critical-appraisal skills required in 21st-century policy-making, which must weigh impacts on societal values and the trade-offs they entail. The question is whether the performance of the science community as advisors has shown the policy community the importance of a competent, trained and institutionalized ecosystem for advice – one that helps interpret and frame problems and solutions by investing in components like risk assessment, evidence synthesis and knowledge brokerage.