Scientific Progress vs. Power Politics: Technology, Geopolitics, and Existential Risk

Published On: November 2019 | Categories: CSPC 2019 Panels & Speakers, Editorials

Author(s):

Nathan Alexander Sears

The University of Toronto

PhD Candidate in Political Science


Humankind may soon greatly expand its power over nature through a range of potentially transformative technologies. Advances in biotechnology, like the “CRISPR revolution”, could dramatically extend human control over biological and evolutionary processes. Geoengineering methods, like “solar radiation management”, may one day make humans the masters of Earth’s climate system. And artificial intelligence, driven by gains in hardware, software, and data, might make “superintelligence” a reality.

There are, of course, still great debates among scientists over the nature and scope of the changes that such technologies will bring for humankind and the planet. It would be fortunate if those scientists are correct who argue that these new powers over nature are exaggerated, or that they will materialize only in some distant future. But what if humans were to quickly and dramatically increase their control over nature? Would such power contribute to, or threaten, the long-term prosperity and survival of human and animal life on Earth?

There is a growing interdisciplinary research agenda on “existential risk”. In Nick Bostrom’s frequently cited definition, “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.” Probably the most obvious existential risk is a “total” nuclear war between the two largest nuclear powers, the United States and Russia, with hundreds of millions of immediate deaths, followed by possibly billions more from “nuclear winter”. Another is that climate change passes certain “tipping points” in the Earth system, triggering “runaway” global warming that leads to a “hothouse Earth” climate. Most other existential risk scenarios involve some radical breakthrough in science and technology.

Yet science and technology have also been among the main drivers of the remarkable gains in health and wealth, and the declines in mortality and poverty, that human beings have experienced over the past several centuries. When it comes to catastrophic or existential risk scenarios — such as climate change or a pandemic — technology is often seen as one of humanity’s best defenses.

Technology is neither “good” nor “bad”. Rather, the Janus-faced nature of technology is best captured by the “dual-use problem”: many of the same scientific discoveries and technological innovations that can be used for the benefit of humankind can also be used for destruction. Whether technology is employed for benign or malign purposes is therefore shaped by a combination of technological and social forces.

The Cambridge scholar Martin Rees, Director of the Centre for the Study of Existential Risk, calls himself a ‘technical optimist, but political pessimist’. However, it is difficult to be optimistic about technology while remaining pessimistic about politics. Politics shapes what technology is produced, how it is applied, and towards what ends. The technical capacity to split the atom did not necessitate the creation of nuclear weapons; nuclear weapons were the technological manifestation of a political problem that had its origins in international relations.

Power politics has not disappeared. It will intervene in humanity’s technological futures — be it biotechnology, geoengineering, or artificial intelligence — much as it did in nuclear physics. An appreciation of the dynamic between power politics and technology leads to a different set of considerations about the potential impacts of scientific and technological progress on humankind. Will research and development in AI look like an “arms race”, and could the prioritization of strategic concerns about “relative gains/losses” increase the risk of misaligned “superintelligence”? How can “solar radiation management” be governed at the global level, and will the possibility of geoengineering lead to a geopolitical “struggle for power” over Earth’s climate system? Are international standards possible for “gene editing”, and, if not, how will weak global governance and powerful biotechnology shape humanity’s evolutionary future? In the words of CRISPR co-discoverer Jennifer Doudna, “What will we, a fractious species whose members can’t agree on much, choose to do with this awesome power?”
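To make the “arms race” and “relative gains” logic concrete, consider a minimal game-theoretic sketch in Python. It is a toy model, not an empirical claim: the strategy names and payoff numbers below are illustrative assumptions. Two states each choose to develop a transformative technology “cautiously” or to “race”; racing wins a relative advantage over a cautious rival, but mutual racing shortchanges safety for both.

```python
# Toy two-state technology race, illustrating the "relative gains" logic
# discussed above. All payoff numbers are hypothetical assumptions chosen
# for illustration, not empirical estimates.
from itertools import product

STRATEGIES = ("cautious", "race")

# Hypothetical payoffs to (state A, state B): racing gains a strategic edge
# over a cautious rival, but mutual racing cuts safety margins for both.
PAYOFFS = {
    ("cautious", "cautious"): (3, 3),  # shared benefit, low catastrophe risk
    ("cautious", "race"):     (0, 4),  # the racer wins a relative advantage
    ("race",     "cautious"): (4, 0),
    ("race",     "race"):     (1, 1),  # both race; safety is shortchanged
}

def best_response(opponent_move, player_index):
    """Return the strategy that maximizes this player's own payoff."""
    def payoff(move):
        profile = (move, opponent_move) if player_index == 0 else (opponent_move, move)
        return PAYOFFS[profile][player_index]
    return max(STRATEGIES, key=payoff)

# A strategy profile is a Nash equilibrium when each move is a best
# response to the other player's move.
equilibria = [
    (a, b)
    for a, b in product(STRATEGIES, repeat=2)
    if a == best_response(b, 0) and b == best_response(a, 1)
]
print(equilibria)  # [('race', 'race')]
```

The point is structural rather than numerical: under these assumed payoffs, each state’s best response to the other is to race, so the only equilibrium is mutual racing, even though both would prefer mutual caution. This is the classic prisoner’s-dilemma shape of security competition, and it is one reason strategic concerns about relative gains could crowd out investment in safety.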

In the twenty-first century, the prudent scientist must think like a policy-maker — or better yet, a diplomat! Scientists should ask themselves how progress in science and technology could influence geopolitical competition, and how geopolitics may shape the directions of scientific research and technological innovation. They should consider how new technologies can enable both benign and malign applications, and appreciate that the “end uses” of technology are not necessarily the same as the “original purposes” of their creators. They must imagine future worlds in which the rapid pace of scientific and technological change interacts with the durable political features of “national sovereignty” and “realpolitik”. In short, scientists must learn to think like a Machiavellian in order to avoid an Orwellian world.