Slowing down Pandora’s box

Published On: August 2023 | Categories: 2023 Editorial Series, Editorials, Theme 3: ChatGPT

Author(s):

George Wang

AI Research Engineer / Freelance

Disclaimer: The French version of this editorial has been auto-translated and has not been approved by the author.

This thought keeps coming back to me lately: everything is about to get really weird. For the past two years, I’ve had a front-row seat to the primordial soup of Artificial Intelligence (AI) research and startups. I’ve built a couple of products based on language models, consulted on generative music tools, and dabbled across the breadth of generative AI: image, audio transcription, voice, and more. A massive wave of commercial products is coming, and ChatGPT is just the tip of the iceberg.

AI has been fairly consistent in progressing exponentially. The thing about exponential growth is that for a long time, it looks like nothing is happening, then it looks like something is finally happening, and then while you’re still processing the something that’s happening, the growth explodes and… uh oh, suddenly everything is weird now. We’re still in the “huh, it looks like something is finally happening” stage with AI.
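To make the shape of that curve concrete, here’s a toy sketch in Python. The numbers are invented and the “capability index” is not a real metric; the only point is how a doubling process looks like rounding error for most of its run and then blows past any threshold you care about in a handful of further doublings.

```python
# Toy illustration of exponential growth. The "capability index" and the
# threshold are made-up numbers; only the shape of the curve matters.
capability = 1.0
weird_threshold = 1_000_000  # arbitrary "everything is weird now" level

for period in range(1, 21):
    capability *= 2  # doubles every period
    progress = capability / weird_threshold
    print(f"period {period:2d}: {progress:9.4%} of the way to 'weird'")

# Output stays below 1% for the first 13 periods ("nothing is happening"),
# then crosses 100% by period 20 ("uh oh, everything is weird now").
```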

A certain recent pandemic demonstrated that we’re not so good at dealing with the next phase of exponential growth. But Covid didn’t care what was or wasn’t intuitive to us; the world just changed overnight anyway. We scrambled to react and flatten the curve, but not before it was already a foregone conclusion that there would be a huge curve. When it comes to AI, it’s fortunately not all downsides like it is with a global pandemic. AI has incredible potential to bring prosperity – some have even compared it to the discovery of fire or electricity. But there’s a darker side, and that side grows exponentially too.

While writing this, I hopped on X (formerly known as Twitter, RIP) and saw a post on practical acoustic side-channel attacks on keyboards. The model in that paper can correctly identify 93% of your keystrokes from the audio that a Zoom call picks up. I see news like this more often than is comfortable, and then I predictably think to myself that everything is about to get really weird. You have to really stand out these days to be noteworthy – a few months ago, a research team managed to read the human mind(!) with >90% accuracy by training a model on fMRI scans.

The optimists among us might hope that AI technology won’t be abused, but that hope is already in vain. FraudGPT and WormGPT are two cousins of ChatGPT, repurposed for enabling cyberattacks and scams. ChaosGPT was a (fortunately ineffective) bot whose task was literally to destroy humanity. What happens as we progress along the exponential growth curve and the people building these harmful AIs get access to shiny new toys? Dario Amodei, CEO of Anthropic, testified to the US Congress that a straightforward extrapolation of today’s systems suggests that large-scale biological attacks could be possible in only two or three years. He went on to say that if we don’t have mechanisms in place to restrain AI systems by 2025-26, “we’re gonna have a really bad time.”

Two to three years isn’t a lot of time, and we don’t even know what the right restraining mechanisms are yet! It would be great to have a bit more time to figure things out before the exponential progress curve blows past us. Fortunately, unlike with the pandemic, we can actually make that happen for AI. The Future of Life Institute wrote an open letter, signed by many of the biggest names in AI research and technology and by over 33,000 people in total, calling for a six-month pause on training AI systems more powerful than GPT-4. It’s not clear that this specific plan would be the most effective one, but it seems directionally correct. Policymakers might consider prophylactic legislation that would allow such a moratorium to come into effect if certain conditions are met. Compute governance is a similar slowing mechanism, where large training runs of AI models are limited in the amount of compute power they’re allowed to use. The threshold can be adjusted over time to give some control over the pace of development without it being all or nothing. These aren’t long-term solutions, but they would buy time to figure out more permanent measures.
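For a sense of what a compute threshold could look like mechanically, here’s a minimal sketch. It is purely illustrative: the 6 × parameters × tokens rule of thumb is a common back-of-the-envelope estimate of training compute, and the 1e25 FLOP threshold is a number I picked for the example, not anything written into law.

```python
# Minimal sketch of a compute-governance check. Assumptions: training compute
# is estimated with the common ~6 * parameters * tokens rule of thumb, and the
# 1e25 FLOP threshold is hypothetical -- the kind of dial regulators could
# tighten or loosen over time.

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Back-of-the-envelope training-compute estimate for a dense model."""
    return 6.0 * n_parameters * n_tokens

def exceeds_threshold(n_parameters: float, n_tokens: float,
                      threshold_flops: float = 1e25) -> bool:
    """Would this training run trip the (hypothetical) governance threshold?"""
    return estimated_training_flops(n_parameters, n_tokens) >= threshold_flops

# Example: a 70-billion-parameter model trained on 2 trillion tokens
# uses roughly 8.4e23 FLOP -- well under a 1e25 threshold.
print(exceeds_threshold(70e9, 2e12))  # False
```

The specific numbers aren’t the point; the point is that the dial exists at all, and turning it up or down turns the pace of development up or down with it.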

Besides short timelines to serious risks, another core challenge with making AI safe is the asymmetry between offense and defense. There are simply too many angles of attack to be able to cover all vulnerabilities – but policy can reshape the AI landscape into something more defensible. When advanced AI is too openly accessible, any individual can choose to take one of those many angles of attack, so an obvious solution might be to prevent advanced AI from being too openly accessible.

We can accomplish that via regulatory action, but this is a pretty controversial topic in the tech world; open-sourcing (making publicly available) your research and your code is usually synonymous with nice things such as transparency, collaboration, and even security, since more eyes on the code means vulnerabilities get found and fixed faster. Furthermore, a common counterargument to closing off access to AI technology is that it centralizes too much power in the hands of the few major AI labs, and that we can only have a fair balance of power if the technology is available to everyone.

But the world is not always a safer place when people have more access to technology. The US has a lot of guns and – surprise – a lot of mass shootings. If everyone had a nuke in their pocket, it would only be a matter of time before most major cities disappeared in a mushroom cloud. All it takes is one person doing the wrong thing. And if everyone has a large-scale-biological-attack-capable AI on their home computer, Dario Amodei put it well: we’re gonna have a really bad time.

It doesn’t seem likely that we can entirely close Pandora’s box on AI, but we can make it manageable. Everything is about to get really weird, but we still have the power to decide if that’ll be a good weird or a bad one.