Artificial Intelligence: The Road to Apocalypse?

This week more than 60 US senators met behind closed doors with 22 “tech titans” (as the New York Times put it) to discuss artificial intelligence. Not present, unfortunately, was the tech titan who in the coming months may have the biggest impact on discourse about AI: Mustafa Suleyman, whose new book The Coming Wave will hit the bestseller list next week and seems destined to become the first big zeitgeist-shaping book about AI of the ChatGPT era. (Which is to say: the era when “generative” AI, and in particular “large language models,” got lots of people to take this AI thing seriously.)

A month ago I lauded a Foreign Affairs piece about AI co-authored by Suleyman (co-founder of DeepMind and now CEO of Inflection AI) and geopolitical analyst Ian Bremmer. Now that Suleyman’s book is out, I can say a bit more about why I think he’s worth listening to—why we’re lucky that the first big tech-titan-authored AI book of this era is by him (as opposed to, say, hyper-libertarian Peter Thiel or hyper-disingenuous Mark Zuckerberg or hyper-hyperbolic Elon Musk).

Disclaimer: I haven’t quite finished the book, so I’m reserving final judgment; maybe in a future NZN I’ll have some biting criticisms—and, in any event, I’ll probably have more to say about this book down the road. For now I just want to highlight some things that align it nicely with the mission of this newsletter.  

Back in January of 2021, when I launched the paid version of NZN, I defined its mission as the “Apocalypse Aversion Project.” The idea was that there were various looming, technologically abetted perils that humankind needed to confront collectively if it wanted to avoid large-scale bad outcomes. To get a sense of those perils, see the illustration above, which accompanied the announcement of the paid version.

If I were revising that image in the wake of ChatGPT, I’d move AI from “near term” dangers to “present” dangers and maybe elaborate on its dangers. But the basic idea behind the image makes at least as much sense now as it did then: Various technological trajectories, for all their upside potential, have massive and often underappreciated downside potential.

One thing about Suleyman’s book that doesn’t come through in many of his media interviews about it is that it isn’t just about the challenge of handling AI wisely. He highlights other hard-to-handle technologies, in particular “synthetic biology” (gene splicing, etc.), which gets equal billing with AI. And he recognizes that these two technologies could lead to very bad outcomes—like, as he indelicately puts it, “catastrophe.”

I think AI and biotech do belong at the top of the apocalyptic perils list. Not because they necessarily have the greatest destructive potential (13,000 nuclear warheads could ruin your day!) but because their proliferation is so hard to control and because they can be put to massively destructive use by small groups, even by lone actors. (As it happens, my two most recent Washington Post opinion pieces have been about, respectively, the challenge of getting biotech under control and the challenge of getting AI under control.)

And it gets worse! As Suleyman emphasizes, the AI-biotech threat is greater than the sum of its parts. AI will exacerbate the biotech peril both by accelerating progress in the field broadly and by giving miscreants new tools for manipulating organic materials. He writes, “Just as today’s [AI] models produce detailed images based on a few words, so in decades to come similar models will produce a novel compound or indeed an entire organism with just a few natural language prompts.”

The good news is that he said “decades,” not “years.” The bad news is that the path to that point is incremental; in the coming years, AI will facilitate freelance bio-engineering in ways that, if less exotic than the case he posits, are scary. According to the Washington Post, one of the 22 tech titans at that Senate meeting, Tristan Harris of the Center for Humane Technology, “told the room that with $800 and a few hours of work, his team was able to strip Meta’s safety controls off [its open-source large language model] LLaMA 2 and that the AI responded to prompts with instructions to develop a biological weapon.”

Meta chief Mark Zuckerberg reportedly replied that those instructions are available on the Internet—so this was just an example of AI doing sophisticated search. Fair enough. But this is only the beginning. Suleyman was at DeepMind when it developed AlphaFold, which turns the legendarily nettlesome “protein folding” problem into something closer to child’s play. When he warns about the coming scientific power of AI, he speaks from experience.

Suleyman doesn’t pay much attention to the sci-fi AI doom scenarios that have gotten so much air time—the ones where an AI superintelligence decides to squash humankind en route to world dominance. Like me, he’s agnostic about those scenarios and, like me, he thinks that, in any event, we need to focus on more immediate and concrete dangers.

Unsettlingly, he manages to make those dangers sound about as terrifying as a humankind-squashing superintelligence. The reason is that he does such an effective job highlighting the obstacles to controlling these technologies: the commercial incentives behind promulgating them, the political obstacles to wisely regulating them, the fact that many nations are having trouble with effective governance in the first place, and so on. Of the prospects for “containing” these technologies, he writes: “Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible.”

I might rephrase that as follows: Containment is, on the face of it, possible in principle, but only possible in practice if massively more attention and commitment are focused on the problem. That’s one premise behind the Apocalypse Aversion Project, a project that continues to be the animating spirit of this newsletter (in ways that, perhaps, I should sometimes spell out more clearly). And this book will help generate some of that attention and commitment, thus advancing the project.

To be continued… 

Source: Robert Wright, Nonzero