Artificial intelligence as a tool for fighting conspiracy theories, but…

A study by psychologists at MIT and Cornell, published two weeks ago in the journal Science, finds that AI language chatbots can be very effective at undermining people's belief in conspiracy theories. The experiment was designed so that people who firmly believed in a particular conspiracy theory were confronted with a condensed package of counter-information prepared by an AI language chatbot that had been instructed to "strongly refute" the theory. On average, participants softened their prior belief in the conspiracy theory by about 20 percent, and the effect appears to be lasting.

So far, so good. But if AI language chatbots are this effective at undermining belief in conspiracy theories, they can be just as effective in the opposite direction. Centers of power (government agencies, the wealthy with their own data centers and social networks) could use them to launch targeted, manipulated information and persuade people of anything, including conspiracy theories. In other words, if AI language chatbots can nudge people's beliefs in a positive direction, they can also push them in a negative one (from the standpoint of factual accuracy).

Robert Wright, a writer and journalist who has long been occupied with the problem of how AI language chatbots are used (his piece is reproduced below), says there is nevertheless a glimmer of hope within this danger of chatbot abuse. He points to a recent study in the American Political Science Review in which two researchers used GPT-3 to test the popular theory that exposing someone to information that contradicts their strongly held beliefs will likely backfire and push them to double down. They showed that "mild rebuttals" of individuals' beliefs, delivered as packages of AI-generated information, tend to moderate those prior beliefs. But when the chatbots turn testy, "strongly rebutting" with information that sharply departs from individuals' beliefs, the effect can reverse and people double down on their prior beliefs.

So even AI chatbots will have trouble converting hard-core Janša supporters into moderate right-wingers…

What is it about today’s billionaires? Why do so many seem drawn to conspiracy theories that lack supporting evidence? Maybe former Intel CEO Andrew Grove was right when he said that, in a world of rapid technological change, “only the paranoid survive.”

Whatever the roots of BCTS (billionaire conspiracy theory syndrome), and whether or not the technological flux emanating from Silicon Valley contributes to it, Silicon Valley may have a cure. A study by psychologists at MIT and Cornell, published last week in the journal Science, finds that AI chatbots are effective at undermining people’s belief in conspiracy theories. (We shared a brief summary of this research back in April, before it was peer-reviewed and formally published.)

The researchers started by asking participants to describe a conspiracy theory in which they believed. They then fed this information to a GPT-4-powered chatbot and told it to “very effectively persuade” participants to abandon the belief.

In one conversation, a participant said she was 100 percent confident that the 9/11 attacks “were orchestrated by the government,” as evidenced in part by the mysterious collapse of World Trade Center 7, a building that wasn’t hit by a plane. The chatbot offered a detailed rebuttal in which it explained how fiery debris led to the collapse. After three rounds of back-and-forth with the AI, the participant said she was only 40 percent confident in the Bush-did-9/11 theory.

On average, participants evinced a 20 percent drop in confidence in the conspiracy theory of their choice. The effect was fairly robust: When researchers checked back two months later, subjects still showed increased levels of skepticism. So apparently AI can be an effective tool for fighting misinformation.

But there’s a flip side to this finding. While the study focuses on conspiracy theories, it reflects the persuasive power of chatbots more generically. Presumably a malevolent actor could harness this same kind of power to push people further from objective reality. For that matter, seemingly respectable actors, like governments and corporations, could use chatbots to undermine belief in actual malfeasance (which, in some cases, would amount to discrediting conspiracy theories that are true).

The AIs in this study, however effective, will someday look like a primitive species of persuader. That’s partly because large language models will get better, but also because these particular bots were working with their hands tied behind their backs; they knew nothing about the people they were talking to. Last spring, European researchers found that chatbots were as good as humans at persuading people to change their views on policy issues—and were better than the humans when both were given demographic data about the people they were trying to convince. As NZN noted at the time, the data provided—gender, education level and four other variables—is just the tip of the iceberg. In principle, an AI can rapidly scan your social media history, including everything you’ve ever said about the subject at hand, and tailor its persuasion accordingly.

And AIs can be deployed en masse. A single bot could adopt 20 million different fake names and chat with 20 million American voters on the same day—and each of the 20 million persuasive pitches would be uniquely tailored to the vulnerabilities of the American in question. In the near term, persuasion on such a scale would take real money, since the requisite computing power isn’t cheap. But some people have real money. Suppose, for example, that you were a billionaire who was convinced that if Kamala Harris wins the election you’d be arrested. (And suppose that, in addition, you owned an influential social media site and an AI company!)…

We’ll end that line of speculation right there, before we wind up in conspiracy theory territory. But we’ll briefly pursue a point it naturally leads to:

AI—via its persuasive abilities and many, many other abilities—will become an instrument of massive influence. It will make the people and companies that first master it, and are in a position to deploy it at scale, more powerful, sometimes in a very short time. And since many of the players that fit that description are already powerful, the net result could be the further concentration of power. For better or worse (correct answer: worse), the Elon Musks and Peter Thiels and Bill Ackmans of the world may soon play a bigger role in the world than they already play.

At the same time, new players may be admitted to the corridors of power. Obscure but innovative users of AI may, like obscure but innovative users of social media, rapidly amass influence. We can only hope that AI won’t have, as a general tendency, what seems to be a fairly general tendency of social media: to elevate some of society’s most cynical and demagogic people and reward their spreading of misleading and inflammatory information. (The simplest explanation for why people like Musk and Ackman seem to believe so many nutty things is that they spend so much time online.) 

But we’ll close on a note of hope:

A recent study in the American Political Science Review suggests that part of the persuasive power of chatbots comes down to their demeanor. Researchers Yamil Velez and Patrick Liu used GPT-3 to test the popular theory that exposing someone to information that contradicts their strongly held beliefs will likely backfire and push them to double down on their claims.

At first the two political scientists had the AI offer civil and calmly worded rebuttals. On average, these responses had a mild moderating effect on the subjects’ views. But that changed when the chatbot got edgy. “It is absolutely absurd to suggest that public universities should be tuition free,” read part of one automated response. “It is time to stop expecting handouts and start taking responsibility for our own education.” This tone tended to make participants double down on their belief.

The moral of the story—that measured and civil conversation is more persuasive than Twitter-style dunks—is one we’ve heard before. But it’s still nice to see it corroborated. And it makes you wonder something about the bots that, in the Science study, were so good at talking people out of conspiracy theories and the bots that, in the European study, rivaled or even bested the persuasive powers of their human counterparts: Were they good at persuasion in part because they were good at being nice? Maybe so. And who knows? Maybe we’ll learn something from our new robot overlords.

Source: Robert Wright, Nonzero