The use of artificial intelligence will further increase the amount of bullshit

A brilliant column by Tim Harford, grounded in recent and rather bitter experiences with GPT technology. Most scientists who have tried AI-based software were deeply disappointed to discover that it could not help them even with the simplest part of writing a scientific paper: preparing a review of the existing literature in the relevant field (my own experience has been similar). ChatGPT and similar applications tend to invent sources, citing references to academic articles that do not exist at all. The problem is that GPT technology is not based on what has actually been written (as, say, the Google Scholar search engine is), but on what seems plausible: what might credibly have been written. In this respect, the experience with GPT resembles "fake news". There, too, we find a collection of real people, events and places, mixed into an entirely invented combination (a story). It is a bullshit story.

The key paradoxical insight is this: while liars have a serious relationship with the truth (they try to subvert it), bullshitters are ambivalent or indifferent to it. They do not care whether something is true or not; all they want is the attention their bullshit generates. With GPT software, even that is missing: its products do not seek attention, they merely produce bullshit by recombining information.

Much has changed since 1986, when the Princeton philosopher Harry Frankfurt published an essay in an obscure journal, Raritan, titled “On Bullshit”. Yet the essay, later republished as a slim bestseller, remains unnervingly relevant. Frankfurt’s brilliant insight was that bullshit lies outside the realm of truth and lies. A liar cares about the truth and wishes to obscure it. A bullshitter is indifferent to whether his statements are true: “He just picks them out, or makes them up, to suit his purpose.”

Typically for a 20th-century writer, Frankfurt described the bullshitter as "he" rather than "she" or "they". But now it's 2023, and we may have to refer to the bullshitter as "it", because a new generation of chatbots is poised to generate bullshit on an undreamt-of scale.

Consider what happened when David Smerdon, an economist at the University of Queensland, asked the leading chatbot ChatGPT: “What is the most cited economics paper of all time?” ChatGPT said that it was “A Theory of Economic History” by Douglass North and Robert Thomas, published in the Journal of Economic History in 1969 and cited more than 30,000 times since. It added that the article is “considered a classic in the field of economic history”. A good answer, in some ways. In other ways, not a good answer, because the paper does not exist.

Why did ChatGPT invent this article? Smerdon speculates as follows: the most cited economics papers often have “theory” and “economic” in them; if an article starts “a theory of economic . . . ” then “ . . . history” is a likely continuation. Douglass North, Nobel laureate, is a heavily cited economic historian, and he wrote a book with Robert Thomas. In other words, the citation is magnificently plausible. What ChatGPT deals in is not truth; it is plausibility.

And how could it be otherwise? ChatGPT doesn’t have a model of the world. Instead, it has a model of the kinds of things that people tend to write. This explains why it sounds so astonishingly believable. It also explains why the chatbot can find it challenging to deliver true answers to some fairly straightforward questions.

It’s not just ChatGPT. Meta’s shortlived “Galactica” bot was infamous for inventing citations. And it’s not just economics papers. I recently heard from the author Julie Lythcott-Haims, newly elected to Palo Alto’s city council. ChatGPT wrote a story about her victory. “It got so much right and was well written,” she told me. But Lythcott-Haims is black, and ChatGPT gushed about how she was the first black woman to be elected to the city council. Perfectly plausible, completely untrue.

Gary Marcus, author of Rebooting AI, explained on Ezra Klein's podcast: "Everything it produces sounds plausible because it's all derived from things that humans have said. But it doesn't always know the connections between the things that it's putting together." Which prompted Klein's question: "What does it mean to drive the cost of bullshit to zero?"

Experts disagree over how serious the confabulation problem is. ChatGPT has made remarkable progress in a very short space of time. Perhaps the next generation, in a year or two, will not suffer from the problem. Marcus thinks otherwise. He argues that the pseudo-facts won’t go away without a fundamental rethink of the way these artificial intelligence systems are built.

I’m not qualified to speculate on that question, but one thing is clear enough: there is plenty of demand for bullshit in the world and, if it’s cheap enough, it will be supplied in enormous quantities. Think about how assiduously we now need to defend ourselves against spam, noise and empty virality. And think about how much harder it will be when the online world is filled with interesting text that nobody ever wrote, or fascinating photographs of people and places that do not exist.

Consider the famous “fake news” problem, which originally referred to a group of Macedonian teenagers who made up sensational stories for the clicks and thus the advertising revenue. Deception was not their goal; their goal was attention. The Macedonian teens and ChatGPT demonstrate the same point. It’s a lot easier to generate interesting stories if you’re unconstrained by respect for the truth.

I wrote about the bullshit problem in early 2016, before the Brexit referendum and the election of Donald Trump. It was bad then; it’s worse now. After Trump was challenged on Fox News about retweeting some false claim, he replied, “Hey, Bill, Bill, am I gonna check every statistic?” ChatGPT might say the same.

If you care about being right, then yes, you should check. But if you care about being noticed or being admired or being believed, then truth is incidental. ChatGPT says a lot of true things, but it says them only as a byproduct of learning to seem believable.

Chatbots have made huge leaps forward in the past couple of years, but even the crude chatbots of the 20th century were perfectly capable of absorbing human attention. MGonz passed the Turing test in 1989 by firing a stream of insults at an unwitting human, who fired a stream of insults back. ELIZA, the most famous early chatbot, would fascinate humans by appearing to listen to their troubles. “Tell me more,” it would say. “Why do you feel that way?”

These simple chatbots did enough to drag the humans down to their conversational level. That should be a warning not to let the chatbots choose the rules of engagement.

Harry Frankfurt cautioned that the bullshitter does not oppose the truth, but “pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.” Be warned: when it comes to bullshit, quantity has a quality of its own.

Source: Tim Harford

One response

  1. Just the thing for politicians … A good PR man will seize the business opportunity.

    In one TV series there were two rugby players quarrelling on Twitter. In the end they discovered they had hired the same ghost-tweeter. The series is fictional, though perhaps something like it has really happened somewhere.

