Applications built on generative AI algorithms can be very useful, but they can also be very destructive if no one is in control of them. And at the moment it seems the matter has been left to enterprising innovators. Below is a good thread by Max Roser, the soul of the Our World in Data portal.
Why does powerful Artificial Intelligence pose a risk that could make all of our lives much, much worse in the coming years?
There are many good texts on this question, but they are often long.
🧵 I’m trying to summarize the fundamental problem in a brief Twitter thread.
The fundamental reason is that there is nothing more dangerous than intelligence used for destructive purposes.
Some technologies are incredibly destructive. Nuclear bombs for example.
If used, they would kill billions of us.
ourworldindata.org/nuclear-weapon…
But in the bigger picture nuclear weapons are a downstream consequence of intelligence.
No intelligence, no nuclear weapons
Of course, intelligence also makes a lot of the best things possible.
We used our intelligence to build the homes we live in, create art, and eradicate diseases.
But to see the risk of AI, we have to see that there is nothing more dangerous than intelligence used for destructive purposes.
This means the following question is extremely important for the future of all of our lives: What goals are powerful, intelligent agents pursuing?
(That has always been the case. Throughout history, the worst problem you could have was an intelligent opponent intent on harming you.
But in history these opponents were intelligent individuals or the collective intelligence of a society.)
The question today is: what can we do to avoid a situation in which a powerful artificial intelligence is used for destructive purposes?
There are fundamentally two bad situations we need to avoid:
1) The first one is obvious. Someone – perhaps an authoritarian state, perhaps a reckless individual – has control over very powerful artificial intelligence and uses the technology for bad purposes.
As soon as a malicious actor has control over powerful AI, they can use it to develop everything that this intelligence can develop — from weapons to synthetic pathogens.
And an AI system’s power to monitor huge amounts of data makes it suitable for large-scale surveillance.
2) The other situation is less obvious. That’s the so-called alignment problem of AI.
Here the concern is that *nobody* would be able to control a powerful AI system.
The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to pursue destructive goals.
The risk is that we try to instruct the AI to pursue some specific goal – *even a very worthwhile one* – and in the pursuit of that goal it ends up harming humans.
The alignment problem is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.
To summarize: I believe we are right now in a bad situation.
The problems above have been known for a very long time – for decades – but all we’ve done is speed up the development of more and more powerful AI, and we’ve done close to nothing to make sure that we stay safe.
I don’t believe we will definitely all die, but I believe there is a chance.
And I think it is a huge failure of ours today not to see this danger:
We are leaving it to a small group of entrepreneurs to decide how this technology is changing our lives.
This is despite the fact that, as @leopoldasch has pointed out, “Nobody’s on the ball on AGI alignment”.
Nobody’s on the ball on AGI alignment
Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)
https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/
On @OurWorldInData we’ve done a lot of work on artificial intelligence, because we believe the immense risks and opportunities need to be of central interest to people across our *entire society*.
Thank you for reading.
If you want to read more, I wrote this essay last year about the same topic.
Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well
How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged.
https://ourworldindata.org/ai-impact
The two situations above are those that I believe would make our lives much, much worse.
But they are of course not the only possible risks of AI. Misinformation, biases, rapid changes to the labour market, and many other consequences also require much more attention and work.
Source: Max Roser, via Twitter