Artificial intelligence is like teenage sex

It is hard not to love this label from Dan Ariely, a professor at Duke University, who described the adoption of artificial intelligence in business as "teenage sex":

Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.

My personal skepticism about artificial intelligence is not so much about robots replacing us in our jobs, or about algorithms brewing our morning coffee, tidying our apartments and beating us at board games (all three of which are fairly simple and feasible, since they are routine tasks with a limited number of combinations), but about algorithms replacing us in creative thinking and innovation. About them being creative in our place. About them being the creators in our place.

Algorithms can, after all, learn to imitate us. The more information and data they have, the better they can learn to imitate us and make the same decision we would make in the same situation. But in a different context, all that imitation is mere rote learning. They cannot think in our place, or decide in our place in complex situations where there are no clear rules of the game and where emotional dimensions matter. But more some other time about emotions, which make us humans (our human brains) not only more creative but, paradoxically, also more rational.

When you look at well known applications of AI like Google’s AlphaGo Zero, you get the impression it’s like magic: AI learned the world’s most difficult board game in just three days and beat champions. Meanwhile, Nvidia’s AI can generate photorealistic images of people who look like celebrities just by looking at pictures of real ones.

AlphaGo and Nvidia used a technology called generative adversarial networks, which pits two AI systems against each other to allow them to learn from each other. The trick was that before the networks battled each other, they received a lot of coaching. And, more importantly, their problems and outcomes were well defined.

Most business problems can’t be turned into a game, however; you have more than two players and no clear rules. The outcomes of business decisions are rarely a clear win or loss, and there are far too many variables. So it’s a lot more difficult for businesses to implement AI than it seems.

Today’s AI systems do their best to emulate the functioning of the human brain’s neural networks, but they do this in a very limited way.  They use a technique called deep learning, which adjusts the relationships of computer instructions designed to behave like neurons. To put it simply, you tell an AI exactly what you want it to learn and provide it with clearly labelled examples, and it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on data, so the more examples you give it, the more useful it becomes.
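The labelled-examples idea in the paragraph above can be sketched in a few lines. This is a deliberately minimal illustration, not how production deep-learning systems are built: a 1-nearest-neighbour "learner" that literally stores labelled patterns and matches new inputs against the closest stored one. All names and data are invented.

```python
# Minimal sketch of supervised learning as "pattern storage": labelled
# examples go in, prediction is a lookup of the closest stored pattern.
# The classifier, feature values and labels are invented for illustration.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class NearestExample:
    """A 1-nearest-neighbour 'learner': it memorizes labelled examples."""
    def __init__(self):
        self.examples = []          # list of (features, label) pairs

    def fit(self, features, labels):
        self.examples.extend(zip(features, labels))   # more data = more patterns

    def predict(self, x):
        # Find the stored example most similar to the new input.
        _, label = min(self.examples, key=lambda ex: euclidean(ex[0], x))
        return label

# Clearly labelled examples: (height_cm, weight_kg) -> species label.
model = NearestExample()
model.fit([(30, 4), (32, 5), (60, 25), (65, 30)],
          ["cat", "cat", "dog", "dog"])

print(model.predict((31, 4)))   # closest stored patterns are "cat" -> "cat"
```

As the paragraph notes, the accuracy of such a system depends entirely on the examples it is given: it never "understands" anything, it only measures similarity to what it has already seen.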

Herein lies a problem: An AI is only as good as the data it receives. And it is able to interpret that data only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation. AI is more like an Excel spreadsheet on steroids than a thinker.

Source: Vivek Wadhwa, VentureBeat

7 responses

  1. This is a rather naive view of AI. Take a look at this reasoning on Twitter:

    • You have missed the point of my comment. It is not that modern algorithms (AI) are incapable of exploiting and manipulating the neural networks of the human brain, but that this is still only learning and imitation of existing patterns. Algorithms cannot be the creatives and innovators in our place, which is the key factor separating humans from animals.

      • We are talking about different things. That machines, automatons, robots and algorithms help us in our everyday work and life goes without saying. The question is the fear some people have: can they replace us?

  2. Here is the whole Twitter thread, and a good one. Incidentally, F. Chollet is a qualified AI expert who works for Google, where they collect incomparably more data than Facebook does…

    The world is being shaped in large part by two long-time trends: first, our lives are increasingly dematerialized, consisting of consuming and generating information online, both at work and at home. Second, AI is getting ever smarter.

    These two trends overlap at the level of the algorithms that shape our digital content consumption. Opaque social media algorithms get to decide, to an ever-increasing extent, which articles we read, who we keep in touch with, whose opinions we read, whose feedback we get

    Integrated over many years of exposure, the algorithmic curation of the information we consume gives the systems in charge considerable power over our lives, over who we become. By moving our lives to the digital realm, we become vulnerable to that which rules it — AI algorithms

    If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your political beliefs and your worldview

    This is not quite news, as Facebook has been known since at least 2013 to run a series of experiments in which they were able to successfully control the moods and decisions of unwitting users by tuning their newsfeeds’ contents, as well as predict users’ future decisions

    In short, Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior. An RL loop.

    A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see

    A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the phenomenon at hand. In this case, us

    This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. While thinking about these issues, I have compiled a short list of psychological attack patterns that would be devastatingly effective

    Some of them have been used for a long time in advertising (e.g. positive/negative social reinforcement), but in a very weak, un-targeted form. From an information security perspective, you would call these “vulnerabilities”: known exploits that can be used to take over a system.

    In the case of the human mind, these vulnerabilities never get patched, they are just the way we work. They’re in our DNA. They’re our psychology. On a personal level, we have no practical way to defend ourselves against them.

    The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.

    Importantly, mass population control — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat.

    So, if mass population control is already possible today — in theory — why hasn’t the world ended yet? In short, I think it’s because we’re really bad at AI. But that may be about to change. You see, our technical capabilities are the bottleneck here.

    Until 2015, all ad targeting algorithms across the industry were running on mere logistic regression. In fact, that’s still true to a large extent today — only the biggest players have switched to more advanced models.

    It is the reason why so many of the ads you see online seem desperately irrelevant. They aren’t that sophisticated. Likewise, the social media bots used by hostile state actors to sway public opinion have little to no AI in them. They’re all extremely primitive. For now.

    AI has been making fast progress in recent years, and that progress is only beginning to get deployed in targeting algorithms and social media bots. Deep learning has only started to make its way into newsfeeds and ad networks around 2016. Facebook has invested massively in it

    Who knows what will be next. It is quite striking that Facebook has been investing enormous amounts in AI research and development, with the explicit goal of becoming a leader in the field. What does that tell you? What do you use AI/RL for when your product is a newsfeed?

    We’re looking at a powerful entity that builds fine-grained psychological profiles of over two billion humans, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it really scares me
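The feedback loop described in the thread above can be sketched as a bandit-style optimizer. The following is a toy simulation, not Facebook's actual system: the "user", the content items and the response probabilities are all invented, and the epsilon-greedy bandit is a much simpler cousin of the reinforcement learning Chollet refers to, but the loop has the same shape: observe a reaction, update an estimate, steer future choices toward the wanted behaviour.

```python
# Toy version of the optimization loop from the thread, reduced to an
# epsilon-greedy multi-armed bandit: pick which content to show, observe
# a reward (did the user react as wanted?), and shift future choices
# toward whatever works. All names and probabilities are invented.
import random

random.seed(0)

CONTENT = ["neutral_news", "outrage_bait", "feel_good_story"]
# Hidden simulated user: probability that each item produces the
# behaviour the operator wants to see. The loop does not know these.
TRUE_EFFECT = {"neutral_news": 0.2, "outrage_bait": 0.8, "feel_good_story": 0.4}

counts = {c: 0 for c in CONTENT}
values = {c: 0.0 for c in CONTENT}   # running estimate of each item's effect

def choose(epsilon=0.1):
    if random.random() < epsilon:             # explore occasionally
        return random.choice(CONTENT)
    return max(CONTENT, key=values.get)       # otherwise exploit the best guess

for step in range(5000):
    item = choose()
    reward = 1 if random.random() < TRUE_EFFECT[item] else 0   # observe the user
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]     # update estimate

# After enough iterations the loop settles on whichever item best
# produces the wanted behaviour.
print(max(CONTENT, key=values.get))
```

The point of the sketch is how little machinery the loop needs: no model of the user's mind, only a measurable reaction and the freedom to keep adjusting what is shown.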

  3. I agree with the previous commenter that this is a rather naive view of artificial intelligence.

    Back in the 1990s, evolutionary algorithms were already inventing new antenna designs; more recently, people have been entertaining themselves by generating text in the style of famous people using recurrent neural networks, by composing music, and by creating visual art. And then, of course, there is AlphaGo Zero, which invented new strategies for playing Go far above the human level, and did so not by imitating human play but by playing a huge number of games against itself, with the rules of Go as its only input. Not, that is, from "clearly labelled examples" that it then analyzed, as the quote claims: AGZ belongs to the paradigm of reinforcement learning, which is much more general than the supervised learning the quoted passage hints at. Slightly ironic, I must say. 🙂
    (Incidentally, there is also AlphaZero, which, in addition to Go, managed to learn chess and shogi, Japanese chess.)

    As for the "inability to make complex decisions":
    the game of Go has around 10^172 possible positions, with an average of about 250 possible moves at each turn, so its combinatorial complexity is enormous. That is why humans usually play it with a mix of analytical-combinatorial thinking and intuition and pattern recognition… and precisely because of the latter, artificial intelligence was not expected to beat us at this game for a very long time. It turned out otherwise. Evidently it copes with extremely complex decision situations much better than we do.

    Explaining the emotional dimension and why it poses no obstacle to artificial intelligence would require at least a longer essay, so for the moment I will limit myself to a single general observation:
    when someone points to some property of the brain (e.g. emotions) that is supposedly impossible to recreate algorithmically, there is usually a hidden belief lurking in the background that our mind runs on magic.
    Because if it is all just ordinary, boring physics, which is of course computable, then it is only a matter of time before we manage to reverse-engineer it and our algorithms contain that "magic" too.

    If we add to this an understanding of what emotions actually are (a starting point, for example, is here: https://www.cep.ucsb.edu/emotion.html) and of how enormously complex the visual processing performed by our brains is, and how much of it we have already managed to translate into algorithmic form… it becomes reasonably clear that emotions (in the narrow human sense) are a side issue, while in a broader sense they will of course appear in future somewhat-more-generally-intelligent programs.

    That is just off the top of my head. 🙂

    Links:
    https://en.wikipedia.org/wiki/Evolved_antenna

    https://github.com/ArmenAg/Improvisation
    https://en.wikipedia.org/wiki/AlphaZero
    https://www.theguardian.com/artanddesign/2016/mar/28/google-deep-dream-art
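
    The combinatorial claim in the comment above is easy to check with quick arithmetic. The game length of 150 moves is an assumed, illustrative figure; the position count is the commenter's own.

```python
# Back-of-the-envelope arithmetic behind the commenter's numbers: with
# roughly 250 legal moves per turn and games of about 150 moves (an
# assumed typical length), the naive game tree dwarfs any brute force.
from math import log10

positions = 10 ** 172          # commenter's figure for possible Go positions
branching = 250                # average legal moves per turn
game_length = 150              # assumed game length, for illustration only

naive_tree = branching ** game_length
print(f"game tree ~ 10^{log10(naive_tree):.0f}")   # ~ 10^360
print(f"positions ~ 10^{log10(positions):.0f}")    # ~ 10^172
```

Both figures are far beyond exhaustive search, which is why AlphaGo-style systems combine search with learned pattern evaluation rather than enumerating positions.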

    • Oh dear. This is a bit like listening to a mathematician or a physicist talk about economics. Like listening to determinists talk about stochastics.

      As neuroscience shows, emotions are a key part of the creative process. And emotions are a key part of our rationality. It is the struggle between fear and excitement that makes us rational. Only once we understand this do we understand why things in the human world (in the economy) unfold differently from the predictions.

      Andrew Lo, Adaptive Markets: Financial Evolution at the Speed of Thought
