Even if you carefully guide ChatGPT and other AI models through the process of producing the desired output, they still slip into "hallucinations" (inventing things that bear no relation to the actual facts or to realistic assumptions). That is one of their problems. A second, related problem arises when you grant these hallucinating AI models autonomy, that is, when they decide on their own which path to take or which process to execute, based on a hallucination they developed from the sample they analyzed. That part is genuinely frightening.
Below is a thoughtful reflection on this topic, prompted by the new GPT-5 release.
This week OpenAI released GPT-5, the very-long-awaited successor to GPT-4, which came out more than two years ago. There have been other OpenAI models that arguably deserved the title “successor”; there’s 4.5, not to mention models called o1, o3, and 4o (names that, when rendered in fonts whose lower-case o’s resemble zeroes, become even more confusing than they otherwise would be). But GPT-5 integrates the distinctive powers of the different OpenAI models under a unified user interface and brings significant new advances of its own. The overall effect isn’t enough to warrant serious discussion of whether the breathlessly awaited threshold of “artificial general intelligence” has been reached. But it’s enough to sustain confidence that the trajectory of AI progress will continue: More and more AI power, in more and more useful forms, will be available to more and more people at lower and lower prices, with growing social, economic, political, and geopolitical impact. So, in acknowledgment of this moment, we begin this week’s Earthling with a few items that are about either GPT-5 itself or issues raised by ever-more-powerful AIs.
