The astonishingly good but predictably bad AI program

When the chief executive of a San Francisco artificial intelligence company tries to damp down the hype surrounding his own technology, you know that some people have become rather excitable.

But that is exactly what Sam Altman attempted to do last month in response to the ecstatic reaction to OpenAI’s latest GPT-3 program. “The GPT-3 hype is way too much,” Mr Altman tweeted. “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes.”

GPT-3, which stands for generative pre-trained transformer, version three, is, in essence, a super-sophisticated auto-complete function: given a passage of text, it predicts the words most likely to come next. That sounds less than exciting. But what makes GPT-3 remarkable is its scale, its flexibility and the possibilities for future development.
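
To see what “auto-complete at scale” means in practice, here is a deliberately tiny sketch of the same idea: a model that counts which words follow which in a corpus, then extends a prompt one most-likely word at a time. GPT-3 performs this next-word prediction with a 175bn-parameter neural network rather than simple bigram counts; the toy corpus and the `complete` function below are purely illustrative and are not anything from OpenAI.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the hundreds of billions of words GPT-3 ingested.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count which words tend to follow it (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt, length=5):
    """Greedily extend the prompt with the statistically likeliest next words."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # no continuation seen in training data
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # extends the prompt with a plausible-looking continuation
```

The difference between this sketch and GPT-3 is not the task but the capacity: replace bigram counts with billions of learned parameters and a vastly larger corpus, and the continuations start to look like poems, articles and code.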

Drawing on hundreds of billions of words ingested from the internet, and using deep neural network technology akin to that behind Google DeepMind’s AlphaGo, GPT-3 was trained to spot and then replicate sophisticated patterns in text. It contains 175bn parameters, more than 10 times as many as the next biggest comparable model.
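
Some back-of-envelope arithmetic shows what that parameter count implies. Assuming the weights are stored in a common 16-bit format (our assumption, not the article’s), the model’s weights alone occupy roughly 350GB, and the gap to the previous record holder, Microsoft’s 17bn-parameter Turing-NLG (a detail the article does not name), is indeed just over tenfold:

```python
# Back-of-envelope arithmetic behind the "175bn parameters" figure.
GPT3_PARAMS = 175e9          # GPT-3's reported parameter count
TURING_NLG_PARAMS = 17e9     # Microsoft's Turing-NLG, the previous largest model

print(f"Scale-up factor: {GPT3_PARAMS / TURING_NLG_PARAMS:.1f}x")   # ~10.3x
print(f"Weights alone: {GPT3_PARAMS * 2 / 1e9:.0f} GB at 16 bits")  # ~350 GB
```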

Developers, who have been given access to GPT-3 in a private beta test, have used it to write poems, articles, comic sketches and computer code, compose guitar riffs, offer medical advice, and reimagine video games, sometimes with stunning effect. “Playing with GPT-3 feels like seeing the future,” tweeted Arram Sabeti, a tech entrepreneur who used the software to write a Raymond Chandler-style screenplay about Harry Potter. “It’s shockingly good.”

It is easy to imagine how GPT-3 could be used to empower digital assistants, making it almost impossible to know if you are dealing with a machine or a human. Deployed at scale in a whole range of activities, the software could significantly enhance human productivity and creativity.

But the dark uses of such technology are just as easy to imagine: it could supercharge disinformation campaigns and doctored videos, or deepfakes. Computer scientists have also questioned OpenAI’s indiscriminate use of training data, which leads GPT-3 to reflect human stereotypes, biases and prejudices to an alarming degree. In that sense, GPT-3 may be an accurate reflection of human nature, but should we not aspire to design AI systems that are better than we are?

The technology also raises a host of ethical issues, debated by nine experts rapidly assembled by Daily Nous, the online philosophy site. 

The philosophers considered that GPT-3 was “unnervingly coherent and laughably mindless”. It was more than a machine, but less than a mind, not knowing what it knew or did not know and on occasion spewing out nonsense.

Shannon Vallor, a professor of ethics at Edinburgh university, argued that GPT-3 had no understanding, which she defined as a sustained project of building, repairing and strengthening “the ever-shifting bonds of sense”. “Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the CEO, GPT-3 spins some pretty good bullshit,” she wrote.

However, David Chalmers, a philosophy professor at New York University, suggested that GPT-3 was showing hints of human-like general intelligence. “I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175bn parameters is conscious too.”

Intriguingly, GPT-3 was then fed these comments and prompted to reply: “To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honour.”

Pretty convincing, no? The idea that a computer might have “honour” shows just how good GPT-3 has become at mimicking our tendency to anthropomorphise technology. Unless of course . . . 

john.thornhill@ft.com
