An AI Program Allegedly Passed the Turing Test! So What?

Did the programmers create a thinking program or a clever way to trick the judges?

Computer science pioneer Alan Turing famously predicted that by the end of the 20th century, machines would be able to convince people that they were human about a third of the time.

As I wrote in an earlier story:

Turing didn’t exactly say that would mean machines were thinking, but suggested the distinction would largely cease to matter. At that point, humans would think of machines as thinking.

He was overly optimistic about the timeline. But on Saturday, in a test conducted at the Royal Society in London, a Russian team’s chatbot reportedly fooled 33 percent of the judges into thinking it was human, largely by pretending to be a 13-year-old boy for whom English is a second language.

So did the world just clear a benchmark of computer science that has stood for more than 60 years? That depends on how you view things.

The Independent provided a revealing quote from Vladimir Veselov, one of the chatbot’s creators:

Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything. We spent a lot of time developing a character with a believable personality.

Their program may be a notable computer science feat, but I’ll assert two things.

First, the researchers passed the letter of the Turing Test, but I’m not so sure about the spirit of it. I’m not left believing that the computer can think, which is the underlying benchmark of the test; I’m left believing that the programmers figured out a clever way to trick the judges.

Limiting the judges’ expectations by inventing a teenage persona feels more like a gimmick than a technical breakthrough. It seems the programmers approached the problem the way too many high school teachers do: Training pupils for the test, not for a true understanding of the subject matter.

Second point: It doesn’t really matter. Turing’s obvious genius notwithstanding, the test has become an arbitrary way of measuring something that can’t really be measured: Whether or not a computer is actually thinking. Which is a less-disturbing way of saying: Whether or not a computer is sentient.

I come down on the side of the late Dutch computer scientist Edsger Dijkstra, who is quoted as saying: “The question of whether machines can think … is about as relevant as the question of whether submarines can swim.”

In fact, the test represents an increasingly antiquated model of AI: A brain in a box like KITT or HAL 9000. Many AI researchers moved away from this philosophically (and perhaps ethically) fraught territory long ago. What they care about is using the tools of computer science — be they smart algorithms, artificial intelligence, machine learning or big data — to solve hard problems.

As Google’s Peter Norvig, co-author of “Artificial Intelligence: A Modern Approach,” told me in an interview several years ago: “Some people have thought of it as duplicating a human brain. I tend to think of it more as just building something that works.”

In many ways, computational tools are already superior to human minds — certainly in terms of memory, recall and the ability to connect dots across gigantic sets of information. They don’t have to mimic humans to be incredibly useful to humans.

We are already seeing these capabilities applied in practical ways — for instance, search engines that can immediately pinpoint information anywhere across the Internet; programs that can translate across multiple languages in nearly real time; and, as I wrote about yesterday, tools that may hit upon medical treatments that have stumped human minds for centuries.

This article originally appeared on Recode.net.
