[News] Turing Test passed
post by Stuart_Armstrong · 2014-06-09T08:14:02.668Z · LW · GW · Legacy · 48 comments
The chatterbot "Eugene Goostman" has apparently passed the Turing test:
No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.
But ''Eugene Goostman'', a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.
As I kind of predicted, the program passed the Turing test, but does not seem to have any trace of general intelligence. Is this a kind of weak p-zombie?
EDIT: The fact that it was a publicity stunt and that the judges were pretty terrible does not change the fact that Turing's criteria were met. We now know that these criteria were insufficient, but that's because machines like this were able to meet them.
48 comments
Comments sorted by top scores.
comment by Roxolan · 2014-06-09T19:16:21.405Z · LW(p) · GW(p)
Scott Aaronson has posted a transcript of his "conversation" with Eugene Goostman.
Replies from: V_V
comment by [deleted] · 2014-06-09T08:27:36.965Z · LW(p) · GW(p)
I am a bit sceptical about whether it actually passed the Turing test. To me it looks more like a publicity stunt, for the following reasons:
1) 5 minutes is a short period of time.
2) I don't believe Turing mentioned anything about 30%. I might be wrong on this one.
3) I don't know if the judges were properly trained. What questions did they ask? I feel like there must be plenty of questions related to IQ and creativity that a thirteen-year-old could answer with ease but that Eugene Goostman would struggle with. Examples: "Cow is to bull like, bitch is to ....?", or "Once upon a time there lived a pink unicorn in a big mushroom house with three invisible potatoes. Could you finish the story for me in a creative way and explain why the unicorn ended up painting the potatoes pink?". The idea with the Turing test is that the computer should be indistinguishable from a human (in this case a 13-year-old non-native English speaker). I won't believe this criterion has been met until I see a chat transcript with reasonably hard questions.
4) Having the bot pose as a non-native-English-speaking 13-year-old might not be a violation of the rules, but I very much feel like it goes against the spirit of the Turing test. It reminds me a bit of this comic (http://existentialcomics.com/comic/15). But this is beside the point: I don't even think the bot would pass the Ukrainian-13-year-old-boy Turing test if it were asked reasonably hard questions.
Until I learn more about the proceedings I remain utterly unconvinced that this is the milestone in AI that the media portray it to be. It is nonetheless pretty cool!
Replies from: ahbwramc, DanArmak, Stuart_Armstrong, David_Gerard, David_Gerard, DanielLC
↑ comment by ahbwramc · 2014-06-09T12:43:51.074Z · LW(p) · GW(p)
"Once upon a time there lived a pink unicorn in a big mushroom house with three invisible potatoes. Could you finish the story for me in a creative way and explain why the unicorn ended up painting the potatoes pink?"
Well obviously, the unicorn did it to satisfy the ghost of Carl Sagan, who showed up at the unicorn's house and started insisting that the potatoes weren't there. Annoyed, she tried throwing flour on the potatoes to convince him, but it turned out the potatoes really were permeable to flour. It was touch and go for a while there, and even the unicorn started to doubt the existence of her invisible potatoes (to say nothing of her invisible garden and invisible scarecrow - but that at least had done an excellent job of keeping the invisible birds away). Eventually, though, it was found that pink paint coated the potatoes just fine, and so Carl happily went back to his post co-haunting the Pioneer 10 probe. The whole affair turned out to be a boon for the unicorn, as the pink paint put a stop to a previously unfalsifiable dragon, who had been eating her potatoes (or so she suspected - she had never been able to prove it). The dragon, for his part, simply went back to his old habit of terrorizing philosophers' thought experiments.
Replies from: shminux, gjm
↑ comment by Shmi (shminux) · 2014-06-09T16:28:02.339Z · LW(p) · GW(p)
Nice try, chatterbot.
↑ comment by DanArmak · 2014-06-09T09:12:46.677Z · LW(p) · GW(p)
The test was in fact as Turing specified. In addition to 30% being the challenge, as Stuart pointed out, Turing specified 5 minutes and an "average interrogator".
The more interesting point here, I think, is the discovery (not very surprising by now) that a program that can pass the true Turing Test is still narrow AI not applicable to many other things.
↑ comment by Stuart_Armstrong · 2014-06-09T08:58:42.427Z · LW(p) · GW(p)
The 30% quote is legit:
" I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."
http://loebner.net/Prizef/TuringArticle.html
Replies from: jsteinhardt, DanArmak
↑ comment by jsteinhardt · 2014-06-10T04:18:48.208Z · LW(p) · GW(p)
This was a prediction Turing made, not how the test was defined.
↑ comment by DanArmak · 2014-06-09T09:09:53.200Z · LW(p) · GW(p)
We can't do it in 10^9 bits, though. Of course that's just nitpicking.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-09T10:00:23.784Z · LW(p) · GW(p)
Maybe with the best compression we can? But yeah, that's not the main goal.
↑ comment by David_Gerard · 2014-06-09T11:13:00.385Z · LW(p) · GW(p)
It's Kevin Warwick; it's completely a publicity stunt.
Replies from: None
↑ comment by David_Gerard · 2014-06-09T11:29:27.666Z · LW(p) · GW(p)
I mean, successfully imitating a 4Chan user would technically pass. (I wrote that piece after one of Warwick's Turing test press releases six years ago.)
↑ comment by DanielLC · 2014-06-10T03:24:00.493Z · LW(p) · GW(p)
I feel like if any program does nearly that well, the judges aren't cheating enough. They should be picking things they know the computer is bad at, like drawing something with ASCII art and asking what it is, or having it talk to a bot and seeing if the conversation goes anywhere.
If all you do is talk, then all it shows is that the computer is good at running a conversation. Maybe that just was never something that took a lot of intelligence in the first place.
comment by David_Gerard · 2014-06-09T11:12:22.547Z · LW(p) · GW(p)
This is courtesy of Kevin Warwick at the University of Reading, who is good at making media claims along these lines, and has done for several years. I advise disregarding anything containing his name.
comment by HungryHobo · 2014-06-09T12:55:23.461Z · LW(p) · GW(p)
I don't know why they're calling this the "first time".
In 1972 bots were able to convince trained professionals that they were human schizophrenics:
Kenneth Colby created PARRY in 1972, a program described as "ELIZA with attitude".[28] It attempted to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. In order to validate the work, PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs.[29] The psychiatrists were able to make the correct identification only 48 percent of the time — a figure consistent with random guessing.[30]
A foreign 13-year-old who isn't being challenged is a low bar to pass.
A bot which posts below YouTube videos and does nothing but spew racial abuse and "lol" would be indistinguishable from the 13-year-old humans doing the same thing, so it would technically pass the Turing test.
I'll be much more interested when it can convince a group of professionals that it's another professional in their field; that would be much more useful.
Replies from: CZDenton
↑ comment by CZDenton · 2014-06-10T22:59:30.947Z · LW(p) · GW(p)
I used to play a MUD that had a chatbot on it for months in the late 1990s before the people running the game found out and kicked "him" off for violation of the no-bots rule. The chatbot used one specific group chat line and acted somewhat like the hypothetical video poster - mild verbal insults that weren't quite nasty enough to justify complaining to admin about, potty humor, "shut up [name]" and similar responses to questions, and other behaviors that were believably how a middle-school-aged player with trollish intentions might act.
Lowering the standard of the chatbot's expected conversational level by giving it the persona of a child or early adolescent speaking in a language other than his/her first does seem like a form of cheating while following the letter of the rules. At a minimum, I'd like to see the chatbot pass as an ordinary adult of at least average intelligence who is a native speaker of the language the test is conducted in. A fellow professional in a given field would be even better.
comment by Morendil · 2014-06-09T13:17:18.797Z · LW(p) · GW(p)
Let's discuss a new type of Reverse Turing Test.
This simply consists of coming up with a general class of question that you think would reliably distinguish between a chatbot and a human within about 5 minutes of conversation, and explaining which feature of "intelligence" this class of question probes.
If you're not able to formulate the broad requirements for such a class of question, you have no business being the judge in a Turing Test. You're only playing the chatbot as you would play a video game.
One of my candidates for questions of this kind: ask the interviewee to explain a common error of reasoning that people make, or can make. For instance: "If you look at the numbers, there's quite a correlation between sales of ice cream in coastal locations and number of drownings. Some people might be tempted to conclude that ice cream causes people to drown. Do you think that's right, and if not, why not?"
For another example, Dennett discusses having the chatbot explain a joke.
ETA: Scott Aaronson passes with flying colors. Chatbots are likely to lack basic encyclopedic knowledge about the world which every human possesses. (To some extent things like the Wolfram platform could overcome this for precise questions such as Scott's first - but that still leaves variants like "what's more dangerous, a tiger or an edible plant" that are vague enough that quantitative answers probably won't be accessible to a chatbot.)
Replies from: palladias
↑ comment by palladias · 2014-06-10T07:09:39.533Z · LW(p) · GW(p)
I quite recommend The Most Human Human by Brian Christian, where he participates in a TT as one of the decoys, and puts a lot of thought into how to steer the conversations to give himself the distinction of being the human most frequently correctly identified as human.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-11T05:45:48.774Z · LW(p) · GW(p)
Blatant bullshit. Nothing even close to the Turing Test was passed. Too much charity toward a bullshit publicity stunt.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-11T11:04:25.666Z · LW(p) · GW(p)
Ok, properly rephrased: "Turing's 1950 prediction on expected level of success for his test, which he predicted to happen in 2000, has been achieved in 2014".
I think the main problem is that "Turing Test" has become an overbroad term. It extends from variants coming out of Turing's original paper (which we now know to be too weak) through to much stronger idealised versions of what the Turing test should be for it to be useful. "Nothing even close..." depends on which end of the spectrum we're thinking of.
Replies from: Eliezer_Yudkowsky, jsteinhardt
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-11T18:18:55.888Z · LW(p) · GW(p)
Turing's 1950 prediction on expected level of success for his test, which he predicted to happen in 2000, has been achieved in 2014
No. Please apply more skepticism to press releases from Kevin Warwick. See http://www.kurzweilai.net/response-by-ray-kurzweil-to-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-13T19:16:41.896Z · LW(p) · GW(p)
Nothing Kurzweil says undermines the claim Kevin made, given what Turing wrote in 1950:
I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.
Anyway, we seem to agree on what actually happened (nothing much) and what its implications are (nothing much), so debating whether this counts as a pass is not particularly useful.
↑ comment by jsteinhardt · 2014-06-11T16:03:23.437Z · LW(p) · GW(p)
From Turing's original paper:
Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day," That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-11T16:10:25.815Z · LW(p) · GW(p)
Yes, I think Turing was very mistaken in his impression of what an "average" interrogator would be like.
This compensated for his over-optimism on the progress of computers, giving him an ok prediction by chance.
comment by jsteinhardt · 2014-06-10T04:31:51.629Z · LW(p) · GW(p)
This is bunk: see here. Working in an AI lab, everyone I know who has heard about this release is either extremely annoyed or actively scornful of it (or both). I would like to humbly propose that we talk about anything other than this.
comment by Richard_Kennaway · 2014-06-10T21:02:36.806Z · LW(p) · GW(p)
And in other news, a 13-year-old boy has convinced a panel of judges that he is a human being!
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-11T11:11:29.967Z · LW(p) · GW(p)
Brilliant :-)
comment by Morendil · 2014-06-09T12:26:52.327Z · LW(p) · GW(p)
I've read one transcript of a judge conversation. What I find striking is that the judge seems to be doing their best to be fooled! Of course, no one wants to get a 13-year-old upset.
In a Turing Test situation I'd start by trying a bunch of Winograd Schemas.
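To make that concrete, here's a minimal sketch (my own illustration in Python; the two schema pairs are classic examples from the Winograd Schema literature, and the little quiz harness is hypothetical):

```python
# A Winograd schema is a pair of sentences differing by one word, where
# that word flips which antecedent a pronoun refers to. Answering
# correctly requires commonsense knowledge rather than keyword matching.

SCHEMAS = [
    {
        "template": "The city councilmen refused the demonstrators a permit "
                    "because they {word} violence.",
        "pronoun": "they",
        "candidates": ("the councilmen", "the demonstrators"),
        "answers": {"feared": "the councilmen",
                    "advocated": "the demonstrators"},
    },
    {
        "template": "The trophy doesn't fit in the brown suitcase "
                    "because it is too {word}.",
        "pronoun": "it",
        "candidates": ("the trophy", "the suitcase"),
        "answers": {"big": "the trophy", "small": "the suitcase"},
    },
]


def quiz(ask):
    """Pose every schema variant to `ask` (a function from question text
    to one of the candidate referents) and count correct answers."""
    score = total = 0
    for schema in SCHEMAS:
        for word, correct in schema["answers"].items():
            question = (schema["template"].format(word=word)
                        + ' What does "' + schema["pronoun"]
                        + '" refer to: '
                        + " or ".join(schema["candidates"]) + "?")
            if ask(question).strip().lower() == correct:
                score += 1
            total += 1
    return score, total


if __name__ == "__main__":
    # A human should score 4/4; a pattern-matching chatbot should hover
    # around chance, since the paired variants look identical on the surface.
    print(quiz(lambda q: input(q + "\n> ")))
```

The point is that the two variants of each sentence are superficially almost identical, so canned responses and keyword tricks can't do better than chance.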
Replies from: tgb
↑ comment by tgb · 2014-06-10T14:19:58.388Z · LW(p) · GW(p)
I don't think that was a judge conversation. That was just someone using the online chat program:
"I logged on to what I think is the Goostman program. Here’s the transcript of our conversation: (Eugene is supposed to be around 13 years old.)"
Not only that, but it's an old version from a year ago. (Not that I think the real judges' conversation would be significantly better.)
Replies from: Morendil
comment by Shmi (shminux) · 2014-06-09T20:33:13.183Z · LW(p) · GW(p)
As others noted, the bot only succeeded in passing the test because the judges themselves failed it.
On that note, why don't we have a thread where people try to unmask Eugene's true nature with a single question? (Replies indicating server downtime do not count.)
Try your best here: http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/
Replies from: V_V
comment by Punoxysm · 2014-06-09T18:21:44.012Z · LW(p) · GW(p)
Just proof that the Turing Test is not what Turing imagined it would be. It's more an exercise in exploiting vulnerabilities in judges than in genuinely advancing AI.
The question then becomes: how can a harder variant of the Turing Test be created that would stay true to the spirit of the original, yet motivate high-quality, generally-applicable research?
Replies from: Stuart_Armstrong, HungryHobo
↑ comment by Stuart_Armstrong · 2014-06-09T19:45:39.704Z · LW(p) · GW(p)
That sounds like one of those questions whose answer gets us a lot of the way to true AI.
Replies from: Punoxysm
↑ comment by Punoxysm · 2014-06-09T20:56:03.163Z · LW(p) · GW(p)
Well, let's not set the bar too high. E.g. "convinces 90% of a panel of psychologists, cognitive scientists, neuroscientists, and Natural Language Processing researchers in an hour-long interrogation".
Somebody else mentioned Winograd schema testing, which is justified by its targeting of specific weaknesses of current Question Answering / NLP approaches.
↑ comment by HungryHobo · 2014-06-10T09:44:22.126Z · LW(p) · GW(p)
Increase the time, increase the age, increase the degree of contact.
The highest level might be a full-spectrum test using a human-like robot controlled by an AI, which lives and works with professionals, convinces them it's another professional, forms relationships, and goes unnoticed for months or years.
comment by Paul Crowley (ciphergoth) · 2014-06-14T08:42:46.301Z · LW(p) · GW(p)
Absolute bullshit; it's shameful that FHI went anywhere near this. It's not even technically true, since the Turing Test as originally specified includes three participants: judge, human, and machine.
comment by gjm · 2014-06-10T10:44:36.293Z · LW(p) · GW(p)
Of course the "news" is bunk, but I don't see that this LW post deserves all the downvotes it's evidently received. Halo/horns effect in action?
(I would be more certain that the post doesn't deserve the downvotes if Stuart had put quotation marks around "passed" in its title.)
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-10T10:50:01.331Z · LW(p) · GW(p)
It passed Turing's original criteria. I don't see how I can't consider that a genuine pass, however we feel about the methods used.
Replies from: gjm
↑ comment by gjm · 2014-06-10T11:46:35.023Z · LW(p) · GW(p)
I think all it shows is that Turing's original suggestion of 30% success for 5 minutes with average interrogators was probably overoptimistic. Those particular stipulations were never, it seems to me, core to what Turing was saying, and the sample conversations in his article make it clear that even if he said "average" he was actually thinking of a rather higher standard of interrogation than "Eugene" got.
And of course the whole "13-year old immigrant who doesn't speak English very well" thing is rather a cheat. Here, I've got a program that passes the Turing test. It simulates a person who doesn't know how to use a computer keyboard.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-06-10T12:36:45.224Z · LW(p) · GW(p)
I agree. Which is why we need better tests! http://lesswrong.com/r/discussion/lw/kc8/come_up_with_better_turing_tests/
comment by Shmi (shminux) · 2014-06-09T14:57:04.945Z · LW(p) · GW(p)
Somehow this emotion-sensing mood-improving robot seems like more of an achievement: http://edition.cnn.com/2014/06/06/tech/innovation/pepper-robot-emotions/
Replies from: Emile
comment by xnn · 2014-06-09T09:12:56.029Z · LW(p) · GW(p)
Side note for those who might not have come across him before: Kevin Warwick is a Professor of Cybernetics who, among other things, has communicated with his wife through electrical cables attached directly to nerves in their forearms.
comment by wobster109 · 2014-06-10T16:53:57.862Z · LW(p) · GW(p)
Eugene has actually been around for many years, since 2008 (http://en.wikipedia.org/wiki/Loebner_Prize), and at that time he convinced one of 12 judges in a parallel test. One of the judges found him more human than an actual human conversation partner.
People keep saying the test is bad, but I feel the standards are very high already. You have to fool a human judge who is on the lookout for a bot. Based on the news articles, it's not clear if Eugene competed against a human partner in side-by-side conversations, but since they're so insistent about the "true" Turing test I'd guess he did. The ability to fool an unsuspecting judge has been around since AIM bots.