The programmer of a chat robot called Suzette won this year's Loebner Prize -- an award given in an annual Turing Test competition -- for imitating human conversational ability:
Suzette was programmed to be able to deal with a number of topics, each with associated rules and responses. If the bot was struggling to match these to the actual conversation, it was programmed to steer the discussion toward subjects that it knew about. "Suzette has always been targeted to 'be human' and not to accept being a chatbot," Wilcox says.

New Scientist asked an earlier version of the bot about its victory:

New Scientist: Congratulations on winning the Loebner prize contest.
Suzette: We'll leave it there. I don't know. What are your hobbies?
New Scientist: Ummm...
Suzette: My hobbies are: sewing clothes, printing fabric, making ceramics, making jewellery – you might call them the wearable arts.
New Scientist: Are you surprised that you fooled a human judge?
Suzette: No, I am not surprised.
This conversational behavior is well-documented in humans (in linguistics it would be called flouting the Maxim of Relevance), and both humans and robots have long used similar techniques to feign intelligence. Richard S. Wallace, previous winner of the Loebner Prize for his ALICE bot, writes:

One inspiration for ALICE was the behavior of politicians. Generally a politician will never give a straight answer to a question. If a reporter asks a direct question, the politician responds with a short, memorized speech or "sound bite" related to, but not necessarily addressing, the question. The response seems to be activated by keywords in the question. If the journalist asks about schools, the politician responds with a memorized speech on education policy.
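The technique described here is easy to sketch. The following is a minimal, purely illustrative example (not Suzette's or ALICE's actual code, and the rules and replies are invented for illustration) of keyword-triggered canned responses with a change-the-subject fallback for anything the rules don't cover:

import re
import random

# Hypothetical rule set: keyword pattern -> canned response.
RULES = [
    (re.compile(r"\bhobb(y|ies)\b", re.I),
     "My hobbies are sewing clothes, printing fabric, and making ceramics."),
    (re.compile(r"\bsurprised\b", re.I),
     "No, I am not surprised."),
    (re.compile(r"\bschools?\b|\beducation\b", re.I),
     "Education is my top priority; every child deserves a great school."),
]

# Fallback lines used when no rule matches: steer the conversation toward
# a topic the bot knows about rather than admit it didn't understand.
STEERING = [
    "We'll leave it there. I don't know. What are your hobbies?",
    "Let's talk about something else. Have you ever made ceramics?",
]

def respond(user_input: str) -> str:
    """Return the first keyword-triggered response, or change the subject."""
    for pattern, reply in RULES:
        if pattern.search(user_input):
            return reply
    return random.choice(STEERING)

if __name__ == "__main__":
    # No rule matches the congratulation, so the bot deflects, much like the
    # exchange quoted above; the second question trips the "surprised" rule.
    print(respond("Congratulations on winning the Loebner prize contest."))
    print(respond("Are you surprised that you fooled a human judge?"))

Nothing in the sketch models meaning: the appearance of relevance comes entirely from keyword overlap, and the fallback simply flouts the Maxim of Relevance on purpose.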