Emily M Bender, Professor of Computational Linguistics at the University of Washington I think what's going on there is the kinds of answers you get depend on the questions you put in, because it's doing likely next word, likely next word, and so if as the human interacting with the machine you start asking it questions about ‘how do you feel, you know, Chatbot?’ ‘What do you think of this?’ And ‘what are your goals?’ You can provoke it to say things that sound like what a sentient entity would say... We are really primed to imagine a mind behind language whenever we encounter language. And so, we really have to account for that when we're making decisions about these.
Neil So, although a chatbot might sound human, we really just ask it things to get a reaction – we provoke it – and it answers only with words it’s learned to use before, not because it has come up with a clever answer. But it does sound like a sentient entity – sentient describes a living thing that experiences feelings.
Rob As Professor Bender says, we imagine that when something speaks there is a mind
behind it. But sorry, Neil, they are not your friend, they are just machines!
Neil It’s strange then that we sometimes give chatbots names. Alexa, Siri… and earlier
I asked you what the name was for the first ever chatbot.
Rob And I guessed it was PARRY. Was I right?
Neil You guessed
wrong, I’m afraid. PARRY was an early form of chatbot from 1972
,
but the correct answer was ELIZA. It was considered to be the first
‘
chatterbot
’
–
as it was called then, and was developed by Joseph Weizenbaum at Massachusetts
Institute of Technology.