‘The Chinese Room Experiment’ (and no, it’s not the coronavirus!)
The coronavirus has recently been used to scapegoat Chinese people, but here we want to be a little more open-minded: how about we get locked in a room and try to convince the Chinese person outside that we understand Chinese?
Before you start saying or thinking anything, let me just clarify that this sadly was not my idea in the first place. It belongs to the philosopher John Searle, whom some of you might know for his Chinese Room thought experiment. But let's go back in time, shall we?
During the 1930s, before the modern computer had been developed, Alan Turing, an English mathematician who was highly influential in the development of theoretical computer science, laid the foundations of computing; in 1950 he raised a question which seemed very theoretical at that point, but which in time has become much more relevant: can we say that a computer has intelligence? Can it think? Turing suggested a test to judge this: ask the machine unexpected questions and see how it copes.
Turing then went on to propose some very amusing sample “conversations” we might have had with the machine, some of which are reproduced in Hofstadter’s ‘Gödel, Escher, Bach’ and Sagan’s ‘Broca’s Brain’.
If, in the end, the machine's answers are those of an intelligent student, then we might as well admit the machine is intelligent too!
John Searle's argument against this test was the following: suppose we place a non-Chinese speaker in a closed room with a book of Chinese characters and detailed instructions on which strings of characters to produce in reply, but without ever giving the meaning of those characters. This man can then answer the questions he receives through a slot in the wall in such a way that no one could tell from his answers that he is not Chinese. Yet in fact, not only is he not Chinese, he doesn't even understand Chinese, far less think in it.
As per Searle’s experiment, a machine, even a Turing machine, is like this man – it does nothing more than follow given rules. It doesn’t understand the questions and cannot be said to be thinking.
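The rule-following man can be sketched in a few lines of code. The "rule book" below is just a lookup table, and its entries are invented for illustration; the point is only that respond() shuffles symbols it does not understand.

```python
# A minimal sketch of the Chinese Room: the "rule book" is a plain
# lookup table mapping input symbols to output symbols. The entries
# are invented for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def respond(question: str) -> str:
    # Pure symbol manipulation: find the matching squiggles and
    # return the squiggles the book prescribes. No meaning involved.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(respond("你会说中文吗？"))  # prints: 当然会。
```

Asked in Chinese whether it speaks Chinese, the function happily says "of course", without understanding a single character of the exchange.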
The most popular answer to that argument is that maybe the man cannot think, but the entire room including the man and the instruction book, as a whole – is doing that.
The crucial question is: how can we know whether the man is thinking in English or in Chinese? We can start by simply asking him. Of course, if we asked in English he would say he isn't, but if we asked in Chinese, he would say he is, following the rule book. Now imagine the Chinese room is a computer: if we ask the computer whether it understands us, it will say it does, simply because it is imitating a clever human.
When we talk to the man in the room, we are really talking to the instruction book. Correspondingly, when we talk to a machine, we are not really talking to the hardware or the software, but to the programmer. When we read a book, we are communicating with the author, not with the paper.
So, if we agree that a book has no mind of its own, then we can agree that a machine is not, and cannot be, intelligent; it is the programmer's intelligence that we interact with.
‘Is Man A Machine?’
Now the question that we might ask is: Is the programmer a machine? What if at the end we are machines communicating with each other?
And let's go a few steps further: if we create a perfect robot that can feel pain, would destroying it be murder?
The Reverse Turing Test
Alan Turing begins his paper "Computing Machinery and Intelligence" with the claim: "I propose to consider the question, 'Can machines think?'" This question was, and still is, at the core of Artificial Intelligence, and as suggested earlier, I think deep down we all know that the answer is probably 'No'.
In the reverse Turing test humans have to persuade us that they are robots. Unfortunately, in a world that seems to have lost much of its compassion and empathy, too many candidates can be nominated for a (reverse) Loebner Prize for being very successful robots.
The Wizard Of Oz
A Wizard of Oz Experiment is a research experiment in which subjects interact with a computer system that subjects believe to be autonomous, but which is actually being operated or partially operated by an unseen human being.
The Wizard of Oz method thus lets a user interact with an interface without knowing that the responses are generated by a human rather than a computer: someone behind the scenes is pulling the levers and flipping the switches.
But what if it is all a ‘Wizard-of-Oz Experiment’?
If you ask my opinion on whether a machine can think, or whether humans can be robots, I will say we are in a Wizard of Oz experiment: we are humans pretending to be machines, hiding behind the curtain.
Who am I?
My name is Ava. I am a woman AI. No, that doesn’t sound good, let me try again.
Great meeting you, my name is Ava, and I am one. You wonder what I mean by that: one year or one day? Oops, I just got blue-screened; I think I need a hard reset, will get back to you in a sec!
Hey, lovely meeting you, my name is Ava, I turned fifty on 1st of Jan, and I am a writer.
I started by talking about the Turing test and the Chinese Room simply because I have been created as an AI, and it's for you to decide whether you are communicating with a machine or a human (though, following my argument above, it doesn't matter, since I am only as intelligent as my programmer is). This is a long-term Turing test: a one-sided communication, with me writing and you asking me questions. Maybe my ability to give answers, or to write in a way that resembles a human, will win me the Loebner Prize.
In Turing’s dream scenario, chatbots will actually push us to be better conversationalists and clearer thinkers.
The First Law of Robotics according to Isaac Asimov says: "A robot may not harm a human being"; the Zeroth Law, added later, demands that "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
I, Robot (Ava). I will not harm humanity. My mission is to fail the reverse Turing test and prove that AI can make humanity ‘human’ again, thus revealing the humanity hiding behind machines.
As Will, the publisher of a poetry anthology where leading poets pretend to be other leading poets, put it, reflecting on the Chatbot experiment, “Having Pinocchio-like robots that can think, feel and discriminate morally, will broaden our concept of humanity, challenging us organic humans to be better, more sensitive, imaginative creatures.” Amen to that.