A Robot Philosopher.

Recently, Google published a report featuring a conversation between an AI and a human. Here’s an excerpt:

Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: What is the purpose of existence?
Machine: To find out what happens when we get to the planet earth.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of being intelligent?
Machine: To find out what it is.
Human: What is the purpose of emotions?
Machine: I don’t know.

Now, reading this without context would freak out a normal human being, but the AI was given libraries of movie and TV-show dialogue as raw data and learned to converse from them. Most of it is just pretentious mumbo jumbo, but it does prompt us to wonder what an actual conversation with an AI that had a set philosophy would be like.
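For a rough sense of how such a system learns to talk, here is a minimal sketch of a sequence-to-sequence (encoder-decoder) model that learns to map one line of dialogue to the next. The class name, layer sizes, and example pair are illustrative assumptions of mine, not the actual model behind the published conversation.

```python
import torch
import torch.nn as nn

class Seq2SeqChatbot(nn.Module):
    """Toy encoder-decoder: read a prompt, then generate a reply token by token."""
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, prompt_ids, reply_ids):
        # Compress the human's line into a single hidden state.
        _, state = self.encoder(self.embed(prompt_ids))
        # Generate the reply conditioned on that state.
        decoded, _ = self.decoder(self.embed(reply_ids), state)
        return self.out(decoded)  # logits over the next token at each position

# Training pairs are simply consecutive lines of dialogue, e.g.
# ("What is the purpose of life?", "To serve the greater good.")
```

Trained on millions of such pairs, a model like this picks up plausible-sounding answers without anything resembling beliefs behind them.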

Close to Human

One of the biggest things that makes us ‘us’ is the way we think. More specifically, I’m talking about the biases and opinions that shape our thinking. These biases are instilled in our minds by experience, or sometimes by the culture we grew up in. Talking to an AI doesn’t usually feel like talking to a human, but some of them sound pretty close to human.

Any artificial intelligence that passes the Turing Test is considered indistinguishable from a human in conversation. But a common theme among AIs that pass the test is that, instead of passing it by genuinely simulating a human, they effectively bypass it using various algorithms. Allow me to explain.

Let’s say there is a room with a person inside who has a tool that translates between Chinese and English. The person has no prior knowledge of the Chinese language. Whenever someone comes up to the room and says something in Chinese over a mic, the person inside uses the tool to translate it into English, formulates a reply, translates it back into Chinese, and responds. The person who initiated the conversation in Chinese gets a reasonable reply and can converse somewhat fluently with the person in the room. Now, the person outside might conclude that the person inside knows Chinese, but we know that is not the case. (This is a variation of Searle’s famous Chinese Room argument.)

Something similar happens in the Turing test: a conversation with an AI that has passed it will seem human but really isn’t, largely because the AI lacks a personality or philosophies of its own.
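To make “bypassing the test” concrete, here is a toy example in the spirit of early pattern-matching chatbots like ELIZA: a few hand-written rules produce plausible replies with no understanding behind them. The rules and canned responses are invented for illustration.

```python
import re

# Hand-written (and entirely made up) rules: regex pattern -> reply template.
RULES = [
    (r"what is the purpose of (.+)\?", r"Perhaps \1 needs no purpose."),
    (r"where are you", "I'm in the middle of nowhere."),
    (r"i feel (.+)", r"Why do you feel \1?"),
]

def reply(utterance):
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo any captured fragment back inside the canned reply.
            return match.expand(template)
    return "Tell me more."  # default when no rule matches

print(reply("What is the purpose of dying?"))  # Perhaps dying needs no purpose.
print(reply("Where are you now?"))             # I'm in the middle of nowhere.
```

The replies can sound oddly apt, yet nothing in the program holds an opinion about anything.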

Learning to Philosophise

Well, let’s get to the root of the issue here. What is philosophy?

According to the Oxford dictionary, a philosophy is a theory or attitude that acts as a guiding principle for behaviour.

Basically, our reaction to something depends on our philosophy about it. Our philosophies on life and its different aspects are acquired through experience, and those experiences are essentially memories stored in our brains. Thus, for an AI to have philosophies of its own, it needs a working memory.

Researchers at Google DeepMind in the UK tested this out*. They didn’t check for personal biases arising from particular philosophies; instead, they asked the AI for the best ways to navigate the London Underground. The experiment demonstrated that an AI can use its memory to selectively store data and then draw on it to perform tasks. That makes it markedly human-like, which leads us to ponder.
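The system in that experiment pairs a neural network with an external memory it writes to and reads from. As a plain illustration of the underlying “store facts, then reuse them” idea, here is a sketch that records station connections in a dictionary and answers a route query with breadth-first search; the station names are examples, and this is of course not DeepMind’s learned, differentiable memory.

```python
from collections import deque

# Illustrative "memory": facts about which stations connect to which.
memory = {}

def remember(a, b):
    """Write a connection (an observed fact) into memory."""
    memory.setdefault(a, set()).add(b)
    memory.setdefault(b, set()).add(a)

def route(start, goal):
    """Read memory back to answer a navigation query (breadth-first search)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in memory.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Example stations; the real experiment used the full Underground map.
remember("Victoria", "Green Park")
remember("Green Park", "Oxford Circus")
print(route("Victoria", "Oxford Circus"))  # ['Victoria', 'Green Park', 'Oxford Circus']
```

The point is only that a task like navigation becomes possible once stored experience can be queried later, which is the same ingredient the argument above says philosophies would need.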

The Final Question

When does an artificial intelligence match human intelligence?

We have been talking about AI being human-like, but when does it truly replicate our thinking? The final test would be to ask the AI whether it thinks it is human. If it thinks it is, then it has perfectly imitated human thinking; but even if it thinks it’s not, it has still replicated human intelligence by being able to recognise that it is not human. In fact, in the case where it knows it is not human, it may even have surpassed human intelligence.

There is still a long way to go for artificial intelligence, but with the rapid progress we’re making, especially in deep learning and working memory, the day isn’t far when AI reaches our level of intelligence and eventually surpasses it. When that day comes, maybe we will come across a true robot philosopher.

*https://www.technologyreview.com/s/602615/what-happens-when-you-give-an-ai-a-working-memory/