Robot with soul
18 October 2012
In the future it will be possible to build conscious machines – machines that understand when their rights are abused and that, in a philosophical sense, can also feel pain. This is the claim of neurophysiologist Germund Hesslow from Lund University, based on a hypothesis that challenges our view of what it means to be human.

We humans tend to think we live in two worlds, one external and material, the other internal and mental. Many believe highly developed animals also have such an inner world.
However, inanimate objects cannot have a soul. Or can they?
In fact, it is not always all that easy to draw the line between man and machine – something you understand when you meet K, a robot the size of a fist.
“K has an inner world, a primitive consciousness”, claims one of its creators, Germund Hesslow, Professor of Neurophysiology at the Department of Experimental Medical Science.
Over a couple of decades, Germund Hesslow has developed a hypothesis about what marks out human consciousness: that the ability to imagine things and anticipate events is precisely what we mean by consciousness.
In 1999 he gave a lecture about this at the University of Skövde. After the lecture, a couple of people from the School of Humanities and Informatics came up to him. They believed it was possible to create a robot that would work according to the principles Hesslow had outlined. So, Dan-Anders Jirenhed and his supervisor Tom Ziemke had a go, basing their work on Khepera, a robot that was already on the market and which they gave the nickname K.
K has wheels and can roll forwards and backwards and turn. The researchers equipped it with an electronic ‘brain’ which makes it able to reflect on what it does, if only to a limited degree.
Just as in humans, K’s ‘brain’ has a sensory region where ‘sensory impressions’ can be received, i.e. information in the form of sound, light, etc. The sensory region is linked to a motor region that controls the robot’s movements.
K’s electronic brain is constructed in a different way from classic computers. It is a type of network known as an artificial neural network (ANN). Neural networks can learn through training, i.e. the network gathers experiences, learns from them and proceeds by a process of trial and error until it succeeds with a task. It is reminiscent of how children learn to master the physical world. Using the learning capacity of its artificial brain, K is able to avoid obstacles as it crosses a room or navigates a maze.
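The trial-and-error learning described above can be sketched in a few lines of code. The example below is purely illustrative – it is not K's actual network, and the sensor values, learning rate and training data are invented for the sketch. It shows the core idea of the delta rule: a single artificial neuron adjusts its connection weights whenever its output misses the target, until it reliably steers away from obstacles.

```python
# A minimal sketch of trial-and-error learning in an artificial neural
# network. This is NOT K's real architecture, just an illustration of
# the delta rule: nudge the weights whenever the output misses the target.

import random

def train_avoidance_net(samples, epochs=200, lr=0.1, seed=0):
    """Train one artificial neuron mapping two proximity sensors
    (left, right) to a steering output (-1 = turn left, +1 = turn right)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (left, right), target in samples:
            out = 1 if w[0] * left + w[1] * right + b > 0 else -1
            err = target - out                 # zero when already correct
            w[0] += lr * err * left            # strengthen helpful inputs
            w[1] += lr * err * right
            b += lr * err
    return w, b

def steer(w, b, left, right):
    """Apply the trained neuron to fresh sensor readings."""
    return 1 if w[0] * left + w[1] * right + b > 0 else -1

# Invented experiences: obstacle close on the left -> steer right (+1),
# obstacle close on the right -> steer left (-1).
data = [((0.9, 0.1), 1), ((0.1, 0.9), -1),
        ((0.8, 0.2), 1), ((0.2, 0.7), -1)]
w, b = train_avoidance_net(data)
```

After enough passes over its experiences, the network generalises: it steers away from obstacles it has never seen in exactly those positions – which is the sense in which a neural network "learns" rather than follows a fixed program.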
Great, say the sceptics, but an automatic vacuum cleaner or lawnmower can do that. They are not particularly soulful machines!
True enough, Germund Hesslow agrees:
“Both K and a household robot can avoid obstacles thanks to their sensors. That’s nothing new.”
However, the difference becomes clear when the sensors are turned off:
“K can still avoid the obstacles ‘blindfolded’ because it can imagine, based on previous experiences, what is going to happen and what consequences it will have”, he explains.
So, it is not a question of simple programming that says “avoid that obstacle”, as with a robotic lawnmower, but rather an internal process that leads the machine to avoid the obstacle. The point is not what the machine has learnt, but what it can anticipate. It is this which K has in common with humans, according to Germund Hesslow.
So how does the robot work?
“It learns two things. On the one hand, it learns to avoid obstacles. Using its sensors, it can ‘see’ an obstacle approaching and growing on its artificial retina. On the other hand, it also learns a lot of connections between behaviour and visual consequences of this behaviour. It is therefore able to predict what will happen when it approaches the obstacle. These predictions can then be entered into the robot’s sight system, which means that K ‘sees’ even when the sensors are turned off.”
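The loop Hesslow describes – learn how actions change what you see, then feed your own predictions back into the sight system – can be sketched as a toy forward model. The code below is a hypothetical illustration, not K's implementation: percepts are reduced to a few labels, and the "model" is a learned transition table.

```python
# A toy sketch of the prediction loop described above (hypothetical, not
# K's implementation): the robot first experiences how each action changes
# what it 'sees', then replays those learned transitions internally, so it
# can keep avoiding an obstacle with its sensors switched off.

def learn_transitions(episodes):
    """Learn a forward model: (percept, action) -> predicted next percept."""
    model = {}
    for episode in episodes:
        for percept, action, next_percept in episode:
            model[(percept, action)] = next_percept
    return model

def policy(percept):
    """Turn when the obstacle looms large on the 'retina', else keep going."""
    return "turn" if percept == "obstacle_near" else "forward"

def run_blind(model, percept, steps):
    """Sensors off: the model's own predictions are fed back as input."""
    trace = []
    for _ in range(steps):
        action = policy(percept)
        trace.append((percept, action))
        percept = model[(percept, action)]   # imagined next percept
    return trace

# Experience gathered with sensors on: approaching makes the obstacle
# grow on the 'retina'; turning clears the view again.
experience = [[
    ("obstacle_far", "forward", "obstacle_near"),
    ("obstacle_near", "turn", "clear"),
    ("clear", "forward", "obstacle_far"),
]]
model = learn_transitions(experience)
```

The point of the sketch is the feedback loop in `run_blind`: nothing external enters the system, yet the robot still "sees" an obstacle approaching and turns in time, because its predictions stand in for its senses.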
K has not proved that Germund Hesslow’s hypothesis on human thought (see below) is correct. However, it shows that the principle is a working possibility – K behaves as if it were making considerations based on an inner mental world. Hesslow admits that K’s inner world is primitive compared with a human’s. However, it could be expanded. For example, K could be made able to report its predictions.
However, an important component of human consciousness is self-awareness. The inner world belongs to the individual and no one else. If K were equipped with self-awareness, we would face a moral problem – there is no declaration on the rights of the machine...
Germund Hesslow does not doubt that it will be possible to build conscious machines in the future.
He admits that there could be ethical problems if we equip robots with an equivalent of values and sensors to identify damage and mechanisms to avoid this. In Germund Hesslow’s view, this would mean that the robot would be able to feel pain.
Is this really possible? Would it not just be a zombie that plays up and shouts “ouch!” when you kick it on the shin, deceiving us by looking as if it felt something?
“No, because in my view, pain is the perception of injury in combination with a very strong desire to avoid injury. It is easy to understand how a normal robot works. It receives input and responds to it. However, K’s behaviour is inexplicable unless we presume that it has an inner world”, says Germund Hesslow.
The crucial point is of course what we mean by an ‘inner world’ and how we believe that one comes about. However, if it is possible to build a human machine, the natural follow-on question is of course “mustn’t humans also be machines?”
Text: Göran Frankel
Germund Hesslow’s hypothesis about the processes that lead to the creation of an inner reality or consciousness in humans is based on three mechanisms, or abilities:
• The ability to simulate, or imagine, behaviour
“There are a number of studies that show that the same nerve cells are activated when a person carries out an action as when he or she imagines that he or she is carrying out the action; it is only the final signal that executes the actual action that is missing when we simulate or imagine something. It could be compared to driving a car without releasing the clutch.”
• The ability to simulate sensory impressions
“When you imagine that you are looking at a tree, similar processes take place in the brain as when you really look at a tree. The difference is that the signal to the brain’s sensors comes from the inside, from the human imagination based on experience, rather than from a real tree.”
• The ability to link the imagining of behaviour with its sensory consequences
“If I hit something with a hammer, a loud noise is heard, and if I sit down, I feel the pressure beneath me. Or I can sit here and imagine that I get up and go to the door. This is a simulation in which I see the door and the handle before me, which could make me open the door. In this way, long chains of behaviour and consequences can be simulated.”
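The three mechanisms above can be sketched together in code. The names and the little table of consequences below are invented for the illustration – they are not taken from Hesslow's work – but the structure follows his description: the same motor preparation runs whether we act or merely imagine (only the final execution signal is gated, like an engine running with the clutch held down), and imagined actions can be chained with their imagined sensory consequences.

```python
# A hedged sketch of the three mechanisms above (illustrative names and
# data, not from Hesslow's papers): acting and imagining share the same
# preparation; only the execution step differs.

def prepare(action):
    """Motor preparation: activated both when acting and when imagining."""
    return f"motor_plan:{action}"

def sensory_consequence(action):
    """Learned link from an action to the percept it usually produces."""
    consequences = {"hit_with_hammer": "loud_noise",
                    "sit_down": "pressure_below",
                    "walk_to_door": "see_door_handle"}
    return consequences[action]

def act(action, executed_log):
    """Real action: preparation plus a released execution signal."""
    plan = prepare(action)
    executed_log.append(plan)              # the execution signal goes out
    return sensory_consequence(action)     # real sensory feedback

def imagine(action):
    """Simulation: same preparation, but the execution signal is withheld."""
    prepare(action)                        # clutch held down
    return sensory_consequence(action)     # simulated sensory feedback

def simulate_chain(actions):
    """Mechanism 3: chain imagined actions with their imagined percepts."""
    return [(a, imagine(a)) for a in actions]
```

Calling `simulate_chain(["sit_down", "walk_to_door"])` yields a sequence of imagined action-percept pairs without anything happening in the world – a crude analogue of sitting still while mentally walking to the door.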