Eye, robot: the power of an AI’s gaze


Sadly, the 21st century has turned out to be far less technologically advanced than was once envisaged by our ancestors, who predicted weather-controlling machines, life underwater, and nursing homes on the moon.

However, a few futuristic aspects of our modern lives were successfully predicted, two of which are the use of artificial intelligence (AI) and robots. In fact, AI is so seamlessly integrated into our lives that its use has become quotidian, as demonstrated by well-known voice assistants such as Siri, Cortana and Alexa.

Scientists have developed AI systems by essentially training computers to ‘think’ in the same ways that humans do, through machine learning and artificial neural networks. Characteristics that were once exclusive to humans, such as creativity and logic, are now being imitated by technology.


More recently, these qualities, distinctive of AI, have been applied in healthcare, banking and smart cars. It’s clear that the use of AI has had a beneficial impact on society and has consequently altered human behaviour. But can the same behaviour-changing influence be claimed for the field of robotics?

Unbeknownst to many, myself previously included, AI and robotics are not really the same thing. The two fields overlap in ‘artificially intelligent robots’; however, robotics is defined more by physicality and autonomy. To put it simply, robots follow pre-set instructions, whereas AI programs imitate human thought processes.

Crucially, the physical nature of robots has led to recent breakthroughs in the study of human behaviour. These studies use humanoid robots: machines specifically built to resemble the human body. It is hoped that these robots will be used in many contexts, including research, caregiving, education, manufacturing and even the exploration of outer space. Yet there is a peculiar obstacle in the use of humanoid robots…


A feeling known as the ‘uncanny valley’ can occur when people look at objects that resemble human beings, such as dolls, animations and, of course, humanoid robots. The phenomenon is said to elicit an eerie, uncomfortable feeling in the observer. There are numerous theories as to why this cognitive response is evoked, one of which is that viewing a humanlike robot triggers an inherent fear of death, as it reminds the onlooker that they could be replaced by said robot. Recently, scientists have uncovered that the uncanny valley is more than just an emotion.

Recent research from the Istituto Italiano di Tecnologia in Italy has revealed that making eye contact with a robot can alter our decision-making. During the study, participants played a simple video game, deciding whether or not to allow a car to crash, against a humanoid robot seated opposite them. The robot’s presence allowed researchers to establish that similar neural mechanisms respond whether we make eye contact with a robot or with another human.

In this experiment, mutual eye contact represents a simplified version of the ‘gaze’, a psychoanalytic concept describing the anxious state of self-awareness that arises from realising one can be looked at by others. Professor Agnieszka Wykowska, principal author of the study, told Science Robotics that the “gaze is an extremely important social signal that we employ on a day-to-day basis when interacting with others.”


Throughout the game, the robot would either look towards or away from volunteers, whose brain activity was measured by electroencephalography (EEG). Wykowska concluded that “the human brain processes the robot gaze as a social signal,” consequently affecting participants’ decisions “by delaying them, so humans were much slower in making the decisions in the game.” A robot’s gaze can therefore trick the brain into thinking that it is partaking in social interaction.

These findings hold significance for how we implement humanoid robot technology in the future. Wykowska argues that we need to investigate further the specifics of when robots elicit this response, in order to “decide in which sort of context [humanoid robotics] is desirable and beneficial for humans and in which context this should not occur.” For example, a robot in a caregiving role, reminding an elderly person to take medication, may be very effective precisely because of the social behaviours it displays, making the patient more likely to comply. Thus, looking to the future, it’s fair to say that it’ll be a while before nursing homes make it to the moon, but perhaps not too long until robots are working in them.

