
AI? Robot? Am I Confused?

Where does robotics end and A.I. start?

There has been a lot of media attention devoted to both robots and artificial intelligence in the last year, and the coverage can often be a little confusing. There is a huge overlap between these areas, which has led to a lot of misunderstanding of the two technologies. So, let’s start with a definition of what each technology is:

Artificial Intelligence

Lexico.com, a collaboration between Dictionary.com and Oxford University Press, defines artificial intelligence as follows:

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

And the Cambridge Dictionary defines it as:

“The study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognize pictures, solve problems, and learn”

So, both seem to agree, in a broad way, that artificial intelligence is about creating systems that have a range of characteristics similar to a human mind, systems that can process data requiring both context and perception, to some degree. Do note that we are talking about a human mind, rather than a human brain. We want the results we would expect when we “think”, but we are not trying to create a replica of how the human brain functions. We just want the same end results, regardless of how the system reaches them.

One of the key aspects of artificial intelligence systems is their ability to be trained, or “to learn”. The ability for a system to learn is often referred to as Machine Learning or Deep Learning, and some have likened these new systems to statistics and statistical modelling. While there are some similarities, it would be unfair to say that machine learning is just statistics. Statistical modelling uses statistics and data to build a mathematical approximation of the world, according to the dataset. We can then use these statistical models to gain insight into that particular dataset.

Machine learning is a method of showing a computer program how to learn like a human. As our computational power has grown, it has become possible to use datasets so huge they would be beyond the capability of a normal person to perceive the patterns in the data. In practice, this means using a huge dataset of a particular type, often with subsets of data that reinforce what we wish the computer to perceive, along with negative examples. These are classified as such, and the computer learns to find patterns in the positive examples that are not present in the negative ones. This is a gross oversimplification but will serve as a basic explanation. Data is typically cleaned and categorised before being fed to our computer program. When we speak about artificial intelligence, we are often speaking about a pretrained system that has been through millions of learning steps. We then provide new data from outside the original training set, and the AI gives us a response based upon its purpose and learnt experience. What becomes key in this is that the data provided during the learning process is correct for the task to be learnt.
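To make that train-then-query pattern a little more concrete, here is a minimal sketch using scikit-learn (my choice of library; the text above does not name one). A classifier is fitted on labelled positive and negative examples, then asked to categorise a data point from outside its training set:

```python
# A minimal sketch of supervised learning with positive and negative
# examples, using scikit-learn (assumed here; not named in the text).
from sklearn.linear_model import LogisticRegression

# Toy training data: each example is [feature_1, feature_2],
# labelled 1 (positive example) or 0 (negative example).
X_train = [[5.0, 1.0], [4.5, 1.2], [0.5, 3.0], [0.8, 2.7]]
y_train = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # the "learning" step

# New data from outside the training set: the model responds based
# on the patterns it learnt, just as described above.
print(model.predict([[4.8, 1.1]]))   # -> [1]
```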

Let me give you a simple example:

The University of Washington ran an experiment to teach an artificial intelligence system how to identify pictures of wolves and separate them from pictures of huskies. Not an easy task, as wolves and huskies are very similar. The learning stage involved supplying a large number of categorised pictures of wolves and huskies to the learning process. The system was then tested using an independent set of pictures, which is the usual practice in these situations. It achieved a 90% accuracy rate, and the researchers were more than happy with the results. All seemed good, however…

When an explainer routine was run, a routine that shows how the algorithm made its determination, it became very clear that the system’s decisions were based primarily on the backgrounds of the pictures. The researchers discovered that their wolf dataset primarily had snowy backgrounds and their husky dataset didn’t. Rather than creating a wolf/husky categoriser, they had built a snow detector.
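The explainer developed in that University of Washington work was LIME. As a rough sketch of how such a routine is used (the classifier here is a toy stand-in, not the study’s real model), the `lime` Python package can highlight which regions of an image drove a prediction, which is exactly how a snowy background would be exposed:

```python
# A sketch of running the LIME image explainer over one prediction.
import numpy as np
from lime import lime_image

# Hypothetical stand-in classifier: takes a batch of HxWx3 images and
# returns [P(husky), P(wolf)] per image. The study's real model would go here.
def predict_proba(images):
    brightness = images.mean(axis=(1, 2, 3))  # toy rule: "snowier" = wolf
    return np.stack([1.0 - brightness, brightness], axis=1)

image = np.random.rand(64, 64, 3)  # placeholder for one test picture

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_proba, top_labels=2, num_samples=200)

# Extract the image regions that most influenced the top prediction.
# In the study, these turned out to be the snowy background,
# not the animal itself.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
```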

So please bear in mind that the AI systems you encounter are only as good as the data they were trained on.

Robots

Lexico.com defines a robot as:

“A machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer.”

And the Cambridge Dictionary defines a robot as:

“a machine controlled by a computer that is used to perform jobs automatically”

Again, both sources agree on the fundamental elements of a robot, as far as this discussion is concerned. A robot is a physical machine that can be programmed to complete a series of actions without human intervention. This very neatly describes the typical industrial robot used on production lines to create anything from cars to smartphones. But these are not really the types of robots people immediately think of when you ask them about robots. People’s most common view of robots is more akin to the characters we see in science fiction films, like C-3PO and R2-D2, or Sonny from I, Robot. The most typical association people have with robots is one of fictional machines that will exist in the future; however, the first iterations of these machines are appearing now. They have limited functionality but offer a machine more in line with the general public’s perceptions. These types of robot are typically referred to as Social Robots.

A social robot is a physical machine that can be programmed to interact with users through common approaches such as speech. Furthermore, a social robot should conform to the expectations of the social role into which it has been placed. Social interaction for these types of robots is typically cooperative, requiring the robot to understand the context it is in and the intent of the interaction, something that is currently not possible. For this reason, many social robots are remotely controlled, even if only partially, to allow them to mimic complex behaviours. Much of how interactive or “alive” we perceive a robot to be is due to our own tendency to assign human or ‘alive’ characteristics to things that do not actually have them. On many occasions, when working with Pepper, the SoftBank Robotics social robot, we have encountered people who interpret Pepper’s random movements as a response to some action they have taken. This tendency can be used to improve the acceptance of social robots in many situations.

When we interact socially, we are taking in a huge amount of sensory data and evaluating it based upon our personal experiences and the social situation we are in. For our social robots to be able to interact, they have to be able to process sensory data about their environment and about how they are being interacted with. For example, if we wish a social robot to react when someone approaches, it should understand enough about its environment to detect movement and identify whether the approaching object is something it can interact with. If a social robot can detect movement and has depth perception, it can determine that something is moving towards it, or away from it. If, when the object is close, it can run routines to perform face detection, this may be enough to trigger a greeting or basic interaction. After all, something with a human face has approached it. We would normally react to this, and our social robots should too.
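As a rough sketch of that detect-then-greet logic (the robot and depth-sensor interfaces here are hypothetical; the face detection uses OpenCV’s stock Haar-cascade detector):

```python
# A minimal sketch of the approach-and-greet behaviour described above.
# `robot.say` and the per-frame distance estimate are hypothetical
# interfaces; the face detection itself is OpenCV's real API.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

GREETING_DISTANCE_M = 1.5  # assumed threshold for social distance

def on_new_frame(frame, distance_m, robot):
    """Called for each camera frame, with the estimated distance (in
    metres) to the nearest moving object from a depth sensor."""
    if distance_m > GREETING_DISTANCE_M:
        return  # still too far away to be worth interacting with
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        robot.say("Hello!")  # hypothetical speech call: trigger a greeting
```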

However, the amount of sensory data and cues that we use to trigger our interactions and determine how we react is enormous. Just think about the simplest interaction you have with another person. First, we need to determine that the person is trying to interact with us (perhaps through eye contact); then we evaluate their body language, facial expression and perceived age, or, if they speak, we assess their tone of voice, and the list goes on. All of this is done in a very short space of time and is based upon our previous experiences and our understanding of the social situation we are in. Attempting to replicate this within a robot is extremely difficult, and this is where artificial intelligence can be combined with robotics to create a more complete and fulfilling experience.

If we look at the general public’s expectations of a social robot, we will see that those expectations, fuelled by the media and science fiction, are far higher and more complex than the currently available technology allows. Some social robotics researchers, such as Professor Tony Prescott from the University of Sheffield, have chosen to take a different approach to social robotics. In his case, he looked at the current field of social robotics and saw that today’s robots do not even have the intelligence of our pets. His thoughts then led to the idea of creating a social robot based upon mimicking how a pet would react. The robot he and his team developed, MiRo, uses a technique called biomimetics to create an engaging and familiar experience of being around a pet, such as a cat or dog. This is a highly social and realistic experience, built on a three-stage simulation of a mammalian brain, again providing the cues for us to assign the behavioural and emotional content we would expect onto the robot.

Our ability to superimpose an emotional state onto an inanimate machine, such as a robot, is based upon our collected experiences of social interactions. That ability relies on our memories, a facility that almost no robot has. Our current robots can be programmed to have a default, and even randomised, reaction to sensory input, but once turned off, the collected experience of those interactions is lost. Being able to draw on collected experiences is key to smooth, natural social interaction, and this is where we can see the combination of artificial intelligence and social robotics.

Social Robots, Artificial Intelligence and Other Avatars

Since the early 1990s, artificial intelligence researchers, working with roboticists, have been creating robots that engage on a social level. With the increases in computing power, the acceptance of cloud-based computing services and advances in robotics technology, we are now seeing mobile robots that are integrated with a wide range of artificial intelligence services. When we interact with a social robot, we can expect a range of its skills to be A.I. based. Understanding the intent of spoken commands and phrases, evaluating faces, and determining mood are all areas that use A.I. in today’s social robots. As we move forward, A.I. systems that can analyse voice stress patterns to assess emotional state, enhanced verbal and contextual understanding tools, and even vision analysis systems are being added to our robots to allow them to better assess the environments and social interactions they are party to.

These artificial intelligence systems are trained using millions of data points, giving the social robots we interact with access to a huge bank of ‘experiences’ on which to base their reactions and responses, further enhancing the interactions we experience. Social robots have become an easily accessible, readily accepted way to expose people to artificial intelligence in a manner that both engages and feels very natural. However, this also leads to issues with expectations. If I see a humanoid robot mimicking human behaviour, I naturally assume it has the same capabilities as a human; something we are far from achieving.

Artificial intelligence systems are not just tied to robots. Most people’s initial exposure to A.I. is via their mobile phone, using Siri or Google Assistant, or a home assistant such as Amazon’s Alexa. Many websites now incorporate A.I. based chatbots to assist with questions or to allow their customers to access specific information. As with all emerging technology, some work very well, while others can be somewhat frustrating, but with every additional interaction and research project, our journey to a natural interface with technology moves forward.

So, where does the confusion between these fields come from? Primarily, it is due to our expectations of the social robots around us. As we expect more from the robots we encounter, the line between the robot and the artificial intelligence systems supporting it becomes more blurred. When we ask questions of a robot, is it responding, or are we really chatting with the supporting A.I.? It is very understandable how the technologies have become confused for people outside of these two complex fields.

As an example that will hopefully make this a little clearer, imagine a house in the near future with smart appliances, lighting and heating, where A.I. works to manage all of these systems and provides a natural interface for the homeowner. Perhaps the primary home interface is via a social robot, acting as the ‘face’ of our smart system, or a screen-based avatar such as the ones recently released by Robots of London, with their Animatars product range. These could even be projected 3D holograms, again a technology that is commercially available in the Animatars product range. Our home assistant may also be our mobile assistant, delivered to our smartphone and even accessible from our smart vehicles. A personal artificial intelligence companion, available wherever and whenever we require it. The artificial intelligence is the core component of these systems, with the delivery platform being somewhat irrelevant, whether it is a screen, mobile phone or robot.

Before we get too carried away, we are many years away from being able to hold a natural, flowing conversation with an artificial intelligence. There are many nuances in language understanding that we need to address, and huge steps in computational power and programming techniques to be made. We are many years away from the supportive companions we see in science fiction. In fact, this type of A.I. is sometimes referred to as Artificial General Intelligence, to make the distinction: intelligence intended to be more like a human intellect, but also a far harder goal to achieve.

The robots are coming, but we do need to accept that they won’t all be like our science fiction characters C-3PO and R2-D2; some will be screens, some will be phones and yet others may just be a disembodied voice. Whatever form they take, the experiences and research happening today, with physical social robots and A.I. systems, will shape the artificially intelligent support systems we have in the future.