VIEWPOINT

Trustworthy AI - Why Does It Matter?

11/19/2019
By Nathan Michael

All technology demands trust, especially technology that is new or unprecedented. We’ve seen it across time for disruptive technologies: the combustion engine, the airplane and the automobile all required some element of trust for society to adopt and embrace the new system. Trust that the technology would be reliable. Trust that the technology would be safe. Trust that the technology would be used appropriately and contribute to the betterment of society.

Such is the case for artificial intelligence and robotics. From a science and engineering perspective, artificially intelligent robotic systems are simply engineered systems. No different from a car or a bridge, these systems are based on the theory and underlying principles of math and science. Therefore, like all other engineered systems, AI systems must adhere to certain performance expectations for us, as humans, to begin to trust them. Trust is about the system operating as expected, in a consistent manner, time and time again.

The more that the system is perceived to reliably work as expected, the more trust we build in it.

Conversely, if the system starts behaving erratically or failing unexpectedly, we lose trust. This response makes sense and feels obvious. What is more nuanced about trust in AI systems is that if the system works as designed, but in a manner that does not align with human expectations, we will tend to distrust it. This observation implies that trust in AI requires not only a system that performs as designed with high reliability, but also a system that human observers can understand.

The role of human expectations in the trust of artificial intelligence stems from the fact that the human understanding of correct performance is not always technically right.

This is because human expectations, intuition and understanding do not always translate to optimal performance. People tend to optimize their behavior to conserve effort, based on the innate biological drive to conserve energy, whereas artificially intelligent systems are engineered to optimize their behavior against defined performance criteria. It follows that when an AI system is built to optimize for something other than the conservation of energy, such as maximizing speed or accuracy, misalignments arise between the robot’s behavior and what a person would consider the correct action.
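To make that misalignment concrete, here is a minimal sketch, with invented paths, costs and weights, of how the same planner can choose differently depending on what it is asked to optimize:

```python
# A hypothetical sketch: the same planner picks different actions depending
# on what it is asked to optimize. Paths, costs and weights are invented.

# Each candidate path is (name, travel_time_s, energy_J)
candidates = [
    ("gentle arc", 12.0, 40.0),    # slower, but conserves energy
    ("aggressive cut", 7.0, 80.0), # faster, but costly in energy
]

def best_path(candidates, w_time, w_energy):
    """Return the path minimizing a weighted cost of time and energy."""
    return min(candidates, key=lambda p: w_time * p[1] + w_energy * p[2])

# A person conserving effort behaves like the energy-weighted planner...
print(best_path(candidates, w_time=0.1, w_energy=1.0)[0])  # "gentle arc"
# ...while a speed-optimized system picks a path the person may find jarring.
print(best_path(candidates, w_time=1.0, w_energy=0.1)[0])  # "aggressive cut"
```

Neither choice is wrong. The two cost functions simply encode different priorities, and an observer who expects the energy-conserving behavior will be surprised by the speed-optimized one.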

The goal of developing AI systems that humans can understand, and therefore trust, is captured in the concept of “Explainable AI.” Explainable AI, sometimes called “Interpretable AI” or “Transparent AI,” refers simply to AI technology that can be easily understood, such that a human observer can interpret why the system arrived at a specific decision. Establishing human-operator expectations is particularly challenging when working with resilient intelligent robotic systems, because these technologies are built to introspect, adapt and evolve to yield increasingly superior performance over time. To develop AI systems humans can understand, then, we must consider how to enable the operator to work with the system and to understand how the system is improving through experience.
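As a rough illustration of what explainable output can look like, consider a hypothetical system that reports, alongside its decision, which factors drove it. The factors, weights and readings below are invented for illustration:

```python
# A hypothetical sketch of explainable output: the system reports the
# per-factor contributions behind its decision. Weights and readings are
# invented; a real system would learn or derive them.

weights  = {"obstacle_proximity": -2.0, "battery_level": 1.5, "wind_speed": -0.5}
readings = {"obstacle_proximity": 0.9, "battery_level": 0.4, "wind_speed": 0.2}

# Score each factor's contribution, then decide and explain.
contributions = {k: weights[k] * readings[k] for k in weights}
decision = "proceed" if sum(contributions.values()) > 0 else "hold"

print("decision:", decision)  # "hold"
for factor, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {factor}: {c:+.2f}")  # the explanation, largest influence first
```

An observer who can see that obstacle proximity dominated the decision no longer has to guess why the system chose to hold.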

This concept is addressed through the development of interfaces, which, in the context of artificial intelligence, are capabilities that enable machines to engage effectively with human operators. Effective interfaces not only help humans understand the behavior of robots, but also allow a robot to account for an operator’s needs.

Interfaces allow humans to build trust in robotic systems, allow human interaction with the robot to be personalized or guided, and allow the robot to augment the user’s ability.

The significance of effective interfaces becomes evident when considering why it is important to build trust in AI systems and how increased trust will translate to increased reliance on robotic systems. With increased reliance on AI, humans will be able to offload lower-level tasks to these systems in order to focus on more important, higher-level processes. In doing so, artificial intelligence can and will be used to amplify, augment and enhance human ability.

Development of these interfaces is already underway. Today, we are developing robots that can create models that allow them to intuit some of a user’s intentions. These models make it possible for humans to engage with the robot and to achieve much higher levels of performance with less effort. When the operator recognizes this behavior, the operator starts to grow more confident that the robot “gets” them, that the robot understands what it is that they want to achieve and is working with them to achieve a common objective.
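One simple form such a model could take, sketched here with hypothetical goals, commands and likelihoods, is Bayesian intent inference: the robot maintains a belief over which goal the operator is steering toward and updates it with each observed command:

```python
# A hypothetical sketch of intent modeling via Bayesian inference.
# Goals, commands and likelihoods are invented for illustration.

belief = {"inspect_tower": 0.5, "return_home": 0.5}  # uniform prior over goals

# P(observed command | operator's goal), assumed known here
likelihood = {
    ("ascend",  "inspect_tower"): 0.8, ("ascend",  "return_home"): 0.1,
    ("descend", "inspect_tower"): 0.2, ("descend", "return_home"): 0.9,
}

def update(belief, command):
    """One Bayes update: posterior is proportional to likelihood times prior."""
    posterior = {g: likelihood[(command, g)] * p for g, p in belief.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = update(belief, "ascend")
belief = update(belief, "ascend")
print(belief)  # belief concentrates on "inspect_tower" after two climbs
```

After a few commands, the robot can begin assisting toward the inferred goal, which is what makes the operator feel that the system “gets” them.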

The relationship evolves into one of acting as a team, rather than the operator simply using the robot as a tool.

This relationship becomes particularly important as we consider multi-robot systems, swarming and teaming. A human operating a large group of robots will have difficulty perceiving and understanding everything that is happening while several robots simultaneously perform complex actions. Given the elaborate nature of the operation, it is possible for an operator to make a mistake, such as asking the system to perform a task counter to what they are actually trying to achieve. A system that can engage in intent modeling of the user will serve to improve and augment the overall performance.

When an artificially intelligent system models the intent of an operator’s desired task, it becomes possible for the system to anticipate, mitigate and adapt in order to overcome user errors, including problematic, unsafe and suboptimal requests. This modeling requires no great insight by the system into what the operator wants, but rather insight into how the operator has engaged in the past.
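A minimal sketch of that idea, using an invented command history, is to flag any request that falls far outside how the operator has engaged before:

```python
# A hypothetical sketch: catch a likely operator error by comparing a new
# request against past behavior, without modeling what the operator "wants".

from statistics import mean, stdev

# Invented history: altitudes (meters) this operator has commanded before
past_altitudes = [30.0, 32.0, 28.0, 31.0, 29.0, 33.0]

def screen(request, history, threshold=3.0):
    """Ask for confirmation if a request is far outside past behavior."""
    mu, sigma = mean(history), stdev(history)
    if abs(request - mu) > threshold * sigma:
        return f"confirm: {request} m is unusual for this operator"
    return "accept"

print(screen(31.0, past_altitudes))   # "accept"
print(screen(310.0, past_altitudes))  # likely a typo, so ask before executing
```

A real system would use a richer model than a single statistic, but the principle is the same: past engagement defines what counts as a surprising request.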

It’s interesting to observe how these human-robot interactions affect trust, because when people interact with systems they understand, and with systems built to model their intent, these characteristics make a tremendous difference. It’s the difference between a person walking up and engaging with a system immediately and a person requiring extensive training to learn how to interact with that system and its nuances.

When the system adapts to the experience of the individual, it enables anyone, having never worked with it before, to engage with it and very quickly perform as an expert. That ability to amplify the expertise of the operator is another mechanism by which trust is earned.

One of the greatest challenges with artificial intelligence is the overwhelming impression that magic underlies the system. But it is not magic; it is mathematics.

What is being accomplished by AI systems is exciting, but it is also simply theory, fundamentals and engineering. As the development of AI progresses, we will see, more and more, the role of trust in this technology. Trust will play a role in everything from the establishment of reliability standards, to the improvement of society’s understanding of the technology, to the adoption of AI products in our day-to-day lives, to discussions of the ethical considerations.

Every member of society has a responsibility to contribute to this discussion: industry, academia, researchers and the general public all have voices to be heard, not only on what the future of AI could look like, but on what it should look like.

Nathan Michael is chief technology officer of Shield AI.
