Olivier Desbiey, Senior Foresight Analyst

Fostering trust with responsible AI

Once just an engineer’s dream, Artificial Intelligence (AI) is now part of our everyday lives, while also holding up a mirror to contemporary society. In fact, AI often reproduces our own mistakes and biases, just as it transforms our behaviors and conceptions. But however powerful it may be, this technology must first earn our trust before it can become a permanent and acceptable feature of our society. That is why AXA works through Impact AI to define a responsible form of AI and to share the concrete tools needed for its roll-out.
Mar 27, 2019

Machine learning, data centers, algorithms: though just a few decades ago these words would have sounded more at home in a sci-fi movie, today they have become part of our everyday vocabulary. This simple fact shows just how much AI and algorithms are currently transforming the way we work, shop and entertain ourselves in today’s world.

While artificial intelligence is already ubiquitous in a wide range of everyday activities, responsibility and ethics will play a key role in building a form of AI that benefits everyone. As underlined by constitutional law specialist Lawrence Lessig during the release of the AXA Research Guide on AI and fostering trust: “When talking about artificial intelligence, there is a dystopian future and a utopian future. […] And there is one fundamental decision that will determine if that future is utopian: whether we have a governing structure that is capable of making sure that the benefits of this future are shared by all of us, as opposed to owned by the tiniest fraction of us”.

Cécile Wendling, Group Head of Foresight at AXA, sums up the issue in the following terms: “How can we ensure that AI will respect our Western democratic values, such as individual rights and civil liberties? We urgently need to establish an ethical compass in this area; what will it be?”   

AI: from a researcher’s dream to a market reality

And yet, artificial intelligence is not a new concept. As Antoine Petit, President and CEO of CNRS, reminds us: “deep learning is an idea that has been around for 30 years [...] Advances in computing power, and especially the proliferation of data generated by the rise of the internet, have combined to give AI the performance we see today.” In other words, with the development of the digital society, the data volumes generated by these technologies and advances in computing hardware, artificial intelligence has morphed from a researcher’s dream into a market reality.

And it is in the real world – not in the lab – that AI is facing its biggest challenges. “Research on artificial intelligence is classified into different use cases”, recalls the President and CEO of CNRS, citing the example of self-driving vehicles. “If your image recognition algorithm is 99.9% reliable, that may be sufficient for a laboratory test. But if you put this same technology in a self-driving vehicle, that means the car has a 1 in 1,000 chance of not recognizing a street sign—and that’s far too high.” It goes without saying that putting this car on the road without first improving the algorithm would be unthinkable. As Antoine Petit concludes: “We are forced to outdo ourselves when we grapple with the real uses of our tools. As researchers, applying AI technology pulls us towards perfection”.
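To put the quoted arithmetic in perspective: a 99.9% per-sign recognition rate means a 1-in-1,000 miss, and those odds compound over a journey. Below is a minimal back-of-the-envelope sketch in Python; the 99.9% figure comes from the quote above, while the number of signs per trip is a purely hypothetical illustration.

```python
# Why a 1-in-1,000 per-sign miss rate is "far too high": misses compound.
per_sign_accuracy = 0.999   # the 99.9% reliability quoted above
signs_per_trip = 200        # hypothetical number of signs seen on one journey

# Probability of missing at least one sign over the whole trip
p_miss = 1 - per_sign_accuracy ** signs_per_trip
print(f"P(at least one missed sign per trip) = {p_miss:.1%}")  # ~18.1%
```

Under these assumptions, nearly one trip in five would involve a missed sign, even at lab-grade accuracy, which is precisely why real-world deployment demands more than laboratory performance.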

"As researchers, applying AI technology pulls us towards perfection”. Antoine Petit, President and CEO of CNRS

From innovation to acceptability

For an insurer like AXA, artificial intelligence will transform the way it helps customers achieve a better life. “Artificial intelligence will impact insurance in several ways”, observes Cécile Wendling. For the AXA Group Head of Foresight, AI first enables a better understanding of risks and facilitates the management of the complex technical data that drives insurance activities. It also allows for an enhanced user experience. “If damage occurs outside the hours of a traditional call center, for example, customers can contact a voice assistant to get instructions on the first steps to take”, explains Cécile Wendling, who also leads the AI working group within the Impact AI initiative.

But AI will also transform the insurance industry on a broader scale, notably through the notion of insuring artificial intelligence and the solutions that employ it. The case of self-driving cars, illustrated by the partnership between AXA and Navya, is revealing: as the technology advances and brings innovative solutions to users, society must also come to accept it, which means addressing a number of challenges. How can we protect privacy while collecting the data algorithms need to work properly? How can we ensure a form of AI that serves everyone? How should we think about relationships between people and machines in terms of collaboration?

The challenge of trust

For Raja Chatila, Director of the Institute of Intelligent Systems and Robotics (ISIR) at Pierre and Marie Curie University in Paris, the equation is simple: “If we want a responsible form of AI, we need responsible research”, he explains. This means that a multidisciplinary approach combining social science and hard science is fundamental for guaranteeing ethical practices and fostering trust. However, this notion of responsibility is not so easy to define. Marcin Detyniecki, Head of Research and Development at AXA, has worked on artificial intelligence for many years. “As researchers in our labs, we had only one thing in mind: it had to work”, he remembers. “In real life, being the most accurate or precise is no longer the only expectation. Now we need to think about the social biases and values within our data sets. Do the data we use to teach our algorithm reproduce social inequalities? Could the algorithm then reinforce these inequalities? On which variable does the machine base its decision? These questions may be new for a math researcher, but they are essential.”
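One way to make such questions operational is to compute simple fairness indicators over a data set. The sketch below is illustrative rather than a description of AXA’s actual tooling: it checks demographic parity, i.e. whether the rate of positive decisions differs across groups defined by a sensitive variable. All records and field names are hypothetical.

```python
# Illustrative fairness indicator: demographic parity of decisions by group.
# Records and field names are hypothetical, not AXA's internal data or tooling.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    """Share of positive decisions within one group."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rates = {g: approval_rate(g) for g in ("A", "B")}
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
# A large gap is a signal that the data or the model may be reproducing
# a social inequality and deserves closer inspection.
```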

“Now we need to think about the social biases and values within our data sets.” Marcin Detyniecki, Head of Research and Development at AXA

Dominique Cardon, Director of the Medialab at Sciences-Po, explains why this reflection is necessary: artificial intelligence turns a machine into a perceptive system, meaning that what it perceives, its inputs, shapes the machine itself and how it produces information. “In this way, AI maintains a permanent relationship with society. It analyzes our behaviors in order to adapt its responses.”

Establishing values when designing and rolling out artificial intelligence systems is therefore a central question: it is how we embed in algorithms the social values we want to see in the world. But how can we produce these values, and how can we ensure their inclusion within AI algorithms?


Tools available to everyone

In France, the strong message delivered by President Emmanuel Macron in late March 2018 underlined the legitimate role of the French government and European governance bodies in defining collective choices and shared values for AI. Faced with the sheer scale of the digital economy, establishing a European vision of AI ethics will be a tall order. But as the French President maintained, trust must also be fostered through private initiatives.

It was in response to this call that the Impact AI group was formed. For its members, no matter what individual initiatives businesses enact, the questions raised by artificial intelligence can only be answered through collective action. As Cécile Wendling emphasizes: “Developing responsible AI, finding the right tools to identify and correct bias in a data set: all that has a cost, and we want everyone to have access to what we as corporations have put in place. That is the purpose of Impact AI as a sharing initiative, and of the toolkit we developed.”

“Developing responsible AI, finding the right tools: all that has a cost and we want everyone to have access to what we as corporations have put in place.” Cécile Wendling, AXA Group Head of Foresight

This toolkit contains mathematical and technical tools, such as indicators that provide a better understanding of AI decisions, as well as governance tools (ethics panel, charter, processes, etc.) that can be implemented within any organization. Finally, it comprises training opportunities (university courses, MOOCs, etc.) and documentation (scientific articles and publications) to help people master the topic.
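To give a concrete sense of what an “indicator that provides a better understanding of AI decisions” can look like, here is a short sketch of permutation feature importance, a common model-agnostic technique. It is offered as an illustration under stated assumptions (synthetic data, scikit-learn), not as a description of the actual contents of the Impact AI toolkit.

```python
# Illustrative interpretability indicator: permutation feature importance.
# Synthetic data; one common technique, not necessarily the toolkit's own.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model's decisions lean heavily on that variable.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {score:.3f}")
```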

Antoine Petit, President and CEO of CNRS, applauds the initiative: “We must not oppose ethics and business. In order to serve and be accepted by everyone, AI must integrate the community’s values from the design phase.”

Enacting our commitments

Aware of the fundamental role of trust in the insurance business, AXA regularly leads reflection on the ethics of artificial intelligence. This reflection is driven by the Data Protection and Ethics Panel, a group of independent experts led by Cécile Wendling, which has met twice a year since 2015 with AXA leaders to help the Group fine-tune its position in the debates regarding data and algorithms. One concrete example of the panel’s work: the commitment not to sell personal data belonging to AXA customers. Another aspect of the Group’s proactive efforts in this area is the funding of many projects by the AXA Research Fund, which fosters trust in artificial intelligence through research. Among others, these include research projects focused directly on the definition and challenges of responsible AI, such as the project led by Dr. Sarvapali Ramchurn: “My goal is to develop a portion of the fundamental technology that will ensure a safe and responsible form of AI.”


“Through this initiative, we have put in place a number of internal processes that enable us to pursue a responsible vision of Artificial Intelligence”, explains Cécile Wendling. Each of these “bricks” enacts the commitments made by AXA: a technical brick focused on researching and developing the right tools for the Group’s activity; a governance brick that integrates artificial intelligence verification processes into decision-making steps; and an HR brick to ensure that everyone who works with AI in any capacity receives ethics training.