
July 12, 2021

AI Regulation and the Limits of Transparency

During AXA’s Security Days – an internal event that brought together AXA Group security teams to discuss the future of security – Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin and an AXA Award recipient for AI Ethics, discussed AI regulation and the limits of transparency. An abridged version of her remarks appears below.

Original content: AXA Research Fund

One of the original definitions of intelligence is from the 19th century, when we were trying to decide which animals were intelligent or not: it's the capacity to do the right thing at the right time, which is a form of computation. It entails transforming information about context into action. This very general definition includes plants and thermostats. With this definition, it's easy to define AI as anything that behaves intelligently and that someone deliberately built. As a result, the thermostat is in, and the plant is out.

The important thing for transparency is that this deliberation implies responsibility, at least in human adults. We're only talking about human adults because only they can really be held legally accountable. Ethics is the way a society defines and secures itself, and the fundamental components of an ethical system are the moral agents. They are the ones our society considers to be responsible. Moral patients are the ones that they are responsible for, and that can include things like the ecosystem. Different societies define moral agents and moral patients differently. Some don’t recognize the moral agency capacities of women or minorities, for example. We construct our society out of the way we define these agents and these patients.

These definitions only work to the extent that moral agents are roughly peers. There will be leaders – kings or presidents – but there is still more or less equality. Even an autocratic leader can't completely determine what people do. This is the key when we're thinking about how ICT is changing society. How are we going to handle enforcement when we enact laws?

I don’t talk about trustworthy AI because AI isn't the kind of thing you trust. You can only trust peers. We can exploit the psychological sensation of trust and talk about trust in governments or robots, but it's not coherent. Trust is a peer-wise relationship in which you say, "I'm not going to try to micromanage you." When we think about enforcement, which is important for understanding transparency, we must think about peers.

It's not that we should trust corporations and governments. We should hold them accountable. That's why we want transparency. And that's what the new Digital Services Act is about: how can we make sure that we know what's going on with digital artifacts?

AI is not a peer. The ways we enforce law and justice have to do with dissuasion much more than recompense. If a robot or an AI company does something wrong, it can simply be ordered to pay a fine, but discovering and proving the problem is unfortunately very unlikely. So we must also dissuade, and dissuasion is based on what humans do or do not like. We really care about not going to jail or not losing our money, but we can't build that into AI. We can't guarantee that a system we build will feel the systemic aversion to isolation that animals do. Safe AI is modular. That's how we can make sure that we know how it works: we construct systems that allow us to trace and assign accountability.

If we let AI itself be a legal agent, it would be the ultimate shell company. But what about robots? This comes back to the work that AXA has been funding: not even your robots are your peers. I find it astounding that this needs saying. Here’s one of the robots that AXA has funded for us. This robot cannot do anything to help this gentleman get up off the couch, for example. It's too fragile. And it's amazing to me that we can even consider whether a robot could be left to take care of the elderly.

Robots are designed and owned, which means we can't even think about consensual relationships. We're not talking about trust. A robot is basically an extension of a corporation, with cameras and microphones in your home. Is that a good idea? Anthropomorphism means we see a thing we use and start thinking it's a person like us. We accommodate it because we find it convenient, and we don't worry too much about security. We start thinking of it as a member of our household. This is not necessarily a conscious decision. Some natural language processing researchers believe we cannot ethically put natural language into someone's house, because all these language-speaking toys affect how families interact with each other. They affect the language we use, because humans naturally anthropomorphize. We naturally try to accommodate. The flip side is dehumanization, which unfortunately we also do. When we feel threatened, we can decide we don't want to deal with something that's too different from us. Instead, we exclude. I got into AI ethics because I was astounded by this phenomenon and didn’t understand, as someone who built robots, why people felt they owed ethical obligations to robots. It has to do with this inclusion/exclusion process.

Does anthropomorphism interfere with transparency? Can we help humans understand that a robot is an artifact, that it's an extension of a corporation, and how to ensure it is safe to have in their homes? Or is the bias too ingrained? We've played with this idea. Just putting a bee costume on a robot alters how people understand it. We have a system for showing people how robots work, and we know it increases human understanding. Getting people to understand the goals of the robot helps them reason about its behavior, though not perfectly, unfortunately.

Digital systems can easily be made transparent. This doesn't mean every digital system is transparent; it's also easy to make them non-transparent. The point is that, since it's an artifact, we can design it and we can keep track of how we design it. What we're trying to audit is not the micro details of how AI works but rather how humans behave when they build, train, test, deploy, and monitor AI. What's essential is showing that humans did the right things when they built, tested, and deployed the software.

We're trying to maintain order within our own society. A good, maintainable system that deserves to be a legitimate product includes things like an architecture: we have an idea of what modules are in there and where they came from. As the SolarWinds example shows, you need to know the provenance of your software, but you also need to know its components.

If you're planning a building or any kind of process, you design and document its components and the processes for development, use, and maintenance. For a digital system, including one with AI, you also have to secure it: this includes the development and operation logs and the provenance of software and data libraries. If you're using machine learning, you need to be sure about the provenance of the data. All of this must be cyber-secure, and keeping it all straight is called development and operations (DevOps). It helps us write our software better. Good software companies have been doing this for decades.
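
To make the record-keeping concrete, here is a minimal sketch in Python of the kind of provenance record such a process might produce for one training run. The field names, file names, and helper functions are illustrative assumptions, not a standard and not tooling described in the talk.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Content hash, so a later auditor can confirm exactly which file was used."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def provenance_record(data_path: Path, code_commit: str, libraries: dict) -> dict:
    """Assemble one auditable record: what data, what code, which dependencies, and when."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "training_data": {"path": str(data_path), "sha256": sha256_of(data_path)},
        "code_commit": code_commit,   # e.g. a revision id from version control
        "libraries": libraries,       # pinned versions of the ML dependencies used
    }


if __name__ == "__main__":
    # Create a tiny dummy dataset so the sketch runs end to end.
    data_path = Path("claims_sample.csv")
    data_path.write_text("policy_id,claim_amount\n1,100\n2,250\n")

    record = provenance_record(
        data_path=data_path,
        code_commit="3f2a9c1",  # hypothetical revision id
        libraries={"scikit-learn": "1.4.2", "numpy": "1.26.4"},
    )
    print(json.dumps(record, indent=2))
```

The point is not the specific format but that such a record is cheap to produce at build time and lets a later auditor tie a deployed model back to the exact data, code, and dependency versions.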

But AI companies are often not doing this, and it isn’t clear why. You can document, with secure revision control, every change to the code base. It’s helpful for programmers to be able to see who changed what and why. For machine learning, you need to keep track of your data libraries and the model parameters. Unbelievably, people in machine learning often cannot go back and replicate their own results. You also need to keep logs of testing. For the last several decades, we in software have been “programming to test.” You think beforehand about how you want the system to work, and then you document whether you achieved those goals. You write the tests before you write the code.
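
As a minimal sketch of "programming to test": the tests below state the intended behavior before any implementation exists, and the code is then written to satisfy them. The pricing rule and the function premium_with_discount are hypothetical examples, not anything from the talk.

```python
# test_pricing.py -- in "programming to test", these tests are written first,
# stating the intended behavior; the implementation below is then written to satisfy them.

def premium_with_discount(base_premium: float, years_claim_free: int) -> float:
    """Hypothetical pricing rule: 2% off per claim-free year, capped at 20%."""
    discount = min(0.02 * years_claim_free, 0.20)
    return round(base_premium * (1 - discount), 2)


def test_no_history_gets_no_discount():
    assert premium_with_discount(1000.0, 0) == 1000.0


def test_discount_is_capped_at_twenty_percent():
    assert premium_with_discount(1000.0, 50) == 800.0


if __name__ == "__main__":
    # Runnable directly, or collected automatically by a test runner such as pytest.
    test_no_history_gets_no_discount()
    test_discount_is_capped_at_twenty_percent()
    print("all tests pass")
```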

If you have a system that is not changing, neither through machine learning nor through corrosion, then you may only need to test up front and then release. Otherwise, testing should be frequent, even continuous. Companies like Facebook have enormous numbers of processes running to check in real time that nothing is going wrong. Normally, testing is done in advance and monitoring and testing continue throughout deployment. Again, records should be kept for the benefit of developers as well as subsequent auditors.
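
Here is a minimal sketch of the kind of continuous check such monitoring might run during deployment; the metric, the expected value, and the alert margin are illustrative assumptions.

```python
import statistics

# Hypothetical drift check: compare a live metric (here, the approval rate of an
# automated decision system) against the value observed during pre-deployment testing.
EXPECTED_APPROVAL_RATE = 0.62   # measured during testing (illustrative value)
ALERT_MARGIN = 0.05             # how far the live rate may drift before alerting a human


def check_for_drift(recent_decisions: list) -> bool:
    """Return True if the live approval rate has drifted outside the expected range."""
    live_rate = statistics.mean(recent_decisions)
    drifted = abs(live_rate - EXPECTED_APPROVAL_RATE) > ALERT_MARGIN
    print(f"live_rate={live_rate:.3f} drifted={drifted}")  # in practice, written to an audit log
    return drifted


if __name__ == "__main__":
    # 1 = approved, 0 = declined; one recent batch of automated decisions
    check_for_drift([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
```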

If you're good with digital technology, transparency should be easy. The best AI companies should have simple, transparent processes. Yet at Google, people were brought in specifically to do ethics, and they got fired for writing a paper about the ethics of natural language processing. The bigger issue about transparency for me, of course, was ATEAC, Google's external advisory council, when Google put together a panel of external experts and couldn't even communicate internally about what it was doing and why. It also had problems with external relations. It left me wondering why one of the world’s leading communication companies cannot do transparency!

I have thought of three reasons:

  1. The first is combinatorics. Intelligence is computation, a transformation of information. Computation is a physical process that takes time, space, and energy. It’s not an abstraction; it's not mathematics. Finding the right thing to do at the right time requires search. Even if you're only capable of doing a small number of things, the space of what you could possibly do overall just explodes. That's what combinatorics is: exponentials on exponentials (a toy illustration of this explosion appears after this list). The foundation of computer science is figuring out what is even computable. We've come up with a few solutions. One is called concurrency, where many computers work at the same time. It won’t solve every problem, but it can save time. It won’t save energy, though, and it requires more space. Quantum computing can save space for certain algorithms, as well as time, but costs even more energy than you would expect. Part of the reason we humans can do so much more than any other species is that we're so good at communicating solutions: we do massive concurrent search for good answers all the time, and once we find them, we share them. And the more people we bring online with education and internet connectivity, the faster we're able to come up with interesting solutions and change things. That entails some risk but is fantastic in terms of empowerment.
    There's implicit bias in language because of our concurrent search capacity. Stereotyping, for example thinking that “men are more likely to be in careers and women are more likely to stay at home,” isn’t something we believe explicitly. But implicitly, when we're doing things like pushing buttons to see how fast we can pair ideas, these biases emerge. It happens to everyone. It is inside of us. The stereotypes are there. And when we capture the meaning of words for AI using what are called word embeddings, they reflect not only these biases but also the way the real world works (a sketch of how such associations are measured appears after this list). Our biases are a consequence of the concurrent search we do: we pass on to each other our experience of the actual world. This means that implicit biases reflect reality.
    Our implicit behavior is not our ideal. When we decide we don't want racism or sexism, we're choosing a target and trying to improve ourselves; we’re moving our society in that direction. You aren't going to get that target when you use machine learning on existing data. It’s not possible to have good data with no biases, and it is important to understand where the implicit bias comes from. It isn’t possible to get data about a perfect world because we don't live in one. Finnish, like Turkish, doesn't have gendered pronouns. The word “hän” is the same regardless of gender, but depending on the surrounding words, Google Translate renders it in English as “he” or “she.” This happens, for example, because in the real world the word “invests” is used more often to talk about a man and the word “laundry” is used more often to talk about a woman. Google Translate is just telling you how your society works.
    How do we handle this? We all agree it’s not okay; we don’t want Google Translate doing that to us. Some have suggested we change the outcome by warping the machine learning, but that is not transparent. If you use a clear, simple machine learning algorithm, you will get this kind of problem: we can expect it to replicate lived experience, the real world, and you get the stereotyped output. That's not acceptable. Recall that I said earlier we should be programming to test. So let's have a test for what we think is fair output. That's not easy either. Fairness is not actually a natural thing; we must negotiate and argue about what would be fair. But once you've defined it, you can go back and fix the outcome of the simple AI system, using either “explainable,” human-readable AI or machine learning. You are not done until you have designed this whole system. You don't want companies just hacking that first box, because if that box is totally inscrutable, how can you go back with an audit and tell whether they are only correcting it for sexism and racism? What if they were also bending it so that it pushes you towards the people that make them more money? That's why we want one component we can test against the real world and another we can test against the desired outcomes, and a transparent process that lets us see how both parts were developed and how both parts work. Every stage must be auditable and replicable. Every stage must demonstrably meet the criteria.
  2. A second problem for transparency is polarization. We know about political polarization. It's bad in America; it's better in most of the EU. There's a chart in which the X axis is income inequality and the Y axis is social problems, including health outcomes; we know that polarization and inequality correlate with each other. Why do these two things correlate? It's not actually the inequality that's the problem; it's the threat of a declining economy, which tends to correlate with inequality anyway if you don't prop up the bottom as inequality grows. In periods of polarization, ideas are not used for reasoning but for identity. Social mobility is also lower, so it’s less likely that someone you know and trust understands how a system works. So polarization and inequality are impediments to transparency.
  3. The third problem for transparency is conflicting goals. Coming back to what happened with Google: presumably they can communicate well, up to the limits of time, space, and energy. But what if the highest priority of some actors is to maintain agency for their company? What if they genuinely believe they're under threat from Bing, and that maintaining “first mover” status is existentially necessary? And what if they hired other actors to ensure their ethical integrity? Then of course these two sets of actors would have what we call an impasse: their goals conflict with each other. It may be that the apparent breakdown in transparency is actually this logical impasse. This impasse, combined with polarization, may be why people find it impossible to understand each other.
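
As a toy illustration of the combinatorial explosion described in the first point, the sketch below (with arbitrary, purely illustrative numbers) shows how quickly the space of possible plans grows even with a small action repertoire.

```python
# Toy illustration of combinatorial explosion: the number of distinct action
# sequences grows as (number of actions) ** (planning horizon).

def plan_space(n_actions: int, horizon: int) -> int:
    """How many distinct plans exist if each step can pick any of n_actions?"""
    return n_actions ** horizon


for n_actions in (5, 10, 20):
    for horizon in (5, 10, 20):
        print(f"{n_actions} actions over {horizon} steps -> "
              f"{plan_space(n_actions, horizon):,} possible plans")
```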
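
And here is a minimal sketch, in the spirit of word-embedding association measurements, of how implicit bias can be detected: a word is associated with one group if it sits measurably closer to that group's words in the vector space. The vectors below are made up for illustration; real measurements use embeddings trained on large text corpora, and the same kind of measurement can be turned into a pre-release fairness test of the sort described in the first point.

```python
import numpy as np

# Made-up 3-dimensional "embeddings", purely for illustration; real measurements
# use word embeddings trained on large text corpora.
vectors = {
    "he":      np.array([0.9, 0.1, 0.2]),
    "she":     np.array([0.1, 0.9, 0.2]),
    "career":  np.array([0.8, 0.2, 0.5]),
    "laundry": np.array([0.2, 0.8, 0.5]),
}


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def association(word: str, target_a: str, target_b: str) -> float:
    """Positive means `word` sits closer to target_a than to target_b in the embedding space."""
    return cosine(vectors[word], vectors[target_a]) - cosine(vectors[word], vectors[target_b])


# In these toy vectors, "career" leans toward "he" and "laundry" toward "she",
# mirroring the kind of association the talk describes.
for word in ("career", "laundry"):
    print(word, round(association(word, "he", "she"), 3))
```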

What can we do? We are doing amazing things with combinatorics. It will never be perfectly solved, but the work we are doing in quantum computing, and in bringing people together so they can communicate, is changing the world. With polarization, we need to reduce vulnerability. If people feel like they are going to go bankrupt, lose their home, or lose their children, then reducing their risk profile plausibly matters more to them than a riskier opportunity to do better, which is what working with more diverse groups offers. This problem can be addressed through infrastructure and investment. As for conflicting goals, the best way to resolve them is iteratively, through iterative design. This is what governance and politics are all about. People tend to think we've done something wrong because we're in a broken situation, but it’s natural that innovations lead to new problems to solve. When we talk about regulation in biology, it is about keeping things going, and it often involves oscillations. We aren't necessarily looking for a solution that’s going to last forever; we are looking for a solution that we can apply regularly. If it is every five, ten, or thirty years and we can keep ourselves in a more or less sustainable balance, then it is fine. Ultimately, perpetuating ourselves forever is also intractable, but let's at least make it another billion years.

About Prof. Joanna Bryson

Joanna Bryson is Professor of Ethics and Technology at the Hertie School in Berlin. Her research focuses on the impact of technology on human cooperation and on AI/ICT governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath. She has also been affiliated with the Department of Psychology at Harvard University, the Department of Anthropology at the University of Oxford, the School of Social Sciences at the University of Mannheim, and the Princeton Center for Information Technology Policy. During her PhD work, she observed the confusion generated by anthropomorphized AI, leading to her first AI ethics publication, “Just Another Artifact,” in 1998. In 2010, she co-authored the first national-level AI ethics policy, the UK's Principles of Robotics. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD). Since July 2020, Prof. Bryson has been one of nine experts nominated by Germany to the Global Partnership on Artificial Intelligence. She has received an AXA Award on Responsible Artificial Intelligence for her project about dealing with humanoid robots.
