Explainable Artificial Intelligence
Responsible AI
 
Liz O'Sullivan, VP at responsible AI company Arthur / activist

Maybe you’ve seen this term floating around recently: “Responsible AI.” What is responsible AI? Is it different from transparency, trustworthiness, ethics, privacy, or equitable AI? It’s actually all of the above. In 2018, Accenture (yes, the consulting company) defined responsible AI as → “a framework for bringing many of these critical practices together. It focuses on ensuring the ethical, transparent and accountable use of AI technologies in a manner consistent with user expectations, organizational values and societal laws and norms.”

What does responsible AI mean? It means that when creating products that use AI, the creators think through every step of building the AI system (from gathering data, labeling that data, and creating a data model, to building an algorithmic model), then how that model will interact with the product, and how all of that will interact with users and impact people. When we think of responsibility, it’s really about keeping an eye on every detail that could go wrong and asking, “how could this harm people?” Lastly, there needs to be an easy-to-understand explanation for users of how products and their AI work, so we can accurately understand how these systems are built and why something is happening.
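To make that idea concrete, here is a minimal sketch, in Python, of what “thinking through every step” could look like as a pipeline with a harm check at each stage. The stage names mirror the steps listed above; the function review_for_harm and everything inside it are hypothetical illustrations, not any company’s actual process.

# Illustrative sketch only: the stage names mirror the steps listed above,
# and review_for_harm() is a hypothetical review step, not a real API.

def review_for_harm(stage: str, artifact: object) -> None:
    """Stand-in for the question a team should ask at every stage."""
    print(f"[{stage}] How could this harm people? Reviewing: {artifact!r}")

def build_ai_product():
    raw_data = ["user posts", "click logs"]                   # gathering data
    review_for_harm("gather data", raw_data)

    labeled_data = [(item, "label") for item in raw_data]     # labeling that data
    review_for_harm("label data", labeled_data)

    data_model = {"features": ["topic", "engagement"]}        # creating a data model
    review_for_harm("create data model", data_model)

    algorithmic_model = "ranking model trained on features"   # building an algorithmic model
    review_for_harm("build algorithmic model", algorithmic_model)

    product = {"recommendation feed": algorithmic_model}      # the model meets the product and its users
    review_for_harm("integrate into product", product)
    return product

build_ai_product()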

“Users build mental models of the systems and societies they exist in.”

Caroline Sinders

Transparency has become a buzzword in the field of AI, and for good reason. The idea behind it is an important one, as transparency in practice would mean that:

→ We can figure out which data points, information, correlations, and patterns have fed into a given decision-making process.

→ We can try to make sense of the values that went into the AI, the logic behind it, and its potential harms.

But this is easier said than done! As Paddy Leersen explained, “we can try to simplify and boil down the true reasons so that they are understandable, but then you are often not getting a full picture.”

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Caroline Sinders, critical designer/artist

However, we want to point out that lots of companies are engaging in ‘ethics washing’: they put out a statement that they are building responsible AI or ethical AI, or engaging in social justice, but offer no proof. It’s not enough just to say you’re building responsible AI; you have to really show it. For example, Facebook and Instagram now offer explanations as to why an ad is shown to you. But is this responsible AI? Is this transparency? These are minimal first steps, but they are not enough to truly be transparent or responsible.

How can users know if something is built responsibly? This is why transparency is important. Companies should disclose their responsibility frameworks, as a form of transparency, and then explain how they built a product following those frameworks, including where their data came from and how the model works with specific kinds of data. They should also explain how the product was QA’d (quality assurance: testing the product and ensuring it is fit for the market). → You can’t be responsible if you aren’t being transparent.
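One way to picture this kind of disclosure is a structured fact sheet that travels with the model. Below is a minimal, hypothetical sketch in Python; the field names (data_sources, qa_process, and so on) are assumptions made for illustration, not a standard required by any existing framework.

from dataclasses import dataclass

# Hypothetical "transparency disclosure" a company could publish with a model.
# Field names are illustrative, not an official standard.
@dataclass
class ModelDisclosure:
    model_name: str
    intended_use: str                 # what the product is supposed to do
    data_sources: list[str]           # where the training data came from
    data_types_used: list[str]        # what kinds of data the model relies on
    known_limitations: list[str]      # failure modes and harms that were considered
    qa_process: str                   # how the product was tested before release
    responsibility_framework: str     # which published framework was followed

disclosure = ModelDisclosure(
    model_name="example-ad-ranker",
    intended_use="Rank ads by predicted relevance to a user",
    data_sources=["self-declared profile fields", "on-platform click history"],
    data_types_used=["age range", "interests inferred from likes"],
    known_limitations=["may over-target narrow interest groups"],
    qa_process="Offline accuracy tests plus a bias review on held-out demographic slices",
    responsibility_framework="Company responsible-AI policy, published 2021",
)

print(disclosure)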

“It is not always clear who transparency is for.”

Paddy Leersen

The transparency principle is closely connected to the principle of explainability, which, put simply, means that we need to be able to understand how something works. That is to say, the creators of AI, in this case tech giants, need to be able to explain the intent behind the decision-making of their AI systems (and why they built the system in the first place), what values these data and AI systems reflect, and create a way for us, the users, to make sense of these processes. The underlying power of explainability lies in the fact that explanation brings knowledge back to people in a world where information is power. Explainability can bring control back to the people: us, the users.

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Aviva de Groot, PhD candidate at Tilburg University

But explainability is hard; algorithms are working with millions of data points and complex technical systems. Even the engineers behind algorithms may have a hard time determining why a system is showing you one video versus another, or one piece of content over another.

“So if you want to explain things in ways that are meaningful to users, we are usually leaving out much of the relevant information.”

Paddy Leersen

What can we do? At the very least, we can demand that companies and platforms explain in ‘plain language’ (everyday, non-technical language that any person can understand) how they created an algorithm and how they think it works. Users need → clear and understandable explanations of how an algorithm works and → why it is making certain decisions to show content, and then why that particular content is being shown.

“Our explanation practices need to become more honest and include more of us. Today, it is in the hands of a few and not many people can understand it.”

Aviva de Groot

The paradox of explaining is twofold: to explain something as complex as an algorithm, we simplify the explanation so that it is understandable, and in doing so we leave out important details. For example, Facebook discloses why you’re seeing an ad by saying that something similar was shown to other users. But how Facebook decided to show us that particular ad is not explained.

Legibility, or explainability, goes hand in hand with transparency. We need to know why a system is recommending content to us; then we need to know how the recommendation system was built → what data is in the data model and how that shapes the recommendation system → and then how my data interacts with that system to produce recommendations.

It’s the data gathered + what the model assumes or learns from that data + how my own data and interactions impact that system to give me a recommendation. These are three distinct parts that all have to be explained, and revealed.
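As a rough illustration of those three parts, here is a hypothetical Python sketch of what a recommendation explanation would need to pull together. All of the names and values are invented for illustration; no platform exposes exactly this.

# Hypothetical sketch of the three parts an explanation would need to cover.
# Every name and value here is invented for illustration.

training_data = {
    "source": "watch history from millions of accounts",       # part 1: the data gathered
    "known_gaps": "under-represents people who rarely log in",
}

model_assumptions = {
    "learned_pattern": "people who watch cooking videos also watch home-renovation videos",
    "optimised_for": "time spent watching",                     # part 2: what the model learns
}

my_interactions = {
    "recent_clicks": ["cooking video", "cooking video", "news clip"],
    "inferred_interest": "cooking",                              # part 3: how my data feeds in
}

def explain_recommendation(item: str) -> str:
    """Combine the three parts into one plain-language explanation."""
    return (
        f"You were shown '{item}' because: "
        f"(1) the system was trained on {training_data['source']}; "
        f"(2) it learned that {model_assumptions['learned_pattern']} "
        f"and is optimised for {model_assumptions['optimised_for']}; "
        f"(3) your own activity suggests an interest in {my_interactions['inferred_interest']}."
    )

print(explain_recommendation("home-renovation video"))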

As Paddy Leersen eloquently explains, “we know the ways ads are targeted are much more complicated than that. The problem is that if we want to explain things in ways that are meaningful to users, we’re often leaving behind relevant information.”

“We fail to see the reasons for their decisions but also fail to see their actual decisions.”

Paddy Leersen

There are a few initiatives around the globe where governments are trying to intervene and regulate AI. Some countries, like Germany and France, have adopted specific laws to control the spread of hate speech online, and the European Union is in the process of adopting a set of laws to govern our digital communication systems. The EU legislators’ intention can be summed up as the search for a delicate balance: not over-regulating either platforms or speech. In search of this balance, the EU initiative, the Digital Services Act, is grounded in transparency, interoperability, risk assessment, and the other principles of responsible AI unfolded in this essay. But, as Paddy Leersen explained, “transparency alone is not enough; transparency is a means to hold people accountable, but accountability also requires power, it requires principles that can be applied and actors who can enforce these principles.”

So, as with any law touching upon a myriad of digital rights issues, the EU’s legislative initiatives will shape, and at times curtail, our interactions online, just as online platforms currently do. The question regrettably boils down to who will control digital space: states or platforms.

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Eliška Pírková, policy analyst at Access Now

“One of the goals of the [EU] legislators is to establish systematic regulation of the online gatekeepers, to create sets of responsibilities in line with the international human rights law and fundamental principles and also to tackle dominance they gained.”

Eliška Pírková