Explainable Artificial Intelligence
Everyday AI:
Alex Fefegha, the co-founder & head of making at Comuzi

What is AI? AI, which stands for “Artificial Intelligence,” can be described through a simple technological lens as:

  • a line of code, or
  • an automated tool for recognizing patterns and correlations, or
  • a computational system that is simply solving a certain task.

AI is code, or, like salt, something that transforms a product when added to it. But AI isn’t magic, and can’t do much without being given a task, a purpose, or being placed in society and in products. AI can help software and products accomplish different tasks, from scheduling a doctor’s appointment to recommending music on Spotify. AI is dependent on gallons of data, your data, my data, community data: date of birth, sexual and gender identity data, wedding pictures, political affiliation, identification number, health data, it does not matter → as long as there is some kind of data + human intention in building, designing and employing it = AI will perform any task.

But we should question HOW we build and create equitable and responsible technology systems with AI. Many of these tasks are extremely socially sensitive: court rulings, social sorting within the banking system, predictive policing, political micro-targeting, health diagnostics → AI is not a NEUTRAL agent.

Throughout its entire lifespan, from the moment of creation to deployment, AI makes decisions that have direct consequences on our lives → court decisions, police fines, credit and housing, voting systems, who you date, and what you stand for → your rights and liberties.

“One way to think of AI is as salt rather than its own food group. It’s less interesting to consider it on its own. But once you add salt to your food it can transform the meal.”

People’s Guide to AI

As Caroline highlighted, “AI is inside of all of these apps that touch consumers every day lives in a way that consumers are not necessarily aware of.”

 
Caroline Sinders, critical designer/artist

And it is all around us. It is the backbone of nearly every internet interaction, it is in public and private spaces, in schools, hospitals, and most probably at your work. The fact that it is invisible, outside of our mental models, and intangible makes AI even more pervasive. Still, when we talk about AI → we need to make sure to talk about the CREATORS: companies, governments, universities, civil society; the GOALS AND TASKS: recommenders, personalizers; and the RISKS AND HARMS: discrimination, limitation of rights, interference with election processes, inaccessible redress mechanisms, information deserts and divides.

 

 
Mutale Nkonde, founder of AI for the People

Let’s remember that the consequences of AI decisions on us as individuals need to be placed in a broader context, as these decisions have a long-lasting impact on our communities and societies, transforming the world we live in and, potentially, our futures.

 
Lukáš Likavčan, philosopher
 
Janus Rose, editor at Motherboard/Vice

“AI takes the set of ideological structures that we already have in the world and reproduces them in code, in some ways entrenching them, which makes them in a way inescapable.”

Janus Rose
Speech and AI

Do you ever wonder why your Facebook feed and perhaps your best friend’s or a parent’s are not the same? Why are our Spotify Discover Weekly recommendations so different? Why is it that I never hear the latest hits, but my partner is recommended techno?

Consider this → have you ever had a conversation with a friend about a certain type of product, let’s say a camera, and then checked your Instagram and suddenly seen an ad featuring the exact same camera? That’s kind of scary, isn’t it?

How do these systems work, and what are the consequences of such pervasive and invasive systems? Remember → these tricks are human-made; more precisely, they are created, designed and applied by social media platforms and the like.

In the following videos, academics, policy analysts, researchers and artists will introduce us to the invisible and yet powerful world of AI-driven social media platforms.

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Caroline Sinders, critical designer/artist

As Paddy and Caroline explained, what you see or don’t see, what you hear and, more importantly, what you are EXPOSED to is entirely decided by the platforms and AI, fueled by your data, your previous interactions with certain content, suggestions and preferences (are you into politics, sports, music, nature?). Every interaction, share and comment counts, and the more they know about you → the longer they will incentivize you to stay on their platforms. The goal is engagement, from you. Simple as that.

But this engagement goal comes with significant consequences for you, me, our communities, societies and media.

To make users stay and like, share, comment → it all involves interacting with AI at some level → as users we are interwoven in a kaleidoscope of our likes, shares, preferences, desires, worries, questions, and thoughts.

As a result, you get to see content that is similar to content you previously interacted with, most likely already matching your views, values and wishes. This phenomenon is often called a “filter bubble” or an “informational rabbit hole.”
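To make the engagement loop concrete, here is a minimal, hypothetical sketch (ours, not any platform’s real code) of an engagement-driven feed: posts that overlap with what you already clicked on get ranked higher, so every interaction narrows what you see next.

```python
# Toy illustration of an engagement-driven feed (hypothetical, not any platform's real system).
# Each post and each user interest is a bag of topic tags; posts that overlap with what you
# already engaged with are ranked higher, so the feed narrows over time.

posts = {
    "latest_hits_playlist": {"pop", "charts"},
    "underground_techno_mix": {"techno", "clubs"},
    "local_election_explainer": {"politics", "news"},
    "outrage_rabbit_hole": {"politics", "outrage"},
}

def score(post_tags, interest_counts):
    """Predicted engagement: how strongly the post's tags overlap with past clicks."""
    return sum(interest_counts.get(tag, 0) for tag in post_tags)

def build_feed(interest_counts):
    """Rank all posts by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: score(posts[p], interest_counts), reverse=True)

# A user who has only ever clicked techno content...
my_clicks = {"techno": 5, "clubs": 2}
print(build_feed(my_clicks))        # the techno mix comes first; the latest hits sink

# ...while a friend who clicks outrage-y political posts gets a very different feed.
friend_clicks = {"politics": 4, "outrage": 3}
print(build_feed(friend_clicks))
```

Every new click would bump the corresponding counts, which is exactly the feedback loop behind the “filter bubble.”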

 
Eliška Pírková, policy analyst at Access Now
 
Caroline Sinders, critical designer/artist

Digital spaces and platforms host a variety of content: information, news, music, videos, art, etc., and they already gather a lot of data ABOUT users - what we like, where we live, who we are, how old we are, etc. All of this gets combined: our data, the content we interact with and HOW we interact with that content. In this way, platforms build ideas about people (millions and millions of people, communities, cities, and countries). One of the ways they gather this information is through ad tracking, data brokers, and tracking cookies. Tracking cookies are little bits of information placed within web pages and stored on your computer that send information about you and your browsing habits to companies. This information is specific information the company wants - like what previous websites you've visited, for example.
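As a rough sketch of that mechanic (simplified and hypothetical, not any real vendor’s tracker), a third-party tracker embedded on many sites gives your browser a random ID in a cookie and then logs every page view it sees for that ID, stitching together a browsing profile across sites that look unrelated to you:

```python
# Simplified, hypothetical sketch of third-party cookie tracking.
# A tracker embedded on many websites assigns each browser a random ID (stored in a cookie)
# and logs every page view it sees for that ID, building up a browsing profile server-side.

import uuid
from collections import defaultdict

profiles = defaultdict(list)   # the tracker's store: cookie_id -> pages visited

def handle_request(cookies: dict, page_url: str) -> dict:
    """Called whenever a page embedding the tracker's script or pixel is loaded."""
    cookie_id = cookies.get("tracker_id")
    if cookie_id is None:
        cookie_id = str(uuid.uuid4())      # first visit: mint a new ID for this browser
    profiles[cookie_id].append(page_url)   # record what this browser looked at
    return {"tracker_id": cookie_id}       # sent back to the browser as a cookie

# The same browser visiting three unrelated sites that all embed the same tracker:
browser_cookies = {}
for url in ["news.example/politics", "shop.example/cameras", "health.example/symptoms"]:
    browser_cookies = handle_request(browser_cookies, url)

print(dict(profiles))   # one ID now links politics reading, camera shopping and health searches
```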

 
Janus Rose, editor at Motherboard/Vice

“The ‘becoming environmental of computation’ (Jennifer Gabrys) tells us how different computational processes, including AI and algorithms, are being used for something that is no longer a visible thing in the foreground of our everyday lives; it becomes an invisible but active background - a hidden infrastructure of life.”

Lukáš Likavčan

AI-driven platforms can elevate harmful content as well as important content, often treating misinformation the same as journalism → as long as the posts, regardless of factual correctness, attract lots of eyeballs, clicks and interactions, the posts will be shared.

However, removal of illicit content carries a number of risks to human rights and digital participation.

 
Paddy Leersen, PhD candidate at University of Amsterdam

“Not only do we fail to see the reasons for certain decisions, but we fail to see the actual decisions.”

Paddy Leersen
 
Lukáš Likavčan, philosopher

It’s important to emphasize that when we talk about AI here, we are talking about AI that is designed, coded, built and tuned by social media platforms; we are talking about the big 5 giants (Facebook, Google, Amazon, Apple, Microsoft) → they truly are giants in terms of money, scale, and the number of people who use their platforms. Not only are the creators behind these platforms incredibly wealthy, but the platforms themselves also have enormous political power. During the Cambridge Analytica case, the CEO of Facebook refused to testify before the UK Parliament. As Lukáš brilliantly put it, digital platforms are non-state sovereignties, with political influence and pseudo-diplomatic relations with states. He underlined that states are transforming into platforms, but that platforms are also becoming pseudo-states by framing and regulating our digital realities.

 
Eliška Pírková, policy analyst at Access Now

“This all points to the fact that freedom of expression is experiencing a tough time around the world, and that activism and human rights work in this field is crucial and important.”

Eliška Pírková

Check this video: Your data, our democracy

Have you heard of the phrase “move fast and break things”? This is a strategy that comes out of Silicon Valley and has been used to guide the building of platforms. However, these platforms are “breaking” parts of society and negatively impacting journalists and journalism. These platforms are authoritative decision-makers when it comes to the dissemination of media content.

As Caroline mentioned: “we are not immediately cognizant” that we are engaging with AI, since AI can be ‘masked’ or just embedded in products invisibly. In this new socio-technological (dis)order, where platforms can dictate the (new) rules of the game and AI facilitates content and interactions in our daily digital world, the media is struggling to find its way through all of this.

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Janus Rose, editor at Motherboard/Vice

 

“Any information feed that we are exposing ourselves to and data that we are constantly being fed is influencing us in some way.”

Janus Rose

Platforms create algorithmic newsfeeds and recommendations that surface and share content to users (that means us), because of predictions these platforms create from our data, our profiles, and what we like and share. With so much data ABOUT us → these platforms decide if we are going to be exposed to quality journalism or misinformation or disinformation, or entertainment or key political topics or music or the weather forecast. The list goes on.

“The logic of platforms fundamentally runs contrary to what we have aimed to produce as good quality journalistic content, which is sometimes not entirely based on what people would want to read.”

Janus Rose

We should not trust platforms to decide for us which media content is accessible to us, and what kinds of information are of public importance. As Paddy Leersen explained: “After all, when it comes to media we don’t tend to trust the government to decide for us what should be shown and how the media should work; we need journalists, academics, activists and other participants in the public debate to be able to understand what decisions are being made in order to remain critical, in order to remain independent.”

There is a problem of duality here: reporting on AI while having to use AI tools to share that reporting. There can be a steep learning curve for journalists and the media when reporting on things like AI because of how technically difficult and extremely opaque AI systems are. Then there is the added layer of sharing that research and journalism, where newsrooms and journalists have to turn to platforms like Twitter and Facebook to share their message, and are having to rely on these opaque and broken systems, like recommendation algorithms, to surface that content to new users. This problem is a bit like an ouroboros.

Here are two great, playful tools to help you explore the logic of AI content amplification: from conspiracy theories to historical facts, and a ready-to-use fact-checker.

“AI makes it almost like a smokescreen that enables platforms and governments to absolve themselves from responsibility by claiming that their systems are objective. It is difficult to say what is going on because of the black box nature of these systems.”

Janus Rose

AI’s impact on speech is an important topic, as it matters enormously who decides, and how, to make certain information and content accessible over others. It matters if platforms interfere with our election processes and incentivize certain political views over others. It matters that we are all able to access and read local and world news. It matters not only for the public debate → it matters because:

“We need to make sure that we all have a say on what happens within the public debate, public debate is central for knowledge production.”

Aviva de Groot
AI Surveillance
Liz O'Sullivan, VP at responsible AI company Arthur / activist

The digital systems of online surveillance, like cookie tracking, prediction systems and data tracking → are moving into the physical world, with facial recognition placed into CCTV camera systems. We see surveillance in our streets, next to our doors, around our schools, buildings, hospitals, theatres, and highways.


What happens when the pervasive, omnipotent systems of surveillance that are already fully functional in our digital world interact with AI?

“Anything you say online can be connected to your face in the offline world. Your face is also a part of this system. Whether we like it or not we are all a part of the dark pipeline."

Liz O’Sullivan

The answer is this: it chills our right to expression: to show our delight, anger, criticism, dissent, and disagreement.

As cameras with facial recognition technologies proliferate throughout cities and societies, it becomes increasingly easy to know who said what, when and where, in public spaces or during protests, in parks, and on street corners.

AI surveillance systems are created by private companies and used by cities, local governments and national governments. These surveillance systems are trained with and collect gallons of data from people: their photos, their habits, where they live, etc. These systems are trained and deployed to recognize our faces. Once a camera captures your face at a protest, for example → law enforcement officials can just run those images through a facial recognition system and voilà! → they can figure out who you are.
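A minimal sketch of that matching step is below (toy numbers and names, not a real system): face recognition typically turns each photo into a numeric “embedding” and compares a new capture against a database of embeddings; the closest match above a threshold is reported as the identity.

```python
# Toy sketch of the identification step in face recognition (illustrative only).
# Real systems compute high-dimensional embeddings with neural networks; here the
# "embeddings" are made-up 3-number vectors so the matching logic is easy to follow.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# A database built from collected or scraped photos, keyed by identity.
known_faces = {
    "person_a": [0.91, 0.10, 0.40],
    "person_b": [0.12, 0.88, 0.47],
}

def identify(capture_embedding, threshold=0.9):
    """Return the best-matching identity for a new camera capture, if any."""
    best_name, best_score = None, -1.0
    for name, embedding in known_faces.items():
        s = cosine_similarity(capture_embedding, embedding)
        if s > best_score:
            best_name, best_score = name, s
    return best_name if best_score >= threshold else "no match"

# A frame captured at a protest is matched against everyone in the database.
print(identify([0.89, 0.12, 0.42]))   # -> "person_a"
```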

 
Liz O'Sullivan, VP at responsible AI company Arthur / activist
 
Mutale Nkonde, founder of AI for the People

They might not use this information against you immediately, but the simple fact that these systems gather so much data, and that surveillance systems are quietly cognizant of our whereabouts, where we go, and what protests we attend, is terrifying.

And if you are picked up at a protest, in some instances even the court decisions are grounded in AI systems that are ambiguous and biased.

 
Caroline Sinders, critical designer/artist

Finally, AI surveillance technologies are the result of powerful private + public partnerships that mutually reinforce systemic inequity, and the power structures already in place: our governments hire private companies and ironically pay for surveillance with our (tax and other) money to train and build these instruments of control.

 
Liz O'Sullivan, VP at responsible AI company Arthur / activist

These are oppressive technologies, built to observe and control us: Ban the Scan.

AI Harms

One thing to keep in mind about AI is that it’s a form of technology, and technology isn’t neutral. Technology is a mirror of society, and it reflects the systemic bias and harms that exist in society. Technology is made by people and for people, and it has all of the same problems humans have, like bias, hate, and harm. AI, like any technology, harms people.

→ On June 28th, 2015, computer engineer Jacky Alcine was uploading photos of himself and his friend. But the Google Photos app he was using kept auto-tagging him and his friend as 'gorillas'. Jacky and his friend are both Black.

→ One day in 2013, Harvard University professor Latanya Sweeney was googling her own name and noticed that the Google search results included ads that read “Latanya Sweeney, arrested?" and “Latanya Sweeney, bail bonds?” Professor Sweeney tried other African American-sounding names, and the same prompts showed up. Professor Sweeney has never been arrested.

→ In 2018, researcher Joy Buolamwini and Dr. Timnit Gebru discovered that computer vision algorithms from major technology companies had a difficult time recognizing Black faces, across all genders. And yet these products were on the market, for any person or company to use. In fact, the software was over 99% accurate for white, masculine-presenting faces, but error rates rose to roughly 35% for darker-skinned, feminine-presenting faces.

→ In 2017, transgender YouTubers had their transition videos copied, without their consent, so AI researchers could use their videos as training data to build an algorithm.

 
Liz O'Sullivan, VP at responsible AI company Arthur / activist
 
Caroline Sinders, critical designer/artist

“So it’s not just that technology misidentifies, but it is also that technology is used to further reinforce racist policies and practices in different areas.”

Janus Rose

Researchers have ‘built’ AI systems to ‘recognize’ gender from eye irises, and to study faces to determine political affiliation - but all of these algorithms are inaccurate. You can’t determine someone’s gender from their eyes - but believing that you can, or building a system to gather that kind of data, will lead to harmful and dangerous consequences for people from marginalized groups. As Liz O’Sullivan pointed out, "there is a part of society that gets disproportionately negatively affected when AI touches upon their lives.”

This website you’re reading was made in 2021, and we are still having issues with AI. It’s not just that AI is built ‘incorrectly’ or ‘unethically’; AI exists in society, in a world of deep systemic injustice. Janus Rose underlines: "Everything that led to the creation of these systems operates at a base level of our society - that is the root cause."

 
Lukáš Likavčan, philosopher

These harms are multi-fold: for one, these AI systems are not accurate and can never be accurate. But we also have to ask, what happens when this kind of technology is deployed in everyday life? What happens when these systems make assumptions about individuals in the general public? What happens when this kind of technology negatively affects people of color, transgender people, and women? It harms them, and it furthers systemic harm.

AI and algorithms “reproduce and codify conditions that are created by systems of oppression.”

Janus Rose

But who really uses this? Look at this example to get a sense of the scale of the problem.

Have you heard of emotion recognition systems?  

Emotion recognition systems utilize facial recognition to determine emotion. The Emotion Facial Action Coding System (EMFACS), developed by Paul Ekman and Wallace V. Friesen in the 1980s, is the backbone of most emotion recognition systems today. Its roots can be traced back to the 1960s, when Ekman and two colleagues hypothesized that there are six universal emotions – anger, disgust, fear, happiness, sadness and surprise – and that these emotions can be detected and are expressed across all cultures.
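To see why that premise matters, here is a deliberately crude, hypothetical sketch of the assumption such a pipeline bakes in: whatever face it is given, the system must pick one of exactly six labels, with no room for mixed feelings, masked feelings, or culturally different expressions.

```python
# Deliberately crude sketch of the assumption behind many emotion recognition systems
# (hypothetical, not a real product): every face must map to exactly one of six
# "universal" emotions, regardless of how people actually feel or express themselves.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_emotion(face_scores: dict) -> str:
    """Pick the single highest-scoring label out of the six allowed ones."""
    allowed = {e: face_scores.get(e, 0.0) for e in EMOTIONS}
    return max(allowed, key=allowed.get)

# A nervous smile: the person is anxious but smiling to defuse the situation.
nervous_smile = {"happiness": 0.55, "fear": 0.52, "anxiety": 0.90}  # "anxiety" isn't even a label
print(classify_emotion(nervous_smile))   # -> "happiness": the system cannot say anything else
```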

What is wrong with this scenario? Emotion recognition systems rely on computer vision, which is already inaccurate, and they are also built on top of an impossible premise: that human society has ONLY six emotions that are universally shared. But we have so many more emotions, and emotions are expressed differently across cultures.

More importantly, emotion recognition assumes that we, as members of society, truly express our emotions every day. But we don’t: sometimes when we’re scared we smile to defuse a situation, sometimes we laugh to ease tension, and sometimes when we’re incredibly sad, we learn to keep our faces neutral.

Another powerful illustration is digital discrimination, as explained by Janus Rose: "when you create a system that recognizes people as ‘this is a man’ or ‘this is a woman’ - what it does is erase all of the people who fall outside the binary, and it also erases people who don't conform with these normative standards.”

 
Janus Rose, editor at Motherboard/Vice

“The harms happen to those who are already marginalized; it’s highly dispersed, this impact.”

Aviva de Groot

We opened this essay with some recent examples of AI-inflicted harms. But this real harm goes even further back, before AI, and here are just a few examples of how technology more generally has harmed people of color.

→ In the mid-1950s, Kodak invented the “Shirley” card, a photograph of a white woman with a set of colors, used to help photographic labs ensure that the colors and densities (or tones) of prints were printed correctly. This helped labs process negatives, but it also skewed photo processing to favor white skin.

→ Let’s go back even further, to the 1800s and the creation of phrenology, the racist pseudo-science which held that the shape of one’s skull could determine one’s intelligence. Phrenology ‘documented’ that certain features that corresponded with lower intelligence were non-white features.

AI systems created to determine gender, intelligence or political affiliation are always erroneous and error-ridden, and they come from a similarly biased and harmful place as phrenology.

“AI is a pharmakon - it’s a poison and a remedy at the same time.”

Lukáš Likavčan
Responsible AI
 
Liz O'Sullivan, VP at responsible AI company Arthur / activist

Maybe you’ve seen this term floating around recently, “Responsible AI.” What is responsible AI? Is it different from transparency, or trustworthiness, or ethics, or privacy or equitable AI? It’s actually all of the above. In 2018, Accenture (yes, the consulting company) defined responsible AI as → “a framework for bringing many of these critical practices together. It focuses on ensuring  the ethical, transparent and accountable use of AI technologies in a manner consistent with user expectations, organizational values and societal laws and norms.” 

What does responsible AI mean? It means that when creating products that use AI, the creators are thinking through every step of building the AI system (from gathering data, labeling that data, creating a data model, and building an algorithmic model), then how that model will interact with the product, and how all of that will then interact with users and impact people. When we think of responsibility, it’s really keeping an eye on every detail that could go wrong and asking: how will this harm people? Lastly, there needs to be an easy-to-understand explanation for users of how products and AI work, so we can accurately understand how these systems are built and why something is happening.

“Users build mental models of the systems and societies they exist in.”

Caroline Sinders

Transparency has been a hyped term in the field of AI, and for good reason. The idea behind transparency is an important one, as transparency in practice would mean that:

→ We can figure out which data points, information, correlations and patterns have fed into a given decision-making process.

→ We can try to make sense of the values that went into the AI, the logic behind it, and the potential harms.

But this is easier said than done! As Paddy Leersen explained, "we can try to simplify and boil it down to reasons that are understandable, but then you are often not getting a full picture.”

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Caroline Sinders, critical designer/artist

However, we want to point out that lots of companies engage in ‘ethics washing’, where they put out a statement that they are building responsible AI or ethical AI, or engaging in social justice, but offer no proof. So it’s not enough just to say you’re building responsible AI; you have to really show it. For example, Facebook and Instagram are now offering explanations as to why an ad is shown to you. But is this responsible AI, is this transparency? These are minimal first steps, but they are not enough to truly be transparent or responsible.

How can users know if something is built responsibly? This is why transparency is important. Companies should disclose their responsibility frameworks, as a form of transparency, and then explain how they built a product following those frameworks, including where their data came from and how the model works using specific kinds of data. They should also include how the product was QA-ed (quality assurance - testing and ensuring the product is fit for the market). → You can’t be responsible if you aren’t being transparent.
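One way to picture that kind of disclosure is sketched below: a hypothetical, minimal “model card”-style record (the field names are ours, not a formal standard) that answers, in one place, which framework was followed, where the data came from, how the model works, how it was QA-ed, and where people can turn for redress.

```python
# Hypothetical, minimal "model card"-style disclosure record. The field names are
# illustrative, not a formal standard; the point is that each question a user might
# ask has an explicit, published answer.

from dataclasses import dataclass, field

@dataclass
class ResponsibilityDisclosure:
    product: str
    responsibility_framework: str                        # which framework the team committed to
    data_sources: list = field(default_factory=list)     # where the training data came from
    model_summary: str = ""                              # plain-language description of the model
    qa_process: str = ""                                 # how the product was tested before release
    known_limitations: list = field(default_factory=list)
    contact_for_redress: str = ""                        # where affected people can complain or appeal

example = ResponsibilityDisclosure(
    product="Ad recommendation system",
    responsibility_framework="Internal responsible-AI checklist, reviewed quarterly",
    data_sources=["User interactions on the platform", "Advertiser-provided audiences"],
    model_summary="Ranks ads by predicted click probability based on past interactions.",
    qa_process="Offline accuracy tests plus a bias audit across demographic groups.",
    known_limitations=["Performs worse for new users with little history"],
    contact_for_redress="ads-appeals@example.com",
)
print(example)
```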

“It is not always clear who transparency is for.”

Paddy Leersen

The transparency principle is closely connected to the principle of explainability, which, put simply, means that we need to be able to understand how something works. That is to say, the creators of AI, in this case tech giants, need to be able to explain the intent behind the decision-making of their AI systems (and why they built the system), what values these data and AI systems reflect, and create a way for us, the users, to make sense of these processes. The underlying power of explainability lies in the fact that explanation brings knowledge back to people, in a world where information is power. Explainability can bring control back to the people, to us, the users.

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Aviva de Groot, PhD candidate at Tilburg University

But explainability is hard; algorithms are working with millions of data points and complex technical systems. Even the engineers behind algorithms may have a hard time determining why a system is showing you one video versus another, or one piece of content over another.

“So if you want to explain things that are meaningful to users, we are usually leaving out much of the relevant information.”

Paddy Leersen

What can we do? At the very least, we can demand that companies and platforms explain in ‘plain language’ (everyday, non-technical language that any person can understand) how they created an algorithm, and how they think it works. Users need → clear and understandable explanations of how an algorithm works and → why it is making certain decisions to show content, and then why that particular content is being shown.

“Our explanation practices need to become more honest and include more of us. Today, it is in the hands of a few and not many people can understand it.”

Aviva de Groot

The paradox of explaining is twofold: to explain something as complex as an algorithm, we simplify the explanation so that it is understandable, and in doing so we leave out important details. For example, Facebook discloses why you’re seeing an ad by saying that something similar was shown to other users. But how Facebook decided to show us that ad is not explained.

Legibility or explainability goes hand in hand with transparency. We need to know why a system is recommending content to us, then we need to know how the recommendation system was built → what data is in the data model and how that shapes the recommendation system → and then how my data is interacting with that system to show recommendations. 

It’s the data gathered + what the model assumes or learns from that data + how my own data or interactions impact that system to give me a recommendation. These are three distinct parts that all have to be explained, and revealed.
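As a sketch of what a fuller “Why am I seeing this?” answer would have to contain (a hypothetical structure of ours, not any platform’s actual feature), the three parts above each get an explicit entry; today’s real explanations typically surface only a thin version of the last one.

```python
# Hypothetical structure for a fuller "Why am I seeing this?" explanation.
# The three parts mirror the paragraph above; real platform explanations today
# usually give only a thin version of the third part.

def explain_recommendation(item: str) -> dict:
    return {
        "item": item,
        "data_gathered": "Profile data, pages followed, watch history, ad-tracking signals",
        "what_the_model_learned": "People with similar histories often engage with this topic",
        "your_interactions": "You watched three related videos this week and liked one of them",
    }

for part, text in explain_recommendation("recommended_video_123").items():
    print(f"{part}: {text}")
```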

As Paddy Leersen eloquently explains here: "we know the ways ads are targeted are much more complicated than that. The problem is that if we want to explain things in the ways that are meaningful to users, we’re often leaving behind relevant information.”

“We fail to see the reasons for their decisions but also fail to see their actual decisions.”

Paddy Leersen

There are a few initiatives around the globe where governments are trying to intervene to regulate AI. Some countries, like Germany and France, have adopted specific laws to control the spread of hate speech online, and the European Union is in the process of adopting a set of laws that should govern our digital communication systems. An easy way to explain the intentions of the EU legislator is the search for a delicate balance: not over-regulating either platforms or speech. In search of this balance, the EU initiative, the Digital Services Act, is grounded in the ideas of transparency, operability, risk assessment and other principles of responsible AI that are unfolded in this essay. But, as Paddy Leersen explained, "transparency alone is not enough; transparency is a means to hold people accountable, but accountability also requires power, it requires principles that can be applied and actors who can enforce these principles.“

So, as with any law touching upon a myriad of digital rights issues, EU legislative initiatives are going to curtail our interactions online as much as online platforms currently do. The question regrettably boils down to who is going to control the digital space: states or platforms.

 
Paddy Leersen, PhD candidate at University of Amsterdam
 
Eliška Pírková, policy analyst at Access Now

“One of the goals of the [EU] legislators is to establish systematic regulation of the online gatekeepers, to create sets of responsibilities in line with the international human rights law and fundamental principles and also to tackle dominance they gained.”

Eliška Pírková
Future AI

We talked to many brilliant minds about what they are hopeful for when it comes to AI. What is the future of AI? Should we even be hopeful?

Their responses were as brilliant as they are. And we are just going to open the doors of their hopes for you.

Lukáš Likavčan proposed to think about AI as a way to "better place ourselves and our purpose in the large field of non-human, environmental, planetary agencies and intelligences. In that sense, AI also unveils that we are just one of many intelligences that inhabit this earth and accordingly we should enter into some stage of negotiations between these different intelligences of which we are only a fragment, or one particular element.” 

 
Lukáš Likavčan, philosopher

Janus Rose brilliantly reminded us that: “everyone collectively is pushing against, trying to dismantle systems of patriarchy and white-supremacy. This is all within our power, we have the ability to change this but it does not start with AI, it starts with targeting the root problem and using our individual and collective power to provide people who are not being provided for and from there we can take upwards and produce AI that works for everyone. We need to keep in mind that it is  not going to be a tech solution that saves us from the problems we are facing, because the problems we are facing from AI and algorithms are FUNDAMENTAL as to the ways our societies are built and until we address these things, there is always going to be a software - UPDATE FOR OPPRESSION!”

 
Janus Rose, editor at Motherboard/Vice

“In the last years, things have really shifted and people don’t need to have it explained to them why they need responsible AI," as Liz O’Sullivan concludes, and she believes that "there is plenty of room for responsible AI to continue to grow as long as there is a responsible AI community that grows with it.”

 
Liz O'Sullivan, VP at responsible AI company Arthur / activist

In a similar vein, Aviva de Groot calls on us to “seize the momentum, people are becoming aware that knowledge needs to be working for everyone!”

 
Aviva de Groot, PhD candidate at Tilburg University

Or think about using AI to generate art, as Lukáš Likavčan proposes: “one of the very nice ways to derail the use of these language models, like the GPT-3 model, is to use them in a poetic manner, as generators of poetics. So you can use these algorithms, already now, in a very creative manner, transcending the frontiers of creativity, which also means the frontier of poetics. That can lead to an extension of the imagination of the original creators of these technologies, or it can lead to some subversive movement in society, which can then lead to throwing sand into the gears of technological hegemonies.”

 
Lukáš Likavčan, philosopher

Can liberation be a playful dance? Mutale Nkonde proves it can. “We found that by using micro-influence, people were really excited about democracy and exercising their power for good, so we are thinking about what new economies can be built for gaming algorithms for good […] or anything where we can be using technology to save the environment; instead of gaming technologies for bad we game them for good, and in that develop new ways of communicating that sit outside journalism, for example.”

 
Mutale Nkonde, founder of AI for the People

Can we hope that social media platforms will become more responsible? Caroline Sinders believes that things are slowly changing: “This movement that’s been happening for the past almost six years around ethical technology is starting to formalize into things like checklists and concrete standards that people can reflect on, as well as emerging and more solidified best practices. And I am hopeful that in the next few years any company will be able to access exercises they can use when they are in their own design sprint or building process and reflect on a question: have we made something safely and securely, and have we centered a variety of viewpoints?”

 
Caroline Sinders, critical designer/artist

Eliška Pírková hopes that upcoming EU regulations will establish "a system in place where users and their rights come first, where we achieve empowerment of users, where we return control and agencies back to users over the control and information they receive and impart. And that we will finally establish a set of transparency requirements that will empower users" so we "will have a choice" - to be a part of these algorithmic models or to move somewhere else.

 
Eliška Pírková, policy analyst at Access Now
Explore AI

AI is all around us, but that doesn’t mean we can’t push back against it, fight it, and subvert it. Sometimes the best way to intervene with AI, to rebel against AI, is to get creative. We’ve found and outlined a few different ‘interventions’ you can take at home to trick and trip up AI. You can add noise to the machine by altering data sets, use fashion to obfuscate computer vision systems, document AI in your daily life, and build new alternatives to oppressive algorithmic systems.

Creative Interventions

These creative interventions are inspired by anti-surveillance rebellion and the works of Octavia Butler. Mutale Nkonde mentioned using avant-garde makeup to trick or confuse facial recognition systems, a tactic uncovered and made famous by researcher and artist Adam Harvey in his CV Dazzle project. Technologist and artist Joselyn McDonald goes a step further by creating intricate flower face crowns to shield one’s face from facial recognition. Her suggestion is that you turn on your phone, open up Instagram and use a face filter to test your flower shield or CV dazzle. Keep adjusting until the face filter doesn’t recognize your face.

Here are some examples we love: 

  • CV Dazzle by Adam Harvey 
  • Mother Protect Me by Joselyn McDonald
  • Another creative intervention is this sweatshirt that tricks license plate readers and causes them to malfunction. Created by the punnily named Adversarial Fashion
  • Journalist Paris Martineau shared the sweatshirt she purchased
  • A print that counters facial recognition, which you can put on your mask

Creating Noise in the Machine

Facebook seems to know everything about us, from what we like, where we live, and who we are friends with. From this data they gather about us, Facebook is able to use targeted advertising systems and create recommendation algorithms and filter bubbles to serve people content, sometimes with disastrous and dangerous results. But there are ways to trip up Facebook’s algorithms, and we have a few suggestions below. You can use these same suggestions on any big tech social network, like Instagram, TikTok, Twitter and others.

  • Trick 1: Unfollow everyone on Facebook (but you’ll still be friends). What do you see? What does your feed look like? Is it empty, strange, bizarre?
  • Trick 2: Change some key information about yourself so that it is wrong: maybe your location, age, relationship status, gender, birthday or favorite movies. What kinds of ads or content are you seeing now?
  • Trick 3: Create a brand new profile, and follow a few well-known figures you would not normally follow. What do you see here? What content is suggested? Reflect on how or why the algorithm is suggesting this.

Tricking Computer Vision and AI

Change your images! Using the new software “Fawkes”, upload an image to Fawkes, which adds ‘digital noise’ that’s invisible to the human eye to that image. What Fawkes does is trick Facebook into thinking your image is of another person. It doesn’t completely cloak you, but like wearing a mask, it makes you look different to the computer vision systems on Facebook. Now, upload the image to Facebook and see if the auto-tagging feature (that uses facial recognition) still works!
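The real Fawkes tool computes carefully optimized “cloaks” in the feature space of face recognition models; the toy sketch below (with a placeholder file name) only illustrates the much simpler underlying idea that pixel changes too small for your eye to notice can still change what a computer “sees”.

```python
# Toy illustration only: add a tiny, barely visible perturbation to an image.
# This is NOT how Fawkes works internally (Fawkes optimizes cloaks in the feature space
# of face recognition models); it only shows that imperceptible pixel-level changes exist.
# Requires Pillow and NumPy; "my_photo.jpg" is a placeholder file name.

import numpy as np
from PIL import Image

img = np.asarray(Image.open("my_photo.jpg"), dtype=np.int16)

rng = np.random.default_rng(0)
noise = rng.integers(-3, 4, size=img.shape)            # +/- 3 out of 255: invisible to the eye
perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)

Image.fromarray(perturbed).save("my_photo_perturbed.jpg")
print("Saved a perturbed copy that looks identical to the original to a human viewer.")
```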

Learn More About Fawkes Here

Documenting Your Day to Day

AI is all around us, in almost everything we use. Try this activity inspired by the People’s Guide to AI, to better ‘see’ the AI in your daily life. Take a piece of paper, and throughout your regular day, write down when you think you use AI. For example, when you use a search engine or even a search function on a social network, write that down. Try opening up Netflix or Spotify. What are the recommendations you see?

Learn More!

There’s still so much to talk about and learn in the AI space. We’ve selected a few of our favorite articles and projects on confronting AI, creating better AI, and creating better systems online. A way to create a better world is to decolonize technology, craft exits from surveillance capitalism, understand when technology won’t fix a problem, and create better, responsible systems that center marginalized communities and their needs.