Do you ever wonder why your Facebook feed and your best friend's or a parent's are not the same? Why are our Spotify Discover Weekly playlists or recommendations so different? Why is it that I never hear the latest hits, but my partner is recommended techno?
Consider this → have you ever had a conversation with a friend about a certain type of product, let’s say a camera, and then you check your Instagram and suddenly see an ad featuring the exact same camera? That’s kind of scary, isn’t it?
How do these systems work and what are the consequences of such pervasive and invasive systems? Remember → these tricks are human-made; more precisely, they are created, designed and applied by social media platforms and the like.
In the following videos, academics, policy analysts, researchers and artists will introduce us to the invisible and yet powerful world of AI-driven social media platforms.
As Paddy and Caroline explained, what you see or don’t see, what you hear and more importantly what you are EXPOSED to is entirely decided by the platforms and AI, fueled by your data, your previous interactions with certain content, suggestions and preferences (are you into politics, sports, music, nature?). Every interaction, share and comment counts, and the more they know about you → the better they can incentivize you to stay on their platforms. The goal is engagement, from you. Simple as that.
But this engagement goal comes with significant consequences for you, me, our communities, societies and media.
To make users stay and like, share and comment → it all involves interacting with AI at some level → as users we are interwoven in a kaleidoscope of our likes, shares, preferences, desires, worries, questions, and thoughts.
As a result, you get to see content that is similar to content you previously interacted with, most likely already matching your views, values and wishes. This phenomenon is often called “filter bubble” or “informational rabbit holes.”
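The "filter bubble" dynamic can be sketched as a feedback loop: a recommender that only surfaces topics you already engaged with, so your feed narrows over time. Everything below is a hypothetical toy illustration, not any real platform's algorithm; the catalog, topics and rule are invented for the example.

```python
# Toy "filter bubble" simulation: recommendations drawn from past
# interactions reinforce what you already clicked.

CATALOG = [
    {"title": "Election analysis", "topic": "politics"},
    {"title": "Match highlights",  "topic": "sports"},
    {"title": "New techno mix",    "topic": "music"},
    {"title": "Hiking guide",      "topic": "nature"},
]

def recommend(clicked_topics):
    """Return catalog items matching topics the user clicked before;
    with no history yet, return everything."""
    if not clicked_topics:
        return CATALOG
    return [item for item in CATALOG if item["topic"] in clicked_topics]

# Round 1: a fresh user sees everything and clicks one music item.
feed = recommend(set())
clicked = {"music"}

# Round 2: the feed now contains only music - the bubble has formed.
feed = recommend(clicked)
print([item["title"] for item in feed])  # → ['New techno mix']
```

After a single click, the toy recommender never shows politics, sports or nature again: that self-narrowing is the "informational rabbit hole" in miniature.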
Digital spaces and platforms host a variety of content - information, news, music, videos, art, etc. - and they already gather a lot of data ABOUT users: what we like, where we live, who we are, how old we are, and so on. All of this gets combined: our data, the content we interact with, and HOW we interact with that content. In this way, platforms build ideas about people (millions and millions of people, communities, cities, and countries). One of the ways they gather this information is through ad tracking, data brokers, and tracking cookies. Tracking cookies are little bits of information placed in web pages and on your computer that send information about you and your browsing habits to companies - specific information the company wants, like which websites you have previously visited.
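The mechanics of a tracking cookie can be sketched in a few lines: on your first visit, a tracker embedded in a page issues your browser a unique ID cookie; on every later page that embeds the same tracker, that ID comes back and another entry lands in your profile. The class and names below are hypothetical; real trackers run as embedded scripts or pixels and report to ad-tech servers via HTTP `Set-Cookie` headers.

```python
import uuid

class Tracker:
    """Minimal sketch of how a third-party tracking cookie
    builds a cross-site browsing profile."""

    def __init__(self):
        self.profiles = {}  # cookie id -> list of pages visited

    def handle_request(self, cookies, page):
        # If the browser has no tracking cookie yet, issue one
        # (a real tracker would do this with a Set-Cookie header).
        uid = cookies.get("tracker_id")
        if uid is None:
            uid = str(uuid.uuid4())
            cookies["tracker_id"] = uid
        # Log the visit under that id - the same id returns from
        # every site that embeds this tracker.
        self.profiles.setdefault(uid, []).append(page)
        return cookies

tracker = Tracker()
browser_cookies = {}  # the browser stores this between requests

for page in ["news-site.com/politics",
             "camera-shop.com/dslr",
             "blog.example/travel"]:
    browser_cookies = tracker.handle_request(browser_cookies, page)

uid = browser_cookies["tracker_id"]
print(tracker.profiles[uid])
```

Three unrelated sites, one profile: this is why the camera you discussed can follow you to Instagram.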
“Jennifer Gabrys’s notion of computation ‘becoming environmental’ tells us how different computational processes, including AI and algorithms, are no longer visible things in the foreground of our everyday lives; they become an invisible but active background - a hidden infrastructure of life.”
AI-driven platforms can elevate harmful content as well as important content, often treating misinformation the same as journalism → as long as the posts, regardless of factual correctness, attract lots of eyeballs, clicks and interactions, they will be shared.
However, the removal of illicit content is tied to a number of risks to human rights and digital participation.
“Not only do we fail to see the reasons for certain decisions, but we fail to see the decisions themselves.”
It’s important to emphasize that when we talk about AI here, we are talking about AI that is designed, coded, built and tuned by social media platforms - the Big 5 giants (Facebook, Google, Amazon, Apple, Microsoft) → they truly are giants in terms of money, scale, and the number of people who use their platforms. Not only are the creators behind these platforms incredibly wealthy, but the platforms themselves also have enormous political power. During the Cambridge Analytica case, Facebook’s CEO refused to testify before the UK Parliament. As Lukáš brilliantly put it, digital platforms are non-state sovereignties with political influence and pseudo-diplomatic relations with states. He underlined that states are transforming into platforms, but that platforms are also becoming pseudo-states by framing and regulating our digital realities.
“This all points to the fact that freedom of expression is experiencing a tough time around the world, and that activism and human rights work in this field is crucial and important.”
Check this video: Your data, our democracy
Have you heard the phrase “move fast and break things”? This strategy comes out of Silicon Valley and has been used to guide the building of platforms. However, these platforms are “breaking” parts of society and negatively impacting journalists and journalism. These platforms are authoritative decision-makers when it comes to the dissemination of media content.
As Caroline mentioned: “we are not immediately cognizant” that we are engaging with AI, since AI can be ‘masked’ or just embedded in products invisibly. In this new socio-technological (dis)order, where platforms can dictate the (new) rules of the game and AI facilitates content and interactions in our daily digital world, the media are struggling to find their way through all of this.
“Any information feed that we are exposing ourselves to, and any data that we are constantly being fed, is influencing us in some way.”
Platforms create algorithmic newsfeeds and recommendations that surface and share content to users (that means us), because of predictions these platforms make from our data, our profiles, and what we like and share. With so much data ABOUT us → these platforms decide whether we are going to be exposed to quality journalism or misinformation and disinformation, entertainment or key political topics, music or the weather forecast. The list goes on.
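The ranking mechanism described above can be sketched as a scoring function: predicted engagement from your history, plus a boost for posts that already attract attention. The weights, fields and posts below are invented for illustration; real platform rankers use far richer signals and learned models, not two hand-set coefficients.

```python
# Hypothetical engagement-based feed ranking.

def score(post, user_profile):
    """Score a post by overlap between its topics and the topics the
    user interacted with before, boosted by existing engagement."""
    topic_match = len(set(post["topics"]) & set(user_profile["liked_topics"]))
    return topic_match * 10.0 + post["engagement"] * 0.1

def rank_feed(posts, user_profile):
    # Highest predicted engagement first.
    return sorted(posts, key=lambda p: score(p, user_profile), reverse=True)

user = {"liked_topics": ["music", "politics"]}
posts = [
    {"id": "weather",  "topics": ["weather"],          "engagement": 5},
    {"id": "gossip",   "topics": ["celebrity"],        "engagement": 80},
    {"id": "election", "topics": ["politics", "news"], "engagement": 30},
]

feed = rank_feed(posts, user)
print([p["id"] for p in feed])  # → ['election', 'gossip', 'weather']
```

Note what the toy ranker never asks: whether a post is accurate or of public importance. A high-engagement gossip post outranks the weather regardless of its value to you, which is exactly the logic the quote below objects to.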
“The logic of platforms fundamentally runs contrary to what we have aimed to produce as good quality journalistic content, which is sometimes not entirely based on what people would want to read.”
We should not trust platforms to decide for us which media content is accessible to us, and what kinds of information are of public importance. As Paddy Leersen explained: “After all, when it comes to media we don’t tend to trust the government to decide for us what should be shown and how the media should work; we need journalists, academics, activists and other participants in the public debate to be able to understand what decisions are being made in order to remain critical, in order to remain independent.”
There is a problem of duality here: reporting on AI while having to use AI tools to share that reporting. There can be a steep learning curve for journalists and the media when reporting on things like AI because of how technically difficult and opaque AI systems are. Then there is the added layer of sharing that research and journalism, where newsrooms and journalists have to turn to platforms like Twitter and Facebook to spread their message, relying on opaque and broken systems like recommendation algorithms to surface that content to new users. This problem is a bit like an ouroboros.
Here are two great playful tools to help you explore the logic of AI: content amplification, from conspiracy theories to historical facts, and a ready-to-use fact-checker.
“AI makes it almost like a smokescreen that enables platforms and governments to absolve themselves of responsibility by claiming that their systems are objective. It is difficult to say what is going on because of the black-box nature of these systems.”
AI’s impact on speech is an important topic, as it matters enormously who decides - and how - to make certain information and content accessible over other content. It matters if platforms interfere with our election processes and incentivize certain political views over others. It matters that we are all able to access and read local and world news. It matters not only for the public debate → it matters because:
“We need to make sure that we all have a say in what happens within the public debate; the public debate is central to knowledge production.”