Tech & Rights

Who Should Decide What We See Online?

Online platforms rank and moderate content without telling us how or why they do it. There is a pressing need for transparency about these platforms' practices and policies.

by LibertiesEU

Our lives are closely intertwined with technology. One obvious example is how we browse, read, and communicate online. In this article we discuss two methods companies use to deliver content to us: ranking and moderation.

Ranking content

Platforms use automated measures to rank and moderate the content we upload. When you search for cat videos during a lull at work, your search results won't include every cat video online. What you see depends on your location, your language settings, your recent searches, and everything else the search engine knows about you.
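To make this concrete, here is a minimal sketch of how such personalised ranking might work. Everything in it, the class names, fields, and weights, is invented for illustration; real ranking systems weigh far more signals. But the principle is the same: the same query produces a different ordering for different people.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    language: str       # language of the video
    region: str         # region where the video was published
    engagement: float   # aggregate popularity (views, likes, watch time)

@dataclass
class UserProfile:
    language: str
    region: str
    recent_topics: set  # topics inferred from recent searches

def personalised_score(video: Video, user: UserProfile) -> float:
    """Score one video for one user. All weights here are invented."""
    score = video.engagement                 # start from raw popularity
    if video.language == user.language:
        score *= 1.5                         # boost the user's language
    if video.region == user.region:
        score *= 1.3                         # boost local content
    if any(topic in video.title.lower() for topic in user.recent_topics):
        score *= 2.0                         # boost recent interests
    return score

def rank(videos: list, user: UserProfile) -> list:
    """Order videos by personalised score, highest first."""
    return sorted(videos, key=lambda v: personalised_score(v, user), reverse=True)

# The same catalogue, ranked for a French-speaking user:
videos = [
    Video("Funny cat compilation", "en", "US", engagement=90.0),
    Video("Chat rigolo", "fr", "FR", engagement=60.0),
]
alice = UserProfile(language="fr", region="FR", recent_topics={"chat"})
print([v.title for v in rank(videos, alice)])  # ['Chat rigolo', 'Funny cat compilation']
```

Note how the less popular video wins once personal signals are factored in. That is the whole point of such systems, and also why no two people can compare what they are shown.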

Services curate and rank content by predicting our personal preferences and online behaviour. In doing so, they influence not only our access to information but also how we form opinions and participate in public discourse. And by predicting our preferences, they also shape them, slowly changing how we behave online.

They decide what we read and watch. It's like touring a foreign country where only the guide speaks the language: the guide chooses what you see and who you talk to. Online services play the same role. By amplifying sensational content that boosts engagement, with the often unpredictable side effects of algorithmic personalisation on top, content ranking has become a commodity from which platforms profit. This can amount to manipulation of your freedom to form an opinion. Yet the freedom to form an opinion is an absolute right: no interference with it is permitted by law, and none can be accepted in a democratic society.

The automated curation of content determines what information we receive and how much time we spend browsing a platform. Most of us know little about how recommendation algorithms order content on the internet; some people do not even know that ranking exists. Meaningful transparency about curation mechanisms is a precondition for user agency over the tools that shape our information landscape. We need to know when we are subject to automated decision-making, and we have the right not only to an explanation but also to object to the decision. Robust transparency and explainability of automated measures are also preconditions for exercising our right to freedom of speech, because without them we cannot effectively appeal against undue content restrictions.
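What could such transparency look like in practice? One possibility, sketched below purely as an illustration (the record format and every field name are invented, not any platform's actual API), is that each automated decision carries a machine-readable record stating what was done, why, and where to contest it:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit record attached to every automated action."""
    item_id: str
    action: str          # e.g. "demoted", "removed", "recommended"
    automated: bool      # True if no human was involved
    reasons: list        # human-readable grounds for the decision
    appeal_url: str      # where the affected user can object
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    item_id="video-123",
    action="demoted",
    automated=True,
    reasons=["classifier: borderline policy match (score 0.62)"],
    appeal_url="https://platform.example/appeals/video-123",
)
print(record)
```

A record like this costs platforms almost nothing to produce, yet it is exactly what a user needs in order to understand a decision and to object to it.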

Content moderation

Online platforms curate and moderate to help deliver information, but they also do so because EU and national lawmakers impose ever more responsibility on them to police content uploaded by users, often under threat of heavy fines. Under the European legal framework, platforms are obliged to swiftly remove illegal content, such as child abuse material or terrorist content, once they become aware of its existence. We all agree that access to illegal content should be blocked. But in some cases the illegality of a piece of content is very difficult to assess and requires proper legal evaluation. A video may infringe copyright, for instance, or it may be lawful to reupload because it qualifies as a parody.

The line between legal and illegal content can be hard to draw. The trouble is that, given the scale of managing online content, platforms treat automated decision-making tools as the ultimate answer to this very complex task. To avoid liability, they use automation to filter out anything that might be illegal. But automation needs safeguards and human intervention; we cannot rely on these tools alone.
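One such safeguard can be sketched in a few lines. Rather than letting a classifier act on everything it flags, a platform could act automatically only at the extremes of confidence and route the grey zone, where parody, quotation, or news reporting is easily misread as infringement, to a human moderator. The thresholds and names below are invented for illustration:

```python
from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    REMOVE = "remove"
    HUMAN_REVIEW = "human review"

# Hypothetical thresholds; where they sit is itself a policy choice.
REMOVE_THRESHOLD = 0.95  # auto-remove only when the model is near-certain
KEEP_THRESHOLD = 0.05    # auto-keep only when the model sees almost no risk

def moderate(illegality_score: float) -> Decision:
    """Route a classifier's confidence score (0.0-1.0) to a decision.

    Everything between the two thresholds -- the grey zone where parody
    or quotation is easily misread as infringement -- goes to a human
    moderator instead of being removed automatically.
    """
    if illegality_score >= REMOVE_THRESHOLD:
        return Decision.REMOVE
    if illegality_score <= KEEP_THRESHOLD:
        return Decision.KEEP
    return Decision.HUMAN_REVIEW

print(moderate(0.99))  # Decision.REMOVE
print(moderate(0.62))  # Decision.HUMAN_REVIEW -- the parody grey zone
```

Where exactly the thresholds sit has free-speech consequences: push them apart and more content gets human attention; push them together and the machine decides almost everything.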

What safeguards do we need?

Without a doubt, content moderation is an extremely challenging task. Every day, online platforms have to make tough choices about which pieces of content stay online and how we find them. Automated decision-making will never solve the social problems of hate speech or disinformation, let alone terrorism. It won't because it can't. While automation can work well for content that is manifestly illegal regardless of context, such as child abuse material, it keeps failing wherever things are not strictly black and white. No tool will save us from social problems, and no tool should have the final say over free speech or your private life.

As things stand, online platforms rank and moderate content without telling us how or why they do it. There is a pressing need for transparency about these platforms' practices and policies. They must disclose how they respect our freedom of speech and what due-diligence mechanisms they have put in place. They must be transparent about their everyday operations, their decision-making and its implementation, and their impact assessments and other policies that affect our fundamental human rights.

Besides transparency, we also need properly designed complaint mechanisms and human intervention wherever automated decision-making is in place. Without accessible, transparent appeal mechanisms and people who are accountable for policies, there can be no effective remedy. If content may have been removed incorrectly, a real person needs to check it and decide whether it was legal. And we should always retain the right to bring the matter before a judge, the only person legally qualified to make the final decision on any matter that may compromise our right to free speech.

More on this topic: Automation and Illegal Content: Can We Rely on Machines Making Decisions for Us?

Authors: Eliška Pírková from Access Now & Eva Simon from Liberties
