Informing The Disinfo Debate: A Policy Guide for Protecting Human Rights

Disinformation can wreak havoc on society if left unchecked, as widespread vaccine hesitancy in Europe has shown. Significant changes are needed to protect our fundamental rights from the corrosive impact of disinformation campaigns on social media.

by Franziska Otto

Liberties, in cooperation with Access Now and EDRi, has published a policy guide on how to tackle disinformation while protecting human rights, a follow-up to the report we published in 2018. Our aim is to inform the ongoing discussion between the European Commission and tech companies about the review of the Code of Practice on Disinformation.

Disinformation is not a new phenomenon. Spreading false information about opposing groups is something that has happened throughout history – just think of the medieval witch trials. However, due to the 2016 US elections and the ongoing COVID-19 pandemic, the topic has certainly gained more attention. What is new today is the fact that digital technologies make it a lot easier to create, spread and amplify disinformation. Indeed, as the business model of online platforms relies on tracking and harvesting user data, they often welcome harmful content because it keeps people on their sites for longer. This business model not only leads to a huge power imbalance between platforms and their users, but it also has very real consequences for human rights.

Tackling disinformation is a complex task, and harder than it might seem. Several things need to be kept in mind: above all, the protection of fundamental rights must not fall victim to the fight against disinformation's negative impact. And, to a certain extent, we will have to live with the existence of disinformation. What we can achieve, however, is to reduce its harmful effects.

So far, the European Union has not come up with effective solutions, instead focusing on quick fixes. Against the backdrop of the establishment of the Digital Services Act (DSA) and the Code of Practice on Disinformation, which is to be published in 2022, the EU now has the chance to craft platform governance that enforces existing legislation while safeguarding fundamental rights.

Our goal is to set out policy recommendations for EU co-legislators to combat disinformation while fully protecting fundamental rights. We base our recommendations on the premise that disinformation is not the cause but rather a symptom of deeper societal problems, such as racism, sexism, and inequality.

Our report sets out certain actions that need to be taken in order to achieve this goal:

  • Advertising based on tracking and targeting of personal data needs to be phased out. Targeted ads should be limited to using the information users provide voluntarily and explicitly for that purpose. Users should also be able to access, review and change what a platform knows about them.
  • To give users ownership of their data and to provide proper transparency, platforms must explain how their content-recommendation systems work. This explanation should cover the data inputs and how the algorithm was tested, making it easier to contest algorithmic decision-making.
  • Additionally, we call for strong enforcement of the GDPR. The GDPR safeguards the rights of EU residents and prevents the misuse of their personal data for targeting purposes.

Want to learn more about how disinformation on online platforms affects fundamental rights, and what we can do to reduce its impact? Read our joint report.
