There are several effective ways to counter disinformation: reinforcing the integrity of online services, limiting the monetization of disinformation, empowering users to exercise their right of access to information, and strengthening open and non-discriminatory cooperation between platforms, academic researchers, and fact-checkers. However, successfully mitigating the harms of disinformation depends on proper enforcement of existing law.
The General Data Protection Regulation already provides safeguards against targeted and tailored deceptive messaging. The upcoming Digital Services Act should introduce meaningful and robust transparency mechanisms for online advertising, content policing, and the development of algorithmic systems. These laws must be enforced and applied to the problem of disinformation. Liberties advocates for meaningful, transparent, and enforceable rules: even the best laws and self- and co-regulatory mechanisms are pointless if they are not enforced and backed up by an appropriate oversight mechanism. Our policy paper was written to inform the Commission's review of the Code of Practice on Disinformation in March 2021.
Mitigate disinformation while preserving free speech
The EU has an obligation to respect the Charter of Fundamental Rights in addressing disinformation. The EU's obligation to protect free speech implies that unwanted content such as disinformation and misinformation will always exist to a degree. Furthermore, disinformation on online platforms is not the cause but rather a symptom of broader societal problems, such as the dysfunction of politics, racism, sexism and inequality. It is not possible to eliminate disinformation without addressing these underlying factors.
No extra power to the powerful
Although social media is not the root cause of disinformation, it does intensify the impact of false information. The business model of platforms such as Facebook and Twitter, and other tech giants such as Google and Amazon, is based on monetizing information of any kind, including disinformation. These companies already have undue influence and power over culture, society, the economy and politics.
The EU's response to disinformation should not end up further empowering these companies. Authorising, encouraging or mandating these companies to engage in more data gathering, more tracking, more monitoring, and more fact-checking would give them even more information about their users. This would not only give them even greater power and influence, but also allow them to collect the very kind of information that makes disseminating disinformation possible and profitable.
Tech companies must be transparent about their activities. They should provide coherent reports, meaningful data sets, and databases broken down by country and by language.
Tech companies must also be transparent about their algorithms. They may have legitimate interests in selling their goods and services and in protecting their intellectual property. However, protecting democracy and fundamental rights requires that users, researchers, and regulators be able to understand the goals and criteria companies build into the algorithms they use, and commercial interests cannot justify barring them from that understanding.
Integrity of services
To protect democracy and fundamental rights, policy-makers, researchers and regulators must understand the impact of tech companies on fundamental rights and democratic debate. Strengthening measures that protect the integrity of their services against manipulative techniques will limit the amplification of disinformation campaigns. Therefore, risk assessment and mitigation, service design (including recommender systems), content curation and moderation, and advertising systems should all be transparent and auditable.
We warn against 'real account' policies and against removing the ability to communicate anonymously. At-risk groups, such as members of the LGBTQ community, people living with mental illness, or victims of domestic violence, are either targeted by their governments or face societal discrimination. These groups rely on anonymity to protect themselves and should not be deprived of access to services such as social media platforms.
Addressing the online manipulation business model
Tech companies' decisions about what content to show and promote to users, what to sideline, and how to police content are driven almost purely by their economic interests. The core problem is the business model of online platforms: the monetization of disinformation.
Tech companies micro-target users, using their personal data to decide which adverts to show them. This misuse of data to manipulate users reinforces the need for a strong ePrivacy Regulation, one that shifts companies' incentives away from a model that relies on data harvesting and dissemination, and on sensationalism and shock.
Image: "Disinformation" by @kevinledo seen in the Wynwood Arts District of Miami, Florida (Flickr/CC)