The rise of social media has implications for our fundamental rights, perhaps none more so than our freedom of speech. There is no doubt that our right to free speech extends online. But there is considerable and complex debate on how to regulate the online sphere, particularly social media. How the regulations are constructed, where the lines are drawn, will have huge implications for our freedom of speech on social media.
What does free speech mean?
Free speech means you have the freedom to express yourself in any way that does not take away the rights of other people. You can (and should) feel free to criticize the work your elected officials are doing. You should not feel free to hold band practice late into the night, because that could take away your neighbors’ right to privacy. And if they complain about the noise, you can’t respond by encouraging people to destroy their property, or worse. But up to that point, you’re free to express yourself.
This is why free speech is so central to democracy. Democracy means that everyone in society makes collective decisions about the laws they live under and who administers them. The free exchange of ideas, opinions and information provides us with the knowledge we need to make those decisions. That’s also why free speech and the organs that support it, such as free media and civil society, are often the first things to disappear in autocracies.
And because we can’t have democracy without free speech, we have to be careful about any actions that could limit it. We need independent voices to make decisions about which forms of expression are legitimate and which are not. This extends to the online world, where there’s an ongoing struggle to balance the users’ rights and the interests of tech companies and governments.
How free is speech on social media and on the internet in general?
The extent to which someone can freely express themselves online varies from country to country. In the EU, there are laws that protect our freedom to express ourselves online. In some cases, the ease of online speech has allowed it to step far beyond the bounds of free speech – consider online bullying or threats, or the sharing of extremist content or child pornography. These forms of “expression” are not protected speech.
But in other areas, drawing the line is more complicated. The EU has been grappling with how to protect the rights of copyright owners against the right of people to share legal content. Should such an enormous and difficult task be farmed out to AI? Surely some of it must be, but how this is done could have profound implications for free speech. Liberties has been adamant that compromising free speech, or even putting it at potential risk, is a no-go. And that’s how it should be – if we are to err, let it be on the side of limiting too little of our fundamental right to free speech, not giving too much of it away. That’s why we’ve advocated for users’ free speech during the EU’s work on new copyright law. And why we warned European decision-makers that their plan to regulate online terrorist content might unduly restrict free speech.
We are also mindful of the role online platforms have in determining free speech. Although we may use their services to share our thoughts, there is an obvious danger in making them arbiters of what is and is not free speech. Such decisions need to be made by independent judges, and certainly not by companies with a vested interest in making sure the content they allow and promote is good business for them.
What is important to know about free speech rights on social media?
The rise of social media has given new importance to protecting free speech. People are often able to stay anonymous when they say things – not necessarily a bad thing, especially in places where criticizing the government can put you or your family in danger. Or when you want to seek help for a private medical issue. But social media allows people to use anonymity to bully, harass, intimidate or stalk people.
Social media also gives everyone a platform. Again, this is not an inherently bad thing. It not only allows anyone to share their ideas, but connects us faster and cheaper, allowing us to exchange ideas and create things. But it also gives people the ability to easily spread disinformation that can cause harm both to individuals and society as a whole.
How do social media companies filter speech?
Social media companies can filter speech, and thus limit free speech, by using both humans and artificial intelligence to review content that might not be free to share. They can lawfully remove what you share, or block you from sharing content, if your content is not protected speech – for instance, if you use social media to incite violence against someone. And, of course, social media companies have terms of service that contain many more grounds for sanction. (Although their terms of service can themselves breach the law by limiting lawful content.)
Perhaps the most drastic way social media companies filter speech is by blocking some people from using their services at all. This has the effect of limiting the voices that can be heard on a platform. Some would argue that’s a good thing, and this is certainly the case when people have spread hate speech or incited violence. These issues were front and center when a certain former president of the United States was blocked from Twitter and Facebook following the attack on the U.S. Capitol.
What does the future hold for free speech on social media?
It may be a short and disappointing answer, but the truth is that we don’t know what the future holds. There seems to be a consensus that we shouldn’t allow illegal content to be shared on the internet. But this is easier said than done. Companies, politicians and rights groups all disagree about how exactly to do this, and which considerations should be given more weight than others.
Regulating online speech is complicated. But if we leave it up to social media companies and their algorithms, our free speech, and thus our democracy, will suffer. They should use a fraction of their profits to create a complaints system where you can always request human review of a decision to filter content. And, if necessary, anyone should be able to go to a judge to have their case heard.
But the truth is, at the moment we don’t really know how their algorithms work. We don’t really know how much material they remove or block, or for what reasons, or how they curate our news feed. To make sure they’re doing their best to protect free speech, all this information has to be available to researchers, authorities and independent watchdogs, like Liberties, who can check on them.