In 2018, almost 3.2 billion people around the world actively used social media. Indeed, these platforms have provided users with tools and opportunities inconceivable just a few years ago. But given the scale and scope of the platforms’ impact on our lives, social media also raise serious concerns about some of our fundamental rights as users, in particular our freedom of expression and privacy.
Social media companies usually offer their services “for free”; but in order to create a Facebook, Instagram or Twitter account, users have to agree to the platform’s Terms of Service (ToS), which dictate what the user, and the platform, can or cannot do. Users have no way to negotiate: they either accept the terms or cannot use the platform at all. Opting out, while always possible, has consequences for personal and business relationships, knowledge, and engagement in public discussion.
ToS, and the systems that implement them, allow social media platforms to determine what users see, access and share, and what they don’t. This poses a problem for users’ human rights, as ToS currently provide lower free speech protections than those guaranteed by the international human rights framework. Platforms’ policies on content removal are often written in opaque and vague language, meaning they are applied inconsistently or poorly, and removal processes lack procedural safeguards and transparency. This is made worse by the fact that users are often not notified of a removal or given the reasons behind it, nor offered adequate remedies in cases of wrongful removal.
For example, Facebook declares that it does not allow any organizations or individuals engaged in terrorist activities to have a presence on the platform. However, the ToS contain no clear definition of terrorist activities, and the large majority of content removal is currently performed by algorithmic systems, which are not sufficiently capable of understanding context and nuance. This means material relating to terrorism but not actually supporting it, such as news articles, human rights research, or critical commentary, can be removed by companies with limited oversight. Indeed, users around the world complain that social media platforms are blocking accounts and content without clear grounds in either their community standards or in international law. In Myanmar, for instance, activists seeking to expose and document genocide and serious human rights abuses against the Rohingya minority had their posts removed with little recourse. Social media users’ freedom of expression is not limited by the principles of necessity and proportionality, as it should be under international law, but rather by private rules based on company priorities, property or profit. Facebook has now started to publish reports on its content removal activities, but these reports fall short in providing information about wrongful removals.
On top of these content removal issues, ToS also grant social media platforms the power to collect a disproportionate amount of users’ personal data, which are then used to profile users and to segment, target and customize the content, messages and adverts they will be exposed to. Profiling and personalization are of vital importance for social media platforms’ advertising-reliant business model, but they have a negative impact on users’ data protection and on their freedom of expression, which implies the right to decide what content they want to see and share.
ToS therefore represent an unreasonable restriction and violation of users’ rights. It is no surprise, then, that various governments now plan to issue rules that directly or indirectly address ToS. Nevertheless, if such initiatives do not comply with international human rights standards, governments could create a system in which the lawful speech of millions of people is monitored, regulated, and censored. The recently published UK Online Harms White Paper, which purports to regulate social media more closely in order to curb "illegal and unacceptable content", leans in the direction of over-correction and could potentially harm press freedoms.
When ToS are imposed by dominant private companies, they do not constitute fair commercial terms for users. To address this problem, regulators have an efficient instrument at their disposal: competition rules. Competition law aims to guarantee that markets remain open and fair to new entrants. These rules do not condemn dominance per se, but dominant players cannot take certain actions, even where the same actions would raise no concern if taken by smaller players. In particular, dominant players cannot exploit consumers, for example by imposing unfair commercial conditions for the use of a product or a service.
In the EU, relevant case law has provided an interpretation of the concept of “fairness” of commercial conditions by reference to the minimum balance that must exist between the rights, interests and obligations of the contracting parties. A condition going beyond what is “absolutely necessary” for the achievement of one party’s objective has been considered an “unfair” limitation of the freedom of the other party, and thus abusive if imposed by a dominant player. So the fact that, in many instances, ToS do not respect this proportionality and necessity test is a concern not just for human rights law, but also for competition rules.
A number of competition authorities, such as those in Germany and Italy, have started to look closely at companies’ ToS to ascertain whether they constitute an abuse of a dominant position. At times this has led to action, including fines, where regulators have found, for example, that Facebook did not give consumers sufficient information and choice regarding the data transmission included in its ToS.
However, competition authorities can do more. They can use competition rules to prohibit platforms from using ToS that exploit users by violating their human rights, and impose as a remedy that ToS comply with fundamental rights as guaranteed by international law. Moreover, users who want their data to be protected, or who want to have control of the content they see and access on social media, should also be able to find real alternatives on the market. But users will have real choice only if efficient competitors are able to enter the market and gain critical mass, and if users can move their data across platforms. Finally, regulators can play a role in preventing further concentration in the market, for example by stopping social media platforms from acquiring other players in the same or adjacent markets.
Competition rules have a fundamental role to play in protecting human rights, by ensuring that dominant social media companies are unable to exploit their position to undermine individuals’ rights, and that social media markets become a healthy ecosystem where information flows freely.