The European Union (EU) has adopted a new regulation that is designed to crack down on the dissemination of terrorist content online.
Despite safeguards intended to preserve freedom of speech, the new EU regulation is unlikely to resolve the tension with U.S. free-speech standards.
Non-compliance with the new EU regulation can result in financial penalties of up to 4% of a hosting service provider's turnover.
Some authorities may have intelligence-driven rationales for allowing certain problematic content to remain online, which could force companies to choose between competing requests.
On June 7, 2021, the European Union's (EU) new rules targeting the online dissemination of terrorist content, known as the Terrorist Content Online Regulation (2021/784), entered into force. The new rules are the most aggressive effort to date by the EU to compel hosting service providers to remove terrorist content. There is little doubt that service providers, like social media companies, have provided an avenue for terrorists to organize, fundraise, and plan attacks. Silicon Valley social media companies bear the greatest responsibility for recent terrorist use of the Internet. The rise of the so-called Islamic State and Jabhat al-Nusrah in Syria and Iraq coincided with slick propaganda, YouTube videos, and crowdfunding efforts organized over Facebook and Twitter. More recently, white supremacist Brenton Tarrant livestreamed his deadly attack targeting Muslims in Christchurch, New Zealand, over Facebook. In announcing the EU regulation, European Commission official Margaritis Schinas invoked Tarrant’s act of terror by noting, “we are cracking down on the proliferation of terrorist content online and making the EU’s Security Union a reality. From now on, online platforms will have one hour to get terrorist content off the web, ensuring attacks like the one in Christchurch cannot be used to pollute screens and minds.”
As Schinas noted in his statement, the key provision of the new EU regulation is the maximum one-hour window hosting service providers will have to remove terrorist content from their platforms upon receiving a removal order from a competent authority. However, in the case of such an order, paragraph 17 of the EU regulation notes, “except for in duly justified cases of emergency, the competent authority should provide the hosting service provider with information on procedures and applicable deadlines at least 12 hours in advance.” In practice, outside of genuine emergencies, service providers could have far more than one hour to deliberate internally on a competent authority’s request.
Yet there are valid concerns that 2021/784 may create situations in which free speech protections are at risk. The concern is particularly acute in the United States, home to the world’s most popular social media companies and a country with quite different free speech standards from the EU. In an effort to address this critique, the EU explains that the regulation applies to solicitation (fundraising for or operating on behalf of a terrorist group), incitement (advocating for terrorist offenses, including glorification), and instruction (providing guidance on how to conduct attacks). Yet in the United States, individuals cannot be prosecuted for merely glorifying acts of violence (although such activity could violate the community and safety guidelines of any number of social media companies). Thus, there remains significant tension between the EU regulation and U.S. free speech standards. This tension could force U.S. hosting service providers to pick their poison: remove speech that is protected in the United States or run afoul of EU Regulation 2021/784. The regulation does build in safeguards to protect “fundamental rights,” such as annual reporting requirements, user notification of content removal determinations, complaint procedures that allow removal determinations to be overturned, and protections for content distributed for journalistic, research, or artistic purposes. Still, these safeguards are unlikely to assuage concerns regarding the potential abridgment of speech.
EU 2021/784 is a regulation with teeth: hosting service providers that fail to remove identified content within an hour face penalties for non-compliance. Article 18 of 2021/784 lays out seven factors that determine whether a financial penalty is applied to the violator. These factors range from the nature and gravity of the content circulating on the platform to whether the infringement was intentional. Other key factors include the level of cooperation between the service provider and the competent authority, as well as the size and financial strength of the provider. In this regard, the EU is more likely to levy financial sanctions against large companies, such as Facebook, Google, and Twitter, than against smaller ones. Yet there remains room for interpretation on the penalties that could be imposed, since EU 2021/784 relies on a 2003 European Commission Recommendation that defines micro, small, and medium-sized businesses (e.g., a medium-sized enterprise has fewer than 250 employees). Regardless, persistent failure to implement the provisions of EU 2021/784 could leave a company of any size facing penalties of up to 4% of the platform’s turnover.
Finally, while the new regulation allows service providers to engage with competent authorities to justify the non-removal of content, it is unclear how that would work in the most complex situations. For instance, a European competent authority may submit a request for expedited removal of content that conflicts with a request from a non-EU country. In theory, U.S. law enforcement authorities may be monitoring extremist networks operating on a company’s platform and have asked the company not to remove the content for fear of jeopardizing an investigation or tipping off the extremists. It is easy to hypothesize how an EU authority could then request a takedown of that same content. In such a scenario, the company may not be at liberty to provide the EU competent authority with any information related to the U.S. request, putting the company at loggerheads with EU 2021/784. The aim of the EU’s new regulation, fighting terrorist misuse of the Internet, is a noble one. Yet the devil is in the details, and implementation of EU 2021/784 will almost inevitably give rise to legal, financial, and law enforcement challenges.