The State of Online Speech in the EU: A Comparative Study of DSA 2023 VLOP Reports

In a previous article, I delved into the history of the EU’s Digital Services Act (DSA), the arguments behind empowering online platform users, and the study of content moderation actions.

The EU established a database for the reports and moderation data submitted by the platforms it designates as Very Large Online Platforms (a list that may continue to expand as new platforms gain traction in the EU). This data is captured as “statements of reasons” for actioned content or user accounts. The data will serve researchers studying the effects of moderated content, be it “speech” or commercial, as the VLOP definition encompasses buy-and-sell marketplaces such as TikTok (a recent addition), Amazon and Zalando.

The 6 platforms of interest in my study are:

Instagram
Facebook
X
TikTok
YouTube
Wikipedia

I chose these platforms because of their prominence in hosting political speech, political advertising spend and organic creator political content. Traditionally, they have been at the forefront of what is known as “civic integrity,” with knowledge centres around elections, voting, and other civic actions.

Before we dig deeper into the numbers and the study, I would like to share three additional resources I have authored in the past three years:

Initial Remarks: Unfinished Outsider-Party Processes

As noted in the previous article, the DSA reports respond in large part to just six articles (Articles 9, 10, 15, 16, 17 and 23).

Two new processes still missing are the out-of-court dispute settlement bodies and third-party “trusted” flaggers mandated by Articles 21 and 22, to be established by each Member State’s new Digital Services Coordinator. These third-party processes have not been established yet.

Large platforms such as YouTube, Meta, X and TikTok have had partnerships with “trusted” flaggers and trusted flagger programs for many years. These partners provide contextual knowledge and flag category-specific content such as terror-related content, “dangerous organizations” content or child exploitation content (e.g. CSAM), amongst other categories.

Total users
One useful measure is the ratio of “authority orders” to the total number of users. The volume of content or user accounts removed and suspended is, at the end of the day, what might make the difference between one platform and another.

In the EU, TikTok has accumulated a large number of monthly active users, surpassing X, which launched more than 10 years prior (Twitter launched globally in 2006, while TikTok launched in September 2017 and arguably only picked up in 2019).

The volume of “violative” content across all six platforms is unremarkable compared to the presumed volume of content posted on these platforms, and so is the total number of complaints received per platform. The total number of users more closely represents online account activity than unique users, accounting for “finstas” and “alts.”

The total numbers of users in the EU per platform are:

  • Facebook – 259.5 million accounts
  • Instagram – 249.9 million accounts
  • X – 126.1 million accounts
  • TikTok – 135.6 million accounts
  • YouTube – 416.6 million accounts
  • Wikipedia – 258.9 million (“unique device” visits)
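To put the moderation volumes discussed below in perspective, here is a minimal sketch normalizing removals per account. The user counts come from the list above and the removal totals from figures cited later in this article; the reporting windows differ per platform, so treat the ratios as rough and illustrative only.

```python
# Per-platform ratio of removed content to reported EU accounts.
# User counts are from the list above; removal volumes are the figures
# cited later in this article. Reporting windows differ per platform,
# so these ratios are rough, illustrative comparisons only.
users = {           # millions of EU accounts
    "Facebook": 259.5,
    "Instagram": 249.9,
    "X": 126.1,
    "TikTok": 135.6,
    "YouTube": 416.6,
}
removals = {        # pieces of content actioned/removed
    "Facebook": 46_697_806,
    "Instagram": 76_298_413,
    "X": 54_614,
    "TikTok": 4_038_586,
    "YouTube": 935_285,
}
for platform in users:
    per_thousand = removals[platform] / (users[platform] * 1_000_000) * 1000
    print(f"{platform}: {per_thousand:.2f} removals per 1,000 accounts")
```

Even with mismatched reporting windows, the spread is striking: Meta’s platforms report triple-digit removals per thousand accounts, while X reports well under one.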

Authority orders
As referenced several times now, “authority orders,” or Member State authority orders, are removal orders and user information requests issued by governments and local jurisdictions of EU member states. Local laws may empower government agencies or courts to request user information about service recipients from online platforms, or the geo-restriction of specific pieces of content in an EU member state. If you must know, all of the above platforms except X received zero authority orders from EU member state governments to act on illegal content. X received six requests from government authorities in France, Italy and Spain.

With regards to Article 10 authority orders, or government information requests about service recipients, X received 1,728 (broken down by country in its report). TikTok received 452 orders (Annex E). Instagram and Facebook (and possibly other Meta platforms such as WhatsApp, Meta Quest, etc.) received a total of 666 requests. The rest received none.

Content moderation (human resources, indicators of accuracy, other resources)
Content moderation numbers are underwhelming. For X, the total volume of “labelled” content (age-restriction or adult-material labels) is 90,930. The total volume of actioned (sanctioned) content is 54,614. Specifically, the figure for “violent speech” is 19,241 across both automated moderation and manual review (74). In comparison, Meta’s platforms, Facebook and Instagram, reported removing staggering volumes of content: 46,697,806 and 76,298,413 pieces respectively, and for the vast majority of this content the reason is not listed.

The volume of content removed on TikTok in the EU during the short reporting period from late August to September 30 is 4,038,586. YouTube removed a total of 935,285 pieces of content. When it comes to Wikipedia, the Wikimedia Foundation empowers users to take moderation actions themselves through a slew of community- and consensus-built “talk” pages and guidelines (this may vary per language). Articles nominated for deletion are listed here. The deletion process on Wikipedia is community-driven.

A minor note here: only YouTube lists “misinformation” as a content action category. The number of videos deleted on YouTube for the short reporting period of less than two weeks (Aug 28 to Sep 10) is a mere 2,474.

Manual review vs. automated review: The human toll of violative content
Let’s start with YouTube. YouTube doesn’t state outright the volume of content it removes after a manual review. For Google Maps, Google reports that automated detection rates vary from 73% to 90% for racy, violent and profane content.

For outright illegal and violative content, according to each platform’s terms and conditions, automation rates differ. Facebook’s and Instagram’s automated removal rates are 94% and 98.4% respectively. TikTok’s automated removal rate is a bewildering 45%, meaning manual reviewers had to review and decide on the remainder of the removed content, whether surfaced via TikTok’s own initiative or via a flag.
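Combining those automation rates with the removal volumes cited earlier gives a rough sense of the manual-review workload. This is an inference of mine, not a number the platforms report directly:

```python
# Implied manual-review workload from the automated-removal rates above.
# Removal volumes are the EU figures cited earlier in this article; the
# automated/manual split is a rough inference, not a reported number.
removals = {"Facebook": 46_697_806, "Instagram": 76_298_413, "TikTok": 4_038_586}
automated_rate = {"Facebook": 0.94, "Instagram": 0.984, "TikTok": 0.45}
for platform, total in removals.items():
    manual = total * (1 - automated_rate[platform])
    print(f"{platform}: ~{manual:,.0f} removals implied a manual review")
```

On this reading, TikTok’s low automation rate implies over two million manual removal decisions in a single month, a workload comparable to Facebook’s despite a removal volume more than ten times smaller.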

X reported uniquely on ‘illegal’ content actioned after being flagged by other users, including trusted flagger partners, in accordance with Article 16. The total volume of flagged illegal content is 12,099 items, of which only 99 pieces were deleted globally. The rest, 11,998 pieces of content, were geo-restricted in the EU. The total volume of flagged content is 71,206; 60,875 of the flag reports were handled via manual review by a human content moderator. Though these flags had to be examined, more than 25,000 pieces of content flagged in this short reporting period of merely six weeks were found not to violate X’s Community Guidelines, its ‘freedom of speech, not reach’ philosophy, or EU illegal-content laws. In the end, only 16% of reported content was actioned in this reporting period, and only a negligible volume (less than 1%) was removed in the EU and globally.
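The shares quoted above can be sanity-checked from X’s raw figures, using only the numbers in this paragraph:

```python
# Sanity-check of the X/TIUC flag-handling shares quoted above.
flagged_total = 71_206       # all flags received in the reporting period
manual_review = 60_875       # flags handled by a human moderator
flagged_illegal = 12_099     # flags under the Article 16 illegal-content channel
deleted_globally = 99
geo_restricted_eu = 11_998

manual_share = manual_review / flagged_total        # ~85% handled manually
actioned_share = flagged_illegal / flagged_total    # ~16-17% of flags actioned
removed_share = deleted_globally / flagged_illegal  # under 1% removed outright
print(f"manual review: {manual_share:.0%}, actioned: {actioned_share:.0%}, "
      f"removed outright: {removed_share:.1%}")
```

The actioned share computes to roughly 17% here, a point above the 16% in X’s report, which likely comes down to rounding or a slightly different numerator in the report.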

Human resources, median handling time
Content moderation automation tools have become an industry, with companies successfully raising multiple rounds of funding, such as CheckStep (a former client), ActiveFence (a previous sponsor of my work), Cinder and a handful more. AI companies such as Anthropic and OpenAI promote their LLM API services for content moderation.

Some of the reporting regarding the linguistic expertise of the human content moderators employed to review content in EU jurisdictions is quite revealing. For instance, EU languages such as Irish, Latvian, Polish and Maltese have a nonuniform number of allocated human content moderators across these six platforms. TikTok, X and YouTube (Google) report employing zero human content moderators with knowledge of the Irish language, while Meta reports 42. This is a stark difference, though it may only reflect how the linguistic proficiency of these employees is categorized: Irish-based employees who went to Irish schools may have a functional understanding of the language for purposes of cultural connotation and nuance.

TikTok, Google and X (registered as TIUC, or Twitter International Unlimited Company, in the EU) all have large European headquarters in Ireland and employ Irish staff. It is interesting to note that, because of the scarcity of human resources for these languages, large platforms like TikTok and X don’t record data or complaints for an entire country (e.g. Malta, or Irish-language posts in Ireland and everywhere else in the EU). The appeal-handling process in this instance is rather obscure.

Perhaps the most revealing data in this section is the breakdown by years of experience that X/TIUC presents for its human content moderators. Unsurprisingly, it falls off drastically after the 5-year mark. Only *8% of the global content moderator workforce at X/TIUC has more than 7 years of experience in the content governance industry.

*The 8% figure is derived from multiple approximations, assuming 638 is the total number of contracted content moderators, each averaging at least one year of experience.
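Spelled out, the back-of-the-envelope arithmetic behind that footnote looks like this (638 is the assumed headcount from the footnote above):

```python
# Back-of-the-envelope behind the *8% figure above: assuming ~638
# contracted content moderators in total, the headcount reported with
# more than 7 years of experience works out to roughly:
total_moderators = 638   # assumed total from the footnote
veteran_share = 0.08     # share with > 7 years of experience
veterans = round(total_moderators * veteran_share)
print(veterans)  # about 51 people
```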

It is important to note that the mass layoffs of late 2022 and 2023 may have affected these numbers. The site integrity, content governance and civic integrity teams at X and Meta may have included 10-plus-year veterans of the industry who pioneered our current approach to what are now known as the online content governance and online safety policy disciplines. But today, they are in the wild. Hopefully, some have joined regulatory agencies to impart their industry knowledge to the governments and regulators attempting to regulate these services.

Complaints-handling systems, indicators of accuracy
Other underwhelming figures in these reports are the total numbers of complaints received over content moderation actions and account suspensions/suspensions of services. What is also “alarming” is the high share of overturned decisions in these cases, which ranges from 20-plus percent up to *56% for TikTok.

*TikTok attaches a major asterisk to this number, noting that the restored content (267,140 pieces) in the short reporting period of September 1 to September 30 may have included content restored from complaints filed outside that period. The total number of complaints received in September 2023 was 472,254. The number of complaints remains remarkably low: TikTok removed nearly 4 million pieces of content, for different reasons, in (roughly) the same reporting period, so complaints cover only about 11% of all possible actions.
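Both percentages in that footnote follow directly from TikTok’s reported figures:

```python
# TikTok, September 2023 reporting window (figures from the text above).
restored = 267_140      # content restored after a complaint
complaints = 472_254    # total complaints received in September 2023
removed = 4_038_586     # total content removed in (roughly) the same window

overturn_rate = restored / complaints   # with TikTok's caveat that some
                                        # restorations may predate the window
complaint_rate = complaints / removed   # share of removals that were contested
print(f"overturned: {overturn_rate:.0%}, contested: {complaint_rate:.0%}")
```

The computed values land within a point of the 56% and 11% figures quoted above; the small gaps are rounding.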

Note: While the database published by the EU Commission includes individual statements of reasons for these moderation actions, it does not include the original content itself. The jury is still out on whether platform users need to appeal content moderation decisions more actively, or whether mass appeals could be used as a tactic to exhaust these platforms’ resources and processes at the expense of genuine, legitimate manual review requests, ultimately subverting the appeal system. Digital authoritarianism morphs and changes everywhere, especially in the context of conflict, political dissent and legitimate civil unrest in all parts of the world.

For Instagram and Facebook, the tables “15.1.d.(1) Number of organic content complaints and resulting restored content” showcase the accuracy of content moderation decisions (automated and manual, flagged and reviewed on the platform’s own initiative) across a list of content moderation and community guidelines “violation” grounds. The (in)accuracy rates are not flattering. The highest inaccuracy rates for Instagram and Facebook are in the category of “Adult Nudity and Sexual Activity,” at 28% (Instagram) and a whopping 46% (Facebook). On Facebook, the inaccuracy rate for the “violation” category of Hate Speech is an unforgiving 39%.

X/TIUC reports its own internal breakdown of indicators of accuracy per complaint category, automated and manual review and per language. Their self-reported numbers indicate a high level of accuracy with regards to rightfully actioned content on the platform.

When it comes to Wikipedia, the handling of complaints is typically managed first by the language community or the relevant WikiProject community. It is a different model: content and content moderation decisions are entirely (or for the most part) made by volunteers, who take actions such as pausing edits on an article (NPR: What is a recession, Wikipedia can’t decide). Complaints are also handled by the community. The inclusiveness of the reliability/notability and other guidelines remains much debated amongst members of the community. In 2020 and 2021, I supported a research project that consisted of multistakeholder edit-a-thons, conversations and an editorial approach to the gender inclusiveness of reliability guidelines for the English, Spanish and French Wikipedias. You can read the synthesis report and watch a summary video presentation here.

This research was led by Amber Berson, Monika Sengul-Jones and Melissa Tamani and was partially funded by Wikicred. The report is initially available in English.

Conclusion
The first DSA reports were an interesting example of how the very large platforms envisage their reporting duties under this new law.

There remain several contentious issues with the legislation, for instance both access to data for “vetted” researchers and the need to anonymize this data to protect users’ privacy.

The database has machine-readable data for the “statements of reasons” submitted by the platforms for their moderation actions within the EU, but it does not provide any more granularity for scrutinizing the platforms’ actions on contentious content governance issues such as mis- and disinformation; coarse, uncivil or divergent political discourse; the need for digital preservation; and freedom of expression and (online) assembly/organizing in times of war, or actions specifically in the context of elections or periods of intense political upheaval in each Member State of the European Union.
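As a sketch of how a researcher might start slicing that machine-readable data, here is a minimal tally over a CSV export of statements of reasons. The field names (`platform_name`, `category`) are assumptions on my part; verify them against the database’s current data dictionary before relying on this:

```python
# Tally statements of reasons by platform and violation category from a
# CSV export of the transparency database. The field names used here
# (platform_name, category) are assumptions -- check the database's
# published data dictionary for the actual schema.
import csv
from collections import Counter

def tally_statements(path: str) -> Counter:
    """Count (platform, category) pairs across a statements-of-reasons dump."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[(row["platform_name"], row["category"])] += 1
    return counts

# Hypothetical usage: ten most common platform/category pairs in a dump.
# for (platform, category), n in tally_statements("sor-dump.csv").most_common(10):
#     print(f"{platform:>12}  {category:<40} {n:>8}")
```

Even a simple tally like this surfaces the asymmetries discussed throughout this article, such as which violation grounds dominate each platform’s submissions.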

From a global overview of the reported figures and results so far, the content moderation actions by these platforms are minuscule compared to the total volume of content hosted on them. It is also unclear whether the DSA extends to content posted outside the EU but targeting EU residents, which is how the internet operates: global and interoperable.

From an internet governance perspective, the EU is making strides toward turning this geographical region into a data island, inhibiting data flows out of it and imposing mandates on intermediary services at the risk of fragmenting the global internet. This puts the onus on each individual member state to police content available in its jurisdiction, a task that EU member states’ central and regional authorities may not have the resources or expertise for. These decisions, or the consequences of this law (the GDPR-ification of the DSA, or the future AI Act), may even become the subject of internal strife and the stifling of political dissent.
