“Enforcing the Rule of Law in Online Content Moderation” is the fifth article in a series on intersections between business law and the rule of law, and their importance for business lawyers, created by the American Bar Association Business Law Section’s Rule of Law Working Group. Read more articles in the series.
Every day, online intermediaries like Facebook, Instagram, and YouTube make decisions about user activity that can result in the removal of a user’s content or restrictions on a user’s account. Platforms often make such decisions based on standards set by national laws or the platform Terms of Service (ToS). Naturally, there are times when users disagree with a platform’s decisions. In such cases, users might seek to challenge whether the platform got its decision right. An affected user might try to appeal such decisions through a platform’s in-house appeal mechanisms, which all major platforms have established. A Facebook user might further pursue the slim chance of review by Facebook’s Oversight Board.
But what about the traditional pathway to justice: the courts? In the US, uniquely, users have few—if any—chances to successfully challenge the content decisions of social media platforms in court. However, in Europe, the picture is different. For some years now, European courts have been working their way through various platforms’ Community Standards, carving out due process rights and legal boundaries for platforms. As a result, many users have successfully sued Facebook and other social media platforms to reinstate content or accounts.
This article explains two landmark decisions to illustrate how German courts have applied contract law to infuse the principles of the rule of law into the decisions governing social media platforms.
The Anomaly of the US’s (Nearly) Total Platform Discretion
In the US, there have been no reported court cases of users successfully suing online platforms for reinstatement of content or accounts.[1] In part, this might be the product of a common law approach to contracts. Some platforms, for example, shield themselves through termination-for-convenience clauses that allow them to suspend accounts without leaving users any legal recourse. The ultimate reason for the absence of any viable recourse to the courts, however, is the current interpretation of § 230 of the Communications Decency Act (CDA). The provision is well recognized as granting platforms immunity for non-IP-related content decisions: immunity for inaction, e.g., not removing content,[2] as well as for good faith decisions to take down content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”[3] In other words, platforms are protected both when they choose not to act and when they do act.
With respect to immunity for taking action, the less prominent feature of § 230, US courts have found that the provision robustly shields social media platforms from suit. In the words of the Ninth Circuit, ruling on a claim for redress regarding a platform’s decision to delete a user profile: “[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.”[4] Courts have interpreted CDA § 230 as granting platforms wide discretion to self-regulate via Community Standards.[5] Given this wide discretion, legal scholars are skeptical that suits against social media platforms to reverse moderation decisions can succeed.[6]
Europe’s Middle Ground: User Rights through Contract Law, Cultivated by the Courts
In Europe, there is no legislation comparable to CDA § 230 (a legislative provision unique to the US), and courts, especially civil law courts, have strong tools to strike down ToS that do not adequately reflect principles of fairness and the rule of law.
In this landscape, an unjustified moderation decision amounts to a breach of contract, giving rise to a claim for reinstatement. But when is a platform’s decision unjustified? In Germany, for example, answering this question typically boils down to a two-step inquiry: First, does the content or user behavior in question violate the law or the Community Standards? If not, the user can demand reinstatement. Second, if the court finds that the conduct violates a platform’s Community Standards (but no laws), it considers whether those Community Standards are valid. If they are not, the court strikes down the relevant standards, and the user may claim reinstatement in the absence of any valid justification for the moderation decision.
Such judicial review of Community Standards is based on general principles of German contract law: when ToS do not meet minimum standards of fairness, courts will find them invalid and inapplicable. This review rests on the assumption that the consumer has neither the opportunity nor the bargaining power to renegotiate the ToS; substantive judicial review, once contractual disputes arise, is meant to counterbalance this reality. When examining moderation decisions based on Community Standards, courts apply this ToS review to the Community Standards themselves.
For some years now, users have been bringing actions for reinstatement to German courts, arguing that their content did not violate the applicable standards and that, even if it did, the relevant portions of the platform’s ToS / Community Standards were unfair and thus void. An exact number is difficult to establish, but approximately 50 such court decisions have been reported in Germany so far. Most often, the claimants seeking reinstatement are men, and the disputes center on right-wing content that the platforms typically categorize as “Hate Speech.” Often, though not always, claimants are successful, and the platforms are required to republish the disputed content or to restore an account.
July 2021 – Landmark Decisions by the German Federal Court of Justice
In Germany, claims of this nature have already reached the Constitutional Court, which issued a preliminary injunction requiring Facebook to reverse the suspension of an account belonging to a right-wing political party.[7] That order, however, came in a preliminary proceeding and did not clarify the merits of the underlying legal issues. In 2021, the German Federal Court of Justice became the first high court anywhere in the world to deliver landmark decisions reviewing content removals and account suspensions by Facebook.[8]
In one such case, Facebook removed a user’s comments about an online video showing a person (assumed by the user to be an immigrant) refusing to be checked by a female police officer. The plaintiff commented on this video excerpt, stating: “What are these people doing here … no respect … they will never assimilate and will be a taxpayer’s burden forever … these gold pieces[9] are only good for murder … theft … rioting …” Facebook deleted these comments on the grounds that they constituted “Hate Speech.”
In the second case, the plaintiff had posted a message that included the following: “Germans get criminalized, because they have a different view of their country than the regime. Immigrants here can kill and rape and no one is interested!” Facebook deleted the post and temporarily restricted the user’s account, placing it in read-only mode for one month.
In both cases, the Federal Court of Justice reversed the lower courts’ rulings and ordered Facebook to reinstate the content and restore the user accounts with full privileges. The clarifications of law delivered through these decisions strengthen the Rule of Law in the context of platform moderation powers in the following ways:
1. Holding Parties Accountable to Contractual Terms
To start with, the decisions by the Federal Court of Justice reaffirmed that upon registration, the platform and the user enter into a contract.[10] Under this contract, the platform’s obligation is to allow the user to post content. Accordingly, the platform may not delete content without justification.
2. While Platforms Are Not State Actors, Constitutional Guidance May Shine Upon Their Moderation Powers
In its decisions, the Federal Court of Justice took an in-depth look at how constitutional rights govern the legal questions at hand. Before these decisions, it had been heavily disputed whether large platforms like Facebook could be bound by fundamental rights in the same way as state actors. Some scholars have argued, and some courts have found, that a platform like Facebook, while not a state actor, provides an essential public forum fulfilling state-like functions (the speaker’s corner of the 21st century). Such an argument builds on an idea the German Constitutional Court had openly entertained in past precedents: “where private companies take on a position that is so dominant as to be similar to the state’s position, or where they provide the framework for public communication themselves, the binding effect of the fundamental right on private actors can ultimately be close, or even equal to, its binding effect on the state.”[11]
Notably, the Federal Court of Justice did not find that Facebook’s platform fell into this (narrow) category of state-like providers of frameworks for public communication; although Facebook did provide a substantial means of online communication, it was not found to be the doorkeeper to the internet as such.[12]
After rejecting this argument (that Facebook is state-like), the Court elaborated on how constitutional rights indirectly govern the case. In Germany, under the well-established Drittwirkung doctrine, fundamental rights serve as strong guidance for interpreting obligations between private parties, especially when courts must apply abstract terms like the “appropriateness” or “fairness” of certain obligations (which formed the decisive question in the case: Are Facebook’s Community Standards adequately fair?).
Consequently, the Court carefully evaluated which constitutionally protected positions should be brought into balance:
The Court acknowledged the users’ free speech rights and found that the constitutional principle of equality before the law supported strong protection against discriminatory treatment by the platform. The Court based its decision to bind Facebook to principles of equal and just treatment of users on three considerations: first, that Facebook, by its own decision, opens its services to a broad public;[13] second, that at least a portion of citizens highly depend on social networks;[14] and third, that due to lock-in effects users cannot easily substitute one large platform for another.[15]
The Court also found that freedom of commerce weighed in Facebook’s favor where it acts to self-regulate communication standards in order to protect the safety and well-being of its users, who may themselves have a valid interest in respectful communication on “their” platform.[16] The Court further highlighted Facebook’s own speech rights when it “speaks” by setting Community Standards. Moreover, the Court acknowledged that platforms have a practical need to moderate content and users in order to mitigate liability risks.[17]
3. Within Limits, Platforms Might Self-Regulate by Defining “Permissible Speech”
After balancing these interests, the Court proceeded to its more granular conclusions. It allowed Facebook to define its own communication rules through its Community Standards, which may go beyond the speech restrictions of German law (“the police could not arrest you for it, but Facebook might block it”). Platforms may thus legitimately ban “awful but lawful” hate speech.[18] However, the Court did not grant Facebook total discretion.[19] Instead, it held that restrictions on speech must be grounded in objective reasons. Since Facebook opened its platform to general discourse, political opinions could not be outlawed as such.[20] Although the Court did not explain this finding any further, one might conclude that there is generally little room for viewpoint-based restrictions, but greater flexibility for banning certain (e.g., aggressive) forms of speech.
4. The Articulation of Private Due Process
Furthermore, the Court effectively “invented” strict private due process by reasoning that Facebook’s ToS would otherwise be unfair, meaning they would cause undue disadvantage to the user. To prevent such undue disadvantage, the Court advanced the following framework for platforms seeking to restrict content or accounts:
- Diligent Investigation: Platforms must make reasonable efforts to investigate before reaching a decision. To limit the risk of discriminatory behavior,[21] moderation decisions must be reasonably justified, which requires the platform to examine the situation with reasonable care.
- Prompt Information: To balance the competing interests, platforms must inform their users about a decision. For content removal, the Court found that a platform may take immediate action and notify the user after the fact.[22] For account suspensions and restrictions (e.g., placing an account in read-only mode), the Court found that the affected user generally (with some possible exceptions) must be informed before the decision is implemented.[23]
- Providing Reasons and Considering Appeals: When informing users of content- or account-related decisions, the platform must provide a statement of reasons. Moreover, users must have the ability to appeal.[24]
The Court’s Conclusion and Its Outlook
On this basis, the German Federal Court of Justice found that, because Facebook’s ToS did not include a sufficiently explicit commitment to due process, the relevant provision (§ 3.2 of the ToS: “… We can remove or block content that is in breach of these provisions …”) was unfair and thus inapplicable. Consequently, there was no sound legal basis for the content removals and account suspension in question.[25]
On a more abstract level, the decisions are a win for users and social media platforms alike. First and foremost, this is because the Court did not treat Facebook as a state actor, instead reaffirming that even Facebook is, within limits, allowed to define its own communication standards. Moreover, even though the decisions created some bureaucratic burdens, they also created valuable legal certainty. Platforms now know what to do; the Court has given them instructions on what to write into their ToS and how to safeguard sufficient due process for their users when making moderation decisions. One can expect that other European courts will follow this precedent.
Reconsidering CDA § 230 in Light of the European Approach
Looking from Europe across the Atlantic, one might ask: could American users and businesses benefit from the German approach of injecting due process through contract law?
The Culture War over CDA § 230
Certainly, the temperature is rising around whether § 230 delivers just outcomes for businesses and users alike. For some time now, politicians from both sides of the aisle in the US have been considering limits on the protections that CDA § 230 grants to online platforms. Originally, most critics of § 230 focused on holding platforms accountable for not acting, or for amplifying harmful content, especially through their algorithms.
However, especially after major platforms took action to suspend former President Donald Trump, concerns have been raised over whether platforms should, in part, be stripped of their protection on the other side of the coin, that is, for when they do take action, which is the subject of this article. These concerns about the platforms’ moderation powers are often politicized, frequently framing California-based big tech as pushing a (left-wing) political agenda. Content moderation is described as viewpoint discrimination and ideological censorship. Such a narrative, of course, can overshadow reflection on the reasonable grounds for the platforms’ actions.
For this side of the coin, a wide range of suggestions is on the table,[26] from requiring platforms to be more transparent about their moderation decisions, to requiring more uniform decisions, to narrowing the scope of the moderation liability exemption, to drastically limiting content moderation powers by introducing must-carry obligations.[27] Notably, some voices in this debate argue against watering down platform immunity for moderation decisions: platforms could become ungovernable if held to First Amendment standards.[28] While this argument seems convincing, it does not speak against a middle ground of holding platforms accountable at least to minimum due process. Others rightly point out that loosening § 230’s immunity for moderation decisions undercuts its principal benefit, namely cheap and reliable defense wins.[29] I find this generally convincing, too. Indeed, rising costs of moderation decisions could make platforms reluctant to take reasonable actions, or even lead to “over-put-back”: if a platform knows that certain actors, e.g., white supremacist groups, will challenge every content decision, it may be incentivized to restore (“put back”) content even though it violates the platform’s Community Standards. However, this should not be an argument against injecting at least minimum due process into moderation decisions, which would leave platforms sufficient immunity for good faith decisions.
Can Company Promises and User Expectations Help Re-interpret CDA § 230?
One wonders: Is there no middle path even under the existing CDA § 230(c)(2)(A)? Does the wording of § 230(c)(2)(A) necessarily grant immunity for all content moderation? And even if it does, might contract law be relied upon to work, at least in part, around § 230? Could this produce middle-ground outcomes like those described above for Europe?
At its core, the European approach is about taking seriously what today’s mega-platforms are and what they promise to be. They are no longer underground bulletin boards where one might expect content to stay online only as long as the administrator is in a good mood or too busy to notice. To the contrary, modern platforms present themselves as public forums where speech restrictions should not be arbitrary; the motto is: “You should be able to speak your mind.”[30] Users rightfully expect sanctions to be based not on Mark Zuckerberg’s whims or preferences, but only upon reasoned grounds. Indeed, the Facebook ToS is tied to legitimate concerns: “We want people to use Facebook to express themselves …, but not at the expense of the safety and well-being of others …”[31] User expectations of platform self-restraint are reaffirmed through the platform ToS: “You therefore agree not to engage in the conduct described below … We can remove or block content that is in breach of these provisions.”[32] Platform Rule of Law is strongly justified because this is not an altruistic relationship: users “pay” for the platform with their data and by accepting advertisements.
The change in user expectations regarding platform governance is no coincidence: platforms increasingly accept their role as community governors, that is, as enforcers of values and norms, as Kate Klonick has described thoroughly.[33] But this, of course, goes hand in hand with self-restraint and adherence to common values and norms: the rule of law, one could say.
Could this, in the US, translate into the legal sphere?
Eric Goldman has described potential theoretical pathways,[34] though he obviously would reject the following conclusions, which seem moderate from a European perspective:
- Revisit “objectionable content” and “good faith.” By its plain words, CDA § 230(c)(2) does not provide blanket immunity for any moderation decision, but only for decisions regarding “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” material, and only for decisions taken “in good faith.” If a platform erred in its assessment that content violated its ToS, courts could question whether that content even qualifies as “otherwise objectionable.”[35] Moreover, courts could inject due process through the “good faith” requirement: if a platform neither informs a user nor hears the user’s defense (denying any appeal), this could amount to “bad faith.”[36]
- “Promissory estoppel” through abstract promises. In theory, online platforms should be free to waive their discretion over content moderation (which is otherwise backed by § 230), especially through marketing representations and contract provisions.[37] Courts have affirmed that a user-specific, individual promise can give rise to promissory estoppel, thus waiving the protection of CDA § 230.[38] Lawyers could test the waters beyond individual promises: if counsel can show that platforms abstractly “promise” to act only on reasonable grounds (see above), one could conclude that the platforms have waived § 230’s standard of subjective, total discretion without any due process.
The suggested interpretation would merely hold platforms accountable to their own rules and inject minimum due process. Such modest self-restraint is something that nowadays even Facebook seems to suggest as a threshold for CDA § 230. If platforms still want to retain full discretion, they remain free to do so, as long as they are explicit and unambiguous about it.
Of course, given the prevailing platform-friendly interpretation of § 230, the suggested interpretation (partially waiving § 230 through abstract promises of due process) faces an uphill battle, as courts are generally, and rightly, reluctant to translate unclear contract provisions and marketing representations into an estoppel of privileges (one does not give up rights through warm words).
However, as user expectations of platform self-restraint (Rule of Law-inspired, perhaps) continue to grow, these questions invite reinterpretation. In Europe, as I have shown, lawyers and courts have been pushing large platforms towards more Rule of Law in moderation decisions. Maybe, there is a similar, as yet untraveled, road ahead under current US law, too.
[1] A highly interesting collection of reports on (unsuccessful) cases can be found at Eric Goldman’s Technology & Marketing Law Blog (section “content regulation”).
[2] CDA § 230(c)(1).
[3] CDA § 230(c)(2)(A). See Eric Goldman, Online User Account Termination and 47 U.S.C. §230(c)(2), UC Irvine Law Review, Vol. 2, 2012, 659 (662).
[4] Riggs v. MySpace, Inc., 444 F. App’x 986 (9th Cir. 2011), citing Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1170–71 (9th Cir. 2008).
[5] See, e.g., Green v. Am. Online, 318 F.3d 465, 471 (3d Cir. 2003): “allows … to establish standards of decency;” Langdon v. Google, 474 F. Supp. 2d 622, 631 (D. Del. 2007): “provides … immunity for … editorial decisions regarding screening and deletion.”
[6] E.g., Daphne Keller, Who Do You Sue? State and Platform Hybrid Power Over Online Speech (2019), pp. 4, 12, and 16.
[7] Federal Constitutional Court’s Order of 22 May 2019, 1 BvQ 42/19 (English press release available).
[8] Federal Court of Justice, decisions of 29 July 2021 – III ZR 192/20 and III ZR 179/20.
[9] The term “gold pieces” is used as a sarcastic description of refugees. It refers to an earlier statement by German politician Martin Schulz, who in 2016 argued in favor of welcoming refugees as follows: “The way we benefit from these people is more valuable than gold; it is the unperturbed belief in the dream of Europe.”
[10] Federal Court of Justice, decision of 29 July 2021 – III ZR 192/20, para 40.
[11] Decision of 6 November 2019 – 1 BvR 16/13 “Right to be forgotten I,” para 88 (English version available).
[12] Federal Court of Justice, decision of 29 July 2021 – III ZR 192/20, para 71.
[13] Id. at paras 76 – 78.
[14] Id. at para 78.
[15] Id. at para 79.
[16] Id. at para 87.
[17] Id. at paras 88 – 89.
[18] Id. at para 90.
[19] Id. at para 93.
[20] Id. at para 93.
[21] Id. at para 96.
[22] Id. at paras 95 – 99.
[23] Federal Court of Justice, decision of 29 July 2021 – III ZR 179/20, para 87.
[24] Federal Court of Justice, decision of 29 July 2021 – III ZR 192/20, para 97.
[25] The Court briefly considered whether the content in question was illegal under criminal law (which would have justified removal irrespective of the ToS). However, the Court found that it was not, and therefore ordered Facebook to reinstate the content and to refrain from restricting the accounts again on the given grounds.
[26] A good overview is to be found on the Wikipedia page for § 230.
[27] See, e.g., Eugene Volokh, “Treating Social Media Platforms Like Common Carriers?”, 1 Journal of Free Speech Law, 377 (2021).
[28] Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2026 (2018).
[29] Eric Goldman, Online User Account Termination and 47 U.S.C. §230(c)(2), UC Irvine Law Review, Vol. 2, 2012, 659 (671).
[30] Twitter, in its about section.
[32] Facebook, ToS 3.2; italics not in original text.
[33] Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).
[34] Eric Goldman, Online User Account Termination and 47 U.S.C. §230(c)(2), UC Irvine Law Review, Vol. 2, 2012, 659.
[35] This implies strengthening an objective understanding of CDA § 230(c)(2), which is disputed; see id. at 662. Goldman even argues that platforms might not be able to waive § 230 protection.
[36] Id. at 665, pointing to Smith v. Trusted Universal Standards in Elec. Transactions, No. 09-4567 (RBK/KMW), 2011 WL 900096, at *25–26 (D.N.J. Mar. 15, 2011).
[37] Id. at 667.
[38] Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009): “under the theory of promissory estoppel, subsection 230(c)(1) of the Act does not preclude … cause of action” (action was brought against the provider for not taking action after user-specific promise to do so).