CENTER FOR ETHICS AND THE RULE OF LAW

Proceed with caution: Why curtailing Section 230 immunity is not the solution to social media regulation

Controversy surrounding the ethical and legal responsibility technology companies have to moderate third-party user-generated content has been firmly rooted in American political consciousness since Twitter’s decision to label and fact-check President Donald Trump’s tweets in May 2020. Trump’s Executive Order on Preventing Online Censorship, coupled with Twitter’s and Facebook’s policies of labeling posts, has complicated an already strained relationship between the administration and the technology sector. Section 230 of the Communications Decency Act, which grants internet service providers protection from civil liability for hosting content while authorizing those same providers to moderate, in good faith, content they deem objectionable, lies at the heart of this tension. In essence, the point of contention is whether the immunity provision is in any way conditioned upon “good faith” moderation.

In accordance with the executive order, the National Telecommunications and Information Administration (NTIA) filed a petition for rulemaking with the Federal Communications Commission (FCC) on July 27, 2020, to clarify the scope of Section 230 immunity. On August 3, 2020, FCC Chairman Ajit Pai announced the beginning of a 45-day period during which the FCC would review public commentary on the issue. Additionally, on September 23, 2020, the Department of Justice sent draft legislation to Congress aimed at furthering the reforms proposed in President Trump’s executive order. Thus, the time is ripe for us to critically reflect on both the consequences of curtailing Section 230 and the recent history that spurred this contentious debate.

Section 230(c)(1) has become synonymous with the epigram “the twenty-six words that created the Internet”; however, this provision, which shields websites from civil liability by attributing content to the third party who created it, has come under severe scrutiny. Individuals from across the political spectrum, including former Vice President Joe Biden in January and President Trump in May, have urged its repeal. In an August 27, 2020, interview with Vanity Fair, Senator Lindsey Graham echoed this sentiment, stating, “If you’re going to have a social media site like QAnon or anything else, you spread this stuff at your own peril.” Although the executive order’s reinterpretation of Section 230 is unlikely to withstand judicial scrutiny, it is critical to address the repercussions of eroding Section 230 immunity. Big Tech companies must be held to a higher standard of accountability; however, the consequences of repealing Section 230 cannot be ignored.

What is Section 230 and why has it been integral to the Internet’s development?

Title 47 U.S. Code § 230, entitled “Protection for private blocking and screening of offensive material,” contains two important provisions: 230(c)(1) and 230(c)(2). The first provision states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Companies are not liable for third-party user-generated content; consequently, an individual can sue those responsible for authoring a defamatory statement on Facebook but cannot sue Facebook. Unlike newspapers, which are liable for all published content, social media companies are protected by Section 230 immunity and are not regulated as publishers. This immunity has several notable exceptions. It does not extend to content that runs afoul of intellectual property protections, constitutes a federal crime, or violates the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) or the Stop Enabling Sex Traffickers Act (SESTA). Nor does it cover content generated by the website itself.

The second provision states that a provider or user of an interactive computer service does not assume liability as a result of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” This protects websites when making content moderation decisions, allowing them to host, moderate, or restrict access to objectionable content as they see fit.

These two Section 230 provisions are best understood as a response to the conflicting judicial decisions in Cubby, Inc. v. CompuServe Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. in the 1990s. In the CompuServe case, a New York federal district court dismissed a defamation suit against CompuServe, an online service provider, for material posted in a third-party newsletter it hosted. Because CompuServe chose not to moderate any of its content, it was treated as a distributor and held not liable. In the Prodigy case, however, a New York state court reached the opposite conclusion, ruling that Prodigy, another online service provider, was liable for defamatory statements posted on its bulletin board. Given that Prodigy actively moderated its content to create a more inviting platform, the court determined it had acted as a publisher and was, therefore, liable. These cases created “the moderator’s dilemma,” wherein companies were discouraged from responsibly moderating content because engaging in any editorial practices would immediately shift their role from distributor to publisher, making them liable for defamatory statements.

Then-Representatives Chris Cox (R-CA) and Ron Wyden (D-OR) sought to remedy this perverse incentive by introducing the “Internet Freedom and Family Empowerment Act,” which gave companies both a “sword” to moderate content at their own discretion and a “shield” to protect them from liability. At approximately the same time, Senator James Exon (D-NE) introduced the Communications Decency Act to criminalize the transmission of offensive online content to minors. Exon’s amendment and the Cox-Wyden amendment were added to Title V of the Telecommunications Act of 1996, entitled “Obscenity and Violence” but more commonly referred to as the “Communications Decency Act of 1996.” In Reno v. ACLU (1997), the Supreme Court struck down the majority of the Communications Decency Act as an unconstitutional violation of the First Amendment; however, the Cox-Wyden provisions survived and were codified in the Communications Act of 1934 as Section 230. This brief review of Section 230’s legislative history helps clarify its intended purpose. While the Exon amendment represented a flagrant attempt by Congress to regulate Internet content, Section 230 was established in the spirit of preserving a competitive and free digital marketplace, protecting an open forum for the exchange of ideas, encouraging future technological innovation, and empowering users to choose platforms that reflect their expectations for content moderation.

What are the consequences of repealing Section 230?

Section 230 has been heavily scrutinized by both sides of the aisle, albeit for different reasons. Democrats argue that Section 230 immunity has allowed Big Tech to abdicate responsibility for moderating harmful content, such as hate speech and disinformation, that is widely circulated on social media platforms. Furthermore, Section 230’s liability protection applies equally to fringe sites such as 8chan and others known to propagate white supremacist and anti-Semitic messages and hate speech. The courts’ willingness to embrace broad interpretations of Section 230 was made apparent in Force v. Facebook, Inc., in which victims of Hamas attacks could not sue the social media company for allegedly assisting the terrorist group. According to Democrats, this lack of regulation has created a digital environment free from accountability and easily susceptible to abuse. Republicans, on the other hand, believe that left-leaning social media platforms are systematically censoring and silencing conservative voices. While evidence of this remains largely anecdotal, Republicans argue that repealing Section 230 would expose biased content moderation based on political affiliation.

Trump’s Executive Order on Preventing Online Censorship drastically limits Section 230 immunity by urging the FCC to link subsection (c)(2) with (c)(1) in its reinterpretation of Section 230. Although legal precedent indicates that these provisions should be assessed independently, the proposed changes would disqualify a company from Section 230(c)(1) immunity if it failed to meet the Section 230(c)(2) standard for “good faith” content moderation. The order also directed the Secretary of Commerce to file a petition for rulemaking with the FCC to formulate regulations clarifying the scope of immunity; that petition was submitted on July 27, 2020. The petition seeks to, inter alia, clarify the relationship between subsections (c)(1) and (c)(2), provide additional guidance on what constitutes “objectionable content” and “good faith” content moderation, and define more clearly which content moderation practices fall within Section 230 immunity and under what circumstances a platform qualifies as an “information content provider.” The executive order also advises the Federal Trade Commission (FTC) to prohibit unfair content moderation practices and to allow individuals to take legal action against companies that violate their own terms of service.

There are multiple issues with Trump’s executive order. The FCC and FTC are independent federal agencies outside the president’s purview and cannot be compelled to take action via executive order. This, coupled with the FCC’s lack of authority to enforce Section 230, further undermines Trump’s attempt to constrain social media companies. Thus, the executive order’s perplexing and legally questionable foundation makes it unlikely to withstand judicial scrutiny. In fact, the Center for Democracy & Technology filed a lawsuit on June 2, 2020, on the grounds that the order violates the First Amendment. Under the Supreme Court’s 1974 decision in Miami Herald Publishing Co. v. Tornillo, newspapers are entitled to editorial and curatorial freedom under the First Amendment, and cases such as Prager University v. Google and Manhattan Community Access Corp. v. Halleck indicate that social media companies enjoy the same right to host, arrange, select, and moderate content. Legal defensibility aside, the executive order would allow the government to strip immunity from companies based on a disagreement over content. Once a company was determined to have violated “good faith” content moderation practices, it would become liable for all user-generated content and face an unprecedented surge of private litigation.

The desire to curtail Section 230 protections is driven, in part, by the outsized dominance of Big Tech; however, repeal may paradoxically solidify the power and influence of those very companies. Section 230 serves as an implicit financial subsidy to technology companies, allowing start-ups to thrive in an online environment with limited legal liability. Without this immunity, only well-established companies would have the funds and resources necessary to absorb the resulting influx of lawsuits, thereby preventing the development of competing platforms. Section 230 enabled companies like Google and Facebook to surpass their predecessors AOL and MySpace by allowing technological innovation to flourish absent legal liability. While this has not been without consequence, it is important to consider the effect of Section 230’s repeal on the free exchange of ideas. The threat of defamation suits and legal liability would force companies to choose between severely restricting content and largely abandoning moderation practices. Either way, the outcome is antithetical to the Internet’s purpose as a platform for robust and diverse discussion and undermines efforts to curb hateful and harmful messaging.

The complexity of regulating social media companies should not dissuade the government from devising new strategies to introduce greater accountability into the tech sector. Some politicians and pundits have suggested that Section 230 immunity should be contingent on social media platforms proving their political neutrality. While this suggestion is both inherently problematic and a misrepresentation of the law, the notion of using intermediary liability as a lever to incentivize more robust and transparent content moderation practices may hold merit. Furthermore, shifting public perception of social media companies from intermediaries without social responsibility to stewards of public discourse would encourage the adoption of a business model that prioritizes the employment of content moderators and expands audits of the moderation process.

Under the supervision of a blue-ribbon advisory panel of subject matter experts, academics, regulators, and activists, social media companies would be subject to regular reviews of their data and content moderation practices. Andrew Marantz, a staff writer for The New Yorker, wrote, “We can protect unpopular speech from government interference while also admitting that unchecked speech can expose us to real risks. And we can take steps to mitigate those risks.” As the government and Big Tech grapple with divergent approaches to regulation and the adversarial threat of Section 230 repeal, we would do well to remember that the most viable solutions will be achieved through partnership with these companies, not punitive restrictions against them.

Ashley Fuchs is a political science and classical studies double major at the University of Pennsylvania’s College of Arts and Sciences and a Benjamin Franklin Scholar. She was a 2020 summer intern at the Center for Ethics and the Rule of Law.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Center for Ethics and the Rule of Law or the University of Pennsylvania Carey Law School.
