Uproot or Upgrade? Revisiting Section 230 Immunity in the Digital Age
The internet has drastically altered our notion of the press. When newspapers were still made of paper, facts determined by reasonable investigation had to be reported as facts and corrected in light of better information. Opinions, clearly labeled as such, were reserved for the opinion pages. And no respectable newspaper of that era would have accepted unlimited, uncensored content from strangers. This did not mean that the press of that day operated without external restraint. Indeed, in the heyday of newspapers, the U.S. Supreme Court repeatedly grappled with the various friction points between press freedom and government expedience.1 The internet, with its promise of a free flow of information, has changed all that.
Social media now occupies much of the space once filled by our morning paper. Poll after poll confirms that most Americans are getting their news, not from time-honored news publications, but from a smattering of content—some vetted, much not—shared in Facebook timelines, Twitter feeds, and Reddit threads. These outlets distribute information, largely supplied by others, to an audience without borders or external restraints, yet they answer for none of it: not under tort, libel, or slander laws; not for personal privacy or accuracy; not even for the potential to incite harmful behavior. Their influence isn’t merely unfettered; it is congressionally insulated from virtually any restraint or responsibility, thanks to Section 230 immunity.
Section 230 of the Communications Decency Act of 1996 broadly immunizes internet platforms from liability for content published by third parties and for removing or restricting access to certain classes of content. Congress enacted Section 230 in the early days of the internet, when platforms were young and had a limited potential audience, as well as a limited ability to affect behavior. The concept of internet platforms—available worldwide, with free-flowing information and endless perspectives, unencumbered by government regulation, and encouraged to engage in self-regulation—seemed both new and attractive. Besides, there were billions to be made, innovations to be rewarded, and taxes on capital gains to help balance the federal budget. Of course, there were competing interests at stake: the European Union, predictably, opted to prioritize privacy and antitrust regulation, while the United States placed a premium on innovation. But there were practical reasons, too. Defending the need for immunity, internet platforms pointed to the sheer volume of information flowing through their servers, making it virtually impossible to screen everything. One commentator captured the moment this way:
When Section 230 was adopted in 1996, it would have been impossible for a service like AOL to monitor its users in a wholly effective way. AOL couldn’t afford to hire tens of thousands of people to police what was said in its chat rooms, and the easy digital connection it offered was so magical that no one wanted the service to be saddled with such costs. Section 230, which granted platforms broad immunity for third party content published on their services, was an easy sell.
Section 230 immunity has since been stretched beyond these original aims, shielding even those platforms that deliberately solicit or host illegal activity. The modern internet is inescapable, even essential, seeping into virtually every aspect of our lives. And its potential for affecting human behavior has long since been realized. News reports following mass shootings tell of inflammatory websites the assailants frequented as they contemplated opening fire in classrooms, synagogues, and public gatherings. Social media platforms favored by the young have been used to encourage teen suicide, and disgruntled spouses have used social media to humiliate their exes.
All this has led to serious policy discussions about whether it is time to revisit the blanket immunity provided by Section 230. Earlier this year, Attorney General William Barr suggested the time may have come to hold internet platforms accountable for the content flourishing on their sites and services. “No longer are tech companies the underdog upstarts,” he said in a February speech reflecting on the broad immunity provided by Section 230. Sounding a bit like President Teddy Roosevelt blasting away at the railroad and steel producers at the turn of the last century, Attorney General Barr stated: “They have become titans.”
The Department of Justice has since followed up on the attorney general’s observations, proposing a set of carveouts from Section 230’s liability shield. These include limiting or denying immunity to “truly bad actors” who facilitate or solicit federally unlawful activity; to platforms that facilitate child abuse, terrorism, and cyber-stalking; and in cases in which the platform had “actual knowledge or notice” of specific content’s illegality. The Department would also reserve “broad immunity” for platforms willing to assist government authorities in identifying unlawful activity and obtaining the offending content in a “comprehensible, readable, and usable format.” Another proposal would narrow Section 230’s liability shield for platforms’ moderation efforts: the Department proposed replacing the current catch-all protection for removing content that is “otherwise objectionable” with the more specific categories “unlawful” and “promotes terrorism.”
The president, too, has waded into the debate over Section 230. As social media platforms have begun experimenting with fact-checking his posts and, more recently, removing provocative political ads, the president has threatened to roll back their immunity, at one point signing an executive order aimed at curtailing some of Section 230’s protections. To what end, however, remains to be seen. Eliminating Section 230 immunity seems a strange way to reduce the kind of content regulation that motivated the president’s order. Indeed, the opposite appears more likely. Losing immunity may well encourage Twitter, Facebook, and others to engage in even more rigorous content vetting to avoid legal exposure.
Platforms’ increased willingness to police content instead presents a different, albeit equally important question: What role do we expect social media platforms to play in online discourse? Has social media truly become today’s newspaper, accountable to the same journalistic ethics governing other members of the press? And if so, is each user effectively a contributor subject to editorial oversight? Media is part of the name, after all. Or are they just glorified chatrooms, in which case concerns about factual accuracy should take a back seat to matters of personal privacy, public safety, and national security—interests whose harms correspond more directly to the liability protection platforms presently enjoy? Amidst such uncertainty, one thing is certain: before we can expect to have a fruitful discussion about the continued need for Section 230 immunity, we must address this threshold question of categorization.
Until then, volunteer efforts and public pressure remain the only available tools. Large tech firms have, with some success, hired thousands of employees and deployed artificial intelligence and algorithms to ferret out harmful content. The argument that voluminous web traffic exceeds platforms’ regulatory capacity—that nothing has changed in the technological light-years that have elapsed since 1996—makes less sense today. In the United Kingdom and the European Union, for example, Section 230-like immunities have vanished in the face of public demand that internet platforms more effectively monitor and police the content appearing at their digital doors. The United States is still mulling its options.
Even as social costs continue to mount, Section 230 has proven quite robust. Just last year, it gave Google, Microsoft, and Yahoo! an escape hatch from liability after they allegedly conspired with “scam locksmiths” who flooded search results with false and misleading advertisements. And it offered a similar safe haven to the dating app Grindr, whose online safety features failed to prevent a victim’s ex-boyfriend from creating a profile impersonating the victim and violating his privacy. Simply put, immunity applies “regardless of whether the defendant acquired knowledge that the third-party content it published was false.”
The problems with the status quo are as obvious as they are numerous. Fashioning an appropriate (and durable) response is another story. Simply abolishing Section 230 immunity seems excessive, even counterproductive. A wide-open allowance of tort, defamation, and privacy violation claims could quickly overburden not just the companies forced to incur additional costs policing content, but also the judicial system tasked with furnishing a forum when those efforts inevitably go awry. Others have urged the use of antitrust laws: if the technology giants don’t shape up, break them up. In fact, the Justice Department’s recent reform proposals emerged from what began as an antitrust inquiry. But the use of laws designed to curb the market power of nineteenth-century railroads seems a square-pegged solution for a round-holed problem. New tools may be necessary to address the problems posed by these publisher-distributor hybrids we call internet platforms. And Congress must decide how to fashion them in a way that accommodates modern realities.
Congress has done this before, and it can do it again. Improper use of the internet has not gone altogether unnoticed by lawmakers. Section 230, by its express terms, does not apply to criminal sanctions. As identifiable harms have become apparent, Congress has responded with a series of statutes criminalizing cyberthreats and extortion, economic espionage, unsolicited emails, identity theft, access device fraud, and cyberstalking.2 Criminal statutes, however, may have limited long-term, industry-wide corrective effect. Enforcement requires a higher level of proof and is typically subject to intense scrutiny where human liberty is at stake.
Hoping to address these lingering concerns, Senators Lindsey Graham (R-S.C.) and Richard Blumenthal (D-Conn.) introduced the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act, or “EARN IT” Act, in March 2020. As its title suggests, the proposed federal legislation would convert Section 230 immunity from a default protection into an earned benefit. Specifically, it would create a national commission to recommend “best practices” for providers of interactive computer services. Noncompliance with the commission’s “suggestions” could result in loss of Section 230 immunity, though the attorney general would have the authority to override the commission’s recommendations and write his or her own standards. The proposal has predictably drawn significant criticism from the tech industry and privacy advocates, who have labeled it an attack on free speech, a back door for law enforcement to access encrypted communications, and an assault on innovation. The Justice Department’s more recent proposals have garnered similar reactions. Not exactly a promising start: consider the swift demise of previous internet legislation slapped with similar Orwellian labels.
We recommend something less sweeping: a reform that would tie compliance to the pace of technological advances, avoid unnecessary collisions with user privacy concerns, and keep the government from wading into the political thicket of free speech regulation.
Congress might consider refashioning Section 230 as an affirmative defense to tort, defamation, and other claims stemming from users’ unlawful online activity. This approach would give online platforms a complete defense to such claims upon a showing that they employed reasonably available technology to screen out harmful content.3 A platform whose screening systems can identify potential customers for advertisers who sell tank tops would be hard-pressed to say that it could not reasonably implement techniques to filter out content designed to steal trade secrets.4 Determining the constraints of “reasonably available technology” would require an analysis of both the screening technology available on the market and the defendant-platform’s ability to afford it. Adopting this approach would equip judges with a sliding scale as they mull Section 230’s application in any given case—allowing upstarts the same grace period afforded yesterday’s internet trailblazers while holding today’s tech behemoths to a standard better tailored to their increased capacity for self-regulation.
Of course, an affirmative defense poses its own tradeoffs. Tech firms would undoubtedly incur additional costs relative to the status quo. But those burdens would be borne by the large entities best suited to shoulder them, which also happen to be the platforms most likely to see unlawful content go viral. And though substituting an affirmative defense for broad immunity all but guarantees increased litigation, assigning the defense’s administration to the judiciary rather than a federal agency promises to alleviate some of the concerns about speech suppression and encryption backdoors raised by the likes of EARN IT and the Justice Department’s recent proposals. Unlike outright abolition, moreover, it strikes a balance between concerns about harmful content on one side and concerns about preserving innovation and judicial resources on the other.
One lingering concern about an affirmative defense approach is its generally late-stage utility in litigation. Consider the following example: A Facebook user makes a defamatory post, and the victim sues the user and Facebook for defamation. Assume further that Facebook’s diligent use of the best available moderating technology entitles it to a complete defense. Facebook likely cannot dispose of such a claim with a motion to dismiss because it bears the burden of proving the defense. Even with a rock-solid case, then, Facebook would be forced to endure the most expensive phases of litigation—most notably, discovery—before disposing of the case at summary judgment. The plaintiff, conversely, would bear the relatively light burden of proving that the post was defamatory.
A slightly more conservative approach avoids this scenario. Instead of recasting immunity as an affirmative defense, Congress could leave Section 230 alone and enact a separate cause of action for “failure to take reasonable steps to prevent [insert unlawful act here].” Doing so preserves the desirable qualities of the affirmative defense approach (case-by-case adjudication, judicial administration, and minimal privacy trade-offs) while respecting the important gatekeeping function that motions to dismiss play in our civil justice system. This approach keeps the onus on plaintiffs, who must plausibly and precisely allege what actions the platform could have taken to mitigate harm before the platform is saddled with costly discovery and summary judgment motions. Platforms that do their due diligence thus have some assurance against the nightmare scenario of paying for prevention only to face the same legal costs as the worst offenders.
We concede that neither approach readily addresses the political elephant in the room: censorship. There is a tendency to equate the debate over platforms’ liability for their users’ behavior with simultaneous discussions about the immense power platforms wield to censor user speech. And indeed, some critics of Section 230 immunity have raised perceived censorship as a reason for reform.5 Yet their collision is hardly inevitable. Sure, threatening the immunity of purportedly biased platforms offers a potent political cudgel. But in a strictly legal sense, they are two different issues. One is whether (and if so, to what degree) absolute immunity under Section 230 has outlived its usefulness, which we assess by weighing the political, economic, and social costs of various approaches to platform liability for unlawful user behavior. The other concerns platforms’ considerable power over speech in what many consider today’s public square. While one might ultimately prove useful in coaxing platforms to address the other, the censorship debate’s preoccupation with platforms’ removal of allegedly harmful but otherwise lawful content asks a separate question—namely, has the time come to require platforms to provide users the same free speech protections that Congress must afford the protestors on its front steps? Modernizing Section 230 doesn’t require us to answer that politically fraught question, so we won’t.
Ultimately, the proper course will have to be charted by Congress, which in turn must balance the interests of affected parties with the need to continue fostering technological innovation. And with the stakes this high, it should waste no time.
- 1See, e.g., Nebraska Press Ass’n v. Stuart, 427 U.S. 539 (1976) (forbidding prior restraints on media coverage of criminal trials); New York Times v. United States, 403 U.S. 713 (1971) (requiring the government to demonstrate “grave and irreparable” danger to justify prior restraint of the publication of government secrets); New York Times v. Sullivan, 376 U.S. 254 (1964) (requiring “actual malice” for public officials to sue the press for defamation).
- 2See 18 U.S.C. §§ 875, 1028(a)(7), 1028A(a)(7), 1029, 1037, 1831–39, 2261A(2).
- 3Benjamin Wittes and Professor Danielle Keats Citron have suggested just this sort of legislative fix. In their view, platforms should enjoy immunity from liability if they can show that their response to unlawful uses of their services is reasonable. Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401, 419–20 (2017).
- 4In Steven Spielberg’s 2002 film “Minority Report,” Tom Cruise’s character walks into a department store, and a virtual greeter says, “Hello, Mr. Yakamoto, welcome back to the GAP. How’d those assorted tank tops work out for you?”
- 5See Kelcee Griffis, “New Legislation Could Signal Critical Mass On Section 230,” Law360 (June 17, 2020) (calling recent legislative proposals a sign of “broader backing for the White House’s recent efforts to target online censorship”); Kerry Picket, “Senator wants easier path for lawsuits against tech firms over perceived political bias,” Washington Examiner (June 17, 2020) (“Sen. Josh Hawley wants people to be sue [sic] big tech companies he says are selectively censoring political speech and hiding content created by their competitors.”).