Lawful but Awful? Control over Legal Speech by Platforms, Governments, and Internet Users
In his quixotic bid to buy and reform Twitter, Elon Musk swiftly arrived at the same place nearly every tech mogul does: he doesn’t want censorship, but he does want to be able to suppress some legal speech. In a matter of weeks, Musk moved from saying that Twitter users should be “able to speak freely within the bounds of the law” to saying that Twitter should take down content that is “wrong and bad.”
This rapid evolution was hardly surprising to anyone who follows online content moderation issues. A platform overrun with the legal content Twitter currently prohibits—including barely legal pornography, scams, extreme violence, and worse—would bring public condemnation and alienate both users and advertisers. Not only would that cut into Twitter’s profits, but it would also greatly reduce the platform’s value as a megaphone for power users like Musk. Presumably some combination of these concerns drove his shift from opposing all censorship to opposing “bad” speech.
American lawmakers don’t have Musk’s options for policing speech online. Many seem not to appreciate just how much content falls into the “lawful but awful” category—speech that is offensive or morally repugnant to many people but protected by the First Amendment. This blind spot can profoundly distort policymakers’ expectations about the consequences of platform regulation.
This Essay begins by describing the swathe of speech in the lawful-but-awful category and arguing that platform regulation proposals that disregard it may be doomed to fail. It then describes three possible approaches to “governing” legal speech. The first is to let platforms set the rules. That’s our current approach under the First Amendment, helped along by Section 230 of the Communications Decency Act (“Section 230”). The second option is to let the government set rules for legal speech. Recent laws from Texas and Florida both attempt to do that under the confusing moniker of “common carriage.” Such an exercise of state power raises obvious First Amendment questions, but it is not unprecedented. Older U.S. laws both restricted and compelled carriage of some lawful speech on communications technologies like broadcast. As I will discuss, though, it is hard to imagine a functional and constitutional version of broadcast regulation for the internet. The third and, to my mind, best option is to give users more power to decide what lawful speech they want to see. That idea is already embedded in little-discussed provisions of Section 230. Recent proposals to introduce competing “middleware” content moderation services can update that approach for the platform age.
The constitutional limits on platform regulation seem increasingly up for grabs. Platforms’ concentrated power over important channels of public discourse makes them, in the eyes of many observers, threats to democracy. It also makes them highly attractive levers for government control over online speech. Any analysis of new platform regulation must look closely at the rights and roles of internet users, online platforms, and the state. It also needs to be smart about the extra-legal factors—what Larry Lessig once called “norms, markets, and architecture”—that shape laws’ real-world impact. Lawmakers will make unwise choices if they disregard social norms and market demand for content moderation. By the same token, they may tie their hands unnecessarily if they assume that the internet’s current architecture is immutable.
I. The Demand for Governance of Lawful Speech
Much of the terrible speech online, ranging from racist polemics to medical misinformation, is legal. It will remain so unless the Supreme Court substantially changes its interpretation of the First Amendment. This inconvenient jurisprudential truth is too often overlooked in discussions of platform liability. The New York Times succinctly described the issue while retracting portions of a 2019 article that initially and inaccurately blamed Section 230 for “hate speech, anti-Semitic content and racist tropes” online. As the Times’ retraction explained, “the law that protects hate speech . . . is the First Amendment, not Section 230.”
Errors like this reflect the wide gap between speech prohibited by social norms and speech prohibited by law. The internet’s erosion of norms as a constraint on human behavior unleashed a flood of speech in the lawful-but-awful category: material that cannot be prohibited by law but that profoundly violates many people’s sense of decency, morality, or justice. Even in countries that prohibit far more speech than the United States, research suggests that lawmakers greatly overestimate how much offensive speech is actually illegal. These mistakes matter because they obscure real limits on lawmakers’ options.
The lawful-but-awful category in the United States includes material that is almost universally condemned, on moral or normative grounds, when it appears on social media. Critics across the political spectrum seemed united, for example, in urging platforms to remove horrific footage from the racist massacres in Christchurch and Buffalo—even though many of those posts were likely protected by the First Amendment. Many Americans would presumably also want platforms to intervene in response to the online equivalents of acts held protected in prominent Supreme Court cases—like Nazis marching through a neighborhood of Holocaust survivors (the online analog might be Nazis celebrating genocide in a forum for survivors’ families) or protestors picketing a soldier’s funeral with signs saying “Thank God for IEDs” and “You’re going to hell” (an online analog might be posting the same messages to a soldier’s memorial page on Facebook).
A particularly chilling similarity can be found between two items shared via Facebook. One came from the perpetrator of the tragic 2022 school shooting in Uvalde, Texas, who wrote “Ima go shoot up a elementary school rn.” The other came from a defendant who successfully asserted his First Amendment rights before the Supreme Court in 2015. His Facebook post said:
I’m checking out and making a name for myself
Enough elementary schools in a ten mile radius to initiate the most heinous school shooting ever imagined
And hell hath no fury like a crazy man in a Kindergarten class
The point here is not to defend or criticize current First Amendment law or to suggest that lines between legal and illegal speech are stable or uncontested. Nor is the point to say that speech of the sort listed here would be legal in every possible context. It is also not to diminish the existence of genuinely illegal content online—I have written extensively about that and about its relation to laws like Section 230 elsewhere.
The point is that huge amounts of online speech will, like the Nazi marches and funeral protests described above, fall into the lawful-but-awful category by almost anyone’s standards. Ignoring this issue distorts policy discussions—including among critics who, like Elon Musk, worry that platforms are suppressing legitimate speech or distorting political discourse.
This blind spot leads policymakers on the left to assume that hate speech and disinformation are online because of statutory immunities and that eliminating those immunities would make the speech go away. It leads policymakers on the right to misgauge the impact of laws compelling platforms to “stop censoring” content and carry all lawful speech. Voters across the political spectrum may share Republican lawmakers’ concern about platforms’ power to silence important speech. But those voters are unlikely to want “anti-censorship” laws if the result is that a grandmother looking at family photos on Facebook will now find extreme pornography and beheading videos or that a teenager watching dance videos on TikTok will see bullying, racist diatribes, and pro-suicide and pro-anorexia content.
II. Who Governs Lawful Speech Online?
Like Elon Musk, many internet users and politicians want to reduce platform “censorship.” But they still want someone to weed out much of the barrage of spam, gore, and other “bad” speech that the internet has to offer. Broadly speaking, this curation could be done by three entities: platforms, governments, or users. Each choice has real upsides—and real downsides.
A. Platforms Decide: Section 230 and Laissez-Faire Rules
Congress enacted Section 230 in 1996. The law was designed to let platforms set their own rules for speech while simultaneously avoiding government censorship. On the anti-censorship side, Section 230 immunized platforms from most civil and state claims based on user speech (while preserving their federal criminal law obligations for things like child abuse images and international terrorism). This immunity shields platforms from what one ruling called the “ceaseless choices of suppressing controversial speech or sustaining prohibitive liability.” On the pro-content-moderation side, Section 230 expressly encouraged private platforms to set their own rules and immunized them from “must-carry” claims by users who believe that platforms should be compelled to distribute their speech.
Section 230’s immunities fulfilled their goal of encouraging private content moderation. Without them, we might instead have the internet that the law’s drafters saw coming under pre-Section 230 case law: one on which platforms either barely allowed users to post at all or else turned a blind eye to even the most egregiously illegal posts in order to avoid being treated as legally responsible editors. Section 230 also substantially succeeded in promoting what it called a “true diversity of political discourse . . . and myriad avenues for intellectual activity” online. The web of today remains vast and strange (if perhaps not quite as strange as it once was). It is easier than ever to put new content online.
At the same time, power over ordinary internet users’ speech is now remarkably concentrated. Billions of global users now depend on platforms like Facebook and YouTube to mediate their communications. Those companies have the capacity to profoundly shape public discourse through private “speech rules” ranging from formal Terms of Service to back-end ranking algorithms.
The mix of so many users on just a few platforms makes it effectively impossible for platforms to choose speech rules that users will agree on. Their decisions on topics ranging from nudity to medical information will inevitably appear irresponsible to some and censorious to others. In a world with a dozen Facebooks or five major search engines, platforms’ choices would not be as consequential. As it is, though, the whims of an Elon Musk or a Mark Zuckerberg become the rules we live under.
B. Governments Decide: “Common Carriage” and State Control
One response to platform power is to take away private companies’ control over speech and impose government-defined rules instead. So-called “common carriage” laws in Texas and Florida do essentially this. Platforms’ objections have, so far, primarily rested on their own First Amendment rights to set editorial policies. (Federal courts had suspended enforcement of both states’ laws, based on those arguments, at the time this Essay went to press.) But new, state-imposed rules for online speech also profoundly affect the rights of internet users as speakers, readers, or operators of their own businesses. Below, I will briefly describe some oddities in the Texas and Florida laws before turning to questions about what more overt state interventions in lawful speech might be supported by Supreme Court precedent.
1. Common carriage: platforms as phone companies.
In a few short years, calls to treat major platforms as common carriers have evolved from rhetoric into actual laws. Proponents like Josh Hawley and Ted Cruz suggest that platforms should be converted into something like phone companies, ceasing all “censorship” of legal speech and showing users all the ugliness that the internet has to offer. Texas’s and Florida’s laws both use the language of common carriage, and they both prohibit something they call “censorship.” When forced to put pen to paper, though, lawmakers did not actually open the floodgates to all lawful-but-awful speech. Instead, they substituted their own preferred speech rules.
Texas’s social media law asserts that large platforms are common carriers. In a section titled “CENSORSHIP PROHIBITED,” it generally bars removing or down-ranking users’ lawful posts based on the viewpoint expressed. But Texas’s legislators, like most people, apparently could not quite stomach all the lawful-but-awful viewpoints expressed online. So, they made some exceptions. Platforms can, for example, remove posts harassing survivors of sexual abuse—even if those posts are legal and regardless of any viewpoint they express. By contrast, platforms may not remove speech that supports domestic terrorism unless they also ban opposing viewpoints. In other words, platforms can’t let users condemn or mourn horrors like the Uvalde or Buffalo shootings unless they also carry posts that condone and celebrate such attacks. Texas lawmakers specifically rejected an amendment that would have allowed platforms to restrict pro-terrorism posts.
Florida’s social media law was also touted as an anti-censorship measure and claims to treat platforms “similarly to common carriers.” But, like the Texas law, it actually subdivides lawful speech into legally favored and disfavored categories. Florida’s special rules are mostly about picking winners: the law requires that platforms’ ranking algorithms give special prominence to any speech “by or about” political candidates, for example, and prohibits content-based moderation of a broadly defined set of “journalistic enterprise[s].”
Whatever we think of the wisdom or morality of the Texas and Florida laws, they are not simple censorship bans or common carriage mandates. Instead, they are new rules favoring or disfavoring currently lawful speech. As the district court noted in enjoining Florida’s law, the new speech rules it imposed were “not content-neutral or viewpoint-neutral” but “about as content-based as it gets.”
The most robust academic defense of platform common carriage falls short of proponents’ anti-censorship goals in a different way: it doesn’t restore speech to places where most platform users will actually see it. Eugene Volokh’s Treating Social Media Platforms Like Common Carriers? spells out what he calls the “far from open-and-shut” case for compelling platforms to host content and make it findable when users search for it. But, as in his previous writings, Volokh firmly concludes that platforms have a First Amendment right to curate their ranked newsfeeds and recommendations. Those are the features most social media users actually see and the places where speakers have a chance to find new audiences or “go viral.” In practice, Volokh’s “must-host” rule is like saying bookstores can decide what to include on their shelves but must keep other books on hand for customers who ask for them. It gives “deplatformed” speakers more or less what the web already offers: a place to post content where people can find it, if they are actively looking. The more sweeping mandates of the Texas and Florida laws would remain unconstitutional.
All of these models run into trouble because of users’, platforms’, and lawmakers’ desire for moderation of lawful speech. Volokh’s solution is to leave private curation in place for most aspects of platform operation but carve out isolated corners for unfettered speech. Texas’s and Florida’s solution is to proclaim common carriage but actually substitute the state’s judgment about some lawful speech for the platforms’ own.
Lawmakers have so far found it impossible to get away from the extra-legal pressures, including norms and market forces, that prompted Elon Musk to back down from his anti-censorship stance. The next subsection of this Essay will explore what would happen if they stopped trying—if they passed laws explicitly governing lawful speech online.
2. New, top-down speech rules: platforms as broadcast companies.
If U.S. lawmakers want to straightforwardly legislate new rules for currently lawful speech, they can find a model in some drafts of the U.K.’s proposed Online Safety Bill. In its simplest versions, the bill would require platforms to restrict “harmful” but lawful user speech while protecting speech that is “journalistic” or has “democratic importance.”
This use of government power is not as alien to U.S. legal tradition as it might initially seem. We have many examples of speech regulations tailored to the unique attributes of older communications technologies. The Supreme Court upheld laws requiring broadcasters and cable companies to carry content against their will, for example. It also upheld tighter speech restrictions for particular communications technologies, letting the FCC limit “dirty words” on the radio in part because of radio’s uniquely intrusive nature. The Supreme Court expressly declined to extend those older communications law principles to internet speech in Reno v. ACLU, which has stood as the last word on this question since 1997. But opportunities to revisit it are speeding their way toward an unpredictable Court.
This is a good moment to think about how new, U.K.-style platform regulation would work in reality. That matters for making sound policy, of course. But it also matters for future judicial assessments of what state interests such laws actually advance, what burdens they impose on both users’ and platforms’ First Amendment rights, and what alternatives lawmakers could have chosen.
One obvious question is what these new speech rules would actually say. What specific, lawful speech by users would be barred under a new rule against “harmful” content, for example? Nothing in U.S. regulatory history or case law provides answers. The major Supreme Court cases are about rules for twentieth-century media, where a few privileged voices spoke to comparatively powerless audiences. Those standards are hardly suited to govern today’s internet or to determine what ordinary social media users will be allowed to say, or forced to encounter as a cost of online participation.
Calling new rules “fairness” or “consistency” requirements wouldn’t solve the problem. Any number of culture war flashpoint issues can be framed this way. Is it “fair” for a platform to remove a post announcing a “white power” rally but to leave up a post for a “Black power” rally, for example? Questions like these could fuel a thousand op-eds or undergraduate essays. Government arbiters’ answers would directly regulate lawful speech. This concern, historically raised by Republicans about the Fairness Doctrine for broadcast, can easily become a pressing one for Democrats today.
There are equally difficult questions about due process and the administrative state. The only reason platforms are able to adjudicate billions of “cases” each year is that they offer nothing resembling legal due process. If those same judgments were subject to new, state-defined rules, users would need recourse to an independent adjudicator. Neither courts nor government agencies are equipped to oversee such a deluge of disputes—much less do so while providing constitutionally adequate process. The U.K.’s answer to all this is to empower a national media regulator. The regulator won’t resolve individual disputes, though, which effectively leaves platforms as the final arbiters of any new state-imposed speech rules.
Texas’s and Florida’s approach, unlike the U.K.’s or the regimes in the older U.S. broadcast cases, does not depend on a national regulator. Instead, it assumes a state-by-state patchwork of rules, with disputes resolved in local courts. These state laws might conflict. Florida’s law might require platforms to carry a political candidate’s post explaining how to obtain legal abortions, for example. The same speech might be illegal in Texas. Platforms can in principle try to restrict speech based on a speaker’s or reader’s geographic location. Such geotargeting isn’t perfect, though, and it won’t help if state laws claim extraterritorial effect—as Texas’s does, granting rights to anyone who “does business” in the state. The disruptions, logistical snarls, and uncertainty of this fifty-state solution would be a meaningful problem, and not just for platforms. It would also affect anyone who relies on platforms to communicate or do business across state lines. For U.S. lawyers who remember their first-year constitutional law courses, this is the stuff of dormant Commerce Clause nightmares.
It’s hard to imagine new, government-imposed rules for lawful-but-awful speech that would be administrable in practice. An alternative is to get both major platforms and the government out of the business of imposing top-down speech rules and to empower individual platform users instead.
C. Users Decide: Section 230 and “Middleware” Rules
Letting users decide what speech they will see online is not a new idea, but it is an important one. It has returned to prominence recently, under monikers like “middleware,” in proposals to introduce competing content moderation services on social media. But it was also an animating principle of 1990s internet law discussions. As one of the original drafters of Section 230 put it, “Americans don’t want to be at the mercy of speech police, online or anywhere else. Section 230 gives sites, speakers and readers choices, and ensures that those choices will survive the hostility of those with power.”
Political conservatives are not alone in fearing that social media companies themselves have become the speech police and that concentration of users on so few platforms makes their policing consequential. But laws like those in Texas and Florida don’t restore user choice because they don’t reduce the concentration of online speakers and readers under a single set of speech rules. They just tell platforms to impose the state’s preferred speech rules.
Both in theory and effect, Section 230 takes the opposite approach—avoiding centralized control over speech by giving users lots of options. Its immunities for websites were intended to give users a diverse array of sites to choose from, with differing speech rules. But it also encouraged development of tools for users themselves to control what content appeared on their screens. Section 230’s least-discussed immunity protects those who provide users with these “technical means to restrict access” to unwanted content. User-operated controls such as browser settings to accept or block pornography were widely discussed at the time. They appear in Section 230’s legislative history, in 1990s academic literature, and in then-emerging technical standards. The Supreme Court in Reno also described “user-based software” as an alternative to top-down government control over online speech.
Such tools have generally not lived up to the hopes of the 1990s. Browser extensions today let users bluntly block entire websites; other tools depend on crude and faulty text filters. But the more sophisticated tools for content moderation are not in users’ hands. They are in the hands of major platforms, which have invested heavily in machine learning and other mechanisms to enforce their own rules.
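To see why these user-side tools fall short, consider a minimal sketch of the kind of keyword filter they typically rely on. The blocklist and sample posts below are invented for illustration; the point is only that simple substring matching both over-blocks innocent content and misses obvious evasions.

```python
# A minimal, hypothetical sketch of the crude keyword filtering that user-side
# tools often rely on. The blocklist and example posts are invented for illustration.
BLOCKLIST = {"gore", "nazi"}

def crude_filter(post: str) -> bool:
    """Return True if the post should be hidden from the user."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

print(crude_filter("Al Gore's climate speech, annotated"))    # True: over-blocks an innocent post
print(crude_filter("n4zi propaganda dressed up as history"))  # False: misses an obvious evasion
```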
It is these more nuanced content moderation tools—the ones that block or rank individual posts, images, or comments within a platform like Facebook—that are under renewed discussion today. New market entrants, the theory goes, could compete to provide users with a “middleware” layer of content moderation, built on top of existing platforms. Users could opt in to the speech rules of their choice but still be able to communicate with other people on the platform.
A Facebook user who objects to violent content, for example, might choose to use Facebook’s existing content moderation for most things but overlay blocking services provided by a pacifist organization. Another Facebook user who loves hockey but hates racist language might choose ranking criteria from ESPN, along with a content filter from the Anti-Defamation League. These two users could still interact and see each other’s posts, as long as neither one violates the other’s chosen rules.
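The mechanics of this layering are easier to see in a sketch. The code below is purely illustrative and assumes an interface that does not exist today: the provider names, the post structure, and the idea that a platform would expose posts to third-party filters and rankers in this way are all hypothetical.

```python
# Illustrative sketch of "middleware" content moderation layered on top of a
# platform's own rules. All names and interfaces here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    tags: List[str]

# A middleware filter decides whether a post is shown; a ranker scores it.
Filter = Callable[[Post], bool]
Ranker = Callable[[Post], float]

def platform_baseline(post: Post) -> bool:
    # The platform still removes what its own rules (and the law) require.
    return "illegal" not in post.tags

def pacifist_blocklist(post: Post) -> bool:
    # A third-party filter a user might opt into: hide violent content.
    return "violence" not in post.tags

def sports_ranker(post: Post) -> float:
    # A third-party ranker: boost hockey content.
    return 2.0 if "hockey" in post.tags else 1.0

def build_feed(posts: List[Post], filters: List[Filter], ranker: Ranker) -> List[Post]:
    visible = [p for p in posts if all(f(p) for f in filters)]
    return sorted(visible, key=ranker, reverse=True)

posts = [
    Post("a", "Great game last night!", ["hockey"]),
    Post("b", "Graphic footage from the front", ["violence"]),
    Post("c", "Family photos from the lake", []),
]

# User 1 layers the pacifist filter over the platform's baseline rules.
feed_user1 = build_feed(posts, [platform_baseline, pacifist_blocklist], sports_ranker)

# User 2 keeps only the platform's baseline and a neutral ranking.
feed_user2 = build_feed(posts, [platform_baseline], lambda p: 1.0)

print([p.text for p in feed_user1])  # hockey post first, violent post hidden
print([p.text for p in feed_user2])  # all three posts remain visible
```

The design point of the sketch is that the platform keeps a baseline layer, including any legal takedown duties, while each user composes additional filters and ranking rules without losing the ability to see and interact with other users’ posts.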
This new layer of competing content moderation services has been described by writers including me, Mike Masnick, Cory Doctorow, Jonathan Zittrain, and Francis Fukuyama—whose Stanford group dubbed it “middleware.” The model has a lot in common with telecommunications “unbundling” rules, which require incumbents to license hard-to-duplicate resources to newcomers for competing uses. In the platform context, the shared resources would include platforms’ treasure trove of user-generated content and data. In a “hub and spoke” version of this system, incumbents would retain a central role and potentially be charged with any legal takedown obligations, removing truly illegal content from the entire system while leaving all other content available for the competing moderation services. Fully decentralized or “federated” versions of the system, like the social network Mastodon, would abandon centralized control in favor of interoperating network nodes.
Middleware solves some major problems. For those concerned about censorship, it reduces platform power. It even lets users choose the firehose of lawful-but-awful garbage, if that’s what they want. For those concerned about harmful content online, it brings in experts who can do a better job than platforms do now—by parsing online slang used by incel groups, for example, or identifying anti-LGBTQ+ content in regional variants of Arabic. For platforms, it preserves editorial rights to set their own speech rules—they just can’t make them users’ only option. (That wouldn’t stop platforms from saying that a middleware legal mandate violated their First Amendment rights. But it would weaken their argument.)
Middleware is not without problems of its own, of course. It has significant unresolved practical hurdles and raises serious questions about user privacy.1 It also doesn’t address many liberals’ concerns about users choosing echo chambers full of hate speech or disinformation. Conservatives are unlikely to be happy with the major regulatory oversight that would be needed to keep incumbents from disadvantaging middleware competitors—or those competitors from misleading or harming users. But middleware does offer a response to platform power that avoids new, top-down government speech regulation.
If middleware is a way forward, but Section 230’s immunities have not yet enticed it into being, how do we get there? Some thinkers frame this as an issue of competition and propose laws requiring platforms to interoperate with middleware providers. Others suggest at least opening the door to “adversarial interoperability” by preventing platforms from suing would-be rivals under laws like the Computer Fraud and Abuse Act. Still others argue that market forces can bring middleware into being without legal changes. Twitter itself has taken voluntary steps in this direction, letting users choose third-party apps like Block Party to block trolls and harassers. More dramatically, Twitter launched Project Bluesky as a step toward a more fully decentralized system, with nodes outside Twitter’s control.
That brings us back to Elon Musk. Does a middleware-enabled version of Twitter fit his agenda? He did propose “making the algorithms open source,” which in principle might allow third parties to build their own variants. He also hails from a cryptocurrency-adjacent corner of the tech world, where enthusiasm for the “distributed web” and decentralized control runs high. And he is trying to buy the social media platform that has already bought into middleware approaches more than any other. (Though apparently he wouldn’t be buying Project Bluesky, which says it has been spun out of Twitter as an independent entity.) On the other hand, why buy Twitter if not to control it? The Musk era, if it happens, could be just another chapter in Twitter’s long history of supporting third-party developers for a while, only to change its mind and cut them off again.
Like everything else about Musk, his intentions about middleware are far from certain. But empowering users to choose their own content moderation rules would offer him a way out of a bind: he could have his “uncensored” platform and continue selling customers a curated one, too.
Conclusion
Today’s major internet platforms wield unprecedented and concentrated power over speech. Critics’ desire to reduce that power is understandable. But “common carriage” rules that flood social media with barely legal garbage would not be an improvement. And laws that preserve centralized power over speech, only to move it into the hands of the government, range from impractical to dystopian. The best policy responses are the ones contemplated in middleware models and in Section 230 itself—disrupting concentrated power over speech and putting more control in the hands of users.
1. See generally Arturo Talavera, The Future of an Illusion, 3 J. Democracy 111 (1992).