
Introduction

Beware dark patterns. The name should be a warning, perhaps alluding to the dark web, the “Dark Lord” Sauron, or another archetypically villainous and dangerous entity. Rightfully included in this nefarious bunch, dark patterns are software interfaces that manipulate users into doing things they would not normally do. Everyone has been there: rushing through a website and leaving one too many checkboxes marked. Soon, your email inbox becomes flooded. Even worse, your credit card is charged. This is the reality of the Internet—one in which everyone has become accustomed to living—but it is not one that all policymakers and regulators have idly accepted.

In recent years, leaders across the United States have pushed back against the proliferation of dark patterns, introducing laws and enforcement measures aimed at curbing deceptive digital practices. Some senators introduced federal legislation—still unpassed—to make “user interface[s]” that “impair[ ] user autonomy” illegal for companies with over one hundred million users. Separately, the Federal Trade Commission (FTC) has begun enforcing against companies that use dark patterns. Some states, including California, have passed legislation that restricts the ways that dark patterns can be used.

One major question remains: How does dark pattern regulation intersect with the First Amendment? These restrictions do not just stop developers from styling their websites and applications in certain ways—they have second-order effects on the content itself. A hypothetical limitation on endless scrolling, for instance, effectively prevents the developer from displaying additional information to the user. These constitutional issues have been alluded to in multiple recent cases and, while not yet settled by courts, are likely to become increasingly prominent.

Because of these First Amendment complications, the constitutionality of dark pattern restrictions is an unsettled question. To begin constructing an answer, we must look at how dark patterns are regulated today, how companies have begun to challenge the constitutionality of such regulations, and where dark patterns fall in the grand scheme of free speech. Taken together, these steps inform an approach to regulation going forward.

I.  Dark Patterns on the Web and in the Law

Harry Brignull, a user-experience designer, first coined the term “dark pattern” in 2010 in an effort to compile examples of deliberately confusing or deceptive user interfaces. Found at www.deceptive.design, his site describes the many different types of dark patterns: hidden advertisements, difficult cancellation processes, undisclosed subscription fees, and more.

In the decade and a half since, the term has moved from technical jargon to legal text. In 2020, California voters approved a ballot initiative amending the California Consumer Privacy Act (CCPA) to, among other things, ban companies from obtaining consent by using dark patterns. The CCPA defines a dark pattern as a “user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decisionmaking, or choice.” Subsequent regulations provided some examples and clarified that the business’s intent is “not determinative . . . but a factor to be considered” in determining whether something is a dark pattern. Other states followed suit: Colorado, Connecticut, Delaware, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, and Rhode Island all included nearly identical language in their own privacy statutes.

California went a step further and passed the California Age-Appropriate Design Code Act (CAADCA), which prohibits businesses from using:

dark patterns to lead or encourage children to provide personal information beyond what is reasonably expected to provide that online service, product, or feature, to forego privacy protections, or to take any action that the business knows, or has reason to know, is materially detrimental to the child’s physical health, mental health, or well-being.

Furthermore, the FTC has entered the fray. As part of a lawsuit against Epic Games, the maker of Fortnite, the FTC alleged that Epic used dark patterns to charge consumers without obtaining express consent—a violation of the FTC Act’s prohibition on “unfair or deceptive acts.” The parties settled for an FTC-record-breaking $245 million refund to consumers. Since then, the FTC has gone further, suing Amazon for using dark patterns to secure Amazon Prime subscriptions. (For instance, Amazon allegedly used a small font to display the subscription’s auto-renewal price, hiding it from consumers.) Amazon’s defense, to which we will return shortly, was illustrative. That case very recently ended in a landmark $2.5 billion settlement.

This is the state of dark patterns today: increasingly ubiquitous and drawing increasing scrutiny. States have enacted laws restricting their use in various contexts, with some suggesting that enforcement is fast approaching. The FTC, without any statutory text expressly prohibiting dark patterns, has likewise begun treating them as a deceptive trade practice.

 

II.  The Companies Strike Back: Harbingers of the First Amendment Challenge

Recently, companies and trade organizations have started to challenge the constitutionality of dark pattern prohibitions. In the FTC’s lawsuit against Amazon, Amazon challenged the FTC’s strategy of targeting dark patterns as unconstitutionally vague and a violation of due process. More interestingly, it supplemented its core argument with a single footnote stating that the FTC’s strategy “raises serious questions under the First Amendment,” claiming that the government is restricting speech beyond just that which is false or deceptive. The court rejected Amazon’s motion to dismiss and considered its free speech claim waived because it was only argued in passing. But such a defense is nonetheless a sign of things to come.

This First Amendment analysis was more fully developed in NetChoice, LLC v. Bonta (2023). There, a trade organization challenged the CAADCA as a restriction on free speech, including a specific challenge to its prohibition of dark patterns. In ruling on a preliminary injunction, the district court determined that the relevant provision failed commercial speech scrutiny: Some of the harms the provision was trying to prevent were not factually supported as “harms,” and the law was not sufficiently tailored to prevent the rest. Accordingly, the court enjoined the law.

The State of California appealed the case. While the Ninth Circuit agreed with the district court that parts of the CAADCA are likely to violate the First Amendment, it disagreed with the lower court’s approach to dark patterns. This disagreement was primarily a result of an undeveloped record: There was too little to establish “the full range of how the CAADCA’s ban on ‘dark patterns’ might apply to covered businesses,” so there was an insufficient basis to strike the provision. As a result, the Ninth Circuit vacated the injunction as to the dark pattern provision. On remand, the district court struck the dark pattern provision as unconstitutionally vague. It held that NetChoice had not identified applications of the law with enough particularity to decide whether the provision also violated the First Amendment, though it did not rule out such unconstitutionality either.

Returning to the Ninth Circuit’s opinion, we find the first judicial discussion of how dark pattern prohibitions may run up against the First Amendment. While the court’s analysis is just a paragraph, it contemplates two possibilities: First, a dark pattern could itself be protected speech and trigger First Amendment scrutiny if restricted. Second, dark patterns could impact other categories of protected speech, like content-editorial decisions, and it is unclear whether restrictions on them should be treated as content-based or content-neutral. But the court settles neither question. To take a crack at it ourselves, we must first take a step back and examine free speech doctrine more broadly.

III.  A Sketch of Free Speech Doctrine

The threshold matter is whether dark patterns are protected speech under the First Amendment, and, if so, what level of scrutiny would apply to restrictions on them. This analysis has a few components. First, the Court has said that the First Amendment “does not prevent restrictions directed at commerce or conduct from imposing incidental burdens on speech.” This statement means that a court must first determine whether the subject of regulation is expressive activity or merely non-expressive conduct.

If the law regulates expressive conduct, the next step is to determine whether the speech is exclusively commercial. Central Hudson Gas & Electric Corp. v. Public Service Commission (1980), a seminal case in commercial speech doctrine, emphasized that “speech proposing a commercial transaction” is distinct from other kinds and subject to different standards. Notably, there is no protection for speech “more likely to deceive the public than to inform it.” Otherwise, non-deceptive speech is protected under the First Amendment, and a three-factor test applies: (1) whether the “governmental interest is substantial,” (2) whether the regulation “directly advances the governmental interest,” and (3) “whether it is not more extensive than necessary to serve that interest.” However, in more recent years, the Court has suggested more rigorous scrutiny for content-based restrictions, even in the commercial speech context.

By contrast, if the law regulates non-commercial speech, the analysis bifurcates based on whether the restrictions are content-based or content-neutral. The Court describes a regulation as content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed.” If content-based, the court evaluates the restriction under the strict scrutiny framework: The restriction is only constitutional if it is narrowly tailored to further a compelling governmental interest. Otherwise, the restriction is content-neutral and need only withstand intermediate scrutiny. This requires that the restriction (1) further an important or substantial governmental interest (2) that is unrelated to the suppression of free expression (3) with incidental restriction on First Amendment freedoms that is “no greater than essential to the furtherance of that interest.”

As technology has progressed, the Supreme Court has grappled with how to apply free speech doctrine to software, confronting the question in various contexts. It struck down a law restricting the sale of violent video games to minors, acknowledging the expressive effects of video games that may come “through features distinctive to the medium.” This decision, in effect, suggests that the design of software may be expressive conduct under the First Amendment. In 2023, the Court held that a person has speech rights in the design of their website, reasoning that a website is no different from the physical analog of pen and paper. Most recently, in Moody v. NetChoice, LLC (2024), the Court extended free speech protection to social media content moderation. There, state laws restricted how social media sites could treat certain categories of content. Again, the Court looked to the physical analog—newspapers—and asserted that “expressive activity includes presenting a curated compilation of speech originally created by others.” While these cases only concerned content-based restrictions, they represent the Court’s willingness to extend free speech protections to digital domains—embedded in software—as if those domains were analogous to physical speech.

 

IV.  Where Do Dark Patterns Fall?

The difficult part is discerning how dark pattern restrictions fit into this sketch of free speech. This question is complicated further by the vague and capacious way that dark patterns are often defined. To illustrate, we can briefly examine three common examples: “fake scarcity,” “nagging,” and “confirmshaming.”

The “fake scarcity” dark pattern refers to instances in which websites pressure users into doing something with a false claim of limited supply (e.g., “Only 2 remaining!”). Such false claims can rush a consumer’s choice, impairing their decisionmaking and thus falling under California’s statutory definition of dark patterns. However, prohibiting fake scarcity likely poses no First Amendment concern. Fake scarcity arises only when proposing a transaction, making it commercial speech. Because false or misleading commercial speech is not protected under the First Amendment, a restriction on fake scarcity passes constitutional muster.
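For concreteness, consider a minimal sketch of how fake scarcity might be implemented (hypothetical TypeScript for a browser page; the names and numbers are illustrative assumptions). The displayed count is fabricated client-side rather than drawn from any real inventory, which is precisely what makes the claim false commercial speech.

```typescript
// Hypothetical sketch of a "fake scarcity" banner. The displayed count is
// fabricated client-side rather than read from any real inventory system,
// which is the defining feature of the pattern.
function fakeScarcityLabel(): string {
  // A random low number manufactures urgency; it bears no relationship to
  // actual stock.
  const fabricatedCount = Math.floor(Math.random() * 3) + 1;
  return `Only ${fabricatedCount} remaining!`;
}

const banner = document.createElement("div");
banner.textContent = fakeScarcityLabel();
document.body.appendChild(banner);
```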

More complicated is “nagging.” Nagging occurs when sites frequently interrupt users to request that they do something they normally would not do (e.g., numerous “Enable Notifications Now” popups may quickly persuade one to click “OK”). Prohibiting nagging is a content-neutral restriction on speech: It has nothing to do with the content of the “nag.” Nagging generally asks the user to do something economically beneficial to the company, falling under commercial speech, but may also be used for non-commercial speech. As discussed above, the tests for intermediate scrutiny and commercial speech are similar, and both require that the government name an important interest and prove that the regulation is sufficiently narrow. Even if the government can name such an interest, it would be difficult to demonstrate that a hypothetical prohibition on nagging is not overinclusive. One can imagine a company defending the practice as simply reminding consumers of additional options to enhance their experience. How do you draw the line between sending occasional notifications, which is almost certainly protected commercial speech that the government has an insufficient interest in restricting, and the kind of “nagging” that is worthy of prohibition? This is, perhaps, an impossible exercise in tailoring the restriction. But a significantly overinclusive prohibition would fail under both intermediate scrutiny and the commercial speech test. (Such nagging is distinct from robocalls, which the Court has allowed to be restricted, because nagging only happens within software that a consumer continues to use.)
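To make the mechanism concrete, here is a minimal sketch of nagging (hypothetical TypeScript; the prompt copy and interval are illustrative assumptions). The site simply re-raises the same request on a timer, no matter how many times the user declines.

```typescript
// Hypothetical sketch of "nagging": the same request is re-raised on a timer,
// regardless of how many times the user has already dismissed it.
let dismissals = 0;

function showNotificationPrompt(): void {
  // window.confirm stands in for a styled popup; a real site would render its
  // own modal with "Enable Notifications Now" and "OK" buttons.
  const accepted = window.confirm("Enable Notifications Now?");
  if (!accepted) {
    dismissals += 1; // the refusal is recorded but never respected
  }
}

// Re-prompt every few minutes for as long as the user keeps the page open.
setInterval(showNotificationPrompt, 3 * 60 * 1000);
```

The tailoring problem described above is visible in the code: the same logic with a longer interval, or a single prompt, is ordinary solicitation, and nothing in the mechanics marks where permissible persuasion ends and prohibited “nagging” begins.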

Then there is “confirmshaming”: the practice of triggering uncomfortable emotions to influence decision making. A canonical example was a medical supply seller asking users to enable notifications. There were two response options: “Allow” and “No, I prefer to bleed to death.” Any regulation of confirmshaming would necessarily be a content-based restriction because it would be invoked based on the content of the text. As discussed above, this requires a greater level of scrutiny. It is then even harder to see how the government might prove that a hypothetical restriction is sufficiently narrow to serve its interest in consumers not being manipulated. Furthermore, the government would need to carefully distinguish confirmshaming from normal advertising, which also often guilts users into action. That is a tall order.
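A minimal sketch of confirmshaming, modeled on the medical-supply example (hypothetical TypeScript; the dialog structure is an illustrative assumption), shows why any prohibition would be content-based: strip out the wording of the decline button and nothing objectionable remains.

```typescript
// Hypothetical sketch of "confirmshaming", modeled on the medical-supply
// example: the decline option is worded to trigger guilt, so the manipulation
// lives entirely in the content of the text.
function buildConfirmshamingDialog(): HTMLElement {
  const dialog = document.createElement("div");
  dialog.textContent = "Enable notifications?";

  const allow = document.createElement("button");
  allow.textContent = "Allow";

  const decline = document.createElement("button");
  decline.textContent = "No, I prefer to bleed to death";

  dialog.append(allow, decline);
  return dialog;
}

document.body.appendChild(buildConfirmshamingDialog());
```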

These three dark patterns have three different schemes of constitutional protection (or lack thereof). This divergence is the fundamental problem with treating “dark patterns” as a single subject of regulation. The problem is complicated further by other, nontraditional dark patterns. Endless scrolling, for instance, is clearly implicated by Moody’s holding that social media platforms have a First Amendment right in their curated feeds. Prohibiting endless scrolling would require social media companies to curate a fixed set of content on each page, restricting the editorial First Amendment rights recognized in Moody. In short, the answer to the fundamental question of whether dark patterns are protected speech is both the simplest and most complicated one imaginable: It depends.
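For illustration, here is a minimal sketch of how endless scrolling typically works (hypothetical TypeScript; fetchNextBatch is an assumed stand-in for a platform’s recommendation endpoint). Each time the user nears the bottom of the page, another batch of curated content is appended, so the feed never ends; under Moody, the choice of what to append is itself protected editorial activity.

```typescript
// Hypothetical sketch of endless scrolling: whenever the user nears the bottom
// of the page, another batch of curated items is appended, so the feed never
// "ends." fetchNextBatch is an assumed stand-in for a platform's
// recommendation endpoint.
async function fetchNextBatch(): Promise<string[]> {
  return ["post A", "post B", "post C"]; // placeholder content
}

window.addEventListener("scroll", async () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.scrollHeight - 200;
  if (!nearBottom) {
    return;
  }
  // Which posts get appended is an editorial choice by the platform.
  for (const post of await fetchNextBatch()) {
    const item = document.createElement("p");
    item.textContent = post;
    document.body.appendChild(item);
  }
});
```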

V.  Dark Pattern Restrictions Going Forward

All of this should inform the way that regulators consider dark patterns going forward. For one, dark patterns should be specifically defined in terms of their individual types, rather than through vague, capacious definitions. Compare a statute that simply bans the use of dark patterns—however they are defined—to a collection of provisions banning different dark patterns. In the latter, a constitutionally infirm provision could easily be severed from the rest. Guaranteeing severability is especially important because certain dark patterns, such as fake scarcity, are uncontroversially unprotected and regulable.

Second, there are other ways of disincentivizing the use of dark patterns beyond direct sanctions. As described by Jamie Luguri and Professor Lior Strahilevitz, states can define contractual consent to exclude the use of dark patterns. Therefore, while a state might not be able to fine a company for nagging, it can void nagging-induced agreements on the grounds of nonconsent. Interestingly, this is exactly the approach that state privacy statutes have taken. As described above, the CCPA and other similar statutes prevent companies from obtaining consumer consent through dark patterns, but they do not sanction dark patterns themselves. They do not directly restrict speech. By contrast, the CAADCA—the law at issue in NetChoice v. Bonta—prohibits dark patterns outright and therefore raises greater First Amendment concerns.

Finally, courts must, at some point, move beyond the simple physical analog analysis they have traditionally employed. Software is characteristically different in the way that users interact with the relevant speech. What is the physical-world version of nagging? Unsolicited calls or mail? Maybe, but nagging only happens while the user is using the app, so they can “stop” the nagging by exiting. Is nagging more like a salesperson repeatedly approaching a customer in a store? Maybe, but people only go to stores when they are planning on purchasing something. By contrast, a dark pattern’s intent is generally orthogonal to a user’s interest in the software they use. When “listening” to physical-world speech, a person’s interest in the interaction is usually related to the speech itself: if someone is not interested in the message of a movie, there is little reason to keep watching, and a person enters a store because they want to buy something. In software, there is greater misalignment: The consumer’s interest in CNN is the news, not the pop-up advertisements. To re-engage with their actual interest—the news—the consumer needs to physically “engage” with the speech (even just clicking “X”). This difference is fundamental to the interactivity of software, and courts would be wise to move beyond their inclination to physicalize all digital conceptions of speech. The digital world is simply different. So, too, should its analysis be.

 

*  *  *

Elijah Greisz is a J.D. Candidate at The University of Chicago Law School, Class of 2026.