Beware dark patterns. The name should be a warning, perhaps alluding to the dark web, the “Dark Lord” Sauron, or another archetypically villainous and dangerous entity. Rightly included in this nefarious bunch, dark patterns are software interfaces that manipulate users into doing things they would not normally do. Yet regulating dark patterns means regulating how companies design and present their interfaces, and interface design is arguably a form of expression, so restrictions on dark patterns risk restricting speech. Because of these First Amendment complications, the constitutionality of dark pattern restrictions is an unsettled question. To begin constructing an answer, we must look at how dark patterns are regulated today, how companies have begun to challenge the constitutionality of such regulations, and where dark patterns fall in the grand scheme of free speech. Taken together, these steps inform an approach to regulation going forward.
Privacy
Parents are turning to autonomous vehicles (AVs) to shuttle their children around, seeing them as a safe and convenient option. AVs promise increased mobility for children but bring with them unparalleled surveillance risks. As parents embrace in-cabin monitoring and location tracking to enhance safety, they also—often unknowingly—authorize the mass collection, retention, and potential disclosure of their children’s most intimate data.
This Essay presents the first case study of children’s privacy in AVs, serving as a lens to critique the prevailing reliance on parental notice and choice as the cornerstone of children’s data protection. Drawing on privacy theory, surveillance studies, and child development literature, the Essay argues that the notice-and-choice framework fails to account for children’s distinct privacy interests, particularly when the data collected may be retained indefinitely, repurposed by law enforcement, or sold to data brokers. The Essay calls for real limits on data collection, meaningful restrictions on sharing, and mandatory deletion rules. These principles extend beyond AVs to the technological ecosystem now shaping childhood in the digital age.
Antidemocratic forces rely on intimidation tactics to silence criticism and opposition. Today’s intimidation playbook follows a two-step pattern. We surface these tactics so their costs to public discourse and civic engagement can be fully understood. We show how the misappropriation of the concept of online abuse has parallels in other efforts at conceptual diversion that dampen democratic guarantees. Democracy’s survival requires creative solutions. Politicians and government workers must be able to operate free from intimidation. Journalists and researchers must be able to freely investigate governmental overreach and foreign malign influence campaigns that threaten the democratic process. Surfacing the two-step strategy is a critical start to combating it.
Illinois’s Biometric Information Privacy Act (BIPA) is the country’s most powerful law governing biometric data—data generated from an individual’s biological characteristics, like fingerprints and voiceprints. Over the past decade, BIPA has garnered a reputation as an exceptionally plaintiff-friendly statute. But in 2023 and 2024, the Illinois legislature, Illinois Supreme Court, and Ninth Circuit Court of Appeals all sided with BIPA defendants for the first time. Most significantly, in Zellmer v. Meta Platforms, Inc., the Ninth Circuit dismissed the plaintiff’s BIPA claim because the face scan collected by the defendant could not be used to identify him.
It is unclear whether these developments represent a trend or an exception to BIPA’s plaintiff-friendliness. Which path is charted will largely turn on how courts interpret Zellmer. Although Zellmer established that a biometric identifier must be able to identify an individual, lower courts have construed its holding narrowly, requiring that the entity collecting the biometric data itself be capable of identifying the individual, rather than treating it as sufficient that any entity could do so. Reading BIPA this narrowly would significantly weaken the statute’s protections.
After detailing how employer and consumer cases catalyzed this recent defendant-friendly shift, this Comment proposes a two-step framework for determining whether a biometric identifier is able to identify an individual and thus falls within BIPA’s reach. Given BIPA’s broad influence, where courts ultimately land on this question will be crucial to the protection of biometric data nationwide.
Recently, many states have reacted to the growing data economy by passing data privacy statutes. These follow the “interaction model”: they allow consumers to exercise privacy rights against firms by directly interacting with them. But data brokers, firms that buy and sell data about consumers with whom they do not directly interact, are key players in the data economy. How is a consumer meant to exercise their rights against a broker when an “interaction gap” separates them?
A handful of states have tried to narrow the interaction gap by enacting data-broker-specific legislation under the “transparency model.” These laws, among other things, require brokers to publicly disclose themselves in state registries. The theory is that consumers would exercise their rights against brokers if they knew of the brokers’ existence. California recently went further with the Delete Act, granting consumers data-broker-specific privacy rights.
Assembling brokers’ reported privacy-request metrics, this Comment performs an empirical analysis of the transparency model’s efficacy. The findings demonstrate that the transparency model neither effectively helps consumers follow through on their expected privacy preferences nor meaningfully impacts brokers. Regulators should therefore follow in the footsteps of the Delete Act and move beyond the transparency model.
Lee Fennell’s Slices and Lumps: Division and Aggregation in Law and Life reveals the benefits of isolating configurations in legal analysis. A key characteristic of configurations, or “lumps,” whether found or created, is that they are indivisible.