Beware dark patterns. The name should be a warning, perhaps alluding to the dark web, the “Dark Lord” Sauron, or another archetypically villainous and dangerous entity. Rightfully included in this nefarious bunch, dark patterns are software interfaces that manipulate users into doing things they would not normally do. Yet because dark patterns operate through an interface’s design and language, laws restricting them arguably restrict speech, and the constitutionality of dark pattern restrictions therefore remains an unsettled First Amendment question. To begin constructing an answer, we must look at how dark patterns are regulated today, how companies have begun to challenge the constitutionality of such regulations, and where dark patterns fall in the grand scheme of free speech. Taken together, these steps inform an approach to regulation going forward.
Data Privacy
Parents are turning to autonomous vehicles (AVs) to shuttle their children around, seeing them as a safe and convenient option. AVs promise increased mobility for children but bring with them unparalleled surveillance risks. As parents embrace in-cabin monitoring and location tracking to enhance safety, they also—often unknowingly—authorize the mass collection, retention, and potential disclosure of their children’s most intimate data.
This Essay presents the first case study of children’s privacy in AVs, serving as a lens to critique the prevailing reliance on parental notice and choice as the cornerstone of children’s data protection. Drawing on privacy theory, surveillance studies, and child development literature, the Essay argues that the notice-and-choice framework fails to account for children’s distinct privacy interests, particularly when the data collected may be retained indefinitely, repurposed by law enforcement, or sold to data brokers. The Essay calls for real limits on data collection, meaningful restrictions on sharing, and mandatory deletion rules. These principles extend beyond AVs to the technological ecosystem now shaping childhood in the digital age.
Antidemocratic forces rely on intimidation tactics to silence criticism and opposition. Today’s intimidation playbook follows a two-step pattern. We surface these tactics so their costs to public discourse and civic engagement can be fully understood. We show how the misappropriation of the concept of online abuse has parallels in other efforts at conceptual diversion that dampen democratic guarantees. Democracy’s survival requires creative solutions. Politicians and government workers must be able to operate free from intimidation. Journalists and researchers must be able to freely investigate governmental overreach and foreign malign influence campaigns that threaten the democratic process. Surfacing the two-step strategy is a critical start to combating it.
Illinois’s Biometric Information Privacy Act (BIPA) is the country’s most powerful law governing biometric data—data generated from an individual’s biological characteristics, like fingerprints and voiceprints. Over the past decade, BIPA garnered a reputation as an exceptionally plaintiff-friendly statute. But from 2023–2024, the Illinois legislature, Illinois Supreme Court, and Ninth Circuit Court of Appeals all sided with BIPA defendants for the first time. Most significantly, in Zellmer v. Meta Platforms, Inc., the Ninth Circuit dismissed the plaintiff’s BIPA claim because the face scan collected by the defendant could not be used to identify him.
It is unclear whether these developments represent a trend or an exception to BIPA’s plaintiff-friendliness. Which path is charted will largely turn on how courts interpret Zellmer: while Zellmer established that a biometric identifier must be able to identify an individual, lower courts have construed its holding narrowly, requiring that the entity collecting biometric data be itself capable of identifying the individual, rather than treating it as sufficient that any entity could do so. Reading BIPA this narrowly would significantly weaken the statute’s protections.
After detailing how employer and consumer cases catalyzed this recent defendant-friendly shift, this Comment proposes a two-step framework for determining whether a biometric identifier is able to identify an individual and thus falls within BIPA’s reach. Given BIPA’s broad influence, where courts ultimately land on this question will be crucial to the protection of biometric data nationwide.
The latest development in class action litigation is the “future stakes settlement.” Under this novel mechanism, unveiled in the settlement proposal to end a privacy law class action lawsuit against the startup Clearview AI, a defendant grants a privately traded equity stake to the class in exchange for a release of all claims.
Future stakes settlements, though similar to existing mechanisms in class action and bankruptcy law, offer distinct benefits and costs. Through a future stakes settlement, the class may recover against a cashless defendant and receive a larger payout than would be possible through a traditional cash damages fund. But this recovery is uncertain, as the value of a future stake can fluctuate. Furthermore, by transforming injured parties into shareholders, future stakes settlements pose serious moral quandaries.
Existing guidance for settlement agreements under Federal Rule of Civil Procedure 23(e) is insufficient to handle the high degree of risk associated with future stakes settlements. This Comment recommends additional standards that courts should apply when evaluating these settlements. Through these additions, courts can prevent defendant gamesmanship, ensure future stakes settlements are fair to the class, and fulfill the dual purposes of compensation and regulation in class actions.
Data privacy has been at the forefront of recent foreign-policy conversations.