Holders of patents covering technology standards, known as standard-essential patents (SEPs), control the rights to inventions that have no commercially viable alternative or that cannot be designed around while still complying with a standard. This gives SEP holders significant leverage in licensing negotiations. Standards development organizations (SDOs) play an important role in curbing opportunistic behavior by patent holders: they require SEP holders to license their patents on fair, reasonable, and non-discriminatory (FRAND) terms. However, courts have mischaracterized FRAND commitments, concluding that these disputes carry a Seventh Amendment guarantee of a jury trial. This mischaracterization undermines the fair resolution of FRAND disputes, and a different approach is necessary. In this Comment, Marta Krason proposes an alternative analytical framework that more accurately characterizes FRAND disputes by drawing on principles from contract and property law, concluding that the constitutionally proper adjudicator is a judge, not a jury.
Technology and Law
The internet plays a crucial role in modern life, yet equal access to it is not guaranteed. Drawing on existing arguments for tribal spectrum sovereignty, Morgan Schaack writes that the FCC’s licensing control over the electromagnetic spectrum, combined with language common in many tribal treaties, creates a tribal right of access to spectrum under the trust responsibility. Framing spectrum access as a trust-protected resource, the Comment argues that permitting tiered internet service in the absence of net neutrality violates the government’s obligations under the trust responsibility.
Recently, many states have reacted to the growing data economy by passing data privacy statutes. These follow the “interaction model”: they allow consumers to exercise privacy rights against firms by directly interacting with them. But data brokers, firms that buy and sell data about consumers with whom they do not directly interact, are key players in the data economy. How is a consumer meant to exercise these rights against a broker when an “interaction gap” stands between them?
A handful of states have tried to soften the interaction gap by enacting data-broker-specific legislation under the “transparency model.” These laws, among other things, require brokers to disclose themselves publicly in state registries. The theory is that consumers would exercise their rights against brokers if they knew of the brokers’ existence. California recently went further with the Delete Act, which grants consumers data-broker-specific privacy rights.
Assembling brokers’ reported privacy request metrics, this Comment performs an empirical analysis of the transparency model’s efficacy. The findings demonstrate that the transparency model does not effectively help consumers follow through on their expected privacy preferences or meaningfully impact brokers. Regulators should therefore follow in the footsteps of the Delete Act and move beyond the transparency model.
This Essay argues for the development of more robust, and more balanced, law that focuses not only on the risks but also on the potential that AI brings. In turn, it argues that there is a need for a framework of laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights (the right to demand, and the duty to deploy, AI-based technology when it outperforms human-based action) should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, rapid advances in machine learning will mean that the law must soon embrace AI; accelerate its deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.
For data, the whole is greater than the sum of its parts. There may be millions of people with the same birthday. But how many also have a dog, a red car, and two kids? The more data is aggregated, the more identifying it becomes. Accordingly, the law has developed safe harbors for firms that take steps to prevent aggregation of the data they sell. A firm might, for instance, anonymize data by removing identifying information. But as computer scientists have shown, clever de-anonymization techniques enable motivated actors to unmask identities even in anonymized data.

Data brokers collect, process, and sell data. Courts have traditionally calculated data-brokering harms without considering the larger data ecosystem. This Comment suggests that a broader conception is needed because the harm caused by one broker’s conduct depends on how other brokers behave. De-anonymization techniques, for instance, often cross-reference datasets to make guesses about missing data, and a motivated actor can buy datasets from multiple brokers and combine them, as the sketch below illustrates. The Comment then offers a framework for courts to weigh these “network harms” in the Federal Trade Commission’s (FTC) recent lawsuits against data brokers under its Section 5 authority to prevent unfair acts or practices.
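To make the aggregation mechanic concrete, the following sketch (in Python) shows a simple linkage attack: joining two individually “anonymous” datasets on shared quasi-identifiers. The records, field names, and the link function are hypothetical illustrations, not material from the Comment.

```python
# Hypothetical linkage-attack sketch: all data and field names are invented
# for illustration.

# Dataset from broker A: "anonymized" health records, names removed.
health_records = [
    {"birthday": "1990-04-12", "zip": "60637", "diagnosis": "asthma"},
    {"birthday": "1985-11-03", "zip": "60615", "diagnosis": "diabetes"},
]

# Dataset from broker B: a marketing list with names intact.
marketing_list = [
    {"name": "J. Doe", "birthday": "1990-04-12", "zip": "60637"},
    {"name": "R. Roe", "birthday": "1971-06-30", "zip": "60614"},
]

# Fields that, in combination, narrow the population enough to single people out.
QUASI_IDENTIFIERS = ("birthday", "zip")

def link(anonymous, named):
    """Re-identify rows in `anonymous` by matching quasi-identifiers in `named`."""
    # Index the named dataset by its quasi-identifier tuple for O(1) lookups.
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"] for r in named}
    matches = []
    for record in anonymous:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            # Neither dataset alone names the patient; combined, they do.
            matches.append({**record, "name": index[key]})
    return matches

print(link(health_records, marketing_list))
# [{'birthday': '1990-04-12', 'zip': '60637', 'diagnosis': 'asthma', 'name': 'J. Doe'}]
```

Run as written, the script attaches a name to the asthma record even though neither broker’s dataset, standing alone, pairs a name with a diagnosis; this is the sense in which one broker’s conduct can amplify the harm caused by another’s.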
When the past is thought to predict the future, it is unsurprising that machine learning, with access to large data sets, wins prediction contests against individuals, including judges. Just as computers predict next week’s weather better than any human working alone, at least one study shows that machine learning can make better decisions than judges when deciding whether to grant bail.