Technology and Law

Online Essay
Automation Rights: How to Rationally Design Humans-Out-of-the-Loop Law
Orly Lobel
Orly Lobel is the Warren Distinguished Professor of Law and Director of the Center for Employment and Labor Policy (CELP) at the University of San Diego.

She graduated from Tel-Aviv University and Harvard Law School. She is among the most cited legal scholars in the United States—the most cited scholar in employment law and one of the most cited in law and technology. Professor Lobel served on President Obama's policy team on innovation and labor market competition, has advised the Federal Trade Commission (FTC), and has published multiple books to critical acclaim. Her latest book, The Equality Machine, is an Economist Best Book of the Year.

This Essay argues for the development of more robust—and balanced—law that focuses not only on the risks but also on the potential that AI brings. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights—the right to demand and the duty to deploy AI-based technology when it outperforms human-based action—should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law soon must embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.

Print Comment, Volume 91.7
Network Harms
Andy Z. Wang
B.S. 2022, San Jose State University; J.D. Candidate 2025, The University of Chicago Law School.

I would like to thank Professor Omri Ben-Shahar for his tremendous guidance and advice. Thank you to the editors and staff of the University of Chicago Law Review for their tireless editing support. A special thank you to Eric Haupt, Jack Brake, Karan Lala, Tanvi Antoo, Luke White, Jake Holland, Bethany Ao, Emilia Porubcin, Benjamin Wang, and Anastasia Shabalov for their invaluable insights and contributions along the way.

For data, the whole is greater than the sum of its parts. There may be millions of people with the same birthday, but how many of them also have a dog, a red car, and two kids? The more data is aggregated, the more identifying it becomes. Accordingly, the law has developed safe harbors for firms that take steps to prevent aggregation of the data they sell. A firm might, for instance, anonymize data by removing identifying information. But as computer scientists have shown, clever de-anonymization techniques enable motivated actors to unmask identities even when data is anonymized. Data brokers collect, process, and sell data, and courts have traditionally calculated data-brokering harms without considering the larger data ecosystem. This Comment suggests that a broader conception is needed because the harm caused by one broker's conduct depends on how other brokers behave. De-anonymization techniques, for instance, often cross-reference datasets to make guesses about missing data, and a motivated actor can buy datasets from multiple brokers and combine them. This Comment then offers a framework for courts to consider these "network harms" in the Federal Trade Commission's (FTC) recent lawsuits against data brokers under its Section 5 authority to prevent unfair acts and practices.

Print Article, Volume 88.2
Competing Algorithms for Law: Sentencing, Admissions, and Employment
Saul Levmore
William B. Graham Distinguished Service Professor of Law, The University of Chicago Law School.

We benefited from discussions with colleagues at a University of Chicago Law School workshop and with Concetta Balestra Fagan and Eliot Levmore.

Frank Fagan
Associate Professor of Law, EDHEC Business School, France.

When the past is thought to predict the future, it is unsurprising that machine learning, with access to large data sets, wins prediction contests against individuals, including judges. Just as computers predict next week's weather better than any human working alone, at least one study shows that machine learning can outperform judges in deciding whether to grant bail.