Oversight

Online
Essay
Automation Rights: How to Rationally Design Humans-Out-of-the-Loop Law
Orly Lobel
Orly Lobel is the Warren Distinguished Professor of Law and Director of the Center for Employment and Labor Policy (CELP) at the University of San Diego.

She graduated from Tel-Aviv University and Harvard Law School. She is among the most cited legal scholars in the United States, the most cited scholar in employment law, and one of the most cited in law and technology. Professor Lobel has served on President Obama’s policy team on innovation and labor market competition, has advised the Federal Trade Commission (FTC), and has published multiple books to critical acclaim. Her latest book, The Equality Machine, is an Economist Best Book of the Year.

This Essay argues for the development of more robust—and balanced—law that focuses not only on the risks but also on the potential of AI. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights—the right to demand and the duty to deploy AI-based technology when it outperforms human-based action—should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law soon must embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.

Online
Essay
The Law of AI is the Law of Risky Agents Without Intentions
Ian Ayres
Oscar M. Ruebhausen Professor, Yale Law School.
Jack M. Balkin
Knight Professor of Constitutional Law and the First Amendment, Yale Law School.

Harran Deu provided helpful research assistance.

A recurrent problem in adapting law to artificial intelligence (AI) programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. We think that the best solution is to employ objective standards that are familiar in many different parts of the law. These legal standards either ascribe intention to actors or hold them to objective standards of conduct.

Online
Essay
Kendall v. United States and the Inspector General Dilemma
Daniel Epstein
Daniel Epstein is the Vice President for Legal and Policy at Trust Ventures, a venture capital firm focused on startups facing regulatory barriers. He is also a PhD candidate in administrative law and empirical methods at George Washington University. Prior to Trust Ventures, Dan served as Senior Associate Counsel and Special Assistant to the President in the White House, from inauguration until March 2020. Dan is currently a pending nominee for the United States Court of Federal Claims.

In a span of less than two months, President Donald Trump removed or replaced multiple inspectors general (“IGs”)—statutorily authorized watchdogs within federal agencies.