Too Small to Fail: A New Perspective on Environmental Penalties for Small Businesses
My views on these subjects owe much to my collaborators, especially Michael Barr, Megan Shearer, and Michael Wellman, with whom I have been studying the behavior of algorithmic traders in financial markets, and Howell Jackson, with whom I have been presenting on social media and capital markets at PIFS-IOSCO’s trainings for securities regulators. All errors are my own. Thanks to the participants at the University of Chicago’s Symposium on “How AI Will Change the Law” for helpful comments, and to the editors of the University of Chicago Law Review for their helpful insights.
This Essay argues that the increasing prevalence and sophistication of artificial intelligence (AI) will push securities regulation toward a more systems-oriented approach. In areas like manipulation, this approach will replace securities law's emphasis on enforcement targeted at specific individuals and backed by punitive sanctions with a greater focus on ex ante rules designed to shape an ecology of actors and information.
She graduated from Tel-Aviv University and Harvard Law School. She is among the most cited legal scholars in the United States: the most cited scholar in employment law and one of the most cited in law and technology. Professor Lobel has served on President Obama's policy team on innovation and labor market competition, has advised the Federal Trade Commission (FTC), and has published multiple books to critical acclaim. Her latest book, The Equality Machine, is an Economist Best Book of the Year.
This Essay argues for the development of more robust—and balanced—law that focuses not only on the risks but also on the potential that AI brings. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights—the right to demand and the duty to deploy AI-based technology when it outperforms human-based action—should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law soon must embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.
He thanks his clerks Nathan Pinnell and Isabella Soparkar for outstanding research assistance.
Professor Monica Haymond's article Intervention and Universal Remedies invites scholars to focus on the distinctive ways that public law litigation plays out in practice. This Essay takes up her challenge. By questioning common assumptions at the core of structural-reform litigation, this Essay explains the dangers of consent decrees, settlements, and broad precedents. It then argues that intervention is an important check on these risks and should be much more freely available in structural-reform cases.