Too Small to Fail: A New Perspective on Environmental Penalties for Small Businesses
The author thanks the University of Chicago Law Review Online team for their helpful feedback.
This Case Note first reviews the origins of the postal-matter exception and the FTCA. Then, it analyzes the Fifth Circuit’s holding in Konan and explores contrasting precedent in other circuits, most notably in the First and Second Circuits. Finally, this Note discusses the difficulty of balancing USPS’s interests against enabling suits under the FTCA and considers the implications of providing a tort remedy.
My views on these subjects owe much to my collaborators, especially Michael Barr, Megan Shearer, and Michael Wellman, with whom I have been studying the behavior of algorithmic traders in financial markets, and Howell Jackson, with whom I have been presenting on social media and capital markets at PIFS-IOSCO’s trainings for securities regulators. All errors are my own. Thanks to the participants at the University of Chicago’s Symposium on “How AI Will Change the Law” for helpful comments, and to the editors of the University of Chicago Law Review for their helpful insights.
This Essay argues that the increasing prevalence and sophistication of artificial intelligence (AI) will push securities regulation toward a more systems-oriented approach. In areas like manipulation, this approach will shift securities law's emphasis away from enforcement targeted at specific individuals and accompanied by punitive sanctions, and toward ex ante rules designed to shape an ecology of actors and information.
She graduated from Tel-Aviv University and Harvard Law School. One of the most cited legal scholars in the United States, the most cited in employment law, and among the most cited in law and technology, she is influential in her field. Professor Lobel has served on President Obama's policy team on innovation and labor market competition, has advised the Federal Trade Commission (FTC), and has published multiple books to critical acclaim. Her latest book, The Equality Machine, is an Economist Best Book of the Year.
This Essay argues for the development of more robust, and balanced, law that focuses not only on the risks but also on the potential that AI brings. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights, that is, the right to demand and the duty to deploy AI-based technology when it outperforms human-based action, should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law soon must embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.