Public policy must address threats that will manifest in the future. Legislation enacted today affects the severity of tomorrow’s harms arising from biotechnology, climate change, and artificial intelligence. This Essay focuses on Congress’s capacity to confront future threats. It uses a detailed case study of financial crises to show the limits and possibilities of legislation to prevent future catastrophes. By paying insufficient attention to Congress, the existing literature does not recognize the full nature and extent of the institutional challenges in regulating systemic risk. Fully recognizing those challenges reveals important design insights for future risk legislation.
Public Law
We offer a way of thinking about public-investment institutions as creatures of both public law and private markets. Placing public investment—a distinct public function—in the context of constitutional debates on the legitimate reach of the administrative state, we focus the search for legitimate institutional structure on the interaction between the entity’s efficacy as a market actor and the concept of public accountability. It is in this tension, and the synergy alongside it, that the fundamental hybridity of public-investment institutions is most visible. We argue that only by considering the unique objectives and tools of public investment as a legitimate sovereign activity can we design workable mechanisms of democratic accountability for public-investment institutions. We hope that our observations shed light on the broader debate about the optimal implementation mechanisms for the nation’s reemerging industrial policy.
This Essay argues for the development of more robust—and balanced—law that focuses not only on the risks, but also the potential, that AI brings. It then argues for a framework of laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights—the right to demand and the duty to deploy AI-based technology when it outperforms human-based action—should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment suggests that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law must soon embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.
Professor Monica Haymond’s Intervention and Universal Remedies article invites scholars to focus on the distinctive ways that public law litigation plays out in practice. This Essay takes up her challenge. By questioning common assumptions at the core of structural-reform litigation, this Essay explains the dangers of consent decrees, settlements, and broad precedents. It then argues that intervention is an important check on these risks and should be much more freely available in structural-reform cases.