UCLR Online
This Essay focuses largely on structural responses to AI pricing in antitrust, outlining the bulk of its argument in the context of merger law while also considering monopolization law and exclusionary conduct. It argues that the relationship between the strictness of the law and the sophistication of AI pricing is not straightforward. In the short run, a stricter approach to merger review might well make sense, but as AI pricing becomes more sophisticated, merger policy ought to become less strict: if anticompetitive outcomes are inevitable with or without a merger because of highly sophisticated AI pricing, antitrust interventions to stop mergers will not affect pricing and instead will create social losses by impeding efficient acquisitions. This Essay considers similar questions in the context of monopolization. It concludes by observing that the rise of AI pricing will strengthen the case for antitrust law to shift its focus away from high prices and static allocative inefficiency and toward innovation and dynamic efficiency.
Causal AI is within reach. It has the potential to trigger nothing less than a conceptual revolution in the law. This Essay explains why and takes a cautious look into the crystal ball. Causation is an elusive concept in many disciplines—not only the law, but also science and statistics. Even the most up-to-date artificial intelligence systems do not “understand” causation, as they remain limited to the analysis of text and images. It is a long-standing statistical axiom that it is impossible to infer causation from the correlation of variables in datasets. This thwarts the extraction of causal relations from observational data. But important advances in computer science will enable us to distinguish between mere correlation and factual causation. At the same time, artificially intelligent systems are beginning to learn how to “think causally.”
This Essay argues for the development of more robust—and balanced—law that focuses not only on the risks, but also on the potential, that AI brings. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights—the right to demand and the duty to deploy AI-based technology when it outperforms human-based action—should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment would suggest that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law soon must embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.
Professor Monica Haymond’s Intervention and Universal Remedies article invites scholars to focus on the distinctive ways that public law litigation plays out in practice. This Essay takes up her challenge. By questioning common assumptions at the core of structural-reform litigation, this Essay explains the dangers of consent decrees, settlements, and broad precedents. It then argues that intervention is an important check on these risks and should be much more freely available in structural-reform cases.
A recurrent problem in adapting law to artificial intelligence (AI) programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. We think that the best solution is to employ objective standards that are familiar in many different parts of the law. These legal standards either ascribe intention to actors or hold them to objective standards of conduct.
This Essay explores whether the use of AI to enhance decision-making brings about radical change in legal doctrine or, by contrast, is just another new tool. It focuses on decision-making by board members, an especially relevant example because corporate law has laid out explicit expectations for how board members must go about decision-making.
AI applications will put an end to negotiation processes as we know them. The typical back-and-forth communication and haggling in a state of information insecurity could soon be a thing of the past. AI applications will increase the information level of the parties and drastically reduce transaction costs. A quick and predictable agreement in the middle of a visible bargaining range could become the new normal. But sophisticated negotiators will shift this bargaining range to their advantage. They will automate negotiation moves and execute value-claiming strategies with precision, exploiting remaining information asymmetries to their advantage. Negotiations will no longer be open-ended communication processes. They will become machine-driven chess endgames. Large businesses will have the upper hand in these endgames.
This paper examines the distinct features of artificial intelligence (AI) and reaches a broader conclusion as to the availability and applicability of first-order tort rules. It evaluates the accuracy of the argument that AI is similar in essence to other emerging technologies that have entered our lives since the First Industrial Revolution and, therefore, does not require special legal treatment. The paper then explores whether our current tort doctrines can serve us well even when addressing AI liability.
Changing technologies render tax law’s intricacy legible in new ways. Advances in large language models, natural language processing, and programming languages designed for the domain of tax law make formalizations, or “representation[s] of [ ] legislation in symbols[ ] using logical connectives,” of tax law that capture much of its substance and structure both possible and realistic. These new formalizations can be used for many different purposes—what one might call flexible formalizations. Flexible formalizations will make law subject to computational analysis, including creating automated explanations of the analysis and testing statutes for consistency and unintended outcomes. This Essay builds upon existing work in computational law and digitalizing legislation.
Courts, litigants, and scholars should not be confused by the ongoing debate about nationwide or so-called “universal” injunctions: the proper scope of remedies under the Administrative Procedure Act (APA) and other statutes providing for judicial review of agency action is “erasure.” This Article aims to save scholars’ recent progress in showing the legality of stays and vacatur under the APA from muddled thinking that conflates these forms of relief with other universal remedies that face growing criticism.
This Essay proposes using the dilemma defendants face in parallel proceedings as a way to measure the Value of Statistical Freedom (VSF). The VSF (sometimes called the Value of Liberty) can be thought of as an individual’s willingness to pay to not be in prison. The VSF is spiritually similar to the far more prevalent “Value of Statistical Life” (VSL), which measures an individual’s willingness to trade money or wealth for a marginal change in the probability of death.
How often do Supreme Court opinions include what might be called “lobbying language,” which endorses a policy position while calling for another government entity to realize it? Reviewing relevant cases, this Essay identifies at least a dozen examples of lobbying language. As it turns out, lobbying is not so unusual for the Supreme Court.