Defending Disclosure in Software Licensing
The authors thank George Hay, Stewart Schwab, and the faculties of Boston University School of Law and Cornell Law School for their comments. Daniel Forester provided excellent research assistance.
This Article surveys prominent kinds of disclosures in contract law—of facts, contract terms, and performance intentions. We show why the disclosure tool, although subject to substantial criticism, promotes important social values and goals, including efficiency, autonomy, corrective justice, fairness, and the legitimacy of the contract process. Further, proposals to replace disclosure are unrealistic because the alternatives are too expensive or complex. Our working example is the American Law Institute’s Principles of the Law of Software Contracts.
Thanks to Uven Chong for research assistance. Anu Bradford offered gracious, insightful, and generous comments on a draft that strives to be fair, if critical, of her work. For her careful engagement, I am respectfully and deeply grateful. Editors of the University of Chicago Law Review, including Helen Zhao, Daniella Apodaca, and Nathan Hensley, did excellent work on the text.
Contemporary regulation of new digital technologies by nation-states unfolds under a darkening shadow of geopolitical competition. Three recent monographs offer illuminating and complementary maps of these geopolitical conflicts. Folding together insights from all three books opens up a new and sharper understanding of geopolitical dynamics. That understanding suggests grounds for skepticism about the emergence of a deep regulatory equilibrium centered on the emerging slate of European laws: growing bipolar geostrategic conflict between the United States and China will confine any area of overlap to less important questions. Ambitions for global regulatory convergence on new digital technology should therefore be modest.
The author wishes to thank Abdi Aidid, Ben Alarie, Francesco Ducci, Anthony Niblett, Tom Ross, and Michael Trebilcock, as well as participants at the How AI Will Change the Law Symposium at the University of Chicago, for helpful comments and conversations.
This Essay focuses largely on structural responses to AI pricing in antitrust, outlining the bulk of its argument in the context of merger law but also considering monopolization law and exclusionary conduct. It argues that the relationship between the strictness of the law and the sophistication of AI pricing is not straightforward. In the short run, a stricter approach to merger review might well make sense, but as AI pricing becomes more sophisticated, merger policy ought to become less strict: if anticompetitive outcomes are inevitable with or without a merger because of highly sophisticated AI pricing, antitrust interventions to stop mergers will not affect pricing and will instead create social losses by impeding efficient acquisitions. The Essay considers similar questions in the context of monopolization and concludes by observing that the rise of AI pricing will strengthen the case for antitrust law to shift its focus away from high prices and static allocative inefficiency and toward innovation and dynamic efficiency.
Harran Deu provided helpful research assistance.
A recurrent problem in adapting law to artificial intelligence (AI) programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. We think that the best solution is to employ objective standards that are familiar in many different parts of the law. These legal standards either ascribe intention to actors or hold them to objective standards of conduct.