Defending Disclosure in Software Licensing
The authors thank George Hay, Stewart Schwab, and the faculties of Boston University School of Law and Cornell Law School for their comments. Daniel Forester provided excellent research assistance.
This Article surveys prominent kinds of disclosures in contract law—of facts, contract terms, and performance intentions. We show why the disclosure tool, although subject to substantial criticism, promotes important social values and goals, including efficiency, autonomy, corrective justice, fairness, and the legitimacy of the contract process. Further, proposals to replace disclosure are unrealistic because the alternatives are too expensive or complex. Our working example is the American Law Institute’s Principles of the Law of Software Contracts.
This Article has benefited from workshops at Harvard Law School, Northwestern Pritzker School of Law, the University of Chicago Law School, the University of Virginia School of Law, and Yale Law School, in addition to helpful comments from, and conversations with, Ian Ayres, Will Baude, Curt Bradley, Danielle Citron, Alex Hemmer, Aziz Huq, Alison LaCroix, David Strauss, David Weisbach, and Taisu Zhang. Finally, we thank the Neubauer Collegium and the University of Chicago Data Science Institute for their generous financial support.
The central concern of structural constitutional law is the organization of governmental power, but power comes in many forms. This Article develops an original account of data’s structural law—the processes, institutional arrangements, transparency rules, and control mechanisms that, we argue, create distinctive structural dynamics for data’s acquisition and appropriation to public projects. Doing so requires us to reconsider how law treats the category of power to which data belongs. Data is an instrument of power. The Constitution facilitates popular control over material forms of power through distinctive strategies, ranging from defaults to accounting mechanisms. Assessing data’s structural ecosystem against that backdrop allows us to both map the structural law of data and provide an initial diagnosis of its deficits. Drawing on our respective fields—law and computer science—we conclude by suggesting legal and technical pathways to asserting greater procedural, institutional, and popular control over the government’s data.
Critics of generative AI often describe it as a “plagiarism machine.” They may be right, though not in the sense they mean. With rare exceptions, generative AI doesn’t copy someone else’s creative expression outright or produce outputs that infringe copyright. But it does get its ideas from somewhere. And it’s quite bad at identifying the source of those ideas. That means that students (and professors, and lawyers, and journalists) who use AI to produce their work generally aren’t engaged in copyright infringement. But they are often passing someone else’s work off as their own, whether or not they know it. While plagiarism is a problem in academic work generally, AI makes it much worse because authors who use AI may be unknowingly taking the ideas and words of someone else.
Disclosing that the authors used AI isn’t a sufficient solution to the problem because the people whose ideas are being used don’t get credit for those ideas. Whether or not a declaration that “AI came up with my ideas” is plagiarism, failing to make a good-faith effort to find the underlying sources is a bad academic practice.
We argue that AI plagiarism isn’t—and shouldn’t be—illegal. But it is still a problem in many contexts, particularly academic work, where proper credit is an essential part of the ecosystem. We suggest best practices to align academic and other writing with good scholarly norms in the AI environment.