Privacy Decisionmaking in Administrative Agencies
Much appreciation to Colin Bennett, Malcolm Crompton, Peter Cullen, Lauren Edelman, Robert Gellman, Chris Hoofnagle, Robert Kagan, Jennifer King, Anne Joseph O’Connell, Fred B. Schneider, Ari Schwartz, Paul Schwartz, and the participants at The University of Chicago Law School’s Surveillance Symposium for insight, comment, and discussion; Nuala O’Connor Kelly and Peter Swire for consenting to be interviewed about their experience in privacy leadership roles within the United States government; Sara Terheggen, Marta Porwit Czajkowska, Rebecca Henshaw, and Andrew McDiarmid for their able research assistance.
Administrative agencies increasingly rely on technology to promote the substantive goals they are charged to pursue. The Department of Health and Human Services has prioritized digitized personal health data as a means of improving patient safety and reducing bureaucratic costs. The Department of Justice hosts electronic databases that pool information across agencies to facilitate national law enforcement in ways previously unimaginable. The Departments of Defense and Education mine digital information to effect goals as diverse as human resources management; service improvement; fraud, waste, and abuse control; and detection of terrorist activity.
Critics of generative AI often describe it as a “plagiarism machine.” They may be right, though not in the sense they mean. With rare exceptions, generative AI doesn’t copy someone else’s creative expression or produce outputs that infringe copyright. But it does get its ideas from somewhere. And it’s quite bad at identifying the source of those ideas. That means that students (and professors, and lawyers, and journalists) who use AI to produce their work generally aren’t engaged in copyright infringement. But they are often passing someone else’s work off as their own, whether or not they know it. While plagiarism is a problem in academic work generally, AI makes it much worse because authors who use AI may unknowingly be taking someone else’s ideas and words.
Disclosing that the authors used AI isn’t a sufficient solution because the people whose ideas are being used still don’t get credit for them. Whether or not a declaration that “AI came up with my ideas” counts as plagiarism, failing to make a good-faith effort to find the underlying sources is bad academic practice.
We argue that AI plagiarism isn’t—and shouldn’t be—illegal. But it is still a problem in many contexts, particularly academic work, where proper credit is an essential part of the ecosystem. We suggest best practices to align academic and other writing with good scholarly norms in the AI environment.
Beware dark patterns. The name should be a warning, perhaps alluding to the dark web, the “Dark Lord” Sauron, or another archetypically villainous and dangerous entity. Rightfully included in this nefarious bunch, dark patterns are software interfaces that manipulate users into doing things they would not otherwise do. Because restricting these interfaces means restricting how companies communicate with users, dark pattern regulations raise First Amendment complications, and their constitutionality is an unsettled question. To begin constructing an answer, we must look at how dark patterns are regulated today, how companies have begun to challenge the constitutionality of such regulations, and where dark patterns fall in the grand scheme of free speech. Taken together, these steps inform an approach to regulation going forward.