The negative moral emotions of guilt and shame impose real social costs but also create opportunities for policymakers to engender compliance with legal rules in a cost-effective manner. This Essay presents a unified model of guilt and shame that demonstrates how legal policymakers can harness negative moral emotions to increase social welfare. The prospect of guilt and shame can deter individuals from violating moral norms and legal rules, thereby substituting for the expense of state enforcement. But when legal rules and law enforcement fail to induce total compliance, guilt and shame experienced by noncompliers can increase the law’s social costs. The Essay identifies specific circumstances in which rescinding a legal rule will improve social welfare because eliminating the rule reduces the moral costs of noncompliance with the law’s command. It also identifies other instances in which moral costs strengthen the case for enacting legal rules and investing additional resources in enforcement because deterrence reduces the negative emotions experienced by noncompliers.
Ian Ayres
In February of this year, we published a call for the government to relaunch the federal Gun Control Act’s § 925(c) petition process, which empowers anyone subject to a federal restriction (“disability”) on their ability to purchase or possess firearms to apply to the Department of Justice for restoration of their gun rights.
The Trump Justice Department has moved with some dispatch to relaunch the program—using a workaround we suggested in our piece. In this short Essay, we propose several improvements to the proposed regulation.
Harran Deu provided helpful research assistance.
A recurrent problem in adapting law to artificial intelligence (AI) programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. We think that the best solution is to employ objective standards that are familiar in many different parts of the law. These legal standards either ascribe intention to actors or hold them to objective standards of conduct.