When one thinks of government, what comes to mind are familiar general-purpose entities like states, counties, cities, and townships. But more than half of the 90,000 governments in the United States are strikingly different: They are “special-purpose” governments that do one thing, such as supply water, fight fire, or pick up the trash. These entities remain understudied, and they present at least two puzzles. First, special-purpose governments are difficult to distinguish from entities that are typically regarded as business organizations—such as consumer cooperatives—and thus underscore the nebulous border between “public” and “private” enterprise. Where does that border lie? Second, special-purpose governments typically provide only one service, in sharp contrast to general-purpose governments. There is little in between the two poles—such as two-, three-, or four-purpose governments. Why? This Article answers those questions—and, in so doing, offers a new framework for thinking about special-purpose government.
Law and Economics
For years, academic experts have championed the widespread adoption of the “Fama-French” factors in legal settings. Factor models are commonly used to perform valuations, performance evaluations, and event studies across a wide variety of contexts, many of which rely on data provided by Professor Kenneth French. Yet these data are beset by a problem that the experts themselves did not understand: In a companion article, we document widespread retroactive changes to French’s factor data. These revisions stem from discretionary changes to the construction of the factors and materially affect a broad range of estimates. In this Article, we show how these retroactive changes can have enormous impacts in precisely the settings in which experts have pressed for their use. We provide examples of valuations, performance analyses, and event studies in which the retroactive changes have a large—and even dispositive—effect on an expert’s conclusions.
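To make the stakes concrete, the sketch below shows the kind of factor-model calculation at issue: a standard Fama-French three-factor regression used to estimate a fund's alpha, the intercept that expert reports often treat as evidence of abnormal performance. The file names, column labels, and data are hypothetical, and the code illustrates the general technique rather than the Article's own analysis; the point is only that rerunning an identical regression on two vintages of the factor data can yield different estimates.

```python
# Minimal sketch of a Fama-French three-factor performance evaluation,
# illustrating how two vintages of the same factor data can yield
# different alpha estimates. File names and columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

def estimate_alpha(fund_returns: pd.Series, factors: pd.DataFrame) -> float:
    """Regress a fund's excess returns on the Mkt-RF, SMB, and HML factors;
    the regression intercept is the fund's estimated monthly alpha."""
    excess = fund_returns - factors["RF"]  # fund return net of the risk-free rate
    X = sm.add_constant(factors[["Mkt-RF", "SMB", "HML"]])
    model = sm.OLS(excess, X).fit()
    return model.params["const"]

# Hypothetical inputs: monthly fund returns and two downloads ("vintages")
# of the factor file obtained years apart, aligned on the same dates.
fund = pd.read_csv("fund_returns.csv", index_col="date")["ret"]
old_vintage = pd.read_csv("factors_2015_vintage.csv", index_col="date")
new_vintage = pd.read_csv("factors_2020_vintage.csv", index_col="date")

# If the factor history was retroactively revised, the same fund over the
# same sample period produces two different alphas, and potentially two
# different expert conclusions.
print("alpha (old vintage):", estimate_alpha(fund, old_vintage))
print("alpha (new vintage):", estimate_alpha(fund, new_vintage))
```

Because litigation outcomes can turn on the sign or statistical significance of that intercept, even modest retroactive revisions to the factor file can flip an expert's conclusion.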
This Essay argues for the development of more robust—and balanced—law that focuses not only on the risks but also on the potential that AI brings. In turn, it argues that there is a need to develop a framework for laws and policies that incentivize and, at times, mandate transitions to AI-based automation. Automation rights—the right to demand and the duty to deploy AI-based technology when it outperforms human-based action—should become part of the legal landscape. A rational analysis of the costs and benefits of AI deployment would suggest that certain high-stakes circumstances compel automation because of the high costs and risks of not adopting the best available technologies. Inevitably, the rapid advancements in machine learning will mean that law soon must embrace AI; accelerate deployment; and, under certain circumstances, prohibit human intervention as a matter of fairness, welfare, and justice.
Title VII’s anti-retaliation provision is clear: if an employee complains about employment discrimination, it is illegal for an employer to retaliate against them.
On November 19, 2021, Kyle Rittenhouse was acquitted of homicide charges stemming from his killing of two people—Anthony Huber and Joseph Rosenbaum—at a protest of police violence in Kenosha, Wisconsin. Rittenhouse had armed himself and traveled to the protest, purportedly to defend Kenoshans’ property against looting.