But What Is Personalized Law?
Introduction
Personalized law is on-trend. Data and computing technology have given us personalized medicine, personalized advertising, and the personalization of every imaginable consumer good. Now personalization is also “a rapidly growing phenomenon in legal thought.” In Personalized Law: Different Rules for Different People, Professors Omri Ben-Shahar and Ariel Porat undertake to synthesize and ground the field.
So what is personalized law, exactly? The book is full of examples: individualized speed limits that flash on vehicle dashboards, standards of care that vary by a person’s risk and skill level, fines calibrated to income, default rules sensitive to average group preferences, and medical product disclosures responsive to each consumer’s health profile. Personalized law clearly involves more data, more machine-learning algorithms, more responsiveness to individual circumstances, and more direct communication to individuals. What is less clear is how deep these changes go. On one reading, they are incremental—just law doing what law already does with more technological sophistication.
The authors have a different view. They want to make the case that personalized law is not incremental but revolutionary. From the start, they cast personalized law as “a vision of a brave new world,” a radical break with the past. But although they offer a range of characterizations of this radical change, none quite manages to identify what it is about their examples that constitutes a structural transformation in the nature of legal rules.
This Essay aims to help clarify the concept of personalized law. It starts by explaining why the authors’ definitions fall short. Then it attempts to articulate in conceptual terms what unites the examples of personalized law that the book describes and distinguishes them from the status quo.
The first step toward a more precise conception of personalized law—“different rules for different people”—is to consider the nature of rules. As Frederick Schauer has explained, all rules must generalize on some dimension; that is the difference between a rule and an order. The difficult and important questions for legal-rule structure are on what dimension a rule should generalize and the degree to which we should specify its individualized requirements ex ante. This Essay attempts to taxonomize a set of basic differences in how rules can generalize.
With a framework for rule structure in place, it becomes clear that in advocating for personalized law, Ben-Shahar and Porat are really advocating for two conceptually distinct shifts. The first is a shift from rules that prescribe specific conduct toward rules that prescribe a social outcome, like a risk or efficiency target. This is the difference, for instance, between a rule that requires each person to drive below 55 mph and a rule that requires each person to keep accident risk below a certain threshold, such that speed limits will vary by person. The second shift is toward greater ex ante specification of what rules require of individuals.
Lastly, the Essay raises two potential costs of these shifts that Personalized Law does not address at length. First, social-outcome rules that use big data to specify individualized requirements are likely to have disparate racial impact. Contrary to the authors’ assertion, this cannot be fixed by statistical methods. The second potential cost concerns the experience of community. Forsaking conduct rules might undermine the experience of shared rule compliance, and forsaking loosely specified rules would eliminate a particular experience of collective agency in the development of the law.
I. Incomplete Definitions
Ben-Shahar and Porat define personalized law, variously, as the opposite of uniform law; as a “new precision law” that diverges from “old precision law” by incorporating data about internal human experience and thus attaining a much higher degree of precision; and as “precision law characterized by individuation and machine-sorted information.” None of these definitions is sufficient to describe a meaningful structural shift in the nature of legal rules. The central reason is that rules always combine elements of both generality and particularity—that is, of both uniformity and personalization.
A. The Uniform-Personalized Dichotomy
As Ben-Shahar and Porat themselves acknowledge, there is no clean dichotomy between “personalized” and “uniform” law. A great deal of existing law already tailors its commands to an individual’s particular circumstances. Examples include sentencing, damage awards, income taxation, and the disbursement of public benefits. This is hardly a new development. The Code of Hammurabi was particularized to offense type and offender status. In the civil law tradition, the good-faith exception has served as a vehicle of legal personalization since Roman times, as Catalina Goanta chronicles. In the common law tradition, personalization was arguably the modus operandi of traditional courts of equity.
Even the most general rule, furthermore, must particularize on some features. It must be situated in some context. Consider the rule that individuals must conform their conduct to the standard of a reasonable person, that ubiquitous wanderer of legal doctrine. The authors cast the reasonable-person standard as a paradigm case of uniformity. But the standard is not remotely uniform. As the authors acknowledge, the law has evolved distinct reasonableness standards for certain groups in most contexts (e.g., a heightened standard of care for medical experts). More importantly, the reasonableness standard incorporates “the circumstances.” You are required to act in the way a reasonable person in your circumstances would. This qualifier goes a long way toward particularizing the rule. Finally, even the most uniform legal rules admit of exceptions that render them less uniform than they appear.
In light of the pervasiveness of particularization in the law, individualized legal treatment per se does not mark any radical break with the past.
B. The External-Internal Feature Dichotomy
Nor will technology unleash a structural transformation by enabling legal rules and institutions to take account of internal human traits and experience. Much of the law already does that. Consider the centrality of mens rea in criminal law or the relevance of physical and emotional experience to damages calculations in tort. Even civil liability or the outcome of a property or contract dispute frequently hinges on data about a person’s internal states—awareness or knowledge of certain information, willfulness, intentions, cognitive capacity, etc. Technology might give us much better data about people’s internal traits and experience, and perhaps the availability of that data will lead us to take legal account of it more than we now do, but this too will be a change in degree rather than in kind.
C. The Spectrum of Precision (or Particularity)
The conception of personalized law that dominates the book is that of a leap along a spectrum of legal precision. On this conception, legal rules have always been personalized to some extent, but today’s technology will allow such a profound increase in the precision of legal rules as to fundamentally transform the nature of the law.
Yet even this definition is less informative than it seems at first blush. The first reason is that “precision” is not itself a precise term; it can have multiple meanings. Colin Diver, in The Optimal Precision of Administrative Rules, defines legal “precision” as consisting of three qualities: a rule is precise to the extent it is (1) transparent (uses “words with well-defined and universally accepted meanings within the relevant community”), (2) accessible (“applicable to concrete situations without excessive difficulty or effort”), and (3) congruent with the underlying policy goal. There is no single spectrum of precision. A rule for my kids that bedtime is at 8 p.m. is more transparent and accessible but less congruent with our substantive goals than a rule that each kid must go to bed at a time appropriate for her sleep needs.
The second reason that “precision law” is an underspecified conception of personalized law is that the defining feature of personalization seems to be particularity—individualized variation in the content and communication of legal commands. For the reasons just discussed, particularity is not synonymous with precision. Nor can we speak instead of a spectrum of particularity, with uniform rules at one end and particularized rules at the other. Once again, there is no single spectrum; rather, rules can be generalized and particularized in different ways.
The most particularized rules, for instance, are also uniform on some dimension. Sentencing laws prescribe a uniform punishment range for each crime and uniform criteria for just punishment; judges apply the uniform rules to craft an individualized sentence in each case. The tax code directs every citizen to contribute a portion of her income to the public fisc pursuant to a uniform algorithm; the algorithm, applied to your circumstances, dictates the particular amount that you owe. Even the personalized speed limits that the authors envision are simply particularized applications of two uniform directives: (1) that drivers obey the limits generated by government algorithm and (2) whatever underlying directive (optimization function) the algorithm implements.
What Ben-Shahar and Porat describe as personalized laws are really finely tailored applications of uniform directives. Any legal command must derive from a more general directive, or it cannot be said to belong to a rule at all. A rule entails generality. There cannot be “different rules for different people,” or at least not a different rule for each person on each occasion; a legal command that applies only to one person in one unique set of circumstances is not a rule but an order. If personalized law involves rules, it cannot be defined by juxtaposition to uniform directives.
It is likewise unhelpful to define personalized law by the presence of individualized commands, because any rule can be restated as a set of particularized commands. A uniform traffic fine of $1000 can be phrased as a personalized directive to a top law-firm partner to pay an hour’s salary or to a fast-food cook to pay three weeks’ wages. Even a uniform speed limit can, in theory, be communicated in personalized terms. We could use technology to communicate to each person on the road precisely how hard she must press her accelerator or brake to attain a speed of 55 mph. The point is more than semantic. Because any facially uniform rule applies to people in varying circumstances, it requires something slightly different of each of us. “Uniform” bail schedules mandate release for people who can access the money and detention for people who cannot. Conforming one’s conduct to a reasonableness standard requires greater effort for some than for others.
This is not to deny that there is a meaningful difference between a 55 mph speed limit and a risk standard that produces individualized speed limits. The point is just that the terms “uniformity” and “personalization” (or “generality” and “particularity”) are not sufficient, by themselves, to describe what that difference is. A technology-driven system that instructed each driver exactly how to reach 55 mph would be very precise and particularized in some ways. Yet it is not what Ben-Shahar and Porat have in mind.
Can we do better at defining the phenomenon that the authors call personalized law? The next Part makes an attempt. It begins by trying to identify the ways that generality and particularity can be distributed in legal rules.
II. An Alternate Conception of Personalized Law
A. Every Rule Must Generalize
Given that any rule must generalize on some dimension, the central normative question is on what dimension it should do so. What is the generalized directive—the optimization function—that should drive the traffic-safety algorithm or the sentencing judge’s deliberations?
The reasonable-person standard illustrates the question. As noted above, the standard requires you to act as a reasonable person in your circumstances would. The enduring question is which circumstances count as context and which do not.
In advocating for a shift from the “reasonable person” to the “reasonable you,” Personalized Law suggests that a personalized reasonableness standard would count all the circumstances—all the features of your physiology and internal experience and all the external features of your situation at the relevant moment in time. But to embed all circumstances in the standard is to eliminate its capacity to guide conduct. Rather than a rule requiring you to act as would a person who shared some of your circumstances, the completely personalized rule would state: “act as a person who shared all of your circumstances would act.” The only person who shares all of your circumstances is you. The completely personalized “rule” simply directs you to act as you would act.
The canonical case State v. Williams (Wash. Ct. App. 1971) illustrates the point in the context of retrospective liability. The Williamses’ liability for the death of their son depended on whether a reasonable person in their situation would have adverted to the danger and sought medical attention. If “their situation” includes only the physical facts about the baby’s infection—the fever, nausea, discoloration of his cheek, and smell of gangrene—then certainly, an otherwise-average member of the polity would have sought medical help. If “their situation” includes the Williamses’ educational and cognitive limits and rational terror of losing their child to the state, it is not so clear. And if “their situation” includes every fact about their external and internal circumstances, then a person in their situation would have acted exactly as they did act.
To serve as a standard, the “reasonable person” must be abstracted from any specific person to some extent. Yet it cannot be abstracted entirely! It would be impossible to hold you to the standard of a reasonable person without any circumstances, an ageless, sexless, disembodied being in empty space. Such a person would have no basis for acting and no circumstances to act upon. The reasonable person standard must always be personalized to some degree. She cannot share your circumstances exactly, nor can she exist in a void. As Kim Ferzan and Larry Alexander put it, she is a construct who sits somewhere between these two poles.
In light of all this, the notion of a personalized “reasonable you” is nonsensical. Either the “reasonable you” is you, in which case she would act precisely as you did act; or she is not you, in which case the standard is still a generalized one, and we still need to know on what dimension it requires uniformity. If the feature that makes the “reasonable you” different from the actual you is the fact that she is reasonable, we still need to know what reasonableness entails.
The point here is that it is not especially meaningful to ask whether a rule should be personalized. All rules should—indeed must—be personalized to some degree. The more important and difficult question is on what feature the rule should generalize.
B. Dimensions of Generality and Particularity
All legal rules are motivated by some background social objective which it is their purpose to promote. In terms of content, a rule can generalize in any of the following ways:
- Prescribe (uniform) conduct to promote the social goal.
Examples include a 55 mph speed limit or prohibition on theft. A rule that prescribes specific conduct has particularized compliance costs given varying individual baselines. But it is possible to determine whether a person has complied with the rule without any knowledge of the baseline from which she began.
- Prescribe a (uniform) social outcome directly.
Social-outcome rules require everyone to act such that the outcome is achieved, for instance by setting a risk or efficiency threshold. Examples include a rule requiring pretrial detention at a given degree of risk or a rule requiring “safe” driving (i.e., that drivers keep the risk of accident below a certain threshold). In administrative law, social-outcome rules are called “performance-based regulation.” A social-outcome rule has particularized compliance costs given varying individual baselines. It is not categorically possible to determine whether a person has complied without reference to her baseline. This is because the actions she must take to meet the threshold—say, for “safe” driving—may depend on her baseline. A healthy thirty-year-old may be able to drive safely on a highway at night, whereas an eighty-year-old with failing sight and poor reflexes may not.
- Prescribe a (uniform) proportionality function.
Examples include day fines, a proportional income tax, and a retributive requirement that criminal sentences be proportionate to desert. A rule that generalizes on proportionality will have uniform compliance costs but will produce particularized results. Here, it is categorically impossible to determine compliance without reference to an individual’s baseline, since the proportionality function embeds the baseline.
Each of these approaches to generalization has a different normative valence. They are not mutually exclusive. The reasonable person standard, for instance, might require some uniformity of conduct but is also meant to generalize on a social outcome (perhaps some combination of risk, efficiency, and fairness?) and might even embed a bit of a proportionality function. Even a speed limit, the quintessential conduct rule, is a social-outcome rule in the sense that it exempts emergency vehicles and is discretionarily enforced so as to maintain a certain safety threshold.
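To make the contrast concrete, the following minimal sketch restates the three modes of generalization as functions, in Python. The 55 mph limit, the risk threshold that yields person-by-person speed limits, and the day fine are the book’s examples; the specific risk figures and incomes are invented for illustration and are not drawn from any real regulatory algorithm.

```python
# Illustrative sketch only: the risk model and all numbers are hypothetical.

def conduct_rule_speed_limit(speed_mph: float) -> bool:
    """Conduct rule: the same prescribed conduct for everyone (drive at or below 55 mph).
    Compliance can be checked without knowing anything about the driver's baseline."""
    return speed_mph <= 55.0

def social_outcome_rule_speed_limit(driver_risk_per_mph: float,
                                    risk_threshold: float = 0.01) -> float:
    """Social-outcome rule: everyone must keep accident risk below a threshold.
    What compliance requires depends on the driver's baseline (here, a made-up
    per-mph risk factor), so the permitted speed varies by person."""
    return risk_threshold / driver_risk_per_mph

def proportionality_rule_day_fine(daily_income: float) -> float:
    """Proportionality rule: a uniform function of the individual's baseline
    (one day's income), so the requirement cannot even be stated without it."""
    return 1.0 * daily_income

# Hypothetical drivers and incomes.
print(social_outcome_rule_speed_limit(driver_risk_per_mph=0.00012))  # about 83 mph permitted
print(social_outcome_rule_speed_limit(driver_risk_per_mph=0.00025))  # about 40 mph permitted
print(proportionality_rule_day_fine(daily_income=2000.0))  # the law-firm partner's fine
print(proportionality_rule_day_fine(daily_income=90.0))    # the fast-food cook's fine
```

The sketch simply restates the taxonomy: the first function ignores the individual’s baseline entirely; the second needs the baseline to determine what compliance requires; the third embeds the baseline in the sanction itself.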
In addition to these different possibilities for substantive generalization, there are also at least two axes for the distribution of generality and particularity in the communication of a rule:
- Any rule can be stated in terms of either its generalized or its particularized requirement.
For instance: a speed limit can be stated as a generalized directive (“do not exceed 55 mph”) or as personalized directives to produce the same result (“reduce your speed by 20 mph”). The reasonable person standard can be stated as a generalized directive (“act reasonably”) or as personalized directives specifying the necessary actions for each individual (“salt your icy steps within the next hour”). A day fine can be expressed as a generalized directive (“pay a day’s worth of income”) or as a personalized directive to pay the relevant amount.
- Legal institutions can specify a rule’s individualized requirements ex ante, ex post, or partly ex ante and partly ex post.
This spectrum is the subject of the “rules-versus-standards” literature. In the lexicon of that literature, a “rule” is a legal directive that is highly specified ex ante, whereas a “standard” is only loosely specified. To the extent that the law does not fully specify a requirement ex ante, the individual must specify the law for herself and find out later if she was right.
C. Personalized Law as Two Distinct Shifts
It is not clear that personalized law maps onto any one of these categories. Sometimes it seems to refer to a substantive shift from conduct rules to proportionality rules—for instance, from fixed traffic fines to income-calibrated fines. Elsewhere, it seems to refer to a shift from conduct rules to social-outcome rules. Personalized medical disclosures, consumer protections, and traffic commands seem to fit this category. In the realm of negligence and criminal law, Ben-Shahar and Porat do not seem to advocate any substantive shift at all, just greater ex ante specification of the individualized requirements of reasonableness and deterrence.
Perhaps “personalized law” is really two ideas: (1) that we should shift from rules that prescribe conduct toward rules that directly prescribe the social outcome we want to achieve, and (2) that we should deploy big-data technology to determine and communicate the individualized requirements of such rules precisely and ex ante. The latter makes the former more practical, but these are conceptually distinct ideas.
There is a final possible meaning of “personalized law,” which is the notion of shifting away from rules-based governance entirely. We could give our machine-learning programs a concrete end goal and unlimited data, then let them issue the commands most conducive to the goal, continuously learning from the outcomes and updating their commands accordingly. This seems to be the world of “micro-directives” that Anthony Casey and Anthony Niblett envision. In this world we restrict the scope of rules-based governance in favor of AI executive command. We make AI the equivalent of Plato’s ship captain, standing at the helm of the vessel—monitoring conditions, issuing orders, sovereign.
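The shape of that loop can be sketched schematically. The sketch below is only a toy rendering of the idea described above—observe, command, measure the outcome against the goal, update—not Casey and Niblett’s proposal or any real system; the candidate commands, the simulated outcomes, and the bandit-style update rule are all illustrative assumptions.

```python
import random

# Toy command-issuing loop in the spirit of "micro-directives."
# All commands, outcomes, and the update rule are invented for illustration.

CANDIDATE_DIRECTIVES = ["slow to 40 mph", "slow to 55 mph", "maintain speed"]
value_estimates = {d: 0.0 for d in CANDIDATE_DIRECTIVES}  # learned value of each command
counts = {d: 0 for d in CANDIDATE_DIRECTIVES}

def simulated_outcome(directive: str) -> float:
    """Placeholder for a measured contribution to the goal; purely hypothetical."""
    base = {"slow to 40 mph": 0.8, "slow to 55 mph": 0.6, "maintain speed": 0.3}
    return base[directive] + random.uniform(-0.1, 0.1)

def issue_directive(epsilon: float = 0.1) -> str:
    """Issue the command currently believed most conducive to the goal,
    exploring alternatives occasionally."""
    if random.random() < epsilon:
        return random.choice(CANDIDATE_DIRECTIVES)
    return max(value_estimates, key=value_estimates.get)

def update(directive: str, outcome: float) -> None:
    """Learn from the observed outcome and revise the command's estimated value."""
    counts[directive] += 1
    value_estimates[directive] += (outcome - value_estimates[directive]) / counts[directive]

# The governance loop: command, observe results, learn, repeat.
for _ in range(1000):
    command = issue_directive()
    update(command, simulated_outcome(command))

print(max(value_estimates, key=value_estimates.get))  # the command the system now favors
```

The point of the sketch is structural: in this mode of governance there is no general rule for the individual to interpret, only a stream of commands continuously optimized against a goal.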
This Essay cannot evaluate each of these potential shifts in normative terms. There are already vast literatures that do and surely oceans more to be written. The limited point here is that each of these potential shifts has different normative implications. As debates about personalized law evolve, more precision about what kind of precision is at issue would be helpful.
III. Two Potential Costs
Because it is hard for a law professor to omit normative evaluation entirely, though, this Part raises two concerns with a systematic shift toward highly specified social-outcome rules. They are not necessarily the greatest concerns. (The most daunting obstacle to this kind of shift is probably the difficulty of mathematically specifying social-outcome objectives, as Cary Coglianese explains in this Symposium and as others have noted before.) But they are significant concerns that Personalized Law does not address at length.
The first is the concern for racial equity. Ben-Shahar and Porat recognize how important this is. But they do not quite acknowledge the brute fact that social-outcome rules personalized with big data will replicate underlying inequality in the conditions of social reality. A rule that prescribes a social outcome—like a particular risk or efficiency threshold—will have differential compliance costs and differential effects for each of us because we all begin from different circumstances. And those differential effects will track structural inequality. This is not uniquely true of algorithmically tailored laws. It is true of all laws. But the algorithmic personalization of law should help us see it.
The authors recognize the risk of disparate racial impact, but they suggest that statistical methods can eliminate it by removing the variable labeled “race” from machine-learning algorithms, as well as any predictive power of this variable that “proxy” variables pick up: “Without the proxy bias, the prediction will also be free of any disparate impact.”
This statement is incorrect. Even in Crystal Yang and Will Dobbie’s exposition of the statistical approach in question, the race-neutral algorithm still produces racially disparate results. The fact is that if the event we want to predict (the target variable) occurs with disparate frequency across racial lines, the algorithm’s predictions will also be racially disparate. One can configure the algorithm to meet a particular metric of equality, but there will still be disparate impact on some dimension. You just can’t predict outcomes in a racially unequal social reality in “race-neutral” terms.
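A stylized numerical sketch, with invented figures, can make the arithmetic concrete. Suppose the outcome to be predicted (say, failure to appear) is driven by a race-blind feature—prior court contacts—whose distribution differs across two groups because of unequal underlying conditions. Even though “race” never enters the model, the detention recommendations the model produces will differ by group. This is an illustration of the base-rate point only; it is not the Yang and Dobbie procedure, and every number is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical population: structural inequality appears as different distributions
# of a race-blind feature (prior court contacts) across two groups.
group = rng.integers(0, 2, size=N)                         # two groups, labeled 0 and 1
priors = rng.poisson(lam=np.where(group == 1, 2.0, 1.0))   # group 1 averages more priors

# A race-blind risk score computed from priors alone; by construction it is also
# the true probability of the outcome, so the predictor is perfectly calibrated.
risk_from_priors = 1 / (1 + np.exp(-(priors - 2.0)))
failed = rng.random(N) < risk_from_priors   # realized outcomes (e.g., failure to appear)
detain = risk_from_priors > 0.5             # detention recommended above a fixed threshold

for g in (0, 1):
    mask = group == g
    print(f"group {g}: outcome base rate {failed[mask].mean():.2f}, "
          f"detention rate {detain[mask].mean():.2f}")
```

One could recalibrate such a model to equalize detention rates across groups, but so long as the outcome itself is unequally distributed, some other measure—error rates, for instance—will then diverge. That is the sense in which disparate impact cannot be eliminated by statistical adjustment alone.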
The authors are right that big-data algorithms could be a potent tool to diagnose and redress inequality. For that to happen, though, the goal of reducing structural inequality must be programmed into the law and take precedence over other goals. We will need an explicit, normative theory of what people are substantively owed on the basis of race given past discrimination and existing structural inequality. The same is true of disparate impact by sex or gender and any other group identity that concerns us.
A second relevant concern, which Ben-Shahar and Porat do not address, is the value of shared experience in interpreting and following rules. They address the value of “social coordination” in a consequentialist sense. And they acknowledge that certain rights that are fundamental to the structure of our society, like free speech, depend on a high degree of uniformity to serve their democratic function. But I mean to raise something else.
It is arguable that one important function of law is to enable and generate the kind of collective experience that characterizes, and perhaps is necessary to, a political community. All of us in the United States, for instance, are subject to a constitution designed to “form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity.” On that foundation and toward those ends we have built an elaborate structure of legal rules and institutions. We are all subject to the same rules. That is what makes a political community—just as, in other contexts, conscious participation in a shared set of rules constitutes a game or a religious community. In philosophical language, the rules are constitutive as well as regulative. The important fact, moreover, is not just that we are subject to the same rules. It is that we see ourselves as subject to the same rules. To be a U.S. citizen is (at least in theory) to experience participation in a shared rules-based endeavor.
It seems possible that an increased emphasis on particularization in the substance and communication of laws could erode our sense of collective experience. This is a familiar dilemma in sentencing law. Brock Turner’s six-month sentence sparked outrage because it was so radically out of step with the sentences that most people convicted of similar crimes receive, not (primarily) because it was substantively inadequate to achieve the goals of punishment in his case.
There is a value to blunt uniform-conduct rules. As Frederick Schauer has written, by generalizing, legal rules intentionally flatten differences among us. They condition legal outcomes on certain features of our thought, actions, and circumstances; they exclude consideration of other features. By so doing they enable an experience of equality. As Schauer writes, “equality is at its core about treating unlike cases alike.” Generality is “an instrument of community”: “To abstract away from our differences is to bring us closer together and to generate a focus on our similarities.” This is not to say that generalized rules always have this effect or that when they do it is always salutary. But it can be.
Perhaps algorithmic personalization of the law will not eliminate the experience of equality that rules enable. After all, every legal rule will still generalize on some dimension. Perhaps the shift toward personalization will force us to deliberate more explicitly about what the goals of our laws are, as Cary Coglianese and David Lehr envision. (In my view, this has been one effect of the trend toward actuarial risk assessment in criminal justice.) If those conversations are inclusive, they could deepen the experience of collective legal subjecthood. Still, “different rules for different people” might undermine the sense that we, as members of a political community, submit ourselves to shared rules in pursuit of shared aims.
In addition to the equality-generating capacity of uniform-conduct rules, loosely specified rules, like reasonableness standards, play a further role in enabling the experience of shared agency. To determine what a “reasonable person” in my situation would do, I must put myself mentally in the place of another person—a member of my political community but not a specific one; rather, an imagined composite of the whole populace—and imagine how such a person would weigh the various interests and reasons that bear upon me. To do so, I must imaginatively synthesize the experience and moral judgment of all the members of the polity. I am aware that everyone else engages in the same process. And I am aware that they are aware that I engage in this process as well. We determine what is reasonable by thinking, collectively and recursively, about our shared experience and values.
I wager, like Seana Shiffrin, that a certain prevalence of loosely specified rules is important to the health of a democratic legal system. They require active participation in the application of the law. They facilitate collective and evolving moral judgment. Reasonableness standards in particular require individuals and courts to assess how a person would behave in a given situation if she had proper respect for the values of the community. What are those values? And what does it mean to have proper respect for them? These are the questions latent in the reams of case law dedicated to questions of reasonableness.
Personalized law, by contrast, does not demand active engagement in the application of the law. A legal institution produces a microdirective; the individual need only comply. There is no scope for the recursive moral imagination that judgments of reasonableness require. Nor does personalized law promote the experience of equality under shared rules. “People are never put into group ‘bins’ under personalized law,” as the authors note. “[E]ach person is a bin of one.”
Conclusion
Personalized Law demonstrates across a range of legal contexts that algorithmic specification of particularized legal requirements is both possible and potentially very useful. But the dichotomous uniform-versus-personalized framing is misleading. All rules, by definition, have elements of both generality and particularity. The deep questions that big-data technologies raise for the law are old questions—about what the goals of a rule ought to be, on what dimension it should generalize, and how fully its particularized requirements should be specified ex ante. Today’s technology offers a powerful tool for specification when we decide to pursue it. But the technology cannot tell us when we should.