How to Evaluate Personalized Law
Personalized law is a new model of rulemaking where each person is subject to different legal rules and bound by their own personally tailored law. A legal norm calibrated for the reasonable person is replaced by a multitude of personalized rules, each person with their own “reasonable-you” standard. Consumer protections are dialed up for particularly vulnerable people. Age restrictions and licenses vary according to individual competence. And borrowers receive personalized loan disclosures tailored to suit their needs and mental capacity.
Our book, Personalized Law: Different Rules for Different People, lays out a vision of such a personalized law regime, one fueled by big data and implemented with the aid of algorithms. The Book argues that personalized law can do a lot of good but also faces many obstacles. By casting aside one of the most fundamental virtues of law—the uniformity of rules—our scheme of interpersonal differentiation poses formidable dilemmas. Things could go wrong in many ways.
Thirteen Essays in this Personalized Law Symposium indeed focus on what could go wrong. In this brief Introduction to the Symposium, we examine several key features of personalized law targeted by our critics, both to clarify what personalized law is and to ease some of the objections. Evaluating personalized law is an opportunity to think about fundamental aspects of law and order, and we offer some placeholders for this inquiry.
Is Personalized Law New?
There are two ways to criticize a new idea. One says it is not new at all, merely a relabel. The other says it is so outrageously new and different that it must be an aberration, a delusion, and it will not work. Interestingly, both critiques are levied in this Symposium against personalized law. Let us start with the not-new-at-all view.
The idea that legal commands should be tailored to the characteristics of individual actors is indeed not entirely new. There are some pockets of such laws, for example, when tort compensation is based on victims’ predicted loss of income or when criminal sanctions attempt to reflect individual risk predictions. Other legal standards are sometimes applied with an eye to personal factors. Catalina Goanta shows that the duty of good faith has such origins. But the overwhelming landscape of legal tailoring is not personalized. When the speed limit varies with changing weather conditions, the law “particularizes” based on features of the environment, not each person’s characteristics.
Consider rules that particularize but do not personalize. Standards of care, not only on the road, depend on the risk of the activity and the cost of precautions (environment), not on the driver’s impulsiveness or skill (person). Personalized law would transform tort law to the core by varying standards of care person to person. Or, contractual default rules depend on the product, market norms, and prices (environment) but not on the preferences, cognition, wealth, or education of each buyer (person). Again, personalized law would revolutionize transactional law by personalizing every conceivable default rule, from contractual warranties to intestate allocation and retirement savings. Or, mandated disclosures depend on the risks and complexities of a product (environment) but not on what each recipient of the disclosure needs to know and is able to process (person). Personalized law would reinvent disclosures, warnings, and food labels with different bits of information electronically delivered to people at the point of decision.
This technique has never been widely deployed, nor has it received much attention. Why? Two reasons come to mind. First, as Justice Oliver Wendell Holmes once remarked, it was not possible to measure “a man’s powers and limitations” to tailor person-specific duties. Surely, this is no longer the case. Second, the myth of uniform law—that justice is blind to the identity of individual claimants—is widely held, seldom challenged, and scarcely even commented upon. But uniformity of rules leaves ample discretion in their enforcement. Our Book shows that systematic, data-guided personalization has the potential to outperform discretionary (and often discriminatory) enforcement practices.
The Book examines a model of personalized law that looks very different from traditional legal tailoring. It imagines a data-intense mechanism that, like personalized medicine, identifies personal attributes relevant to the design of legal treatments. While the enterprise of personalized law can be done intuitively, as when judges focus on a few salient characteristics to assess a criminal defendant’s risk of recidivism, it can also be done far more effectively when based on big data and algorithmic analysis, in the manner that, for example, insurance companies predict policyholders’ accident risks. If done well, it need not discriminate along lines of group membership. On the contrary, it can reduce the salience of suspect classifications and correct for past injustice.
Why Personalized Law?
Many claims in the Symposium fall into the second type of criticism: personalized law is unworkable, insufficiently justified, and even harmful and dangerous. Some of our critics think that the benefits of personalized law are at most “technocratic”—that it will “gravitate towards welfare maximization”—and they worry that the scheme conflicts with non-welfarist goals of justice, freedom, environmentalism, and humanism.
We indeed argued in the Book that the benefit of personalized law is “precision,” but don’t let this mechanical term disorient you. A scheme designed to respect people’s heterogeneity and treat them as individuals is not merely an engineering or productivity feat. Is it merely technocratic to dial up consumer protections for the neediest of consumers? To release from jail and prison people who—based on debiased data rather than judges’ discretion—are truly less dangerous? To require the most reckless people who inflict greater harms on innocent victims and on society to take more care?
Our Book may have been too brief in defending the benefits of personalized law as we sought to focus attention on the difficulties and objections. The benefits seemed to us obvious and uncontroversial. Personalized rules have the potential to accomplish any goal of any law more effectively because they can account for more relevant circumstances and differences. It is the specific commands, not the legal objectives, that are personalized. If, say, the goal of tort law is deterrence, personalized law improves deterrence by taking personal characteristics into account. If, instead, tort law pursues corrective justice, the compensatory obligations are tailored more accurately. Personalized law is beneficial for the same reasons that good parenting is often tailored to each child specifically. Uniform rules may be good on average but misfire for people with diverse characteristics and experiences.
It is possible to have a dystopian personalized law regime. China’s social credit scheme, which helps the Chinese regime condition various basic legal rights on people’s “moral” (to wit: obedience) rankings, is a chilling caution. But let’s not forget that it is also possible to have dystopia under uniform laws. Undoubtedly, data-powered personalized law raises the stakes. It can make laws that pursue wicked goals worse just as much as it can make good laws better.
We are attentive to the concern that personalized law might impede values like social cohesion. Although personalized law could in principle pursue the same goals as uniform law and even do so with greater horsepower, it can go terribly wrong. The first order of business for any algorithmic personalization regime is to make its goals transparent, interpretable, and explainable; to have its methods regularly audited; and to identify and correct unintended effects. We are comforted by the knowledge that algorithmic personalized law cannot be put into action without clearly specifying the objective of each rule. Imagine: no longer may lawmakers and judges fudge a law’s objectives or use fuzzy balancing rhetoric to reach their desired results. To build a workable system of optimal personalized commands, lawmakers would have to be very precise about the objectives and their relative weight. The power of nontransparent concentrated interests to shape the law in their favor, or of regulators to abuse legal powers, would greatly subside.
Limiting Principles for Personalized Law
Many of the dangers of personalized law that the Essays in this Symposium identify focus on limits and boundaries. There is a core of personalized law that, we think, is entirely uncontroversial. All mandated disclosures, most default rules, many consumer protections, and various standards of care—namely, the bulk of everyday law—are ripe for personalization. The problem in putting it into action is more technical than principled.
At the other end, some laws should clearly not be personalized. Several of the Essays identify such important boundaries. Netta Barak-Corren, Horst Eidenmüller, and Sandra Mayson each argue that constitutional rights should not be personalized because they are the “glue that binds us together”—we are “one people” because “we enjoy the same constitutional rights and privileges,” and the uniformity of such basic rights is critical for “collective agency.” More concretely, Javier Kordi recognizes that a personalized voting age reflecting competency could increase enfranchisement, but he argues that such visible differentiation might lead to stigmatizing and “acute” distributive effects.
We are in (almost) full agreement. In a short excerpt in the Book, we asked rhetorically whether the Bill of Rights should be personalized and explained why we think it should not. The notion of a personalized constitution, we explained, was a thought experiment to highlight the limits of personalized law. Constitutional rights “have social value that greatly exceeds the private consumption value to the individual rightsholders.” Because giving everyone civil and human rights is critical for personal flourishing and social interactions, uniformity is a prerequisite for democratic social activity.
Barak-Corren suggests that personalized law should also be limited in application to other autonomy-centered rights. How do you personalize law that protects reputations, relationships, family life, privacy, and mental health? Could algorithms be trained to infer or predict the intimate preferences that these rights protect? We are mindful of this limitation, but also recall that uniform laws are not exactly triumphing in these domains. Contract law fails to protect against emotional harms, data privacy law lets firms steer people towards massive opt-outs, and defamation law has a tough damage-valuation problem. Moreover, there is a troubling paternalistic dilemma in insisting on minimum uniform “autonomy” rights when people seem to be indifferent to some of these protections. If we take seriously people’s revealed autonomy preferences, tailoring individualized protections could support the autonomy aspects that each citizen values.
We therefore argue for personalized law even when it implicates individual autonomy. What better example than Adam Davidson’s contribution to this Symposium—how society treats criminal defendants. If the “dangerous few”—those most likely to reoffend—must be incarcerated or monitored, why not use more accurate techniques—instead of stereotypes—to identify this group?
Similarly, does the right of privacy have to be uniform? Protecting privacy has its costs, not only for society and third parties but also for the right bearers themselves, who often benefit from narrowing their rights to receive less costly and better tailored services. So while we agree that constitutional rights should generally be uniform, the privacy right example illustrates the potential of personalizing statutory rights to better match people’s personal autonomy values. And if finely tuned personalized protections of autonomy are unrealistic or unworkable, let’s consider the possibility of crude personalization.
“Crude” Personalization
Crude personalization dials down the intensity of algorithmic personalization. It relies on a small set of categories to generate several discrete buckets of treatment—e.g., low, medium, or high steps—with people slotted to the most suitable one. Hans Christoph Grigoleit views crude personalization as far more acceptable under our legal system. We agree that much of the benefit of personalized law could be achieved this way.
Consider a personalized alcohol purchase age. Instead of a uniform age of twenty-one, the law could apply a personalized cut-off reflecting the idiosyncratic risk of alcohol abuse. But if doing this accurately requires autonomy-sensitive factors, like mental health records, crude personalization could sidestep some of the difficulty and instead use, for example, a three-step scheme with the least risky people granted the license at age eighteen, the riskier at twenty, and the riskiest at twenty-two. Such approximations of riskiness could be based on less intrusive data, primarily on driving records and on evidence of risk-seeking behavior.
Crude personalization might help address Jared Mayer’s thoughtful critique, which asks whether people could realistically know their personalized standards of care and points to people’s notoriously inaccurate beliefs about their characteristics. People would then only need to know—or receive notice of—their risk rating. A regime that puts people in the “high care” step of negligence law based on past infractions, medical conditions, and low driving skill may find clearer paths to overcome Mayer’s concerns. Insurers and carmakers already use driving data to provide drivers daily updated scores of their risky habits, instructing them how to reduce risks. The law could do the same.
But crude personalization, even in tort law, is not a panacea. Catherine Sharkey has written in the past against crude personalization of tort damages, which “fuels a racialized and gendered deterrence gap,” and she points out in her Symposium contribution the tension between our desire to personalize standards of care and our insistence on keeping damages for physical injuries uniform. Indeed, we think that a limiting principle of treating victims equally applies to damages for bodily harm, but it does not dictate uniform standards of care. Courts may use, among other factors, the parties’ incomes in setting the standard of care (such that wealthier parties would be required to purchase more precautions), but ignore it when awarding damages. So while risks to victims should be considered in setting the standards of care, personalized potential lost income to the specific victims ought to be ignored. Indeed, one of the advantages of personalized law is the potential to surgically remove troubling factors from the tailored rule.
The Realism of Algorithmic Personalized Law
We are not computer scientists, and it is possible that our Book is a pipe dream about big data and how humans would be able to guide the algorithms. Could machines truly be trained to identify the relevant attributes for advancing the goals of a law? Could these goals be quantified to optimize the commands? Greg Klass does not think so. Law’s main tool, he says, has always been words and rhetorical persuasion. Statistics and computer code fit poorly into the legal process. The robotic scheme would not be transparent enough to be corrected when things go wrong.
Digging deeper into the trenches, Peter Salib explains why the algorithms necessary for personalized law will be very hard to explain, why their discrimination will be hard to spot and to debias, and why it will be hard to correct the historically biased data used to train the algorithms. Lauren Scholz is perhaps more appreciative of a human-machine collaboration—what she calls “cyborg personalized law”—but she suggests a gradual process of allocating personalized law tasks to AI.
We wrote our Book as a road map to full-fledged personalized law. But we are keenly aware that, although some promising steps are already being designed, at present AI technology is not yet up to the task, nor is society prepared with the necessary safeguards. Nevertheless, we want to push back against quixotic views of human-guided law as more humane. Johann Wolfgang von Goethe hailed humans’ “godlike” sensibility, boasting that “man alone can do the impossible: he can distinguish, he chooses and judges; [ . . . ] he alone may reward the good and punish the wicked.” Present day judges concur, of course, with that flattery, making their own grandiose declarations about the judiciary as the compassionate protector of litigants’ dignity and of the citizens’ intrinsic right “to look a judge in the eye in a public courtroom while making [their] plea.”
Yes, perhaps “man alone” can judge, and perhaps only judges can “look in the eye” and apply their intuition. But do we really need to survey the immense literature on human biases and cognitive limits and point to the many illustrations of how deeply these biases afflict judges? Is the human-guided legislative, regulatory, and judicial record so successful, transparent, error-free, correctable, explainable, compassionate, and non-discriminatory that we must fend off robotic methods that could replace some human interventions and eliminate their biases?