The place is here, the time is now, and the journey into the shadows that we’re about to watch could be our journey.

–Rod Serling, The Twilight Zone

This is a book of science fiction. Or maybe not.

–Omri Ben-Shahar & Ariel Porat, Personalized Law: Different Rules for Different People

Personalized Law: Different Rules for Different People describes a type of law that does not today exist. This is why the authors describe the book as science fiction. Science fiction, however, is not one genre but several. This Essay uses a few of those genres as ways to think critically about Professors Omri Ben-Shahar and Ariel Porat’s proposals.

Ben-Shahar and Porat intend their book as hard science fiction. Hard science fiction imagines worlds that do not violate the rules of physics or the other sciences but are, in one way or another, more technologically advanced than ours. In his novel The Martian, for example, Andy Weir describes what a human mission to Mars in the year 2036 might look like, given what we know today about the technologies that would make it possible. Personalized Law explores how lawmakers in the near future might use technologies we already have—big data and artificial intelligence—to tailor legal rules to the personal histories and characteristics of individuals, much as advertisers today use those tools to tailor the ads we see on the internet and insurers use them to tailor policies to individual insureds.

There are other genres of science fiction as well. Much science fiction—hard or otherwise—imagines a world either much better or much worse than the one we live in. Dystopian science fiction includes stories of mad scientists and technology run amok (Goethe’s Der Zauberlehrling, Mary Shelley’s Frankenstein, The Matrix), stories of attempted utopias gone wrong (George Orwell’s Animal Farm, Aldous Huxley’s Brave New World, Lois Lowry’s The Giver), day-after-tomorrow scenarios that extrapolate from existing social trends (Anthony Burgess’s A Clockwork Orange, Neal Stephenson’s Snow Crash), and combinations of the above (Philip K. Dick’s short story, then Steven Spielberg’s movie, “Minority Report”). Although Ben-Shahar and Porat are not writing dystopian science fiction, one can ask whether the world they imagine doesn’t have dystopian elements.

Works of utopian or dystopian science fiction can also belong to the broader category of social fiction. In these works, the imagined world serves not only as a backdrop for adventures, love stories, or other familiar narrative devices (think of the Star Wars franchise) but also as a way to show us something about our world. In The Left Hand of Darkness, for example, Ursula Le Guin imagines a world in which people alternate between genders in a way that casts light on how assumptions about gender shape our social world. Le Guin describes the book as a thought experiment used “not to predict the future . . . but to describe reality, the present world.” Personalized Law might also be read as a form of social fiction, revealing not only an alternative form of law but also something about the law we have today.

So just what sort of science fiction is Personalized Law? Part I of this Essay engages in something like fan fiction. It accepts the premise that the book is hard science fiction, maps Ben-Shahar and Porat’s imagined legal reforms, and then identifies salient differences among them. Part II explores reading Personalized Law as dystopian science fiction. Part III considers the book as social science fiction. Part I focuses on the idea of personalization; Parts II and III on the role of big data and artificial intelligence in the book’s proposals.

I. Mapping the Territory

Margaret Atwood observes that works of speculative fiction often come with a map of the world in which the action takes place. Think of the maps at the front of J.R.R. Tolkien’s The Hobbit and The Lord of the Rings, which readers use to trace the heroes’ journeys. This Part suggests a few ways of mapping the territory covered in Personalized Law.

Ben-Shahar and Porat state at the beginning of Part I that “[l]aws stipulate commands” and that they will “begin the book with a blueprint for personalized commands.” These pronouncements might strike some readers as odd. First, commands are individual directives. A command is issued to a particular person or group of persons on a particular occasion and requires them to perform a particular action. Laws, in distinction, are almost always formulated as rules. A law provides that if any person or persons satisfy some criteria (driving above a specified speed, agreeing to an exchange), there will be certain legal consequences. I return to this difference between laws and commands in Part II. Second, a command takes the form of “you must x.” If any laws are akin to commands, it is laws that impose duties. But the law does much more than impose duties. H.L.A. Hart, for example, emphasized that laws also confer on persons powers to effect legal change when they wish. And there are Wesley Newcomb Hohfeld’s additional categories of privileges and immunities. Powers, privileges, and immunities are not anything like commands. It might be that these other types of law provide different opportunities for and pose different challenges to personalization.

In fact, of the candidates that Ben-Shahar and Porat identify for personalization, at most two, and arguably only one, are duties tailored to characteristics of the duty bearer. The authors’ go-to example is speed limits, which they suggest could be tailored to the habits and abilities of individual drivers. Ironically, this is a law that might soon be superseded by another new technology: driverless cars. Their other example is the duty of care in the law of negligence. It is not obvious, however, that the duty of care qualifies as a duty in a robust sense. The law sanctions negligence only when it causes injury, and the sanction is not a penalty but compensation to the victim. There is a good argument that so-called duties of care in tort law are not duties as such, but factors to use when allocating the costs of accidents or determining which wrongs generate duties to compensate (which is not to say that they do not also serve to provide an incentive to take reasonable care).

The other laws that Ben-Shahar and Porat identify as candidates for personalization are not duties as such. They fall into five categories.

The first comprises what Hohfeld calls “claim rights.” Examples include privacy rights, which limit how others can use information about the rights-holder; consumer rights, such as mandatory terms that attach to consumer contracts; and rights to the disclosure of information. Although each of these involves a corresponding duty, Ben-Shahar and Porat propose tailoring such laws not to the duty bearer but to the right holder. This is an important distinction. As they observe, the “[p]ersonalization of duties might raise different challenges than the personalization of rights.” It might also raise very different normative concerns.

Second, Ben-Shahar and Porat suggest personalizing some legal privileges. The privilege of driving, for example, might be conditioned not only on a person’s age, but also on other personal information predictive of their driving habits. Or lawmakers might personalize the privilege of purchasing or possessing alcohol or other controlled substances. Privileges of this type are neither legal duties nor claim rights but belong to a category all their own.

Third, Ben-Shahar and Porat suggest personalizing laws that structure the exercise of legal powers: defaults and altering rules. Today, defaults are commonly tailored using difficult-to-predict standards, such as “reasonable in the circumstances.” Ben-Shahar and Porat would instead use big data and artificial intelligence to identify the optimal defaults for an individual power holder. The rules for intestate distribution, for example, might be personalized based on predictions about how the decedent would have chosen to distribute their assets had they written a will. With respect to defaults, “optimal” might be defined as a legal state of affairs that the person would likely choose; the legal state of affairs the person would most benefit from; the legal state of affairs that would, given the person’s traits, most benefit society; or something else. The reason for choosing one or another default will correlate with the social goal of tailoring. Although Ben-Shahar and Porat spend fewer pages on the idea, one can also imagine personalized altering rules, such as different testamentary requirements for different individuals depending on their education, their income and wealth, their magazine subscriptions, and so on. Defaults and altering rules are elements of power-conferring rules. Together they determine how a legal power is exercised, thereby establishing the mechanisms of legal choice.
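To make the structure concrete, here is a minimal sketch of what a personalized intestacy default might look like. Every factor, share, and name in it is a hypothetical illustration rather than anything Ben-Shahar and Porat specify, and a real system would predict the decedent’s likely choices from data rather than from a handwritten heuristic.

```python
# Hypothetical sketch: a personalized intestacy default that maps a
# decedent's traits to the estate division they would likely have chosen.

def intestacy_default(decedent: dict) -> dict[str, float]:
    """Return predicted estate shares (fractions summing to 1.0)."""
    spouse = decedent.get("surviving_spouse", False)
    children = decedent.get("children", 0)
    if spouse and children == 0:
        return {"spouse": 1.0}
    if spouse and children > 0:
        # toy stand-in for a learned prediction of the decedent's preference
        return {"spouse": 0.5, "children": 0.5}
    if children > 0:
        return {"children": 1.0}
    return {"next_of_kin": 1.0}

# Example: a decedent survived by a spouse and two children.
print(intestacy_default({"surviving_spouse": True, "children": 2}))
# {'spouse': 0.5, 'children': 0.5}
```

A personalized altering rule could be sketched the same way, for example by varying the formalities required to opt out of the predicted default.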

Legal sanctions comprise a fourth category. These proposals for personalization are the closest to existing practices. There is a long history of tailoring both pretrial-detention decisions and postconviction sentences to the individual characteristics of a criminal defendant. The goal of such personalization might correspond to any of the traditional purposes of criminal punishment. One can imagine big data being used to predict how likely a defendant is to reoffend and, therefore, the size of the societal gains from incapacitation; to predict a defendant’s susceptibility to one or another type of rehabilitation; or even to assess the wrongfulness of an individual defendant’s actions—and therefore the appropriateness of retribution.

Finally, Ben-Shahar and Porat suggest that policing might be personalized to individual traits, such as the likelihood of offending. Here more than elsewhere, “law” is understood in a capacious sense. New approaches to policing address not the rules that govern legal subjects but the mechanism we use to enforce those rules.

The point of mapping these different regions Ben-Shahar and Porat traverse is that we might have very different intuitions about the appropriateness of personalization in one or another area. And from a theoretical perspective, different principles might be at stake in each. A detailed map of the normative terrain would be the project for a longer article. But I will note a few salient differences among the categories.

First, as Ben-Shahar and Porat suggest, in private law it seems relevant whether a legal duty is personalized with respect to the duty bearer or with respect to a correlated right holder. Many existing legal duties require the bearer to account for the individual characteristics of those who have a right to their performance. When deciding how to manage and disburse funds, for example, the trustee of financial assets for a minor should take into account the minor’s situation in life, individual needs, and personal well-being. A court-appointed guardian has a duty to take into account their charge’s individual best interests and abilities. The idea that a sophisticated duty bearer, such as a corporation offering products to the public, might be required to use big data to identify the interests, preferences, or abilities of their customers is new wine in a familiar legal bottle. That does not mean the idea is not important. Big data might permit the expansion of both parentalistic and choice-promoting rules to new areas of law, such as privacy, disclosure, and consumer protection. But the structure is a familiar one.

More radical is the idea that a person’s nonvoluntary legal duties might depend on their own personal history and individual characteristics. This is why it is interesting that Ben-Shahar and Porat, despite their identification of law with commands, name so few legal duties as candidates for personalization to the duty bearer. Perhaps the stakes are higher with respect to personalization of this type.

We can go a bit further down this path by thinking about a second salient distinction—that between personalizing first-order duties, such as speed limits, and personalizing second-order duties or sanctions, such as a duty to compensate or criminal punishment. Unlike first-order duties, second-order remedial obligations and sanctions are always conditioned on a person’s prior acts—the commission of a legal wrong. This fact might make personalization more palatable. To commit a legal wrong is to forfeit the right to be treated like everyone else.

Consider another piece of science fiction: Dick’s short story “Minority Report.” Dick imagines a criminal system, “Precrime,” that incarcerates persons based not on murders they have committed but on highly accurate predictions of murders they are about to commit. Proponents of Precrime emphasize its advantage over postcrime punishment: Precrime provides the proper punishment while sparing the life of the victim. Why does the Precrime system nonetheless strike us as unjust? Perhaps because it violates our strongly held cultural ideal that no one is fated to act wrongly, that a person is always more than the sum of their past acts and propensities, that every act of choice is to some degree sui generis. Punishment prior to wrongdoing, no matter how accurate the predictive algorithm, violates that principle.

How does all this relate to first-order duties like speed limits? Speed limits are not punishments. Yet the thought that even though a person has committed no wrong, the state might use their past behavior to limit their sphere of freedom pushes up against the same intuitions and principles. One might feel similarly about personalized privileges, like the privilege to drive or purchase alcohol. Personalizing remedial obligations and sanctions, in contrast, is less problematic.

Yet another salient distinction is between using a person’s personal history and characteristics to assign them one or another legal status (duty, power, privilege, or immunity) and personalizing the rules that implement that legal status. Consider private legal powers. Ben-Shahar and Porat nowhere suggest using big data and artificial intelligence to determine who has the power to contract or to execute an effective last will and testament—though the tools they describe might be deployed to do so. They recommend only that we personalize the mechanisms through which such legal powers are exercised—defaults and altering rules. I find this unsurprising. Personalizing how a person can exercise a legal power strikes me as less problematic than personalizing who has the power.1 The same might be true of duties, privileges, and immunities.2

A last set of conceptual distinctions, or another set of mapping coordinates, lies along a different axis. Ben-Shahar and Porat do not provide a systematic account of the types of individual characteristics that personalized law might attend to. The examples they use, however, suggest three broad categories. Defaults, they argue, should generally be tailored to a person’s likely preferences. Personalized disclosure rules, in distinction, are to be tailored to the right holder’s ability to use information presented in one or another format. Criminal sanctions, finally, should be personalized according to the defendant’s propensity to reoffend or to respond to reform. A complete account of the normative landscape of personalization should also consider possible differences between personalizing for preferences, abilities, or propensities, as well as other possible candidates such as needs or interests. It might be, for example, that we are more comfortable personalizing for preferences and abilities than we are for needs and propensities. Whether, why, and when this is so, and whether there are other sorts of characteristics one might use for personalization, would be worth exploring.

II. A Personalized Dystopia?

O wonder!
How many goodly creatures are there here!
How beauteous mankind is! O, brave new world
That has such people in’t.

–William Shakespeare, The Tempest

The above mapping is a sort of fan fiction. It explores the world that Ben-Shahar and Porat construct, heading in directions they do not go while remaining within the bounds of the project. A different question is whether theirs is a world we would want to live in. Ben-Shahar and Porat occasionally and perhaps ironically describe their model as “our brave new world.” Whether this is a nod to Shakespeare or to Aldous Huxley, it suggests asking whether there isn’t a dark underbelly to their utopia.

The authors spend several chapters on Frankenstein scenarios: possible unintended consequences of personalized lawmaking. These include practical worries about personalized coordination rules, the possibility that sophisticated legal subjects will game the system, and the risk that algorithms will pick up the residue of past discriminatory practices, compounding those wrongs.

Writers of science fiction have suggested yet other ways societies can go wrong. One is captured in Juvenal’s question: “Quis custodiet ipsos custodes?” Who will oversee the overseers? Terry Gilliam’s film Brazil, for example, depicts an absurdly impersonal bureaucracy in which a minor paperwork error (caused by a fly landing in a printer) results in a cascade of misfortune for characters who have no avenues of appeal. The overseer might make mistakes. And in his film adaptation of “Minority Report,” Steven Spielberg imagines the founder of Precrime manipulating the system to hide a murder he commits and to implicate another for the crime. The overseer might also misuse their power.

Existing forms of lawmaking and adjudication are not immune from quis custodiet problems. Michael Allan Wolff has documented the persistent influence of typos in U.S. Supreme Court slip opinions. And Robert Caro describes how Robert Moses effectively nullified a legislative provision sunsetting the Triborough Bridge Authority by burying an apparently innocuous sentence deep in an amendment to a separate section of the enabling statute. But there are reasons to think that these risks are more significant in Ben-Shahar and Porat’s brave new world.

In our legal system, lawmaking and adjudication are accomplished using words. Legislators and their staffs debate and discuss; regulators generate reports, solicit comments, and hold hearings; parties write briefs and engage in oral arguments; judges and clerks write memos and ultimately issue opinions. Words would still play a role in Ben-Shahar and Porat’s world. But there the tools of lawmaking and adjudication would also include statistical analysis, computer code, and machine learning. We would need workers with specialized knowledge to implement personalized law: computer scientists who will create the algorithms, data managers who will provide the inputs, and statisticians who will check the results.

There is no reason to think that computer scientists, data managers, and statisticians are less trustworthy than are legislators, lawyers, and judges. But their errors and abuses are less visible. Because our legal system operates primarily in words, when things go wrong, it is relatively easy to understand how and why. Adam Liptak can explain a Supreme Court typo and its effects to readers of the New York Times; Caro can describe Robert Moses’s sleight of hand to a general audience. A flaw or trick in an algorithm using machine learning to synthesize millions of pieces of data, in distinction, might be intelligible only to those with the technical expertise to identify or understand it. Handing over parts of lawmaking to technicians means rendering parts of it obscure. The overseers’ work is hidden from view. The Hollywood script almost writes itself.

But the problems run deeper. Algorithmically driven personalization differs from existing forms of legislation and adjudication in two additional significant respects.

The first returns us to the topic of rules and commands. Existing forms of legislation and adjudication generate and apply rules: general propositions that attach specified legal consequences to generically described actions, events, and other facts about the world. Even when a common-law court decides a case of first impression—one for which there is yet no rule—the norms of adjudication require it to announce a rule to explain its holding.

Algorithmically driven personalized law, in distinction, generates commands or other particularized dictates. A person is told that here and now they may drive no faster than forty-four miles per hour. But no one is told under which circumstances they should drive that speed or when someone else should. An algorithm might tell a probate court how to distribute an individual intestate decedent’s property. But it will not provide a rule that one could use to predict when another decedent will be provided a similar distribution or what this decedent might have done to get a different one. The algorithm issues its dictates in the form of commands, not rules.
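The contrast can be put in quasi-computational terms. In the following sketch, the road types, weights, and feature values are all invented for illustration: a rule is a general, published mapping that anyone can consult to predict legal consequences, while a personalized command is the bare, case-specific output of a model.

```python
# A *rule*: general and inspectable, so anyone can apply it to any case.
def speed_limit_rule(road: str) -> int:
    """If the road is residential, the limit is 25 mph; otherwise 55 mph."""
    return 25 if road == "residential" else 55

# A personalized *command*: an output for one person, here and now. The
# weights stand in for an opaque learned model; they carry no rationale
# that a driver or a court could consult in advance.
def personalized_limit(features: list) -> int:
    weights = [0.8, -1.3, 0.05]
    adjustment = sum(w * f for w, f in zip(weights, features))
    return max(25, min(70, round(55 + adjustment)))

print(speed_limit_rule("residential"))        # 25: predictable from the rule
print(personalized_limit([0.0, 10.0, 40.0]))  # 44: but on what general ground?
```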

All this is important because the rule of law requires treating like cases alike. The rule of law presupposes a law composed of rules, not commands. Algorithmically driven personalized law, as Ben-Shahar and Porat imagine it, does not provide that.

The objection suggests an alternative: rather than personalizing law into commands, big data and artificial intelligence could in some circumstances be used to further specify legal rules. Although any legal rule can be stated in a general form (“if any person . . ., then . . .”), a rule can be more or less specific depending on how many conditions replace the first ellipsis and how detailed the legal consequences replacing the second. This is not what Ben-Shahar and Porat propose. Nor does greater specification of rules get us all the advantages of personalized rulemaking. Still, one might take the project in that direction.

A second difference between algorithmically driven personalized law and existing forms of lawmaking generates a more intractable worry.

Although the processes of legislation and adjudication are often complex and jargon laden, they are largely accessible and explicable to the public. That access is important not only because it makes it easier to discover errors and abuses (the quis custodiet worry) but also because law should be a democratic institution. A democratic law does not merely issue rules and rulings; it provides reasons for them.

Ben-Shahar and Porat’s proposal requires artificial intelligence to crunch masses of data to get desired outcomes. But artificial intelligence operates in a black box. Machine-learning algorithms are programmed to try out many different ways to analyze the data, assess the outputs, and then use those assessments to further develop the processes. After thousands, tens of thousands, or millions of iterations, the resulting processes can be so complex as to be impossible to unpack or understand. Those who program the machine to learn do not, in the end, know what it has learned—what happens inside the box—but are left to evaluate the program’s success only by sampling its outputs.
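A toy example, far simpler than any real system, may make the point concrete. The sketch below fits two parameters by thousands of trial-and-error adjustments; even here the “reason” for any output is nothing more than the numbers on which the loop happens to settle, and with millions of parameters instead of two, sampling outputs is all an overseer can do.

```python
import random

# Toy machine-learning loop: guess, assess the error, adjust, repeat.
# The data secretly follow the rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = random.random(), random.random()

for _ in range(10_000):          # thousands of iterative refinements
    x, y = random.choice(data)
    error = (w * x + b) - y      # assess this output
    w -= 0.01 * error * x        # use the assessment to adjust the process
    b -= 0.01 * error

# The "explanation" of any prediction is just these settled numbers.
print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # roughly w=2, b=1
print(f"output for x=7: {w * 7 + b:.2f}")           # judged only by sampling
```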

When our law treats different people differently—when it punishes some and not others, when it denies privileges or powers to some that it grants to others, and even when it applies different defaults to different types of parties—those subject to the law expect to be told why.3 Turning over personalization to artificial intelligence threatens to render those reasons opaque. In short, the algorithm provides neither rule nor reason. If it spoke French, it might explain its commands in one sentence: L’état, c’est moi.

Like Ben-Shahar and Porat’s book, the above paragraphs engage in a form of science fiction, albeit of a different genre. Ben-Shahar and Porat do not advocate using big data and artificial intelligence to personalize all law. And as I observed in Part I, the stakes of personalization differ depending both on the type of law and on the types of characteristics the algorithm attends to. The above reductio ad dystopiam does not demonstrate that there are not gains to be had from some types of personalization in some areas of law. It does, however, suggest some costs of algorithmically applied personalization no matter where or how it is applied.

III. Personalized Law as Social Fiction

And when the poets can’t come up with anything
And have said absolutely everything in their plays
They lift the crane just like a finger
And the spectators get their money’s worth.

–Antiphanes

Ben-Shahar and Porat have an answer to worries such as the above: Their brave new world may not be perfect, but it is better than the one we live in. If artificial intelligence operates in a black box, at least it is not subject to the self-interest and persistent biases of legislators and judges. “[P]ersonalized law has the potential to sanitize lawmaking and law enforcement” both of the influence of special interests and of biased human decision-making. It does so by providing a different type of transparency: transparency of goals. Machine-learning algorithms cannot decide for themselves what counts as a correct analysis of the data. That must be specified at the outset by human decision makers. Whereas, today, legislators and judges need not agree upon the goals they seek to achieve, algorithmic lawmaking and adjudication will force them to specify “[t]he law’s goals and values . . . in advance with exact precision so that algorithms would know what to maximize.” Ben-Shahar and Porat celebrate transparency of this type:

Personalized law would require a degree of clarity and forethought in setting the objective of any law. No longer could lawmakers fudge this determination, deferring the explicit reconciliation of the law’s competing goals to judges, or inviting enforcers to tease out the goals by subsequent refined inquiry. Any ambivalence, crudeness, or uncertainty over the law’s goals and the costs associated with deviations from the goals would disrupt the personalization algorithm, or leave too much power in the hands of those writing the code. Moreover, lawmakers would not be able to merely state several cumulative goals of a statute; an exact weighing of their relative importance would instead be necessary.

Personalized lawmaking requires lawmakers to come to agreement on a law’s goals and, when those goals conflict, on their relative weights. Only then can they turn things over to the computer scientists, data managers, and statisticians to determine the means that best achieve those goals in proportion to the importance of each.
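In programming terms, the demand is that lawmakers hand the technicians an explicit objective function. The goals and weights in the sketch below are invented for illustration; the point is only that every tradeoff must become a number before the algorithm can run, and any vagueness left in the weights becomes discretion for whoever writes the code.

```python
# Hypothetical objective for a personalization algorithm. Lawmakers, not
# programmers, must fix these goals and weights in advance.
WEIGHTS = {"deterrence": 0.5, "rehabilitation": 0.3, "administrative_cost": -0.2}

def objective(predicted_effects: dict) -> float:
    """Score a candidate personalized rule as a single number to maximize."""
    return sum(WEIGHTS[g] * predicted_effects.get(g, 0.0) for g in WEIGHTS)

# Two candidate rules for one individual, scored on the fixed weights:
print(objective({"deterrence": 0.9, "rehabilitation": 0.2, "administrative_cost": 0.4}))  # 0.43
print(objective({"deterrence": 0.4, "rehabilitation": 0.8, "administrative_cost": 0.1}))  # 0.42
```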

Although Ben-Shahar and Porat identify transparency of goals as a benefit of algorithmically personalized law, this transparency is not what drives their project. The book focuses on the substantive gains that might be had from personalized law, as distinguished from process benefits. Transparency of legislative ends is, in their story, a happy side effect. But we need not wait for the rule of algorithms to ask lawmakers to better articulate the ends they seek to achieve and their relative weights. If complete transparency of ends is such a good, why not require it today?

One explanation is practical. Where there is disagreement within society about the proper ends of lawmaking, “fudging it” permits lawmakers to reach compromise. But there is also a more principled reason to resist the idea that lawmakers should first fully specify the law’s ends, after which they or someone else will determine the best means of achieving them: it relies on a false picture of practical reasoning.

Practical reasoning is not algorithmic, proceeding mechanically from major premise (ends) through minor premise (available means) to conclusion (act). Intelligent practical reasoning requires that as we reason through the means of achieving our ends, we also reflect on those ends, further specify them, and sometimes alter them in light of the available means. Although we generally begin a practical project with a general idea of our ends, rarely can we fully specify them or their relative weights in all circumstances. We arrive at those specifications only by thinking through, or even trying out, possible means of achieving those initial ends. If one or another means is particularly attractive or brings additional benefits, we might add it to the ends to be achieved. If the available means are too expensive, too harmful, too unfair, or too unjust, the reasonable thing to do is to alter our ends. We reason about ends not in the abstract, but in the concrete—through our attempts to realize them in the world.

An algorithm-driven personalized law is like a planned economy, except that, whereas in a planned economy officials decide in advance how much bread, milk, and toilet paper the citizenry will need, in personalized law, lawmakers decide in advance the ends of a law and their relative weights. If such a system looks strange, it is because it assumes a level of foresight and judgment that even the most dedicated lawmaker does not have.

Ben-Shahar and Porat’s science fiction account of algorithmic lawmaking thereby highlights something we might not otherwise notice about existing forms of lawmaking: they are often structured to allow for experimental and iterative forms of practical reasoning. The common law is the example par excellence. A common-law court begins with rules and principles established by prior decisions but should stand ready to limit, revise, and even occasionally reject them based on experience with their practical application. Arthur Linton Corbin, who titled his major work A Comprehensive Treatise on the Working Rules of Contract Law, captures the idea:

In a superficial aspect, the application of rules to cases may seem to be a deductive process; a pre-existing general rule is the major premise from which the judge arrives at a particular conclusion applicable to John Doe. In fact, however, the law in its growth and application is an inductive process. The supposed pre-existing rule is a mere assumption of the court. . . . The supposed general rule is an inductive conclusion on the part of the judge from preceding individual instances. His decision of the case is a new instance which later judges and theorists will use as the basis of a new induction. In all cases the judge must construct his own major premise.

Empirical and iterative reasoning about ends is not, however, limited to the common law. A legislature might identify goals at a high level of generality, then leave it to an agency to further specify them through processes of deliberative rulemaking and revision. And the U.S. Constitution itself invites the legislature, the courts, and the executive to continually specify and respecify the goals, principles, and limits set out in it through their application and, one hopes, in light of our experience with them.

Personalized Law is science fiction not only because it imagines an alternative form of law but also because it imagines an alternative form of lawmaking. Perhaps in some spheres, algorithmic lawmaking of this type is possible and could bring benefits. But in many others, it presupposes abilities that human lawmakers do not possess. Here Ben-Shahar and Porat engage in science fiction of an especially interesting type. The form of lawmaking they imagine does not so much show us a better way of doing things as it does help us better understand the way we do things now.

1. The distinction between granting a legal power and determining how it is exercised can be slippery. Legal powers come with altering rules, and any given altering rule might make it more or less costly to exercise the power. When the costs get high enough, one might say that the power, for practical purposes, no longer exists. That said, deciding whether to grant a legal power is a decision of a different type than is deciding how that power is exercised.
2. Ben-Shahar and Porat suggest that “default rules provide a good starting point” for their personalization project, as people can always contract around them. My point is different. It is that personalized implementation rules are perhaps, as a class, less troublesome than the personalized assignment of one or another legal status.
3. In a 241-page book, Ben-Shahar and Porat devote one paragraph near the end to the so-called right to explanation.