Omri Ben-Shahar and Ariel Porat wrote an exciting and provocative book that manages to stir your imagination and occupy your thoughts long after you’re done reading it. Their discussion of personalized law is incredibly timely, even visionary. Ben-Shahar and Porat are not the first to write about personalized law, as they note throughout the book. But the book is the first major attempt to disrupt the familiar course of law chasing technology and offer a comprehensive framework that precedes, rather than lags after, the next major technological revolution. While algorithms have penetrated a few areas of legal decision-making, they are still far from exerting their full influence on the legal system. In the near future, predictive algorithms will dramatically change the law. This book provides an opportunity for students, scholars, and legislators to consider this future and address the issues it is likely to raise before it becomes too late to influence its course. Readers should seize the opportunity that the book offers and enter a discussion that is certain to reshape law as we know it.

The book makes the wise choice of spending relatively little space on the benefits of personalization and focusing on the challenges, problems, and critiques that may arise with respect to personalized law. While the book covers impressive ground in all these respects, an area that could have benefited from further elaboration is constitutional law. In this Essay, I will reflect on two important issues that the book leaves fairly open or unaddressed, which I believe can enrich the important discussion of personalized law that Ben-Shahar and Porat ignite. The first issue is the question of personalizing constitutional rights, and the second is the concern that personalized law will diminish autonomy and choice and promulgate determinism.

I. The Problems with Personalizing Constitutional Rights

Ben-Shahar and Porat dedicate a relatively brief discussion to the question of whether constitutional rights should be personalized. The question is discussed as part of a section titled “Personalized Bundles of Rights and Duties”1 in which they suggest that “personalization of the law could also vary the size and especially the components of the prepacked bundles across individuals.”2 The general idea is that different people assign different values to different rights and may be more or less in need of certain protections. Personalized bundles of rights would cater to these differences like a salad bar: people would get a fixed “budget” or “portion” and would be able to choose, in idiosyncratic ways, their preferred combination of rights, including more of the components they value highly.

The main focus of Ben-Shahar and Porat is statutory rights—e.g., credit protections or consumer law—but they also consider personalized bundles of constitutional rights. They provide the following example: “Many people care about substantive due process rights, but some place higher value on protections against takings while others on voting rights or access to contraceptives. Under a uniform scheme of constitutional law, rights are equally allocated to people regardless of their preferences. No one gets ‘more’ freedom of speech, more or less than one vote, or a higher standard of consent-to-search. Under personalized law, citizens could receive different levels of protection for each of these individual rights—more of the protections that specifically matter to them, and fewer of those they do not value—while the ‘total allocation’ being equal.”3

Ben-Shahar and Porat express several reservations about personalizing constitutional rights. Their first point is that constitutional rights “have social value that greatly exceeds the private consumption value to the individual rightsholders.”4 They then seem to suggest that in constitutional law people’s preferences should be ignored and law should not be personalized because the purpose of constitutional law is not the individual but the social value of everyone having rights. This argument depends on the assumption that all or most people today actually use their constitutional rights in a way that generates social value. Ben-Shahar and Porat provide the examples of voting and speech and worry that allowing people to give up these rights in particular will erode democracy. However, both the assumption and the resulting concern are overstated. It is not at all clear that people actually utilize their rights uniformly today—particularly voting and speech—or that they generate positive social value while exercising their rights. A large number of people do not vote at all, and many do not engage in the kind of democracy-sustaining speech that Ben-Shahar and Porat are interested in (political, academic, creative, etc.). At the same time, many people in fact engage in socially destructive speech (fake news, hate speech, etc.). Indeed, while constitutional rights are essential to a functioning democracy, their social value does not depend on uniform and universal use. Rather, the social value of rights appears to rest on more complex dynamics that involve groups of varying sizes at different time points. This is not to say that personalization does not risk democracy. It does, but for different reasons that I explain below.

The second issue that Ben-Shahar and Porat identify is the difficulty in making “concrete the notion of an ‘equal sum’ of [constitutional] rights.” In other words, how can one determine whether bundle A is equal in size to bundle B? While Ben-Shahar and Porat do not elaborate on the difficulties of quantifying rights, they concede that this is a challenge with all rights and protections, not only constitutional rights. It is, indeed, a major limitation.

Ben-Shahar and Porat conclude with the statement that “[i]n the end, while various market mechanisms have developed to allow personalization of rights, the extension of this practice to statutory and constitutional law is troubling.”5 However, it is not entirely clear whether Ben-Shahar and Porat are fully committed to this conclusion. In other parts of the book they enthusiastically argue for personalizing privacy rights and differentiating treatment based on immutable characteristics, namely applying personalization to the constitutional rights of privacy and equality. The tension between the general discussion of constitutional rights and the specific discussion of concrete rights raises questions about the overall framework for thinking about personalization in constitutional law.

While this Essay cannot exhaust this crucial and fascinating question, I would like to extend Ben-Shahar and Porat’s analysis in a few respects. My general view is that personalizing constitutional rights is deeply problematic. While empirical evidence about people’s preferences and law’s varied outcomes often provides essential information for the application of constitutional law to specific problems, using such evidence to create unique bundles of personalized rights raises fundamental problems that Ben-Shahar and Porat have not discussed in their book.

First, the scope of uniform constitutional rights is already understood to be maximal: the point beyond which more of right A will overly burden the exercise of right B or jeopardize important societal interests. Consider free speech. People may speak freely about everything and through whatever means they choose up to the point where speech becomes fighting words, threatens national security, overly intrudes on privacy, or becomes defamation (a partial list whose content has varied throughout history and still varies across societies). The common feature is that the level of protection and even the scope of free speech are determined—uniformly, for all rightsholders—at the point of substantial harm. Personalization cannot offer more of right A without unsettling this delicate balance. More freedom for some is less freedom for others.

Relatedly, personalization cannot offer improved bundles of rights. To create a personalized bundle, personalization must take away some constitutional rights, or reduce their scope, in order to afford more of other constitutional rights. But when all rights are already set at their maximum level, it is not possible to compensate people for rights lost by giving them more of other rights. As noted above, giving one person freedom beyond its maximal socially constructed scope would mean that other people or societal interests are necessarily hurt. More speech or religion rights for some is less safety, equality, or health for others. Therefore, if bundles can be implemented at all, they will inevitably provide people with fewer rights in sum than under uniform constitutional law.

Third, constitutional rights matter greatly; but the importance of some of these rights becomes apparent only on rare and specific occasions, sometimes very late in life. This will hinder the ability of algorithms to accurately determine preferences. Take the innocent person who is accused of a crime she did not commit. Until that point, the person may have never thought about her due process rights and had no preference about these rights; due process was other people’s business as far as she was concerned—a concept that would never be relevant for her, or so she thought. An algorithm would have surely determined that this person cares very little for due process rights. Clearly, these preferences change dramatically once the person is accused of a crime she did not commit. But exactly at that point, the personalized due process rights that appear on the police officer’s screen are calibrated at the lowest level.

Ben-Shahar and Porat reassure us in other sections of the book that the algorithm will continuously learn from the data; and when people’s preferences and circumstances change, their personalized rights will change as well—even from day to day.6 But this adaptive method appears to be relevant only for decisions we make on a daily basis, such as how much social media we consume or our driving and eating habits. Frequent activities generate masses of data and allow the algorithm to detect changes in preferences and adapt the personalized rules over time. In contrast, information on people’s constitutional rights preferences will be scarcer and discontinuous and, therefore, less reliable. For the many people who discover the importance of constitutional rights only when they really need them, the algorithm will misfire at the exact critical moment.

A fourth concern relates to the impact of rights-bundles on the political process. Ben-Shahar and Porat worry about the difficulty of computing the total sum of rights, but a separate concern is that rights-bundles will encourage individuals who perceive their cause as lost in the political process to forgo their voting rights and focus on getting extra rights from the algorithm in the realms they care about the most. For example, pro-gun individuals in liberal states would forgo any attempt to influence political outcomes and would instead seek broader rights to bear arms from the algorithm. Because this pattern need not be manipulative—ideological preferences may be entirely sincere—it is unclear whether Ben-Shahar and Porat’s proposals for mitigating the risk of manipulation will help counter such a scenario. If such a process materializes, personalized constitutional rights will bypass democracy and fragment society. Personalization will become the mechanism that erodes the commitment to resolving conflict through the political process and the easy way out from the hard work of finding agreement amidst disagreement, which is necessary in every well-functioning society.

II. Personalization, Determinism, and Autonomy

Ben-Shahar and Porat dedicate more than half of the book to dealing with the concerns and problems that personalized law provokes. They discuss the concern of discrimination by algorithm and the related concern about the equal protection of the laws. They also discuss concerns about social coordination and manipulation and concerns about who will collect, store, and have access to the masses of personal and sensitive data needed for the operation of personalized law. Yet the book misses an opportunity to discuss a pair of acute concerns: first, that personalized law will diminish autonomy and second—the flipside—that personalized law will advance determinism.

Ben-Shahar and Porat provoke our imagination at the outset of the book with the fictitious diary of Abigail and David, a couple living their lives in a world full of personalized rules.7 Every aspect of the couple’s lives is governed by personal rules, from their morning workout through data-sharing practices up to their personalized default estates. While there are many benefits that Abigail and David enjoy as a result of personalized law—they receive personalized warnings and advice related to their health and driving conditions, for example—the couple’s diary is also profoundly troubling because of the level of control the algorithm exercises over their lives. Indeed, Abigail’s and David’s lives are seemingly run by personalized commands instead of by their own choices and judgments. One may ask: What is left of people’s personhood in a personalized law world? Ben-Shahar and Porat argue that each person gets what he or she prefers. But at the same time, in many of the cases they discuss, preferences are determined based on the combination of all of the subgroup affiliations to which people belong. We’d like to think that there is something to personhood beyond the sum of the groups to which the person belongs. Scholars and lawmakers who are averse to nudges are likely to view personalized law as a horror story. And even many of those who defend nudges will not want to live in a fully personalized law world.

Now, it would be unfair to argue that personalized law reduces the whole of the person to the combined weight of her group affiliation and dictates every aspect of life. There’s much to life beyond driving limits and estate preferences, and potentially many choices are not governed by any law at all. But as more and more aspects of people’s lives become data—through social media, Google searches, school grades, health records, dating apps, and more—I would urge Ben-Shahar and Porat to provide a clearer and more elaborate discussion of the stopping principles of personalization with respect to people’s autonomy, including what realms of life ought not be subject to personalized laws, and to provide guarantees that personalized law will not diminish individual choice. After all, law is everywhere, with rules applying to relationships both between strangers and between loved ones as well as at the beginning and end of life. Will all of that be subject to algorithmic construction?8

Put simply, the worry here is that algorithms will determine people’s lives in ways that will circumvent their ability to narrate and author their own lives.

This concern is not unique to personalized law, of course. Rather it grows with every new piece of evidence on how the algorithms of social media platforms and various apps manipulate opinions and emotions as a regular business practice.

Still, personalized law extends the concern about algorithmic construction of human decision-making. Law—personalized or uniform—is a norm backed by state power. The more the law is tailored to the person, the likelier it is that the person shifts to a different course as a result of law’s influence, a course they might not have taken otherwise. Indeed, guiding behavior is often the explicit goal of law. But the more we assume a person is defined by her past and tailor the law according to that history, the less the person can break from her past and behave differently. Or at least this is the worry, and we face the duty to examine how this worry can be avoided.

To be sure, Ben-Shahar and Porat are thoroughly committed to addressing the related concern that algorithms will exacerbate discrimination by internalizing bias or by compounding bias.9 But algorithms are still quite far from reaching the goal of avoiding bias, and it is not at all clear that they can be taught not to discriminate. Quite the contrary, thus far algorithms have primarily demonstrated their remarkable ability to pick up on every discriminatory pattern on the internet and reproduce it in their outputs.

Ben-Shahar and Porat are well aware that this is an issue and devote considerable attention to one difficulty: that excluding inputs such as race or sex does not guarantee nondiscriminatory outputs, because race and sex are often correlated with a host of other features. As Ben-Shahar and Porat write,10 a few approaches to de-bias algorithms have been discussed in the literature. However, the proposed solutions have not yet been tested and applied in practice; so this remains an open challenge for personalized law. As long as algorithms reproduce bias, they also participate in diminishing autonomy by steering people toward socially constructed paths, predicted by their immutable characteristics.
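The proxy problem described above can be illustrated with a small simulation. The sketch below is hypothetical and uses synthetic data (it is not drawn from the book): a protected attribute is excluded from the model entirely, yet a correlated feature lets the model reproduce much of the group disparity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n)            # protected attribute (excluded from the model)
x = z + rng.normal(0, 0.5, n)        # correlated proxy, e.g., a neighborhood score
y = z + rng.normal(0, 0.1, n)        # outcome that in fact tracks z

# "Blind" model: a least-squares fit on the proxy x alone, never seeing z
A = np.column_stack([x, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Predictions still differ sharply across the protected groups:
# the gap recovers roughly half of the true group difference of 1.0
gap = pred[z == 1].mean() - pred[z == 0].mean()
```

Blinding the model to `z` attenuates but does not remove the disparity, because the proxy carries most of the signal; this is the mechanism behind the claim that excluding inputs does not guarantee nondiscriminatory outputs.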

A related issue is how to include sex and race in the analysis—for example, in affirmative action or distributive contexts—without making these factors decisive, a question that often comes up in university admissions cases such as Grutter v. Bollinger (2003), Gratz v. Bollinger (2003), and Fisher v. University of Texas (2016). Ben-Shahar and Porat believe that personalized law provides a recipe by incorporating large amounts of data that include many features other than membership in protected groups: “[i]ndeed, the consideration of each factor can’t get any narrower than in a regime that takes all correlated factors into account.”11 Personalized law is thus similar to constitutionally permissible “holistic review” approaches, with the added advantage of relying on a large set of features, “so long as it is designed to ensure that the weight put on membership in protected groups is not excessive.” Being one factor among a host of factors guarantees, they argue, that a factor like sex or race is not “decisive by itself.”

This logic rings true, but in many contexts the availability of a large set of features does not guarantee that one or two features are not decisive with respect to the outcome. Sometimes, one feature can dwarf dozens of other features in prediction models. For example, age dwarfs other features in predicting a foster child’s likelihood of being adopted or finding a foster home. Furthermore, if the legal test for decisive is set by the Supreme Court at about 13% contribution to the outcome,12 sex and race are likely to have this weight in a host of areas, including some that the authors discuss as examples of permissible personalization.

For example, with respect to wills and estates, Ben-Shahar and Porat supply data showing that 55% of men bequeath all their property to their spouse, compared with only 34% of women, and that men leave twice as large a share of their estates to their partners as women do. According to the Supreme Court, then, sex should be considered a “decisive” factor, even if many other features are taken into account in the process of personalization. This analysis follows the book’s proposed framework regarding the constitutionality of personalized law but seems to defeat some of the very personalized laws that the book proposes. More generally, despite Ben-Shahar and Porat’s assurance that under personalized law the treatment of each person is always individualized,13 the treatment ultimately depends on the data. No assurances can be made that personalized laws will not rely on immutable characteristics as decisive factors when such factors greatly contribute to prediction.

But even if the problems of uprooting discrimination and ensuring equal protection under algorithms are eventually solved and personalization passes constitutional muster, the problem of determinism and diminished autonomy remains a concern.

To be sure, the problematic influence of personalization on autonomy is more troubling in some areas of law than in others. To have personalized driving limits based on one’s current physical state could be a major improvement, whereas personalized privacy rights could be much more concerning for autonomous decision-making. Driving rules refer to a simple, straightforward, momentary decision that could result in major harm to the self and to others (how fast to drive the car on this specific road and moment). Personalized driving limits that flash on one’s dashboard provide important information that is often inaccessible to drivers and can improve their decision-making in real time, enhancing their autonomy. For example, drivers often lack the ability to translate their current cognitive and emotional mode precisely into how fast they should drive or how much space to keep from the next car. Personalized driving limits that are communicated in real time help drivers make the call, while leaving the ultimate choice how to drive in their own hands.

In contrast, privacy rights—and particularly privacy waivers—require balancing numerous multifaceted considerations (for example, what level of privacy to have; with respect to different kinds of information; and different types of relationship). The implications that publicity may entail range from fame and fortune to loss of reputation, relationships, and employment. These implications are often indirect and less foreseeable than the implications of reckless driving. (Consider the person who posts a controversial opinion on Facebook in what turns out to be a public discussion, which leads to the termination of her employment.) People’s preferences regarding the extent to which their information is publicly available may be ill-informed, and their preferences regarding the balance between sharing and keeping to themselves are often less straightforward than their driving preferences.

It is unclear how algorithms would gain access to people’s actual privacy preferences given the complexity of the issues to consider. Furthermore, if algorithms are fed with data on people’s current privacy preferences—that are mostly dictated by uniform contracts—it is quite certain that the prediction model would not truly be able to capture the real variation in privacy preferences, as it is currently masked by the method in which privacy preferences are elicited. The risk here is that personalized privacy laws will diminish people’s autonomy and steer them in a certain direction that is not necessarily aligned with their preferences.

While our currently uniform laws may be far from providing optimal protection for people’s privacy rights, personalized law might exacerbate the problem. By routing people into overly determined courses of decision-making that they have not sufficiently considered, personalized law runs the risk of becoming a self-fulfilling prophecy: algorithms would match consumers with personalized rules that are not truly representative of their actual preferences but are sticky enough to lock in consumers.

Ben-Shahar and Porat perhaps believe that the concern about autonomy is not weighty because personalized law provides a thoroughly individualized treatment that takes all relevant factors into account—including, as the authors suggest, with regard to discrimination and equal protection. But the multitude of factors used in the assessment of each person actually makes the autonomy problem worse. Algorithms are going to make mistakes—they are, after all, human products. Their predictions will not be flawless. They will provide accurate predictions for some and misfire for others. Additionally, it is often harder to find errors in an algorithm than in a simple uniform rule. The risk of determinism is particularly concerning for people for whom the algorithms will produce inaccurate or erroneous predictions. But how will people in this group know that they are being steered in the wrong direction? And will they be able to challenge personalized law?

When each person is assessed according to a very large number of factors, people are going to have difficulty understanding what factors generated the personalized law that applies to them. Naturally, they will not be able to fully compare their situation with that of others because the law is personalized for each person. Furthermore, as Ben-Shahar and Porat argue elsewhere in the book, the extensive volume of characteristics used by the personalized algorithm makes it very hard for people to discern what factor influenced the rule.14 Ben-Shahar and Porat stress that personalized law algorithms should be transparent and that people should be entitled to see the factors that affect the personalization algorithm. But they also acknowledge that this would require technical expertise and that “it is highly unlikely that anyone other than watchdogs would scrutinize the algorithm.”15

To this we might add that predictive algorithms are typically constructed to reach a reliable prediction outcome and not to focus on the specific contribution of each factor to the prediction. The weight they allocate to specific features is often unstable. This element makes it particularly difficult to challenge the use of a specific feature or the algorithm as a whole, understand the decisiveness of a feature, and correct errors at the individual level. Left with little ability to understand the personalized law that applies to them, people may give up and accept the algorithmic rule—thereby sacrificing their autonomous choice for algorithmically constructed choices.
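The instability of per-feature weights can likewise be illustrated with a toy simulation (again hypothetical Python with synthetic data): when two features are nearly collinear, refitting on a fresh sample keeps the features' combined weight stable while the split between the individual weights swings arbitrarily, which is exactly what makes "how decisive was factor X?" so hard to answer at the individual level.

```python
import numpy as np

def fit_weights(seed, n=200):
    """Least-squares weights for two nearly collinear features."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(0, 1, n)
    x2 = x1 + rng.normal(0, 0.001, n)   # near-duplicate of x1
    y = x1 + rng.normal(0, 0.5, n)      # outcome driven by the shared signal
    A = np.column_stack([x1, x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

c_a = fit_weights(seed=1)
c_b = fit_weights(seed=2)

# The combined weight is stable across samples (close to the true 1.0)...
combined_a, combined_b = c_a.sum(), c_b.sum()
# ...but the per-feature weights swing freely between fits, so neither
# feature's individual "contribution" can be pinned down.
swing = abs(c_a[0] - c_b[0])
```

Both fits predict the outcome about equally well, yet they tell contradictory stories about which feature mattered; a person seeking to challenge the weight placed on one factor would find no stable answer to contest.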

Conclusion

Ben-Shahar and Porat did a marvelous job in showing us what the future of personalized law could look like and what limitations and worries should be addressed in such a world. In this Essay, I sought to elucidate two concerns from a constitutional law perspective. First, and more generally, personalization appears incompatible with how our legal system has constructed constitutional rights and with the relationship between rights and the democratic process. Second, and more specifically, personalized law presents concerns for autonomy that must be addressed before it may be deemed a viable legal policy. These concerns are more troublesome in some areas of law than in others. But there are also general concerns about algorithms pushing people into misguided paths without sufficient ability to challenge those paths, which should be addressed in any legal system that seeks to personalize law.

  • 1Omri Ben-Shahar & Ariel Porat, Personalized Law: Different Rules for Different People 99 (2021).
  • 2Id. at 100.
  • 3Id. at 102.
  • 4Id. at 103.
  • 5Id. at 104.
  • 6See, e.g., Ben-Shahar & Porat, supra note 2, at 148 (in the context of individuals’ riskiness scores).
  • 7Ben-Shahar & Porat, supra note 2, at 4–7.
  • 8Note that this is a separate worry from the concern that all of life will become data for the use of personalizing algorithms or the concern about the necessary guarantees to prevent misuse of these abundant and hypersensitive data.
  • 9See Ben-Shahar & Porat, supra note 2, at 136–37, 155–62.
  • 10See, e.g., id. at 136–37, 160–62.
  • 11Id. at 153 (emphasis in original).
  • 12In Gratz v. Bollinger (2003), the Supreme Court nullified an affirmative action policy that provided twenty extra points for minority applicants on a 150-point scale. This translates to 20/150, or about 13%.
  • 13See Ben-Shahar & Porat, supra note 2, at 163.
  • 14See id. at 196.
  • 15Id. at 197.