In this Essay, we seek to systematically explore and understand crucial aspects of the dark side of personalized business-to-consumer (B2C) transactions. We identify three areas of concern. First, businesses increasingly engage in first-degree price discrimination, siphoning rents from consumers. Second, firms exploit widespread or idiosyncratic behavioral biases of consumers in a systematic fashion. And third, businesses use microtargeted ads and recommendations to shape consumers’ preferences and steer them into a particular consumption pattern.

Siphoning rents, exploiting biases, and shaping preferences appear to be relatively distinct phenomena. However, these phenomena share a common underlying theme: the exploitation of consumers or at least an impoverishment of their lives by firms that apply novel and sophisticated technological means to maximize profits. Hence, the dark side of personalized B2C transactions may be characterized as consumers being “brought down by algorithms,” losing transaction surplus, engaging in welfare-reducing transactions, and increasingly being trapped in a narrower life.

It is unclear whether first-degree price discrimination creates an efficiency problem, but surely it raises concerns of distributive justice. We propose that it should be addressed by a clear and simple warning to the consumer that she is being offered a personalized price and, in addition, a right to indicate that she does not want to participate in a personalized pricing scheme. Similarly, behavioral biases may or may not lead consumers to conclude inefficient transactions. But they should be given an opportunity to reflect on their choices if these have been induced by firms applying exploitative algorithmic sales techniques. Hence, we propose that consumers should have a right to withdraw from a contract concluded under such conditions. Finally, shaping consumers’ preferences by microtargeted ads and recommendations prevents consumers from experimenting. They should have a right to opt out of the technological steering mechanisms created and utilized by firms that impoverish their lives.

Regulation along the lines proposed in this Essay is necessary because competitive markets will not protect unknowledgeable or otherwise weak consumers from exploitation. A general “right to anonymity” for consumers in the digital world could be the macrosolution to the microproblems discussed. If it were recognized and protected by the law, it might be possible to reap (most of) the benefits of personalization while avoiding (most of) its pitfalls.



The rise of big data and artificial intelligence creates novel and unique opportunities for business-to-consumer (B2C) transactions. Businesses assemble or otherwise gain access to comprehensive sets of data on consumer preferences, behavior, and resources. An analysis of that data allows them to profile consumers. This leads to stark and novel forms of information asymmetries: businesses know at least as much about consumers as consumers know about themselves, and sometimes even more. Smart sales algorithms are used to market products and services, microtargeting idiosyncratic consumer preferences with personalized offers. As a consequence, firms have enormous leverage to shape private transactions—knowledge is power.

Much of the existing literature on the effects of these new technologies on B2C transactions assumes that businesses will make a “benign” use of them. It has been suggested that big data analytics can help increase customer satisfaction and loyalty.1 Personalized online shopping promises to cure decision-making paralysis caused by an abundance of options: in limiting consumers’ options, smart sales algorithms facilitate choice and thus result in the optimal satisfaction of “real” consumer preferences.2 Businesses might also increasingly offer specific contract terms that best meet consumers’ preferences.3 All in all, personalizing B2C transactions appears to promise increased efficiency—both on the micro (transactional) and on the macro (societal) level.

By contrast, in this Essay we systematically explore crucial aspects of a potential dark side of personalized transactions. Big data and artificial intelligence may enable businesses to exploit informational asymmetries and/or consumer biases in novel ways and on an unprecedented scale. Incentives to take advantage of naïve or biased consumers exist,4 and competitive pressures may force businesses to engage in exploitative practices.5 At first sight, this poses stark challenges both to market efficiency and to individual autonomy.

Our aims are threefold. First, we seek to identify the effects that personalization has or may have on B2C transactions in terms of efficiency and distribution—of rents in a microeconomic sense—and on individual autonomy and agency. Second, we analyze whether, and to what extent, there is a regulatory need to counteract identified detrimental effects by reference to specific regulatory objectives, such as efficiency or fairness. Third, we examine the regulatory tools that might be employed to this end and assess their comparative merits. Our focus here is on contract law. At the same time, the available tools include self-help remedies: increasingly, consumers are gearing up to protect their private sphere and bargaining power vis-à-vis businesses.6 Any regulatory intervention to correct identified negative effects of personalized B2C transactions must be calibrated such that innovation and the beneficial effects of big data and artificial intelligence for businesses, consumers, and society as a whole are not stifled.

We identify three aspects of the dark side of personalized B2C transactions as particular areas of concern. First, businesses increasingly engage in first-degree price discrimination, siphoning rents from consumers. Second, firms systematically exploit behavioral biases of consumers, such as their inability to correctly assess the long-term effects of complex transactions or their insufficient willpower. And third, businesses use microtargeted ads to shape consumers’ preferences and steer them into a particular consumption pattern, effectively locking them into a lifestyle determined by their past choices and those of likeminded consumers.

At first sight, siphoning rents, exploiting biases, and shaping preferences appear to be distinct phenomena. However, on closer inspection, these phenomena share a common theme: the potential exploitation of consumers or at least an impoverishment of their lives by firms who apply sophisticated new technologies to maximize profits. Hence, the dark side of personalized B2C transactions may be characterized as consumers being “brought down by algorithms”—losing transaction surplus, engaging in welfare-reducing transactions, and increasingly being trapped in a narrower life.

It is unclear whether first-degree price discrimination creates an efficiency problem, but surely it raises concerns of distributive justice. We propose that it should be addressed by a warning to the consumer that she is being offered a personalized price and, in addition, a right to indicate that she does not want to participate in a personalized pricing scheme. Similarly, behavioral biases may or may not lead consumers to conclude inefficient transactions. But they should be given an opportunity to reflect on their choices if these have been induced by firms applying exploitative algorithmic sales techniques. Hence, we propose that consumers should have a right to withdraw from a contract concluded under such conditions. Finally, shaping consumers’ preferences by microtargeted ads prevents consumers from experimenting and leading a multifaceted life. They should have a right to opt out of the technological steering mechanisms utilized by firms that impoverish their lives.

An important concern with respect to the dark side of personalized transactions that this Essay discusses arises from the reinforcement of discriminatory practices.7 This concern informs our analysis, but we do not focus on it. The problems this Essay discusses are problems partly or even primarily for other reasons than their effects on inequality and prejudice in a society. In other words, discriminatory consequences might well make these problems worse. But siphoning rents, exploiting biases, and shaping preferences would still raise serious concerns even if they did not have discriminatory effects.

The remainder of this Essay is organized as follows: Part I studies the siphoning rents problem, Part II the exploiting biases problem, and Part III the shaping preferences problem. Part IV explains that competitive markets will not protect unknowledgeable or otherwise weak consumers from exploitation. In the Conclusion, we bring together our findings and consider whether consumers in the digital world should enjoy a more general “right to anonymity” to avoid being brought down by algorithms.

I. Siphoning Rents

Price discrimination occurs when a firm charges a different price to different groups of consumers for identical or similar goods or services for reasons not associated with the cost of supply.8 It is not a new phenomenon. Just think of coupons clipped by certain consumers—with lower opportunity costs than others—to obtain a discount when buying certain goods. What we are witnessing with the advent of big data and artificial intelligence are new forms of price discrimination carried out on a different scale.

A. Personalized Pricing

Price discrimination comes in different forms. Third-degree price discrimination happens when different groups of consumers receive different prices, as in the coupon example. Second-degree price discrimination relates to different prices charged to different buyers depending on the quantity or quality of the goods or services purchased. Volume discounts are an example. Finally, first-degree price discrimination means that individual consumers receive different prices based on their individual preferences and reservation values.

All three forms of price discrimination require that firms possess some market power, are able to limit arbitrage by consumers, and can find a way to segment them. Companies like Amazon, Google, or Facebook possess significant market power due to network effects. Arbitrage by consumers is feasible to some extent—think of reselling goods on eBay—but limited in scope and scale. What is particularly new in the data-driven economy is that first-degree price discrimination becomes possible.9 Firms obtain data on individual consumers—such as their location, device and browser used for orders, and browsing and shopping history—that allow them to assess each individual consumer’s reservation price for a particular product or service.10 Your personal digital assistant—Amazon’s Alexa or Apple’s Siri—may prove to be a “devious” agent, eavesdropping on conversations about your urgent desires in what used to be your private sphere.11

Evidence for first-degree price discrimination is still relatively scarce. In 2014, researchers used the accounts and cookies of over three hundred real-world users to detect price steering and discrimination on sixteen popular e-commerce sites. They found evidence of some form of personalization on nine of these sites.12 Other researchers have observed signs of discrimination based on the originating web page.13 The potential gains from first-degree price discrimination are significant. Models suggest that firms could increase profits by as much as 12 percent just by using consumers’ web browsing data.14 Hence, we can expect firms to increasingly experiment with first-degree price discrimination.15 Indeed, competitive pressures might force them to do just that.

B. Evaluating Personalized Pricing

Evaluating first-degree price discrimination is not a mathematical exercise. Everything depends on the measuring rod used. It makes an enormous difference, for example, whether the goal is to maximize some measure of efficiency—for example, Kaldor-Hicks efficiency or the net effects on total (producer and consumer) surplus—or whether we are judging first-degree price discrimination against a certain conception of justice or fairness: studies have found that consumers emphatically view individual price discrimination as unfair.16 Even if we could agree on the measuring rod, it might be difficult if not impossible to precisely assess the effects of such price discrimination on consumers and producers.

Let us assume that we could settle on total producer and consumer surplus as the relevant measure. We can then seek to assess the effects of first-degree price discrimination qualitatively. When a seller is able to determine the reservation price of a potential customer, it can capture effectively the full surplus of the bargain. With personalized pricing, market prices disappear and, with them, the surplus that inframarginal consumers were able to pocket when shopping at market prices. If consumers overestimate the benefit they derive from a product or service, they may even be made worse off by contracting at a personalized price.17 At the same time, demand increases, and consumers who were previously unable to purchase a good or service are now “in business.” Hence, firms benefit, as do some consumers, whereas other consumers fare worse than before.
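The surplus redistribution just described can be made concrete with a toy numerical sketch. All figures below (reservation prices, unit cost) are invented for illustration only; this is not an empirical model of any market discussed in this Essay:

```python
# Toy illustration: how first-degree price discrimination redistributes
# surplus. Five consumers with different reservation prices; the firm's
# unit cost of supply is 4. (All numbers are hypothetical.)
reservation_prices = [10, 8, 6, 5, 3]
unit_cost = 4

def uniform_outcome(prices, cost):
    """Uniform (market) pricing: the firm picks the single
    profit-maximizing price among the candidate reservation prices."""
    best = None
    for p in prices:
        buyers = [r for r in prices if r >= p]
        profit = (p - cost) * len(buyers)
        consumer_surplus = sum(r - p for r in buyers)
        if best is None or profit > best[1]:
            best = (p, profit, consumer_surplus, len(buyers))
    return best

price, producer_surplus, consumer_surplus, served = uniform_outcome(
    reservation_prices, unit_cost)
# Uniform price 8: two buyers, producer surplus 8, consumer surplus 2.
print(price, producer_surplus, consumer_surplus, served)

# Personalized pricing: every consumer whose reservation price covers
# cost is charged (almost exactly) that reservation price. Consumer
# surplus vanishes, but more consumers are served.
personal_buyers = [r for r in reservation_prices if r >= unit_cost]
personal_producer_surplus = sum(r - unit_cost for r in personal_buyers)
print(personal_producer_surplus, len(personal_buyers))  # 13 and 4
```

In this stylized example, personalization raises total surplus (13 versus 10) and serves more consumers (four versus two), yet the inframarginal consumers lose the surplus of 2 they previously pocketed—precisely the mixed qualitative picture described above.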

We also need to consider secondary effects. Firms might push consumers too hard toward their respective reservation prices, overshooting the target and losing profitable deals. Consumers might refrain from purchasing because of concerns about their privacy.18 Savvy consumers will employ tools to protect their privacy and bargaining power.19 Rent-seeking investments by firms and defensive measures by consumers create deadweight welfare losses. Firms and savvy consumers face a prisoners’ dilemma, leading to a technological arms race. This increases firms’ costs and reduces demand—that is, it reduces the beneficial effect of first-degree price discrimination on marginal consumers.20 The losers in this game might be those consumers we care about the most: those with the lowest ability and willingness to pay. Further, consumers as a group are made poorer—collectively, they spend more on a particular good or service, and this reduces their demand for other goods and services.

On the other hand, price discrimination can also benefit consumers as a group by increasing competition if rival firms want to offer lower prices to different consumers or consumer groups. This can happen if firms run different sales algorithms that lead them to adopt different pricing policies. Further, firms may also apply sophisticated price discrimination tools vis-à-vis their suppliers, reducing production costs and increasing total profits generated vis-à-vis their customers.

The overall effects are hard if not impossible to measure empirically. In a model already mentioned, it was suggested that Netflix could increase its profits by 12.18 percent by applying first-degree price discrimination and that aggregate consumer surplus would fall by 7.75 percent.21 Similarly, in another model, it was suggested that iTunes could increase its revenues by 48 to 65.7 percent, but total consumer surplus would fall by 26.2 to 32.8 percent.22 However, these models ignore the secondary effects discussed above. If anything can be said with reasonable certainty based on this discussion and the model estimations, it is that, in the aggregate, first-degree price discrimination benefits firms and harms consumers. The net effect on total welfare is unclear.

C. Self-Help and Its Limits

Given that consumers regard personalized prices as highly unfair, they will attempt to avert the harm suffered using self-help remedies. Consumers may try to achieve anonymity vis-à-vis businesses. Both software and hardware tools—such as Tor and Anonabox—can be employed to this effect. Consumers may also use tools to enhance their bargaining power and improve their decision-making processes—becoming “algorithmic consumers.”23 The startup ShadowBid, for example, shows price history charts on Amazon and lets consumers state their personal reservation price. It then purchases automatically when the price drops below this threshold.24

Such self-help tools surely do not work perfectly. Withholding personal data by, for example, disallowing tracking cookies may come at the (opportunity) cost of being shut out of certain transactions. ShadowBid purportedly gets consumers the best price available on Amazon. But this price may either still be a personalized price or, if the bid is made anonymously, not the lowest price available elsewhere—leaving consumers to pay more than they otherwise would.

Further, it has already been remarked that the arms race between businesses and consumers creates a prisoners’ dilemma in a distributive conflict—deadweight (efficiency) losses are produced in pursuit of the biggest slice of the available surplus. At the same time, sophisticated self-help will be used primarily by the savviest consumers—with regressive effects on others, creating a severe fairness issue. Hence, self-help cannot and should not be the only answer to first-degree price discrimination.

D. Potential Regulatory Responses

Regulatory responses beyond self-help tools may be found in very different fields of the law, such as contract, antitrust, the rules on unfair trade practices, and tax. A challenge in this context is that big data and artificial intelligence tools—and with them business practices—are changing rapidly and in unforeseen directions. Further, it is critical that any regulatory intervention helps move such practices in the “right” direction: fostering promising innovation and preventing hazards.

An extreme regulatory response would be a ban on personalized pricing. It has been suggested, for example, that price differences not justified by different costs should run afoul of the unconscionability doctrine in contract law.25 However, the case for such a radical measure is very weak. After all, the main concern about personalized pricing is that, in the aggregate, consumers lose some of the total surplus that they enjoyed previously; some consumers are even better off with price discrimination. This is very different from “classical” applications of the unconscionability doctrine, in which consumers are overcharged by 100 percent or even more compared to market prices.26

By contrast, a very light-touch regulatory measure would aim at increasing consumers’ awareness of personalized pricing. An obligation to disclose the application of first-degree price discrimination appears innocuous and potentially effective to leverage consumer autonomy.27 It is not surprising, therefore, that the House of Lords in the United Kingdom has recommended that “online platforms be required to inform consumers if they engage in personalised pricing.”28 Arguably, such a duty already exists today in the European Union (EU), including in the United Kingdom, based on Article 7(2) of the Unfair Commercial Practices Directive.29

The critical question is whether leveraging unassisted self-help by consumers is enough. As discussed, such self-help produces deadweight efficiency losses and has regressive effects on some consumers.30 A way out of this dilemma may be a form of government-assisted self-help that contains the negative efficiency effects of leveraging up by businesses and consumers and reduces the regressive effects on certain consumers. Notice of personalized pricing could come in the form of highly suggestive visual displays or sounds,31 and consumers could be given the right to opt out of personalized pricing by clicking on a stop button.32 Such a right would not imply that consumers could always buy at a nonpersonalized price. It would simply mean that, if a business wanted to make a consumer who has pressed the button an offer, it would have to be a nonpersonalized price offer. Clicking the opt-out button would send a signal to firms, inviting them to make such an offer. Arguably, firms have an interest in receiving such notice: it is valuable feedback. On the other hand, triggering this process is in the interest of the customer who might thereby benefit from a more attractive nonpersonalized offer from her preferred supplier.

The described opt-out regime receives some normative support from the idea that the data on which a personalized pricing scheme is based “belongs” to the individual in the sense of being valuable personal property.33 To a certain extent, this idea probably informs and motivates the attitudes of those consumers discussed above who view personalized pricing as distinctly unfair. This is at least a plausible viewpoint. After all, firms are exploiting people’s private data rather than publicly available information.

The opt-out regime would significantly reduce the costs of self-help: consumers could get out of the personalized pricing trap easily and cheaply. Any regressive effects on less sophisticated consumers would be minimal. If a consumer clicks herself out of the market by receiving an unattractive nonpersonalized price offer, then this is her own choice—volenti non fit iniuria. Consumers might of course check out what they can get by opting out and switch back if the test proves unsatisfactory. But they would do so at their own risk, as the new personalized price might be higher than the old one—the firm’s algorithm will have adjusted to the new information.

The most critical question is this: Do consumers who opt out of personalized pricing impose costs on businesses that lead to a reduction of supply and demand vis-à-vis other consumers? Under the opt-out scheme, a segmentation of the market would emerge with different equilibria: a market with nonpersonalized prices for those who opt out and a market with personalized prices for those who do not. Those who opt out will be consumers with relatively higher reservation prices—they gain the most from fleeing personalization. Firms of course know this. Hence, the new nonpersonalized market price will be higher than the prepersonalization market price. This reduces firms’ profits compared to full personalization. The effect will be a higher price level in the personalized market segment for the “remainers” because the cross-subsidy from the “leavers” is now missing. Hence, some marginal consumers will be priced out of the market. At the same time, in the aggregate, consumers will recoup some of the rents lost to businesses under full personalization.
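The segmentation dynamic described above can likewise be sketched with hypothetical numbers. The assumption built into the sketch is the one made in the text: the consumers with the highest reservation prices opt out, and the firm then sets a single nonpersonalized price for that segment alone:

```python
# Toy sketch (invented numbers) of market segmentation under the
# opt-out regime. Consumers with high reservation prices are assumed
# to opt out of personalized pricing; the rest remain.
reservation_prices = [10, 9, 8, 6, 5, 3]
unit_cost = 4
opt_out_threshold = 9  # assumed: high-value consumers flee personalization

leavers = [r for r in reservation_prices if r >= opt_out_threshold]
remainers = [r for r in reservation_prices if r < opt_out_threshold]

def best_uniform_price(segment, cost):
    """Profit-maximizing single price over one market segment."""
    best_price, best_profit = None, float("-inf")
    for p in segment:
        profit = (p - cost) * sum(1 for r in segment if r >= p)
        if profit > best_profit:
            best_price, best_profit = p, profit
    return best_price

# Pre-personalization benchmark: one uniform price for everybody.
old_price = best_uniform_price(reservation_prices, unit_cost)

# Opt-out regime: a nonpersonalized price set for the leavers only.
new_price = best_uniform_price(leavers, unit_cost)

print(old_price, new_price)  # here: 8 and 9
```

Because the leaver segment consists only of high-reservation-price consumers, the firm's optimal nonpersonalized price for that segment (9) exceeds the prepersonalization market price (8)—the mechanism by which, as argued above, opting out recoups some rents for leavers while the cross-subsidy to the personalized segment disappears.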

The upshot of this analysis is this: a “visceral warning” of personalized pricing is easy to defend as a regulatory measure to address the siphoning rents problem. The case for a simple opt-out right (stop button) for consumers is less clear-cut. Such a right will harm some (marginal) consumers but benefit consumers in the aggregate compared to full personalization. Siphoning rents from consumers by firms is a problem because the overwhelming majority of consumers feel ripped off by first-degree price discrimination. Hence, it is a defensible policy choice not only to warn consumers about personalized pricing but also to give them a right to opt out of such a scheme.

II. Exploiting Biases

For a long time, economic models were based on the assumption that market participants are fully rational agents, maximizing utility or profits in all their actions. Empirical research in cognitive psychology has shattered this assumption and unveiled biases in our thinking and decision-making: we systematically deviate from the model of an economic person.34 Humans find it difficult, for example, to accurately assess the long-term effects of complex commercial transactions,35 they hyperbolically discount future effects (related to the first problem),36 they overvalue salient and present information,37 and they exhibit loss aversion in their decision-making—seeking risks to avert perceived detrimental outcomes.38

A. Consumers in Strategically Set Rationality Traps

All these biases are well known by now. The crucial question in the context of our inquiry is whether the rise of big data and artificial intelligence creates new regulatory challenges or exacerbates old challenges on a new scale. Our concern in this Section is whether businesses might abuse their novel technological tools to systematically exploit consumers by steering them into strategically set rationality traps.

Scholars have voiced such concerns. It has been suggested that consumers might suffer from a “mass production of bias” engineered by businesses to “create suckers.”39 Real-life examples abound.40 Online gaming algorithms are designed to create and maintain behavioral addictions.41 The online shopping website GILT is reported to use so-called “micro-cliffhangers” to get customers hooked.42 Such cliffhangers and the status quo bias are also instrumental in engineering a string of film purchases and “binge-watching.”43

For biases to work and for addictions to develop and be maintained, the transactional environment is critical. Just think of somebody who would like to quit smoking but is surrounded by cigarette packs. Big data and smart sales algorithms pose stark challenges in this respect: they are literally everywhere and, in particular, in our homes. We have already mentioned the dangers of our personal digital assistant turning “devious” by identifying our urgent desires or even addictions.44 Facial recognition algorithms will detect when customers are down and weak.45 Just think of subscription offers from dating platforms to depressed singles that arrive during the Christmas holidays. All this might be further intensified through immersive virtual reality experiences.

It appears that the regulatory challenges associated with the new technological tools available to businesses come in two different forms. First, firms might be able to exploit widespread biases of humans more systematically. Second, they might, for the first time, be able to exploit individual (idiosyncratic) biases.

An example of the first type of behavior is credit card “teaser rates” charged for an introductory period. It has been observed that this “highly salient benefit to consumers generally comes with less-visible costs attached,”46 exploiting a combination of biases described above (salience, difficulty of correctly assessing the effects of complex transactions, and discounting future effects). The design of such sales schemes may greatly benefit from big data analytics and artificial intelligence.47

An illustration of the second type of strategy might be the exploitation of highly individual-specific information on certain biases. A smoker who wishes to quit may be bombarded both with treatments—medications, chewing gums, etc.—and with the occasional delivery of (allegedly) nonharmful cigarettes or even regular cigarettes to the smoker’s doorstep: just enough to keep the person addicted and in need of further “treatments.” Similarly, a consumer who grossly underestimates individual cell phone usage may be exploited by a cell phone provider who knows exactly how often, when, and where this consumer will make use of its service.

B. Evaluating Rationality Traps and Self-Help by Consumers

Few will sympathize with these business practices. However, it is difficult to precisely assess the type of regulatory problem and whether it is of a gravity that justifies or even requires regulatory intervention. Strategically set rationality traps may or may not be an issue from an efficiency standpoint. Both the credit card “teaser rates” problem and the problem of the smoker who wishes to quit involve an intra-individual conflict between short-term and long-term preferences: the consumer reveals a short-term preference for cash or cigarettes and, at the same time, has a long-term preference for a lower debt level or for quitting smoking. One cannot say a priori that one type of preference is better or worse than the other without engaging in a paternalistic exercise of judging the value of preferences—thereby violating one of the fundamental axioms of economic theory.48

All this becomes an efficiency (and fairness) issue primarily if one considers secondary effects, in particular the potential reaction of (certain) segments of the consumer population. It has already been mentioned that consumers might engage in defensive practices, such as attempting to achieve anonymity in their transactions with businesses. They might also try to improve their decision-making capacity, bargaining skill, and power vis-à-vis businesses by resorting to new technological tools.49 Consumers can use a variety of debiasing tools, many of which are based on quantitative methods, to accurately assess risks and costs/benefits of a particular decision.50 “Algorithmic consumers” can use “self-restraint preference algorithms” to give greater weight to their long-term preferences.51

The problems with this development are manifold.52 First, a technological arms race between businesses and consumers produces deadweight efficiency losses. Second, defensive algorithms used by consumers might be developed and supplied by businesses to foster their interests—a version of the “devious” personal assistant already mentioned. And third, such tools will be used primarily by the savviest consumers. The losers in this strategic game will be those consumers who suffer the most from severe biases.

C. Potential Regulatory Responses

If self-help by consumers is not a (fully) satisfactory response to the problem of strategically set rationality traps, then what is? Consumers are afforded some minimum protection in jurisdictions worldwide by general contract law rules and doctrines, such as capacity, unconscionability, undue influence, misrepresentation, and duress. These rules and doctrines capture extreme cases. But they do not normally provide a remedy if widespread or idiosyncratic biases are exploited by businesses.

Disclosure does not seem to be an effective remedy either. We know that consumers do not read the fine print,53 and requiring businesses to flag an issue in a more prominent form (“Mind the long-term effects!”) will probably not be of much use to the less savvy consumers—the main target group of any regulation—who fail to understand the problem in the first place. A personalized “visceral notice” or warning—for example, by an avatar appearing on the screen of a consumer who browses the internet in a condition of acute vulnerability—might be more effective. At the same time, the European experience with shock tactics to curb smoking was not a success: Europeans still hold the world’s “smoking crown” despite smoking bans, high taxes, and graphic death warnings.54

This leaves us with another remedy that has gained significant importance as a consumer protection tool worldwide, especially in the European Union: withdrawal rights. European consumers have such a right, for example, with respect to contracts concluded on their doorstep, loan agreements, timeshare contracts, and distance-selling contracts.55 It gives them an opportunity to reflect on the wisdom of their choices in a cooling-off period.

Withdrawal rights are justified when consumers’ preferences are distorted by exogenous influences, for example in the doorstep-selling case. Such rights are also justified when consumers suffer from endogenous distortions or biases, for example with respect to loan agreements.56 Building on these two independent rationales for withdrawal rights, it appears justifiable to grant consumers such a right whenever their choices have been influenced by big data and artificial intelligence tools. Insofar as algorithm-driven sales tools target idiosyncratic biases, the notion of personal data “belonging” to the individual in question provides an additional normative foundation for a withdrawal right. Consumers would have the option to reflect on their choices, consult with trusted third parties within a week or two, and withdraw from a transaction if, upon reflection, they consider it to be disadvantageous overall.

Of course, withdrawal rights are no panacea, and they carry negative secondary consequences as well. There is no guarantee that debiasing takes place in the cooling-off period. Further, the costs of a withdrawal right for all consumers will also be borne by “algorithmic consumers” who do not need such a right in the first place. But the return policies of most large businesses today already grant consumers a voluntary withdrawal right.57 Hence, market practice is moving in this direction, and the danger of distortions between different segments of the consumer population should not be overstated.

III. Shaping Preferences

The preceding discussion was directed at discrete transactions that, in one way or another, do not comply with a consumer’s “true” preferences. In this Part, we turn to a different kind of problem that may affect transactions that fully satisfy consumer preferences. The problem arises when those preferences are not the result of a “process of individuation” mastered by the consumer in question but rather the outcome of a fabricated informational sphere, built in a constant feedback loop around preferences voiced at an earlier time.

A. Consuming in the Filter Bubble

Assuming, as we do here, that the algorithm guiding the consumer is fully benign in the sense that it was successfully designed to express and satisfy the true preferences of the consumer, what is the worry? The answer is that the curation of options and the presentation of goods or services that “others like you” happened to buy have serious repercussions for choice itself. The potentially detrimental effects of a curated access to reality were first canvassed in the context of public policy and democratic choice. The term “filter bubble” has become shorthand for the consequences that constant exposure to personalized news may have on the shaping of political preferences.58 When people are limited to curated news streams that reinforce initial political predispositions, public discourse and public opinion change, with repercussions for the outcome of elections.59 The internet user has a certain initial political attitude that the personalized algorithm reacts to by searching for and presenting information that conforms to the initial preference. This process of selective information continues in the following rounds so that a constant “feedback loop” develops that works to deepen the initial political attitude of the user: “Your identity shapes your media, and your media then shapes what you believe and what you care about.”60 Regarding the public sphere, one troublesome effect of constant exposure to information skewed in one direction is a radicalization of political viewpoints.61

The core insight of this debate, captured in the filter bubble metaphor, may be carried over to choices in the marketplace. Here also, exposure to curated information affects “how we think.”62 In the commercial realm, algorithms that provide only information that conforms to preexisting preferences squelch creativity, decrease diversity, and encourage a passive approach to reality.63 The first move of the consumer, be it purchasing a book or ordering a pizza for dinner, determines the offers presented to her at the following stage, when she thinks about the next purchase or the next dinner. In the third round, the algorithm will limit the available options even more precisely and present an even narrower subset of choices—and so on. In effect, the first preferences articulated by the consumer will lead her down a path of ever more personalized and specialized options that keep her in her comfort zone. The natural processes of dealing with challenges to existing preferences and of developing them further come to a standstill.64
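The narrowing dynamic just described can be made concrete with a toy simulation. The sketch below is a deliberately stylized assumption, not a model of any real recommender: once offers are ranked purely by past purchases, the consumer never escapes the handful of categories seeded by her first few choices.

```python
import random
from collections import Counter

def simulate(free_rounds=5, curated_rounds=200, n_categories=20, top_k=3, seed=1):
    """Toy feedback loop: after a few rounds of free browsing, the shop
    only displays the top-k categories ranked by the consumer's past
    purchases, and the consumer picks from what is displayed."""
    rng = random.Random(seed)
    history = Counter()
    for _ in range(free_rounds):        # serendipitous phase: free browsing
        history[rng.randrange(n_categories)] += 1
    for _ in range(curated_rounds):     # curated phase: offers echo past choices
        displayed = [c for c, _ in history.most_common(top_k)]
        history[rng.choice(displayed)] += 1
    return history

history = simulate()
print(f"{len(history)} of 20 categories ever consumed")
```

However many categories the free phase happens to touch, the curated phase adds none: the displayed set is always drawn from the existing history, so the consumer's world can only shrink or stand still.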

As in the political arena, the outcome will likely be an “extreme” consumption pattern. Imagine someone who ordered a guidebook advising on beautiful hiking trails in a particular region. The next time around, she may be offered some basic hiking gear—and like it. In the third round, the equipment offered may be more sophisticated—and again conform to this consumer’s (newly shaped) preference for hiking. Then, the algorithm might introduce equipment needed for more ambitious forms of mountaineering—and eventually offer a package tour to climb Mount Everest. In the end, a consumer who initially articulated a clearly defined interest in a certain activity is led on a path deep into the niche in which that particular activity is held in high regard. Such a consumer may never get the chance to explore the fun involved in completely different categories of activity, such as waterskiing, dancing, or tennis—not least because satisfying existing tastes consumes an ever greater share of this consumer’s income.

These effects will likely become much more serious with the use of personal butlers or home assistants. Home assistants work with a personalized algorithm that spares the consumer the agony of choice by taking past choices as a blueprint for current preferences. If my home assistant knows that I like pizza for dinner, it will probably order one when I ask for food or even, once I get home, without my uttering such a wish. A person who channels much of her demand for food through the home assistant may fail to appreciate the wealth of delicious dishes other than pizza.

The Internet of Things and the shopping bots coming with it will add another layer of consumer lock-in. A washing machine that automatically orders detergent when existing supplies run out is certainly helpful in sparing the consumer the unrewarding chore of going to the store and selecting, from among the many brands, the detergent that best fits her preferences. Most people will care little about ordering the same brand over and over again and be happy with missing out on innovations in the field. However, not all choices are as negligible as this one. Let the consumer turn away from the washing machine and look at the refrigerator. A refrigerator that orders new supplies when the existing ones run out can only follow past preferences. A person primarily fed by the contents of her refrigerator gets more of the same food every day. Without some form of intervention, an initial choice of a high-calorie diet will lead to excessive consumption of calories over a lifetime.

B. The Evaporation of Consumer Welfare

The examples of autonomous washing machines and refrigerators already suggest that algorithms that facilitate or even substitute choice are ambivalent, neither all good nor all bad. Insofar as algorithms help to articulate existing consumer preferences, they promise important savings in transaction costs and improved consumer satisfaction. However, from a dynamic perspective, algorithms come with a downside, namely their preference-shaping power that narrows the range of choices and shrinks the world for consumers.

An analysis of how serious this problem is must begin with the acknowledgement that the shaping of preferences by others is anything but new and is, in fact, inevitable. The assumption that preferences are somehow “created” by an individual out of the blue, in a largely autonomous process, based on congenital predispositions, is obviously wrong. More realistically, preferences have been categorized as endogenous, meaning that they are developed by the individual as a step in a chain of prior decisions.65 While this idea was suggested with a view to environmental protection, it is also relevant for decisions in the market for consumer goods. At all times, consumers have been exposed to information that has been curated to fit their preferences.66 The good old brick-and-mortar stores may be categorized as a traditional filtering project.67 Even a store for unspectacular product categories, such as tools and home improvement materials, will choose its layout and design, and select the goods it stocks, on the basis of the perceived preferences of its customers. Hence, it seems that personalized algorithms do little more than transpose the traditional ways of curation and filtering to the digital world.

While preference shaping itself is nothing new, the novel technology of personalized algorithms raises traditional marketing techniques to a new level of sophistication. In the brick-and-mortar world, consumers choose among the shopping environments to which they expose themselves. These shopping environments are created for groups and cannot target the individual. With the advent of personalized algorithms, the many competing, group-oriented, brick-and-mortar quasi-algorithms of the physical world collapse into individualized algorithms in the digital world. And this alone may be a cause for worry. Many activities remain unproblematic until they are scaled to a degree that causes problems. The use of carbon fuels is a case in point, and personalized shopping may be another. While humans have never been able to take advantage of the full informational richness of the world, they have never before been the object of deliberate and effective preference shaping that targets the individual.68

This problem is particularly acute when the algorithm that shaped the preferences was designed by the party on the other side of the relevant transaction. But even if the designer of the preference-shaping algorithm is not the counterparty to the transaction, it still remains a self-interested actor in the marketplace. It would be naïve to think that the master of the algorithm will act only altruistically, with a view to the well-being of the individuals whose preferences are manipulated.

These effects will be less serious for adults who slowly become familiar with and accustomed to the new technologies. After all, their preferences have developed independently of algorithmic interference, in a world characterized by serendipity. Contrast this with teenagers and future generations who grow up with personalized algorithms already in place. They lack the privilege of a longer period of time for the independent development of mature dispositions. There is no chance to autonomously develop, test, and modify initial dispositions through trial and error. The preference-shaping effect of algorithms will therefore be most pronounced for the young.69

Technically, algorithmic consumption patterns of the kind just described do not restrict freedom of choice. The individual consumer remains the master of her consumption decisions, free to “get what she wants,” that is, to satisfy whatever preferences she may have. The question rather is whether preferences that have been developed from initial choices with the help of a computer program may still be taken seriously. The algorithm-induced shaping of consumer wants undermines the basic assumption of microeconomics, namely that efficiency is based on the optimal satisfaction of individual preferences.70 When the seller has “manufactured” the preferences of the buyer, it is no longer clear that a contract of sale, entered into voluntarily, maximizes the welfare of both parties. The function of the bargained-for contract, to ensure optimal satisfaction of preferences for both sides, becomes moot. And with it, the concept of social welfare, understood as the aggregate of individual well-being, becomes illusory.

But how serious is this problem? After all, nobody forces consumers to shop online, to use digital butlers, or to install refrigerators that automatically replenish supplies. The brick-and-mortar world still exists, so consumers can avoid the risks described above—at least to a certain degree, as physical retail spaces are becoming increasingly “smart” as well. In any event, consumers can disregard the choices presented by a personalized algorithm and choose something different. Consumers may also use digital technology to insulate themselves from algorithmic choices. One example already mentioned is technology that allows the internet user to hide her identity. If the seller cannot identify who is browsing her website, she cannot put the personalized algorithm to work.71

These measures of “civil disobedience” to algorithms are all fine and not even particularly difficult to implement. However, there should be no illusion as to the frequency with which such instruments of self-help will be deployed in practice. Deviating from defaults is costly, and the majority of people shun these costs most of the time. For them, the so-called “effort tax” is prohibitive.72 In the present case, the tax will be particularly large, as consumers would have to deviate from a routine reflecting their own past choices, which have shaped their current preferences. People who have eaten pizza for months or years would have to come up with the resolve to order something else. Netflix subscribers would have to click away the suggestions made by the service in order to explore movie genres previously unknown to them. It will require a fair amount of tolerance for disappointment to leave the trodden path and try something new and unknown. Remember that part of the problem caused by algorithmic choice is that consumers are locked into their comfort zones. Leaving comfort zones is never easy.

Even when measures of self-protection are adopted by consumers, sellers will not remain idle but look for ways to circumvent these measures, again with the help of digital technology. So-called ad blockers that hide embedded advertisements are already available and widely used. Website operators have responded with counter-software that requires users to deactivate their ad blockers.73 Such a digital arms race imposes substantial costs on both parties and thus decreases the pie created by the transaction in question. In addition, measures of self-protection have a regressive effect, as they are mostly used by sophisticated consumers, who have the skills, resolve, self-confidence, and mental ease to employ them.

C. Potential Regulatory Responses

When thinking about potential regulatory responses to reduce the harmful effects of filter bubbles, the imposition of substantive standards that personalized algorithms would have to meet comes to mind. This approach would result in a commercial equivalent of the fairness doctrine that once governed the coverage of political issues by broadcasters in the United States.74 Algorithms would have to comply with the principle of nondiscrimination75 and conform to a serendipity requirement.76 Introducing an element of randomness into the algorithm would help to loosen the grip that past choices exert on present transactions. However, for such a policy to be effective, regulators would need the ability and the resources to investigate and analyze a large and growing number of highly complex algorithms used in the marketplace. Such algorithms are designed with the help of the finest software engineers available. A government agency would most likely be unable to match these skills and to second-guess design choices made by sophisticated private actors, except perhaps for the most egregious violations of the law.
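A serendipity requirement of this kind could, in principle, take a very simple technical form. The sketch below is purely illustrative: the function name, the slot-wise rule, and the value of epsilon are assumptions, not a proposed regulatory standard. With probability epsilon, each personalized slot in a display is replaced by a random item drawn from the whole catalog:

```python
import random

def display_with_serendipity(personalized, catalog, epsilon=0.2, rng=random):
    """For each slot in the personalized display, show a random catalog
    item instead with probability epsilon. A regulator could in theory
    mandate a minimum epsilon; the value here is arbitrary."""
    return [rng.choice(catalog) if rng.random() < epsilon else item
            for item in personalized]

catalog = [f"item-{i}" for i in range(100)]
personalized = catalog[:5]   # what the algorithm would have shown anyway
print(display_with_serendipity(personalized, catalog, rng=random.Random(7)))
```

Even this trivial rule illustrates the enforcement problem discussed above: verifying that a deployed system actually applies such randomness, rather than merely claiming to, would require exactly the auditing capacity regulators are unlikely to have.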

The focus must rather be on simple remedies that help consumers protect themselves without triggering a digital arms race. The standard remedy in the law of consumer contracts, namely disclosure, would not work.77 For whatever disclosure mandates may be worth in other contexts, disclosure cannot protect consumers against personalized algorithms, as everybody is aware of them anyway. Suppliers like Amazon and Netflix tell their customers explicitly and conspicuously that “other users” or “other viewers” found the proposed offers interesting. Consumers may be sleepwalking in a personalized environment, but at least they are aware of the kind of environment they are walking in.

Another idea, often voiced in public discourse, is algorithmic transparency, that is, disclosure of information about the outline and parameters of algorithms.78 It should be resisted for the same reasons as disclosure more generally. Many studies have shown that consumers do not even read the fine print that sets out crucial contract terms, as it requires an investment of time and effort that rational consumers are unwilling to make.79 Obtaining the relevant knowledge about algorithms would require an even greater investment. It makes no sense to force online companies to disclose information that the person supposedly in need of it is unable or unwilling to digest.80

If empowerment of consumers is to work in the real world, it must target earlier stages of the decision-making process. Consumers should be able to choose between a personalized and a nonpersonalized experience, that is, between an “architecture of control,” constructed from past choices, and an “architecture of serendipity” that reflects average choices or follows some other principle of selection and ordering.81 Such a meta-choice remedy would allow the consumer to choose between a personal user profile and digital anonymity.82 Some major players in the online market already provide this option. Google, for example, allows users to opt out of personalized advertising. However, the default is set in favor of personalized advertising, and the website providing the opt-out remedy makes it quite burdensome to actually use it.83

Many individuals will not form a single meta-preference as to the use of algorithms. Imagine a consumer who welcomes an algorithm to ensure a constant supply of, for example, detergent but would equally insist on making nutritional choices herself, without digital assistance. Others will have other meta-preferences that define the boundary between curated and idiosyncratic choice. It seems important that individuals get the chance to act on these meta-preferences and to exercise their meta-choice. The better option therefore is to shift the moment of choice to a stage after a digital shadow of the person has been created and the algorithm stands ready to present the consumer with curated choices. The consumer would retain the option to opt out of personalized shopping by turning off the algorithm for a particular transaction.84 This would allow the consumer to switch back and forth between algorithmic shopping and spontaneous choice.
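The per-transaction opt-out can be pictured as a simple switch on the storefront itself. The sketch below is a minimal illustration under stated assumptions: the class and method names are invented, and random ordering stands in for whatever neutral selection principle a nonpersonalized display would actually use.

```python
import random

class Storefront:
    def __init__(self, catalog, profile):
        self.catalog = catalog      # all available items
        self.profile = profile      # the digital shadow: past purchase counts

    def display(self, personalized=True, k=5, rng=random):
        """Curated view ranks by the consumer's digital shadow; the
        opt-out view shows a randomly ordered sample instead."""
        if personalized:            # "architecture of control"
            ranked = sorted(self.catalog,
                            key=lambda i: self.profile.get(i, 0),
                            reverse=True)
            return ranked[:k]
        sample = list(self.catalog) # "architecture of serendipity"
        rng.shuffle(sample)
        return sample[:k]

shop = Storefront(list(range(50)), profile={0: 9, 1: 7, 2: 5})
print(shop.display())                    # echoes past choices
print(shop.display(personalized=False))  # opt-out for this one transaction
```

The point of the design is that the profile is not deleted when the consumer opts out; it is merely ignored for that query, so she can switch back to curated shopping for the next one.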

This may be a case for regulatory intervention: platforms and other internet companies targeting consumers with personalized algorithms could be required by law to allow consumers to opt out of curated choice. This option should be readily available and not hidden in some arcane corner of the relevant website. Nor should it come with a set of confirmation messages and other sorts of digital paraphernalia that make it more burdensome than necessary to actually opt for a nonpersonalized digital experience.

There is no question that forcing platforms and other providers to allow for noncurated choice interferes with their business models. However, it must be noted that such intervention would not affect the terms of the individual transaction. Nor would it force contracts on businesses that the firms in question did not want. The only thing such regulation would do is force suppliers to let customers make their own choices between the goods and services on visible display rather than soft-forcing specific transactions on these customers. It seems difficult to find anything wrong with such an autonomy-preserving measure.85

IV. Digital Market Failure

Readers who find our account of the dangers inherent in the digital revolution much too alarmist should consider that the new technologies now becoming available not only open up new options for businesses in differentiating prices and shaping transactions. They also dramatically reduce the protective force of the market for consumers. Markets are never perfect, and consumers are never perfectly informed. But under the circumstances that prevailed in the past, ill-informed consumers could often rely on a rather small group of well-informed consumers to move the market in the right direction.86 The most straightforward example is price: every consumer benefits from a comparatively low market price that is the fruit of competition among sellers for the relatively small group of consumers who do engage in price comparisons and search for the lowest price. This protective effect rests on the inability of sellers to distinguish between knowledgeable and ignorant buyers and their inability to determine the reservation price of individual customers. As Part I explains, the inability of sellers to distinguish between buyers in order to offer prices that reflect the individual customer’s knowledge and price sensitivity is waning. Sellers who have perfect knowledge about an individual customer’s information level and preferences can price discriminate to the fullest extent. The market offers no protection at all. Rather, each individual becomes his or her own market.

The waning protective force of the market not only enables sellers to price discriminate. It also facilitates the exploitation of biases (Part II), and it allows sellers to disproportionately profit from shaped preferences (Part III). In traditional markets, sellers do not know the “weak spots” of an individual customer and thus are unable to turn them into “sweet spots” for themselves. In a similar vein, businesses are unable to shape the preferences of customers and, more importantly, are unaware of the preferences an individual customer has. Again, this knowledge will become available with the help of big data analytics. In all three cases, market forces offer no protection, as sellers are no longer confined to transacting on the same terms with anybody within an anonymous crowd of customers willing to purchase their goods and services.

Conclusion

Discussions revolving around the digital revolution and how it will affect human lives often fall into one of two extremes. The utopian extreme paints a bright future in which individuals will be relieved of many meaningless chores of the analog world, freeing up time for whatever is more important in life than, for example, replacing your detergent. The dystopian view looks into another crystal ball, showing a dark future in which individuals are remote-controlled by a few super-powerful internet giants. In this Essay, our aim was to avoid these two extremes in discussing one of the great innovations of recent times, namely big data analytics that has enabled the personalization of transactions. There is no doubt that personalized choice will have beneficial effects for consumers, and these effects should be welcomed.

The bright side of personalization should not, however, conceal its dark side. For the first time in human history, first-degree price discrimination has become feasible. When it is used, it tends to strip consumers of any surplus they might have hoped to gain from a transaction. Second, personalization also enables sellers to deliberately and systematically exploit an individual’s biases and weaknesses for profit. Third, personalization algorithms greatly reduce the diversity of information the consumer is exposed to. As a result, existing preferences are reinforced over and over, and individuals increasingly lead narrow, impoverished lives. Taken together, siphoning rents, exploiting biases, and shaping preferences raise the specter of consumers being “brought down by algorithms,” losing transaction surplus, engaging in welfare-reducing transactions, and increasingly being trapped in a narrow life—with serious negative consequences for fundamental societal values, such as efficiency, justice, and individual autonomy and agency.

Precisely because the balance of benefits and harms is mixed for personalized algorithms, it would be poor advice to ban them or to severely restrict the use of user data—for instance, on grounds of privacy. On the other hand, complete inaction would be misguided too. While it is true that consumers could protect themselves using software and hardware tools, self-help of this sort is likely to lead to an inefficient digital arms race. It would also have a regressive effect on certain consumers, raising distributional concerns. The least savvy, patient, and forward-looking consumers would probably lose out, as they are the most likely to miss the self-help opportunities that are available. Hence, the problem with self-help remedies is not that they do not exist but rather that they would diminish the pie for everyone, and most seriously for the disenfranchised members of society.

In addition, the competitive market, the one institution that used to protect ignorant or otherwise vulnerable consumers from exploitation by their respective counterparties, will no longer be able to serve this function in a world of algorithmic choice. The veil of the anonymous customer behind which vulnerable consumers could hide for protection will be lifted, and each consumer will be exposed, “naked” in her own market. The consequences that this development will have for the law of consumer protection are as yet unexplored.

Against this background, this Essay argues for light-touch regulation that would neither limit the use of personalized algorithms nor impose a framework for their design. What it would do, however, is to afford consumers meaningful exit strategies. Disclosure mandates, requiring businesses to inform consumers about the use of personalized algorithms, are meaningful only with respect to the siphoning rents problem. They are not effective with respect to the problems of exploiting consumers’ biases and shaping their preferences. What is required are opt-out rights that are offered at the right moment and that are easy to use. Thus, consumers should be able to opt out of personalized pricing and insist on an offer at market price, they should be able to revoke transactions entered into under the influence of sales algorithms at a moment of vulnerability, and they should retain the option to go shopping on the internet with their sunglasses on—that is, anonymously.

In fact, a general “right to anonymity” would not only guard against the perils associated with the shaping of preferences but also protect against price discrimination and the algorithm-driven exploitation of individual biases. In this sense, a “right to anonymity” in the digital world appears to be the macrosolution to the microproblems discussed in this Essay. If it were recognized and protected by the law, it might be possible to reap (most of) the benefits of personalization while avoiding (most of) its pitfalls. And this is as much as one can expect.

  • 1See Laika Satish and Norazah Yusof, A Review: Big Data Analytics for Enhanced Customer Experiences with Crowd Sourcing, 116 Procedia Comp Sci 274, 278 (2017); Seshadri Tirunillai and Gerard J. Tellis, Extracting Dimensions of Consumer Satisfaction with Quality from Online Chatter: Strategic Brand Analysis of Big Data Using Latent Dirichlet Allocation *3–4 (unpublished study, Mar 2014), archived at
  • 2For evidence of how more choices can lead to decision-making paralysis, see Barry Schwartz, The Paradox of Choice: Why More Is Less 19–20 (HarperCollins 2004).
  • 3See Ariel Porat and Lior Jacob Strahilevitz, Personalizing Default Rules and Disclosure with Big Data, 112 Mich L Rev 1417, 1471–76 (2014).
  • 4See Kfir Eliaz and Ran Spiegler, Contracting with Diversely Naive Agents, 73 Rev Econ Stud 689, 690 (2006).
  • 5See Michael D. Grubb, Overconfident Consumers in the Marketplace, 29 J Econ Perspectives 9, 12–13 (2015); Xavier Gabaix and David Laibson, Shrouded Attributes, Consumer Myopia, and Information Suppression in Competitive Markets, 121 Q J Econ 505, 507–11 (2006); Stefano DellaVigna and Ulrike Malmendier, Contract Design and Self-Control: Theory and Evidence, 119 Q J Econ 353, 389 (2004).
  • 6See generally Michal S. Gal and Niva Elkin-Koren, Algorithmic Consumers, 30 Harv J L & Tech 309 (2017).
  • 7See Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 68–71 (Crown 2016).
  • 8See Ariel Ezrachi and Maurice E. Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy 85–89 (Harvard 2016).
  • 9See Ariel Ezrachi and Maurice E. Stucke, The Rise of Behavioural Discrimination, 37 Eur Competition L Rev 485, 485–86 (2016).
  • 10See Akiva A. Miller, What Do We Worry about When We Worry about Price Discrimination?—The Law and Ethics of Using Personal Information for Pricing, 19 J Tech L & Pol 41, 48–54 (2014).
  • 11Ariel Ezrachi and Maurice E. Stucke, Is Your Digital Assistant Devious? *16 (University of Tennessee Knoxville College of Law Legal Studies Research Paper No 304, Sept 2016), archived at
  • 12Aniko Hannak, et al, Measuring Price Discrimination and Steering on E-commerce Websites *317 (Proceedings of the 2014 Conference on Internet Measurement), archived at
  • 13See Jakub Mikians, et al, Detecting Price and Search Discrimination on the Internet *5–6 (Proceedings of the 11th Association of Computing Machinery Workshop on Hot Topics in Networks, Oct 2012), archived at
  • 14See Benjamin Reed Shiller, First-Degree Price Discrimination Using Big Data *4 (unpublished paper, Jan 19, 2014), archived at
  • 15The German airline Lufthansa, for example, has recently announced that it will engage in first-degree price discrimination. See Moritz Stoldt, Lufthansa Führt Dynamische Flugpreise ein (reisetopia, Jan 10, 2019), archived at
  • 16See, for example, Timothy J. Richards, Jura Liaukonyte, and Nadia A. Streletskaya, Personalized Pricing and Price Fairness, 44 Intl J Indust Org 138, 140 (2016); Kelly L. Haws and William O. Bearden, Dynamic Pricing and Consumer Fairness Perceptions, 33 J Consumer Rsrch 304, 306–07 (2006).
  • 17See Oren Bar-Gill, Algorithmic Price Discrimination When Demand Is a Function of Both Preferences and (Mis)Perceptions, 86 U Chi L Rev 217, 240–42 (2019).
  • 18See Tobias Regner and Gerhard Riener, Privacy Is Precious: On the Attempt to Lift Anonymity on the Internet to Increase Revenue, 26 J Econ & Mgmt Strategy 318, 332 (2017).
  • 19See Part I.C.
  • 20See Paul Belleflamme and Wouter Vergote, Monopoly Price Discrimination and Privacy: The Hidden Cost of Hiding, 149 Econ Letters 141, 142 (2016).
  • 21See Shiller, First-Degree Price Discrimination Using Big Data at *29 (Table 3) (cited in note 14).
  • 22See Ben Shiller and Joel Waldfogel, Music for a Song: An Empirical Look at Uniform Song Pricing and Its Alternatives *30, 42 (National Bureau of Economic Research Working Paper No 15390, Oct 2009), archived at
  • 23Gal and Elkin-Koren, 30 Harv J L & Tech at 310 (cited in note 6). Super-savvy strategic consumers might also try to trick the algorithms by manipulating their digital profile, etc.
  • 24ShadowBid (2017), archived at
  • 25See Mark Klock, Unconscionability and Price Discrimination, 69 Tenn L Rev 317, 375–76 (2002).
  • 26The origin of this doctrine dates back to the laesio enormis of Roman law. See Frier, et al, eds, 2 The Codex of Justinian bk IV § 44.2 at 997 (Cambridge 2016). According to laesio enormis, the seller of a parcel of land could rescind the contract if the purchase price was lower than half of the real value. See Horst Eidenmüller, Justifying Fair Price Rules in Contract Law, 11 Eur Rev Contract L 220, 221 (2015).
  • 27Of course, there are certain technical regulatory issues that would need to be addressed, in particular the precise requirements that trigger a disclosure duty—such as the necessary level of personalization—and its exact content. However, these issues are solvable.
  • 28Online Platforms and the Digital Single Market *76 at ¶ 291 (House of Lords Select Committee on European Union, Apr 20, 2016), archived at
  • 29European Parliament and Council Directive on Unfair Business-to-Consumer Commercial Practices in the Internal Market 2005/29/EC (May 11, 2005), archived at
  • 30See Part I.B.
  • 31See M. Ryan Calo, Against Notice Skepticism in Privacy (and Elsewhere), 87 Notre Dame L Rev 1027, 1034–47 (2012) (discussing “visceral notice”).
  • 32See Michael S. Gal, Algorithmic Challenges to Autonomous Choice *32 (unpublished paper, May 20, 2017), archived at
  • 33This is of course a very controversial idea, and we do not seek to take sides in the debate about “personal data as private property.” We use this idea only to explain consumers’ negative reaction to personalized pricing and the value judgment underlying it.
  • 34The literature is vast. For two contributions by Nobel Prize winners, see generally Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux 2011); Richard H. Thaler, The Winner’s Curse: Paradoxes and Anomalies of Economic Life (Free Press 1992).
  • 35See Gökçe Sargut and Rita Gunther McGrath, Learning to Live with Complexity, 89 Harv Bus Rev 68, 70 (2011).
  • 36See generally David Laibson, Golden Eggs and Hyperbolic Discounting, 112 Q J Econ 443 (1997).
  • 37See Amos Tversky and Daniel Kahneman, Availability: A Heuristic for Judging Frequency and Probability, 5 Cognitive Psychology 207, 219–20 (1973).
  • 38See Daniel Kahneman and Amos Tversky, Prospect Theory: An Analysis of Decision under Risk, 47 Econometrica 263, 268–69 (1979).
  • 39Ryan Calo, Digital Market Manipulation, 82 Geo Wash L Rev 995, 1007–12, 1018 (2014) (emphasis omitted).
  • 40For a comprehensive account of behavioral addiction created by product marketing and modern consumerist culture, see generally Adam L. Alter, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked (Penguin 2017).
  • 41See generally Dongseong Choi and Jinwoo Kim, Why People Continue to Play Online Games: In Search of Critical Design Factors to Increase Customer Loyalty to Online Contents, 7 CyberPsychology & Behav 11 (2004).
  • 42Alter, Irresistible at 205–07 (cited in note 40). “Cliffhangers” exploit our desire to finally resolve an open question or issue: we are “wired for closure.” Id at 192.
  • 43Id at 208–12. The status quo bias comes into play because a further purchase is set as the “sticky” default. See id at 207–09.
  • 44Ezrachi and Stucke, Is Your Digital Assistant Devious? at *16 (cited in note 11).
  • 45See Illah Reza Nourbakhsh, Robot Futures 12–15 (MIT 2013) (discussing the “deadly accurate manipulation of desire”).
  • 46John Armour, et al, Principles of Financial Regulation 214 (Oxford 2016). For a comprehensive analysis of consumer credit card contracts, see Oren Bar-Gill, Seduction by Contract: Law, Economics, and Psychology in Consumer Markets 51–115 (Oxford 2012).
  • 47For illustrative applications, see generally Jon Walker, Artificial Intelligence Applications for Lending and Loan Management (Techemergence, Mar 27, 2018), archived at
  • 48See Kenneth J. Arrow, Social Choice and Individual Values 25–26, 28–30 (Yale 2d ed 1963) (arguing that individuals are free to choose among alternatives despite the imposition of a social welfare function).
  • 49See Part I.C.
  • 50See Debiasing (Lee Merkhofer Consulting), archived at
  • 51See Gal, Algorithmic Challenges to Autonomous Choice at *9 (cited in note 32).
  • 52For further discussion, see Part I.B.
  • 53See Yannis Bakos, Florencia Marotta-Wurgler, and David R. Trossen, Does Anyone Read the Fine Print? Consumer Attention to Standard-Form Contracts, 43 J Legal Stud 1, 32 (2014); Omri Ben-Shahar and Carl E. Schneider, More than You Wanted to Know: The Failure of Mandated Disclosure 67–70 (Princeton 2014).
  • 54See Tobacco: Data and Statistics (World Health Organization Regional Office for Europe, 2018), archived at (noting that, according to WHO data, 28 percent of European women and men aged fifteen and above smoked in 2013).
  • 55For an overview of European contract law and withdrawal rights, see Hein Kötz, 1 European Contract Law: Formation, Validity, and Content of Contracts; Contract and Third Parties 191–95 (Oxford 1997) (Tony Weir, trans).
  • 56See Horst Eidenmüller, et al, Towards a Revision of the Consumer Acquis, 48 Common Mkt L Rev 1077, 1096–1107 (2011); Horst Eidenmüller, Why Withdrawal Rights?, 7 Eur Rev Contract L 1, 7–18 (2011).
  • 57See, for example, Walmart Policies and Guidelines (Walmart, 2018), archived at; Return Policy (CVS Pharmacy, 2018), archived at
  • 58See generally Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (Penguin 2011). See also Joseph Turow, The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth 8 (Yale 2012).
  • 59See Cass R. Sunstein, #republic: Divided Democracy in the Age of Social Media 1–26 (Princeton 2017); Cass R. Sunstein, Republic.com 2.0 3–13 (Princeton 2007); Cass R. Sunstein, Republic.com 3–16 (Princeton 2001).
  • 60Pariser, The Filter Bubble at 125 (cited in note 58).
  • 61See Cass R. Sunstein, Going to Extremes: How Like Minds Unite and Divide 23–25 (Oxford 2009); Cass R. Sunstein, #republic at 59–97 (cited in note 59).
  • 62Pariser, The Filter Bubble at 76 (cited in note 58).
  • 63See id at 94; Sunstein, #republic at 7 (cited in note 59).
  • 64See Cass R. Sunstein, Choosing Not to Choose: Understanding the Value of Choice 161–63 (Oxford 2015).
  • 65See Cass R. Sunstein, Endogenous Preferences, Environmental Law, 22 J Legal Stud 217, 238 (1993).
  • 66See Sunstein, #republic at 17 (cited in note 59) (noting that “[f]iltering is inevitable”).
  • 67See Sunstein, Choosing Not to Choose at 159 (cited in note 64) (pointing out that personalized default rules existed prior to the digital age, for example, as fruits of accumulated knowledge about the preferences of a loved one).
  • 68See Sofia Grafanaki, Drowning in Big Data: Abundance of Choice, Scarcity of Attention and the Personalization Trap, A Case for Regulation, 24 Richmond J L & Tech ¶¶ 12–13, 29, 59, 66 (2017).
  • 69See Gal, Algorithmic Challenges to Autonomous Choice at *30–31 (cited in note 32).
  • 70See Gal and Elkin-Koren, 30 Harv J L & Tech at 311 (cited in note 6); Gal, Algorithmic Challenges to Autonomous Choice at *3–5 (cited in note 32); Yochai Benkler, Degrees of Freedom, Dimensions of Power, 145 Daedalus 18, 23–24 (2016).
  • 71See Part I.C.
  • 72Sunstein, Choosing Not to Choose at 34–37 (cited in note 64).
  • 73See Google Embraces Ad-Blocking via Chrome (The Economist, Feb 17, 2018), archived at
  • 74See Sunstein, #republic at 84–85 (cited in note 59); Grafanaki, 24 Richmond J L & Tech at ¶¶ 76–80 (cited in note 68).
  • 75See note 9 and accompanying text.
  • 76See Engin Bozdag, Bias in Algorithmic Filtering and Personalization, 15 Ethics & Info Tech 209, 218, 222 (2013).
  • 77But see Calo, 82 Geo Wash L Rev at 1044 (cited in note 39).
  • 78See Omer Tene and Jules Polonetsky, Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw J Tech & Intel Prop 239, 268–72 (2013). See also Gal, Algorithmic Challenges to Autonomous Choice at *31 (cited in note 32).
  • 79See note 54 and accompanying text.
  • 80Consumers might receive some help from intermediaries who explain to them—in nontechnical terms—how the preference-shaping is achieved. But then again, the question is whether any new information consumers receive is of any interest to them. We think it is not.
  • 81See Sunstein, Choosing Not to Choose at 110, 161–62 (cited in note 64) (coining and explaining the terms “architecture of control” and “architecture of serendipity”). See also Sunstein, #republic at 5 (cited in note 59).
  • 82See Calo, 82 Geo Wash L Rev at 1047 (cited in note 39) (discussing the ability to “opt out of the entire marketing ecosystem”); Ezrachi and Stucke, Is Your Digital Assistant Devious? at *19–20 (cited in note 11) (discussing the ability to “opt out of personalized ads or sponsored products”); Gal, Algorithmic Challenges to Autonomous Choice at *32 (cited in note 32) (discussing use of a “stop button”).
  • 83See Lauren E. Willis, When Nudges Fail: Slippery Defaults, 80 U Chi L Rev 1155, 1185–91 (2013).
  • 84For a similar idea of placing individuals in control of their “digital personae,” see Tene and Polonetsky, 11 Nw J Tech & Intel Prop at 266 (cited in note 78). See also generally Doc Searls, The Intention Economy: When Consumers Take Charge (Harvard Business 2012).
  • 85The opt-out right discussed in this Section would affect only microtargeted ads and not ads in general. Hence, the core business model of internet giants is not in danger—they would just be less profitable than they are today.
  • 86See Alan Schwartz and Louis L. Wilde, Intervening in Markets on the Basis of Imperfect Information: A Legal and Economic Analysis, 127 U Pa L Rev 630, 640–47 (1979).