Artificial intelligence (AI) has the potential to alter the interpretation of the duties of care, skill, and diligence. As these duties form the foundation for the business judgment rule (BJR) and equivalent provisions, the development of AI is also expected to impact the BJR. In an increasingly data-driven business environment, the requirement to gather sufficient information before making a decision, and to use that information in a valid manner, is gaining importance. The changes are both quantitative (how much information to collect) and qualitative (which types of information to collect). They also relate to the methods of decision-making, including the role of measurement and statistics relative to intuition.

TABLE OF CONTENTS

I. Introduction
II. AI-Powered Information Gathering
III. The AI Judgment Rule
IV. The Requirements of the AI Judgment Rule
V. The Scope of the AI Judgment Rule
VI. Conclusion

I. Introduction

Board members are required to make informed and reasonable decisions in the best interest of the company, and their failure to do so is grounds for liability. In most jurisdictions, however, corporate law shields board members from liability for poor business decisions under the business judgment rule (BJR) or its functional equivalent. The rule is based on the principle, articulated in Robinson v. Pittsburgh Oil Ref. Co. (Del. Ch. 1924), that “directors of [a] corporation . . . are clothed with [the] presumption which the law accords to them of being [motivated] in their conduct by a bona fide regard for the interests of the corporation whose affairs the stockholders have committed to their charge.” As a general principle, therefore, the courts will not review directors’ business decisions.

The BJR aims to shield directors and officers from personal liability for their decisions as long as they act in good faith, with reasonable care, and in the best interests of the company. In the United States and Germany, the bar to liability is explicit; in the United Kingdom, France, and other jurisdictions, the bar derives from procedural or evidentiary hurdles to recovery. In any case, the BJR is subject to some conditions and criteria.1 Directors and officers must make decisions honestly and with integrity, that is, without any personal motives or conflicts of interest that could influence their judgment. Moreover, they are expected to exercise a certain level of diligence and prudence in their decision-making process. This means that they should gather relevant information, consider various alternatives, and make informed decisions based on the available facts. Finally, directors and officers are obligated to act in the best interests of the company and its shareholders. This typically involves promoting the success of the company, which includes its sustainability, as well as the interests of various stakeholders. If directors and officers meet these criteria when making decisions, courts will generally defer to their judgments, even if the decisions ultimately result in negative outcomes for the company. However, the BJR does not shield directors and officers from liability in cases of fraud, self-dealing, or gross negligence. Overall, the BJR sets procedural requirements that ensure a level of trust and confidence in corporate decision-making while also providing directors and officers with the freedom to exercise their discretion in managing the affairs of the company.

Our paper starts from the hypothesis that artificial intelligence (AI) has the potential to alter the interpretation of the duties of care, skill, and diligence. As these duties form the foundation for the BJR and equivalent provisions, the development of AI is also expected to impact the BJR. More specifically, we show that in an increasingly data-driven business environment, the requirement to gather sufficient information before making a decision, and to use that information in a valid manner, is gaining importance. The changes are both quantitative (how much information to collect) and qualitative (which types of information to collect). They also relate to the methods of decision-making, including the role of measurement and statistics relative to intuition.2

II. AI-Powered Information Gathering

AI has opened the door to analyzing unprecedentedly vast amounts of data. These developments may create threats to our individuality and agency as citizens,3 though the nature and scale of their effects remain hypothetical. Some less drastic but also more plausible practical impacts can, however, already be documented, including in the field of corporate law. It is hardly surprising that this branch of law lends itself to observations about the impact of AI on decision-making: a culture of gathering information and analyzing it, often through quantitative and probabilistic methods, has long formed the basis of rational corporate decision-making, whether at the level of the shareholders’ meeting or, in larger companies, the board of directors, which is the focus of the present paper. However, AI appears to have ushered in a new era, for two reasons.

One is the quantity of data that can be analyzed and, therefore, the validity and reliability of the results derived from that data. The second is the nature of the analyses, forecasting, and simulations that can be performed on the extracted information; machine learning and deep learning represent turning points. For example, a Chicago-based company that provides software for managing operational risk has recently developed a generative AI–powered assistant that sifts large volumes of operational risk data, identifies the elements relevant to corporate decision-makers, and generates executive summaries, instant insights, intelligent recommendations, and best-practice improvements. As companies continue to utilize and develop AI, business decisions, ongoing risk assessment, and reporting are increasingly data-driven, especially at the strategic level.

III. The AI Judgment Rule

Under the U.S. formulation of the BJR, when directors and officers make business decisions without being “reasonably informed,” they violate “the duty of directors to act on an informed basis, . . . [which] forms the duty of care element of the business judgment rule.” On that basis, they lose the protection of the BJR. There is little doubt that AI can provide information superior to purely human expertise. The Coca-Cola Company, for instance, has fundamentally improved its marketing strategies based on data and AI. To decide on marketing, advertisements, and social media strategies to target Coke lovers around the world, the company uses big data analytics, image recognition, and AI. Because of these data-driven marketing decisions, the company reported a 189% uplift in sales. Data-driven technologies like AI can thus provide information that is useful in the decision-making process and that purely human expertise would not be able to produce.4 Directors should accordingly be expected to benchmark their business strategies against an AI’s predictions and to utilize AI-assisted assessments of risks and benefits.

Given these potential gains in the quality of corporate decision-making, some corporate use of digital technologies seems necessary to meet the duty to act on an informed basis. We call this foreseeable evolution, under which corporate law comes to require decision-makers to make use of AI, the AI judgment rule.

It is worth noting that in jurisdictions without an explicit BJR, the result will be similar as long as the lack of information amounts to, or supports a finding of, wrongful conduct.5 More precisely, most corporate law jurisdictions require corporate decision-makers to act on an informed basis. For example, Section 93(1) of the German Stock Corporation Act (AktG) establishes an obligation to obtain information by stating that members of the board must make business decisions “on the basis of appropriate information.” In our digital age, how can this “appropriateness” standard be met without resorting to digital technologies? Why would decision-makers forgo large amounts of data processed faster and in greater depth than humans could ever manage? As long as this data is relevant, it enables directors and officers to make more thoroughly informed business decisions.

IV. The Requirements of the AI Judgment Rule

We have established that, given the advantages of AI in analyzing large amounts of data, decisions made without its support may no longer be considered reasonably informed. Meeting this informational threshold is, in turn, a requirement for retaining the protection granted by the BJR. Nevertheless, to benefit from the BJR’s safe harbor, the board does not have to exhaust all available sources of information; it may weigh the costs of obtaining information against its benefits: “[T]he amount of information that it is prudent to have before a decision is made is itself a business judgment of the very type that courts are institutionally poorly equipped to make.”6

Unlike the business judgment itself, however, this weighing is subject to judicial scrutiny. Most corporate law jurisdictions stipulate minimum requirements for gathering information. For instance, in Smith v. Van Gorkom (Del. 1985), the finding that directors’ duties had been breached was based on insufficient preparation for the decision, not on its substance: the court held that the decision “was not the product of an informed business judgment.” The precise standards of care differ across jurisdictions,7 but the more accurate and affordable digital technologies and big data8 become, the more widespread their use in business will be, making it ever more difficult for directors to justify not taking advantage of such technologies, even if the appropriate intensity, scope, and reach of that use remain subject to debate.

The AI judgment rule can be all the more justified if sector-specific rules explicitly formulate corresponding requirements in their codes of conduct. For example, § 25a of the German Banking Act (KWG) does so for financial institutions.

As a corollary, the information requirement may also gain importance at the procedural level. This requirement has traditionally not been central to court decisions applying the BJR.9 However, the developments prompted by the availability of AI, outlined above, have the potential to increase the relative importance of the duty of care in applying the BJR. Beyond our observation above that utilizing big data findings will come to be considered a reasonable expectation, it is possible to speculate about other new AI-enabled requirements. For instance, courts might increasingly expect directors to harness AI to reach more rational decisions. Not only would directors be expected to access information generated by big data analysis, but they might also be required to enlist AI to correct for well-known biases such as confirmation bias and hyperbolic discounting, or for specific biases to which they are individually prone.

Similar new standards may well apply to the monitoring system set up to discharge the duty of oversight. As established in In re Caremark International Inc. Derivative Litigation (Del. Ch. 1996) and Marchand v. Barnhill (Del. 2019), the board must make a good faith attempt to configure a system that supplies it with the information necessary to respond to risks and issues.10 What does this mean in an AI age? While a detailed analysis is beyond the scope of this paper,11 it is not hard to see how the corporate use of AI can add to the list of risks the board must manage (including compliance with AI-related regulations). Additionally, it is clear that AI can augment the continuous monitoring systems that directors are expected to put in place. AI technologies may, for instance, provide predictions about the probability of infringements or sophisticated surveillance of employees’ behavior.

V. The Scope of the AI Judgment Rule

A question that is rarely discussed relates to the conditions under which data-driven insights and predictive analytics are relevant to, as well as usable by, decision-makers. The answer directly informs the appropriateness or reasonableness of AI-enabled information and, therefore, the type of information required in the context of the BJR or its equivalent.

Risky environments differ from uncertain ones in this respect. While probabilities and predictions are meaningful inputs to decisions under risk, this is not the case when there is no relevant data and limited opportunity to learn from the past. In connection with the hiring of executives, for example, research shows that 70% of the variance in performance remains unexplained by predictive models. The corollary is that, in such matters, decisions must rest on other grounds to be reasonable. Overfitting noisy data represents another danger and induces a false sense of security.12 The now well-recognized occurrences of AI “hallucinations” embody yet another limit to the reliability of AI beyond well-defined and controlled usages. Such aporias underline the place to be reserved for noncomputational grounds for decision-making and support the use of intuition for some corporate decisions. Looking to the future, while AI has pushed frontiers, an open question pertains to the domain of calculability and to the parts, if any, of human experience, in the corporate context in particular, that cannot be reduced to calculation. Against this background, competition among firms may well intensify on these matters, which, as computational power asymptotically equalizes across firms, will increasingly be recognized as highly differentiating.
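To make the overfitting danger concrete, consider a minimal sketch in Python; it is our own illustration, and the sample sizes, noise level, and polynomial degrees are assumptions chosen for exposition rather than data drawn from any cited study. A flexible model fitted to a small, noisy sample can look near-perfect in sample while predicting fresh data worse than a simple model, which is precisely the false sense of security described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sample(n):
    """Draw n observations of a simple linear relationship plus noise."""
    x = rng.uniform(-1, 1, n)
    return x, x + rng.normal(0, 0.3, n)  # true signal is linear; the rest is noise

x_train, y_train = noisy_sample(15)   # the small sample available to the decision-maker
x_test, y_test = noisy_sample(1000)   # fresh data the model has never seen

for degree in (1, 12):
    # Fit a polynomial of the given degree to the noisy training sample.
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: in-sample MSE {train_mse:.3f}, out-of-sample MSE {test_mse:.3f}")
```

Typically, the degree-12 polynomial reports a much lower in-sample error than the straight line yet a higher out-of-sample error: it has memorized the noise. A board relying only on the in-sample figure would overestimate the model’s reliability.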

The practical implication is that additional expertise will also be needed to navigate the various tools available and to determine when to use which one for optimal results. It may, for instance, be useful to remind corporate decision-makers of the strength of statistical machines at solving well-defined problems, as well as of their weakness at defining which problems must be solved in the context of a complex corporate situation. “Deep artificial neural networks are statistical machines that analyse correlations between pattern of pixels or other inputs, and they work best in stable, well-defined worlds [where large amounts of data are available]. Yet the more ill-defined a problem is, and the more uncertainty exists, the less successful statistical machines are.”13 Human behavior, notably, is a key source of uncertainty: algorithms predicting attraction to romantic partners or crime recidivism do not perform better than laypeople.14

To prevent a reductionist perspective, companies may develop a culture whereby corporate board members are invited—and, for more impact, incentivized—to make use of different types of rationality: science and reason,15 but also intuition and imagination.16 As already stressed, big data presupposes a set of data to analyze, which is typical of the risk management approach. Where no such data set exists, for example, in the face of unprecedented geopolitical developments affecting the value chain, another approach is warranted. Such alternative methods may, for instance, rely on intuitions, understood as assessments that appear quickly in one’s consciousness and are supported by a feeling grounded in long experience, while the underlying rationale remains unconscious.17

Another practical issue, and a possible limit to the AI judgment rule, concerns the dynamic between humans and AI: how to work with AI in “co-intelligence,”18 whether AI tools assist, advise, or function autonomously. Work on this matter remains limited, but some empirical and experimental studies show that humans struggle to combine AI-based information with more traditional reasoning and tend to rely fully on AI-generated advice, especially for number-heavy questions. Targeted training of directors and officers is therefore necessary if AI-generated information is to be used productively. Such training also promotes the independence expected of board members, whether to meet the fiduciary duty of independence or, in other jurisdictions, to minimize the risk of engaging in negligent conduct.

VI. Conclusion

From a comparative perspective, the BJR is not uniform across jurisdictions, though its board-enabling function can be traced in most developed countries. Our paper contributes to the early assessment of the impact of AI on corporate law across various jurisdictions. It analyzes the grounds supporting an AI judgment rule and illustrates some of its characteristics in the context of the AI-augmented duty of care.

* * *

Professor Geneviève Helleringer is a faculty member at ESSEC Business School-Paris and the University of Oxford.

Professor Florian Möslein is the Director of the Institute for Law and Regulation of Digitalisation at the Philipps-University Marburg.

  • 1. See generally Stephen A. Radin, The Business Judgment Rule (6th ed. 2009).
  • 2. See Iain McGilchrist, The Matter with Things: Our Brains, Our Delusions, and the Unmaking of the World 777 (2021) (“No one of [science, reason and intuition] can, on its own, be relied on, because of limitations in the scope, kind and degree of knowledge each is capable of offering; but . . . each has something valuable to contribute.”).
  • 3. See generally David Runciman, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs (2023). The blueprint for negotiating challenges to individual autonomy, Professor David Runciman believes, has been established over several centuries by the related threats from state and corporate power. The “singularity”—the hypothetical future point in time at which technology growth becomes uncontrollable and irreversible—would really be the “second singularity,” Runciman argues. The first singularity came with the age of Enlightenment, with our ability to “imagine what it would be like to organize collective enterprises as though they had the durability of machines.” Runciman likens the idea of government to an algorithm. The Leviathan of state—or of Google or Meta—is an expression of our collective selves without a soul or a conscience. In its ideal formulation, it offers continuity and shared purpose; when it goes rogue, the “man-made monster” has the capacity to exaggerate all our destructive failings. Experience teaches us how this story ends: “They are meant to work for us, but it is already possible to imagine we will end up working for them.”
  • 4. See Florian Möslein, Robots in the Boardroom: Corporate Law and Artificial Intelligence, in Research Handbook on the Law of Artificial Intelligence 649, 661 (Woodrow Barfield & Ugo Pagallo eds., 2018).
  • 5. See, e.g., Cede & Co. v. Technicolor, Inc., 634 A.2d 345, 367 (Del. 1993) (“The duty of directors of a company to act on an informed basis, as that term has been defined by this Court numerous times, forms the duty of care element of the business judgment rule.”).
  • 6. In re RJR Nabisco, Inc. S’holders Litig., 1989 WL 7036, at *19 (Del. Ch. 1989).
  • 7. For detail and further references, see Möslein, supra note 4.
  • 8. Cf. generally Roland Müller, Digitalization Decisions at the Board Level, in Governance of Digitalization 43 (Michael Hilb ed., 2017).
  • 9. Cf. Aronson v. Lewis, 473 A.2d 805, 812 (Del. 1984) (holding that the BJR provides a presumption that the directors or officers “of a corporation acted on an informed basis, in good faith and in the honest belief that the action taken was in the best interests of the company”).
  • 10. See Marchand v. Barnhill, 212 A.3d 805, 824 (Del. 2019) (“Caremark . . . require[s] that a board make a good faith effort to put in place a reasonable system of monitoring and reporting about the corporation’s central compliance risks.”).
  • 11. See Geneviève Helleringer & Florian Möslein, The Digital Duty of Oversight (forthcoming 2025) (on file with authors).
  • 12. See generally Daniel Kahneman, Olivier Sibony & Cass R. Sunstein, Noise: A Flaw in Human Judgment (2021).
  • 13. Gerd Gigerenzer, The Intelligence of Intuition 82 (2023).
  • 14. See generally Gerd Gigerenzer, How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms (2022).
  • 15. See McGilchrist, supra note 2, at 47.
  • 16. Id.
  • 17. Gigerenzer, supra note 13, at 3.
  • 18. See generally Ethan Mollick, Co-Intelligence: Living and Working with AI (2024).