AI In The C-Suite: Rethinking Director Reliance Under DGCL § 141(e) In The Age of Algorithms
By: Chelsea Marie Mojica

We now live in an Age of Algorithms where algorithmic machines are seen as “a kind of glue binding the world together through reliable pathways of mathematics and symbolic logic.”[1] With just a push of a button, artificial intelligence (“AI”) can mimic human functions, gather data and make a prediction, and generate various outputs.
Although many believe that modern AI began its development in the mid-1900s,[2] AI use has significantly increased within the past few years.[3] Notably, in 2022, a then-small research laboratory, now famously known as OpenAI, released the first version of ChatGPT, an AI chatbot that can answer questions, draft emails, and do nearly everything in between.[4] In 2025, OpenAI’s CEO Sam Altman announced that “more than 800 million people use ChatGPT every week.”[5] Professionals are using AI in various ways. Among the many users are UK judges, who may use ChatGPT as a supplement when writing judgments.[6] AI use can also go badly wrong. In Mata v. Avianca, two lawyers, Peter LoDuca and Steven A. Schwartz, “submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”[7] As a result, the two attorneys were sanctioned and ordered to pay a $5,000 penalty.[8]
Given the rapid evolution and integration of AI in corporations, its ramifications on corporate fiduciary decision-making will likely become substantive and pervasive.[9] However, the exact effects of AI use on corporate fiduciary decision-making and liability within the C-suite remain unknown.[10] Currently, Delaware fiduciary law does not adequately address such implications of AI in corporate decision-making.
Delaware Corporate Law Background
Fiduciary duty is a foundational concept in Delaware corporate law.[11] Corporate directors are deemed to be fiduciaries of the corporation on behalf of shareholders as beneficiaries, with:
The common law elements of a fiduciary relationship: (1) the shareholder is a party which owns property, namely the shares of a corporation; (2) the shareholder puts their trust in a corporate manager, i.e., a director or officer of the corporation, to provide services to manage that property for the benefit of the shareholder; (3) the shareholder is relying on the corporate manager’s skill and judgment, and thus is dependent or subservient to the corporate manager’s decision making while providing those services; and (4) the corporate manager accepts that role and obligation.[12]
As corporate fiduciaries, directors and officers have two major obligations: the duty of care and the duty of loyalty. Broadly speaking, the duty of care requires a corporate fiduciary to “inform themselves prior to making a business decision of all material information reasonably available to them.”[13] The duty of loyalty obligates the corporate fiduciary to act in good faith and to always place the interests of the shareholders above the fiduciary’s own, particularly by avoiding any conflict of interest with the corporation and its shareholders.[14]
To shield directors, Delaware has the Business Judgment Rule (“BJR”).[15] As a general principle of the BJR, courts will not review directors’ business decisions because directors are “clothed with the presumption which the law accords to them of being [motivated] in their conduct by a bona fide regard for the interests of the corporation whose affairs the stockholders have committed to their charge.”[16] This presumption not only protects directors from liability, but also encourages them to make corporate decisions freely, to the general benefit of the company and its shareholders.[17]
The BJR and fiduciary duties are what make Delaware corporate law unique: the law embraces a “board-centric” model of governance that recognizes the need for balancing the rights of directors and their shareholders, while providing shareholders with a means to hold directors accountable for bad actions.[18]
Furthermore, Delaware corporate law contains a Safe Harbor Statute, DGCL § 141(e), which protects directors who rely in good faith on information when making board decisions. According to the Safe Harbor Statute:
A member of the board of directors, or a member of any committee designated by the board of directors, shall, in the performance of such member’s duties, be fully protected in relying in good faith upon the records of the corporation and upon such information, opinions, reports or statements presented to the corporation by any of the corporation’s officers or employees, or committees of the board of directors, or by any other person as to matters the member reasonably believes are within such other person’s professional or expert competence and who has been selected with reasonable care by or on behalf of the corporation.[19]
Up until now, all cases invoking the Safe Harbor Statute have involved information gathered and produced by humans,[20] not by AI systems. This is because the Safe Harbor Statute only protects directors who rely on a “person” with professional or expert competence.[21] As it stands, AI systems do not fit neatly into any of the categories listed in the Safe Harbor Statute:[22] they are not officers, employees, committees, or “experts” capable of bearing responsibility or communicating methodological limitations.
Are directors afforded Safe Harbor protection when they rely on experts who, in turn, rely on AI? Additionally, as AI systems continue to permeate the everyday activities of the corporate world, it is only a matter of time before directors skip consulting a professional and instead rely on AI-generated data to make corporate decisions that affect their shareholders. Many boards already place AI adoption and advancement at the forefront of their business plans.[23] Fiduciary law must therefore adapt to account for directors’ reliance on AI.
AI’s Implications for Fiduciary Duties
There are clear benefits to the use of AI in the C-Suite. AI is not swayed by the emotions and biases that may cloud the judgment of a human decision maker and implicate the duty of loyalty.[24] With regard to the duty of care, AI could “significantly enhance the quality of board decisions by providing deeper insights, unbiased analysis, and decision support tools that go beyond human cognitive limitations.”[25] In fact, AI is already being used to gather and synthesize data in a more digestible form to aid C-Suite executives in oversight and decision-making, including the ability to present information in personalized forms that cater to an executive’s learning style.[26] Companies have used generative AI to build predictive analytics that help customers spot trends early on.[27] Having such a risk-analysis system in place may serve as evidence that a board satisfies the Caremark legal threshold.[28] Overall, AI use in corporations has led to a general increase in company profits.[29]
However, as seen in Mata, it would be naïve for corporations and lawmakers to ignore the fact that AI poses its own risks and has the potential to produce detrimental mistakes. The duty of care expects directors to inform themselves, evaluate risks, and exercise reasonable judgment. Modern AI systems complicate this expectation because they generate outputs that may be technically inscrutable, even to experts, and rely on training data that may contain latent bias, systemic error, hallucination, or adversarial vulnerabilities.[30] Moreover, many AI models have algorithmic biases, disadvantaging or even excluding groups based on race, gender, sexual orientation, age, or socioeconomic background.[31] Additionally, how exactly an AI system reaches its conclusions can be difficult to explain.[32] How, then, can directors prove that their process satisfies the Caremark threshold when they “have no idea how or why the decisions are made”?[33]
Also, what if an AI detection system incorrectly labels a risk? Imagine that the board in Caremark had an AI system that monitored and flagged possible unlawful “kickbacks,”[34] but the system labels potential kickbacks only 40% of the time. As the Court stated, a board is only liable if there is a systematic failure to exercise oversight over material risks.[35] As such, even a board with a weak, inaccurate system in place may escape liability on a breach-of-the-duty-of-loyalty claim. This gives directors extensive legal protection for a system that has the potential to be highly inaccurate and to produce colossal mistakes.[36] Although this is a highly fact-specific inquiry, the implications of AI failures in C-suite decision-making leave shareholders extremely vulnerable to a serious doctrinal blind spot: failures impactful enough to harm the company, yet difficult to characterize as “red flags” under existing precedent because the signals of AI malfunction may be subtle, technical, or statistically ambiguous. If shareholders cannot plead sufficient facts to establish a breach of a fiduciary duty, their only option is to plead corporate waste, an extremely difficult claim that rarely succeeds.[37]
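To put the 40% hypothetical in concrete terms, the short Python sketch below simulates such a low-accuracy monitoring system. It is purely illustrative: the flag rate, violation count, and variable names are hypothetical assumptions, not drawn from Caremark or any cited source.

```python
import random

# Purely hypothetical illustration of the 40% scenario described above: a
# compliance-monitoring model that flags a true kickback only 40% of the time.
FLAG_RATE = 0.40        # assumed probability the system flags a real violation
NUM_VIOLATIONS = 1_000  # assumed number of actual kickback events in a period

random.seed(42)  # fixed seed so the illustration is reproducible

flagged = sum(1 for _ in range(NUM_VIOLATIONS) if random.random() < FLAG_RATE)
missed = NUM_VIOLATIONS - flagged

print(f"Violations surfaced to the board: {flagged}")
print(f"Violations the board never sees:  {missed}")
# With a 40% flag rate, roughly 600 of every 1,000 violations go unreported,
# yet the board can still point to having "a system" of oversight in place.
```

The point of the sketch is doctrinal rather than technical: a board that installs even a demonstrably weak flagging tool has something to show a court, while the majority of misconduct may never generate the kind of “red flags” that Caremark liability requires.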
As AI continues to integrate itself into the C-Suite, there will come a time when directors can and will fully rely on AI-generated data and predictions when making monumental corporate decisions. Unless certain guardrails are put in place and guidance is followed, AI tools designed to create efficiencies and streamline processes can store up long-run issues that cannot be easily unwound. Specifically, if directors increasingly rely on generative AI data instead of exercising their own independent judgment, that reliance will cut against the fundamental principles of Delaware corporate law. Even worse, if boards of directors rely on faulty, hallucinated data, as in Mata,[38] millions of shareholders are left vulnerable to grave mistakes without a means to rectify the situation through Delaware fiduciary law.
Delaware Lawmakers Must Reform Corporate Law
Delaware’s fiduciary duty framework was developed around human directors exercising judgment, deliberation, and oversight to benefit the company and its shareholders.[39] The BJR, the touchstone of Delaware corporate law, functions on the premise that human directors exercise informed and rational discretion, whereas AI often substitutes machine-driven correlations for human judgment. If a board’s decision is substantially shaped by AI, should the deference traditionally afforded to director judgment extend to algorithmic outputs that directors cannot fully evaluate? Unlike humans, AI currently lacks professional credentials, cannot provide sworn assurances, and cannot be sanctioned for misconduct. How can we ensure that shareholders and directors alike are protected from AI’s implications?
Directors navigating the integration of AI into corporate decision-making face a striking absence of meaningful legal guidance, and this regulatory inertia heightens systemic risk. Without guidance, boards may default to broad deference to AI tools marketed as “objective,” despite known risks of inaccuracy or bias. The DGCL does not articulate clear standards for how the fiduciary duties of care, loyalty, or oversight should apply when boards rely on algorithmic tools. This gap threatens Delaware’s long-standing position as the leading corporate jurisdiction.[40] A state known for its sophisticated and adaptive corporate law[41] cannot allow foundational doctrines to lag behind the technologies that now shape corporate strategy.
The DGCL assumes human judgment, human reasoning processes, and traditional information-gathering methods; it does not address whether delegating evaluative functions to AI satisfies or undermines the duty of care, how AI interacts with the duty to be informed under the Caremark threshold, or whether reliance on AI raises loyalty concerns. When AI-driven decisions go wrong, Delaware law provides no clear mechanism for assessing whether liability lies with the board, management, vendors, or the system’s designers. This lack of clarity risks diluting fiduciary accountability and complicates shareholder enforcement in litigation. The time is now for Delaware lawmakers to reform corporate law to reflect fiduciary duties in the Age of Algorithms.
[1] Ed Finn, The Black Box of the Present: Time in the Age of Algorithms, 86 Soc. Rsch. 557, 566 (2019). See Manal Ahdadou et al., Unlocking the Potential of Augmented Intelligence: A Discussion on Its Role in Boardroom Decision-Making, 21 Int’l J. Disclosure & Governance 433, 434 (2023) (explaining that we live in an “augmented age” where intelligent systems are expected to help humans think and make decisions).
[2] Cole Stryker & Eda Kavlakoglu, What is Artificial Intelligence (AI)?, IBM (Aug. 9, 2024), https://www.ibm.com/think/topics/artificial-intelligence.
[3] Since 2022, AI use in companies has increased from 50% to 72%. Alex Singla et al., The State of AI: How Organizations Are Rewiring to Capture Value, McKinsey & Co. (Mar. 12, 2025), https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-how-organizations-are-rewiring-to-capture-value.
[4] Cade Metz, OpenAI Plans to Up the Ante in Tech’s A.I. Race, N.Y. TIMES (Mar. 14, 2023), https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html.
[5] Rebecca Bellan, Sam Altman Says ChatGPT Has Hit 800M Weekly Active Users, TechCrunch (Oct. 6, 2025, 10:30 AM), https://techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users/.
[6] See Cts. & Tribunals Judiciary, Artificial Intelligence (AI) Guidance for Judicial Office Holders (2025), https://www.judiciary.uk/wp-content/uploads/2025/10/Artificial-Intelligence-AI-Guidance-for-Judicial-Office-Holders-2.pdf (providing the UK judiciary guidance on AI use); see also James Titcomb, Judges Given Green Light to Use ChatGPT in Legal Rulings, The Telegraph (Dec. 12, 2023), https://www.telegraph.co.uk/business/2023/12/12/judges-given-green-light-use-chatgpt-legal-rulings/ (“The Judicial Office has issued official guidance to thousands of judges in England and Wales saying AI can be useful for summarizing large amounts of texts or in administrative tasks.”).
[7] Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023).
[8] Id. at 477.
[9] Alfred R. Cowger Jr., Corporate Fiduciary Duty in the Age of Algorithms, 14 Case W. Res. J.L. Tech. & Internet 138, 138 (2023), https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?article=1151&context=jolti.
[10] Id. at 138.
[11] David Cowan Bayne, The Philosophy of Corporate Control: A treatise of The Law of Fiduciary Duty 34 (Loyola University Press, Chicago 1986).
[12] Cowger Jr., supra note 9, at 151.
[13] Smith v. Van Gorkom, 488 A.2d 858, 872 (Del. 1985) (quoting Aronson v. Lewis, 473 A.2d 805 (Del. 1984)).
[14] See Cowger Jr., supra note 9, at 151–52.
[15] See Del. Code Ann. tit. 8, § 141(e).
[16] Robinson v. Pittsburgh Oil Refining Corp., 126 A. 46, 48 (Del. Ch. 1924).
[17] Lori McMillan, The Business Judgment Rule as an Immunity Doctrine, 4 Wm. & Mary Bus. L. Rev. 521, 528 (2013) (“In order to maximize shareholder wealth and grow a corporate enterprise, directors must often make business decisions that entail an assumption of risk; very seldom does return exist without risk, and there is generally presumed to be a positive correlation between the two.”).
[18] J. Travis Laster & John Mark Zeberkiewicz, The Rights and Duties of Blockholder Directors, 70 Bus. Law. 33, 35–36 (2014).
[19] Del. Code Ann. tit. 8, § 141(e).
[20] See In re W. Nat’l Corp. S’holders Litig., No. 15927, 2000 WL 710192, at *23 (Del. Ch. 2000) (“[A]s a legal and practical proposition, the Special Committee could and did reasonably rely on its expert advisor to obtain and analyze the specific information needed to value the Company.” (citing Del. Code Ann. tit. 8, § 141(e))); In re RJR Nabisco, Inc. S’holders Litig., No. 10389, 1989 WL 7036, at *16 (Del. Ch. 1989) (“Our law, of course, recognizes the appropriateness of directors relying upon the advice of experts when specialized judgment is necessary as part of a business judgment.”).
[21] See Del. Code Ann. tit. 8, § 141(e).
[22] Id.
[23] Kira Ciccarelli, A Pulse Check on AI in the Boardroom, Corp. Bd. Member (Sept. 18, 2025), https://boardmember.com/a-pulse-check-on-ai-in-the-boardroom/ (“An impressive 64% of surveyed directors selected AI as a top business priority, outpacing other options such as mergers and acquisitions or strategic partnerships (58%), supply chain diversification (36%), and portfolio restructuring (32%).”).
[24] See Adriana Salatino et al., Influence of AI Behavior on Human Moral Decisions, Agency, and Responsibility, 15 Sci. Reps. 12329 (Apr. 10, 2025), https://doi.org/10.17605/OSF.IO/ZYFQD; see also Jessica Erickson, Beyond Wall Street: Inside the Legal Battles of Private Companies, 50 J. Corp. L. 102 (2025) (discussing how emotions and relationships play a pivotal role in starting a company and in business disputes, considerations often overlooked in the black letter law of business operations).
[25] Didier Cossin & Yukie Saito, Three Ways Artificial Intelligence Is Transforming Boards, IMD (Oct. 13, 2025), https://www.imd.org/ibyimd/artificial-intelligence/three-ways-artificial-intelligence-is-transforming-boards/.
[26] Singla et al., supra note 3.
[27] See Marcus Taylor, AI in the C-Suite: Using AI to Shape Business Strategy, CIO (Nov. 4, 2024), https://www.cio.com/article/3598890/ai-in-the-c-suite-using-ai-to-shape-business-strategy.html.
[28] See In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959, 971–72 (Del. Ch. 1996) (“If the directors did not know the specifics of the activities that lead to the indictments, they cannot be faulted.”).
[29] Singla et al., supra note 3.
[30] Aaron Drapkin, AI Gone Wrong: The Errors, Mistakes, and Hallucinations of AI (2023 – 2025), Tech.Co (Nov. 5, 2025), https://tech.co/news/list-ai-failures-mistakes-errors (“French data scientist and lawyer Damien Charlotin has revealed a report that identified as many as 490 court filings across the past six months that included AI hallucinations.”).
[31] See Matthias Holweg et al., The Reputational Risks of AI, Cal. Mgmt. Rev. (Jan. 24, 2022), https://cmr.berkeley.edu/2022/01/the-reputational-risks-of-ai/.
[32] Id.
[33] Id.
[34] Federal law prohibits kickbacks, defined as “remuneration to induce or reward referrals or business that will ultimately be paid for by a federal healthcare program.” Steve Alder, What Is the Anti-Kickback Law in Healthcare?, HIPAA J. (Sept. 17, 2024), https://www.hipaajournal.com/anti-kickback-law-in-healthcare/.
[35] See In re Caremark, 698 A.2d at 971.
[36] See Tshilidzi Marwala, Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth, U.N. Univ. (July 18, 2024), https://unu.edu/article/never-assume-accuracy-artificial-intelligence-information-equals-truth (highlighting the real-world consequences of AI decision-making mistakes and explaining that AI is not a substitute for truthfulness).
[37] Brehm v. Eisner, 746 A.2d 244, 263 (Del. 2000) (“An exchange that is so one sided that no business person of ordinary, sound judgment could conclude that the corporation has received adequate consideration meets the ‘waste test’ in the context of due care violation allegation in a shareholder derivative suit.”). See Stephen B. Brauerman, Wasting Away: The Futility of Asserting Waste Claims in the Court of Chancery, Bayard Law (Aug. 8, 2012), https://www.bayardlaw.com/insights/wasting-away-futility-asserting-waste-claims-court-chancery (explaining that corporate waste claims are difficult to plead because of the heightened pleading standards for plaintiffs). But see Harwell Wells, The Life (and Death?) of Corporate Waste, 74 Wash. & Lee L. Rev. 1239 (2017) (highlighting the revival and slightly increased prevalence of the corporate waste doctrine in courts).
[38] See Mata, 678 F. Supp. 3d at 448.
[39] See Del. Code Ann. tit. 8, § 141(e).
[40] Amy Simmerman et al., Delaware’s Status as the Favored Corporate Home: Reflections and Considerations, Harv. L. Sch. Forum Corp. Governance (May 8, 2024), https://corpgov.law.harvard.edu/2024/05/08/delawares-status-as-the-favored-corporate-home-reflections-and-considerations/ (“The sheer number of entities formed in Delaware reflects its dominance in this area. In 2022, more than 313,650 entities were formed in the state of Delaware, resulting in more than 1.9 million entities total in Delaware. Delaware also continues to be the state of incorporation for nearly 68.2 percent of the Fortune 500, 65 percent of the S&P 500, and approximately 79 percent of all U.S. initial public offerings in calendar-year 2022.”).
[41] Id. (“Prior to the early 1900s, New Jersey had been the most significant state for incorporations. Aware of New Jersey’s early success and in an effort to encourage corporations to domicile in Delaware, Delaware amended its constitution in 1897 to permit incorporation under general law instead of by special legislative mandate, and in 1899 adopted a general corporation law modeled largely after New Jersey’s approach. These developments, in addition to the written opinions issued by Delaware’s Court of Chancery, helped make Delaware a natural home for corporations looking to leave New Jersey after that state adopted more restrictive laws related to corporations and trusts in 1913.”).