Richmond Journal of Law and Technology

The first exclusively online law review.

Democratizing Debt: Blockchain’s Quest for Inclusive Lending

By Bharat Manwani[1]

 

The world’s debt market is enormous: by the end of 2022, it had surpassed $300 trillion.[2] That sheer scale brings with it persistent challenges for mainstream lending markets, chief among them financial inclusion and subprime risk. Financial inclusion has suffered since the global financial crisis because of stricter lending rules, which particularly affect small businesses, immigrants, and women. Alternative lending markets, such as peer-to-peer lending, have risen to fill the gap, but their lighter regulation exposes them to high risk, fraud, and default. Technology has helped make lending markets more inclusive and efficient than before, particularly through AI applications such as alternative credit scoring,[3] which has broadened access to borrowing. Yet these problems persist, leaving small enterprises, immigrants, and women with little recourse other than obtaining credit at higher interest rates. High levels of non-performing assets in retail lending push institutions to make borrowing even less accessible. Blockchain could be a promising avenue for resolving these concerns, with decentralized finance as a viable approach.

 

Lending markets and ancillary financial activities operate on the notion of “trust.” The lender, whether a bank or another institution, must place trust in the borrower to uphold its end of the agreement; otherwise, the parties must fall back on the legal system to enforce the agreement and recover the debt. This need for trust is why financial systems tend to be larger and more advanced in nations with higher social capital, greater trust, and more robust legal frameworks.[4] Blockchain, with lending conducted in decentralized finance (DeFi) markets, holds the potential to alleviate the need for trust altogether: lending agreements automatically enforce the terms of the “smart contract.”

 

What is DeFi? Understanding Smart Contracts as Vending Machines

Utopian as it may sound, DeFi is no longer just a theoretical framework. Over 6 million people trade within DeFi markets, with the total value locked surpassing $50 billion at the start of this year.[5] Financial arrangements in DeFi markets are built entirely from smart contracts, typically deployed on blockchains such as Ethereum. These smart contracts are essentially computer programs that transfer digital assets according to pre-determined instructions agreed upon by the contracting parties. The analogy of a vending machine is instructive: a vending machine relies on a logical function to dispense a particular item upon receiving money, with no intermediary such as a store or cashier overseeing the transaction. Smart contracts contain a similar enforcement mechanism; they execute automatically once certain conditions are satisfied, without the involvement of any intermediary. DeFi protocols are non-custodial, permissionless financial frameworks that seek to supplant conventional intermediaries through the execution of smart contracts: immutable, deterministic computer programs running within a blockchain environment. Every DeFi lending protocol has its own peculiarities, but almost all of them forgo centralized credit assessment and deploy smart contracts to manage crypto assets.[6] In contrast to established financial frameworks reliant on intermediaries administered by external entities, this automation of contract execution holds the potential to mitigate challenges linked to human discretion[7] (such as fraud, censorship, and cultural bias), enhance the accessibility of financial services, and complement established financial domains.
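To make the vending-machine analogy concrete, the following sketch models the conditional, intermediary-free execution described above. It is written in Python rather than an actual smart contract language, and the names and amounts are purely illustrative; it is a simplification, not any real protocol’s code.

```python
# A minimal sketch (not Solidity, and not any real protocol's code) of the
# vending-machine logic a lending smart contract encodes: once the
# pre-agreed condition is met, the transfer executes with no intermediary.

class SmartLoanContract:
    """Toy model: terms are fixed at creation and enforced automatically."""

    def __init__(self, lender, borrower, principal, interest):
        self.lender = lender
        self.borrower = borrower
        self.amount_due = principal + interest  # terms agreed up front
        self.settled = False

    def repay(self, payment):
        # Like a vending machine: correct input -> automatic output.
        if self.settled:
            raise ValueError("contract already executed")
        if payment < self.amount_due:
            raise ValueError("condition not met; nothing dispensed")
        self.settled = True
        return f"{self.amount_due} transferred to {self.lender}"

loan = SmartLoanContract("lender.eth", "borrower.eth", principal=100, interest=5)
print(loan.repay(105))  # condition satisfied -> executes automatically
```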

 

DeFi lending protocols offer significant advantages over conventional frameworks. Decentralizing finance on a blockchain aims, above all, to bridge the information asymmetry in lending markets.[8] Because the protocols are implemented on a public blockchain, the precise content of the smart contracts in each DeFi lending protocol is openly accessible and subject to audit. Moreover, users’ historical engagements with protocols, along with their lending and borrowing activities, are documented on the blockchain, ensuring a transparent record: market information is transparent and universally accessible. Another core advantage of DeFi lies in its approach to liquidity management. Funds contributed to lending protocols are pooled, fostering optimal utilization and enhancing overall market liquidity. This efficiency is bolstered by smart contracts and blockchain technology, which make lending, borrowing, and arbitrage both rapid and cheap. Furthermore, DeFi lending protocols make debt holdings fully transferable and exchangeable: IOU tokens representing those holdings carry a guarantee of redemption to fund suppliers. This design not only promotes trust and reliability within the DeFi ecosystem but also contributes to a fluid marketplace where debt instruments can be seamlessly managed and traded.

 

Current Use Cases and the Collateral Paradox

Despite the apparent advantages of DeFi lending, its current use cases are largely restricted to two practical applications: token rewards and arbitrage trading. Retail users often end up in a borrowing spiral, continuing to borrow cryptocurrency to pay off previous DeFi loans. Because lending protocols reward users with tokens for making periodic payments, retail users create these borrowing spirals voluntarily. The other practical application is arbitrage trading, in which users borrow a cryptocurrency to sell it on a different platform where it is listed at a higher price. Such arbitrage transactions and token rewards have fueled the growth of DeFi lending protocols, whose market capitalization exceeded $4.75 billion in August 2023.[9] A particular conundrum, however, has cramped the real-world applications of DeFi lending: such loans can be secured only by depositing collateral, and that collateral must take the form of cryptocurrency. Built to serve retail users underserved by traditional finance and to make lending more inclusive, DeFi lending protocols nonetheless require users to deposit tokenized collateral of up to 150% of the loan amount. These steep collateral preconditions exist because of the volatility of cryptocurrency itself, and they linger as a major disincentive to enter the world of decentralized finance.
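A rough numerical illustration of the collateral paradox follows, using the 150% figure mentioned above and a hypothetical liquidation threshold; actual ratios and thresholds vary by protocol.

```python
# A minimal sketch of the overcollateralization arithmetic described above,
# using an illustrative 150% requirement; the exact ratio and liquidation
# threshold vary by protocol.

COLLATERAL_RATIO = 1.50   # deposit 150% of the loan's value
LIQUIDATION_RATIO = 1.20  # hypothetical threshold at which collateral is seized

loan_value = 1_000                      # USD value borrowed
collateral_required = loan_value * COLLATERAL_RATIO
print(collateral_required)              # 1500.0 -- more locked up than borrowed

# Volatility is why the buffer exists: if the deposited tokens lose a third
# of their value, the position falls through the liquidation floor.
collateral_value = collateral_required * (2 / 3)
if collateral_value / loan_value < LIQUIDATION_RATIO:
    print("position liquidated")        # 1000.0 / 1000 = 1.0 < 1.2
```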

 

Real Estate Tokens: New kid on the block(chain)

The collateral problem obscures a technological revolution; resolving it would require leveraging DeFi’s most innovative elements. A solution exists, and it lies within the fundamental design of DeFi protocols themselves. These protocols offer composability on the blockchain, which enables different financial components to interact. The idea resembles building a Lego set: a user can link several financial products to a single transaction. For example, a retail user entering a lending contract could link cryptocurrency that he has already lent to another user to serve as collateral for the present transaction. Like Lego blocks, several different smart contracts can be snapped together within a single transaction, promoting a more efficient blockchain.[10] This precise capability offers the opportunity to tokenize real-world assets, which would fill the lacuna created by the collateral conundrum.
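The Lego analogy can be sketched in a few lines of illustrative Python: one contract’s output (a claim on funds already lent) becomes another contract’s collateral. The class and account names are hypothetical, not any protocol’s actual interface.

```python
# A minimal sketch of composability under the Lego analogy: one contract's
# output (a claim on lent funds) is snapped into another contract as
# collateral. Names are illustrative, not any real protocol's interface.

class LendingPosition:
    """A claim on cryptocurrency the user has already lent out."""
    def __init__(self, owner, value):
        self.owner, self.value = owner, value

class CollateralizedLoan:
    """A second 'block' that accepts the first block as its collateral."""
    def __init__(self, borrower, collateral: LendingPosition, amount):
        assert collateral.owner == borrower, "can only pledge your own position"
        self.collateral, self.amount = collateral, amount

deposit = LendingPosition("user.eth", value=1_500)     # block one
loan = CollateralizedLoan("user.eth", deposit, 1_000)  # snapped onto block two
```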

 

The approach behind the solution is straightforward: tokenized real-world assets could replace cryptocurrency as security for debt, dispensing with the strict collateral preconditions in DeFi lending protocols. Additionally, the relatively low volatility of real estate would mitigate the overcollateralization of such secured loans, making decentralized finance more efficient and financially inclusive. The proposed solution requires sovereign authorities to put frameworks in place that tokenize properties and other physical infrastructure and record them on a digital ledger. Blockchain’s composability would then enable users to pledge their tokenized assets as security within DeFi lending protocols. Lending contracts would comprise enforcement mechanisms that execute automatically, transferring ownership of the real-estate token to the lender in the case of non-payment. This facilitates the acquisition and divestiture of fractional ownership in properties, streamlines the process, enhances market liquidity, and ensures that secured loans in DeFi lending protocols do not require overcollateralization.[11] Retail users would no longer need to pledge cryptocurrency to take out a DeFi loan, expanding the class of individuals with access to debt within this framework. Blockchain-based land record management models have already been implemented in parts of India,[12] storing ownership information and transaction histories on a decentralized ledger that leaves no scope for tampering with the data. The key point is that such frameworks are already in place and offer the opportunity to make decentralized finance markedly more inclusive and accessible.
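The enforcement mechanism described above might be sketched as follows; the parcel identifier, account names, and payment logic are hypothetical simplifications of what a real tokenized-collateral contract would encode.

```python
# A minimal sketch, with hypothetical names, of the self-executing term
# described above: a tokenized property pledged as collateral, transferred
# to the lender automatically if a payment is missed.

class PropertyToken:
    """A ledger entry representing ownership of a registered property."""
    def __init__(self, parcel_id, owner):
        self.parcel_id, self.owner = parcel_id, owner

class TokenCollateralLoan:
    def __init__(self, lender, borrower, token: PropertyToken, payments_left):
        self.lender, self.borrower = lender, borrower
        self.token, self.payments_left = token, payments_left

    def record_payment(self):
        self.payments_left -= 1

    def check_default(self, payment_missed: bool):
        # On default, ownership moves on the ledger without a court or
        # other intermediary initiating recovery.
        if payment_missed and self.payments_left > 0:
            self.token.owner = self.lender
            return f"parcel {self.token.parcel_id} transferred to {self.lender}"
        return "loan in good standing"

deed = PropertyToken("GNL-042", owner="borrower.eth")
loan = TokenCollateralLoan("lender.eth", "borrower.eth", deed, payments_left=12)
print(loan.check_default(payment_missed=True))
```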

 

The world’s lending market is replete with challenges of financial inclusion and subprime risk, and DeFi reevaluates the trust-based lending system of traditional finance. Its transformative potential is evident in its shift from theoretical concept to practical reality, offering transparent, automated, and efficient lending mechanisms. The major roadblock to this technology revolutionizing lending systems remains the collateral conundrum. Nevertheless, the remedy is inherent in DeFi: the ability to tokenize real-world assets and substitute them for volatile cryptocurrency collateral. Real estate tokenization aligns with existing frameworks for blockchain-based land record management, offering a way to enhance inclusivity and accessibility while streamlining lending processes. By reimagining lending dynamics through DeFi’s automated smart contracts and blockchain’s reliability, a more equitable, transparent, and efficient financial future comes within reach.

 

 

 

 

 

[1] Bharat Manwani is a student pursuing a BBA LLB (Hons.) at Gujarat National Law University (GNLU), Gandhinagar. He takes an active interest in the intersection of law and technology. For feedback, he can be contacted at bharat21bbl020@gnlu.ac.in.

[2] Rodrigo Campos, Global debt on the rise, emerging markets cross $100 trillion mark, Reuters (May 17, 2023), https://www.reuters.com/markets/global-debt-rise-em-crosses-100-trillion-mark-iif-2023-05-17/.

[3] Joginder Rana, Alternate Credit Scoring Can Further Financial Inclusion, Outlook India (May 21, 2022), https://www.outlookindia.com/business/alternate-credit-scoring-can-further-financial-inclusion-in-india-read-here-for-details-news-197707.

[4] Luigi Guiso et al., The Role of Social Capital in Financial Development, 94 Am. Econ. Rev. 526 (2004).

[5] What are the fastest-growing DeFi categories by market share?, CNBC TV18 (May 9, 2023), https://www.cnbctv18.com/cryptocurrency/what-are-the-fastest-growing-defi-categories-by-market-share-16612751.htm.

[6] Massimo Bartoletti et al., SoK: Lending Pools in Decentralized Finance, 12676 Lecture Notes in Computer Science (2022).

[7] Jonathan Chiu et al., On the Fragility of DeFi Lending, SSRN Elec. J. (2023).

[8] Jiahua Xu and Nikhil Vadgama, From Banks to DeFi: the Evolution of the Lending Market, in Horst Treiblmaier, Defining the Internet of Value (Springer 2022).

[9] Lending, Crypto.com, https://crypto.com/price/categories/lending (Aug. 31, 2023).

[10] Sirio Aramonte, DeFi lending: intermediation without information?, BIS Bulletin No. 57 (June 14, 2022), https://www.bis.org/publ/bisbull57.pdf.

[11] Andres Zunino, The Future Of Real Estate: Tokenization And Its Impact On The Industry, Forbes (May 22, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/05/22/the-future-of-real-estate-tokenization-and-its-impact-on-the-industry/?sh=42f96ee346bf.

[12] Shruti Shashtry, Karnataka to use blockchain for property registration, Deccan Herald (Jan. 4, 2021), https://www.deccanherald.com/india/karnataka/karnataka-to-use-blockchain-for-property-registration-934862.html.

 

Lawyering in the Age of AI

By Kevin Frazier[1]

 

 

The Age of AI will demand more from fewer lawyers. Emerging technologies have already demonstrated the capacity to chip away at the foundation of the rule of law, our legal systems, and our social order. Lawyers who relied on AI to write their briefs gave the public cause to question the ethics and expertise of the profession.[2] Courts have struggled to assess the reliability of evidence in light of the spread of deepfakes—yet another source of instability and public skepticism.[3] Regulators have failed to heed calls for safeguards against potentially destructive technologies.

Collectively, AI and related innovations are best characterized as socially disruptive technologies that, if left unchecked, can upend the systems, structures, and norms that foster stability and predictability across society.

Do lawyers have an obligation to respond to threats to the social order?

The short, easy, and (currently) correct answer is, “No.” That raises a follow-up question: “Should they?” I think so.

The short, easy, and correct answer requires little explanation. The Model Rules of Professional Conduct, which have been adopted by most states,[4] impose few society-based obligations on lawyers. Sure, the Rules assign lawyers a “special responsibility for the quality of justice,” but that applies only when they act in their personal capacity as a “public citizen.”[5] Though “quality of justice” is vague, it would take a creative interpretation to suggest that lawyers have a responsibility to proactively guard the rule of law and the legal system upon which justice relies.

The Rules likewise encourage lawyers to “further the public’s understanding of and confidence in the rule of law and the justice system.”[6] However, that is different from imposing a requirement on lawyers to identify and mitigate threats to the administration of law.

Absent from the Rules is any explicit obligation to consider the downstream effects of representing or advising clients whose work may pose existential threats to the legal system itself. Of course, a lawyer “may” withdraw from such representation if the client “insists upon taking action the lawyer considers repugnant or with which the lawyer has a fundamental disagreement” or if “other good cause” exists,[7] but this permissive language does little to quash the temptation to represent the most powerful (well-paying) clients, especially when lawyers likely lack the background knowledge necessary to evaluate the risks posed by a client’s work.

To answer, “should lawyers as a profession respond to threats to the rule of law?” I need to first dispel an assumption and then make a few of my own.

People may assume that the functions of the law and the nature of the legal profession have remained fixed, more or less, over time. That’s not the case. The functions of the law have changed in response to societal conditions, such as economic, political, technological, and cultural shifts. In turn, legal education, legal services, and the principles guiding the legal profession have as well. Famed legal scholar Roscoe Pound identified five such epochs:[8]

(1) keeping peace in a given society during the stage of primitive kin organization;
(2) preserving the status quo during the era of the Greek city-state and in Roman and medieval times;
(3) making maximum free self-assertion possible during the Renaissance;
(4) satisfying wants in the twentieth century; and
(5) satisfying social wants in modern times.

Pound sweeps with a broad brush that leaves his analysis open to attack by historians and legal scholars, but the important thing isn’t to pinpoint exactly how and when the law changed, only to establish that it is prone to do so. As summarized by George Paton, author of A Textbook of Jurisprudence, “The law of any period serves many ends, and those ends will vary as the decades roll by.”[9] On the whole, the law morphs as necessary to “contribute to the maintenance of social order,” according to Harold Berman.[10]

The stability of the social order also orients the legal profession, which must adjust legal education and practice to develop a profession suited to the preservation of that order. This dynamic is not a secret. Back in 1941, Karl Llewellyn and E. Adamson Hoebel noted that “law-jobs entail such arrangement and adjustment of people’s behavior that the society remains a society and gets enough energy unleashed and coordinated to keep on functioning as a society.”[11]

Are we entering a new legal epoch?

In this case, the short answer is “Yes.” A few assumptions, if accepted, demonstrate why the law and legal profession must evolve.

First, we’re indeed living at the “hinge of history.”[12] In other words, the decisions made by humans today and in the coming years may alter the trajectory of our species for centuries, if not longer. Contemporary society stands at the furthest point along a “seemingly inexorable trend toward higher levels of complexity, specialization and sociopolitical control,” per Joseph Tainter, leaving us with the collective power to shape the future.[13]

Second, this crucial period of influence will persist as we develop more powerful technologies, become more interconnected as a species, and forgo making choices today that would defer important decisions to future generations. In brief, given our substantial and enduring power over the future, we need new social structures and norms to wield that power in a way that secures the social order.

Third, the law and legal profession are not immune to the effects of socially disruptive technologies. Technological progress will change legal services by automating several tasks. Thomson Reuters suggests the following tasks may disappear from legal practice sooner rather than later: contract drafting and review, document filing, legal research, legal compliance, due diligence, IP protection, document retrieval, and document creation.[14] Lawyers must not ignore the fact that their societal value will increasingly come from an ever smaller, but still important, set of skills.

In this new legal epoch, what ends should guide the law and the legal profession?

Based on these assumptions, the primary aim of the law for the foreseeable future must be to avoid instability by protecting the social order. This is no easy task. The accumulation of ever more complex systems of governance, exchange, and social interaction has exposed the vulnerabilities of many stabilizing institutions.[15]

The stability of our domestic and international social order depends, in part, on legal systems and professionals responding to emerging technologies,[16] diminished trust in governing institutions,[17] persistent and dire famines, inequality, climate change, mass migration, and geopolitical tensions.

Avoiding instability requires realizing some broad goals, but two stand out as essential: governing institutions must have the flexibility and resources to respond to threats in a timely fashion, and individuals and groups must retain social resilience—defined by our capacity to collaborate and collectively solve problems.

Why should lawyers have such a major role in realizing those broad goals?

The study of law needs to become “the study of mechanisms of social governance,” a transition recommended by Professor Thomas Ulen.[18] In some ways, this transition is already underway and was a long time coming.

Back in the 1950s, Henry Hart and Albert Sacks observed that “[a] framework of law–that is, a legal order–encloses and conditions everything people living in an organized society do and do not do.”[19] In turn, they framed lawyers as “architect[s] of social structures” who must become experts in the “design of frameworks of collaboration for all kinds of purposes[.]”

Decades later, Ulen picked up that thread.[20] He argues that the rise of “Law and Economics” within legal education institutions and among legal scholars signaled a shift to law as a predominantly interdisciplinary field. The integration of legal and economic studies revealed a fundamental change in the nature of legal inquiry. He maintains that “our understanding of the law will [now] be advanced most adventitiously by bringing to bear whichever social, behavioral, and natural sciences help us to understand how best to govern ourselves.”[21] Hart and Sacks likewise argued for a legal practice oriented around the future–equating lawyers to “specialist[s] in the high art of speaking to the future[.]”[22]

A focus on social governance would introduce law students to “all the procedures and institutions we use to advance our individual, familial, and collective aims.”[23] Sounds timely, right?

Ulen acknowledges that this transition would necessitate a big shift in legal education, but lawyers must rise to the occasion because, as he explains, “there is no discipline other than law in the modern research university that would be so bold as to stride across disciplinary boundaries in search of insights useful to answering questions regarding social governance.”[24] As set forth by Jay Kesan of the University of Illinois, “[t]here is a clear need for professionals who are both educated and have professional work experience in science/technology and in law.”[25] This transition, though, cannot amount to just a tweak—Ulen insists that the transition must go beyond “law-and-(some other single discipline) to law-and-all-other-scholarly-knowledge.”[26]

Educating Architects of Social Structures

Law schools currently have neither the faculty nor the course offerings needed to educate Architects. Relatively small changes regarding who attends law school and what they learn and experience through their studies could create the necessary educational infrastructure. Generally, law schools ought to orient their student bodies and course offerings around one guiding principle: interdisciplinarity.

The legal profession cannot realize its potential as “Architects of Social Structures” by treating the study of law as its own discipline. Hart, Ulen, and others make clear that the study of the law must include the study of the fields inevitably affected by the law. To expedite this interdisciplinary approach to legal education, law schools should strive to create student bodies with expertise in a wide range of fields.

In the short run, schools should prioritize admitting students with undergraduate STEM backgrounds. Harvard Law School, recognizing the increasingly intersectional, interdisciplinary role of lawyers, started down this path a decade ago. As of 2016, its efforts had proven moderately successful: STEM-focused students made up about 12 percent of the admitted class that cycle.[27]

In the long term, schools should explicitly recruit individuals with a master’s level of education or greater in STEM fields. Professor Pierre Larouche hypothesizes that students with even a year of graduate education in a STEM field will be able to meaningfully participate in a core part of the next generation of legal education—namely, “joining multi-disciplinary teams and interact[ing] with colleagues from other disciplines.”[28] Through such teams, Larouche anticipates that students will see the value of interdisciplinary dialogue and realize that collaboration with other disciplines will “enrich” their respective fields.[29]

Until most legal skills have been automated, however, there is still a case for students undergoing the contemporary “J.D.” education, so not every student should be required to have such a background or to develop it during law school.

However, those hoping to earn a Doctorate in Social Architecture (or whatever degree this new generation of lawyer earns) must demonstrate “basic competence” in a range of different fields to join the ranks of a new kind of lawyer. Students will not develop the requisite competencies if their interdisciplinary education amounts to “periodically dating” someone in that field; Ulen calls for legal education that provides “sustained and marriage-like interactions” with other disciplines.[30] In other words, students who learn merely to collaborate with scholars in other fields will become lawyers unable to fulfill the duties of an Architect.

Law professors themselves can lean into this interdisciplinary mindset as well. For instance, legal scholars can seek out interdisciplinary conferences, collaborate on scholarship with scholars in other fields, and create and contribute to interdisciplinary journals. Thankfully, the transition to a new “typical” law professor may already be underway. Both Ulen and Judge Richard Posner independently detected an uptick in the diversity of fields represented by the legal academy.

Administrators can help faculty conduct more interdisciplinary work by developing new incentive structures and career paths—the current approach, in which “being a productive scholar in one discipline is a far safer route than being a productive scholar (or innovator) across disciplines”—will not suffice. These tweaks, though, should only be a temporary strategy to motivate pre-existing faculty members. In time, the “typical” law professor will need to have a very different profile if students are going to receive adequate training in scholarly consolidation and collaboration.

Conclusion

The Age of AI has exposed the stagnant nature of legal education and the outdated aspirations of the profession. Despite the practice of law changing drastically and the needs of society shifting, “[w]e teach law today in an almost indistinguishable way from how it was taught fifty years ago,” explains Ulen;[31] and we remain committed to focusing our services on client, rather than communal, needs.

Whereas Ulen moderated his pleas for a pivot by specifying that he was not advocating for “root-and-branch reform,”[32] I’d argue that the Age of AI demands just that—an honest and inclusive conversation about the purpose of the legal profession in our modern era that leads to immediate reforms. Anything short of immediate change will leave the profession unable to help reduce instability and protect the social order.

Lawyers occupy a privileged position in society and, consequently, have an obligation to use their influence and skills in ways that mitigate societal threats. Though trust in the legal profession has seen better days, laypeople still look to lawyers to solve problems big and small. Though other professionals have more expertise in the most critical scholarly fields, lawyers have a unique opportunity to help consolidate that expertise and leverage it in defense of the social order.

If we do not start discussing what society in 2100 will require of lawyers, then we will fail to train those lawyers. I hope this piece sparks some of those conversations and pushes more lawyers to ask what obligations the profession has to our collective well-being today and into the future.

 

 

 

 

 

 

 

[1] Kevin Frazier is an Assistant Professor at the St. Thomas University College of Law and a Research Affiliate at the Legal Priorities Project. Frazier earned a Master of Public Policy from the Harvard Kennedy School and a JD from the UC Berkeley School of Law. Send feedback to kfrazier2@stu.edu.

[2] Lyle Moran, Lawyer cites fake cases generated by ChatGPT in legal brief, Legal Dive (May 30, 2023), https://www.legaldive.com/news/chatgpt-fake-legal-cases-generative-ai-hallucinations/651557/

[3] Shannon Bond, People are trying to claim real videos are deepfakes. The courts are not amused, NPR (May 8, 2023), https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused

[4] About the Model Rules, American Bar Association (accessed Aug. 2, 2023), https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/

[5] Model Rules of Prof’l Conduct Preamble.

[6] Id.

[7] Id. at R. 1.16.

[8] Roscoe Pound, An Introduction to the Philosophy of Law, ch. 2, The End of Law 25-47 (1954).

[9] George Paton, A Textbook of Jurisprudence 86 n.3 (3d ed. 1964).

[10] Harold J. Berman, The Nature and Functions of Law 31 (1958).

[11] Karl Llewellyn & E. Adamson Hoebel, The Cheyenne Way 290 (1941).

[12] Richard Fisher, Could right now be the most influential time ever? Richard Fisher looks at the case for and against – and why it matters, BBC (Sept. 23, 2020), https://www.bbc.com/future/article/20200923-the-hinge-of-history-long-termism-and-existential-risk

[13] Ben Ehrenreich, How Do You Know When Society Is About to Fall Apart?, N.Y. Times (Nov. 4, 2020), https://www.nytimes.com/2020/11/04/magazine/societal-collapse.html

[14] How legal workflow automation turns thousands of tasks into one, Thomson Reuters (May 23, 2023), https://legal.thomsonreuters.com/blog/how-automation-turns-thousands-of-tasks-into-one/

[15] See Ehrenreich, supra note 13.

[16] Christopher F. Chyba, New Technologies & Strategic Stability, 149 Daedalus 150 (2020), https://doi.org/10.1162/daed_a_01795

[17] David W. Oxtoby, Distrust, Political Polarization, and America’s Challenged Institutions, Am. Acad. of Arts & Sciences (2023), https://www.amacad.org/news/distrust-political-polarization-and-americas-challenged-institutions

[18] Thomas S. Ulen, The Impending Train Wreck in Current Legal Education: How We Might Teach Law as the Scientific Study of Social Governance, 6 Univ. St. Thomas L.J. 302 (2009).

[19] Henry M. Hart, Jr. & Albert M. Sacks, The Legal Process: Basic Problems in the Making and Application of Law 174, 175 (William N. Eskridge, Jr. & Philip P. Frickey eds., 1994).

[20] See generally Ulen, supra note 18.

[21] Id. at 313.

[22] Hart & Sacks, supra note 19, at 176.

[23] See Ulen, supra note 18, at 314.

[24] Id. at 320.

[25] Jay Kesan, Bridges II: The Law-STEM Alliance & Next Generation Innovation, 112 N.W. U. L. Rev. 141, 141 (2018), https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1255&context=nulr_online

[26] Ulen, supra note 18, at 320.

[27] Claire E. Parker, To Keep Pace with Tech, Law School Seeks STEM Students, Harvard Crimson (May 6, 2016), https://www.thecrimson.com/article/2016/5/6/HLS-admissions-STEM-recruiting/

[28] Kesan, supra note 25, at 144, 145.

[29] Id.

[30] Ulen, supra note 18, at 326.

[31] Id. at 329.

[32] Id. at 330.

 

Image Source: https://www.forbes.com/sites/bernardmarr/2020/01/17/the-future-of-lawyers-legal-tech-ai-big-data-and-online-courts/?sh=28d11630f8c4

 

Artificial Intelligence, Real Discrimination

By Jessiah Hulle[1]

_____ 

“Success in creating AI could be the biggest event in the history of our civilization. . . . [But a]longside the benefits, AI will also bring dangers.” – Stephen Hawking (2016)[2]

Two years ago, various news outlets reported that Amazon uses artificial intelligence (“AI”) “not only to manage workers in its warehouses but [also] to oversee contract drivers, independent delivery companies and even the performance of its office workers.” The AI is a cold but efficient human resources manager, comparing employees against strict metrics and terminating all underperformers. “People familiar with the strategy say . . . Jeff Bezos believes machines make decisions more quickly and accurately than people, reducing costs and giving Amazon a competitive advantage.”[3]

This practice is no longer unusual. In fact, AI-assisted human resources (“HR”) work is now commonplace. Recently, over 70% of human resources leaders surveyed by Eightfold AI confirmed that they use AI for HR functions such as recruiting, hiring, and performance management. In that same survey, over 90% of HR leaders stated an intent to increase future AI use, with 41% indicating a desire to use AI in the future for recruitment and hiring.[4] Already, “three in four organizations boosted their purchase of talent acquisition technology” in 2022 alone and “70% plan to continue investing” in 2023, regardless of a recession.[5] Research by IDC Future Work predicts that by 2024, “80% of the global 2000 organizations will use AI-enabled ‘managers’ to hire, fire, and train employees.”[6]

Skulking in the shadows of this enthusiastic adoption of AI for HR work, however, is a problem: employment discrimination. Like humans, AI can discriminate on the basis of protected classes like race, sex, and national origin. This article briefly addresses this problem, summarizes current local, state, and federal laws enacted or proposed to curtail it, and proposes two solutions for modern employers itching to implement AI-assisted employee management tools but dreading employment litigation.

AI-assisted discrimination

“Machine learning is like money laundering for bias.” – Maciej Cegłowski[7]

Employers can use AI to assist with a host of tasks. Some niche AI-assisted tasks, such as moderating internet content[8] or providing health care services,[9] implicate legal issues and invite civil litigation. Others do not. But the AI-assisted task currently receiving heightened legal scrutiny from the government is employment decision-making, including hiring, assigning, promoting, and firing. The reason for this scrutiny is straightforward: AI can, and sometimes does, discriminate against protected classes.

How does this happen? Put simply, the problem of AI discrimination boils down to a single maxim: garbage in, garbage out.[10] An AI that “learns” how to think from biased information (“garbage in”) will invariably produce biased results (“garbage out”). A funny example of this is Tay, a rudimentary AI chatbot designed by Microsoft that turned into a Nazi after only a day of “learning” on Twitter.[11] A serious example is predictive policing software, which can unfairly target racial minorities after “learning” about crime rates from historical over-policing patterns in minority neighborhoods.[12]

In the field of human resources, “garbage in” fed to an AI can range from historical data tainted by past discrimination (e.g., segregation-era Whites-only hiring practices) to statistics warped by employee self-selection (e.g., the self-selection of male candidates into engineering). The resulting “garbage out” formulated by the AI is employment discrimination under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Americans with Disabilities Act (“ADA”), the Age Discrimination in Employment Act (“ADEA”), and other civil rights statutes.[13]
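The “garbage in, garbage out” dynamic can be shown with a deliberately tiny, made-up example: a naive screening score “trained” on a skewed hiring history reproduces the skew when scoring otherwise identical candidates. The data and scoring rule below are hypothetical simplifications, not any vendor’s actual model.

```python
# A minimal sketch of "garbage in, garbage out" using made-up numbers: a
# naive screening score learned from historical hiring data in which men
# were favored will reproduce that preference going forward.

historical_hires = [
    # (gender, hired) -- skewed history, e.g., self-selection plus past bias
    ("M", 1), ("M", 1), ("M", 1), ("M", 0),
    ("F", 1), ("F", 0), ("F", 0), ("F", 0),
]

def learned_score(gender):
    # "Training": each group's historical hire rate becomes its score.
    outcomes = [hired for g, hired in historical_hires if g == gender]
    return sum(outcomes) / len(outcomes)

print(learned_score("M"))  # 0.75 -- garbage in...
print(learned_score("F"))  # 0.25 -- ...garbage out: same resume, lower score
```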

Amazon is a prime example (no pun intended). In 2018, the company was forced to discontinue an AI program that filtered job applicant resumes because it developed an anti-woman bias. “Employees had programmed the tool in 2014 using resumes submitted to Amazon over a 10-year period, the majority of which came from male candidates. Based on that information, the tool assumed male candidates were preferable and downgraded resumes from women.”[14]

 

Local and state regulation

To curb AI-assisted discrimination, at least one locality and numerous states have enacted or proposed laws regulating bias in AI employment decision-making.

New York City

New York City is the clear leader on this front. In 2021, the city enacted an ordinance that requires employers using AI for job application screening to notify job applicants about the AI and conduct an annual independent bias audit of the AI if it “substantially assist[s] or replace[s] discretionary decision making.”[15] The city began enforcement of the ordinance for hiring and promotion decisions in July 2023. “The law [only] applies to companies with workers in New York City, but labor experts expect it to influence practices nationally.”[16]
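While the ordinance and its implementing rules define the audit’s precise methodology, bias audits of this kind generally turn on selection rates and impact ratios. The following sketch, using hypothetical applicant numbers and categories, illustrates the underlying arithmetic.

```python
# A minimal sketch of the selection-rate and impact-ratio arithmetic that
# bias audits of this kind typically involve (data and categories are
# hypothetical; the ordinance's implementing rules govern the real method).

applicants = {"M": 200, "F": 200}   # applicants screened, by category
selected   = {"M": 60,  "F": 30}    # applicants the tool advanced

selection_rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(selection_rates.values())

# Impact ratio: each category's selection rate over the highest rate.
impact_ratios = {g: rate / best for g, rate in selection_rates.items()}
print(impact_ratios)  # {'M': 1.0, 'F': 0.5} -- a red flag for the auditor
```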

Illinois and Maryland

On the state level, Illinois enacted the Artificial Intelligence Video Interview Act in 2019 to combat AI discrimination in screening initial job applicant interview videos.[17] The statute “requires employers that use AI-enabled analytics in interview videos” to notify job applicants about the AI, explain how it works, obtain the applicant’s consent, and destroy any analytics video within thirty days upon the applicant’s request. “If the employer relies solely on AI to make a threshold determination before the candidate proceeds to an in-person interview, that employer must track the race and ethnicity of the applicants who do not proceed to an in-person interview as well as those applicants ultimately hired.”[18]

Maryland enacted a similar statute in 2020, requiring employers to obtain a job applicant’s consent before using AI-assisted facial recognition technology during interviews.[19]

It appears that the impetus behind the Illinois and Maryland laws is a belief that AI-assisted facial recognition and analysis programs discriminate against less-privileged job applicants because such programs are trained on data from past, privileged applicants. As argued by Ivan Manokha, a lecturer at the University of Oxford, companies that use these programs “are likely to hire the same types of people that they have always hired.” A possible result is “inadvertently exclud[ing] people from diverse backgrounds.”[20]

Other states

Outside New York City, Illinois, and Maryland, numerous states have also proposed laws or empaneled special committees to address AI-assisted employment discrimination. For instance, the District of Columbia,[21] California,[22] and Massachusetts[23] have all introduced bills or draft regulations in the last two years to address this issue. And various states, including Alabama, Missouri, New York, North Carolina, and Vermont, have proposed or established committees, taskforces, or commissions to review and regulate AI issues.[24]

Virginia

So far, Virginia has neither enacted nor proposed a law to specifically regulate AI-assisted employment discrimination. In January 2020, Delegate Lashrecse D. Aird introduced a Joint Resolution to “convene a working group . . . to study the proliferation and implementation of facial recognition and artificial intelligence” because “the accuracy of facial recognition is variable across gender and race,”[25] but it was tabled by a House of Delegates subcommittee.[26]

Nevertheless, AI programs may still violate antidiscrimination laws in the state. Virginia antidiscrimination law — which protects traits ranging from racial and ethnic identity[27] to lactation,[28] protective hair braids,[29] and (for public employees) smoking[30] — presents a veritable minefield of legal issues for an AI program to traverse in screening job applicants and employees. For instance,

Virginia . . . recently passed a law that protects employees who use cannabis oil for medical purposes. This law distinguishes “cannabis oil” from other types of medicinal marijuana and has specific definitions of what is and is not protected. An algorithm that fails to take these nuances into consideration might inadvertently discriminate against protected cannabis users.[31]

Federal guidance

The federal government has also issued guidance condemning AI-assisted employment discrimination.

EEOC

Although the Equal Employment Opportunity Commission (“EEOC”) has yet to issue a formal rule on AI-assisted employment discrimination, it has clearly condemned the practice through various informal guidance documents, a draft enforcement plan, and at least one civil lawsuit.

First, in May 2022 the EEOC issued a question-and-answer-style informal guidance document explaining that an employer’s use of an AI program that “relies on algorithmic decision-making may violate existing requirements under [the ADA].”[32]

The EEOC explained that, most commonly, employers violate the ADA when they fail to provide a reasonable accommodation “necessary for a job applicant or employee to be rated fairly and accurately by [an AI program]” or rely on “an [AI] that intentionally or unintentionally ‘screens out’ an individual with a disability.” A “screen out” occurs when a “disability prevents a job applicant or employee from meeting — or lowers their performance on — a selection criterion, and the applicant or employee loses a job opportunity as a result.”

The EEOC provided multiple examples of AI-assisted screen-outs that possibly violate the ADA. In one example, an AI chatbot designed to engage in text communications with a job applicant may violate the ADA by screening out applicants who indicate “significant gaps in their employment history” because of a disability. In another example, AI-assisted video interviewing software that analyzes job applicant speech patterns may violate the ADA by screening out applicants who have speech impediments. In a third example, an AI-analyzed pre-employment personality test “designed to look for candidates . . . similar to the employer’s most successful employees” may violate the ADA by screening out job applicants with PTSD who struggle to ignore distractions but can thrive in a workplace with “reasonable accommodations such as a quiet workstation or . . . noise-cancelling headphones.” All of these examples follow the same theme: AI programs can reject job applicants based on external data without considering reasonable accommodations.

The EEOC warned that employers remain liable under the ADA even if an AI program is administered by a vendor.[33]

Second, in May 2022 the EEOC sued an international tutoring company for ADEA discrimination resulting from an AI-assisted automated job applicant screening program. According to the complaint, the company’s online tutoring application solicited birthdates of job applicants but automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. The company filed an amended answer denying the allegations in March 2023. The case is currently pending.[34]

Third, in January 2023 the EEOC announced in its Draft Strategic Enforcement Plan for fiscal years 2023 to 2027 that it was committed to “address[ing] systematic discrimination in employment.” The plan specifically announced the following subject matter priority for the agency:

The EEOC will focus on recruitment and hiring practices and policies that discriminate against racial, ethnic, and religious groups, older workers, women, pregnant workers and those with pregnancy-related medical conditions, LGBTQI+ individuals, and people with disabilities. These include: the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.[35]

In accordance with this strategic plan, the EEOC “launched an agency-wide initiative to ensure that the use of software, including artificial intelligence (AI), machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.”[36] The EEOC also held a four-hour public hearing on “Navigating Employment Discrimination in AI and Automated Systems,” which is currently hosted on its website[37] and YouTube.[38]

Fourth, the EEOC joined a Joint Statement with the Consumer Financial Protection Bureau, Department of Justice Civil Rights Division, and Federal Trade Commission promising to “monitor the development and use of automated systems,” “promote responsible innovation” in the field of AI, and “vigorously . . . protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”[39]

Finally, in April 2023 the EEOC published a second technical guidance document explaining that AI-assisted employment decision-making programs can violate Title VII.

The EEOC noted that modern employers use a variety of algorithmic and AI-assisted programs for human resources work, including scanning resumes, prioritizing job applications based on keywords, monitoring employee productivity, screening job applicants with chatbots, evaluating job applicant facial expressions and speech patterns with video interview programs, and testing job applicants on personality, cognitive ability, and perceived “cultural fit” with games and tests. However, under this new EEOC guidance document, all algorithmic and AI-assisted programs “used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees” fall within the ambit of the agency’s Guidelines on Employee Selection Procedures under Title VII in 29 C.F.R. Part 1607. In other words, if an AI program discriminates against a job applicant or employee in violation of Title VII, the EEOC evaluates the violation the same as if it were committed by a person.[40]

Again, the EEOC warned that employers remain liable under Title VII even if a discriminatory AI program is administered by a vendor.

It is expected that, in accordance with its four-year strategic plan and the Joint Statement, the EEOC will issue further guidance on this issue in the next few years.

The White House

In 2022 the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. The Blueprint reaffirmed that “[a]lgorithms used in hiring . . . decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination” and suggested five principles to “guide the design, use, and deployment of automated systems to protect the American public.” The second principle proposed the following right: “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.” According to the White House,

This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.[41]

Although this Blueprint is currently all bark, it portends the future bite of enhanced enforcement by the Biden Administration against AI-assisted discrimination.

Congress

So far, Congress has proposed bills to regulate AI generally but not to regulate AI-assisted employment discrimination specifically.[42] However, this does not mean that Congress is unaware of the issue. For instance, in March 2023, Alexandra Reeve Givens, the President and CEO of the Center for Democracy & Technology, testified before the U.S. Senate Committee on Homeland Security and Government Affairs that “if the data used to train the AI system is not representative of wider society or reflects historical patterns of discrimination, it can reinforce existing bias and lack of representation in the workplace.”[43]

Takeaways

In sum, as more employers use unregulated AI to assist with human resources tasks, the potential for inadvertent, disparate impact, and even intentional discrimination increases. New York City, Illinois, and Maryland have already enacted laws directly regulating AI-assisted recruiting. Other states have proposed similar, or even stricter, laws. Accordingly, employers must tread this area of AI usage carefully.

Perhaps the best takeaway for employers is a quote ostensibly taken from a 1979 presentation at IBM: “A computer can never be held accountable[.] Therefore a computer must never make a management decision.”[44] Good HR staff know antidiscrimination laws inside and out. In this current wild west of AI regulation, employers should rely on well-versed HR staff to review AI work, just like employers rely on employees to review intern work. Moreover, employers should require a human to make final hiring, assigning, promoting, and firing decisions. In fact, requiring a human decision is a loophole in the New York City ordinance, which only requires a bias audit for AI programs that “substantially assist or replace discretionary decision making.”[45] Additionally, employers should stay abreast of new regulatory guidance on AI from the EEOC as it is released.

The second-best takeaway for employers is the old joke: “The early bird gets the worm, but the second mouse gets the cheese.”[46] Many employers want to be an early bird in implementing new AI programs to boost HR functions. This desire is understandable. It seems like everyone else is already onboard the AI train. In 2017, the Harvard Business Review published an article claiming that “[t]he most important general-purpose technology of our era is artificial intelligence.”[47] Now, in 2023, close to 75% of surveyed HR leaders report using AI for human resources tasks. And that percentage only increases as companies scramble to get the worm. As Jensen Huang, the co-founder and CEO of trillion-dollar-valued Nvidia, recently predicted in a speech, “[a]gile companies will take advantage of AI and boost their position. Companies less so will perish.”[48] However, employers — especially small businesses — should also consider the benefits of being the second mouse. Everyone, from Fortune 100 corporations to local, state, and federal governments, is currently testing the scope of liability for AI-assisted discrimination.[49] This beta testing phase exposes employers to high potential risk and cost.[50] Therefore, although not the “coolest” approach, it behooves many employers to simply wait until this issue is either litigated and regulated or solved by the invention of a relatively bias-proofed human resources AI.

 

 

 

 

[1] Jessiah Hulle is a litigation and investigations associate at Gentry Locke in Roanoke, Virginia. He graduated from the University of Valley Forge in 2017 and Washington and Lee University School of Law in 2020.

[2] Dom Galeon, Hawking: Creating AI Could Be the Biggest Event in the History of Our Civilization, Futurism (Oct. 10, 2016), https://archive.is/M7DDD.

[3] Spencer Soper, Fired by Bot at Amazon: ‘It’s You Against the Machine’, Yahoo Finance (June 28, 2021), https://archive.is/Tpc5Q.

[4] Gem Siocon, Ways AI Is Changing HR Departments, Business News Daily (June 22, 2023), https://archive.is/Nf7rg.

[5] Lucas Mearian, Legislation to Rein in AI’s Use in Hiring Grows, Computerworld (Apr. 1, 2023), https://archive.is/Lx9xD.

[6] Lucas Mearian, The Rise of Digital Bosses: They Can Hire You – And Fire You, Computerworld (Jan. 6, 2022), https://archive.is/NuZLo.

[7] Maciej Cegłowski, The Moral Economy of Tech, Idle Words (June 26, 2016), https://archive.is/t6q6m (quoting remarks given at the SASE Conference in Berkeley).

[8] See, e.g., Force v. Facebook, Inc., 934 F.3d 53, 60 (2d Cir. 2019) (rejecting claim that Facebook’s AI-enhanced algorithm negligently propagated terrorism).

[9] See, e.g., Sharona Hoffman & Andy Podgurski, Artificial Intelligence and Discrimination in Health Care, 19 Yale J. Health Pol’y L. & Ethics 1 (2020) (arguing that AI-assisted algorithmic discrimination, especially on the basis of race, in health care should be actionable under Title VI).

[10] R. Stuart Geiger et al., “Garbage In, Garbage Out” Revisited: What Do Machine Learning Application Papers Report about Human-Labeled Training Data?, 2:3 Quantitative Sci. Stud. 795 (Nov. 5, 2021), https://archive.is/9D477 (quoting this maxim as a “classic saying in computing about how problematic input data or instructions will produce problematic outputs”).

[11] Amy Kraft, Microsoft Shuts Down AI Chatbot after It Turned into a Nazi, CBS News (Mar. 25, 2016), https://archive.is/xScSA (reporting that Tay went from stating “humans are super cool” on March 23, 2016, to “Hitler was right I hate the jews” on March 24, 2016).


Image Source: https://imgs.xkcd.com/comics/ai_hiring_algorithm.png

White Paper: Office of the User Advocate

By Kevin Frazier[1]

 

 

PURPOSE OF THE OFFICE OF THE USER ADVOCATE (UA)

THE UA AND THE RIGHT TO PETITION

UA POWERS

    Accountability

    Advocacy

UA PLACEMENT WITHIN META

TENURE AND SELECTION OF THE UA TRIAD

ORGANIZATION OF THE UA

CONCLUSION

 

 

PURPOSE OF THE OFFICE OF THE USER ADVOCATE (UA)

The content moderation system developed by Meta provides limited and, as set forth below, inadequate means of participation by users. The current system confines user participation to a single user challenging a content decision on a single post.[2] This limited form of participation diverges from Meta’s stated goals as well as human rights norms and laws.[3]

The creation of the Office of the User Advocate (UA), as proposed in my Richmond Journal of Law and Technology article[4] and detailed further below, will close the gap between Meta’s goals and the rights currently afforded to Meta users via its content moderation system. Housed within Meta, the UA would guarantee users a more meaningful role in every part of content moderation—from the development and amendment of Community Standards to the application and adjudication of those standards.

The UA will work on behalf of users to hold Meta accountable, to advocate for their interests, and to represent them in formal proceedings involving Meta, the Oversight Board (OB), and other stakeholders. Meta has rightfully celebrated its prior efforts to involve users in its content moderation system[5]—the UA is simply a deliberate and permanent means of continuing those efforts. As Meta expands its user base, refines its content moderation system, and creates new platforms, the UA will become even more valuable to Meta’s efforts to comply with human rights norms and law.

 

THE UA AND THE RIGHT TO PETITION

Meta has embraced a human rights framework for structuring and evaluating its content moderation decisions.[6] Fulfillment of Meta’s human rights responsibilities requires the creation of a means for users to exercise their right to petition, without which freedom of expression cannot fully be realized.[7]

The right to petition constitutes a fundamental right under human rights law.[8] An open procedure for individuals or groups alleging a violation of their rights or the rights of those they represent is an “essential, even irreducible, means of giving effect” to the protections set forth under human rights law.[9] In short, for Meta to protect the rights of its users, it must provide an “opportunity to demand that the right[s] be protected.”[10] The UA would provide such an opportunity.

By creating this opportunity, Meta can set a meaningful precedent for other platforms. Though some platforms tout their role as public spheres, their rule-making processes allow a staggeringly small role for the public. This discrepancy will become all the more apparent as platforms continue to play a larger role in communal discourse.

 

UA POWERS

Though Meta currently affords users the opportunity to challenge a decision they allege to be improper (whether the decision removed content or left it up), this mechanism falls short of the sort of opportunity necessary to enforce the human rights norms and laws at the heart of Meta’s Community Standards.[11] The mechanism prohibits challenges by groups of affected users, limits contestation to a single post rather than a specific Community Standard, and does not allow for proactive amendment of those standards in anticipation of crises and other events likely to increase the odds of violations.

Given the size of Meta’s user base, it is impossible for every challenge by a user or group of users to go through the entire content appeals process.[12] Meta has responded to this reality by attempting to prioritize cases for review by the OB based on the severity and virality of a post and the likelihood of the post violating Meta’s Community Standards.[13] Though perhaps unintentionally, and in a way that may mirror the choices users themselves would make, Meta has usurped users’ ability to signal their own priorities with respect to the enforcement and content of Meta’s Community Standards.
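To make the stakes of that prioritization concrete, consider a minimal sketch of how a severity/virality/likelihood triage might be scored. Meta discloses only the three criteria; the 0–1 scales, the weights, and every name in the snippet are illustrative assumptions, not Meta’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Case:
    severity: float              # assumed 0-1 scale: harm if the decision stands
    virality: float              # assumed 0-1 scale: how widely the post spreads
    violation_likelihood: float  # assumed 0-1 scale: odds of a Standards violation

def priority_score(case: Case, weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Collapse Meta's three stated criteria into one ranking number.
    The weighted sum and the weights themselves are assumptions made
    purely for illustration."""
    w_sev, w_vir, w_lik = weights
    return (w_sev * case.severity
            + w_vir * case.virality
            + w_lik * case.violation_likelihood)

# Rank a hypothetical queue of appeals, highest priority first.
queue = [Case(0.9, 0.2, 0.7), Case(0.4, 0.8, 0.6)]
for c in sorted(queue, key=priority_score, reverse=True):
    print(c, round(priority_score(c), 2))
```

Whatever the real function looks like, the white paper’s point holds: the weights encode priorities, and users currently have no say in setting them.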

The UA will ensure users have a meaningful right to petition by overseeing mechanisms for users to identify Community Standards in need of reform, to request certain classes of cases receive OB review, and to ensure their values and preferences are represented in formal proceedings involving Meta, the OB, and other stakeholders.

The following is a list of potential UA powers. The grant of even a few of these powers to a UA would serve as a meaningful improvement on the status quo. Stakeholders are encouraged to debate these powers and offer suggestions of their own.

Accountability

The UA can further the rights of users by performing the following accountability actions:

  • Monitoring Meta’s adoption of OB policy recommendations.
    • For example, the UA could solicit user feedback on which recommendations they want Meta to prioritize.
  • Auditing, evaluating, and informing the metrics collected by Meta and the OB and reported in their respective updates.
    • For example, the UA could advocate for reporting on how Meta applies its case selection criteria to the cases it recommends to the OB, to better understand how Meta assesses severity, virality, and likelihood of violation.
  • Attending Meta’s policy meetings and other engagements with external stakeholders.
  • Participating in the evaluation and selection of Meta governance team members and/or OB members.

Advocacy

The UA can further the rights of users by advocating on their behalf and taking the following actions:

  • Identifying user concerns by:
    • conducting an annual survey of users regarding the content and application of Meta’s Community Standards; and,
    • empaneling a citizen assembly (hereinafter, the “Assembly of Meta Users” or “AMU”) that is representative of Meta users to stand “on-call” for a two-year term during which they will respond to requests for input on everything from recent OB PAOs and case decisions to the candidates for Meta and OB positions (as described further below).
  • Sharing and advancing user concerns by:
    • serving as their representative in:
      • the OB adjudication process:
        • for example, one could imagine a UA Attorney General tasked with writing briefs for consideration by the OB; and
      • Meta Community Standards review sessions;
    • consolidating user concerns into a sort of “class action” case for review by the OB (a sketch of one possible petition threshold follows this list);
      • The UA could operate a platform akin to Change.org where users could publish petitions concerning certain types of cases and specific Community Standards (as well as allowances to those standards) for consideration by other users; if a sufficient number and diversity of users backed a petition, the UA could have the authority to demand review by Meta and the OB.
    • regularly developing case selection criteria for Meta’s adoption based on the concerns of users; and,
    • publishing a response to every OB decision that expresses how the UA thinks the decision impacts users’ rights.
  • Empowering users to share their concerns by:
    • empaneling juries of the user’s peers whenever a post undergoes OB adjudication;
      • These jurors would provide the OB with a better understanding of the context in which a post was made. For instance, if a candidate’s post were challenged, a jury of users from that jurisdiction could review the facts before the OB and answer clarifying questions issued by the OB.
    • overseeing the election and participation of user representative(s) on the OB; and,
      • Users should have at least one permanent member on the OB. This representative could work closely with the UA to make sure they understand and represent the interests of users in all content disputes.
    • [other actions may achieve this goal]
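As referenced in the “class action” item above, the petition mechanism needs some trigger rule. The following is a minimal sketch of one such rule, assuming a backer-count threshold and a region-based diversity test; both numbers and all names in the snippet are placeholders for parameters the white paper deliberately leaves open.

```python
def petition_triggers_review(
    backers: list[dict],
    min_backers: int = 10_000,  # placeholder threshold, not a proposed figure
    min_regions: int = 3,       # placeholder diversity requirement
) -> bool:
    """Hypothetical test for when a user petition compels Meta/OB review.

    Each backer is a dict like {"user_id": ..., "region": ...}. Requiring
    distinct user IDs guards against duplicate signatures; requiring
    multiple regions is one stand-in for the "diversity of users" the
    white paper calls for.
    """
    unique_users = {b["user_id"] for b in backers}
    regions = {b["region"] for b in backers}
    return len(unique_users) >= min_backers and len(regions) >= min_regions

# Example: many backers, but drawn from only two regions, so no review yet.
backers = [{"user_id": i, "region": "EMEA" if i % 3 else "APAC"} for i in range(12_000)]
print(petition_triggers_review(backers))  # False: only two regions represented
```

Any real deployment would need to set these parameters through the same participatory processes the UA is meant to oversee.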

UA PLACEMENT WITHIN META

The UA should be formally within Meta but operate in a highly independent fashion. By residing within the Meta organization, the UA can better fulfill its mission due to several factors:

  • Increased understanding of the technical limitations of Meta’s platforms
  • More opportunities to connect with Meta employees and learn about their priorities and plans
  • Greater access to expertise within the Meta community regarding how best to consult a global user base
  • More reliable funding by virtue of being just another part of the company

This placement, of course, would raise a number of valid concerns. Chief among those concerns may be the independence of the UA. The selection process proposed below should alleviate such concerns.

TENURE AND SELECTION OF THE UA TRIAD

Given Meta’s global user base and the representative nature of the UA, no single individual could steer the UA alone. Instead, a collection of three individuals—each serving staggered, five-year terms—should form the UA Triad. Each member of the Triad would be a UA Director—tasked with overseeing the office’s accountability and advocacy functions.

Selection of Directors should involve the OB, Meta, and users so that each major stakeholder is invested in the success of the UA. The OB should nominate candidates, and Meta should select each Director. The Assembly of Meta Users (as defined above) should have the authority to veto Meta’s selection. If this latter mechanism gives rise to concerns that the AMU would simply veto each of Meta’s finalists, then the AMU could be assigned a fixed number of vetoes, as sketched below.
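A fixed veto budget makes the selection cycle easy to state precisely. The sketch below models it under stated assumptions: the OB supplies a nominee slate, Meta picks in order of its own preference, and the AMU may spend a limited number of vetoes. The budget of two and every function name are illustrative, since the white paper leaves these parameters open.

```python
from typing import Callable

def select_director(
    nominees: list[str],                  # slate put forward by the OB
    meta_rank: Callable[[str], float],    # Meta's preference (lower = preferred)
    amu_approves: Callable[[str], bool],  # stand-in for an AMU up-or-down vote
    veto_budget: int = 2,                 # illustrative fixed number of vetoes
) -> str:
    """Run the nominate/select/veto cycle described above."""
    vetoes_used = 0
    for candidate in sorted(nominees, key=meta_rank):
        if amu_approves(candidate) or vetoes_used >= veto_budget:
            return candidate  # seated: approved, or the AMU is out of vetoes
        vetoes_used += 1      # AMU spends a veto; Meta moves to its next choice
    raise ValueError("the OB slate must exceed the AMU's veto budget")
```

One design consequence worth noticing: once the budget is exhausted, Meta’s next pick is seated automatically, which is exactly what defuses the veto-everything worry raised above.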

ORGANIZATION OF THE UA

The UA Triad would oversee a division of Meta akin to a foreign service department. Directors would select Regional Directors (RDs) and assign each RD to one of Meta’s three self-identified main regions:[14] Europe, Middle East, and Africa (EMEA); Asia and the Pacific (APAC); and North America. These three RDs would then assemble teams of ambassadors to build relationships with users in the countries of their respective regions. RDs would also create analyst teams tasked with researching current events and crises that may warrant action by the RD or the Triad.

CONCLUSION

Meta’s adherence to human rights norms and laws necessitates the protection of users’ right to petition. Users currently lack that right. By creating the Office of the User Advocate, Meta can ensure that users have a meaningful opportunity to challenge Meta’s Community Standards, to participate in OB decisions, and to shape a platform that reflects their values and concerns.

This White Paper should start, rather than conclude, a conversation around the right to petition among social media users and the need for something akin to the UA proposed above. If you are interested in this topic, please let me know—I am willing and eager to talk further.

 

 

 

 

 

[1] Kevin Frazier is an Assistant Professor at Crump College of Law at St. Thomas University and a Summer Research Fellow at the Legal Priorities Project. Frazier earned a Master of Public Policy from the Harvard Kennedy School and a JD from the UC Berkeley School of Law. Send feedback to kfraz@berkeley.edu.

[2] See How the Meta appeals process works, Meta (accessed May 18, 2023), https://transparency.fb.com/policies/improving/appealed-content-metric/.

[3] See Facebook Community Standards, Meta (accessed May 18, 2023), https://transparency.fb.com/policies/community-standards/; infra note 7 and accompanying text.

[4] Kevin Frazier, Why Meta Users Need a Public Advocate: A Modest Means to Address the Shortcomings of the Oversight Board, 28 Rich. J.L. & Tech. 596 (2021), https://jolt.richmond.edu/files/2022/04/Frazier-Final.pdf.

[5] See, e.g., Facebook Community Standards, Meta (accessed May 18, 2023), https://transparency.fb.com/policies/community-standards/ (discussing users as key stakeholders in the development of Facebook’s Community Standards).

[6] See Meta Q1 2023 Quarterly Update on the Oversight Board, Meta at 8 (May 17, 2023) (stressing the value provided by the Oversight Board’s “crucial overlay of global human rights frameworks and diverse perspectives to [Meta’s] most significant and difficult decisions”).

[7] See Lima Principles, Organization of American States (Nov. 16, 2000) (identifying the right to petition as essential to the protection of other rights); DRL Notice of Funding Opportunity, U.S. State Department (Feb. 2, 2023) (listing the right to petition as a part of freedom of expression); see also Declaration of Principles on Freedom of Expression, Organization of American States (n.a.).

[8] See, e.g., Schonberger v. European Parliament, Case C-261/13, ¶ 13 (C.J.E.U. 2014).

[9] See, e.g., Michael J. Dennis & David P. Stewart, Justiciability of Economic, Social, and Cultural Rights: Should There Be an International Complaints Mechanism to Adjudicate the Rights to Food, Water, Housing, and Health?, 98 Am. J. Int’l L. 462, 467–68 (2004).

[10] See Virginia Leary, Justiciability and Beyond: Complaint Procedures and the Right to Health, Rev. Int’l Comm’n Jurists 105, 106 (Dec. 1995).

[11] Facebook and Instagram have Community Standards and Community Guidelines, respectively. This White Paper refers to these standards jointly as Meta’s Community Standards in the interest of brevity.

[12] See Zoe Kleinman, Meta board hears over a million appeals over removed posts, BBC (June 22, 2022), https://www.bbc.com/news/technology-61893903.

[13] How Meta Prioritizes Content for Review, Meta (Jan. 26, 2022), https://transparency.fb.com/policies/improving/prioritizing-content-review/.

[14] Meta Q1 2023 Quarterly Update on the Oversight Board, Meta at 18.

 

 

 

Image Source: https://images.app.goo.gl/1wdKVgufDJcAvrpe9

Navigating Legal Factors for U.S. Companies Entering the E-commerce Market in Africa

By Yanrong Zeng

 

 

 

As Africa’s online banking and shopping sectors have gained popularity, e-commerce has become a crucial aspect of business operations in the region, attracting foreign investment.[1] To successfully export goods to Africa, U.S. companies must have a deep understanding of the legal factors that impact the e-commerce sector. This blog post delves into the legal considerations that contribute to the success of e-commerce in different African countries and recommends suitable entry points for businesses entering the e-commerce market.

The success potential of e-commerce hinges on two factors: information infrastructure and legal considerations. Information infrastructure sets the ceiling of e-commerce possibilities in a target market, as access to the internet, mobile phones, bank accounts, and postal addresses is necessary for online shopping.[2] To assess a market’s online shopping readiness, the UNCTAD B2C E-Commerce Index is an effective tool.[3] The African countries with the highest index scores include South Africa and Algeria, followed closely by Kenya, Nigeria, and Morocco.[4] Meanwhile, Senegal, Egypt, and Ivory Coast are further down the list.[5] It is worth noting that Egypt has surprisingly dropped in the rankings over the past decade.[6]

Legal considerations can act as limiting factors for e-commerce opportunities. These include e-commerce law, consumer protection law, data privacy laws, and breach notification laws. However, companies can turn these laws to their advantage, using them as a guide to identify the most suitable e-commerce market to enter.

To minimize potential disputes and legal complications, U.S. companies should identify target markets with reliable legal protection for electronic agreements and strong consumer protection laws. Of the countries mentioned above, only Nigeria has yet to publish a distinct e-commerce law, although a bill is currently working its way through the legislature.[7] The Algerian e-commerce market is not open to foreign companies, which rules Algeria out as a potential market to enter.[8] Among the remaining countries, Egypt, Nigeria, South Africa, Ghana, and Morocco appear to be suitable markets for entry, as they have established specific laws or regulations to protect consumers, especially in online transactions.[9] Conversely, countries like Kenya, Algeria, Senegal, and Ivory Coast appear to have weaker consumer protection laws, which can create legal ambiguities and compliance challenges.[10]

Data security breaches pose a significant risk, as indicated by the high numbers of malware attacks on industrial control systems in the target markets.[11] Therefore, it is crucial for U.S. companies to take proactive measures to protect their data and adhere to foreign laws. To mitigate these risks, U.S. companies can prioritize entry into markets that have uniform data privacy and protection laws across a group of countries. This is because complying with the legal requirements of one country in the group ensures compliance with all others. The African Union (AU) member states and Economic Community of West African States (ECOWAS) member states are obligated to respect, protect, and promote the right to privacy and personal data protection, as stated in their declarations and conventions.[12]

To ensure compliance and mitigate risks, U.S. companies need to carefully evaluate their business requirements and risk tolerance before entering a new market with data breach notification laws.[13] For larger companies with a greater focus on data protection, countries such as Nigeria, Egypt, and Algeria, which have well-defined and stringent data breach notification laws, may be a suitable choice.[14] For smaller to medium-sized companies that prioritize balancing compliance costs with maintaining consumer trust, countries such as Kenya, Ghana, and South Africa, which have moderate data breach notification laws, may be a more practical option.[15]
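To illustrate how these legal criteria might be combined into a first-pass screen, here is a minimal sketch. The country classifications simply restate the assessments above; the filter logic, the field names, and the risk-posture mapping are illustrative assumptions, not legal advice.

```python
# Market profiles restating the blog post's assessments of each country's
# e-commerce law, consumer protection strength, breach-notification regime,
# and openness to foreign companies (Algeria is closed; Nigeria's law is pending).
markets = {
    "Egypt":        {"ecommerce_law": True,  "consumer_protection": "strong", "breach_notice": "stringent", "open_to_foreign": True},
    "Nigeria":      {"ecommerce_law": False, "consumer_protection": "strong", "breach_notice": "stringent", "open_to_foreign": True},
    "South Africa": {"ecommerce_law": True,  "consumer_protection": "strong", "breach_notice": "moderate",  "open_to_foreign": True},
    "Ghana":        {"ecommerce_law": True,  "consumer_protection": "strong", "breach_notice": "moderate",  "open_to_foreign": True},
    "Algeria":      {"ecommerce_law": True,  "consumer_protection": "weak",   "breach_notice": "stringent", "open_to_foreign": False},
}

def suitable(profile: dict, risk_posture: str) -> bool:
    """First-pass screen: require an open market, an e-commerce law, strong
    consumer protection, and breach-notification stringency matched to the
    company's risk posture ("low" tolerance -> stringent laws preferred)."""
    wanted = "stringent" if risk_posture == "low" else "moderate"
    return (profile["open_to_foreign"]
            and profile["ecommerce_law"]
            and profile["consumer_protection"] == "strong"
            and profile["breach_notice"] == wanted)

print([c for c, p in markets.items() if suitable(p, risk_posture="low")])   # ['Egypt']
print([c for c, p in markets.items() if suitable(p, risk_posture="high")])  # ['South Africa', 'Ghana']
```

A screen like this is only a starting point; local counsel and country-specific diligence would still be required before any entry decision.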

To operate in Africa, U.S. companies must adhere to the Foreign Corrupt Practices Act (FCPA), which extends beyond U.S. borders.[16] To avoid violating this law, U.S. companies need to prioritize anti-corruption measures. Transparency International’s 2022 Corruption Perceptions Index (CPI) revealed that sub-Saharan Africa is currently facing a notable challenge with corruption, which may impact businesses operating within the region.[17] As a result, U.S. companies must conduct thorough due diligence to ensure compliance with anti-corruption laws when entering the e-commerce market in the region.

In conclusion, the e-commerce market in Africa presents both risks and opportunities for U.S. companies. While the region has seen tremendous growth in e-commerce, it is essential for U.S. companies to carefully consider the legal landscape and regulatory environment in each target market. By prioritizing legal compliance, consumer protection, data privacy, and anti-corruption measures, U.S. companies can mitigate risks and maximize opportunities for success. Ultimately, those who navigate the legal complexities with diligence and strategic planning stand to benefit from the growing e-commerce market in Africa.

 

 

 

 

 

 

 

[1] White & Case, Africa Focus: Navigating a Changing Business Landscape in Africa and Beyond (Spring 2021), https://www.whitecase.com/publications/insight/africa-focus-spring-2021.

[2] U.N. Conf. on Trade and Dev., UNCTAD B2C E-Commerce Index 2020 Spotlight on Latin America and the Caribbean, 1, https://unctad.org/system/files/official-document/tn_unctad_ict4d17_en.pdf.

[3] Id.

[4] Id. at 15–16.

[5] Id.

[6] Id. at 16.

[7] Aderibigbe et al., Digital Business in Nigeria: Overview, Thomson Reuters (Jan. 1, 2023), https://uk.practicallaw.thomsonreuters.com/w-020-0579; Electronic transaction: Senate prepares legal framework to guide deals, Tribune Online (Feb. 27, 2020), https://tribuneonlineng.com/electronic-transaction-senate-prepares-legal-framework-to-guide-deals/; Kenya Commc’n (Amend.) Act (2008), http://kenyalaw.org/kl/fileadmin/pdfdownloads/AmendmentActs/2009/KENYACOMMUNICATIONS_AMENDMENT_ACT_2008.pdf; Electronic Commu’n and Transactions Act (2002), https://www.gov.za/sites/default/files/gcis_document/201409/a25-02.pdf; Dyer et al., Digital Business in South Africa: Overview, Thomson Reuters (Mar. 1, 2021), https://uk.practicallaw.thomsonreuters.com/w-007-8319; Electronic Transactions Act (2008), https://ictpolicyafrica.org/fr/document/x6dx4fyl9b9; Electronic Signature Law No. 15 (2004), https://itida.gov.eg/English/Documents/2.pdf; World Intell. Prop. Org., Law No. 2008-08 on the Electronic Transactions, https://www.wipo.int/wipolex/en/legislation/details/10283; Evidence Act (2011) § 93, https://www.refworld.org/pdfid/54f86b844.pdf.

[8] Lloyds Bank, E-Commerce in Algeria (last updated Apr. 2023), https://www.lloydsbanktrade.com/en/market-potential/algeria/ecommerce; Loucif and Gauvin, Publication of the Law on the Post and the Electronic Communications and the E-Commerce Law, LPA-CGR, https://tahseen.ae/media/3093/algeria_law-on-the-post-and-electronic-communications-and-the-e-commerce-law.pdf.

[9] Consumer Code of Prac. Regul. (2007), https://ncc.gov.ng/docman-main/legal-regulatory/regulations/102-consumer-code-of-practice-regulations-1/file; U.N. Conf. on Trade and Dev., Review of e-commerce legislation harmonization in the Economic Community Of West African States, 40, https://unctad.org/system/files/official-document/dtlstict2015d2_en.pdf; Sulaiman and Mashaba, E-commerce transactions under the Electronic Communications and Transactions Act and Consumer Protection Act, Dentons (Aug. 26, 2022), https://www.dentons.com/en/insights/articles/2022/august/26/e-commerce-transactions-under-the-electronic-communications#:~:text=Contact%20us-,E%2Dcommerce%20transactions%20under%20the%20Electronic%20Communications%20and%20Transactions%20Act,68%20of%202008%20(CPA); Electronic Transactions Act (2008), https://www.researchictafrica.net/countries/ghana/Electronic_Transactions_Act_no_772:2008.pdf; Morocco Ministry of Indus. and Trade, Consumer Protection, https://www.mcinet.gov.ma/en/content/consumer-protection.

[10] Kenya Info and Commc’n Act (1998), https://www.ca.go.ke/wp-content/uploads/2021/02/Kenya-Information-and-Communication-Act-1998.pdf; Consumer Prot. Law No.181 (2018), https://leap.unep.org/countries/eg/national-legislation/consumer-protection-law-no181-2018#:~:text=181%20of%202018.,-Country&text=This%20Law%20consisting%20of%2076,as%20increasing%20the%20consumer’s%20rights; Brill, Algeria – Consumer Protection, https://referenceworks.brillonline.com/entries/foreign-law-guide/algeria-consumer-protection-COM_013036; ICT Policy Africa, Ordinance n ° 2012 293 of March 21 2012 on Telecommunications and Information Technologies (Unofficial Translation), https://ictpolicyafrica.org/en/document/nvnyrchgy6r.

[11] Culture Custodian, More African Countries are Taking Data Privacy and Protection Seriously (Feb. 8, 2023), https://culturecustodian.com/more-african-countries-are-taking-data-privacy-and-protection-seriously/.

[12] Id; African Union, Personal Data Protection Guidelines for Africa (May 9, 2018), https://www.internetsociety.org/wp-content/uploads/2018/05/AUCPrivacyGuidelines_2018508_EN-1.pdf.

[13] Practical Law Data Privacy & Cybersecurity, Global Data Breach Notification Laws Chart: Overview, Thomson Reuters (Nov. 28, 2022), https://us.practicallaw.thomsonreuters.com/w-016-6863.

[14] DLA Piper, Data Protection Laws of the World, https://www.dlapiperdataprotection.com/; Law 151/2020 on the Protection of Personal Data, https://www.ilo.org/dyn/natlex/natlex4.detail?p_lang=en&p_isn=111246&p_count=7&p_classification=01; Data Guidance, Algeria: Data protection law published in Official Gazette (Apr. 16, 2019), https://www.dataguidance.com/news/algeria-data-protection-law-published-official-gazette.

[15] Nzilani Mweu, Kenya – Data Protection Overview, Data Guidance (Mar. 2023), https://www.dataguidance.com/notes/kenya-data-protection-overview; Bhagattjee, South Africa – Data Protection Overview, Data Guidance (July 2022), https://www.dataguidance.com/notes/south-africa-data-protection-overview; Cybersecurity Act (2010), https://csdsafrica.org/wp-content/uploads/2021/08/Cybersecurity-Act-2020-Act-1038.pdf.

[16] Nick Oberheiden, 10 Reasons Why FCPA Compliance Is Critically Important for Businesses, Nat’l L. Rev. (July 24, 2020), https://www.natlawreview.com/article/10-reasons-why-fcpa-compliance-critically-important-businesses.

[17] Transparency Int’l, CPI 2022 for Sub-Saharan Africa: Corruption Compounding Multiple Crises (Jan. 31, 2023), https://www.transparency.org/en/news/cpi-2022-sub-saharan-africa-corruption-compounding-multiple-crises#:~:text=A%20regional%20average%20score%20of,by%20significant%20declines%20in%20others.

Image source: https://www.uneca.org/stories/the-afcfta%2C-an-opportunity-for-africa%E2%80%99s-youth-to-accelerate-trade-and-industrialization

To Neurotech or not to Neurotech – Whether ‘tis nobler in the Mind to Regulate

By Jack Younis

 

 

 

In the 2007 hit television show Chuck, an unwitting computer geek is turned into a CIA secret agent/asset when he downloads fighting skills and a database of government intelligence into his brain.[1] Regardless of the action-comedy shenanigans that ensue, the concept of connecting human consciousness directly to technology has continued to capture the cultural zeitgeist. Now, neurotechnology and its related fields have advanced beyond discussion in popular culture and science fiction and become an increasingly topical reality. Moreover, as advances are made in neurotech, the legal questions presented by this progress become increasingly pressing.

Much like Chuck’s download and transmission of CIA data to his brain, one facet of neurotechnology is the ability to “download” data from the technology itself.[2] The brain itself is a biological computer, relying on the firing of electric signals between neurons to execute commands, not unlike an actual computer.[3] The application and development of technology that interprets these signals and firings has led to significant advancements in neurotech.[4]

With such increasingly adaptable technology, many in the field are calling for increased regulation. As Professor Rajesh P.N. Rao of the University of Washington in Seattle puts it, “It’s a good time for us to think about these devices before technology leaps ahead and it’s too late.”[5] And regulatory commentary has already begun: in December 2019, the Organisation for Economic Co-operation and Development (OECD) issued the first international standard for regulating neurotechnology.[6]

The OECD’s recommendations for the novel technology set out nine principles that governance of the nascent industry should weigh, among them promoting responsible innovation, prioritizing safety assessments, and safeguarding brain data and other information.[7]

Beyond regulatory guidance, the adoption of these technologies has become prevalent in legal discussions as well. Prominent in the conversation is Dr. Allan McCay, who was named one of the most influential lawyers of 2021 by Australasian Lawyer.[8] Dr. McCay, a criminal law professor at the University of Sydney Law School, published a report addressing these concerns in August 2022.[9] His report focuses not only on the social, political, and economic concerns related to neurotechnology, but also on the ethical and legal implications that follow.[10] Mirroring the guidance put forth by the OECD, Dr. McCay’s work homes in on the ethical steps that must be considered as progress continues to be made, emphasizing “how the law should respond” in addition to how it is applied.[11]

Even with careful consideration of neurotechnology’s future, not every concern is strictly about restricting the developing industry. Some find that regulations themselves need breathing room to operate effectively. As one article puts it, “Outright bans of certain technologies could simply push them underground, so efforts to establish specific laws and regulations must include organized forums that enable in-depth and open debate.”[12] Much like the OECD and Dr. McCay, Rafael Yuste and the Morningside Group contend that the development of neurotechnology requires consideration beyond technological implications; legal questions related to privacy and consent, agency and identity, augmentation, and bias must all be accounted for as part of the discussion.[13]

Regardless of whether neurotechnology ever enables humans to download martial-arts moves and spy secrets directly into their consciousness, the emergence of this technology will raise more and more questions. Whether those questions concern the administration of justice or the push toward ever-greater capabilities, the conversation surrounding neurotechnology will only grow, and the legal field must stay prepared.

 

 

 

 

[1] Chuck: About, NBC (2023), https://www.nbc.com/chuck/about.

[2] Julia Masselos, Neurotechnology, Technology Networks (Feb. 11, 2022), https://www.technologynetworks.com/neuroscience/articles/neurotechnology-358488.

[3] Id.

[4] Id.

[5] Esther Shein, Neurotechnology and the Law, 65 Communications of the ACM, no. 8, 2022, at 16-18, https://cacm.acm.org/magazines/2022/8/262912-neurotechnology-and-the-law/fulltext.

[6] OECD, Recommendation of the Council on Responsible Innovation in Neurotechnology, OECD Legal Instrument (Dec. 11, 2019), https://legalinstruments.oecd.org/api/print?ids=658&Lang=en.

[7] Id.

[8] Allan McCay, Neurotechnology, law and the legal profession, The Law Society (August 2022), https://www.scottishlegal.com/uploads/Neurotechnology-law-and-the-legal-profession-full-report-Aug-2022.pdf.

[9] Id.

[10] Id.

[11] Id. at 14.

[12] Rafael Yuste et al., Four ethical priorities for neurotechnologies and AI, Nature (Nov. 9, 2017), https://www.nature.com/articles/551159a#citeas.

[13] Id.

Image Source: https://www.flickr.com/photos/90958025@N03/8384110298

Are Layoffs the New Normal for Big Tech?

By Kasey Hall

 

 

 

Over 140,000 tech workers were laid off in 2022, and so far in 2023, we have seen more than 94,000 jobs cut, ranging from tech start-ups to “Big Tech.”[1] In fact, the tech industry has seen its highest number of layoffs since the dot-com bubble burst in the early 2000s.[2] These layoffs have been all over the news and social media, with many younger generations questioning the sustainability of a career in tech.[3]

In the past, tech companies prioritized a “growth at all costs” mindset under which profitability was viewed as a mere afterthought.[4] Sanjay Brahmawar, the CEO of the enterprise software firm Software AG, says, “for years companies have said ‘let’s just keep growing and we’ll figure out profitability somewhere down the road.’”[5] Since 2011, the tech industry has grown year after year, with explosive growth occurring after the pandemic began.[6] In 2020 and 2021, sales rose sharply as new work-from-home orders put heavy demand on tech companies, and more people and businesses relied on these technologies than ever before.[7] During the pandemic, tech hiring became progressively more competitive, with companies increasing pay packages and benefits across the board.[8] For instance, Amazon more than doubled its corporate staffing, and Meta doubled its employment headcount between March 2020 and September 2021.[9] This record-setting growth, however, could not be maintained forever, and we are currently experiencing a significant course correction triggered by an economic slowdown.[10]

For a while, investors were willing to let these tech companies spend freely so long as share prices reliably continued to grow by double digits year after year.[11] However, as internal costs rose and spending slowed, many companies faced shrinking profits and alarms from angry investors calling for a significant reduction in expenses.[12] The “growth at all costs” era seems to be ending for “Big Tech.”[13] Investors are instead shifting the focus toward profitability and efficiency, describing this as the “new normal” for tech companies.[14] So, is this investor-led “new normal” to blame for these tech layoffs? Michael Cusumano, deputy dean at MIT’s Sloan School of Management, believes that “these massive tech layoffs have more to do with investors than companies’ bottom lines.”[15]

As record-breaking growth is no longer feasible long term, investors have instead set their sights on curbing expenses and have begun to evaluate tech companies more harshly.[16] This means that the high-skilled professionals hired en masse during the pandemic, with sizable salaries and pay packages to match, are the first to be cut as tech companies reassess their balance sheets.[17] Tech companies have done all this to signal to investors that they are willing to pursue long-term growth by showing more fiscal responsibility in the short term with regard to staffing.[18] This reorganization likely caused the industry-wide layoffs.[19] However, the layoffs should not signal absolute doom to those interested in the industry’s success. Instead, they could indicate that the “industry is maturing or becoming more stable after rapid growth” and that tech companies are invested in a more sustainable path forward.[20]

 

 

 

 

 

[1] Keerthi Vedantam, Tech Layoffs: U.S. Companies That Have Cut Jobs in 2022 and 2023, Crunchbase News (Mar. 3, 2023), https://news.crunchbase.com/startups/tech-layoffs/.

[2] Amanda Hetler, Tech Sector Layoffs Explained: What You Need to Know, TechTarget (Feb. 1, 2023), https://www.techtarget.com/whatis/feature/Tech-sector-layoffs-explained-What-you-need-to-know.

[3] Tripp Mickle, Tech Layoffs Shock Young Workers. The Older People? Not So Much., N.Y. Times (Jan. 23, 2023), https://www.nytimes.com/2023/01/20/technology/tech-layoffs-millennials-gen-x.html.

[4] Leslie Picker & Ritika Shah, Tech Private Equity Investor Orlando Bravo Says the Mantra of “Growth at all Costs” is Over, CNBC (Mar. 3, 2022, 11:24 AM), https://www.cnbc.com/2022/03/03/tech-private-equity-investor-orlando-bravo-says-the-mantra-of-growth-at-all-costs-is-over-.html.

[5] Will Daniel, How to Navigate the Stock Market’s “New Normal” After the Last 2 Decades of Investing Became Ancient History, Fortune (June 4, 2022, 6:30 AM), https://fortune.com/2022/06/04/tech-stocks-investing-new-normal-end-of-growth-at-all-costs-era/.

[6] The Future of Big Tech, J.P.Morgan (Dec. 23, 2022), https://www.jpmorgan.com/insights/research/future-of-big-tech.

[7] Why Are Tech Companies Laying Off All These Workers?, Forbes (Jan. 27, 2023 10:50 AM), https://www.forbes.com/sites/qai/2023/01/27/why-are-tech-companies-laying-off-all-these-workers/?sh=30a34e764fc6.

[8] Id.

[9] Clare Duffy, How Big Tech’s Pandemic Bubble Burst, CNN (Jan. 22, 2023, 8:11 AM), https://www.cnn.com/2023/01/22/tech/big-tech-pandemic-hiring-layoffs/index.html.

[10] Big Tech Layoffs – A Meltdown or Course Correction? Harvard Prof Ranjay Gulati Explains, The Econ. Times (Nov. 10, 2022, 11:16 AM), https://economictimes.indiatimes.com/markets/expert-view/big-tech-layoffs-a-meltdown-or-course-correction-harvard-prof-ranjay-gulati-explains/articleshow/95418482.cms?from=mdr.

[11] Jake Swearingen, Wall Street Ignored Big Tech’s Bloat During Boom Times. Now It’s Ready to Slice and Dice, Insider (Nov. 17, 2022, 2:03 PM), https://www.businessinsider.com/tech-layoffs-meta-alphabet-wall-street-2022-11.

[12] Hetler, supra note 2.

[13] Daniel, supra note 5.

[14] Id.

[15] Forbes, supra note 7.

[16] Id.

[17] Bobby Allyn, 5 Takeaways from the Massive Layoffs Hitting Big Tech Right Now, NPR (Jan 26, 2023, 5:00 AM), https://www.npr.org/2023/01/26/1150884331/layoffs-tech-meta-microsoft-google-amazon-economy.

[18] Forbes, supra note 7.

[19] Id.

[20] Hetler, supra note 2.

 

 

Image Source: https://unsplash.com/photos/1K9T5YiZ2WU

ChatGPT Co-Wrote an Episode of South Park. Will The AI Chatbot Replace the Need for Writers in Hollywood?

By Cleo Scott

 

 

ChatGPT has been a hot topic lately. From dating apps[1] to the courtroom,[2] the natural language processing tool driven by artificial intelligence technology is transforming the way we do things.[3] Now, the trailblazing chatbot can add television writing to its resume. South Park’s creators used OpenAI’s chatbot to create the fourth episode of season 26.[4] The episode, titled “Deep Learning,” shows boys from Stan’s class using the chatbot to write essays and send texts to girls.[5] During a speech written by ChatGPT, Stan argues that people shouldn’t be blamed for using the chatbot.[6] “It’s the giant tech companies who took Open AI, packaged it, monetized it, and pushed it out to all of us as fast as they could in order to get ahead,” Stan says.[7]

At one point in the episode, Stan asks ChatGPT to write a story that takes place in South Park, where a boy named Stan must convince his girlfriend that it’s okay that he lied about using AI to text her.[8] After sending the request to ChatGPT, the chatbot begins “thinking” and replies with a story within seconds.[9] “Once upon a time, there was a boy named Stan who lived in South Park. Stan loved his girlfriend very much, but lately, he hadn’t been truthful with her. One day, when Stan got to school, he was approached by his best friend,” the response read.[10]
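The episode was presumably written through the consumer chat interface, but for readers curious what the equivalent programmatic request looked like at the time, here is a minimal sketch using OpenAI’s then-current Python SDK (the v0.x interface). The model choice, placeholder key, and prompt wording are illustrative assumptions, not details disclosed by the show.

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder; a real API key is required

# Send Stan's request as a single user message and print the generated story.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Write a story set in South Park where a boy named Stan must "
            "convince his girlfriend that it's okay that he lied about "
            "using AI to text her."
        ),
    }],
)

print(response.choices[0].message.content)  # the story arrives in seconds
```

The near-instant turnaround this sketch produces is exactly the speed gap the rest of this post turns to.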

The ending credits show that the episode was written by both Trey Parker and ChatGPT.[11] While it is remarkable how advanced AI has become, people are now wondering if AI tools like ChatGPT will soon replace the need for human writers. OpenAI co-founder and president Greg Brockman thinks the chatbot could even be used to fix the last season of Game of Thrones.[12] “That is what entertainment will look like,” Brockman said at a SXSW panel. “Maybe people are still upset about the last season of Game of Thrones. Imagine if you could ask your A.I. to make a new ending that goes a different way and maybe even put yourself in there as a main character or something.”[13] Others also think ChatGPT should be used for television writing. For instance, Deadline used ChatGPT to create a pitch for a Mad Max reboot.[14] The chatbot responded with a detailed pitch outlining the premise of the show.[15] While the pitch needed some tweaking, it was said to be doable.[16]

Brockman thinks ChatGPT could help do the “drudge work” for writing but also add a more “interactive” entertainment experience.[17] Hollywood is now monitoring the potential impact of ChatGPT on the industry.[18] The Writers Guild of America West said they are “monitoring the development of ChatGPT and similar technologies in the event they require additional protections for writers.”[19] On the other hand, screenwriters interviewed by The Hollywood Reporter see ChatGPT as a potential tool to aid the writing process instead of a tool that will replace the work of writers.[20]

The issue is that what often takes writers weeks or months to formulate takes ChatGPT only 30 seconds.[21] Brockman said ChatGPT could take over the types of jobs where users “didn’t want human judgment there in the first place.”[22] Big Fish and Aladdin writer John August doesn’t think the AI chatbot will be replacing the kind of writing done in writers’ rooms anytime soon.[23] Still, he thinks we should start thinking about the best ways to use the tool.[24] “There certainly is no putting the genie back in the bottle. It’s going to be here, and we need to be thinking about how to use it in ways that advance art and don’t limit us.”[25]

 

 

 

 

 

[1] Anna Iovine, Tinder users are using ChatGPT to message matches, Mashable (Dec. 17, 2022), https://mashable.com/article/chatgpt-tinder-tiktok.

[2] Janus Rose, A Judge Just Used ChatGPT to Make a Court Decision, Vice (Feb. 3, 2023), https://www.vice.com/en/article/k7bdmv/judge-used-chatgpt-to-make-court-decision.

[3] See Natasha Lomas, ChatGPT shrugged, TechCrunch (Dec. 5, 2022) (quoting “ChatGPT is a new artificial intelligence (AI) tool that’s designed to help people communicate with computers in a more natural and intuitive way — using natural language processing (NLP) technology.”), https://techcrunch.com/2022/12/05/chatgpt-shrugged/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAADU59VjZUKBKujH7dTsnuAADMtjElPmTT1SQCENX5S19xIrGG7Nb4Y_u3oYDPvRKUVBhiRiYoCu4WDM7d8DQ8NnPd02PGcAUWvE8ojCXvVfGpARK5NXKe0F2epgIlzYZwW9V_I6bWPDTi5XWYPNseXl2vvZYP8DVZbrk8XWqyVAW.

[4] Stacy Liberatore, South Park’s latest episode was co-written by ChatGPT: ‘Deep Learning’ ends with a script generated by OpenAI’s chatbot, Daily Mail (Mar. 17, 2023), https://www.dailymail.co.uk/sciencetech/article-11873595/South-Parks-latest-episode-Deep-Learning-written-ChatGPT.html.

[5] Id.

[6] Id.

[7] Id.

[8] Id.

[9] Liberatore, supra note 4.

[10] Id.

[11] Id.

[12] J. Clara Chan, Using ChatGPT to Rewrite ‘Game of Thrones’? OpenAI Co-Founder Says “That Is What Entertainment Will Look Like”, The Hollywood Reporter (Mar. 10, 2023), https://www.hollywoodreporter.com/business/digital/chatgpt-game-of-thrones-openai-greg-brockman-1235348099/.

[13] Id.

[14] Melissa Murphy, ChatGPT Is Going To Start Writing Hollywood Movies?, Giant Freakin Robot (last visited Mar. 18, 2023), https://www.giantfreakinrobot.com/ent/chatgpt-writing-hollywood-movies.html.

[15] Id.

[16] Id.

[17] Chan, supra note 12.

[18] Id.

[19] Id.

[20] Id.

[21] Murphy, supra note 14.

[22] Chan, supra note 12.

[23] Katie Kilkenny & Winston Cho, Attack of the Chatbots: Screenwriters’ Friend or Foe?, The Hollywood Reporter (Jan. 12, 2023), https://www.hollywoodreporter.com/business/business-news/chatgpt-hollywood-screenwriters-film-tv-1235296724/.

[24] See id.

[25] Id.

Image Source: https://metro.co.uk/wp-content/uploads/2023/03/SEC_148556154-8b3a.jpg?quality=90&strip=all&zoom=1&resize=644%2C338

How Doctors Used Patients’ Dreams to Further Their Own

By Jessica Birdsong

 

 

 

 

Many of us have seen the Netflix documentary Our Father, which presents the disturbing tale of a physician, Dr. Donald Cline, who, during the 1970s and ’80s, performed inseminations on patients using his own sperm, without their knowledge or consent.[1] The full extent of his actions is unknown, but he fathered at least 94 biological children, and possibly many more.[2] The discovery of this deception has been devastating for the victims as they grapple with the loss of their identity and the revelation of having numerous half-siblings.[3] The mothers who were affected have also been left feeling violated and betrayed.[4]

Legal action was taken by some of the affected siblings, but they were met with disappointment. Despite Cline’s egregious actions, he was not charged with rape, battery with bodily waste, or even criminal deception.[5] Instead, he was charged only with obstruction of justice for being untruthful, resulting in a $500 fine and no jail time.[6] This lack of legal consequences stems from the fact that no law in Indiana or most other states specifically prohibits doctors from using their own sperm on their patients.[7]

Regrettably, Cline’s story is not unique. In a 2023 decision, a judge dismissed claims made by offspring who discovered that a Connecticut doctor had used his own sperm to impregnate their mothers without their knowledge.[8] After shocking results from an at-home DNA test, the plaintiffs discovered they were half-siblings.[9] They both brought claims of negligence, fraudulent concealment, lack of informed consent, and unfair trade practices, citing the mental anguish and physical injury they have suffered as a result of their discovery.[10]

Plaintiff Flaherty alleged that he sustained and continues to suffer mental anguish and injury to his emotional and psychological well-being as a result of the defendant’s conduct.[11] The court found that because Flaherty does not require any extraordinary care for his injury, this claim is precluded.[12] Further, plaintiff Suprynowicz alleged that, as a result of the defendant’s negligence, she suffers from a genetic condition that “limits her earning capacity and impairs her ability to earn a living.”[13] The court responded that because the plaintiff never had a wage-earning capacity that the doctor’s conduct took away, she could not claim compensation for its loss.[14]

Overall, the court found that the plaintiffs’ claims fell under the category of “wrongful life,” a cause of action that has been declined by the majority of courts in the country.[15] The court argued that the plaintiffs could not recover for harm resulting from the achievement of life, and also raised concerns about the difficulty of quantifying damages in cases involving the weighing of an impaired life against no life at all.[16]

Thankfully, there is some hope for change. In January 2023, a federal bill was introduced to establish that it is a criminal offense for medical professionals to knowingly misrepresent the nature or source of DNA used in any procedure that involves assisted reproductive technology.[17] The Protecting Families from Fertility Fraud Act proposes a new federal crime under the Sexual Assault chapter, which would provide greater clarity and legal protection to those affected by fertility fraud.[18]

 

 

 

 

 

 

[1] Lindsey L. Wallace, Netflix’s Our Father Tells The True Story of a Fertility Doctor Who Used His Own Sperm on Patients, Time (May 12, 2022, 5:54 PM), https://time.com/6176310/our-father-true-story-netflix/.

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Wallace, supra note 1.

[7] Id.

[8] Suprynowicz v. Tohan, No. X03-CV-21-6140245-S, 2023 WL 2134547, at *1 (Conn. Super. Ct. Feb. 17, 2023).

[9] Id.

[10] Id. at *2.

[11] Id. at *5.

[12] Id.

[13] Suprynowicz, 2023 WL 2134547, at *5.

[14] Id.

[15] Id. at *4.

[16] Id.

[17] Press Release, U.S. Congressman Joseph Morelle, Congressman Joe Morelle Acts To Combat Fertility Fraud (Feb. 9, 2023), https://morelle.house.gov/media/press-releases/congressman-joe-morelle-acts-combat-fertility-fraud.

[18] Id.

 

 

 

Image Source: https://www.theverge.com/c/23157354/doctor-donor-fertility-fraud-ancestry-23andme-dna-test
