
Artificial Intelligence, Real Discrimination

By Jessiah Hulle[1]

_____ 

“Success in creating AI could be the biggest event in the history of our civilization. . . . [But a]longside the benefits, AI will also bring dangers.” – Stephen Hawking (2016)[2]

Two years ago, various news outlets reported that Amazon uses artificial intelligence (“AI”) “not only to manage workers in its warehouses but [also] to oversee contract drivers, independent delivery companies and even the performance of its office workers.” The AI is a cold but efficient human resources manager, comparing employees against strict metrics and terminating all underperformers. “People familiar with the strategy say . . . Jeff Bezos believes machines make decisions more quickly and accurately than people, reducing costs and giving Amazon a competitive advantage.”[3]

This practice is no longer unusual. In fact, AI-assisted human resources (“HR”) work is now commonplace. Recently, over 70% of human resources leaders surveyed by Eightfold AI confirmed that they use AI for HR functions such as recruiting, hiring, and performance management. In that same survey, over 90% of HR leaders stated an intent to increase AI use, with 41% indicating a desire to use AI for recruitment and hiring in the future.[4] Already, “three in four organizations boosted their purchase of talent acquisition technology” in 2022 alone and “70% plan to continue investing” in 2023, even in the event of a recession.[5] Research by IDC Future Work predicts that by 2024, “80% of the global 2000 organizations will use AI-enabled ‘managers’ to hire, fire, and train employees.”[6]

Skulking in the shadows of this enthusiastic adoption of AI for HR work, however, is a problem: employment discrimination. Like humans, AI can discriminate on the basis of protected classes like race, sex, and national origin. This article briefly addresses this problem, summarizes current local, state, and federal laws enacted or proposed to curtail it, and proposes two solutions for modern employers itching to implement AI-assisted employee management tools but dreading employment litigation.

AI-assisted discrimination

“Machine learning is like money laundering for bias.” – Maciej Cegłowski[7]

Employers can use AI to assist with a host of tasks. Some niche AI-assisted tasks, such as moderating internet content[8] or providing health care services,[9] implicate legal issues and invite civil litigation. Others do not. But the AI-assisted task currently receiving heightened legal scrutiny from the government is employment decision-making, including hiring, assigning, promoting, and firing. The reason for this scrutiny is straightforward: AI can, and sometimes does, discriminate against protected classes.

How does this happen? Put simply, the problem of AI discrimination boils down to a single maxim: garbage in, garbage out.[10] An AI that “learns” how to think from biased information (“garbage in”) will invariably produce biased results (“garbage out”). A funny example of this is Tay, a rudimentary AI chatbot designed by Microsoft that turned into a Nazi after only a day of “learning” on Twitter.[11] A serious example is predictive policing software, which can unfairly target racial minorities after “learning” about crime rates from historical over-policing patterns in minority neighborhoods.[12]

In the field of human resources, the “garbage in” fed to an AI can range from historical data tainted by past discrimination (e.g., segregation-era Whites-only hiring practices) to statistics warped by employee self-selection (e.g., male candidates self-selecting into engineering). The resulting “garbage out” can constitute employment discrimination under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Americans with Disabilities Act (“ADA”), the Age Discrimination in Employment Act (“ADEA”), and other civil rights statutes.[13]

Amazon is a prime example (no pun intended). In 2018, the company was forced to discontinue an AI program that filtered job applicant resumes because it developed an anti-woman bias. “Employees had programmed the tool in 2014 using resumes submitted to Amazon over a 10-year period, the majority of which came from male candidates. Based on that information, the tool assumed male candidates were preferable and downgraded resumes from women.”[14]
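To make the mechanism concrete, consider the deliberately simplified sketch below. It is not Amazon’s actual system; the resumes, keywords, and hiring labels are invented solely for illustration. A naive screener that scores each keyword by how often it appeared on previously hired candidates’ resumes, when “trained” on male-skewed hiring decisions, will downgrade an otherwise identical applicant whose resume mentions a women’s organization:

```python
# Toy illustration of "garbage in, garbage out" in resume screening.
# Hypothetical data only; this is not any company's actual system.
from collections import defaultdict

# Historical hiring decisions made by a (biased) human process.
historical = [
    ({"python", "robotics club"}, True),
    ({"python", "men's soccer"}, True),
    ({"java", "robotics club"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's coding society"}, False),
]

# "Learn" a score per keyword: the share of past resumes containing it that
# led to a hire. Biased labels produce biased keyword scores.
counts = defaultdict(lambda: [0, 0])  # keyword -> [hires, appearances]
for keywords, hired in historical:
    for kw in keywords:
        counts[kw][0] += int(hired)
        counts[kw][1] += 1
scores = {kw: hires / total for kw, (hires, total) in counts.items()}

def screen(resume):
    """Average learned score of a resume's keywords (0.5 if unseen)."""
    return sum(scores.get(kw, 0.5) for kw in resume) / len(resume)

# Two hypothetical applicants who differ only in one extracurricular keyword.
applicant_a = {"python", "robotics club"}
applicant_b = {"python", "women's chess club"}
print(f"Applicant A (robotics club):      {screen(applicant_a):.2f}")  # higher
print(f"Applicant B (women's chess club): {screen(applicant_b):.2f}")  # lower
```

The toy model never sees gender directly; the bias enters entirely through the historical labels, which is one reason audits tend to examine outcomes rather than source code alone.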

 

Local and state regulation

To curb AI-assisted discrimination, at least one locality and numerous states have enacted or proposed laws regulating bias in AI employment decision-making.

New York City

New York City is the clear leader on this front. In 2021, the city enacted an ordinance that requires employers using AI for job application screening to notify job applicants about the AI and conduct an annual independent bias audit of the AI if it “substantially assist[s] or replace[s] discretionary decision making.”[15] The city began enforcement of the ordinance for hiring and promotion decisions in July 2023. “The law [only] applies to companies with workers in New York City, but labor experts expect it to influence practices nationally.”[16]

Illinois and Maryland

On the state level, Illinois enacted the Artificial Intelligence Video Interview Act in 2019 to combat AI discrimination in screening initial job applicant interview videos.[17] The statute “requires employers that use AI-enabled analytics in interview videos” to notify job applicants about the AI, explain how it works, obtain the applicant’s consent, and destroy any such interview video within thirty days of the applicant’s request. “If the employer relies solely on AI to make a threshold determination before the candidate proceeds to an in-person interview, that employer must track the race and ethnicity of the applicants who do not proceed to an in-person interview as well as those applicants ultimately hired.”[18]

Maryland enacted a similar statute in 2020, requiring employers to obtain a job applicant’s consent before using AI-assisted facial recognition technology during interviews.[19]

It appears that the impetus behind the Illinois and Maryland laws is a belief that AI-assisted facial recognition and analysis programs discriminate against less-privileged job applicants because such programs are trained on data from past, privileged applicants. As argued by Ivan Manokha, a lecturer at the University of Oxford, companies that use these programs “are likely to hire the same types of people that they have always hired.” A possible result is “inadvertently exclud[ing] people from diverse backgrounds.”[20]

Other states

Outside New York City, Illinois, and Maryland, numerous states have also proposed laws or empaneled special committees to address AI-assisted employment discrimination. For instance, the District of Columbia,[21] California,[22] and Massachusetts[23] have all introduced bills or draft regulations in the last two years to address this issue. And various states, including Alabama, Missouri, New York, North Carolina, and Vermont, have proposed or established committees, taskforces, or commissions to review and regulate AI issues.[24]

Virginia

So far, Virginia has neither enacted nor proposed a law to specifically regulate AI-assisted employment discrimination. In January 2020, Delegate Lashrecse D. Aird introduced a Joint Resolution to “convene a working group . . . to study the proliferation and implementation of facial recognition and artificial intelligence” because “the accuracy of facial recognition is variable across gender and race,”[25] but it was tabled by a House of Delegates subcommittee.[26]

Nevertheless, AI programs can still run afoul of existing antidiscrimination laws in the state. Virginia antidiscrimination law — which protects traits ranging from racial and ethnic identity[27] to lactation,[28] protective hair braids,[29] and (for public employees) smoking[30] — presents a veritable minefield of legal issues for an AI program to traverse in screening job applicants and employees. For instance,

Virginia . . . recently passed a law that protects employees who use cannabis oil for medical purposes. This law distinguishes “cannabis oil” from other types of medicinal marijuana and has specific definitions of what is and is not protected. An algorithm that fails to take these nuances into consideration might inadvertently discriminate against protected cannabis users.[31]

Federal guidance

The federal government has also issued guidance condemning AI-assisted employment discrimination.

EEOC

Although the Equal Employment Opportunity Commission (“EEOC”) has yet to issue a formal rule on AI-assisted employment discrimination, it has clearly condemned the practice through various informal guidance documents, a draft enforcement plan, and at least one civil lawsuit.

First, in May 2022 the EEOC issued a question-and-answer-style informal guidance document explaining that an employer’s use of an AI program that “relies on algorithmic decision-making may violate existing requirements under [the ADA].”[32]

The EEOC explained that, most commonly, employers violate the ADA when they fail to provide a reasonable accommodation “necessary for a job applicant or employee to be rated fairly and accurately by [an AI program]” or rely on “an [AI] that intentionally or unintentionally ‘screens out’ an individual with a disability.” A “screen out” occurs when a “disability prevents a job applicant or employee from meeting — or lowers their performance on — a selection criterion, and the applicant or employee loses a job opportunity as a result.”

The EEOC provided multiple examples of AI-assisted screen-outs that possibly violate the ADA. In one example, an AI chatbot designed to engage in text communications with a job applicant may violate the ADA by screening out applicants who indicate “significant gaps in their employment history” because of a disability. In another example, an AI-assisted video interviewing program that analyzes job applicant speech patterns may violate the ADA by screening out applicants who have speech impediments. In a third example, an AI-analyzed pre-employment personality test “designed to look for candidates . . . similar to the employer’s most successful employees” may violate the ADA by screening out job applicants with PTSD who struggle to ignore distractions but can thrive in a workplace with “reasonable accommodations such as a quiet workstation or . . . noise-cancelling headphones.” All of these examples follow the same theme: AI programs can reject job applicants based on external data without considering reasonable accommodations.

The EEOC warned that employers remain liable under the ADA even if an AI program is administered by a vendor.[33]

Second, in May 2022 the EEOC sued an international tutoring company for ADEA discrimination resulting from an AI-assisted automated job applicant screening program. According to the complaint, the company’s online tutoring application solicited birthdates of job applicants but automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. The company filed an amended answer denying the allegations in March 2023. The case is currently pending.[34]

Third, in January 2023 the EEOC announced in its Draft Strategic Enforcement Plan for fiscal years 2023 to 2027 that it was committed to “address[ing] systemic discrimination in employment.” The plan specifically announced the following subject matter priority for the agency:

The EEOC will focus on recruitment and hiring practices and policies that discriminate against racial, ethnic, and religious groups, older workers, women, pregnant workers and those with pregnancy-related medical conditions, LGBTQI+ individuals, and people with disabilities. These include: the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.[35]

In accordance with this strategic plan, the EEOC “launched an agency-wide initiative to ensure that the use of software, including artificial intelligence (AI), machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.”[36] The EEOC also held a four-hour public hearing on “Navigating Employment Discrimination in AI and Automated Systems,” which is currently hosted on its website[37] and YouTube.[38]

Fourth, the EEOC joined a Joint Statement with the Consumer Financial Protection Bureau, Department of Justice Civil Rights Division, and Federal Trade Commission promising to “monitor the development and use of automated systems,” “promote responsible innovation” in the field of AI, and “vigorously . . . protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”[39]

Finally, in April 2023 the EEOC published a second technical guidance document explaining that AI-assisted employment decision-making programs can violate Title VII.

The EEOC noted that modern employers use a variety of algorithmic and AI-assisted programs for human resources work, including scanning resumes, prioritizing job applications based on keywords, monitoring employee productivity, screening job applicants with chatbots, evaluating job applicant facial expressions and speech patterns with video interview programs, and testing job applicants on personality, cognitive ability, and perceived “cultural fit” with games and tests. However, under this new EEOC guidance document, all algorithmic and AI-assisted programs “used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees” fall within the ambit of the agency’s Guidelines on Employee Selection Procedures under Title VII in 29 C.F.R. Part 1607. In other words, if an AI program discriminates against a job applicant or employee in violation of Title VII, the EEOC evaluates the violation the same as if it were committed by a person.[40]
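For a sense of what this kind of adverse-impact analysis involves, the Part 1607 guidelines include the well-known “four-fifths” rule of thumb: a selection rate for any protected group that is less than four-fifths of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The sketch below uses invented applicant counts (nothing here is drawn from the EEOC guidance itself) to show the arithmetic:

```python
# Hypothetical illustration of the "four-fifths" rule of thumb associated with
# the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607).
# The applicant and selection counts below are invented for demonstration only.

groups = {
    "Group A": {"applicants": 200, "selected": 120},
    "Group B": {"applicants": 150, "selected": 54},
}

# Selection rate: share of each group's applicants who pass the automated screen.
rates = {name: g["selected"] / g["applicants"] for name, g in groups.items()}
highest_rate = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "possible adverse impact" if impact_ratio < 0.8 else "within the rule of thumb"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```

A ratio below 0.8 does not by itself establish a violation, but it is the kind of disparity that adverse-impact analysis is designed to surface.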

Again, the EEOC warned that employers remain liable under Title VII even if a discriminatory AI program is administered by a vendor.

It is expected that, in accordance with its Draft Strategic Enforcement Plan and the Joint Statement, the EEOC will issue further guidance on this issue in the next few years.

The White House

In 2022 the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. The Blueprint reaffirmed that “[a]lgorithms used in hiring . . . decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination” and suggested five principles to “guide the design, use, and deployment of automated systems to protect the American public.” The second principle proposed the following right: “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.” According to the White House,

This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.[41]

Although this Blueprint is currently all bark, it portends the future bite of enhanced enforcement by the Biden Administration against AI-assisted discrimination.

Congress

So far, Congress has proposed bills to regulate AI generally but not to regulate AI-assisted employment discrimination specifically.[42] However, this does not mean that Congress is unaware of the issue. For instance, in March 2023, Alexandra Reeve Givens, the President and CEO of the Center for Democracy & Technology, testified before the U.S. Senate Committee on Homeland Security and Government Affairs that “if the data used to train the AI system is not representative of wider society or reflects historical patterns of discrimination, it can reinforce existing bias and lack of representation in the workplace.”[43]

Takeaways

In sum, as more employers use unregulated AI to assist with human resources tasks, the potential for inadvertent, disparate-impact, and even intentional discrimination increases. New York City, Illinois, and Maryland have already enacted laws directly regulating AI-assisted recruiting. Other states have proposed similar, or even stricter, laws. Accordingly, employers must tread carefully in this area of AI usage.

Perhaps the best takeaway for employers is a quote ostensibly taken from a 1979 presentation at IBM: “A computer can never be held accountable[.] Therefore a computer must never make a management decision.”[44] Good HR staff know antidiscrimination laws inside and out. In this current wild west of AI regulation, employers should rely on well-versed HR staff to review AI work, just like employers rely on employees to review intern work. Moreover, employers should require a human to make final hiring, assigning, promoting, and firing decisions. In fact, requiring a human decision is a loophole in the New York City ordinance, which only requires a bias audit for AI programs that “substantially assist or replace discretionary decision making.”[45] Additionally, employers should stay abreast of new regulatory guidance on AI from the EEOC as it is released.

The second-best takeaway for employers is the old joke: “The early bird gets the worm, but the second mouse gets the cheese.”[46] Many employers want to be an early bird in implementing new AI programs to boost HR functions. This desire is understandable. It seems like everyone else is already on board the AI train. In 2017, the Harvard Business Review published an article claiming that “[t]he most important general-purpose technology of our era is artificial intelligence.”[47] Now, in 2023, close to 75% of surveyed HR leaders report using AI for human resources tasks. And that percentage only increases as companies scramble to get the worm. As Jensen Huang, the co-founder and CEO of trillion-dollar-valued Nvidia, recently predicted in a speech, “[a]gile companies will take advantage of AI and boost their position. Companies less so will perish.”[48] However, employers — especially small businesses — should also consider the benefits of being the second mouse. Everyone, from Fortune 100 corporations to local, state, and federal governments, is currently testing the scope of liability for AI-assisted discrimination.[49] This beta testing phase exposes employers to high potential risk and cost.[50] Therefore, although not the “coolest” approach, it behooves many employers to simply wait until this issue is either litigated and regulated or solved by the invention of a relatively bias-proof human resources AI.

 

 

 

 

[1] Jessiah Hulle is a litigation and investigations associate at Gentry Locke in Roanoke, Virginia. He graduated from the University of Valley Forge in 2017 and Washington and Lee University School of Law in 2020.

[2] Dom Galeon, Hawking: Creating AI Could Be the Biggest Event in the History of Our Civilization, Futurism (Oct. 10, 2016), https://archive.is/M7DDD.

[3] Spencer Soper, Fired by Bot at Amazon: ‘It’s You Against the Machine’, Yahoo Finance (June 28, 2021), https://archive.is/Tpc5Q.

[4] Gem Siocon, Ways AI Is Changing HR Departments, Business News Daily (June 22, 2023), https://archive.is/Nf7rg.

[5] Lucas Mearian, Legislation to Rein in AI’s Use in Hiring Grows, Computerworld (Apr. 1, 2023), https://archive.is/Lx9xD.

[6] Lucas Mearian, The Rise of Digital Bosses: They Can Hire You – And Fire You, Computerworld (Jan. 6, 2022), https://archive.is/NuZLo.

[7] Maciej Cegłowski, The Moral Economy of Tech, Idle Words (June 26, 2016), https://archive.is/t6q6m (quoting remarks given at the SASE Conference in Berkeley).

[8] See, e.g., Force v. Facebook, Inc., 934 F.3d 53, 60 (2d Cir. 2019) (rejecting claim that Facebook’s AI-enhanced algorithm negligently propagated terrorism).

[9] See, e.g., Sharona Hoffman & Andy Podgurski, Artificial Intelligence and Discrimination in Health Care, 19 Yale J. Health Pol’y L. & Ethics 1 (2020) (arguing that AI-assisted algorithmic discrimination, especially on the basis of race, in health care should be actionable under Title VI).

[10] R. Stuart Geiger et al., “Garbage In, Garbage Out” Revisited: What Do Machine Learning Application Papers Report about Human-Labeled Training Data?, 2:3 Quantitative Sci. Stud. 795 (Nov. 5, 2021), https://archive.is/9D477 (quoting this maxim as a “classic saying in computing about how problematic input data or instructions will produce problematic outputs”).

[11] Amy Kraft, Microsoft Shuts Down AI Chatbot after It Turned into a Nazi, CBS News (Mar. 25, 2016), https://archive.is/xScSA (reporting that Tay went from stating “humans are super cool” on March 23, 2016, to “Hitler was right I hate the jews” on March 24, 2016).

[12] Will Douglas Heaven, Predictive Policing Algorithms Are Racist. They Need to Be Dismantled., MIT Tech. Rev. (July 17, 2020), https://archive.is/clURU.

[13] See generally Keith E. Sonderling et al., The Promise and the Peril: Artificial Intelligence and Employment Discrimination, 77 U. Miami L. Rev. 1 (2022).

[14] Guadalupe Gonzalez, How Amazon Accidentally Invented a Sexist Hiring Algorithm, Inc.com (Oct. 10, 2018), https://archive.is/INQrl.

[15] N.Y.C. Local Law 144; N.Y.C.R. §§ 5-300, 5-301, 5-302, 5-303, 5-304 (2021), https://archive.is/WmrAm.

[16] Steve Lohr, A Hiring Law Blazes a Path for A.I. Regulation, N.Y. Times (May 25, 2023), https://archive.is/mGYGU.

[17] 820 I.L.C.S. 42/1 et seq.

[18] Paul Daugherity et al., States Scramble to Regulate AI-Based Hiring Tools, Bloomberg Law (Apr. 10, 2023), https://archive.is/a6gZt.

[19] Md. Labor and Emp. Code § 3-717 (2020).

[20] Ivan Manokha, Facial Analysis AI Is Being Used in Job Interviews – It Will Probably Reinforce Inequality, The Conversation (Oct. 7, 2019), https://archive.is/9U2Jn.

[21] J. Edward Moreno, New York City AI Bias Law Charts New Territory for Employers, Bloomberg Law (Aug. 29, 2022), https://archive.is/QLQVD (“District of Columbia Attorney General Karl Racine introduced a bill [in 2021] that would mirror New York City’s law and would put the onus on employers to ensure AI tools they use aren’t discriminating against certain candidates.”).

[22] Id. (“The California Civil Rights Department announced [in early 2022] that it’s drafting regulations to clarify that the use of automated decision-making tools is subject to employment discrimination laws.”).

[23] Hiawatha Bray, Mass. Lawmakers Scramble to Regulate AI Amid Rising Concerns, Bos. Globe (May 18, 2023), https://archive.is/uCAHh (“[Massachusetts state senator Barry] Finegold has filed a bill that would set performance standards for powerful ‘generative’ AI systems . . . . [C]ompanies would need to make sure that AI systems aren’t used to discrimination against individuals or groups based on race, sex, gender, or other characteristics protected under antidiscrimination law.”).

[24] Report: Legislation Related to Artificial Intelligence, Nat’l Conf. of State Legs. (Aug. 26, 2022), https://archive.is/76Gjf (collecting proposed and enacted laws pre-August 2022).

[25] Va. H.J.R. No. 59 (2020 Session).

[26] HJ 59 Facial recognition and artificial intelligence technology; Joint Com. on Science & Tech to study., Va. Legis. Info. Sys. (Jan. 29, 2020), https://archive.is/fpjn6.

[27] Va. Code § 2.2-3900(B)(2).

[28] Va. Code §§ 2.2-3901, 2.2-3902.

[29] Va. Code § 2.2-3901(D).

[30] Va. Code § 2.2-2902.

[31] Amber M. Rogers & Michael Reed, Discrimination in the Age of Artificial Intelligence, ABA (Dec. 7, 2021), https://archive.is/ujiCa.

[32] Technical Guidance Document: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, EEOC (May 15, 2022), https://archive.is/OnMM2.

[33] Id.

[34] EEOC v. iTutorGroup, Inc. et al., No. 1:22-CV-2565 (E.D.N.Y. May 5, 2022).

[35] Draft Strategic Enforcement Plan 2023-2027, 88 Fed. Reg. 1379, 1381 (Jan. 1, 2023).

[36] Artificial Intelligence and Algorithmic Fairness Initiative, EEOC (2023), https://archive.is/SBeWt (promising to “issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions”).

[37] Id.

[38] Navigating Employment Discrimination in AI and Automated Systems, YouTube (Jan. 31, 2023), https://www.youtube.com/watch?v=rfMRLestj6s.

[39] Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, EEOC (2023), https://archive.is/9AybV.

[40] Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, EEOC (Apr. 2023), https://archive.is/u1s5p.

[41] Blueprint for an AI Bill of Rights, The White House (Oct. 2022), https://archive.is/OBbK8; Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, The White House (Oct. 2022), https://archive.is/17aZb.

[42] See, e.g., U.S. Congress to Consider Two New Bills on Artificial Intelligence, Reuters (June 8, 2023), https://archive.is/7nnwl (reporting that one bill “would require the U.S. government to be transparent when using AI to interact with people” and the other “would establish an office to determine if the United States is remaining competitive in the latest technologies”).

[43] Testimony of Alexandra Reeve Givens, Artificial Intelligence: Risks and Opportunities, U.S. Senate Comm. on Homeland Security and Gov’t Affs. (Mar. 8, 2023), https://archive.is/LH3gv.

[44] See, e.g., An IBM slide from 1979, CSAIL – MIT, Facebook (Dec. 19, 2022), https://archive.is/MQF2n (sharing viral picture of quote); Gizem Karaali, Artificial Intelligence, Basic Skills, and Quantitative Literacy, 16:1 Numeracy 1, 4-5, 5 n.13 (2023) (attributing the quote to a 1979 IBM presentation).

[45] Steve Lohr, A Hiring Law Blazes a Path for A.I. Regulation, N.Y. Times (May 25, 2023), https://archive.is/mGYGU (quoting the president of the Center for Democracy & Technology criticizing this loophole as “overly sympathetic to business interests”).

[46] See, e.g., Wesley Wildman, Jokes and Stories: Wisdom Sayings, B.U.: Wesley Wildman’s Weird Wild World Wide Web Site (Jan. 1, 2000), https://archive.is/5wWhI.

[47] Erik Brynjolfsson & Andrew McAfee, The Business of Artificial Intelligence, Harv. Bus. Rev. (July 18, 2017), https://archive.is/RSvSb.

[48] Vlad Savov & Debby Wu, Nvidia CEO Says Those Without AI Expertise Will Be Left Behind, Bloomberg (May 28, 2023), https://archive.is/TLsR8.

[49] Cf., e.g., Ryan E. Long, Artificial Intelligence Liability: The Rules Are Changing, LSE Bus. Rev. Blog (Aug. 16, 2021), https://archive.is/mdcEM (discussing corporate civil liability for AI work).

[50] See generally Keith E. Sonderling et al., The Promise and the Peril: Artificial Intelligence and Employment Discrimination, 77 U. Miami L. Rev. 1 (2022).

 

 

 

Image Source: https://imgs.xkcd.com/comics/ai_hiring_algorithm.png

White Paper: Office of the User Advocate

By Kevin Frazier[1]

 

 

PURPOSE OF THE OFFICE OF THE USER ADVOCATE (UA)

THE UA AND THE RIGHT TO PETITION

UA POWERS

    Accountability

    Advocacy

UA PLACEMENT WITHIN META

TENURE AND SELECTION OF THE UA TRIAD

ORGANIZATION OF THE UA

CONCLUSION

 

 

PURPOSE OF THE OFFICE OF THE USER ADVOCATE (UA)

The content moderation system developed by Meta provides limited and, as set forth below, inadequate means of participation by users. The current system confines user participation to a single user challenging a content decision on a single post.[2] This limited form of participation diverges from Meta’s stated goals as well as human rights norms and laws.[3]

The creation of the Office of the User Advocate (UA), as proposed in my Richmond Journal of Law and Technology article[4] and detailed further below, will close the gap between Meta’s goals and the rights currently afforded to Meta users via its content moderation system. Housed within Meta, the UA would guarantee that users had a more meaningful role in every part of content moderation—from the development and amendment of Community Standards to the application and adjudication of those standards.

The UA will work on behalf of users to hold Meta accountable, to advocate for their interests, and to represent them in formal proceedings involving Meta, the Oversight Board (OB), and other stakeholders. Meta has rightfully celebrated its prior efforts to involve users in its content moderation system[5]—the UA is simply a deliberate and permanent means of continuing those efforts. As Meta expands its user base, refines its content moderation system, and creates new platforms, the UA will become even more valuable to Meta’s efforts to comply with human rights norms and law.

 

THE UA AND THE RIGHT TO PETITION

Meta has embraced a human rights framework for structuring and evaluating its content moderation decisions.[6] Fulfillment of Meta’s human rights responsibilities requires the creation of a means for users to exercise their right to petition, without which freedom of expression cannot fully be realized.[7]

The right to petition constitutes a fundamental right under human rights law.[8] An open procedure for individuals or groups alleging a violation of their rights or the rights of those they represent is an “essential, even irreducible, means of giving effect” to the protections set forth under human rights law.[9] In short, for Meta to protect the rights of its users, it must provide an “opportunity to demand that the right[s] be protected.”[10] The UA would provide such an opportunity.

By creating this opportunity, Meta can set a meaningful precedent for other platforms. Though some platforms tout their role as public spheres, their rule-making processes allow a staggeringly small role for the public. This discrepancy will become all the more apparent as platforms continue to play a larger role in communal discourse.

 

UA POWERS

Though Meta currently affords users the opportunity to challenge a decision they allege was improper (whether a decision to remove content or to leave that content up), this mechanism falls short of the sort of opportunity necessary to enforce the human rights norms and laws at the heart of Meta’s Community Standards.[11] This mechanism prohibits challenges by groups of affected users, does not permit contestation of a specific Community Standard (as opposed to a single post), and does not allow for proactive amendment of those standards in anticipation of crises and other events likely to increase the odds of violations.

Given the size of Meta’s user base, it is impossible for every challenge by a user or group of users to go through the entire content appeals process.[12] Meta has responded to this reality by attempting to prioritize cases for review by the OB based on the severity and virality of a post and the likelihood of a post violating Meta’s Community Standards.[13] Though it may do so unintentionally, and perhaps in a way that mirrors the choices users themselves would make, Meta has thereby usurped the ability of users to signal their own priorities with respect to the enforcement and content of its Community Standards.

The UA will ensure users have a meaningful right to petition by overseeing mechanisms for users to identify Community Standards in need of reform, to request certain classes of cases receive OB review, and to ensure their values and preferences are represented in formal proceedings involving Meta, the OB, and other stakeholders.

The following is a list of potential UA powers. The grant of even a few of these powers to a UA would serve as a meaningful improvement on the status quo. Stakeholders are encouraged to debate these powers and offer suggestions of their own.

Accountability

The UA can further the rights of users by performing the following accountability actions:

  • Monitoring Meta’s adoption of OB policy recommendations.
    • For example, the UA could solicit user feedback on which recommendations they want Meta to prioritize.
  • Auditing, evaluating, and informing the metrics collected by Meta and the OB and reported in their respective updates.
    • For example, the UA could advocate for the reports to include how Meta applies its case selection criteria to the cases it recommends to the OB, to better understand how Meta assesses severity, virality, and likelihood of violation.
  • Attending Meta’s policy meetings and other engagements with external stakeholders.
  • Participating in the evaluation and selection of Meta governance team members and/or OB members.

Advocacy

The UA can further the rights of users by advocating on their behalf and taking the following actions:

  • Identifying user concerns by:
    • conducting an annual survey of users regarding the content and application of Facebook’s Community Standards; and,
    • empaneling a citizen assembly (hereinafter, the “Assembly of Meta Users” or “AMU”) that is representative of Meta users to stand “on-call” for a two-year term during which they will respond to requests for input on everything from recent OB PAOs and case decisions to the candidates for Meta and OB positions (as described further below).
  • Sharing and advancing user concerns by:
    • serving as their representative in:
      • the OB adjudication process:
        • for example, one could imagine a UA Attorney General tasked with writing briefs for consideration by the OB.
      • Meta Community Standards review sessions.
    • consolidating user concerns into a sort of “class action” case for review by the OB;
      • The UA could operate a platform akin to Change.org where users could publish petitions for certain types of cases and specific Community Standards (as well as allowances to those standards) for consideration by other users. If a sufficient number and diversity of users backed a petition, the UA could have the authority to demand review by Meta and the OB.
    • regularly developing case selection criteria for Meta’s adoption based on the concerns of users; and,
    • publishing a response to every OB decision that expresses how the UA thinks the decision impacts users’ rights.
  • Empowering users to share their concerns by:
    • empaneling juries of the user’s peers when a post is undergoing OB adjudication; and,
      • these jurors would provide the OB with a better understanding of the context in which a post was made. For instance, if a candidate’s post was challenged, then a jury of users from that jurisdiction could review the facts before the OB and answer clarifying questions issued by the OB.
    • overseeing the election and participation of user representative(s) on the OB.
      • users should have at least one permanent member on the OB. This representative could work closely with the UA to make sure they understand and represent the interests of users in all content disputes.
    • [other actions may achieve this goal]

UA PLACEMENT WITHIN META

The UA should be formally within Meta but operate in a highly independent fashion. By residing within the Meta organization, the UA can better fulfill its mission due to several factors:

  • Increased understanding of the technical limitations of Meta’s platforms
  • More opportunities to connect with Meta employees and learn about their priorities and plans
  • Greater access to expertise within the Meta community regarding how best to consult a global user base
  • More reliable funding by virtue of being just another part of the company.

This placement, of course, would raise a number of valid concerns. Chief among those concerns may be the independence of the UA. The selection process proposed below should alleviate such concerns.

TENURE AND SELECTION OF THE UA TRIAD

Given Meta’s global user base and the representative nature of the UA, no individual could alone steer the UA. Instead, a collection of three individuals—each serving staggered, five-year terms—should form the UA Triad. Each member of the Triad would be a UA Director—tasked with overseeing the office’s accountability and advocacy functions.

Selection of Directors should involve the OB, Meta, and users to ensure each major stakeholder is invested in the success of the UA. The OB should nominate candidates and Meta should select the Director. The Assembly of Meta Users (as defined above) should have the authority to veto Meta’s selection. If this latter mechanism gives rise to concerns that the AMU would simply veto each of Meta’s finalists, then the AMU could be assigned a fixed number of vetoes.

ORGANIZATION OF THE UA

The UA Triad would oversee a division of Meta akin to a foreign service department. Directors would select Regional Directors (RDs) and assign each RD to one of Meta’s three self-identified main regions:[14] Europe, Middle East, Africa (EMEA); Asia and the Pacific (APAC); and North America. These three RDs would then assemble teams of ambassadors to build relationships with users in the countries of their respective regions. RDs would also create analyst teams tasked with researching current events and crises that may warrant action by the RD or the Triad.

CONCLUSION

Meta’s adherence to human rights norms and laws necessitates the protection of users’ right to petition. Users currently lack that right. By creating the Office of the User Advocate, Meta can ensure that users have a meaningful opportunity to challenge Meta’s Community Standards, to participate in OB decisions, and to shape a platform that reflects their values and concerns.

This White Paper should start, rather than conclude, a conversation around the right to petition among social media users and the need for something akin to the UA proposed above. If you are interested in this topic, please let me know—I am willing and eager to talk further.

 

 

 

 

 

[1] Kevin Frazier is an Assistant Professor at Crump College of Law at St. Thomas University and a Summer Research Fellow at the Legal Priorities Project. Frazier earned a Master of Public Policy from the Harvard Kennedy School and a JD from the UC Berkeley School of Law. Send feedback to kfraz@berkeley.edu.

[2] See How the Meta appeals process works, Meta (accessed May 18, 2023), https://transparency.fb.com/policies/improving/appealed-content-metric/

[3] See Facebook Community Standards, Meta (accessed May 18, 2023), https://transparency.fb.com/policies/community-standards/; infra note 7 and accompanying text.

[4] Kevin Frazier, Why Meta Users Need a Public Advocate: A Modest Means to Address the Shortcomings of the Oversight Board, 28 Rich. J.L. & Tech. 596 (2021), https://jolt.richmond.edu/files/2022/04/Frazier-Final.pdf

[5] See, e.g., Facebook Community Standards, Meta (accessed May 18, 2023), https://transparency.fb.com/policies/community-standards/ (discussing users as key stakeholders in the development of Facebook’s Community Standards).

[6] See Meta Q1 2023 Quarterly Update on the Oversight Board, Meta at 8 (May 17, 2023) (Stressing the value provided by the Oversight Board’s “crucial overlay of global human rights frameworks and diverse perspectives to [Meta’s] most significant and difficult decisions.”)

[7] See Lima Principles, Organization of American States (Nov. 16, 2000) (identifying the right to petition as essential to the protection of other rights); DRL Notice of Funding Opportunity, U.S. State Department (Feb. 2, 2023) (listing the right to petition as a part of freedom of expression); see also Declaration of Principles on Freedom of Expression, Organization of American States (n.a.).

[8] See, e.g., Schonberger v. European Parliament, C.J.E.U. Case 261 at Paragraph 13 (2014).

[9] See, e.g., Michael J. Dennis and David P. Stewart, Justiciability of Economic, Social, and Cultural Rights: Should There Be an International Complaints Mechanism to Adjudicate the Rights to Food, Water, Housing, and Health?, 98 A.J.I.L 462, 467-68 (2004).

[10] See Virginia Leary, Justiciability and Beyond: Complaint Procedures and the Right to Health, Rev. Int’L Comm’n Jurists at 105, 106 (Dec. 1995).

[11] Facebook and Instagram have Community Standards and Community Guidelines, respectively. This White Paper refers to these standards jointly as Meta’s Community Standards in the interest of brevity.

[12] See Zoe Kleinman, Meta board hears over a million appeals over removed posts, BBC (June 22, 2022), https://www.bbc.com/news/technology-61893903

[13] How Meta Prioritizes Content for Review, Meta (Jan. 26, 2022), https://transparency.fb.com/policies/improving/prioritizing-content-review/

[14] Meta Q1 2023 Quarterly Update on the Oversight Board, Meta at 18.

 

 

 

Image Source: https://images.app.goo.gl/1wdKVgufDJcAvrpe9

Navigating Legal Factors for U.S. Companies Entering the E-commerce Market in Africa

By Yanrong Zeng

 

 

 

As Africa’s online banking and shopping sectors have gained popularity, e-commerce has become a crucial aspect of business operations in the region, attracting foreign investment.[1] To successfully export goods to Africa, U.S. companies must have a deep understanding of the legal factors that impact the e-commerce sector. This blog post delves into the legal considerations that contribute to the success of e-commerce in different African countries and recommends suitable entry points for businesses entering the e-commerce market.

The success potential of e-commerce hinges on two factors: information infrastructure and legal considerations. Information infrastructure sets the ceiling of e-commerce possibilities in a target market, as access to the internet, mobile phones, bank accounts, and postal addresses is necessary for online shopping.[2] The UNCTAD B2C E-Commerce Index is an effective tool for assessing a market’s online shopping readiness.[3] Among the target markets, South Africa, Algeria, Ghana, Kenya, Nigeria, and Morocco score highest on the index,[4] while Senegal, Egypt, and Ivory Coast are further down the list.[5] It is worth noting that Egypt has surprisingly dropped in the rankings in the past decade.[6]

Legal considerations, in turn, can act as limiting factors for e-commerce opportunities. These include e-commerce laws, consumer protection laws, data privacy laws, and breach notification laws. Companies can turn these laws to their advantage, however, using them as a guide to identify the most suitable e-commerce market to enter.

To minimize potential disputes and legal complications, it is recommended that U.S. companies identify target markets with reliable legal protection for electronic agreements and strong consumer protection laws. Of the nine countries mentioned earlier, only Nigeria has yet to enact a distinct e-commerce law, although a bill is currently working its way through the legislature.[7] The Algerian e-commerce market is not open to foreign companies, so Algeria is not an advisable market to enter.[8] Among the mentioned countries, Egypt, Nigeria, South Africa, Ghana, and Morocco appear to be suitable markets for entry, as they have established specific laws or regulations to protect consumers, especially in online transactions.[9] Conversely, countries like Kenya, Algeria, Senegal, and Ivory Coast appear to have weaker consumer protection laws, which can create legal ambiguities and compliance challenges.[10]

Data security breaches pose a significant risk, as indicated by the high numbers of malware attacks on industrial control systems in the target markets.[11] Therefore, it is crucial for U.S. companies to take proactive measures to protect their data and adhere to foreign laws. To mitigate these risks, U.S. companies can prioritize entry into markets that have uniform data privacy and protection laws across a group of countries. This is because complying with the legal requirements of one country in the group ensures compliance with all others. The African Union (AU) member states and Economic Community of West African States (ECOWAS) member states are obligated to respect, protect, and promote the right to privacy and personal data protection, as stated in their declarations and conventions.[12]

To ensure compliance and mitigate risks, U.S. companies need to carefully evaluate their business requirements and risk tolerance before entering a new market with data breach notification laws.[13] For larger companies with a greater focus on data protection, countries such as Nigeria, Egypt, and Algeria, which have well-defined and stringent data breach notification laws, may be a suitable choice.[14] However, for small to medium-sized companies that prioritize balancing compliance costs with maintaining consumer trust, countries such as Kenya, Ghana, and South Africa, which have moderate data breach notification laws, may be a more practical option.[15] Ultimately, U.S. companies should carefully assess their risk appetite and business requirements before deciding which market to enter.

To operate in Africa, U.S. companies must adhere to the Foreign Corrupt Practices Act (FCPA), which extends beyond U.S. borders.[16] To avoid violating this law, U.S. companies need to prioritize anti-corruption measures. Transparency International’s 2022 Corruption Perceptions Index (CPI) revealed that sub-Saharan Africa is currently facing a notable challenge with corruption, which may impact businesses operating within the region.[17] As a result, U.S. companies must conduct thorough due diligence to ensure compliance with anti-corruption laws when entering the e-commerce market in the region.

In conclusion, the e-commerce market in Africa presents both risks and opportunities for U.S. companies. While the region has seen tremendous growth in e-commerce, it is essential for U.S. companies to carefully consider the legal landscape and regulatory environment in each target market. By prioritizing legal compliance, consumer protection, data privacy, and anti-corruption measures, U.S. companies can mitigate risks and maximize opportunities for success. Ultimately, those who navigate the legal complexities with diligence and strategic planning stand to benefit from the growing e-commerce market in Africa.

 

 

 

 

 

 

 

[1] White and Case, Africa Focus: Navigating a Changing Business Landscape in Africa and Beyond (Spring 2021), https://www.whitecase.com/publications/insight/africa-focus-spring-2021.

[2] U.N. Conf. on Trade and Dev., UNCTAD B2C E-Commerce Index 2020 Spotlight on Latin America and the Caribbean, 1, https://unctad.org/system/files/official-document/tn_unctad_ict4d17_en.pdf.

[3] Id.

[4] Id. at 15–16.

[5] Id.

[6] Id. at 16.

[7] Aderibigbe et al., Digital Business in Nigeria: Overview, Thomson Reuters (Jan. 1, 2023), https://uk.practicallaw.thomsonreuters.com/w-020-0579; Electronic transaction: Senate prepares legal framework to guide deals, Tribune Online (Feb. 27, 2020), https://tribuneonlineng.com/electronic-transaction-senate-prepares-legal-framework-to-guide-deals/; Kenya Commc’n (Amend.) Act (2008), http://kenyalaw.org/kl/fileadmin/pdfdownloads/AmendmentActs/2009/KENYACOMMUNICATIONS_AMENDMENT_ACT_2008.pdf; Electronic Commu’n and Transactions Act (2002), https://www.gov.za/sites/default/files/gcis_document/201409/a25-02.pdf; Dyer et al., Digital Business in South Africa: Overview, Thomson Reuters (Mar. 1, 2021), https://uk.practicallaw.thomsonreuters.com/w-007-8319; Electronic Transactions Act (2008), https://ictpolicyafrica.org/fr/document/x6dx4fyl9b9; Electronic Signature Law No. 15 (2004), https://itida.gov.eg/English/Documents/2.pdf; World Intell. Prop. Org., Law No. 2008-08 on the Electronic Transactions, https://www.wipo.int/wipolex/en/legislation/details/10283; Evidence Act (2011) § 93, https://www.refworld.org/pdfid/54f86b844.pdf.

[8] Lloyds Bank, E-Commerce in Algeria (last updated Apr. 2023), https://www.lloydsbanktrade.com/en/market-potential/algeria/ecommerce; Loucif and Gauvin, Publication of the law on the post and the electronic communications and the e-commerce law, LPA-CGR, https://tahseen.ae/media/3093/algeria_law-on-the-post-and-electronic-communications-and-the-e-commerce-law.pdf.

[9] Consumer Code of Prac. Regul. (2007), https://ncc.gov.ng/docman-main/legal-regulatory/regulations/102-consumer-code-of-practice-regulations-1/file; U.N. Conf. on Trade and Dev., Review of e-commerce legislation harmonization in the Economic Community Of West African States, 40, https://unctad.org/system/files/official-document/dtlstict2015d2_en.pdf; Sulaiman and Mashaba, E-commerce transactions under the Electronic Communications and Transactions Act and Consumer Protection Act, Dentons (Aug. 26, 2022), https://www.dentons.com/en/insights/articles/2022/august/26/e-commerce-transactions-under-the-electronic-communications#:~:text=Contact%20us-,E%2Dcommerce%20transactions%20under%20the%20Electronic%20Communications%20and%20Transactions%20Act,68%20of%202008%20(CPA); Electronic Transactions Act (2008), https://www.researchictafrica.net/countries/ghana/Electronic_Transactions_Act_no_772:2008.pdf; Morocco Ministry of Indus. and Trade, Consumer Protection, https://www.mcinet.gov.ma/en/content/consumer-protection.

[10] Kenya Info and Commc’n Act (1998), https://www.ca.go.ke/wp-content/uploads/2021/02/Kenya-Information-and-Communication-Act-1998.pdf; Consumer Prot. Law No.181 (2018), https://leap.unep.org/countries/eg/national-legislation/consumer-protection-law-no181-2018#:~:text=181%20of%202018.,-Country&text=This%20Law%20consisting%20of%2076,as%20increasing%20the%20consumer’s%20rights; Brill, Algeria – Consumer Protection, https://referenceworks.brillonline.com/entries/foreign-law-guide/algeria-consumer-protection-COM_013036; ICT Policy Africa, Ordinance n ° 2012 293 of March 21 2012 on Telecommunications and Information Technologies (Unofficial Translation), https://ictpolicyafrica.org/en/document/nvnyrchgy6r.

[11] Culture Custodian, More African Countries are Taking Data Privacy and Protection Seriously (Feb. 8, 2023), https://culturecustodian.com/more-african-countries-are-taking-data-privacy-and-protection-seriously/.

[12] Id; African Union, Personal Data Protection Guidelines for Africa (May 9, 2018), https://www.internetsociety.org/wp-content/uploads/2018/05/AUCPrivacyGuidelines_2018508_EN-1.pdf.

[13] Practical Law Data Privacy & Cybersecurity, Global Data Breach Notification Laws Chart: Overview, Thomson Reuters (Nov. 28, 2022), https://us.practicallaw.thomsonreuters.com/w-016-6863.

[14] DLA Piper, Data Protection Laws of the World, https://www.dlapiperdataprotection.com/; Law 151/2020 on the Protection of Personal Data, https://www.ilo.org/dyn/natlex/natlex4.detail?p_lang=en&p_isn=111246&p_count=7&p_classification=01; Data Guidance, Algeria: Data protection law published in Official Gazette (Apr. 16, 2019), https://www.dataguidance.com/news/algeria-data-protection-law-published-official-gazette.

[15] Nzilani Mweu, Kenya – Data Protection Overview, Data Guidance (Mar. 2023), https://www.dataguidance.com/notes/kenya-data-protection-overview; Bhagattjee, South Africa – Data Protection Overview, Data Guidance (July 2022), https://www.dataguidance.com/notes/south-africa-data-protection-overview; Cybersecurity Act (2010), https://csdsafrica.org/wp-content/uploads/2021/08/Cybersecurity-Act-2020-Act-1038.pdf.

[16] Nick Oberheiden, 10 Reasons Why FCPA Compliance Is Critically Important for Businesses, National L.R. (July 24, 2020), https://www.natlawreview.com/article/10-reasons-why-fcpa-compliance-critically-important-businesses.

[17] Transparency Int., CPI 2022 for Sub-Saharan Africa: Corruption Compounding Multiple Crises (Jan. 31, 2023), https://www.transparency.org/en/news/cpi-2022-sub-saharan-africa-corruption-compounding-multiple-crises#:~:text=A%20regional%20average%20score%20of,by%20significant%20declines%20in%20others.

Image source: https://www.uneca.org/stories/the-afcfta%2C-an-opportunity-for-africa%E2%80%99s-youth-to-accelerate-trade-and-industrialization

To Neurotech or not to Neurotech – Whether ‘tis nobler in the Mind to Regulate

By Jack Younis

 

 

 

In the 2007 hit television show Chuck, an unwitting computer geek is turned into a CIA secret agent/asset when he downloads fighting skills and a database of government intelligence into his brain.[1] Setting aside the action-comedy shenanigans that ensue, the concept of connecting human consciousness directly to technology has continued to capture the cultural zeitgeist. Neurotechnology and its related fields have now advanced beyond discussion in popular culture and science fiction; the technology has become an increasingly topical reality. Moreover, as advances are made in neurotech, the legal questions presented by this progress become increasingly pressing.

Much like Chuck’s download and transmission of CIA data to his brain, one facet of neurotechnology is the ability to “download” data from the technology itself.[2] The brain itself is a biological computer, relying on the firing of electrical signals between neurons to execute commands, not unlike those of an actual computer.[3] The application and development of technology that interprets these signals and firings has led to significant advancements in neurotech.[4]

With such increasingly adaptable technology, many in the field are calling for increased regulation. As Professor Rajesh P.N. Rao of the University of Washington in Seattle puts it, “It’s a good time for us to think about these devices before technology leaps ahead and it’s too late.”[5] Regulatory commentary has already begun: in December 2019, the Organisation for Economic Co-operation and Development (OECD) issued the first international standard for regulating neurotechnology.[6]

The recommendation promulgates a set of nine principles that governance of this novel industry should consider, including, but not limited to, promoting responsible innovation, prioritizing safety assessments, and safeguarding brain data and other information.[7]

Beyond the regulatory guidance conversation, these technologies have also become prevalent in legal discussions. Prominent in the conversation is Dr. Allan McCay, who was named one of the most influential lawyers of 2021 by Australasian Lawyer.[8] Dr. McCay, a criminal law professor at the University of Sydney Law School, published a report addressing these concerns within the past year.[9] His report focuses not only on the social, political, and economic concerns related to neurotechnology, but also on the ethical and legal implications that follow.[10] Mirroring the OECD’s recommendation, Dr. McCay’s work homes in on the ethical steps that must be considered as progress continues, emphasizing “how the law should respond” in addition to how the law is applied.[11]

Even with careful consideration of neurotechnology’s future, not every concern is strictly about restricting this developing industry. Some find that regulations themselves need breathing room to operate effectively. As one article puts it, “Outright bans of certain technologies could simply push them underground, so efforts to establish specific laws and regulations must include organized forums that enable in-depth and open debate.”[12] Much like the OECD and Dr. McCay, Rafael Yuste and the Morningside Group contend that the development of neurotechnology requires consideration beyond technological implications; legal questions related to privacy and consent, agency and identity, augmentation, and bias must all be accounted for as part of the discussion.[13]

Regardless of whether neurotechnology ever enables humans to download fighting moves and spy secrets directly into their consciousness, the emergence of this technology will raise more and more questions. Whether the issue is the impact on the administration of justice or the development of ever-greater capabilities, the work of balancing outcomes and promoting conversation around neurotechnology will most likely continue to grow, and the legal field must stay prepared.

[1] Chuck: About, NBC (2023), https://www.nbc.com/chuck/about.

[2] Julia Masselos, Neurotechnology, Technology Networks (Feb. 11, 2022), https://www.technologynetworks.com/neuroscience/articles/neurotechnology-358488.

[3] Id.

[4] Id.

[5] Esther Shein, Neurotechnology and the Law, 65 Communications of the ACM, no. 8, 2022, at 16-18, https://cacm.acm.org/magazines/2022/8/262912-neurotechnology-and-the-law/fulltext.

[6] OECD, Recommendation of the Council on Responsible Innovation in Neurotechnology, OECD Legal Instrument (Dec. 11, 2019), https://legalinstruments.oecd.org/api/print?ids=658&Lang=en.

[7] Id.

[8] Allan McCay, Neurotechnology, law and the legal profession, The Law Society (August 2022), https://www.scottishlegal.com/uploads/Neurotechnology-law-and-the-legal-profession-full-report-Aug-2022.pdf.

[9] Id.

[10] Id.

[11] Id. at 14.

[12] Rafael Yuste et al., Four ethical priorities for neurotechnologies and AI, Nature (Nov. 9, 2017), https://www.nature.com/articles/551159a#citeas.

[13] Id.

Image Source: https://www.flickr.com/photos/90958025@N03/8384110298

Are Layoffs the New Normal for Big Tech?

By Kasey Hall

Over 140,000 tech workers were laid off in 2022, and so far in 2023, we have seen more than 94,000 jobs cut, ranging from tech start-ups to “Big Tech.”[1] In fact, the tech industry has seen its highest number of layoffs since the dot-com bubble burst in the early 2000s.[2] These layoffs have been all over the news and social media, with many members of younger generations questioning the sustainability of a career in tech.[3]

In the past, tech companies prioritized a “growth at all costs” mindset under which profitability was viewed as a mere afterthought.[4] Sanjay Brahmawar, the CEO of the enterprise software firm Software AG, says, “for years companies have said ‘let’s just keep growing and we’ll figure out profitability somewhere down the road.’”[5] Since 2011, the tech industry has grown year after year, with explosive growth occurring after the onset of the pandemic.[6] In 2020 and 2021, sales sharply rose as new work-from-home orders put heavy demand on tech companies, and more people and businesses relied on these technologies than ever before.[7] During the pandemic, tech hiring became progressively more competitive, with companies increasing pay packages and benefits across the board.[8] For instance, Amazon more than doubled its corporate staffing, and Meta doubled its employee headcount between March 2020 and September 2021.[9] This record-setting growth, however, could not be maintained forever, and we are currently experiencing a significant course correction triggered by an economic slowdown.[10]

For years, investors were willing to let these tech companies spend freely so long as share prices reliably grew by double digits year after year.[11] However, as internal costs rose and spending slowed, many companies faced shrinking profits and alarms from angry investors calling for a significant reduction in expenses.[12] The “growth at all costs” era seems to be ending for “Big Tech.”[13] Investors are instead shifting the focus toward profitability and efficiency, describing this as the “new normal” for tech companies.[14] So, is this “new normal,” led by investors, to blame for these tech layoffs? Michael Cusumano, deputy dean at MIT’s Sloan School of Management, believes that “these massive tech layoffs have more to do with investors than companies’ bottom lines.”[15]

As record-breaking growth is no longer feasible long-term, investors have instead set their sights on curbing expenses and are beginning to evaluate tech companies more harshly.[16] This means that the high-skilled professionals hired en masse during the pandemic, with sizable salaries and pay packages to match, are the first to be cut as tech companies look to reassess their balance sheets.[17] Tech companies have done all this to signal to investors that they are willing to focus on long-term growth by showing more fiscal responsibility toward staffing in the short term.[18] This reorganization of tech companies likely caused these industry-wide layoffs.[19] However, the layoffs should not signal absolute doom to those interested in the industry’s success. Instead, they could indicate that the “industry is maturing or becoming more stable after rapid growth” and that these tech companies are invested in a more sustainable path forward.[20]

[1] Keerthi Vedantam, Tech Layoffs: U.S. Companies That Have Cut Jobs in 2022 and 2023, Crunchbase News (Mar. 3, 2023), https://news.crunchbase.com/startups/tech-layoffs/.

[2] Amanda Hetler, Tech Sector Layoffs Explained: What You Need to Know, TechTarget (Feb. 1, 2023), https://www.techtarget.com/whatis/feature/Tech-sector-layoffs-explained-What-you-need-to-know.

[3] Tripp Mickle, Tech Layoffs Shock Young Workers. The Older People? Not So Much., N.Y. Times (Jan. 23, 2023), https://www.nytimes.com/2023/01/20/technology/tech-layoffs-millennials-gen-x.html.

[4] Leslie Picker & Ritika Shah, Tech Private Equity Investor Orlando Bravo Says the Mantra of “Growth at all Costs” is Over, CNBC (Mar. 3, 2023, 11:24 AM), https://www.cnbc.com/2022/03/03/tech-private-equity-investor-orlando-bravo-says-the-mantra-of-growth-at-all-costs-is-over-.html.

[5] Will Daniel, How to Navigate the Stock Market’s “New Normal” After the Last 2 Decades of Investing Became Ancient History, Fortune (June 4, 2022, 6:30 AM), https://fortune.com/2022/06/04/tech-stocks-investing-new-normal-end-of-growth-at-all-costs-era/.

[6] The Future of Big Tech, J.P.Morgan (Dec. 23, 2022), https://www.jpmorgan.com/insights/research/future-of-big-tech.

[7] Why Are Tech Companies Laying Off All These Workers?, Forbes (Jan. 27, 2023 10:50 AM), https://www.forbes.com/sites/qai/2023/01/27/why-are-tech-companies-laying-off-all-these-workers/?sh=30a34e764fc6.

[8] Id.

[9] Clare Duffy, How Big Tech’s Pandemic Bubble Burst, CNN (Jan. 22, 2023, 8:11 AM), https://www.cnn.com/2023/01/22/tech/big-tech-pandemic-hiring-layoffs/index.html.

[10] Big Tech Layoffs – A Meltdown or Course Correction? Harvard Prof Ranjay Gulati Explains, The Econ. Times (Nov. 10, 2022, 11:16 AM), https://economictimes.indiatimes.com/markets/expert-view/big-tech-layoffs-a-meltdown-or-course-correction-harvard-prof-ranjay-gulati-explains/articleshow/95418482.cms?from=mdr.

[11] Jake Swearingen, Wall Street Ignored Big Tech’s Bloat During Boom Times. Now It’s Ready to Slice and Dice, Insider (Nov. 17, 2022, 2:03 PM), https://www.businessinsider.com/tech-layoffs-meta-alphabet-wall-street-2022-11.

[12] Hetler, supra note 2.

[13] Daniel, supra note 5.

[14] Id.

[15] Forbes, supra note 7.

[16] Id.

[17] Bobby Allyn, 5 Takeaways from the Massive Layoffs Hitting Big Tech Right Now, NPR (Jan 26, 2023, 5:00 AM), https://www.npr.org/2023/01/26/1150884331/layoffs-tech-meta-microsoft-google-amazon-economy.

[18] Forbes, supra note 7.

[19] Id.

[20] Hetler, supra note 2.

Image Source: https://unsplash.com/photos/1K9T5YiZ2WU

ChatGPT Co-Wrote an Episode of South Park. Will The AI Chatbot Replace the Need for Writers in Hollywood?

By Cleo Scott

ChatGPT has been a hot topic lately. From dating apps[1] to the courtroom,[2] the natural language processing tool driven by artificial intelligence technology is transforming the way we do things.[3] Now, the trailblazing chatbot can add television writing to its resume. The creators of South Park used OpenAI’s chatbot to create the fourth episode of season 26.[4] The episode, titled “Deep Learning,” shows boys from Stan’s class using the chatbot to write essays and send texts to girls.[5] During a speech written by ChatGPT, Stan argues that people shouldn’t be blamed for using the chatbot.[6] “It’s the giant tech companies who took Open AI, packaged it, monetized it, and pushed it out to all of us as fast as they could in order to get ahead,” he says.[7]

At one point in the episode, Stan asks ChatGPT to write a story that takes place in South Park, in which a boy named Stan must convince his girlfriend that it’s okay that he lied about using AI to text her.[8] After Stan sends the request, the chatbot begins “thinking” and replies with a story within seconds.[9] “Once upon a time, there was a boy named Stan who lived in South Park. Stan loved his girlfriend very much, but lately, he hadn’t been truthful with her. One day, when Stan got to school, he was approached by his best friend,” the response read.[10]

The ending credits show that the episode was written by both Trey Parker and ChatGPT.[11] While it is remarkable how advanced AI has become, people are now wondering if AI tools like ChatGPT will soon replace the need for human writers. OpenAI co-founder and president Greg Brockman thinks the chatbot could even be used to fix the last season of Game of Thrones.[12] “That is what entertainment will look like,” Brockman said at a SXSW panel. “Maybe people are still upset about the last season of Game of Thrones. Imagine if you could ask your A.I. to make a new ending that goes a different way and maybe even put yourself in there as a main character or something.”[13] Others also think ChatGPT should be used for television writing. For instance, Deadline used ChatGPT to create a pitch for a Mad Max reboot.[14] The chatbot responded with a detailed pitch outlining the premise of the show.[15] While the pitch needed some tweaking, it was said to be doable.[16]

Brockman thinks ChatGPT could help do the “drudge work” for writing but also add a more “interactive” entertainment experience.[17] Hollywood is now watching the potential impact of ChatGPT on the industry.[18] The Writers Guild of America West said it is “monitoring the development of ChatGPT and similar technologies in the event they require additional protections for writers.”[19] On the other hand, screenwriters interviewed by The Hollywood Reporter see ChatGPT as a potential tool to aid the writing process rather than one that will replace the work of writers.[20]

The issue is that what often takes writers weeks or months to formulate takes ChatGPT only 30 seconds.[21] Brockman said ChatGPT could take over the types of jobs where users “didn’t want human judgment there in the first place.”[22] Big Fish and Aladdin writer John August doesn’t think the AI chatbot will replace the kind of writing done in writers’ rooms anytime soon.[23] Still, he thinks we should start thinking about the best ways to use the tool.[24] “There certainly is no putting the genie back in the bottle. It’s going to be here, and we need to be thinking about how to use it in ways that advance art and don’t limit us.”[25]

[1] Anna Iovine, Tinder users are using ChatGPT to message matches, Mashable (Dec. 17, 2022), https://mashable.com/article/chatgpt-tinder-tiktok.

[2] Janus Rose, A Judge Just Used ChatGPT to Make a Court Decision, Vice (Feb. 3, 2023), https://www.vice.com/en/article/k7bdmv/judge-used-chatgpt-to-make-court-decision.

[3] See Natasha Lomas, ChatGPT shrugged, TechCrunch (Dec. 5, 2022) (quoting “ChatGPT is a new artificial intelligence (AI) tool that’s designed to help people communicate with computers in a more natural and intuitive way — using natural language processing (NLP) technology.”), https://techcrunch.com/2022/12/05/chatgpt-shrugged/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAADU59VjZUKBKujH7dTsnuAADMtjElPmTT1SQCENX5S19xIrGG7Nb4Y_u3oYDPvRKUVBhiRiYoCu4WDM7d8DQ8NnPd02PGcAUWvE8ojCXvVfGpARK5NXKe0F2epgIlzYZwW9V_I6bWPDTi5XWYPNseXl2vvZYP8DVZbrk8XWqyVAW.

[4] Stacy Liberatore, South Park’s latest episode was co-written by ChatGPT: ‘Deep Learning’ ends with a script generated by OpenAI’s chatbot, Daily Mail (Mar. 17, 2023), https://www.dailymail.co.uk/sciencetech/article-11873595/South-Parks-latest-episode-Deep-Learning-written-ChatGPT.html.

[5] Id.

[6] Id.

[7] Id.

[8] Id.

[9] Liberatore, supra note 4.

[10] Id.

[11] Id.

[12] J. Clara Chan, Using ChatGPT to Rewrite ‘Game of Thrones’? OpenAI Co-Founder Says “That Is What Entertainment Will Look Like”, The Hollywood Reporter (Mar. 10, 2023), https://www.hollywoodreporter.com/business/digital/chatgpt-game-of-thrones-openai-greg-brockman-1235348099/.

[13] Id.

[14] Melissa Murphy, ChatGPT Is Going To Start Writing Hollywood Movies?, Giant Freakin Robot (last visited Mar. 18, 2023), https://www.giantfreakinrobot.com/ent/chatgpt-writing-hollywood-movies.html.

[15] Id.

[16] Id.

[17] Chan, supra note 12.

[18] Id.

[19] Id.

[20] Id.

[21] Murphy, supra note 14.

[22] Chan, supra note 12.

[23] Katie Kilkenny & Winston Cho, Attack of the Chatbots: Screenwriters’ Friend or Foe?, The Hollywood Reporter (Jan. 12, 2023), https://www.hollywoodreporter.com/business/business-news/chatgpt-hollywood-screenwriters-film-tv-1235296724/.

[24] See id.

[25] Id.

Image Source: https://metro.co.uk/wp-content/uploads/2023/03/SEC_148556154-8b3a.jpg?quality=90&strip=all&zoom=1&resize=644%2C338

How Doctors Used Patients’ Dreams to Further Their Own

By Jessica Birdsong

Many of us have seen the Netflix documentary Our Father, which presents the disturbing story of Dr. Donald Cline, a physician who, during the 1970s and ’80s, performed inseminations on patients using his own sperm without their knowledge or consent.[1] The full extent of his actions is unknown, but he fathered at least 94 biological children, and possibly many more.[2] The discovery of this deception has been devastating for the victims, as they grapple with the loss of their identity and the revelation of having numerous half-siblings.[3] The mothers who were affected have also been left feeling violated and betrayed.[4]

Some of the affected siblings took legal action, but they were met with disappointment. Despite Cline’s egregious actions, he was not charged with rape, battery with bodily waste, or even criminal deception.[5] Instead, he was charged only with obstruction of justice for being untruthful, resulting in a $500 fine and no jail time.[6] This lack of legal consequences stems from the fact that no law in Indiana, or in most other states, specifically prohibits a doctor from using their own sperm on their patients.[7]

Regrettably, Cline’s story is not unique. In a 2023 decision, a judge dismissed claims made by offspring who discovered that a Connecticut doctor had used his own sperm to impregnate their mothers without their knowledge.[8] After shocking results from an at-home DNA test, the plaintiffs discovered they were half-siblings.[9] They both brought claims of negligence, fraudulent concealment, lack of informed consent, and unfair trade practices, citing the mental anguish and physical injury they have suffered as a result of their discovery.[10]

Plaintiff Flaherty alleges that, as a result of the defendant’s conduct, he sustained and continues to suffer mental anguish and physical injury to his emotional and psychological well-being.[11] The court found that because Flaherty does not require any extraordinary care for his injury, this claim is precluded.[12] Further, plaintiff Suprynowicz alleges that, as a result of the defendant’s negligence, she suffers from a genetic condition that “limits her earning capacity and impairs her ability to earn a living.”[13] The court responded that because the plaintiff never had a wage-earning capacity that was taken away by the doctor’s conduct, she could not claim compensation for its loss.[14]

Overall, the court found that the plaintiffs’ claims fell under the category of “wrongful life,” a cause of action that has been declined by the majority of courts in the country.[15] The court argued that the plaintiffs could not recover for harm resulting from the achievement of life, and also raised concerns about the difficulty of quantifying damages in cases involving the weighing of an impaired life against no life at all.[16]

Thankfully, there is some hope for change. In January 2023, a federal bill was introduced to establish that it is a criminal offense for medical professionals to knowingly misrepresent the nature or source of DNA used in any procedure that involves assisted reproductive technology.[17] The Protecting Families from Fertility Fraud Act proposes a new federal crime under the Sexual Assault chapter, which would provide greater clarity and legal protection to those affected by fertility fraud.[18]

[1] Lindsey L. Wallace, Netflix’s Our Father Tells The True Story of a Fertility Doctor Who Used His Own Sperm on Patients, Time (May 12, 2022, 5:54 PM), https://time.com/6176310/our-father-true-story-netflix/.

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Wallace, supra note 1.

[7] Id.

[8] Suprynowicz v. Tohan, X03-CV-21-6140245-S, 2023 WL 2134547, at *1 (Conn. Feb. 17, 2023).

[9] Id.

[10] Id. at *2.

[11] Id. at *5.

[12] Id.

[13] Suprynowicz, 2023 WL 2134547, at *5.

[14] Id.

[15] Id. at *4.

[16] Id.

[17] Press Release, U.S. Congressman Joseph Morelle, Congressman Joe Morelle Acts To Combat Fertility Fraud (Feb. 9, 2023), https://morelle.house.gov/media/press-releases/congressman-joe-morelle-acts-combat-fertility-fraud.

[18] Id.

Image Source: https://www.theverge.com/c/23157354/doctor-donor-fertility-fraud-ancestry-23andme-dna-test

The FTX Saga: There’s No New Hope

By Dante Bosnic

As the Sam Bankman-Fried and FTX saga continues, more and more details are coming out regarding the once-famed cryptocurrency giant. According to Protos, Alameda Research purchased HiveEx, an Australian over-the-counter (OTC) trading desk, in 2020 and immediately appointed Bankman-Fried as a director.[1] Fred Schebesta, one of HiveEx’s founders, had also purchased a stake in a local Australian bank, Goldfields Money, in 2018 and announced his intention to launch Australia’s first crypto bank.[2] After Schebesta purchased this stake and before Alameda Research’s acquisition, HiveEx had advertised its ability to secure banking services for other crypto companies, even those that other banks had repeatedly rejected.[3]

FTX’s reach extended beyond Australia. The Financial Times (FT) has reported on Genesis Block, an FTX-integrated OTC desk that allowed Hong Kong residents to exchange their cash for cryptocurrency and vice versa.[4] A former Genesis Block employee told FT that the company had people lining up in the streets with bags of cash to exchange for cryptocurrency.[5] Both HiveEx and Genesis Block appear to have served as important on/off-ramps for FTX and Alameda Research, thanks in part to their connections to the banking system.[6] Beyond the entities in Australia and Hong Kong, DAAG Trading DMCC, based in the United Arab Emirates, was also included in FTX’s bankruptcy.[7]

In addition to news regarding FTX’s acquisitions, more information has come to light regarding FTX’s charitable contributions. According to Time, leaders of the Effective Altruism (EA) movement were repeatedly warned, beginning in 2018, that Sam Bankman-Fried was unethical, duplicitous, and negligent in his role as CEO of Alameda Research.[8] They apparently dismissed those warnings, sources say, before taking tens of millions of dollars from Bankman-Fried’s charitable fund for effective altruist causes.[9] After FTX’s collapse, William MacAskill, the Oxford moral philosopher and intellectual figurehead of EA, whose movement set out to help the global poor, tweeted, “I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.”[10] MacAskill declined to answer a list of detailed questions from TIME, stating, “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly. I look forward to the results of the investigation and hope to be able to respond more fully after then.”[11] Furthermore, one person connected to MacAskill stated, “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance.”[12] While many messy details remain regarding how many warnings MacAskill and the EA leaders received, it is clear that Bankman-Fried’s wealth allowed him to repeat the same mistakes that likely contributed to FTX’s downfall.[13]

As more profiles of Bankman-Fried and FTX come out weekly, his court battle has continued as well. In early March, Bankman-Fried’s lawyers reportedly argued that it might be necessary to delay his criminal trial scheduled for October 2.[14] In a letter to U.S. District Judge Lewis Kaplan, the 31-year-old former billionaire’s lawyers said federal prosecutors in Manhattan had not yet turned over evidence collected from electronic devices belonging to Caroline Ellison and Gary Wang, previously two of their client’s closest associates.[15] “While we are not making such an application at this time, we wanted to note this issue for the Court now,” Christian Everdell, one of Bankman-Fried’s lawyers, wrote in the letter. Along with handling this request, Judge Kaplan has also questioned Bankman-Fried’s bail conditions.[16] According to CNN, Kaplan said he is still not convinced that the founder of the bankrupt crypto trading platform FTX could not circumvent the more restrictive bail conditions filed last week.[17] Bankman-Fried, who did not attend the hearing, is currently under house arrest at his parents’ home in Palo Alto, California.[18] Kaplan expressed concern about the possibility of Bankman-Fried using other people’s devices if they are brought into his California residence, noting that Bankman-Fried could use a flip phone to call someone to convey what he would otherwise send in an email or text. Kaplan also said he would sign an order modifying the conditions to allow Bankman-Fried access to an FTX database to prepare for trial, but that order also needed further restrictions.[19]

As the saga continues, it only looks like it’s getting worse for Bankman-Fried. It will be interesting to see what else surfaces as he approaches his trial in October.

[1] Protos Staff, HiveEx, Genesis Block, and SBF’s trading desk network, Protos (Mar. 13, 2023), https://protos.com/hiveex-genesis-block-and-sbfs-trading-desk-network/.

[2] Id.

[3] Id.

[4] Id.

[5] Protos Staff, supra note 1.

[6] Id.

[7] Id.

[8] Charlotte Alter, Exclusive: Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed, Time (Mar. 15, 2023), https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/.

[9] Id.

[10] Charlotte Alter, supra note 8; Gideon Lewis-Kraus, The Reluctant Prophet of Altruism, The New Yorker (Aug. 8, 2022), https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism.

[11] Charlotte Alter, supra note 8.

[12] Id.

[13] See id.

[14] See Luc Cohen, Bankman-Fried’s lawyers say October trial may need to be delayed, Reuters (Mar. 9, 2023), https://www.reuters.com/legal/bankman-frieds-lawyers-say-october-trial-may-need-be-delayed-2023-03-09/.

[15] Id.

[16] See Lauren del Valle, Judge concerned Sam Bankman-Fried is too ‘technologically savvy,’ could find a way around tech restrictions, CNN Business (Mar. 10, 2023), https://www.cnn.com/2023/03/10/business/sam-bankman-fried-bail-technology-restrictions/index.html.

[17] Id.

[18] Id.

[19] Id.

Image source: https://www.cnn.com/2023/03/10/business/sam-bankman-fried-bail-technology-restrictions/index.html

What is the Biden Administration’s new National Cybersecurity Strategy, and will it mean we can keep TikTok?

By Paige Hastings

On March 2, the Biden-Harris Administration released a new National Cybersecurity Strategy (The Strategy) to create a “safe and secure digital ecosystem for all Americans.”[1] In different contexts, the specific meaning of cybersecurity can vary, but cybersecurity policies are extremely important on the national, local, and individual levels.[2]

The Strategy calls for defending critical infrastructure, disrupting security threats, shaping market forces by allocating responsibilities, investing in a plan for lasting innovation, and creating international partnerships to pursue common technology goals.[3] These actions are meant to handle hacking threats more aggressively, disrupt intruders of U.S. computer networks, and hold companies more accountable.[4] The establishment of minimum security standards could force software manufacturers and technology companies to take on the burden of implementing more secure software and better protect consumers.[5] The heightened accountability would be a significant shift from current insufficiencies in holding technology companies responsible for securing user accounts and information.[6]

The United States’ sectoral approach to technology law means many different cybersecurity laws and regulations create a patchwork of protection.[7] Recent threats from hackers, cyberterrorists, and data breaches have led to an increased examination of the U.S.’s regulatory approach.[8] Instead of calling for omnibus legislation, The Strategy addresses regulatory inadequacies by recognizing the need to renovate existing policies.[9]

The revamp would include building on, harmonizing, and streamlining our existing policies to empower the current frameworks’ support of national security and public safety.[10] The Strategy will “use existing authorities to set necessary cybersecurity requirements in critical sectors . . . [and] leverage existing cybersecurity frameworks,” such as CISA’s Cybersecurity Performance Goals, to accomplish these directives.[11] Implementing existing guidelines, like the National Institute of Standards and Technology’s Framework for Improving Critical Infrastructure Cybersecurity, will hopefully result in stricter security obligations and could lead to noticeable advancements more quickly.[12] Despite its potential for expediency, the Strategy’s method might be difficult to enforce without a legislative overhaul and before the next presidential election.[13] Existing policies have been criticized for their inability to control large technology and software companies like Meta Platforms, Inc., Amazon.com, Inc., Google, and Apple Inc., so the current cybersecurity infrastructure may not be equipped to effectuate the goals of responsibility and accountability The Strategy hopes to produce.[14]

Concerns, interest, and public outcry over data security have been increasing.[15] Increased awareness of companies profiting from lax data security systems and personal information, along with high-profile data breaches, has heightened concerns about cybersecurity in the private sector.[16] Data breaches and the subsequent abuse of private information are especially alarming when consumers lack the know-how and power to protect their data.[17] For consumer data protection to advance, the responsibility for data security must shift from consumers to large, private-sector software companies.[18] On a national level, awareness of the dangers of cyberterrorism has also risen due to conflicts with Russia and risks from platforms like TikTok.[19] Americans have been especially captivated by proposals to ban TikTok in response to its potential threats.[20] The success of The Strategy could prevent the need for such drastic, potentially censorial measures by fortifying our national data protection systems.

Effective collaboration will be integral to successfully executing The Strategy and establishing safer internet use for consumers and the nation.[21] The proposed changes involve government regulation, oversight, and enforcement, along with participation from large companies and the public.[22] Although it may seem like a lofty request, the interconnected nature of modern society, coupled with rapid technological development, means that cyber threats are constant, evolving, and not exclusively a matter of national security. These dangers affect individuals, organizations, and entire societies, making participation at every level not just important but unavoidable.[23] The private sector, with its infrastructure, services, and market power, must take proactive steps to safeguard data. Technology and software companies need to shoulder the additional responsibility The Strategy seeks to impose, potentially over their economic interests, to improve the storage and protection of consumer information. Additionally, the public needs better education about cyber risks so that individuals can take effective protective action. Our government can provide regulatory frameworks, intelligence, and resources for cyber protection, but it cannot do it alone. Only through an alliance with individuals and companies can The Strategy, and its underlying principles, create the strong and resilient cybersecurity ecosystem we need.

[1] Press Release, The White House, Fact Sheet: Biden-⁠Harris Administration Announces National Cybersecurity Strategy (Mar. 2, 2023)(available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/).

[2] Jeff Kosseff, Defining Cybersecurity Law, 103 Iowa L. Rev. 985, 987-989 (2018); See Cybersecurity Act of 2015, Pub. L. No. 114-113, Div. N, § 1(a), 129 Stat. 2935 (codified at 6 U.S.C.A. §§ 1501–10 (West 2016)) (neglecting to set forth a definition for cybersecurity); See What is Cybersecurity?, Cybersecurity & Infrastructure Security Agency: News (Feb. 1, 2021), https://us.norton.com/blog/privacy/privacy-vs-security-whats-the-difference; Jessica Farrelly, High-Profile Company Data Breaches 2023, Electric: Blog (Mar. 7, 2023), https://www.electric.ai/blog/recent-big-company-data-breaches; Christopher Yasiejko, Prisma Labs Sued Over Lensa AI App’s Biometric Data Harvesting, Bloomberg Law: News (Mar. 14, 2023, 7:00 PM), https://www.bloomberglaw.com/product/privacy/bloomberglawnews/privacy-and-data-security/BNA%2000000186c7fbd31ba1afc7ff57430002?bna_news_filter=privacy-and-data-security ; Skye Witley, 2023’s Largest Health Data Breach So Far Brings Legal Flurry, Bloomberg Law: News (Mar. 14, 2023), https://www.bloomberglaw.com/product/privacy/bloomberglawnews/privacy-and-data-security/BNA%2000000186c7fbd31ba1afc7ff57430002?bna_news_filter=privacy-and-data-security; Naureen S. Malik, US Cyber Official says China is ‘Big Threat’ to Energy Industry, Bloomberg Law: News (Mar. 10, 2023, 10:10 AM), https://www.bloomberglaw.com/product/blaw/bloomberglawnews/privacy-and-data-security/XCCTHRIK000000?bc=W1siU2VhcmNoICYgQnJvd3NlIiwiaHR0cHM6Ly93d3cuYmxvb21iZXJnbGF3LmNvbS9wcm9kdWN0L2JsYXcvc2VhcmNoL3Jlc3VsdHMvOWJjODc5MmQ0YzMwZmQ3OGY0OTI4NDg5MjA1NGYyMTAiXV0–eab7eb50a376d38e48393a7a5bf008d82883e40c&bna_news_filter=privacy-and-data-security&criteria_id=9bc8792d4c30fd78f49284892054f210; Russia Cyber Threat Overview and Advisories, Cybersecurity & Infrastructure Sec. Agency, https://www.cisa.gov/russia (last visited Mar. 15, 2023); Press Release, The White House, Statement by President Biden on our Nation’s Cybersecurity (Mar. 21, 2022) (available at https://www.whitehouse.gov/briefing-room/statements-releases/2022/03/21/statement-by-president-biden-on-our-nations-cybersecurity/).

[3] President Biden, National Cybersecurity Strategy, The White House 4 (Mar. 1, 2023), https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.

[4] Ben Kochman, 4 Highlights From Biden’s Beefed Up Cybersecurity Strategy, Law360: Analysis (Mar. 2, 2023, 10:20 PM), https://www.law360.com/articles/1581635/4-highlights-from-biden-s-beefed-up-cybersecurity-strategy.

[5] Id.; National Cybersecurity Strategy, supra note 3, at 8-10.

[6] Katrina Manson, Cyber Plan Would Hold Software Makers Responsible in Hacks, Bloomberg Law: Privacy & Data Sec. (Mar. 2, 2023, 3:34 PM), https://news.bloomberglaw.com/privacy-and-data-security/biden-cyber-plan-would-hold-software-makers-responsible-in-hacks.

[7] Janine S. Hiller et al., Cybersecurity Carrots and Sticks, Am. Bus. L. J., (forthcoming 2023) (manuscript at 20-30) (available at https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID4322819_code354835.pdf?abstractid=4322819&mirid=1); Jeff Kossef, Updating Cybersecurity Law, Hous. L. Rev., (forthcoming 2023) (manuscript at 8-24) (available at https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID4364356_code3083727.pdf?abstractid=4364356&mirid=1).

[8] Jeff Kosseff, supra note 2, at 1001-1005; Skye Witley et. al., Why TikTok App Bans are Trending Across the US: Explained, Bloomberg Law: Privacy & Data Sec.(Mar. 8, 2023, 5:05 AM), https://www.bloomberglaw.com/product/blaw/bloomberglawnews/privacy-and-data-security/X4IQMABC000000?bc=W1siU2VhcmNoICYgQnJvd3NlIiwiaHR0cHM6Ly93d3cuYmxvb21iZXJnbGF3LmNvbS9wcm9kdWN0L2JsYXcvc2VhcmNoL3Jlc3VsdHMvOWJjODc5MmQ0YzMwZmQ3OGY0OTI4NDg5MjA1NGYyMTAiXV0–eab7eb50a376d38e48393a7a5bf008d82883e40c&bna_news_filter=privacy-and-data-security&criteria_id=9bc8792d4c30fd78f49284892054f210; Christopher Bing, Russian Hackers Preparing New Cyber Assault Against Ukraine – Microsoft Report, Reuters: Technology (Mar. 15, 2023, 3:09 PM), https://www.reuters.com/technology/russian-hackers-preparing-new-cyber-assault-against-ukraine-microsoft-report-2023-03-15/.

[9] National Cybersecurity Strategy, supra note 3, at 5-9.

[10] Id. at 8.

[11] Id.

[12] Id.

[13] Katrina Manson, supra note 6.

[14] Id.

[15] Christopher Brown, Website-Browsing Surveillance Suits Erupt After Appellate Ruling, Bloomberg Law: News (Sept. 23, 2022, 4:45 AM), https://www.bloomberglaw.com/product/blaw/bloomberglawnews/bloomberg-law-news/BNA%20000001836054d422ada7fbf7e0b90001?bna_news_filter=bloomberg-law-news; Brenna Goth, Florida ‘Digital Rights’ Push Big Tech Into DeSantis Culture War, Bloomberg Law: News (Mar. 15, 2023, 5:00 AM), https://www.bloomberglaw.com/product/blaw/bloomberglawnews/bloomberg-law-news/BNA%2000000186cd69dfddabf6efff1d5a0000?bna_news_filter=bloomberg-law-news.

[16] Brenna Goth & Skye Witley, Data Privacy ‘Panoply’ Looms as States Move to Fill Federal Hole, Bloomberg Law: News (Jan., 19, 2023, 5:01 AM), https://www.bloomberglaw.com/product/privacy/bloomberglawnews/bloomberg-law-news/X8ID0VLS000000?#jcite.

[17] Mason Storm, When the Consumer Becomes the Product: Utilizing Products Liability Principles to Protect Consumers from Data Breaches, 29 Rich. J.L. & Tech. 1, 4-11 (2023); Jen Easterly, The Cost of Unsafe Technology and What We Can Do About It, Cybersec. & Infrastructure Sec. Agency: Blog (Mar. 10, 2023), https://www.cisa.gov/news-events/news/cost-unsafe-technology-and-what-we-can-do-about-it.

[18] Id.

[19] Bing, supra note 8; Josh Liberatore, GAO Warns US Gov’t About ‘Catastrophic’ Cyber Risk, Law360: News (June 22, 2022), https://www.law360.com/articles/1504836?scroll=1&related=1; Malik, supra note 2.

[20] Witley et. al., supra note 8; Anna Edgerton, US TikTok Ban Advances in House After Flurry of China Bills, Bloomberg Law: News (Mar. 1, 2023, 10:29 AM), https://www.bloomberglaw.com/product/blaw/bloomberglawnews/bloomberg-law-news/XF40I5JS000000?#jcite.

[21] National Cybersecurity Strategy, supra note 3, at

[22] Id.

[23] Narenda Sharma et. al., Cost and Effects of Data Breaches, Precautions, and Disclosure Laws, 8 Int’l J. Emerging Trends  Soc. Sci. 33, 36 (2020).

Image Source: https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.securitycompass.com%2Fblog%2Fwhite-house-national-cybersecurity-strategy-takes-on-industrys-third-rail%2F&psig=AOvVaw0GUvx07zb0BlqyanViD6ct&ust=1679150033058000&source=images&cd=vfe&ved=0CBAQjRxqFwoTCJifub-X4_0CFQAAAAAdAAAAABAE
