The first exclusively online law review.


The End of TikTok?

By Sophie Deignan





Efforts to ban TikTok, a popular video-sharing app, are being fueled by bipartisan support as both federal lawmakers and state governors move beyond merely prohibiting the app on government-issued devices.[1] The main concern is that TikTok is owned by Beijing-based ByteDance Ltd., raising fears that the Chinese government has unrestricted access to U.S. users’ data.[2] Assurances by TikTok representatives that the app does not share U.S. users’ data with the Chinese government have not persuaded U.S. lawmakers that the app is harmless.[3] Rather, the U.S. fears that the Chinese Communist Party may require ByteDance, as a Chinese business, to hand over data collected by the app at any point, thus providing China with extensive access to U.S. users’ data.[4]

The app first became popular during the Covid-19 pandemic, allowing users to create and upload short videos of themselves, and it is now seen as mainstream social media.[5] Despite the fun dance moves and cooking recipes that have been uploaded on TikTok, the app has already been banned in varying capacities from state-issued devices by approximately two dozen states with both Republican and Democratic governors.[6] In December 2022, the federal government also banned the app from government-issued devices.[7]

Now, the federal government is looking to take further action to prohibit the app entirely.[8] Most recently, a group of bipartisan U.S. Senators introduced a bill on March 7, 2023, aimed at dealing with technology companies that are based in adversary countries.[9] The bill, if passed, would create a new government process for reviewing potential risks that are inherent in the use of foreign technology and blocking technology found to be too dangerous.[10] Titled the RESTRICT Act (“Restricting the Emergence of Security Threats that Risk Information and Communications Technology”), this bill would require the Commerce Department to establish procedures that “identify, deter, disrupt, prevent, prohibit and mitigate” risks that the U.S. believes are linked to foreign technology.[11] If passed, the bill would grant more governmental authority for the policing of apps and services that are viewed as risks to U.S. users’ data security.[12] This would allow the U.S. government to specifically target TikTok if it wanted to.[13] The Biden Administration has suggested that the RESTRICT Act should go further than is currently proposed and simply ban TikTok outright.[14] It is unclear at this point how the White House will support or modify the RESTRICT Act; however, it appears that the President has already attempted, but failed, to mitigate some of the potential risks associated with TikTok.[15] Private negotiations have already occurred between TikTok and the Committee on Foreign Investment in the United States, but no agreement was reached between the two parties regarding how TikTok may continue to operate in the U.S. without creating a national security risk.[16] It appears that the lack of success arising from these negotiations is what prompted the Biden Administration to turn to Congress to pass legislation that would ban TikTok.[17] In the upcoming months, it will be important to follow whether the RESTRICT Act is passed as it currently stands, or whether it will be reworked to take an even more aggressive stance against TikTok.






[1] Jennifer Calfas, TikTok Bans on Government Phones are Increasing. Here’s What to Know., Wall St. J. (Jan. 26, 2023),

[2] Id.

[3] Id.

[4] Aaron Schaffer, There are TikTok Bans in Nearly Two Dozen States, Wash. Post (Jan. 10, 2023),

[5] Calfas, supra note 1.

[6] Schaffer, supra note 2.

[7] Bobby Allyn, Biden Approves Banning TikTok from Federal Government Phones, NPR (Dec. 30, 2022)

[8] Calfas, supra note 1.

[9] John D. McKinnon, TikTok Faces More Scrutiny in New Senate Bill, Wall St. J. (Mar. 7, 2023),

[10] Id.

[11] Id.

[12] Id.

[13] David McCabe, White House Said to Consider Pushing Congress on Dealing With TikTok, N.Y. Times (Mar. 6, 2023),

[14] Id.

[15] Id.

[16] Id.

[17] Id.






By Kevin Frazier*






“We really don’t know about these things. You know, these are not like the nine greatest experts on the Internet.”[1]

Justice Kagan broke an unspoken rule during oral argument in a case involving complex technological evidence – she acknowledged that judges may lack the requisite knowledge to adjudicate certain cases. This admission should lead to a long overdue conversation about providing judges with the background knowledge necessary to adjudicate cases involving new technology and complex science.

Our adversarial judicial system hinges on parties making the strongest arguments possible and judges understanding the strengths and weaknesses of those arguments. This system breaks down where neither the parties nor the judges possess an accurate and sufficient understanding of the complex evidence in question. Yet, parties are not compelled to offer the most accurate and complete information – their task is instead to present the most compelling evidence. And, many judges have admitted – though less publicly than Justice Kagan – that they lack familiarity with many of the complex scientific and technological topics that will make up a larger share of their respective dockets in the next few years.[2]

The possibility of the “strongest” argument winning the day based on inaccurate, flawed, or outdated scientific or technological evidence is not one that the parties or the public should accept. Judicial decisions based on faulty conceptions of complex topics will further undermine the public’s perception of the courts. The current approach to reducing the odds of such a possibility is woefully inadequate.

Judicial education today is akin to a NASCAR racer teaching a teenager how to fly a plane—more experienced judges informing their junior colleagues about topics neither group knows anything about. What’s astonishing is that this approach is actually an improvement on the historical norm. Until the 1950s, judges avoided any sort of education once they assumed the bench.[3] Nevertheless, it’s time that state court judges receive timely, impartial, and accurate education through the creation of State Court Science Offices (SCSOs).


Law school and even years of legal practice do not equip judges to deal with the panoply of issues presented in modern disputes. “When lawyers don black robes to become judges, they do not magically acquire all the knowledge, experience, and skills necessary to become excellent judges,” according to Chief Justice Mary Russell of the Supreme Court of Missouri.[4] Consequently, judicial education is “imperative.”[5] In particular, judges must receive education on “the appropriate information to allow [them] to develop the most comprehensive and current understanding of substantive areas of the law, as well as the law of evidence and procedure.”[6]

Chief Justice Russell lists scientific and technological matters as the first area that demands more judicial education.[7] Others have similarly highlighted the importance of scientific and technological knowledge among jurists in an increasingly complex and complicated world.[8] Absent providing judges with an understanding of everything from artificial intelligence to online privacy, Chief Justice Russell warns that judges will “hand down opinions rooted in ignorance.”[9] Professor Edward Cheng shares her concerns. He forecasts that “unfamiliarity with scientific concepts and an inability to assess expert evidence critically substantially increase the chance of erroneous decisions, particularly when judges face conflicting expert witnesses.”[10] In fact, the decision whether to admit scientific and technological evidence may determine the outcome of a case.[11] More broadly, courts that fail to follow the best scientific evidence will cause the entire judiciary “intellectual embarrassment.”[12]

Complex evidence places a major burden on state court judges in particular – many of whom bear the task of serving as “all-important gatekeepers who are obligated to ensure that only ‘good’ science reaches the jury.”[13] To complete this task, judges must have the requisite knowledge to “critically examine an expert’s methodology and conclusions[.]”[14] State court judges will also have to critically examine and, in some cases, overrule precedent based on erroneous scientific and technological findings.[15]

State court judges are not up to this task.[16] Cheng goes so far as to argue that they are “remarkably ill-positioned” to make decisions regarding complex evidence.[17] Their incompetent analysis of this evidence results from two facts: first, as conveyed by Chief Justice Russell, they lack the background knowledge required to assess specialized information;[18] and, second, they lack the educational resources to make up for that lack of familiarity.[19] The resulting knowledge gap undermines the ability of the judges to “know[] the needs of the people [they] serve, and hav[e] the ability to serve those needs,” as described by Judge Bruce Bohlman of North Dakota.[20]

The most common current approach to equipping the court with the requisite degree of specialized knowledge—presentation by experts selected by the parties—is also insufficient. Perhaps unsurprisingly, judges generally do not trust such experts. A survey conducted by the Federal Judicial Center revealed that federal judges commonly feel that party-appointed experts “abandon objectivity and become advocates for the side that hired them.”[21] Such widespread and significant doubts about the impartiality of expert input mean that education on specialized knowledge must come from other sources.[22]

Increasing judicial competency then requires either providing likely candidates for the judiciary (i.e., law students) with more technical education or developing more resources for sitting judges to close knowledge gaps as they arise. Other scholars have examined the need for and possibility of including more technical courses in law school. Generally, they agree that law schools offer too few courses in the technical fields that underlie an increasing amount of evidence. However, even if this shortage were corrected, there is no guarantee that law school graduates who, possibly decades later, assume the bench will recall any of this content in a way that will assist them in adjudicating a highly complex and technical dispute. Additionally, any requirement for students to take such courses would be overinclusive—though empirical analysis is becoming more widespread across legal practice areas, the provision of educational resources on the topic should likely occur before law school or through other graduate education programs.

So if educating judges on technical issues before they assume the bench is inadequate at best and, more likely, improbable, then judicial education must come from other sources. Education from “other sources,” however, presents a slew of questions, including but not limited to: who will do the teaching? Who decides the content? How frequently will classes occur? Who will attend those classes? Existing answers to these questions have not resolved the “unfamiliarity” that gave rise to Cheng’s concerns about a dearth of judicial education.


Current public sources of judicial education, such as the Federal Judicial Center with respect to federal judges and the State of Missouri’s education programs for its state court judges, generally occur too infrequently to provide judges with adequate technical knowledge.[23] Moreover, these education programs typically do not cover topics such as scientific and technological evidence.[24] Finally, there’s the issue of capacity. Judges have limited time to attend these programs—let alone keep up to date on the content following the program.[25] And, the programs themselves often educate a small fraction of the judges within a state’s judicial system.[26] State court judges will lack the requisite familiarity with complex scientific and technical matters so long as new approaches to educating them and their clerks go unexplored.

Similar issues are posed by counting on judges to tutor themselves on complex matters. Cheng argues for “independent judicial research,” whereby judges do as any “responsible person” would when faced with an unfamiliar and specialized area—namely, “do research,” “read reference books,” and “search the Internet for relevant materials.”[27] This approach would certainly be timely and topical – i.e., the judge would start their research upon the instant case coming before their court, and they would attempt to refine that search only to the issue(s) at hand. Moreover, the researching judge would have the time to conduct a more deliberate and comprehensive inquiry than may be provided at a seminar or through a panel at a conference. Still, independent judicial research poses at least five significant and disqualifying drawbacks.

First, there’s judicial bias.[28] Second, there’s no source of validation as to whether the judge relied on quality sources with relevant and accurate information pertaining to the pertinent questions. One can imagine judges unintentionally relying on sources with biased or flawed information. In fact, parties may respond to such a trend by paying third parties to develop purportedly authoritative and neutral websites meant to solicit judicial attention and reliance. Third, the court again does not retain the benefits of this study. Fourth, independent judicial research may result in judges improperly looking outside the record.[29] And, fifth, professional researchers will do a better job than judges in pulling the requisite information in a timely and accurate manner. In other words, anything a judge can do, an officer of the SCSO could do better. Professional researchers can, for instance, better maintain an auditable record of which sources were considered and why. Additionally, unlike judges constrained by “limited resources for conducting specialized research,”[30] professional researchers would presumably have access to all relevant databases (as well as awareness of those databases).

State Court Science Offices should become the default means of educating the judiciary on scientific and technological matters as they come before the court. In a manner akin to the Congressional Research Service, SCSOs would produce detailed summaries on complex topics at the request of the parties or the judge. A professional researcher within the SCSO would take the lead on responding to such a request and ultimately produce a report that the parties, the public, and the court could consult. These researchers—as public employees within a “think tank”-esque agency—would be impartial, independent, and anonymous.

Unlike independent judicial research and educational content taught by other judges, the professional researchers that make up SCSOs would have the expertise necessary to ensure judges receive only accurate and necessary information. Moreover, reports by SCSOs would avoid other pitfalls associated with the current approach to judicial education—the knowledge and guidance contained in a report would not end with a judge’s tenure; doubts about the quality and impartiality of the research would be diminished relative to testimony from the parties’ expert witnesses; and, the reports could be updated in light of new scientific and technological advances and in response to specific cases.


Judges are not experts in every matter that comes before their respective courts. A failure to address this reality will inevitably result in decisions so devoid of scientific and technological rigor that the public will come to doubt the legitimacy of the courts. In fact, remarks such as those by Justice Kagan are already awakening the public to the limits of judicial knowledge. It follows that time is of the essence—state court systems should take the lead in adopting a new approach to judicial education. By creating State Court Science Offices, judges can receive timely, accurate, and impartial information.







* Kevin Frazier will join the Crump College of Law at St. Thomas University as an Assistant Professor starting this Fall. He currently is a clerk on the Montana Supreme Court.

[1] Transcript of Oral Argument at 45, Reynaldo Gonzalez, et al. v. Google, ___ U.S. ___ (2023) (No. 21-1333).

[2] See The National Courts and Sciences Institute, Judges’ Forecasts and Preferences for Managing Scientific Evidence in Complex Cases 2020-2030 at 8-9 (Oct. 17, 2020) [hereinafter, “NCSI Survey”]. Though this survey only received responses from 790 judges in 26 states and territories, the findings reinforce other studies that have revealed a shortage of judicial education on scientific and technological questions, as well as a desire by judges for additional education and resources on those topics.

[3] Francis C. Cady & Glenn E. Coe, Education of Judicial Personnel: Coals to Newcastle, 7 CONN. L. REV. 423, 424 (1975); see Robert G. Bone, Judging as Judgment: Tying Judicial Education to Adjudication Theory, 2015 J. Disp. Resol. 129, 130-32 (2015).

[4] Mary Russell, Toward a New Paradigm of Judicial Education, 2015 J. Disp. Resol. 79, 79 (2015).

[5] Id.

[6] Id.

[7] Id. at 84; but see David S. Caudill & Lewis H. LaRue, Why Judges Applying the Daubert Trilogy Need to Know about the Social, Institutional, and Rhetorical–and not Just the Methodological–Aspects of Science, 45 B.C. L. Rev. 1, 23 (2003) (summarizing challenges to the notion that “science” questions can be separated from “non-science” questions—thereby questioning the notion of science- and technology-specific education and judicial resources).

[8] See, e.g., Edward K. Cheng, Independent Judicial Research in the Daubert Age, 56 Duke L.J. 1263, 1265 (2007) (“Scientific and other forms of expert evidence are crucially important to modern litigation.”)

[9] Russell, supra note 4, at 84.

[10] Cheng, supra note 8, at 1300

[11] Id. at 1265.

[12] Editorial, Federal Judges v. Science, N.Y. Times, Dec. 27, 1986, at A22.

[13] Cheng, supra note 8, at 1265 (detailing that many state supreme courts have followed the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., which assigned that gatekeeping task to federal court judges); see Sophia Gatowski et al., Asking the Gatekeepers: A National Survey of Judges on Judging Expert Evidence in a Post-Daubert World, 25 Law & Hum. Behav. 433 (2001) (finding that state court judges see themselves as active participants in admitting or excluding scientific evidence regardless of whether their state follows Daubert or Frye).

[14] Cheng, supra note 8, at 1265.

[15] Russell, supra note 4, at 84 (providing changes in forensic sciences as an example of how advances in science and technology require that judges reconsider “what had been seen as accepted and trusted evidence.”)

[16] Andrew W. Jurs, Questions from the Bench and Independent Experts: A Study of the Practices of State Court Judges, 74 U. Pitt. L. Rev. 47, 55 n.47 (2012) (collecting from various articles quotes on how judges struggle to deal with complex evidence); see Caudill & LaRue, supra note 7, at 19 (opining that judges not only lack an understanding of admissibility frameworks related to complex evidence but also of the “social, institutional, and rhetorical” aspects of science and technology).

[17] Cheng, supra note 8, at 1266.

[18] Russell, supra note 4, at 79.

[19] See Cheng, supra note 8, at 1266.

[20] Bruce Bohlman, Transforming the Judicial System Through Education, in EDUCATION FOR DEVELOPMENT: THE VOICES OF PRACTITIONERS IN THE JUDICIARY, JERITT MONOGRAPH SIX 7 (Charles Claxton & Esther Ochsman eds., 1995).

[21] Molly Treadway Johnson et al., Fed. Judicial Ctr., Expert Testimony in Federal Civil Trials: A Preliminary Analysis 5 (2000).

[22] Such doubts also diminish the likelihood of a judge gleaning the necessary information from the experts simply by questioning them from the bench – a common practice used by federal judges to evaluate complex evidence. See Carol Krafka et al., Judge and Attorney Experiences, Practices, and Concerns Regarding Expert Testimony in Federal Civil Trials, 8 Psychol. Pub. Pol’y & L. 309, 326 (2002) (discussing the percentage of federal judges that use certain techniques when faced with complex evidence). Some state court judges also resort to questioning experts from the bench but such questioning cannot close any existing information gap if judges do not trust such experts from the outset. See Shirley A. Dobbin et al., Federal and State Trial Judges on the Proffer and Presentation of Expert Evidence, 28 Just. Sys. J. 1, 10, 77 (2007) (discussing the percentage of state judges that use certain techniques when faced with complex evidence).

[23] See Russell, supra note 4, at 79 (noting that budget shortfalls have limited Missouri’s ability to fund its own education programs as well as to reimburse judges for attending education programs hosted by other entities).

[24] See, e.g., id. at 79 (describing “developments in the areas of civil law, criminal law, family law, juvenile law, and probate,” as well as “sessions on skills and information,” as the usual curriculum for judicial education in Missouri).

[25] See, e.g., id. (“Our dockets are full and impose real limitations on the time judges can devote to educational opportunities.”)

[26] See, e.g., id. at 84 (disclosing that about 30 judges from Missouri attended one of the premier science and technology education seminars hosted by the Advanced Science and Technology Adjudication Resource (ASTAR) Project. Despite this low number, Chief Justice Russell asserted that such training made the state’s judiciary “one of the most effectively trained judiciaries in the country with respect to complex scientific and technological dockets.”)

[27] Cheng, supra note 8, at 1266.

[28] See Anne E. Mullins, Opportunity in the Age of Alternative Facts, 58 Washburn L. J. 577 (2019).

[29] See George D. Marlow, From Black Robes to White Lab Coats: The Ethical Implications of a Judge’s Sua Sponte, Ex Parte Acquisition of Social and Other Scientific Evidence During the Decision-Making Process, 72 St. John’s L. Rev. 291, 298 (1998).

[30] Cheng, supra note 8, at 1283.





Stop and “Use” the Roses

By: Payton Miles


A bouquet of flowers can be used in many different ways, from an “I’m sorry” to a centerpiece on the dining room table. But what is a flower’s main use? The United States Court of Appeals for the Federal Circuit recently held that displaying flowers is a use for an intended purpose – ornamentation.[1]

On February 2, 2023, the Federal Circuit decided in In re WinGen LLC that a patent for a petunia-like plant was invalid because the plant was on display at a trade show before the patent was initially filed.[2] The patent at issue is a utility patent directed to an ornamental Calibrachoa plant known as “Cherry Star.”[3] Through the inventor’s breeding process, the claimed plant has a single half-dominant gene that results in a star pattern displayed on the center of the petals.[4]

During a reissue of this utility patent, which is an application filed to correct an error in the patent that would otherwise make it “wholly or partly inoperative or invalid,”[5] the patent owner admitted that its “ornamental” plant was on display at a private Home Depot event before the patent was filed.[6] Under pre-AIA 35 U.S.C. § 102(b), a person may receive a patent unless the invention was “in public use . . . more than one year prior to the date of application for patent in the United States.”[7] When deciding what constitutes a “prior use” under pre-AIA law, courts will typically consider “whether the purported use: (1) was accessible to the public; or (2) was commercially exploited.” [8] In this case, the Federal Circuit was newly tasked with deciding what “accessible to the public” meant for an ornamental plant at a trade show.

In Motionless Keyboard Co. v. Microsoft Corp., the court found that the visual display of a keyboard did not amount to a “public use” because the keyboard was not connected to a device that would allow it to be used for its intended purpose while on display.[9] Here, the Federal Circuit seemed to differentiate the plant from the keyboard in that the sole purpose of the plant was to be on display.[10] Therefore, the display of the plant at the Home Depot trade show was a public use before the patent was filed, causing the patent to be invalid.[11]

It should be noted that WinGen tried to argue that the utility of the “Cherry Star” comes from the genetics of the plant, which were not publicly disclosed at the trade show.[12] However, the Federal Circuit declined to address that issue because WinGen did not present this argument to the USPTO Patent Trial and Appeal Board. If WinGen had focused on the genetics and how to grow the plant from the beginning of the reissue, then this decision may have looked more like Motionless Keyboard.[13] What if the intended purpose was growing this unique plant rather than merely displaying it? No information about growing the plant was revealed at the trade show, so the Federal Circuit may have had to do a further analysis into this “prior use.” Regardless, WinGen seems to have set itself up for failure by initially telling the Court that the patent claims covered an ornamental plant.[14]




[1] See In re WinGen LLC, No. 2021-2322, 2023 U.S. App. LEXIS 2628, at *8 (Fed. Cir. Feb. 2, 2023).

[2] Id. at *1.

[3] Id.

[4] Id. at *1–*2.

[5] 1402 Grounds for Filing, USPTO (last visited Feb. 22, 2023).

[6] Dennis Crouch, How Does One “Use” Flowers?, Patently-O (Feb. 6, 2023),

[7] 35 U.S.C. § 102(b).

[8] Invitrogen Corp. v. Biocrest Mfg., L.P., 424 F.3d 1374, 1380 (Fed. Cir. 2005).

[9] See Motionless Keyboard Co. v. Microsoft Corp., 486 F.3d 1376, 1385 (Fed. Cir. 2007).

[10] See In re WinGen, 2023 U.S. App. LEXIS 2628, at *8.

[11] Id.

[12] Id.

[13] Id. at *7.

[14] See Crouch, supra note 6.



Can law keep up with technology?

By Tundun Oladipo




Legislators are often playing catchup with technology, and unfortunately, it is common for them either not to understand the technology they are trying to regulate or to be unable to agree on the best way to regulate it.[1] Oftentimes, the time spent playing catchup is enough for the technology to evolve into something far beyond its origins, leaving legislators back at square one.[2]

Social media has long been a prime example of technology that legislation cannot keep up with. The most prominent example is Facebook’s many data privacy issues.[3] It is one thing to have these issues domestically, but they have increasingly become an international issue for the United States as well.[4]

Many were outraged by the discovery of China’s spy balloon over the United States earlier this year.[5] It was found floating over the state of Montana and was shot down not long after its discovery.[6] This intrusion on American privacy, coupled with the public outrage, led many states to take action against China through their legislatures. Many states and Congress turned their attention to TikTok and began banning the social media app from government devices because of its ties to China and the attendant security concerns.[7]

The Commonwealth of Virginia is one of 29 states and counting that have acted against TikTok.[8] Virginia’s ban began with Governor Glenn Youngkin.[9] He issued an executive order banning TikTok from state-owned devices, and the Virginia General Assembly made the ban permanent by passing a bill in both the House and Senate to prevent the use of TikTok on government devices or networks.[10]

Despite the bipartisan support for the ban, there was heavy debate over the bill’s effectiveness and, more specifically, the efficacy of a ban via legislation.[11] Senators in the chamber raised valid points about the fluidity of technology and the technology industry as a whole. The legislation, as written, prohibits the “use [of] any application…or access [of] any website developed by ByteDance Ltd. or Tencent Holdings Ltd.”[12] Many senators were concerned that this definition was too narrow: the same applications could be sold to another company, or ByteDance or Tencent Holdings could change their names and no longer be covered by the ban. Effectively, China (or any other technology company) could find its way around the ban, and legislators would have to return to rework the bill the next year.

This argument has some merit, as the Biden Administration recently had to add 59 Chinese companies to the list of companies Americans are no longer allowed to invest in.[13] This comes only two years after the initial list was released, because other, similar companies had since been created.[14] Flexibility has not always been the law’s strong suit, and many Virginia senators called for the ban to remain in the hands of the Governor and the executive branch specifically to allow for flexibility; once the Virginia session ends, the law will remain as is until the following year.[15] Other senators argued that a ban from the governor could only go so far and might end with his term.[16]

Despite Virginia being ranked as one of the most effective state legislatures, it still has to catch up to ever-changing technology.[17] The idea of law keeping pace with technology may be quickly fading given the speed at which technology changes, and reactionary measures may need to break out of current legislative structures to keep up.






[1] Brett Milano, Big Tech’s power growing at runaway speed, Harvard Gazette (Feb. 7, 2019),; Alex Sherman, U.S. lawmakers agree Big Tech has too much power, but what to do about it remains a mystery, CNBC (last updated Jul. 30, 2020, 02:29 PM EDT),

[2] Rae Hodge, 60% of people worry that tech is moving too fast, study finds, CNET (Feb. 25, 2020, 10:24 AM PT),

[3] Facebook data privacy scandal: A cheat sheet, TechRepublic (Jul. 30, 2020, 11:37 AM PDT),

[4] Supra note 1; James Stavridis and Frances Townsend, US tech at risk of falling behind, threatening our global interests, Armytimes (Oct. 28, 2020),

[5] Martha Raddatz, Luis Martinez, and Karson You, Large Chinese reconnaissance balloon spotted over the US, officials say, abcNews (Feb. 3, 2023, 09:23 AM),

[6] Id.

[7] Sarah Elbeshbishi, Congress weighing TikTok ban following Chinese spy balloon discovery, USA Today (last updated Feb. 15, 2023, 02:46 PM ET),; Andrew Adams, Updated: Where Is TikTok Banned? Tracking State by State, govTech (last Updated Feb. 21, 2023),

[8] Id.

[9] Ned Oliver, The Virginia General Assembly is taking on China, MSN (Feb. 14 , 2023) ,

[10] Id.

[11] February 7, 2023 – Regular Session – 10:00 am – Feb 7th, 2023, (Feb. 7, 2023)

[12] SB 1459, LIS, (last visited Feb. 23, 2023)

[13] Biden expands US investment ban on Chinese firms, BBC News (June 3, 2021),

[14] Id.

[15] supra note 7.

[16] Id.

[17] FiscalNote Releases 2021 “Most Effective States” Legislative Report, FiscalNote (Dec. 14, 2021),




BLOCKCHAIN: A Viable Tool For Shareholder Participation in Modern-day Corporate Governance

By Jessica Otiono




Corporate democracy requires that shareholders actively participate and exert influence in the governance affairs of their corporation.[1] The corporate democracy theory accords with the doctrine of shareholder primacy, “the view that directors’ fiduciary duties requires them to maximize shareholder wealth and precludes them from giving independent consideration to the interest of other constituents.”[2] In the corporate governance sense, the relationship between the board and shareholders is an agency relationship, with shareholders as principals enjoying certain rights.

Shareholder rights are essential to share ownership in a publicly traded company because they allow owners to influence the company’s direction and hold its management accountable. Over the years, shareholders have actively participated in corporate affairs through voting, shareholder proposals, and shareholder derivative litigation. The right to vote is so sacrosanct that courts have held that a board cannot impede it without a compelling justification.[3]

To exercise their right to vote, shareholders at publicly traded companies may vote in person at an annual general meeting (AGM), by mail, or through corporate proxies. Many shareholders, however, are not motivated to vote in person at AGMs because they believe their small stake in a corporation will have minimal impact on the outcome.[4] Shareholders who want to vote without attending in person have the option of proxy voting. Proxy voting, however, relies on multiple layers of intermediaries, including financial institutions and information service providers, which makes it inefficient, costly, and complex due to a lack of transparency.[5] Fortunately, blockchain technology could provide smart solutions to these classic inefficiencies in corporate governance.[6]

Blockchain technology is a distributed ledger system that allows for the creation of secure and presumably immutable records. In a blockchain system operating on a decentralized peer-to-peer network, information is stored on a public or private ledger that contains all executed transactions.[7] To date, most applications of blockchain technology have been to record transactions, including transfers of digital assets such as cryptocurrency.[8] Blockchain is a crucial tool today because it provides immediately shared and completely transparent information stored on an immutable ledger that only permissioned network members can access.[9] An immutable ledger means that nobody can tamper with a transaction after it has been recorded on the shared ledger.[10] Blockchain technology may address the challenges and complexities of shareholder voting and corporate monitoring structures in several ways.

Reducing Agency and Monitoring Costs

Blockchain may reduce the agency and monitoring costs associated with the board’s mandatory disclosures to shareholders under its fiduciary duties.[11] A corporation could give shareholders permissioned access to real-time records and increase verified, secure communication to reduce burdensome disclosure requirements, thus establishing trust between directors and shareholders.[12]

Share Ownership Transparency

Blockchain could provide a transparent overview of share ownership. All holders of record at a publicly traded company would be visible, and the transfer of shares from one owner to another could be observed in real time.[13] Managerial share ownership would become more transparent, and minority shareholders would immediately know the size of their holdings and have immediate access to their ownership rights.[14]
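
As a rough illustration of this real-time transparency, a cap table can be derived directly by replaying a ledger of transfers. The names and record format below are hypothetical.

```python
from collections import defaultdict

# Hypothetical ledger of share transfers; "from": None marks an issuance.
transfers = [
    {"from": None, "to": "alice", "shares": 600},
    {"from": None, "to": "bob", "shares": 400},
    {"from": "alice", "to": "carol", "shares": 150},
]

def holdings(transfers):
    """Replay every transfer to get the current book of record holders."""
    book = defaultdict(int)
    for t in transfers:
        if t["from"] is not None:
            book[t["from"]] -= t["shares"]
        book[t["to"]] += t["shares"]
    return dict(book)

book = holdings(transfers)
print(book)  # {'alice': 450, 'bob': 400, 'carol': 150}

# Ownership percentages are immediately derivable for every holder.
total = sum(book.values())
print({owner: shares / total for owner, shares in book.items()})
```
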

Shareholder Voting

Blockchain could be a viable substitute for archaic mail-in voting or corporate voting systems.[15] Shareholder votes could be recorded on a permissioned distributed ledger managed directly by the corporation or by the shareholders themselves.[16] Shareholders could also designate proxies to vote on their behalf by providing a private key to the proxy, which allows the shareholder to determine precisely how many of their shares were voted.[17] This process improves the speed, transparency, and accuracy of voting, resulting in less litigation when votes are mistakenly cast.[18]
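
A hedged sketch of the proxy-designation idea follows. An HMAC key stands in for the private key a shareholder would hand to a proxy; a real system would use asymmetric signatures (e.g., Ed25519), and all names and amounts are illustrative.

```python
import hashlib
import hmac
import secrets

# Toy proxy-voting register keyed by a shared secret standing in for a
# private key. Anyone holding the key can later audit the votes it signed.
class VoteRegister:
    def __init__(self):
        self.votes = []

    def _sign(self, key, holder, shares, choice):
        msg = f"{holder}:{shares}:{choice}".encode()
        return hmac.new(key, msg, hashlib.sha256).hexdigest()

    def cast(self, key, holder, shares, choice):
        self.votes.append({"holder": holder, "shares": shares, "choice": choice,
                           "sig": self._sign(key, holder, shares, choice)})

    def audit(self, key, holder):
        # The shareholder, holding the same key, can verify exactly which
        # of their shares were voted, and how.
        return [v for v in self.votes
                if v["holder"] == holder and hmac.compare_digest(
                    v["sig"], self._sign(key, v["holder"], v["shares"], v["choice"]))]

alice_key = secrets.token_bytes(32)            # key given to Alice's proxy
register = VoteRegister()
register.cast(alice_key, "alice", 450, "FOR")  # proxy votes on Alice's behalf
print(sum(v["shares"] for v in register.audit(alice_key, "alice")))  # 450
```
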

The use of blockchain technology to improve shareholder participation comes with certain risks, such as tampering with data before it is stored, insufficient or erroneous coding, hacking by cybercriminals, and threats arising from increased levels of transparency.[19] Adopting, operating, and maintaining blockchain would also raise additional governance, regulatory, and liability concerns.[20] It is up to corporations to determine how best to structure blockchain operation and maintenance to ensure transparency and trust.

In conclusion, despite its challenges, blockchain has the potential to significantly improve governance in publicly held companies by creating an efficient and transparent framework that ultimately improves shareholder participation.







[1] See Jason Gordon, What is a Shareholder Democracy?, The Bus. Professor (Sept. 25, 2021).

[2] See Ian B. Lee, Efficiency and Ethics in the Debate About Shareholder Primacy, 31 Del. J. Corp. L. 533 (2006).

[3] Blasius Indus., Inc. v. Atlas Corp., 564 A.2d 651, 661 (Del. Ch. 1988).

[4] See Alexandra Andhov, Introduction: Corporations on Blockchains: Opportunities and Challenges, 53 Cornell Int’l L. J. 1, 22 (2020).

[5] Mark van Rijmenam, How Blockchain Proxy-Voting Improves Shareholder Engagement, The Digit. Speaker (Oct. 3, 2019).

[6] See Anne Lafarre & Christopher Van der Elst, Blockchain Technology for Corporate Governance and Shareholder Activism, Harv. L. Sch. F. on Corp. Governance (Mar. 27, 2018).

[7] Anne Lafarre & Christopher Van der Elst, Blockchain Technology for Corporate Governance and Shareholder Activism, ECGI (Apr. 9, 2018).

[8] Stuart D. Levi et al., Emerging Discovery Issues in Blockchain Litigation, Legal Tech News (Apr. 3, 2019, 7:00 AM).

[9] See What is Blockchain Technology, IBM (last visited Feb. 11, 2022).

[10] See id.

[11] Soroya Ghebleh, On Governance: How will Blockchain Technology Change Organizational Governance, The Conf. Bd. (Mar. 21, 2018).

[12] Id.

[13] Andhov, supra note 4, at 18.

[14] See id.

[15] Id.

[16] Spencer J. Jord, Blockchain Plumbing: A Potential Solution for Shareholder Voting?, 21 U. Pa. J. Bus. L. 706, 731 (2019).

[17] Id.

[18] See id.

[19] See Federico Panisi et al., Blockchain and Public Companies: A Revolution in Share Ownership, Transparency and Proxy Voting, and Corporate Governance, 2 Stan. J. Blockchain L. & Pol’y 189, 216 (2019).

[20] Andhov, supra note 4, at 35.




What Role Should Tech Companies Have in Framing Public Discourse?

By Michael Alley




Last April, the European Union enacted the Digital Services Act, which forces tech companies such as Twitter, Facebook, YouTube, and other internet services to police misinformation and report how their algorithms are used to promote divisive content.[1] Many see this as a positive change: anyone who has spent time on social media or elsewhere in cyberspace has likely encountered violent or defamatory content that technology companies promote to drive engagement. The problem is that maximizing engagement can increase polarization by amplifying divisive content.[2]

Citizens and government officials in the United States are divided on how to regulate content and speech in cyberspace.[3] Many oppose limiting freedom of speech, while others feel that more action must be taken to prevent misinformation, violent content, and illegal activity.[4] In the resulting regulatory void, and fearing a hands-on approach from the government, tech companies have implemented their own mechanisms for controlling user content.[5] Many of these actions cast doubt on how capable these companies are of regulating themselves proportionately.[6]

Twitter has undergone many changes since Elon Musk bought the company, and Musk has been particularly polarizing for his support of free speech. But as we have seen, there are limits to how far he is willing to go.[7] In late 2022, Musk banned Kanye West for inciting violence against Jewish people, a move many in the Twitter community applauded.[8] The issue of proportionality arises when people ask why the Supreme Leader of Iran has not been banned for comments calling for genocide against the Jewish people.[9] Twitter has shown that it is willing to ban the accounts of state leaders, as it did with President Trump when he spread misinformation regarding the 2020 election results. Why has Twitter not taken similar action against Supreme Leader Khamenei of Iran? It can, and it should.

Even before Musk bought Twitter, there was concern about censorship surrounding the COVID-19 health crisis. When Musk purchased the company, he stated that he would no longer enforce its COVID-19 policy.[10] Some public health officials and epidemiologists criticized the move; epidemiologist Eric Feigl-Ding, for example, called it a threat to public health.[11] Others applauded it, among them Jay Bhattacharya, an epidemiologist and professor of medicine at Stanford University. Twitter had hidden Bhattacharya’s tweets during the pandemic and suppressed his message advocating for age-based analysis of COVID-19 risks and for public schools to remain open.[12] Defenders of Bhattacharya and of free speech argued that Twitter executives with minimal health education had censored a professor at one of the most prestigious universities in the country, a top expert in health policy for infectious diseases, for spreading “misinformation” about a new and evolving health crisis.[13]

Although citizens and government officials have very different ideas of what should be permitted online, enforcement must be uniform across these platforms: disproportionate enforcement erodes public trust. So too does the rise of hate speech, violence, defamation, and explicit content on these platforms. European authorities have decided how to regulate these platforms, and we are quickly approaching the day when the United States must make the same decision. What legal guardrails is the U.S. government willing to put on these platforms?






[1] Adam Satariano, E.U. Takes Aim at Social Media’s Harms With Landmark New Law, N.Y. Times (Apr. 22, 2022).

[2] Paul Barrett et al., How tech platforms fuel U.S. political polarization and what government can do about it, Brookings (Sept. 27, 2021).

[3] Marcin Rojszczak, Online content filtering in EU law – A coherent framework or jigsaw puzzle?, Elsevier (2022).

[4] Id.

[5] Id.

[6] Id.

[7] Rachel Lerman et al., Elon Musk says Kanye West suspended from Twitter after swastika Tweet, Wash. Post (Dec. 2, 2022, 1:19 PM).

[8] Id.

[9] Sean Burch, Twitter Rules Don’t Block Iran’s Ayatollah From Calling Israel ‘Cancerous Tumor,’ Jack Dorsey Says, Yahoo (Oct. 28, 2020).

[10] David Klepper, Twitter ends enforcement of COVID misinformation policy, AP News (Nov. 29, 2022).

[11] Id.

[12] Justin Hart, The Twitter Blacklisting of Jay Bhattacharya: The social-media platform revealed that many had been censored and shadow-banned, WSJ Opinion (Dec. 9, 2022, 6:23 PM).

[13] Id.




Will the Era of Influencers Eventually Fall?

By Eliza Mergenmeier


Generally, influencers are people who profit from promoting a company’s product or service through social media. In the modern era of advertising, the role of the traditional middleman, the ad agency, has diminished: a company can now reach out to influencers directly and ask them to advertise its product to their followers on social media platforms. Influencers come in all shapes and sizes, from a 22-year-old college student promoting a vitamin to Serena Williams promoting a shoe line. Currently, the infamous Kim Kardashian is under fire from the Securities and Exchange Commission (“SEC”) for an advertisement she posted on Instagram.[1]

Kardashian posted an ad on her Instagram in June 2021 promoting the cryptocurrency EthereumMax.[2] Over a year later, in October 2022, the SEC charged Kardashian with violating the “anti-touting” provision of the Securities Act of 1933.[3] Kardashian settled the matter for $1.26 million and agreed not to promote any cryptocurrency for three years.[4]

Section 17(b) of the Securities Act is referred to as the “anti-touting” provision because it places requirements on individuals who are paid to promote a security.[5] Essentially, a person promoting a security must disclose the consideration they receive for the promotion; Kardashian, for example, should have disclosed that she was being paid to promote EthereumMax.[6] Further, SEC rules require influencers to “disclose to the public when and how much they are paid to promote investing in securities.”[7] The Securities Act, however, regulates only the promotion of securities, so the Federal Trade Commission (FTC) is the governing body for other types of products and services promoted by influencers.[8]

Section 5 of the FTC Act provides a broader outline of the rules for endorsements and testimonials in advertising.[9] Title 16 of the Code of Federal Regulations (CFR) offers a more comprehensive guide to understanding Section 5.[10] 16 CFR § 255.5 requires that disclosures be made when there is a connection between the endorser and the seller of the advertised product.[11] This is why you may notice the hashtag #ad on sponsored Instagram stories. The FTC also requires that when an influencer claims to have had a certain experience with a product or service, that experience be representative of what consumers can expect.[12]

The body of law governing influencers and the content they create has grown over the last two decades. In addition to the SEC and FTC, the Food and Drug Administration (FDA) has implemented regulations that prohibit ads from overstating a drug’s benefits.[13] Kardashian ran afoul of the FDA’s rules as well: she promoted a drug named Diclegis, which claimed to treat nausea and vomiting during pregnancy, without disclosing the drug’s risks.[14] In that case, though, the FDA contacted only the company that makes Diclegis and required remedial steps, which resulted in Kardashian reposting the advertisement, this time with the drug’s risk disclosures.[15]

The laws governing advertisements and endorsements are typically set up to go after companies rather than individual influencers.[16] In recent years, however, these agencies have broadened their conception of who can be blamed for misleading ads, and that conception will presumably continue to broaden as influencers reach into nearly every market and affect a large percentage of consumers.




[1] Shalia M. Sakona, Kim Kardashian Sanctioned by SEC for Unlawful Touting of Cryptocurrency, Bilzin Sumberg (Oct. 5, 2022).

[2] Id.

[3] Id.; SEC Charges Kim Kardashian for Unlawfully Touting Crypto Security, U.S. Sec. & Exch. Comm’n (Oct. 3, 2022).

[4] SEC Charges Kim Kardashian for Unlawfully Touting Crypto Security, U.S. Sec. & Exch. Comm’n (Oct. 3, 2022).

[5] Supra note 1.

[6] Securities Act of 1933, Pub. L. No. 73-22, 48 Stat. 74.

[7] Supra note 4.

[8] Gregory L. Cohen & Bryan Reece Clark, Tech Transactions & Data Privacy 2022 Report: #Compliance: Legal Pitfalls in Social Media Influencer Marketing, National Law Review (Feb. 11, 2022).

[9] Id.

[10] Id.

[11] 16 C.F.R. § 255.5.

[12] Supra note 8.

[13] Id.

[14] Id.

[15] Id.

[16] Id.



Consumer Harms — A Privacy Policy’s Missing Ingredient

By Chris Jones*


I. Introduction

As the world moves increasingly online, consumers are forced to enter personal information into websites to apply for jobs, attend school, or purchase tickets to an event. Consumers’ personal information is often sold and shared as a commodity among tech businesses, advertising agencies, and data brokers. According to the Federal Trade Commission (“FTC”), “most consumers . . . know little about the data brokers who collect and trade consumer data or build consumer profiles that can expose intimate details about their lives and . . . expose unsuspecting people to future harm.”[1] As a result, the risk of individual privacy harms continues to increase. Privacy injuries may include reputational, discriminatory, physical, emotional, economic, and relationship harms.[2]

Absent a comprehensive federal privacy law, the majority of U.S. businesses operate under the assumption that the fine print of a legally complex privacy policy is sufficient to establish good faith. Unfortunately, standard privacy policies do nothing to advise consumers of the harms they may experience when utilizing a website, application, or device. Without a basic understanding of why they should care about their personal information being sold and shared, consumers lack the knowledge necessary to make an informed decision.

This article argues that the potential consumer harms resulting from the use of a product or service should be spelled out and disclosed. The FTC should promulgate a rule requiring the disclosure of harms in a standardized, easy-to-understand privacy policy that is consistent throughout the industry. By educating the general public up front, informed consumers can determine the true level of risk they are willing to take, instead of blindly following flashy advertising and exciting trends.

II. Background
A. Overview of Privacy Policies

Privacy policies are typically lengthy notices filled with technical terms and legal language[3] that explain what an entity does with a consumer’s personal information, how the information is shared with third parties, and whether the consumer has options regarding this sharing.[4] Many privacy policies are difficult to understand and contain language designed to mislead consumers into believing the business protects their information.[5] Moreover, privacy policies typically provide no warning to consumers of potential harms they may encounter when utilizing the product or service.[6]

Many businesses rely on the fine print in a legally complex privacy policy to address consumer privacy issues.[7] According to Jen King, the director of consumer privacy at the Center for Internet and Society, privacy policies are “documents created by lawyers, for lawyers. They were never created as a consumer tool.”[8]

In 2012, the average length of an online privacy policy was 2,415 words.[9] It would take an average internet user seventy-six working days—consisting of eight hours per day—to read the privacy policies of every website they encountered within a year.[10] Over the past decade, Americans’ use of the internet has exploded; as a result, businesses have greatly expanded their privacy policies.[11] For example, Facebook’s privacy policy takes a reported eighteen minutes to read.[12] Thus, it is not reasonable to expect that the average consumer has the time, or the sophistication, to read and understand every lengthy and substantially different privacy policy they may encounter.
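
As a back-of-envelope check of these figures: the 2,415-word average and the seventy-six eight-hour working days come from the study cited above, while the 250-words-per-minute reading speed is an assumed value for illustration.

```python
# Back-of-envelope arithmetic on the cited figures. The reading speed is an
# assumption; the word count and working-day totals come from the study above.
AVG_POLICY_WORDS = 2_415
READING_WPM = 250            # assumed adult reading speed
WORK_DAY_MINUTES = 8 * 60

minutes_per_policy = AVG_POLICY_WORDS / READING_WPM
print(round(minutes_per_policy, 1))   # 9.7 -- roughly ten minutes per policy

# How many policies would fill seventy-six working days at that pace?
policies_per_year = (76 * WORK_DAY_MINUTES) / minutes_per_policy
print(round(policies_per_year))       # 3776
```
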

B. Legal Foundation

The FTC is in charge of preventing unfair or deceptive acts or practices that affect commerce in the privacy arena.[13] Pursuant to Section 18 of the FTC Act and the Commission’s rules of practice, the FTC has the authority to “promulgate, modify, and repeal trade regulation rules that define with specificity acts or practices that are unfair or deceptive in or affecting commerce within the meaning of Section 5(a)(1).”[14] The FTC Act identifies unfair or deceptive acts as those that cause or are likely to cause significant injury to consumers.[15]

The FTC currently sanctions businesses for unfair or deceptive practices while enforcing adherence to a business’ privacy policy.[16] As the FTC does not provide a standardized template for privacy policies, businesses are left to draft their own documents, without clear guidelines.

III. Potential Harms from Data Sharing

As technology has revolutionized American lives, individuals’ personal information is entered into online platforms on a daily basis to schedule medical appointments, apply for college, or communicate with businesses. Moreover, the average smartphone user in the U.S. utilizes approximately forty-six apps per month.[17] As a result, the risk of individual privacy harms continues to increase.[18] Privacy harms encompass a broad range of scenarios, from discrimination to emotional impairment to economic loss.[19]

Privacy injuries associated with the unauthorized use of an individual’s data may include reputational,[20] discriminatory,[21] physical,[22] autonomy,[23] economic,[24] emotional,[25] and relationship harms.[26] For example, the disclosure of personal health data may affect a consumer’s ability to obtain employment, financial products, insurance, housing, or admission to a nursing home; it may cause social stigmatization based on race, sexual preference, disease, addiction, mental health condition, religion, or political position; and it may subject the consumer to potentially dangerous situations through blackmail, bullying, stalking, ransomware, or the revelation of the secret locations of domestic abuse victims.[27] Disclosures of mental health conditions, along with certain diagnoses, such as sexually transmitted disease or alcohol or drug use, carry additional social stigma.[28]

Courts have moved beyond rigid injury requirements to include more intimate personal autonomy harms, such as “(1) coercion – the impairment on people’s freedom to act or choose; (2) manipulation – the undue influence over people’s behavior or decision-making; (3) failure to inform – the failure to provide people with sufficient information to make decisions; (4) thwarted expectations – doing activities that undermine people’s choices; (5) lack of control – the inability to make meaningful choices about one’s data or prevent the potential future misuse of it; [and] (6) chilling effects – inhibiting people from engaging in lawful activities.”[29]

Further stigmatization can occur when online platforms make their way into consumers’ real lives. For example, Facebook has developed a relationship with law enforcement, searching for individuals whose online activities may suggest suicidal tendencies.[30] Facebook scans users’ input—including private messages—for content that may relate to “safety and health.”[31] Facebook then reports to law enforcement individuals it considers potentially suicidal.[32] Thus, by sending allegedly private messages on Facebook, a user runs the risk of the police showing up at their door in real life.[33]

This can be particularly troubling for users because police documentation of a potentially suicidal visit—including officers’ body-cam footage of people, cars, and homes—becomes public record.[34] The records may be shared with any interested parties, including data brokers.[35] This public documentation of a consumer’s perceived mental instability can have devastating consequences that affect the rest of their life.[36] Potential harms include discrimination in employment or housing, public doxing, reputational damage, relationship issues, and mental health stigma. Imagine having to disclose to potential employers that you were deemed a suicide risk by local law enforcement.

Still, the majority of consumers are not adequately informed of potential harms and have little to no knowledge of the lifelong consequences that may result from utilizing these products and services.[37]

IV. Privacy Policies Should Disclose Potential Harms

At the time of publication, there is still no comprehensive U.S. privacy law at the federal level, let alone any statute requiring that privacy policies disclose potential consumer harms. While it is standard practice in some U.S. industries to warn consumers of potential harms, the technological world lags far behind. In California, for example, amusement parks, car manufacturers, and even holiday-light manufacturers are required to disclose “significant exposures to chemicals that cause cancer, birth defects or other reproductive harm.”[38]

Countries worldwide and several U.S. states have begun to pass privacy laws to minimize commercial surveillance and promote data security.[39] “Persistent and targeted surveillance collapses individual moments of interaction, spread out over time and mitigated through human forgetfulness, into one long story of an individual’s life.”[40] This type of surveillance can lead to inferences about highly sensitive areas of a person’s life, such as religion, sexual activity, and health.[41] Therefore, U.S. consumers need to be warned of surveillance profiling and its attendant harms before they agree to utilize a product or service.

“Studies have shown that most people do not generally understand the market for consumer data that operates beyond their monitors and displays.”[42] A Pew Research Center study found that “78% of US adults say they understand very little or nothing about what the government does with the data it collects, and 59% say the same about the data companies collect.”[43] Thus, if the majority of consumers admit they do not understand what is done with their personal data, a privacy policy filled with legal terms and jargon does nothing to serve as a warning.

Critics may argue that because privacy harms often do not occur until some future time, if at all, it is unnecessary to warn consumers about the potential risk of future harms. But the current technological ecosystem is so complex, and consists of so many entities, that it is difficult—if not impossible—for the average consumer to pinpoint where their personal information was disclosed when they experience higher insurance rates, employment discrimination, or targeted advertising based on their most intimate secrets. Thus, it is critical to notify consumers of potential harms at the initial point of data collection.

V. Solution

Absent a comprehensive federal privacy law, this article proposes that the FTC promulgate a rule requiring the disclosure of consumer harms in a standardized, easy-to-understand privacy policy that is consistent throughout the industry.

The Gramm-Leach-Bliley Act (“GLBA”) provides financial institutions with a standardized template listing specific categories of information that must be disclosed.[44] This template is similar to the nutrition-label approach to privacy instituted by Apple and Google in their app stores.[45] For example, Apple created the labels “to help users learn at a glance what data will be collected by an app, whether that data is linked to them or used to track them, and the purposes for which that data may be used.”[46] The nutrition labels are a great tool to identify the information being collected; however, the labels fail to warn consumers of potential harms resulting from the collection.

By utilizing a template similar to the GLBA requirement, consumers can easily determine whether the business sells or shares their personal information with third parties, who those third parties are, what potential harms can result from that sharing, and what choices the consumer has—if any—to opt out. Thus, consumers can quickly identify the key components needed to determine whether they want to do business with the entity. While the U.S. has a long way to go in protecting users’ privacy, this disclosure of harms is one step that can educate and empower consumers to make informed decisions.
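
One way to picture the standardized template this article proposes is as a fixed set of fields, in the spirit of the GLBA form and app-store nutrition labels. The field names below are purely illustrative, not drawn from any statute or regulation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of a standardized, GLBA-style privacy label.
# Field names are hypothetical, not drawn from any statute or regulation.
@dataclass
class PrivacyLabel:
    data_collected: list          # e.g., ["email", "location history"]
    sold_or_shared: bool
    third_parties: list           # who receives the data
    potential_harms: list         # the disclosure this article argues is missing
    opt_out_available: bool
    opt_out_url: Optional[str] = None

label = PrivacyLabel(
    data_collected=["email", "location history"],
    sold_or_shared=True,
    third_parties=["advertising networks", "data brokers"],
    potential_harms=["targeted advertising", "insurance-rate profiling"],
    opt_out_available=True,
    opt_out_url="https://example.com/opt-out",   # hypothetical
)
print(label.sold_or_shared and bool(label.potential_harms))  # True
```

Because every business would fill in the same fields, a consumer could compare two services’ labels at a glance, including the harms column that current nutrition-label approaches omit.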

VI. Conclusion

Action should be taken at the federal level to clearly notify consumers of potential privacy harms resulting from the sharing of their personal information. By providing sweeping protection in privacy policies for all U.S. residents, the FTC can help to balance the benefits of technology with the education necessary for consumers to take back control of their own private lives.



*J.D., Gonzaga University School of Law. Acknowledgments and gratitude to Professor Drew Simshaw for his
invaluable insights and continuing support.

[1] Trade Regulation Rule on Commercial Surveillance and Data Security, Fed. Trade Comm’n, Dec. 8, 2021, 1, 5 [hereinafter FTC Trade Regulation Rule].

[2] See Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. L. Fac. Publications & Other Works 1, 19, 21-23, 25, 28 (last visited Feb. 7, 2023).

[3] See Lori Andrews, A New Privacy Paradigm in the Age of Apps, 53 Wake Forest L. Rev. 421, 435 (2018).

[4] See Andrews, supra note 3, at 434-36.

[5] See Deceived by Design, Forbrukerradet 1, 22 (June 27, 2018) (discussing how Big Tech utilizes positive and negative wording to “nudge users toward making certain choices”); see also Kevin Litman-Navarro, We Read 150 Privacy Policies. They Were an Incomprehensible Disaster, N.Y. Times (June 12, 2019).

[6] See Forbrukerradet, supra note 5.

[7] See Andrews, supra note 3, at 435.

[8] See Litman-Navarro, supra note 5.

[9] Alexis C. Madrigal, Reading the Privacy Policies You Encounter in a Year Would Take 76 Work Days, Atl. Monthly Grp., LLC (Mar. 1, 2012).

[10] Id.

[11] See Litman-Navarro, supra note 5.

[12] See Litman-Navarro, supra note 5.

[13] See 15 U.S.C. §§ 41-58, as amended [hereinafter 15 U.S.C.]; See also Michael Goodyear, The Dark Side of Videoconferencing: The Privacy Tribulations of Zoom and the Fragmented State of U.S. Data Privacy Law, 10 Hous. L. Rev. 76, 79 (2020).

[14] See FTC Trade Regulation Rule, supra note 1, at 12.

[15] See 15 U.S.C., supra note 13; See also Scott Stiefel, The Chatbot Will See You Now: Protecting Mental Healthware Confidentiality in Software Applications, 20 Colum. Sci. & Tech. L. Rev. 333, 386 (2019).

[16] See 15 U.S.C., supra note 13; See also Nicole Angelica, Alexa’s Artificial Intelligence Paves the Way for Big Tech’s Entrance into the Health Care Industry – The Benefits to Efficiency and Support if the Patent-Centric System Outweigh the Impact on Privacy, 21 N.C. J. L. & Tech. 59, 77-78 (2020).

[17] See Stephanie Chan, U.S. Consumers Used an Average of 46 Apps Each Month in the First Half of 2021, Sensor Tower (Aug. 2021).

Robot Lawyers: Sooner Than You Think

By Haley Magel



There’s been a lot of buzz recently about ChatGPT and robots taking the role of lawyers[1], and many probably think it’s satire or an exaggeration.  While robot lawyers might not be taking over the legal industry right now[2], that day might come a lot sooner than anyone expects.

For those who don’t know what ChatGPT is, or have only a scant understanding of it, it is a chatbot that uses “natural language processing to understand and respond to human communication.”[3]  Chatbots are either retrieval-based or generative; ChatGPT is generative, meaning that it takes a user’s input pattern and creates the output itself with the help of an underlying deep-learning model.[4]  In less technical terms, you can ask ChatGPT a question and it will answer with an output that it creates.  In the legal context, one could ask ChatGPT to explain what constitutes a well-founded fear of persecution in an asylum case, and the chatbot can produce a relatively accurate response.[5]  One could also ask ChatGPT to explain the concept of personal jurisdiction, develop a list of deposition questions for the plaintiff in a motor vehicle accident, or create a contract for the sale of real estate in Massachusetts, and receive competent answers.[6]
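The retrieval/generative distinction can be sketched in a few lines of Python.  This is an illustrative toy, not ChatGPT’s actual architecture: a retrieval bot can only return answers it already stores, while a generative bot (here a trivial stand-in for a deep-learning model) composes a new response from the user’s input.

```python
# Toy contrast between retrieval and generative chatbots.
# The dictionary and both functions are hypothetical illustrations.

CANNED_ANSWERS = {
    "personal jurisdiction": "A court's power over the parties to a case.",
}

def retrieval_bot(question: str) -> str:
    """Look the question up; it can only return a stored answer."""
    for topic, answer in CANNED_ANSWERS.items():
        if topic in question.lower():
            return answer
    return "Sorry, I have no stored answer for that."

def generative_bot(question: str) -> str:
    """Stand-in for a generative model: it builds the reply from the
    input pattern rather than retrieving a canned answer verbatim."""
    topic = question.rstrip("?").lower().removeprefix("what is ")
    return f"In short, {topic} refers to a legal concept; here is an explanation..."

print(retrieval_bot("What is personal jurisdiction?"))
print(generative_bot("What is a well-founded fear of persecution?"))
```

A real generative model replaces the one-line template with billions of learned parameters, but the interface, question in, freshly composed answer out, is the same.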

There are some obvious drawbacks to ChatGPT as it currently operates, such as low interpretability, meaning that ChatGPT does not explain the methods it uses to arrive at its answers.[7]  ChatGPT also does not include footnotes or specific references, so it isn’t easy to fact-check its answers and confirm that the correct legal authority was applied accurately.[8]  Another factor to consider is that all artificial intelligence (“AI”) is trained with human input, and there are numerous examples of bias being introduced into algorithms.[9]  Conclusions drawn from AI may therefore carry implicit bias that many don’t suspect is in the output.[10]

The ultimate test of legal competence for many is the bar exam, so researchers put ChatGPT to work answering questions from the multistate multiple-choice section of the bar exam, known as the Multistate Bar Examination (“MBE”).[11]  ChatGPT’s answers were compared to those of human test-takers: on average, bar takers answer 68% of questions correctly, while ChatGPT answered 50% correctly.[12]  ChatGPT significantly exceeds the 25% baseline rate of random guessing, but still trails human test-takers by 18 percentage points.[13]  Researchers believe that a chatbot may be able to pass the bar exam within the next 18 months as updated versions of ChatGPT are released.[14]
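The reported MBE figures can be restated as a quick back-of-the-envelope comparison.  With four answer choices per question, random guessing yields about 25%, so ChatGPT’s 50% is a 25-point lift over chance while still leaving an 18-point gap to the human average of 68%:

```python
# The MBE figures reported in the study, compared against the
# 25% random-guessing baseline (four answer choices per question).
random_baseline = 0.25
chatgpt_rate = 0.50
human_rate = 0.68

lift_over_random = chatgpt_rate - random_baseline   # 25 percentage points
gap_to_humans = human_rate - chatgpt_rate           # 18 percentage points

print(f"ChatGPT beats random guessing by {lift_over_random:.0%}")
print(f"but trails human test-takers by {gap_to_humans:.0%}")
```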

Recently, there was an attempt to use an AI chatbot in a courtroom setting: DoNotPay tried to use its chatbot to help represent a defendant in a speeding case.[15]  The plan was to have the chatbot run on a smartphone, listen to what was being said in court, and provide instructions to the defendant via an earpiece.[16]  After state bar association prosecutors threatened six months of jail time if a chatbot were used in court, DoNotPay abandoned its robot-lawyer stunt.[17]
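The planned setup amounts to a three-stage pipeline: speech-to-text on the courtroom audio, the chatbot suggesting a response, and delivery to the earpiece.  The sketch below uses hypothetical stubs for each stage; it is not DoNotPay’s actual code, only the shape of the data flow they described.

```python
# Hedged sketch of the described courtroom pipeline.
# All three functions are hypothetical stand-ins, not real services.

def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text service listening in court.
    return audio.decode("utf-8")

def chatbot_reply(transcript: str) -> str:
    # Stand-in for the AI model suggesting what the defendant should say.
    return f"Respond to: '{transcript}' by politely asking for clarification."

def send_to_earpiece(text: str) -> str:
    # Stand-in for text-to-speech delivered over the earpiece.
    return f"[earpiece] {text}"

courtroom_audio = b"Were you aware you were exceeding the speed limit?"
instruction = send_to_earpiece(chatbot_reply(transcribe(courtroom_audio)))
print(instruction)
```

Even in this toy form, the latency and reliability demands of each stage hint at why running it live in a courtroom was an ambitious stunt.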

Thankfully for every lawyer who wants to keep their job, it doesn’t seem like ChatGPT and other AI chatbots are ready to take over the legal industry quite yet.  In the near future, chatbots will most likely be used in conjunction with human lawyers to handle simple drafting tasks and other small, routine legal needs.





[1] Ken Crutchfield, ChatGPT—Are the Robots Finally Here?, ABOVE THE L. (Jan. 10, 2023, 1:47 PM),

[2] Amanda Yeo, DoNotPay’s AI Lawyer Stunt Cancelled After Multiple State Bar Associations Object, MASHABLE (Jan. 26, 2023),

[3] Thomas Bacas, Analysis: Will ChatGPT Bring AI to Law Firms? Not Anytime Soon, BLOOMBERG L. (Dec. 28, 2022, 10:22 AM),

[4] Id.

[5] Jenna Greene, Will ChatGPT Make Lawyers Obsolete? (Hint: Be Afraid), THOMSON REUTERS (Dec. 9, 2022, 2:33 PM),

[6] Id.

[7] Bacas, supra note 3.

[8] Crutchfield, supra note 1.

[9] Id.

[10] Id.

[11] Michael James Bommarito and Daniel Martin Katz, GPT Takes the Bar Exam, at 2,

[12] Id. at 5.

[13] Id.

[14] Id. at 6.

[15] Yeo, supra note 2.

[16] Id.

[17] Id.
