The first exclusively online law review.


Telehealth’s COVID-19 Lack of Privacy—Where Do We Go From Here?

By Chris Jones


I. Introduction

COVID-19 sparked a “tsunami of growth” in the United States’ telehealth industry.[1] The Office for Civil Rights (“OCR”) Notification of Enforcement Discretion for Telehealth Remote Communications During the COVID-19 Nationwide Public Health Emergency (“Notification”) allowed medical providers to utilize telehealth platforms that fell short of the Health Insurance Portability and Accountability Act of 1996’s (“HIPAA”) privacy requirements.[2]

Technology companies and governments have “long shown themselves to be wolves in sheep’s clothing when it comes to privacy: promising privacy while conducting widespread and illicit surveillance.”[3] A recent study determined that over 70% of medical applications shared users’ sensitive information with third-party data aggregators, without the user’s knowledge or consent.[4]

Data aggregators can use this information to potentially damage the individual financially, physically, or psychologically.[5] This data is often marketed and sold to a variety of commercial third parties including employers, advertisers, and insurers.[6] In one case, a data aggregator sold the digital health-related data of approximately 3 million individuals to an insurance company.[7]

By allowing medical providers to utilize inferior privacy measures, the risk of individual privacy harm continues to increase. Privacy injuries associated with the unauthorized use of an individual’s data may include reputational, discrimination, physical, psychological, economic, and relationship harms.[8] Thus, telehealth platforms should be required to obtain a Telehealth Privacy Certification (“Certification”) of compliance prior to public market release. This Certification would strengthen security and ensure a patient’s privacy rights are protected moving forward—regardless of what the future holds.

II. Telehealth During COVID-19

During the COVID-19 pandemic, private telehealth companies and health care systems reported increases in telehealth use ranging from 100% to 4300%.[9] According to the Centers for Disease Control and Prevention (“CDC”), 43% of medical providers had telehealth capabilities before the pandemic.[10] After the pandemic began, 95% offered telehealth.[11]

Telehealth is defined as the use of electronic information and communications technologies to deliver clinical and nonclinical health care services.[12] Telehealth communications generally fall into three types: (1) synchronous, which involves direct communication between the provider and the patient using phone, video, or data transmission such as texting; (2) asynchronous, which involves the storage of information for the provider or patient with the expectation that the other party will review and respond to it at some point in the future; and (3) remote patient monitoring, which mixes synchronous and asynchronous telehealth to allow the provider to monitor the patient over time.[13] Prior to the pandemic, there were many barriers to utilizing telehealth, such as provider licensing, insurance approval, lack of equipment, and the overall costs of complying with HIPAA.[14] For example, extremely expensive devices were required for both the patient and physician, with costs starting at $799.[15]

As telehealth appointments have become common events, patients confide in their providers as they would if meeting in person. Patients can be particularly susceptible to privacy harm when being recorded on video. Typical telehealth sessions often contain a patient’s personal disclosures of “objective and highly sensitive statements of fact” that “may be inherently more revelatory” than the provider’s ordinary notes based on subjective impressions.[16] Even when complying with HIPAA, providers are free to retain archived, stored, or transmitted data from telehealth sessions.[17]

Mental health therapist Tiffany Chhuom worries about the impact of temporarily lifting privacy protections for patient data included in video or text discussions with their doctors.[18] “The ways in which these clients who are so vulnerable on video could be exploited — I don’t have the words to explain how much that concerns me,” said Chhuom.[19]

III. Legal Background

While it is generally understood that medical providers should use the highest level of standards to assure peace of mind for their patients,[20] HIPAA regulates the use of technology to transmit certain medical data at the federal level.[21] HIPAA requires covered entities to follow data privacy, data security, and data breach notification requirements when handling applicable medical information.[22]

The Health Information Technology for Economic and Clinical Health Act (“HITECH Act”) amended HIPAA in 2009 to further define the responsibilities and roles of healthcare providers and business associates.[23] The HITECH Act requires that covered entities utilize a Business Associate Agreement (“BAA”)[24] and demands that associates comply with the appropriate sections of HIPAA’s Privacy and Security Rules.[25] Absent exception, HIPAA’s Privacy Rule requires patient consent in order for a covered entity to share Protected Health Information (“PHI”) with third parties.[26] HIPAA’s Security Rule requires that covered entities utilize “administrative, physical, and technical safeguards to prevent threats or hazards to the security of electronic PHI.”[27]

Herein lies the broader issue—under the current regulation, the use of specific telehealth equipment or technology cannot ensure an entity is HIPAA-compliant.[28] Thus, the burden to utilize HIPAA-compliant telehealth platforms falls on the covered entity.[29] If a covered entity utilizes telehealth involving PHI, the entity must comply with the same HIPAA requirements that it would if the patient visited the office.[30] This requires the entity to have technical knowledge in order to conduct a thorough assessment of any potential risk or vulnerability that may affect the confidentiality or integrity of the patient’s PHI.[31] Pandemic or not, this level of technical compliance can be difficult for medical specialists to ascertain.

IV. OCR’s Notification of Enforcement Discretion

During the COVID-19 pandemic, patients across the country were asked to accept a trade-off between access to remote health care and protection of their sensitive health data.[32] In March 2020, the United States government further relaxed its already anemic privacy standards by issuing the Notification, which allowed medical providers to utilize telehealth platforms that fell short of HIPAA privacy requirements.[33] This Notification declared that the OCR would not impose penalties for noncompliance with HIPAA regulatory requirements regarding telehealth during the pandemic, as long as the activities were carried out in good faith—even if the appointment was not related to COVID-19.[34] Without stringent privacy and security features, a telehealth appointment can have devastating effects on a patient’s employment status, insurance ratings, and personal reputation.[35]

Any non-public-facing remote communication product could be used for medical appointments, regardless of its privacy and security features.[36] For example, this Notification allowed a provider to examine a patient using a videoconferencing application on the patient’s phone.[37] Additionally, it merely suggested that providers notify their patients of potential third-party privacy risks and only recommended that they utilize all available encryption and privacy modes.[38] This waiver applied to HIPAA’s Privacy, Security, and Breach Notification Rules.[39]

The OCR provided a list of video communication platforms that claimed to be HIPAA-compliant and were willing to enter into a BAA,[40] but it did not specifically endorse them.[41] This Notification recommended that providers concerned about additional privacy protections for their patients continue to utilize services through HIPAA-compliant vendors.[42]

For example, this Notification allowed medical providers to utilize the consumer version of Zoom for confidential telehealth visits.[43] Zoom has experienced a 10-fold increase in usage since the COVID-19 pandemic began, including increased use in healthcare.[44] According to a study by Sermo, Zoom was the most common telehealth platform in use during the COVID-19 pandemic.[45]

Zoom has come under fire as a myriad of articles and lawsuits exposed its privacy flaws.[46] Zoom users encountered “Zoom Bombing,” which occurs when someone joins a meeting to which they were not invited and “drops gross or disturbing images.”[47] Confidential secrets are easily revealed when random people join private videoconferences.[48] For example, online classes at UCLA were disrupted by a Zoom Bomber shouting slurs and insulting individuals.[49] Zoom Bombers have even posted pornographic content during video chats, including AA meetings.[50]

As this Notification allowed for any non-public teleconferencing platform to be used,[51] the privacy issues exposed here are not unique to Zoom—Zoom likely only scratches the surface.[52]

This Notification of Enforcement Discretion was inadequate as it essentially stripped consumers of their right to privacy by allowing providers to utilize random videoconferencing applications with no guarantees of confidentiality or security. The importance of privacy and security concerns surrounding telehealth cannot be overlooked as medical data necessitates a higher standard of security due to its personal and sensitive nature.[53] The effects of this Notification may be felt for years or decades to come if the medical data makes its way into the hands of data brokers,[54] unscrupulous actors, or onto the Dark Web.

V. Two Years In – Where Do We Go From Here?

For the past two years, medical providers and patients have become accustomed to utilizing telehealth without the safety of regulatory oversight. As the state of emergency declarations are lifted, this compliance waiver will likely end. Covered entities will no longer be allowed to utilize whatever nonpublic telehealth modality patients have chosen, and will be required to resume utilizing only telehealth platforms that comply with HIPAA.

In October 2021, the American Medical Association (“AMA”) called on the OCR to extend this Notification for yet another year, in order to provide its members with more time to adapt to HIPAA-compliant technologies.[55] The AMA has requested that the OCR assist providers by establishing “guidance documents that specifically speak to telemedicine platforms and what HIPAA requires for use of such technology.”[56] As the AMA explained, “many clinicians are using telemedicine for the first time and may not be well-versed in the unique risks and vulnerabilities associated with the new tools they are using.”[57]

This chain of events is taking Americans’ privacy concerns even further off track. As the AMA confirmed, medical providers are not technologists and do not specialize in deciphering the intricate privacy concerns involved with third-party applications.

In order to protect both health and privacy, Congress should enact a comprehensive federal regulation to require Telehealth Privacy Certification (“Certification”), administered by the OCR, for all platforms prior to public market release. The OCR already has a trained staff of technologists currently tasked with HIPAA auditing and enforcement. Thus, the OCR should implement a new system to standardize and simplify the necessary HIPAA-compliant technological requirements for telehealth platforms and control how personal telehealth data is maintained.[58] This Certification should also implement one standardized BAA required for use with all approved telehealth platforms.

By requiring telehealth platforms to meet or exceed HIPAA regulations upfront, entities that specialize in medicine would no longer be tasked with attempting to analyze whether or not technology platforms comply with the applicable laws. Instead, medical providers can confidently focus their resources on treating patients’ health concerns—the true purpose of telehealth.

With Certification, it is possible to leverage the public health benefits of telehealth without subjecting unsuspecting patients to abusive or illicit surveillance. This long-term solution would safeguard the privacy rights of telehealth users and ensure that patients’ medical data is protected—both during a pandemic and beyond.


[1] See Marie Fishpaw & Stephanie Zawada, Telehealth in the Pandemic and Beyond: The Policies That Made It Possible, and the Policies That Can Expand Its Potential, Heritage Found. (July 20, 2020),

[2] See Notification of Enforcement Discretion for Telehealth Remote Communications During the COVID-19 Nationwide Public Health Emergency, U.S. Dep’t Health & Hum. Servs., (last updated Mar. 30, 2020) [hereinafter Notification of Enforcement Discretion]; Telehealth and Telemedicine: Frequently Asked Questions, Cong. Research Serv. (Mar. 12, 2020),

[3] Jake Goldenfein, Ben Green, & Salome Viljoen, Privacy Versus Health Is a False Trade-Off, Jacobin, (last visited Sept. 19, 2020).

[4] Lori Andrews, A New Privacy Paradigm in the Age of Apps, 53 Wake Forest L. Rev. 421, 421 (2018).

[5] Id. at 424.

[6] Id. at 421.

[7] Id. at 461.

[8] See Daniel J. Solove & Danielle Keats Citron, Privacy Harms, GW L. Fac. Publ’n & Other Works, 1534, (2021),

[9] Fishpaw, supra note 1.

[10] Hanna B. Demeke, Sharifa Merali, Suzanne Marks, et al., Trends in Use of Telehealth Among Health Centers During the COVID-19 Pandemic — United States, June 26–November 6, 2020 (Morbidity and Mortality Weekly Report), CDC (Feb. 19, 2021),

[11] Id.

[12] Telehealth and Telemedicine: Frequently Asked Questions, Cong. Research Serv. (Mar. 12, 2020), While the World Health Organization limits the term “telemedicine” to services provided by doctors and treats “telehealth” as broader, “including services from other health providers such as nurses, psychologists and pharmacists,” the terms telehealth and telemedicine are often used interchangeably. Dana Shilling, Telemedicine in the age of COVID-19, 35 Elder L. Advisory NL 1, 1 (2020).

[13] See David A. Hoffman, Increasing Access to Care: Telehealth During COVID-19, 7 J. L. & Biosciences 1, 3 (2020).

[14] See Miranda A. Moore & Dominique D. Monroe, COVID-19 Brings About Rapid Changes in the Telehealth Landscape, Mary Ann Liebert, Inc., Publishers (Aug. 14, 2020),

[15] Id.

[16] Josh Sherman, Double Secret Protection: Bridging Federal and State Law to Protect Privacy Rights for Telemental and Mobile Health Users, 67 Duke L. J. 1115, 1143 (2018).

[17] See id. at 1141–43.

[18] Kate Kaye, HHS Notice on Telehealth Penalties Raises Privacy Concerns, Int’l Ass’n of Privacy Prof’ls (Mar. 20, 2020), Chhuom worked with the Washington State Health Care Authority to utilize digital technology in its response to the COVID-19 pandemic. Chhuom has first-hand experience with patient technology as she owns Eth Tech, a digital training firm. See id.

[19] Id.

[20] See Geoffrey Lottenberg, COVID-19 Telehealth Boom Demands Better Privacy Practices, Lexis Law 360 (July 2, 2020),

[21] HIPAA, Telehealth, and COVID-19, Cong. Res. Serv. (June 5, 2020),

[22] See id. (A covered entity is (1) a health plan, (2) a health care clearinghouse, or (3) a health care provider who transmits any health information in electronic form in connection with a transaction covered by 45 CFR § 160.103). HIPAA imposes obligations on covered entities, those that have entered into a Business Associate Agreement (“BAA”) with a covered entity, and subcontractors of covered entities or business associates. See Business Associate Contracts, U.S. Dep’t of Health & Hum. Servs., (last reviewed June 16, 2017).

[23] See Business Associate Contracts, supra note 22. (A Business Associate is a person or entity, including subcontractors, who “perform[s] functions or activities on behalf of, or provides certain services to, a covered entity that involve access by the business associate to protected health information.”).

[24] A covered entity may share PHI with another entity only after a BAA has been entered into that provides “satisfactory assurances” the business will appropriately safeguard the information. Therefore, the business associates themselves are directly liable for breaches of HIPAA. See HIPAA, Telehealth, and COVID-19, supra note 21.

[25] See Leslie Lenert & Brooke Yeager McSwain, Balancing Health Privacy, Health Information Exchange, and Research in the Context of the COVID-19 Pandemic, J. Am. Med. Informatics Ass’n (Apr. 26, 2020),

[26] See HIPAA, Telehealth, and COVID-19, supra note 21.

[27] See HIPAA, Telehealth, and COVID-19, supra note 21.

[28] HIPAA and Telehealth, Ctr. for Connected Health Policy, (last visited Dec. 18, 2020).

[29] Id.

[30] Id.

[31] Id.

[32] Goldenfein, Green, & Viljoen, supra note 3.

[33] See Notification of Enforcement Discretion, supra note 2.

[34] Notification of Enforcement Discretion, supra note 2.

[35] See Lothar Determann, Healthy Data Protection, 26 Mich. Tech. L. Rev. 229, 256 (2020).

[36] Notification of Enforcement Discretion, supra note 2.

[37] Notification of Enforcement Discretion, supra note 2.

[38] Notification of Enforcement Discretion, supra note 2. 

[39] FAQs on Telehealth and HIPAA During the COVID-19 Nationwide Public Health Emergency, U.S. Dep’t of Health & Hum. Servs., (last viewed Dec. 17, 2020).

[40] Notification of Enforcement Discretion, supra note 2.

[41] Notification of Enforcement Discretion, supra note 2.

[42] Notification of Enforcement Discretion, supra note 2.

[43] See Notification of Enforcement Discretion, supra note 2.

[44] Mohammad S. Jalali, Adam Landman, & William Gordon, Telemedicine, Privacy, and Information Security in the Age of COVID-19, SSRN 1, 2 (July 8, 2020),

[45] Deborah R. Farringer, A Telehealth Explosion: Using Lessons from the Pandemic to Shape the Future of Telehealth Regulation, SSRN 1, 28 (Aug. 5, 2020),

[46] See Ajay Chawla, Coronavirus – Covid 19 ‘Zoom’ Application Boon or Bane, SSRN (May 20, 2020),

[47] Id.

[48] See Michael Goodyear, The Dark Side of Videoconferencing: The Privacy Tribulations of Zoom and the Fragmented State of U.S. Data Privacy Law, 10 Hous. L. Rev. 76, 80–81 (2020).

[49] Emily MacInnis, Students, Professors Report Multiple Incidents of Zoombombing in One Day, Daily Bruin (Oct. 11, 2020, 6:00 PM),

[50] Chawla, supra note 46.

[51] Notification of Enforcement Discretion, supra note 2.

[52] See Goldenfein, Green, & Viljoen, supra note 3.

[53] Jalali, Landman, & Gordon, supra note 44.

[54] See WebFX Team, What are Data Brokers — and What is Your Data Worth?, WebFX (Mar. 16, 2020), (Data brokers belong to a “multi-billion dollar industry made up of companies who collect consumer data and sell it to other companies, usually for marketing purposes.” Because data brokers do not deal directly with consumers, many individuals are unaware these companies exist.).

[55] Letter from James L. Madara, MD, CEO Executive Vice President, Am. Med. Ass’n, to Lisa J. Pino, Dir., Off. Civ. Rts. (Oct. 25, 2021),

[56] Id.

[57] Id.

[58] Goldenfein, Green, & Viljoen, supra note 3.


Tinder Swindler: Online Dating & the Law

By Eleni Poulos


The documentary The Tinder Swindler looks at the story of Shimon Hayut, who posed as the son of a diamond mogul and used the popular dating app Tinder to meet women and manipulate them into providing him money.[1] Ultimately, Hayut scammed approximately $10 million from these women.[2] Unfortunately, this is not an isolated incident.[3] Americans alone reportedly lost nearly $1 billion in 2021 to online dating scams like the Tinder Swindler.[4] The law protects those who are financially harmed by online dating scams, and in the aftermath of Hayut’s con, the victims sued in hopes of recovering the money they lost during their “relationship” with him. Recourse, however, is not always successful.[5] For example, one of Hayut’s victims, Pernilla Sjöholm, recently lost a court battle against the banks she believes are partially responsible for the scam.[6] Another of Hayut’s victims, Ayleen Charlotte, lost her case against the bank ING.[7] But what about those victims who do not necessarily lose their money, but instead lose their time and emotional stability?

Laws in the United States do not adequately protect against falsehoods and manipulation on dating applications, even though studies show that individuals using online dating apps like Tinder, Hinge, and Bumble often lie about their name, relationship status, and appearance.[8] More than half the respondents in a study by B2B International and Kaspersky Lab admitted to lying in their profiles.[9] Irina Manta, a law professor at Hofstra University, has focused extensively on this area and uses the term “sexual fraud” to define this type of behavior.[10] She makes three main arguments for addressing it.[11] First, she argues that because of the ineffectiveness of criminal law in these circumstances, the courts should use a rendition of trademark law to “reduce search costs and deception in the dating marketplace, just as we do in the economic marketplace.”[12] Manta also argues that legal recourse would be more effective if the process could take advantage of small claims courts as a way of “discourag[ing] behaviors that may bring significant dignitary, emotional, and other harms” to the individuals using these dating apps.[13] Finally, she argues that statutory damages should be available to victims of sexual fraud.[14] She makes the point that proving this fraudulent behavior, and thus allowing an individual to recover damages, is easier now than ever—since the evidence is saved on these dating apps.[15] Overall, the theory requires that a profile be truthful and that what is advertised on an individual’s profile not mislead another, on pain of the consequences of fraudulent behavior.[16] Manta believes this is an integral first step in making online dating, and the Internet more generally, a safe place to be.[17]

Nevertheless, though legal recourse for this kind of sexual fraud is limited, some states have enacted legislation to help protect their citizens. Though this is a positive step, the legislation still has a way to go. As more and more stories of scams like the Tinder Swindler come to light, it is fair to assume that the law may also evolve to protect individuals using these apps. If it does not, will the responsibility begin to fall on the app developers themselves?


[1] The Tinder Swindler, Wikipedia (Mar. 13, 2022),

[2] Id.

[3] Maya Yang, Americans lost $1bn to Tinder Swindler-style romance cons last year, FBI says, The Guardian (Feb. 15, 2022),

[4] Id.

[5] See Emily Smith, ‘Tinder Swindler’ victim suffers legal setback, Page Six, (Mar. 14, 2022),

[6] Id.

[7] Id.

[8] Amy Polacko, Netflix’s ‘Tinder Swindler’ isn’t alone. Beware Match monsters and Bumble betrayers, too, NBC News, (Feb. 11, 2022),

[9] Id.

[10] Irina D. Manta, Tinder Lies, 54 Wake Forest L. Rev. 207, 207 (2019).

[11] Id. at 207.

[12] Id.

[13] Id. at 207-08.

[14] Id. at 208.

[15] Manta, supra note 10, at 236.

[16] Polacko, supra note 8.

[17] Manta, supra note 10, at 249.


Are Faceprints the New Fingerprints? Clearview AI Facial Recognition Finds its Way into Russo-Ukrainian War

By Annalisa Gobin


Clearview AI’s facial recognition technology caused a privacy uproar when it began scraping the internet and personal social media pages for images to store in its facial recognition database.[1] Clearview AI’s software then applies facial recognition algorithms to the billions of images it collects so that both the software and the database can be sold to law enforcement agencies.[2] In February of 2022, Clearview AI informed investors that it was on track to acquire a total of 100 billion faces within its database (equivalent to 14 photos for each of the 7 billion people on Earth).[3]

Many countries have met Clearview AI with hostility. Citizens in the United States have swarmed the New York-based company with numerous privacy-related lawsuits.[4] Additionally, both the United Kingdom and Australia have broadly outlawed the use of facial recognition technology.[5] Experts have warned that the software, which is sometimes paired with augmented-reality glasses, allows users to reveal personal information on every person they see, including where they live and who they know, promoting more sinister activities such as stalking.[6] Notably, massive tech giants like Google have declined to release facial recognition software, fearing its potential for misuse.[7] In 2020, Clearview AI announced that it would be pulling out of the Canadian market following an investigation into the use of the product by law enforcement.[8]

However, Clearview AI may have found acceptance in the world as a war tool. Following the 2022 invasion by Russia, Ukraine was given free access to Clearview AI’s software by the start-up’s executives as a form of international aid.[9] The country could employ the use of facial recognition for a variety of war-related activities including vetting faces at checkpoints for wanted individuals, helping to identify the deceased, and reuniting refugees with family members they were separated from.[10]

While Clearview AI could be a helpful wartime technology for Ukraine, critics warn against the use of facial recognition at checkpoints, arguing that the technology is imperfect and poses risks of misidentification and unfair arrest.[11] Clearview maintains that it does not support using the technology as a sole identification tool, and it recommends that individuals receive training in how to use the software appropriately.[12] However, critics remain concerned about the consequences that may occur if the technology were to fall into the wrong hands.[13] As Ukraine prepares to utilize the new technology, critics warn that once software with the potential for such dangerous misuse enters the warzone, there is no going back.[14]


[1] Paresh Dave & Jeffrey Dastin, Exclusive: Ukraine Has Started Using Clearview AI’s Facial Recognition During War, Reuters (Mar. 14, 2022, 5:12 PM),

[2] Id.

[3] Drew Harwell, Facial Recognition Firm Clearview AI Tells Investors It’s Seeking Massive Expansion Beyond Law Enforcement, Washington Post (Feb. 16, 2022, 12:47 PM),

[4] Kashmir Hill, The Secretive Company that Might End Privacy as We Know it, N.Y. Times (Jan. 18, 2020),

[5] Id.

[6] Id.

[7] Id.

[8] Id.

[9] Dave & Dastin, supra note 1.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Id.


Machines Can Write Stories Now?

By Grayson Walloga


Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece? Can a robot produce a beautiful and impactful movie script?

In 2016, there was an attempt. Sunspring is a short science fiction film written entirely by an AI that named itself Benjamin.[1] The director, Oscar Sharp, fed hundreds of sci-fi screenplays to the AI and then instructed it to create its own. Was it any good? Well, it did place in the top ten out of hundreds of entries in the Sci-Fi London contest.[2] The film was enjoyed by many, though mostly for the wrong reasons. Sunspring is entirely incoherent. Most of the dialogue is littered with grammatical errors, the plot is non-existent, and the characters have whole conversations on what seems like an alternative plane of reality. That being said, the film is quite fun. Most of the praise should go to the actors, who did their best to interpret the mess conjured up by Benjamin.[3] Through their tone and body language, they turned a script composed of utter nonsense into a gripping tale of romance and murder, allowing Benjamin’s story to be realized in some way.

Solicitors, released in 2020, was another short film written by an AI.[4] Two senior student filmmakers from Chapman University used GPT-3 (specifically, the tool “Shortly Read”) to create most of the film, starting the script with just the following lines: “Barb’s reading a book. A knock on the door. She stands and opens it. Rudy, goofy-looking, stands on the other side.”[5] GPT-3 is a 175-billion-parameter Transformer deep learning model from OpenAI that has been used for translation work, writing scripts for films, and even the creation of fake blog posts (not this one).[6]

Solicitors, unlike Sunspring before it, actually manages to make a bit of sense and adhere to a basic three-act structure. It even throws in an M. Night Shyamalan plot twist for good measure! There are times when the dialogue becomes odd or characters contradict themselves after a while, but for a machine-written work it is very impressive. GPT-3 tends to be more effective on shorter content, as it usually has problems retaining a story’s tone.[7] While that means an author might run into issues trying to get a whole novel created with GPT-3, he can still find great success using the technology to overcome writer’s block when working on a particular scene.[8] Authors should still make sure the machine-written scene makes sense before adopting it into their work. Just because a machine can write a story doesn’t mean it will be any good.

Of course, there is also the problem of interpreting the machine-written story. Both Sunspring and Solicitors demonstrate the necessary inclusion of human beings in AI writing. For Sunspring, the actors were the ones who turned the poorly written jumble of words into an overly dramatic, so-bad-it’s-good experience like The Room.[9] The more tightly written Solicitors was half the run time of Sunspring and had parameters set, with the initial scenario being pre-written.[10] As it stands right now, machine-written works can only truly succeed through the combined efforts of humans and artificial intelligence.[11]

Eager to find out how entertaining a machine-written story might be, I set out to find one that I could use free of charge. DeepStory is an AI-driven script & story generator that is freely available online.[12] You can write something entirely from scratch or choose from some preloaded prompts. Wishing to be inspired by a new take on a personal favorite tale, I had the AI generate a modified scene from The Lord of the Rings. This scene is set during the Council of Elrond, where the fate of the One Ring is being discussed. DeepStory can generate actions, characters, and dialogue at the touch of a button. The results were…intriguing.

I generated a few actions right after Frodo places the Ring for all to see. Instead of the lengthy discussion of what should be done, the stone floor cracked open and revealed the eye of Sauron! Then another eye of Sauron appeared at the front gate, and then the eyes started shooting fireballs at everyone. The scene ended with “glimpses of violence and destruction.” Not exactly in line with the established canon, but divorced from the lore, it was certainly entertaining.

I reset the scene and tried again to see if the AI could do something drastically different. This time, DeepStory decided to have Gimli stand tall and march straight towards the Ring, not unlike the film version. A few other characters go with him, one of whom is not even supposed to show up until the next book, but then Pippin “holds the ring like a grenade…” as he nervously inspects his comrades. Frodo then stands up and exclaims, “It is time. The battle of Endor began many years ago. It is time we are all on the same side.” DeepStory seems to have mixed up its nerd franchises. While both AI-generated scenes have their problems, they still have their uses. AI-generated content like this can help a writer figure out his own style, voice, or themes.[13] At the very least, you can see an example of how not to write your story, though maybe you’ll find a diamond in the artificially generated rough.


[1] Annalee Newitz, Movie written by algorithm turns out to be hilarious and intense, Ars Technica (May 30, 2021),

[2] Id.

[3] Id.

[4] Sejuti Das, OpenAI’s GPT-3 Now Writing Screenplay For A Short Film With A Plot Twist, Analytics India Magazine (Oct. 26, 2020),

[5] Id.

[6] Przemek Chojecki, Why GPT-3 Heralds a Democratic Revolution in Tech, Built In (Nov. 3, 2020), (last updated July 13, 2021); see Kim Lyons, A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News, The Verge (Aug. 16, 2020),

[7] See Jacob Vaus & Eli Weiss, How We Made a Movie by an AI Script Writer, Built In, (last updated July 13, 2021).

[8] See ShortlyAI, (last visited Mar. 10, 2022).

[9] Newitz, supra note 1.

[10] Das, supra note 4.

[11] Vaus & Weiss, supra note 7.

[12] DeepStory, (last visited Mar. 10, 2022).

[13] See Jason Boog, How To Write Movie Reviews with AI, Toward Data Sci. (Feb. 3, 2020),


The Metaverse: Are We Prepared for the Dangers of This Digital Reality?

By Najah Walker


The idea of a metaverse is not a new concept.[1] The term “metaverse” was first coined by speculative fiction writer Neal Stephenson in his 1992 novel “Snow Crash.”[2] The concept was later expanded upon by Ernest Cline in his 2011 novel “Ready Player One.”[3] So, what exactly is a metaverse?

At this point in time, there is no universally accepted definition of the metaverse.[4] However, many consider it to be the eventual successor to the internet.[5] Venture capitalist Matthew Ball describes the metaverse as “an expansive network of persistent, real-time rendered 3D worlds that support continuity of identity, objects, history, payments, and entitlements.”[6] He identifies its key component: that it “can be experienced synchronously by [an] unlimited number of users [….]”[7] In simpler terms, the metaverse is the convergence of physical and virtual reality.[8] Facebook describes it as a “virtual space where you can create and explore with other people who aren’t in the same physical space as you.”[9]

It appears that, to some extent, the metaverse has been around for a long time.[10] Many of its social elements can be found in video games such as Minecraft and Fortnite, and in the social platform Second Life, which was created nearly twenty years ago.[11] However, those virtual worlds are not as advanced as the complete metaverse will likely be, given how far technology has come since their creation.[12]

Late last year, Facebook changed its corporate name to Meta and announced plans to build a virtual-reality platform, Horizon Worlds.[13] Meta’s CEO, Mark Zuckerberg, announced that the then-existing Facebook brand could not “represent everything that [they’re] doing today, let alone in the future,” and that the metaverse would serve as a place where people can “game, work and communicate in a virtual environment, often using [virtual reality] headsets.”[14]

While Meta’s new virtual-reality platform may be the next major innovation, it poses dangers society may not be prepared for.[15] Weeks before Meta officially opened access to Horizon Worlds, a beta tester reported that she was sexually assaulted by a stranger on the platform.[16] Upon review of the incident, Meta found that the beta tester “should have” used a tool called “Safe Zone.”[17] This feature can be activated when users feel threatened, and it prevents other users from touching, talking to, or interacting with their avatar until Safe Zone is lifted.[18]

Another user reported that “within 60 seconds of joining” she was verbally and sexually harassed by three or four male avatars with male voices.[19] This woman was conducting research for Kabuni Ventures, a technology company, when the assault occurred.[20] The woman also reported that she received comments from others calling her experience a “pathetic cry for attention” and encouraging her not to choose a female avatar next time.[21]

These negative experiences raise the question: are we truly prepared for an unregulated metaverse? Joseph Jones, president of an investigative agency specializing in cyber media, says that it is unlikely that there would be a strong legal case for sexual assault in the metaverse.[22] This is largely because avatars can be anonymous and difficult to track.[23] Also, it may be difficult for victims to find law enforcement agencies “legitimately willing to help.”[24] It appears that remedies for victims of sexual assault in the metaverse may be limited.[25] Still, there has been a call for the industry to introduce more effective anti-harassment features and safety measures.[26] Hopefully, as the metaverse continues to emerge and develop, it becomes a safe and inclusive place for all.


[1] Brian X. Chen, What’s All the Hype About the Metaverse, N.Y. Times (Jan. 18, 2022),

[2] Id.

[3] Id.

[4] Adi Robertson & Jay Peters, What Is the Metaverse and Do I Have to Care?, The Verge (Oct. 4, 2021),

[5] Id.

[6] Chen, supra note 1.

[7] Id.

[8] Id.

[9] Robertson & Peters, supra note 4.

[10] Chen, supra note 1.

[11] Id.

[12] Id.

[13] Daniel Thomas, Facebook Changes Its Name to Meta in Major Rebrand, BBC (Oct. 28, 2021),

[14] Id.

[15] Tanya Basu, The Metaverse Has a Groping Problem Already, MIT Technology Review (Dec. 15, 2021),

[16] Id.

[17] Id.

[18] Id.

[19] Michelle Shen, Sexual Harassment in the Metaverse? Woman Alleges Rape in Virtual World, USA Today (Jan. 31, 2022),

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Id.

Image source:

Revisiting Trespass to Chattels Amidst the Metaverse and its Conceptualization of Work Culture

By Mark Edward Blankenship Jr.*


Unlike the Internet, commonly defined as a global network of billions of computers, millions of servers, and other electronic devices facilitating worldwide communication, the metaverse is conceptually distinct: it combines aspects of physical reality, virtual reality, augmented reality, artificial intelligence, social media, online gaming, and cryptocurrency, allowing users to interact virtually. Within the metaverse, people can meet, and digital assets—land, malls, offices, products, and avatars—can be created, bought, and sold. Moreover, spurred by the pandemic, the metaverse is attempting to redefine work culture. As the metaverse gradually aims to become the new virtual experience that supersedes the Internet, current employment laws and regulations provide little insight into navigating this workplace. For instance, what might constitute discrimination or harassment within this new workplace atmosphere? How might a company’s resources and infrastructure face harm?

When the Internet first underwent legal examination, judges and practitioners made much of what they could through analogy and common law claims. One of the first common law claims applied in the context of the Internet was trespass to chattels, which occurs when an individual intentionally and without authorization interferes with another’s possessory interest in a chattel, and such unauthorized use proximately results in damage. One who commits a trespass to a chattel is subject to liability to the possessor of the chattel if, but only if: (a) he dispossesses the other of the chattel; (b) the chattel is impaired as to its condition, quality, or value; (c) the possessor is deprived of the use of the chattel for a substantial time; or (d) bodily harm is caused to the possessor, or harm is caused to some person or thing in which the possessor has a legally protected interest.

CompuServe Inc. v. Cyber Promotions, Inc. was the first case to apply this tort in the context of cyberspace. There, the Defendant bombarded the Plaintiff’s email system with enough “spam” email to harm the system’s functionality. In eBay, Inc. v. Bidder’s Edge, Inc., where an auction data aggregator used a “crawler” to gather data from eBay’s website, the court found that even though the Defendant’s interference was not substantial, any intermeddling with or use of another’s personal property established possessory interference with the Plaintiff’s chattel. Additionally, the intentional use of the Plaintiff’s bandwidth was considered harmful because of its aggregative effect.

In Intel Corp. v. Hamidi, the Defendant, a former employee of Plaintiff Intel Corp., sent e-mails criticizing Intel’s employment practices to numerous current employees on Intel’s electronic mail system. Hamidi continued to send these emails even after receiving several cease-and-desist letters from Intel. However, he breached no computer security barriers in order to communicate with Intel employees. Additionally, Hamidi’s communications to individual Intel employees caused neither physical damage nor functional disruption to the company’s computers, nor did they at any time deprive Intel of the use of its computers. Rather, the contents of the messages caused discussion among employees and managers.

The court noted that unwanted electronic communications may constitute a trespass to chattels if the volume and frequency of the communications are sufficient to overburden the recipient’s email system. However, the court found the case before it distinct from CompuServe because the messages transmitted by Hamidi were infrequent and did no actual harm to Intel’s computer system. Any harm caused to Intel’s employees by reading the emails stemmed from the content of the emails rather than their quantity or frequency. The court refused to extend the tort of trespass to chattels to encompass “impairment by content.” Furthermore, Intel could not assert a property interest in its employees’ time, since this would insinuate that employees were chattels, which they are not.

Hamidi’s dissenting opinions do raise concerns about the effects this one common law tort could have in a realm where the workplace, the physical world, and the digital world are intertwined. Justice Brown wrote in her dissent that Hamidi should have been held liable for trespass to chattels because Intel had invested millions of dollars to develop and maintain a computer system in order to enhance the productivity of its employees, and the time required to review and delete Hamidi’s messages diverted employees from productive tasks and undermined the utility of that system. Justice Mosk believed that the majority failed to distinguish open communication in the public “commons” of the Internet from unauthorized intermeddling on a private, proprietary intranet. In his view, Hamidi’s actions, in crossing from the public Internet into a private intranet, were the equivalent of intruding into a private office mailroom, commandeering the mail cart, and dropping off unwanted broadsides on 30,000 desks. “Because Intel’s security measures have been circumvented by Hamidi, the majority leave Intel, which has exercised all reasonable self-help efforts, with no recourse unless he causes a malfunction or systems ‘crash.’ . . . Intel correctly expects protection from an intruder who misuses its proprietary system, its nonpublic directories, and its supposedly controlled connection to the Internet to achieve his bulk mailing objectives—incidentally, without even having to pay postage.”

The court in Hamidi seemed to draw a line with regard to “impairment by content.” Additional factors that may need to be considered are whether such content invades privacy or is defamatory; in those cases, there would be an actual harm to one’s rights. The court also seemed to draw a line on whether harm to one’s productivity alone could be a basis for recovery. Back in 2003, the court did not think so. In the technologically advancing world we now find ourselves in, however, it is unclear whether that holding will be reconsidered. Lawyers stress the importance of the “billable hour.” Companies are constantly looking for ways to improve their productivity efficiently. Today, security breaches are more frequent than ever, and as a result, companies are urged under the law to invest in and employ reasonable cybersecurity practices. From this angle, it would seem the adage “time is money” has merit. If so, perhaps the common law should reflect that.

Critics might argue that there should be no property interest in disrupted work productivity because human beings are not chattel; people should work to live, not live to work. But that argument loses strength when applied to an interactive world where people are represented as avatars, in which a “separate personhood” is hard to establish. Furthermore, what determines the value of real and personal property within the metaverse is still being worked out. Upon analyzing the differences between cyberspace and the metaverse, legal scholars and practitioners may need to reconsider, if not expand upon, the application of Hamidi.


* Mark Edward Blankenship Jr. is a Senior Associate Attorney for the Ott Law Firm in St. Louis, Missouri. He received his LL.M. (Intellectual Property Law emphasis) at the Benjamin N. Cardozo School of Law at Yeshiva University in 2021, and his J.D. from the J. David Rosenberg College of Law at the University of Kentucky in 2019.


Can AI Copyright Its Art?

By Mirae Heo


One of the things artists can do to protect their works is to federally register them with the U.S. Copyright Office. Registrable works include literary works, musical works, graphical works, and even architectural works.[1] Congress purposely left the language of section 102(a) of the Copyright Act of 1976 broad so that the statute would not bar future works from copyright protection as technology advanced.[2] Does that mean that works created by artificial intelligence (AI) are copyrightable? The Copyright Office says no, but Dr. Stephen Thaler certainly tried.

On November 3, 2018, Dr. Thaler filed a copyright application to register a two-dimensional work of art titled A Recent Entrance to Paradise.[3] The artwork was created by an algorithm Dr. Thaler invented, called the Creativity Machine, which can create images of what the AI experiences while going through a “simulated near-death experience.”[4] Dr. Thaler wrote on his application that he was “seeking to register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.”[5] On August 12, 2019, the U.S. Copyright Office rejected the application on the grounds that the artwork “lack[ed] the human authorship necessary to support a copyright claim.”[6] The type of work was not at issue, but rather the “author” of the work.

Dr. Thaler requested that the Office reconsider its rejection of his application, arguing that “the human authorship requirement is unconstitutional and unsupported by either statute or case law.”[7] On March 30, 2020, the Office once again declined to extend copyright protection to A Recent Entrance to Paradise because Dr. Thaler did not provide evidence that the image was created with “creative input or intervention by a human author.”[8] The Office refused to “abandon its longstanding interpretation of the Copyright Act, Supreme Court, and lower court judicial precedent that a work meets the legal and formal requirements of copyright protection only if it is created by a human author.”[9]

In his second request for reconsideration, Dr. Thaler once again argued that the human authorship requirement is unconstitutional and unsupported by case law.[10] He claimed that, from a public policy view, extending copyright protection to computer-generated works “further[s] the underlying goals of copyright law, including the constitutional rationale for copyright protection.”[11] He also argued that there was no binding authority that explicitly prohibited computer-generated works from receiving copyright protection.[12]

On February 14, 2022, the Office denied the registration again, concluding that Dr. Thaler needed to have “convince[d] the Office to depart from a century of copyright jurisprudence,” and that he had failed to do so.[13]

Copyright protection is given to “original works of authorship fixed in any tangible medium of expression,” but neither “author” nor “authorship” is defined in the Copyright Act of 1976.[14] Despite the lack of guidance in the Copyright Act, case law limits the scope of “author” and “authorship.” In Supreme Court cases such as Burrow-Giles Lithographic Co. v. Sarony and Goldstein v. California, the Court referred to “authors” as humans.[15]

Lower court decisions have also refused to extend copyright protection to non-human creations. In Urantia Found. v. Maaherra, the Ninth Circuit held that a book “’authored’ by non-human spiritual beings” was not copyrightable.[16] In Naruto v. Slater, the Ninth Circuit held that animals did not have legal standing under the Copyright Act.[17]

The Compendium of U.S. Copyright Office Practices, the administrative manual of the Register of Copyrights, goes into even more detail about works that lack human authorship. According to the Compendium, “[t]he U.S. Copyright Office will not register works produced by nature, animals, or plants. Likewise, the Office cannot register a work purportedly created by divine or supernatural beings . . . .”[18] It even lists very specific examples of uncopyrightable works, such as “a mural painted by an elephant,” “a claim based on cut marks, defects, and other qualities found in natural stone,” and “an application for a song naming the Holy Spirit as the author of the work.”[19] With regard to machine-created works, the Compendium states that “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”[20]

But can AI really compare to animals and divine spirits? AI is a system “that mimic[s] human intelligence to perform tasks.”[21] The Copyright Office seems convinced that, for now, a work solely created by AI lacks the required human input necessary to constitute a copyrightable work. Perhaps advancements in AI will reach a point in the future where the Copyright Office will recognize the human intelligence in AI as human authorship.


[1] 17 U.S.C. § 102(a).

[2] H.R. Rep. No. 94-1476, at 51 (1976).

[3] Letter from Shira Perlmutter, Register of Copyrights, U.S. Copyright Off. Rev. Bd., to Ryan Abbott, Dr. Thaler’s Attorney (Feb. 14, 2022) (on file with the U.S. Copyright Off.) [hereinafter Letter].

[4] Mark Nicholson, Artificial Intelligence – Visions (Art) of a Dying Synthetic Brain, Urbasm (May 18, 2016),

[5] Letter, supra note 3, at 2.

[6] Id.

[7] Id.

[8] Id.

[9] Letter, supra note 3, at 2.

[10] Id.

[11] Id.

[12] Id.

[13] Id. at 3.

[14] 17 U.S.C. § 102(a).

[15] See Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884); Goldstein v. California, 412 U.S. 546, 561 (1973).

[16] Urantia Found. v. Maaherra, 114 F.3d 955, 957 (9th Cir. 1997).

[17] Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018).

[18] U.S. Copyright Off., Compendium of U.S. Copyright Off. Practices § 313.2 (3d ed. 2021).

[19] Id.

[20] Id.

[21] IBM Cloud Education, Artificial Intelligence (AI), IBM (June 3, 2020),


Movie Storyboards, Intellectual Property, and a $3 Million Mistake

By Nick Corn IV


I am an active Twitter user. One of my favorite accounts on Twitter is @BadLegalTakes. The basic gist of the account is that it posts screenshots of users offering misinformed, yet often very self-assured, legal opinions. Recently, one of the screenshots featured a post that I thought was so downright ridiculously wrong that I couldn’t help but write a blog about it. I know what you’re thinking, “he isn’t really doing his blog about a dumb tweet is he?” Yes. Yes, I am.

On January 15, 2022, an NFT group by the name of Spice DAO announced on Twitter that they had won an auction for director Alejandro Jodorowsky’s storyboard for his later-cancelled 1974 film adaptation of Dune.[1] The film, based on the 1965 book of the same name by Frank Herbert, was eventually produced under a different director in 1984 and remade in 2021.[2] The 2021 iteration has grossed over $400 million as of this writing.[3] Seeing the success of the 2021 film, Spice DAO spotted what they thought was an opportunity to make a big splash when Jodorowsky’s storyboard went up for auction with world-renowned auction house Christie’s in November.[4] The piece was estimated to sell for between $29,000 and $40,000.[5] Spice DAO paid a whopping $3 million for it.[6] After their successful bid, Spice DAO pledged to “[m]ake the book public” and “[p]roduce an original animated limited series inspired by the book and sell it to a streaming service,” among other things.[7] Essentially, Spice DAO had concluded that, because they bought the storyboard, they now owned both the physical storyboard and the intellectual property rights to it.

Unfortunately for Spice DAO, the current copyrights covering Dune’s IP run through 2056.[8] What’s more, the book they bought will itself remain under copyright until at least 2092, as an author, namely Jodorowsky himself, is still alive.[9] Just for kicks, even if buying the book were a valid method of copyright transfer, there are 10-20 other copies of the same storyboard in existence.[10] Not only that, but pages of the storyboard already exist online. At the time of the purchase, an individual had already uploaded the contents of the book onto Google Photos.[11] While the link to it is no longer functional, presumably because the owner of the copyright became aware of it, many images of its pages still come up in a quick Google search.

Undeterred, however, Spice DAO has persisted in trying to make some use of their massive overpayment. In a blog post, the group stated that they conducted a “whirlwind week of meetings” in which they met with a producer who helped create the anime sequence in Quentin Tarantino’s “Kill Bill,” a writer for a Netflix series, an entertainment attorney employed by Canadian rapper Drake, and three Los Angeles-based animation studios, among others.[12] In an interview, Spice DAO tried to reframe the situation as everyone else simply misunderstanding the purpose of their purchase. According to Spice DAO, “while we do not own the IP to Frank Herbert’s masterpiece, we are uniquely positioned with the opportunity to create our own addition to the genre as an homage to the giants who came before us.”[13] This raises two questions for inquiring minds. First, if they simply intended to create their own fanfiction based on the storyboard, why spend $3 million to buy pages they could have found online for free? Second, how exactly do they intend to create that fanfiction, and then sell it for the creation of an animated series, without infringing upon the underlying intellectual property? One writer was blocked by Spice DAO on Twitter after asking them that very question.[14] The answer, however, is that they can’t. As UK-based trademark attorney Kirsty Stewart wrote, “in order to produce or authorise derivative works such as an animated series, SpiceDOA would need to obtain licenses from the Herbert estate, as well as potentially Jodorowsky (and any other authors such as Michel Seydoux) if the adaptation was based on the Jodorowsky book. Similar to how buying a Batman comic does not give you the inherent rights to produce a new Batman film, the purchasing of this directors bible does not give SpiceDOA any intrinsic rights to produce new material.”[15]

At the end of the day, this is plainly an example of a group, caught up in a trend, taking drastic action without fully comprehending what they were doing. Their attempted backtracking is the equivalent of a teenager, upon being romantically turned down, stumbling through a “What? No, of course I wasn’t actually asking you out.” Unfortunately for Spice DAO, the world can recognize their $3 million embarrassment from a mile away.


[1] @TheSpiceDAO, Twitter (Jan. 15, 2022, 12:28 PM),

[2] Adrienne Westenfeld, The Crypto Bros Who Thought They Bought the Dune Rights Won’t Give Up, Esquire (Jan. 25, 2022),

[3] Dune (2021), The Numbers, (last visited Feb. 18, 2022).

[4] Westenfeld, supra note 2.

[5] Joyce Li, Rare 1970 ‘Dune’ Storyboard Set To Hit Christie’s Auction Block, HypeBeast (Nov. 3, 2021),

[6] Spice DAO meets with Drake’s lawyer, still can’t fix $3M Dune blunder, Protos (Jan. 24, 2022), [hereinafter Protos].

[7] @TheSpiceDAO, supra note 1.

[8] David Barnett, Jodorowsky animated Dune in development, says crypto group, The Guardian (Jan. 24, 2022, 7:43 AM),

[9] Id.

[10] Protos, supra note 6.

[11] Id.

[12] Spice DAO, Development on the Original Animation has begun!, Medium (Jan. 18, 2022),

[13] Christian Zilko, Crypto Investors Plot Animated Series Based on Jodorowsky’s ‘Dune’ Ideas — Without ‘Dune’ IP, IndieWire (Jan. 23, 2022, 2:30 PM),

[14] Barnett, supra note 8.

[15] Kirsty Stewart, Jodorowsky’s Dune, NFTs, and Copyright, Thorntons (Jan. 17, 2022),


Grid Operator PJM Proposes A Two-Year Pause on Interconnection Approvals: What It Means for America’s Green Energy Goals

By Alexis Laundry


Last week, PJM Interconnection’s Planning Committee endorsed a transition plan that includes a two-year delay in reviewing the majority of proposed projects in its interconnection request queue.[1] PJM is the nation’s largest regional grid operator, controlling electricity transmission across all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia.[2] Over the past several years, PJM has seen a drastic increase in interconnection requests for new generation projects, most from renewable sources like solar and wind, as demand for green electricity has grown in response to the widespread passage of state Renewable Portfolio Standards and net-zero carbon goals.[3] The trend is expected to continue as the nation works toward the Biden administration’s goal of a carbon-free electricity sector by 2035.[4] This surge in demand for interconnection approval has created a backlog in the queue, largely because the approval system was never designed to handle this volume of applications, and it is delaying projects from being built by months and even years. The problem must be resolved soon if the U.S. is going to meet the transition goals necessary to reduce carbon emissions and mitigate the effects of climate change.

PJM is a regional transmission organization (RTO), a non-profit entity that coordinates the movement of electricity throughout a specific geographic region.[5] RTOs are responsible for operating the wholesale electricity market in their region and managing the high-voltage transmission grid to ensure reliable access to electricity for users.[6] This includes approving requests from new generation facilities to connect to that transmission network, which allows the electricity they produce to be delivered to end-users. The approval system was originally designed to accommodate relatively infrequent requests from large-scale coal or natural gas plants.[7] That process is slow and thorough, which worked fine when only a few big projects needed approval each year. However, the recent proliferation of smaller-scale renewable energy projects proposed across the region has overwhelmed the system. PJM recognized this problem and began taking steps late last year to create a plan for transitioning to a new approval process.

The organization created the Interconnection Process Reform Task Force, which met for the first time in April 2021.[8] This task force created the proposed transition plan that was officially endorsed by the Planning Committee last week.[9] The transition plan creates a “fast track” approval process for the projects in the queue that are most ready for implementation, which amounts to about 450 of the ~2,500 projects currently languishing there.[10] Another 1,200 would be prioritized for review starting in 2024.[11] The plan also includes a pause on new applications until 2025, which means many approvals wouldn’t be complete until 2027.[12] The ultimate result is a two-year pause on reviewing the majority of projects currently in the queue, which will delay getting over 100,000 MW of carbon-free electricity onto the grid by at least that long.[13] The plan is just a proposal at this time; it still needs to be reviewed by additional PJM committees and ultimately approved by FERC, the federal agency responsible for regulating RTOs.[14] PJM plans to file with FERC in May and begin implementation late this year or early next year.[15]

What does all this mean for our nation’s climate action goals? On the one hand, many stakeholders are very supportive of the plan.[16] Representatives from major trade groups and public interest organizations have spoken in favor of the changes, seeing them as necessary and timely.[17] Something clearly must be done to address the backlog of renewable projects, an issue replicated at many RTOs across the country. PJM is widely considered a bellwether for how RTOs operate, so approval of this plan would likely make it a model for other regions.[18] On the other hand, the two-year delay and new prioritization process will put many renewable project developers in tough financial positions, forcing them to delay work, and ultimately revenue, on projects as they await approval.[19] The delay will also have consequences for states’ ability to meet their renewable portfolio goals, many of which are statutorily mandated, as well as for national goals. It seems unlikely that the U.S. can reach 100% carbon-free electricity in just 13 years if new projects can’t come online quickly. At the end of the day, it will all depend on whether FERC approves this plan and how other regions respond to the same problem. Hopefully, other solutions will also arise to help maintain the momentum for renewable development we’ve seen over the past few years.


[1] PJM’s Planning Committee Widely Endorses PJM Transition Plan to New Interconnection Process, PJM Inside Lines (Feb. 9, 2022), [hereinafter PJM Transition Plan].

[2] Who We Are, PJM, (last visited Feb. 11, 2022).

[3]  PJM Transition Plan, supra note 1.

[4] Exec. Order 14,008, 86 Fed. Reg. 7,619 (2021).

[5] Who We Are, supra note 2.

[6] Id.

[7] James Bruggers, Overwhelmed by Solar Projects, the Nation’s Largest Grid Operator Seeks a Two-Year Pause on Approvals, Inside Climate News (Feb. 2, 2022),

[8] PJM Transition Plan, supra note 1.

[9] Id.

[10] Id.

[11] Bruggers, supra note 7.

[12] Id.

[13] PJM Transition Plan, supra note 1.

[14] Id.

[15] Id.

[16] Id. (stating that the plan was approved by 91% of the committee and was preferred over several alternatives).

[17] Bruggers, supra note 7.

[18] Id.

[19] Id.


