The first exclusively online law review.


Navigating Legal Factors for U.S. Companies Entering the E-commerce Market in Africa

By Yanrong Zeng




As Africa’s online banking and shopping sectors have gained popularity, e-commerce has become a crucial aspect of business operations in the region, attracting foreign investment.[1] To successfully export goods to Africa, U.S. companies must have a deep understanding of the legal factors that impact the e-commerce sector. This blog post delves into the legal considerations that contribute to the success of e-commerce in different African countries and recommends suitable entry points for businesses entering the e-commerce market.

The success potential of e-commerce hinges on two factors: information infrastructure and legal considerations. Information infrastructure sets the ceiling of e-commerce possibilities in a target market, as access to the internet, mobile phones, bank accounts, and postal addresses is necessary for online shopping.[2] To assess a market’s online shopping readiness, the UNCTAD B2C E-Commerce Index is an effective tool.[3] The countries with the highest index scores are South Africa, Algeria, and Kenya, followed closely by Ghana, Nigeria, and Morocco.[4] Meanwhile, Senegal, Egypt, and Ivory Coast are further down the list.[5] Notably, Egypt has dropped in the rankings over the past decade.[6]

Legal considerations can act as limiting factors for e-commerce opportunities. These include e-commerce law, consumer protection law, data privacy laws, and breach notification laws. However, companies can turn these laws to their advantage by treating them as a guide to identifying the most suitable e-commerce market to enter.

To minimize potential disputes and legal complications, U.S. companies should identify target markets with reliable legal protection for electronic agreements and strong consumer protection laws. Of the nine countries mentioned earlier, only Nigeria has yet to enact a distinct e-commerce law, although a bill is currently moving through the legislature.[7] The Algerian e-commerce market is closed to foreign companies, so Algeria is not an advisable market to enter.[8] Among the remaining countries, Egypt, Nigeria, South Africa, Ghana, and Morocco appear to be suitable entry markets, as they have established specific laws or regulations to protect consumers, especially in online transactions.[9] Conversely, Kenya, Algeria, Senegal, and Ivory Coast appear to have weaker consumer protection laws, which can create legal ambiguities and compliance challenges.[10]

Data security breaches pose a significant risk, as indicated by the high numbers of malware attacks on industrial control systems in the target markets.[11] It is therefore crucial for U.S. companies to take proactive measures to protect their data and adhere to foreign laws. To mitigate these risks, U.S. companies can prioritize entry into markets that share uniform data privacy and protection laws across a group of countries, because complying with the legal requirements of one country in the group goes a long way toward ensuring compliance with the others. The African Union (AU) member states and Economic Community of West African States (ECOWAS) member states are obligated to respect, protect, and promote the right to privacy and personal data protection, as stated in their declarations and conventions.[12]

To ensure compliance and mitigate risks, U.S. companies need to carefully evaluate their business requirements and risk tolerance before entering a new market with data breach notification laws.[13] For larger companies with a greater focus on data protection, countries such as Nigeria, Egypt, and Algeria, which have well-defined and stringent data breach notification laws, may be a suitable choice.[14] However, for smaller to medium-sized companies that prioritize balancing compliance costs with maintaining consumer trust, countries such as Kenya, Ghana, and South Africa, which have moderate data breach notification laws, may be a more practical option.[15] Ultimately, each company’s risk appetite and business requirements should drive the choice of which market to enter.

To operate in Africa, U.S. companies must adhere to the Foreign Corrupt Practices Act (FCPA), which extends beyond U.S. borders.[16] To avoid violating this law, U.S. companies need to prioritize anti-corruption measures. Transparency International’s 2022 Corruption Perceptions Index (CPI) revealed that sub-Saharan Africa is currently facing a notable challenge with corruption, which may impact businesses operating within the region.[17] As a result, U.S. companies must conduct thorough due diligence to ensure compliance with anti-corruption laws when entering the e-commerce market in the region.

In conclusion, the e-commerce market in Africa presents both risks and opportunities for U.S. companies. While the region has seen tremendous growth in e-commerce, it is essential for U.S. companies to carefully consider the legal landscape and regulatory environment in each target market. By prioritizing legal compliance, consumer protection, data privacy, and anti-corruption measures, U.S. companies can mitigate risks and maximize opportunities for success. Ultimately, those who navigate the legal complexities with diligence and strategic planning stand to benefit from the growing e-commerce market in Africa.








[1] White and Case, Africa Focus: Navigating a Changing Business Landscape in Africa and Beyond (Spring 2021),

[2] U.N. Conf. on Trade and Dev., UNCTAD B2C E-Commerce Index 2020 Spotlight on Latin America and the Caribbean, 1,

[3] Id.

[4] Id. at 15–16.

[5] Id.

[6] Id. at 16.

[7] Aderibigbe et al., Digital Business in Nigeria: Overview, Thomson Reuters (Jan. 1, 2023); Electronic Transaction: Senate Prepares Legal Framework to Guide Deals, Tribune Online (Feb. 27, 2020); Kenya Commc’n (Amend.) Act (2008); Electronic Commc’n and Transactions Act (2002); Dyer et al., Digital Business in South Africa: Overview, Thomson Reuters (Mar. 1, 2021); Electronic Transactions Act (2008); Electronic Signature Law No. 15 (2004); World Intell. Prop. Org., Law No. 2008-08 on Electronic Transactions; Evidence Act (2011) § 93.

[8] Lloyds Bank, E-Commerce in Algeria (last updated Apr. 2023); Loucif and Gauvin, Publication of the Law on the Post and the Electronic Communications and the E-commerce Law, LPA-CGR.

[9] Consumer Code of Prac. Regul. (2007); U.N. Conf. on Trade and Dev., Review of E-commerce Legislation Harmonization in the Economic Community of West African States, 40; Sulaiman and Mashaba, E-commerce Transactions Under the Electronic Communications and Transactions Act and Consumer Protection Act, Dentons (Aug. 26, 2022); Electronic Transactions Act (2008); Morocco Ministry of Indus. and Trade, Consumer Protection.

[10] Kenya Info. and Commc’n Act (1998); Consumer Prot. Law No. 181 (2018); Brill, Algeria – Consumer Protection; ICT Policy Africa, Ordinance No. 2012-293 of March 21, 2012 on Telecommunications and Information Technologies (Unofficial Translation).

[11] Culture Custodian, More African Countries are Taking Data Privacy and Protection Seriously (Feb. 8, 2023),

[12] Id.; African Union, Personal Data Protection Guidelines for Africa (May 9, 2018),

[13] Practical Law Data Privacy & Cybersecurity, Global Data Breach Notification Laws Chart: Overview, Thomson Reuters (Nov. 28, 2022),

[14] DLA Piper, Data Protection Laws of the World; Law 151/2020 on the Protection of Personal Data; Data Guidance, Algeria: Data protection law published in Official Gazette (Apr. 16, 2019),

[15] Nzilani Mweu, Kenya – Data Protection Overview, Data Guidance (Mar. 2023); Bhagattjee, South Africa – Data Protection Overview, Data Guidance (July 2022); Cybersecurity Act (2010),

[16] Nick Oberheiden, 10 Reasons Why FCPA Compliance Is Critically Important for Businesses, Nat’l L. Rev. (July 24, 2020),

[17] Transparency Int’l, CPI 2022 for Sub-Saharan Africa: Corruption Compounding Multiple Crises (Jan. 31, 2023),


To Neurotech or not to Neurotech – Whether ‘tis nobler in the Mind to Regulate

By Jack Younis




In the 2007 hit television show Chuck, an unwitting computer geek is turned into a CIA secret agent/asset when he downloads fighting skills and a database of government intelligence into his brain.[1] Regardless of the action-comedy shenanigans that ensue, the concept of connecting human consciousness directly to technology has continued to capture the cultural zeitgeist. Now, neurotechnology and its related fields have advanced beyond discussion in popular culture and science fiction; they have become an increasingly topical reality. Moreover, as advances are made in neurotech, the legal questions presented by this progress become increasingly pressing.

Much like Chuck’s download and transmission of CIA data to his brain, one facet of neurotechnology is the ability to “download” data from the brain itself.[2] The brain is a biological computer, relying on the neurological firing of electric signals to execute commands, not unlike those of an actual computer.[3] The resulting application and development of technology that interprets these signals and firings has led to significant advancements in neurotech.[4]

With such increasingly adaptable technology, many in the field are calling for increased regulation. As professor Rajesh P.N. Rao from the University of Washington in Seattle puts it, “It’s a good time for us to think about these devices before technology leaps ahead and it’s too late.”[5] And regulatory commentary has already begun: in December 2019, the Organisation for Economic Co-operation and Development (OECD) issued the first international standard for regulating neurotechnology.[6]

The OECD’s recommendations for the novel technology include a set of nine principles that governance of the nascent industry should consider, including but not limited to promoting responsible innovation, prioritizing safety assessments, and safeguarding brain data and other information.[7]

In addition to the regulatory guidance conversation, the adaptation of these technologies has become prevalent in legal discussions as well. Prominent in the conversation is Dr. Allan McCay, who was named one of the most influential lawyers of 2021 by Australasian Lawyer.[8] Dr. McCay, a criminal law professor at the University of Sydney Law School, published a topical report addressing these concerns within the past year.[9] His report focuses not only on the social, political, and economic concerns related to neurotechnology, but also on the ethical and legal implications that follow.[10] Mirroring the guidance put forth by the OECD, Dr. McCay’s work homes in on the ethical steps that must be considered as progress continues, emphasizing “how the law should respond” in addition to how it is applied.[11]

Even with the careful consideration of neurotechnology’s future, not every concern is strictly about restricting this developing industry. Some find that regulations themselves need breathing room to operate effectively. As one article puts it, “Outright bans of certain technologies could simply push them underground, so efforts to establish specific laws and regulations must include organized forums that enable in-depth and open debate.”[12] Much like the OECD and Dr. McCay, Yuste and the Morningside Group contend that the development of neurotechnology requires consideration beyond technological implications; legal questions related to privacy and consent, agency and identity, augmentation, and bias must all be accounted for as part of the discussion.[13]

Regardless of whether neurotechnology ever enables humans to download martial moves and spy secrets directly into their consciousness, the emergence of this technology will raise progressively more questions. Whether the issue is the impact on administering justice or the development of higher capabilities, the need to balance outcomes and promote conversation surrounding neurotechnology will most likely continue to grow, and the legal field must stay prepared.





[1] Chuck: About, NBC (2023),

[2] Julia Masselos, Neurotechnology, Technology Networks (Feb. 11, 2022),

[3] Id.

[4] Id.

[5] Esther Shein, Neurotechnology and the Law, 65 Communications of the ACM, no. 8, 2022, at 16-18,

[6] OECD, Recommendation of the Council on Responsible Innovation in Neurotechnology, OECD Legal Instrument (Dec. 11, 2019),

[7] Id.

[8] Allan McCay, Neurotechnology, law and the legal profession, The Law Society (August 2022),

[9] Id.

[10] Id.

[11] Id. at 14.

[12] Rafael Yuste et al., Four ethical priorities for neurotechnologies and AI, Nature (Nov. 9, 2017),

[13] Id.


Are Layoffs the New Normal for Big Tech?

By Kasey Hall




Over 140,000 tech workers were laid off in 2022, and so far in 2023, we have seen more than 94,000 jobs cut, ranging from tech start-ups to “Big Tech.”[1] In fact, the tech industry has seen its highest number of layoffs since the dot-com bubble burst in the early 2000s.[2] These layoffs have been all over the news and social media, with many younger generations questioning the sustainability of a career in tech.[3]

In the past, tech companies prioritized a “growth at all costs” mindset under which profitability was viewed as a mere afterthought.[4] Sanjay Brahmawar, the CEO of the enterprise software firm Software AG, says, “for years companies have said ‘let’s just keep growing and we’ll figure out profitability somewhere down the road.’”[5] Since 2011, the tech industry has grown year after year, with explosive growth occurring after the pandemic.[6] In 2020 and 2021, sales rose sharply as new work-from-home orders put heavy demand on tech companies, and more people and businesses relied on these technologies than ever before.[7] During the pandemic, tech hiring became progressively more competitive, with companies increasing pay packages and benefits across the board.[8] For instance, Amazon more than doubled its corporate staffing, and Meta doubled its employment headcount between March 2020 and September 2021.[9] This record-setting growth, however, could not be maintained forever, and we are currently experiencing a significant course correction triggered by an economic slowdown.[10]

For a while, investors were willing to let these tech companies spend needlessly so long as share prices reliably grew by double digits year after year.[11] However, as internal costs rose and spending slowed, many companies faced shrinking profits and alarms from angry investors calling for a significant reduction in expenses.[12] The “growth at all costs” era seems to be ending for “Big Tech.”[13] Investors are instead shifting the focus toward profitability and efficiency, describing this as the “new normal” for tech companies.[14] So, is this “new normal,” led by investors, to blame for these tech layoffs? Michael Cusumano, deputy dean at MIT’s Sloan School of Management, believes that “these massive tech layoffs have more to do with investors than companies’ bottom lines.”[15]

As record-breaking growth is no longer feasible long-term, investors have instead set their sights on curbing expenses and are beginning to evaluate tech companies more harshly.[16] This means that the high-skilled professionals hired en masse during the pandemic, with sizable salaries and pay packages to match, are the first to be cut as tech companies look to reassess their balance sheets.[17] All this has been done in an effort by tech companies to signal to investors that they are willing to focus on long-term growth by showing more fiscal responsibility in the short term regarding staffing.[18] This reorganization likely caused the industry-wide layoffs.[19] However, they should not signal absolute doom to those interested in the industry’s success. Instead, these tech layoffs could indicate that the “industry is maturing or becoming more stable after rapid growth” and that these tech companies are invested in a more sustainable path forward.[20]






[1] Keerthi Vedantam, Tech Layoffs: U.S. Companies That Have Cut Jobs in 2022 and 2023, Crunchbase News (Mar. 3, 2023),

[2] Amanda Hetler, Tech Sector Layoffs Explained: What You Need to Know, TechTarget (Feb. 1, 2023),

[3] Tripp Mickle, Tech Layoffs Shock Young Workers. The Older People? Not So Much., N.Y. Times (Jan. 23, 2023),

[4] Leslie Picker & Ritika Shah, Tech Private Equity Investor Orlando Bravo Says the Mantra of “Growth at all Costs” is Over, CNBC (Mar. 3, 2023, 11:24 AM),

[5] Will Daniel, How to Navigate the Stock Market’s “New Normal” After the Last 2 Decades of Investing Became Ancient History, Fortune (June 4, 2022, 6:30 AM),

[6] The Future of Big Tech, J.P.Morgan (Dec. 23, 2022),

[7] Why Are Tech Companies Laying Off All These Workers?, Forbes (Jan. 27, 2023 10:50 AM),

[8] Id.

[9] Clare Duffy, How Big Tech’s Pandemic Bubble Burst, CNN (Jan. 22, 2023, 8:11 AM),

[10] Big Tech Layoffs – A Meltdown or Course Correction? Harvard Prof Ranjay Gulati Explains, The Econ. Times (Nov. 10, 2022, 11:16 AM),

[11] Jake Swearingen, Wall Street Ignored Big Tech’s Bloat During Boom Times. Now It’s Ready to Slice and Dice, Insider (Nov. 17, 2022, 2:03 PM),

[12] Hetler, supra note 2.

[13] Daniel, supra note 5.

[14] Id.

[15] Forbes, supra note 7.

[16] Id.

[17] Bobby Allyn, 5 Takeaways from the Massive Layoffs Hitting Big Tech Right Now, NPR (Jan 26, 2023, 5:00 AM),

[18] Forbes, supra note 7.

[19] Id.

[20] Hetler, supra note 2.




ChatGPT Co-Wrote an Episode of South Park. Will The AI Chatbot Replace the Need for Writers in Hollywood?


By Cleo Scott



ChatGPT has been a hot topic lately. From dating apps[1] to the courtroom[2], the natural language processing tool driven by artificial intelligence technology is transforming the way we do things.[3] Now, the trailblazing chatbot can add television writing to its resume. South Park creators used OpenAI’s chatbot to create the fourth episode of season 26.[4] The episode, titled “Deep Learning,” shows boys from Stan’s class using the chatbot to write essays and send texts to girls.[5] During a speech written by ChatGPT, the character argues that people shouldn’t be blamed for using the chatbot.[6] “It’s the giant tech companies who took Open AI, packaged it, monetized it, and pushed it out to all of us as fast as they could in order to get ahead,” Stan says.[7]

At one point in the episode, Stan asks ChatGPT to write a story that takes place in South Park, where a boy named Stan must convince his girlfriend that it’s okay that he lied about using AI to text her.[8] After sending the request to ChatGPT, the chatbot begins “thinking” and replies with a story within seconds.[9] “Once upon a time, there was a boy named Stan who lived in South Park. Stan loved his girlfriend very much, but lately, he hadn’t been truthful with her. One day, when Stan got to school, he was approached by his best friend,” the response read.[10]

The ending credits show that the episode was written by both Trey Parker and ChatGPT.[11] While it is remarkable how advanced AI has become, people are now wondering if AI tools like ChatGPT will soon replace the need for human writers. OpenAI co-founder and president Greg Brockman thinks the chatbot could even be used to fix the last season of Game of Thrones.[12] “That is what entertainment will look like,” Brockman said at a SXSW panel. “Maybe people are still upset about the last season of Game of Thrones. Imagine if you could ask your A.I. to make a new ending that goes a different way and maybe even put yourself in there as a main character or something.”[13] Others also think ChatGPT should be used for television writing. For instance, Deadline used ChatGPT to create a pitch for a Mad Max reboot.[14] The chatbot responded with a detailed pitch outlining the premise of the show.[15] While the pitch needed some tweaking, it was said to be doable.[16]

Brockman thinks ChatGPT could help do the “drudge work” for writing but also add a more “interactive” entertainment experience.[17] Hollywood is now monitoring the potential impact of ChatGPT on the industry.[18] The Writers Guild of America West said they are “monitoring the development of ChatGPT and similar technologies in the event they require additional protections for writers.”[19] On the other hand, screenwriters interviewed by The Hollywood Reporter see ChatGPT as a potential tool to aid the writing process instead of a tool that will replace the work of writers.[20]

The issue is that what often takes writers weeks or months to formulate only takes ChatGPT 30 seconds.[21] Brockman said ChatGPT could take over the types of jobs where users “didn’t want human judgment there in the first place.”[22] Big Fish and Aladdin writer John August doesn’t think the AI chatbot will be replacing the kind of writing they’re doing in writers’ rooms anytime soon.[23] Still, he thinks we should start thinking about the best ways to use the tool.[24] “There certainly is no putting the genie back in the bottle. It’s going to be here, and we need to be thinking about how to use it in ways that advance art and don’t limit us.”[25]






[1] Anna Iovine, Tinder users are using ChatGPT to message matches, MASHABLE (Dec. 17, 2022),

[2] Janus Rose, A Judge Just Used ChatGPT to Make a Court Decision, Vice (Feb. 3, 2023),

[3] See Natasha Lomas, ChatGPT shrugged, TechCrunch (Dec. 5, 2022) (quoting “ChatGPT is a new artificial intelligence (AI) tool that’s designed to help people communicate with computers in a more natural and intuitive way — using natural language processing (NLP) technology.”),

[4] Stacy Liberatore, South Park’s latest episode was co-written by ChatGPT: ‘Deep Learning’ ends with a script generated by OpenAI’s chatbot, Daily Mail (Mar. 17, 2023),

[5] Id.

[6] Id.

[7] Id.

[8] Id.

[9] Liberatore, supra note 4.

[10] Id.

[11] Id.

[12] J. Clara Chan, Using ChatGPT to Rewrite ‘Game of Thrones’? OpenAI Co-Founder Says “That Is What Entertainment Will Look Like”, The Hollywood Reporter (Mar. 10, 2023),

[13] Id.

[14] Melissa Murphy, ChatGPT Is Going To Start Writing Hollywood Movies?, Giant Freakin Robot (last visited Mar. 18, 2023),

[15] Id.

[16] Id.

[17] Chan, supra note 12.

[18] Id.

[19] Id.

[20] Id.

[21] Murphy, supra note 14.

[22] Chan, supra note 12.

[23] Katie Kilkenny & Winston Cho, Attack of the Chatbots: Screenwriters’ Friend or Foe?, The Hollywood Reporter (Jan. 12, 2023),

[24] See id.

[25] Id.


How Doctors Used Patients’ Dreams to Further Their Own

By Jessica Birdsong





Many of us have seen the Netflix documentary Our Father, which presents a disturbing tale of a physician, Dr. Donald Cline, who, during the 1970s and 80s, performed inseminations on patients using his own sperm without their knowledge or consent.[1] The extent of his actions is unknown, but he fathered at least 94 biological children, and possibly many more.[2] The discovery of this deception has been devastating for the victims, as they grapple with the loss of their identity and the revelation of having numerous half-siblings.[3] The mothers who were affected have also been left feeling violated and betrayed.[4]

Legal action was taken by some of the affected siblings, but they were met with disappointment. Despite Cline’s egregious actions, he was not charged with rape, battery with bodily waste, or even with criminal deception.[5] Instead, he was only charged with obstruction of justice for being untruthful, resulting in a $500 fine and no jail time.[6] This lack of legal consequences stems from the fact that no law in Indiana or most other states specifically prohibits a doctor from using their own sperm in their patients.[7]

Regrettably, Cline’s story is not unique. In a 2023 decision, a judge dismissed claims made by offspring who discovered that a Connecticut doctor had used his own sperm to impregnate their mothers without their knowledge.[8] After shocking results from an at-home DNA test, the plaintiffs discovered they were half-siblings.[9] They both brought claims of negligence, fraudulent concealment, lack of informed consent, and unfair trade practices, citing the mental anguish and physical injury they have suffered as a result of their discovery.[10]

Plaintiff Flaherty alleges that he sustained and continues to suffer mental anguish and physical injury through his emotional and psychological well-being, as a result of the defendant’s conduct.[11] The court found that because Flaherty doesn’t require any extraordinary care for his injury, this claim is precluded.[12] Further, plaintiff Suprynowicz alleges that she suffers from a genetic condition as a result of the defendant’s negligence that “limits her earning capacity and impairs her ability to earn a living.”[13] The court responded that because the plaintiff never had a wage-earning capacity taken away by the doctor’s conduct, she could not claim compensation for its loss.[14]

Overall, the court found that the plaintiffs’ claims fell under the category of “wrongful life,” a cause of action that has been declined by the majority of courts in the country.[15] The court argued that the plaintiffs could not recover for harm resulting from the achievement of life, and also raised concerns about the difficulty of quantifying damages in cases involving the weighing of an impaired life against no life at all.[16]

Thankfully, there is some hope for change. In January 2023, a federal bill was introduced to establish that it is a criminal offense for medical professionals to knowingly misrepresent the nature or source of DNA used in any procedure that involves assisted reproductive technology.[17] The Protecting Families from Fertility Fraud Act proposes a new federal crime under the Sexual Assault chapter, which would provide greater clarity and legal protection to those affected by fertility fraud.[18]







[1] Lindsey L. Wallace, Netflix’s Our Father Tells The True Story of a Fertility Doctor Who Used His Own Sperm on Patients, Time (May 12, 2022, 5:54 PM),

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Wallace, supra note 1.

[7] Id.

[8] Suprynowicz v. Tohan, X03-CV-21-6140245-S, 2023 WL 2134547, at *1 (Conn. Feb. 17, 2023).

[9] Id.

[10] Id. at *2.

[11] Id. at *5.

[12] Id.

[13] Suprynowicz, 2023 WL 2134547, at *5.

[14] Id.

[15] Id. at *4.

[16] Id.

[17] Press Release, U.S. Congressman Joseph Morelle, Congressman Joe Morelle Acts To Combat Fertility Fraud (Feb. 9, 2023),

[18] Id.





The FTX Saga: There’s No New Hope

By Dante Bosnic





As the Sam Bankman-Fried and FTX saga continues, more and more details are coming out regarding the once-famed cryptocurrency giant. According to Protos, Alameda Research purchased HiveEx, an Australian over-the-counter (OTC) trading desk, in 2020 and immediately appointed Bankman-Fried as Director.[1] Fred Schebesta, one of HiveEx’s founders, also purchased a stake in Goldfields Money, a local Australian bank, in 2018 and announced his intention to launch Australia’s first crypto bank.[2] After Schebesta purchased this stake and before Alameda Research’s acquisition, HiveEx had advertised its ability to get other crypto companies banking access, even those that other banks had repeatedly rejected.[3]

FTX’s reach extended beyond Australia: the Financial Times (FT) has reported on FTX-integrated OTC desk Genesis Block, which allowed Hong Kong residents to exchange cash for cryptocurrency or vice versa.[4] A former employee of Genesis detailed to FT that the company had people lining up in the streets with bags of cash to exchange for cryptocurrency.[5] Both HiveEx and Genesis Block seem to have served as important on/off-ramps for FTX and Alameda Research, partly thanks to their connections to the banking system.[6] Beyond Australia and Hong Kong, DAAG Trading DMCC, based out of the United Arab Emirates, was also included in FTX’s bankruptcy.[7]

In addition to news regarding FTX’s acquisitions, more information has also come to light regarding FTX’s charitable contributions. According to Time, leaders of the Effective Altruism (EA) movement were repeatedly warned beginning in 2018 that Sam Bankman-Fried was unethical, duplicitous, and negligent in his role as CEO of Alameda Research.[8] They apparently dismissed those warnings, sources say, before taking tens of millions of dollars from Bankman-Fried’s charitable fund for effective altruist causes.[9] After FTX’s collapse, William MacAskill, the Oxford moral philosopher and intellectual figurehead of EA, whose movement sets out to help the global poor, tweeted, “I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.”[10] Additionally, MacAskill declined to answer a list of detailed questions from TIME, stating, “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly. I look forward to the results of the investigation and hope to be able to respond more fully after then.”[11] Furthermore, one person connected to MacAskill stated, “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance.”[12] While there are still many messy details regarding how many warnings MacAskill and the EA leaders received, it is clear Bankman-Fried’s wealth allowed him to repeat the same mistakes that likely led to FTX’s downfall.[13]

As more and more profiles come out weekly regarding Bankman-Fried and FTX, his court battle has continued as well. In early March, Bankman-Fried’s lawyers reportedly argued that it might be necessary to delay his criminal trial, scheduled for October 2.[14] In a letter to U.S. District Judge Lewis Kaplan, the 31-year-old former billionaire’s lawyers said federal prosecutors in Manhattan had not yet turned over evidence collected from electronic devices belonging to Caroline Ellison and Gary Wang, previously two of their client’s closest associates.[15] “While we are not making such an application at this time, we wanted to note this issue for the Court now,” Christian Everdell, one of Bankman-Fried’s lawyers, wrote in the letter. Along with handling this request, Judge Kaplan has also questioned Bankman-Fried’s bail conditions.[16] According to CNN, Kaplan said he is still not convinced that the founder of the bankrupt crypto trading platform FTX could not circumvent the more restrictive bail conditions filed last week.[17] Bankman-Fried, who did not attend the hearing, is currently under house arrest at his parents’ home in Palo Alto, California.[18] Kaplan expressed concerns about the possibility of Bankman-Fried using other people’s devices if they are brought into his California residence, noting that Bankman-Fried could use a flip phone to call someone and convey what he would otherwise send in an email or text. Kaplan also said he would sign an order modifying the conditions to allow Bankman-Fried access to an FTX database to prepare for trial, but that order also needed further restrictions.[19]

As the saga continues, things only seem to be getting worse for Bankman-Fried. It will be interesting to see what else surfaces as he approaches his trial in October.






[1] Protos Staff, HiveEx, Genesis Block, and SBF’s trading desk network, Protos (Mar. 13, 2023),

[2] Id.

[3] Id.

[4] Id.

[5] Protos Staff, supra note 1.

[6] Id.

[7] Id.

[8] Charlotte Alter, Exclusive: Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed, Time (Mar. 15, 2023),

[9] Id.

[10] Charlotte Alter, supra note 8; Gideon Lewis-Kraus, The Reluctant Prophet of Effective Altruism, The New Yorker (Aug. 8, 2022),

[11] Charlotte Alter, supra note 8.

[12] Id.

[13] See id.

[14] See Luc Cohen, Bankman-Fried’s lawyers say October trial may need to be delayed, Reuters (Mar. 9, 2023),

[15] Id.

[16] See Lauren del Valle, Judge concerned Sam Bankman-Fried is too ‘technologically savvy,’ could find a way around tech restrictions, Cnn Business (Mar. 10, 2023),

[17] Id.

[18] Id.

[19] Id.




What is the Biden Administration’s new National Cybersecurity Strategy, and will it mean we can keep TikTok?

By: Paige Hastings





On March 2, the Biden-Harris Administration released a new National Cybersecurity Strategy (The Strategy) to create a “safe and secure digital ecosystem for all Americans.”[1] In different contexts, the specific meaning of cybersecurity can vary, but cybersecurity policies are extremely important on the national, local, and individual levels.[2]

The Strategy calls for defending critical infrastructure, disrupting security threats, shaping market forces by allocating responsibilities, investing in a plan for lasting innovation, and creating international partnerships to pursue common technology goals.[3] These actions are meant to handle hacking threats more aggressively, disrupt intruders of U.S. computer networks, and hold companies more accountable.[4] The establishment of minimum security standards could force software manufacturers and technology companies to take on the burden of implementing more secure software and better protect consumers.[5] The heightened accountability would be a significant shift from current insufficiencies in holding technology companies responsible for securing user accounts and information.[6]

The United States’ sectoral approach to technology law means many different cybersecurity laws and regulations create a patchwork of protection.[7] Recent threats from hackers, cyberterrorists, and data breaches have led to an increased examination of the U.S.’s regulatory approach.[8] Instead of calling for omnibus legislation, The Strategy addresses regulatory inadequacies by recognizing the need to renovate existing policies.[9]

The revamp would include building on, harmonizing, and streamlining existing regulations to empower the current frameworks’ support of national security and public safety.[10] The Strategy will “use existing authorities to set necessary cybersecurity requirements in critical sectors. … (and) leverage existing cybersecurity frameworks,” such as CISA’s Cybersecurity Performance Goals, to accomplish these directives.[11] Implementing existing guidelines, like the National Institute of Standards and Technology’s Framework for Improving Critical Infrastructure Cybersecurity, will hopefully result in stricter security obligations and could lead to noticeable advancements more quickly.[12] Despite its potential for expediency, The Strategy’s method might be difficult to enforce without a legislative overhaul and before the next presidential election.[13] Existing policies have been criticized for their inability to control large technology and software companies like Meta Platforms, Inc., Google, and Apple Inc., so cybersecurity infrastructure may not be equipped to effectuate the goals of responsibility and accountability The Strategy hopes to produce.[14]

Concerns, interest, and public outcry over data security have been increasing.[15] Increased awareness of companies profiting from lax data security systems and personal information, along with high-profile data breaches, has heightened concerns about cybersecurity in the private sector.[16] Data breaches and the subsequent abuse of private information are especially alarming when consumers lack the know-how and power to protect their data.[17] The responsibility for data security must shift from consumers to large, private sector software companies for advancements in consumer data protection.[18] On a national level, awareness of cyberterrorism dangers has also risen due to conflicts with Russia and risks from platforms like TikTok.[19] Americans have been especially captivated by proposals to ban TikTok in response to its potential threats.[20] Success of The Strategy could prevent such drastic, potentially censorious measures by fortifying our national data protection systems.

Effective collaboration will be integral to successfully executing The Strategy and establishing safer internet use for consumers and our nation.[21] The proposed changes involve government regulation, oversight, enforcement, and participation from large companies and the public.[22] Although it may seem like a lofty request, the interconnected nature of modern society coupled with technological developments means that cyber threats are constant, evolving, and not exclusively important to national security. These dangers affect individuals, organizations, and entire societies, making participation on every level not just important but unavoidable.[23] The private sector, with its infrastructure, services, and market power, must take proactive steps to safeguard data. Technology and software companies need to shoulder the additional responsibility The Strategy seeks to impose, potentially over economic interests, to improve the storage and protection of consumer information. Additionally, the public needs better education about cyber risks so that they may take effective protective action. Our government can provide regulatory frameworks, intelligence, and resources for cyber protections, but it cannot do it alone. Only through an alliance with individuals and companies can The Strategy, and its underlying principles, create the strong and resilient cybersecurity ecosystem that we need.







[1] Press Release, The White House, Fact Sheet: Biden-⁠Harris Administration Announces National Cybersecurity Strategy (Mar. 2, 2023)(available at

[2] Jeff Kosseff, Defining Cybersecurity Law, 103 Iowa L. Rev. 985, 987-89 (2018); see Cybersecurity Act of 2015, Pub. L. No. 114-113, Div. N, § 1(a), 129 Stat. 2935 (codified at 6 U.S.C.A. §§ 1501–10 (West 2016)) (neglecting to set forth a definition for cybersecurity); see What is Cybersecurity?, Cybersecurity & Infrastructure Security Agency: News (Feb. 1, 2021),; Jessica Farrelly, High-Profile Company Data Breaches 2023, Electric: Blog (Mar. 7, 2023),; Christopher Yasiejko, Prisma Labs Sued Over Lensa AI App’s Biometric Data Harvesting, Bloomberg Law: News (Mar. 14, 2023, 7:00 PM),; Skye Witley, 2023’s Largest Health Data Breach So Far Brings Legal Flurry, Bloomberg Law: News (Mar. 14, 2023),; Naureen S. Malik, US Cyber Official says China is ‘Big Threat’ to Energy Industry, Bloomberg Law: News (Mar. 10, 2023, 10:10 AM),; Russia Cyber Threat Overview and Advisories, Cybersecurity & Infrastructure Sec. Agency (last visited Mar. 15, 2023); Press Release, The White House, Statement by President Biden on our Nation’s Cybersecurity (Mar. 21, 2022) (available at

[3] President Biden, National Cybersecurity Strategy, The White House 4 (Mar. 1, 2023),

[4] Ben Kochman, 4 Highlights From Biden’s Beefed Up Cybersecurity Strategy, Law360: Analysis (Mar. 2, 2023, 10:20 PM),

[5] Id.; National Cybersecurity Strategy, supra note 3, at 8-10.

[6] Katrina Manson, Cyber Plan Would Hold Software Makers Responsible in Hacks, Bloomberg Law: Privacy & Data Sec. (Mar. 2, 2023, 3:34 PM),

[7] Janine S. Hiller et al., Cybersecurity Carrots and Sticks, Am. Bus. L. J. (forthcoming 2023) (manuscript at 20-30) (available at; Jeff Kosseff, Updating Cybersecurity Law, Hous. L. Rev. (forthcoming 2023) (manuscript at 8-24) (available at

[8] Jeff Kosseff, supra note 2, at 1001-05; Skye Witley et al., Why TikTok App Bans are Trending Across the US: Explained, Bloomberg Law: Privacy & Data Sec. (Mar. 8, 2023, 5:05 AM),; Christopher Bing, Russian Hackers Preparing New Cyber Assault Against Ukraine – Microsoft Report, Reuters: Technology (Mar. 15, 2023, 3:09 PM),

[9] National Cybersecurity Strategy, supra note 3, at 5-9.

[10] Id. at 8.

[11] Id.

[12] Id.

[13] Katrina Manson, supra note 6.

[14] Id.

[15] Christopher Brown, Website-Browsing Surveillance Suits Erupt After Appellate Ruling, Bloomberg Law: News (Sept. 23, 2022, 4:45 AM),; Brenna Goth, Florida ‘Digital Rights’ Push Big Tech Into DeSantis Culture War, Bloomberg Law: News (Mar. 15, 2023, 5:00 AM),

[16] Brenna Goth & Skye Witley, Data Privacy ‘Panoply’ Looms as States Move to Fill Federal Hole, Bloomberg Law: News (Jan. 19, 2023, 5:01 AM),

[17] Mason Storm, When the Consumer Becomes the Product: Utilizing Products Liability Principles to Protect Consumers from Data Breaches, 29 Rich. J.L. & Tech. 1, 4-11 (2023); Jen Easterly, The Cost of Unsafe Technology and What We Can Do About It, Cybersec. & Infrastructure Sec. Agency: Blog (Mar. 10, 2023),

[18] Id.

[19] Bing, supra note 8; Josh Liberatore, GAO Warns US Gov’t About ‘Catastrophic’ Cyber Risk, Law360: News (June 22, 2022),; Malik, supra note 2.

[20] Witley et al., supra note 8; Anna Edgerton, US TikTok Ban Advances in House After Flurry of China Bills, Bloomberg Law: News (Mar. 1, 2023, 10:29 AM),

[21] National Cybersecurity Strategy, supra note 3, at

[22] Id.

[23] Narenda Sharma et al., Cost and Effects of Data Breaches, Precautions, and Disclosure Laws, 8 Int’l J. Emerging Trends Soc. Sci. 33, 36 (2020).



Informatics for Health in a Changing Climate

By W. Kyle Resurreccion




The single biggest health threat to humanity is climate change.[1] In 2018, the 24th Conference of the Parties (COP 24) to the United Nations Framework Convention on Climate Change (UNFCCC) reported that health-damaging air pollution produced by the burning of fossil fuels kills over seven million people a year, making it the second leading cause of death from non-infectious diseases globally.[2] Climate change also leads to increased occurrences of extreme weather events; for example, between 2000 and 2016, the number of vulnerable people exposed to heat waves increased by 125 million.[3] The worsening climate also affects the access, quality, and cost of healthcare.[4] In the United States, almost three-quarters of the 158 hospital evacuations between 2000 and 2017 were due to climate-sensitive events such as hurricanes and wildfires, and over half required evacuating more than 100 patients.[5] Climate change also worsens other factors essential to health, such as the spread of infectious diseases, water and sanitation infrastructure, and food insecurity and malnutrition.[6]

The future does not look any brighter. The World Health Organization (WHO) estimates that climate change will cause approximately 250,000 more deaths annually between 2030 and 2050 and threatens to undo the last fifty years of progress in development, global health, and poverty reduction.[7] Importantly, the people who will be harmed first and worst – people in low-income and disadvantaged countries and communities – contribute the least to its causes.[8]

The inherent disconnection between regimes that address climate change and human rights, like the right to health, adds to the difficulty of finding effective solutions.[9] While international climate change regimes focus mainly on preventing and mitigating environmental harm, they do not directly address a country’s responsibility to protect human rights.[10] One example can be found in the Kyoto Protocol of 1997, an international treaty that obligated countries to reduce greenhouse gas emissions but did not clearly state its health-related goals nor impose penalties on countries for human rights injuries due to climate change.[11] The dynamic nature of climate change and the web of legal and socioeconomic determinants across jurisdictions make attempts at unifying both regimes difficult.[12]

An important and increasingly necessary part of the solution may lie in the emerging use of health informatics.[13] Health informatics is the practice of using technological approaches to work with health data, information, and knowledge to improve health and healthcare.[14] Traditionally, health informatics has been used to inform developments in physically and politically established settings such as hospitals, primary care clinics, and biomedical research organizations.[15] For example, data and information provided by this approach enhanced our ability to respond to, recover from, and prepare for pandemics such as COVID-19.[16] But in recent years, this practice has been increasingly used to study environmental determinants of health and the human health aspects of climate change.[17]

Questions regarding the appropriate application of health informatics abound. A broad and important consideration is how such technologies can be applied ethically.[19] One small part of this issue asks how this approach may affect an individual’s right to privacy.[20] For example, in the United States, the federal Health Insurance Portability and Accountability Act (HIPAA) obligates various healthcare industry stakeholders to protect patients’ health information. Still, that law was passed in the late 20th century and has seen no major updates in the past 20 years, leaving substantial gaps in privacy protection with the advent of digital health.[21] Another consideration asks how to implement health informatics in developing countries with limited resources, infrastructure, and trained personnel.[22] There, the lack of legal regimes addressing these novel technologies may, on the one hand, create an incentive to work more freely, but on the other hand, delay implementation due to a lack of legal guidance.[23]

Change is fast approaching, however. In January 2022, the U.S. Department of Health and Human Services announced the Trusted Exchange Framework and Common Agreement (TEFCA), an initiative to establish a standard for interoperability between health information networks across the country to help facilitate the exchange of information.[24] Although still in their infancy and lacking specific climate change-oriented goals, projects such as TEFCA serve as a model for how more developed countries may embrace health informatics to protect the right to health in general and provide useful second-hand information on how health is affected by the climate.[25] A more direct and localized approach lies in creating frameworks such as Green-Mission, published in a 2022 study, which actively combines hospital information management theory with environmental sciences for application in healthcare settings.[26]

The use of health informatics brings the promise of novel and innovative solutions to address current and future threats that endanger one of the oldest widely recognized human rights. This promise comes with its own challenges, ones that legal regimes worldwide must be prepared to tackle. As such, jurisdictions large and small must weigh the benefits and dangers of this approach using scientifically and ethically backed regimes that acknowledge the undeniable connection between one’s health and the planet’s health.






[1] World Health Organization [WHO], Climate Change and Health (Oct. 30, 2021),

[2] 24th Conference of Parties to the United Nations Framework Convention on Climate Change, COP24 Special Report: Health and Climate Change, at 16 (Dec. 3, 2018) [hereinafter COP24 Special Report],

[3] Id. at 20, 23.

[4] Renee N. Salas et al., Adding a Climate Lens to Health Policy in the United States, 39 Health Affs., no. 12, Dec. 2020, at 2063,

[5] Id. at 2064.

[6] COP24 Special Report, supra note 2, at 20.

[7] WHO, supra note 1.

[8] Id.

[9] Chuan-Feng Wu, Challenges to Protecting the Right to Health Under the Climate Change Regime, 23 Health and Hum. Rts. J., no. 2, Dec. 2021, at 121, 122-23,

[10] Id.

[11] Id.

[12] Id. at 129.

[13] Kathleen Gray, Climate Change, Human Health, and Health Informatics: A New View of Connected and Sustainable Digital Health, 4 Frontiers in Digit. Health 1 (2022),

[14] Id. at 1.

[15] Id. at 2.

[16] Brian E. Dixon et al., Managing Pandemics with Health Informatics, IMIA Y.B. of Med. Informatics, 2021, at 69, 71,

[17] Gray, supra note 13, at 2.

[18] Id.

[19] Kenneth W. Goodman, Ethics in Health Informatics, IMIA Y.B. of Med. Informatics, 2020, at 26,

[20] Kim Theodus et al., Health Information Privacy Laws in the Digital Age: HIPAA Doesn’t Apply, 18 Persps. in Health Info. Mgmt., no. 1, Dec. 7, 2020, at 1,

[21] Id. at 7.

[22] Daniel Luna et al., Health Informatics in Developing Countries: Going Beyond Pilot Practices to Sustainable Implementations: A Review of Current Challenges, 20 Healthcare Informatics Rsch., no. 1, at 3,

[23] Id. at 5.

[24] U.S. Dep’t of Health and Hum. Servs., Trusted Exchange Framework and Common Agreement (TEFCA) (Feb. 8, 2023),

[25] U.S. Dep’t of Health and Hum. Servs., Off. of the Nat’l Coordinator for Health Info. Tech, Trusted Exchange Framework (TEF): Principles for Trusted Exchange (2022),

[26] Marieke E. Sijm-Eeken et al., Medical Informatics and Climate Change: A Framework for Modeling Green Healthcare Solutions, 29 J. of the Am. Med. Informatics Ass’n, no. 12, Dec. 2022, at 2083,







You Are What You Eat: Is Human Steak Something We Can Sink Our Teeth Into?

By Madison Edenfield




What started out as a commentary on the ethics of the meat industry has stirred questions about eating meat products grown from human cells. Is this synthesized cannibalism, or simply the future of meat?[1]

Andrew Pelling, with the help of industrial designer Grace Knight and artist and researcher Orkan Telhan, developed a grow-your-own steak kit using human cells and blood.[2] This DIY kit involves collecting cells from inside your cheek with a cotton swab and placing them onto “pre-grown scaffolds made from mycelium.”[3] The cells are then stored in a warm environment and fed with a serum for about three months until the steak is fully grown.[4]

This artistic statement stirred up controversy in the art world, but it also poses an interesting question: what if we did start eating meat grown from our own cells? I will discuss two topics arising from this question: who would regulate this human, lab-grown meat, and would eating it count as cannibalism?

First, U.S. food production is overseen by two regulatory agencies: the Food & Drug Administration (FDA) and the U.S. Department of Agriculture (USDA).[5] These two agencies oversee different aspects of food production and have different requirements and frameworks. Thus, it is important to distinguish which agency would oversee lab-grown, or cultured, meat. Cultured meat does not fit neatly within the parameters of the FDA and USDA because it falls within both of their jurisdictions.[6]

The USDA oversees meat production, but in the case of cultured meat, the stem cells are extracted without slaughter. These cells are then managed and grown in laboratories.[7] However, in the abstract, this process could be similar to cheese or yogurt fermentation, which falls under the FDA’s jurisdiction. So, the end product is meat (regulated by the USDA), possibly with other ingredients (regulated by the FDA) like edible polymer scaffolds.[8] Due to the complexity of cultured meat, the USDA and FDA agreed in March 2019 to jointly manage cultured meat.[9]

The FDA will oversee the collection and management of stem cells as well as cell growth. In simple terms, the FDA will govern all the steps of cultured meat production before it is actually meat. From there, the USDA will take over, overseeing the processing of the tissue into meat and labeling the final product. This combined approach will cover all cultured meat derived from livestock and poultry, but will it potentially cover cells derived from humans?[10]

Second, there is surprisingly sparse information on whether eating human, lab-grown meat would be considered cannibalism. However, a few sources seem to believe that consuming human, cultured meat would not be considered cannibalism.[11]

Dr. Abdulaziz Sachedina, a professor at George Mason University, stated that, “‘Human meat’ produced through scientific method rather than human person is actually non-human in physical sense. It is human only in biochemical composition.”[12]

However, Bill Schutt, professor of biology and author of Cannibalism: A Perfectly Natural History, argues that eating human, cultured meat falls into a gray area. “I suppose if these are cultured human cells we’re talking about, then I’d have to say yes, I’d consider this cannibalism.” But if this meat is derived from human tissue, Schutt concludes that the tissue “isn’t an individual any more than an isolated neuron or muscle fiber is an individual.”[13]

While this may seem like something out of a science fiction novel, lab-grown meat is not that far from being integrated into our daily lives. As of 2021, there were over 100 companies focused solely on cultivated meat, and 60 additional companies had announced services or products connected to cultured meat.[14] However, no food made from cultured animal cells is currently available for sale in the U.S. market.[15]

Whether you’re appalled or intrigued by the idea of eating human cells, cultured meat from humans is not likely to catch on in the mainstream culinary world.[16] Dr. Koert Van Mensvoort, director of the Next Nature Network and author of In Vitro Meat Cookbook, believes that there will be a “huge reluctance against in vitro human meat.”[17] Van Mensvoort predicts that “it will be very, very niche. Maybe a very haute-cuisine restaurant will offer this once-in-a-lifetime, special experience for which you pay a lot of money.”[18]

While delivered under the humorous guise of eating human steak, this idea represents a very important legal question that needs to be examined as the world evolves– how will the law adapt? Is the legal system capable of keeping up with new technological advances and our increasingly complex world? Food for thought.





[1] Iain Leggat, Scientists have created an edible steak made from human cells – here’s why, The Scotsman (Nov. 19, 2020, 4:44 PM),

[2] Luana Steffen, Grow-Your-Own Human Meat Kit – “Technically” Not Cannibalism, Intelligent Living (Jan. 29, 2021),

[3] Id.

[4] Id.

[5] Luke Grocholl, Clean Meat – How an Emerging Technology Will Be Regulated, Millipore Sigma (last visited Mar. 3, 2023),

[6] Id.

[7] Id.

[8] Id.; see also Food and Drug Administration, Human Food Made with Cultured Animal Cells (Nov. 16, 2022),

[9] U.S. Gov’t Accountability Off., GAO-20-325, Food Safety: FDA and USDA Could Strengthen Existing Efforts to Prepare for Oversight of Cell-Cultured Meat (2020).

[10] Id. at 19.

[11] Whitney Kimball, Is Eating Synthetic Human Flesh Cannibalism?, Gizmodo (Oct. 16, 2017),

[12] Id.

[13] Id.

[14] 2021 State of the Industry Report: Cultivated meat and seafood, Good Food Institute § 1 at 22.

[15] Food and Drug Administration, Human Food Made with Cultured Animal Cells (Nov. 16, 2022),

[16] Rich Wordsworth, What’s wrong with eating people?, Wired (Oct. 28, 2017, 8:00 AM),

[17] Id.

[18] Id.



