The first exclusively online law review.


Digital License Plates Are Available in Some States, But People Have Concerns


By Cleo Scott

 

An increasing number of car owners across the country are now able to toss their old metal license plates and upgrade to a digital one. But this new car enhancement is already raising privacy concerns.[1]

Digital license plates are currently made by only one company, Reviver.[2] They are described as vehicle-mounted identification devices that produce a radio signal for tracking and digital monitoring.[3] Owners can also customize their digital plates by changing the color and border displays.[4] The plates connect to an app that gives owners access to vehicle location services, security features, stolen-vehicle reporting, and registration renewal without needing stickers or a trip to the DMV.[5] Additionally, the plates can display emergency messages, such as when there is an AMBER Alert.[6]

Many think that digital plates will make our lives much easier. “It’s really going to be much more beneficial for them and make our processing much more efficient,” California DMV Policy Division deputy director Bernard Soriano told ABC30 Fresno. “It’s a big change, we’re no longer your father’s DMV, and I think it’s something we can all embrace and be part of.”[7] Cars with digital plates are legal to drive anywhere in the United States.[8] However, the plates are only available for purchase and DMV registration in Michigan, Arizona, and California, and in Texas only for commercial vehicles.[9] California is the latest state to make digital plates available to everyone in the state.[10] Governor Gavin Newsom signed a law in late September 2022 extending the digital option to all drivers.[11]

Notably, the built-in location tracker in the plates will allow the police to locate a car if it is stolen.[12] However, people are uneasy about the plates’ tracking capabilities.[13] Back in 2018, the San Francisco-based Electronic Frontier Foundation, a nonprofit that promotes civil liberties in the digital world, warned that the devices would turn individual cars into a “honeypot of data” because they would “record the drivers’ trips to the grocery store, to protests, or to an abortion clinic.”[14] “Your locational history has the potential to reveal a lot more than . . . where you happen to be at a particular moment in time,” said Stephanie Lacambra, a criminal defense attorney for the foundation. “It can reveal your associations, who you speak with, where you go to work, where you live.”[15]

However, California Assembly member Lori Wilson says the tracking features can be disabled on privately owned cars and that the California bill allows for regular review of safety measures.[16] “Anytime that our [California Highway Patrol] or we feel like safety is a concern in terms of license plates being altered in any kind of way, they can pull that back and make sure that is taken care of before it’s continued use,” Wilson said.[17] Reviver claims it has taken measures to deter hacking and to keep the plates from being stolen.[18] “Both our RPlate Battery and RPlate Wired have tamper-proof mounting, robust built-in anti-theft features, and communicate using secure cloud communication,” the company says on its website.[19] Reviver also states that it does not share data with the DMV or law enforcement.[20] Still, some remain worried about the trouble the digital plates may bring, employment attorneys in particular.[21]

Steven Gallagher of Fox Rothschild LLP said the digital plates are a “privacy nightmare” for employers.[22] He states that employers may only monitor employees using digital license plates if it is “strictly necessary” for the performance of the employee’s duties.[23] If an employer chooses to monitor employees’ whereabouts using digital plates, it must first provide the employee with a comprehensive notice, according to Gallagher.[24] Among other requirements, the notice must include a description of the specific activities that will be monitored and a description of the dates, times, and frequency of the monitoring.[25] Gallagher states that using digital plates to track employees also implicates several other privacy laws, including “obligations on employers for handling, storing, and conveying data retrieved from the plates.”[26] Gallagher thinks the digital plates are not worth the trouble they may bring employers.[27] “[The] privacy concerns (including data requirements), the potential civil penalties at stake, and risk that the Labor Commissioner will find tracking was not ‘strictly necessary’ probably outweigh the benefits for most employers,” he wrote.[28] Despite these concerns, the list of states that allow their residents to purchase digital license plates may be growing.[29] Reviver says that at least another 10 states are in some way considering the adoption of digital license plates.[30]


[1] Joe Hernandez, California drivers can now sport digital license plates on their cars, NPR (Oct. 15, 2022, 11:53 AM), https://www.npr.org/2022/10/15/1129305660/digital-license-plates-california

[2] Renee Martin, Digital License Plates Now Legal for Everyone in California, WAY.COM (last visited Nov. 10, 2022), https://www.way.com/blog/digital-license-plates-california/#What_are_digital_license_plates

[3] Id.

[4] Vivian Chow, New California law legalizes digital license plates, KTLA (Oct. 12, 2022, 11:25 PM), https://ktla.com/news/california-wire/new-california-law-legalizes-digital-license-plates/

[5] Id.

[6] Id.

[7] Dustin Dorsey, California approves digital license plates for all vehicles; here’s how it works, ABC 30 (Oct. 11, 2022), https://abc30.com/digital-license-plates-california-what-are-plate-how-do-work/12316715/

[8] Reviver, https://reviver.com/geographic-expansion/#:~:text=United%20States,Arizona%2C%20California%2C%20and%20Michigan (last visited Nov. 11, 2022).

[9] Id.; Hernandez, supra note 1.

[10] Hernandez, supra note 1.

[11] Id.; see Assem. Bill 984, Ch. 746, 2021-2022 Reg. Sess. (Cal. 2022).

[12] Id.

[13] James Doubek, Digital License Plates Roll Out In California, NPR (June 1, 2018, 8:14 AM), https://www.npr.org/sections/thetwo-way/2018/06/01/616043976/digital-license-plates-roll-out-in-california

[14] Rachel Swan, California’s digital license plates: road to convenience or invasion of privacy?, BOSTON.COM (May 31, 2018), https://www.boston.com/cars/car-news/2018/05/31/california-digital-license-plates/

[15] Id.

[16] Dorsey, supra note 7.

[17] Id.

[18] RPlate Security – can it be stolen or hacked?, Reviver (last visited Nov. 11, 2022), https://support.reviver.com/knowledge/rplate-security.

[19] Id.

[20] Hernandez, supra note 1.

[21] See Steven Gallagher, Digital License Plates – A Privacy Nightmare for Employers, Fox Rothschild LLP (Oct. 14, 2022), https://californiaemploymentlaw.foxrothschild.com/2022/10/articles/advice-counseling/digital-license-plates-a-privacy-nightmare-for-employers/

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Gallagher, supra note 21.

[27] Id.

[28] Id.

[29] See Sebastian Blanco, California Extends Digital License Plate Option to Everyone, Car And Driver (Oct. 8, 2022), https://www.caranddriver.com/news/a41564407/california-digital-license-plate-extended/.

[30] Id.

 

Image Source: https://www.caranddriver.com/news/a34748524/digital-license-plates-coming-2021/

Bring on the Babies!

By Jessica Birdsong


In the last five years, the total number of babies born via assisted reproductive technology (“ART”) skyrocketed from five million to eight million, with an increase of one million in the last year alone.[1] Today, one in sixty Americans is born thanks to IVF and other artificial treatments.[2] Sperm and egg donation have become a readily available alternative for many adults seeking to have children. As the use of assisted reproductive technologies increases, more questions arise about the law and ethics surrounding these advanced technologies.

Unofficial policy dictates how sperm banks operate.[3] Although these policies are not mandated by law, regulatory agencies can revoke a bank’s accreditation or license if the standards are not met.[4] Even so, ethical and legal issues continue to arise from the improper management of sperm bank facilities. Regulatory agencies assess the quality of the work and enforce informed consent, but many aspects of the process remain unregulated, and that gap continues to lead ART users into litigation.

Currently, no federal or state law limits the number of donations per individual donor. Limiting the number of offspring from a single sperm donor matters because it prevents accidental consanguinity between donor offspring.[5] Sperm banks do not have to keep track of the births or coordinate with one another.[6] It is not uncommon for donor-conceived children to find half-sibling groups that regularly number in the dozens and occasionally exceed 100.[7] These siblings do not just share inherited traits like eye color and height; some are discovering that they also share genetic variants or mutations linked to cancer and other diseases that come to light only years later.[8]

The federal government only requires donated sperm and eggs to be tested for communicable diseases, not genetic diseases.[9] There are also no federal requirements that sperm banks obtain and verify information about a donor’s medical history, educational background, or criminal record. A big driver in lobbying for change comes from the number of lawsuits in which donors misrepresented themselves and sperm banks failed to properly vet them, or in which banks failed to follow up or adequately disclose critical information to recipients.[10] A recent case decided by the Georgia Supreme Court involved a donor-conceived child who has been diagnosed with an inheritable blood disorder for which his mother is not a carrier.[11] He also has suicidal and homicidal ideations, requiring multiple hospitalizations, therapists, and prescribed anti-depressant and anti-psychotic medications.[12] Almost twenty years after the child was born through ART, the family learned that the sperm donor had actually been diagnosed with psychotic schizophrenia, narcissistic personality disorder, and significant grandiose delusions.[13] They also discovered that the donor was not a Ph.D. candidate with an IQ of 160; in fact, he had no degrees at all at the time of donation.[14] The family was also told he had no criminal history; however, he had an extensive arrest record.[15] At the time of purchase, the sperm bank told the parents that the sperm they were buying came from one of its best donors, yet the facility had not properly vetted this donor at all.[16]

The policies surrounding donor anonymity have arguably caused the most tension in ART regulation. A movement for change comes partly from donor-conceived children who have grown up and now understand the value of knowing their biological parent.[17] Advocates have pointed out that anonymity endangers both the physical and psychological well-being of offspring.[18] Some state legislation has started to move toward this idea. States such as California, Connecticut, Rhode Island, and Washington allow such disclosure for donor-conceived offspring.[19] This is only a small step toward solving the issue, however, because these state laws permit the sperm donor to opt out of the disclosure. Colorado, by contrast, has enacted a bill making it the first state to prohibit anonymous sperm and egg donations.[20] Beginning in 2025, the bill will require donors to agree to have their identity released to children conceived from their donations when those children turn 18.[21] The bill also requires egg and sperm banks to keep updated contact information and medical history for all donors and limits each donor to contributing to no more than 25 families.[22]
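To make the Colorado bill’s record-keeping requirements concrete, the sketch below shows one way a bank’s internal system might track donors against the 25-family cap. The data structure, field names, and Python implementation are purely illustrative assumptions; the bill prescribes the obligations, not any particular software.

```python
# Illustrative only: a minimal, in-memory model of the record-keeping the Colorado
# bill would require (updated contact info, medical history, and a 25-family cap).
# Nothing here reflects an actual bank's system or the statute's text.
FAMILY_LIMIT = 25

donor_registry = {
    "donor-001": {
        "contact": "donor001@example.com",             # must be kept up to date
        "medical_history": ["no known inheritable conditions"],
        "families": set(),                              # family IDs matched so far
    },
}

def record_match(donor_id: str, family_id: str) -> bool:
    """Record a new family for a donor, refusing once the 25-family cap is hit."""
    donor = donor_registry[donor_id]
    if family_id not in donor["families"] and len(donor["families"]) >= FAMILY_LIMIT:
        return False  # cap reached: donor cannot be matched with another new family
    donor["families"].add(family_id)
    return True

print(record_match("donor-001", "family-042"))  # True on the first match
```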

The solution may not be so easy. Research shows that 29% of men would refuse to donate sperm if they could not remain anonymous, shrinking the supply and potentially driving up costs.[23] Overall, these issues will likely become more critical as the technology and its use continue to outpace regulation.


[1] Barbara P. Billauer, The Sperminator as a Public Nuisance: Redressing Wrongful Birth and Life Claims in New Ways (A.K.A. New Tricks for Old Torts), 42 Ark. Little Rock L. Rev. 1, 9–10 (2019).

[2] Id. at 10.

[3] Stanford, https://web.stanford.edu/class/siw198q/websites/reprotech/New%20Ways%20of%20Making%20Babies/spermpol.htm (last visited Nov. 11, 2022).

[4] Id.

[5] Dan Gong et al., An Overview on Ethical Issues About Sperm Donation, 11(6) Asian J. of Andrology 645, 646 (2009).

[6] Sarah Zhang, The Children of Sperm Donors Want to Change the Rule of Conception, The Atlantic (Oct. 15, 2021), https://www.theatlantic.com/science/archive/2021/10/do-we-have-right-know-our-biological-parents/620405/.

[7] Bryn Nelson & Austin Wiles, A Shifting Ethical and Legal Landscape for Sperm Donation, 130 Cancer Cytopathology 572, 572 (2022).

[8] See id.

[9] 21 C.F.R. § 1271.75 (2006).

[10] Nelson & Wiles, supra note 7, at 573.

[11] Norman v. Xytex Corp., 310 Ga. 127, 128 (2020).

[12] Id. at 129.

[13] Id.

[14] Id. at 128.

[15] Id.

[16] See Norman, 310 Ga. at 128.

[17] Zhang, supra note 6.

[18] Nelson & Wiles, supra note 7, at 573.

[19] Id.

[20] 2022 Colo. Legis. Serv. 1 (S.B. 22-224) (West).

[21] Id.

[22] Id.

[23] Nelson & Wiles, supra note 7, at 573.

Image Source: https://www.women-info.com/en/wp-content/uploads/2014/07/infertility-17-1.jpg

 

The Dangers of Artificial Intelligence in Employment Decisions


By Gwyn Powers

Artificial intelligence (“A.I.”) is becoming more and more pervasive in our society, especially in the last decade and during the COVID-19 pandemic.[1] Companies are using A.I. and analytic data to understand their customers and optimize their supply chains.[2] For example, Frito-Lay created an e-commerce website, Snacks.com, during the pandemic and used its data “to predict store openings [and] shifts in demand due to return to work[.]”[3] Companies are not limiting their use of A.I. to measuring productivity and predicting the next chip flavor; human resources departments have used A.I. to help with resume screening since the mid-2010s.[4] One of the major concerns with using A.I. in the hiring process is the potential for discrimination because of implicit bias.[5]

Law Firms and the Cloud

By Manasi Singh


The transition to web-based legal tools has been on the rise for decades, but the one tool law firms seem most hesitant to adopt is cloud technology.[1] Some lawyers are understandably concerned about the security implications of storing confidential information in the cloud, given the risk of exposing their clients’ information to the wider internet. Instead, they cling to their on-site servers and server-based applications, but this fear rests on the assumption that on-site servers are more secure than the cloud, and that assumption is misguided.[2]

The reality is that no computer is 100% secure, except maybe one that is disconnected, turned off, unplugged, and buried underground. So why should law firms prefer cloud-based servers over on-site servers? The answer is simple: cloud technology is held to a higher operational and infrastructure security standard than any on-site server.[3] On-site servers require a team to be on-site, manually updating the systems and security protocols.[4] That becomes a problem if the on-site team has high turnover or cannot physically access the servers often enough; either way, the systems fall behind on required updates, which makes them even more vulnerable to security risks.[5] Cloud-based servers allow these processes to be automated, which improves a firm’s ability to meet core security and compliance requirements while still giving users the freedom to manage and control their data and to decide who gets access to it.[6]
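As a rough illustration of the automation point, a cloud environment can run a scheduled compliance check that flags any machine whose last security update is older than the firm’s policy allows, instead of relying on someone remembering to patch. The server names, thirty-day policy, and Python sketch below are hypothetical and not tied to any particular provider’s tools.

```python
# Hypothetical sketch of an automated patch-compliance check: flag any server whose
# last security update is older than the policy window. All server data is made up.
from datetime import datetime, timedelta

PATCH_POLICY = timedelta(days=30)  # assumed firm policy, not a provider requirement

servers = [
    {"name": "doc-mgmt-01", "last_patched": datetime(2022, 10, 28)},
    {"name": "billing-02", "last_patched": datetime(2022, 8, 1)},
]

def out_of_compliance(servers, now):
    """Return the names of servers that have gone too long without a security update."""
    return [s["name"] for s in servers if now - s["last_patched"] > PATCH_POLICY]

# Run with a fixed "current" date so the example is deterministic: only billing-02
# exceeds the 30-day window and gets flagged.
print("Needs patching:", out_of_compliance(servers, now=datetime(2022, 11, 5)))
```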

I think the move to cloud technology is inevitable for law firms, but the process of getting there has been slow. Cloud servers have been around for at least a decade, yet few law firms have taken any steps to make the transition. The lawyers who have made the move tend to stick to popular cloud services such as Dropbox, Microsoft Teams, iCloud, and Box rather than legal-specific cloud services such as Clio and NetDocuments.[7] Whether that is due to familiarity or simple popularity is not clear.

In order to promote the use of cloud-based servers in law firms, it is important to understand why lawyers are skeptical. In an American Bar Association survey, the primary concern was confidentiality and security, followed closely by a lack of control over data.[8] Neither concern holds up under much scrutiny, and addressing both will be key to promoting the cloud in law firms.


[1] CDW, Cloud Computing and Law Firms (2013) [hereinafter CDW White Paper], https://webobjects.cdw.com/webobjects/media/pdf/Solutions/Legal/122223-White-Paper-Cloud-Computing-and-Law-Firms.pdf.

[2] Id.

[3] Tommy Montgomery, Why your data is safer in the cloud than on premises, TechBeacon (last visited Nov. 5, 2022), https://techbeacon.com/security/why-your-data-safer-cloud-premises.

[4] Id.

[5] Id.

[6] Id.

[7] Dennis Kennedy, 2021 Cloud Computing, American Bar Association (Nov. 10, 2021), https://www.americanbar.org/groups/law_practice/publications/techreport/2021/cloudcomputing/.

[8] Id.

 

Image Source: https://www.webhosting.uk.com/blog/6-reasons-legal-firms-need-cloud-storage

Social Media Platforms’ Potential to Escape Liability for User-Posted Images of Child Pornography

By Brianna Hughes


The internet and social media platforms have become a modern staple in many people’s everyday lives.[1] Most people with internet access can share information, ideas, and other content more easily than ever before and interact with those who are like-minded.[2] While this dissemination of information can be positive, social media platforms face the problem of users sharing content that can be considered objectionable or illegal.[3] Platforms attempt to combat this by imposing guidelines prohibiting certain types of content.[4] These can include limitations on hate speech, harassment, and revenge pornography.[5] Congress highlighted the importance of shielding online providers from liability for what their users post by enacting the Communications Decency Act (“CDA”).[6] The CDA, 47 U.S.C. § 230(c)(1), states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[7] While this provides platforms with significant immunity from liability for most of their users’ posts, if the moderators violate a criminal statute, they may be subject to liability.[8]

The parents of victims of child pornography attempted to impose civil liability on Reddit, one of the most widely used online discussion platforms, but were unable to do so.[9] Reddit is a social media platform that allows its users to post content publicly and to participate in forums devoted to specific topics, called subreddits.[10] Reddit holds the power to remove moderators and content that does not conform to its policies.[11] The parents found sexually explicit photos of their children on the website and reported the images, and Reddit took the photos down. However, the parents then found themselves in a cycle in which the images kept getting reposted.[12] The plaintiffs argued that Reddit earned substantial revenue from these explicit subreddits and did little to put protections in place for those being exploited.[13]

The parents sued pursuant to 18 U.S.C. § 1595, which states that “an individual who is a victim… may bring a civil action against the perpetrator (or whoever knowingly benefits, financially or by receiving anything of value from participation in a venture which that person knew or should have known has engaged in an act…) in an appropriate district court.”[14] Reddit claimed it was shielded from liability by section 230 of the CDA.[15] That section has since been amended by the Fight Online Sex Trafficking Act to allow victims of trafficking to bring civil lawsuits against platforms that helped traffickers.[16] The district court dismissed the parents’ claim, and the Ninth Circuit affirmed.[17]

While the CDA shields Reddit from liability based on its users’ posts, it does not provide immunity if the information violates 18 U.S.C. § 1595.[18] However, the defendant must have actual knowledge of the trafficking and must “assist, support, or facilitate” the trafficking venture.[19] This knowledge standard creates a higher bar for imposing liability on social media platforms.[20] Under this “actual knowledge” analysis, the court looks at the defendant website’s “participation in the venture” and whether it knowingly benefited from participating in child sex trafficking.[21] “Mere association with sex traffickers is insufficient absent some knowing ‘participation’ in the form of assistance.”[22] The plaintiffs in this case failed to show that Reddit knowingly participated in or benefited from a sex trafficking venture; at most, Reddit “turned a blind eye,” which is not enough for the court to impose liability on the platform.[23] The court therefore held that Reddit had not knowingly benefited from knowingly facilitating sex trafficking.[24]

The high bar set by the “knowing” standard makes it difficult for victims of child pornography and revenge pornography to obtain the remedy they seek.[25] Future litigants in cases like these may have to frame their claims around elements common to civil torts, such as intentional infliction of emotional distress, defamation, or breach of privacy.[26] For claims that do not involve intentional torts to succeed, plaintiffs must demonstrate that the platform “materially contributed to the illicit nature of the content by showing that they did more than passively transmit information from third parties.”[27] The CDA presents obstacles for parties seeking redress, but it is not impossible to overcome.[28]


[1] Matthew P. Hooker, Censorship, Free Speech & Facebook: Applying the First Amendment to Social Media Platforms Via the Public Function Exception, 15 Wash. J.L. Tech & Arts 36, 39 (2019).

[2] See id. at 39-40.

[3] Id. at 42.

[4] Id.

[5] Id.

[6] Id. at 55.

[7] 47 U.S.C § 230(c)(1).

[8] Reddit Child Porn Suit Escape Under Section 230 Affirmed (1), Bloomberg Law (Oct. 24, 2022, 3:35 PM), https://news.bloomberglaw.com/tech-and-telecom-law/reddits-section-230-escape-from-sex-trafficking-claims-affirmed.

[9] See Does v. Reddit, Inc., 2022 U.S. App. Lexis 29510, 3 (9th Cir. 2022).

[10] Id. at 4.

[11] Id.

[12] Id. at 4-5.

[13] Id. at 5-6.

[14] 18 U.S.C. §1595.

[15] Reddit, Inc., 2022 U.S. App. Lexis at 6.

[16] Bloomberg Law, supra note 8.

[17] Id.

[18] Reddit, Inc., 2022 U.S. App. Lexis at 7.

[19] Id. at 9.

[20] Bloomberg Law, supra note 8.

[21] Reddit, Inc., 2022 U.S. App. Lexis at 12.

[22] Id. at 20.

[23] Id.

[24] Id. at 21.

[25] Jessy R. Nations, Revenge Porn and Narrowing the CDA: Litigating a Web-Based Tort in Washington, 12 Wash. J.L. Tech. & Arts. 189, 200 (2017).

[26] See id. at 192-195.

[27] Id. at 200.

[28] Id. at 209.

Image source: https://www.politico.com/magazine/story/2015/06/the-truth-about-the-effort-to-end-sex-trafficking-118600/


Social Media: No Longer Novel

By Sophie Deignan


The rapid growth of social media has opened the door to a host of platforms on which people broadcast their personal lives to the general public. For most users, the benefits and drawbacks of communicating on social media are an individual matter; there is no one size fits all. Within the legal community, by contrast, lawyers’ use of social media in their work has become a point of contention, raising various ethical concerns.[1]

Recently, in the Matter of Robertelli,[2] the New Jersey Supreme Court held that the disciplinary charges brought against John Robertelli were not supported by clear and convincing evidence that Robertelli had violated the Rules of Professional Conduct (“RPC”) when his paralegal “friended” the opposing counsel’s client on Facebook.[3] The cause of action against Robertelli began in 2007, when he was defending the Borough of Oakland and the Oakland Police Department in a personal injury lawsuit.[4] As part of his investigation into the alleged injury claimed by the plaintiff, Dennis Hernandez, Robertelli asked his paralegal, Valentina Cordoba, to research Hernandez online for routine background information.[5] While researching Hernandez, Cordoba became friends with him on his private Facebook page and messaged him that he looked like one of her favorite hockey players.[6] There was no further communication between the two, but Cordoba did download a video of Hernandez wrestling with friends after his alleged injury.[7] She presented the video to Robertelli, who in turn deposed Hernandez and shared the video with Hernandez’s attorney.[8] Opposing counsel then accused Robertelli of violating RPC 4.2 by communicating with his client via Facebook without his consent.[9]

In response, Robertelli argued that the video downloaded from Hernandez’s Facebook page was public, though he admitted he did not know what it meant to be “friends” on Facebook or understand the distinction between private and public pages.[10] Over the next decade, the ethics charges against Robertelli were reviewed by the Office of Attorney Ethics (“OAE”), and a Special Master was appointed by the Court to investigate them.[11] In 2021, the court was unconvinced that the OAE had established by clear and convincing evidence that Robertelli violated RPC 4.2.[12] The crux of the issue was that in 2008, Facebook and most other forms of social media were still in their infancy, and familiarity with such platforms was not nearly as widespread as it is now.[13] The court’s holding carves out a very narrow exception: while ignorance is not a defense, it was quite conceivable that, at that time, most individuals would not have understood the nuances of Facebook.[14]

It is highly unlikely that, with the advent of Instagram, Twitter, TikTok, Snapchat, and the many other social media platforms, a court would now find ignorance of social media a plausible defense. In its closing words, the Robertelli court stated, “Lawyers must educate themselves about commonly used forms of social media to avoid the scenario that arose in this case. The defense of ignorance will not be a safe haven.”[15] Social media may exist for many to use in their individual capacity, but for lawyers, the general rules of ethics are adapting and will continue to adapt to keep up with the times. Clear rules surrounding the use of social media will be something to look for in the future, along with an emphasis on lawyers proactively educating themselves about social media.


[1] Marina Wilson, Social Media, Media Interactions, and Legal Ethics, Justia (July 6, 2022), https://onward.justia.com/social-media-media-interactions-and-legal-ethics/.

[2] Matter of Robertelli, No. 084373, 258 A.3d 1059 (N.J. Sept. 21, 2021).

[3] N.J. Ct. R. Pro. Conduct r. 4.2 (2022) (“…a lawyer shall not communicate about the subject of the representation with a person the lawyer knows, or by the exercise of reasonable diligence should know, to be represented by another lawyer…”); see Robertelli, 258 A.3d 1059, at 1075.

[4] Robertelli, 258 A.3d 1059, at 1063.

[5] Id.

[6] Id. at 1066.

[7] Id. at 1062.

[8] Id.

[9] Robertelli, 258 A.3d 1059, at 1063.

[10] Id. at 1066 (arguing that he, Robertelli, never authorized or knew that his paralegal messaged Hernandez on Facebook).

[11] Id. at 1074.

[12] Id. at 1075.

[13] Id.

[14] Robertelli, 258 A.3d 1059, at 1075.

[15] Id. at 1074.

 

Image Source: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQQWsdrxjiwbpcYXoWaVeFKoSFOkX8Hgrh0gA&usqp=CAU

What Makes Spotify Tick? An Overview of How Spotify Licenses Music

By Samuel Rosen


Since launching in 2008, Spotify has become a game-changer for music fans around the globe. To date, Spotify has amassed upwards of 188 million subscribers across 183 markets, making it the most popular music streaming platform in the world.[1] But how exactly does Spotify have permission to do all of this? Spotify gains access to catalogs of millions of songs using two types of licenses: (1) sound recording license agreements and (2) musical composition license agreements.[2] Sound recording license agreements cover the rights to the recording of the song itself.[3] For these licenses, Spotify deals with the three juggernauts of the recording industry: (1) Universal Music Group, (2) Sony Music Entertainment, and (3) Warner Music Group.[4] There are also smaller agencies that cover digital recordings from independent labels.[5]

Composition licenses, however, are a bit more complicated. There are two components of composition licensing that Spotify needs to secure and pay out in order to stream music on its platform.[6] These are mechanical royalties and performance rights.[7] Mechanical royalties are paid to the owner of the composition copyright. In a relatively new development, mechanical royalties are now mainly distributed through The Mechanical Licensing Collective, which was established pursuant to the Music Modernization Act of 2018.[8] Since its formation, the Mechanical Licensing Collective has been heralded as a success, distributing almost $700 million in royalties to various rights holders.[9] Performance royalties are paid to songwriters and publishers when music is performed or played publicly, whether in a bar, a restaurant, a baseball stadium, or on Spotify.[10] These royalties are usually paid out through organizations such as the American Society of Composers, Authors, and Publishers (ASCAP), which license songs for public performance and then distribute the royalties to their members.[11]
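Put differently, every stream carries one sound-recording obligation and two composition obligations. The sketch below models that three-way split in deliberately simplified form; the track, label, and publisher names are invented, and the routing is an illustrative assumption rather than a description of Spotify’s actual payment systems or rates.

```python
# Simplified, hypothetical model of the license structure described above: one
# sound-recording payee plus two composition payees (mechanical and performance).
# Names and routing are illustrative; no real rates or contracts are represented.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    label: str       # sound-recording rights holder (e.g., a major or indie label)
    publisher: str   # composition rights holder

def royalty_obligations(track: Track) -> dict:
    """Map one licensed stream of a track to the entities that must be paid."""
    return {
        "sound_recording": track.label,                        # recording license
        "mechanical": "The Mechanical Licensing Collective",   # composition: mechanical royalties
        "performance": "a performing rights organization (e.g., ASCAP)",  # composition: performance royalties
    }

print(royalty_obligations(Track("Example Song", "Example Label", "Example Publisher")))
```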

If you are confused, you are not alone, and unsurprisingly, the complex processes involved in licensing have generated litigation over the years. A prime example is a 2018 copyright infringement lawsuit filed by Wixen Music Publishing.[12] While Wixen is not one of the big three music publishing companies listed above, the complaint notes that “Wixen administers more than 50,000 songs written and/or owned by its more than 2,000 clients, including songs by some of the most popular and acclaimed musical artists of the last 100 years.”[13] The complaint further states, “Spotify has repeatedly failed to obtain necessary statutory, or ‘mechanical,’ licenses to reproduce and/or distribute musical compositions on its service. Consequently, while Spotify has become a multibillion dollar company, songwriters and their publishers, such as Wixen, have not been able to fairly and rightfully share in Spotify’s success, as Spotify has in many cases used their music without a license and without compensation.”[14] As to its own catalog, Wixen alleged that Spotify failed to obtain the appropriate licenses for certain songs Wixen owns and that the licensing agent Spotify had used to secure those licenses was “ill-equipped” to do so.[15] The lawsuit ended up settling for an undisclosed sum.[16] While the Music Modernization Act helped streamline royalty payouts and mitigate some of the issues raised in the Wixen suit, the complexities of music licensing still leave room for future litigation over rights and payout schemes.[17]


[1] About Spotify, Spotify, https://newsroom.spotify.com/company-info/ (last visited Oct 26, 2022).

[2] Michelle Castillo, Spotify IPO filing reveals how insanely complicated it is to license music rights, CNBC, https://www.cnbc.com/2018/02/28/how-spotify-licenses-and-pays-for-music-rights.html.

[3] Id.

[4] Id.

[5] Id.

[6] Id.

[7] Id.

[8] Dale Kawashima, Kris Ahrend Interview – CEO of The Mechanical Licensing Collective, Songwriter Universe (Mar. 24, 2022), http://www.songwriteruniverse.com/kris-ahrend-the-mlc-interview-2022.htm.

[9] Ashley King, MLC Says Nearly $700MM In Royalties Distributed to Members, Digital Music News (Oct. 24, 2022), https://www.digitalmusicnews.com/2022/10/24/mlc-royalties-distributed-to-date-2022.

[10] Mechanical Royalties vs. Performance Royalties: What’s the Difference?, Royalty Exchange (Jan. 31, 2019), https://www.royaltyexchange.com/blog/mechanical-and-performance-royalties-whats-the-difference

[11] About ASCAP, ASCAP, http://www.ascap.com/about-us (last visited Oct 25, 2022).

[12] Complaint at *1, Wixen Music Publishing Inc., v. Spotify USA Inc, No. 2:17-cv-09288, 2017 WL 6663826 (C.D.Cal. Dec. 29, 2017).

[13] Id. at *4.

[14] Id. at *1.

[15] Id. at *7.

[16] Amy X. Wang, Spotify Settles Its $1.6 Billion Publishing Lawsuit, Rolling Stone (Dec. 20, 2018), https://www.rollingstone.com/pro/news/spotify-settles-its-1-6-billion-publishing-lawsuit-771557.

[17] See Andrew Flanagan, New Music Law Expedites A $1.6 Billion Lawsuit Against Spotify, NPR (Jan. 3, 2018), https://www.npr.org/sections/therecord/2018/01/03/575368674/sweeping-new-music-law-expedites-a-1-6-billion-lawsuit-against-spotify (describing how the music modernization act, which at the time was not yet in effect, will create a central database that identifies which songwriter and/or publishers are entitled to royalties.)


Image Source: https://images.complex.com/complex/images/c_crop,h_1063,w_1890,x_13,y_284/c_fill,f_auto,g_center,w_1200/fl_lossy,pg_1/fdrkedcwuz1hbrlzoa7y/spotify-getty-nurphoto

Shred the Gnar, Not the Law

By Payton Miles


Picture this. It’s mid-December, and you’re atop a snowy mountain in Vail, Colorado, awaiting your next downhill battle. The lifts are closing soon, which means it is time for a hot chocolate back at the lodge to warm you up. But you are in no rush today, all thanks to your new Columbia Sportswear jacket, equipped with its patented Omni-Heat thermal reflective material.

Columbia Sportswear Company, founded in 1938, is one of the largest outdoor and active lifestyle apparel and footwear companies in the world.[1] With competitors like Patagonia and The North Face, Columbia prides itself on offering the most affordable and innovative technology, such as its Omni-Heat material.[2] Its success in the marketplace has opened the door to potential infringers. One alleged infringer is Seirus Innovative Accessories, Inc. In 2015, Columbia brought suit against Seirus, claiming that Seirus’s HeatWave products infringed Columbia’s design patent drawn to the ornamental design of its Omni-Heat reflective material.[3]

The district court granted summary judgment for Columbia, deciding that Seirus’s HeatWave products infringed Columbia’s design patent.[4] The standard for determining whether a design patent has been infringed is the “ordinary observer” test.[5] Essentially, if an accused design copies a particular feature of the patented design in a way that would deceive an ordinary observer into thinking the two are the same, the patented design is infringed.[6] The district court applied the ordinary observer test and also relied on a previous case, L.A. Gear, Inc. v. Thom McAn Shoe Co., in which it was decided that logos should be “wholly disregarded” in a design infringement analysis.[7] L.A. Gear created the long-standing precedent that infringers should not be able to escape liability for design patent infringement simply by adding a logo to a copied design.[8] Contrary to this precedent, Seirus argued that there were “substantial and significant differences between the two designs,” namely that Seirus’s waves were interrupted by its logo and were of different orientation, spacing, and size than Columbia’s waves.[9] The district court ultimately agreed with Columbia’s argument that the two designs would be essentially the same without Seirus’s logo.[10]

In 2019, Seirus appealed, and the Federal Circuit reversed and remanded the district court’s grant of summary judgment.[11] Unlike in L.A. Gear, the Federal Circuit, in this case, gave more weight to logo placement and determined that all factors can be used to determine the overall visual impression to the ordinary observer.[12]

The Federal Circuit’s decision shredded the precedent that L.A. Gear established, muddying the waters between design patent and trademark law. Design patent law is meant to protect “the non-functional aspects of an ornamental design displayed in a patent.”[13] A trademark is “any word, phrase, symbol, design, or a combination of these things” used in conjunction with a product to establish that the product is associated with a certain brand or company.[14] But if a logo can be weighed as part of the design an ordinary observer is evaluating, why would trademark protection also be needed? This decision has the potential to close the door on a large area of trademark law. It also risks limiting which design patents actually receive protection from infringers. Giving weight to a logo added to an accused design defeats the purpose of the ordinary observer test that has long characterized design patent law. If logos are a contributing factor in distinguishing two designs, then an ordinary observer will attribute the dissimilarities to the logo every time, regardless of what the underlying products actually are.

On remand in August 2021, a jury determined that Seirus’s design did not infringe Columbia’s design patent.[15] The fear now is that the Federal Circuit’s decision will allow for endless infringement if a logo is given weight in such cases. If the ordinary observer cannot differentiate the two underlying designs to begin with, then a logo should not be a way around infringement. Further clarity from the courts on the role of logo placement in design patent cases will be necessary to protect these two separate areas of intellectual property.


[1] A Diversified Revenue Base, Columbia Sportswear Co. (2022), https://investor.columbia.com/company-information.

[2] Columbia Sportswear Competitors, Comparably (2022), https://www.comparably.com/companies/columbia-sportswear/competitors.

[3] See Columbia Sportswear N. Am., Inc. v. Seirus Innovative Accessories, Inc., 942 F.3d 1119, 1122–23 (Fed. Cir. 2019).

[4] Id. at 1122.

[5] Id. at 1129.

[6] Id.

[7] Id. at 1131.

[8] See Columbia Sportswear, 942 F.3d at 1131.

[9] Id. at 1130.

[10] Id.

[11] Id. at 1128–29, 1133.

[12] Id. at 1131.

[13] Design Patent, Legal Info. Inst. (July 2020), https://www.law.cornell.edu/wex/design_patent.

[14] What is a Trademark?, USPTO (June 13, 2022), https://www.uspto.gov/trademarks/basics/what-trademark.

[15] Gregory A. Castanias et al., When Trademarks and Design Patents Intersect: Making Waves in Columbia v. Seirus, Jones Day (August 2021), https://www.jonesday.com/en/insights/2021/08/when-trademarks-and-design-patents-intersect-making-waves-in-columbia-v-seirus.

 

Image Source: https://www.columbia.com/how-to-choose-a-ski-jacket.html

Mitigating the Risks of Digital Health Technologies Using an International Rights Framework

By W. Kyle Resurreccion


I. Introduction

In response to the COVID-19 pandemic, governments and healthcare systems worldwide sought the widespread adoption of digital health technologies.[1] This phenomenon has led to the development of various apps for contact tracing, social distancing, and quarantine enforcement, as well as the creation of artificial intelligence (AI) and machine learning algorithms to analyze the large datasets used and produced by these apps.[2] While arguably beneficial, these new tools come with potential harms that must be understood in order to facilitate their effective use and implementation in both public and private health systems.[3]

II. Risks of Digital Health Technologies

Data breaches in the healthcare industry are prolific. Of the 5,212 confirmed breaches included in Verizon’s report on global data breaches, 571 occurred in healthcare, making it the industry with the third highest number of breaches, just behind finance (690) and professional services (681).[4] Furthermore, according to IBM’s report, healthcare is also the costliest industry for data breaches.[5] The average total cost of a healthcare breach in 2022 is USD 10.10 million, more than twice the average across all industries (USD 4.35 million) and more than twice the average for critical infrastructure generally (USD 4.82 million).[6] Data breaches in healthcare also harm the individual right to privacy.[7] In 2018, Anthem, one of the largest health insurance companies in the United States, agreed to pay USD 16 million to the U.S. Department of Health and Human Services to settle potential violations of the Health Insurance Portability and Accountability Act (HIPAA) after the largest health data breach in the nation’s history.[8] The breach exposed the protected health information of 79 million people to hackers who stole names, social security numbers, addresses, dates of birth, and employment information, among other private electronic data.[9]

Digital health technologies also carry the risk of bias due to the AI and machine learning algorithms used in automated processes.[10] Since machine learning models are powered by data, biases can be encoded through the datasets from which the algorithm is derived or through the modeling choices made by the programmer.[11] This can compound the bias problem in healthcare, where, for instance, the gender and race of participants in randomized clinical trials for new medical treatments are often not representative of the population that ultimately receives the treatment.[12] For example, in one study, an AI was built using hospital notes of intensive care unit (ICU) patients.[13] The AI was later used to predict the mortality of ICU patients based on their gender, race, and type of insurance (insurance was used as a proxy for socioeconomic status).[14] The results showed differences in the AI’s prediction accuracy for mortality based on gender and type of insurance, which is a sign of bias.[15] A difference based on race was also observed, but this finding may have been confounded because the original dataset was racially imbalanced to begin with.[16]
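A toy example makes the dataset problem concrete. The sketch below is synthetic and is not the cited study’s model: it trains a simple classifier on data dominated by one group, then measures accuracy separately for each group, and the underrepresented group’s accuracy suffers, mirroring the kind of disparity described above.

```python
# Synthetic illustration (not the cited study's model) of how an imbalanced training
# set can produce different error rates across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two noisy features whose relationship to the outcome differs slightly by group.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is badly underrepresented.
Xa_train, ya_train = make_group(5000, shift=0.2)
Xb_train, yb_train = make_group(200, shift=-0.8)
model = LogisticRegression().fit(
    np.vstack([Xa_train, Xb_train]), np.concatenate([ya_train, yb_train])
)

# Evaluate on balanced held-out samples: accuracy is noticeably lower for group B.
Xa_test, ya_test = make_group(2000, shift=0.2)
Xb_test, yb_test = make_group(2000, shift=-0.8)
print("accuracy, majority group:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy, minority group:", accuracy_score(yb_test, model.predict(Xb_test)))
```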

Discrimination in the accessibility of digital health technologies is also an immediate concern. This is especially relevant considering that many nations are transitioning to “digital by default” or “digital by choice” models for providing welfare, which in practice are often “digital only.”[17] The United Nations report on extreme poverty and human rights emphasized how the lack of digital literacy and of access to a reliable internet connection can contribute to inequality in accessing digital technologies.[18] This issue occurs in both the global North and the global South.[19] For example, in the wealthy United Kingdom, 4.1 million adults (8% of the population) are offline, with almost half of them coming from low-income households and almost half under the age of 60.[20] Failure to address these gaps can exacerbate inequalities, leaving underserved and vulnerable populations unable to receive healthcare because they cannot access and use digital health technologies.[21]

III. International Rights Framework to Mitigate Risks

Guidance for governments to mitigate the concerns brought by digital health technologies can be found in the ethics-based approach derived from a framework of international human rights most relevant in the context of this issue.[22] These are the rights to health, nondiscrimination, benefit from scientific progress, and privacy.[23] The international right to health is particularly critical, being enshrined in both international and domestic laws, with over 100 national constitutions guaranteeing this right to individuals.[24]

In pursuing this ethics-based approach, the United Nations Educational, Scientific and Cultural Organization (UNESCO) published its recommendation on the ethics of artificial intelligence, which aims to guide stakeholders in making AIs work for the good of humanity and to prevent harm.[25] The recommendation lists values and principles for the proper creation and implementation of AIs and specifically includes the principles of non-discrimination, safety and security, privacy, transparency, and accountability among its other provisions.[26]

Additionally, frameworks such as those established by the African Union (AU), Asia-Pacific Economic Cooperation (APEC), European Union (EU), and other regional organizations are also helpful in providing guidance on how best to regulate personal data ethically.[27] These regional frameworks enshrine a common set of positive rights: (1) the right to be informed about what data are and are not collected, (2) the right to access stored data, (3) the right to rectification, (4) the right to erasure (or the “right to be forgotten”), (5) the right to restriction of processing, (6) the right to be notified, (7) the right to data portability, (8) the right to object, and (9) other rights related to automated decision-making and profiling.[28]

This approach based on ethics and international human rights also envisions a pathway for holding private companies accountable for how they use, implement, and offer digital health technologies. Through a resolution, the Human Rights Council of the United Nations endorsed non-binding guidelines which would place upon private companies the obligation to respect international human rights independently of nations and governments.[29] This responsibility would require businesses to avoid causing or contributing to adverse human rights impacts through their own activities and to address such impacts when they occur.[30]

IV. Conclusion

The risks of digital technologies are real and immediate. However, to halt this progress out of fear would rob us of the extensive benefits these tools can offer, especially to our health and wellbeing. The intersection of digital technologies and the right to health is an inevitable development in the growth and progress of humanity. As with any tool, the onus is on the user to ensure that these advances continue to benefit our goal of being healthy. We must also ensure that these tools enable others to pursue the same goal and are regulated in their license to intrude into the most private and intimate parts of our lives. In this fast-evolving field, laws based on core human rights are needed to ensure our progress does not turn these tools into weapons that can be used to divide and harm us.


[1] Nina Sun et al., Human Rights and Digital Health Technologies, 22 Health & Hum. Rts. J., no. 2, Dec. 2020, at 21, 22.

[2] Id.

[3] Id. at 23.

[4] Gabriel Basset et al., Verizon, Data Breach Investigations Report 50 (2022).

[5] IBM, Cost of a Data Breach Report 11 (2022).

[6] Id. at 5, 11 (“critical infrastructure” means financial services, technology, energy, transportation, communication, healthcare, education and the public sector industries).

[7] Sun et al., supra note 1, at 23.

[8] Off. for Civ. Rts., Anthem Pays OCR $16 Million in Record HIPAA Settlement Following Largest U.S. Health Data Breach in History (2020).

[9] Id.

[10] Sun et al., supra note 1, at 23.

[11] Irene Y. Chen et al., Can AI Help Reduce Disparities in General Medical and Mental Health, 21 AMA J. Ethics, no. 2, Feb. 2019, at E167, 168; James Zou & Londa Schiebinger, Comment, AI Can Be Sexist and Racist – It’s Time to Make It Fair, 559 Nature 324, 325 (2018).

[12] Chen et al., supra note 11, at 167.

[13] Id. at 169.

[14] Id. at 167, 171, 175.

[15] Id.

[16] Id. at 169, 173, 175.

[17] Philip Alston (Special Rapporteur on Extreme Poverty and Human Rights), Extreme Poverty and Human Rights, U.N. Doc. A/74/493, at 15 (Oct. 11, 2019).

[18] Id.

[19] Id.

[20] Id. at 16.

[21] Sun et al., supra note 1, at 25.

[22] Sun et al., supra note 1, at 24.

[23] Id. at 24-26.

[24] G.A. Res. 217 (III) A, Universal Declaration of Human Rights, art. 25(1) (Dec. 10, 1948) (“Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family . . . .”); Rebecca Dittrich et al., The International Right to Health: What Does It Mean in Legal Practice and How Can It Affect Priority Setting for Universal Health Coverage?, 2 Health and Sys. Reform, no. 1, Jan. 2016, at 23, 24.

[25] U.N. Educ., Sci. & Cultural Org. (UNESCO), Recommendation on the Ethics of Artificial Intelligence, U.N. Doc. SHS/BIO/PI/2021/1 (Nov. 23, 2021).

[26] Id. at 7-10.

[27] Sun et al., supra note 1, at 27.

[28] Id.

[29] John Ruggie (Special Representative of the Secretary-General on Human Rights and Transnational Corporations and Other Business Enterprises), annex, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, U.N. Doc. A/HRC/17/31, at 13 (Mar. 21, 2011); G.A. Res. 17/4, ¶ 1 (July 6, 2011) (endorsing the guiding principles).

[30] Ruggie, supra note 29, at 14.

 

Image Source: https://www.thelancet.com/cms/attachment/e936ef82-9641-4796-9760-81d386e465a9/fx1.jpg

 
