The first exclusively online law review.

Year: 2018

The Impact of Technological Illiteracy

By: Niesha Gibbs

In today’s society, it is virtually impossible for anyone to excel without two things: education and computer literacy. Education is considered the great equalizer. Stated differently, it is the universal key that opens the proverbial door of opportunity. But what happens when you don’t have the key?

At the Bronzeville Scholastic Institute, freshman Jerod Franklin and his peers work on writing assignments in their homework lab.[1] What appears to be an ordinary day at any high school is anything but. Nearly one thousand students, the vast majority of whom do not have regular access to personal computers, share this moderately sized lab of twenty-four machines.[2] That lack of access to technology is a breeding ground for technological illiteracy.

While over 75% of American adults and over 80% of teens use the internet, “some poorer areas in the United States still see comparatively low rates of home computer use.”[3] This phenomenon, commonly known as the “digital divide,” fosters disparities in basic computer literacy, which translate into far greater socio-economic consequences.[4] Given the prevalence of cellphones and other smart devices, it is counterintuitive to think that some youth lack even the most minimal computer skills. However, access to the web does not render a person, in this case a school-aged student, computer literate.[5]

The blunt fact is that this trend affects the same demographic again and again: underprivileged inner-city minorities. The lack of digital literacy may include the inability to perform functions as simple as composing emails, logging into online platforms, or saving work to a thumb drive or disk.[6] Without these basic skills, a student becomes unmarketable to potential universities and employers. In an attempt to sharpen these skills, 70% of teachers assign homework that requires Internet access.[7] However, the system that appears to help these youngsters may only hurt them. How can one sharpen a tool without the proper materials? Imagine the falling grades hundreds of children may receive because all of the aforementioned twenty-four computers in the school’s lab were in use. Computer literacy programs implemented in the inner cities should take a more holistic approach.[8] While more devices are necessary, they should be accompanied by basic introductory courses, ones that will “enhance participants’ skill sets and ensure they become self-sufficient.” But how? The answer appears to be funding.

The principal of Pleasant View, a predominantly minority elementary school, has decided to pursue that answer on behalf of her students.[9] Over the course of one year, Principal Gara Field applied for and received a grant totaling over $400,000, with some support from her district.[10] This will no doubt give these impressionable students access to, and an understanding of, many digital resources. Principal Field understands an important notion that many others may not: seeking the answer before the problem arises makes for the best solution.

 

[1] See Nick Pandolfo, As Some Schools Plunge into Technology, Poor Schools Are Left Behind, Hechinger Report (2012), http://hechingerreport.org/as-some-schools-plunge-into-technology-poor-schools-are-left-behind/.

[2] Id.

[3] See John Wihbey, Computer Usage and Access in Low-income Urban Communities, Journalist’s Resource (2013), https://journalistsresource.org/studies/society/internet/computer-usage-access-low-income-urban-communities.

[4] Id.

[5] Id.

[6] Id.

[7] See Gage William Salicki, Urban School Districts Still Don’t Have Equal Access to Digital Tools and Education, ctViewpoints (2017), https://ctviewpoints.org/2017/11/01/urban-school-districts-still-dont-have-equal-access-to-digital-tools-and-education/.

[8] Id.

[9] See Jennifer D. Jordan, How an Unconventional Principal Turned Around a Struggling Urban School, PBS NewsHour (2015), https://www.pbs.org/newshour/education/unconventional-principal-uses-blended-learning-help-turn-around-struggling-urban-school.

[10] Id.

Image Source: https://tribune.com.pk/story/936341/in-the-cyberspace-technology-illiteracy-leads-to-online-harassment/.

Could Social Media Be Used to Help Prevent Suicides Rather Than Trigger Them?

By: Nicole Gram

According to the Centers for Disease Control and Prevention (CDC), suicide was the tenth leading cause of death overall in the United States and the second leading cause among individuals between the ages of 15 and 34 in 2015.[1] For further perspective, there were more than twice as many suicides in the United States as there were homicides.[2] While suicide is complex and difficult to predict, there are very often signs that an individual is struggling with suicidal thoughts and behaviors.[3] Social media applications provide a forum in which some individuals share the emotions and issues they are experiencing; Facebook even experienced a number of live-streamed suicides this past year.[4] As a result, as part of a call to action to proactively identify at-risk individuals and prevent them from harming themselves, Facebook is using Artificial Intelligence (AI) technology to scan content with pattern recognition for specific phrases that indicate someone may need help.[5] An AI algorithm identifies and prioritizes the posts for action by thousands of employee content reviewers.[6] The application then prompts the at-risk individual with options for a helpline, tips to address their issues and feelings, and an option to contact another friend, or will even notify first responders in critical situations.[7] Users cannot opt out of this technology, which is being tested in the US with plans to roll out to most countries, excluding the European Union due to its regulatory restrictions.[8]
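
Facebook has not published the internals of this system, but the workflow described above (scan post text for concerning phrases, score it, and route the highest-priority items to human reviewers) can be illustrated with a deliberately simplified sketch. Everything in the snippet below, including the phrase list, the weights, and the triage queue, is hypothetical and is meant only to show the general shape of such a pipeline, not Facebook’s actual implementation.

```python
# Illustrative sketch only: a toy version of a "scan, score, route to
# human reviewers" workflow. The phrase list and weights are invented
# for illustration and are not Facebook's actual system.

import heapq

# Hypothetical phrases a pattern-recognition pass might weight heavily.
CONCERNING_PHRASES = {
    "no reason to live": 3,
    "want to end it": 3,
    "can't go on": 2,
    "goodbye everyone": 1,
}

def score_post(text: str) -> int:
    """Sum the weights of any concerning phrases found in the post."""
    lowered = text.lower()
    return sum(weight for phrase, weight in CONCERNING_PHRASES.items()
               if phrase in lowered)

def triage(posts: dict) -> list:
    """Return (score, post_id) pairs for flagged posts, highest score first,
    mimicking a queue handed to human content reviewers."""
    heap = []
    for post_id, text in posts.items():
        score = score_post(text)
        if score > 0:
            # Negate the score because heapq is a min-heap.
            heapq.heappush(heap, (-score, post_id))
    ranked = []
    while heap:
        neg_score, post_id = heapq.heappop(heap)
        ranked.append((-neg_score, post_id))
    return ranked

if __name__ == "__main__":
    sample = {
        "post-1": "Great game last night!",
        "post-2": "I feel like there is no reason to live anymore.",
    }
    # A human reviewer would work through this queue and could trigger the
    # helpline prompts or first-responder notification described above.
    print(triage(sample))
```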

Mental health professionals also recognize the value of leveraging technology beyond hospitals, emergency rooms, and ICUs and into psychiatrists’ offices.[9] The focus of their mission is “alleviating suffering with technology.”[10] The large amounts of data on smartphones and in social media applications are a valuable supplement to the patient interviews on which mental health professionals depend.[11] They too have an application, named Spreading Activation Mobile (SAM), that uses predictive machine learning to analyze speech and determine whether someone is likely to take their own life.[12] SAM is being tested in Cincinnati schools and looks for increases in negative words and/or decreases in positive words based on language, emotional state, and social media footprint.[13]
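
As a rough illustration of the signal this paragraph describes, the sketch below compares the rate of negative and positive words between an older and a newer writing sample. It is only a toy with invented word lists and an arbitrary threshold; the actual SAM research relies on trained machine-learning models over language, emotional state, and social media data rather than a fixed dictionary.

```python
# Minimal sketch of the signal described above: shifts in the rate of
# negative and positive words between older and newer samples of a
# person's language. Word lists and the threshold are invented.

NEGATIVE_WORDS = {"hopeless", "worthless", "alone", "tired", "hate"}
POSITIVE_WORDS = {"happy", "excited", "grateful", "love", "fun"}

def word_rates(text: str) -> tuple:
    """Return (negative_rate, positive_rate) as fractions of all words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0, 0.0
    neg = sum(w in NEGATIVE_WORDS for w in words)
    pos = sum(w in POSITIVE_WORDS for w in words)
    return neg / len(words), pos / len(words)

def language_shift(older_text: str, newer_text: str) -> float:
    """Positive values indicate more negative and/or fewer positive words over time."""
    old_neg, old_pos = word_rates(older_text)
    new_neg, new_pos = word_rates(newer_text)
    return (new_neg - old_neg) + (old_pos - new_pos)

if __name__ == "__main__":
    earlier = "Had fun with friends today, feeling grateful and happy."
    recent = "So tired and alone lately, everything feels hopeless."
    shift = language_shift(earlier, recent)
    # The 0.1 threshold is hypothetical; a real screening tool would need
    # clinical validation before flagging anyone.
    print("shift:", round(shift, 3), "flag for follow-up:", shift > 0.1)
```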

Time is a key factor in the prevention of suicide, and advancements in AI continue the trend of machines performing tasks that previously could be done only by humans.[14] In the mental health arena, the availability and evolving quality of fresh data is creating improved and more effective algorithms that can drive earlier identification and action.[15] However, there are several legal implications and challenges in using AI as a tool to prevent suicide. Whenever personal data is involved, concerns about privacy and misuse top the list. Given the mental health content, there is increased sensitivity about who has access and about the potential impact on items such as insurance premiums and coverage.[16] With AI algorithms making decisions and driving actions, an equally important consideration is who holds the moral and legal responsibility when harm is caused.[17] This question becomes even more complicated by the autonomous nature of AI technology: as the tool learns from experience and more data, it is possible that AI systems will grow to perform actions not anticipated by their creators.[18] Some experts have proposed a legal framework with a governing authority that certifies AI systems, so that operators of certified AI systems have limited tort liability while operators of uncertified systems face strict liability.[19] This appears to strike an appropriate balance, at this point in time, between obtaining the advantages of AI tools and algorithms in preventing suicide and ensuring that controls exist to mitigate the risks.[20]

 

[1] See Peter Holley, Teenage Suicide Is Extremely Difficult to Predict. That’s Why Some Experts Are Turning to Machines for Help., Wash. Post (Sept. 26, 2017), https://www.washingtonpost.com/news/innovations/wp/2017/09/25/teenage-suicide-is-extremely-difficult-to-predict-thats-why-some-experts-are-turning-to-machines-for-help/?tid=a_inl&utm_term=.4ce68a113fd1.

[2] See Suicide, Nat’l Inst. of Mental Health (2015), https://www.nimh.nih.gov/health/statistics/suicide/index.shtml.

[3] See Hayley Tsukayama, Facebook Is Using AI to Try to Prevent Suicide, Wash. Post (Nov. 27, 2017), https://www.washingtonpost.com/news/the-switch/wp/2017/11/27/facebook-is-using-ai-to-try-to-prevent-suicide/?utm_term=.75045cbd8923.

[4] See id.

[5] See id.

[6] See id.

[7] See id.

[8] See Tsukayama, supra note 3.

[9] See Holley, supra note 1.

[10] See id.

[11] See id.

[12] See id.

[13] See id.

[14] See Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353 (2016).

[15] See id.

[16] See id.

[17] See id.

[18] See id.

[19] See Scherer, supra note 14.

[20] See id.

Image Source: http://www.valleynewslive.com/content/news/New-app-working-to-predict-suicidal-behaviors-among-teens–448339133.html.

Net Neutrality

By: David Hart

Net Neutrality is the idea that your internet service provider must treat all lawful websites equally. Providers must not block lawful websites, throttle the speeds of certain websites, or accept payment to prioritize the speed of the payer’s website. There is currently a proposal by the FCC that would effectively end Net Neutrality.[1] The heart of the issue is whether broadband internet should be classified as a common carrier or as an information service.

Since 2015, the internet has effectively been regulated under a common carrier classification rather than an information service classification.[2] Before 2015, Internet Service Providers were classified as an information service.[3] That changed with the Title II Order, in which the FCC classified providers as common carriers, a classification that brings with it much more stringent regulations.

An information service is defined as the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications.[4] By contrast, telecommunications (which is regulated as a common carrier service) is defined as the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.[5] The common carrier classification applies to services such as railroads, trucking companies, and telecommunications services.[6] A common carrier provides a service to the general public and cannot discriminate among its users; carriers are statutorily prevented from discrimination in charges, practices, classifications, regulations, facilities, or services.[7] For your Internet Service Provider, this means that it cannot increase speeds for websites that it owns or for websites that pay it. If Netflix were to offer a provider money to prioritize access to Netflix relative to other sites, the provider would be unable to accept. For another example, take Verizon. Verizon owns the Huffington Post, so it makes sense that Verizon would want more people to access the Huffington Post. In theory, Verizon could increase its customers’ internet speed when accessing the Huffington Post while decreasing the speed of access to other news sites, like CNN’s. Currently, it is unable to do so under the common carrier classification.[8] That would not be the case if Internet Service Providers’ classification were changed to that of an “information service.” Relative to the common carrier classification, the information service class is much less regulated. If Internet Service Providers become classified as an information service instead of a common carrier, they would no longer be prevented from throttling or increasing speeds based on the website and would be able to prioritize some sites over others.[9] However, the FCC proposal would require providers to let their customers (and potential customers) know if they blocked, slowed, or prioritized websites.[10]

The FCC will vote on the proposal on December 14, 2017.[11] The proposal argues that the regulatory environment of the common carrier classification has delayed new services and stifled innovation.[12] It aims to end FCC micromanagement of innovative business models and to restore the Federal Trade Commission’s power to protect consumers from unfair practices without burdensome regulation.[13] Opponents argue that a change in classification would effectively allow providers to determine which websites consumers are able to see and use.[14] This could be even more problematic because many areas of the country are dominated by a small number of large internet providers, so even a customer dissatisfied with a provider’s decisions would have few or no other options.[15]

 

[1] Restoring Internet Freedom, 82 Fed. Reg. 25,568 (Jun. 2, 2017).

[2] Protecting and Promoting the Open Internet, 47 C.F.R. §§ 8.5, 8.7, 8.9 (2015).

[3] Verizon Communications Inc. v. FCC, 740 F.3d 623 (2014).

[4] 47 U.S.C. § 153(24) (2017).

[5] 47 U.S.C. § 153(50) (2017).

[6] 47 U.S.C. § 153(51) (2017).

[7] 47 U.S.C. § 202 (2017).

[8] Protecting and Promoting the Open Internet, 47 C.F.R. §§ 8.5, 8.7, 8.9 (2015).

[9] 47 U.S.C. § 202 (2017).

[10] Restoring Internet Freedom Fact Sheet ¶ 216, http://transition.fcc.gov/Daily_Releases/Daily_Business/2017/db1122/DOC-347927A1.pdf.

[11] Cecilia Kang, F.C.C. Plans Net Neutrality Repeal in a Victory for Telecoms, N.Y. Times (Nov. 21, 2017), https://www.nytimes.com/2017/11/21/technology/fcc-net-neutrality.html.

[12] Restoring Internet Freedom Fact Sheet, http://transition.fcc.gov/Daily_Releases/Daily_Business/2017/db1122/DOC-347927A1.pdf.

[13] Id.

[14] ACLU, What Is Net Neutrality (June 2017), https://www.aclu.org/issues/free-speech/internet-speech/what-net-neutrality.

[15] Jeff Dunn, America Has An Internet Problem — but a Radical Change Could Solve It, Business Insider (Apr. 23, 2017), http://www.businessinsider.com/internet-isps-competition-net-neutrality-ajit-pai-fcc-2017-4/#-3.

Image Source: https://www.aclu.org/sites/default/files/styles/scale_1200w/public/wysiwyg/web15-siteimages-act-netneutrality-2400x960_0.jpg?itok=SWcUu9tK.

Juror Emotion

By: Niesha Gibbs

Should jurors be completely devoid of emotion? When posed with this question, I impulsively answered no. Prompting my answer are cases like that of Cyntoia Brown, whose story garnered the attention of thousands after a recent post about her went viral. In 2006, the then 16-year-old Brown was sentenced to life imprisonment for the murder of 43-year-old Jonathan Allen.[1] Allen attempted to rape Cyntoia, and in self-defense she shot and killed him.[2] Brown was tried as an adult and ultimately convicted of first-degree felony murder and aggravated robbery.[3] Cyntoia’s case, like many others, is a prime example of how just a bit more understanding from a jury could have produced an entirely different outcome. Nonetheless, there are instances where far too much emotion certainly clouds judgment. So where does this leave jurors? With the unspoken burden of finding the delicate balance between acknowledging sympathy and impartially applying the law. One solution being offered is artificial intelligence.[4]

With technology present in nearly every facet of the legal profession, from e-discovery to electronic filing, it should come as no surprise that technology has now pervaded the courtroom itself. Currently, technical advances are being used to “transfer” jurors to the actual crime scene of the trial they are sitting on.[5] The strides toward modernizing courtrooms have not stopped there: robots or algorithms are now being used to determine the guilt or innocence of a defendant.[6] The company Northpointe, Inc. has created software, Compas, designed to assist courts and judges with making “better” – or at least more data-centric – decisions in court.[7]

While this has not become a widespread practice, it has attracted the attention of prominent legal figures, namely the Honorable Chief Justice John G. Roberts, Jr.[8] Asked about the effect of artificial intelligence in the courtroom, Chief Justice Roberts described it as “putting a significant strain on how the judiciary goes about doing things.”[9] Roberts’s comments were given over two months after the Supreme Court declined to review the case of Eric Loomis.[10]

In early 2013, Loomis was “sentenced to six years in prison at least in part by the recommendation of a private company’s secret proprietary software.”[11] One can’t help but believe the Court’s decision was partially, or even wholly, motivated by the implications of such a ruling and, as with any new discovery, by the potential problems that could arise if this method were endorsed by the Court. For example, hacking is regularly associated with some of the most sensationalized scandals; one can only imagine the issues that could arise during a highly contentious case with someone’s life in the balance. Regardless of where you stand on this issue, one point both sides will concede is that each juror brings his or her own worldview. That worldview is a lens on which legal advocates often rely during a trial. Further, as humans, we have the unique ability to empathize with one another, an ability I think should never be undervalued or overlooked.

 

[1] See AJ Willingham, Why Cyntoia Brown, Who Is Spending Life in Prison for Murder, Is All Over Social Media, CNN (2017), http://www.cnn.com/2017/11/23/us/cyntoia-brown-social-media-murder-case-trnd/index.html.

[2] See id.

[3] See id.

[4] See Kayla Mathews, Is AI Getting Closer to Replacing Jurors, ProductivityBytes (2017), http://productivitybytes.com/ai-getting-closer-replacing-jurors/.

[5] See Nick Caloway, Investigators Use 3D Technology to Solve Crimes, Bring Scenes to Juries, Wkrn.com (2016), http://wkrn.com/2016/10/10/investigators-use-3d-technology-to-solve-crimes-bring-scenes-to-juries/.

[6] See Mathews, supra note 4.

[7] Christopher Markou, Why Using AI to Sentence Criminals is a Dangerous Idea, The Conversation (2017), https://theconversation.com/why-using-ai-to-sentence-criminals-is-a-dangerous-idea-77734.

[8] See Adam Liptak, Sent to Prison by a Software Program’s Secret Algorithms, Sidebar, New York Times (2017), https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html?_r=0.

[9] See id.

[10] Amy Howe, Federal Government Files Invitation Briefs, SCOTUSblog (2017), http://www.scotusblog.com/2017/05/federal-government-files-invitation-briefs/.

[11] See Liptak, supra note 8.

Image Source: https://www.law.com/insidecounsel/2017/04/19/jury-consulting-gets-emotional-with-the-help-of-te/?slreturn=20180017121919

HUMVEE Maker Files Trademark Case against Call of Duty

By: Seth Bruneel

The ears of any lawyer who even dabbles in video games will perk up at the mention of “Call of Duty,” and their attention will be fully captured when the title of the popular video game appears alongside the word “lawsuit.” Such is the case here: the maker of Call of Duty, Activision, is being sued by AM General for trademark infringement.[1]

AM General owns the registered trademark “HMMWV” (Reg. No. 3026594), more commonly referred to as “HUMVEE” (Reg. No. 1697530). AM General alleges that by using the vehicles and names in Call of Duty, Activision “[w]rongfully leverag[es] the goodwill and reputation AM General has developed in these marks … in advertising and promotion of their Call of Duty video game franchise” and that Activision uses the trademarks in the “manufacture and sale of collateral toys and books to further derive wrongful profits.”[2]

There is little room to dispute that Activision is using the trademark in the video games, as shown in some of the pictures from AM General’s brief.[3] In its brief, AM General provides further analysis of the similarities.

[Side-by-side images from the complaint comparing AM General’s HUMVEE with Activision’s in-game depiction.][4]

Activision has yet to file an answer to the complaint, but there are several defenses that could qualify its use of the trademarks as non-infringing, including lack of consumer confusion and fair use.[5]

The main test for infringement of a trademark is the likelihood of confusion.[6] Here, however, Activision has a strong argument that there is no likelihood of confusion because there is no intent to confuse. In fact, the best defense to the idea that consumers will confuse the Call of Duty version of the HUMVEE with AM General’s HUMVEE is to admit that they are both the same vehicle. Activision will then need to show that its use of the trademark is an allowable use.

One such use would be if the use of HUMVEE were a fair use. Activision’s use of the registered trademarks is fair use if the alleged infringer uses a mark solely to describe the trademark holder’s product, rather than the alleged infringer’s product, for purposes such as comparison, criticism, or simply a point of reference.[7] In this action, Activision is merely using the trademarked terms and likenesses to refer to AM General’s military vehicles without referring to any of Activision’s products, so the trademarks are used simply as a point of reference.

Another way for Activision to escape liability is to claim a free-speech defense. Activision’s use of the trademarks can be protected speech if it meets the Rogers test.[8] The Rogers test says that a “use of a trademark that would otherwise violate the Lanham Act is not actionable unless [use of the mark] has no artistic relevance to the underlying work whatsoever, or, if it has some artistic relevance, unless [it] explicitly misleads as to the source or content of the work.”[9] Here, it is likely that the court will find that Activision’s use of the trademarks HUMVEE and HMMWV has “at least some artistic relevance,” so the court will have to decide whether the use misleads consumers as to the source of the HUMVEEs.[10] It is on this second prong of the test that Activision will likely succeed, because there is little argument that Call of Duty explicitly misleads consumers into believing that the HUMVEEs are made by Call of Duty rather than AM General.

While it may seem that Activision is likely to evade liability for using AM General’s trademarks in-game, Activision could be in real trouble with the toy products, as the toys do much more than simply use HUMVEE as a point of reference.[11] The toys present a higher likelihood of confusion for consumers, who could see the military vehicle as a Call of Duty product rather than a product of AM General.[12]

The pending litigation is intriguing because it presents a confluence of commercial and artistic use. While I expect a more practical resolution before a judge issues a final ruling, it would be interesting to see whether courts are still likely to find that the use is artistic (and thus non-infringing) when the alleged infringer enjoys the massive commercial success of Call of Duty: Modern Warfare ($1.23 billion).[13]

 

[1] Anandashankar Mazumdar, Humvee Maker Targets Activision’s ‘Call of Duty’ in Trademark Case, PATENT, TRADEMARK & COPYRIGHT J. BNA, (Nov. 10, 2017).

[2] Complaint at 1-2, AM General v. Activision Blizzard, (S.D.N.Y, Nov. 7, 2017) (No. 2:17-cv-08644).

[3] Id. at 5-7, 14-16, 20-22.

[4] Id. at 5-6, 16, 20, (Figs. 1, 2, 11, 12).

[5] Winthrop & Weinstine, Call of Duty Trademark Lawsuit: A Humvee Humdinger, DUETS BLOG (Nov. 15, 2017), https://www.jdsupra.com/legalnews/call-of-duty-trademark-lawsuit-a-humvee-30152/ (last visited Nov. 28, 2017).

[6] Lanham Act, 15 U.S.C. § 1114 (2017).

[7] See Winthrop, supra note 5.

[8] See Rock Star Videos, 547 F.3d at 1095 (quoting Rogers v. Grimaldi, 875 F.2d 994, 999 (2d Cir. 1989)).

[9] Id.

[10] Id. at 1099-1101.

[11] See Mazumdar, supra note 1.

[12] Complaint, supra note 2, at 29.

[13] Tom Gernencer, How Much Money Has Every Call of Duty Game Made?, MONEYNATION (Dec. 23, 2015), http://moneynation.com/how-much-money-has-every-call-of-duty-game-made/ (last visited Nov. 28, 2017).

Image Source: http://www.gambitmag.com/2017/11/activision-getting-sued-humvee-use-call-duty/.

Fishing for Location

By: David Hart

It is no longer the era of buddy-cop stakeouts, with officers waiting outside a suspect’s home or hangout. Instead, law enforcement has been employing a device that pinpoints a supposed criminal’s location. The most well-known brand of this device is the StingRay, which has become the catch-all term for these devices. The StingRay disguises itself as a cellphone tower, tricking the suspect’s phone into transmitting data to it.[1] This may seem fine at first glance; after all, who doesn’t want criminals off our streets? Unfortunately, it is not as cut and dried as it seems. The device does not simply target one cellphone; it dupes all cell phones in an area into sending it information.[2] Additionally, law enforcement does not always obtain a search warrant before utilizing StingRays.[3] So not only is a net being dragged through innocent citizens’ data, but law enforcement often targets a suspect’s cell phone without ever explaining its probable cause to a magistrate to obtain a warrant. The use of these devices raises significant privacy and civil liberty concerns.

The law struggles to keep up with the sprinting advance of technology, but it seems the judiciary is finally closing the gap. An important development recently came from the D.C. Court of Appeals in Jones v. United States. Jones was convicted on charges of sexual assault and robbery.[4] Law enforcement had used a StingRay-type device (a cell-site simulator) to locate him without obtaining a warrant.[5] Jones argued that this was a Fourth Amendment violation, but the trial court denied his claim.[6] Jones then appealed his conviction to the D.C. Court of Appeals, which found that “the government violated the Fourth Amendment when it deployed the cell-site simulator against him without first obtaining a warrant based on probable cause.”[7] In the words of Judge Beckwith, “under ordinary circumstances, the use of a cell-site simulator to locate a person through his or her cellphone invades the person’s actual, legitimate, and reasonable expectation of privacy in his or her location information and is a search.”[8]

The D.C. Court of Appeals is not the only authority to have spoken on the issue of StingRays. In 2016, the Maryland Court of Special Appeals held that law enforcement must have a valid warrant to use cell-site simulators.[9] The Baltimore City Police Department had used a cell-site simulator to locate Kerron Andrews, who was wanted on charges of attempted murder.[10] Police tracked his cellphone with the device and located him in a residence.[11] Andrews claimed that the use of this device without a warrant violated the Fourth Amendment. The court agreed that the use of the cell-site simulator was indeed a Fourth Amendment violation and suppressed the evidence found in the residence where Andrews was located.[12]

Virginia is one of the states that have passed laws restricting the use of StingRay devices. Va. Code § 19.2-70.3(K) provides that “an investigative or law-enforcement officer shall not use any device to obtain electronic communications or collect real-time location data from an electronic device without first obtaining a search warrant authorizing the use of the device if, in order to obtain the contents of such electronic communications or such real-time location data from the provider of electronic communication service or remote computing service, such officer would be required to obtain a search warrant pursuant to this section.”[13] In essence, this means that if law enforcement would need a warrant to get location data from a cell-service provider, then it also needs a warrant to use a StingRay device.

With courts ruling against warrantless use of StingRay devices and legislatures passing laws requiring warrants, it seems we are headed in the right direction with regard to cell-phone tracking and our Fourth Amendment rights.

 

[1] See Cyrus Farivar, Another Court Tells Police: Want to Use a Stingray? Get a Warrant, Ars Technica (Sept. 22, 2017), https://arstechnica.com/tech-policy/2017/09/another-court-tells-police-want-to-use-a-stingray-get-a-warrant/.

[2] Id.

[3] See generally, Jones v. United States, 168 A.3d 703 (2017); State v. Andrews, 227 Md. App. 350 (2015).

[4] See Jones v. United States, 168 A.3d 703, 707 (2017).

[5] Id.

[6] Id.

[7] Id.

[8] Id. at 715.

[9] See generally, State v. Andrews, 227 Md. App. 350 (2015).

[10] Id. at 354.

[11] Id.

[12] See generally, State v. Andrews, 227 Md. App. 350 (2015).

[13] See Va. Code Ann. § 19.2-70.3 (2017).

Image Source: https://www.aclu.org/sites/default/files/styles/action_sidebar_wide_280x240/public/field_image/web15-siteimages-act-ecpa-2400×960.jpg?itok=vaHqbpk5.

#Sponsored: Holding Social Media Influencers Responsible for Their Representations

By: Helen Vu

Those familiar with the social media world may recall the disaster that was the inaugural Fyre Festival. The music festival was slated to take place over a weekend in April 2017 on the island of Great Exuma in the Bahamas.[1] Organized by an entrepreneur named Billy McFarland, Fyre Festival was framed as “Coachella in the Caribbean”[2] and was promoted almost exclusively through social media platforms such as Instagram, Facebook, and Twitter.[3] Organizers paid celebrities and social media influencers, such as Jeffrey “Ja Rule” Atkins, Bella Hadid, and Kendall Jenner, to upload posts promoting the event to their Instagram accounts.[4] These influencers, who have millions of followers, convinced their fans to hand over thousands of dollars for tickets to a lavish music festival on a tropical island.[5]

Unfortunately, the event turned into more of a dumpster fire than a Fyre Festival. The organizers were woefully unprepared for the actual event, had barely anything set up, and cancelled the festival as people were boarding planes from Miami to the Bahamas.[6] A large group of individuals received the news too late and ended up stuck on an island without enough departing flights.[7] These unlucky attendees had to wait a day to find a flight back to the United States.[8] Meanwhile, instead of staying in the beachside villas promised, they had to find shelter in half-built tents.[9] At mealtime, organizers handed festivalgoers cheese sandwiches rather than the world-class cuisine they had paid for.[10] Social media lit up with discussions of how the Fyre Festival had turned into such a nightmare and how the organizers had known for at least a month that there was no feasible way the event could occur.[11] Eventually, the failed festival became old news, but as the furor died down, legal consequences for the festival organizers and promoters began rolling in.[12]

Several individuals brought class action lawsuits on behalf of other festivalgoers, alleging fraud, breach of contract, and negligent misrepresentation.[13] While these complaints all name the main organizer, Mr. McFarland, as a defendant, it is noteworthy that some of the claimants attempt to hold the influencers who promoted the Fyre Festival accountable as well.[14] A suit filed in the United States District Court for the Central District of California included claims against 50 unnamed individuals, alleging that these defendants made misrepresentations to their followers to induce them into purchasing tickets to the event.[15] Another suit was brought in the Los Angeles County Superior Court against 100 unnamed individuals, claiming false and misleading advertising.[16] It is unclear whether the Fyre Festival ticketholders have a valid claim against the promoters who advertised the event on their social media accounts. After all, attendees purchased their tickets from the organizers rather than from the promoters. In addition, these celebrities likely did not have much more knowledge of the event than the attendees did; they merely posted promotional content with a tap of a finger in exchange for compensation.

As social media developed and its use grew exponentially, users with extremely high numbers of followers found a way to monetize the influence they hold over so many individuals. Brands and companies commonly pay popular users to post material on their social media accounts promoting goods and services. These sponsored posts are often couched as recommendations from the influencer to her followers rather than as outright advertisements, so users view them as more authentic than regular ads posted by the companies themselves.[17] As this business-savvy practice grows, steps should be taken to ensure that these promoters post truthful material or, at the least, do diligent research on the products they advertise. The companies that pay for these posts must follow regulations of their own when they advertise directly to consumers.[18] Instagram influencers should be subject to similar regulations as well. Otherwise, little stops them from posting false advertising for the company that pays the most.

The Federal Trade Commission (“FTC”) is a federal agency that protects consumers from unfair, deceptive, and fraudulent practices in the marketplace by regulating advertising and marketing.[19] While the bulk of the FTC’s work involves traditional methods of advertising, the agency has become more involved with social media marketing as online influencers play an increasingly large role in promoting products and services. The FTC recently sent more than 90 letters to influencers and online marketers reminding them that they are required to disclose to their followers which social media posts they are being paid to upload as a promotion or endorsement.[20] This signals a shift from holding only the brands responsible for advertising violations to making the influencers accountable as well.[21]

While the Fyre Festival is arguably the best-known social media marketing failure, it is only part of a bigger picture involving promotional posts, influencers, and fraud. Internet users will find ever more innovative ways of using social media not just to connect with friends but also to make money. How will we ensure that those making financial gains off an inherently untrustworthy medium are held responsible for their actions? Does the FTC have the power to strictly regulate anyone who is compensated for his or her posts, even if the posts appear on a forum for self-expression? Influencer marketing fraud is a growing area of concern not just for the consumers who may fall prey to deceptive advertising, but also for the influencers who may be held liable for their sponsored posts.[22]

 

[1] See Bryan Burrough, Fyre Festival: Anatomy of a Millennial Marketing Fiasco Waiting to Happen, Vanity Fair (Aug. 2017), https://www.vanityfair.com/news/2017/06/fyre-festival-billy-mcfarland-millennial-marketing-fiasco.

[2] Id.

[3] See id.

[4] See id.  

[5] See id.

[6] See id.

[7] See Burrough, supra note 1. 

[8] See id.

[9] Jung v. McFarland, No. 2:17-cv-03245 (C.D. Cal. filed Apr. 30, 2017).

[10] See id. 

[11] Burrough, supra note 1.

[12] See Jeff John Roberts, Celebrity Influencers Face Moment of Truth in Fyre Festival Lawsuit, Fortune (May 7, 2017), http://fortune.com/2017/05/07/fyre-festival-lawsuit/.

[13] See The Fyre Festival is Facing 9 Lawsuits, FBI Investigation, Organizer Arrested, The Fashion Law (July 3, 2017), http://www.thefashionlaw.com/home/a-list-of-all-of-the-fyre-festival-lawsuits-that-have-been-filed-so-far.

[14] See id.

[15] See Jung, No. 2:17-cv-03245.

[16] See Chinery v. Fyre Media, Inc., No. BC659938 (Super. Ct. Cal. filed May 2, 2017).

[17] See Shareen Pathak, Cheatsheet: What You Need to Know About Influencer Fraud, Digiday (Nov. 3, 2017), https://digiday.com/marketing/cheatsheet-need-know-influencer-fraud/.

[18] Federal Trade Commission, Advertising and Marketing on the Internet (Sept. 2009), https://www.ftc.gov/system/files/documents/plain-language/bus28-advertising-and-marketing-internet-rules-road.pdf.

[19] Federal Trade Commission, What We Do, https://www.ftc.gov/about-ftc/what-we-do (last visited Nov. 21, 2017).

[20] Federal Trade Commission, FTC Staff Reminds Influencers and Brands to Clearly Disclose Relationship (Apr. 19, 2017), https://www.ftc.gov/news-events/press-releases/2017/04/ftc-staff-reminds-influencers-brands-clearly-disclose.

[21] Roberts, supra note 12.

[22] See id.

Image Source: http://www.iamwire.com/2016/05/influencer-marketing-strategy/135280.
