The first exclusively online law review.

Tag: social media

TikTok Brain: Can We Save Children’s Attention Spans?

By Nathan Crispo

The Facebook, now just Facebook, launched in February of 2004.[1]  It took just one day for the site to get over one thousand users.[2]  As of 2024, Facebook has more than three billion monthly users and over two billion daily users.[3]  While Facebook is one of the earliest social media platforms that is still widely used today,[4] it is far from alone in having billions of users.  As of January 2024, at least six different social media platforms have more than one billion users.[5]  Social media has become ubiquitous; more than five billion people worldwide are social media users.[6] 

New Friend on Social Media or Human Trafficker Looking to Make a Connection?

By Amanda Short

Do you know every person you add on social media? Do you know whether your loved ones are adding strangers? Human trafficking is the exploitation of persons for labor, services, or commercial sex.[1] It is a form of modern-day slavery: victims are coerced and compelled against their will for the benefit of the trafficker.[2] As social media has become part of modern life, human traffickers have adapted their tactics to recruit and sell victims through these platforms.[3] According to the Polaris Project, traffickers use social media sites such as Facebook, Instagram, Snapchat, and Kik for recruitment.[4] Traffickers are also known to find victims through dating sites like Tinder, Grindr, and Plenty of Fish.[5]

Human trafficking has been reported in every state in the United States,[6] with a disproportionate effect on children and women.[7] Reports by human trafficking victims and survivors to the National Human Trafficking Hotline increased by 20% from 2018 to 2019.[8] Common misconceptions about human trafficking include that victims can only be foreign nationals or immigrants, that trafficking requires physical restraint or force, and that victims come only from impoverished backgrounds.[9] The top five recruitment techniques for sex trafficking are an intimate partner or marriage proposition, a familial relationship, job offers, posing as a benefactor, and false promises.[10]

As the public has come to enjoy the many uses of social media, so have human traffickers. An estimated 72% of Americans use some type of social media,[11] and many visit these sites every day.[12] Many social media sites include privacy settings, but those settings may still allow strangers to send friend requests and direct messages. In a study by the Pew Research Center, one in six teens reported being contacted online by a stranger in a way that made them feel scared or uncomfortable.[13]

Human traffickers often use a “loverboy” tactic, befriending young girls in public or online.[14] The loverboy trafficker makes the victim feel special through gifts and affection, but the relationship changes drastically once the victim is ordered to provide services.[15] In 2018, Congress enacted the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which allows civil and criminal liability for online marketplaces that knowingly participate in sex trafficking.[16] On April 28, 2020, the 14th Court of Appeals in Texas denied Facebook’s motion to dismiss a suit alleging that Facebook enabled sex trafficking on its site.[17] The plaintiffs are 13-, 14-, and 16-year-old girls who claim they were recruited by human trafficking pimps on Facebook or Instagram.[18]

You may be wondering what you can do to protect yourself and others from human traffickers. Here are some tips from the Department of Homeland Security: (1) set your social media accounts to private so only real friends can see your information; (2) know whom you are talking to on social media and refrain from speaking with people who are not real friends; (3) share only photos of yourself that you would want seen by family, teachers, and employers; (4) do not share personal information like your location or contact information; (5) do not meet up with anyone you have only met online; and (6) report suspicious activity to law enforcement or, if you are a minor, to a trusted adult.[19] Follow and share these tips to help keep yourself and others safe from the dangers of human trafficking on social media platforms.

[1] Human Trafficking, U.S. Dep’t of Just., https://www.justice.gov/humantrafficking (last visited Oct. 3, 2020).

[2] What is Human Trafficking?, U.S. Dep’t of Just., https://www.justice.gov/humantrafficking/what-is-human-trafficking (last visited Oct. 3, 2020).

[3] Social Media in Recruitment, Polaris, https://polarisproject.org/human-trafficking-and-social-media/ (last visited Oct. 3, 2020).

[4] Id.

[5] Id.

[6] 2019 U.S. National Human Trafficking Hotline Statistics, Polaris, https://polarisproject.org/2019-us-national-human-trafficking-hotline-statistics/ (last visited Oct. 3, 2020).

[7] Id.

[8] Id.  

[9] Common Myths and Misconceptions About Human Trafficking in the U.S., Polaris, https://humantraffickinghotline.org/sites/default/files/Common%20Myths%20and%20Misconceptions.pdf (last visited Oct. 3, 2020).

[10] 2019 U.S. National Human Trafficking Hotline Statistics, Polaris, https://polarisproject.org/2019-us-national-human-trafficking-hotline-statistics/ (last visited Oct. 3, 2020) (stating the top five recruitment tactics for labor trafficking are job offers, false promises, smuggling-related, familial relationships, and posing as a benefactor).

[11] Social Media Fact Sheet, Pew Research Ctr. (June 12, 2019), https://www.pewresearch.org/internet/fact-sheet/social-media/.

[12] Id.

[13] Mary Madden et al., Teens, Social Media, and Privacy, Pew Research Ctr. (May 21, 2013), https://www.pewresearch.org/internet/2013/05/21/teens-social-media-and-privacy/.

[14] Michelle Lillie, How Street Traffickers Recruit Young Girls, Human Trafficking Search (2014), https://humantraffickingsearch.org/how-street-traffickers-recruit-young-girls/.

[15] Id.

[16] Tom Jackman, Trump Signs ‘FOSTA’ Bill Targeting Online Sex Trafficking, Enables States and Victims to Pursue Websites, Wash. Post. (Apr. 11, 2018, 11:41 AM), https://www.washingtonpost.com/news/true-crime/wp/2018/04/11/trump-signs-fosta-bill-targeting-online-sex-trafficking-enables-states-and-victims-to-pursue-websites/.

[17] Will Neal, US Court Approves Sex-Trafficking Lawsuits Against Facebook, Organized Crime and Corruption Rep. Project (April 29, 2020, 4:16 PM), https://www.occrp.org/en/daily/12224-us-court-approves-sex-trafficking-lawsuits-against-facebook.

[18] Id.

[19] Online Safety, Homeland Sec., https://www.dhs.gov/blue-campaign/online-safety (last visited Oct. 3, 2020).

Image Source: https://www.cpomagazine.com/data-privacy/new-research-study-shows-that-social-media-privacy-might-not-be-possible/

Tik Tok on The Clock: An Overview of the Controversy Surrounding Gen Z’s Favorite App

By Tristan Smith

Over the course of 2020, a new wave of controversy has arisen over the use of the popular social media app TikTok around the world and specifically here in the United States.  The application allows users to create original content such as music, lip-syncs, dances, and general narratives.[1]  The application has grown immensely in popularity in the United States, reporting a growth rate of nearly 800% since January 2018 and nearly 100 million monthly users.[2]  Globally, the app has been downloaded about 2 billion times.[3]  However, this increased popularity has brought renewed and heightened scrutiny of the foreign-owned app from U.S. officials as well as other players in the private sector.[4]  As far back as 2018, government officials in the United States were raising concerns about the Chinese-owned app.  In response to its growing popularity, Congress passed the Foreign Investment Risk Review Modernization Act (FIRRMA), which expanded the scope and investigatory powers of the Committee on Foreign Investment in the United States, allowing the committee to launch investigations into foreign corporations with a large financial presence in the United States.[5]

TikTok has also found itself embroiled in political controversies here in the United States.  In the summer of 2020, in the wake of the coronavirus pandemic and an economic crisis across the United States, President Donald J. Trump’s reelection campaign announced an in-person rally for supporters of the President in Tulsa, Oklahoma.[6]  Although the Trump campaign boasted that over one million tickets had been requested, a mere 6,200 tickets were actually scanned at the rally.[7]  In the immediate aftermath of the lackluster rally, TikTok users claimed to have registered hundreds of thousands of tickets for the event as a prank without event planners being aware of the truth behind the registrations.[8]

In the wake of the growing controversy surrounding TikTok, the Trump administration has sought to push TikTok’s Chinese owner to sell either the entire company or at least a majority stake to American buyers.[9]  ByteDance Ltd., the Beijing company that owns TikTok, originally entered discussions with Oracle Corporation over the sale of a large portion of the company; however, the original proposed partnership fell short of the demands of President Trump and Senate Republicans, who wanted a U.S. company to hold at least a majority stake in TikTok as a condition for the app to remain available for download in the United States.[10]

The backlash against the Trump Administration’s threats to TikTok has been swift from users of the platform.[11] Users have developed a sense of community and view the app as an outlet for creativity and expression, especially in the wake of COVID-19.[12]  As one user put it, “If TikTok did shut down, it would be like losing a bunch of really close friends I made, losing all the progress and work I did to get a big following.”[13]

[1] See generally Top 10 TikTok App Tips and Tricks, Guiding Tech (Oct. 2, 2018), https://www.guidingtech.com/top-tiktok-musically-app-tips-tricks/.

[2] Alex Sherman, TikTok Reveals Detailed User Numbers for the First Time, CNBC (Aug. 24, 2020, 6:33 PM), https://www.cnbc.com/2020/08/24/tiktok-reveals-us-global-user-growth-numbers-for-first-time.html.

[3] Id.

[4] Taylor Lorenz, What if the U.S. Bans TikTok?, The New York Times (July 10, 2020), https://www.nytimes.com/2020/07/10/style/tiktok-ban-us-users-influencers-taylor-lorenz.html (last updated Aug. 3, 2020); see also Mike Isaac and Karen Weise, Amazon Backtracks From Demand That Employees Delete TikTok, The New York Times (July 10, 2020), https://www.nytimes.com/2020/07/10/technology/tiktok-amazon-security-risk.html (explaining that less than five hours after Amazon sent an email to its employees asking them to delete TikTok citing security risks, the company reversed course).

[5] Hannah Weiss, Who’s Afraid of TikTok?, Wake Forest Journal of Business & Intellectual Property Blog (Mar. 29, 2020), http://ipjournal.law.wfu.edu/2020/03/whos-afraid-of-tiktok/(explaining the expanded powers of the Committee on Foreign Investment in the United States under FIRRMA include the expansion of its jurisdiction and increased reporting requirements on the part of foreign companies).

[6] Taylor Lorenz, TikTok Teens and K-Pop Stans Say They Sank Trump Rally, The New York Times (June 21, 2020), https://www.nytimes.com/2020/06/21/style/tiktok-trump-rally-tulsa.html (last updated Sept. 14, 2020).

[7] Id.

[8] Id.

[9] Trump Administration Pushes for U.S. Control of TikTok, The Wall Street Journal, https://www.wsj.com/articles/trump-administration-pushes-for-u-s-control-of-tiktok-11600295711 (updated Sept. 16, 2020).

[10] Id. (“Asked about ByteDance maintaining a majority stake in TikTok, Mr. Trump said, ‘Conceptually, I can tell you, I don’t like that.’”).

[11] Lorenz, supra note 4.

[12] Id.

[13] Id.

Image Source: https://techcrunch.com/2020/03/12/hawley-bill-tiktok-china/

How Social Media is Playing a Role in the Upcoming Presidential Election

By: Joleen Traynor

With the presidential election fast approaching, voters and pundits alike are increasingly concerned about the integrity of our election systems. Outside groups attempted to infiltrate our election systems in the 2016 election cycle,[1] and the United States could be at risk again in the 2020 cycle. This concern grows by the day as election day nears, especially after Microsoft recently issued a warning that a number of foreign governments, including Russia, China, and Iran, have attempted to access secure campaign data from both President Trump’s and Joe Biden’s campaigns.[2] Social media also has an increasing role to play in election security and in ensuring that only verifiable, accurate information is presented to the public. However, exactly how this information should be presented is up for debate, and each platform is taking its own approach to unverifiable information.

This raises the question: what role does social media have to play in securing our election system? Foreign adversaries often rely on artificial intelligence, in the form of automated accounts referred to as “bots,” to disseminate misleading or false information online. A “bot” is “a computer that attempts to talk to humans through technology that was designed for humans to talk to humans.”[3] In fact, “researchers determined that bots accounted for roughly 25 percent of tweets concerning the 2016 presidential election.”[4]

Some platforms, like Twitter, are attempting to balance access to information with ensuring that the information circulated is legitimate. This is seen in Twitter’s policy of suspending certain accounts for inappropriate behavior; in many instances, these suspended accounts are “bots” rather than actual people sharing links and information.[5] Another social media platform, Facebook, overhauled its advertising policy after the 2016 election.[6] Looking ahead, “Twitter said it plans to more aggressively label or remove election-related tweets that include disputed or misleading information, while Google said it would screen more auto-complete suggestions to avoid voters being misled.”[7] Are these actions sufficient?

In the 2020 presidential election cycle, substantive steps are already being taken to label posts and information from President Trump’s campaign team. Both Twitter and Facebook have placed warning labels on tweets and posts from President Trump’s official channels; however, the posts have not been removed.[8] These labels warn followers and readers that the posts may contain inaccurate information, and they may encourage readers to independently verify information for themselves. These are valid and important steps that social media platforms are taking to better inform the public, but is it enough to stop unverifiable information and false news from spreading online? These are questions that can only be answered in the days and weeks after election day.

[1] See generally 2016 Presidential Campaign Hacking Fast Facts, CNN (Oct. 31, 2019, 1:10 PM), https://www.cnn.com/2016/12/26/us/2016-presidential-campaign-hacking-fast-facts/index.html.

[2] David E. Sanger and Nicole Perlroth, Russian Intelligence Hackers Are Back, Microsoft Warns, Aiming at Officials of Both Parties, N.Y. Times (Sept. 10, 2020), https://www.nytimes.com/2020/09/10/us/politics/russian-hacking-microsoft-biden-trump.html.

[3] Siobhan Roberts, Who’s a Bot? Who’s Not?, N.Y. Times (June 16, 2020), https://www.nytimes.com/2020/06/16/science/social-media-bots-kazemi.html.

[4] Ashley Fox, Automated Political Speech: Regulating Social Media Bots in the Political Sphere, 18 First Amend. L. Rev. 114, 117 (2020).

[5] Brice C. Barnard, The Tweet Stops Here: How Politicians Must Address Emerging Freedom of Speech Issues in Social Media, 88 UMKC L. Rev. 1019, 1033 (2020).

[6] Alex Rochefort, Regulating Social Media Platforms: A Comparative Policy Analysis, 25 Comm. L. & Pol’y 225, 237 (2020).

[7] Emily Glazer and Kirsten Grind, Google and Twitter Sharpen Tools to Stop False Claims About Election, Wall St. J. (Sept. 10, 2020, 4:11 PM), https://www.wsj.com/articles/twitter-to-label-remove-more-election-related-tweets-with-misleading-information-11599757200?mod=tech_lead_pos7.

[8] Brian Fung and Paul P. Murphy, Facebook and Twitter put warning label on Trump’s posts on voting twice, CNN (Sept. 12, 2020, 2:39 PM), https://www.cnn.com/2020/09/12/politics/twitter-facebook-trump-north-carolina/index.html.

Image Source: https://www.texomashomepage.com/news/national-news/social-media-giants-testified-on-capitol-hill-addressing-their-role-in-online-extremism/

Lessons From The German Saga Of Fake News – Proposing A Shift From The State To Communities

By Saiesh Kamath, 3L at the National University of Juridical Sciences, Kolkata

As the Internet grows ever more ubiquitous, social media platforms have become essential vehicles for participation in a democratic culture.[1] However, these platforms are also used to disseminate ‘fake news’.[2] Fake news is an umbrella term denoting propaganda, hoaxes, trolling, and, often, satire.[3] Its growth has been flagged as a global cause for concern because of its demonstrated capability to subvert democratic processes[4] and instigate violence,[5] with minorities and the marginalised disproportionately affected.[6] Since fake news achieves virality on social media platforms, there have been calls to regulate the dissemination of such content on these platforms, and some nations have passed statutes to deal with this crisis.[7] A critical appraisal of such regulatory laws is necessary in order to construct a framework which will effectively curb the menace of fake news. In this regard, the German law merits close examination, considering the large attention it has received from legislators and legislatures of many nations.[8]

In Part I of this paper, I introduce the German law instituted to regulate fake news. Part II explores its implications for free speech. Part III offers an alternative framework to counter fake news.

I. The Network Enforcement Act, 2017, of Germany

The necessity of countering fake news in Germany was sharpened by the role that hate speech and fake news played in the 2016 presidential election in the United States. Hence, the Network Enforcement Act, 2017 (hereinafter the Act) was passed in order to combat hate speech and fake news online[9] – two concepts that have increasingly become interconnected.[10] The Act was brought about for two reasons. Firstly, to ensure the enforcement of existing provisions in the German criminal code.[11] Secondly, to place the onus of such enforcement on social networks[12] (hereinafter intermediaries) with two million or more registered users in Germany.[13]

The Act requires intermediaries to provide users with a mechanism for lodging complaints against illegal content online.[14] In cases of “manifestly unlawful” content, intermediaries must take the content down within twenty-four hours.[15] For removing or blocking access to any other type of illegal content, a period of seven days is provided.[16] Failure to adhere to these requirements is considered a civil violation and carries fines of up to fifty million euros.[17]

II. Stifling Free Speech

The Act was considered one of the most expansive laws from a Western nation to tackle fake news.[18] However, civil rights activists, media organisations, and other groups condemned the law for its implications for free speech.[19] Of the many legitimate arguments against it, one of the most compelling was that the law stifled free speech.[20]

There is a chilling effect on freedom of speech in two main ways. Firstly, it arises due to the burden imposed on intermediaries to make a call on what content qualifies as legal or illegal. Secondly, it is manifested through the phenomenon of over-removal.

The main criticism of the Act arises from the obligation imposed on intermediaries to monitor and take down illegal content online. The problem with this feature is that it privatizes enforcement of the law.[21] Intermediaries are obliged to judge the legality of content and take down what is illegal, yet they do not possess the technical legal knowledge to make that judgement. Hence, to impute such an understanding of the law to an intermediary, effectively equating its function with that of the judiciary,[22] is absurd.

While this unfair obligation does not in and of itself stifle speech, it creates a phenomenon called ‘over-removal’,[23] which entails broad and excessive censorship. Here, over-removal happens because of the financial disincentives, in the form of fines, created by the Act. Where intermediaries fail in their obligation to remove illegal content in time, they are penalized with fines. Given that intermediaries do not possess an understanding of the law sufficient to guarantee correct judgement in every instance within a time-sensitive framework, they err on the side of censorship and take down even content which is legal.[24] This is also problematic because censorship tends to disproportionately silence the voices of minority groups.[25] Considering that minorities were intended to benefit from the Act, this consequence undoes progress made in that direction.

This violates Article 5 of the Basic Law (the Constitution) of Germany, which guarantees freedom of speech, expression, and opinion to the German people.[26] Additionally, over-removal undermines the principle of freedom of speech enshrined in international human rights and civil rights treaties and conventions.[27] The same reasoning was brought forward in the case against the French law on hate speech,[28] which was inspired by the Act.[29] The French Constitutional Council struck down critical provisions of that law, one basis being that it disproportionately infringed freedom of speech, and in so doing articulated the importance of the freedom in a democratic polity.[30]

A draft to amend the Act was introduced recently,[31] but it makes no changes to the core tenet of the Act being critiqued here – the privatization of law enforcement.

III. From Excessive State Intervention to Community Collaboration

The issue of fake news is not an easy one to analyse, let alone resolve. However, the approach of compromising one human right to preserve another is a position born of the unique German experience with the collapse of a democratic order under the Nazis.[32] As such, this approach should not be replicated with reckless abandon in other nations, especially where similar historical lessons and popular mandate are absent.

The social utility derived from free speech should not be laid at the sacrificial altar without considering other frameworks. In this regard, I argue that there needs to be a shift from the excessive state-interventionist approach demonstrated by Germany to a rights-respecting framework of community collaboration.

Fake news targets users of intermediary platforms in insidious ways. Its core characteristic, virality, ensures users are widely exposed to it. This exposure is additionally problematic because users are liable to keep believing fake news even after corrections are issued.[33] Hence, tackling the root of this issue means making users more resilient to fake news.

To do this, there needs to be a strategy to decrease the persuasive value of fake news and build psychological resistance in users. In this regard, the ‘inoculation’ theory is useful. Arising from psychology, it essentially holds that users may be ‘inoculated’ against fake news by exposing them to a ‘weakened version’ of it.[34] Users would be exposed to relatively apparent fake news, and this exercise in identifying fake news would build their resilience and vigilance. This would lead to a pathway where fake news tends to be preemptively ‘bunked’.[35]

Research around the inoculation theory and its effects is promising. When contextualised around the climate change narrative, users were made resistant to a specific piece of misinformation and hence avoided the influence carried by fake news on climate change.[36] This is especially promising given that the extent of, and reasons for, climate change remain an issue which falls on partisan lines. Hence, the theory holds potential for other influential partisan issues which are usually subject to fake news.

This research was also extended to help users spot the common strategies employed in producing fake news.[37] By exposing users to relatively apparent manipulation strategies used by fake news disseminators, researchers enabled users to better identify the tell-tale signs of fake news.[38] This study is more potent because the relatively time-consuming exercise of inoculating users against specific pieces of misinformation (which still holds its own value against particularly pernicious and persistent falsehoods) is now subsumed in a wider understanding of the manipulation tactics surrounding fake news. Hence, it functions like a ‘broad spectrum vaccine’.[39]

This approach of radically decentralising the exercise of countering fake news is better because it avoids subverting human and constitutional rights and the consequent adverse effects on democratic discourse and polity; it refuses to submit to false equivalencies of trade-offs between various rights or groups of rights. It also has the benefit of empowering the ultimate stakeholder: the user. While a more digitally connected world may bring new dangers, a framework comfortable with de-emphasising cherished rights and principles should not be adopted when rights-respecting alternatives exist.

In this framework, the State would still have a role: devising strategies to facilitate collaboration between communities that are countering fake news. This would increase co-ordination between communities and enable them to learn from each other and share best practices. Such a framework would be relevant in, and respectful of, a world of increasing diversity of cultures, and would acknowledge that a one-size-fits-all approach is not the best fit for a problem as complex and persistent as fake news.

[1] See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. Law. Rev. 1600 (2018).

[2] See Alberto Alemanno, How to Counter Fake News? A Taxonomy of Anti-fake News Approaches, 9 Eur. J. Risk Regul. 1 (2018).

[3] See Mark Verstraete, Derek E. Bambauer, and Jane R. Bambauer, Identifying and Countering Fake News, Arizona Legal Studies Discussion Paper No. 17-15 (2017).

[4] See Alexis Madrigal, What Facebook Did to American Democracy, The Atlantic (Oct. 12, 2017), https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/.

[5] See Reuters, Man pleads guilty in Washington pizzeria shooting over fake news (Mar. 24, 2017, 8:11 PM), https://www.reuters.com/article/us-washingtondc-gunman/man-pleads-guilty-in-washington-pizzeria-shooting-over-fake-news-idUSKBN16V1XC (last visited June 11, 2020).

[6] See Kimberly Grambo, Fake News And Racial, Ethnic, And Religious Minorities: A Precarious Quest For Truth, 21(5) U. Pa. J. Const. L. 1319 (2019).

[7] See Fathin Ungku, Factbox: ‘Fake News’ laws around the world, Reuters (April 2, 2019, 3:43 PM), https://in.reuters.com/article/singapore-politics-fakenews/factbox-fake-news-laws-around-the-world-idINKCN1RE0XW.

[8] See Jacob Mchangama and Joelle Fiss, Germany’s Online Crackdowns Inspire The World’s Dictators, Foreign Policy (November 6, 2019, 10:47 AM), https://foreignpolicy.com/2019/11/06/germany-online-crackdowns-inspired-the-worlds-dictators-russia-venezuela-india/; See also Justitia, The Digital Berlin Wall: How Germany (Accidentally) Created a Prototype for Global Online Censorship, 3, 5, 17 (November, 2019), http://justitia-int.org/wp-content/uploads/2019/11/Analyse_The-Digital-Berlin-Wall-How-Germany-Accidentally-Created-a-Prototype-for-Global-Online-Censorship.pdf.

[9] See Press Release, Global Network Initiative (April 20, 2017), https://globalnetworkinitiative.org/proposed-german-legislation-threatens-free-expression-around-the-world/; See also Alana Schetzer, Governments are making fake news a crime but it could stifle free speech, The International Forum for Responsible Media Blog, (July 21, 2019), https://inforrm.org/2019/07/21/governments-are-making-fake-news-a-crime-but-it-could-stifle-free-speech-alana-schetzer/. (‘Schetzer’).

[10] See Kirsten Gollatz and Leontine Jenner, Hate Speech and Fake News – how two concepts got intertwined and politicized, Digital Society Blog (March 15, 2018), https://www.hiig.de/en/hate-speech-fake-news-two-concepts-got-intertwined-politicised/. (‘Gollatz and Jenner’).

[11] See Heidi Tworek and Paddy Leerssen, An Analysis of Germany’s NetzDG Law, Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression 2 (2019). (‘Tworek and Leerssen’).

[12] Id.

[13] Network Enforcement Act, 2017, Art. 1 § 1(2).

[14] Network Enforcement Act, 2017, Art. 1 § 3(1).

[15] Network Enforcement Act, 2017, Art. 1 § 3(2); Tworek and Leerssen, supra note 11.

[16] Network Enforcement Act, 2017, Art. 1 § 3(3); Tworek and Leerssen, supra note 11.

[17] Tworek and Leerssen, supra note 11.

[18] See Emma Thomasson, Germany looks to revise social media law as Europe watches, Reuters, (March 8, 2018), https://www.reuters.com/article/us-germany-hatespeech/germany-looks-to-revise-social-media-law-as-europe-watches-idUSKCN1GK1BN. (‘Reuters’)

[19] See Germany: Flawed Social Media Law, Human Rights Watch, (February 14, 2018, 12:01 AM), https://www.hrw.org/news/2018/02/14/germany-flawed-social-media-law.

[20] See Schetzer, supra note 9; Germany starts enforcing hate speech law, BBC News (January 1, 2018), https://www.bbc.com/news/technology-42510868; Gollatz and Jenner, supra note 10.

[21] Tworek and Leerssen, supra note 11 at 3.

[22] See Germany: The Act to Improve Enforcement of the Law in Social Networks, Article 19, 2 (August 2017), https://www.article19.org/wp-content/uploads/2017/09/170901-Legal-Analysis-German-NetzDG-Act.pdf.

[23] Tworek and Leerssen, supra note 11 at 3.

[24] Reuters, supra note 18; Schetzer, supra note 9.

[25] See Nadine Strossen, Freedom of Speech and Equality: Do We Have to Choose?, 25(1) J.L. & Pol’y 214 (2016).

[26] Grundgesetz, The Constitution of the Federal Republic of Germany, 1949, Art. 5.

[27] See International Covenant on Civil and Political Rights, 16 December 1966, Art. 19; Universal Declaration of Human Rights, 10 December 1948, Art. 19.

[28] See Chloé Berthélémy, French Avia law declared Unconstitutional: what does this teach us at EU level?, EDRi, (June 24, 2020), https://edri.org/french-avia-law-declared-unconstitutional-what-does-this-teach-us-at-eu-level/.

[29] See Mark McCarthy, In France What’s Illegal Offline Is Now Illegal Online, Forbes (May 18, 2020), https://www.forbes.com/sites/washingtonbytes/2020/05/18/in-france-whats-illegal-offline-is-now-illegal-online/#2609c3ce38b5.

[30] See Manny Marotta, France Constitutional Court strikes down most of online hate speech law, Jurist, (June 20, 2020, 4:47 PM) https://www.jurist.org/news/2020/06/french-court-strikes-down-most-of-online-hate-speech-law/.

[31] European Commission, Draft Act amending the Network Enforcement Act, 2020/174/D (Germany) (Notified on 30 March, 2020).

[32] See Rainer Hofmann, Incitement to National and Racial Hatred: The Legal Situation in Germany in Striking a Balance 160 (1992).

[33] See Sander van der Linden and Jon Roozenbeek, Bad News – A psychological vaccine against fake news, LSE Impact Blog (July 31, 2019), https://blogs.lse.ac.uk/impactofsocialsciences/2019/07/31/bad-news-a-psychological-vaccine-against-fake-news/.

[34] Id.

[35] Id.

[36] See Sander van der Linden, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach, Inoculating the Public against Misinformation about  Climate Change, 1(2) Global Challenges 5, 6 (2017).

[37] See Jon Roozenbeek and Sander van der Linden, Fake news game confers psychological resistance against online misinformation, 5 Palgrave Commun. 2, 8 (2019).

[38] Id.

[39] Id.

Blog: The New Four Walls of the Workplace

By: Micala MacRae, Associate Notes and Comments Editor

The Supreme Court has recognized workplace harassment as an actionable claim against an employer under Title VII of the Civil Rights Act of 1964.[1]  The rise of social media has created a new medium through which workplace harassment occurs.  Courts are just beginning to confront the issue of when social media harassment may be considered as part of the totality of the circumstances of a Title VII hostile work environment claim.  Traditionally, harassment has occurred through face-to-face verbal and physical acts in the workplace.  However, the nature of the workplace has continued to expand with the rise of new technology, which allows employees to stay connected to the work environment from locations outside the physical boundaries of the office.  Harassment has moved beyond the physical walls of the workplace to the virtual workplace.  The broadening conception of the workplace and the increasing use of social media in professional settings have expanded potential employer liability under Title VII.

Social media has become a powerful communication tool that has fundamentally shifted the way people communicate.  Employers and employees increasingly utilize social media and social networking sites.[2]  While companies have turned to social media as a way to increase their business presence and reduce internal communication costs, a consequence has been increased social media harassment.  Although social media and social networking sites are not new forms of communication, their legal implications are just now coming into focus.[3]  While several cases have addressed hostile work environment claims stemming from other forms of electronic communication, few have addressed claims based on social media communications.[4]

The New Jersey Supreme Court, in Blakey v. Continental Airlines, Inc., was one of the first courts to consider whether an employer is responsible for preventing employee harassment over social media.[5]  In Blakey, an airline employee filed a hostile work environment claim arising from allegedly defamatory statements published by co-workers on her employer’s electronic bulletin board.[6]  The electronic bulletin board was not maintained by the employer, but was accessible to all Continental pilots and crew members.[7]  Employees were also required to access the Forum to learn their flight schedules and assignments.[8]

The court analyzed the case under a traditional hostile work environment framework, concluding that the electronic bulletin board was no different from other social settings in which co-workers might interact.[9]  Although the electronic bulletin board was not part of the physical workplace, the employer had a duty to correct harassment occurring there if the employer obtained a sufficient benefit from the electronic forum as to make it part of the workplace.[10]  The court made clear that an employer does not have an affirmative duty to monitor the forum, but that liability may still attach if the company had direct or constructive knowledge of the content posted there.[11]  The court limited consideration of social media harassment to situations where the employer derived a benefit from the forum and it could therefore be considered part of the employee’s work environment.[12]

Workplace harassment is no longer limited to the traditional four walls of the workplace.  As technology and the boundaries of the workplace have changed, courts have struggled to modernize their framework for assessing hostile work environment claims under Title VII.  These problems will only be exacerbated as society continues to embrace social media in daily life and employers continue to integrate social media into their business practices.

 

[1] See Meritor Sav. Bank v. Vinson, 477 U.S. 57, 64-67 (1986) (finding that workplace harassment based on individual’s race, color, religion, sex, or national origin is actionable under Title VII of the Civil Rights Act).

[2] Jeremy Gelms, High-Tech Harassment: Employer Liability Under Title VII for Employee Social Media Misconduct, 87 Wash. L. Rev. 249 (2012).

[3] See, e.g., Kendall K. Hayden, The Proof Is in the Posting: How Social Media Is Changing the Law, 73 Tex. B.J. 188 (2010).

[4] Id.

[5] Gelms, supra note 2.

[6] Blakey v. Continental Airlines, Inc., 751 A.2d 538 (N.J. 2000).

[7] Id. at 544.

[8] Id.

[9] Id. at 549.

[10] Blakey, 751 A.2d at 551.

[11] Id.

[12] Id.

Blog: My Executor Has Never Used the Internet: Estate Planning and Digital Property

By Associate Editor Kevin McCann

In 2007, a devoted World of Warcraft player decided it was time to put down his virtual crossbow and axe and sell his player account. Given the amount of time put into leveling up the character's abilities and gear, the account was in high demand and sold for 7,000 Euros (approximately $9,000). What if the player had died unexpectedly before he could sell it? Most likely his last will and testament would contain no provision stating what to do with this asset, the account would have been deleted, and the potential money lost.

While this is an extreme example of protecting a digital asset, estate planners and lawyers indicate that few people give the new reality of digital assets and online accounts any consideration when drafting their wills. The issues surrounding electronically stored items range from preserving online photos, projects, and personal records to deciding how you would want your family to manage your social media accounts. A survey by McAfee revealed that U.S. consumers value their digital assets, on average, at nearly $55,000, with approximately $19,000 attributed to personal memories (photographs and videos) alone. A living person would certainly want to determine the distribution of these electronically stored personal memories just as if they were photos in an attic.

In addition, social media websites such as Facebook and Twitter now have deceased user policies. Both policies allow interested parties to select one of two options: either delete the user account entirely or save the account in order to memorialize the deceased and allow others to interact with his or her preserved account. (For an interesting look at the differences between the two policies, see http://news.cnet.com/8301-27076_3-20013219-248.html). One could see a situation where a person would want his account deleted to save his family embarrassment, or the opposite situation where a person would want his family to continue to interact with his account through the grieving process after his death. This would be another consideration to contemplate when drafting a will.

Several states have enacted legislation that pertains to post-death access of digital accounts. For instance, a New Jersey bill introduced in June of this year would grant the executor or administrator of an estate the power to take control of any account the deceased person held with a social networking, blogging, or e-mail service website. However, many of these statutes specify that the deceased must have designated the representative in writing prior to death. The U.S. General Services Administration recommends that people set up a “social-media will,” and even go as far as naming a separate “digital executor” who is more up to speed on technological innovations and better qualified to oversee the administration of the deceased’s digital assets. In addition, estate planners advise that the probate process would take considerably less time if the testator were to include in his will a list of all accounts, passwords, and security question answers. Otherwise the executor would have to submit death certificates and relationship authentication to each of the websites.

The internet has changed the way society communicates and expresses itself, and various legal issues have arisen with this modernization. The protection of online assets at death is now a growing concern, with states just beginning to recognize the need for legislation. As the internet continues to reinvent itself with new services to better connect the world, so too must the estate planning process strive to keep up with these innovations.

 

Additional Resources:

Wall Street Journal article on the issue

Chicago Tribune article on the issue

List of online services designed to help someone plan for the probate process of digital assets.
