By: Catherine Schroeder
In the wake of the terrorist attack at two New Zealand mosques last Friday, the world again had to grapple with the role social media and the internet play in these horrific events. The attack, which claimed 50 lives on Friday, March 15th, was broadcast live on Facebook and then reposted repeatedly across the internet.[1] After New Zealand police flagged the video, Facebook quickly deleted it; however, Facebook, along with YouTube and Twitter, struggled to combat the repeated uploading and sharing of the footage and was still working to remove the video and images over the weekend.[2] Facebook stated that within the first 24 hours after the attack, it removed or blocked 1.5 million copies of the video from its site.[3] Facebook was able to block 80% of these copies while they were being uploaded.[4] YouTube took down tens of thousands of versions of the video.[5]
Facebook, YouTube, and Twitter have come under scrutiny numerous times in recent years for not removing hate speech or terrorist propaganda fast enough.[6] Beyond responding to backlash from these events, Facebook has maintained community standards for its users for years and constantly monitors activity on its platform.[7] Facebook has also made changes directly in response to public pressure, such as releasing details of its content review policy this past year in an effort to “do better.”[8] Watching these global corporations respond to public outcry raises a question: what force is actually making the companies remove this kind of content? While many factors push these social media corporations to do the “right thing,” in the United States there are actually no regulations or laws that require Facebook or other social media providers to remove speech such as violent videos or hate speech.[9]
Facebook, YouTube, and Twitter are self-regulating in the area of governing speech.[10] They are given this freedom in the United States through § 230 of the Communications Decency Act, which grants interactive computer services immunity from liability for user-generated content.[11] Courts have interpreted § 230 as having two main purposes: 1) to foster Good Samaritan self-policing within these services and 2) to protect free speech for users.[12] The Good Samaritan provision of § 230 sought to have these companies reflect the normative expectations of their users.[13] Furthermore, there is a fear that government regulation of what speech is published would lead to collateral censorship.[14] Too much regulation could also severely restrict speech, stifling the exchange of ideas on these platforms and producing a “chilling effect.”[15]
With a lack of regulations, Facebook, YouTube, and Twitter are the “architects” for publishing new speech online.[16] So the question still stands: what force drives them to censor? These social media corporations are, in fact, corporations. They are driven by a sense of corporate responsibility and by meeting users’ expectations in order to increase shareholder value and economic viability.[17] However, whether because of corporate responsibility or economic success, the leading social media services’ values and speech policies have reflected First Amendment norms and United States democratic culture.[18] The community standards and internal policies of Facebook, Twitter, and YouTube were all drafted, analyzed, and created by American lawyers.[19] These platforms have even pushed back against government requests to remove content.[20] In 2012, a video was uploaded to YouTube that negatively depicted the Muslim faith.[21] Violence erupted in countries such as Libya and Egypt, and President Obama asked YouTube and Facebook to take down the video.[22] The video did not violate either company’s community standards, and both ultimately decided not to take it down, a decision rooted in the American free speech norms these corporations had adopted.
While these social media providers are generally self-regulated, individual countries can flex some muscle and control what is posted within their borders. In 2007, Turkey blocked access to YouTube nationwide when the company did not remove videos the Turkish government had demanded be taken down.[23] More recently, in 2016, in response to terrorist attacks in Paris and Brussels, Facebook, YouTube, Twitter, and Microsoft entered an agreement with the European Union to remove hate speech within twenty-four hours of its posting.[24] This reflects these companies censoring not out of corporate responsibility or user pressure, but to avoid regulations and fines imposed by individual countries.[25] It is a shift away from the First Amendment norms that were a foundation for these companies’ policies, since, unlike the United States, many of these European countries do not have a heavy presumption against speech restrictions.[26]
The issue of terrorist attacks and other hateful propaganda being broadcast through these platforms is not over. The good news is that while these companies are self-regulated, they are motivated to foster community and a flow of ideas, which means they will continue to strike a balance between removing violent, hateful videos and preserving free speech.
[1] See Jon Emont et al., Facebook, YouTube, Twitter Scramble to Remove Video of New Zealand Mosque Shooting, Wall Street Journal (March 15, 2019, 7:03 p.m. ET), https://www.wsj.com/articles/live-video-of-new-zealand-mosque-shooting-dodges-social-media-safeguards-11552657931.
[2] See id.
[3] See Yoree Koh, Why Video of New Zealand Massacre Can’t Be Stamped Out, Wall Street Journal (March 17, 2019, 7:00 p.m. ET), https://www.wsj.com/articles/why-video-of-new-zealand-massacre-cant-be-stamped-out-11552863615?mod=article_inline.
[4] See id.
[5] See id.
[6] See Danielle K. Citron, Extremist Speech, Compelled Conformity and Censorship Creep, 93 Notre Dame L. Rev. 1035, 1038 (2018).
[7] See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1608 (2018).
[8] See Ian Wren, Facebook Updates Community Standards, Expands Appeals Process, NPR (April 24, 2018, 5:01 a.m. ET), https://www.npr.org/2018/04/24/605107093/facebook-updates-community-standards-expands-appeals-process.
[9] See Klonick, supra note 7, at 1602.
[10] See id.
[11] See id.
[12] See id. at 1608.
[13] See id. at 1630.
[14] See id.
[15] See id. at 1608.
[16] See id. at 1617.
[17] See id.
[18] See id. at 1621.
[19] See id.
[20] See id. at 1623.
[21] See id. at 1624.
[22] See id. at 1625.
[23] See id. at 1624.
[24] See Citron, supra note 6, at 1038.
[25] See id.
[26] See id. at 1039.