By: Stephanie Seibert
Image Source: https://www.usatoday.com/story/tech/2019/08/28/banning-neo-nazis-extremists-twitter-how-police-and-avoid-bias/2139370001/.
In today’s social media–centric world, there seems to be an overwhelming number of platforms available for individuals to express their thoughts and ideas. Big Tech companies such as Twitter, Facebook, and Google allow users to post content at the click of a button. At first glance, these platforms seem to advance the First Amendment and further its ideal of freedom of speech. However, these Big Tech firms are currently coming under scrutiny for, at times, restricting content in ways that arguably violate freedom of speech ideals.
These Big Tech firms are finding it difficult to police content while avoiding bias. Social media platforms face contradictory demands: oversee internet content, but do so without infringing on First Amendment rights. A balance must be struck between allowing people to express themselves and preventing harm to society, and this endeavor becomes increasingly difficult as technology expands and grows.
A fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen. Users rely on social media platforms to engage in various types of protected First Amendment speech. Accordingly, social media lawsuits and legislation regarding the First Amendment stem from deprivations under the Free Speech Clause.
Members of Congress have been attempting to pass legislation that places a burden on social media providers to prove that they are not using bias to filter content. Michael Beckerman, head of the trade group Internet Association, said this proposed legislation forces platforms to “make an impossible choice: either host reprehensible, but First Amendment-protected speech, or lose legal protections that allow them to moderate illegal content like human trafficking and violent extremism. That shouldn’t be a tradeoff.”
Congressional legislation is not the only arena in which social media providers face scrutiny; the judiciary is also handing down rulings that place extra burdens on Big Tech companies. Litigation has arisen as users challenge other users or providers for limiting or banning their ability to participate on social media platforms based on their viewpoints. These bans are being challenged as an unconstitutional form of discrimination under the First Amendment.
The difficulty with resolving these cases in the judiciary lies in how courts adapt established First Amendment jurisprudence to the world of social media. Courts are struggling to fashion remedies for this type of litigation; the court system is, in effect, trying to fit a square peg into a round hole. The judiciary is attempting to tailor existing First Amendment precedent and apply it to a “revolutionary technology.” At times this leads courts to take a “rights-centric approach” and create remedies distinct to the specific social media technology used to violate the First Amendment.
Courts are ill-equipped to understand the technological distinctions of social media, and it is ill-advised for courts to make legal rulings based on technologies that frequently change and evolve. This leads to outdated rulings and creates confusion and uncertainty about which precedent governs liability in First Amendment cases dealing with social media. Such rulings will have a chilling effect on both social media users and providers and will negatively impact social media platforms’ environment of free speech. Additionally, judicial rulings on social media risk judicial overreach: there is a risk that courts will try to expand what constitutes state action. Big Tech firms are private actors, but the nature and number of lawsuits could change dramatically if courts find that policing content constitutes state action.
Will the judiciary set a precedent for social media providers to follow? Will Congress pass legislation detailing the best way to protect users’ First Amendment rights? Or will social media providers simply be stuck performing a balancing act, trying not to infringe First Amendment rights while policing content on their sites? Social media is an invaluable part of today’s society, and emphasis should be placed on finding a solution to possible First Amendment violations while allowing social media platforms to police content and maintain a safe environment for users.
Marcy Gordon, How Should Big Tech Police Content while Avoiding Bias?, USA Today (Sept. 5, 2019, 7:25 PM), https://www.usatoday.com/story/tech/2019/08/28/banning-neo-nazis-extremists-twitter-how-police-and-avoid-bias/2139370001/.
 See Amanda M. Williams, Notes & Comment: You Want to Tweet About It But You Probably Can’t: How Social Media Platforms Flagrantly Violate the First Amendment, 45 Rutgers Computer & Tech. L.J. 89, 93 (2019).
 Packingham v. North Carolina, 137 S. Ct. 1730, 1735-36 (2017).
 See Kathleen M. Hidy, Article: Social Media Use and Viewpoint Discrimination: A First Amendment Judicial Tightrope Walk With Rights and Risks Hanging in the Balance, 102 Marq. L. Rev. 1045, 1053 (2019).
 Marcy Gordon, Tech Giants Face Questions on Hate Speech Going into Debates, AP News (Sept. 5, 2019, 7:53 PM), https://apnews.com/b8fef98153c24c8dbcc7778115089806.
 See Gordon, supra note 1.
 See Hidy, supra note 6, at 1081.
 Id. at 1046.
 Id. at 1081.
 Id. at 1082.