The first exclusively online law review.

Author: JOLT

Self-Driving Cars May Make Roads Safer, But Will They Make Our Lives Simpler?

By: Melisa Azak

Self-driving cars are predicted to usher in a new era of vehicles free from the perils of human error. Some of the top companies in the autonomous vehicle field, like Waymo, a subsidiary of Alphabet Inc., have deployed commercial self-driving services in select cities across the country.[1] Waymo's stated mission is to make roads safer by removing human error from the equation, a goal presented prominently on the first page of the lengthy safety report it released in 2019.[2] Public officials appear to be convinced by Waymo's claims. In March, the National Highway Traffic Safety Administration proposed changes to its vehicle safety rules to facilitate the increasing use of self-driving cars across the nation.[3]

If autonomous vehicles make roads safer, will they lead to a decrease in accidents? And in turn, will fewer accidents mean fewer lawsuits? A closer look at the state of the autonomous vehicle industry indicates that while accidents caused by human error may decrease, the new era of fully automated vehicles may herald an increase in product liability litigation and potentially complicated issues in electronic discovery.

Self-driving cars are traditionally thought of as the Teslas and Ubers of the world, operating entirely without human intervention. The perception that self-driving cars are fully autonomous is hardly surprising given that CEOs like Elon Musk often make bold predictions in bids to win over consumers and investors. Musk was criticized for claiming that "full self-driving capability is there" in Tesla's new Navigate on Autopilot feature, only to subsequently yank the "full self-driving" option from Tesla's website, claiming it was confusing consumers.[4]

Although no cars currently on the road are fully autonomous, partially autonomous technologies have been in cars for years. The first autonomous vehicle technologies were introduced in 2006, with features like adaptive cruise control and parallel-park assist.[5] Features previously out of reach for the average consumer, like driverless parking, are now available in new lines from family-friendly brands like Toyota.[6] So many features fall under the umbrella of vehicle automation that the Society of Automotive Engineers categorizes driving features into levels 0-5 based on the amount of human assistance necessary for operation, ranging from simple driver support features like blind spot warnings (level 0) to full automation, in which a vehicle can drive without any assistance in all conditions (level 5).[7]

Given the range of features that qualify as self-driving technology, autonomous vehicles may increase product liability suits as traditional cars leave the roads and vehicles with various automation features and products replace them. Although accidents involving human error will decrease as human-operated cars leave the roads, personal injury suits involving drivers or insurance companies will transmute into suits concerning the safety and adaptability of particular technologies.[8] Further, it seems increasingly likely that ride-sharing companies like Uber and Lyft will deploy fleets of self-driving cars in the future, presenting questions of who takes the blame when a car inevitably hits a pedestrian or injures a passenger.[9]

Self-driving cars may also complicate the discovery process. The technology in most autonomous vehicles operates by gathering information about the car's surroundings, evaluating that information and the risks involved, and making decisions based on that information with or without a driver's input.[10] More products within cars will store electronic information, which may lead to new complexities in how to obtain information from those sources.[11]

Although the societal benefits of self-driving cars are noteworthy, the technologies may make consumers' lives, and in turn their lawsuits, more complex overall. Will the trade-off be worth it? Only time will tell.

[1] Andrew J. Hawkins, Waymo launches iOS app as it reflects on first year of its robot taxi service, TheVerge.com, Dec. 5, 2019, https://www.theverge.com/2019/12/5/20995334/waymo-ride-hail-app-ios-apple-self-driving-car-robot-taxi-service-phoenix-arizona; Rob Verger, Where to find self-driving cars on the road right now, Popular Science, Dec. 11, 2018, https://www.popsci.com/self-driving-cars-cities-usa/.

[2] Waymo, Waymo Safety Report: On the Road to Fully Self-Driving, https://storage.googleapis.com/sdc-prod/v1/safety-report/Safety%20Report%202018.pdf (last visited Sep. 4, 2020).

[3] Aarian Marshall, New Rules Could Finally Clear the Way for Self-Driving Cars, Wired.com, Mar. 26, 2020, https://www.wired.com/story/news-rules-clear-way-self-driving-cars/.

[4] Andrew J. Hawkins, No, Elon, the Navigate on Autopilot feature is not ‘full self-driving’, TheVerge.com, Jan. 30, 2019, https://www.theverge.com/2019/1/30/18204427/tesla-autopilot-elon-musk-full-self-driving-confusion.

[5] Xavier Mosquet et al., Revolution in the Driver’s Seat: The Road to Autonomous Vehicles, Boston Consulting Group, Apr. 11, 2015, https://www.bcg.com/publications/2015/automotive-consumer-insight-revolution-drivers-seat-road-autonomous-vehicles.

[6] Fernelius Toyota, How to Use Intelligent Park Assist, FerneliusToyota.net, Jan. 11, 2019, https://www.ferneliustoyota.net/how-to-use-toyota-intelligent-park-assist/#:~:text=For%20Toyota%20vehicles%2C%20this%20features,come%20with%20parking%20assist%20sensors.

[7] Jennifer Shuttleworth, SAE Standards News: J3016 automated-driving graphic update, Society of Automotive Engineers International, Jan. 7, 2019, https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic.

[8] Tiffany Y. Gruenberg, Self-Driving Cars Will Likely Increase Product Liability Litigation, 9 Nat’l L. Rev., no. 22, Jan. 22, 2019, https://www.natlawreview.com/article/self-driving-cars-will-likely-increase-product-liability-litigation.

[9] Id.

[10] Jim Gill, How 3 Cases Involving Self-Driving Cars Highlight eDiscovery and the IoT, iprotech.com, Aug. 29, 2019, https://iprotech.com/industrynews/how-3-cases-involving-self-driving-cars-highlight-ediscovery-and-the-iot/.

[11] Id.

Image Source: https://cdn.vox-cdn.com/thumbor/anf7oVDiE-yVYa9ODsHYlMmf_Ps=/0x0:3000×2000/1200×800/filters:focal(1260×760:1740×1240)/cdn.vox-cdn.com/uploads/chorus_image/image/65650157/acastro_180205_1777_0003.0.jpg

How the Return of Sports Could Help Pull Off a Major Comeback Against COVID-19

By: Noah Holman

We, as a country, have yearned for many relics of normalcy throughout the uncertain times that the COVID-19 pandemic has created. Sports are one of those relics. After months of cries for their return, they are here at last. And they have not disappointed, providing value far beyond mere entertainment. While sports’ potential as a catalyst for positive change is perhaps more well-known, or at least more accepted, now than ever before, it may nevertheless remain understated.

Colin Kaepernick's kneeling during the national anthem in 2016 cemented his legacy as the martyr for athletes speaking out about racial inequality.[1] If Kaepernick is the martyr, LeBron James is the torchbearer. Never one to stay quiet about social and political issues, LeBron has spearheaded the NBA's ongoing "More Than a Vote" campaign with the lofty goal of eradicating voter suppression.[2] While these racial justice and social reform initiatives have garnered much publicity in recent weeks, and deservedly so, there has been noticeably less attention on the potential sports have to advance medical research and improve the effectiveness of COVID-19 protocols.[3]

When I speak of sports in such an optimistic light, I am largely referring to the National Basketball Association (NBA) and, to a lesser extent, the National Hockey League (NHL). While both the NBA, now going on five weeks without a new case of COVID-19,[4] and the NHL have been incredibly successful, Major League Baseball (MLB) has a far more conflicting track record.[5] Ultimately, though, the NBA has shown us the potential of sports at their height.

Professional sports aid in our collective response to the COVID-19 pandemic by funding research.[6] Specifically, the NBA funded new saliva test research at Yale.[7] It should come as no surprise that professional sports leagues are committing their resources, as much as half a million dollars in the aforementioned case,[8] to fund such research, given not only their vast capital but also their economic interest in ensuring that their players and personnel take the most effective precautions. In addition, live sporting events are a significant source of revenue for the leagues, so they want to return to full stadiums as quickly as possible. However cynical you may be about their motives, the fact that major sports leagues are helping to fund research relating to COVID-19 is good for all.

In addition to funding research, much new technology is being tested in the NBA's famous "bubble."[9] Perhaps the most well-known are the Oura rings, which monitor users' temperature, heart rate, and other biometric data.[10] Players are also logging their daily vitals on an app and using Bluetooth thermometers and smart pulse oximeters.[11] The NBA, along with the host of its bubble, Disney, has repurposed the "MagicBands" that guests at its amusement parks typically wear in order to track the whereabouts of players and personnel in the bubble.[12] Although it is difficult to credit any one of these measures for the NBA's success in returning to play without an outbreak of the virus, together they seem to be effective.

Despite the NBA bubble's success, it is not a perfect case study. Although the NBA reportedly ordered 2,000 of them, apparently only around 25% of players and personnel are wearing the Oura rings, which are optional under the NBA's policy.[13] As with wearing masks and the COVIDWISE contact tracing app that Virginia rolled out not long ago,[14] the technology these rings provide is only effective if the overwhelming majority of the population under study wears them.[15] Thus, their actual effectiveness is far from proven.[16]

All that being said, the return of sports should be celebrated for many reasons beyond giving us something to watch other than the "gay, gun-toting cowboy with a mullet" who took America by storm at the beginning of quarantine.[17] Sports are directly funding the medical research needed to save hundreds of thousands of lives and indirectly stimulating an economy that needs to be revitalized to save the livelihoods of just as many others.[18] We can only hope that we have reached the fourth quarter of this physically and mentally taxing quarantine, but no matter what: sports will not give up until they hear the final whistle. And they will win, along with all of us.

[1] See Tadd Haislop, Colin Kaepernick Kneeling Timeline: How Protests During the National Anthem Started a Movement in the NFL, Sporting News (June 9, 2020), https://www.sportingnews.com/us/nfl/news/colin-kaepernick-kneeling-protests-timeline/xktu6ka4diva1s5jxaylrcsse.

[2] Cf. Emmanuel Morgan, More Than a Vote is More Than a Statement for Lebron James and Other Athletes, L.A. Times (July 30, 2020), https://www.latimes.com/sports/story/2020-07-30/more-than-a-vote-lebron-james.

[3] E.g., Kevin Stankiewicz, NFL’s Goodell: “We’re Going to Stand Behind Our Players” Against Any Backlash Over Protests, CNBC (Sep. 2, 2020, 1:29 PM), https://www.cnbc.com/2020/09/02/nfls-roger-goodell-on-backlash-to-player-protests-over-racial-justice.html.

[4] Lisa Eadicicco, The NBA Bubble Has Rolled Out Some Wild Technology to Help Keep Players, Coaches, and Staff COVID-free — Including a $300 Smart Ring that Can Monitor Biometric Data, Business Insider (Aug. 26, 2020, 2:04 PM), https://www.businessinsider.com/nba-bubble-oura-smart-ring-used-by-quarter-of-campus-2020-8.

[5] Nick Lichtenberg, The NBA and MLB Show Opposite Reopening Strategies — and One of Them is Already Striking Out. Here’s What Businesses and Schools Can Learn from the Great Pro Sports Reboot, Business Insider: Newstex Blog (July 29, 2020), https://www.businessinsider.com/nba-mlb-reopening-lessons-for-businesses-schools-coronavirus-2020-7; Markham Heid, NBA Bubble – How Does It Work? Science Behind the NBA Bubble, Popular Mechanics (Aug. 25, 2020), https://www.popularmechanics.com/science/health/a33796756/nba-bubble/.

[6] See infra note 7.

[7] Chris Hine, COVID Saliva Test Gets Big Boost from Wolves, NBA, StarTribune (Aug. 25, 2020), https://www.startribune.com/timberwolves-doctor-playing-key-role-in-covid-fighting-saliva-test/572208252/?refresh=true.

[8] Id.

[9] Eadicicco, supra note 4.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Geoffrey A. Fowler, I Downloaded America’s First Coronavirus Exposure App. You Should Too., Wash. Post. (Aug. 18, 2020), https://www.washingtonpost.com/technology/2020/08/17/coronavirus-exposure-notification-app/.

[15] Cf. Eadicicco, supra note 4.

[16] Id.

[17] See generally Taylor Borden, One Murder-for-Hire Plot, 5 Husbands, and 176 Tigers: Meet Joe Exotic, the Man Nicholas Cage Will Play in an Upcoming TV Series, Business Insider (May 4, 2020), https://www.businessinsider.com/who-is-joe-exotic-maldonado-passage-tiger-king-netflix.

[18] See generally Hine, supra note 7.

Image Source: https://www.vanityfair.com/news/2020/07/inside-the-nbas-covid-free-bubble

The Origins and Original Intent of Section 230 of the Communications Decency Act

By Christopher Cox*

* © 2020 Christopher Cox. Mr. Cox, a former U.S. Representative from California, is the author, and co-sponsor with then-U.S. Representative Ron Wyden, of Section 230 of the Communications Decency Act. He recently retired as a partner in the law firm of Morgan, Lewis & Bockius.

I. INTRODUCTION

[1]    Since 1996, Section 230 of the Communications Decency Act[1] has governed the allocation of liability for online torts and crimes among internet content creators, platforms, and service providers. The statute’s fundamental principle is that content creators should be liable for any illegal content they create. Internet platforms are generally protected from liability for third-party content, unless they are complicit in the development of illegal content, in which case the statute offers them no protection.

[2]    Prior to the enactment of Section 230, the common law had developed a different paradigm. New York courts took the lead in deciding that an internet platform would bear no liability for illegal content created by its users. This protection from liability, however, did not extend to a platform that moderated user-created content. Instead, only if a platform made no effort to enforce rules of online behavior would it be excused from liability for its users’ illegal content. This created a perverse incentive. To avoid open-ended liability, internet platforms would need to adopt what the New York Supreme Court called the “anything goes” model for user-created content. Adopting and enforcing rules of civil behavior on the platform would automatically expose the platform to unlimited downside risk.[2]

[3]    The decision by the U.S. Congress to reverse this unhappy result through Section 230, enacted in 1996, has proven enormously consequential. By providing legal certainty for platforms, the law has enabled the development of innumerable internet business models based on user-created content. It has also protected content moderation, without which platforms could not even attempt to enforce rules of civility, truthfulness, and conformity with law. At the same time, some courts have extended Section 230 immunity to internet platforms that seem complicit in illegal behavior, generating significant controversy about both the law’s intended purpose and its application.[3] Further controversy has been generated as a result of public debate over the role that the largest internet platforms, including social media giants Facebook, Twitter, and YouTube, play in America’s political and cultural life through their content moderation policies.

[4]    The increasing controversy has led to numerous efforts to amend Section 230 that are now pending in Congress.[4] On one side of the debate over the future of the law are civil libertarians including the American Civil Liberties Union, who oppose amending Section 230 because they see it as the reason platforms can host critical and controversial speech without constant fear of suit.[5] They are joined in opposition to amending Section 230 by supporters of responsible content moderation, which would be substantially curtailed or might not exist at all in the absence of protection from liability.[6] Others concerned with government regulation of speech on the internet, including both civil libertarians and small-government conservatives, are similarly opposed to changing the law.[7]

[6]    On the other side of the debate, pressing for reform of Section 230, are proponents of treating the largest internet platforms as the “public square,”[8] which under applicable Supreme Court precedent[9] would mean that all speech protected by the First Amendment must be allowed on those platforms. (Section 230, in contrast, protects a platform’s content moderation “whether or not such material is constitutionally protected.”)[10] Others in favor of amending Section 230 argue that internet platform immunity from suit over user-created content should come with conditions: for example, providing law enforcement access to encrypted data from their customers’ cell phones and computers,[11] or proving that they are taking aggressive steps to screen user-created content for illegal material.[12] Further support for amending Section 230 has come from opponents of “Big Tech” on both ends of the political spectrum, who are unhappy with the moderation practices of the largest platforms. Critics on the left complained, for example, that Facebook did too little to weed out Russian disinformation;[13] on the right, the Trump administration railed against “censorship” by social media platforms.[14] Both groups see Section 230 as unfairly insulating the platforms from lawsuits challenging their current moderation practices.[15]

[7]    The sponsors of the several bills that have been introduced in both the House and the Senate during the 116th Congress (2019-21) to amend Section 230 have emphasized the original purposes of the law, while maintaining that court decisions and the conduct of the large internet platforms have distorted what Congress originally intended.[16] This has brought the history of Section 230’s original enactment into sharp focus, making the actual record of its birth and odyssey from idea into law relevant to both policy makers and courts. In this article, I recount the circumstances that led me to write this legislation in 1995, and the work that my original co-sponsor, then-U.S. Representative Ron Wyden (D-OR), and I did to win its passage. Finally, I describe how the original intent of the law, as interpreted in the courts during the intervening decades, has been reflected in the actual application of Section 230 to the circumstances of the modern internet.

[8]    As will be seen, the intent of Congress — often an elusive concept, given the difficulty of encapsulating the motives of 535 policy makers with competing partisan perspectives — is more readily apprehended in this case, given the widespread support that Section 230 received from both sides of the aisle. That intent is clearly expressed in the plain language of the statute, and runs counter to much of the 21st century narrative about the law.

II. THE COMMUNICATIONS DECENCY ACT, S. 314

[9]    It was a hot, humid Washington day in the summer of 1995 when James Exon (D-NE), standing at his desk on the Senate floor, read the following prayer into the record as a prelude to passage of his landmark legislation[17] that would be the first ever to regulate content on the internet:

Almighty God, Lord of all life, we praise You for the advancements in computerized communications that we enjoy in our time. Sadly, however, there are those who are littering this information superhighway with obscene, indecent, and destructive pornography. … Lord, we are profoundly concerned about the impact of this on our children. … Oh God, help us care for our children. Give us wisdom to create regulations that will protect the innocent.[18]

[10]    Immediately following his prayer, Senator Exon found it had been answered, in the form of his own proposal to ban anything unsuitable for minors from the internet. His bill, as originally introduced, authorized the Federal Communications Commission to adopt and enforce regulations that would limit what adults could access online, and could themselves say or write, to material that is suitable for children. Anyone who posted any “indecent” communication, including any “comment, request, suggestion, proposal, [or] image” that was viewable by “any person under 18 years of age,” would become criminally liable, facing both jail and fines.[19]

[11]    The Exon dragnet was cast wide: not only would the content creator — the person who posted the article or image that was unsuitable for minors — face jail and fines; the bill made the mere transmission of such content criminal as well.[20] Meanwhile, internet service providers would be exempted from civil or criminal liability for the limited purpose of eavesdropping on customer email in order to prevent the transmission of potentially offensive material.

[12]   Like his Nebraska forebear William Jennings Bryan, who passionately defended creationism at the infamous Scopes “Monkey Trial,” James Exon was not known for being on the cutting edge of science and technology. His motivation to protect children from harmful pornography was pure. But his grasp of the rapidly evolving internet was sorely deficient. Nor was he alone: a study completed that same week revealed that of senators who voted for his legislation, 52% had no internet connection.[21] Unfamiliarity with the new technology they were attempting to regulate had immediate side effects. What many of these senators failed to grasp was how different the internet was from the communications technologies with which they were familiar and had regulated through the Federal Communications Commission for decades.

[13]   Broadcast television had long consisted of three networks; and even with the advent of cable, the content sources were relatively few and all the millions of viewers were passive. Radio, likewise. For years there had been one phone company and now there were but a handful more. The locus of all of this activity was domestic, within the jurisdiction and control of the United States. None of this bore any relation to the internet. On this new medium, the number of content creators — each a “broadcaster,” as it were — was the same as the number of users. It would soon expand from hundreds of millions to billions. It would be an impossibility for the federal government to pre-screen all the content that so many people were creating all day, every day. And there was the fact that the moniker “World Wide Web” was entirely apt, since the internet functions globally. It was clear to many, even then, that most of the content creation would ultimately occur outside the jurisdiction of federal authorities — and that enforcement of Exon-like restrictions in the U.S. would simply push the sources of the banned content offshore.

[14]   Above all, the internet was unique in that communications were instantaneous: the content creators could interact with the entire planet without any intermediation or any lag time. In order for censors to intervene, they would have to destroy the real-time feature of the technology that made it so useful.

[15]   Not everyone in the Senate was enthusiastic about the Exon bill. The chairman of the Senate Commerce Committee, Larry Pressler, a South Dakota Republican, chose not to include it in his proposed committee version of the Telecommunications Act.[22] Vermont Senator Patrick Leahy, the ranking Democrat on the Judiciary Committee’s Antitrust, Business Rights and Competition subcommittee, opposed it for a prescient reason: the law of unintended consequences. “What I worry about, is not to protect pornographers,” Leahy said. “Child pornographers, in my mind, ought to be in prison. The longer the better. I am trying to protect the Internet, and make sure that when we finally have something that really works in this country, that we do not step in and screw it up, as sometimes happens with government regulation.”[23]

[16]   But Exon was persistent in pursuing what he called the most important legislation of his career. He went so far as to lobby his colleagues on the Senate floor by showing them the hundreds of lewd pictures he had collected in his “blue book,” all downloaded from the web and printed out in color.[24] It made Playboy and Penthouse “pale in offensiveness,” he warned them.[25] The very day he offered his prayer, the Senate debated whether to add an amended version of Exon’s legislation[26] to a much larger bill pending in Congress.[27] This was the first significant overhaul of telecommunications law in more than 60 years, a thorough-going revision of the Communications Act of 1934. Though that overhaul was loaded with significance, the pornography debate — broadcast live on C-SPAN, then still a novelty — is what caught the public’s attention.

[17]   During that brief debate, breathless speeches conjuring lurid images of sordid sex acts overwhelmed academic points about free speech, citizens’ privacy rights, and the way the internet’s packet-switched architecture actually works. The threat posed to the internet itself by Exon’s vision of a federal speech police paled into irrelevance. With millions of people watching, senators were wary of appearing as if they did not support protecting children from pornography. The lopsided final tally on Exon’s amendment to the Telecommunications Act showed it. The votes were 84 in favor, 16 opposed.[28]

III. THE INTERNET FREEDOM AND FAMILY EMPOWERMENT ACT, H.R. 1978

[18]   When it came to familiarity with the internet, the House of Representatives was only marginally more technologically savvy than the Senate. While a handful of members were conversant with "high tech," as it was called, most were outright technophobes quite comfortable with the old ways of doing things. Many of the committee chairs, given the informal seniority system in the House, were men in their 70s. They saw little need for improvement in the tried-and-true protocols of paper files in folders, postcards and letters on stationery, and the occasional phone call. The Library of Congress was filled with books, so there was no apparent need for any additional sources of information.

[19]   On the day Exon’s bill passed the Senate, more than half of the senators (including Senator Exon) didn’t even have an email address.[29] In the House it was worse: only 26% of members had an email address.[30] The conventional wisdom was that, with the World’s Greatest Deliberative Body having spoken so definitively, the House would follow suit. And for the same reason: with every House member’s election just around the corner, none would want to appear weak on pornography. The near-unanimous Senate vote seemed dispositive of the question.

[20]   While it is often the case that the House legislates impulsively while the Senate takes its time, in this case the reverse happened. As chairman of the House Republican Policy Committee — and someone who built his own computers and had been using the internet for years — I took a serious interest in the issue. After some study of Exon’s legislation, I had already decided to write my own bill, as an alternative. Fortuitously, I was a member of the Energy and Commerce Committee, which on the House side had jurisdiction over the Telecommunications Act to which Exon had attached his bad idea.

[21]   One of the tech mavens in the House at the time was Ron Wyden, a liberal Democrat from Oregon whose Stanford education and activist streak (he’d run the Gray Panthers advocacy group in his home state during the 1970s) made him a perfect legislative partner. The two of us had recently shared a private lunch and bemoaned the deep partisanship in Congress that mostly prevented Democrats and Republicans from writing legislation together. We decided this was due to members flogging the same old political hot-button questions, on which everyone had already made up their minds.

[22]   At the conclusion of our lunch, we decided to look for cutting-edge issues that would present novel and challenging policy questions, to which neither we nor our colleagues would have a knee-jerk response. Then, after working together to address the particular issue with a practical solution, we’d work to educate members on both sides, and work for passage of truly bipartisan legislation. It was not much longer afterward that the question of regulating speech on the internet presented itself, and Rep. Wyden and I set to work.

[23]   Not long afterward, Time magazine reported that "the balance between protecting speech and curbing pornography seemed to be tipping back toward the libertarians." It noted that "two U.S. Representatives, Republican Christopher Cox of California and Democrat Ron Wyden of Oregon, were putting together an anti-Exon amendment that would bar federal regulation of the internet and help parents find ways to block material they found objectionable."[31]

[24]   We named our bill the Internet Freedom and Family Empowerment Act, to describe its two main components: protecting speech and privacy on the internet from government regulation, and incentivizing blocking and filtering technologies that individuals could use to become their own censors in their own households. Pornographers illegally targeting minors would not be let off the hook: they would be liable for compliance with all laws, both civil and criminal, in connection with any content they created. To avoid interfering with the essential functioning of the internet, the law would not shift that responsibility to internet platforms, for whom the burden of screening billions of digital messages, documents, images, and sounds would be unreasonable — not to mention a potential invasion of privacy. Instead, internet platforms would be allowed to act as “Good Samaritans” by reviewing at least some of the content if they chose to do so in the course of enforcing rules against “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content.[32]

[25]   This last feature of the bill resolved a conflict that then existed in the courts. In New York, a judge had held that one of the two leading internet platforms of the day, Prodigy, was liable for defamation because an anonymous user of its site had claimed that an investment bank and its founder, Jordan Belfort, had committed securities fraud. (The post was not defamatory: Belfort was later convicted of securities fraud, but not before Prodigy had settled the case for a substantial figure. Belfort would achieve further infamy when he became the model for Leonardo DiCaprio's character in The Wolf of Wall Street.)[33]

[26]   In holding Prodigy responsible for content it didn’t create, the court effectively overruled a prior New York decision involving the other major U.S. internet platform at the time, CompuServe.[34] The previous case held that online service providers would not be held liable as publishers. In distinguishing Prodigy from the prior precedent, the court cited the fact that Prodigy, unlike CompuServe, had adopted content guidelines. These requested that users refrain from posts that are “insulting” or that “harass other members” or “are deemed to be in bad taste or grossly repugnant to community standards.” The court further noted that these guidelines expressly stated that although “Prodigy is committed to open debate and discussion on the bulletin boards … this doesn’t mean that ‘anything goes.’”[35]

[27]   CompuServe, in contrast, made no such effort. On its platform, the rule was indeed “anything goes.” As a user of both services, I well understood the difference. I appreciated the fact that there was some minimal level of moderation on the Prodigy site. While CompuServe was a splendid service and serious users predominated, the lack of any controls whatsoever was occasionally noticeable and, I could easily envision, bound to get worse.

[28]   If allowed to stand, this jurisprudence would have created a powerful and perverse incentive for platforms to abandon any attempt to maintain civility on their sites. A legal standard that protected only websites where “anything goes” from unlimited liability for user-generated content would have been a body blow to the internet itself. Rep. Wyden and I were determined that good faith content moderation should not be punished, and so the Good Samaritan provision in the Internet Freedom and Family Empowerment Act was born.[36]

[29]   In the House leadership, of which I was then a member, there were plenty of supporters of our effort. The new Speaker, Newt Gingrich, had long considered himself a tech aficionado and had already proven as much by launching the THOMAS project at the Library of Congress to digitize congressional records and make them available to the public online. He slammed the Exon approach as misguided and dangerous. “It is clearly a violation of free speech, and it’s a violation of the right of adults to communicate with each other,” the Speaker of the House said at the time, adding that Exon’s proposal would dumb down the internet to what censors believed was acceptable for children to read. “I don’t think it is a serious way to discuss a serious issue,” he explained, “which is, how do you maintain the right of free speech for adults while also protecting children in a medium which is available to both?”[37]

[30]   Rep. Dick Armey (R-TX), then the new House Majority Leader, joined the Speaker in supporting the Cox-Wyden alternative to Exon. So did David Dreier (R-CA), the Chairman of the House Rules Committee, who was closely in touch with the global hi-tech renaissance being led by innovators in his home state of California. They were both Republicans, but my fellow Californian Nancy Pelosi, not yet a member of the Democratic leadership, weighed in as well, noting that Exon’s approach would have a chilling effect on serious discussion of HIV-related issues.[38]

[31]   In the weeks and months that followed, Rep. Wyden and I conducted outreach and education among our colleagues in both the House and Senate on the challenging issues involved. It was a rewarding and illuminating process, during which we built not only overwhelming support, but also a much deeper understanding of the unique aspects of the internet that require clear legal rules for it to function.

[32]   Two months after Senator Exon successfully added his Communications Decency Act to the Telecommunications Act in the Senate, the Cox-Wyden measure had its day in the sun on the House floor.[39] Whereas Exon had begun with a prayer, Rep. Wyden and I began on a wing and a prayer, trying to counter the seemingly unstoppable momentum of a near-unanimous Senate vote. But on this day in August, the debate was very different than it had been across the Rotunda.

[33]   Speaker after speaker rose in support of the Cox-Wyden measure, and condemned the Exon approach. Rep. Zoe Lofgren (D-CA), the mother of 10- and 13-year-old children, shared her concerns with internet pornography and noted that she had sponsored legislation mandating a life sentence for the creators of child pornography. But, she emphasized, “Senator Exon’s approach is not the right way … it will not work.” It was, she said, “a misunderstanding of the technology.”[40] Rep. Bob Goodlatte, a Virginia Republican, emphasized the potential the internet offered and the threat to that potential from Exon-style regulation. “We have the opportunity for every household in America, every family in America, soon to be able to have access to places like the Library of Congress, to have access to other major libraries of the world, universities, major publishers of information, news sources. There is no way,” he said, “that any of those entities, like Prodigy, can take the responsibility to edit out information that is going to be coming in to them from all manner of sources.”[41]

[34]   In the end, not a single Representative spoke against the bill. The final roll call on the Cox-Wyden amendment was 420 yeas, 4 nays.[42] It was a resounding rebuke to the Exon approach in his Communications Decency Act. The House then proceeded to pass its version of the Telecommunications Act — with the Cox-Wyden amendment, and without Exon.[43]

IV. THE TELECOMMUNICATIONS ACT OF 1996

[35]   The Telecommunications Act, including the Cox-Wyden amendment, would not be enacted until the following year. In between came a grueling House-Senate conference that was understandably more concerned with resolving the monumental issues in this landmark modernization of FDR-era telecommunications regulation. During the extended interlude, Rep. Wyden and I, along with our now much-enlarged army of bipartisan, bicameral supporters, continued to reach out in discussions with members about the novel issues involved and how best to resolve them. This resulted in some final improvements to our bill, and ensured its inclusion in the final House-Senate conference report.

[36]   But political realities as well as policy details had to be dealt with. There was the sticky problem of 84 senators having already voted in favor of the Exon amendment. Once on record with a vote one way — particularly a highly visible vote on the politically charged issue of pornography — it would be very difficult for a politician to explain walking it back. The Senate negotiators, anxious to protect their colleagues from being accused of taking both sides of the question, stood firm. They were willing to accept Cox-Wyden, but Exon would have to be included, too.

[37]   The House negotiators, all politicians themselves, understood. This was a Senate-only issue, which could be easily resolved by including both amendments in the final product. It was logrolling at its best.[44]

[38]   President Clinton signed the Telecommunications Act of 1996 into law in February at a nationally televised ceremony from the Library of Congress Reading Room, where he and Vice President Al Gore highlighted the bill’s paving the way for the “information superhighway” of the internet.[45] There was no mention of Exon’s Communications Decency Act. But there was a live demonstration of the internet’s potential as a learning tool, including a live hookup with high school students in their classroom. And the president pointedly objected to the new law’s criminalization of transmission of any “indecent” material, predicting that these provisions would be found violative of the First Amendment and unenforceable.[46]

[39]   Almost before the ink was dry and the signing pens handed out to the VIPs at the ceremony, the Communications Decency facet of the new law faced legal challenges. By summer, multiple federal courts had enjoined its enforcement.[47] The following summer the U.S. Supreme Court delivered its verdict in the same spirit that had characterized the House’s rejection of the Exon approach.[48] The Court (then consisting of Chief Justice Rehnquist and Associate Justices Stevens, O’Connor, Scalia, Kennedy, Souter, Thomas, Ginsburg, and Breyer) unanimously held that “[i]n order to deny minors access to potentially harmful speech, the CDA effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another. That burden on adult speech is unacceptable.”[49]

[40]   The Court’s opinion cited Senator Patrick Leahy’s comment that in enacting the Exon amendment, the Senate “went in willy-nilly, passed legislation, and never once had a hearing, never once had a discussion other than an hour or so on the floor.”[50] It noted that transmitting obscenity and child pornography, whether via the internet or other means, was already illegal under federal law for both adults and juveniles, making the draconian Exon restrictions on speech unreasonable overkill.[51] And there was more: under the Exon approach, the high court pointed out, any opponent of particular internet content would gain “broad powers of censorship, in the form of a ‘heckler’s veto.’” He or she “might simply log on and inform the would-be discoursers that his 17-year-old child” was also online. The standard for what could be posted in that forum, chat room, or other online context would immediately be reduced to what was safe for children to see.[52]

[41]   In defenestrating Exon, the Court was unsparing in its final judgment. The amendment was worse than “burn[ing] the house to roast the pig.” It cast “a far darker shadow over free speech, threaten[ing] to torch a large segment of the Internet community.” Its regime of “governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it.”[53]

[42]   With that, Senator Exon’s deeply flawed proposal finally died. In the Supreme Court, Rep. Wyden and I won the victory that had eluded us in the House-Senate conference.

[43]   One irony, however, persists. When legislative staff prepared the House-Senate conference report on the final Telecommunications Act, they grouped both Exon’s Communications Decency Act and the Internet Freedom and Family Empowerment Act into the same legislative title. So the Cox-Wyden amendment became Section 230 of the Communications Decency Act — the very piece of legislation it was designed to rebuke. Today, with the original Exon legislation having been declared unconstitutional, it is that law’s polar opposite which bears Senator Exon’s label.

V. CONGRESSIONAL INTENT IN PRACTICE: HOW SECTION 230 WORKS

[44]   By 1997, with the Communications Decency Act erased from the future history of the internet, Section 230 had already achieved one of its fundamental purposes. In contrast to the Communications Decency Act, which was punitive, heavily regulatory, and government directed, Section 230 focused on enabling user-created content by providing clear rules of legal liability for website operators that host it. Platforms that are not involved in content creation were to be protected from liability for content created by third-party users.

[45]   This focus of Section 230 proceeded directly from our appreciation of what was at stake for the future of the internet. As the debate on the Cox-Wyden amendment to the Telecommunications Act made clear, not only the bill’s authors but a host of members on both sides of the aisle understood that without such protection from liability, websites would be exposed to lawsuits for everything from users’ product reviews to book reviews. In 21st century terms, this would mean that Yelp would be exposed to lawsuits for its users’ negative comments about restaurants, and Trip Advisor could be sued for a user’s disparaging review of a hotel. Indeed any service that connects buyers and sellers, workers and employers, content creators and a platform, victims and victims’ rights groups — or provides any other interactive engagement opportunity one can imagine — would face open-ended liability if it continued to display user-created content.

[46]   In the years since the enactment of Section 230, in large measure due to a legal framework that facilitates the hosting of user-created content, such content has come to typify the modern internet. Not only have billions of internet users become content creators, but equally they have become reliant upon content created by other users. Contemporary examples abound. In 2020, without user-created content, many in the United States contending with the deadliest tornado season since 2011 could not have found their loved ones.[54] Every day, millions of Americans rely on “how to” and educational videos for everything from healthcare to home maintenance.[55] During the COVID-19 crisis, online access to user-created pre-K, primary, and secondary education and lifelong learning resources has proven a godsend for families across the country and around the world.[56] More than 85% of U.S. businesses with websites rely on user-created content, making the operation of Section 230 essential to ordinary commerce.[57] The vast majority of Americans feel more comfortable buying a product after researching user-generated reviews,[58] and over 90% of consumers find user-generated content helpful in making their purchasing decisions.[59] User-generated content is vital to law enforcement and social services.[60] Following the rioting in several U.S. cities in 2020, social workers were able to match people with supplies and services to victims who needed life-saving help, directing them with real-time maps.[61]

[47]   Creating a legal environment hospitable to user-created content required that Congress strike the right balance between opportunity and responsibility. Section 230 is such a balance — holding content creators liable for illegal activity while protecting internet platforms from liability for content created entirely by others. Most important to understanding the operation of Section 230 is that it does not protect platforms from liability when they are complicit — even if only in part — in the creation or development of illegal content.

[48]   The plain language of Section 230 makes clear its deference to criminal law. The entirety of federal criminal law enforcement is unaffected by Section 230.[62] So is all of state law that is consistent with the policy of Section 230.[63] Still, state law that is inconsistent with the aims of Section 230 is preempted.[64] Why did Congress choose this course? First, and most fundamentally, it is because the essential purpose of Section 230 is to establish a uniform federal policy, applicable across the internet, that avoids results such as the state court decision in Prodigy.  The internet is the quintessential vehicle of interstate, and indeed international, commerce.  Its packet-switched architecture makes it uniquely susceptible to multiple sources of conflicting state and local regulation, since even a message from one cubicle to its neighbor inside the same office can be broken up into pieces and routed via servers in different states. Were every state free to adopt its own policy concerning when an internet platform will be liable for the criminal or tortious conduct of another, not only would compliance become oppressive, but the federal policy itself could quickly be undone. All a state would have to do to defeat the federal policy would be to place platform liability laws in its criminal code. Section 230 would then become a nullity. Congress thus intended Section 230 to establish a uniform federal policy, but one that is entirely consistent with robust enforcement of state criminal and civil law.

[49]   Despite the necessary preemption of inconsistent state laws, Section 230 is constructed in such a way that every state prosecutor and every civil litigant can successfully target illegal online activity by properly pleading that the defendant was at least partially involved in the creation of illegal content, or at least the later development of it. In all such cases, Section 230 immunity does not apply. In this respect, statutory form clearly followed function: Congress intended that this legislation would provide no protection for any website, user, or other person or business involved even in part in the creation or development of content that is tortious or criminal. This specific intent is clearly expressed in the definition of “information content provider” in subsection (f)(3) of the statute.[65]

[50]   In the two and a half decades that Section 230 has been on the books, there have been hundreds of court decisions interpreting and applying it. It is now firmly established in the case law that Section 230 cannot act as a shield whenever a website is in any way complicit in the creation or development of illegal content. In the landmark en banc decision of the Ninth Circuit Court of Appeals in Fair Housing Council of San Fernando Valley v. Roommates.com,[66] which has since been widely cited and applied in circuits across the United States, it was held that not only do websites lose their immunity when they merely “develop” content created by others, but participation in others’ content creation includes wholly automated features of a website that are coded into its architecture.[67]

[51]   There are many examples of courts faithfully applying the plain language of Section 230(f)(3) to hold websites liable for complicity in the creation or development of illegal third-party content. In its 2016 decision in Federal Trade Comm’n v. Leadclick Media, LLC,[68] the Second Circuit Court of Appeals rejected a claim of Section 230 immunity by an internet marketer relying on the fact that it did not create the illegal content at issue, and the content did not appear on its website. The court noted that while this was so, the internet marketer gave advice to the content creators. This made the marketer complicit in the development of the illegal content, and as a result it was not entitled to Section 230 immunity.

[52]   In FTC v. Accusearch,[69] the Tenth Circuit Court of Appeals held that a website’s mere posting of content that it had no role whatsoever in creating — telephone records of private individuals — constituted “development” of that information, and so deprived it of Section 230 immunity. Even though the content was wholly created by others, the website knowingly transformed what had previously been private information into a publicly available commodity. Such complicity in illegality is what defines “development” of content, as distinguished from its creation.

[53]   Other notable examples of this now well-established application of Section 230 are Enigma Software Group v. Bleeping Computer,[70] in which a website was denied immunity despite the fact it did not create the unlawful content at issue, because of an implied agency relationship with an unpaid volunteer who did create it; and Alvi Armani Medical, Inc. v. Hennessey,[71] in which the court found a website to be complicit in content creation because of its alleged knowledge that postings were being made under false identities.

[54]   In its 2016 decision in Jane Doe v. Backpage.com,[72] however, the First Circuit Court of Appeals cast itself as an outlier, rejecting the holding in Roommates.com and its progeny. Instead, it held that “claims that a website facilitates illegal conduct through its posting rules necessarily treat the website as a publisher or speaker of content provided by third parties and, thus, are precluded by section 230(c)(1).”[73] This holding completely ignores the definition in subsection (f)(3) of Section 230, which clearly provides that anyone — including a website — can be an “information content provider” if they are “responsible, in whole or in part, for the creation or development” of online content. The broad protection against being treated as a publisher, as is stated clearly in Section 230(c)(1), applies only with respect to online content provided by another information content provider.

[55]   Despite the fact that the First Circuit holding was out of step with other decisional law in this respect, the Backpage litigation gained great notoriety because the Backpage.com website was widely reported to be criminally involved in sex trafficking. The massive media attention to the case and its apparently unjust result gave rise to the notion that Section 230 routinely operates as a shield against actual wrongdoing by websites. In fact, the opposite is the case: other courts since 2016 have uniformly followed the Roommates precedent, and increasingly have expanded the circumstances in which they are willing to find websites complicit in the creation or development of illegal content provided by their users.[74]

[56]   Ironically, the actual facts in the Backpage case were a Technicolor display of complicity in the development of illegal content. Backpage knowingly concealed evidence of criminality by systematically editing its adult ads; it coached its users on how to post “clean” ads for illegal transactions; it deliberately edited ads in order to facilitate prostitution; it prescribed the language used in ads for prostitution; and it moderated content on the site not to remove ads for prostitution, but to camouflage them. It is difficult to imagine a clearer case of complicity “in part, for the creation or development” of illegal content.[75] Unfortunately, the First Circuit found that the Jane Doe plaintiffs, “whatever their reasons might be,” had “chosen to ignore” any allegation of Backpage’s content creation. Instead, the court said, the argument that Backpage was an “information content provider” under Section 230 was “forsworn” by the plaintiffs, both in the district court and on appeal. The court regretted that it could not interject that issue itself.[76]

[57]   Happily, even within the First Circuit, this mistake has now been rectified. In the 2018 decision in Doe v. Backpage.com,[77] a re-pleading of the original claims by three new Jane Doe plaintiffs, the court held that allegations that Backpage changed the wording of third-party advertisements on its site were sufficient to make it an information content provider, and thus ineligible for Section 230 immunity. Much heartache could have been avoided had the facts concerning Backpage’s complicity been sufficiently pleaded in the original case, and had the court reached this sensible and clearly correct decision on the law in the first place.

[58]   Equally misguided as the notion that Section 230 must shield wrongdoing are assertions that Section 230 was never meant to apply to e-commerce.[78] To the contrary, removing the threat to e-commerce represented by the Prodigy decision was an essential purpose in the development and enactment of Section 230. When Section 230 became law in 1996, user-generated content was already ubiquitous on the internet. The creativity being demonstrated by websites and users alike made it clear that online shopping was an enormously consumer-friendly use of the new technology. Features such as CompuServe’s “electronic mall” and Prodigy’s mail-order stores were instantly popular. So too were messaging and email, which in Prodigy’s case came with per-message transaction fees. Web businesses such as CheckFree demonstrated as far back as 1996 that online bill payment was not only feasible but convenient. Prodigy, America Online, and the fledgling Microsoft Network included features we know today as content delivery, each with a different payment system. Both Rep. Wyden and I had all of these iterations of internet commerce in mind when we drafted our legislation. We made this plain during our extensive outreach to members of both the House and Senate during 1995 and 1996.

[59]   Yet another misconception about the coverage of Section 230, often heard, is that it created one rule for online activity and a different rule for the same activity conducted offline.[79] To the contrary, Section 230 operates to ensure that like activities are always treated alike under the law. When Section 230 was written, just as now, each of the commercial applications flourishing online had an analog in the offline world, where each had its own attendant legal responsibilities. Newspapers could be liable for defamation. Banks and brokers could be held responsible for failing to know their customers. Advertisers were responsible under the Federal Trade Commission Act and state consumer laws for ensuring their content was not deceptive or unfair. Merchandisers could be held liable for negligence and breach of warranty, and in some cases even subjected to strict liability for defective products.

[60]   In writing Section 230, Rep. Wyden and I, and ultimately the entire Congress, decided that these legal rules should continue to apply on the internet just as in the offline world. Every business, whether operating through its online facility or through a brick-and-mortar facility, would continue to be responsible for all of its own legal obligations. What Section 230 added to the general body of law was the principle that an individual or entity operating a website should not, in addition to its own legal responsibilities, be required to monitor all of the content created by third parties and thereby become derivatively liable for the illegal acts of others. Congress recognized that to require otherwise would jeopardize the quintessential function of the internet: permitting millions of people around the world to communicate simultaneously and instantaneously. Congress wished to “embrace” and “welcome” this not only for its commercial potential but also for “the opportunity for education and political discourse that it offers for all of us.”[80] The result is that websites are protected from liability for user-created content, but only if they are wholly uninvolved in the creation or development of that content. Today, virtually every substantial brick-and-mortar business of any kind, from newspapers to retailers to manufacturers to service providers, has an internet presence as well through which it conducts e-commerce. The same is true for the vast majority of even the smallest businesses.[81] The same legal rules and responsibilities apply across the board to all.

[61]   It is worth debunking three other “creation myths” about Section 230. The first is that Section 230 was conceived as a way to protect an infant industry.[82] According to this narrative, in the early days of the internet, Congress decided that small startups needed protection. Now that the internet has matured, it is argued, the need for such protection no longer exists; Section 230 is no longer necessary. As co-author of the legislation, I can verify that this is an entirely fictitious narrative. Far from wishing to offer protection to an infant industry, our legislative aim was to recognize the sheer implausibility of requiring each website to monitor all of the user-created content that crossed its portal each day. In the 1990s, when the number of internet users was measured in the tens of millions, this problem was already apparent. Today, in the third decade of the 21st century, the enormous growth in the volume of traffic on websites has made the potential consequences of publisher liability far graver. Section 230 is needed for this purpose now, more than ever.

[62]   The second “creation myth” is that Section 230 was adopted as a special favor to the tech industry, which lobbied for it on Capitol Hill and managed to wheedle it out of Congress by working the system.[83] The reality is far different. In the mid-1990s, internet commerce had very little presence in Washington. When I was moved to draft legislation to remedy the Prodigy decision, it was based on my reading news reports of the case. No company or lobbyist contacted me. Throughout the process, Rep. Wyden and I heard not at all from the leading internet services of the day. This included both Prodigy and CompuServe, whose lawsuits inspired my legislation. As a result, our discussions of the proposed legislation with our colleagues in the House and Senate were unburdened by importunities from businesses seeking to gain a regulatory advantage over their competitors. I willingly concede that this was, therefore, a unique experience in my lawmaking career. It is also the opposite of what Congress should expect if it undertakes to amend Section 230, given that today millions of websites and hundreds of millions of internet users have an identifiable stake in the outcome.

[63]   The final creation myth is that Section 230 was part of a grand bargain with Senator James Exon (D-NE), in which his Communications Decency Act aimed at pornography was paired with the Cox-Wyden bill, the Internet Freedom and Family Empowerment Act, aimed at greenlighting websites to adopt content moderation policies without fear of liability.[84] The claim now being made is that the two bills were actually like legislative epoxy, with one part requiring the other. And since the Exon legislation was subsequently invalidated as unconstitutional by the U.S. Supreme Court, so the argument goes, Section 230 should not be allowed to stand on its own. In fact, this revisionist argument contends, the primary congressional purpose back in 1996 was not to give internet platforms limited immunity from liability as Section 230 does. Rather, the most important part of the imagined “package” was Senator Exon’s radical idea of imposing stringent liability on websites for the illegal acts of others[85] — an idea that Exon himself backed away from before his amendment was actually passed.[86] The logical conclusion of this argument is that, the Supreme Court having thrown out the Exon bathwater, the Section 230 baby should now be thrown out along with it.

[64]   The reality, however, is far different than this revisionist history would have it. The facts that the Cox-Wyden bill was designed as an alternative to the Exon approach; that the Communications Decency Act was uniformly criticized during the House debate by members from both parties, while not a single Representative spoke in support of it; that the vote in favor of the Cox-Wyden amendment was 420-4; and that the House version of the Telecommunications Act included the Cox-Wyden amendment while pointedly excluding the Exon amendment — all speak loudly to this point. The Communications Decency Act was never a necessary counterpart to Section 230.

VI. CONCLUSION

[65]   This history is especially relevant today, as Americans for whom the internet is now a ubiquitous feature of daily life grapple with the same issues of content moderation, privacy, free speech, and the dark side of cyberspace that challenged policy makers in 1995-96. In the current 116th Congress, there is a noticeable resurgence of support for government regulation of content, with all that portends.

[66]   Today’s “techlash” and neo-regulatory resurgence is fueled by the same passions and concerns as the debate of 25 years ago, including protecting children, as well as the more recent trend toward restricting speech that may be offensive to some segments of adults. In June 2020, The New York Times forced out its opinion editor, ostensibly for publishing an op-ed by a sitting Republican U.S. senator on a critical issue of the day.[87] Supporters of President Donald Trump complained when Twitter moved to fact-check and contextualize his tweets,[88] while progressives complained that Facebook was not doing this.[89] Senators and representatives are writing legislation that would settle these arguments through force of law rather than private ordering, including legislation to walk back the now prosaically named Section 230.

[67]   In these legislative debates, James Exon’s misguided handiwork is often romanticized by the new wave of speech regulators. Recalling its deep flaws, myriad unintended consequences, and dangerous threats to both free speech and the functioning of the internet is a worthwhile reality check.

[68]   Not only is the notion that the Communications Decency Act and Section 230 were conceived together completely wrong, but so too is the idea that the Exon approach enjoyed lasting congressional support. By the time the Telecommunications Act completed its tortuous legislative journey, support for the Communications Decency Act had dwindled even in the Senate, as senators came to understand the mismatch between problem and solution that the bill represented. With the exception of its most passionate supporters, few tears were shed for the Exon legislation at its final demise in 1997. James Exon had retired from the Senate even before his law was declared unconstitutional, leaving few behind him willing to carry the torch. His colleagues made no effort to “fix” and replace the Communications Decency Act, after it was unanimously struck down by the Supreme Court.

[69]   Meanwhile Section 230, originally introduced in the House as a freestanding bill, H.R. 1978, in June 1995, stands on its own, now as then. Its premise of imposing liability on criminals and tortfeasors for their own wrongful conduct, rather than shifting that liability to third parties, operates independently of (and indeed, in opposition to) Senator Exon’s approach that would directly interfere with the essential functioning of the internet.

[70]   In the final analysis, it is useful to place this legislative history in contemporary context. Imagine returning to a world without Section 230, as some today are urging. In this alternative world, websites and internet platforms of all kinds would face enormous potential liability for hosting content created by others. They would have a powerful incentive to limit that exposure, which they could do in one of two ways: they could strictly limit user-generated content, or even eliminate it altogether; or they could adopt the “anything goes” model through which CompuServe originally escaped liability before Section 230 existed. We would all be very much worse off were this to happen. Without Section 230’s clear limitation on liability, it is difficult to imagine that most of the online services on which we rely every day would even exist in anything like their current form. While courts will continue to grapple with the challenges of applying Section 230 to the ever-changing landscape of the 21st-century internet, hewing to its fundamental principles and purposes will be a far wiser course for policy makers than opening a Pandora’s box of unintended consequences that upending the law would unleash.

[1] 47 U.S.C. § 230 (“Section 230”).

[2] Stratton Oakmont v. Prodigy Servs. Co., 1995 N.Y. Misc. LEXIS 229, 1995 WL 323710, 23 Media L. Rep. 1794 (N.Y. Sup. Ct. May 24, 1995).

[3] See the discussion of Jane Doe v. Backpage.com in Part V, infra.

[4] See, e.g., S. 3398, the EARN IT Act; S. 1914, the Ending Support for Internet Censorship Act; H.R. 4027, the Stop the Censorship Act; S. 3983, the Limiting Section 230 Immunity to Good Samaritans Act; S. 4062, the Stopping Big Tech’s Censorship Act; and S. 4066, the “Platform Accountability and Consumer Transparency Act.”

[5] https://www.aclu.org/issues/free-speech/internet-speech/communications-decency-act-section-230

[6] Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1625-30 (2018); “What Is Content Moderation and Why Companies Need It,” Bridged, September 3, 2019, https://bridged.co/blog/what-is-content-moderation-why-companies-need-it/

[7] Derek E. Bambauer, “How Section 230 Reform Endangers Internet Free Speech,” Tech Stream, Brookings Institution, July 1, 2020, https://www.brookings.edu/techstream/how-section-230-reform-endangers-internet-free-speech/; “The Fight Over Section 230—and the Internet as We Know It,” Wired, August 13, 2019, https://www.wired.com/story/fight-over-section-230-internet-as-we-know-it/

[8] Dawn Carla Nunziato, From Town Square to Twittersphere: The Public Forum Doctrine Goes Digital, 25 B.U. J. Sci. & Tech. L. 1 (2019).

[9] Marsh v. Alabama, 326 U.S. 501 (1946); Food Employees Local 590 v. Logan Valley Plaza, Inc., 391 U.S. 308 (1968); Petersen v. Talisman Sugar Corp., 478 F.2d 73 (5th Cir. 1973); Pruneyard Shopping Center v. Robins, 447 U.S. 74 (1980).

[10] 47 U.S. Code § 230(c)(2)(A).

[11] Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996, https://www.justice.gov/ag/department-justice-s-review-section-230-communications-decency-act-1996?utm_medium=email&utm_source=govdelivery; “Justice Department Wants to Chip Away at Section 230,” Protocol, June 18, 2020, https://www.protocol.com/justice-department-section-230-proposal

[12] “Legislation Would Make Tech Firms Accountable for Child Porn,” New York Times, March 6, 2020, B7; Andrew M. Sevanian, Section 230 of the Communications Decency Act: A “Good Samaritan” Law Without the Requirement of Acting as a “Good Samaritan,” 21 UCLA Ent. L. Rev. 121 (2014).

[13] “Pelosi Says Facebook Enabled Russian Interference in Election,” New York Times, May 30, 2019, B4.

[14] “Legal Shield for Social Media Is Targeted by Trump,” New York Times, May 28, 2020, https://www.nytimes.com/2020/05/28/business/section-230-internet-speech.html; “What is Section 230 and Why Does Donald Trump Want to Change It?,” MIT Technology Review, August 13, 2019, https://www.technologyreview.com/2019/08/13/610/section-230-law-moderation-social-media-content-bias/

[15] “Section 230 Is the Internet’s First Amendment. Now Both Republicans and Democrats Want To Take It Away,” Reason, July 29, 2019, https://reason.com/2019/07/29/section-230-is-the-internets-first-amendment-now-both-republicans-and-democrats-want-to-take-it-away/

[16] Josh Hawley, “The True History of Section 230,” https://www.hawley.senate.gov/sites/default/files/2020-06/true-history-section-230.pdf

[17] Communications Decency Act, S. 314, 104th Cong., 1st Sess. (February 2, 1995)(“CDA”).

[18] 104th Cong., 1st Sess., 141 Cong. Rec. Pt. 11, 16007 (June 14, 1995)(remarks of Sen. Exon).

[19] Id. at 16007-08 (June 14, 1995).

[20] CDA §2.

[21] Robert Cannon, The Legislative History of Senator Exon’s Communications Decency Act: Regulating Barbarians on the Information Superhighway, 49 Fed. Comm. L.J. 51, 71-72 n.103.

[22] “Cyber Liberties Alert from the ACLU,” March 23, 1995, http://besser.tsoa.nyu.edu/impact/w95/RN/mar24news/Merc-news-cdaupdate.html. When the Exon bill was offered as an amendment to the Telecommunications Act in the Commerce Committee, Chairman Pressler first moved to table it; when the motion to table was defeated, the amendment was approved on a voice vote.

[23] 104th Cong., 1st Sess., 141 Cong. Rec. Pt. 11, 16010 (June 14, 1995)(remarks of Sen. Leahy).

[24] 104th Cong., 1st Sess., 141 Cong. Rec. Pt. 11, 15503-04 (June 9, 1995) (remarks of Sen. Exon, describing his “blue book”).

[25] 104th Cong., 1st Sess., 141 Cong. Rec. Pt. 11, 16009 (June 14, 1995)(remarks of Sen. Exon).

[26] Id. at 16007. The amendment was intended to address criticisms that the Communications Decency Act was overbroad. It provided a narrow defense in cases where the defendant “solely” provides internet access—thereby continuing to expose all websites hosting user-created content to uncertain criminal liability.

[27] S. 652, Telecommunications Competition and Deregulation Act of 1995 (March 30, 1995).

[28] 104th Cong., 1st Sess., 141 Cong. Rec. Pt. 11, 16026 (June 14, 1995).

[29] Robert Cannon, The Legislative History of Senator Exon’s Communications Decency Act: Regulating Barbarians on the Information Superhighway, 49 Fed. Comm. L.J. 51, 71-72 and n.103.

[30] Id.

[31] “Cyberporn—On a Screen Near You,” Time, July 3, 1995, 38.

[32] Internet Freedom and Family Empowerment Act, H.R. 1978, 104th Cong., 1st Sess. (June 30, 1995)(“IFFEA”).

[33] Conor Clark, “How the Wolf of Wall Street Created the Internet,” January 7, 2014, https://slate.com/news-and-politics/2014/01/the-wolf-of-wall-street-and-the-stratton-oakmont-ruling-that-helped-write-the-rules-for-the-internet.html

[34] Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).

[35] Stratton Oakmont v. Prodigy Servs. Co., 1995 N.Y. Misc. LEXIS 229, 1995 WL 323710, 23 Media L. Rep. 1794 (N.Y. Sup. Ct. May 24, 1995).

[36] See IFFEA § 2.

[37] “Gingrich Opposes Smut Rule for Internet,” New York Times, June 22, 1995, A20.

[38] 104th Cong., 2nd Sess., 142 Cong. Rec. H1173 (Daily Ed. Feb. 1, 1996) (statement of Rep. Pelosi).

[39] 104th Cong., 1st Sess., 141 Cong. Rec. Part 16, 22044-054 (August 4, 1995).

[40] Id. at 22046 (remarks of Rep. Lofgren).

[41] Id. (remarks of Rep. Goodlatte).

[42] Id. at 22054.

[43] The Communications Act of 1995, H.R.1555, was approved in the House by a vote of 305-117 on August 4, 1995. Id. at 22084.

[44] Senator Patrick Leahy, a leading proponent of the Cox-Wyden amendment and opponent of Senator Exon’s Communications Decency Act, had previously warned against this logically inconsistent approach. See Comm. Daily Notebook, November 13, 1995, at 6 (reporting Leahy’s concern that the “conference panel would take ‘the easy compromise’” by including both Exon and Cox-Wyden in the Conference Report).

[45] https://clintonwhitehouse4.archives.gov/WH/EOP/OP/telecom/signing.html

[46] https://www.c-span.org/video/?69814-1/telecommunications-bill-signing

[47] ACLU v. Reno, 929 F. Supp. 824 (E.D. Pa. 1996); Shea v. Reno, 930 F. Supp. 916 (S.D.N.Y. 1996); Apollomedia Corp. v. Reno, No. 97-346 (N.D. Cal. 1996).

[48] Reno v. American Civil Liberties Union, 521 U.S. 844 (1997).

[49] Id. at 874.

[50] Id. at 858 n.24.

[51] Id. at 877 n.44.

[52] Id. at 880.

[53] Id. at 882.

[54] “At Least 34 Dead, Half a Million Without Power After Storms, Tornadoes Batter South,” ABC News, April 14, 2020, https://abcnews.go.com/US/dead-half-million-power-storms-batter-south/story?id=70113106

[55] “How to Teach Yourself How to Do Almost Anything: Web Sites Help Users Find Instructional Videos,” Wall Street Journal, May 7, 2008, D10.

[56] “The COVID-19 Pandemic Has Changed Education Forever. This Is How,” World Economic Forum, April 29, 2020, https://www.weforum.org/agenda/2020/04/coronavirus-education-global-covid19-online-digital-learning/

[57]  Gaurav Kumar, 50 Stats About 9 Emerging Content Marketing Trends for 2016, SEMRUSH BLOG (December 29, 2015), https://www.semrush.com/blog/50-stats-about-9-emerging-content-marketing-trends-for-2016

[58] Yin Wu, “What Are Some Interesting Statistics About Online Consumer Reviews?,” DR4WARD.COM (March 26, 2013), http://www.dr4ward.com/dr4ward/2013/03/what-are-some-interesting-statistics-about-online-consumer-reviews-infographic.html

[59] Kimberlee Morrison, “Why Consumers Share User-Generated Content,” Adweek (May 17, 2016), http://www.adweek.com/digital/why-consumers-share-user-generated-content-infographic

[60] Jarrod Sadulski, “Why Social Media Plays an Important Role in Law Enforcement,” In Public Safety, March 9, 2018, https://inpublicsafety.com/2018/03/why-social-media-plays-an-important-role-in-law-enforcement/

[61] See, e.g., “These Cities Replaced Cops With Social Workers, Medics, and People Without Guns,” Vice, June 14, 2020, https://www.vice.com/en_au/article/y3zpqm/these-cities-replaced-cops-with-social-workers-medics-and-people-without-guns; see also “How You Can Help Minneapolis-St. Paul Rebuild and Support Social Justice Initiatives,” Fox 9 KMSP, May 30, 2020, https://www.fox9.com/news/how-you-can-help-minneapolis-st-paul-rebuild-and-support-social-justice-initiatives

[62] 47 U.S.C. § 230(e)(1) provides: “No effect on criminal law. Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute” (emphasis added).

[63] 47 U.S.C. § 230(e)(3) provides: “State law. Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section” (emphasis added).

[64] 47 U.S.C. § 230(e)(3) further provides: “No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

[65] 47 U.S.C. § 230(f)(3) provides: “Information content provider. The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service” (emphasis added).

[66] 521 F.3d 1157 (9th Cir. 2008).

[67] Id. at 1168.

[68] 838 F.3d 158 (2d Cir. 2016).

[69] 570 F.3d 1187, 1197 (10th Cir. 2009).

[70] 194 F. Supp. 3d 263 (2016).

[71] 629 F. Supp. 2d 1302 (S.D. Fla. 2008).

[72] Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016).

[73] Id. at 22 (emphasis added).

[74] J.S. v. Village Voice Media Holdings, LLC, 184 Wash. 2d 95, 359 P.3d 714 (2015), is a representative example, also involving Backpage.com. Stating the test of Section 230, the court held that Backpage would not be a content creator under § 230(f)(3) if it “merely hosted the advertisements.” But if “Backpage also helped develop the content of those advertisements,” then “Backpage is not protected by CDA immunity.” Id. at 717. As in Roommates.com, where the website itself was designed so as to yield illegal content, the court cited the allegations that Backpage’s “content requirements are specifically designed to control the nature and context of those advertisements,” so that they can be used for “the trafficking of children.” Moreover, the complaint alleged that Backpage has a “substantial role in creating the content and context of the advertisements on its website.” In holding that, on the basis of the allegations in the complaint, Backpage was a content creator, the court expressly followed the Roommates holding, and the clear language of Section 230 itself. “A website operator,” the court concluded, “can be both a service provider and a content provider,” id. at 717-18, and when it is a content provider it no longer enjoys Section 230 protection from liability.

[75] These facts are laid out in considerable detail in the 2017 Staff Report of the Senate Permanent Subcommittee on Investigations concerning Backpage.com, https://www.hsgac.senate.gov/imo/media/doc/Backpage%20Report%202017.01.10%20FINAL.pdf

[76] Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 19 n.4 (1st Cir. 2016).

[77] Doe No. 1 v. Backpage, 2018 WL 1542056 (D. Mass. March 29, 2018).

[78] See, e.g., Review of Section 230 of the Communications Decency Act of 1996, U.S. Department of Justice, June 17, 2020 (“The internet has changed dramatically in the 25 years since Section 230’s enactment in ways that no one, including the drafters of Section 230, could have predicted …. Platforms no longer function as simple forums for posting third-party content, but instead … engage in commerce”); Cathy Gellis, “The Third Circuit Joins The Ninth In Excluding E-Commerce Platforms From Section 230’s Protection,” Techdirt, July 15, 2019, https://www.techdirt.com/articles/20190709/13525442547/third-circuit-joins-ninth-excluding-e-commerce-platforms-section-230s-protection.shtml

[79] See, e.g., Chris Reed, Online and Offline Equivalence: Aspiration and Achievement, Int’l J.L.& Info. Tech. 248 (Autumn 2010), 252 and n.22 (characterizing Section 230 as enforcing “rules which apply only online…. [T]he result may well have been to favour online publishing over offline in some circumstances”).

[80] 104th Cong., 1st Sess., 141 Cong. Rec. Part 16, 22045 (August 4, 1995)(remarks of Rep. Cox).

[81] About two-thirds (64%) of U.S. small businesses have websites, while nearly a third (29%) plan to begin using websites for the first time in 2020. Sandeep Rathore, “29% of Small Businesses Will Start a Website This Year,” Small Business News, February 6, 2020, https://smallbiztrends.com/2020/02/2020-small-business-marketing-statistics.html

[82] See, e.g., Michael L. Rustad, The Role of Cybertorts in Internet Governance, The Comparative Law Yearbook of International Business, ed. Dennis Campbell (2018), 391, 411 (“Congress enacted Section 230 … as a subsidy to the infant industry of the Internet”); Mark Schultz, “Is There Internet Life After 30?,” The Hill, Congress Blog, August 23, 2019, https://thehill.com/blogs/congress-blog/technology/458620-is-there-internet-life-after-thirty  (Section 230 was intended as an “exception for an infant industry”); see generally Marc J. Melitz, When and How Should Infant Industries Be Protected?, 66 J. of Int’l. Econ. 177–196 (2005).

[83] See, e.g., Richard Hill, “Trump and CDA Section 230: The End of an Internet Exception?,” Bot Populi, July 2, 2020 (“Internet service providers lobbied to have a special law passed that would allow them to monitor content without becoming fully liable for what they published. The lobbying was successful and led to the US Congress adopting Section 230”), https://botpopuli.net/trump-and-cda-section-230-the-end-of-an-internet-exception ; “Section 230 Was Supposed to Make the Internet a Better Place. It Failed,” Bloomberg Businessweek, August 7, 2019 (positing that Section 230 was the result of “influential [lobbying] of technophobes on Capitol Hill” that included “organizing protests”).

[84] See, e.g., Josh Hawley, “The True History of Section 230,” https://www.hawley.senate.gov/sites/default/files/2020-06/true-history-section-230.pdf

[85] Id.

[86] In an effort to address criticisms from objecting senators, the final version of the Exon amendment, as it was incorporated into the Telecommunications Act of 1996, lessened the criminal exposure of websites and online service providers through the addition of four new defenses. This caused the Clinton administration to complain that the revised Exon legislation now defeated its own purpose by actually making criminal prosecution of obscenity more difficult.  It “would undermine the ability of the Department of Justice to prosecute an on-line service provider,” the Justice Department opined, “even though it knowingly profits from the distribution of obscenity or child pornography.” 104th Cong., 1st Sess., 141 Cong. Rec. Pt. 11, 16023 (June 14, 1995)(letter from Kent Markus, Acting Assistant Attorney General, Department of Justice, to Senator Patrick Leahy).

[87] “The Times’s Opinion Editor Resigns Over Controversy,” New York Times, June 8, 2020, B1 (quoting publisher A. G. Sulzberger’s pronouncement that “Both of us concluded that James [Bennet] would not be able to lead the team”).

[88] Brad Polumbo, “Twitter Sets Itself Up for Failure with ‘Fact-check’ on Trump Tweet,” Washington Examiner, May 27, 2020, https://www.washingtonexaminer.com/opinion/twitters-fact-check-on-trump-tweet-sets-itself-up-for-failure

[89] “Zuckerberg and Trump: Uneasy Ties,” New York Times, June 22, 2020, B1.


Lessons From The German Saga Of Fake News – Proposing A Shift From The State To Communities

by Saiesh Kamath, 3L at the National University of Juridical Sciences, Kolkata

As the Internet becomes ever more ubiquitous, social media platforms have turned into essential vehicles for participation in a democratic culture.[1] However, these platforms are also used for the dissemination of ‘fake news’,[2] an umbrella term denoting propaganda, hoaxes, trolling, and often satire.[3] The growth of fake news has been flagged as a global cause of concern because of its demonstrated capability to subvert democratic processes[4] and instigate violence,[5] with minorities and the marginalised disproportionately affected.[6] Since fake news achieves virality on social media platforms, there have been calls to regulate the dissemination of such content on these platforms, and some nations have passed statutes to deal with the crisis.[7] A critical appraisal of such regulatory laws is necessary to construct a framework that will effectively curb the menace of fake news. In this regard, the German law warrants close examination, given the considerable attention it has received from legislators and legislatures of many nations.[8]

In Part I of this paper, I will introduce the German law instituted to regulate fake news. In Part II, I will explore its implications for free speech. In Part III, I will offer an alternative framework to counter fake news.

I. The Network Enforcement Act, 2017, of Germany

The necessity of countering fake news in Germany was sharpened by the role that hate speech and fake news played in the 2016 presidential election in the United States. Hence, the Network Enforcement Act, 2017 (hereinafter the Act) was passed to combat hate speech and fake news online[9] – two concepts that have become increasingly interconnected.[10] The Act was brought about for two reasons. Firstly, it was to ensure the enforcement of existing provisions of the German criminal code.[11] Secondly, it was to place the onus of such enforcement on social networks[12] (hereinafter intermediaries) with two million or more registered users in Germany.[13]

The Act mandates that intermediaries provide users with a mechanism to complain about illegal content online.[14] In cases of “manifestly unlawful” content, intermediaries must take down the content within twenty-four hours.[15] For the removal or blocking of any other type of illegal content, a period of seven days is provided.[16] Failure to adhere to these requirements is considered a civil violation and carries fines of up to fifty million euros.[17]

II. Stifling Free Speech

The Act was considered one of the most expansive laws enacted by a Western nation to tackle fake news.[18] However, civil rights activists, media organisations, and other groups condemned the law for its implications for free speech.[19] Of the many legitimate criticisms, one of the most compelling was that the law stifles the free speech of the people.[20]

The Act chills freedom of speech in two main ways: firstly, through the burden it imposes on intermediaries to decide what content qualifies as legal or illegal; and secondly, through the phenomenon of over-removal.

The main criticism of the Act arises from the obligation it imposes on intermediaries to monitor and take down illegal content online. The problem with this feature is that it privatizes enforcement of the law.[21] Intermediaries are obliged to judge the legality of content and take down what is illegal, yet they do not possess the technical legal knowledge to make that judgement. To impute such an understanding of the law to an intermediary, equating its function with that of the Judiciary,[22] is absurd.

While this unfair obligation does not in and of itself stifle speech, it creates a phenomenon called ‘over-removal’,[23] which entails broad and excessive censorship. Here, over-removal results from the financial disincentives the Act creates in the form of fines: where intermediaries fail in their obligation to remove illegal content in time, they are penalized. Given that intermediaries cannot guarantee a correct legal judgement in every instance within a time-sensitive framework, they err on the side of censorship and take down even content that is legal.[24] This is also problematic because censorship tends to disproportionately silence the voices of minority groups.[25] Considering that minorities were intended to benefit from the Act, this consequence undoes progress made in that direction.

This violates Article 5 of the Basic Law (the Constitution) of Germany, which guarantees freedom of speech, expression, and opinion to the German people.[26] Additionally, over-removal offends the principle of freedom of speech enshrined in various international human rights and civil rights treaties and conventions.[27] The same reasoning was advanced in the challenge to the French law on hate speech,[28] which was inspired by the Act.[29] The French Constitutional Council struck down critical provisions of that law, one basis being that it disproportionately infringed freedom of speech, and in so doing articulated the importance of that freedom to a democratic polity.[30]

A draft to amend the Act was introduced recently,[31] but it does not make any changes to the core tenet of the Act being critiqued here – the privatization of law enforcement.

III. From Excessive State Intervention to Community Collaboration

The issue of fake news is not an easy one to analyse, let alone resolve. However, the approach of compromising one human right to preserve another is a position born of the unique German experience with the collapse of a democratic order under the Nazis.[32] As such, this approach should not be replicated with reckless abandon in other nations, especially those lacking similar historical lessons and a popular mandate.

The social utility derived from free speech should not be laid at the sacrificial altar without considering other frameworks. In this regard, I argue that there needs to be a shift from the excessive state interventionism demonstrated by Germany to a rights-respecting framework of community collaboration.

Fake news targets users of intermediary platforms in insidious ways. Its core characteristic, virality, ensures that users are widely exposed to it, and that exposure is especially problematic because users are susceptible to continue believing fake news even after corrections are issued.[33] Tackling the root of the issue therefore means making users more resilient to fake news.

To do this, there needs to be a strategy to decrease the persuasive value of fake news and build psychological resistance in users. In this regard, ‘inoculation’ theory is useful. Drawn from psychology, it holds that users may be ‘inoculated’ against fake news by exposing them to a ‘weakened version’ of it[34] – that is, to relatively apparent fake news. This exercise in identifying fake news builds resilience and increases users’ vigilance, leading to a pathway in which fake news tends to be preemptively debunked, or ‘pre-bunked’.[35]

Research on inoculation theory and its effects is promising. When contextualised within the climate change narrative, users were made resistant to a specific piece of misinformation and thereby avoided the influence carried by fake news on climate change.[36] This is especially promising given that the extent of, and reasons for, climate change remain issues that fall along partisan lines. Hence, the approach holds potential for other influential partisan issues that are routinely the subject of fake news.

This research has also been extended to help users spot the common strategies employed in producing fake news.[37] By exposing users to relatively apparent manipulation strategies used by fake news disseminators, researchers enabled them to better identify the tell-tale signs of fake news.[38] This study is more potent because the relatively time-consuming exercise of inoculating users against specific pieces of misinformation, while still valuable for tackling particularly pernicious and persistent items, is now subsumed in a wider understanding of the manipulation tactics surrounding fake news. Hence, it functions like a ‘broad-spectrum vaccine’.[39]

This approach of radically decentralising the exercise of countering fake news is better because it takes no part in subverting human and constitutional rights, with the consequent adverse effects on democratic discourse and polity; it refuses to submit to false equivalencies of trade-offs between various rights or groups of rights. This approach also has the benefit of empowering the ultimate stakeholder: the user. While a more digitally connected world may bring new dangers, a framework that is comfortable with de-emphasising cherished rights and principles should not be adopted when other rights-respecting frameworks exist.

In this framework, the State would still have a role: devising strategies to facilitate collaboration between the communities countering fake news. This would increase co-ordination between communities and enable them to learn from one another and share best practices. Such an approach would be relevant in, and respectful of, a world of increasing cultural diversity, and would acknowledge that a one-size-fits-all solution is not the best fit for a problem as complex and persistent as fake news.

[1] See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1600 (2018).

[2] See Alberto Alemanno, How to Counter Fake News? A Taxonomy of Anti-fake News Approaches, 9 Eur. J. Risk Regul. 1 (2018).

[3] See Mark Verstraete, Derek E. Bambauer, and Jane R. Bambauer, Identifying and Countering Fake News, Arizona Legal Studies Discussion Paper No. 17-15 (2017).

[4] See Alexis Madrigal, What Facebook Did to American Democracy, The Atlantic (October 12, 2017), https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/.

[5] See Reuters, Man pleads guilty in Washington pizzeria shooting over fake news, (March 24, 2017, 8:11 PM) https://www.reuters.com/article/us-washingtondc-gunman/man-pleads-guilty-in-washington-pizzeria-shooting-over-fake-news-idUSKBN16V1XC (Last visited on 11 June 2020).

[6] See Kimberly Grambo, Fake News And Racial, Ethnic, And Religious Minorities: A Precarious Quest For Truth, 21(5) U. Pa. J. Const. L. 1319 (2019).

[7] See Fathin Ungku, Factbox: ‘Fake News’ laws around the world, Reuters (April 2, 2019, 3:43 PM), https://in.reuters.com/article/singapore-politics-fakenews/factbox-fake-news-laws-around-the-world-idINKCN1RE0XW.

[8] See Jacob Mchangama and Joelle Fiss, Germany’s Online Crackdowns Inspire The World’s Dictators, Foreign Policy (November 6, 2019, 10:47 AM), https://foreignpolicy.com/2019/11/06/germany-online-crackdowns-inspired-the-worlds-dictators-russia-venezuela-india/; see also Justitia, The Digital Berlin Wall: How Germany (Accidentally) Created a Prototype for Global Online Censorship, 3, 5, 17 (November, 2019), http://justitia-int.org/wp-content/uploads/2019/11/Analyse_The-Digital-Berlin-Wall-How-Germany-Accidentally-Created-a-Prototype-for-Global-Online-Censorship.pdf.

[9] See Press Release, Global Network Initiative (April 20, 2017), https://globalnetworkinitiative.org/proposed-german-legislation-threatens-free-expression-around-the-world/; see also Alana Schetzer, Governments are making fake news a crime but it could stifle free speech, The International Forum for Responsible Media Blog (July 21, 2019), https://inforrm.org/2019/07/21/governments-are-making-fake-news-a-crime-but-it-could-stifle-free-speech-alana-schetzer/ (‘Schetzer’).

[10] See Kirsten Gollatz and Leontine Jenner, Hate Speech and Fake News – how two concepts got intertwined and politicized, Digital Society Blog (March 15, 2018), https://www.hiig.de/en/hate-speech-fake-news-two-concepts-got-intertwined-politicised/. (‘Gollatz and Jenner’).

[11] See Heidi Tworek and Paddy Leerssen, An Analysis of Germany’s NetzDG Law, Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression 2 (2019). (‘Tworek and Leerssen’).

[12] Id.

[13] Network Enforcement Act, 2017, Art. 1 § 1(2).

[14] Network Enforcement Act, 2017, Art. 1 § 3(1).

[15] Network Enforcement Act, 2017, Art. 1 § 3(2); Tworek and Leerssen, supra note 11.

[16] Network Enforcement Act, 2017, Art. 1 § 3(3); Tworek and Leerssen, supra note 11.

[17] Tworek and Leerssen, supra note 11.

[18] See Emma Thomasson, Germany looks to revise social media law as Europe watches, Reuters, (March 8, 2018), https://www.reuters.com/article/us-germany-hatespeech/germany-looks-to-revise-social-media-law-as-europe-watches-idUSKCN1GK1BN. (‘Reuters’)

[19] See Germany: Flawed Social Media Law, Human Rights Watch, (February 14, 2018, 12:01 AM), https://www.hrw.org/news/2018/02/14/germany-flawed-social-media-law.

[20] See Schetzer, supra note 9; Germany starts enforcing hate speech law, BBC News (January 1, 2018), https://www.bbc.com/news/technology-42510868; Gollatz and Jenner, supra note 10.

[21] Tworek and Leerssen, supra note 11 at 3.

[22] See Germany: The Act to Improve Enforcement of the Law in Social Networks, Article 19, 2 (August, 2017), https://www.article19.org/wp-content/uploads/2017/09/170901-Legal-Analysis-German-NetzDG-Act.pdf.

[23] Tworek and Leerssen, supra note 11 at 3.

[24] Reuters, supra note 18; Schetzer, supra note 9.

[25] See Nadine Strossen, Freedom of Speech and Equality: Do We Have to Choose?, 25(1) J. Law & Pol. 214 (2016).

[26] Grundgesetz, The Constitution of the Federal Republic of Germany, 1949, Art. 5.

[27] See International Covenant on Civil and Political Rights, 16 December 1966, Art. 19; Universal Declaration of Human Rights, 10 December 1948, Art. 2.

[28] See Chloé Berthélémy, French Avia law declared Unconstitutional: what does this teach us at EU level?, EDRi, (June 24, 2020), https://edri.org/french-avia-law-declared-unconstitutional-what-does-this-teach-us-at-eu-level/.

[29] See Mark McCarthy, In France What’s Illegal Offline Is Now Illegal Online, Forbes (May 18, 2020), https://www.forbes.com/sites/washingtonbytes/2020/05/18/in-france-whats-illegal-offline-is-now-illegal-online/#2609c3ce38b5.

[30] See Manny Marotta, France Constitutional Court strikes down most of online hate speech law, Jurist, (June 20, 2020, 4:47 PM) https://www.jurist.org/news/2020/06/french-court-strikes-down-most-of-online-hate-speech-law/.

[31] European Commission, Draft Act amending the Network Enforcement Act, 2020/174/D (Germany) (Notified on 30 March, 2020).

[32] See Rainer Hofmann, Incitement to National and Racial Hatred: The Legal Situation in Germany in Striking a Balance 160 (1992).

[33] See Sander van der Linden and Jon Roozenbeek, Bad News – A psychological vaccine against fake news, LSE Impact Blog (July 31, 2019), https://blogs.lse.ac.uk/impactofsocialsciences/2019/07/31/bad-news-a-psychological-vaccine-against-fake-news/.

[34] Id.

[35] Id.

[36] See Sander van der Linden, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach, Inoculating the Public against Misinformation about Climate Change, 1(2) Global Challenges 5, 6 (2017).

[37] See Jon Roozenbeek and Sander van der Linden, Fake news game confers psychological resistance against online misinformation, 5 Palgrave Commun. 2, 8 (2019).

[38] Id.

[39] Id.

Data Protection in the Times of COVID-19: Indian Aspect

By: Aditi Jaiswal & Anubhav Das, 4th-year students at NUALS Kochi and RMLNLU Lucknow

Introduction

In the wake of COVID-19, where the collection of data is an essential tool for finding and tracking infected individuals, an issue may arise once the pandemic is over. The data the state collects now can be used for its intended purpose, but it can also be used to the detriment of an individual. As a result of this data collection, the right to privacy, which the Supreme Court has declared a fundamental right, may be infringed.

Among the data collected to track individuals infected—or potentially infected—with COVID-19 are location data and biographical data. This data is personal and can be used to understand an individual and make predictions about that individual, which can then be used against them. For example, consider the app recently launched by the Indian government, the Aarogya Setu App. This app predicts the chances of an individual having COVID-19 by tracking the individual’s location. Using that location, the app checks whether the individual has come into proximity with an infected person. The same location data could be used to predict the number of family members in a household, which grocery store a person shops at, and more. Just as Target[1] predicted a woman’s pregnancy by analysing her shopping list, algorithms can process data collected ostensibly for COVID-19 tracking to learn things one might never wish to reveal. Location tracking could even be used to infer something as personal as an extramarital affair.
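The proximity check described above can be illustrated with a simplified sketch. The function names, thresholds, and data shapes here are hypothetical; the actual Aarogya Setu implementation combines Bluetooth and GPS signals and is not public in this form.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def exposure_events(user_pings, infected_pings, radius_m=10, window_s=600):
    """Flag every pair of location pings that fall within `radius_m` metres
    and `window_s` seconds of each other. Each ping is (timestamp_s, lat, lon)."""
    events = []
    for t1, la1, lo1 in user_pings:
        for t2, la2, lo2 in infected_pings:
            if abs(t1 - t2) <= window_s and distance_m(la1, lo1, la2, lo2) <= radius_m:
                events.append((t1, t2))
    return events
```

The privacy concern in the paragraph above is visible in the inputs: the same `user_pings` stream that powers the exposure check is exactly the data from which a household size or a shopping habit could be inferred.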

Although the app’s privacy policy[2] states that the data collected will be deleted after a certain period of time, do we have any legislation that could be used to hold the state liable should it fail to do so? This article deals with a very basic concept in data protection law: purpose limitation. It analyses whether existing Indian law can address such issues, and then turns to the Data Protection Bill of 2019 and its importance in the present case.

The Present:

The principle of purpose limitation in data protection law is this: data must be used only for the purpose specified at collection, and once that purpose is accomplished, the data must be deleted. It is essential that this principle be adhered to for the data the state collects to track COVID-19 cases; it is the only way to ensure the data can never later be used for any other purpose.
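The principle can be sketched in code: each record carries the purpose it was collected for, and deletion is triggered either when that purpose formally ends or when a retention period lapses. This is a toy illustration with hypothetical names, not a description of any real system.

```python
import time

class PurposeLimitedStore:
    """Toy data store enforcing purpose limitation: every record carries
    the purpose it was collected for and is deleted once that purpose ends
    or its retention period expires."""

    def __init__(self):
        self._records = {}  # record_id -> (data, purpose, expires_at)

    def collect(self, record_id, data, purpose, retention_s):
        self._records[record_id] = (data, purpose, time.time() + retention_s)

    def purpose_fulfilled(self, purpose):
        """Delete everything tied to a purpose, e.g. when pandemic tracking ends."""
        self._records = {k: v for k, v in self._records.items() if v[1] != purpose}

    def purge_expired(self):
        """Delete records whose retention period has lapsed."""
        now = time.time()
        self._records = {k: v for k, v in self._records.items() if v[2] > now}

    def __len__(self):
        return len(self._records)
```

The legal question the article raises is, in effect, whether anything obliges the state to call `purpose_fulfilled` once the pandemic is over.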

Currently, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (the “IT Rules”), framed under the Information Technology Act, 2000, govern the collection of data.[3] The IT Rules recognize the need for a privacy policy and set out requirements for information collection, disclosure, and more. Rule 5(4) deals with purpose limitation: a “body corporate” cannot retain collected data after the purpose for which it was collected has been accomplished. In other words, the body corporate is mandated to delete the data once the purpose is fulfilled. Had this rule been written more broadly, it could have effectively addressed data collection by the state. The problem lies in the definition of “body corporate,” which covers only “any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities”.[4] The rule therefore does not apply to the state or to the data the state collects to track COVID-19 cases. This gap can be addressed either by amending the IT Rules or through new data protection legislation.

The Future:

In the Puttaswamy case,[5] the Supreme Court acknowledged the need for separate data protection legislation in India. As a result, after much deliberation, the Personal Data Protection Bill, 2019 (the “PDP Bill”) was introduced in Parliament on December 11, 2019. The Bill is currently being analysed by a Joint Parliamentary Committee, and it includes important provisions that could address the problem of state data collection in the times of COVID-19.

Section 9(1) of the PDP Bill states that a “data fiduciary” shall not retain personal data once the purpose for which it was collected is fulfilled.[6] An important aspect of this provision is that it governs not just a “body corporate” but any “data fiduciary,” and Section 3(13) of the PDP Bill defines “data fiduciary” to include the state. If the Bill were law right now, the state would be required to delete the collected data once the pandemic is over or once the data has served its purpose. Moreover, if the state does not comply with this provision, it “shall be liable to a penalty which may extend to fifteen crore rupees or four percent of its total worldwide turnover of the preceding financial year, whichever is higher.” However, the PDP Bill is not yet law, and there is little chance of it being enacted soon. This may give the state an opportunity to evade liability even if it violates an individual’s privacy.
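The penalty ceiling quoted above is a simple "whichever is higher" formula, which a short sketch makes concrete (the function name is mine; the Bill sets a maximum penalty, not a fixed fine):

```python
def pdp_penalty_ceiling(turnover_inr):
    """Maximum penalty under the quoted provision: Rs 15 crore or 4% of
    total worldwide turnover of the preceding financial year, whichever
    is higher. 1 crore = 10 million rupees."""
    fifteen_crore = 15 * 10_000_000
    return max(fifteen_crore, 0.04 * turnover_inr)
```

So for a data fiduciary with a turnover of Rs 1,000 crore (Rs 10 billion), the 4% prong governs; for a smaller fiduciary, the Rs 15 crore floor does.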

The Reality:

As mentioned above, the PDP Bill is still being considered by a Joint Parliamentary Committee; it has not yet even been debated on the floor of Parliament. No one knows when, or if, the Bill will become law. Moreover, given the current COVID-19 pandemic, such deliberations will likely take place only once the pandemic is over. Thus, even if the Bill becomes an Act, the important question now is whether it will apply retrospectively.

The state will be liable for misuse of the data collected now only if, when the PDP Bill becomes law, it applies retrospectively. The Bill, in its current form, is silent on the point. As per the B.N. Srikrishna Committee report,[7] which presented the draft Personal Data Protection Bill, 2018, the law is to have no retrospective application; the committee’s rationale is that retrospective application would not give data fiduciaries enough time to come into compliance. Thus, the state can evade all liability for misuse of personal data, and the data collected now can be misused without legal repercussion until the PDP Bill becomes law. This will ultimately undermine the privacy of individuals.

Conclusion:

India needs data protection legislation now. Such legislation would help prevent future data misuse and maintain individual privacy. The issue of data collection in the times of COVID-19 can also be remedied by amending the IT Rules’ definition of “body corporate” to include the state, or by enacting the PDP Bill with a provision for retrospective application. Retrospective application would be an essential step towards curbing the potential misuse of data being collected now by the state, and would in turn preserve and protect the informational privacy of individuals.

[1] See Kashmir Hill, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did, FORBES (Feb. 16, 2012, 11:02am), https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/#69f563446668.

[2] See Rohit Chatterjee, Arogya Setu App Gets Revised Privacy Policy, ANALYTICSINDIAMAG (Apr. 2020), https://analyticsindiamag.com/arogya-setu-app-gets-revised-privacy-policy/.

[3] See S.S. Rana & Co, Advocates, India: Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011, Mondaq (Sept. 5, 2017), https://www.mondaq.com/india/data-protection/626190/information-technology-reasonable-security-practices-and-procedures-and-sensitive-personal-data-or-information-rules-2011.

[4] See Elonnai Hickok, Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011, Centre for internet & society (Aug. 11, 2015), https://cis-india.org/internet-governance/blog/big-data-and-information-technology-rules-2011.

[5] See K. S. Puttaswamy v. Union of India, Writ Petition (Civil) No . 494 of 2012 (Sup. Ct. India Aug. 24, 2017).

[6] See The Personal Data Protection Bill, 2019, PRS Legislative Research, https://www.prsindia.org/billtrack/personal-data-protection-bill-2019.

[7] Committee of Experts Under the Chairmanship of Justice B.N Srikrishna, A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians, Ministry of Electronics & Info. Technology,  July 2018, https://meity.gov.in/writereaddata/files/Data_Protection_Committee_Report.pdf.

Health Information Technology: Technology in Your Health Care

By: Rachel Whalen

“In 2019, healthcare consumers continue to demand greater transparency, accessibility and personalization.”[1] In this increasingly digital age, incorporating health information technology (“Health IT”) into the health industry is critical. Health IT is “the exchange of health information in an electronic environment.”[2] A variety of electronic methods are used, such as computerized disease registries, electronic health record systems (“EHRs”), and electronic prescribing.[3] Health care systems are implementing Health IT to manage health information and care for individuals and groups.[4]

The widespread use of Health IT improves quality of care, prevents medical errors, reduces costs, and decreases inefficiencies.[5] Communication between health care providers and patients is better than ever thanks to advances in securing Health IT networks.[6] More accurate EHRs can follow a patient to different health care providers, and apps and increased access to information give patients more control over their care, improving their ability to meet their health goals.[7] By merging technology with healthcare, Health IT has improved both access to care and its consistency.[8]

Several components of Health IT add complexity that does not exist in other communication technologies. The central component of Health IT infrastructure is the EHR.[9] These EHRs, or electronic medical records (“EMRs”), contain all of a person’s official health records in digital format.[10] Digital records can be viewed even when the doctor’s office is closed, providing greater access to a person’s health information.[11] EHRs can also be used to share information among multiple healthcare providers and agencies within the healthcare system,[12] making it easier for doctors to share information with specialists and ensure consistent care.[13] Health IT also works outside the healthcare system through personal health records (“PHRs”), which are self-maintained health records controlled by the patients themselves.[14] PHRs can track doctor visits and treatments, as well as activities outside the doctor’s office.[15] Patients can record their eating and exercise habits, along with blood pressure, heart rate, and other medical parameters.[16] PHRs may even record medications and prescriptions if linked to the doctor’s electronic prescribing (“E-prescribing”) system.[17] E-prescribing connects doctors directly to the pharmacy, so no paper prescriptions are lost or misread.[18] This gives patients wider access to pharmaceuticals without having to carry paper prescriptions.[19]

Developments in Health IT have made health records more popular and accessible among patients. Smartphones and apps have encouraged patients to use PHRs and helped them become more comfortable with their digital health information.[20] Health care providers have also increasingly implemented patient portals as their designs have become more consumer-friendly. Apps and portals were clunky and limited in the early days of Health IT, but modern systems offer far more features and customization.[21] Patient portals once provided only information about upcoming appointments and perhaps some test results.[22] Now, they are used to download health records, communicate securely with physicians, pay bills, check services and insurance coverage, and order prescriptions.[23] These Health IT services grant patients more access to and control over their health information and treatment.

In addition to individual records, Health IT has given rise to health information exchanges (“HIEs”).[24] Health care providers must manage a mountain of patient health information, so data analytics has grown correspondingly important.[25] HIEs are systems developed by groups of health care providers to share data between Health IT networks.[26] These shared systems and agreements not only allow for better communication and consistent care, but also provide a large database for analyzing the health of communities as a whole.[27] Academic researchers can use the shared information to develop new medical treatments and pharmaceuticals.[28] This plethora of information can be used to manage population health goals and research health trends.[29]

Unfortunately, this volume of information is very difficult to manage, which again increases the reliance on data analytics to find relevant files.[30] This is where other Health IT technologies come in, specifically picture archiving and communication systems (“PACS”) and vendor-neutral archives (“VNAs”). While images have traditionally mattered most to radiologists, other specialties, such as cardiology and neurology, now also produce large volumes of clinical images.[31] PACS and VNAs are widely used to store and manage patient medical images and, in some cases, have even been integrated into systems shared between facilities and health providers.[32] Some Health IT systems even use artificial intelligence (“AI”) to sort and manage files.[33]

In addition to the advantages discussed above, the ability to quickly share accurate information, called “interoperability,” can be the difference between life and death for a patient. Health IT tools support the cooperation between health care providers needed for better patient care and lower healthcare costs.[34] Interoperability gives providers the most up-to-date information and can give patients immediate access to their own records. Without it, providers must gather personal information and basic medical history from each patient, forcing patients to fill out repetitive paperwork; with it, that basic information reaches providers without the excess paperwork, allowing faster treatment. Similarly, providers can see test results from other facilities, which prevents unnecessary tests and improves consistency of treatment. Consistency is further aided by alerts and reminders for ongoing health conditions, appointments, and medications.[35]

Digital records also protect patient information: in an emergency, documents can be recovered, and health records remain constantly accessible and follow patients to any provider, regardless of location, allowing consistent treatment. Electronic systems can encrypt information so that only authorized personnel have access, and electronic records can be tracked to log who accessed the information and when. Several of these safeguards are required by the Federal Government; for example, certified Health IT systems must limit access to designated professionals so that care can be managed effectively.[36]
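The access-control and audit-trail ideas above can be sketched in a few lines. This is a hypothetical illustration (class and field names are mine), not a certified Health IT implementation:

```python
from datetime import datetime, timezone

class AuditedRecord:
    """Toy health-record wrapper: only authorized personnel can read the
    data, and every access attempt is logged with who tried and when."""

    def __init__(self, data, authorized_users):
        self._data = data
        self._authorized = set(authorized_users)
        self.audit_log = []  # entries: (user, utc_timestamp, allowed)

    def read(self, user):
        allowed = user in self._authorized
        self.audit_log.append((user, datetime.now(timezone.utc), allowed))
        if not allowed:
            raise PermissionError(f"{user} is not authorized")
        return self._data
```

Note that the denied attempt is logged before the exception is raised, mirroring the requirement that systems record who accessed (or tried to access) information and when.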

Strict government regulations limit Health IT because of the volume of confidential information it manages.[37] Privacy and security are top priorities for the Federal Government as well as for patients and health care providers.[38] Medical records commonly contain the most intimate details of a patient’s life.[39] These files document physical health, mental health, behavioral issues, family information including child care relationships, and financial status.[40] Health care providers need all of this sensitive information to properly treat patients, but a breach could cause innumerable harms.[41] Patients are therefore guaranteed clearly defined rights to the privacy of their health information, including electronic health information.[42]

Health care and technology touch every aspect of our lives. Ever since the computer was invented, various methods have been devised to improve the efficiency of and access to health care.[43] From EHRs to electronic prescriptions, Health IT has been connecting vital information for patients and health care providers.[44] Some issues and miscommunications remain within these systems, but Health IT will improve as technology improves, providing crucial information and technical support to the health care industry.

[1] Ashley Brooks, What Is Health Information Technology? Exploring the Cutting Edge of Our Healthcare System, Rasmussen C. Health Sci. Blog (June 10, 2019), https://www.rasmussen.edu/degrees/health-sciences/blog/what-is-health-information-technology/ (quoting Patrick Gauthier, director of healthcare solutions at Advocates for Human Potential, Inc.).

[2] Health Information Technology Integration, Agency for Healthcare Research and Quality, https://www.ahrq.gov/ncepcr/tools/health-it/index.html (last visited Apr. 15, 2020).

[3] See id.

[4] See id.

[5] See Department of Health and Human Services, Health Information Technology, Health Information Privacy, https://www.hhs.gov/hipaa/for-professionals/special-topics/health-information-technology/index.html (last visited Apr. 15, 2020).

[6] See Brooks, supra note 1.

[7] See id.

[8] See id.

[9] See Margaret Rouse, Health IT (health information technology), SearchHealthIT (June 2018), https://searchhealthit.techtarget.com/definition/Health-IT-information-technology.

[10] See id.

[11] See Office of the National Coordinator for Health Information Technology, Health IT: Advancing America’s Health Care, https://www.healthit.gov/sites/default/files/pdf/health-information-technology-fact-sheet.pdf (last visited Apr. 15, 2020) [hereinafter “ONC”].

[12] See Rouse, supra note 9.

[13] See ONC, supra note 11.

[14] See Rouse, supra note 9.

[15] See ONC, supra note 11.

[16] See id.

[17] See id.

[18] See id.

[19] See id.

[20] See Rouse, supra note 9.

[21] See id.

[22] See id.

[23] See id.

[24] See id.

[25] See Rouse, supra note 9.

[26] See id.

[27] See id.

[28] See id.

[29] See id.

[30] See Rouse, supra note 9.

[31] See id.

[32] See id.

[33] See id.

[34] See Brooks, supra note 1.

[35] See ONC, supra note 11.

[36] See id.

[37] See Brooks, supra note 1.

[38] See ONC, supra note 11.

[39] See Institute of Medicine, Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research (Laura A. Levit & Lawrence O. Gostin eds., 2009).

[40] See id.

[41] See id.

[42] See ONC, supra note 11.

[43] See The History of Healthcare Technology and the Evolution of EHR, VertitechIT (Mar. 11, 2018), https://www.vertitechit.com/history-healthcare-technology/.

[44] See id.

Courts Across America Adapt and Respond to COVID-19


By: Derek Reigle


The COVID-19 pandemic has brought extraordinary changes to America and the world, and the American legal system has not escaped them. Last week, the Supreme Court announced that it will begin hearing cases via teleconference, a first for the Court.[1] The Supreme Court is not alone in these changes. Indeed, all over America, courts have moved hearings online, conducting them through video-conferencing software like Zoom.[2]

States have responded to these new online courts in interesting ways. Texas has issued guidelines on how to dress and present oneself on Zoom in court.[3] This makes sense, because some attorneys have appeared online while still in bed, drawing judicial reprimands.[4] Further problems have also emerged. Some court hearings have even been hacked, with online trolls “Zoom-bombing” proceedings in order to disrupt them.[5] As a result, federal prosecutors are now issuing warnings that intruding uninvited into Zoom calls is a felony.[6]

Issues beyond the logistics of online court proceedings have also developed, and some of these substantive concerns will inevitably raise interesting and complicated constitutional questions. One concern, already being argued in some state courts, comes from criminal defense counsel: they argue that video examination of witnesses during criminal trials does not satisfy the confrontation requirement enumerated in our Constitution.[7] Another issue is the right to a speedy trial. Cases are being pushed back to June and July across the country,[8] but what if the pandemic continues to linger? How these complicated legal questions are resolved could shape the plans put in place for the next potential pandemic.

As of now, the extent and length of the pandemic are unknown. However, several changes are already permanently altering the legal landscape, and some could benefit judges in the long run. Confrontation Clause issues aside, many courtroom procedures that require face-to-face interaction could move online.[9] This could give an antiquated profession some transitional juice. Think about it: lawyers could handle cases farther away, access to courts could increase because people could appear remotely, and judicial efficiency could rise as a result.

Another positive of this situation is that prisoners eligible for parole are now being released in greater numbers,[10] driven by fears of the coronavirus spreading through our prisons and infecting inmates.[11] America has the highest number of incarcerated persons in the world,[12] and reducing those numbers through the release of non-violent offenders would be a great thing.

Ultimately, the coming months, and potentially years, will bring significant changes in the way court is conducted, both in person and online, as well as in how we handle our vulnerable prison population. All of this will raise several interesting constitutional questions. Hopefully, we can take note of these noteworthy changes and carry the unexpected positives out of a terrible situation.

[1] See Pete Williams, In Historic First, Supreme Court to Hear Arguments by Phone, NBC News (Apr. 20, 2020), https://www.nbcnews.com/politics/supreme-court/historic-first-supreme-court-hear-arguments-phone-n1182681.

[2] See Aaron Holmes, Courts and Government Meetings Have Fallen into Chaos After Moving Hearings to Zoom and Getting Swarmed With Nudity and Offensive Remarks, Business Insider (Apr. 20, 2020), https://www.businessinsider.com/zoom-courts-governments-struggle-to-adapt-video-tools-hearings-public-2020-4.

[3] See Electronic Hearings with Zoom, Texas Judicial Branch, https://www.txcourts.gov/programs-services/electronic-hearings-with-zoom/.

[4] See Danielle Wallace, Florida Judge Urges Lawyers to Get Out of Bed and Get Dressed for Zoom Court Cases, Fox News (Apr. 15, 2020), https://www.foxnews.com/us/florida-coronavirus-judge-lawyers-zoom-shirtless-bed-poolside-dressed.

[5] See Nick Statt,’Zoombombing’ is a Federal Offense That Could Result in Imprisonment, Prosecutors Warn, The Verge (Apr. 3, 2020), https://www.theverge.com/2020/4/3/21207260/zoombombing-crime-zoom-video-conference-hacking-pranks-doj-fbi.

[6] See id.

[7] See Interview with Hon. Anne Hartnett, Judge, Court of Common Pleas of The State of Delaware, in Dover, Del. (Apr. 21, 2020).

[8] See id.

[9] See id.

[10] See Iowa to Release Prisoners to Minimize Spread of COVID-19, KCCI (Apr. 20, 2020), https://www.kcci.com/article/iowa-to-release-prisoners-to-minimize-spread-of-covid-19/32216621.

[11] See id.

[12] See Drew Kann, 5 Facts Behind America’s High Incarceration Rate, CNN (Apr. 21, 2019), https://www.cnn.com/2018/06/28/us/mass-incarceration-five-key-facts/index.html.

Image Source: https://www.drugtargetreview.com/news/57287/3d-visualisation-of-covid-19-surface-released-for-researchers/


