Richmond Journal of Law and Technology

The first exclusively online law review.

Dial “A” for Alexa


By: Victoria Linney

Amazon Echo is a hands-free speaker that is controlled by your voice.[1] Echo answers to the name Alexa, and plays music, reads audiobooks aloud, gives headlines, and does so much more.[2] But, is one of Alexa’s “skills” the ability to help solve a murder?

This question is being asked in connection with a murder investigation in Bentonville, Arkansas.[3] Police have issued a warrant asking Amazon to produce the transcripts, voice recordings, and other information that an Echo speaker may have recorded the night of the murder.[4] But, the chances of Alexa being helpful in solving the murder are slim. This is because Echo is not recording everything that you say in your home.[5] Instead, the Echo is listening for “hot words” like the word “Alexa.”[6] While the Echo’s microphone is always on,[7] only upon hearing the word “Alexa” does the device begin recording for the amount of time it would take to make a request, and then it sends that audio to Amazon.[8] These recordings are then stored “until the user deletes them through the Echo smartphone app or on Amazon’s website.”[9] The user knows when Echo is sending audio for Amazon to store because the ring on top of the Echo illuminates and turns blue.[10]
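To make that behavior concrete, here is a deliberately simplified, hypothetical sketch of the “hot word” pattern described above. It is not Amazon’s actual implementation; the microphone feed is faked as a list of spoken phrases so the example runs as-is.

```python
# Hypothetical illustration of "hot word" detection -- not Amazon's code.
# The microphone feed is faked as a list of phrases so this runs as-is.
WAKE_WORD = "alexa"

def process_audio(phrases):
    sent_to_cloud = []
    for phrase in phrases:               # the microphone hears everything...
        if WAKE_WORD in phrase.lower():  # ...but only the hot word triggers recording
            sent_to_cloud.append(phrase) # the ring turns blue; the clip is uploaded
    return sent_to_cloud                 # everything else is never stored

print(process_audio(["what a nice day", "Alexa, play jazz", "goodnight"]))
# -> ['Alexa, play jazz']
```

The point of the sketch is the asymmetry: every phrase passes through the loop, but only the wake-word phrase is retained and transmitted.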

Even though there is only a slim chance that the Echo could help solve the murder, Amazon has refused to turn over the data to the prosecutor.[11] In a statement, Amazon said: “Amazon will not release customer information without a valid and binding legal demand properly served on us. Amazon objects to overbroad or otherwise inappropriate demands as a matter of course.”[12]

The prosecutor’s request for the Echo data has brought the privacy implications of voice-activated speakers and other smart technology to the forefront of legal discussions.[13] Even though police have historically seized other electronics such as computers and cell phones to help solve crimes, the question remains whether new devices with built-in microphones that are theoretically always listening should be subjected to the same standard as computers and cell phones.[14] Put differently, the question becomes “is there a difference in the reasonable expectation of privacy one should have when dealing with a device that is ‘always on’ in one’s own home?”[15]

The Fourth Amendment provides people the right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”[16] In order to assert a claim under the Fourth Amendment, you must have a “reasonable expectation of privacy, which contains both an objective standpoint and a subjective standpoint.”[17] On one hand, the argument can be made that there is a reasonable expectation of privacy: even though people choose to put this technology in their homes, they are not necessarily consenting to having their private conversations broadcast to the world. However, in Smith v. Maryland, the Supreme Court held that “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.”[18] Therefore, since Alexa requests are sent to Amazon (a third party), it may also be argued that one does not have a reasonable expectation of privacy in these recordings.

While it is unclear how the courts will deal with the privacy issues that smart devices with always-on microphones like Alexa pose, there are avenues for purchasers of these devices to protect themselves. In addition to deleting the recordings in the Alexa app, users are also able to turn off the microphone on the device by pushing “the microphone button on top of the Echo” and then waiting for the button and the ring to “illuminate bright red to let you know it is not listening.”[19] But, until courts have settled the level of privacy these devices are owed, turning off the microphone when not in use and deleting requests from the app is probably the safest avenue for purchasers of Alexa to protect their privacy.

 

 

 

[1] See Amazon Echo – Black, Amazon, https://www.amazon.com/Amazon-Echo-Bluetooth-Speaker-with-WiFi-Alexa/dp/B00X4WHP5E (last visited Feb. 8, 2017).

[2] See James Kendrick, How to Use the Amazon Echo and Why You Should Get One, Mobile News (Feb. 9, 2016), http://www.zdnet.com/article/how-to-use-the-amazon-echo-and-why-you-should-get-one/.

[3] See Jake Swearingen, Can an Amazon Echo Testify Against You?, N.Y. Mag. (Dec. 27, 2016), http://nymag.com/selectall/2016/12/can-an-amazon-echo-testify-against-you.html.

[4] See Janko Roettgers, Relax: Your Amazon Echo Isn’t Recording Everything You Say, Boston Herald (Dec. 28, 2016), http://www.bostonherald.com/business/technology/2016/12/relax_your_amazon_echo_isnt_recording_everything_you_say.

[5] See id.

[6] Id.

[7] See Times Editorial Board, The Smart Home Has Ears, And it Can’t Keep a Secret, L.A. Times (Jan. 4, 2017), http://www.latimes.com/opinion/editorials/la-ed-amazon-echo-surveillance-20170104-story.html.

[8] Swearingen, supra note 3.

[9] Times Editorial Board, supra note 7.

[10] See Tony Bradley, How Amazon Echo Users Can Control Privacy, Forbes (Jan. 5, 2017), http://www.forbes.com/sites/tonybradley/2017/01/05/alexa-is-listening-but-amazon-values-privacy-and-gives-you-control/#2550510e5eed.

[11] See CNN Wire, Data Recorded on Voice-Activated Amazon Echo Sought by Prosecutor in Arkansas Murder Trial, KTLA (Dec. 28, 2016), http://ktla.com/2016/12/28/data-on-amazon-echo-sought-by-prosecutor-in-arkansas-murder-trial/.

[12] Id.

[13] See Roettgers, supra note 4.

[14] See Amy Wang, Can Alexa Help Solve a Murder? Police Think So – But Amazon Won’t Give Up Her Data, Wash. Post (Dec. 28, 2016), https://www.washingtonpost.com/news/the-switch/wp/2016/12/28/can-alexa-help-solve-a-murder-police-think-so-but-amazon-wont-give-up-her-data/?utm_term=.cd95df5221fd.

[15] Id.

[16] U.S. Const. amend. IV.

[17] Andrew L. Rossow, Amazon Echo May Be Sending Its Sound Waves into the Courtroom As Our First ‘Smart Witness’, Huff. Post (Dec. 29, 2016), http://www.huffingtonpost.com/entry/amazons-echo-may-be-sending-its-sound-waves-into-the_us_58656ceae4b04d7df167d377.

[18] Smith v. Maryland, 442 U.S. 735, 743-44 (1979).

[19] Bradley, supra note 10.

Image Source: https://kfiretv-cn396iwnnfhyi.netdna-ssl.com/wp-content/uploads/2016/12/alexa-911.jpg.

Your Personal Web History Could Soon Be for Sale


By: Brad Stringfellow

Voting along strict party lines, the Republican-majority Senate recently threw out FCC rules that would have provided consumers with more privacy from Internet Service Providers (ISPs).1 As it stands, ISPs such as Comcast are on an even playing field with free services such as Google or Facebook, which are able to capture, package, and sell your activity. The reason the FCC sought to put harsher restrictions on ISPs is that consumers can choose whether or not to use free services such as Google or Facebook, but there is little they can do to escape an ISP’s oversight: using the internet in almost any capacity is accomplished through an always-watching ISP.

The FCC was unhappy with this lucrative opportunity ISPs have to exploit and sell consumer browsing data, especially since consumers must pay ISPs for internet service. In 2016, the FCC passed a new set of rules entitled Protecting the Privacy of Customers of Broadband and Other Telecommunications Services.2 The rules were meant to increase consumer privacy by forcing ISPs to adopt stronger data security and privacy measures, and by allowing the sale of browsing history only if the consumer opted in.3

The FCC explained its view of ISPs by saying that “ISPs are in a position to develop highly detailed and comprehensive profiles of their customers – and to do so in a manner that may be completely invisible.”4 In justifying the proposed rules, the FCC explained, “[W]ell-functioning commercial marketplaces rest on informed consent.”5 ISPs were not happy about these new rules, as compliance would have required costly infrastructure upgrades to meet the security requirements and the loss of revenue from selling consumer browsing history.6

Stay petitions on the new rules were filed by organizations of advertisers, telecom, broadband, and other commercial groups sympathetic to ISPs. The FCC granted the stay on March 1, 2017.7 FCC Commissioner Michael O’Rielly noted that “there has been no evidence of any privacy harms” and “no benefit to be gained from increased regulations,” while the new rules “place substantial, unjustified costs on businesses and consumers.”8

On March 23, 2017, the Senate voted to disapprove the stayed rules. Congress has the power to overturn agency rules with a simple majority under Chapter 8 of Title 5 of the U.S. Code.9 The vote was 50-48: 50 Republican votes to overturn the rules against 48 Democratic votes to keep them, with two absent Republican Senators not voting.10 The resolution now goes to the majority-Republican House, which will likely follow suit and throw the rules out.

Speaking of the vote’s outcome, Senator Edward Markey, a Democrat from Massachusetts, said, “President Trump may be outraged by fake violations of his own privacy, but every American should be alarmed by the very real violation of privacy that will result from the Republican roll-back of broadband privacy protections. With today’s vote, Senate Republicans have just made it easier for Americans’ sensitive information about their health, finances and families to be used, shared, and sold to the highest bidder without their permission. The American public wants us to strengthen privacy protections, not weaken them. We should not have to forgo our fundamental right to privacy just because our homes and phones are connected to the internet.”11

After winning the vote, Senate Majority Leader Mitch McConnell justified overturning the regulation as it “makes the internet an uneven playing field, increases complexity, discourages competition, innovation, and infrastructure investment.”12

It is curious to note how strictly the vote went along party lines. Republicans have been supporters of individual rights and privacy in some regards, but here the desire to let big business work things out amongst themselves seems to have won out. The 2016 Republican Platform gives a statement on internet privacy:

“We intend to advance policies that protect data privacy while fostering innovation and growth and ensuring the free flow of data across borders…We intend to facilitate access to spectrum by paving the way for high-speed, next-generation broadband deployment and competition on the internet and for internet services.”13

Protecting data privacy is balanced against the need to foster innovation and growth; in this case, it seems the need to foster innovation and growth won out. In other parts of the Republican Platform, the party is protective of individual rights against big business, such as medical records and farmers’ data.14 On medical records, it states, “We applaud the advance of technology in electronic medical records while affirming patient privacy and ownership of personal health information.”15 On farmers’ rights, it reads, “We will advance policies to protect the security, privacy, and most of all, the private ownership of individual farmers’ and ranchers’ data.”16 Additionally, the Republican Platform generally opposes aerial surveillance on US soil, with the exception of observation over borders.17

It seems the Republican Party has some intention of protecting individual privacy rights, and even goes so far as to partly acknowledge it, so it is certainly surprising that not one Republican Senator was willing to vote against such a sweeping grant of ISP power.

Since it seems as though this will inevitably pass through the House, what can be done to protect privacy? Virtual Private Networks (VPNs) are perhaps the easiest way to circumvent ISPs, but there are some downsides. VPNs are completely unregulated and can just as easily sell your browsing history as an ISP can, so careful scrutiny and selection are required.18 One VPN company, Private Internet Access, is jumping on the opportunity by taking out a full-page ad in the New York Times calling out the 50 Senators who voted to disapprove the rules and the potential consequences of increased ISP access to private data.19

This is an unsavory turn which grants sweeping power to ISPs to monitor, package, and sell consumers’ browsing history and activity. Hopefully, some Republicans in the House will be more protective of their constituents’ privacy. Contacting your House Representative may help. If things continue along the same path, internet privacy is about to be substantially changed for the worse.

 

 

 

 

1 David Shepardson, U.S. Senate Votes to Overturn Obama Broadband Privacy Rules, Reuters (Mar. 23, 2017, 1:50 PM), http://www.reuters.com/article/us-usa-internet-idUSKBN16U2ER.

2 Protecting the Privacy of Customers of Broadband and Other Telecommunications Services, FCC 2500 (Mar. 31, 2016), https://apps.fcc.gov/edocs_public/attachmatch/FCC-16-39A1_Rcd.pdf.

3 Id. at 2502.

4 Id.

5 Id. at 2506.

6 Thorin Klosowski, The Senate Just Voted to Let Internet Sell Your Web History, Life Hacker (Mar. 23, 2017, 1:30 PM), http://lifehacker.com/senate-votes-to-let-internet-providers-sell-your-web-hi-1793574677.

7 Order Granting Stay Petition in the Matter of Protecting the Privacy of Customers of Broadband and Other Telecommunications Services, FCC 1 (Mar. 1, 2017), http://transition.fcc.gov/Daily_Releases/Daily_Business/2017/db0301/FCC-17-19A1.pdf.

8 Id. at 5.

9 5 U.S.C. §§ 801–808.

10 https://www.senate.gov/legislative/LIS/roll_call_lists/roll_call_vote_cfm.cfm?congress=115&session=1&vote=00094.

11 Edward J. Markey, Senator Markey Blasts GOP Roll-back of Broadband Privacy Protections, (Mar. 23, 2017), https://www.markey.senate.gov/news/press-releases/senator-markey-blasts-gop-roll-back-of-broadband-privacy-protections.

12 Shepardson, supra note 1.

13 Republican Platform 2016 6, https://prod-cdn-static.gop.com/static/home/data/platform.pdf.

14 Id. at 18, 36.

15 Id. at 36.

16 Id. at 18.

17 Id. at 13.

18 Klosowski, supra note 6.

19 Private Internet Access, A VPN Provider, Takes Out A Full Page Ad in the New York Times Calling Out 50 Senators, https://i.redditmedia.com/0kc4jJDVgLGbOI0TSY8hwQfcCPoY6ADX-MtA2vilf2s.jpg?w=576&s=f65699ffabac82dbdaf8e6fe8482e133.

Image Source: https://cdn.arstechnica.net/wp-content/uploads/2017/03/getty-house-of-representatives-800×534.jpg.

Snapchat IPO: A Cautionary Tale


By: Courtney Gilmore

 

Snapchat’s much-anticipated public offering was finally filed on February 2.[1] Snapchat, now formally known as Snap Inc., has officially requested a spot on the New York Stock Exchange under the ticker symbol SNAP.[2] The company took advantage of the JOBS Act (Jumpstart Our Business Startups), which allows companies with less than $1 billion of annual revenue to file for IPOs in secret.[3] The friendly ghost will likely go public in March of this year.[4]

While this is an exciting new endeavor for the everyday social media guru, it may be better suited for investors with a high risk tolerance. Proceed with caution.

Facebook and Twitter aside, Snapchat bills itself as a camera company, “giving people the power to express themselves and live in the moment.”[5] On the other hand, “Facebook says its mission is connecting everyone, while Google’s is to organize the world’s information.”[6]

Sure, this sounds like an attractive label for any millennial or investor out there, but beyond this, there is not much out there to lay a stable foundation for Snapchat’s future. For instance, Snapchat has an extremely short financial history.[7] Moreover, the company is labeled as secretive by outsiders and employees alike.[8] Evan Spiegel, Snapchat’s Chief Executive Officer, said in a 2015 note to employees, “‘[k]eeping secrets gives you space to change your mind, until you’re really sure that you’re right.’”[9] So, if Snapchat is unable to be transparent with even its own employees, how will prospective investors be able to keep track of their investments?

Snapchat’s founders are seemingly resistant to giving up any control whatsoever. While this is a natural instinct for any sensible businessman or woman, Snapchat’s founders, Evan Spiegel and Bobby Murphy, maintain that the shares issued to the public will carry no votes.[10] There are three classes of stock in Snapchat: Class A, Class B, and Class C. Only Spiegel and Murphy will control Class C shares, each of which receives 10 votes.[11] Class B shares receive one vote per share and are issued to venture capitalists and those investors that poured capital into the company before its initial public offering.[12] Finally, the Class A shares will be issued to the public.[13] In addition to no-vote shares, Snapchat reportedly has no intention of paying out cash dividends to its investors.[14] Without much control, investors must turn to other factors to weigh their risks and rewards.

In 2016, Snapchat recorded revenue of $404.5 million, but losses amounted to $514.6 million.[15] Although its revenue increased by 600% between 2015 and 2016, Snapchat’s current losses exceed Twitter’s at the time of its own IPO, while Facebook had revenue of $3.7 billion at the same point in its life cycle.[16] This raises the question of whether Snapchat will suffer the same fate that Twitter did when it went public. “‘To me, Snap is Twitter 2.0 – a company with a good growth rate that is losing a ton of cash, coupled w/ a massive valuation.’”[17] Snapchat is seeking a $25 billion valuation, which is sixty-two times its revenue.[18] On the other hand, GoPro, comparable to Snapchat’s “camera company” self-description, trades at one times its sales.[19] Twitter trades at five times its revenue, and Facebook trades at fourteen times its revenue.[20]
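For the curious, that multiple is easy to verify with back-of-the-envelope arithmetic; the figures come from the article, and the rounding is mine:

```python
# Back-of-the-envelope check of the valuation multiple quoted above.
# Figures come from the article; the rounding is mine.
snap_valuation = 25e9        # sought valuation: $25 billion
snap_revenue_2016 = 404.5e6  # 2016 revenue: $404.5 million

print(round(snap_valuation / snap_revenue_2016))  # -> 62 (times revenue)
# Compare: roughly 1x for GoPro, 5x for Twitter, 14x for Facebook.
```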

Another important factor for investors to consider is the slowdown in growth that Snapchat has experienced more recently.[21] After Facebook introduced products similar to Snapchat’s Stories, Snapchat views have allegedly declined between fifteen and forty percent since August.[22] Snapchat’s recent decline in usage can also be attributed to Instagram’s version of Stories.[23] Of Snapchat’s 158 million users, the majority consists of subscribers ranging between the ages of 18 and 34 years old.[24]

Hosting costs are another concern. Snapchat just signed a deal with Google to host Snapchat’s cloud space for $400 million per year.[25] On the surface this doesn’t seem like anything to get hung up on, except that Snapchat’s revenue last year was just about $400 million.[26] Snapchat’s hosting costs are so large because of the many video features Snapchat offers to consumers.[27] Expenses are also growing on the personnel side:[28] Snapchat tripled its headcount to 1,859 employees in 2016.[29]

On the upside, Snapchat is in the market of offering new, innovative products (like any logical tech company would). For example, Snapchat added its geofilter options in July 2014.[30] The company went on to release the Spectacles in September 2016.[31] Snapchat’s “foray into hardware and its new identity as a ‘camera company’ could cause investors to value it differently than a pure-play company, where profit margins are typically higher.”[32] Snapchat is also expected to bring in close to $1 billion in revenue by the end of this year.[33]

Moral of the story: while the risks are high, the rewards will likely be higher. Snapchat’s video features certainly distinguish the company from its competitors, as do Snapchat’s endeavors with the Spectacles and more products to hit the market. While the company may be secretive, experiencing minimal user decline, and racking up steep payment obligations, there is still a plethora of innovation to look forward to, and Snapchat remains at the cutting edge of it all. Perhaps Snapchat will not offer stock suitable for the novice investor’s portfolio, but it certainly has the potential to yield high rewards for those who are even able to buy in initially. This young company has plenty of room to grow and plenty of buzz to live up to.

 

 

 

[1] See Barbara Ortutay, Snap, Maker of the Teen Social App Snapchat, Files for IPO, The Washington Post (Feb. 2, 2017), https://www.washingtonpost.com/national/snap-maker-of-the-teen-social-app-snapchat-files-for-ipo/2017/02/02/794c3b92-e9a4-11e6-903d-9b11ed7d8d2a_story.html?utm_term=.54565215b4ae.

[2] See id.

[3] See Seth Fiegerman & Matt Egan, Snapchat Files for $3 Billion IPO, CNN (Feb. 2, 2017), http://money.cnn.com/2017/02/02/technology/snapchat-ipo-filing/.

[4] See id.

[5] Sarah Frier & Alex Barinka, Can Snapchat’s Culture of Secrecy Survive an IPO?, Bloomberg (Jan. 17, 2017), https://www.bloomberg.com/news/features/2017-01-17/can-snapchat-s-culture-of-secrecy-survive-an-ipo.

[6] Id.

[7] See id.

[8] See id.

[9] See id.

[10] See Tom Zanki, 4 Takeaways From Snap’s IPO Filing, Law360 (Feb. 3, 2017), https://www.law360.com/technology/articles/888278/4-takeaways-from-snap-s-ipo-filing?nl_pk=a6f0df19-c127-4444-8e8b-a4c34dfadf0b&utm_source=newsletter&utm_medium=email&utm_campaign=technology.

[11] See Fiegerman & Egan, supra note 3.

[12] See id.

[13] See id.

[14] See Jen Wieczner, Here’s How Insanely Expensive Snap’s IPO Will Be, Fortune (Feb. 2, 2017), http://fortune.com/2017/02/02/snapchat-ipo-snap-stock/.

[15] See Victoria Woollaston, How Snapchat Turned Dick Pics into a Potentially Multi-Billion Dollar IPO, Wired (Feb. 3, 2017), http://www.wired.co.uk/article/snapchat-ipo-cameras.

[16] See Wieczner, supra note 14; Maya Kosoff, Will the Snapchat I.P.O. Be a Flop?, Vanity Fair (Feb. 2, 2017), http://www.vanityfair.com/news/2017/02/will-the-snapchat-ipo-be-a-flop.

[17] See Fiegerman & Egan, supra note 3.

[18] See Wieczner, supra note 14.

[19] See id.

[20] See Eric Jackson, 4 Reasons to Be Wary of the Snapchat IPO, Forbes (Feb. 7, 2017), http://www.forbes.com/sites/ericjackson/2017/02/07/4-reasons-to-be-wary-of-the-snapchat-ipo/#2abf5745339b.

[21] See Kosoff, supra note 16.

[22] See id.

[23] See Vikram Nagarkar, Snapchat IPO: The Pros and Cons of Buying Into Snap Stock Right Now, amigobulls (Feb. 6, 2017), http://amigobulls.com/articles/snapchat-ipo-the-pros-and-cons-of-buying-into-snap-stock-right-now.

[24] See Woollaston, supra note 15.

[25] See Jackson, supra note 20.

[26] See id.

[27] See id.

[28] See Wieczner, supra note 14.

[29] See id.

[30] See Woollaston, supra note 15.

[31] See id.

[32] Portia Crowe, Snap Files for its IPO, Revealing Surging Sales Growth and Huge Losses, Business Insider (Feb. 2, 2017), http://www.businessinsider.com/snap-to-list-on-nyse-report-2017-1.

[33] See Nagarkar, supra note 23.

Image Source: https://i2.wp.com/thenypost.files.wordpress.com/2017/01/snapchat-ipo_exchanges.jpg?quality=90&strip=all&ssl=1.

Put Your Money Where Your Mouth Is


By: Lindsey McLeod

 

“Put your money where your mouth is” took on a modern meaning this past week as individuals concerned about President Trump’s travel ban donated to the American Civil Liberties Union (ACLU) as a means of voicing their objection.[1] The ACLU reportedly received $24 million in online donations in the week following the immigration ban, over six times its yearly donation average.[2] Most of these donations occurred via online portals, flooding the website with donations from 356,306 people. This isn’t the first time that President Trump has sparked an influx of online donations to the ACLU, as the organization received nearly fifteen million dollars in the weeks following Trump’s election.

This online-centric donation model is consistent with millennial behaviors, as millennials tend to donate online, the realm that dominates their financial habits.[3] Such innovative and effective online fundraising campaigns are a trademark of the millennial generation, and the ACLU is getting on board. The start-up business model is commonly associated with trendy work environments, invoking images of Ping-Pong tables, office kegs, and tech-obsessed millennials. This start-up model, however, has begun to branch beyond the confines of the tech and app environment and into the realm of civil liberties.

Y Combinator provides a new model for funding early stage startups: it invests “a small amount of money (120k) in a large number of start ups (105).”[4] These startups then move to Silicon Valley for three months, where they work with professionals who are familiar with investment pitches and can help shape a business model that effectively reaches target consumers. Because Y Combinator is typically associated with graduates such as Airbnb, Dropbox, and similar start-up consumer products, the ACLU is seemingly out of place in the market, yet Y Combinator’s president, Sam Altman, is interested in the potential success that a collaboration between the two groups may have. Although the ACLU is far from a “start up,” having been established in the early 1900s, it has a history of working with modern, tech-savvy businesses, such as Twitter, to invoke rapid fundraising participation, and thus a more thorough examination of how to improve its business model may rapidly expand the ACLU’s national and international presence.[5]

The ACLU’s decision to partner with Y Combinator is significant in the impact it may have on the expansion of the ACLU and the services the ACLU is able to offer. Two significant characteristics of the millennial generation, as noted in Leigh Buchanan’s book Meet the Millennials, are that they are “masters of digital communication…[and] are primed to do well by doing good. Almost 70 percent say that giving back and being civically engaged are their highest priorities.”[6] Thus, the decision by the ACLU and Y Combinator represents a decision to engage a civic-minded generation on their turf, so to speak. This move is particularly pertinent at a time in American politics in which millennials are seemingly rejecting the current president.[7] The stronger presence that the ACLU may gain upon completion of the three-month Silicon Valley program may prove to ignite a generation of civically engaged individuals, and perhaps future ACLU lawyers.

 

 

 

[1] Katie Mettler, The ACLU Says It Got $24 Million in Online Donations This Weekend, Six Times Its Yearly Average, The Washington Post (Jan. 30, 2017), https://www.washingtonpost.com/news/morning-mix/wp/2017/01/30/the-aclu-says-it-got-24-million-in-donations-this-weekend-six-times-its-yearly-average/?utm_term=.77e7a5afb276.

[2] See id.

[3] See Randy Hawthorne, Understanding What Motivates Millennials to Give to Your NPO, NonProfitHub.org, http://nonprofithub.org/fundraising/understanding-motivates-millennials-give-npo/ (last visited Feb. 3, 2017).

[4] Y Combinator, https://www.ycombinator.com/.

[5] See Sarah Ashley O’Brien, ACLU Is Participating in Elite Silicon Valley Accelerator, CNN Tech (Jan. 31, 2017), http://money.cnn.com/2017/01/31/technology/aclu-ycombinator/index.html.

[6] Jay Gilbert, The Millennials: A New Generation of Employees, a New Set of Engagement Policies, Ivey Business Journal (Sept. 2011), http://iveybusinessjournal.com/publication/the-millennials-a-new-generation-of-employees-a-new-set-of-engagement-policies/.

[7] See Cody Boteler, Students plan demonstrations and walkouts to protest Trump’s inauguration, USA Today (Jan. 19, 2017), http://college.usatoday.com/2017/01/19/students-plan-demonstrations-and-walkouts-to-protest-trumps-inauguration/.

Image Source: http://thehill.com/sites/default/files/styles/thumb_small_article/public/blogs/protest_1.jpg?itok=ZUbOBxAB.

JOLT Announcement

We’re excited to announce the migration of the JOLT website to a new, more secure server. All of our existing content will be migrated to our new website, featuring Issue II, to be published this week; our ongoing blog posts on the most topical subjects at the intersection of law and technology; and our past Issues and Symposium materials. You will also soon be able to view the most recent recordings of our Symposium sessions. As our most recent Symposium highlighted, ransomware and malware attacks can occur through a variety of platforms and at varied levels of sophistication. To enhance our site’s own cybersecurity, we are in the process of migrating the entire journal site to a new server with enhanced features to increase stability and preventative safety measures. For more details regarding the breadth of cyber attacks, see our upcoming Survey Issue, to be published in the coming weeks, with several articles addressing cyber breaches. We hope you regularly visit the JOLT site to discover scholarship on law and technology.

Calling an End to Culling: Predictive Coding and the New Federal Rules of Civil Procedure

Serhan Publication Version PDF

Cite as: Stephanie Serhan, Calling an End to Culling: Predictive Coding and the New Federal Rules of Civil Procedure, 23 Rich. J.L. & Tech. 5 (2016), http://jolt.richmond.edu/index.php/volume23_issue2_serhan/.

Stephanie Serhan*

Table of Contents

I.     Introduction

II.     Why Timing Matters in Predictive Coding

A.     The Technical Difference Between the Two Methods

B.     The Practical Implications in Applying the Two Methods

III.     Court Decisions and the New Federal Rules

A.     Court Decisions under the Old Rules

1.     Ex-Ante Permissibility of Predictive Coding

2.     Ex-Post Permissibility of Keyword Culling

B.     Reinforcement of Court Decisions under the New Rules

1.     Recent Amendments to the Rules

2.     Subsequent Reactions to the New Rules

IV.     Encouraging Predictive Coding Ex Ante

A.     Why Predictive Coding Ex Ante is Preferable

B.     How Parties and Courts Should Proceed

V.     Conclusion

I. Introduction 

[1]       In corporate litigation and dispute resolution, discovery is often a significant undertaking for both the producing and requesting parties. Each party’s approach during discovery is usually guided by considerations regarding efficiency and accuracy during the process. One area of discovery in which parties prioritize these considerations is the implementation of predictive coding. Several studies have proven that the method of predictive coding is substantially more efficient and accurate than traditional methods of conducting discovery.[1]

[2]       The method of predictive coding begins with a senior attorney who is intimately familiar with the case identifying relevant and irrelevant documents to create a “seed set.”[2] This seed set is then fed into the predictive coding software, which trains the software to determine which documents are relevant, while suggesting other documents that may also be relevant.[3] Additionally, the attorney might review a random sample of documents;[4] or the attorney could feed in words, phrases, and concepts that are appropriate to the case, and the software can subsequently find similar phrases, with linguistic or sociological relevance.[5] The aim of the method is to identify the most relevant documents to produce to the requesting party.
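For the technically curious, the sketch below shows what this training-and-ranking step can look like in miniature, with the open-source scikit-learn library standing in for commercial review software. The toy documents, labels, and corpus are hypothetical, and real e-discovery platforms are far more sophisticated; this is only meant to make the seed-set concept concrete.

```python
# A minimal, hypothetical sketch of the predictive-coding workflow
# described above; scikit-learn stands in for commercial review software.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a senior attorney has already coded.
seed_docs = ["email discussing the disputed contract terms",
             "cafeteria lunch menu for next week"]
seed_labels = [1, 0]  # 1 = relevant, 0 = irrelevant

# Train the model on the attorney's coding decisions.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the remaining universe of documents by predicted relevance.
corpus = ["draft amendment to the contract",
          "holiday party invitation"]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for doc, score in sorted(zip(corpus, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```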

[3]       Within predictive coding, tension between efficiency and accuracy frequently arises in deciding the appropriate time at which to apply predictive coding. This timing concern has sparked numerous debates, as well as a split between court opinions. The issue parties and courts address is whether predictive coding should be applied at the outset of discovery to an entire universe of documents, or if it should be applied after keyword culling.

[4]       This issue now arises in virtually every important case involving large volumes of documents in discovery. Addressing it is important to the parties involved because it has profound implications for efficiency and accuracy. Courts have also been asked to address this question, but have offered little guidance regarding the time at which to implement predictive coding in a case. Rule 1 of the Federal Rules of Civil Procedure addresses this exact balance as a trade-off between the just resolution and the efficiency of a case, a trade-off that has often arisen in issues concerning discovery.[6] The recent amendments to the Federal Rules of Civil Procedure further emphasize this trade-off.[7]

[5]       This paper examines the impact of the most recent amendments to the Federal Rules of Civil Procedure on the current split between courts about whether predictive coding should be applied at the outset or to a set of keyword-culled documents. Since the new Rules explicitly implement the concept of proportionality and a new set of standards in Rule 26, I argue that applying predictive coding at the outset is more compliant with the Federal Rules of Civil Procedure. Part II will explain the difference in timing between applying predictive coding after keyword culling or prior to it, and discuss the implications of accuracy and efficiency. Part III will first discuss the split between courts regarding the two methods prior to the recent amendments to the Rules, and subsequently, it will discuss reactions by courts and scholars regarding the applicability after the amendments to the Rules. Part IV will argue that the method of applying predictive coding at the outset is more compliant with the new amendments to the Rules since it is more accurate, and it will suggest that parties and courts should begin to implement these changes. Ultimately, this proposal will improve accuracy, without jeopardizing efficiency, with the goal of achieving the just resolution of a case.

II. Why Timing Matters in Predictive Coding

[6]       During the process of discovery, parties often face a choice regarding which method to use on large volumes of documents. Predictive coding has recently become a predominant method through which attorneys and parties alike may narrow down the universe of documents in an efficient and accurate manner.[8] However, parties differ over the appropriate time at which predictive coding should be used in the discovery process, which has created two methods that differ only in timing. The two methods are: (i) the use of predictive coding at the outset, or (ii) the use of predictive coding after keyword culling documents. This Part explains the technical difference between these two methods, as well as the practical implications in applying each of these methods.

            A. The Technical Difference Between the Two Methods

[7]       Regarding the timing of when to apply predictive coding, the two methods are: (i) the use of predictive coding at the outset, or (ii) the use of predictive coding after keyword culling. The first method involves applying predictive coding at the beginning of the discovery phase; the second method involves keyword culling documents first, and subsequently applying predictive coding to the keyword-culled documents. Each of these methods will be explained separately.

[8]       The first method provides the option of applying predictive coding to the entire universe of documents at the beginning of the discovery phase. All documents are gathered, and the predictive coding technology is applied to all of the documents at the outset as a whole.[9] Applying predictive coding to all documents means there is no previous method, such as keyword culling, to narrow down the universe of documents. The use of predictive coding will narrow down the universe of documents based on which documents are relevant, or predicted to be relevant, through a programmed algorithm.[10] Alternatively, the second method allows a party to apply predictive coding to a set of documents that has already been reduced in size by keyword search techniques. These techniques are frequently referred to as “keyword culling.” In order to perform keyword culling on documents, a party would begin with the entire universe of documents that pertain to a case, and narrow down the universe of documents by searching for keywords. Through this method, documents are identified as relevant or irrelevant based on those search terms. The relevant documents remain, and these are a much smaller set of documents. These relevant documents are referred to as the keyword-culled documents, and predictive coding is subsequently applied only to these keyword-culled documents.[11]
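To make the contrast concrete, here is a minimal, hypothetical illustration of method (ii)'s culling step; the documents and search terms are invented for illustration, and no real review platform is this simple:

```python
# A hypothetical illustration of keyword culling (method (ii)).
# Documents and search terms are invented for illustration.
documents = [
    "re: contract amendment and pricing",
    "plz see attached agreemnt",      # relevant, but misspelled
    "fantasy football standings",
]
search_terms = {"contract", "agreement", "pricing"}

# Keyword culling keeps only documents containing a search term;
# predictive coding would then run on just these survivors.
culled = [d for d in documents
          if any(term in d.lower() for term in search_terms)]
print(culled)  # the misspelled but relevant document is culled away

# Method (i) skips this step and hands all of `documents`
# to the predictive-coding model at the outset.
```

Note how the misspelled but relevant document never reaches the predictive-coding stage under method (ii), which previews the accuracy concern discussed next.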

            B. The Practical Implications in Applying the Two Methods

[9]       These two methods have significant implications regarding a party’s monetary expenditures and time spent, which relate to important concerns of accuracy and efficiency in choosing between them. Regarding accuracy, the use of predictive coding at the outset provides a much more accurate return of relevant documents than keyword culling.[12] Applying predictive coding on the entire set of documents is the most accurate method in identifying relevant documents because it is applied to all documents, rather than the ones selected by keyword culling.[13] Keyword culling is not as accurate because the party may lose many relevant documents if the documents do not contain the specified search terms, have typographical errors, or use alternative phraseologies.[14] The relevant documents removed by keyword culling would likely have been identified using predictive coding at the outset instead.[15] Therefore, keyword culling is not as accurate as predictive coding when used on the entire set of documents at the outset.

[10]     Regarding efficiency, both methods provide efficient returns, depending on how efficiency is defined. The use of predictive coding at the outset can be beneficial in narrowing down documents based on even “‘linguistic’ or ‘sociological’” relevance.[16] Another efficient benefit is that the technology is programmed at the outset and can identify the most relevant documents.[17] Keyword culling, on the other hand, narrows down the universe of documents by conducting a keyword search that does not identify other potentially-relevant documents, but simply searches through the documents using the keywords that are chosen.[18] The keyword search can be quickly applied to a set of documents to determine which documents to keep and which to remove.[19] Keyword culling can be useful since it narrows down the universe of documents to a much smaller number, as it does not predict other potentially-relevant documents.[20] It may be quicker for the technology to simply apply keyword searches prior to predictive coding to limit the number of documents that need to be coded, but once again, it comes at the cost of accuracy in revealing responsive documents.[21]

[11]     Furthermore, prior to keyword culling, the parties often spend significant amounts of time discussing which keywords to employ in the search.[22] This back and forth between the parties frequently results in disagreement.[23] The danger is that the inputted terms for searching might be “over- or underinclusive, either returning large amounts of irrelevant documents or failing to capture relevant ones.”[24] Consequently, “…the requesting party may ask for additional search terms or request that the producing party takes steps to verify the completeness of production.”[25]

[12]     Since predictive coding would be employed under each of the two methods, the costs associated with each are not significantly different. The majority of costs associated with predictive coding come from: the time of a senior attorney who is intimately familiar with the case, the cost of employing a company that has the available technology and software to run predictive coding, and the time associated with training the software to identify relevant documents.[26] These three categories of costs will be incurred regardless of which of the two methods is employed.

[13]     The point at which the monetary costs and time spent may vary between the two methods is a senior attorney’s identification of potentially relevant documents or training of the software on a larger universe of documents. In predictive coding, there may be a larger universe of potentially relevant documents, simply because the software is more accurate in predicting which documents may be potentially relevant.[27] Keyword culling, on the other hand, eliminates many documents, even if they may be potentially relevant.[28] The reason is that the method of searching by keywords does not have that “predictive” feature; it merely eliminates any documents that do not contain the inputted words and phrases.[29] Accordingly, the cost differential between these two methods is not in the cost of the technology of predictive coding, but in the time it takes to identify the potentially relevant documents, as well as the resulting production of those documents.

[14]     In sum, both methods employ predictive coding but at different stages in the discovery process. Predictive coding at the outset is substantially more accurate than applying predictive coding after keyword culling.[30] The main costs associated with predictive coding will be the same, but since predictive coding at the outset is applied to more documents than keyword-culled documents, there may be additional time spent in training the software.[31] Therefore, the actual cost of predictive coding will likely be substantially equal under both methods, since the majority of the costs is incurred either way.

[15]     The remainder of this paper will discuss how this trade-off between accuracy and efficiency has been approached by several courts, litigating parties, and the Federal Rules of Civil Procedure in choosing the appropriate time to apply predictive coding.

III. Court Decisions and the New Federal Rules

[16]     This Part will first address how courts have dealt with the issue, which developed a split in court decisions between applying predictive coding at the outset versus applying it on keyword-culled documents. Second, this Part will describe the recent amendments to the Federal Rules of Civil Procedure, as well as the subsequent reactions of courts and scholars.

            A. Court Decisions under the Old Rules

[17]     Prior to the recent amendments to the Federal Rules of Civil Procedure, parties and courts were aware of the concept of proportionality, but outcomes varied from case to case. In the past few years, the split in authority regarding the timing of predictive coding has spurred important realizations of accuracy and efficiency. The discussion below will reveal that some courts encouraged predictive coding at the outset, while others allowed defendants to employ keyword culling first. These perspectives often depend on what the parties had mutually agreed on, what the parties had already accomplished, and the specific issue in the case. The arguments for each method are usually party-driven, as requesting parties argue for a broader scope of discovery to find the maximum amount of relevant documents, whereas producing parties tend to argue for a narrower scope of discovery to produce fewer documents.[32]

1. Ex-Ante Permissibility of Predictive Coding

[18]     Courts have routinely found that the application of predictive coding at the outset is appropriate. For example, in the 2012 landmark decision of Da Silva Moore v. Publicis Groupe SA, the Southern District of New York found that predictive coding at the outset was appropriate.[33] The discovery issue in this case was whether predictive coding should be used at the outset, compared to other methods of discovery, including keyword culling.[34] The defendants had gathered approximately three million emails, a sizable number of documents.[35]

[19]     The defendants sought to use predictive coding, and although the plaintiffs voiced their concerns, the plaintiffs were not opposed to predictive coding.[36] Magistrate Judge Peck allowed the use of predictive coding and emphasized the concept of proportionality from the Federal Rules of Civil Procedure.[37] Subsequently, the plaintiffs raised objections, which fell under the purview of the district judge.[38] The district judge found that the magistrate judge’s decision was not clearly erroneous, denied the plaintiffs’ objections, and accordingly adopted the magistrate judge’s opinion.[39] The district judge noted that “the use of the predictive coding software as specified in the ESI protocol is more appropriate than keyword searching.”[40] In this case, the defendants used, and the court allowed, predictive coding at the outset instead of keyword culling.

[20]     A circuit court in Virginia issued a similar ruling in Global Aerospace, Inc. v. Landow Aviation, L.P. in the same year.[41] The court addressed whether the defendants would be permitted to use predictive coding at the outset instead of keyword culling. The defendants urged the application of predictive coding at the outset instead of keyword culling.[42] Although the plaintiffs objected to the use of predictive coding at the outset,[43] the judge allowed it, stating that the defendants “shall be allowed to proceed with the use of predictive coding for purposes of the processing and production of electronically stored information.”[44]

[21]     Similar to the rulings in Da Silva Moore and Global Aerospace, Inc., the court in In Re Actos (Pioglitazone) Products Liability Litigation also allowed the parties to employ predictive coding at the outset.[45] The parties worked together and collaborated in choosing which method to employ. The high level of transparency and cooperation between the parties enabled the successful implementation of predictive coding at the outset on the entire universe of documents.[46] The parties agreed to review document samples collaboratively, meet and confer, and reveal their respective methodologies to each other.[47] The court allowed the parties to proceed in this manner because it was a mutually agreed upon method and proportional under the Rules.[48]

[22]     A slightly different case reveals a court’s hesitation in applying simplistic keyword searches. In McNabb v. City of Overland Park, the defendant produced about 20,000 e-mails after it unilaterally redacted the information that it thought was “confidential or irrelevant.”[49] The plaintiff also submitted a list of about thirty-five search terms for the defendant to use, but the defendant argued that the requests were “overly broad and would encompass a significant number of documents.”[50] The court agreed with the defendant and denied the plaintiff’s motion, on grounds of proportionality. In other words, the court denied the implementation of these broad, general keyword searches.[51] The motion papers in this case indicate “that the parties considered using predictive coding[,]” but the defendant decided not to.[52] The outcome may have been different if the parties agreed to employ predictive coding at the outset because the plaintiff may have received more of the relevant data it was searching for, and the defendant may have been able to protect other documents as well.[53]

[23]    Overall, when presented with the issue at the outset, courts have routinely held that predictive coding is appropriate. The courts in Da Silva Moore v. Publicis Groupe SA, Global Aerospace, Inc. v. Landow Aviation, L.P., and In Re Actos all allowed the parties to proceed with the application of predictive coding at the outset.[54] The judge’s reluctance and refusal to allow simplistic keyword searches in McNabb also points in the same direction, suggestive of the possibility that predictive coding may have been an appropriate approach from the outset.[55] Accordingly, parties and courts have been supportive of the use of predictive coding at the outset.

2. Ex-Post Permissibility of Keyword Culling

[24]     Courts have permitted the use of predictive coding on keyword-culled documents only after the fact, meaning after the documents had already been culled. In one example, the Northern District of Illinois allowed the defendants to first employ keyword culling in Kleen Products, LLC v. Packaging Corporation of America in 2012.[56] The defendants had already produced “more than three million pages of documents” through keyword culling,[57] but the plaintiffs asked the judge to order discovery redone using predictive coding from the outset.[58] After several months of disputing these discovery issues, the parties reached an agreement.[59] The plaintiffs withdrew their demand to restart and apply predictive coding at the outset on the entire universe of documents in the case.[60] In other words, the defendants kept the documents that were already culled down using keyword searches and were not required to restart the discovery process with predictive coding.[61] The magistrate judge approved their agreement to employ keyword culling at the outset and restated Sedona Principle 6, that “responding parties are best situated to evaluate” the appropriate method, with deference to the producing party.[62]

[25]     In the same year, the court in In Re Biomet M2a Magnum Hip Implant Products Liability Litigation also permitted keyword culling prior to the application of predictive coding.[63] The party had already employed keyword culling and reduced the universe of documents from “19.5 million to 3.9 million.”[64] The court stated that if the party was ordered to restart and apply predictive coding on the entire universe of documents, it would not have been proportional under the previous version of Rule 26.[65] The court said this approach was reasonable under the circumstances.[66] The judge stated that the issue is not whether predictive coding is better than keyword culling, but whether the party satisfied its discovery obligations.[67] Furthermore, the judge stated that regardless of the other proportionality factors, the additional cost of going back to do the predictive coding on all documents would have outweighed the benefit of potentially finding more relevant documents.[68]

[26]     In a related line of cases, two courts have allowed keyword culling after the parties had agreed to it, but courts and parties have disagreed as to the proper approach after keyword culling. For example, in Progressive Casualty Insurance Company v. Delaney, the parties agreed to use keyword culling at the outset.[69] The producing party employed keyword culling, which reduced the number of documents from 1.8 million to 565,000.[70] For the remaining 565,000 documents, after employing keyword culling, the parties disagreed as to the appropriate method that should be used.[71] The producing party found that subsequently performing manual review would take a significant amount of time and money.[72] To circumvent these costs, the producing party unilaterally chose to employ predictive coding instead of manual review on the remaining 565,000 documents.[73] After the producing party made this decision, it informed the requesting party, and the requesting party filed a motion to compel.[74] The court did not allow this change from manual review to predictive coding because it was not originally agreed upon by the parties, and it would result in more disputes and delays.[75] This case demonstrates that other disputes may arise after keyword culling is used because it calls into question the accuracy of subsequent methods. Here, predictive coding was contemplated after keyword culling but disagreed upon, since the parties had already agreed upon manual review, time-consuming as that approach is.[76] When predictive coding is used at the outset instead, these disputes are eliminated.

[27]     Another example in which keyword culling was permitted at the outset is Bridgestone Americas, Inc. v. International Business Machines Corp.[77] The plaintiff had already employed keyword culling and wanted to proceed to use predictive coding. The defendant argued it would be unfair for the plaintiff to use predictive coding after documents had already been keyword culled, relying on Progressive Casualty Insurance Company.[78] However, because of concerns regarding proportionality and efficiency, the judge allowed the use of predictive coding on the previously keyword-culled documents.[79] This case also stands for the proposition that the parties should be the ones to try to resolve this issue.[80] The court believed that the use of keyword culling prior to predictive coding can be appropriate under Rule 26, but it depends on many factors, including “the type of data, the value of the case juxtaposed to the cost of using advanced analytics, and other factors that are matter specific.”[81]

[28]     As demonstrated by Bridgestone Americas, Inc. and Progressive Casualty Insurance Company, when parties agree on keyword culling at the outset, parties and courts are left confused as to the appropriate method to use going forward to review the remaining documents. The reason is that the accuracy of the remaining relevant documents is already called into question since keyword culling is not as accurate as predictive coding.[82] Furthermore, concerns of time, cost, and efficiency going forward in deciding between manual review and predictive coding become prominent issues for the parties.

[29]     All four of these cases share a common thread in their holdings on the discovery issue.[83] The courts in Kleen Products, LLC, In Re Biomet, Progressive Casualty Insurance Company, and Bridgestone Americas, Inc. permitted the parties to employ keyword culling at the outset only after they had already performed keyword culling, or after it was already agreed upon by the parties.[84] Although the parties disagreed as to the proper method to apply after keyword culling was employed,[85] the courts found that ordering the parties to restart discovery and employ predictive coding would have been disproportional under the Rules.[86]

            B. Reinforcement of Court Decisions under the New Rules

[30]     Recently, the drafters of the Federal Rules of Civil Procedure and the Supreme Court rebalanced the priorities of discovery and set a legislative-like answer in the amendments to the Rules. This Part discusses those amendments, as well as the subsequent reactions of courts and scholars.

            1. Recent Amendments to the Rules

[31]     The Federal Rules of Civil Procedure were recently amended and deemed effective as of December 1, 2015. The new revisions can be found in the 2016 edition of the Federal Rules of Civil Procedure.[87] Many rules were amended, but the revisions to Rules 1 and 26 directly impact this discussion. Through these revisions, the rule drafters and the Supreme Court chose to highlight proportionality, as well as the responsibility of parties and courts in making these decisions.

[32]     Rule 1 was amended to emphasize that parties are just as responsible as courts in applying the Federal Rules of Civil Procedure to ensure the efficiency of every action in a case.[88] The previous version of Rule 1 stated that the rules “should be construed and administered to secure the just, speedy, and inexpensive determination of every action and proceeding.”[89] The new version of Rule 1 states that the rules “should be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.”[90]

[33]     Rule 26 was amended to emphasize factors of proportionality in defining the scope of discovery.[91] The previous version of Rule 26(b)(1) stated:

Scope in General. Unless otherwise limited by court order, the scope of discovery is as follows: Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense—including the existence, description, nature, custody, condition, and location of any documents or other tangible things and the identity and location of persons who know of any discoverable matter. For good cause, the court may order discovery of any matter relevant to the subject matter involved in the action. Relevant information need not be admissible at the trial if the discovery appears reasonably calculated to lead to the discovery of admissible evidence. All discovery is subject to the limitations imposed by Rule 26(b)(2)(C).[92]

[34]     The amended version of Rule 26(b)(1) now states: 

Scope in General. Unless otherwise limited by court order, the scope of discovery is as follows: Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit. Information within this scope of discovery need not be admissible in evidence to be discoverable.[93]

[35]     The concept of proportionality is not new; it appeared in Rule 26(b)(2)(C) of the previous version. It now appears, however, at the beginning of Rule 26(b)(1), which makes it explicitly applicable to the entire scope of discovery.[94] Specifically, the proportionality factors moved from Rule 26(b)(2)(C)(iii) to the beginning of Rule 26(b)(1).[95] The Committee’s intention in moving these factors was to “make them an explicit component of the scope of discovery, requiring parties and courts alike to consider them when pursuing discovery and resolving discovery disputes.”[96]

[36]     It is important to note that the Committee also revised the proportionality factors themselves. It reordered them so that the “importance of the issues at stake” now precedes the “amount in controversy,” placing the emphasis on proportionality relative to the issues, not only the dollar amount.[97] It also added one new factor: “the parties’ relative access to relevant information.”[98]

[37]     The other change to Rule 26 is the removal of the language “reasonably calculated to lead to the discovery of admissible evidence.”[99] Under the previous standard, material was discoverable so long as it might lead to admissible evidence; no direct nexus to the case was required. With that standard removed, attorneys may push to narrow the scope of discovery,[100] focusing it on the evidence most relevant to the case.

[38]     In sum, Rule 1 now explicitly makes it the responsibility of parties and courts to ensure that a case proceeds in a just and expeditious manner, and Rule 26 now explicitly prioritizes proportionality in defining the scope of discovery. Both rules bear on when it is the right time to apply predictive coding because, as discussed above, predictive coding and keyword culling have important implications for the accuracy and efficiency of the discovery process.

            2. Subsequent Reactions to the New Rules

[39]     Courts have begun to apply the recent amendments to the Federal Rules of Civil Procedure, and in the first few months no drastic change has emerged. Many courts are finding that the priority of proportionality was already present under the prior version of the Rules, but they can now point to it more easily because it is referred to first in Rule 26’s definition of the scope of discovery.

[40]     For instance, just six days after the amendments went into effect, the court in Carr v. State Farm Mutual Automobile Insurance Co. found that the burdens on the parties had not fundamentally changed.[101] In that case, the defendant’s motion to compel was granted because the plaintiff’s burden in resisting the motion had not changed under the new rules, as evidenced by the Committee’s notes on the amendments.[102] Another court, in Gilead Sciences, Inc. v. Merck & Co., Inc., concluded that the requested discovery was disproportional under the new Rules, but stated that the result would have been the same even under the prior version.[103] In that patent infringement case, the defendant’s motion to compel additional discovery was denied because the plaintiff would have needed to produce an excessive amount of information regarding the contents of tubes of compounds that were not at issue in the case.[104]

[41]     The court stated that the amendments now first require an inquiry into whether the additional discovery would be proportional, rather than whether it might lead to something admissible.[105]

[42]     Similarly, a court in the Southern District of Florida allowed the defendants to redact irrelevant information from documents that were otherwise responsive.[106] The court based its opinion on the concept of proportionality in Rule 26.[107]

[43]     The Year-End Report of the Federal Judiciary anticipates that the amendments will have a profound impact on the efficiency expected of parties and courts.[108] Magistrate Judge John M. Facciola believes the Rules were significantly modified in that the scope of discovery no longer turns on whether an item is “reasonably calculated to lead to the discovery of admissible evidence,”[109] but rather on the issues at stake and proportionality concerns.[110] Because of this, lawyers may argue to narrow the scope of discovery.[111]

[44]     The courts that have begun to apply the new amendments are finding that the outcomes would have been similar even under the old Rules. The difference is that courts can now more easily point to the primary concerns of proportionality, justness, and expediency through the amended text.

IV. Encouraging Predictive Coding Ex Ante 

[45]     In light of the court decisions and recent amendments to the Federal Rules of Civil Procedure, predictive coding should be encouraged at the outset of the discovery process and applied to the entire universe of documents in a case. This Part first explains why predictive coding should be used at the outset, and then suggests how parties and courts should proceed in implementing this method.

            A. Why Predictive Coding Ex Ante is Preferable

[46]     Employing predictive coding at the outset identifies relevant documents significantly more accurately than keyword culling.[112] Predictive coding employs sophisticated technology that can predict relevant documents beyond the simplistic search terms used in keyword culling.[113] Keyword culling is less accurate because many relevant documents slip through the cracks when only keyword searches are employed.[114] This accuracy advantage is greatest when predictive coding is applied to the entire set of documents at the outset.
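To make the mechanical difference concrete, the sketch below contrasts the two methods on a hypothetical five-document collection. It is a toy illustration written in Python with an off-the-shelf classifier; the documents, keywords, attorney labels, and the use of scikit-learn are all illustrative assumptions, not a depiction of any vendor’s actual predictive coding software.

# Toy contrast of keyword culling and a predictive-coding-style classifier.
# All documents, keywords, and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "memo on the pricing agreement with the distributor",  # relevant
    "signup sheet for the office softball league",         # irrelevant
    "keep the rates discussion confidential",              # relevant
    "catering menu for the holiday party",                 # irrelevant
    "our rates discussion before the bid deadline",        # unreviewed
]

# Keyword culling: retain only documents containing a search term.
keywords = {"pricing", "price"}
culled_in = [d for d in docs if keywords & set(d.split())]
# Only the first document survives; the fifth never uses the word
# "pricing" or "price" and is culled out despite discussing rates.

# Predictive coding, greatly simplified: a senior attorney codes a seed
# set, a classifier learns from those judgments, and every document in
# the collection is then scored for likely relevance.
seed_docs, seed_labels = docs[:4], [1, 0, 1, 0]  # 1 = coded relevant
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)
scores = model.predict_proba(vectorizer.transform(docs))[:, 1]
for doc, score in sorted(zip(docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
# The fifth document shares vocabulary ("rates", "discussion") with a
# relevant seed document, so the model ranks it above the clearly
# irrelevant documents even though it matches no culling keyword.

In an actual matter the seed set would contain thousands of attorney-coded documents and would be refined iteratively, but the structural point survives the simplification: the classifier generalizes from the attorney’s judgments, while a keyword filter cannot see past its literal terms.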

[47]     Because predictive coding is employed under either approach (applied at the outset or applied after keyword culling), the costs of the two approaches do not differ significantly. The majority of the costs of predictive coding come from the time a senior attorney who is intimately familiar with the case spends training the software, and from the fees of a vendor with the technology and software to run predictive coding.[115] These costs are incurred under both approaches. Where monetary costs and time may vary is in the senior attorney identifying potentially relevant documents and training the software on a larger volume of documents.[116] Accordingly, the cost differential lies in the time it takes to identify those potentially relevant documents, as well as in the resulting production. There has not been enough empirical research on this question, but no court has held, and no party has argued, that predictive coding costs more at the outset. Even if the costs at the outset were somewhat steeper, the difference is likely not substantial enough to outweigh the benefit of accuracy in identifying relevant documents.

[48]     Furthermore, as discussed in Part III.A, courts have routinely upheld and encouraged the use of predictive coding at the outset. The courts that permitted keyword culling did so only where the documents had already been culled and where restarting discovery would have been too burdensome and costly.[117]

[49]     The recent amendments to the Federal Rules of Civil Procedure further reinforce proportionality and the responsibility of parties and courts to ensure the just and efficient resolution of a case. Rule 1 now mandates that the rules “should be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.”[118] Courts and parties alike are now explicitly charged with proceeding justly and efficiently throughout a case, including the discovery phase. More specifically, Rule 26(b) now provides that the scope of discovery must begin with an inquiry into proportionality.[119] The Rule directs the parties and courts to consider several proportionality factors, including “the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.”[120]

[50]     The Rules now explicitly emphasize proportionality through a multi-factor test. This legislative-like answer from the rules’ drafters and the Supreme Court was a deliberate decision to refocus discovery on the issues at stake and on the importance of discovery in resolving them. As discussed above, the cost differential between the two methods is likely insignificant. Proportionality, as applied to a discovery dispute, concerns both accuracy and efficiency because it implicates time, cost, and the just resolution of a case. Since cost is not a determinative factor, employing predictive coding at the outset lets the parties gain accuracy, which squares with the proportionality required by the Rules. In this way, the parties gain accuracy without sacrificing efficiency.

            B. How Parties and Courts Should Proceed

[51]     At the beginning of discovery, parties should opt to employ predictive coding on the entire universe of documents in a case, in light of its benefits for accuracy and proportionality. Even under the previous version of the Rules, parties were encouraged to collaborate regarding discovery methods and to consider each step of predictive coding at the outset.[121] This collaboration is essential because the parties, rather than the courts, are usually in the best position to evaluate the method initially.[122]

[52]     The ideal protocol is the one employed by the parties in In Re Actos.[123] In that case, the parties cooperated and collaborated at the beginning of the discovery phase and successfully implemented predictive coding.[124] At the opposite end of the spectrum, the parties in Kleen Products demonstrated how destructive it is to dispute the discovery methodology for months, wasting both time and money.[125] There, the plaintiffs ultimately withdrew their demand, which allowed the defendants to keep their previously keyword-culled documents.[126] That end result was neither a judicial decision nor a collaborative effort by the parties; it was simply the easier solution after several months of dispute, brought on by the plaintiffs’ withdrawal of the demand.[127] If parties collaborate at the outset and practice transparency by sharing the predictive coding methodology with the other side, there is little left for the other side to object to.[128] Costs are saved by employing predictive coding regardless of when it is applied, and predictive coding is overwhelmingly more accurate in producing relevant documents than keyword culling.[129]

[53]     All that remains for the parties to dispute, then, is the input to the predictive coding software. Parties may disagree about the inputs used to train the software, but this need not be a daunting task: the parties in In Re Actos planned for such disagreements, built in options to work together on the inputs, and scheduled times to meet and confer.[130] It is therefore more proportional and worthwhile to start with predictive coding at the outset.[131]

[54]     McNabb and Progressive Casualty Insurance Company also teach an important lesson about collaboration between the parties at the outset.[132] Because the court in McNabb rejected the plaintiff’s motion to compel further keyword searches,[133] both parties could have benefitted from predictive coding at the outset. The producing party in Progressive Casualty Insurance Company unilaterally decided to switch to predictive coding, which prompted a motion to compel from the requesting party.[134] These situations could have been avoided through collaborative efforts at the outset and transparency throughout the process.

[55]     As discussed in Part III.A.2, courts have allowed predictive coding to be used after keyword culling primarily because keyword culling had already been employed by the producing party, and starting over with predictive coding on the entire universe of documents would have been costly. The judges reasoned that it would have been highly inefficient and disproportional to require that party to start over, especially where the parties had agreed on the keyword search method at the outset.[135] In Kleen Products, LLC v. Packaging Corporation of America, the “defendants [had] [already] produced more than three million pages of documents” through keyword culling,[136] but the plaintiffs asked the judge to order that discovery be redone using predictive coding.[137] The parties eventually reached an agreement, with the plaintiffs withdrawing their demand.[138] The court in In Re Biomet M2a Magnum Hip Implant Products Liability Litigation allowed keyword culling prior to the application of predictive coding because ordering the party to restart and apply predictive coding to the raw data would have been expensive and disproportional under Rule 26.[139]

[56]     As these cases show, producing parties continue to employ keyword culling at the outset, possibly because it is quicker or because it produces a smaller set of documents.[140] Regardless of the motive, once this discovery issue reaches the courts and the producing party has already employed keyword culling, courts have been hesitant to order the party to start the discovery process again. In effect, producing parties are permitted to retain their keyword culling methods.

[57]     Courts need to lead the change. If parties continue to employ keyword culling rather than predictive coding at the outset, courts should suggest the use of predictive coding from the start. It will be relatively simple for courts to encourage or mandate predictive coding at the outset, as the courts discussed in Part III.A did. Courts may be more reluctant to order a producing party to abandon its keyword culling and restart the discovery process, but at this point, it is necessary. Proportionality is a primary concern under the Federal Rules of Civil Procedure, and when predictive coding will be used in a case, it should be used at the outset to obtain the most accurate set of documents. It may take only one court in one case to capture the attention of parties and other courts, and so lead the change toward a more accurate and proportional discovery process in the cases to come.

V. Conclusion

[58]     Predictive coding has been proven more accurate and efficient than traditional methods of discovery, but authority has split on the point at which it should be applied. The issue courts have faced is whether predictive coding should be applied at the outset to the entire universe of documents, or instead to keyword-culled documents. Courts have gone both ways, but on December 1, 2015, amendments to the Federal Rules of Civil Procedure approved by the rules’ drafters and the Supreme Court took effect. The amendments to Rules 1 and 26(b)(1) bear directly on this discussion: they emphasize the responsibility of parties and courts to ensure that a case proceeds justly and efficiently, while highlighting the importance of proportionality in the scope of discovery. In light of these amendments, predictive coding should be applied at the outset to the entire universe of documents in a case. It is far more accurate, and it is not more costly or time-consuming, especially when the parties collaborate from the beginning. As prior cases show, this is the best method to identify the most relevant documents. It becomes costly and inefficient only when a party has already used keyword culling and must restart the discovery process to employ predictive coding. If parties collaborate and practice transparency at the outset, they will often find it significantly more effective, and in both parties’ interest, to employ predictive coding to identify the most relevant documents. If parties cannot agree, or fall back on the old ways of keyword culling, courts can and should lead the change by encouraging predictive coding at the outset of the discovery process, with the recent amendments to the Federal Rules of Civil Procedure on their side.

*J.D. Candidate 2017, University of Richmond School of Law. B.A., 2012, American University of Beirut. The author gratefully acknowledges Professor Jessica Erickson for her mentorship in the organization and articulation of arguments in this article, as well as Ms. Meghan Podolny for her assistance in the primary research phase of this topic. The author would also like to thank the editors and staff of the Richmond Journal of Law & Technology for their efforts in editing this article.

[1] See, e.g., Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1, 43, 48 (2011) (discussing benefits of predictive coding when conducting discovery); see also Joe Palazzolo, Why Hire a Lawyer? Computers are Cheaper, Wall Street J., (June 18, 2012, 2:06 PM), http://www.wsj.com/articles/SB10001424052702303379204577472633591769336, archived at https://perma.cc/FRN2-BTMW (noting that predictive coding is one subset of technology-assisted review (TAR) processes); see Andrew Peck, Search, Forward; Will Manual Document Review and Keyword Searches be Replaced by Computer-Assisted Coding?, Law Tech. News (Oct. 2011), https://law.duke.edu/sites/default/files/centers/judicialstudies/TAR_conference/Panel_1-Background_Paper.pdf, archived at https://perma.cc/7DDK-3HL5.

[2] Covington & Burling LLP, The Duty to Produce ESI, in Litigating Securities Class Actions § 13.04(2)(c) (Jonathan Eisenberg ed., 2016).

[3] See Tonia Hap Murphy, Mandating Use of Predictive Coding in Electronic Discovery: An Ill-Advised Judicial Intrusion, 50 Am. Bus. L.J. 609, 618 (2013) (noting that predictive coding uses sophisticated technology to narrow down documents that are most relevant to a case).

[4] See id.

[5] See id. at 617.

[6] See Fed. R. Civ. P. 1.

[7] See discussion infra Part III.B.1.

[8] See Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182, 193 (S.D.N.Y. 2012). The Da Silva Moore case has received a significant amount of attention, since it was the first case in which predictive coding was judicially approved. See also Bennett B. Borden & Jason R. Baron, Finding the Signal in the Noise: Information Governance, Analytics, and the Future of Legal Practice, 20 Rich. J.L. & Tech. 1, 7, 16 (2014) (providing an in-depth statistical analysis finding that predictive coding is abundantly more accurate and efficient than traditional methods of discovery); see generally Grossman & Cormack, supra note 1, at 3 (discussing the efficiency and effectiveness of predictive coding).

[9] See Most Important Documents Get Looked at First: Using Predictive Coding to Prioritize & Expedite Review, Consilio (2016), http://www.consilio.com/wp-content/uploads/2016/01/Using-Predictive-Coding-to-Expedite-Review.pdf, archived at https://perma.cc/8R9L-6N5V (noting that if predictive coding were used at the outset it would have saved 70% of the time it took to conduct manual review).

[10] See Murphy, supra note 3, at 621–22.

[11] See Jim Eidelman, Best Practices in Predictive Coding: When are Pre-Culling and Keyword Searching Defensible?, Catalyst, Jan. 9, 2012, http://catalystsecure.com/blog/2012/01/best-practices-in-predictive-coding-when-are-pre-culling-and-keyword-searching-defensible/, archived at https://perma.cc/GG8K-3MMF.

[12] See id.; see also Barry Kazan & David Wilson, Technology-Assisted Review Is a Promising Tool for Document Production, N.Y. L.J. Online, Mar. 18, 2013, http://www.newyorklawjournal.com/id=1202592178481/TechnologyAssisted-Review-Is-a-Promising-Tool–for-Document-Production, archived at https://perma.cc/QZ6J-BVD6 (citing a case in which one party found that keyword culling only produces 20% of relevant documents, whereas predictive coding would be sufficient even when finding at a 75% responsive rate).

[13] See Eidelman, supra note 11.

[14] See Kazan & Wilson, supra note 12.

[15] See John Hopkins, Large Data and Document Production – Keyword Search and Predictive Coding, Searcy L. Blog, May 31, 2013, https://www.searcylaw.com/large-data-and-document-production-keyword-search-and-predictive-coding/, archived at https://perma.cc/VA9V-HJXM.

[16] Murphy, supra note 3, at 617.

[17] See id. at 620.

[18] See id. at 614–16, 620.

[19] The traditional way to employ keyword culling is to run keywords through the documents and retain those documents that contain the keywords. See Ralph C. Losey, Predictive Coding and the Proportionality Doctrine: A Marriage Made in Big Data, 26 Regent U. L. Rev. 7, 58–59 (2013) (arguing that keyword culling could instead be used to cull out the documents that are least likely to be relevant).

[20] See Jacob Tingen, Technologies-That-Must-Not-Be-Named: Understanding and Implementing Advanced Search Technologies in E-Discovery, 19 Rich. J.L. & Tech. 1, 33, 37 (2012); see Kate Mortensen, E-discovery Best Practices for Your Practice, Step 4: Search and Review, Inside Counsel, May 20, 2014, http://www.insidecounsel.com/2014/05/20/e-discovery-best-practices-for-your-practice-step, archived at https://perma.cc/Q7JW-XTTZ.

[21] See Joseph H. Looby, E-Discovery – Taking Predictive Coding Out of the Black Box, FTI J. (Nov. 2012), http://ftijournal.com/article/taking-predictive-coding-out-of-the-black-box-deleted, archived at https://perma.cc/4T49-CRTS.

[22] See Mark F. Foley, Expert Testimony May Be Needed for E-Discovery Keyword Searches, vonBreisen, Mar. 1, 2008, http://www.vonbriesen.com/legal-news/2098/expert-testimony-may-be-needed-for-e-discovery-keyword-searches, archived at https://perma.cc/2TGW-9KV9.

[23] See Murphy, supra note 3, at 614.

[24] Id. at 615–16.

[25] Id. at 614–15.

[26] See Matt Miller, Making Sure Your Predictive Coding Solution Doesn’t Cost More, DiscoverReady Blog, Apr. 30, 2013, http://discoverready.com/blog/making-sure-your-predictive-coding-solution-doesnt-cost-more/, archived at https://perma.cc/ZH6T-CZFN.

[27] See id.

[28] See Eidelman, supra note 11.

[29] Id.

[30] See id.

[31] See Miller, supra note 26.

[32] See, e.g., In Re Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12MD2391, 2013 U.S. Dist. LEXIS 84440, at *1 (N.D. Ind. Apr. 18, 2013) (Order Regarding Discovery of ESI) (noting that the requesting party expected about 10 million documents, but the producing party only produced 2.5 million documents).

[33] See Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182, 193 (S.D.N.Y. 2012).

[34] See id. at 184–85.

[35] See id. at 184.

[36] See id. at 184–86.

[37] See id. at 186, 188.

[38] See Da Silva Moore v. Publicis Groupe SA, No. 11 Civ. 1279, 2012 U.S. Dist. LEXIS 58742, at *2 (S.D.N.Y. Apr. 25, 2012).

[39] See id. at *8–9.

[40] Id. at *8.

[41] See Global Aerospace Inc. v. Landow Aviation, L.P., No. CL 61040, 2012 Va. Cir. LEXIS 50, at *2 (Va. Cir. Ct. Apr. 23, 2012).

[42] See Brief in Opposition of Plaintiffs, Motion for Protective Order Regarding Electronic Documents and “Predictive Coding” at 2, Global Aerospace Inc. v. Landow Aviation, L.P., No. CL 61040, 2012 Va. Cir. LEXIS 50 (Va. Cir. Ct. Apr. 16, 2012), 2012 WL 1419848, at *1–2.

[43] See id. at *2–3.

[44] Global Aerospace Inc., 2012 Va. Cir. LEXIS 50, at *2.

[45] See In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2012 U.S. Dist. LEXIS 187519, at *20, *34 (W.D. La. July 27, 2012).

[46] See id. at *20.

[47] See id. at *21.

[48] See id. at *43.

[49] See McNabb v. City of Overland Park, No. 12-CV-2331 CM/TJJ, 2014 U.S. Dist. LEXIS 37312, at *5 (D. Kan. Mar. 21, 2014).

[50] See id. at *2.

[51] See Adam Kuhn, The Interplay Between Proportionality and Predictive Coding in e-Discovery, Recommind, June 12, 2014, http://www.recommind.com/blog/interplay-proportionality-predictive-coding-ediscovery, archived at https://perma.cc/LQX8-HYQM [hereinafter Interplay Between Proportionality and Predictive Coding].

[52] Id.

[53] See id.

[54] See Da Silva Moore v. Publicis Groupe SA, 287 F.R.D. 182, 193 (S.D.N.Y. 2012); see Global Aerospace Inc. v. Landow Aviation, L.P., No. CL 61040, 2012 Va. Cir. LEXIS 50, at *2 (Va. Cir. Ct. Apr. 23, 2012); In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2012 U.S. Dist. LEXIS 187519, at *12, *20 (W.D. La. July 27, 2012) (Case Management Order: Protocol Relating to the Production of Electronically Stored Information).

[55] See McNabb v. City of Overland Park, No. 12-CV-2331 CM/TJJ, 2014 U.S. Dist. LEXIS 52534, at *7, *9 (D. Kan. Apr. 16, 2014).

[56] See Murphy, supra note 3, at 629 (noting that the district judge allowed the discovery issue to be decided separately by the magistrate judge).

[57] Id. at 629–30 (citing the Joint Status Conference Report No. 3, at 3, Kleen Prods., LLC v. Packaging Corp. of Am., Civil Case No. 1:10–cv–05711 (N.D. Ill. May 17, 2012)).

[58] See id. at 630 (quoting Defendants’ Brief on Discovery Issues at 1, Kleen Prods., LLC v. Packaging Corp. of Am., No. 1:10–cv–05711 (N.D. Ill. Feb. 6, 2012)).

[59] See id.

[60] See id.

[61] See Murphy, supra note 3, at 630–31.

[62] See Matthew Verga, Predictive Coding Cases, Part 2 – Kleen Products, Modus, Mar. 5, 2015, http://discovermodus.com/blog/predictive-coding-cases-2-kleen-products/, archived at https://perma.cc/6PHG-D49Z.

[63] See In Re Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12MD2391, 2013 U.S. Dist. LEXIS 84440, at *1 (N.D. Ind. Apr. 18, 2013) (Order Regarding Discovery of ESI).

[64] Bob Ambrogi, In Praise of Proportionality: Judge OKs Predictive Coding After Keyword Search, Catalyst, Apr. 29, 2013, http://www.catalystsecure.com/blog/2013/04/in-praise-of-proportionality-judge-oks-predictive-coding-after-keyword-search/, archived at https://perma.cc/2W7M-ZNHM.

[65] See Citing Proportionality, Court Declines to Require Defendant to Redo Discovery Utilizing Only Predictive Coding, K&L Gates, Apr. 23, 2013, https://www.ediscoverylaw.com/2013/04/citing-proportionality-court-declines-to-require-defendant-to-redo-discovery-utilizing-only-predictive-coding/, archived at https://perma.cc/5YUM-U6CY (citing Order Regarding Discovery of ESI, In Re Biomet M2a Magnum Hip Implant Prods. Liab. Litig.) [hereinafter Citing Proportionality].

[66] See Keyword Filtering Prior to Predictive Coding Deemed Reasonable, EDiscovery Wire, Dec. 6, 2013, http://www.ediscoverywire.com/keyword-filtering-prior-to-predictive-coding-deemed-reasonable/, archived at https://perma.cc/P8S8-2WQZ.

[67] See Ambrogi, supra note 64.

[68] See id.

[69] See Progressive Cas. Ins. Co. v. Delaney, No. 2:11-CV-00678-LRH-PAL, 2014 U.S. Dist. LEXIS 69166, at *5 (D. Nev. May 20, 2014).

[70] See id. at *6–7.

[71] See id.

[72] See id. at *6.

[73] See id.

[74] See Progressive Cas. Ins. Co., 2014 U.S. Dist. LEXIS 69166, at *3–4.

[75] See id. at *31.

[76] See id.

[77] See Bridgestone Ams., Inc. v. Int’l Bus. Machs. Corp., No. 3:13-1196, 2014 U.S. Dist. LEXIS 142525, at *3 (M.D. Tenn., July 24, 2014) (Order Regarding use of Predictive Codes in Discovery) (explaining that the Magistrate Judge may permit the Plaintiff to use predictive coding on the documents).

[78] See Adam Kuhn, Bridgestone v. IBM Approves Predictive Coding Use, Rejects Progressive, Recommind, Aug. 12, 2014, http://www.recommind.com/blog/bridgestone-v-ibm-approves-predictive-coding-use-rejects-progressive, archived at https://perma.cc/NXY6-JX64.

[79] See Bridgestone Ams., Inc., 2014 U.S. Dist. LEXIS 142525, at *3.

[80] See Gilbert S. Keteltas, Predictive Coding After Keyword Screening!? Don’t Miss the Point of Bridgestone Americas, BakerHostetler: Discovery Advocate, Aug. 21, 2014, http://www.discoveryadvocate.com/2014/08/21/predictive-coding-after-keyword-screening-dont-miss-the-point-of-bridgestone-americas/, archived at https://perma.cc/YTR5-9UGX.

[81] Jason Bonk, Reasonableness and Proportionality Win Another Fight for Predictive Coding, E-Discovery L. Rev. (Sept. 17, 2014), http://www.ediscoverylawreview.com/2014/09/17/reasonableness-and-proportionality-win-another-fight-for-predictive-coding/, archived at https://perma.cc/98EY-ASU4 (quoting Eric Seggebruch).

[82] See discussion supra Part II.B.

[83] See Edward Schoenecker Jr., Nine Cases on Predictive Coding from Modus, LinkedIn, April 14, 2015, https://www.linkedin.com/pulse/nine-cases-predictive-coding-from-modus-edward-schoenecker, archived at https://perma.cc/N4ZY-VCRW.

[84] See Bridgestone Ams., Inc. v. Int’l Bus. Machs. Corp., No. 3:13-1196, 2014 U.S. Dist. LEXIS 142525, at *1–2 (M.D. Tenn., July 24, 2014) (Order Regarding use of Predictive Codes in Discovery); see also Progressive Cas. Ins. Co. v. Delaney, No. 2:11-CV-00678-LRH-PAL, 2014 U.S. Dist. LEXIS 69166, at *31 (D. Nev. May 20, 2014); see In Re Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12MD2391, 2013 U.S. Dist. LEXIS 84440, at *1 (N.D. Ind. Apr. 18, 2013) (Order Regarding Discovery of ESI); see Kleen Prods. LLC v. Packaging Corp. of Am., No. 10 C 5711, 2012 U.S. Dist. LEXIS 139632, at *14–19 (N.D. Ill. Sept. 28, 2012).

[85] See Bridgestone Ams., Inc., 2014 U.S. Dist. LEXIS 142525, at *1–2; see Progressive Cas. Ins. Co., 2014 U.S. Dist. LEXIS 69166, at *31.

[86] See Bridgestone Ams., Inc., 2014 U.S. Dist. LEXIS 142525, at *5; see Kleen Prods., LLC, 2012 U.S. Dist. LEXIS 139632, at *28.

[87] See 2015-2016 Federal Rules of Civil Procedure Amendments Released, Federal Rules of Civil Procedure Updates, May 13, 2015, https://www.federalrulesofcivilprocedure.org/2015-2016-federal-rules-of-civil-procedure-amendments-released/, archived at https://perma.cc/54GY-2XKK [hereinafter 2015-2016 Federal Rules Amendments].

[88] Id.; see also Federal Rule Changes Affecting E-Discovery Are Almost Here – Are You Ready This Time?, K&L Gates, Oct. 1, 2015, http://www.ediscoverylaw.com/wp-content/uploads/2015/10/Rules-Amendment-Alert-100115.pdf, archived at https://perma.cc/H7A3-2C7T [hereinafter Rule Changes].

[89] Fed. R. Civ. P. 1 (2014) (amended 2015).

[90] Fed. R. Civ. P. 1 (emphasis added).

[91] See 2015-2016 Federal Rules Amendments, supra note 87.

[92] Fed. R. Civ. P. 26(b)(1) (2014) (amended 2015).

[93] Fed. R. Civ. P. 26(b)(1) (emphasis added).

[94] See Just Follow the Rules! FRCP Amendments Could be E-Discovery Game Changer, Metropolitan Corporate Counsel (July 17, 2015, 11:49 PM), http://www.metrocorpcounsel.com/articles/32726/just-follow-rules-frcp-amendments-could-be-e-discovery-game-changer, archived at https://perma.cc/A9U7-3CHY [hereinafter Just Follow the Rules!].

[95] See E-Discovery Update: Federal Rules of Civil Procedure Amendments Go into Effect, McGuireWoods, Dec. 1, 2015, https://www.mcguirewoods.com/Client-Resources/Alerts/2015/12/E-Discovery-Update.aspx, archived at https://perma.cc/J5H6-4XET.

[96] Rule Changes, supra note 88, at 2 (quoting The Committee on Rules of Practice and Procedure, Report of the Judicial Conference Committee on Rules of Practice and Procedure to the Chief Justice of the United States and Members of the Judicial Conference of the United States app. at B–8 (2014), http://www.uscourts.gov/rules-policies/archives/committee-reports/reports-judicial-conferenceseptember-2014).

[97] Just Follow the Rules!, supra note 94 (arguing that although a case may not have an amount in controversy, it could still be a significant issue that deserves the concern of proportionality, such as discrimination or First Amendment cases).

[98] Rule Changes, supra note 88, at 2.

[99] Fed. R. Civ. P. 26(b)(1).

[100] See Just Follow the Rules!, supra note 94.

[101] See Court Applies Amended Rule 26 Concludes Burdens on Parties Resisting Discovery have not Fundamentally Changed, K&L Gates, Dec. 17, 2015, http://www.ediscoverylaw.com/2015/12/court-applies-amended-rule-26-concludes-burdens-on-parties-resisting-discovery-have-not-fundamentally-changed/, archived at https://perma.cc/A8W8-QQRK.

[102] See Carr v. State Farm Mut. Auto. Ins. Co., No. 3:15-CV-1026-M, 2015 U.S. Dist. LEXIS 163444, at *15–17 (N.D. Tex. Dec. 7, 2015).

[103] See Court Concludes Defendant’s Request was “Precisely the Kind of Disproportionate Discovery That Rule 26—Old Or New—Was Intended to Preclude,” K&L Gates, Jan. 19, 2016, https://www.ediscoverylaw.com/2016/01/court-concludes-defendants-request-was-precisely-the-kind-of-disproportionate-discovery-that-rule-26-old-or-new-was-intended-to-preclude/, archived at https://perma.cc/V8T8-WJHG (citing Gilead Sciences, Inc. v. Merck & Co., Inc., No. 5:13-CV-04057-BLF, 2016 U.S. Dist. LEXIS 5616 (N.D. Cal. Jan. 13, 2016)) [hereinafter Court Concludes].

[104] Gilead Sciences, Inc. v. Merck & Co., Inc., No. 5:13-CV-04057-BLF, 2016 U.S. Dist. LEXIS 5616, at *7 (N.D. Cal. Jan. 13, 2016).

[105] See Court Concludes, supra note 103 (citing Gilead Sciences, Inc. v. Merck & Co., Inc., No. 5:13-CV-04057-BLF, 2016 U.S. Dist. LEXIS 5616 (N.D. Cal. Jan. 13, 2016)).

[106] See Court Approves Proposal to Redact or Withhold Irrelevant Information from Responsive Documents and Document Families, K&L Gates, Mar. 3, 2016, http://www.ediscoverylaw.com/2016/03/court-approves-proposal-to-redact-or-withhold-irrelevant-information-from-responsive-documents-and-document-families/, archived at https://perma.cc/26E4-UXLH (citing In re Takata Airbag Prods. Liab. Litig., 2016 U.S. Dist. LEXIS 131746 (S.D. Fla. Mar. 1, 2016)).

[107] See id.

[108] See 2015 Year-End Report on the Federal Judiciary, SupremeCourt.Gov 1, 6, 9 (Dec. 31, 2015), http://www.supremecourt.gov/publicinfo/year-end/2015year-endreport.pdf, archived at https://perma.cc/5RU7-DCF7.

[109] Just Follow the Rules!, supra note 94.

[110] See id.

[111] See id.

[112] See Eidelman, supra note 11; see also Kazan & Wilson, supra note 12.

[113] See Eidelman, supra note 11; see also Kazan & Wilson, supra note 12.

[114] See Eidelman, supra note 11; see also Kazan & Wilson, supra note 12.

[115] See Miller, supra note 26.

[116] See id.

[117] See discussion, supra Part III.A.

[118] Fed. R. Civ. P. 1 (emphasis added).

[119] See Fed. R. Civ. P. 26(b)(1).

[120] Id.

[121] See Karl Schieneman & Thomas C. Gricks III, The Implications of Rule 26(g) on the Use of Technology-Assisted Review, 7 Fed. Cts. L. Rev. 239, 273–74 (2013) (noting that even under the old Rules, counsel was encouraged to consider each step of technology-assisted review under Rule 26(g) and 26(b)(2)(C)(iii)).

[122] See Charles Yablon & Nick Landsman-Roos, Predictive Coding: Emerging Questions and Concerns, 64 S.C. L. Rev. 633, 674 (2013) (citing Sedona Principle 6).

[123] See In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2012 U.S. Dist. LEXIS 187519, at *27 (W.D. La. July 27, 2012).

[124] See id. at *27.

[125] See Kleen Prods. LLC v. Packaging Corp. of Am., No. 10-C-5711, 2012 U.S. Dist. LEXIS 139632, at *60–62 (N.D. Ill. Sept. 28, 2012).

[126] See id. at *62–63.

[127] See id. at *58, *62.

[128] See id. at *58.

[129] See Grossman & Cormack, supra note 1, at 44, 48.

[130] See In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2014 U.S. Dist. LEXIS 86101, at *20–34 (W.D. La. June 23, 2014).

[131] See Interplay Between Proportionality and Predictive Coding, supra note 51 (“[A] party who unilaterally decides later on in discovery that its search tactics were too imprecise could find that proportionality standards prevent the use of more advanced, accurate, and targeted searches with predictive coding technologies.”).

[132] See, e.g., McNabb v. City of Overland Park, No. 12-CV-2331 CM/TJJ, 2014 U.S. Dist. LEXIS 37312, at *2–14 (D. Kan. Mar. 21, 2014); see also Progressive Cas. Ins. Co. v. Delaney, No. 2:11-CV-00678-LRH-PAL, 2014 U.S. Dist. LEXIS 69166, at *30–32 (D. Nev. May 20, 2014).

[133] See McNabb, 2014 U.S. Dist. LEXIS 37312, at *5.

[134] See Progressive Casualty, 2014 U.S. Dist. LEXIS 69166, at *2, *30–32.

[135] See id. at *30–32.

[136] Murphy, supra note 3, at 629–30.

[137] See id. at 630.

[138] See id.

[139] See Citing Proportionality, supra note 65 (discussing the court’s decision in Kleen regarding ESI searches).

[140] See Eidelman, supra note 11.

Resisting the Resistance: Resisting Copyright and Promoting Alternatives

Cite as: Giancarlo F. Frosio, Resisting the Resistance: Resisting Copyright and Promoting Alternatives, 23 Rich. J.L. & Tech. 4 (2017), http://jolt.richmond.edu/index.php/volume23_issue2_frosio/.

 Giancarlo F. Frosio*

Abstract

This article discusses the resistance to the Digital Revolution and the emergence of a social movement “resisting the resistance.” Mass empowerment has political implications that may provoke reactionary counteractions. Ultimately—as I have discussed elsewhere—resistance to the Digital Revolution can be seen as a response to Baudrillard’s call for a return to prodigality beyond the structural scarcity of the capitalistic market economy. In Baudrillard’s terms, by increasingly commodifying knowledge and expanding copyright protection, we are taming limitless power with artificial scarcity to keep in place a dialectic of penury and unlimited need. In this paper, I will focus on certain global movements that resist copyright expansion, such as Creative Commons, the open access movement, the Pirate Party, the A2K movement, and cultural environmentalism. A nuanced discussion of these campaigns must account for the irrelevance of copyright in the public mind, the emergence of new economics of digital content distribution on the Internet, the idea of the death of copyright, and the demise of traditional gatekeepers. Scholarly and market alternatives to traditional copyright merit consideration here as well. I will conclude my review of this movement “resisting the resistance” to the Digital Revolution by sketching out a roadmap for copyright reform that builds upon its vision.

I. Introduction

[1]       In The Creative Destruction of Copyrights, Raymond Ku first applied the wind of creative destruction—made famous by Joseph Schumpeter—to the Digital Revolution.[1] According to Schumpeter, the “fundamental impulse that sets and keeps the capitalist engine in motion” is the process of creative destruction, which “incessantly revolutionizes the economic structure by incessantly destroying the old one, incessantly creating a new one.”[2] Traditional business models’ resistance to technological innovation unleashed the wind of creative destruction. Today, we are in the midst of a war over the future of our cultural and information policies. The preamble of the Washington Declaration on Intellectual Property and the Public Interest explains the terms of this struggle:

[t]he last 25 years have seen an unprecedented expansion of the concentrated legal authority exercised by intellectual property rights holders. This expansion has been driven by governments in the developed world and by international organizations that have adopted the maximization of intellectual property control as a fundamental policy tenet. Increasingly, this vision has been exported to the rest of the world. Over the same period, broad coalitions of civil society groups and developing country governments have emerged to promote more balanced approaches to intellectual property protection. These coalitions have supported new initiatives to promote innovation and creativity, taking advantage of the opportunities offered by new technologies. So far, however, neither the substantial risks of intellectual property maximalism, nor the benefits of more open approaches, are adequately understood by most policy makers or citizens. This must change if the notion of a public interest distinct from the dominant private interest is to be maintained.[3]

[2]       The underpinnings of this confrontation extend to a broader discussion over the cultural and economic tenets of our capitalistic society, freedom of expression and democratization.

II. Resistance and Resisting the Resistance

[3]       Since the origins of the open source movement, mass collaboration has been envisioned as an instrument to create a networked democracy.[4] The political implications of mass collaboration in terms of mass empowerment are relevant to the ideas of freedom and equality. User-generated mass collaboration has promoted decentralization and autonomy in our system of creative production.[5] Internet mass empowerment might spur the democratization of content production, from which political democratization might follow.[6] As Clay Shirky described, open networks reverse the usual sequence of “filter, then publish” by making it easy to “publish, then filter.”[7] Minimizing cultural filtering empowers sub-cultural creativity and thus cultural distinctiveness and identity politics.[8]

[4]       Mass empowerment, however, triggers reactionary effects. Change has always unleashed a fierce resistance from the established power, both public and private. It did so with the Printing Revolution.[9] It does now with the Internet Revolution. For public power, the emergence of limitless access, knowledge, and therefore freedom, is a destabilizing force that causes governments to face increasing accountability and therefore relinquish a share of their power.[10] In mass empowerment, the Internet, and global access to knowledge, private power sees the dreadful prospect of having to switch from a top-down to a bottom-up paradigm of consumption.[11] Much to the dismay of the corporate sector, the Internet presents serious obstacles for the management of consumer behavior.[12] As Patry noted, “‘[c]opyright owners’ extreme reaction to the Internet is based on the role of the Internet in breaking the vertical monopolization business model long favored by the copyright industries.”[13] In combatting this breakdown, the copyright industries have waged “…[t]he Copyright Wars [which] are an effort to accomplish the impossible: to stop time, to stop innovation, to stop new ways of learning and new ways of creating.”[14] In particular, the steady enlargement of copyright becomes a tool used by reactionary forces willing to counter the Digital Revolution.[15] From a market standpoint, stronger rights allow the private sector to enforce a top-down consumer system.[16] The emphasis of copyright protection on a permission culture favors a unidirectional market, where the public is only a consumer, passively engaged to pay per use or else stop using copyrighted works.[17] From a political standpoint, tight control of the reuse of information prevents mainstream culture from being challenged by alternative culture.[18] Copyright law empowers mainstream culture and marginalizes minority counter-culture, thereby slowing any process leading to a paradigm shift.[19]

[5]       From a broader socio-economic perspective, there is also a more systemic explanation for the reaction facing the emergence of the networked information society. Baudrillard’s arguments might explain the reaction to the Digital Revolution—which drives cultural goods’ marginal costs of distribution and reproduction close to zero.[20] Copyright law might become an instrument to protect the capitalistic notion of consumption and perpetuate a system of artificial scarcity. As the Digital Revolution turns consumers into users, and then creators, it defies the very notion of consumer society. It turns the capitalistic consumer economy into a networked information economy, characterized by a sharing and gift economy. So, for the socio-economic consumerist paradigm not to succumb, the limitless power of peer and mass collaboration must be tamed by the artificial scarcity created by copyright law. Ultimately, resistance to the Digital Revolution can be seen as a response to Baudrillard’s call for a return to prodigality beyond the structural scarcity of the capitalistic market economy.[21] The Internet and networked peer collaboration may represent a return to “collective ‘improvidence’ and ‘prodigality’” and their related “real affluence.”[22] New Internet dynamics of exchange and creativity might answer in the affirmative Baudrillard’s question of whether we will “…return, one day, beyond the market economy, to prodigality[.]”[23] In Baudrillard’s terms, by increasingly commodifying knowledge and expanding copyright protection, we are taming limitless power with artificial scarcity to keep in place a “dialectic of penury” and “unlimited need.”[24] Therefore, the reaction to the Internet revolution may be construed as an attempt by gatekeepers to keep their privileges in place as they thrive within a paradigm that builds the need for production—and overproduction—on an obsession with artificial scarcity.

[6]       In the past few years, a global movement grew under the understanding that the digital networked environment must be protected from external manipulations intended to stop exchange and re-instate scarcity. In this sense, resistance to copyright over-expansion can be understood as a cultural movement “resisting the resistance” to the Digital Revolution.[25] Francis Gurry, Director General of the World Intellectual Property Organization, gives a good explanation of these resistance mechanics.

[7]       Gurry noted that:

…the central question of copyright policy…implies a series of balances: between availability, on the one hand, and control of the distribution of works as a means of extracting value, on the other hand; between consumers and producers; between the interests of society and those of the individual creator; and between the short-term gratification of immediate consumption and the long-term process of providing economic incentives that reward creativity and foster a dynamic culture. Digital technology and the Internet have had, and will continue to have, a radical impact on those balances. They have given a technological advantage to one side of the balance, the side of free availability, the consumer, social enjoyment and short-term gratification. History shows that it is an impossible task to reverse technological advantage and the change that it produces. Rather than resist it, we need to accept the inevitability of technological change and to seek an intelligent engagement with it. There is, in any case, no other choice—either the copyright system adapts to the natural advantage that has evolved or it will perish.[26]

[8]       In the dedication to the Expositiones in Summulas Petri Hispani—printed around 1490 in Lyons—the editor, Johann Trechsel, announced: “[i]n contrast to xylography, the new art of impression I am practi[c]ing ends the career of all the scribes. They have to do the binding of the books now.”[27] Similarly, in the digital era, distributors’ roles and functions might be redefined. One of the key lessons in the gradual shift in market power in the entertainment industry these days is that the power of the old gatekeepers is declining, even as the overall industry grows. The power, instead, has definitely moved directly to the content creators themselves. Creators no longer need to go through a very limited number of gatekeepers, who often provide deal terms that significantly limit the creator’s ability to make a living.[28]

[9]       Instead, “…a major new opportunity has opened up, not for gatekeepers, but for organizations that enable artists to do the different things that the former gatekeeper used to do—but while retaining much more control, as well as a more direct connection with fans.”[29] As discussed at length in another piece of mine,[30] multiple emerging organizations are enabling a direct discourse between artists and users (e.g., Kickstarter, TopSpin, or Bandcamp).[31] As a consequence, traditional cultural intermediaries might be forced to give up their Ancien Régime privileges, causing further resistance to change. In the words of Neelie Kroes, European Commission Vice-President for the Digital Agenda:

[a]ll revolutions reveal, in a new and less favourable light, the privileges of the gatekeepers of the “Ancien Régime.” It is no different in the case of the internet revolution, which is unveiling the unsustainable position of certain content gatekeepers and intermediaries. No historically entrenched position guarantees the survival of any cultural intermediary. Like it or not, content gatekeepers risk being sidelined if they do not adapt to the needs of both creators and consumers of cultural goods…Today our fragmented copyright system is ill-adapted to the real essence of art, which has no frontiers. Instead, that system has ended up giving a more prominent role to intermediaries than to artists. It irritates the public who often cannot access what artists want to offer and leaves a vacuum which is served by illegal content, depriving the artists of their well-deserved remuneration. And copyright enforcement is often entangled in sensitive questions about privacy, data protection or even net neutrality. It may suit some vested interests to avoid a debate, or to frame the debate on copyright in moralistic terms that merely demonise millions of citizens. But that is not a sustainable approach…My position is that we must look beyond national and corporatist self-interest to establish a new approach to copyright.[32]

III. Resisting Copyright (at Zero Marginal Cost) and Promoting Alternatives

[10]     In the aftermath of the legal battles targeting P2P platforms (such as ThePirateBay), the Pirate Party “emerge[d] [in Sweden] to contest elections on the basis of the abolition or radical reform of intellectual property, in general, and copyright, in particular. The platform of the Pirate Party proclaims that ‘[t]he monopoly for the copyright holder to exploit an aesthetic work commercially should be limited to five years after publication. A five years copyright term for commercial use is more than enough.’”[33] “Non-commercial use should be free from day one.”[34] The Pirate Party saw large successes at its first electoral appearances in both Sweden and Germany, and similar political groups have now formed in other countries.[35] The Pirate Party serves as an “extreme expression [of] the sentiment of distaste or disrespect for intellectual property on the Internet.”[36] However, even The Economist has argued that copyright should return to its roots because, as it stands, it may cause more harm than good, proving that the sentiment is widespread.[37] A recent report from the Australian Government Productivity Commission sharply criticized the present “copy(not)right” model, pointing to a number of critical issues:

…Australia’s copyright arrangements are weighed too heavily in favour of copyright owners, to the detriment of the long-term interests of both consumers and intermediate users. Unlike other IP rights, copyright makes no attempt to target those works where ‘free riding’ by users would undermine the incentives to create. Instead, copyright is overly broad; provides the same levels of protection to commercial and non-commercial works; and protects works with very low levels of creative input, works that are no longer being supplied to the market, and works where ownership can no longer be identified.[38]

[11]     Therefore, copyright law has fallen into a deep crisis of acceptance with respect to both users and creators.[39] Especially among new generations,[40] copyright tends to become irrelevant in the public mind, if not altogether opposed.[41] Voicing a common opinion, David Lange noted that the over-expansion of copyright entitlements lies at the root of this crisis in public acceptance:

…Raymond Nimmer has said that copyright cannot survive unless it is accorded widespread acquiescence by the citizenry. I think his insight is acutely perceptive and absolutely correct, for a reason that I also understand him to endorse: Never before has copyright so directly confronted individuals in their private lives. Copyright is omnipresent. But what has to be understood as well is that copyright is also correspondingly over-extended.[42]

[12]     Technological and cultural change played a central role in lowering the acceptance of an over-expansive copyright paradigm. Ubiquitous technology, cost minimization, and the emergence of fan authorship radically affect the traditional market failure that copyright is supposed to cure, both at the creation and distribution levels. The distributive power of the Internet instituted new economics of distribution for digital content.[43] With marginal costs of distribution and reproduction close to zero, the need for third-party investment is potentially eliminated, or at least strongly reduced. In The Creative Destruction of Copyrights, Raymond Ku wonders whether a copyright monopoly at close to zero marginal cost is still a sustainable option.[44] Ku concludes that, absent the need to encourage content distribution, the artificial scarcity and exclusive rights created by copyright can find no other social reason for existence.[45] When distributors’ rights are unbundled from creators’ rights, society can no longer support the protection of distributors’ rights.[46] Under these circumstances, copyright would serve no social purpose other than transferring wealth from the public to distributors.[47] Therefore, in Ku’s view, copyright in the digital environment is a meaningless burden for society and should be eliminated.[48] As radical as Ku’s position may be, if technological innovation has substantially reduced the costs of producing, reproducing, and distributing cultural artifacts, a strong case can be made against any position asserting that the copyright monopoly should expand.

[13]     Reproduction and distribution cost minimization also affects the traditional discourse regarding the incentive to create.[49] Reductions in the production and distribution costs of original expressive works encourage non-professional authors to create.[50] Therefore, the number of authors for whom the lucre of copyright proves a necessary stimulus should drop. Additionally, low marginal costs empower individual authors to reach a broader audience.[51] If decentralized, non-professional authors increasingly satisfy market demand because non-monetary incentives stimulate their creation, a copyright monopoly will eventually prove superfluous, at least for these works.[52] With respect to creative works provided by decentralized, non-professional authors, the burdens of a copyright monopoly will exceed its benefits.[53]

[14]     This crisis propelled a cultural copyright resistance movement. Neelie Kroes stressed that copyright fundamentalism has prejudiced our capacity to explore new models in the digital age:

So new ideas which could benefit artists are killed before they can show their merit, dead on arrival. This needs to change…So that’s my answer: it’s not all about copyright. It is certainly important, but we need to stop obsessing about that. The life of an artist is tough: the crisis has made it tougher. Let’s get back to basics, and deliver a system of recognition and reward that puts artists and creators at its heart.[54]

[15]     The digital opportunity led many to denounce the traditional copyright monopoly as obsolete and to seek more radical reform. In 1994, John Perry Barlow’s manifesto laid out the necessity of re-thinking digitized intellectual property and radically noted that: “[i]n the absence of the old containers, almost everything we think we know about intellectual property is wrong”.[55] Nicholas Negroponte reinforced Barlow’s point by stating that “[c]opyright law is totally out of date…[i]t is a Gutenberg artifact…[s]ince it is a reactive process, it will have to break down completely before it is corrected.”[56] Recently, the Hargreaves report noted that archaic copyright laws “obstruct[] innovation and economic growth[.]”[57] In a message delivered to the G20 leaders, the President of Russia, Dmitry Medvedev, pointed out that “[t]he old principles of intellectual property protection established in a completely different technological context do not work any longer in an emerging environment, and, therefore, new conceptual arrangements are required for international regulation of intellectual activities on the Internet.”[58]

[16]     Many highlighted the necessity of re-shaping present copyright laws[59] or abolishing them altogether.[60] In particular, a growing copyright “abolitionism” emerged online in response to a worrying tendency to criminalize the younger generation and new models of online digital creativity, such as mash-up, fanfiction, or machinima.[61] The Committee on Intellectual Property Rights and the Emerging Information Infrastructure considered the notion that copying might not be an appropriate mechanism for achieving the goals of copyright in the digital age.[62] Among the inadequacies, the Committee highlighted that “in the digital world copying is such an essential action, so bound up with the way computers work, that control of copying provides, in the view of some, unexpectedly broad powers, considerably beyond those intended by the copyright law.”[63] Sharing is essential to emerging digital culture. Young generations digitize, share, rip, mix, burn, and share again as a basic form of human interaction. Increasingly, many social forces maintain that full recognition of a non-commercial right to share creative works should be the goal of modern policies for digital creativity. At the same time, criminalization of Internet users by cultural conglomerates is a source of social tension.[64] At the WIPO Global Meeting on Emerging Copyright Licensing Modalities: Facilitating Access to Culture in the Digital Age, Lessig called for an overhaul of the copyright system, arguing that it would “never work on the internet”: “[i]t’ll either cause people to stop creating or it’ll cause a revolution.”[65]

[17]     Resistance to copyright lies at the crossroads of academic investigation, civic involvement, and political activity. As Michael Strangelove argued in The Empire of Mind, the Internet set in motion an anti-capitalistic movement resistant to authoritarian forms of consumer capitalism and globalization.[66] This movement is “resisting the resistance” to change, resisting copyright, seeking access to knowledge, and promoting the public domain. Creative Commons (CC), the Free Software Foundation, and the Open Source movement[67] propelled the diffusion of viable market alternatives to traditional copyright management. The “power of open,” as Catherine Casserly and Joi Ito have termed Creative Commons, has spread quickly, with more than four hundred million CC-licensed works available on the Internet.[68] Again, mostly driven by scholarly efforts, projects like the Access to Knowledge (A2K) Movement, the Open Access Publishing Movement, and the Public Domain Project lead the resistance to copyright over-expansion by seeking to re-define the hierarchy of priorities embedded in the traditional politics of intellectual property.[69] Meanwhile, proposals for reform tackled the uneasy coexistence between copyright, digitization, and the networked information economy.[70] I will discuss these proposals first and later discuss the social movements resisting the resistance.

A. Copyright Terms, Formalities and Registration Systems

[18]     As suggested by some scholars, a potential solution to the weaknesses of the current copyright regime is a setting in which published works are not copyrighted unless the authors comply with specific formalities. These formalities should be very simple, cheap, and non-discriminatory with respect to national versus foreign authors.[71]

[19]     The international community was persuaded to abolish most discriminatory hurdles in the analog world; similarly, the digital era may provide opportunities for creativity in adapting formalities.[72] The idea of a global online copyright registry for creative works is increasingly gaining momentum.[73] A carefully crafted registration system may enrich the public domain, enhance access and reuse, and avoid transaction costs burdening digital creativity and digitization projects.[74] Today, state-of-the-art technology enables the creation of global digital repositories that ensure the integrity of digital works, render filings user-friendly and inexpensive, and enable searches on the status of any creative work.[75] Registration could be a precondition for full protection: registered creators would enjoy full ownership rights, while, absent registration, the default level of protection would be limited to the moral right of attribution. Alternatively, if making global registration, rather than notice, a precondition for protection is considered too harsh a requirement, then registration might at least be required as a precondition for extending protection.

[20]     In particular, registries and data collection should ease the orphan works problem.[76] Measures to improve the provision of rights management information range from encouraging digital content metadata tagging and promoting the use of CC-like licenses to encouraging the voluntary registration of rights ownership information in specifically designed databases.[77] Many projects aim at increasing the supply of rights management information to the public, merging unique sources of rights information, and establishing specific databases for orphan works. Notably, the EU-mandated project ARROW (Accessible Registries of Rights Information and Orphan Works) includes national libraries, publishers, writers’ organizations, and collective management organizations. It aspires to find ways of identifying rights holders, determining and clearing rights, and possibly confirming the public domain status of a work.[78]

[21]     Marco Ricolfi’s Copyright 2.0 proposal is a specific articulation of an alternative copyright default rule, coupled with the implementation of a formality and registration system.[79] Similar proposals have been made by other scholars, such as Lessig.[80] In Ricolfi’s Copyright 2.0, traditional copyright, or Copyright 1.0, is still available. In order to be enjoyed, Copyright 1.0 has to be claimed by the creator at the onset, for example by inserting a copyright notice before the first publication of a work.[81] Under certain conditions, the Copyright 1.0 notice could also be added after the first publication, possibly during a short grace period.[82] The Copyright 1.0 protection given by the original notice is deemed withdrawn after a specified short period of time, unless an extension is formally requested through an Internet-based renewal and registration procedure, whose registration data would be accessible online.[83] If no notice is given, Copyright 2.0 applies, giving creators mainly one right: the right to attribution.[84]

B. Mandatory Exceptions and Diligent Search for Orphan Works and UGC

[22]     Neelie Kroes warns against the welfare loss caused when the immense cultural riches unveiled by digitization remain locked behind the intricacies of an outdated copyright model.[85]

Think of the treasures that are kept from the public because we can’t identify the right-holders of certain works of art. These “orphan works” are stuck in the digital darkness when they could be on digital display for future generations. It is time for this dysfunction to end.[86]

[23]     Institutional proposals in both Europe and the United States advocate the implementation of a diligent search system as a defense to copyright infringement. A report from the United States Copyright Office recommended that Congress enact legislation to limit liability for copyright infringement if the alleged infringer performed “a reasonably diligent search” before any use.[87] Additionally, the Copyright Office laid down several suggestions to promote privately-operated registries as a more efficient arrangement than government-operated registries. The Copyright Office’s recommendations were included in the Orphan Works Act of 2006, and again in the Orphan Works Act of 2008.[88] So far, neither bill has been adopted into law. The High Level Expert Group on the European Digital Libraries Initiative made similar recommendations:

Member States are encouraged to establish a mechanism to enable the use of such works for non-commercial and commercial purposes, against agreed terms and remuneration, when applicable, if diligent search in the country of origin prior to the use of the works has been performed in trying to identify the work and/or locate the rightholders…The mechanisms in the Member States need to fulfill prescribed criteria… the solution should be applicable to all kinds of works; a bona fide/good faith user needs to conduct a diligent search prior to the use of the work in the country of origin; best practices or guidelines specific to particular categories of works can be devised by stakeholders in different fields.[89]

[24]     The system should be based on reciprocity so that Member States will recognize solutions in other Member States that fulfill the prescribed criteria. As a result, materials that are lawful to use in one Member State would also be lawful to use in another. Partially endorsing these principles, a Directive on certain permitted uses of orphan works has recently been enacted in the European Union.[90]

[25]     In Europe, the most comprehensive proposal for an orphan works’ mandatory exception is outlined in a paper for the Gowers Review by the British Screen Advisory Committee (BSAC).[91] This proposal sets up a compensatory liability regime.[92] First, to trigger the exception, a person is required to have made ‘best endeavours’ to locate the copyright owner of a work.[93] ‘Best endeavours’ would be judged against the particular circumstances of each case. The work must also be marked as used under the exception to alert any potential rights owners.[94] If a rights owner emerges, he is entitled to claim a ‘reasonable royalty’ agreed upon by negotiation, rather than sue for infringement. If the parties cannot reach agreement, a third party steps in to establish the royalty amount. The terms of use of the formerly-orphan work would need to be negotiated between the user and the rights owner, according to the traditional copyright rules. However, users should be allowed to continue using the work that has been integrated or transformed into a derivative work, contingent upon payment of a reasonable royalty and sufficient attribution. Slightly modified versions of the U.S. and European models have also been investigated. For example, Canada established a compulsory licensing system based on diligent searches to use orphan works.[95]

[26]     In addition to orphan works, user-generated content (UGC) is another massive phenomenon that struggles with present copyright law. Mandatory exceptions have been proposed as a solution for user-generated content, together with the use of informal copyright practices.[96] Proposals have been made for introducing an exception for transformative use in user-generated works.[97] Both specific and general exception clauses have been under discussion.[98] Canada introduced a specific exception to this effect, allowing the use of a protected work—which has been published or otherwise made available to the public—in the creation of a new work, if the use is done solely for non-commercial purposes and does not have substantial adverse effects on the potential market for the original work.[99] Likewise, European institutions and stakeholders have recently discussed specific exceptions for UGC, after sidelining proposals for micro-licensing arrangements.[100] In a narrower context, the U.S. Copyright Office rulemaking on the Digital Millennium Copyright Act (DMCA) anti-circumvention provisions recently introduced an exception for the use of movie clips for transformative, non-commercial works, bringing a breath of fresh air to the world of ‘vidding’.[101] Also, general fair use exception clauses, if properly construed, may prove effective in giving UGC creators some breathing space.[102] In particular, recent U.S. case law protects UGC creators from bogus DMCA takedown notices in cases of blatant misrepresentation of fair use defences by copyright holders. In Lenz v. Universal Music, the Ninth Circuit ruled that “the statute requires copyright holders to consider fair use before sending takedown notification.”[103] The Court also recognized the possible applicability of section 512(f) of the DMCA, which allows for the recognition of damages in case of proven bad faith, which would occur if the copyright holder did not consider fair use or paid “lip service to the consideration of fair use by claiming it formed a good faith belief when there is evidence to the contrary.”[104]

C. Extended and Mandatory Collective Management

[27]     Extended Collective Licenses (ECL) are applied in various sectors in Denmark, Finland, Norway, Sweden, and Iceland.[105] The ECL arrangement has become a tempting policy option in several jurisdictions, both to tackle the orphan works problem and to address the larger issue of file sharing in digital networks.[106] In particular, a recent draft directive would apply this collective management mechanism to the use of out-of-commerce works by cultural heritage institutions.[107]

[28]     The system combines the voluntary transfer of rights from rights holders to a collective society with the legal extension of the collective agreement to third parties who are not members of the collective society. However, for the agreement to be extended to third parties of the same category, the collective society must represent a substantial number of rights holders.[108] In any event, the legislation in Nordic countries provides the rights holders with the option of claiming individual remuneration or opting out of the system.[109] Therefore, with the exception of the rights holders who opted out, the extended collective license automatically applies to all domestic and foreign rights owners, unknown or untraceable rights holders, and deceased rights holders, even where estates have yet to be arranged. With an extended collective licensing scheme in place, a user may obtain a license to use all the works included in a certain category, with the exception of the opted-out works. Re-users of existing works should have no legal concerns: all orphan works will be covered by the license, and opted-out works instantly cease to be orphan. If ECL is applied to legitimize file-sharing, collective management bodies will negotiate the license with users’ associations or internet service providers (ISPs). In exchange for the right of reproducing content and making it available online, rights holders will be remunerated out of the proceeds collected through the extended collective license. A related proposal would place the right to make available to the public under mandatory collective management.[110] According to this proposal, to enjoy the economic rights attached to the right of making available to the public, rights holders would be obligated to use collective management. As a consequence, the ISPs would pay a lump-sum fee or levy to the collective societies in exchange for the authorization to make the collective society’s entire managed repertoire available to users for download.[111] The money collected would then be redistributed to the rights holders.

[29]     However, courts have expressed hesitation in endorsing the ECL opt-out mechanism (as seen in the Google Books case).[112] A recent ECJ decision ruled against this arrangement while reviewing a French law that regulated the digital exploitation of out-of-print 20th-century books.[113] This French law gave approved collecting societies the right to authorize the reproduction and digital representation of out-of-print books.[114] Meanwhile, the law provided authors—or their successors in title—with an opt-out mechanism subject to certain conditions. In Soulier, the ECJ declared the French law incompatible with European law,[115] which provides authors—not collecting societies—with the right to authorize the reproduction and communication to the public of their works.[116] The Soulier decision might have far-reaching effects for the EU directive proposal—and more generally for all national systems of extended collective licensing that might be incompatible with EU law. The successful implementation of the directive proposal might remain the sole option for keeping ECL arrangements in place by redressing this judicial interpretation.

D. Alternative Compensation Systems or Cultural Flat Rate

[30]     As Volker Grassmuck noted, “the world is going flat(-rate).”[117] In search of alternative remuneration systems, researchers, activists, consumer organizations, artist groups, and policy makers have proposed to finance creativity on a flat-rate basis. In the past, levies on recording devices and media were set up upon the acknowledgment that private copying cannot be prevented.[118] The same reasoning applies to the introduction of a legal permission to copy and make available copyrighted works for non-commercial purposes on the Internet.[119] Flat-rate proposals favor a sharing ecology that is best suited to the networked information economy.[120] A recent study of the Institute of European Media Law has argued that this may be “no[thing] less than the logical consequence [of] the technical revolution [introduced] by the internet.”[121] The COMMUNIA study also described the minimum requirements for a cultural flat-rate as follows: “(i) a legal license permitting private individuals to exchange copyright works for non-commercial purposes; (ii) a levy, possibly collected by ISPs, flat, possibly differentiated by access speed; and (iii) a collective management, i.e. a mechanism for collecting the money and distributing it fairly.”[122]

[31]     Several flat-rate models have been proposed.[123] Some see the flat-rate payment by Internet subscribers as similar to private copying levies managed by collecting societies, while others want to put in place an entirely new reward system, giving the key role to Internet users themselves.[124] A non-commercial use levy permitting non-commercial file sharing of any digitized work was first proposed by Professor Neil Netanel.[125] Such a levy would be imposed on the sale of any consumer electronic devices used to copy, store, send or perform shared and downloaded files, but also on the sale of internet access and P2P software and services.[126] An ad hoc body would be in charge of determining the amount of the levy.[127] The proceeds would be distributed to copyright holders in proportion to the popularity of their works, as measured by tracking and monitoring technologies.[128] Users could freely copy, circulate, and make non-commercial use of any works that the rights holder has made available on the Internet. William Fisher followed up on Netanel with a more refined and comprehensive proposal.[129] Creators’ remuneration would still be collected through levies on media devices and Internet connections.[130] In Fisher’s system, however, a governmentally administered registrar for digital content, or alternatively a private organization, would be in charge of the management of creative works in the digital environment.[131] Digitized works would be registered with the Registrar and embedded with digital watermarks. Tracking technologies would measure the popularity of the works circulating online.[132] The Registrar would then redistribute the proceeds to the registered rights holders according to popularity. Philippe Aigrain proposed a “creative contribution” encompassing a global license to share published digital works in the form of an ECL or, absent an agreement, of legal licensing.[133] Remuneration would be provided by a flat rate paid by all Internet subscribers.[134] Half of the money collected would be used for the remuneration of works shared over the Internet, distributed according to their popularity.[135] Measurement of popularity would be based on a large panel of voluntary Internet users transmitting anonymous data on their usage to collective management societies.[136] The other half of the money collected would be devoted to funding the production of new works and the promotion of added-value intermediaries in the creative environment.[137] Another suggestion among flat-rate models is Peter Sunde’s Flattr “micro-donations” scheme: an internet user would give between 2 and 100 euros per month and could then nominate works that they wish to reward or “flattr,” a play on the words “flatter” and “flat-rate.”[138] Finally, the “German and European Green Parties included in their policy agenda the promotion of a cultural flat rate to decriminalise P2P users, remunerate creativity and relieve the judicial system and the ISPs from mass-scale prosecution.”[139] The “Green Party’s proposal has been backed up by the mentioned EML study that found that a levy on Internet usage legalizing non-commercial online exchanges of creative works conforms with German and European copyright law, even though it requires changes in both.”[140]
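
The allocation arithmetic shared by these levy proposals can be stated compactly. In a stylized formulation (a simplified illustration, not a formula drawn from Netanel, Fisher, or Aigrain), a collected fund F is divided among n participating works in proportion to each work’s measured popularity p_i, with a share α of the fund devoted to remuneration:

\[ r_i = \alpha F \cdot \frac{p_i}{\sum_{j=1}^{n} p_j} \]

On the accounts given above, α would equal one under Netanel’s and Fisher’s schemes and one half under Aigrain’s creative contribution, with the remainder funding new works and added-value intermediaries. For instance, if a flat rate collected 100,000 euros in a month and the usage panel attributed a 2% popularity share to a given work, its rights holder would receive 1,000 euros under Aigrain’s half-share allocation.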

IV. The Access to Knowledge (A2K) Movement

[32]     As Nelson Mandela once noted, “[e]liminating the distinction between the information rich and information poor is…critical to eliminating economic and other inequalities between North and South, and to improving the life of all humanity.”[141] “Access to learning and knowledge…[are] key elements towards the improvement of the situation of under-privileged countries…”[142] Extreme copyright expansion and constant cultural appropriation, together with a dysfunctional access to scientific and patented knowledge, heightened the North-South cultural divide. The Global South has been exposed to the effects of a pernicious form of cultural imperialism, without the advantages of freely reusing that culture for its own growth. The Vatican noted that

[o]n the part of rich countries there is excessive zeal for protecting knowledge through an unduly rigid assertion of the right to intellectual property, especially in the field of health care. At the same time, in some poor countries, cultural models and social norms of behaviour persist which hinder the process of development.[143]

[33]     The issue of access to knowledge was first publicly expressed by the Brazilian government in a 1961 draft resolution.[144] In recent years, access to knowledge has returned as a question of major international concern. Access to Knowledge (A2K) is a globalized movement aimed at promoting redistribution of informational resources in favor of minorities and the Global South.[145] In 2006, the Yale Information Society Project held an A2K conference committed “to building a broad conceptual framework of ‘Access to Knowledge’ that can foster powerful coalitions between diverse groups.”[146] Yale’s 2007 A2K conference aimed to “further build the coalition amongst institutions and stakeholders” from the 2006 conference.[147] The Consumer Project on Technology (CPT) says that A2K:

takes concerns with copyright law and other regulations that affect knowledge and places them within an understandable social need and policy platform: access to knowledge goods…The rich and the poor can be more equal with regard to knowledge goods than to many other areas.[148]

[34]     Under the umbrella of Article 27 of the Universal Declaration of Human Rights, several working projects at the international level have been set up to address the requests of the A2K movement.[149] As part of the discussions leading to the adoption of the WIPO Development Agenda,[150] activists produced a document to start negotiations on a Treaty on Access to Knowledge.[151] The proposed treaty is based on the core idea that “restrictions on access ought to be the exception, not the other way around,” and that “both subject matter exclusions from, and exceptions and limitations to, intellectual property protection standards are mandatory rather than permissive.”[152] Unfortunately, consensus on the A2K Treaty remains elusive. However, after a long battle,[153] a narrow version of the A2K Treaty, promoting the use of protected works by disabled persons, was signed in Marrakesh in 2013.[154]

[35]     The quest for access to knowledge goes hand in hand with the desire of the Global South and minorities to reclaim cultural identity from imperialist power. The search for cultural distinctiveness and access to knowledge becomes a paradigm of equality.[155] Although international agreement from all stakeholders on an A2K Treaty may be hard to reach, grass-roots movements have pursued similar goals through different routes. A quest for open access to academic knowledge has occupied the recent agenda of a global network of institutions and stakeholders.

V. From “Elite-nment” to Open Knowledge Environments

[36]     In a momentous speech at the European Organization for Nuclear Research (CERN) in Geneva, Professor Lawrence Lessig reminded the audience of scientists and researchers that most scientific knowledge is locked away for the general public and can only be accessed by professors and students in a university setting.[156] Lessig pungently made the point that “if you are a member of the knowledge elite, then there is free access, but for the rest of the world, not so much…publisher restrictions do not achieve the objective of enlightenment, but rather the reality of ‘elite-nment.’”[157]

[37]     Other authors have reinforced this point. John Willinsky, for example, suggested that, as its key contribution, open access publishing (OAP) models may move “knowledge from the closed cloisters of privileged, well-endowed universities to institutions worldwide.”[158] As Willinsky noted, “[o]pen access could be the next step in a tradition that includes the printing press and penny post, public libraries and public schools. It is a tradition bent on increasing the democratic circulation of knowledge…”[159] There is a common understanding that the path to digital enlightenment may start with open access to scientific knowledge.

[38]     The open access movement in scholarly publishing was inspired by the dramatic increase in prices for journals and publisher restrictions on the reuse of information.[160] The academics’ reaction against the ‘cost of knowledge’—also known as the serials crisis—is on the rise, especially against the practice of charging “exorbitantly high prices for…journals,” and of “agree[ing] to buy very large ‘bundles.’”[161] As Reto Hilty noted, the price increase of publishers’ products—while publishers’ costs have sunk dramatically—has forced the scientific community to react by implementing open access options, because antiquated copyright laws have failed to bring about a reasonable balance of interests.[162] George Monbiot stressed the unfairness of the academic publishing system by noting, with specific reference to publishers such as Elsevier, Springer, or Wiley-Blackwell:

[w]hat we see here is pure rentier capitalism: monopolising a public resource then charging exorbitant fees to use it. Another term for it is economic parasitism. To obtain the knowledge for which we have already paid, we must surrender our feu to the lairds of learning.[163]

[39]     The parasitism lies in a monopoly over content that the academic publishers do not create and do not pay for. Researchers, hoping to publish in reputable journals, surrender their copyrights for free.[164] Most of the time, the production of that very content—now monopolized by the academic publishers—was funded by the public, through government research grants and academic incomes.[165] This led some authors to discuss the opportunity of abolishing copyright for academic works altogether.[166] From the ancient proverbial idea of scientia donum dei est unde vendi non potest to the emergence of the notion of ‘open science’, the normative structure of science presents an unresolvable tension with the exclusive and monopolistic structure of intellectual property entitlements. Merton powerfully emphasized the contrast between the ethos of science and intellectual property monopoly rights:

“Communism,” in the nontechnical and extended sense of common ownership of goods, is a second integral element of the scientific ethos. The substantive findings of science are a product of social collaboration and are assigned to the community. They constitute a common heritage in which the equity of the individual producer is severely limited. An eponymous law or theory does not enter into the exclusive possession of the discoverer and his heirs, nor do the mores bestow upon them special rights of use and disposition. Property rights in science are whittled down to a bare minimum by the rationale of the scientific ethic. The scientist’s claim to “his” intellectual “property” is limited to that of recognition and esteem which, if the institution functions with a modicum of efficiency, is roughly commensurate with the significance of the increments brought to the common fund of knowledge.[167]

[40]     The major impetus for open access at the European level came from the Berlin Conferences. The first Berlin Conference was organized in 2003 by the Max Planck Society and the European Cultural Heritage Online (ECHO) project to discuss ways of providing access to research findings.[168] Annual follow-up conferences have been organized ever since. The most significant result of the Berlin Conference was the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (“Berlin Declaration”), including the goal of disseminating knowledge through the open access paradigm via the Internet.[169] The Berlin Declaration has been signed by hundreds of European and international institutions. OAP is a publishing model in which the researcher, the institution, or the party financing the research pays for publication, and the article is then freely accessible. In particular, OAP refers to free and unrestricted world-wide electronic distribution and availability of peer-reviewed journal literature.[170] The Budapest Open Access Initiative uses a definition that includes free reuse and redistribution of “[o]pen [a]ccess” material by anyone.[171] According to Peter Suber, the de facto spokesperson of the OAP movement, “[o]pen access (OA) is free online access. OA literature is not only free of charge to everyone with an Internet connection, but free of most copyright and licensing restrictions. OA literature is barrier-free literature produced by removing the price barriers and permission barriers that block access and limit usage of most conventionally published literature, whether in print or online.”[172]

[41]     Since the inception of the open-access initiative in 2001, almost eleven thousand open access journals have been established, and their number is constantly rising.[173] In addition, several leading international academic institutions endorsed open-access policies and are working towards mechanisms to cover open-access journals’ operating expenses.[174] The same approach is increasingly followed by governmental institutions,[175] in light of the fact that economic studies have shown a positive net value of OAP models when compared to other publishing models.[176] The European Commission, for example, plans to make OAP the norm for research receiving funding from its Horizon 2020 programme—the EU framework programme for research and innovation.[177] As part of its Innovation and Research Strategy for Growth, the UK government has announced that all publicly funded scientific research must be published in open-access journals.[178] In the US, several research funding agencies have instituted open access conditions.[179] After an initial voluntary adoption in 2005, the Consolidated Appropriations Act of 2008[180] instituted an open access mandate for research projects funded by the NIH.[181] So far, the NIH has reported a compliance rate of 75%.[182] Together with research articles, data, teaching materials, and the like, the importance of open access models extends also to books. Millions of historic volumes are now openly accessible from various digitization projects such as Europeana, Google Books, or HathiTrust. In addition, many recent volumes are also openly available from a variety of academic presses, government and nonprofit agencies, and other individuals and groups. Libraries’ cataloging data are increasingly released under open access models.[183]

[42]     Criticizing universities for having become part of the problem of enclosure of the scientific commons by “avidly defending their rights to patent their research results, and licence as they choose,” Richard Nelson argues that “the key to assuring that a large portion of what comes out of future scientific research will be placed in the commons is staunch defense of the commons by universities.”[184] Nelson continues by arguing that if universities “have policies of laying their research results largely open, most of science will continue to be in the commons.”[185] The academic community bears a true responsibility for expanding OAP. The role of universities in the open access and OAP movement is critical: more than any other institution, they have motive to promote the goals of “open science.” Willinsky advocated the idea that scholars have a responsibility to make their work available OA globally by referring to an ‘access principle’ and noting that “[a] commitment to the value and quality of research carries with it a responsibility to extend the circulation of such work as far as possible and ideally to all who are interested in it and all who might profit by it.”[186] In this sense, the true challenge ahead of the OAP movement is to turn university environments, and the knowledge produced within them, into a more easily and freely accessible public good, perhaps better integrating the OAP movement with Open University and Open Learning.

[43]     Seeking to reap the full value that open access can yield in the digital environment, Jerome Reichman and Paul Uhlir proposed a model of open knowledge environments (OKEs) for digitally networked scientific communication.[187] OKEs would “bring the scholarly communication function back into the universities” through “the development of interactive portals focused on knowledge production and on collaborative research and educational opportunities in specific thematic areas.”[188] Also, OKEs might reshape the role of libraries. As mentioned earlier, libraries are knowledge infrastructures and should be one of the main drivers of access to knowledge in the digital networked society. However, extreme commodification of information, propelled by the present legal framework, may drive libraries away from their function as knowledge repositories. As Guy Pessach noted,

[l]ibraries are increasingly consuming significant shares of their knowledge goods from globalized publishers according to the contractual and technological protection measures that these publishers impose on their digital content. Thus there is an unavoidable movement of enclosure regarding the provision of knowledge through libraries, all in a manner that practically compels libraries to take part in the privatization of knowledge supply and distribution.[189]

[44]     Therefore, the road to global access to knowledge is to provide digital libraries with a better framework to support their independence from the increasing commodification of knowledge goods. Several preliminary steps have been taken in the context of articles 3-1(V) and 3-1(VIII) of the WIPO A2K draft treaty and other legal instruments.[190] A World Digital Public Library that integrates OKEs will advance the rediscovery of currently unused or inaccessible works, open up the riches of knowledge in formats that are accessible to persons with disabilities, and empower a superior democratic process by favoring access regardless of users’ market power.

VI. The Emergence of the Public Domain[191]

[45]     As Jessica Litman noted, “a vigorous public domain is a crucial buttress to the copyright system; without the public domain, it might be impossible to tolerate copyright at all.”[192] The increasing enclosure of the public domain has contributed to the crisis of acceptance into which copyright law has fallen. The emergence and recognition of the public domain, the development of a public domain project, and the advent of a movement for cultural environmentalism are key elements of the resistance to copyright over-expansion. More fundamentally perhaps, the emphasis on the importance of the public domain has gained momentum together with the rise of the networked information economy and its ethical revolution emphasizing mass collaboration, the sharing economy, and gift exchange. In this respect, Daniel Drache noted that the emergence of the public domain and public goods in the globalized society has increasingly troubled the future prospects of ‘market fundamentalism.’[193]

[46]     Authors suggested that the Statute of Anne actually created the public domain, by limiting the duration of protected works and by introducing formalities.[194] However, in early copyright law, there was no positive term to affirmatively refer to the public domain, though terms like publici juris or propriété publique had been employed by 18th-century jurists.[195] Nonetheless, the fact of the public domain was recognized, though no single locution captured that concept. Soon, the idea of the public domain evolved into a “discourse of the public domain—that is, the construction of a legal language to talk about public rights in writings.”[196] Historically, the term public domain was first employed in France by the mid-19th century to mean the expiration of copyright.[197] The English and American copyright discourse borrowed the term around the time of the drafting of the Berne Convention with the same meaning.[198] Traditionally, the public domain has been defined in relation to copyright as the opposite of property, as the “other side of the coin of copyright” that “is best defined in negative terms”.[199] This traditional definition regarded the public domain as a “wasteland of undeserving detritus” and did not “worry about ‘threats’ to this domain any more than [it] would worry about scavengers who go to garbage dumps to look for abandoned property.”[200] This is no more. This definitional approach has been discarded in the last thirty years.

[47]     In 1981, Professor David Lange published his seminal work, Recognizing the Public Domain, and departed from the traditional line of investigation of the public domain. Lange suggested that “recognition of new intellectual property interests should be offset today by equally deliberate recognition of individual rights in the public domain.”[201] Lange called for an affirmative recognition of the public domain and drafted the skeleton of a new theory for the public domain. The public domain that Lange had in mind would become a “sanctuary conferring affirmative protection against the forces of private appropriation” that threatened creative expression.[202] The affirmative public domain was a powerfully attractive idea for scholarly literature and civic society. Lange spearheaded a “conservancy model,” concerned with promoting the public domain and protecting it against any threats, in opposition to the traditional “cultural stewardship model,” which regarded ownership as the prerequisite of productive management.[203] The positive identification of the public domain propelled the “public domain project,” as Michael Birnhack called it.[204]

[48]     Many authors attempted to define, map, and explain the role of the public domain as an alternative to the commodification of information that threatened creativity.[205] This ongoing public domain project offers many definitions that attempt to positively construe the public domain. In any event, a positive, affirmative definition of the public domain is fluid by nature. An affirmative definition of the public domain is a political statement, the endorsement of a cause. In other words, “[t]he public domain will change its shape according to the hopes it embodies, the fears it tries to lay to rest, and the implicit vision of creativity on which it rests. There is not one public domain, but many.”[206] Notwithstanding many complementary definitions, consistency is found in the common idea that the “materials that compose our cultural heritage must be free for all to use no less than matter necessary for biological survival.”[207] As a corollary, many modern definitions of the public domain are unified by concerns over recent copyright expansionism. The common understanding of the participants in the public domain project is that enclosure of the “materials that compose our cultural heritage” is a welfare loss against which society at large must be guarded.[208] The modern definitional approach endorsed by the public domain project is intended to turn the old metaphor, describing the public domain as what is “left over after intellectual property had finished satisfying its appetite,”[209] upside down by thinking of copyright as “a system designed to feed the public domain providing temporary and narrowly limited rights…all with the ultimate goal of promoting free access.”[210] Moreover, the public domain envisioned by recent legal, public policy and economic analysis becomes “the place we quarry the building blocks of our culture.”[211] However, the construction of an affirmative idea of the public domain should always consider that the abstraction of the public domain is slippery.[212] That affirmative notion must be embodied in a physical space that may be immediately protected and nourished. As Professor Lange puts it, “the problems will not be resolved until courts have come to see the public domain not merely as an unexplored abstraction but as a field of individual rights fully as important as any of the new property rights.”[213]

[49]     The modern public domain discourse owes much to the legal analysis of the governance of the commons, natural resources used by many individuals in common. Although the public domain and the commons are distinct concepts,[214] the similarities are many. Since the origin of the public domain discourse, the environmental metaphor has been widely used to refer to the cultural public domain.[215] Therefore, the traditional environmental conception of the commons was ported to the cultural domain and applied to intellectual property policy issues. Environmental and intellectual property scholars started to look at knowledge as a commons—a shared resource. In 2003, Nobel laureate Elinor Ostrom and her colleague Charlotte Hess discussed the applicability of their ideas on the governance and management of common pool resources to the new realm of the intellectual public domain.[216] Subsequent literature continued to develop the concept of cultural commons in the footsteps of Ostrom’s analyses.[217] The application of the physical commons literature to cultural resources brings a shift in approach and methodology from the previous discourse of the public domain. This different approach has been described as follows:

[t]he old dividing line in the literature on the public domain had been between the realm of property and the realm of the free. The new dividing line, drawn as a palimpsest on the old, is between the realm of individual control and the realm of distributed creation, management, and enterprise.[218]

[50]     Under this conceptual scheme, restraint on use may no longer be an evil, but a necessity of a well-run commons. The individual, legal, and market based control of the property regime is juxtaposed with the collective and informal controls of the well-run commons.[219] The well-run commons can avoid the “tragedy of the commons” without the need for single party ownership.

[51]     The movement to preserve the environmental commons inspired a new politics of intellectual property.[220] The environmental metaphor has propelled what can be termed a cultural environmentalism.[221] Several authors, spearheaded by Professor James Boyle, have cast a defense of the public domain on the model of the environmental movement. Morphing the public domain into the commons, and casting the defense of the public domain on the model of the environmental movement, has the advantage of embodying the public domain in a much more physical form, minimizing its abstraction and the related difficulty of actively protecting it.[222] The primary focus of cultural environmentalism is to develop a discourse that will make the public domain visible.[223] Before the movement, the environment was invisible. Therefore, “like the environment,” Boyle suggests, echoing David Lange, “the public domain must be ‘invented’ before it can be saved.”[224] Today, the public domain has been “invented” as a positive concept, and the “coalition that might protect it,” evoked if not called into being by scholars more than a decade ago, has formed.[225] Many academic and civic endeavors have joined and propelled this coalition.[226] Civic advocacy of the public domain and access to knowledge has also been followed by several institutional variants, such as the World Intellectual Property Organization’s “Development Agenda.”[227] Recommendation 20 of the Development Agenda endorses the goal “[t]o promote norm-setting activities related to IP that support a robust public domain in WIPO’s Member States.”[228] Europe put together a diversified network of projects for the protection and promotion of the public domain and open access.[229] As a flagship initiative, the European Union has promoted COMMUNIA, the European Thematic Network on the Digital Public Domain.[230] Several COMMUNIA members embodied their vision in the Public Domain Manifesto.[231] In addition, other European policy statements endorsed the same core principles of the Public Domain Manifesto. The Europeana Foundation has published the Public Domain Charter to stress the value of public domain content in the knowledge economy.[232] The Free Culture Forum released the Charter for Innovation, Creativity and Access to Knowledge, pleading for the expansion of the public domain, the accessibility of public domain works, the contraction of the copyright term, and the free availability of publicly funded research.[233] The Open Knowledge Foundation launched the Panton Principles for Open Data in Science to endorse the concept that “data related to published science should be explicitly placed in the public domain.”[234]

[52]     The focus of cultural environmentalism has been magnified in online commons and the Internet as the “über-commons—the grand infrastructure that has enabled an unprecedented new era of sharing and collective action.”[235] In the last decade, we have witnessed the emergence of a “single intellectual movement, centered on the importance of the commons to information production and creativity generally, and to the digitally networked environment in particular.”[236] According to David Bollier, the commoners have emerged as a political movement committed to freedom and innovation.[237] The “commonist” movement created a new order that is embodied in countless collaborative online endeavors.

[53]     The emergence and growth of an environmental movement for the public domain and, in particular, the digital public domain, is morphing the public domain into our cultural commons. We must look at it as a shared resource that cannot be commodified, much like our air, water, and forests. As with the natural environment, the public domain and the cultural commons that it embodies must enjoy sustainable development. As with our natural environment, the need to promote a “balanced and sustainable development” of our cultural environment is a fundamental right that is rooted in the Charter of Fundamental Rights of the European Union.[238] Overreaching property theory and overly protective copyright law disrupt the delicate tension between access and protection. Unsustainable cultural development, enclosure, and commodification of our cultural commons will produce cultural catastrophes. As unsustainable environmental development has polluted our air, contaminated our water, mutilated our forests, and disfigured our natural landscape, unsustainable cultural development will ravage and corrupt our cultural heritage and information landscape.

VII. Conclusions

[54]     I would like to conclude my review of this movement “resisting the resistance” to the Digital Revolution by sketching out a roadmap for reform that builds upon its vision. This roadmap reshapes the interplay between community, law, and market to envision a system that may fully exploit the digital opportunity and looks to the history of creativity as a guide.[239] This proposal revolves around the pivotal role of the users in a modern system for enhancing creativity. The coordinates of the roadmap correlate to four different but interlinked facets of a healthy creative paradigm: namely, (a) the necessity to rethink users’ rights, in particular users’ involvement in the legislative process; (b) the emergence of a politics of the public domain, rather than a politics of intellectual property; (c) the need to make cumulative and transformative creativity regain its role through the re-definition of the copyright permission paradigm; and (d) the transition to a consumer gift system or user patronage, through digital crowd-funding.

[55]     The roadmap for reform emphasizes the role of the users. The Internet revolution is a bottom-up revolution. User-based culture defines the networked society, together with a novel concept of authorship mingling users and authors together. Therefore, the role of users in our legislative process and the relevance of user rights should be reinforced. So far, users have had very limited access to the bargaining table when copyright policies had to be enacted. This is due to the dominant mechanics of lobbying, which largely exclude users from any policy decisions. This has led to the implementation of a copyright system that is strongly protectionist and pro-distributor. In particular, the regulation of the Internet and the solutions given to the dilemmas posed by digitization may undermine the potential of this momentous change and limit positive externalities for users.

[56]     In the networked, peer, and mass-productive environment, creativity seeks a politics of inclusive rights, rather than exclusive ones. This is a paradigm shift that would re-define the hierarchy of priorities by thinking in terms of “cultural policy” and developing a politics of the public domain, rather than a politics of intellectual property. Before the recognition of any intellectual property interests, a politics of the public domain must set up the “deliberate recognition of individual rights in the public domain.”[240] It must provide positive protection of the public domain from appropriation. A politics of the public domain would reconnect policies for creativity with our past and our future, looking back at our tradition of cumulative creativity and looking forward at networked, mass-collaborative, user-generated creativity.[241]

[57]     In order to reconnect the creative process with its traditional cumulative and collaborative nature, a politics of inclusive rights and a politics of the public domain seek the demise of copyright exclusivity.[242] In my roadmap for reform, I argue for the implementation of additional mechanisms to provide economic incentives to create, such as a liability rule integrated into the system and an apportionment of profits. A politics of inclusivity would de-construct the post-romantic paradigm that over-emphasized creative individualism and absolute originality in order to adapt policies to networked and user-generated creativity.

[58]     Finally, I draw a parallel between traditional patronage, corporate patronage, and neo-patronage or user patronage as a re-conceptualization of a patronage system in line with the exigencies of an interconnected digital society.[243] In the future, support for creativity may increasingly derive from a direct and unfiltered exchange between authors and the public, who would become the patrons of our creativity. Remuneration through attribution, self-financing through crowd-funding, ubiquity of digital technology, and mass collaboration will keep the creative process in motion. This market transformation will facilitate a direct, unrestrained “discourse” between creators and the public. Yet, the role of distributors will be redefined and may partially disappear, making the transition long and uncertain.

* Senior Researcher and Lecturer, Centre for International Intellectual Property Studies (CEIPI), University of Strasbourg; Non-Resident Fellow, Stanford Law School, Center for Internet and Society. S.J.D., Duke University School of Law, Durham, North Carolina; LL.M., Duke University School of Law, Durham, North Carolina; LL.M., Strathclyde University, Glasgow, UK; J.D., Università Cattolica del Sacro Cuore, Milan, Italy. The author can be reached at gcfrosio@ceipi.edu.

[1] See Raymond S. R. Ku, The Creative Destruction of Copyrights: Napster and the New Economics of Digital Technology, 69 U. Chi. L. Rev. 263 (2002).

[2] Joseph A. Schumpeter, Capitalism, Socialism, and Democracy 82-83 (Harper and Row 1975) (1942).

[3] The Washington Declaration on Intellectual Property and the Public Interest, Infojustice.org (August 25-27, 2011), http://infojustice.org/washington-declaration-html, archived at https://perma.cc/W9U8-LUNA; see also Sebastian Haunss, The Politicisation of Intellectual Property: IP Conflicts and Social Change, 3 W.I.P.O.J. 129 (2011).

[4] See Douglas Rushkoff, Open Source Democracy 46-62 (DEMOS 2003); see also Yochai Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, 112 Yale L. J. 369, 371-372 (2002).

[5] See id. at 374.

[6] See Yochai Benkler, A Free Irresponsible Press: Wikileaks and the Battle over the Soul of the Networked Fourth Estate, 46 Harvard Civil Rights-Civil Liberties L. Rev. 311 (2011) (discussing the democratic functionality of Wikileaks).

[7] See Clay Shirky, Cognitive Surplus: Creativity and Generosity in a Connected Age 81-109 (The Penguin Press 2010).

[8] See, e.g., Rebecca Tushnet, Payment in Credit: Copyright Law and Subcultural Creativity, 70 Law & Contemp. Probs. 138 (2007); see also Theorizing Fandom: Fans, Subculture and Identity (Alison Alexander & Cheryl Harris eds., Hampton Press 1997); see generally Andrew L. Shapiro, The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World We Know (Public Affairs 1999).

[9] See, e.g., Denise E. Murray, Changing Technologies, Changing Literacy Communication, 2 Language Learning & Tech. 43 (2000).

[10] See, e.g., William Patry, Moral Panics and the Copyright Wars 27 (Oxford U. Press 2009) (explaining the impossibility of governments prosecuting all violations of copyright infringement in a peer-to-peer network).

[11] See id. at 27-28.

[12] See id. at 25-27.

[13] Id. at 5.

[14] See Patry, supra note 10, at 39.

[15] See Copyright in the Digital Era: Building Evidence for Policy, National Academies (2013), http://sites.nationalacademies.org/cs/groups/pgasite/documents/webpage/pga_085415.pdf, archived at https://perma.cc/757P-QXY2.

[16] See Patry, supra note 10, at 26.

[17] See id.

[18] See Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, supra note 4, at 400-401.

[19] I have discussed the effects of copyright expansion on semiotic democracy—with a comprehensive review of literature on point—in a previous piece of mine, to which I refer. See generally Giancarlo F. Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, 13(2) J. Marshall Rev. Intell. Prop. L. 341 (2014), https://papers.ssrn.com/sol3/papers2.cfm?abstract_id=2199210, archived at https://perma.cc/MUM8-B9H8.

[20] See generally Giancarlo F. Frosio, User Patronage: The Return of the Gift in the “Crowd Society”, 2015(5) Mich. St. L. Rev. 1983, 2036-2039 (2015), https://papers.ssrn.com/sol3/papers2.cfm?abstract_id=2659659, archived at https://perma.cc/UEW8-C9KR (discussing Baudrillard’s categories as applied to cyberspace and the Digital Revolution).

[21] See Jean Baudrillard, The Consumer Society: Myths and Structures 66–68 (Mike Featherstone ed., Sage Publ’ns 1998) (1970).

[22] Id. at 67.

[23] Id. at 68.

[24] Id. at 67.

[25] Eben Moglen, Professor, Speech at the Law of the Common Conference at Seattle University: Free and Open Software: Paradigm for a New Intellectual Commons (March 13, 2009) (transcript available at http://en.wikisource.org/wiki/Free_and_Open_Software:_Paradigm_for_a_New_Intellectual_Commons), archived at http://perma.cc/J78D-R8AG.

[26] Francis Gurry, Dir. Gen. of the World Intellectual Prop. Org., Speaker at the Blue Sky Conference: Future Directions in Copyright Law at Queensl. Univ. of Tech., Brisbane, Austl. (February 25, 2011) (transcript available at http://www.wipo.int/about-wipo/en/dgo/speeches/dg_blueskyconf_11.html, 1–2), archived at https://perma.cc/KM6G-6WCL (emphasis added).

[27] See Uwe Neddermeyer, Why Were There No Riots of the Scribes? First Results of a Quantitative Analysis of the Book-Production in the Century of Gutenberg, 31 Gazette du Livre Médiéval 1, 4-7 (1997) (noting that at the time of the printing revolution there was little resistance to the new technology; only a few protests by scribes were recorded throughout Europe, in Genoa in 1472, in Augsburg in 1473, and in Lyon in 1477; reconversion from old to new jobs was smooth, a variety of new jobs was created, and there is no indication of unemployment or poverty suffered by any part of society due to the introduction of the new technology); see also Peter Burke, The Italian Renaissance: Culture and Society in Italy, at 71 (Princeton U. Press 1999) (noting the adaptability of several scribes, who became printers themselves); see also Cyprian Blagden, The Stationers’ Company: A History, 1403–1959, at 23 (Stanford U. Press 1977) (1960) (reporting that “there is no evidence of unemployment or organized opposition to the new machines” in England). Quite the contrary, in the last quarter of the fifteenth century more money was spent on books than at any time before.

[28] Michael Masnick & Michael Ho, The Sky is Rising: A Detailed Look at the State of the Entertainment Industry, Floor 64, 5 (January 2012), http://www.techdirt.com/skyisrising, archived at https://perma.cc/42WV-N9CC.

[29] Id.

[30] See Frosio, supra note 20, at 2039-2046.

[31] See Masnick & Ho, supra note 28, at 5-6.

[32] Neelie Kroes, European Commission Vice-President for the Digital Agenda, A Digital World of Opportunities at the Forum d’Avignon – Les Rencontres Internationales de la Culture, de l’Économie et des Medias, (November 5, 2010), available at http://europa.eu/rapid/press-release_SPEECH-10-619_en.htm, archived at https://perma.cc/ERN7-5TN4.

[33] See Gurry, supra note 26.

[34] Copyright Perspectives: Past, Present and Prospect vii (Brian Fitzgerald and John Gilchrist eds., 2015).

[35] See AP, Pirate Party gains three seats in Iceland’s parliament, CBS News (Apr. 30, 2013, 12:16 PM), http://www.cbsnews.com/news/pirate-party-gains-three-seats-in-icelands-parliament/, archived at https://perma.cc/R29V-MNRP.

[36] See Gurry, supra note 26; see, e.g., Miaoran Li, The Pirate Party and The Pirate Bay: How the Pirate Bay Influences Sweden and International Copyright Relations, 21 Pace Int’l L. Rev. 281 (2009); see also Jonas Anderson, For the Good of the Net: The Pirate Bay as a Strategic Sovereign, 10 Culture Machine 64 (2009); see also Luca Neri, La Baia dei Pirati: Assalto al Copyright (Cooper Editore 2009).

[37] See Copyright and Wrong: Why the Rules on Copyright Need to Return to Their Roots, The Economist (Apr. 8, 2010), http://www.economist.com/displayStory.cfm?story_id=15868004, archived at https://perma.cc/N5JU-YU4U.

[38] Austl. Productivity Commission, Intell. Prop. Arrangements, Draft Rep. 16-17 (2016), https://assets.documentcloud.org/documents/2819862/Intellectual-Property-Draft.pdf, archived at https://perma.cc/4WFS-4GTU.

[39] See generally Jessica Silbey, The Eureka Myth: Creators, Innovators, and Everyday Intellectual Property (Stan. U. Press 2015) (arguing, on the basis of interview-based empirical data, that the claim that creators – and even businesses – need intellectual property and exclusivity overstates, if not misstates, the facts, and explaining how this misunderstanding about creativity sustains a flawed copyright system); see also Jessica Litman, Real Copyright Reform, 96 Iowa L. Rev. 1, 3-5, 31-32 (2010) (noting that “the deterioration in public support for copyright is the gravest of the dangers facing the copyright law in a digital era…[c]opyright stakeholders have let copyright law’s legitimacy crumble…”); see also John Tehranian, Infringement Nation: Copyright 2.0 and You xvi-xxi (Oxford U. Press 2011); see also Brett Lunceford & Shane Lunceford, The Irrelevance of Copyright in the Public Mind, 7 Nw. J. Tech. & Intell. Prop. 33 (2008).

[40] See, e.g., Music Downloading, File-Sharing and Copyright, Pew Res. Ctr.: Internet & Am. Life Project, http://pewinternet.org/2003/07/31/music-downloading-file-sharing-and-copyright/, archived at https://perma.cc/X3GP-DL25.

[41] See id.

[42] David Lange, Reimagining the Public Domain, 66 Law & Contemp. Probs. 471 (2003).

[43] See Sacha Wunsch-Vincent, The Economics of Copyright and the Internet: Moving to an Empirical Assessment Relevant in the Digital Age, (World Intell. Prop. Org., Economic Research Working Paper No. 9, 2013) at 2, http://www.wipo.int/edocs/pubdocs/en/wipo_pub_econstat_wp_9.pdf, archived at https://perma.cc/QN3C-A7XL.

[44] See Ku, supra note 1, at 300-305; see also Raymond S. R. Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, 18 Berkeley Tech. L. J. 539 (2003); see also Paul Ganley, The Internet, Creativity and Copyright Incentives, 10 J. Intell. Prop. Rts. 188 (2005); see also John F. Duffy, The Marginal Cost Controversy in Intellectual Property, 71 U. Chi. L. Rev. 37 (2004).

[45] See Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, supra note 44, at 539.

[46] See id. at 566.

[47] See id.

[48] See Ku, supra note 1, at 304-305.

[49] See Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, supra note 44, at 539.

[50] See Tom W. Bell, The Specter of Copyism v. Blockheaded Authors: How User-Generated Content Affects Copyright Policy, 10 Vand. J. Ent. & Tech. L. 841, 853 (2008).

[51] See Wunsch-Vincent, supra note 43.

[52] See Bell, supra note 50, at 844.

[53] See id. at 855.

[54] Neelie Kroes, Vice President, Eur. Comm’n, Speech at the Forum d’Avignon, Who Feeds the Artist? (Nov. 19, 2011) (transcript available at http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH%2F11%2F777), archived at https://perma.cc/QMV4-U24H.

[55] John Perry Barlow, Selling Wine Without Bottles: The Economy of Mind on the Global Net, Wired (Mar. 1, 1994), yin.arts.uci.edu/~studio/readings//barlow-wine.html, archived at https://perma.cc/FL5M-NBKF.

[56] Nicholas Negroponte, Being Digital 58 (First Vintage Books ed. 1996).

[57] Ian Hargreaves, Digital Opportunity: A Review of Intellectual Property and Growth 1 (2011).

[58] Dmitry Medvedev, President of Russ., Message to the G20 Leaders (Nov. 3, 2011) (transcript available at http://eng.kremlin.ru/news/3018), archived at https://perma.cc/P9TL-7LGL.

[59] See, e.g., Pamela Samuelson, The Copyright Principles Project: Directions for Reform, 25 Berkeley Tech. L. J. 1175, 1178–79 (2010); see also William Patry, How to Fix Copyright (Oxford U. Press 2012); see also Diane Zimmerman, Finding New Paths through the Internet, Content and Copyright, 12 Tul. J. Tech. & Intell. Prop. 145, 145 (2009); see also Hannibal Travis, Opting Out of the Internet in the United States and the European Union: Copyright, Safe Harbors, and International Law, 84 Notre Dame L. Rev. 331, 335 (2008); see also Guy Pessach, Reciprocal Share-Alike Exemptions in Copyright Law, 30 Cardozo L. Rev. 1245, 1247 (2008); see also Jessica Litman, Sharing and Stealing, 27 Hastings Comm. & Ent. L. J. 1, 2 (2004); see also Mark Lemley & R. Anthony Reese, Reducing Digital Copyright Infringement Without Restricting Innovation, 56 Stan. L. Rev. 1345, 1349–50 (2004); see also William Landes & Richard Posner, Indefinitely Renewable Copyright, 70 U. Chi. L. Rev. 471, 471 (2003).

[60] See, e.g., Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harv. L. Rev. 281, 282 (1970) (concluding “[i]t would be possible, for instance, to do without copyright, relying upon authors, publishers, and buyers to work out arrangements among themselves that would provide books’ creators with enough money to produce them.”); see also Jon M. Garon, Normative Copyright: A Conceptual Framework for Copyright Philosophy and Ethics, 88 Cornell L. Rev. 1278, 1283 (2003) (noting “[u]nless there is a valid conceptual basis for copyright laws, there can be no fundamental immorality in refusing to be bound by them.”); see also Michele Boldrin and David Levine, Against Intellectual Monopoly (Cambridge U. Press 2008) (disputing the utility of intellectual property altogether); see also Martin Skladany, Alienation by Copyright: Abolishing Copyright to Spur Individual Creativity, 55 J. Copyright Soc’y U.S.A. 361, 361 (2008); see also Joost Smiers and Marieke van Schijndel, Imagine There Is No Copyright and No Cultural Conglomerates Too (Inst. of Network Cultures 2009); see also Joost Smiers, Art Without Copyright: A Proposal for Alternative Regulation, in Freedom of Culture: Regulation and Privatization of Intellectual Property and Public Space 22–29 (Jorinde Seijdel trans., NAi Publishers 2007); see also Joost Smiers and Marieke Van Schijndel, Imagining a World Without Copyright: The Market and Temporary Protection, a Better Alternative for Artists and Public Domain, in Copyright and Other Fairy Tales: Hans Christian Andersen and the Commodification of Creativity 129 (Helle Porsdam ed., Edward Elgar Publ’g Ltd. 2006); see also Frank Thadeusz, No Copyright Law: The Real Reason for Germany’s Industrial Expansion?, Spiegel Online (Aug. 18, 2010), http://www.spiegel.de/international/zeitgeist/0,1518,710976,00.html, archived at https://perma.cc/BPQ8-TG69 (providing a historical and empirical argument against copyright). Cf. Lior Zemer, The Conceptual Game in Copyright, 28 Hastings Comm. & Ent L. J. 409, 409 (2006).

[61] See, e.g., Lawrence Lessig, Laws that Choke Creativity, TED (2007) (transcript available at https://www.ted.com/talks/larry_lessig_says_the_law_is_strangling_creativity/transcript?language=en), archived at https://perma.cc/9EFZ-GAX9.

[62] See Nat’l Res. Council, Executive Summary, The Digital Dilemma: Intellectual Property in the Information Age, 62 Ohio St. L. J. 951 (2001), http://moritzlaw.osu.edu/students/groups/oslj/files/2012/03/62.2.nrc_.pdf, archived at https://perma.cc/484D-RWU9.

[63] Nat’l Research Council, The Digital Dilemma: Intellectual Property in the Information Age 140 (National Academy Press 2000).

[64] See Copyright Policy, Creativity, and Innovation in the Digital Economy, USPTO (July 2013), https://www.uspto.gov/sites/default/files/news/publications/copyrightgreenpaper.pdf, archived at https://perma.cc/K3B8-33GG (demonstrating how lawmakers have struggled for years trying to strike a balance).

[65] See Larry Lessig, Speech at the WIPO Global Meeting on Emerging Copyright Licensing Modalities –Facilitating Access to Culture in the Digital Age, Geneva, Switzerland (November 4, 2010), available at http://www.wipo.int/meetings/en/2010/wipo_cr_lic_ge_10/program.html, archived at https://perma.cc/K7C2-FXLU.

[66] See Michael Strangelove, The Empire of Mind: Digital Piracy and the Anti-Capitalist Movement (University of Toronto Press 2005).

[67] See, e.g., Eben Moglen, Freeing the Mind: Free Software and the Death of Proprietary Culture, June 29, 2003, available at http://emoglen.law.columbia.edu/publications/maine-speech.html, archived at https://perma.cc/44SB-9U3G; see also Eben Moglen, Anarchism Triumphant: Free Software and the Death of Copyright, June 28, 1999, available at http://emoglen.law.columbia.edu/publications/anarchism.html, archived at https://perma.cc/Q93F-5LZW.

[68] See Catherine Casserly and Joi Ito, The Power of Open (Creative Commons 2011), http://thepowerofopen.org, archived at https://perma.cc/WBD4-CDK4; see also Niva Elkin-Koren, Exploring Creative Commons: A Skeptical View of a Worthy Pursuit, in The Future of the Public Domain: Identifying the Commons In Information Law 325-345 (Lucie Guibault and P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[69] See Giancarlo Frosio, Communia Final Report 50-60, (Communia 2011), http://communia-project.eu/final-report/defining-public-domain.html (last visited January 31, 2017).

[70] See, e.g., supra note 64, at iii.

[71] See, e.g., Lewis Hyde, How to Reform Copyright, The Chronicle (October 9, 2011), http://chronicle.com/article/How-to-Reform-Copyright/129280, archived at https://perma.cc/U23A-CMJJ; see also Christopher Sprigman, Reform(aliz)ing Copyright, 57 Stan. L. Rev. 485 (2004) (proposing an optional registration system that subjects unregistered works to a default license under which the use of the work would trigger only a modest statutory royalty liability); see also Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity 140 (Penguin 2004); see also Lawrence Lessig, The Future of Ideas: The Fate of The Commons in a Connected World (Vintage Books 2002); see also Lawrence Lessig, Recognizing the Fight We’re In, Keynote Speech delivered at the Open Rights Group Conference, London, UK (March 24, 2012), at 36:40-38:28, available at http://vimeo.com/39188615, archived at https://perma.cc/7K5Q-DUJY (proposing the reintroduction of formalities at least to secure extensions of copyright, if legislators decide to introduce them).

[72] See Stef van Gompel, Formalities in the digital era: an obstacle or opportunity?, in Global Copyright: Three Hundred Years Since the Statute of Anne, from 1709 to Cyberspace 2-4 (Lionel Bently, Uma Suthersanen and Paul Torremans eds., Edward Elgar 2010) (arguing that the pre-digital objections against copyright formalities cannot be sustained in the digital era); see also Takeshi Hishinuma, The Scope of Formalities in International Copyright Law in a Digital Context, in Global Copyright: Three Hundred Years Since the Statute of Anne, from 1709 to Cyberspace 460-467 (Lionel Bently, Uma Suthersanen and Paul Torremans eds., Edward Elgar 2010).

[73] See Andrew Gowers, Gowers Review of Intellectual Property (HM Treasury, November 2006), at 6 ([r]ecommendation 14b endorses the establishment of a voluntary register of copyright), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/228849/0118404830.pdf, archived at https://perma.cc/P755-ZSZZ.

[74] See id. at 40. 

[75] See Tanya Aplin, A Global Digital Register for the Preservation and Access to Cultural Heritage: Problems, Challenges and Possibilities, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World 3, 23 (Estelle Derclaye ed., Edward Elgar 2010) (discussing copyright registers); see also Caroline Colin, Registers, Databases and Orphan Works, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World, supra, 28, at 29; see also Steven Hetcher, A Central Register of Copyrightable Works: a U.S. Perspective, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World, supra, 156, at 158.

[76] See Orphan Works and Mass Digitization: A Report of the Register of Copyrights, United States Copyright Office at 66 (June 2015), https://www.copyright.gov/orphan/reports/orphan-works2015.pdf, archived at https://perma.cc/642S-N52A.

[77] See van Gompel, supra note 72, at 12-13 (noting that only voluntary supply of information would be compliant with the no-formalities prescription of the Berne Convention).

[78] See Accessible Registries of Rights Information and Orphan Works [ARROW], http://www.arrow-net.eu, archived at https://perma.cc/RE3M-NS7K (creating registries of rights information and orphan works); see also Barbara Stratton, Seeking New Landscapes: a Rights Clearance Study in the Context of Mass Digitization of 140 Books Published between 1870 and 2010, at 5, 35-36 (British Library 2011), https://www.arrow-net.eu/sites/default/files/Seeking%20New%20Landscapes.pdf, archived at https://perma.cc/WR5D-6SLR (showing that, in contrast to the average four hours per book to undertake a diligent search, “the use of the ARROW system took less than 5 minutes per title to upload the catalogue records and check the results.”).

[79] See Marco Ricolfi, Copyright Policies for Digital Libraries in the Context of the i2010 Strategy, at 2, 6 (July 1, 2008), http://www.communia-project.eu/communiafiles/conf2008p_Copyright_Policy_for_digital_libraries_in_the_context_of_the_i2010_strategy.pdf, archived at https://perma.cc/4439-9JY9 (paper presented at the 1st COMMUNIA Conference); see also Marco Ricolfi, Making Copyright Fit for the Digital Agenda, 5-6 (Feb. 25, 2011), available at http://nexa.polito.it/nexafiles/Making%20Copyright%20Fit%20for%20the%20Digital%20Agenda.pdf, archived at https://perma.cc/X4UZ-QCMJ.

[80] See Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy 253-255 (Bloomsbury 2008) (proposing different routes for professional, remix and amateur authors, registries, and the re-introduction of formalities and an opt-in system).

[81] See Ricolfi, Making Copyright Fit for the Digital Agenda, supra note 79, at 6.

[82] See id.

[83] See id.

[84] See id.

[85] Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, Speech at Business for New Europe event: Ending Fragmentation of the Digital Single Market (Feb. 7, 2011) (transcript available at http://europa.eu/rapid/press-release_SPEECH-11-70_en.htm?locale=en, archived at https://perma.cc/WJM6-QJMT), at 2.

[86] Id.

[87] See U.S. Copyright Office, Rep. of the Reg. of Copyrights: Rep. on Orphan Works 95 (Jan. 2006).

[88] See Christian L. Castle & Amy E. Mitchell, Unhand That Orphan: Evolving Orphan Works Solutions Require New Analysis, 27 Ent. & Sports Law. 1 (Spring 2009).

[89] European Comm’n, High Level Expert Group on Digital Libraries, Final Report: Digital Libraries: Recommendations and Challenges for the Future 4 (Dec. 2009) (i2010 European Digital Libraries Initiative).

[90] See Council Directive 2012/28/EU, of the European Parliament and of the Council of 25 October 2012 on Certain Permitted Uses of Orphan Works, 2012 O.J. (L 299/5), 3 [hereinafter Orphan Works Directive].

[91] British Screen Advisory Council, Copyright and Orphan Works 3 (Aug. 31, 2006), http://www.bsac.uk.com/wp-content/uploads/2016/02/copyright__orphan_works_paper_prepared_for_gowers_2006.pdf, archived at https://perma.cc/V9TA-G6ML (paper prepared for the Gowers Review).

[92] See id. at 16.

[93] Id. at 25.

[94] See id. at 30.

[95] See Copyright Act, R.S.C. 1985, c C-42, art. 77 (Can.). Under the Canadian system, users can apply to an administrative body to obtain a license to use orphan works. In order to obtain the license, the applicant must prove that they have conducted a serious search for the rightsholder. If the Canadian Copyright Board is satisfied that, despite the search, the rightsholders cannot be identified, it issues the applicant a non-exclusive license to use the work. The license will shield the license holder from any liability for infringement. However, the license is limited to Canada. See id.

[96] See Steven A. Hetcher, Using Social Norms to Regulate Fan Fiction and Remix Culture, 157 U. Pa. L. Rev. 1869, 1880 (2009); see also Edward Lee, Warming Up To User-Generated Content, 2008 U. Ill. L. Rev. 1459, 1461 (2008) (noting that “informal copyright practices—i.e., practices that are not authorized by formal copyright licenses but whose legality falls within a gray area of copyright law—effectively serve as important gap fillers in our copyright system”).

[97] See, e.g., Daniel Gervais, The Tangled Web of UGC: Making Copyright Sense of User-Generated Content, 11 Vand. J. Ent. & Tech. L. 841, 869–70 (2009); see also Debora Halbert, Mass Culture and the Culture of the Masses: A Manifesto for User-Generated Rights, 11 Vand. J. Ent. & Tech. L. 921, 958 (2009); see also Mary W. S. Wong, “Transformative” User-Generated Content in Copyright Law: Infringing Derivative Works or Fair Use?, 11 Vand. J. Ent. & Tech. L. 1075, 1110 (2009).

[98] See, e.g., Peter K. Yu, Can the Canadian UGC Exception Be Transplanted Abroad?, 26 Intell. Prop. J. 176, 176–79 (2014) (discussing also a Hong Kong proposal for a UGC exception); see also Warren B. Chik, Paying it Forward: The Case for a Specific Statutory Limitation on Exclusive Rights for User-Generated Content Under Copyright Law, 11 J. Marshall Rev. Intell. Prop. L. 240, 270 (2011).

[99] See An Act to Amend the Copyright Act, 2010, Bill C-32, art. 22 (Can.), http://www.parl.gc.ca/HousePublications/Publication.aspx?Docid=4580265&file=4, archived at https://perma.cc/LJ8N-9WPW (introducing an exception for non-commercial UGC).

[100] See Eur. Commission, Rep. on the Responses to the Public Consultation on the Review of the EU Copyright Rules 68 (July 2014), http://ec.europa.eu/internal_market/consultations/2013/copyright-rules/docs/contributions/consultation-report_en.pdf, archived at https://perma.cc/D3FG-YMBD (noting that respondents often favor a legislative intervention, which could be done “by making relevant existing exceptions (parody, quotation and incidental use and private copying are mentioned) mandatory across all Member States or by introducing a new exception to cover transformative uses”); see also Eur. Commission, Commission Comm. on Content in the Digital Single Mkt. 3-4 (2011), http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52012DC0789, archived at https://perma.cc/KW6C-6CKJ (proposing licensing arrangements).

[101] See U.S. Copyright Office, Rulemaking on Exemptions from Prohibition on Circumvention of Technological Measures that Control Access to Copyrighted Works (Jul. 26, 2010), http://www.copyright.gov/1201/2010, archived at https://perma.cc/83D6-7QTM.

[102] See Mariam Awan, The User-Generated Content Exception: Moving Away From a Non-Commercial Requirement (Nov. 11, 2015), at 6, 8–9, http://www.iposgoode.ca/wp-content/uploads/2015/11/Mariam-Awan-The-user-generated-content-exception.pdf, archived at https://perma.cc/FW84-UANW.

[103] Lenz v. Universal Music Corp., 801 F.3d 1126, 1129 (9th Cir. 2015).

[104] Id. at 1134-35 (noting also that there is no liability under § 512(f) “[i]f, however, a copyright holder forms a subjective good faith belief the allegedly infringing material does not constitute fair use”).

[105] See, e.g., Zijian Zhang, Transplantation of an Extended Collective Licensing System – Lessons from Denmark, 47 Int’l Rev. Intell. Prop. & Competition L. 640, 641–42 (2016).

[106] See European Comm’n, High Level Expert Group—Copyright Subgroup, Report on Digital Preservation, Orphan Works and Out-of-Print Works: Selected Implementation Issues 5 (Apr. 18, 2008) (i2010 European Digital Libraries Initiative), http://ec.europa.eu/information_society/newsroom/cf/itemlongdetail.cfm?item_id=%203366, archived at https://perma.cc/M3EA-VCGG (identifying ECL as a possible solution to the orphan works’ problem); see also Jia Wang, Should China Adopt an Extended Licensing System to Facilitate Collective Copyright Administration: Preliminary Thoughts, 32 Eur. Intell. Prop. Rev. 283 (2010); see also Marco Ciurcina et al., Creatività Remunerata, Conoscenza Liberata: File Sharing e Licenze Collettive Estese [Remunerating Creativity, Freeing Knowledge: File-Sharing and Extended Collective Licences], Nexa Ctr. for Internet & Soc’y, at 8 (It.) (Mar. 15, 2009), http://nexa.polito.it/nexafiles/NEXACenter-ExtendedCollectiveLicenses-EnglishVersion-June2009.pdf, archived at https://perma.cc/KB75-N8VY (highlighting the positive externalities of the adoption of an extended collective licensing scheme as the most appropriate tool to be used by a European Member State to legitimize the file-sharing of copyrighted content); see also Johan Axhamn & Lucie Guibault, Cross-border Extended Collective Licensing: A Solution to Online Dissemination of Europe’s Cultural Heritage?, Instituut voor Informatierecht, at 4 (Neth.) (Aug. 2011), http://www.ivir.nl/publicaties/download/292, archived at https://perma.cc/D5VQ-K2JF.

[107] See Commission Proposal for a Directive of the European Parliament and of the Council on Copyright in the Digital Single Market, at 26, COM (2016) 593 final (Sept. 14, 2016) [hereinafter DSM Directive Proposal].

[108] See id.

[109] See id. at 5, 30.

[110] See Silke von Lewinski, Mandatory Collective Administration of Exclusive Rights – A Case Study on its Compatibility with International and EC Copyright Law, e-Copyright Bulletin (UNESCO), Jan.-Mar. 2004, at 2 (discussing a proposed amendment in the Hungarian Copyright Act); see also Carine Bernault & Audrey Lebois, Peer-to-Peer File Sharing and Literary and Artistic Property: A Feasibility Study Regarding a System of Compensation for the Exchange of Works via the Internet (June 2005) (discussing the same proposal endorsed by the French Alliance Public-Artistes, campaigning for the implementation of a Licence Globale).

[111] See Volker Grassmuck, A New Study Shows Copyright Exception for Legalising File-Sharing is Feasible as a Cease-Fire in the “War on Copying” Emerges, Intellectual Property Watch (May 11, 2009), http://www.ip-watch.org/2009/05/11/the-world-is-going-flat-rate/, archived at https://perma.cc/5XHC-K4NQ.

[112] See Authors Guild v. Google, Inc., 804 F.3d 202, 229 (2d Cir. 2015).

[113] See LOI 2012-287 du 1er mars 2012 relative à l’exploitation numérique des livres indisponibles du XXe siècle [Law 2012-287 of March 1, 2012 on the Digital Exploitation of the Unavailable Books of the 20th Century], Journal Officiel de la République Française [J.O.] [Official Gazette of France], Mar. 2, 2012, p. 3986.

[114] See id.

[115] See Case C-301/15, Soulier v. Ministre de la Culture et de la Comm., Premier Ministre, 2016 Curia.Europa.Eu ECLI:EU:C:2016:878 (Nov. 16, 2016) [Fr.], http://curia.europa.eu/juris/document/document.jsf?text=&docid=185423&pageIndex=0&doclang=EN, archived at https://perma.cc/NWH9-NXFC (involving a request for a preliminary ruling by the Council of State, regarding an action brought by Marc Soulier and Sara Doke against the Minister of Culture and Communication and the Prime Minister, on the interpretation of Articles 2 and 5 of a European Council Directive).

[116] See id.

[117] Grassmuck, supra note 111.

[118] In the analog environment, many national legal systems implemented quasi-flat-rate models and various arrangements of private copying levies that may be envisioned as a form of cultural tax. Private copying levies are special taxes charged on purchases of recordable media and copying devices and then redistributed to the right holders by means of collecting societies. See, e.g., Martin Kretschmer, United Kingdom Intellectual Prop. Office, Private Copying and Fair Compensation: An Empirical Study of Copyright Levies in Europe 64 (2011), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2063809, archived at https://perma.cc/W3QW-49FB (follow “Download this paper” hyperlink).

[119] See generally Bernt Hugenholtz et al., The Future of Levies in a Digital Environment, Institute for Information Law, at ii, 74 (2003), https://www.ivir.nl/publicaties/download/DRM&levies-report.pdf, archived at https://perma.cc/APU4-SHL5.

[120] See generally Grassmuck, supra note 111 (exploring flat rate proposals and emerging models).

[121] See Alexander Roßnagel et al., Die Zulässigkeit einer Kulturflatrate nach Nationalem und Europäischem Recht [The Admissibility of a Cultural Flat Rate under National and European Law], Institut für Europäisches Medienrecht [Institute of European Media Law], at 63 (2009), https://www.gruene-bundestag.de/fileadmin/media/gruenebundestag_de/themen_az/netzpolitik/16_fragen_und_16_antworten/kurzgutachten_zur_kulturflatrate.pdf, archived at https://perma.cc/6E8A-LED2.

[122] See id.; see COMMUNIA Network on the Digital Public Domain, Recommendation 14, in Final Report 171 (Mar. 31, 2011), http://nexa.polito.it/nexacenterfiles/D1.11-COMMUNIA%20Final%20Report-nov2011.pdf, archived at https://perma.cc/3XG7-NLSA.

[123] See, e.g., Alain Modot et al., The “Content Flat-Rate”: A Solution to Illegal File-Sharing?, European Parliament, at 26 (2011), http://www.europarl.europa.eu/RegData/etudes/etudes/join/2011/460058/IPOL-CULT_ET(2011)460058_EN.pdf, archived at https://perma.cc/2LWA-QTJS.

[124] See Neil W. Netanel, Impose a Noncommercial Use Levy to Allow Free Peer-to-Peer File Sharing, 17 Harv. J. L. & Tech. 1, 32, 80 (2003).

[125] See id. at 4.

[126] See id.

[127] See id. 

[128] See Netanel, supra note 124, at 4.

[129] See generally William W. Fisher, Promises To Keep: Technology, Law and the Future of Entertainment (2004).

[130] See id. at 217.

[131] See id. at 223–24.

[132] See id.

[133] See Philippe Aigrain with Suzanne Aigrain, Sharing: Culture and the Economy in the Internet Age 76–77 (2012).

[134] See id. at 65.

[135] See id.

[136] See id. at 152–53.

[137] See id.

[138] See Re:publica, Peter Sunde – Flattr Social Micro Donations, YouTube (Apr. 22, 2010), https://www.youtube.com/watch?v=IyGCsCpofVk, archived at https://perma.cc/TN7J-7VCK (describing the Flattr platform); see also Flattr, https://flattr.com/, archived at https://perma.cc/Y3C7-X3KP (last visited Feb. 9, 2017).

[139] COMMUNIA, Recommendation 14, supra note 122, at 171.

[140] Id.

[141] Nelson Mandela, Remarks Made at the TELECOM 95 Conference, 3 Oct. 1995, 9 Trotter Rev. 4, 4 (1995).

[142] World Intellectual Property Organization (WIPO), Provisional Committee on Proposals Related to a WIPO Development Agenda (PCDA), Revised Draft Report, at 6 (Aug. 20, 2007), http://www.wipo.int/edocs/mdocs/mdocs/en/pcda_4/pcda_4_3.pdf, archived at https://perma.cc/Y9AK-YNH5.

[143] Benedict XVI, Caritas In Veritate [Encyclical Letter on Integral Human Development in Charity and Truth], sec. 22 (June 29, 2009), available at http://w2.vatican.va/content/benedict-xvi/en/encyclicals/documents/hf_ben-xvi_enc_20090629_caritas-in-veritate.html, archived at https://perma.cc/K7YL-9ZB8.

[144] See Graham Dutfield and Uma Suthersanen, Global Intellectual Property Law 277 (2008).

[145] See Amy Kapczynski, The Access to Knowledge Mobilization and The New Politics of Intellectual Property, 117 Yale L. J. 804, 807–08 (2008); see generally Access to Knowledge in the Age of Intellectual Property (Gaëlle Krikorian and Amy Kapczynski eds., Zone Books 2010); see also Access to Knowledge in Africa: The Role of Copyright (Chris Armstrong et al. eds., UCT Press 2010) (showing an example of the body of work created by pro-A2K groups).

[146] Conference, 2nd Annual Access to Knowledge Conference (A2K2), Yale Information Society Project (2007), http://mailman.yale.edu/pipermail/development-studies/2007-April/000074.html, archived at https://perma.cc/5A2K-8MPE.

[147] Id.

[148] Consumer Project on Technology, Access to Knowledge, http://www.cptech.org/a2k, archived at https://perma.cc/H2AR-GG39.

[149] See G.A. Res. 217 (III) A, Universal Declaration of Human Rights (Dec. 10, 1948), http://www.un.org/en/universal-declaration-human-rights/, archived at https://perma.cc/RH3X-86MJ (follow “Download PDF”).

[150] See WIPO, Development Agenda for WIPO, http://www.wipo.int/ip-development/en/agenda, archived at https://perma.cc/NW6Y-F465.

[151] See CPTech, Proposed Treaty on Access to Knowledge (May 9, 2005) (Draft), www.cptech.org/a2k/a2k_treaty_may9.pdf, archived at https://perma.cc/33E5-77GE.

[152] Laurence R. Helfer, Toward a Human Rights Framework for Intellectual Property, 40 U.C. Davis L. Rev. 971, 1013 (2007) (citing William New, Experts Debate Access to Knowledge, IP Watch (2005), http://www.ip-watch.org/2005/02/15/experts-debate-access-to-knowledge/?res), archived at https://perma.cc/7QJA-DJBQ; see also Proposed A2K Treaty, supra note 151 (mentioning other actions to achieve A2K goals, such as the use of the Internet as a tool for broader public participation; preservation of the public domain; control of anticompetitive practices; restriction of the use of TPMs limiting A2K; use of educational material made available at an unreasonable price; and a new role for fair use, especially for purposes including but not limited to parody, reverse engineering, and use of works by disabled persons).

[153] See, e.g., Margot E. Kaminski & Shlomit Yanisky-Ravid, Working Paper: Addressing the Proposed WIPO International Instrument on Limitations and Exceptions for Persons with Print Disabilities: Recommendation or Mandatory Treaty?, Yale Information Society 6 (Nov. 14, 2011), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1959694, archived at https://perma.cc/4TXL-XBLZ (follow “Download This Paper” hyperlink).

[154] See Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled, June 27, 2013, WIPO (entered into force Sept. 30, 2016).

[155] See Joost Smiers & Marieke Van Schijndel, Imagine There is no Copyright and No Cultural Conglomerates too, 4 Institute of Network Cultures 5, 26; see also Johanna Gibson, Community Resources: Intellectual Property, International Trade and Protection of Traditional Knowledge 127–28 (2005).

[156] See Lawrence Lessig, The Architecture of Access to Scientific Knowledge: Just How Badly We Have Messed This Up, Address at CERN Colloquium and Library Science Talk (Apr. 18, 2011), http://cdsweb.cern.ch/record/1345337, archived at https://perma.cc/L5TM-PVLB; see also Lawrence Lessig, Recognizing the Fight We’re In, Address at the Open Rights Group Conference (Mar. 24, 2012), http://vimeo.com/39188615, archived at https://perma.cc/9NSW-YD28.

[157] Lessig, CERN Colloquium Address, supra note 156.

[158] John Willinsky, The Access Principle: The Case for Open Access to Research and Scholarship 33 (2006).

[159] Id. at 30.

[160] See Giancarlo F. Frosio, Open Access Publishing: A Literature Review 74 (study prepared for the RCUK Centre for Copyright and New Business Models in the Creative Economy) (2014), http://www.create.ac.uk/publications/open-access-publishing-a-literature-review, archived at https://perma.cc/FLJ4-ELXA (providing a book length overview of the OAP movement and several open access initiatives and projects, economics of academic publishing and copyright implications, OAP business models, and OAP policy initiatives).

[161] 16538 Researchers Taking a Stand, The Cost of Knowledge, http://thecostofknowledge.com, archived at https://perma.cc/YH5Z-JNDG; see also The Price of Information: Academics are Starting to Boycott a Big Publisher of Journals, The Economist (Feb. 4, 2012), http://www.economist.com/node/21545974, archived at https://perma.cc/L3BX-MU35; see also Eyal Amiran, The Open Access Debate, 18 Symploke 251, 251 (2011) (reporting several other examples of these reactions and boycotts).

[162] See Reto M. Hilty, Copyright Law and the Information Society – Neglected Adjustments and Their Consequences, 38(2) ICC 135 (2007).

[163] George Monbiot, Academic Publishers Make Murdoch Look like a Socialist, The Guardian, (Aug. 29, 2011 4:08 PM), http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist, archived at https://perma.cc/4NZ7-3X4S; see also Richard Smith, The Highly Profitable but Unethical Business of Publishing Medical Research, 99 J. R. Soc. Med. 452–53 (2006) (discussing in similarly strong terms the unethical nature of the business of publishing medical research).

[164] See Richard Smith, supra note 163 at 452.

[165] See id. at 454.

[166] See, e.g., Steven Shavell, Should Copyright of Academic Works Be Abolished?, 2 J. Legal Analysis 301, 301–05 (2010).

[167] Robert K. Merton, The Normative Structure of Science, in The Sociology of Science: Theoretical and Empirical Investigations 267, 273 (Norman W. Storer ed., U. Chicago Press 1973) (1942) (emphasis added), http://www.collier.sts.vt.edu/5424/pdfs/merton_1973.pdf, archived at https://perma.cc/UZ2S-9D7G; see also James Boyle, Mertonianism Unbound? Imagining Free, Decentralized Access to Most Cultural and Scientific Material, in Understanding Knowledge as a Commons: From Theory to Practice 123 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2007), http://www.ess.inpe.br/courses/lib/exe/fetch.php?media=wiki:user:andre.zopelari:understanding-knowledge-as-a-commons-theory-to-practice-2007.pdf, archived at https://perma.cc/5LFJ-FBAP.

[168] See Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (October 22, 2003), Berlin Conference, Berlin, October 20-22, 2003, https://openaccess.mpg.de/Berlin-Declaration, archived at https://perma.cc/3K38-MDXW.

[169] See id.

[170] See id.

[171] Budapest Open Access Initiative, Budapest Open Access Initiative, http://www.soros.org/openaccess/index.shtml, archived at https://perma.cc/LZZ3-6CVD.

[172] Peter Suber, Creating an Intellectual Commons Through Open Access, in Understanding Knowledge as a Commons: From Theory to Practice 171 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2006).

[173] See Directory of Open Access Journals (DOAJ), DOAJ (last visited Feb. 9, 2017), http://www.doaj.org, archived at https://perma.cc/26KJ-NKFY.

[174] See Open Access, The Scholarly Publishing & Academic Resources Coalition [SPARC], http://www.arl.org/sparc/advocacy/campus, archived at https://perma.cc/6RPN-BQJ2; see also SHERPA/JULIET – Research funders’ open access policies, SHERPA (last visited Feb. 9, 2017), http://www.sherpa.ac.uk/juliet/index.php, archived at https://perma.cc/T7HW-XXJD; see also Manual of Policies and Procedures – F/1.3 QUT ePrints repository for research output, Queensland Univ. of Tech. [QUT] (Apr. 6, 2016), http://www.mopp.qut.edu.au/F/F_01_03.jsp, archived at https://perma.cc/97KW-FJ62; see also Eric Priest, Copyright and The Harvard Open Access Mandate, 10 Nw. J. Tech. & Intell. Prop. 377, 394 (2012).

[175] See Frosio, Open Access Publishing, supra note 160, at 9.

[176] See John Houghton, Open Access – What are the Economic Benefits?, Victoria University, 13 (June 23, 2009) (report prepared for Knowledge Exchange) (showing that adopting an open access model to scholarly publications could lead to annual savings of around €70 million in Denmark, €133 million in the Netherlands and €480 million in the United Kingdom); see also John Houghton et al., Economic and Social Returns on Investment in Open Archiving Publicly Funded Research Outputs, Victoria University, 12 (July 2010) (report prepared for The Scholarly Publishing & Academic Resources Coalition [SPARC]) (concluding that free access to U.S. taxpayer-funded research papers could yield $1 billion in benefits).

[177] See What is Horizon 2020?, European Commission, http://ec.europa.eu/programmes/horizon2020/en/what-horizon-2020, archived at https://perma.cc/GHF3-YSEC.

[178] See Department for Business Innovation and Skills, Innovation and Research Strategy for Growth 76–78 (Dec. 8, 2011), http://www.bis.gov.uk/assets/biscore/innovation/docs/i/11-1387-innovation-and-research-strategy-for-growth.pdf, archived at https://perma.cc/QD5R-RGN8; see also Finch Report: Report of the Working Group on Expanding Access to Published Research Findings, Accessibility, Sustainability, Excellence: How to Expand Access to Research Publications, Research Information Network, https://www.acu.ac.uk/research-information-network/finch-report, archived at https://perma.cc/Q287-FXA5.

[179] See U.S. Department of Education, Institute of Education Sciences (IES), Request for Application, IES 11 (2009), http://ies.ed.gov/funding/pdf/2010_84305G.pdf, archived at https://perma.cc/HYW2-8B74; see also New Open Access Policy for NCAR Research, AtmosNews (October 20, 2009), https://www2.ucar.edu/atmosnews/news/1059/new-open-access-policy-ncar-research, archived at https://perma.cc/JEP9-FGST; see also Howard Hughes Medical Institute, Research Policies: Public Access to Publications 1 (June 11, 2007), http://www.hhmi.org/sites/default/files/About/Policies/sc320.pdf, archived at https://perma.cc/7CJP-3NYT.

[180] See Consolidated Appropriations Act of 2008, H.R. 2764, 110th Cong. Div. G, II § 218; see also Eve Heafey, Public Access to Science: The New Policy of The National Institutes of Health in Light of Copyright Protections in National and International Law, 15 UCLA J. L. & Tech. 1, 3 (2011), http://www.lawtechjournal.com/articles/2010/02_100216_heafey.pdf, archived at https://perma.cc/M93U-HQA6.

[181] See National Institute of Health, Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research, (Jan. 11, 2008), http://grants.nih.gov/grants/guide/notice-files/NOT-OD-08-033.html, archived at https://perma.cc/UGB3-QR38; see also Peter Suber, An Open Access Mandate for the National Institutes of Health, 2(2) Open Medicine e39–e41 (2008), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3090178/, archived at https://perma.cc/H8M5-NFN6.

[182] See Richard Poynder, Open Access Mandates: Ensuring Compliance, Open and Shut? (May 18, 2012), http://poynder.blogspot.fi/2012/05/open-access-mandates-ensuring.html, archived at https://perma.cc/LWT5-F6SB.

[183] See, e.g., Adrian Pohl, Launch of the Principles on Open Bibliographic Data, Open Knowledge International Blog (Jan. 18, 2011), http://blog.okfn.org/2011/01/18/launch-of-the-principles-on-open-bibliographic-data/, archived at https://perma.cc/DCY7-VNPL.

[184] Richard R. Nelson, The Market Economy, and the Scientific Commons, 33 Research Policy 455, 467 (2004), http://dimetic.dime-eu.org/dimetic_files/NelsonRP2004.pdf, archived at https://perma.cc/SP3Z-Y7NT.

[185] Id.

[186] Willinsky, supra note 158, at xii; see also Peter Suber, Open Access (MIT Press 2012) (discussing the emergence of this principle).

[187] See Jerome H. Reichman, Tom Dedeurwaerdere, & Paul F. Uhlir, Governing Digitally Integrated Genetic Resources, Data and Literature: Global Intellectual Property Strategies for a Redesigned Microbial Research Commons 441 (Cambridge U. Press, 2016).

[188] Paul F. Uhlir, Revolution and Evolution in Scientific Communication: Moving from Restricted Dissemination of Publicly-Funded Knowledge to Open Knowledge Environments, Paper Presented at the 2nd COMMUNIA Conference (June 28, 2009) (on file with COMMUNIA), http://www.communia-project.eu/communiafiles/Conf%202009_P_Uhlir_BS.pdf, archived at https://perma.cc/9AQS-B52J.

[189] Guy Pessach, The Role of Libraries in A2K: Taking Stock and Looking Ahead, 2007 Mich. St. L. Rev. 257, 267 (2007).

[190] See Proposed WIPO A2K Treaty, supra note 151, at 5; see also Orphan Works Directive, supra note 90 (enabling the use of orphan works after diligent search for public libraries digitization projects); see also Case C-117/13, Technische Universität Darmstadt v Eugen Ulmer KG, 2014 E.C.R. 23 (September 11, 2013) (stating that European libraries may digitize books in their collection without permission from the rightholders with caveats); see also Act of September 11, 2015, on Amendments to the Copyright and Related Rights Act and Gambling Act (Poland) (bringing library services in Poland into the twenty-first century by enabling digitization for socially beneficial purposes, such as education and preservation of cultural heritage).

[191] Portions of the analysis in this Section can also be found in the Communia Final Report, supra note 69.

[192] Jessica Litman, The Public Domain, 39 Emory L. J. 965, 977 (1990).

[193] See Daniel Drache, Introduction: The Fundamentals of Our Time – Values and Goals that are Inescapably Public, in The Market or the Public Domain?: Global Governance and the Asymmetry of Power 1 (Daniel Drache ed., Routledge 2000).

[194] See Jane C. Ginsburg, “Une Chose Publique”? The Author’s Domain and the Public Domain in Early British, French and US Copyright Law, 65 Cambridge L. J. 636, 642 (2006).

[195] Id. at 638.

[196] Mark Rose, Nine-Tenths of the Law: The English Copyright Debates and the Rhetoric of the Public Domain, 66 Law & Contemp. Probs. 75, 77 (2003).

[197] See Ginsburg, supra note 194, at 637–38.

[198] See id. at 637.

[199] M. William Krasilovsky, Observations on Public Domain, 14 Bull. Copyright Soc’y 205 (1967).

[200] Pamela Samuelson, Mapping the Digital Public Domain: Threats and Opportunities, 66 Law & Contemp. Probs. 147, 147 (2003).

[201] David Lange, Recognizing The Public Domain, 44 Law & Contemp. Probs. 147, 147 (1981).

[202] See Lange, Reimagining the Public Domain, supra note 42, at 466.

[203] Julie E. Cohen, Copyright, Commodification, and Culture: Locating the Public Domain, in The Future of the Public Domain: Identifying the Commons in Information Law 133–34 (Lucie Guibault & P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[204] Michael D. Birnhack, More or Better? Shaping the Public Domain, in The Future of the Public Domain: Identifying the Commons in Information Law 59–60 (Lucie Guibault & P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[205] See e.g., id.

[206] James Boyle, The Second Enclosure Movement and the Construction of the Public Domain, 66 Law & Contemp. Probs. 62 (2003).

[207] L. Ray Patterson & Stanley W. Lindberg, The Nature of Copyright: A Law of Users’ Rights 50 (University of Georgia Press 1991).

[208] Id. at 50–51.

[209] See Lange, Reimagining The Public Domain, supra note 42, at 465, n.11 (for the “feeding” metaphor).

[210] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 60.

[211] James Boyle, The Public Domain: Enclosing the Commons of the Mind 41 (Yale Univ. Press 2009).

[212] See Ronan Deazley, Rethinking Copyright: History, Theory, Language 105 (Edward Elgar Pub. 2008).

[213] Lange, Recognizing the Public Domain, supra note 201, at 178.

[214] The main difference lies in the fact that a commons may be restrictive. The public domain is free of property rights and control. A commons, on the contrary, can be highly controlled, though the whole community has free access to the common resources. Free Software and Open Source Software are examples of intellectual commons. See Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom 63–67 (Yale Univ. Press 2007). The source code is available to anyone to copy, use and improve under the set of conditions imposed by the General Public License. However, this kind of control is different than under traditional property regimes because no permission or authorization is required to enjoy the resource. These resources “are protected by a liability rule rather than a property rule.” Lawrence Lessig, The Architecture of Innovation, 51 Duke L. J. 1783, 1788 (2002). A commons is defined by the notions of governance and sanctions, which may imply rewards, punishment, and boundaries. See Wendy J. Gordon, Response, Discipline and Nourish: On Constructing Commons, 95 Cornell L. Rev. 733, 736–49 (2010).

[215] See Mark Rose, Copyright and Its Metaphors, 50 UCLA L. Rev. 1, 8 (2002); see also William St Clair, Metaphors of Intellectual Property, in Privilege and Property: Essays on the History of Copyright 369, 391–92 (Ronan Deazley et al. eds., Open Book Publishers 2010).

[216] See Charlotte Hess & Elinor Ostrom, Ideas, Artifacts, and Facilities: Information as a Common-Pool Resource, 66 Law & Contemp. Probs. 111, 111 (2003); see also Michael J. Madison, Brett M. Frischmann & Katherine J. Strandburg, The University as Constructed Cultural Commons, 30 Wash. U. J. L. & Pol’y 365, 403 (2009).

[217] See, e.g., Madison, Frischmann, & Strandburg, supra note 216, at 373 (acknowledging that Ostrom’s previous work laid the groundwork for their research); see also Elinor Ostrom & Charlotte Hess, A Framework for Analyzing the Knowledge Commons, in Understanding Knowledge as a Commons: From Theory to Practice 41–81 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2007), http://surface.syr.edu/cgi/viewcontent.cgi?article=1020&context=sul, archived at https://perma.cc/48HT-3YUE (using Ostrom’s previous research as a base for new research throughout the chapter).

[218] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 66.

[219] See James Boyle, Foreword: The Opposite of Property, 66 Law & Contemp. Probs. 1, 8 (2003), http://scholarship.law.duke.edu/lcp/vol66/iss1/1/, archived at https://perma.cc/J4SL-YJU2.

[220] See James Boyle, A Politics of Intellectual Property: Environmentalism for the Net?, 47 Duke L. J. 87, 110 (1997).

[221] See James Boyle, Cultural Environmentalism and Beyond, 70 Law & Contemp. Probs. 5, 6 (2007).

[222] See Boyle, The Public Domain: Enclosing the Commons of the Mind, supra note 211, at 180.

[223] See id. at 241–42.

[224] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 52.

[225] Boyle, A Politics of Intellectual Property: Environmentalism for the Net?, supra note 220, at 113.

[226] See COMMUNIA, Survey of Existing Public Domain Competence Centers, Deliverable No. D6.01 (Draft, September 30, 2009) (survey prepared by Federico Morando and Juan Carlos De Martin for the European Commission) (on file with the author), https://www.yumpu.com/en/document/view/17424248/survey-of-existing-public-domain-competence-centers-communia/6, archived at https://perma.cc/B745-GH72 (reviewing the current landscape of European competence and excellence centers that focus on the study of the public domain).

[227] See WIPO, Development Agenda for WIPO, supra note 150; see also Severine Dusollier, Scoping Study on Copyright and the Public Domain, WIPO (prepared for the World Intellectual Property Organization) (May 7, 2010).

[228] Chair of the Provisional Committee on Proposals Related to a WIPO Development Agenda (PCDA), Initial Working Document for the Committee on Development and Intellectual Property (CDIP), WIPO (Mar. 3, 2008), http://www.wipo.int/meetings/en/doc_details.jsp?doc_id=92813, archived at https://perma.cc/98AG-HNHL.

[229] Compare Communia Final Report, supra note 69 (launching programs together with Communia, as part of the i2010 policy strategy); with LAPSI: The European Thematic Network on Legal Aspects of Public Sector Information, European Commission (Dec. 17, 2012), https://joinup.ec.europa.eu/community/epractice/case/lapsi-european-thematic-network-legal-aspects-public-sector-information, archived at https://perma.cc/6VEH-6MEU; and Digital Repository Infrastructure Vision for European Research, CORDIS (last visited Jan. 30, 2017), http://cordis.europa.eu/project/rcn/86426_en.html, archived at https://perma.cc/P37J-PNQU; and ARROW, supra note 78; and DARIAH, Digital Research Infrastructure for the Arts and Humanities, http://www.dariah.eu, archived at https://perma.cc/Q2NN-N5EZ (aiming to enhance and support digitally-enabled research across the humanities and the arts).

[230] See Communia, The European Thematic Network on the Digital Public Domain, COMMUNIA, http://communia-project.eu, archived at https://perma.cc/LR3B-JNHJ; see also Giancarlo F. Frosio, Communia and the European Public Domain Project: A Politics of the Public Domain, in The Digital Public Domain: Foundations for an Open Culture (Juan Carlos De Martin & Melanie Dulong de Rosnay eds., OpenBooks Publishers 2012).

[231] See The Public Domain Manifesto, The Public Domain Manifesto (2009), http://publicdomainmanifesto.org/manifesto.html, archived at https://perma.cc/79YY-PHTD.

[232] See generally The Europeana Public Domain Charter, http://www.europeana.eu/portal/en/rights/public-domain-charter.html, archived at https://perma.cc/KX8M-VVV6 (advocating for the public’s interest in maintaining access to Europe’s cultural and scientific heritage).

[233] See Charter for Innovation Creativity and Access to Knowledge, Free Culture Forum, http://fcforum.net, archived at https://perma.cc/N9N4-D93F (last visited Jan. 30, 2017).

[234] John Dupuis, Panton Principles: Principles for Open Data in Science, Science Blogs (Feb. 22, 2010), http://scienceblogs.com/confessions/2010/02/22/panton-principles-principles-f/, archived at https://perma.cc/27WH-ALQE.

[235] David Bollier, The Commons as a New Sector of Value-Creation: It’s Time to Recognize and Protect the Distinctive Wealth Generated by Online Commons, On the Commons (Apr. 22, 2008), http://www.onthecommons.org/commons-new-sector-value-creation, archived at https://perma.cc/9QBP-JZ5Z.

[236] Benkler, supra note 214, at 1.

[237] See David Bollier, Viral Spiral: How the Commoners Built a Digital Republic of Their Own 3–14 (New Press 2009).

[238] See Charter of Fundamental Rights of the European Union, December 18, 2000, 2000 O.J. (C364) 1, 8, 37.

[239] Individual components of this roadmap for reform have been described in previous works of mine—to which I refer in this article. A more detailed review of this roadmap for reform—with each component of the proposal acting as a pillar for a metaphorical temple dedicated to the enhancement of creativity—will be the subject of Chapter 12 from my forthcoming book. Giancarlo F. Frosio, Rediscovering Cumulative Creativity: From the Oral-Formulaic Tradition to Digital Remix: Can I Get a Witness? (Edward Elgar, forthcoming 2017) (expanding on Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, supra note 19).

[240] Lange, Reimagining the Public Domain, supra note 42, at 463.

[241] See Communia Final Report, supra note 69 (further discussing the politics of the public domain).

[242] This proposal—and the historical interdisciplinary research that serves as a background—has been discussed at length in previous works of mine to which I refer. See Giancarlo F. Frosio, A History of Aesthetics from Homer to Digital Mash-ups: Cumulative Creativity and the Demise of Copyright Exclusivity, 9(2) Law and Humanities 262 (2015), http://www.tandfonline.com/doi/full/10.1080/17521483.2015.1093300, archived at https://perma.cc/YEC3-34FK; see also Murray, supra note 9.

[243] For a full discussion of the idea of user patronage—and a review of the economics of creativity from a historical perspective—see Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, supra note 19, at 376–90.

“Danger, Will Robinson”? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment


Cite as: Daniel Ben-Ari, Yael Frish, Adam Lazovski, Uriel Eldan, & Dov Greenbaum, “Danger, Will Robinson”? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment, 23 Rich. J.L. & Tech. 3 (2017), http://jolt.richmond.edu/index.php/volume23_issue2_greenbaum/.

Daniel Ben-Ari,* Yael Frish,** Adam Lazovski,*** Uriel Eldan,**** & Dov Greenbaum*****

Table of Contents

I.     Introduction: What Is Artificial Intelligence?

II.     Disciplines & Recent Developments

III.     Ethics & Philosophy

IV.     The Emergence of Artificial Intelligence, Its Pioneers, and The Beginning of Its Implications

A.     The Turing Test

B.     The Roots of Artificial Intelligence

C.     Physical Symbol Systems Hypothesis

D.     Computational Intelligence

E.     Child Machine

V.     Artificial Intelligence and Its Implications in Law

A.     Market Failure

B.     The Vast Market Size

C.     Funding

VI.     The Reality as We See It, The Day After Artificial Intelligence

VII.    Specific Ethical, Legal, and Social Implications

VIII.    Artificial Intelligence in Fair Use–An Early Stage Proof of Concept

IX.      Conclusion and Recommendations for Courses of Action

“Artificial intelligence is our biggest existential threat”

– Elon Musk[2]

I. Introduction: What Is Artificial Intelligence?

[1]       In this position paper, we seek to provide a preliminary outline of the ethical, legal, and social implications facing society in light of the growing engagement of artificial intelligence (“AI”) in our everyday lives as attorneys. In particular, we investigated these implications by developing, in collaboration with the IBM Watson team, a proof of concept. In this proof of concept, we aimed to specifically demonstrate the usefulness of AI in analyzing case law in the field of intellectual property, particularly within copyright fair use. To this end, we have extensively reviewed the relevant literature in an effort to pose pertinent and challenging questions regarding the implications of AI in all areas of law.

[2]       AI is a sub-field of computer science;[3] it can be broadly characterized as intelligence exhibited by machines and software.[4] Intelligence refers to many types of abilities, yet is often constrained to the definition of human intelligence. It involves mechanisms, some fully discovered and understood by scientists and engineers, and some not.[5]

[3]       AI is playing an increasingly important role in our everyday lives.[6] It is asserted that in the near future AI will replace or enhance various human professions.[7] One of the overarching goals of the AI discipline is to improve machines and systems so that they can reason, learn, self-collect information, create knowledge, communicate autonomously, and manipulate their environment in unexpected fashions.[8] During the past two decades, AI has advanced to make major and influential improvements in quality and efficiency for services and manufacturing procedures.

[4]       Some researchers hope AI will closely approximate or even surpass human intelligence, via an emphasis on problem solving and goal achievement.[9] Both are possible, and AI may even reach computing levels more complicated than the human mind could ever reach.[10]

[5]       Many claim we are still far from achieving this objective, and that fundamental new ideas and paradigm shifts are required in order to push this field forward.[11] These aims notwithstanding, AI studies thus far continue to progress in the direction of understanding and “modeling human consciousness and the inner mind.”[12]

II. Disciplines & Recent Developments

[6]       To understand the field of AI we must first understand how researchers and philosophers view it. They divide AI into two categories: strong and weak.[13] Strong AI further divides into human-formed AI and non-human-formed AI.[14] The former refers to the ability of computers to think, reason, and deduce in a manner similar to humans, and the latter refers to the ability to reason independently, without similarity to the human brain.[15] Weak AI refers to computers mimicking thinking and reasoning abilities, without actually having these abilities.[16] Understanding these distinctions is important when discussing issues of AI, thinking, and consciousness.

[7]       The main progress made so far has been within weak AI. However, some computer scientists are not “holding their breath” to attribute actual thinking and reasoning abilities to a machine with AI.[17] To quote Edsger W. Dijkstra–a member of computer science’s founding generation–“[t]he question of whether Machines Can Think (…) is about as relevant as the question of whether Submarines Can Swim.”[18] By analogy, computer scientists argue that planes are tested on how well they fly, not on whether they fly as birds do. Essentially, these scientists believe that we need to step out of the current linguistic frameworks. Can a submarine swim? Can an airplane fly? Can a machine think? Many scientists claim these distinctions are meaningless–when we refer to machines as ‘acting’ intelligently, we are actually saying that they do not possess a mind or a consciousness.[19]

[8]       There are various AI applications, each different from the others, for example: speech recognition, language understanding, problem solving, game playing, computer vision (two-dimensional vs. three-dimensional), expert systems, heuristic classification, and more.[20] These applications fall into two groups. One involves narrow applications (such as speech recognition), and the other is broader (artificial general intelligence (AGI), including autonomous agent possibilities).[21]

[9]       Currently, most AI applications are narrow (i.e., highly specialized entities used to carry out specific tasks).[22] In contrast, the human brain excels in many different environments and combines strategies across applications. Current AI examples include a word processing program that automatically corrects spelling, a computer that learns and plays a video game, a chess-playing computer (e.g. Deep Blue, IBM’s chess-playing computer),[23] or a Go-playing system (e.g. AlphaGo, Google’s Go-playing system).[24]

[10]     Due to the obvious distinction from human intelligence, society generally sees this type of AI as posing no immediate danger or threat. Yet, it is important to understand that even the current state of AI is represented by a broad spectrum of applications–including applications assessing one simple task;[25] “speech recognition programs,…collaborative filtering software, like that used by Amazon.com…”;[26] “Aaron, a robotic artist that produces paintings that could easily pass for human work;”[27] IBM’s Watson;[28] eBay’s computerized arbitration system Modria;[29] and much more. All of these narrow AI applications range in capability from one simple task to intricate intelligent procedures.[30]

[11]     One explanation for the vast immersion of AI within current society may be the incorporation of basic science researchers (such as computer scientists) into high-tech companies.[31] Here, scientists have quickly learned that for AI to become accepted in human society, the emphasis must be on its benefits as a bridge to what could not have been achieved thus far–assisting and contributing to humans–rather than on how AI could replace them.[32] This is in stark contrast to AI in fiction.[33]

[12]     In fiction and cinema, AI is frequently portrayed as an ominous entity entwined with danger (e.g. HAL in “2001: A Space Odyssey,”[34] Agent Smith in “The Matrix,”[35] and the T-800 in “The Terminator”).[36] In many of these plots, AI is depicted as fully autonomous machines acting out in a way that is harmful to human beings.[37] However, there are also movies, such as Spielberg’s “A.I.,”[38] that portray machines in a softer, more humanlike light. Other films use AI simply for comedic relief, such as Star Wars[39] or its spoof, Spaceballs.[40] While reality is still far from the entities portrayed in science fiction, there are already AI machines that can cause injuries or death (e.g. autonomous cars), act as home and service robots (e.g., iRobot’s Roomba, Anny the CareBot), or serve in the private, finance, and governmental sectors.[41]

[13]     In light of all of the bad press it gets, it is important to understand how AI is being presented to society, what people think about it, and what needs to be considered nowadays in order to promote innovation in this area.

III. Ethics & Philosophy

[14]     The use of AI poses many important ethical questions. The philosopher John Searle, in his famed Chinese Room Argument, noted that the idea of a non-biological machine being intelligent is incoherent: “[t]he point is not that computers cannot think. The point is rather that computation as standardly defined in terms of the manipulation of formal symbols is not by itself constitutive of, nor sufficient for, thinking.”[42] Further, the eminent computer scientist Joseph Weizenbaum warned that “the idea [of an AI] is obscene, anti-human and immoral.”[43]

[15]     Many philosophers, scientists, and others have deliberated on such ethical and existential dilemmas. The artificial intelligence control problem, for example, was discussed in a book published in 2014 by the Swedish philosopher Nick Bostrom, titled “Superintelligence: Paths, Dangers, Strategies.”[44] The book hypothesizes that AI could evolve into a form of superintelligent entities that outsmart human intelligence[45] and are even capable of self-improvement.[46] Bostrom suggests that, in that process, the entities might become uncontrollable and lead to a human existential catastrophe.[47]

[16]     Two foundational concepts in the evolution of AI that tend to come up when people refer to the dangers of AI are technological singularity and swarm intelligence. Technological singularity refers to the point at which technological progress will become incomprehensibly rapid and complicated, beyond our human capabilities.[48] The AI, in a feedback loop of ever-accelerating self-improvement, will surpass us in intelligence and become too smart for us to control.[49] The term was first used in this context by the mathematician John von Neumann, and was published in 1958 when Stanislaw Ulam wrote about a conversation he had with von Neumann.[50]

[17]     When we speak about technological singularity in the AI context, we speak about the point at which the intelligence will surpass all human control or understanding, becoming too immeasurable and profound for humans to grasp – an “intelligence explosion.”[51] It can occur either when AI enters into a “runaway effect” of ever accelerating self-improvement, or when AI is autonomously capable of building other more intelligent and powerful entities.[52]

[18]     The second term, swarm intelligence, refers to the incorporation of self-replicating machines in all aspects of life, science, industry, and even politics.[53] The swarm will become a decentralized, self-organizing system.[54] In the Terminator movies, this is Skynet: a swarm of self-improving AI machines, built by Cyberdyne Systems, that takes over the world.[55]

[19]     In addition to the ethical dangers of AI machines, there are also complicated existential questions that concern not only AI, but also humanity. Can machines have, or act as though they have, human intelligence? And if so, do they have a mind? If they have consciousness, or self-awareness, do they have rights?

[20]     Consciousness relates to the abilities of understanding and thinking. Nevertheless, consciousness is still a widely unknown concept. Should a machine be aware of its mental state and actions? Can it be aware? Is it even relevant? Can minds be artificially created, as John Searle asked?[56] And what about free will? Even in some fields of philosophy it is debatable whether humans have free will, so how does this question bear on artificial entities? And if we consider AI entities to have consciousness or minds, does it become immoral to dismantle them? And how do we program them with an understanding of right and wrong?[57]

[21]     The vast majority of AI researchers do not pay attention to most of these ethical and social questions. Whether the machines actually think is not a concern for them, as long as the machines function properly.[58] Yet, philosophers urge all researchers to consider the ethical and social implications of their modus operandi.[59]

[22]     When examining the connection between society and science, history shows us dreadful events regarding ethics and responsibility. The science of AI, however, raises new intricacies–regarding employment, rights, duties, and accountability. For example, are we as a society obligated to establish robot rights? This is not so implausible. For instance, the UK Office of Science and Innovation commissioned a report in 2006 dealing with robo-rights and their possible future implications for law and politics.[60]

[23]     All of the above questions and discussions are yet to be answered, and as long as deeper understanding in the subject is not evident, strong AI will likely remain controversial.[61]

[24]     Evolving new technologies come with both risks and utility. It is unclear what AI will look like in the years to come. However, today we have the ability to try to lay the groundwork for a future in which man and machine will function together, and quite possibly as one.

IV. The Emergence of Artificial Intelligence, Its Pioneers, and The Beginning of Its Implications

[25]     The AI field began evolving after World War II when a number of people, among them the English mathematician Alan Turing, independently started working on intelligent machines.[62]

            A. The Turing Test

[26]     It is argued that Alan Turing’s publication entitled “Computing Machinery and Intelligence,”[63] published in 1950, was the first significant milestone in the AI field.[64] In this paper, Turing presented what is now known as the Turing Test.[65] The goal of the test is to determine, to a satisfactory level, whether a computer has intelligence.[66] Succinctly, to pass the test an observer has to be unable to determine whether he is interacting with a computer or a human.[67] There are three test participants–a ‘judge’ played by a human being, and two entities, a human and a computer.[68] The judge asks both entities questions through a computer terminal, and if he cannot distinguish between the human and the computer, then the computer is said to have passed the test and is considered to have intelligence.[69]

[27]     The Turing Test is both highly acknowledged and highly criticized. We have already witnessed situations in which computers have outsmarted man: IBM’s Deep Blue won a chess game in 1996 against one of the world’s best players, and IBM’s Watson won the U.S. trivia game show Jeopardy! in 2011 against two former winners.[70]

[28]     In his relatively simple test, Turing aimed to elegantly examine a narrow range of AI capabilities including thinking, natural language processing, logic, and learning.[71]

[29]     The Test also has its critics who claim that the comparison to human intelligence is deficient in two respects: first, the comparison includes non-intelligent human behavior, and second, it does not include non-human intelligent behavior.[72] For the second reason, a number of alternative tests have been designed to assess super-intelligent non-human computational capabilities:

  • C-tests, or Comprehension Tests: designed to test comprehension abilities – a main component of intelligence – while formulating information with new given data.[73]
  • Universal Anytime Intelligence Tests: aim to examine intelligence of any present or future biological or artificial system.[74]
  • The Winograd Schema Challenge: conceived by Hector Levesque, a professor of Computer Science at the University of Toronto, is based on a series of multiple choice questions (i.e. linguistic antecedents) which require spatial and interpersonal skills, preliminary knowledge, and other commonsense insights.[75]
  • The Logic Theorist System: demonstrated by Allen Newell and Herbert Simon, is engineered to mimic the problem solving skills of humans and the determination of high-order intellectual processes.[76]
  • The Lovelace 2.0 Test: based on a test conceived in 2001 by Selmer Bringsjord and colleagues (and refined in 2014 by Mark Riedl, a Georgia Tech professor),[77] examines intelligence by measuring creativity, under the assumption that there are works of art that require intelligence in order to create them.[78]

[30]     Another aspect of the Turing Test that received criticism is human misidentification,[79] meaning it is not uncommon for humans to be misidentified as machines. One explanation for this is judge bias based on the answers he expects to receive.[80]

            B. The Roots of Artificial Intelligence

[31]     The Dartmouth Summer Research Project on Artificial Intelligence, proposed in 1955, is considered the birthplace of AI as a discipline.[81] Amongst its participants were John McCarthy and Marvin Minsky.[82]

[32]     John McCarthy, who is typically thought to have coined the term artificial intelligence, was an American computer scientist and cognitive scientist, and one of the founders of the AI discipline.[83] In 1979 McCarthy published “Ascribing Mental Qualities to Machines,” where he argued that “[m]achines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance.”[84]

[33]     Marvin Lee Minsky was an American cognitive scientist in the field of AI and one of its main theorists.[85] Minsky believed that computers were not fundamentally different from the human mind.[86] Amongst his achievements were the construction of robotic arms and grippers, computer vision systems, and the first electronic learning system.[87] In 1969, Minsky, along with Seymour Papert, published the book “Perceptrons,”[88] in which he emphasized critical issues that he felt prevented developmental research of neural networks.[89] Minsky was also an active contributor to the symbolic approach (described below) and the research of human intelligence.[90] In general, Minsky had a positive outlook regarding the future humanlike intelligence capabilities of AI.[91]

            C. Physical Symbol Systems Hypothesis

[34]     The “Physical Symbol Systems Hypothesis” was developed in 1976 by Newell and Simon, and later became a core part of AI.[92] The hypothesis states that “[i]ntelligence is the work of symbol systems…a physical symbol system has the necessary and sufficient means for general intelligent action.”[93] AI computers, as recognized physical symbol systems, are able to exhibit intelligence, and humans, as intelligent beings, must also be physical symbol systems, and therefore similar to computers.[94] Both are capable of processing structures of symbols.[95]

[35]     One problem related to the Physical Symbol Systems Hypothesis is that some activities human beings find hard or challenging–like mathematics–are easy for computers, while some activities that human beings find easy–like face recognition–are difficult for computers.[96] This problem led researchers to develop a strategy that later became known as the “Artificial Neural Network” (also known as Connectionism), which aims to create systems with brain-like characteristics that are capable of learning.[97] These efforts embrace some key elements of machine learning strategy and provide partial answers to “The Common Sense Knowledge Problem” through an effort to create a database containing all of the general common sense knowledge a human possesses, presented in an AI-retrievable fashion.[98]
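To make the connectionist idea concrete, below is a minimal sketch of the perceptron, the simple trainable unit examined in Minsky and Papert’s “Perceptrons” and an ancestor of today’s artificial neural networks. The Python code is our own illustration; the training task (the logical AND function) and all names are chosen purely for demonstration.

```python
# A minimal perceptron, written for illustration only: a single trainable
# unit that learns a linear decision rule from labeled examples.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights w and bias b so that (w . x + b > 0) matches the labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                   # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function: output 1 only when both inputs are 1.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in samples])
# -> [0, 0, 0, 1]
```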

            D. Computational Intelligence

[36]     Computational intelligence aims to understand the principles that enable intelligent behavior in artificial systems. According to this area of research, AI has the following four common features:

  • Ability to adapt flexibly to a changing environment;
  • Evidential reasoning and perception;
  • Ability to plan and execute goals; and
  • Ability to learn.[99]

[37]     The early AI successes left researchers optimistic; however, in the late 1950’s the field began to encounter obstacles and difficulties. One concern that is still highly relevant today is the “Common Sense Knowledge Problem”: a system only “knows” the information that it explicitly receives, and it is often incapable of making trivial connections on its own.[100] To this end, many research strategies are trying to find a way around this problem, including limited domain systems and machine learning.[101]

            E. Child Machine

[38]     The idea of a “Child Machine” was first introduced in the 1950’s.[102] A child machine aims to emulate the learning experience of a human child and implement it on an AI computer.[103] In that way, a computer starts as a “child” and improves by acquiring experiences and knowledge.[104] Yet current programs still have many drawbacks regarding physical experiences and language skills, which hinder the desired successful outcome.[105]

[39]     Even though there has been substantial progress in the science of AI, high hurdles remain. Difficult issues and thought-provoking questions that were raised over two decades ago are still far from receiving answers. Human-level abilities, such as those described in the common sense knowledge problem above, are still far from being achieved.[106] While some types of human reasoning have been emulated to varying degrees, overall progress remains relatively sluggish.[107]

[40]     In order for AI to further evolve, it is necessary to continue researching different implementation techniques of common reasoning such as: logical analysis, handcrafted large scale databases, web mining, and crowd sourcing.[108]

[41]     The next sections examine and analyze the involvement of AI in the field of law, including its ethical, legal, and social implications in both the short term and the long term. Further, the third chapter discusses the fair use doctrine (a doctrine within copyright law), which is used as a test case to demonstrate AI abilities.[109] The proof of concept was conducted through IBM’s Watson with the guidance of IBM Israel.[110]

V. Artificial Intelligence and Its Implications in Law

“Of course I’ve got lawyers. They are like nuclear weapons; I’ve got them ‘cause everyone else has. But as soon as you use them they screw everything up.”

– Danny DeVito[111]

[42]     Notwithstanding the dire lack of paradigm-shifting progress described above, AI technologies are still progressing rapidly, not only theoretically but also practically. Developers in both large corporations and start-ups aim to create learning and computerized thinking algorithms that will disrupt our reality.[112] While some of these algorithms promise to advance mankind’s welfare, others pose dramatic and imminent threats.[113]

[43]     This chapter depicts the rationale that brought forth and promoted the ‘invasion’ of AI into the world of law.[114] After reviewing the causes, we describe the technologies and companies worthy of the title ‘game-changing,’ which may bring great value to society along with dramatic ethical, social, and legal shifts.

[44]     In the final part of this chapter we discuss what these dramatic societal shifts can offer, both as opportunities and threats.

[45]    Market failure provides a great opportunity for AI to enter the field of law and have a significant impact on it. Our analysis focuses mainly on the United States market.

            A. Market Failure

[46]     Legal systems around the world are collapsing under an ever-growing workload.[115] It is no secret that the United States currently leads the world in lawyers per capita and has dramatically overloaded judicial systems.[116] The fact remains that the judicial process is time consuming, inefficient, and cannot keep up with the speed and scale at which conflicts grow.[117] Add to that the legal tactics lawyers use to stall, buy time, and sometimes ‘dry’ their opponents out of resources, and you have a very dysfunctional system. The system’s own frequent users, lawyers, are active partners in creating this inability to function.[118]

[47]     Although this realization is not news to most, the fact remains that with current population growth, as well as the ever-expanding reach of the internet, the worldwide potential for legal conflicts continues to grow, and many judicial systems cannot keep up with this growth.

            B. The Vast Market Size

[48]     The United States is among the largest consumers of legal services in the world.[119] The market size is estimated to be 437 billion USD annually.[120] Additionally, in recent years there has been an ongoing shift of power. While in the past large law firms controlled most of the market, today nimble boutique firms are gaining an ever-increasing market share.[121] The potential to compete with the largest firms empowers young and small firms to innovate, become more efficient, and even try new services, enabling them to gain a competitive edge.[122]

            C. Funding

[49]     The legal industry is currently witnessing two trends in funding which make the invasion of AI into the world of law a fait accompli. First, 2015’s fourth quarter reached a five-year record with the highest funding levels for the entire area of AI.[123] In addition, funding for legal tech start-ups has grown from seven million USD in 2009 to a whopping one hundred fifty million in 2013.[124]

[50]     These trends create fertile ground in which technological solutions can arise to solve large-scale problems like those plaguing judicial systems around the world.[125]

[51]     While the use of computation and software is not new to the field of law,[126] we can now identify three main technological fields–Machine Learning, Natural Language Processing, and Big Data–which may enable AI to reign over the world of law.

[52]     Some of these technologies comprise different pieces of a puzzle which AI will soon piece together. When applied in a holistic manner, these technologies may replace most lawyers and judges.[127] These changes will come not in the short term, but in the years ahead.[128] Yet, we believe they will come faster than expected. The three main technological fields are:

  • Machine Learning:[129] A computer science subfield in which computer-generated algorithms are trained to recognize patterns within data.[130] This usually involves massive amounts of data in all areas–from visuals, to categorizing language patterns within human conversations, to written data.[131] (A minimal code sketch of this approach appears after this list.)
  • Natural Language Processing (NLP):[132] A sub-category within AI and machine learning.[133] In essence, NLP is heavily reliant on machine learning.[134] This form of research integrates computer science, psychology, and the interaction between the two.[135] Research in this field seeks to ‘teach’ computers how to comprehend human language, seek patterns, and perform deductions based on language patterns and reasoning.[136] The difference between NLP and machine learning is the added value from interactions with human behavior, human language, and even human biases and other psychological traits.[137]
  • Big Data: This field typically refers to data sets too large to manage and analyze via traditional data analytics.[138] Big legal data sets are relatively young, due in part to the accumulation of legal data, which has accelerated greatly since the beginning of the digital storage age (2002).[139] These data sets are used to create predictive analytics algorithms in various fields, from business trends to target-audience marketing methods.[140] They can also be used to analyze legal claims, judicial opinions, and more.[141] This type of data usually exists in public records.[142]
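To make the machine learning entry concrete, the following minimal sketch trains a supervised text classifier of the kind such systems build upon. It assumes the scikit-learn library; the toy sentences and labels are invented for illustration and are not drawn from any real data set.

```python
# A minimal supervised text classifier, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: sentences labeled by the party they favor.
sentences = [
    "The use was transformative and added new expression.",
    "The copying served a nonprofit educational purpose.",
    "The defendant reproduced the entire work verbatim.",
    "The use supplanted the market for the original work.",
]
labels = ["defendant", "defendant", "plaintiff", "plaintiff"]

# TF-IDF turns each sentence into a weighted word-frequency vector;
# logistic regression then learns which words signal which label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["The excerpt was brief and added new commentary."]))
```

A production system would need far more data and far richer features, but the pattern–vectorize text, then learn a mapping from vectors to labels–is the same.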

[53]     We are now on the verge of a legal renaissance.[143] Market failure mixed with an immense market, growing funding for start-ups, and available and rapidly growing technology is a volatile concoction, which will likely create dramatic and disruptive changes in the near future.[144]

[54]     Buchanan and Headrick first raised the notion of using AI in the legal field in November 1970 in their article “Some Speculation about Artificial Intelligence and Legal Reasoning.”[145] In their research, they suggest using computers to model human thought processes and, as a direct outcome, to help lawyers in their reasoning processes.[146] Later, an experiment was conducted by Thorne McCarty, who created a program capable of performing a narrow form of legal reasoning in the specific area of corporate reorganization taxation.[147] Given a ‘description’ of the ‘facts’ of a corporate reorganization case, the program could implement an analysis of these facts in terms of several legal concepts.[148]

[55]     Today, in this subfield of AI and law there are already numerous technologies such as:

  • IBM’s Watson Debater:[149] The Debater is a new feature of IBM’s well-known Watson computer.[150] When asked to discuss any topic, it can autonomously scan its knowledge database for relevant content, ‘understand’ the data, select what it believes are the strongest arguments, and then construct natural language sentences illustrating the points it has selected, both for and against the topic. Using that process, it can assist lawyers by suggesting the most persuasive arguments and precedents when dealing with a legal matter.[151]
  • ROSS Intelligence:[152] “SIRI for the law”[153] was developed in IBM’s Watson labs. ROSS is a legal research tool that enables users to obtain legal answers from thousands of legal documents, statutes, and cases.[154] The question can be asked in plain English and not necessarily in legal form. Ross’s responses include legal citations, suggested articles for further reading, and calculated ratings to help lawyers prepare for cases.[155] Because Ross is a cognitive computing platform, it learns from past interactions, i.e. Ross’s responses increase in accuracy as lawyers continue to use it. This feature can help lawyers reduce the time spent on research.[156]
  • ModusP:[157] An Israeli startup which has created an advanced search engine using sophisticated algorithms based on AI. The search function helps jurists reduce legal research hours by finding legal knowledge and insights more efficiently.[158]
  • Lex Machina:[159] An intellectual property (“IP”) research company that helps companies anticipate, manage, and win patent and other IP lawsuits by comparing cases to a database of information and helping their customers draw valuable conclusions that inform winning business and legal strategies.[160] The technology compiles data and documents from court cases and converts them into searchable text files.[161] After a keyword, patent, or party is searched for, data and documents are sent back out.[162] It gives lawyers more information on specific judges, a client’s history, and information on what they can do to have a better chance at winning.[163]
  • Modria:[164] A cloud-based platform, initially developed for eBay and PayPal, that functions as an Online Dispute Resolution (“ODR”) service.[165] It enables companies “to deliver fast and fair resolutions to disputes of any type and volume.”[166] This technology aims to prevent the filing of lawsuits by providing easily accessible alternatives for dispute resolution.[167] Modria aims to create fair ODRs, based on the knowledge and insights from millions of cases and other disputes that the system has already resolved.[168]
  • Premonition:[169] A technology which utilizes Big Data and AI to expose which lawyers win the most cases and before which judges.[170]
  • BEAGLE:[171] A technology that uses AI to quickly highlight the most important clauses in a contract and also provides a real-time collaboration platform that enables lawyers to easily negotiate a contract or pass it around an organization for quick feedback.[172] Beagle’s learning process allows the program to adapt to focus on what users care about most.[173]
  • Legal Robot:[174] A platform that enables users to check, analyze, and spot problems in contracts before signing them.[175] The platform is also meant to help users understand complex legal language by parsing legal documents and translating them into accessible language by transforming them into numeric expressions, so statistical and machine learning techniques can derive meaning.[176] It is also designed to compare thousands of documents in order to build a legal language model to be used as a tool for referencing and analyzing contracts.[177]

[56]     The development of the field of AI and law starts with programs that analyze cases and continues with technologies that make lawyers’ tasks efficient, solve disputes, and replace human intervention. Surveying this course of development, we can predict that in the long run, AI technologies using machine learning and deep learning techniques may replace lawyers, arbitrators, mediators, and even judges. Computers could do the work of a lawyer – examining a case, analyzing the issues it raises, conducting legal research, and even deciding on a strategy.

VI. The Reality as We See It, The Day After Artificial Intelligence

            A. Judges and Physical Courts

[57]     Judges and their courts will become less necessary.[178] Most commercial disputes and criminal sentencing will be run by algorithms and wizards,[179] enabling systems like Modria to construct conflict resolutions in a much healthier and more down-to-earth manner. After all, they reportedly solve over fifty million disputes every year without any human intervention.[180] Most disputes can then be resolved by an AI algorithm that determines the amount of damages to be paid to each side. Similar processes can occur in divorce hearings–algorithms can automatically assess the individuals’ property and financial background, and calculate the amount of time spent together, to create a fair divorce agreement.

[58]     One of the biggest problems with conflict resolution is the fact that it is run by human beings–prone to the effects of emotion, fatigue, and general current mood.[181] When a legal claim is first constructed by algorithms instead of human beings, the outcomes are likely to be more productive. For example, Modria is able to resolve hundreds of millions of commercial disputes yearly without the intervention of third-party human beings providing a verdict.[182] Claimants will, of course, be able to appeal to a human judge, but the need for such appeals should dramatically decrease over time as machine learning algorithms gain a better understanding of the statistical meaning of justice. To reduce the number of appeals in tort cases, a government can create a fund to financially accommodate damages in order to facilitate a ‘sense of justice’ in claimants’ minds.

[59]     Some judges may remain in office to rule on cases that algorithms cannot bring to a decision acceptable to both sides, and on cases in which entirely new issues are presented.

            B. Lawyers

[60]     Lawyers may also become a dying breed,[183] as algorithms learn how to structure claims, check contracts for problematic caveats, negotiate deals, predict legal strategies, and more. Using AI to create simple, optimally designed regulations and laws that are easier to learn, understand, and litigate by computer will further the winnowing of the legal profession.[184]

[61]     Lawyers–or something similar–will still be necessary; however, they will focus mainly on risk engineering instead of litigation and contracts.[185] Lawyers will need to use intuition and skills not yet available to machines to analyze exposure and various aspects of performing business and civil actions.[186] They will, however, be helped by AIs that have already sifted through all the relevant data. Until AI is able to integrate the data into a nuanced analysis that requires some form of higher thinking, creativity, and the prediction of likely outcomes based on human reactions, we will still need lawyers. In the future, all but the most skilled litigation and corporate lawyers will become unemployed as computer algorithms learn to emulate earlier successful strategies and avoid unsuccessful ones to achieve optimal outcomes.[187] Young (often overpaid) associates will become unnecessary as much of their grunt work will be doable by machine.[188]

[62]     In some areas of the law, lawyers may take longer to disappear entirely. In areas without clear precedent, cases may be deemed too delicate to be dealt with by computers. Some clients may never trust computers and will insist on using humans; it will take time until we are willing to entrust our freedom (or our lives, in certain states in the United States) to the hands of algorithms.[189]

            C. Jury

[63]     Juries, like the other members of the legal system, will not be needed for most cases as there will be fewer trials.[190] The majority of legal issues will be solved by algorithms. In addition, technology may ensure that juries are designed to represent society, perhaps even mimicking human biases involving race, background, and life experience.[191] Such a jury could easily be instructed to disregard information, or weigh some data differently than others.[192]           

            D. Law School

[64]     Law schools will change dramatically, not least because we will need fewer lawyers. Moreover, the nature of legal learning will change to include subjects that are not taught in law schools today–creativity, understanding of statistics, big data analysis, and more.[193]

VII. Specific Ethical, Legal, and Social Implications

[65]     When considering these technologies and the changes they bring to the legal field, we must refer to the ethical, legal, and social implications that they create:

[66]     Today, the legal profession—lawyers, judges, and legal education—faces a disruption, mostly because of the growth of AI technology, both in power and capacity.[194]

[67]     An example of this disruption is that today computers can review documents, a task which human lawyers performed in the past. The role of AI is growing exponentially, so it is predicted that technology will evolve to a level that will enable computers to take over more complex legal tasks such as legal document generation and the prediction of litigation outcomes.[195] These implementations will become possible as the learning abilities of machine intelligence become better and better. Already, fifty-eight percent of respondents to the question “Is your firm doing any of the following to increase efficiency of legal service delivery” responded “Using technology tools to replace human resources.”[196] More specifically, forty-seven percent saw Watson replacing paralegals, and thirty-five percent thought the same for first-year associates. Thirteen and a half percent even thought Watson could replace experienced legal partners.[197] Notably, while twenty percent said that computers will never replace human practitioners,[198] that number has gone down from forty-six percent in 2011.[199]

[68]     There are some benefits which derive from these implications. First, they will increase competition in the legal services market, which will increase efficiency.[200] Second, the pricing of lawyers’ services today is very ambiguous because it is hard to predict the total services required; these technologies could enable price comparisons and the entrance of new players into the legal services market.[201]

[69]     The forecast is that these implications will affect the following legal areas:[202]

  • Legal Discovery: Machine searches will enhance the legal discovery process by making the review of legal documents more efficient. There are already a handful of software tools that use predictive coding to minimize lawyerly interference in the e-discovery process, including Relativity,[203] Modus,[204] OpenText,[205] kCura,[206] and others.[207] The courts have also acknowledged the use and promise of predictive coding.[208]
  • Legal Search: Search tools such as Lexis[209] and Westlaw[210] were the first legal search engines to use an intelligent search tool. Afterward, Watson enabled searching using semantics instead of keywords.[211] Semantic search accepts natural language queries, and the computer responds semantically with relevant legal information.[212] Ross, mentioned above, is an example of this kind of system.[213] Advanced features provide information about the strength of a precedent, considering how much others rely on it, enabling its effective use.[214] Eventually AI will even be able to spot issues based on the searches conducted.[215] (A minimal sketch of semantic search appears after this list.)
  • Compliance: Legal and regulatory compliance is often socially and morally required, not to mention the penalties that are due for non-compliance.[216] As such, many corporations employ teams of lawyers to confirm that they comply with the applicable regulatory regimes. AI machines are already being employed in this area, including Neota Logic,[217] which powers other companies’ AI regulatory compliance systems, such as, Compliance HR for employment regulations[218] or Foley and Lardner Global Risk Solutions (GRS) for Foreign Corrupt Practices Act of 1977 (FCPA) compliance.[219]
  • Legal Document Generation: In the past, the use of templates helped reduce the cost of these legal services. Machine intelligence will evolve to generate documents that answer the specific needs of an individual. When these files are reviewed in court, AI will be able to improve the documents by tracking their effectiveness, using its learning abilities.[220]
  • Document Analysis: In addition to generating documents, AI can and will continue to assess the liabilities and risks associated with particular contracts, as well as determine ways for companies to optimize contracts to reduce costs.[221] Today, companies such as eBrevia[222] and LegalSifter[223] are doing just that.
  • Brief and Memo Generation: Machine intelligence will be able to create drafts and memos that will then be revised and shaped by lawyers. In the future it will create much more accurate briefs and memos, assisted by legal research programs that provide useful data.[224] Some have even suggested using AI to draft legislative documents.[225]
  • Legal Analytics: Companies such as Lex Machina,[226] Lex Predict,[227] and Legal Operations Company, LLC[228] already combine data and analytic abilities to predict the outcomes of situations that have not yet occurred. There are areas of law, such as copyright fair use (discussed next), that are easier to model because the relevant data revolves around specific, easily predictable factors.[229] Given the exponential improvement of computers and their learning abilities, these models and predictions will evolve to support more complex areas of law and to predict case outcomes.[230]
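To illustrate the difference between keyword and semantic search noted in the Legal Search entry above, the following minimal sketch assumes the open-source sentence-transformers library and its publicly available “all-MiniLM-L6-v2” model; the case snippets and query are invented for illustration.

```python
# Semantic search sketch: rank documents by meaning, not shared keywords.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

cases = [
    "Parody of a popular song held to be fair use.",
    "Wholesale copying of a textbook for resale was infringement.",
    "Thumbnail images in a search engine were transformative.",
]

# A natural-language query that shares almost no keywords with the cases.
query = "Can I make fun of a famous tune without permission?"

# Embed the query and cases as vectors, then rank by cosine similarity:
# semantically related texts land close together even without shared words.
scores = util.cos_sim(model.encode(query), model.encode(cases))[0]
for score, case in sorted(zip(scores.tolist(), cases), reverse=True):
    print(f"{score:.2f}  {case}")
```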

[70]     These changes will not only affect access to hard-to-obtain legal representation,[231] they will also affect the workplace of lawyers. Those who perform these tasks and do not assimilate these shifts could lose their jobs.[232] Additionally, in the future, fewer substandard lawyers will be needed. On the other hand, super-star lawyers, or bespoke attorneys,[233] will be more easily identifiable (because legal analytics can monitor lawyers’ success rates) and will turn these technologies to their advantage. Even though machines could replace many of a lawyer’s tasks, they cannot speak in court for the foreseeable future, so litigators will still be needed.[234] Moreover, some areas of law are subject to such rapid legal change that even intelligent machines will not be able to learn them quickly enough, so lawyers will still be needed in those specialized areas. Also, lawyers’ human judgment may still add value to computer predictions.[235]

[71]     As a result of these changes, predicting case outcomes will be easier and more accurate, cases will be more likely to settle, and fewer trials will be conducted. It follows that the number of physical courtrooms may also shrink dramatically.

[72]     Another change will occur in law schools. As a result of the changes mentioned above, fewer jurists will be needed and only in certain areas of law. Therefore, law schools should change their aim and focus on the necessities of the new legal profession including technical expertise and an ability to interact with and efficiently use the new multidisciplinary AI technology.[236]

VIII. Artificial Intelligence in Fair Use–An Early Stage Proof of Concept

[73]     In our quest to explore the social, legal, and ethical implications of AI, we partnered with the IBM Watson team, which is creating a workable software product in the area of AI and law. Being students with a strong orientation toward other disciplines such as law, psychology, business, and government, we were naturally drawn to the field of conflict resolution. First, in the words of Steve Jobs, we had to create a “stupid-simple” legal analysis scheme.[237] With this scheme, we aimed to effectively explain to the engineering staff at IBM Watson how lawyers and law students approach a case.

[74]     We drilled down on the set of questions one asks oneself when reading a ruling. As a rule, the more features or details one adds to the algorithm, the more data must be analyzed in order for Watson to effectively learn how the data was initially analyzed. To summarize this point: if we just needed Watson to identify a win or loss, the task would be relatively easy. However, we wanted Watson to analyze why someone won or lost, which is orders of magnitude more complex.

Model 1 – Case Law Analysis Scheme:

[75]     The logical process of analyzing case law is roughly similar regardless of the area of law, but it requires a specific set of stages to analyze the case at hand. After learning that process, the system builds a data set in given legal topics, thus gaining the ability to analyze new cases.

Stage 1: Identifying the Case Type Variables

[76]     In this stage the focus is on the details of the case and on establishing the specific normative framework. (A minimal data sketch of these variables appears after the list below.)

  • Variable 1: Court type. The algorithm must identify in which court, state, or jurisdiction the case is being tried. This is imperative, since the court hierarchy dictates if an earlier ruling is binding for different lower courts. For example: United States Supreme Court case law is considered precedent for all lower courts. Any ruling that conflicts with binding precedent case law will not hold up on appeal. In addition, there are different approaches to the case in each court – district courts find facts and then apply the law, while in most appellate courts, previously found facts are applied to their understanding of the law.
  • Variable 2: Location & Date. The general rules in legal precedent are that new rulings overturn old rulings at that same judicial level and below, and that specific rulings overturn general rulings. This is why it is imperative for the machine learning algorithm to appreciate the source of each ruling.
  • Variable 3: Parties. The algorithm must identify which of the parties is the plaintiff and which is the defendant. This differentiation is imperative to refine which claims have been accepted by the court and which have not. In addition, different degrees of proof may be applied to different stakeholders in a case.
  • Variable 4: Legislative Standards. This should be categorized by both federal law and state law. The labeling of case law and statutes makes it easier to locate cases with similar issues. In this variable, it must be remembered that there is also a hierarchy in legal sources.
  • Variable 5: Rulings & Other Case Law. IBM’s algorithms need to identify other case law cited within each case as persuasive precedent or unpersuasive precedent. This enables the algorithm to develop a broad network structure, enabling it to understand which ties between rulings are relevant and to suggest more cases, which can be addressed in a legal matter.
  • Variable 6: Secondary Sources. Legal literature such as academic articles, books, and blogs provides valuable academic information, enabling the user of a search engine to find new ideas for forming claims or to find opinions that oppose binding precedents (which will be valuable when dealing with a case being tried in a court at the same jurisdictional level as the court that issued the existing precedent).
  • Variable 7: The Judiciary. The algorithm should identify the names of the judges and if they ruled with the majority or minority. In some instances, it should be determined whether a dissenting opinion could be used in another case to provide valuable insights into which claims might be taken under consideration by specific judges. As the famous saying goes, “A good lawyer knows the law; a great lawyer knows the judge.”
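As a rough illustration of how these seven variables might be captured as structured data, the following minimal sketch uses Python’s standard dataclasses; the field names and the example record are our own invention, not IBM Watson’s actual schema.

```python
# Illustrative record for the Stage 1 variables; not Watson's real schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CaseRecord:
    court: str                                            # Variable 1: court type
    location: str                                         # Variable 2: location
    date: str                                             # Variable 2: date decided
    plaintiff: str                                        # Variable 3: parties
    defendant: str
    statutes: List[str] = field(default_factory=list)     # Variable 4: legislative standards
    cited_cases: List[str] = field(default_factory=list)  # Variable 5: rulings cited
    secondary_sources: List[str] = field(default_factory=list)  # Variable 6
    judges: List[str] = field(default_factory=list)       # Variable 7: the judiciary
    majority_opinion: bool = True                         # Variable 7: majority or dissent

# A hypothetical fair use dispute, for illustration only.
example = CaseRecord(
    court="S.D.N.Y.",
    location="New York",
    date="2015-03-02",
    plaintiff="Author A",
    defendant="Publisher B",
    statutes=["17 U.S.C. § 107"],
)
print(example.court, example.statutes)
```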

[77]     We have left out some critical factors in this document which might dramatically influence the form in which IBM’s algorithms approach cases. However, it was essential to create a relatively simplified approach for the algorithm to read the available case law, in order for it to understand the basic ground rules. Factors related to whether laws are general or specific, the times in which they were legislated, the history of upholding and overturning a particular ruling, and other factors have been left out for the sake of creating an initial proof of concept which can predict or evaluate real legal claims to an extent greater than chance. Another important consideration for this effort is the size of the data set–additional factors exponentially increase the amount of training data necessary to teach the algorithm how to think like a lawyer.

Stage 2: Selecting the Field of Dispute for the Case Law Data Set

[78]     The second stage required to create the proof of concept was finding a relatively structured area of law with hard-and-fast, consistent factors which have not changed much over recent years. A legal area with a very simple and clear list of standards would be the optimal tool to ensure that no claims are skipped within a given field of dispute. Further, we sought a field of law defined mostly by federal law rather than state law, as state law would require us to create a different schema for every state.

[79]     Lastly, we wanted to challenge ourselves and find an area of law that would be of interest to the general public and that could result in a usable product. We eventually chose to pursue the creation of an AI algorithm in the field of fair use in publishing, under the Copyright Act. The fair use doctrine incorporates all of the above requirements, and also plays an important societal role due to the public’s misunderstanding and content owners’ misuse of the doctrine, which contribute to copyright’s continued and expanding burden on free speech.[239]

Stage 3: Defining the Fair Use Analysis Scheme

[80]     In order to teach Watson how to analyze a fair use case, we created, based on various resources (textbooks, articles, and the web), a fair use analysis scheme depicting the rationale and analysis performed by lawyers when approaching such claims. Particularly helpful resources were the Stanford Copyright and Fair Use Center[240] and Cornell’s Fair Use Checklist.[241]

  • Fair Use in Publishing – Analysis Scheme: As part of the first model for our case law analysis scheme, we built a data set of fair use in publishing based on verdicts from all of the United States federal circuit courts. Although we had initially attempted to limit this to just the Second and Ninth Circuits, these two circuits did not provide sufficient case law for the analysis.
  • Copyright in a Nutshell: Copyright protection in the United States is legislated under the Copyright Act of 1976.[242] Section 102 of this act elaborates which works of authorship fall under copyright protection.[243] Section 104 of the act deals with the question of when a work becomes the subject matter of copyright.[244] Section 104(a) rules that unpublished works specified by section 102 and/or section 103 are subject to copyright protection under this act, without regard to the nationality or domicile of the author.[245] Regarding published works, section 104(b) elaborates on when copyright protection will apply with regard to the nature of the work and the nationality or domicile of the author.[246] Section 106 covers the exclusive rights of the author, like the right to reproduce copies of the copyrighted work, prepare derivative works based upon the copyrighted work, distribute copies to the public, and more.[247] The Copyright Act provides for limitations to these exclusive rights, like reproduction by libraries and archives[248] and transfers of a particular copy after the first sale[249] (e.g. selling a CD that you bought from a store). The fair use doctrine in U.S. law is based on Section 107.[250] The fair use doctrine provides a defense against infringement–“Fair use was traditionally defined as ‘a privilege in others than the owner of the copyright to use the copyrighted material in a reasonable manner without his consent’”[251]—the application of the formerly judicial doctrine[252] requires the balancing of four statutory factors:
    • (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
    • (2) the nature of the copyrighted work;
    • (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
    • (4) the effect of the use upon the potential market or value of the copyrighted work.[253]

[83]     The court decides each factor, ruling in favor of or against fair use. Then, each of the four factors is weighed against the total weight of the other factors.[254] This is not a trivial process, even for an experienced judge: “[Nor may] the four statutory factors be treated in isolation, one from another. All are to be explored, and the results weighed together, in light of the purposes of copyright.”[255] As such, it is an optimal exercise for an AI.
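As a purely illustrative sketch of this balancing, the following toy model tallies equal-weight factor findings. Real courts weigh the factors qualitatively, and nothing here reflects the actual weighting applied by any court or by Watson.

```python
# Toy four-factor tally, for illustration only; courts do not score factors
# numerically, and this equal weighting is an assumption of the sketch.
FACTORS = ("purpose", "nature", "amount", "effect")

def weigh_fair_use(findings):
    """findings maps each factor to +1 (favors fair use),
    -1 (disfavors fair use), or 0 (neutral)."""
    score = sum(findings[f] for f in FACTORS)
    if score > 0:
        return "fair use"
    return "infringement" if score < 0 else "toss-up"

# Hypothetical case: transformative purpose and no market harm outweigh
# the creative nature of the original work.
print(weigh_fair_use({"purpose": 1, "nature": -1, "amount": 0, "effect": 1}))
# -> fair use
```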

Stage 4: Method of Analysis

[84]     In each case law verdict, we examine and analyze each sentence and categorize it within the fair use doctrine analysis (i.e. marking which factor each sentence relates to and determining whether it supports the claims of the plaintiff or the defendant). Some sentences are deemed irrelevant to either side and are categorized as dicta or as support for the judge’s ruling.

[85]     Each sentence is then electronically tagged with information such as whether it favored the plaintiff or defendant in each factor. After reviewing the checklist with the Watson team, we concluded that in order to teach Watson to understand which sentences favor or oppose each factor –Purpose, Nature, Amount, and Effect – without going into the details of each sub-factor, there is a need for approximately five hundred analyzed and marked verdicts with tags. This produced approximately ten thousand sentences as a learning set for Watson.

[86]     Through examining various fair use cases, most of which concentrate in the Second and Ninth Circuits (most of the relevant IP claims are filed in these courts, which encompass New York and California–the centers of literature and film, respectively), we note that each case has roughly twenty to twenty-five sentences relating to the fair use doctrine. Following this examination, we analyzed all relevant cases, marking each sentence in each case that discussed the fair use doctrine. This marking included determinations for each sentence in the following categories:

  • Data: The minimal number of words needed to classify the sentence under the Factor label or the Side label, as described in the following entries.
  • Factor: Purpose / Nature / Amount / Effect / Ratio / Dicta.[256]
  • Side: Plaintiff / Defendant / Neutral.

For example, Figure 1.

[Figure 1 not reproduced]
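In the same spirit as Figure 1, the following is a minimal sketch of what a few tagged training sentences might look like as structured data; the sentences and labels are invented for illustration and do not reflect Watson’s actual input format.

```python
# Invented examples of tagged sentences; each carries the Data, Factor,
# and Side determinations described in the list above.
training_examples = [
    {
        "data": "the use was commercial in nature",
        "factor": "Purpose",   # Purpose / Nature / Amount / Effect / Ratio / Dicta
        "side": "Plaintiff",   # Plaintiff / Defendant / Neutral
    },
    {
        "data": "only a small excerpt of the novel was quoted",
        "factor": "Amount",
        "side": "Defendant",
    },
    {
        "data": "the court turns to the statutory factors",
        "factor": "Dicta",
        "side": "Neutral",
    },
]
print(len(training_examples), "tagged sentences")
```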

[87]     We are currently conducting a pattern analysis via Watson’s AI algorithms in order to identify patterns in the rationale of judges based on the given data. After incorporating this vast data set, Watson can indicate, for a hypothetical case, exactly which claims and arguments are best, depending on whether we argue for the plaintiff or the defendant.

[88]     This entire project is complex and will take substantial time to complete. Nevertheless, with the planning phase complete and complications accounted for, the next step is to implement the technology.

IX. Conclusion and Recommendations for Courses of Action

[89]     The fruits of AI research are often credited to other fields, as new breakthroughs rapidly come to be seen as mundane computer science inventions. However, we must remember that there is much more to explore and reveal within the still unknown realms of AI.

[90]     In this paper we reviewed what defines AI and how it came about and evolved. We covered recent developments in AI relevant to the field of law and how they are leading to changes such as automated case analysis, increased efficiency in judicial tasks, and the replacement or reduction of human intervention in resolving disputes.

[91]     It is gradually becoming more conceivable that AI will change the world of law and the legal profession in the near future. Two forces make this change likely: the market and the technology. First, market failure has resulted in an overloaded judicial system. Second, funding for legal tech start-ups has grown from $7 million in 2009 to a whopping $450 million in 2013.[257] Market failures and technological achievements will work together to pave the way for a new version of the legal profession.

[1] Lost in Space (1965–1968) Quotes, IMDB, http://www.imdb.com/title/tt0058824/quotes, archived at https://perma.cc/J8RH-UYSB (last visited Sept. 16, 2016) (quoting Robot: "Danger, Will Robinson! Danger!").

The authors would like to thank the Zvi Meitar Family for their continued support of all of our research endeavors. In addition, the authors would like to thank the researchers at IBM Watson for their help and support throughout this project. Finally, the authors would like to especially thank Inbar Carmel for her incredible management of the Zvi Meitar Institute.

*Daniel Ben-Ari is a fourth-year student in the joint program in Law and Business Administration at Radzyner Law School and Arison Business School at Interdisciplinary Center Herzliya (IDC). Daniel served as an Operations Sergeant in the operations division in the Israel Defense Forces. At the IDC, Daniel participated in the Law Clinic for Class Actions and the Certificate Program in European Studies, and he is also a member of the Israeli-German Lawyers Association (IDJV). Additionally, Daniel is a member of the elite program of KPMG Accounting Firm for excellent students. He also volunteers with Nova Project, which provides business consulting services to NGOs. Currently, Daniel is the coordinator of the Zvi Meitar Emerging Technologies Program and working as a teaching assistant of the course Accounting Theory.

**Yael Frish is a BA graduate of the Honors Track at Lauder School of Government, Diplomacy and Strategy at IDC Herzliya.  In the IDF, Yael served as an intelligence officer in an elite intelligence unit, ranked Lieutenant. Yael graduated from the “Zvi Meitar Emerging Technologies” Program and is an alumna of the ProWoman organization. In summer 2014, Yael participated in an Israeli-Palestinian delegation to Germany, and in summer 2015, she represented Israel at the American Institution for Political and Economic Solutions’ Summer Program at Charles University, Prague. In the last two years, Yael has gained professional experience as an analyst, consultant and business developer in consulting and business intelligence companies.

***Adam Lazovski is a Managing Partner at Quedma Innovation Ltd.  Adam is also the founder and Program Manager of the Excel Ventures Program part of Birthright’s Leadership Branch (Excel).  Adam has also worked as a strategy and business development consultant in Robus, Israel’s largest and leading legal marketing and consulting firm.  Adam holds a B.A in Psychology and an LL.B, both from IDC Herzliya. During his studies Adam was part of the Zvi Meitar Emerging Technologies Program.  In addition, Adam was also part of the Rabin Leadership Program, where he initiated a social venture and studied the science behind leaders and entrepreneurs.  Adam served as a first sergeant in the Demolition and Sabotage special unit within the Paratroopers Brigade in the Israel Defense Forces. He maintains an active reserve status.

****Uriel Eldan is a graduate of the joint program in Law (LL.B) and Business Administration (B.A.) at Radzyner Law School and Arison Business School at IDC Herzliya. Uriel served in the elite unit “8200” and in the Research Department of the Army Intelligence in the Israel Defense Forces.  Uriel co-founded the Capital Markets Investment Club at IDC and was a member of the Zvi Meitar Emerging Technologies honors program.  Uriel has worked as a teaching assistant in numerous courses at IDC, and now also in Tel Aviv University.  After graduation, Uriel started his legal internship at one of Israel’s top law firms Herzog, Fox & Neeman in the Technology & Regulation department.

*****Dov Greenbaum is Director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, Interdisciplinary Center, Herzliya, Israel (IDC). Dov is also an Assistant Professor (adj) in the Department of Molecular Biophysics and Biochemistry at Yale University and a practicing intellectual property attorney.  Dov has degrees and postdoctoral fellowships from Yale, UC Berkeley, Stanford, and  Eidgenössische Technische Hochschule Zürich (ETH Zürich).

[2] Samuel Gibbs, Elon Musk: Artificial Intelligence is Our Biggest Existential Threat, The Guardian (Oct. 27 2014, 6:26), https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat, archived at https://perma.cc/MSN2-5TWC.

[3] Kris Hammond, What is Artificial Intelligence?, Computerworld (Apr. 10, 2015, 4:05 AM), http://www.computerworld.com/article/2906336/emerging-technology/what-is-artificial-intelligence.html, archived at https://perma.cc/J7VS-HG43.

[4] See id.; see, e.g., Stuart Jonathan Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 18 (3rd ed. 2010) (discussing important aspects of A.I.).

[5] See John McCarthy, What Is Artificial Intelligence? 2–3 (Nov. 12, 2007) (unpublished manuscript) (on file with Stanford University), http://www-formal.stanford.edu/jmc/whatisai.pdf, archived at https://perma.cc/XF9R-UHKV.

[6] See Ido Roll & Ruth Wylie, Evolution and Revolution in Artificial Intelligence in Education, 26 Int’l J. Artificial Intelligence in Educ. 582, 583 (2016); see Monika Hengstler, Ellen Enkel & Selina Duelli, Applied Artificial Intelligence and Trust—The Case of Autonomous Vehicles and Medical Assistance Devices, 105 Technological Forecasting & Social Change 105, 114 (2016).

[7] See Karamjit S. Gill, Artificial Super Intelligence: Beyond Rhetoric, 31 AI & SOCIETY 137, 137 (2016).

[8] See Avneet Pannu, Artificial Intelligence and its Application in Different Areas, 4 Int’l J. Engineering & Innovative Tech. (IJEIT) 79, 79, 84 (2015).

[9] See id. at 5.

[10] See id. at 3.

[11] See id. at 5.

[12] Katie Hafner, Still a Long Way from Checkmate, N.Y. Times, Dec. 28, 2000, http://www.nytimes.com/2000/12/28/technology/28ARTI.html?pagewanted=1, archived at https://perma.cc/X2PX-25EW.

[13] See Russell & Norvig, supra note 4, at 1020.

[14] See id.

[15] See id.; see John Frank Weaver, Robots Are People Too: how Siri, Google Car, and artificial intelligence will force us to change our laws 3 (2014) [hereinafter Robots Are People Too].

[16] See Russell & Norvig, supra note 4, at 1020; see Robots Are People Too, supra note 15, at 3.

[17] See Russell & Norvig, supra note 4, at 1026; see Robots Are People Too, supra note 15, at 3.

[18] E.W. Dijkstra, The Threats to Computing Science (EWD898), E.W. Dijkstra Archive. USA: Center for American History, University of Texas at Austin, http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD898.html, archived at https://perma.cc/ZU8Y-26TY.

[19] Russell & Norvig, supra note 4, at 1026.

[20] McCarthy, supra note 5, at 10-11.

[21] See Richard Thomason, Logic and Artificial Intelligence, Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/logic-ai/, archived at https://perma.cc/3RPH-PVKV, (last updated Oct. 30, 2013); see Raymond Reiter, Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems 133 (2001).

[22] See David Senior, Narrow AI: Automating The Future of Information Retrieval, TechCrunch, Jan. 31, 2015, https://techcrunch.com/2015/01/31/narrow-ai-cant-do-that-or-can-it/, archived at https://perma.cc/LP5K-Z47X.

[23] See generally Feng-hsiung Hsu, IBM’s Deep Blue Chess Grandmaster Chips, 19 IEEE Micro 70, 70 (1999) (describing IBM’s Deep Blue super computer and discussing the main source of its computation power).

[24] See generally Aviva Rutkin, Anything You Can Do . . ., 229 New Scientist 20,20 (2016) (discussing how artificial intelligence has developed and advanced).

[25] Hafner, supra note 12.

[26] Id.

[27] Id.

[28] See generally Rob High, The Era of Cognitive Systems: An Inside Look at IBM Watson and How it Works (IBM Corp. ed., 2012) (providing a detail analysis on how Watson works).

[29] See Modria, http://modria.com/product/, archived at https://perma.cc/RKN5-LPWT (last visited Nov. 1, 2016).

[30] See Hafner, supra note 12.

[31] See id.

[32] See id.

[33] See, e.g., Robert Fisher, Representations of Artificial Intelligence in Cinema, University of Edinburgh–School of Informatics, http://homepages.inf.ed.ac.uk/rbf/AIMOVIES/AImovies.htm, archived at https://perma.cc/Y7KC-XHP3 (last updated Apr. 16, 2015); see Kathleen Richardson, Rebranding the Robot, 4 Engineering & Technology 42 (2009); see Robert B. Fisher, AI and Cinema Does Artificial Insanity Rule?, in Twelfth Irish Conf. on Artificial Intelligence and Cognitive Science (2001); see Elinor Dixon, Constructing the Identity of AI: A Discussion of the AI Debate and its Shaping by Science Fiction (May 28, 2015) (unpublished Bachelor thesis, Leiden University) (on file with the Leiden University Repository), https://openaccess.leidenuniv.nl/bitstream/handle/1887/33582/Elinor%20Dixon%20BA%20Thesis%20Final.pdf?sequence=1, archived at https://perma.cc/H2P7-NXVC.

[34] 2001: A Space Odyssey, (Stanley Kubrick Productions 1968).

[35] The Matrix, (Village Roadshow Pictures, Groucho II Film Partnership & Silver Pictures 1999).

[36] The Terminator (Cinema ’84 & Pacific Western 1984).

[37] See Jean-Baptiste Jeangène Vilmer, Terminator Ethics: Should We Ban "Killer Robots"?, Ethics & Int'l Affairs (Mar. 23, 2015), https://www.ethicsandinternationalaffairs.org/2015/terminator-ethics-ban-killer-robots/, archived at https://perma.cc/8XSE-BNVC.

[38] A.I. Artificial Intelligence (Amblin Entertainment & Stanley Kubrick Productions 2001).

[39] Star Wars: Episode IV – A New Hope (Lucasfilm Ltd. 1977).

[40] Spaceballs (Brooksfilms 1987).

[41] See Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right from Wrong 7–8 (2009).

[42] John Searle, The Chinese Room Argument, 4 Scholarpedia 3100 (2009), http://www.scholarpedia.org/article/Chinese_room_argument, archived at https://perma.cc/FK4A-5X7Q.

[43] David Adrian Sanders & Giles Eric Tewkesbury, It Is Artificial Idiocy That Is Alarming: Not Artificial Intelligence, in Proc. of the 11th Int’l Conf. on Web Info. Sys. and Technologies 345, 347 (2015).

[44] See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014).

[45] See id. at 26, 155.

[46] See id. at 29.

[47] See id. at 140.

[48] See Singularity Hypotheses: A Scientific and Philosophical Assessment 1–4 (Amnon H. Eden et al. eds., 2012) [hereinafter Singularity Hypotheses].

[49] See id. at 28–29. 

[50] See Stanislaw Ulam, John Von Neumann, 64 Bull. of the Am. Mathematical Soc’y 1, 5 (May 1958), http://www.ams.org/journals/bull/1958-64-03/S0002-9904-1958-10189-5/S0002-9904-1958-10189-5.pdf, archived at https://perma.cc/AV9D-EJ3T.

[51] Guia Marie Del Prado, Stephen Hawking Warns of an ‘Intelligence Explosion,’ Bus. Insider (Oct. 9, 2015, 2:17 PM), http://www.businessinsider.com/stephen-hawking-prediction-reddit-ama-intelligent-machines-2015-10, archived at https://perma.cc/P4NL-2AJ2.

[52] Singularity Hypotheses, supra note 48, at 3.

[53] See Hazem Ahmed & Janice Glasgow, Swarm Intelligence: Concepts, Models and Applications: Technical Report 2012-585, Queen’s Univ. School of Computing 2 (2012), http://ftp.qucis.queensu.ca/TechReports/Reports/2012-585.pdf, archived at https://perma.cc/8APG-T4ZX.

[54] See Eric Bonabeau, Marco Dorigo & Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems 19 (1999).

[55] See Vilmer, supra note 37.

[56] See John Searle, Minds, Brains, and Programs, 3 The Behavioral & Brain Sciences 349, 353 (1980), http://faculty.arts.ubc.ca/rjohns/searle.pdf, archived at https://perma.cc/7K9U-98FA (stating that the equation "mind is to brain as program is to hardware" is flawed).

[57] See Russell & Norvig, supra note 4, at 36–37.

[58] See Michael R. LaChat, Artificial Intelligence and Ethics: An Exercise in the Moral Imagination, 7 AI Mag. 70, 70–71 (1986), http://www.aaai.org/ojs/index.php/aimagazine/article/view/540/476, archived at https://perma.cc/YQ72-FAXG (“[T]he possibility of constructing a personal AI raises many ethical and religious questions that have been dealt with seriously only by imaginative works of fiction; they have largely been ignored by technical experts and by philosophical and theological ethicists”). 

[59] Russell & Norvig, supra note 4, at 1020.

[60] See Nick Bostrom, Robots & Rights: Will Artificial Intelligence Change the Meaning of Human Rights? 5, 5 (Matt James & Kyle Scott eds., 2008).

[61] See Russell & Norvig, supra note 4, at 331.

[62] See István S. N. Berkeley, What is Artificial Intelligence?, Univ. of La. at Lafayette (1997), http://www.ucs.louisiana.edu/~isb9112/dept/phil341/wisai/WhatisAI.html, archived at https://perma.cc/2ZGB-L8P7.

[63] See Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950).

[64] See Berkeley, supra note 62.

[65]See Daniel C. Dennett, Can Machines Think?, in How we Know (Michael Shafto ed., 1985), http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/dennettcanmach.pdf, archived at https://perma.cc/4JWH-XK3K (last visited Sept. 22, 2016).

[66] See id.

[67] See id.  

[68] See id.

[69] See id.

[70] See Jo Best, IBM Watson: The Inside Story of How the Jeopardy-Winning Supercomputer was Born and What it Wants to do Next TechRepublic, (Sept. 9, 2013, 8:45 AM), http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/, archived at https://perma.cc/Z6MD-ZGUA.

[71] See Stuart Russell, Introduction to AI: A Modern Approach, Univ. of CA- Berkeley, https://people.eecs.berkeley.edu/~russell/intro.html, archived at https://perma.cc/R2DQ-94R3 (last visited Oct. 31, 2016).

[72] See Gary Fostel, The Turing Test is For the Birds, 4 SIGART Bull. 7, 8 (1993).

[73] Jose Hernandez-Orallo, Beyond the Turing Test, 9 J. of Logic, Language & Info. 447, 458 (2000).

[74] See José Hernández-Orallo & David L. Dowe, Measuring Universal Intelligence: Towards an Anytime Intelligence Test, 174 Artificial Intelligence 1508, 1509 (2010), http://ac.els-cdn.com/S0004370210001554/1-s2.0-S0004370210001554-main.pdf?_tid=179c084e-83e4-11e6-b8dd-00000aacb362&acdnat=1474892815_a27d3e23a8991e0587ff0c3a6c4c0086, archived at https://perma.cc/C3PW-438Q.

[75] See Hector J. Levesque, Ernest Davis, & Leora Morgenstern, The Winograd Schema Challenge, Proc. of the Thirteenth Int’l Conf. on Principles of Knowledge Representation & Reasoning 552, 554, 557–58 (2012), http://www.aaai.org/ocs/index.php/KR/KR12/paper/viewFile/4492/4924/, archived at https://perma.cc/4VWX-7SYY.

[76] See generally Allen Newell & Herbert Simon, The Logic Theory Machine—A Complex Information Processing System, 2 IRE Transactions on Info. Theory 61 (1956), https://www.u-picardie.fr/~furst/docs/Newell_Simon_Logic_Theory_Machine_1956.pdf, archived at https://perma.cc/NM8Q-LJZS (detailing logic theorist system).

[77] See generally Mark O. Riedl, The Lovelace 2.0 Test of Artificial Creativity and Intelligence, arXiv: 1410.6142 (2014), http://arxiv.org/pdf/1410.6142v3.pdf, archived at https://perma.cc/9HC5-HYF3 (detailing the Lovelace 2.0 test).

[78] See id.

[79] See Kevin Warwick & Huma Shah, Human Misidentification in Turing Tests, 27 J. Exp. & Theoretical Artificial Intelligence 123, 124-25 (2014) http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2014.921734, archived at https://perma.cc/53VP-42NZ.

[80] See Kevin Warwick & Huma Shah, Can Machines Think? A Report on Turing Test Experiments at the Royal Society, 27 J. Exp. & Theoretical Artificial Intelligence 1, 17 (2015) http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2015.1055826?needAccess=true, archived at https://perma.cc/V279-8BRN.

[81] See generally John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955, 27 AI magazine 12, 13-14 (2006), http://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802, archived at https://perma.cc/WA82-QMSZ (reproducing part of the Dartmouth summer research project and summarizing its proposal); see also Berkeley, supra note 62.

[82] See Berkeley, supra note 62.

[83] See Interview by Jeffrey Mishlove with John McCarthy, Ph.D., Thinking Allowed, Conversations on the Leading Edge of Knowledge and Discovery: Artificial Intelligence (1989), http://www.intuition.org/txt/mccarthy.htm, archived at https://perma.cc/3LCQ-KYW5.

[84] John McCarthy, Ascribing Mental Qualities to Machines, Stan. Artificial Intelligence Lab. 1, 2 (1979), http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA071423, archived at https://perma.cc/HJ9K-VC8V.

[85] See Marvin Minsky, ‘Father of Artificial Intelligence,’ Dies at 88, MIT News, Jan. 25, 2016, http://news.mit.edu/2016/marvin-minsky-obituary-0125, archived at https://perma.cc/AS9V-GN4S.

[86] See Will Knight, What Marvin Minsky Still Means for AI, MIT Technology Rev., Jan. 26, 2016, https://www.technologyreview.com/s/546116/what-marvin-minsky-still-means-for-ai/, archived at https://perma.cc/BN2U-AXE5.

[87] See id. 

[88] See Jan Mycielski, Book Reviews, Perceptrons, An Introduction to Computational Geometry, 78 Bull. of the Am. Mathematical Soc’y 12, 12 (1972), http://www.ams.org/journals/bull/1972-78-01/S0002-9904-1972-12831-3/S0002-9904-1972-12831-3.pdf, archived at https://perma.cc/ZT2X-X8JS (reviewing Perceptrons by Minsky and Papert); see also Jordan B. Pollack, Book Review, No Harm Intended, 33 J. Mathematical Psycholog. 358, 358 (1988), http://www.demo.cs.brandeis.edu/papers/perceptron.pdf, archived at https://perma.cc/99US-9KK8 (reviewing the expanded edition of Perceptrons by Minsky and Papert).

[89] See Knight, supra note 86.

[90] See id. 

[91] See id.

[92] See Allen Newell & Herbert A. Simon, Computer Science as Empirical Inquiry: Symbols and Search, 19 Comm. ACM 113, 116 (1976).

[93] Herbert A. Simon, The Sciences of the Artificial 23 (3rd ed. 1996).

[94] See id. at 22.

[95] See Nils Nilsson, The Physical Symbol System Hypothesis: Status and Prospects, in 50 Years of AI 9, 11 (Max Lungarella, Fumiya Iida, Josh Bongard & Rolf Pfeifer eds., 2007).

[96] See David S. Touretzky & Dean A. Pomerleau, Reconstructing Physical Symbol Systems, 18 Cognitive Science, 345, 349 (1994).

[97] See generally Alexander Singer, Implementations of Artificial Neural Networks on the Connection Machine, 14 Parallel Computing 305 (1990) (discussing the practical implementation of artificial neural networks on the Connection Machine and the natural match between the two concepts).

[98] See Ernest Davis, Representations of Commonsense Knowledge 2 (Ronald J. Brachman ed., 1990); see John McCarthy, Applications of Circumscription to Formalizing Common-Sense Knowledge, Dep't of Computer Science, Stan. Univ. (1986), http://www-formal.stanford.edu/jmc/applications.pdf, archived at https://perma.cc/NZG6-6ZN3.

[99] See David Lynton Poole, Alan K. Mackworth & Randy Goebel, Computational Intelligence: A Logical Approach 1, 18 (1998).

[100] See Nilsson, supra note 95, at 11; see Bo Göranzon, Artificial Intelligence, Culture and Language: On Education and Work 220 (Magnus Florin ed., 1990).

[101] See Berkeley, supra note 62.

[102] See John McCarthy, The Well-Designed Child, 172 Artificial Intelligence 2003, 2011 (2008).

[103] See id. 

[104] See Brenden M. Lake et al., Building Machines That Learn and Think Like People. Center for Brains, Minds, and Machines Memo No. 046, at 7 (2016), http://www.mit.edu/~tomeru/papers/machines_that_think.pdf, archived at https://perma.cc/3Q9P-87XD.

[105] See McCarthy, supra note 4.

[106] See Ernest Davis & Gary Marcus, Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence, 58 Communications of the ACM 92, 93 (2015) http://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/fulltext#, archived at https://perma.cc/7PJH-KZP6.

[107] See Vincent C. Müller & Nick Bostrom, Future Progress In Artificial Intelligence: A Survey of Expert Opinion, Fundamental Issues Of Artificial Intelligence, 553, 553 (2016) (“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter”), http://www.nickbostrom.com/papers/survey.pdf, archived at https://perma.cc/UA4E-P6GP.

[108] See Davis & Marcus, supra note 106, at 99-102.

[109] See infra text accompanying notes 240-42.

[110] See Education in Communities, IBM Corp. Resp. Rep. (2014), http://www.ibm.com/ibm/responsibility/2014/communities/education-in-communities.html, archived at https://perma.cc/P2W6-RHFD.

[111] Other People’s Money (Warner Bros. 1991).

[112] See generally, Mark Bergen, Another AI Startup Wants to Replace Hedge Funds, recode, (Aug. 7, 2016, 11:15 AM), http://www.recode.net/2016/8/7/12391180/artificial-intelligence-emma-hedge-fund, archived at https://perma.cc/924G-F822 (explaining how a company aiming to integrate artificial intelligence in stock market trading is a part of a larger wave of start-ups attempting to integrate AI learning in financial markets).

[113] See generally Jacob Brogan, What’s the Deal With Artificial Intelligence Killing Humans? Slate, (April 1 2016, 7:03 AM), http://www.slate.com/articles/technology/future_tense/2016/04/will_artificial_intelligence_kill_us_all_an_explainer.html, archived at https://perma.cc/9HSE-XA7Z (explaining the differing views on the danger of AI in a variety of fields); see also Heather M. Roff, Killer Robots on the Battlefield, Slate (April 7, 2016 11:45 AM), http://www.slate.com/articles/technology/future_tense/2016/04/the_danger_of_using_an_attrition_strategy_with_autonomous_weapons.html, archived at https://perma.cc/Q3FR-M8J2 (discussing the fears and benefits that accompany the prospect of autonomous weapons that engage targets entirely independent of human operation).

[114] See John O. McGinnis & Russell G. Pearce, The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services, 82 Fordham L. Rev. 3041, 3055 (2014) (discussing the possible disruptions that the legal profession may face as a result of integration of A.I. into the legal profession).

[115] See e.g., Overloaded Courts, Not Enough Judges: The Impact on Real People, People for the Am. Way, http://www.pfaw.org/sites/default/files/lower_federal_courts.pdf, archived at https://perma.cc/8CPY-4LGP (last visited Oct. 31, 2016) (explaining the current strain on the American judiciary).

[116] See Guilty as Charged, The Economist (Feb 2, 2013, 4:02 PM), http://www.economist.com/news/leaders/21571141-cheaper-legal-education-and-more-liberal-rules-would-benefit-americas-lawyersand-their, archived at https://perma.cc/Z7KX-Y6S9 (“America has more lawyers per person of its population than any of 29 countries studied (except Greece)”).

[117] See Maria L. Marcus, Judicial Overload: The Reasons and the Remedies, 28 Buffalo L. Rev 111, 112-15, 120 (1978).

[118] See id. at 111. 

[119] See How Big is the US Legal Services Market?, Thomson Reuters (2015), http://legalexecutiveinstitute.com/wp-content/uploads/2016/01/How-Big-is-the-US-Legal-Services-Market.pdf, archived at https://perma.cc/6LH8-AXGN [hereinafter U.S. Legal Services Market].

[120] Id.

[121] See William D. Henderson, From Big Law to Lean Law, 38 Int'l Rev. of L. & Econ. 1, 3-5, 10, 11 (2014). But cf. Russell G. Pearce & Eli Wald, The Relational Infrastructure of Law Firm Culture and Regulation: The Exaggerated Death of Big Law, 42 Hofstra L. Rev. 109, 110 (2013) (arguing that Big Law is not dying and presenting contradicting evidence).

[122] See U.S. Legal Services Market, supra note 119.

[123] See Artificial Intelligence Global Quarterly Financing History 2010-2015, CB Insights (2016), https://cbi-blog.s3.amazonaws.com/blog/wp-content/uploads/2016/02/AI_quarterly_finance_20160203.jpg, archived at https://perma.cc/7W29-26UG.

[124] See Christine Magee, The Jury is Out on Legal Startups, TechCrunch, Aug. 5 2014 http://techcrunch.com/2014/08/05/the-jury-is-out-on-legal-startups/, archived at https://perma.cc/Y95J-C2U9.

[125] See Raymond H. Brescia, et al. Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice, 78 Albany L. Rev. 553, 553-55 (2014); see generally, Joan C. Williams, Aaron Platt & Jessica Lee, Disruptive Innovation: New Models of Legal Practice, at 2-3 (2015) http://ssrn.com/abstract=2601133, archived at https://perma.cc/4SVK-YMFX (explaining the impact of new business models and technology on legal access).

[126] See Julius Stone, Legal System and Lawyers’ Reasonings 37-41 (1964).

[127] But see, Jonathan Smithers, President of the Law Society, Speech at the Union Internationale des Avocats (UIA) Conference: Lawyers Replaced by Robots: Will Artificial Intelligence Replace the Judgement and Independence of Lawyers? (Oct. 30, 2015) http://www.lawsociety.org.uk/news/speeches/lawyers-replaced-by-robots-artificial-intelligence-replace-judgment/, archived at https://perma.cc/EV4A-RYBV.

[128] See Ian Lopez, Can AI Replace Lawyers?, Law.Com, Apr. 8, 2016, http://www.law.com/sites/articles/2016/04/08/can-ai-replace-lawyers-vanderbilt-law-event-to-address-legal-machines/?slreturn=20160414054949, archived at https://perma.cc/UY4Q-U8FY.

[129]See Machine Learning: What it is and Why it Matters, SAS Institute, http://www.sas.com/en_us/insights/analytics/machine-learning.html, archived at https://perma.cc/X5VD-4WPW (last visited Sept. 26, 2016).

[130] See id.

[131] See id.

[132] See Prakash M. Nadkarni, Lucila Ohno-Machado & Wendy W. Chapman, Natural Language Processing: An Introduction, 18 J. Am. Med. Informatics Ass’n. 544, 544 (2011), http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168328/, archived at https://perma.cc/3D53-TFM4.

[133] See id.

[134] See id. at 545-46.

[135] See Elizabeth D. Liddy, Natural Language Processing, Surface (Syracuse Univ. Research Facility and Collaborative Env’t) (2001) http://surface.syr.edu/cgi/viewcontent.cgi?article=1043&context=istpub, archived at https://perma.cc/MTC7-HHYK.

[136] See Nadkarni et al., supra note 132, at 544-45; see Steve Lohr, Aiming to Learn as We Do, a Machine Teaches Itself, N.Y. Times, Oct. 4, 2010, http://www.nytimes.com/2010/10/05/science/05compute.html?hpw=&pagewanted=all&_r=0, archived at https://perma.cc/KNP8-KWPE.

[137] See Nadkarni et al., supra note 132, at 549.

[138] See Big Data: What It is and Why It Matters, SAS Institute, http://www.sas.com/en_us/insights/big-data/what-is-big-data.html, archived at https://perma.cc/G3N7-566N (last visited Sept. 26, 2016).

[139] See Martin Hilbert & Priscila Lopez, The World’s Technological Capacity to Store, Communicate, and Compute Information: Tracking the Global Capacity of 60 Analog and Digital Technologies During the Period from 1986 to 2007, MartinHilbert.Net, Apr. 1, 2011, http://www.martinhilbert.net/WorldInfoCapacity.html/, archived at https://perma.cc/D5MK-CF5L (last visited Sept. 26, 2016).

[140] See SAS Institute, supra note 138.

[141] See Bernard Marr, How Big Data is Disrupting Law Firms and The Legal Profession, Forbes (Jan. 20, 2016, 2:31 AM), http://www.forbes.com/sites/bernardmarr/2016/01/20/how-big-data-is-disrupting-law-firms-and-the-legal-profession/#57a63cf35ed6, archived at https://perma.cc/9WLW-7EZV.

[142] See SAS Institute, supra note 138.

[143] See Why Artificial Intelligence is Enjoying a Renaissance, The Economist (July 15, 2016, 4:26) http://www.economist.com/blogs/economist-explains/2016/07/economist-explains-11, archived at https://perma.cc/J63S-RGKH.

[144] See id.

[145] See Bruce G. Buchanan & Thomas E. Headrick, Some Speculation About Artificial Intelligence and Legal Reasoning, 23 Stan. L. Rev. 40, 40-41 (1970).

[146] See id. at 40.

[147] See L. Thorne McCarty, The Taxman Project: Towards a Cognitive Theory of Legal Argument, in Computer Science & Law: An Advanced Course 23, 23 (Brian Niblett ed., 1980).

[148] See id.

[149] See Olaf Mw, IBM Debating Technologies, YouTube (May 6, 2014), https://www.youtube.com/watch?v=7g59PJxbGhY, archived at https://perma.cc/L3MF-FJYV (excerpt from the 2014 Milken Institute).

[150] See IBM Watson, IBM Watson: How it Works, YouTube (Oct. 7, 2014), https://www.youtube.com/watch?v=_Xcmh1LQB9I, archived at https://perma.cc/68XU-J3D6.

[151] See Ruty Rinott et al., Show Me Your Evidence – An Automatic Method for Context Dependent Evidence Detection, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing 440, 440 (2015).

[152] ROSS Intelligence, http://www.rossintelligence.com/, archived at https://perma.cc/8Q63-XBAQ (last visited Sep. 26, 2016)[hereinafter ROSS Intelligence].

[153] ROSS Intelligence, https://wefunder.me/ross, archived at https://perma.cc/ZBG4-6JBX. “What they do: ROSS is an A.I. lawyer built on top of Watson, IBM’s cognitive computer, that provides cited legal answers instantly. Ross works much like Siri. With ROSS, lawyers ask a simple question the system sifts through its database of legal documents and spits out an answer paired with a confidence rating. Why it’s a big deal: Legal research is time-consuming and expensive. It erodes law firms’ profits and prices clients of [sic] out of services — Law firms spend $9.6 billion on research annually. Up until now the current research databases have relied heavily on the flawed system of keyword search. With Ross it’s as easy as asking a question. Ross has the potential to save both lawyers and clients billions every year. If they succeed, Ross will be the first ever artificial [sic] intelligence research and indexing software.”

[154] ROSS Intelligence, supra note 152. 

[155] See id.; see Karen Turner, Meet ‘Ross,’ the Newly Hired Legal Robot, Wash. Post, May, 16, 2016, https://www.washingtonpost.com/news/innovations/wp/2016/05/16/meet-ross-the-newly-hired-legal-robot//, archived at https://perma.cc/J2US-RP9M.

[156] See id.; ROSS Intelligence, supra note 152.

[157] ModusP, http://modusp.com/, archived at https://perma.cc/7BSM-54AT (last visited Sep. 26, 2016).

[158] See id.

[159] Lex Machina: a LexisNexis Company, https://lexmachina.com/, archived at https://perma.cc/QLF2-SM8V (last visited Sept. 26, 2016).

[160] See John R. Allison et al., Understanding the Realities of Modern Patent Litigation, 92 Tex. L. Rev. 1769, 1772-73 (2014).

[161] See id.

[162] See id.

[163] See id. at 1773.

[164] See Product, Modria, http://www.modria.com/product/, archived at https://perma.cc/K5AE-DS9L (last visited Sept. 26, 2016).

[165] See id. 

[166] Id.

[167] See id.

[168] See Ben Barton, Modria and the Future of Dispute Resolution, Bloomberg Law, Oct 1, 2015, https://bol.bna.com/modria-and-the-future-of-dispute-resolution/, archived at https://perma.cc/9J3D-N5UU.

[169] See Solutions, Premonition, http://premonition.ai/law/, archived at https://perma.cc/MXW5-B7RR (last visited Sept. 26, 2016).

[170] See id.

[171] See How It Helps, Beagle, http://beagle.ai/, archived at https://perma.cc/WY6X-CGV8 (last visited Sept. 26, 2016).

[172] See id.

[173] See id.  

[174] See Legal Robot, http://www.legalrobot.com, archived at https://perma.cc/9NJE-JZ6J (last visited Sep. 22, 2016).

[175] See id.  

[176] See id. 

[177] See id.

[178] See Mohammad Raihanul Islam et al., Could Antonin Scalia be replaced by an AI? Researchers reveal system that can already predict how Supreme Court justices will vote, Daily Mail (Mar.11, 2016), http://www.dailymail.co.uk/sciencetech/article-3488508/Could-Antonin-Scalia-replaced-AI-Researchers-reveal-smart-predict-justices-vote.html, archived at https://perma.cc/3YWP-8SQ3.

[179] See Ephraim Nissan, Digital Technologies and Artificial Intelligence’s Present and Foreseeable Impact on Lawyering, Judging, Policing and Law Enforcement, AI & SOC’Y (Oct. 14, 2015), http://link.springer.com/article/10.1007/s00146-015-0596-5/fulltext.html, archived at https://perma.cc/ZJ6G-YRSE.

[180] See, About Us, Modria, http://modria.com/about-us/, archived at https://perma.cc/2PHR-RLH6 (last visited Sep. 22, 2016).

[181] See Peter Reilly, Mindfulness, Emotions, and Ethics in Law and Dispute Resolution: Mindfulness, Emotions, and Mental Models: Theory that Leads to More Effective Dispute Resolution, 10 Nev. L.J. 433, 438, 447 (2010).

[182] See Frequently Asked Questions, Modria, http://modria.com/faq/, archived at https://perma.cc/9JEM-FYKC (last visited Sep. 22, 2016).

[183] See Mark Wilson, The Latest in ‘Technology Will Make Lawyers Obsolete!‘, Findlaw (Jan. 6, 2015, 11:39 AM), http://blogs.findlaw.com/technologist/2015/01/the-latest-in-technology-will-make-lawyers-obsolete.html#sthash.nkz8BvRE.dpuf, archived at https://perma.cc/GX98-4LY7.

[184] See id.

[185] See Dominic Carman, ‘We’re not even at the fear stage’ – Richard Susskind on a very different future for the legal profession, LegalWeek (Nov. 16, 2015), http://www.legalweek.com/sites/legalweek/2015/11/16/were-not-even-at-the-fear-stage-richard-susskind-on-a-very-different-future-for-the-legal-profession/, archived at https://perma.cc/S3RA-MSVU.

[186] See Jane Croft, Legal firms unleash office automatons, Fin. Times (May 16, 2016), https://www.ft.com/content/19807d3e-1765-11e6-9d98-00386a18e39d, archived at https://perma.cc/5E6L-7E7K.

[187] See id.

[188] See Frank A. Pasquale & Glyn Cashwell, Four Futures of Legal Automation, 63 UCLA L. Rev Discourse 26, 28 (2015); see also, David Kravets, Law Firm Bosses Envision Watson-Type Computers Replacing Young Lawyers, Ars Technica (Sept. 26, 2015), http://arstechnica.com/tech-policy/2015/10/law-firm-bosses-envision-watson-type-computers-replacing-young-lawyers/, archived at https://perma.cc/J3TN-64R3 (discussing the possibility of IBM Watson-like computers replacing lawyers and paralegals within the next ten years).

[189] See Erik Sherman, ‘Highly Creative’ Professionals Won’t Lose their Jobs to Robots, Study Finds, Fortune (Apr. 22, 2015), http://fortune.com/2015/04/22/robots-white-collar-ai, archived at https://perma.cc/GH8F-7YEU.

[190] See Jacob Gershman, Could Robots Replace Jurors?, Wall St. J. L. blog (Mar. 6, 2013, 1:30 PM), http://blogs.wsj.com/law/2013/03/06/could-robots-replace-jurors/, archived at https://perma.cc/5LT5-JBAP.

[191] See Anthony D’Amato, Can/Should Computers Replace Judges?, 11 Ga. L. Rev. 1277, 1280–81 (1977).

[192] See id. at 1292.

[193] See Michael Horm, Disruption Looms For Law Schools, Forbes (Mar. 17, 2016, 8:23 AM), http://www.forbes.com/sites/michaelhorn/2016/03/17/disruption-looms-for-law-schools/#6f77e6002708, archived at https://perma.cc/JY9M-FJ53.

[194] See id.

[195] IBM Watson computer is an example of a machine with strong computation skills, represented in hardware, software and connectivity. See What is Watson?, IBM, http://www.ibm.com/watson/what-is-watson.html, archived at https://perma.cc/8WCK-X3G7 (last visited Oct. 31, 2016).

[196] Thomas S. Clay & Eric A. Seeger, 2015 Law Firms in Transition: An Altman Weil Flash Survey, Altman Weil, 55 (2015), http://www.altmanweil.com/index.cfm/fa/r.resource_detail/oid/1c789ef2-5cff-463a-863a-2248d23882a7/resources/Law_Firms_in_Transition_2015_An_Altman_Weil_Flash_Survey.cfm, archived at https://perma.cc/6YRC-BFBP.

[197] See id. at 82. 

[198] See id.

[199] See id. at 83.

[200] See Richard Susskind, Tomorrow’s Lawyers: An Introduction To Your Future 8 (2013).

[201] See id.

[202] See McGinnis & Pearce, supra note 114, at 3046.

[203] See Relativity, https://www.kcura.com/relativity/, archived at https://perma.cc/5N67-ERGV (last visited Nov. 7, 2016).

[204] See Overview, Modus, www.discovermodus.com/overview/, archived at https://perma.cc/8VRG-TRN6 (last visited Oct. 31, 2016).

[205] See Who We Are, OpenText, http://www.recommind.com/products/ediscovery-review-analysis, archived at https://perma.cc/3D9A-MBTW (last visited Oct. 31, 2016).

[206] See kCura, http://contentanalyst.com/, archived at https://perma.cc/E5NC-BWG4 (last visited Oct. 31, 2016).

[207] See Gordon V. Cormack & Maura R. Grossman, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (2014).

[208] See Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 193 (S.D.N.Y. 2012) ("This Opinion appears to be the first in which a Court has approved of the use of computer-assisted review. That does not mean computer-assisted review must be used in all cases, or that the exact ESI protocol approved here will be appropriate in all future cases that utilize computer-assisted review. . . What the Bar should take away from this Opinion is that computer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review."); see also Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 126 (S.D.N.Y. 2015) ("This judicial opinion now recognizes that computer-assisted review [i.e., TAR][Technology Assisted Review] is an acceptable way to search for relevant ESI in appropriate cases."); see also Dynamo Holdings Ltd. P'ship v. Comm'r, 143 T.C. 183, 190 (2014) ("We find a potential happy medium in petitioners' proposed use of predictive coding. Predictive coding is an expedited and efficient form of computer-assisted review that allows parties in litigation to avoid the time and costs associated with the traditional, manual review of large volumes of documents. Through the coding of a relatively small sample of documents, computers can predict the relevance of documents to a discovery request and then identify which documents are and are not responsive."); see also id. at 191–92 ("Respondent asserts that predictive coding should not be used in these cases because it is an 'unproven technology.' We disagree. Although predictive coding is a relatively new technique, and a technique that has yet to be sanctioned (let alone mentioned) by this Court in a published Opinion, the understanding of e-discovery and electronic media has advanced significantly in the last few years, thus making predictive coding more acceptable in the technology industry than it may have previously been. In fact, we understand that the technology industry now considers predictive coding to be widely accepted for limiting e-discovery to relevant documents and effecting discovery of ESI without an undue burden.").

[209] See Lexis, www.lexis.com, archived at https://perma.cc/L3T7-PTFG (last visited Oct. 31, 2016).

[210] See Westlaw, www.westlaw.com, archived at https://perma.cc/EU3U-6B6Q (last visited Oct. 31, 2016).

[211] See Mathieu d’Aquin & Enrico Motta, Watson, More than A Semantic Web Search Engine. 2 Semantic Web 55 (2011), http://www.semantic-web-journal.net/sites/default/files/swj96_1.pdf, archived at https://perma.cc/NV5S-NNV9.

[212] See id.

[213] See ROSS Intelligence, supra note 152.

[214] See, e.g., Anthony Sills, ROSS and Watson Tackle the Law, IBM Watson Blog (Jan. 14, 2016), https://www.ibm.com/blogs/watson/2016/01/ross-and-watson-tackle-the-law/, archived at https://perma.cc/J4WZ-353U (“The ROSS application works by allowing lawyers to research by asking questions in natural language, just as they would with each other. Because it’s built upon a cognitive computing system, ROSS is able to sift through over a billion text documents a second and return the exact passage the user needs. Gone are the days of manually poring through endless Internet and database search result. . . Not only can ROSS sort through more than a billion text documents each second, it also learns from feedback and gets smarter over time. To put it another way, ROSS and Watson are learning to understand the law, not just translate words and syntax into search results. That means ROSS will only become more valuable to its users over time, providing much of the heavy lifting that was delegated to all those unfortunate associates.”).

[215] See McGinnis & Pearce, supra note 114, at 3050.

[216] See, e.g., Jon G. Sutinen & Keith Kuperan, A Socio-Economic Theory of Regulatory Compliance, 26 Int'l J. Soc. Econ. 174, 174–75 (1999).

[217] See Neota Logic, http://www.neotalogic.com/, archived at https://perma.cc/LW4M-LRBE (“Applications created in Neota Logic are executed by the Reasoning Engine, which contains many integrated, hybrid reasoning methods. All reasoning methods are automatically integrated and prioritized.” Neota Logic claims to “[e]nsure compliance with regulations, policies, and procedures” and to “[m]eet changing requirements, rapidly and inexpensively.”)

[218] See ComplianceHR, http://compliancehr.com/, archived at https://perma.cc/GD3Q-SD6Z ("[ComplianceHR is] a revolutionary approach to employment law compliance designed by, and for, legal professionals. Our unique suite of intelligent, web-based compliance applications, covering all U.S. jurisdictions, combine the unparalleled experience and knowledge of Littler, the world's largest global employment law practice, with the power of Neota Logic's expert system software platform.").

[219] See Foley Global Risk Solutions, https://www.foley.com/grs/, archived at https://perma.cc/45EB-45EL (last visited Oct. 31, 2016).

[220] See McGinnis & Pearce, supra note 114, at 3050.

[221] See id. at 3052.

[222] See eBrevia, http://ebrevia.com/#overview/, archived at https://perma.cc/FP9X-CHXZ (last visited Oct. 31, 2016) (“eBrevia uses industry-leading artificial intelligence, including machine learning and natural language processing technology, developed at Columbia University to extract data from contracts, bringing unprecedented accuracy and speed to contract analysis, due diligence, and lease abstraction.”).

[223] See Legal Sifter, https://www.legalsifter.com/, archived at https://perma.cc/DZP4-DAZN (last visited Oct. 31, 2016).

[224] See McGinnis & Pearce, supra note 114 at 3046.

[225] See Wim Voermans, Lex ex Machina: Using Computertechnology for Legislative Drafting, 5 Tilburg Foreign L. Rev. 69, 69 (1996).

[226] See Lyria Bennett Moses & Janet Chan, Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools, 37 U. New South Wales L. J. 643, 644 (2014).

[227] See Lexpredict, https://lexpredict.com/, archived at https://perma.cc/LBD3-X5K7 (last visited Sept. 21, 2016).

[228] See Casey Sullivan, AIG to Launch Data-Driven Legal Ops Business in 2016, Bloomberg Law (Oct. 20, 2015), https://bol.bna.com/aig-to-launch-data-driven-legal-ops-business-in-2016/, archived at https://perma.cc/9TU7-Q23A.

[229] See, e.g., Lex Machina, https://lexmachina.com/legal-analytics/, archived at https://perma.cc/G5J4-Z53Q (last visited Oct. 31, 2016) (illustrating the levels of specificity described above, such as a particular judge’s likelihood of ruling on a specific motion).

[230] See McGinnis & Pearce, supra note 114 at 3046.

[231] See Deborah L. Rhode, Access to Justice, 69 Fordham L. Rev. 1785, 1785 (2001).

[232] See Susskind, supra note 200, at 3.

[233] See Raymond T. Brescia, What We Know and Need to Know About Disruptive Innovation, 67 S. Carolina L. Rev. 203, 206 (2016).

[234] See id.

[235] See id. at 213.

[236] See id. at 222.

[237] See Liz Stinson, This Tool Makes it Stupid Simple to Turn Data into Charts, Wired (Apr. 8, 2016, 2:15 PM), https://www.wired.com/2016/04/tool-makes-turning-data-charts-stupid-simple/, archived at https://perma.cc/8LKT-YB86.

[238] Unattributed.

[239] See, e.g., Golan v. Holder, 132 S. Ct. 873, 890 (2012) (noting how copyright protection is designed to be a protection for fair use); see Stewart v. Abend, 495 U.S. 207, 236 (1990) (citations omitted) (noting how the fair use doctrine "permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster"). See generally Daniel P. Fernandez et al., Copyright Infringement and the Fair Use Defense: Navigating the Legal Maze, 27 U. Fla. J.L. & Pub. Pol'y 135, 137 (2016) (analyzing the issues that are presented when dealing with copyrighted materials within the scope of the fair use defense); see Joseph P. Liu, Fair Use, Notice Failure, and the Limits of Copyright as Property, 96 B.U. L. Rev. 833, 834 (2016) (identifying and discussing the relationship between the fair use doctrine and notice failure); see Hannibal Travis, Free Speech Institutions and Fair Use: A New Agenda for Copyright Reform, 33 Cardozo Arts & Ent. L.J. 673, 677 (2015) (exploring the idea that ongoing issues in the area of copyrights are directly and negatively affecting free speech).

[240] See Stanford Copyright and Fair Use Center, http://fairuse.stanford.edu/, archived at https://perma.cc/UM3X-399J (last visited Sept. 23, 2016).

[241] See Fair Use Checklist, Cornell University, http://copyright.cornell.edu/policies/docs/Fair_Use_Checklist.pdf, archived at https://perma.cc/FTV4-3LZK (last visited Oct. 31, 2016).

[242] See Copyright Act of 1976, Pub. L. No. 94-553 (1976) (codified at 17 U.S.C. §§ 101–810 (2016)).

[243] See 17 U.S.C. § 102 (2016).

[244] See 17 U.S.C. § 104 (2016).

[245] See 17 U.S.C. § 104(a) (2016).

[246] See 17 U.S.C. § 104(b) (2016).

[247] See 17 U.S.C. § 106 (2016).

[248] See 17 U.S.C. § 108 (2016).

[249] See 17 U.S.C. § 109 (2016).

[250] See 17 U.S.C. § 107 (2016).

[251] Harper & Row, Publishers, Inc. v. Nation Enterprises, 471 U.S. 539, 549 (1985).

[252] See, e.g., Folsom v. Marsh, 9 F. Cas. 342, 344-45 (C.C.D. Mass. 1841); see also Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 577 (1994) (noting that "Congress meant § 107 'to restate the present judicial doctrine of fair use, not to change, narrow, or enlarge it in any way' and intended that courts continue the common-law tradition of fair use adjudication.").

[253] 17 U.S.C. § 107 (2016).

[254] See Campbell, 510 U.S. at 577-78.

[255] Id. at 578.

[256] We added further categories because some sentences do not relate to any distinct factor and instead serve as 'negative' language from which the computer can distinguish relevant data from irrelevant data.

[257] See Nicole Bradick, All Rise: The Era of Legal Startups is Now in Session, Venture Beat (Apr. 13, 2014, 8:32 AM), http://venturebeat.com/2014/04/13/all-rise-the-era-of-legal-startups-is-now-in-session/, archived at https://perma.cc/YE29-7K6L.

Airbus Flying Car Prototype Announced: How Will the Law Adapt?


By: Will MacIlwaine

In 2016, Urban Air Mobility, a division of Airbus Group, began looking into the possibility of self-flying vehicles.[1] On January 16, Airbus Chief Executive Officer Tom Enders announced that the company plans to test a prototype of a self-flying taxi for a single passenger by the end of 2017.[2] The company’s flying taxi system will be called CityAirbus, and customers will be able to book a taxi using a smartphone device.[3]

While Airbus plans to have a taxi prototype ready by the end of this year, it also hopes to have models of its flying vehicle for sale as early as 2020.[4] The benefits of flying vehicles seem abundant; two obvious ones are avoiding congested roadways and potentially faster travel times. Aside from the sheer convenience of a flying car, Mr. Enders believes that a product such as his company's prototype could decrease costs for city infrastructure planners, as flying cars would not travel on roads or bridges that are often costly to maintain and repair.[5] Further, air pollution could be reduced significantly in a move toward flying vehicles, as Airbus is committed to making its flying vehicles fully electric.[6]

As intriguing as this idea may seem, there are certainly issues that will need to be addressed, as well as potential legal ramifications that could arise through the introduction of this product. Airbus believes the biggest task its team will face is making its CityAirbus taxi fly on its own, without a pilot.[7] Tesla has introduced a similar autopilot feature for its Tesla Model S automobile, but has faced criticism as reports of accidents have surfaced in the past year. Enders’ team faces an even taller task: ensuring that its autopilot feature is successful in the air.

There are a variety of potentially disastrous lawsuits that the CityAirbus technology might invite. For one, if two CityAirbus taxis crash into each other, how is liability determined? The passengers in the flying vehicles would presumably not be liable, as the passenger is not the one operating the self-flying car; Airbus would almost certainly be legally responsible for these accidents. This liability could extend further, encompassing situations in which Airbus vehicles malfunction and damage buildings or, worse, injure the passengers of the flying cars.

Regarding the risk of injury while using a CityAirbus taxi, it is likely that passengers would be given extensive warnings about the dangers and risks of using the vehicles. If the user sees these warnings and understands the dangers inherent in flying cars, yet still voluntarily decides to ride in the vehicle, wouldn’t this amount to implied assumption of risk and bar any negligence claims by the passenger against Airbus?

Further, a new legal framework would need to be developed for flying cars. Would flying cars have to abide by speed limits? Would owners of these vehicles who purchase them in 2020 have to obtain a “flying license,” even though the vehicle is self-operated? Would flying cars need insurance just like ordinary cars? Would the federal government regulate all of these things, or would the states be responsible for creating guidelines for flying cars?[8]

These are not the only legal questions surrounding flying vehicles. Would there be restricted areas where flying cars could not travel, such as around airports? If so, how would these regulations be enforced when law enforcement officials are busy fulfilling their duties on the ground? Cities and states might be required to purchase similar flying vehicles so that their law enforcement officers could travel in them to enforce these regulations in the air. Wouldn't this offset, and likely exceed, the cost savings for city infrastructure planners that Mr. Enders predicted? While only hypothetical questions today, these legal issues will likely arise if the Airbus team succeeds in introducing its prototype by the end of this year.

Flying cars could certainly offer obvious advantages, but it seems that Mr. Enders and his team have many questions to consider in the development of CityAirbus if the company is to ensure that its potentially historic technological advancement does not turn into a legal nightmare.

 

 

[1] See Forget Self-Driving Cars: Airbus Will Test a Prototype Flying-Taxi by the End of This Year, Reuters, Jan. 16, 2017, http://www.dailymail.co.uk/sciencetech/article-4124412/Airbus-CEO-sees-flying-car-prototype-ready-end-year.html.

[2] See id.

[3] See id.

[4] See id.

[5] See Victoria Bryan, Airbus CEO Sees ‘Flying Car’ Prototype Ready by End of Year, Reuters, Jan. 16, 2017, http://www.reuters.com/article/us-airbus-group-tech-idUSKBN1501DM.

[6] See Jay Bennett, Airbus Wants to Test its Flying Car Prototype This Year, Popular Mechanics, Jan. 16, 2017, http://www.popularmechanics.com/flight/a24780/airbus-test-its-flying-car-prototype-2017/.

[7] See Forget Self-Driving Cars, supra note 1.

[8] See Cory Smith, Soaring to New Heights: Flying Cars and the Law, Michigan Telecomm. & Tech. L. Rev., Oct. 22, 2015, http://mttlr.org/2015/10/22/soaring-to-new-heights-flying-cars-and-the-law/.

Image Source: http://i2.cdn.turner.com/money/dam/assets/161020184223-airbus-flying-car-4-780×439.jpg.
