Richmond Journal of Law and Technology

The first exclusively online law review.

Resisting the Resistance: Resisting Copyright and Promoting Alternatives


Cite as: Giancarlo F. Frosio, Resisting the Resistance: Resisting Copyright and Promoting Alternatives, 23 Rich. J.L. & Tech. 4 (2017), http://jolt.richmond.edu/index.php/volume23_issue2_frosio/.

 Giancarlo F. Frosio*

Abstract

This article discusses the resistance to the Digital Revolution and the emergence of a social movement “resisting the resistance.” Mass empowerment has political implications that may provoke reactionary counteractions. Ultimately—as I have discussed elsewhere—resistance to the Digital Revolution can be seen as a response to Baudrillard’s call to a return to prodigality beyond the structural scarcity of the capitalistic market economy. In Baudrillard’s terms, by increasingly commodifying knowledge and expanding copyright protection, we are taming limitless power with artificial scarcity to keep in place a dialectic of penury and unlimited need. In this paper, I will focus on certain global movements that resist copyright expansion, such as Creative Commons, the open access movement, the Pirate Party, the A2K movement, and cultural environmentalism. A nuanced discussion of these campaigns must account for the irrelevance of copyright in the public mind, the emergence of new economics of digital content distribution on the Internet, the idea of the death of copyright, and the demise of traditional gatekeepers. Scholarly and market alternatives to traditional copyright merit consideration here as well. I will conclude my review of this movement “resisting the resistance” to the Digital Revolution by sketching out a roadmap for copyright reform that builds upon its vision.

I. Introduction

[1]       In The Creative Destruction of Copyrights, Raymond Ku applied, for the first time, the wind of creative destruction—made famous by Joseph Schumpeter—to the Digital Revolution.[1] According to Schumpeter, the “fundamental impulse that sets and keeps the capitalist engine in motion” is the process of creative destruction which “incessantly revolutionizes the economic structure by incessantly destroying the old one, incessantly creating a new one.”[2] Traditional business models’ resistance to technological innovation unleashed the wind of creative destruction. Today, we are in the midst of a war over the future of our cultural and information policies. The preamble of the Washington Declaration on Intellectual Property and the Public Interest explains the terms of this struggle:

[t]he last 25 years have seen an unprecedented expansion of the concentrated legal authority exercised by intellectual property rights holders. This expansion has been driven by governments in the developed world and by international organizations that have adopted the maximization of intellectual property control as a fundamental policy tenet. Increasingly, this vision has been exported to the rest of the world. Over the same period, broad coalitions of civil society groups and developing country governments have emerged to promote more balanced approaches to intellectual property protection. These coalitions have supported new initiatives to promote innovation and creativity, taking advantage of the opportunities offered by new technologies. So far, however, neither the substantial risks of intellectual property maximalism, nor the benefits of more open approaches, are adequately understood by most policy makers or citizens. This must change if the notion of a public interest distinct from the dominant private interest is to be maintained.[3]

[2]       The underpinnings of this confrontation extend to a broader discussion over the cultural and economic tenets of our capitalistic society, freedom of expression and democratization.

II. Resistance and Resisting the Resistance

[3]       Since the origins of the open source movement, mass collaboration has been envisioned as an instrument to create a networked democracy.[4] The political implications of mass collaboration in terms of mass empowerment are relevant to the ideas of freedom and equality. User-generated mass collaboration has promoted decentralization and autonomy in our system of creative production.[5] Internet mass empowerment might spur the democratization of content production, from which political democratization might follow.[6] As Clay Shirky described, open networks reverse the usual sequence of “filter, then publish” by making it easy to “publish, then filter.”[7] Minimizing cultural filtering empowers sub-cultural creativity and thus cultural distinctiveness and identity politics.[8]

[4]       Mass empowerment, however, triggers reactionary effects. Change has always unleashed a fierce resistance from the established power, both public and private. It did so with the Printing Revolution.[9] It does now with the Internet Revolution. For public power, the emergence of limitless access, knowledge, and therefore freedom, is a destabilizing force that causes governments to face increasing accountability and therefore relinquish a share of their power.[10] Through mass empowerment, the Internet, and global access to knowledge, private power sees the dreadful prospect of having to switch from a top-down to a bottom-up paradigm of consumption.[11] Much to the dismay of the corporate sector, the Internet presents serious obstacles for the management of consumer behavior.[12] As Patry noted, “‘[c]opyright owners’ extreme reaction to the Internet is based on the role of the Internet in breaking the vertical monopolization business model long favored by the copyright industries.”[13] In combatting this breakdown, the copyright industries have waged “…[t]he Copyright Wars [which] are an effort to accomplish the impossible: to stop time, to stop innovation, to stop new ways of learning and new ways of creating.”[14] In particular, the steady enlargement of copyright becomes a tool used by reactionary forces willing to counter the Digital Revolution.[15] From a market standpoint, stronger rights allow the private sector to enforce a top-down consumer system.[16] The emphasis of copyright protection on a permission culture favors a unidirectional market, where the public is only a consumer, passively engaged to pay per use or else stop using copyrighted works.[17] From a political standpoint, tight control on the reuse of information prevents mainstream culture from being challenged by alternative culture.[18] Copyright law empowers mainstream culture and marginalizes minority alternative counter-culture, thereby slowing any process leading to a paradigm shift.[19]

[5]       From a broader socio-economic perspective, there is also a more systemic explanation for the reaction to the emergence of the networked information society. Baudrillard’s arguments might explain the reaction to the Digital Revolution—driving cultural goods’ marginal cost of distribution and reproduction close to zero.[20] Copyright law might become an instrument to protect the capitalistic notion of consumption and perpetuate a system of artificial scarcity. As the Digital Revolution turns consumers into users, and then creators, it defies the very notion of consumer society. It turns the capitalistic consumer economy into a networked information economy, which is characterized by a sharing and gift economy. So, for the socio-economic consumerist paradigm not to succumb, the limitless power of peer and mass collaboration must be tamed by the artificial scarcity created by copyright law. Ultimately, resistance to the Digital Revolution can be seen as a response to Baudrillard’s call for a return to prodigality beyond the structural scarcity of the capitalistic market economy.[21] The Internet and networked peer collaboration may represent a return to “collective ‘improvidence’ and ‘prodigality’” and their related “real affluence.”[22] New Internet dynamics of exchange and creativity might answer in the affirmative Baudrillard’s question of whether we will “…return, one day, beyond the market economy, to prodigality[.]”[23] In Baudrillard’s terms, by increasingly commodifying knowledge and expanding copyright protection, we are taming limitless power with artificial scarcity to keep in place a “dialectic of penury” and “unlimited need.”[24] Therefore, the reaction to the Internet revolution may be construed as the gatekeepers’ attempt to keep their privileges in place as they thrive within a paradigm that builds the need for production—and overproduction—on an obsession with artificial scarcity.

[6]       In the past few years, a global movement has grown out of the understanding that the digital networked environment must be protected from external manipulations intended to stop exchange and reinstate scarcity. In this sense, resistance to copyright over-expansion can be understood as a cultural movement “resisting the resistance” to the Digital Revolution.[25] Francis Gurry, Director General of the World Intellectual Property Organization, offers a good explanation of these mechanics of resistance.

[7]       Gurry noted that:

…the central question of copyright policy…implies a series of balances: between availability, on the one hand, and control of the distribution of works as a means of extracting value, on the other hand; between consumers and producers; between the interests of society and those of the individual creator; and between the short-term gratification of immediate consumption and the long-term process of providing economic incentives that reward creativity and foster a dynamic culture. Digital technology and the Internet have had, and will continue to have, a radical impact on those balances. They have given a technological advantage to one side of the balance, the side of free availability, the consumer, social enjoyment and short-term gratification. History shows that it is an impossible task to reverse technological advantage and the change that it produces. Rather than resist it, we need to accept the inevitability of technological change and to seek an intelligent engagement with it. There is, in any case, no other choice—either the copyright system adapts to the natural advantage that has evolved or it will perish.[26]

[8]       In the dedication to the Expositiones in Summulas Petri Hispani—printed around 1490 in Lyons—the editor, Johann Trechsel, announced: “[i]n contrast to xylography, the new art of impression I am practi[c]ing ends the career of all the scribes. They have to do the binding of the books now.”[27] Similarly, in the digital era, distributors’ roles and functions might be redefined. One of the key lessons in the gradual shift in market power in the entertainment industry these days is that the power of the old gatekeepers is declining, even as the overall industry grows. The power, instead, has definitely moved directly to the content creators themselves. Creators no longer need to go through a very limited number of gatekeepers, who often provide deal terms that significantly limit the creator’s ability to make a living.[28]

[9]       Instead, “…a major new opportunity has opened up, not for gatekeepers, but for organizations that enable artists to do the different things that the former gatekeeper used to do—but while retaining much more control, as well as a more direct connection with fans.”[29] As discussed at length in another piece of mine,[30] multiple emerging organizations are enabling a direct discourse between artists and users (e.g., Kickstarter, TopSpin, or Bandcamp).[31] As a consequence, traditional cultural intermediaries might be forced to give up their Ancien Régime’s privileges, causing further resistance to change. In the words of Neelie Kroes, European Commission Vice-President for the Digital Agenda:

[a]ll revolutions reveal, in a new and less favourable light, the privileges of the gatekeepers of the “Ancien Régime.” It is no different in the case of the internet revolution, which is unveiling the unsustainable position of certain content gatekeepers and intermediaries. No historically entrenched position guarantees the survival of any cultural intermediary. Like it or not, content gatekeepers risk being sidelined if they do not adapt to the needs of both creators and consumers of cultural goods…Today our fragmented copyright system is ill-adapted to the real essence of art, which has no frontiers. Instead, that system has ended up giving a more prominent role to intermediaries than to artists. It irritates the public who often cannot access what artists want to offer and leaves a vacuum which is served by illegal content, depriving the artists of their well-deserved remuneration. And copyright enforcement is often entangled in sensitive questions about privacy, data protection or even net neutrality. It may suit some vested interests to avoid a debate, or to frame the debate on copyright in moralistic terms that merely demonise millions of citizens. But that is not a sustainable approach…My position is that we must look beyond national and corporatist self-interest to establish a new approach to copyright.[32]

III. Resisting Copyright (at Zero Marginal Cost) and Promoting Alternatives

[10]     In the aftermath of the legal battles targeting P2P platforms (such as ThePirateBay), the Pirate Party “emerge[d] [in Sweden] to contest elections on the basis of the abolition or radical reform of intellectual property, in general, and copyright, in particular. The platform of the Pirate Party proclaims that ‘[t]he monopoly for the copyright holder to exploit an aesthetic work commercially should be limited to five years after publication. A five years copyright term for commercial use is more than enough.’”[33] “Non-commercial use should be free from day one.”[34] The Pirate Party saw large successes at its first electoral appearances in both Sweden and Germany, and similar political groups have since formed in other countries.[35] The Pirate Party serves as an “extreme expression [of] the sentiment of distaste or disrespect for intellectual property on the Internet.”[36] However, even the Economist has argued that copyright should return to its roots because, as it stands, it may cause more harm than good, proving that the sentiment is widespread.[37] A recent Report from the Australian Government Productivity Commission widely criticized the present “copy(not)right” model, pointing to a number of critical issues:

…Australia’s copyright arrangements are weighed too heavily in favour of copyright owners, to the detriment of the long-term interests of both consumers and intermediate users. Unlike other IP rights, copyright makes no attempt to target those works where ‘free riding’ by users would undermine the incentives to create. Instead, copyright is overly broad; provides the same levels of protection to commercial and non-commercial works; and protects works with very low levels of creative input, works that are no longer being supplied to the market, and works where ownership can no longer be identified.[38]

[11]     Therefore, copyright law has fallen into a deep crisis of acceptance with respect to both users and creators.[39] Especially among new generations,[40] copyright tends to become irrelevant in the public mind, if not altogether opposed.[41] Sharing a common opinion, David Lange noted that the over-expansion of copyright entitlements lies at the root of this crisis of public acceptance:

…Raymond Nimmer has said that copyright cannot survive unless it is accorded widespread acquiescence by the citizenry. I think his insight is acutely perceptive and absolutely correct, for a reason that I also understand him to endorse: Never before has copyright so directly confronted individuals in their private lives. Copyright is omnipresent. But what has to be understood as well is that copyright is also correspondingly over-extended.[42]

[12]     Technological and cultural change played a central role in lowering the acceptance of an over-expansive copyright paradigm. Ubiquitous technology, cost minimization, and the emergence of fan authorship radically affect the traditional market failure that copyright is supposed to cure, both at the creation and distribution levels. The distributive power of the Internet instituted new economics of distribution for digital content.[43] With distribution and reproduction marginal costs close to zero, the need for third-party investment is potentially eliminated, or at least strongly reduced. In The Creative Destruction of Copyrights, Raymond Ku wonders whether a copyright monopoly at close to zero marginal cost is still a sustainable option.[44] Ku concludes that, absent the need for encouraging content distribution, the artificial scarcity and exclusive rights created by copyright cannot find any other social reason for existence.[45] When distributors’ rights are unbundled from creators’ rights, society can no longer support the protection of distributors’ rights.[46] Under these circumstances, copyright would serve no other social purpose than transferring wealth from the public to distributors.[47] Therefore, in Ku’s view, copyright in the digital environment is a meaningless burden for society and should be eliminated.[48] As radical as Ku’s position may be, if technological innovation has substantially reduced the costs of producing, reproducing, and distributing cultural artefacts, a strong case can be made against any position advocating further expansion of the copyright monopoly.

[13]     Reproduction and distribution cost minimization also affected the traditional discourse regarding the incentive to create.[49] Reductions in the production and distribution costs of original expressive works encourage non-professional authors to create.[50] Therefore, the number of authors for whom the lucre of copyright proves a necessary stimulus should drop. Additionally, low marginal costs empower authors to reach a broader audience.[51] If decentralized and non-professional authors increasingly satisfy market demand—because non-monetary incentives stimulate creation—a copyright monopoly will eventually prove superfluous, at least for these works.[52] With respect to creative works provided by decentralized and non-professional authors, the burdens of a copyright monopoly will exceed its benefits.[53]

[14]     This crisis propelled a cultural copyright resistance movement. Neelie Kroes stressed that copyright fundamentalism has prejudiced our capacity to explore new models in the digital age:

So new ideas which could benefit artists are killed before they can show their merit, dead on arrival. This needs to change…So that’s my answer: it’s not all about copyright. It is certainly important, but we need to stop obsessing about that. The life of an artist is tough: the crisis has made it tougher. Let’s get back to basics, and deliver a system of recognition and reward that puts artists and creators at its heart.[54]

[15]     The digital opportunity led many to denounce the obsolescence of the traditional copyright monopoly and to seek more radical reform. In 1994, John Perry Barlow’s manifesto laid out the necessity of re-thinking digitized intellectual property and radically noted that: “[i]n the absence of the old containers, almost everything we think we know about intellectual property is wrong.”[55] Nicholas Negroponte reinforced Barlow’s point by stating that “[c]opyright law is totally out of date…[i]t is a Gutenberg artifact…[s]ince it is a reactive process, it will have to break down completely before it is corrected.”[56] Recently, the Hargreaves report noted that archaic copyright laws “obstruct[] innovation and economic growth[.]”[57] In a message delivered to the G20 leaders, the President of Russia, Dmitry Medvedev, pointed out that “[t]he old principles of intellectual property protection established in a completely different technological context do not work any longer in an emerging environment, and, therefore, new conceptual arrangements are required for international regulation of intellectual activities on the Internet.”[58]

[16]     Many highlighted the necessity of re-shaping present copyright laws[59] or abolishing them altogether.[60] In particular, a growing copyright “abolitionism” emerged online in response to a worrying tendency to criminalize the younger generation and new models of online digital creativity, such as mash-up, fanfiction, or machinima.[61] The Committee on Intellectual Property Rights and the Emerging Information Infrastructure considered the notion that copying might not be an appropriate mechanism for achieving the goals of copyright in the digital age.[62] Among the inadequacies, the Committee highlighted that “in the digital world copying is such an essential action, so bound up with the way computers work, that control of copying provides, in the view of some, unexpectedly broad powers, considerably beyond those intended by the copyright law.”[63] Sharing is essential to emerging digital culture. Young generations digitize, share, rip, mix, burn, and share again as a basic form of human interaction. Increasingly, many social forces maintain that full recognition of a non-commercial right to share creative works should be the goal of modern policies for digital creativity. At the same time, criminalization of Internet users by cultural conglomerates is a source of social tension.[64] At the WIPO Global Meeting on Emerging Copyright Licensing Modalities: Facilitating Access to Culture in the Digital Age, Lessig called for an overhaul of the copyright system, warning that the current system would “never work on the internet” and that “[i]t’ll either cause people to stop creating or it’ll cause a revolution.”[65]

[17]     Resistance to copyright lies at the crossroads of academic investigation, civic involvement, and political activity. As Michael Strangelove argued in the Empire of Mind, the Internet set in motion an anti-capitalistic movement resistant to authoritarian forms of consumer capitalism and globalization.[66] This movement is “resisting the resistance” to change, resisting copyright, seeking access to knowledge, and promoting the public domain. Creative Commons (CC), the Free Software Foundation, and the Open Source movement[67] propelled the diffusion of viable market alternatives to traditional copyright management. The “power of open,” as Catherine Casserly and Joi Ito have termed Creative Commons, has spread quickly, with more than four hundred million CC-licensed works available on the Internet.[68] Again, mostly driven by scholarly efforts, projects like the Access to Knowledge (A2K) Movement, the Open Access Publishing Movement, and the Public Domain Project lead the resistance to copyright over-expansion by seeking to re-define the hierarchy of priorities embedded in the traditional politics of intellectual property.[69] Meanwhile, proposals for reform tackled the uneasy coexistence between copyright, digitization, and the networked information economy.[70] I will discuss these proposals first and later discuss the social movements resisting the resistance.

A.    Copyright Terms, Formalities and Registration Systems

[18]     As suggested by some scholars, a potential solution to the weaknesses of the current copyright regime is a setting in which published works are not copyrighted unless the authors comply with specific formalities. These formalities should be very simple, cheap, and non-discriminatory with respect to national versus foreign authors.[71]

[19]     The international community was persuaded to abolish most discriminatory hurdles in the analog world; similarly, the digital era may provide opportunities for creativity in adapting formalities.[72] The idea of a global online copyright registry for creative works is increasingly gaining momentum.[73] A carefully crafted registration system may enrich the public domain, enhance access and reuse, and avoid transaction costs burdening digital creativity and digitization projects.[74] Today, state-of-the-art technology enables the creation of global digital repositories that ensure the integrity of digital works, render filings user-friendly and inexpensive, and enable searches on the status of any creative work.[75] Registration could be a precondition for protection by providing the creators with full ownership rights, while, absent registration, the default level of protection would be limited to the moral right of attribution. Alternatively, if making global registration, rather than notice, a precondition for protection is considered too harsh a requirement, then registration might at least be required as a precondition of protection extensions.

[20]     In particular, registries and data collection should ease the orphan works problem.[76] Measures to improve the provision of rights management information range from encouraging digital content metadata tagging, to promoting the use of CC-like licenses, and encouraging the voluntary registration of rights ownership information in specifically designed databases.[77] Many projects aim at increasing the supply of rights management information to the public, merging unique sources of rights information, and establishing specific databases for orphan works. Notably, the EU-mandated ARROW project (Accessible Registries of Rights Information and Orphan Works) includes national libraries, publishers, writers’ organizations, and collective management organizations. It aspires to find ways of identifying rights holders, determining and clearing rights, and possibly confirming the public domain status of a work.[78]

[21]     Marco Ricolfi’s Copyright 2.0 proposal is a specific articulation of an alternative copyright default rule, coupled with the implementation of a formality and registration system.[79] Similar proposals have been made by other scholars, such as Lessig.[80] In Ricolfi’s Copyright 2.0, traditional copyright, or Copyright 1.0, is still available. In order to be enjoyed, Copyright 1.0 has to be claimed by the creator at the outset, for example by inserting a copyright notice before the first publication of a work.[81] In certain conditions, the Copyright 1.0 notice could also be added after the first publication, possibly during a short grace period.[82] The Copyright 1.0 protection given by the original notice is deemed withdrawn after a specified short period of time, unless an extension period is formally requested through an Internet based renewal and registration procedure, whose registration data would be accessible online.[83] If no notice is given, Copyright 2.0 applies, giving creators mainly one right: the right to attribution.[84]

B. Mandatory Exceptions and Diligent Search for Orphan Works and UGC

[22]     Neelie Kroes warns against the welfare loss caused when the immense cultural riches unveiled by digitization remain locked behind the intricacies of an outdated copyright model.[85]

Think of the treasures that are kept from the public because we can’t identify the right-holders of certain works of art. These “orphan works” are stuck in the digital darkness when they could be on digital display for future generations. It is time for this dysfunction to end.[86]

[23]     Institutional proposals in both Europe and the United States advocate the implementation of a diligent search system as a defense to copyright infringement. A report from the United States Copyright Office recommended that Congress enact legislation to limit liability for copyright infringement if the alleged infringer performed “a reasonably diligent search” before any use.[87] Additionally, the Copyright Office laid down several suggestions to promote privately-operated registries as a more efficient arrangement than government-operated registries. The Copyright Office’s recommendations were included in the Orphan Works Act of 2006, and again in the Orphan Works Act of 2008.[88] So far, neither bill has been adopted into law. The High Level Expert Group on the European Digital Libraries Initiative made similar recommendations:

Member States are encouraged to establish a mechanism to enable the use of such works for non-commercial and commercial purposes, against agreed terms and remuneration, when applicable, if diligent search in the country of origin prior to the use of the works has been performed in trying to identify the work and/or locate the rightholders…The mechanisms in the Member States need to fulfill prescribed criteria… the solution should be applicable to all kinds of works; a bona fide/good faith user needs to conduct a diligent search prior to the use of the work in the country of origin; best practices or guidelines specific to particular categories of works can be devised by stakeholders in different fields.[89]

[24]     The system should be based on reciprocity so that Member States will recognize solutions in other Member States that fulfill the prescribed criteria. As a result, materials that are lawful to use in one Member State would also be lawful to use in another. Partially endorsing these principles, a Directive on certain permitted uses of orphan works has recently been adopted at the EU level.[90]

[25]     In Europe, the most comprehensive proposal for a mandatory orphan works exception is outlined in a paper for the Gowers Review by the British Screen Advisory Committee (BSAC).[91] This proposal sets up a compensatory liability regime.[92] First, to trigger the exception, a person is required to have made ‘best endeavours’ to locate the copyright owner of a work.[93] ‘Best endeavours’ would be judged against the particular circumstances of each case. The work must also be marked as used under the exception to alert any potential rights owners.[94] If a rights owner emerges, he is entitled to claim a ‘reasonable royalty’ agreed upon by negotiation, rather than sue for infringement. If the parties cannot reach agreement, a third party steps in to establish the royalty amount. The terms of use of the formerly orphan work would need to be negotiated between the user and the rights owner, according to the traditional copyright rules. However, users should be allowed to continue using the work that has been integrated or transformed into a derivative work, contingent upon payment of a reasonable royalty and sufficient attribution. Slightly modified versions of the U.S. and European models have also been investigated. For example, Canada established a compulsory licensing system based on diligent searches to use orphan works.[95]

[26]     In addition to orphan works, user-generated content (UGC) is another massive phenomenon that struggles with present copyright law. Mandatory exceptions have been proposed as a solution for user-generated content, together with the use of informal copyright practices.[96] Proposals have been made for introducing an exception for transformative use in user-generated works.[97] Both specific and general exception clauses have been under discussion.[98] Canada introduced a specific exception to this effect, allowing the use of a protected work—which has been published or otherwise made available to the public—in the creation of a new work, if the use is done solely for non-commercial purposes and does not have substantial adverse effects on the potential market for the original work.[99] Likewise, European institutions and stakeholders have recently discussed specific exceptions for UGC, after sidelining proposals for micro-licensing arrangements.[100] In a narrower context, the U.S. Copyright Office rulemaking on the Digital Millennium Copyright Act (DMCA) anti-circumvention provisions recently introduced an exception for the use of movie clips for transformative, non-commercial works, bringing a breath of fresh air to the world of ‘vidding’.[101] Also, general fair use exception clauses, if properly construed, may prove effective in giving UGC creators some breathing space.[102] In particular, recent U.S. case law protects UGC creators from bogus DMCA takedown notices in cases of blatant misrepresentation of fair use defences by copyright holders. In Lenz v. Universal Music, the Ninth Circuit ruled that “the statute requires copyright holders to consider fair use before sending takedown notification.”[103] The Court also recognized the possible applicability of section 512(f) of the DMCA, which allows for the recognition of damages in cases of proven bad faith, which would occur if the copyright holder did not consider fair use or paid “lip service to the consideration of fair use by claiming it formed a good faith belief when there is evidence to the contrary.”[104]

C. Extended and Mandatory Collective Management

[27]     Extended Collective Licenses (ECL) are applied, in various forms, in Denmark, Finland, Norway, Sweden, and Iceland.[105] The ECL arrangement has become a tempting policy option in several jurisdictions, both to tackle the orphan works problem and the larger issue of file sharing in digital networks.[106] In particular, a recent draft directive would apply this collective management mechanism to the use of out-of-commerce works by cultural heritage institutions.[107]

[28]     The system combines the voluntary transfer of rights from rights holders to a collective society with the legal extension of the collective agreement to third parties who are not members of the collective society. However, to be extended to third parties of the same category, the collective society must represent a substantial number of rights holders.[108] In any event, the legislation in Nordic countries provides the rights holders with the option of claiming individual remuneration or opting out from the system.[109] Therefore, with the exception of the rights holders who opted out, the extended collective license automatically applies to all domestic and foreign rights owners, unknown or untraceable rights holders, and deceased rights holders, even where estates have yet to be arranged. With an extended collective licensing scheme in place, a user may obtain a license to use all the works included in a certain category, with the exception of the opted-out works. Re-users of existing works should have no legal concerns: all orphan works will be covered by the license, and opted-out works instantly cease to be orphan. If ECL is applied to legitimize file-sharing, collective management bodies will negotiate the license with users’ associations or internet service providers (ISPs). In exchange for the rights of reproduction and of making content available online, rights holders will be remunerated by the proceeds collected through the extended collective license. A related proposal would place the right to make available to the public under mandatory collective management.[110] According to this proposal, to enjoy the economic rights attached to the right of making available to the public, rights holders would be obligated to use collective management. As a consequence, the ISPs would pay a lump-sum fee or levy to the collective societies in exchange for the authorization to download and make available to users the collective society’s entire repertoire of managed works.[111] The money collected would then be redistributed to the rights holders.

[29]     Courts, however, have expressed hesitation in endorsing the ECL opt-out mechanism (as seen in the Google Books case).[112] A recent ECJ decision ruled against this arrangement while reviewing a French law that regulated the digital exploitation of out-of-print 20th century books.[113] This French law gave approved collecting societies the right to authorize the reproduction and digital representation of out-of-print books.[114] Meanwhile, the law provided authors—or their successors in title—with an opt-out mechanism subject to certain conditions. In Soulier, the ECJ declared the French law incompatible with European law,[115] which provides authors—not collecting societies—with the right to authorize the reproduction and communication to the public of their works.[116] The Soulier decision might have far-reaching effects for the EU directive proposal—and more generally for all national systems of extended collective licensing that might be incompatible with EU law. The successful implementation of the directive proposal might remain the sole option to keep ECL arrangements in place by redressing this judicial interpretation.

D. Alternative Compensation Systems or Cultural Flat Rate

[30]     As Volker Grassmuck noted, “the world is going flat(-rate).”[117] In search of alternative remuneration systems, researchers, activists, consumer organizations, artist groups, and policy makers have proposed to finance creativity on a flat-rate basis. In the past, levies on recording devices and media have been set up upon the acknowledgment that private copying cannot be prevented.[118] The same reasoning applies to the introduction of a legal permission to copy and make available copyrighted works for non-commercial purposes on the Internet.[119] Flat rate proposals favor a sharing ecology that is best suited to the networked information economy.[120] A recent study by the Institute of European Media Law has argued that this may be “no[thing] less than the logical consequence [of] the technical revolution [introduced] by the internet.”[121] The Communia study also described the minimum requirements for a cultural flat-rate as follows: “(i) a legal license permitting private individuals to exchange copyright works for non-commercial purposes; (ii) a levy, possibly collected by ISPs, flat, possibly differentiated by access speed; and (iii) a collective management, i.e. a mechanism for collecting the money and distributing it fairly.”[122]

[31]     Several flat-rate models have been proposed.[123] Some see the flat-rate payment by Internet subscribers as similar to private copying levies managed by collecting societies, while others want to put in place an entirely new reward system, giving the key role to Internet users themselves.[124] A non-commercial use levy permitting non-commercial file sharing of any digitized work was first proposed by Professor Neil Netanel.[125] Such a levy would be imposed on the sale of any consumer electronic devices used to copy, store, send or perform shared and downloaded files, but also on the sale of internet access and P2P software and services.[126] An ad hoc body would be in charge of determining the amount of the levy.[127] The proceeds would be distributed to copyright holders by taking into consideration the popularity of the works measured by tracking and monitoring technologies.[128] Users could freely copy, circulate, and make non-commercial use of any works that the rights holder has made available on the Internet. William Fisher followed up on Netanel with a more refined and comprehensive proposal.[129] Creators’ remuneration would still be collected through levies on media devices and Internet connection.[130] In Fisher’s system, however, a governmentally administered registrar for digital content, or alternatively a private organization, would be in charge of the management of creative works in the digital environment.[131] Digitized works would be registered with the Registrar and embedded with digital watermarks. Tracking technologies would measure the popularity of the works circulating online.[132] The Registrar would then redistribute the proceeds to the registered rights holders according to popularity. Philippe Aigrain proposed a “creative contribution” encompassing a global license to share published digital works in the form of ECL, or absent an agreement, of legal licensing.[133] Remuneration would be provided by a flat-rate paid by all Internet subscribers.[134] Half of the money collected would be used for the remuneration of works shared over the Internet—distributed according to their popularity.[135] Measurement of popularity would be based on a large panel of voluntary Internet users transmitting anonymous data on their usage to collective management societies.[136] The other half of the money collected would be devoted to funding the production of new works and the promotion of added-value intermediaries in the creative environment.[137] Another suggestion included among flat-rate models is Peter Sunde’s Flattr “micro-donations” scheme. An internet user would give between 2 and 100 euros per month and could then nominate works that they wish to reward or “flattr,” a play on the words “flatter” and “flat-rate.”[138] Finally, the “German and European Green Parties included in their policy agenda the promotion of a cultural flat rate to decriminalise P2P users, remunerate creativity and relieve the judicial system and the ISPs from mass-scale prosecution.”[139] The “Green Party’s proposal has been backed up by the mentioned EML study that found that a levy on Internet usage legalizing non-commercial online exchanges of creative works conforms with German and European copyright law, even though it requires changes in both.”[140]

IV. The Access to Knowledge (A2K) Movement

[32]     As Nelson Mandela once noted, “[e]liminating the distinction between the information rich and information poor is…critical to eliminating economic and other inequalities between North and South, and to improving the life of all humanity.”[141] “Access to learning and knowledge…[are] key elements towards the improvement of the situation of under-privileged countries…”[142] Extreme copyright expansion and constant cultural appropriation, together with a dysfunctional access to scientific and patented knowledge, heightened the North-South cultural divide. The Global South has been exposed to the effects of a pernicious form of cultural imperialism, without the advantages of freely reusing that culture for its own growth. The Vatican noted that

[o]n the part of rich countries there is excessive zeal for protecting knowledge through an unduly rigid assertion of the right to intellectual property, especially in the field of health care. At the same time, in some poor countries, cultural models and social norms of behaviour persist which hinder the process of development.[143]

[33]     The issue of access to knowledge was first publicly raised by the Brazilian government in a 1961 draft resolution.[144] Since then, access to knowledge has returned as a question of major international concern. Access to Knowledge (A2K) is a globalized movement aimed at promoting the redistribution of informational resources in favor of minorities and the Global South.[145] In 2006, the Yale Information Society Project held an A2K conference committed “to building a broad conceptual framework of ‘Access to Knowledge’ that can foster powerful coalitions between diverse groups.”[146] Yale’s 2007 A2K conference aimed to “further build the coalition amongst institutions and stakeholders” from the 2006 conference.[147] The Consumer Project on Technology (CPT) says that A2K:

takes concerns with copyright law and other regulations that affect knowledge and places them within an understandable social need and policy platform: access to knowledge goods…The rich and the poor can be more equal with regard to knowledge goods than to many other areas.[148]

[34]     Under the umbrella of Article 27 of the Universal Declaration of Human Rights, several working projects at the international level have been set up to address the requests of the A2K movement.[149] As part of the discussions leading to the adoption of the WIPO Development Agenda,[150] activists produced a document to start negotiations on a Treaty on Access to Knowledge.[151] The proposed treaty is based on the core idea that “restrictions on access ought to be the exception, not the other way around,” and that “both subject matter exclusions from, and exceptions and limitations to, intellectual property protection standards are mandatory rather than permissive.”[152] Unfortunately, consensus on the A2K Treaty is still an ephemeral mirage. However, after a long battle,[153] a narrow version of the A2K Treaty, aimed at promoting the use of protected works by disabled persons, was signed in Marrakesh in 2013.[154]

[35]     The quest for access to knowledge goes hand in hand with the desire of the Global South and minorities to reclaim cultural identity from imperialist power. The search for cultural distinctiveness and access to knowledge becomes a paradigm of equality.[155] Although international agreement from all stakeholders on an A2K Treaty may be hard to reach, grass-roots movements have pursued similar goals through different routes. A quest for open access to academic knowledge has occupied the recent agenda of a global network of institutions and stakeholders.

V. From “Elite-nment” to Open Knowledge Environments

[36]     In a momentous speech at the European Organization for Nuclear Research (CERN) in Geneva, Professor Lawrence Lessig reminded the audience of scientists and researchers that most scientific knowledge is locked away for the general public and can only be accessed by professors and students in a university setting.[156] Lessig pungently made the point that “if you are a member of the knowledge elite, then there is free access, but for the rest of the world, not so much…publisher restrictions do not achieve the objective of enlightenment, but rather the reality of ‘elite-nment.’” [157]

[37]     Other authors have reinforced this point. John Willinsky, for example, suggested that, as its key contribution, open access publishing (OAP) models may move “knowledge from the closed cloisters of privileged, well-endowed universities to institutions worldwide.”[158] As Willinsky noted, “[o]pen access could be the next step in a tradition that includes the printing press and penny post, public libraries and public schools. It is a tradition bent on increasing the democratic circulation of knowledge…”[159] There is a common understanding that the path to digital enlightenment may start with open access to scientific knowledge.

[38]     The open access movement in scholarly publishing was inspired by the dramatic increase in journal prices and by publisher restrictions on the reuse of information.[160] The academics’ reaction against the ‘cost of knowledge’—also known as the serials crisis—is on the rise, especially against the practice of charging “exorbitantly high prices for…journals,” and of “agree[ing] to buy very large ‘bundles.’”[161] As Reto Hilty noted, the price increase of publishers’ products—while publishers’ costs have sunk dramatically—has forced the scientific community to react by implementing open access options, because antiquated copyright laws have failed to bring about a reasonable balance of interests.[162] George Monbiot stressed the unfairness of the academic publishing system by noting, with specific reference to publishers such as Elsevier, Springer, or Wiley-Blackwell:

[w]hat we see here is pure rentier capitalism: monopolising a public resource then charging exorbitant fees to use it. Another term for it is economic parasitism. To obtain the knowledge for which we have already paid, we must surrender our feu to the lairds of learning.[163]

[39]     The parasitism lies in a monopoly over content that the academic publishers do not create and do not pay for. Researchers, hoping to publish in reputable journals, surrender their copyrights for free.[164] Most of the time, the production of that very content—now monopolized by the academic publishers—was funded by the public, through government research grants and academic incomes.[165] This led some authors to discuss the possibility of abolishing copyright for academic works altogether.[166] From the ancient proverbial idea of scientia donum dei est unde vendi non potest to the emergence of the notion of ‘open science’, the normative structure of science presents an unresolvable tension with the exclusive and monopolistic structure of intellectual property entitlements. Merton powerfully emphasized the contrast between the ethos of science and intellectual property monopoly rights:

“Communism,” in the nontechnical and extended sense of common ownership of goods, is a second integral element of the scientific ethos. The substantive findings of science are a product of social collaboration and are assigned to the community. They constitute a common heritage in which the equity of the individual producer is severely limited. An eponymous law or theory does not enter into the exclusive possession of the discoverer and his heirs, nor do the mores bestow upon them special rights of use and disposition. Property rights in science are whittled down to a bare minimum by the rationale of the scientific ethic. The scientist’s claim to “his” intellectual “property” is limited to that of recognition and esteem which, if the institution functions with a modicum of efficiency, is roughly commensurate with the significance of the increments brought to the common fund of knowledge.[167]

[40]     The major propulsion to open access at the European level was driven by the Berlin Conferences. The first Berlin Conference was organized in 2003 by the Max Planck Society and the European Cultural Heritage Online (ECHO) project to discuss ways of providing access to research findings.[168] Annual follow-up conferences have been organized ever since. The most significant result of the Berlin Conference was the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (“Berlin Declaration”), including the goal of disseminating knowledge through the open access paradigm via the Internet.[169] The Berlin Declaration has been signed by hundreds of European and international institutions. OAP is a publishing model where the researcher, the institution, or the party financing the research pays for publication and the article is then freely accessible. In particular, OAP refers to free and unrestricted world-wide electronic distribution and availability of peer-reviewed journal literature.[170] The Budapest Open Access Initiative uses a definition that includes free reuse and redistribution of “[o]pen [a]ccess” material by anyone.[171] According to Peter Suber, the de facto spokesperson of the OAP movement:

Open access (OA) is free online access. OA literature is not only free of charge to everyone with an Internet connection, but free of most copyright and licensing restrictions. OA literature is barrier-free literature produced by removing the price barriers and permission barriers that block access and limit usage of most conventionally published literature, whether in print or online.[172]

[41]     Since the inception of the open-access initiative in 2001, almost eleven thousand open access journals have been established, and their number is constantly rising.[173] In addition, several leading international academic institutions endorsed open-access policies and are working towards mechanisms to cover open-access journals’ operating expenses.[174] The same approach is increasingly followed by governmental institutions,[175] in light of the fact that economic studies have shown a positive net value of OAP models when compared to other publishing models.[176] The European Commission, for example, plans to make OAP the norm for research receiving funding from its Horizon 2020 programme—the EU framework programme for research and innovation.[177] As part of its Innovation and Research Strategy for Growth, the UK government has announced that all publicly funded scientific research must be published in open-access journals.[178] In the US, several research funding agencies have instituted open access conditions.[179] After an initial voluntary adoption in 2005, the Consolidated Appropriations Act of 2008[180] instituted an open access mandate for research projects funded by the NIH.[181] So far, the NIH has reported a compliance rate of 75%.[182] Together with research articles, data, teaching materials, and the like, the importance of open access models extends also to books. Millions of historic volumes are now openly accessible from various digitization projects such as Europeana, Google Books, or HathiTrust. In addition, many recent volumes are also openly available from a variety of academic presses, government and nonprofit agencies, and other individuals and groups. Libraries’ cataloging data are increasingly released under open access models.[183]

[42]     Criticizing universities for having become part of the problem of the enclosure of the scientific commons by “avidly defending their rights to patent their research results, and licence as they choose,” Richard Nelson argues that “the key to assuring that a large portion of what comes out of future scientific research will be placed in the commons is staunch defense of the commons by universities.”[184] Nelson continues by arguing that if universities “have policies of laying their research results largely open, most of science will continue to be in the commons.”[185] There is a true responsibility of the academic community towards expanding OAP. The role of universities in the open access and OAP movement is critical, and more than any other institution they have a motive to promote the goals of “open science.” Willinsky advocated the idea that scholars have a responsibility to make their work available OA globally by referring to an ‘access principle’ and noting that “[a] commitment to the value and quality of research carries with it a responsibility to extend the circulation of such work as far as possible and ideally to all who are interested in it and all who might profit by it.”[186] In this sense, the true challenge ahead of the OAP movement is to turn university environments, and the knowledge produced within them, into a more easily and freely accessible public good, perhaps better integrating the OAP movement with Open University and Open Learning.

[43]     Seeking to reap the full value that open access can yield in the digital environment, Jerome Reichman and Paul Uhlir proposed a model of open knowledge environments (OKEs) for digitally networked scientific communication.[187] OKEs would “bring the scholarly communication function back into the universities” through “the development of interactive portals focused on knowledge production and on collaborative research and educational opportunities in specific thematic areas.” [188] Also, OKEs might reshape the role of libraries. As mentioned earlier, libraries are knowledge infrastructures and should be one of the main drivers of access to knowledge in the digital networked society. However, extreme commodification of information, propelled by the present legal framework, may drive libraries away from their function as knowledge repositories. As Guy Pessach noted,

[l]ibraries are increasingly consuming significant shares of their knowledge goods from globalized publishers according to the contractual and technological protection measures that these publishers impose on their digital content. Thus there is an unavoidable movement of enclosure regarding the provision of knowledge through libraries, all in a manner that practically compels libraries to take part in the privatization of knowledge supply and distribution.[189]

[44]     Therefore, the road to global access to knowledge is to provide digital libraries with a better framework to support their independence from the increasing commodification of knowledge goods. Several preliminary steps have been taken in the context of articles 3-1(V) and 3-1(VIII) of the WIPO A2K draft treaty and other legal instruments.[190] A World Digital Public Library that integrates OKEs will push forth the rediscovery of currently unused or inaccessible works, open up the riches of knowledge in formats that are accessible to persons with disabilities, and empower a superior democratic process by favoring access regardless of users’ market power.

VI. The Emergence of the Public Domain[191]

[45]     As Jessica Litman noted, “a vigorous public domain is a crucial buttress to the copyright system; without the public domain, it might be impossible to tolerate copyright at all.”[192] The increasing enclosure of the public domain has contributed to the crisis of acceptance into which copyright law has fallen. The emergence and recognition of the public domain, the development of a public domain project, and the advent of a movement for cultural environmentalism are key elements of the resistance to copyright over-expansion. More fundamentally perhaps, the emphasis on the importance of the public domain has gained momentum together with the rise of the networked information economy and its ethical revolution emphasizing mass collaboration, the sharing economy, and gift exchange. In this respect, Daniel Drache noted that the emergence of the public domain and public goods in the globalized society has increasingly troubled the future prospects of ‘market fundamentalism.’[193]

[46]     Some authors have suggested that the Statute of Anne actually created the public domain by limiting the duration of protected works and by introducing formalities.[194] However, early copyright law had no positive term to refer affirmatively to the public domain, though terms like publici juris or propriété publique had been employed by 18th century jurists.[195] Nonetheless, the fact of the public domain was recognized, though no single locution captured the concept. Soon, the idea of the public domain evolved into a “discourse of the public domain—that is, the construction of a legal language to talk about public rights in writings.”[196] Historically, the term public domain was first employed in France by the mid-19th century to mean the expiration of copyright.[197] The English and American copyright discourse borrowed the term, with the same meaning, around the time of the drafting of the Berne Convention.[198] Traditionally, the public domain has been defined in relation to copyright as the opposite of property, as the “other side of the coin of copyright” that “is best defined in negative terms.”[199] This traditional definition regarded the public domain as a “wasteland of undeserving detritus” and did not “worry about ‘threats’ to this domain any more than [it] would worry about scavengers who go to garbage dumps to look for abandoned property.”[200] This is no longer so: the traditional definitional approach has been discarded in the last thirty years.

[47]     In 1981, Professor David Lange published his seminal work, Recognizing the Public Domain, and departed from the traditional line of investigation of the public domain. Lange suggested that “recognition of new intellectual property interests should be offset today by equally deliberate recognition of individual rights in the public domain.”[201] Lange called for an affirmative recognition of the public domain and drafted the skeleton of a new theory for the public domain. The public domain that Lange had in mind would become a “sanctuary conferring affirmative protection against the forces of private appropriation” that threatened creative expression.[202] The affirmative public domain was a powerfully attractive idea for scholarly literature and civil society. Lange spearheaded a “conservancy model,” concerned with promoting the public domain and protecting it against any threats, in opposition to the traditional “cultural stewardship model,” which regarded ownership as the prerequisite of productive management.[203] The positive identification of the public domain propelled the “public domain project,” as Michael Birnhack called it.[204]

[48]     Many authors attempted to define, map, and explain the role of the public domain as an alternative to the commodification of information that threatened creativity.[205] This ongoing public domain project offers many definitions that attempt to construe the public domain positively. In any event, a positive, affirmative definition of the public domain is fluid by nature. An affirmative definition of the public domain is a political statement, the endorsement of a cause. In other words, “[t]he public domain will change its shape according to the hopes it embodies, the fears it tries to lay to rest, and the implicit vision of creativity on which it rests. There is not one public domain, but many.”[206] Notwithstanding many complementary definitions, consistency is found in the common idea that the “materials that compose our cultural heritage must be free for all to use no less than matter necessary for biological survival.”[207] As a corollary, many modern definitions of the public domain are unified by concerns over recent copyright expansionism. The common understanding of the participants in the public domain project is that enclosure of the “materials that compose our cultural heritage” is a welfare loss against which society at large must be guarded.[208] The modern definitional approach endorsed by the public domain project is intended to turn the old metaphor, describing the public domain as what is “left over after intellectual property had finished satisfying its appetite,”[209] upside down by thinking of copyright as “a system designed to feed the public domain providing temporary and narrowly limited rights…all with the ultimate goal of promoting free access.”[210] Moreover, the public domain envisioned by recent legal, public policy and economic analysis becomes “the place we quarry the building blocks of our culture.”[211] However, the construction of an affirmative idea of the public domain should always consider that the abstraction of the public domain is slippery.[212] That affirmative notion must be embodied in a physical space that may be immediately protected and nourished. As Professor Lange puts it, “the problems will not be resolved until courts have come to see the public domain not merely as an unexplored abstraction but as a field of individual rights fully as important as any of the new property rights.”[213]

[49]     The modern public domain discourse owes much to the legal analysis of the governance of the commons, natural resources used by many individuals in common. Although the public domain and the commons are distinct concepts,[214] the similarities are many. Since the origin of the public domain discourse, the environmental metaphor has been widely used to refer to the cultural public domain.[215] Therefore, the traditional environmental conception of the commons was ported to the cultural domain and applied to intellectual property policy issues. Environmental and intellectual property scholars started to look at knowledge as a commons—a shared resource. In 2003, Nobel laureate Elinor Ostrom and her colleague Charlotte Hess discussed the applicability of their ideas on the governance and management of common pool resources to the new realm of the intellectual public domain.[216] Subsequent literature continued to develop the concept of the cultural commons in the footsteps of Ostrom’s analyses.[217] The application of the physical commons literature to cultural resources brings a shift in approach and methodology from the previous discourse of the public domain. This different approach has been described as follows:

[t]he old dividing line in the literature on the public domain had been between the realm of property and the realm of the free. The new dividing line, drawn as a palimpsest on the old, is between the realm of individual control and the realm of distributed creation, management, and enterprise.[218]

[50]     Under this conceptual scheme, restraint on use may no longer be an evil, but a necessity of a well-run commons. The individual, legal, and market-based control of the property regime is juxtaposed with the collective and informal controls of the well-run commons.[219] The well-run commons can avoid the “tragedy of the commons” without the need for single party ownership.

[51]     The movement to preserve the environmental commons inspired a new politics of intellectual property.[220] The environmental metaphor has propelled what can be termed a cultural environmentalism.[221] Several authors, spearheaded by Professor James Boyle, have cast a defense of the public domain on the model of the environmental movement. Morphing the public domain into the commons, and casting the defense of the public domain on the model of the environmental movement, has the advantage of embodying the public domain in a much more physical form, minimizing its abstraction and the related difficulty of actively protecting it.[222] The primary focus of cultural environmentalism is to develop a discourse that will make the public domain visible.[223] Before the environmental movement, the environment itself was invisible. Therefore, “like the environment,” Boyle suggests, echoing David Lange, “the public domain must be ‘invented’ before it can be saved.”[224] Today, the public domain has been “invented” as a positive concept, and the “coalition that might protect it,” evoked if not called into being by scholars more than a decade ago, has formed.[225] Many academic and civic endeavors have joined and propelled this coalition.[226] Civic advocacy of the public domain and access to knowledge has also been followed by several institutional variants, such as the World Intellectual Property Organization’s “Development Agenda.”[227] Recommendation 20 of the Development Agenda endorses the goal “[t]o promote norm-setting activities related to IP that support a robust public domain in WIPO’s Member States.”[228] Europe has put together a diversified network of projects for the protection and promotion of the public domain and open access.[229] As a flagship initiative, the European Union has promoted COMMUNIA, the European Thematic Network on the Digital Public Domain.[230] Several COMMUNIA members embodied their vision in the Public Domain Manifesto.[231] In addition, other European policy statements endorsed the same core principles of the Public Domain Manifesto. The Europeana Foundation has published the Public Domain Charter to stress the value of public domain content in the knowledge economy.[232] The Free Culture Forum released the Charter for Innovation, Creativity and Access to Knowledge, pleading for the expansion of the public domain, the accessibility of public domain works, the contraction of the copyright term, and the free availability of publicly funded research.[233] The Open Knowledge Foundation launched the Panton Principles for Open Data in Science to endorse the concept that “data related to published science should be explicitly placed in the public domain.”[234]

[52]     The focus of cultural environmentalism has been magnified in online commons and the Internet as the “über-commons—the grand infrastructure that has enabled an unprecedented new era of sharing and collective action.”[235] In the last decade, we have witnessed the emergence of a “single intellectual movement, centered on the importance of the commons to information production and creativity generally, and to the digitally networked environment in particular.”[236] According to David Bollier, the commoners have emerged as a political movement committed to freedom and innovation.[237] The “commonist” movement created a new order that is embodied in countless collaborative online endeavors.

[53]     The emergence and growth of an environmental movement for the public domain and, in particular, the digital public domain, is morphing the public domain into our cultural commons. We must look at it as a shared resource that cannot be commodified, much like our air, water, and forests. As with the natural environment, the public domain and the cultural commons that it embodies must enjoy sustainable development. As with our natural environment, the need to promote a “balanced and sustainable development” of our cultural environment is a fundamental right that is rooted in the Charter of Fundamental Rights of the European Union.[238] Overreaching property theory and overly protective copyright law disrupt the delicate tension between access and protection. Unsustainable cultural development, enclosure, and commodification of our cultural commons will produce cultural catastrophes. Just as unsustainable environmental development has polluted our air, contaminated our water, mutilated our forests, and disfigured our natural landscape, unsustainable cultural development will ravage and corrupt our cultural heritage and information landscape.

VII. Conclusions

[54]     I would like to conclude my review of this movement “resisting the resistance” to the Digital Revolution by sketching out a roadmap for reform that builds upon its vision. This roadmap reshapes the interplay between community, law, and market to envision a system that may fully exploit the digital opportunity, looking to the history of creativity as a guide.[239] This proposal revolves around the pivotal role of users in a modern system for enhancing creativity. The coordinates of the roadmap correlate to four different but interlinked facets of a healthy creative paradigm: namely, (a) the necessity to rethink users’ rights, in particular users’ involvement in the legislative process; (b) the emergence of a politics of the public domain, rather than a politics of intellectual property; (c) the need for cumulative and transformative creativity to regain its role through a redefinition of the copyright permission paradigm; and (d) the transition to a consumer gift system or user patronage, through digital crowd-funding.

[55]     The roadmap for reform emphasizes the role of users. The Internet revolution is a bottom-up revolution. User-based culture defines the networked society, together with a novel concept of authorship mingling users and authors together. Therefore, the role of users in our legislative process and the relevance of user rights should be reinforced. So far, users have had very limited access to the bargaining table when copyright policies were enacted. This is due to the dominant mechanics of lobbying, which largely exclude users from any policy decisions. This has led to the implementation of a copyright system that is strongly protectionist and pro-distributor. In particular, the regulation of the Internet and the solutions given to the dilemmas posed by digitalization may undermine the potential of this momentous change and limit positive externalities for users.

[56]     In the networked, peer, and mass productive environment, creativity seeks a politics of inclusive rights, rather than exclusive ones. This is a paradigm shift that would re-define the hierarchy of priorities by thinking in terms of “cultural policy” and developing a political policy of the public domain, rather than a political policy of intellectual property. Before the recognition of any intellectual property interests, a politics of the public domain must set up the “deliberate recognition of individual rights in the public domain.”[240] It must provide positive protection of the public domain from appropriation. A politics of the public domain would reconnect policies for creativity with our past and our future, looking back at our tradition of cumulative creativity and looking forward at networked, mass collaborative, user-generated creativity.[241]

[57]     In order to reconnect the creative process with its traditional cumulative and collaborative nature, a politics of inclusive rights and a politics of the public domain seek the demise of copyright exclusivity.[242] In my roadmap for reform, I argue for the implementation of additional mechanisms to provide economic incentives to create, such as a liability rule integrated into the system and an apportionment of profits. A politics of inclusivity would de-construct the post-romantic paradigm that over-emphasized creative individualism and absolute originality in order to adapt policies to networked and user-generated creativity.

[58]     Finally, I draw a parallel between traditional patronage, corporate patronage, and neo-patronage or user patronage as a re-conceptualization of a patronage system in line with the exigencies of an interconnected digital society.[243] In the future, support for creativity may increasingly derive from a direct and unfiltered exchange between authors and the public, who would become the patrons of our creativity. Remuneration through attribution, self-financing through crowd-funding, ubiquity of digital technology, and mass collaboration will keep the creative process in motion. This market transformation will facilitate a direct, unrestrained “discourse” between creators and the public. Yet, the role of distributors will be redefined and may partially disappear, making the transition long and uncertain.

* Senior Researcher and Lecturer, Centre for International Intellectual Property Studies (CEIPI), University of Strasbourg; Non-Resident Fellow, Stanford Law School, Center for Internet and Society. S.J.D., Duke University School of Law, Durham, North Carolina; LL.M., Duke University School of Law, Durham, North Carolina; LL.M., Strathclyde University, Glasgow, UK; J.D., Università Cattolica del Sacro Cuore, Milan, Italy. The author can be reached at gcfrosio@ceipi.edu.

[1] See Raymond S. R. Ku, The Creative Destruction of Copyrights: Napster and the New Economics of Digital Technology, 69 U. Chi. L. Rev. 263 (2002).

[2] Joseph A. Schumpeter, Capitalism, Socialism, and Democracy 82-83 (Harper and Row 1975) (1942).

[3] The Washington Declaration on Intellectual Property and the Public Interest, Infojustice.org (August 25-27, 2011), http://infojustice.org/washington-declaration-html, archived at https://perma.cc/W9U8-LUNA; See also Sebastian Haunss, The Politicisation of Intellectual Property: IP Conflicts and Social Change, 3 W.I.P.O.J. 129 (2011).

[4] See Douglas Rushkoff, Open Source Democracy 46-62 (DEMOS 2003); see also Yochai Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, 112 Yale L. J. 369, 371-372 (2002).

[5] See id. at 374.

[6] See Yochai Benkler, A Free Irresponsible Press: Wikileaks and the Battle over the Soul of the Networked Fourth Estate, 46 Harvard Civil Rights-Civil Liberties L. Rev. 311 (2011) (discussing the democratic functionality of Wikileaks).

[7] See Clay Shirky, Cognitive Surplus: Creativity and generosity in a Connected Age 81-109 (The Penguin Press 2010).

[8] See, e.g., Rebecca Tushnet, Payment in Credit: Copyright Law and Subcultural Creativity, 70 Law & Contemp. Probs. 138 (2007); see also Theorizing Fandom: Fans, Subculture and Identity (Alexander Alison & Harris Cheryl eds., Hampton Press 1997); see generally Andrew L. Shapiro, The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World we Know (Public Affairs 1999).

[9] See e.g., Denise E. Murray, Changing Technologies, Changing Literacy Communication, 2 Language Learning & Tech. 43, (2000).

[10] See e.g., William Patry, Moral Panics and the Copyright Wars 27 (Oxford U. Press 2009) (explaining the impossibility of governments prosecuting all violations of copyright infringement in a peer-to-peer network).

[11] See id. at 27-28.

[12] See id. at 25-27.

[13] Id. at 5.

[14] See Patry, supra note 10 at 39.

[15] See Copyright In The Digital Era, Building Evidence For Policy, National Academies (2013), http://sites.nationalacademies.org/cs/groups/pgasite/documents/webpage/pga_085415.pdf, archived at https://perma.cc/757P-QXY2.

[16] See Patry, supra note 10 at 26.

[17] See id.

[18] See Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, supra note 4 at 400-401.

[19] I have discussed the effects of copyright expansion on semiotic democracy—with a comprehensive review of literature on point—in a previous piece of mine, to which I refer the reader. See generally Giancarlo F. Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, 13(2) J. Marshall Rev. Intell. Prop. L. 341 (2014), https://papers.ssrn.com/sol3/papers2.cfm?abstract_id=2199210, archived at https://perma.cc/MUM8-B9H8.

[20] See generally Giancarlo F. Frosio, User Patronage: The Return of the Gift in the “Crowd Society”, 2015(5) Mich. St. L. Rev. 1983, 2036-2039 (2015), https://papers.ssrn.com/sol3/papers2.cfm?abstract_id=2659659, archived at https://perma.cc/UEW8-C9KR (discussing Baudrillard’s categories as applied to cyberspace and the Digital Revolution).

[21] See Jean Baudrillard, The Consumer Society: Myths and Structures 66–68 (Mike Featherstone ed., Sage Publ’ns 1998) (1970).

[22] Id. at 67.

[23] Id. at 68.

[24] Id. at 67.

[25] Eben Moglen, Professor, Speech at the Law of the Common Conference at Seattle University: Free and Open Software: Paradigm for a New Intellectual Commons (March 13, 2009) (transcript available at http://en.wikisource.org/wiki/Free_and_Open_Software:_Paradigm_for_a_New_Intellectual_Commons), archived at http://perma.cc/J78D-R8AG.

[26] Francis Gurry, Dir. Gen. of the World Intellectual Prop. Org., Speaker at the Blue Sky Conference: Future Directions in Copyright Law at Queensl. Univ. of Tech., Brisbane, Austl. (February 25, 2011) (transcript available at http://www.wipo.int/about-wipo/en/dgo/speeches/dg_blueskyconf_11.html, 1–2), archived at https://perma.cc/KM6G-6WCL (emphasis added).

[27] See Uwe Neddermeyer, Why were there no Riots of the Scribes? First Result of a Quantitative Analysis of the Book-production in the Century of Gutenberg, 31 Gazette du Livre Médiéval 1, 4-7 (1997) (noting that, at the time of the printing revolution, resistance to the new technology was minimal; only a few protests from scribes were recorded throughout Europe, the only reported ones occurring in Genoa in 1472, in Augsburg in 1473, and in Lyon in 1477; reconversion from old to new jobs was smooth, a variety of new jobs was created, and there are no indications of unemployment or poverty suffered by any part of society due to the introduction of the new technology); see also Peter Burke, The Italian Renaissance: Culture and Society in Italy, at 71 (Princeton U. Press 1999) (noting the adaptability of several scribes, who became printers themselves); see also Cyprian Blagden, The Stationers’ Company: A History, 1403–1959, at 23 (Stanford U. Press 1977) (1960) (reporting that “there is no evidence of unemployment or organized opposition to the new machines” in England). Quite the contrary, in the last quarter of the fifteenth century more money was spent on books than at any time before.

[28] Michael Masnick & Michael Ho, The Sky is Rising: A Detailed Look at the State of the Entertainment Industry, Floor 64, 5 (January 2012), http://www.techdirt.com/skyisrising, archived at https://perma.cc/42WV-N9CC.

[29] Id.

[30] See Frosio, supra note 20, at 2039-2046.

[31] See Masnick & Ho, supra note 28 at 5-6.

[32] Neelie Kroes, European Commission Vice-President for the Digital Agenda, A Digital World of Opportunities at the Forum d’Avignon – Les Rencontres Internationales de la Culture, de l’Économie et des Medias, (November 5, 2010), available at http://europa.eu/rapid/press-release_SPEECH-10-619_en.htm, archived at https://perma.cc/ERN7-5TN4.

[33] See Gurry supra note 26.

[34] Copyright Perspectives: Past, Present and Prospect vii (Brian Fitzgerald and John Gilchrist eds., 2015).

[35] See AP, Pirate Party gains three seats in Iceland’s parliament, CBS News (Apr. 30, 2013, 12:16 PM), http://www.cbsnews.com/news/pirate-party-gains-three-seats-in-icelands-parliament/, archived at https://perma.cc/R29V-MNRP.

[36] See Gurry supra note 26. See e.g., Miaoran Li, The Pirate Party and The Pirate Bay: How the Pirate Bay Influences Sweden And International Copyright Relations, 21 Pace Int’l L. Rev. 281 (2009); see also Jonas Anderson, For the Good of the Net: The Pirate Bay as a Strategic Sovereign, 10 Culture Machine 64 (2009); see also Neri Luca, La Baia dei Pirati: Assalto al Copyright (Cooper Editore 2009).

[37] See Copyright and Wrong: Why the Rules on Copyright need to Return to Their Roots, The Economist (Apr. 8, 2010), http://www.economist.com/displayStory.cfm?story_id=15868004, archived at https://perma.cc/N5JU-YU4U.

[38] Austl. Productivity Commission, Intell. Prop. Arrangements, Drft. Rep. 16-17 (2016), https://assets.documentcloud.org/documents/2819862/Intellectual-Property-Draft.pdf, archived at https://perma.cc/4WFS-4GTU.

[39] See generally, Jessica Silbey, The Eureka Myth: Creators, Innovators, and Everyday Intellectual Property (Stan. U. Press 2015) (drawing on interview-based empirical data to argue that suggesting creators – and even businesses – need intellectual property and exclusivity overstates, if not misstates, the facts, and explaining how this misunderstanding about creativity sustains a flawed copyright system); see also Jessica Litman, Real Copyright Reform, 96 Iowa L. Rev. 1, 3-5, 31-32 (2010) (noting that “the deterioration in public support for copyright is the gravest of the dangers facing the copyright law in a digital era…[c]opyright stakeholders have let copyright law’s legitimacy crumble…”); see also John Tehranian, Infringement Nation: Copyright 2.0 and You xvi-xxi (Oxford U. Press 2011); see also Brett Lunceford & Shane Lunceford, The Irrelevance Of Copyright In The Public Mind, 7 Nw. J. Tech. & Intell. Prop. 33 (2008).

[40] See e.g., Music Downloading, File-Sharing and Copyright, Pew Res. Ctr.: Internet & Am. Life Project, http://pewinternet.org/t2003/07/31/music-downloading-file-sharing-and-copyright/, archived at https://perma.cc/X3GP-DL25.

[41] See id.

[42] David Lange, Reimagining The Public Domain, 66 Law & Contemp. Probs. 471 (2003).

[43] See Sacha Wunsch-Vincent, The Economics of Copyright and the Internet: Moving to an Empirical Assessment Relevant in the Digital Age, (World Intell. Prop. Org., Economic Research Working Paper No. 9, 2013) at 2, http://www.wipo.int/edocs/pubdocs/en/wipo_pub_econstat_wp_9.pdf, archived at https://perma.cc/QN3C-A7XL.

[44] See Ku supra note 1, at 300-305; see also Raymond S. R. Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, 18 Berkeley Tech. L. J. 539 (2003); see also Paul Ganley, The Internet, Creativity and Copyright Incentives, 10 J. Intell. Prop. Rts. 188 (2005); see also John F. Duffy, The Marginal Cost Controversy In Intellectual Property, 71 U. Chi. L. Rev. 37 (2004).

[45] See Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, supra note 44 at 539.

[46] See id. at 566.

[47] See id.

[48] See Ku, supra note 1, at 304-305.

[49] See Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, supra note 44 at 539.

[50] See Tom W. Bell, The Specter of Copyism v. Blockheaded Authors: How User-Generated Content Affects Copyright Policy, 10 Vand. J. Ent. & Tech. L. 841, 853 (2008).

[51] See Wunsch-Vincent, supra note 43.

[52] See Bell, supra note 50, at 844.

[53] See id. at 855.

[54] Neelie Kroes, Vice President, Eur. Comm’n, Speech at the Forum d’Avigon, Who Feeds the Artist? (Nov. 19, 2011) (transcript available at http://europa.eu/rapid/pressReleasesAction.do?reference=SPEECH%2F11%2F777), archived at https://perma.cc/QMV4-U24H.

[55] John Perry Barlow, Selling Wine Without Bottles: The Economy of Mind on the Global Net, Wired (Mar. 1, 1994), yin.arts.uci.edu/~studio/readings//barlow-wine.html, archived at https://perma.cc/FL5M-NBKF.

[56] Nicholas Negroponte, Being Digital 58 (First Vintage Books ed. 1996).

[57] Ian Hargreaves, Digital Opportunity: A Review of Intellectual Property and Growth 1 (2011).

[58] Dmitry Medvedev, President of Russ., Message to the G20 Leaders (Nov. 3, 2011) (transcript available at http://eng.kremlin.ru/news/3018), archived at https://perma.cc/P9TL-7LGL.

[59] See, e.g., Pamela Samuelson, The Copyright Principles Project: Directions for Reform, 25 Berkeley Tech. L. J. 1175, 1178–79 (2010); see also William Patry, How to Fix Copyright (Oxford U. Press 2012); see also Diane Zimmerman, Finding New Paths through the Internet, Content and Copyright, 12 Tul. J. Tech. & Intell. Prop. 145, 145 (2009); see also Hannibal Travis, Opting Out of the Internet in the United States and the European Union: Copyright, Safe Harbors, and International Law, 84 Notre Dame L. Rev. 331, 335 (2008); see also Guy Pessach, Reciprocal Share-Alike Exemptions in Copyright Law, 30 Cardozo L. Rev. 1245, 1247 (2008); see also Jessica Litman, Sharing and Stealing, 27 Hastings Comm. & Ent. L. J. 1, 2 (2004); see also Mark Lemley & R. Anthony Reese, Reducing Digital Copyright Infringement Without Restricting Innovation, 56 Stan. L. Rev. 1345, 1349–50 (2004); see also William Landes & Richard Posner, Indefinitely Renewable Copyright, 70 U. Chi. L. Rev. 471, 471 (2003).

[60] See, e.g., Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harv. L. Rev. 281, 282 (1970) (concluding “[i]t would be possible, for instance, to do without copyright, relying upon authors, publishers, and buyers to work out arrangements among themselves that would provide books’ creators with enough money to produce them.”); see also Jon M. Garon, Normative Copyright: A Conceptual Framework for Copyright Philosophy and Ethics, 88 Cornell L. Rev. 1278, 1283 (2003) (noting “[u]nless there is a valid conceptual basis for copyright laws, there can be no fundamental immorality in refusing to be bound by them.”); see also Michele Boldrin and David Levine, Against Intellectual Monopoly (Cambridge U. Press 2008) (disputing the utility of intellectual property altogether); see also Martin Skladany, Alienation by Copyright: Abolishing Copyright to Spur Individual Creativity, 55 J. Copyright Soc’y U.S.A. 361, 361 (2008); see also Joost Smiers and Marieke van Schijndel, Imagine There Is No Copyright and No Cultural Conglomerates Too (Inst. of Network Cultures 2009); see also Joost Smiers, Art Without Copyright: A Proposal for Alternative Regulation, in Freedom of Culture: Regulation and Privatization of Intellectual Property and Public Space 22–29 (Jorinde Seijdel trans., NAi Publishers 2007); see also Joost Smiers and Marieke Van Schijndel, Imagining a World Without Copyright: The Market and Temporary Protection, a Better Alternative for Artists and Public Domain, in Copyright and Other Fairy Tales: Hans Christian Andersen and the Commodification of Creativity 129 (Helle Porsdam ed., Edward Elgar Publ’g Ltd. 2006); see also Frank Thadeusz, No Copyright Law: The Real Reason for Germany’s Industrial Expansion?, Spiegel Online (Aug. 18, 2010), http://www.spiegel.de/international/zeitgeist/0,1518,710976,00.html, archived at https://perma.cc/BPQ8-TG69 (providing a historical and empirical argument against copyright). Cf. Lior Zemer, The Conceptual Game in Copyright, 28 Hastings Comm. & Ent L. J. 409, 409 (2006).

[61] See, e.g., Lawrence Lessig, Laws that Choke Creativity, Ted (2007) (transcript available at https://www.ted.com/talks/larry_lessig_says_the_law_is_strangling_creativity/transcript?language=en), archived at https://perma.cc/9EFZ-GAX9.

[62] See Nat’l Res. Council, Executive Summary, The Digital Dilemma: Intellectual Property in the Information Age, 62 Ohio St. L. J. 951 (2001), http://moritzlaw.osu.edu/students/groups/oslj/files/2012/03/62.2.nrc_.pdf, archived at https://perma.cc/484D-RWU9.

[63] National Research Council, The Digital Dilemma: Intellectual Property in The Information Age 140 (National Academy Press, 2000).

[64] See Copyright Policy, Creativity, and Innovation in the Digital Economy, USPTO (July 2013), https://www.uspto.gov/sites/default/files/news/publications/copyrightgreenpaper.pdf, archived at https://perma.cc/K3B8-33GG (demonstrating how lawmakers have struggled for years trying to strike a balance).

[65] See Larry Lessig, Speech at the WIPO Global Meeting on Emerging Copyright Licensing Modalities –Facilitating Access to Culture in the Digital Age, Geneva, Switzerland (November 4, 2010), available at http://www.wipo.int/meetings/en/2010/wipo_cr_lic_ge_10/program.html, archived at https://perma.cc/K7C2-FXLU.

[66] See Michael Strangelove, The Empire of Mind: Digital Piracy and the Anti-Capitalist Movement (University of Toronto Press 2005).

[67] See, e.g., Eben Moglen, Freeing the Mind: Free Software and the Death of Proprietary Culture, June 29, 2003, available at http://emoglen.law.columbia.edu/publications/maine-speech.html, archived at https://perma.cc/44SB-9U3G; see also Eben Moglen, Anarchism Triumphant: Free Software and the Death of Copyright, June 28, 1999, available at http://emoglen.law.columbia.edu/publications/anarchism.html, archived at https://perma.cc/Q93F-5LZW.

[68] See Catherine Casserly and Joi Ito, The Power of Open (Creative Commons 2011), http://thepowerofopen.org, archived at https://perma.cc/WBD4-CDK4; see also Niva Elkin-Koren, Exploring Creative Commons: A Skeptical View of a Worthy Pursuit, in The Future of the Public Domain: Identifying the Commons In Information Law 325-345 (Lucie Guibault and P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[69] See Giancarlo Frosio, Communia Final Report 50-60, (Communia 2011), http://communia-project.eu/final-report/defining-public-domain.html (last visited January 31, 2017).

[70] See, e.g., supra note 64 at iii.

[71] See, e.g., Lewis Hyde, How to Reform Copyright, The Chronicle (October 9, 2011), http://chronicle.com/article/How-to-Reform-Copyright/129280, archived at https://perma.cc/U23A-CMJJ; see also Christopher Sprigman, Reform(aliz)ing Copyright, 57 Stan. L. Rev. 485 (2004) (proposing an optional registration system that subjects unregistered works to a default license under which the use of the work would trigger only a modest statutory royalty liability); see also Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity 140 (Penguin 2004); see also Lawrence Lessig, The Future of Ideas: The Fate of The Commons in a Connected World (Vintage Books 2002); see also Lawrence Lessig, Recognizing the Fight We’re In, Keynote Speech delivered at the Open Rights Group Conference, London, UK (March 24, 2012), at 36:40-38:28, available at http://vimeo.com/39188615, archived at https://perma.cc/7K5Q-DUJY (proposing the reintroduction of formalities at least to secure extensions of copyright, if legislators decide to introduce them).

[72] See Stef van Gompel, Formalities in the digital era: an obstacle or opportunity?, in Global Copyright: Three Hundred Years Since the Statute of Anne, from 1709 to Cyberspace 2-4 (Lionel Bently, Uma Suthersanen and Paul Torremans eds., Edward Elgar 2010) (arguing that the pre-digital objections against copyright formalities cannot be sustained in the digital era); see also Takeshi Hishinuma, The Scope of Formalities in International Copyright Law in a Digital Context, in Global Copyright: Three Hundred Years Since the Statute of Anne, from 1709 to Cyberspace 460-467 (Lionel Bently, Uma Suthersanen and Paul Torremans eds., Edward Elgar 2010).

[73] See Andrew Gowers, Gowers Review of Intellectual Property (HM Treasury, November 2006), at 6, ([r]ecommendation 14b endorses the establishment of a voluntary register of copyright), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/228849/0118404830.pdf, archived at https://perma.cc/P755-ZSZZ.

[74] See id. at 40. 

[75] See Tanya Aplin, A Global Digital Register for the Preservation and Access to Cultural Heritage: Problems, Challenges and Possibilities, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World 3, at 23 (Estelle Derclaye (ed.), Edward Elgar 2010) (discussing copyright registers); see also Caroline Colin, Registers, Databases and Orphan Works, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World, supra, 28, at 29; see also Steven Hetcher, A Central Register of Copyrightable Works: a U.S. Perspective, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World, 156, at 158.

[76] See Orphan Works and Mass Digitization: A Report of the Register of Copyrights, United States Copyright Office at 66 (June 2015), https://www.copyright.gov/orphan/reports/orphan-works2015.pdf, archived at https://perma.cc/642S-N52A.

[77] See van Gompel, supra note 72, at 12-13 (noting that only voluntary supply of information would be compliant with the no-formalities prescription of the Berne Convention).

[78] See Accessible Registries of Rights Information and Orphan Works [ARROW], http://www.arrow-net.eu, archived at https://perma.cc/RE3M-NS7K (creating registries of rights information and orphan works); see also Barbara Stratton, Seeking New Landscapes: a Rights Clearance Study in the Context of Mass Digitization of 140 Books Published between 1870 and 2010, at 5, 35-36 (British Library 2011), https://www.arrow-net.eu/sites/default/files/Seeking%20New%20Landscapes.pdf, archived at https://perma.cc/WR5D-6SLR (showing that, in contrast to the average four hours per book to undertake a diligent search, “the use of the ARROW system took less than 5 minutes per title to upload the catalogue records and check the results.”).

[79] See Marco Ricolfi, Copyright Policies for Digital Libraries in the Context of the i2010 Strategy, at 2, 6 (July 1, 2008), http://www.communia-project.eu/communiafiles/conf2008p_Copyright_Policy_for_digital_libraries_in_the_context_of_the_i2010_strategy.pdf, archived at https://perma.cc/4439-9JY9 (paper presented at the 1st COMMUNIA Conference); see also Marco Ricolfi, Making Copyright Fit for the Digital Agenda, 5-6 (Feb. 25, 2011), available at http://nexa.polito.it/nexafiles/Making%20Copyright%20Fit%20for%20the%20Digital%20Agenda.pdf, archived at https://perma.cc/X4UZ-QCMJ.

[80] See Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy 253-255 (Bloomsbury 2008) (proposing different routes for professional, remix and amateur authors, registries, and the re-introduction of formalities and an opt-in system).

[81] See Ricolfi, Making Copyright Fit for the Digital Agenda, supra note 79 at 6.

[82] See id.

[83] See id.

[84] See id.

[85] Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, Speech at Business for New Europe event: Ending Fragmentation of the Digital Single Market (Feb. 7, 2010) (transcript available at http://europa.eu/rapid/press-release_SPEECH-11-70_en.htm?locale=en, archived at https://perma.cc/WJM6-QJMT), at 2.

[86] Id.

[87] See U.S. Copyright Office, Rep. of the Reg. of Copyrights: Rep. on Orphan Works 95 (Jan. 2006).

[88] See Christian L. Castle & Amy E. Mitchell, Unhand That Orphan: Evolving Orphan Works Solutions Require New Analysis, 27 Ent. & Sports Law. 1 (Spring 2009).

[89] European Comm’n, High Level Expert Group on Digital Libraries, Final Report: Digital Libraries: Recommendations and Challenges for the Future 4 (Dec. 2009) (i2010 European Digital Libraries Initiative).

[90] See Council Directive 2012/28/EU, of the European Parliament and of the Council of 25 October 2012 on Certain Permitted Uses of Orphan Works, 2012 O.J. (L 299/5), 3 [hereinafter Orphan Works Directive].

[91] British Screen Advisory Council, Copyright and Orphan Works 3 (Aug. 31, 2006), http://www.bsac.uk.com/wp-content/uploads/2016/02/copyright__orphan_works_paper_prepared_for_gowers_2006.pdf, archived at https://perma.cc/V9TA-G6ML (paper prepared for the Gowers Review).

[92] See id. at 16.

[93] Id. at 25.

[94] See id. at 30.

[95] See Copyright Act, R.S.C. 1985, c C-42, art. 77 (Can). Under the Canadian system, users can apply to an administrative body to obtain a license to use orphan works. In order to obtain the license, the applicant must prove that they have conducted a serious search for the rightsholder. If the Canadian Copyright Board is satisfied that, despite the search, the rightsholders cannot be identified, it issues the applicant a non-exclusive license to use the work. The license will shield the license holder from any liability for infringement. However, the license is limited to Canada. See id.

[96] See Steven A. Hetcher, Using Social Norms to Regulate Fan Fiction and Remix Culture, 157 U. Pa. L. Rev. 1869, 1880 (2009); see also Edward Lee, Warming Up To User-Generated Content, 2008 U. Ill. L. Rev. 1459, 1461 (2008) (noting that “informal copyright practices—i.e., practices that are not authorized by formal copyright licenses but whose legality falls within a gray area of copyright law—effectively serve as important gap fillers in our copyright system”).

[97] See e.g., Daniel Gervais, The Tangled Web of UGC: Making Copyright Sense of User-Generated Content, 11 Vand. J. Ent. & Tech. L. 841, 869–70 (2009); see also Debora Halbert, Mass Culture and the Culture of the Masses: A Manifesto for User-Generated Rights, 11 Vand. J. Ent. & Tech. L. 921, 958 (2009); see also Mary W. S. Wong, “Transformative” User-Generated Content in Copyright Law: Infringing Derivative Works or Fair Use?, 11 Vand. J. Ent. & Tech. L. 1075, 1110 (2009).

[98] See, e.g., Peter K. Yu, Can the Canadian UGC Exception Be Transplanted Abroad?, 26 Intell. Prop. J. 176, 176–79 (2014) (discussing also a Hong Kong proposal for a UGC exception); see also Warren B. Chik, Paying it Forward: The Case for a Specific Statutory Limitation on Exclusive Rights for User-Generated Content Under Copyright Law, 11 J. Marshall Rev. Intell. Prop. L. 240, 270 (2011).

[99] See An Act to Amend the Copyright Act, 2010, Bill C-32, art. 22 (Can.), http://www.parl.gc.ca/HousePublications/Publication.aspx?Docid=4580265&file=4, archived at https://perma.cc/LJ8N-9WPW (introducing an exception for non-commercial UGC).

[100] See Eur. Commission, Rep. on the Responses to the Public Consultation on the Review of the EU Copyright Rules 68 (July 2014), http://ec.europa.eu/internal_market/consultations/2013/copyright-rules/docs/contributions/consultation-report_en.pdf, archived at https://perma.cc/D3FG-YMBD (noting that respondents often favor a legislative intervention, which could be done “by making relevant existing exceptions (parody, quotation and incidental use and private copying are mentioned) mandatory across all Member States or by introducing a new exception to cover transformative uses”); see also Eur. Commission, Commission Comm. on Content in the Digital Single Mkt. 3-4 (2011), http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52012DC0789, archived at https://perma.cc/KW6C-6CKJ (proposing licensing arrangements).

[101] See U.S. Copyright Office, Rulemaking on Exemptions from Prohibition on Circumvention of Technological Measures that Control Access to Copyrighted Works (Jul. 26, 2010), http://www.copyright.gov/1201/2010, archived at https://perma.cc/83D6-7QTM.

[102] See Mariam Awan, The User-Generated Content Exception: Moving Away From a Non-Commercial Requirement (Nov. 11, 2015), at 6, 8–9, http://www.iposgoode.ca/wp-content/uploads/2015/11/Mariam-Awan-The-user-generated-content-exception.pdf, archived at https://perma.cc/FW84-UANW.

[103] Lenz v. Universal Music Corp., 801 F.3d 1126, 1129 (9th Cir. 2015).

[104] Id. at 1134-35 (noting also that there’s no liability under § 512(f), “[i]f, however, a copyright holder forms a subjective good faith belief the allegedly infringing material does not constitute fair use”).

[105] See, e.g., Zijian Zhang, Transplantation of an Extended Collective Licensing System – Lessons from Denmark, 47 Int’l Rev. Intell. Prop. & Competition L. 640, 641–42 (2016).

[106] See European Comm’n, High Level Expert Group—Copyright Subgroup, Report on Digital Preservation, Orphan Works and Out-of-Print Works: Selected Implementation Issues 5 (Apr. 18, 2008) (i2010 European Digital Libraries Initiative), http://ec.europa.eu/information_society/newsroom/cf/itemlongdetail.cfm?item_id=%203366, archived at https://perma.cc/M3EA-VCGG (identifying ECL as a possible solution to the orphan works problem); see also Jia Wang, Should China Adopt an Extended Licensing System to Facilitate Collective Copyright Administration: Preliminary Thoughts, 32 Eur. Intell. Prop. Rev. 283 (2010); see also Marco Ciurcina, et al., Creatività Remunerata, Conoscenza Liberata: File Sharing e Licenze Collettive Estese [Remunerating Creativity, Freeing Knowledge: File-Sharing and Extended Collective Licences], Nexa Ctr. for Internet & Soc’y, at 8 (It.) (Mar. 15, 2009), http://nexa.polito.it/nexafiles/NEXACenter-ExtendedCollectiveLicenses-EnglishVersion-June2009.pdf, archived at https://perma.cc/KB75-N8VY (highlighting the positive externalities of the adoption of an extended collective licensing scheme as the most appropriate tool to be used by a European Member State to legitimize the file-sharing of copyrighted content); see also Johan Axhamn & Lucie Guibault, Cross-border Extended Collective Licensing: A Solution to Online Dissemination of Europe’s Cultural Heritage?, Instituut voor Informatierecht, at 4 (Neth.) (Aug. 2011), http://www.ivir.nl/publicaties/download/292, archived at https://perma.cc/D5VQ-K2JF.

[107] See Commission Proposal for a Directive of the European Parliament and of the Council on Copyright in the Digital Single Market, at 26, COM (2016) 593 final (Sept. 14, 2016) [hereinafter DSM Directive Proposal].

[108] See id.

[109] See id. at 5, 30.

[110] See Silke v. Lewinski, Mandatory Collective Administration of Exclusive Rights – A Case Study on its Compatibility with International and EC Copyright Law, e-Copyright Bulletin (UNESCO), Jan.-Mar. 2004 at 2 (discussing a proposed amendment in the Hungarian Copyright Act); see also Carine Bernault & Audrey Lebois, Peer-to-Peer File Sharing and Literary and Artistic Property: A Feasibility Study Regarding a System of Compensation for the Exchange of Works via the Internet (June 2005) (discussing the same proposal endorsed by the French Alliance Public-Artistes, campaigning for the implementation of a Licence Globale).

[111] See Volker Grassmuck, A New Study Shows Copyright Exception for Legalising File-Sharing is Feasible as a Cease-Fire in the “War on Copying” Emerges, Intellectual Property Watch (Nov. 5, 2009), http://www.ip-watch.org/2009/05/11/the-world-is-going-flat-rate/, archived at https://perma.cc/5XHC-K4NQ.

[112] See Authors Guild v. Google, Inc., 804 F.3d 202, 229 (2d Cir. 2015).

[113] See LOI 2012-287 du 1er mars 2012 relative à l’exploitation numérique des livres indisponibles du XXe siècle [Law 2012-287 of March 1, 2012 on the Digital Exploitation of the Unavailable Books of the 20th Century], Journal Officiel de la République Française [J.O.] [Official Gazette of France], Mar. 2, 2012, p. 3986.

[114] See id.

[115] See Case C-301/15, Soulier v. Ministre de la Culture et de la Comm., Premier Ministre, 2016 Curia.Europa.Eu ECLI:EU:C:2016:878 (Nov. 16, 2016) [Fr.], http://curia.europa.eu/juris/document/document.jsf?text=&docid=185423&pageIndex=0&doclang=EN, archived at https://perma.cc/NWH9-NXFC (involving a request for a preliminary ruling by the Council of State, in an action brought by Marc Soulier and Sara Doke against the Minister of Culture and Communication and the Prime Minister, on the interpretation of Articles 2 and 5 of a European Council Directive).

[116] See id.

[117] Grassmuck, supra note 111.

[118] In the analog environment, many national legislations implemented quasi flat rate models and different arrangements of private copying levies that may be envisioned as a form of cultural tax. Private copying levies are special taxes, which are charged on purchases of recordable media and copying devices and then redistributed to the right holders by means of collecting societies. See, e.g., Martin Kretschmer, United Kingdom Intellectual Prop. Office, Private Copying and Fair Compensation: An Empirical Study of Copyright Levies in Europe 64 (2011), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2063809, archived at https://perma.cc/W3QW-49FB (follow “Download this paper” hyperlink).

[119] See generally Bernt Hugenholtz et al., The Future of Levies in a Digital Environment, Institute for Information Law, at ii., 74 (2003), https://www.ivir.nl/publicaties/download/DRM&levies-report.pdf, archived at https://perma.cc/APU4-SHL5.

[120] See generally Grassmuck, supra note 111 (exploring flat rate proposals and emerging models).

[121] See Alexander Roßnagel et al., Die Zulässigkeit einer Kulturflatrate nach Nationalem und Europäischem Recht [The Admissibility of a Cultural Flat Rate under National and European Law], Institut für Europäisches Medienrecht [Institute of European Media Law], at 63 (2009), https://www.gruene-bundestag.de/fileadmin/media/gruenebundestag_de/themen_az/netzpolitik/16_fragen_und_16_antworten/kurzgutachten_zur_kulturflatrate.pdf, archived at https://perma.cc/6E8A-LED2.

[122] See id.; see COMMUNIA Network on the Digital Public Domain, Recommendation 14, in Final Report 171 (Mar. 31, 2011), http://nexa.polito.it/nexacenterfiles/D1.11-COMMUNIA%20Final%20Report-nov2011.pdf, archived at https://perma.cc/3XG7-NLSA.

[123] See e.g., Alain Modot et al., The “Content Flat-Rate”: A Solution to Illegal File-Sharing?, European Parliament, at 26 (2011), http://www.europarl.europa.eu/RegData/etudes/etudes/join/2011/460058/IPOL-CULT_ET(2011)460058_EN.pdf, archived at https://perma.cc/2LWA-QTJS.

[124] See Neil W. Netanel, Impose A Noncommercial Use Levy To Allow Free Peer-To-Peer File Sharing, 17 Harv. J. L. & Tech. 1, 32, 80 (2003). 

[125] See id. at 4.

[126] See id.

[127] See id. 

[128] See Netanel, supra note 124 at 4.

[129] See generally William W. Fisher, Promises To Keep: Technology, Law and the Future of Entertainment (2004).

[130] See id. at 217.

[131] See id. at 223–24.

[132] See id.

[133] See Philippe Aigrain with Suzanne Aigrain, Sharing: Culture and the Economy in the Internet Age 76–77 (2012).

[134] See id. at 65.

[135] See id.

[136] See id. at 152–53.

[137] See id.

[138] See Re:publica, Peter Sunde – Flattr Social Micro Donations, YouTube (Apr. 22, 2010), https://www.youtube.com/watch?v=IyGCsCpofVk, archived at https://perma.cc/TN7J-7VCK (describing the Flattr platform); see also Flattr, https://flattr.com/, archived at https://perma.cc/Y3C7-X3KP (last visited Feb. 9, 2017).

[139] COMMUNIA, Recommendation 14, supra note 122, at 171.

[140] Id.

[141] Nelson Mandela, Remarks Made at the TELECOM 95 Conference, 3 Oct. 1995, 9 Trotter Rev. 4, 4 (1995).

[142] World Intellectual Property Organization (WIPO), Provisional Committee on Proposals Related to a WIPO Development Agenda (PCDA), Revised Draft Report, at 6 (Aug. 20, 2007), http://www.wipo.int/edocs/mdocs/mdocs/en/pcda_4/pcda_4_3.pdf, archived at https://perma.cc/Y9AK-YNH5.

[143] Benedict XVI, Caritas In Veritate [Encyclical Letter on Integral Human Development in Charity and Truth], sec. 22 (June 29, 2009), available at http://w2.vatican.va/content/benedict-xvi/en/encyclicals/documents/hf_ben-xvi_enc_20090629_caritas-in-veritate.html, archived at https://perma.cc/K7YL-9ZB8.

[144] See Graham Dutfield and Uma Suthersanen, Global Intellectual Property Law 277 (2008).

[145] See Amy Kapczynski, The Access to Knowledge Mobilization and The New Politics of Intellectual Property, 117 Yale L. J. 804, 807–08 (2008); see generally Access to Knowledge in the Age of Intellectual Property (Gaëlle Krikorian and Amy Kapczynski eds., Zone Books 2010); see also Access to Knowledge in Africa: The Role of Copyright (Chris Armstrong et al. eds., UCT Press 2010) (showing an example of the body of work created by pro-A2K groups).

[146] Conference, 2nd Annual Access to Knowledge Conference (A2K2), Yale Information Society Project (2007), http://mailman.yale.edu/pipermail/development-studies/2007-April/000074.html, archived at https://perma.cc/5A2K-8MPE.

[147] Id.

[148] Consumer Project on Technology, Access to Knowledge, http://www.cptech.org/a2k, archived at https://perma.cc/H2AR-GG39.

[149] See G.A. Res. 217 (III) A, Universal Declaration of Human Rights (Dec. 10, 1948), http://www.un.org/en/universal-declaration-human-rights/, archived at https://perma.cc/RH3X-86MJ (follow “Download PDF”).

[150] See WIPO, Development Agenda for WIPO, http://www.wipo.int/ip-development/en/agenda, archived at https://perma.cc/NW6Y-F465.

[151] See CPTech, Proposed Treaty on Access to Knowledge (May 9, 2005) (Draft), www.cptech.org/a2k/a2k_treaty_may9.pdf, archived at https://perma.cc/33E5-77GE.

[152] Laurence R. Helfer, Toward a Human Rights Framework for Intellectual Property, 40 U.C. Davis L. Rev. 971, 1013 (2007) (citing William New, Experts Debate Access to Knowledge, IP Watch (2005) http://www.ip-watch.org/2005/02/15/experts-debate-access-to-knowledge/?res), archived at https://perma.cc/7QJA-DJBQ; see also Proposed A2K Treaty, supra note 151 (mentioning other actions to achieve A2K goals, such as the use of the Internet as a tool for broader public participation; preservation of public domain; control of anticompetitive practices; restriction of the use of TPMs limiting A2K; use of educational material made available at an unreasonable price; and a new role of fair use, especially for purposes including but not limited to parody, reverse engineering and use of works by disabled person).

[153] See, e.g., Margot E. Kaminski & Shlomit Yanisky-Ravid, Working Paper: Addressing the Proposed WIPO International Instrument on Limitations and Exceptions for Persons with Print Disabilities: Recommendation or Mandatory Treaty?, Yale Information Society 6 (Nov. 14, 2011), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1959694, archived at https://perma.cc/4TXL-XBLZ (follow “Download This Paper” hyperlink).

[154] See Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled, July 27, 2013, WIPO, (entered into force Sept. 30, 2016).

[155] See Joost Smiers & Marieke Van Schijndel, Imagine There is no Copyright and No Cultural Conglomerates too, 4 Institute of Network Cultures 5, 26; see also Johanna Gibson, Community Resources: Intellectual Property, International Trade and Protection of Traditional Knowledge 127–28 (2005).

[156] See Lawrence Lessig, The Architecture of Access to Scientific Knowledge: Just How Badly We Have Messed This Up, Address at CERN Colloquium and Library Science Talk (Apr. 18, 2011), http://cdsweb.cern.ch/record/1345337, archived at https://perma.cc/L5TM-PVLB; see also Lawrence Lessig, Recognizing the Fight We’re In, Address at the Open Rights Group Conference (Mar. 24, 2012), http://vimeo.com/39188615, archived at https://perma.cc/9NSW-YD28.

[157] Lessig, CERN Colloquium Address, supra note 156.

[158] John Willinsky, The Access Principle: The Case for Open Access to Research and Scholarship 33, (2006).

[159] Id. at 30.

[160] See Giancarlo F. Frosio, Open Access Publishing: A Literature Review 74 (study prepared for the RCUK Centre for Copyright and New Business Models in the Creative Economy) (2014), http://www.create.ac.uk/publications/open-access-publishing-a-literature-review, archived at https://perma.cc/FLJ4-ELXA (providing a book length overview of the OAP movement and several open access initiatives and projects, economics of academic publishing and copyright implications, OAP business models, and OAP policy initiatives).

[161] 16538 Researchers Taking a Stand, The Cost of Knowledge, http://thecostofknowledge.com, archived at https://perma.cc/YH5Z-JNDG; see also The Price of Information: Academics are Starting to Boycott a Big Publisher of Journals, The Economist (Feb. 4, 2012), http://www.economist.com/node/21545974, archived at https://perma.cc/L3BX-MU35; see also Eyal Amiran, The Open Access Debate, 18 Symploke 251, 251 (2011) (reporting several other examples of these reactions and boycotts).

[162] See Reto M. Hilty, Copyright Law and the Information Society – Neglected Adjustments and Their Consequences, 38(2) ICC 135 (2007).

[163] George Monbiot, Academic Publishers Make Murdoch Look like a Socialist, The Guardian, (Aug. 29, 2011 4:08 PM), http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist, archived at https://perma.cc/4NZ7-3X4S; see also Richard Smith, The Highly Profitable but Unethical Business of Publishing Medical Research, 99 J. R. Soc. Med. 452–53 (2006) (discussing in similarly strong terms the unethical nature of the business of publishing medical research).

[164] See Richard Smith, supra note 163 at 452.

[165] See id. at 454.

[166] See, e.g., Steven Shavell, Should Copyright of Academic Works Be Abolished?, 2 J. Legal Analysis 301, 301–05 (2010).

[167] Robert K. Merton, The Normative Structure of Science, in The Sociology of Science: Theoretical and Empirical Investigations 267, 273 (Norman W. Storer ed., U. Chicago Press 1973) (1942) (emphasis added), http://www.collier.sts.vt.edu/5424/pdfs/merton_1973.pdf, archived at https://perma.cc/UZ2S-9D7G; see also James Boyle, Mertonianism Unbound? Imagining Free, Decentralized Access to Most Cultural and Scientific Material, in Understanding Knowledge as a Commons: From Theory to Practice 123 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2007), http://www.ess.inpe.br/courses/lib/exe/fetch.php?media=wiki:user:andre.zopelari:understanding-knowledge-as-a-commons-theory-to-practice-2007.pdf, archived at https://perma.cc/5LFJ-FBAP.

[168] See Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (October 22, 2003), Berlin Conference, Berlin, October 20-22, 2003, https://openaccess.mpg.de/Berlin-Declaration, archived at https://perma.cc/3K38-MDXW.

[169] See id.

[170] See id.

[171] Budapest Open Access Initiative, Budapest Open Access Initiative, http://www.soros.org/openaccess/index.shtml, archived at https://perma.cc/LZZ3-6CVD.

[172] Peter Suber, Creating an Intellectual Commons Through Open Access, in Understanding Knowledge as a Commons: From Theory to Practice 171 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2006).

[173] See Directory of Open Access Journals (DOAJ), DOAJ (last visited Feb. 9, 2017), http://www.doaj.org, archived at https://perma.cc/26KJ-NKFY.

[174] See Open Access, The Scholarly Publishing & Academic Resources Coalition [SPARC], http://www.arl.org/sparc/advocacy/campus, archived at https://perma.cc/6RPN-BQJ2; see also SHERPA/JULIET – Research funders’ open access policies, SHERPA (last visited Feb. 9, 2017), http://www.sherpa.ac.uk/juliet/index.php, archived at https://perma.cc/T7HW-XXJD; see also Manual of Policies and Procedures – F/1.3 QUT ePrints repository for research output, Queensland Univ. of Tech. [QUT] (Apr. 6, 2016), http://www.mopp.qut.edu.au/F/F_01_03.jsp, archived at https://perma.cc/97KW-FJ62; see also Eric Priest, Copyright and The Harvard Open Access Mandate, 10 Nw. J. Tech. & Intell. Prop. 377, 394 (2012).

[175] See Frosio, Open Access Publishing, supra note 160, at 9.

[176] See John Houghton, Open Access – What are the Economic Benefits?, Victoria University, 13 (June 23, 2009) (report prepared for Knowledge Exchange) (showing that adopting an open access model to scholarly publications could lead to annual savings of around €70 million in Denmark, €133 million in the Netherlands and €480 million in the United Kingdom); see also John Houghton et al., Economic and Social Returns on Investment in Open Archiving Publicly Funded Research Outputs, Victoria University, 12 (July 2010) (report prepared for The Scholarly Publishing & Academic Resources Coalition [SPARC]) (concluding that free access to U.S. taxpayer-funded research papers could yield $1 billion in benefits).

[177] See What is Horizon 2020?, European Commission, http://ec.europa.eu/programmes/horizon2020/en/what-horizon-2020, archived at https://perma.cc/GHF3-YSEC.

[178] See Department for Business Innovation and Skills, Innovation and Research Strategy for Growth 76–78 (Dec. 8, 2011), http://www.bis.gov.uk/assets/biscore/innovation/docs/i/11-1387-innovation-and-research-strategy-for-growth.pdf, archived at https://perma.cc/QD5R-RGN8; see also Finch Report: Report of the Working Group on Expanding Access to Published Research Findings, Accessibility, Sustainability, Excellence: How to Expand Access to Research Publications, Research Information Network, https://www.acu.ac.uk/research-information-network/finch-report, archived at https://perma.cc/Q287-FXA5.

[179] See U.S. Department of Education, Institute of Education Sciences (IES), Request for Application, IES 11 (2009), http://ies.ed.gov/funding/pdf/2010_84305G.pdf, archived at https://perma.cc/HYW2-8B74; see also New Open Access Policy for NCAR Research, AtmosNews (October 20, 2009), https://www2.ucar.edu/atmosnews/news/1059/new-open-access-policy-ncar-research, archived at https://perma.cc/JEP9-FGST; see also Howard Hughes Medical Institute, Research Policies: Public Access to Publications 1 (June 11, 2007), http://www.hhmi.org/sites/default/files/About/Policies/sc320.pdf, archived at https://perma.cc/7CJP-3NYT.

[180] See Consolidated Appropriations Act of 2008, H.R. 2764, 110th Cong. Div. G, II § 218; see also Eve Heafey, Public Access to Science: The New Policy of The National Institutes of Health in Light of Copyright Protections in National and International Law, 15 UCLA J. L. & Tech. 1, 3 (2011), http://www.lawtechjournal.com/articles/2010/02_100216_heafey.pdf, archived at https://perma.cc/M93U-HQA6.

[181] See National Institute of Health, Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research, (Jan. 11, 2008), http://grants.nih.gov/grants/guide/notice-files/NOT-OD-08-033.html, archived at https://perma.cc/UGB3-QR38; see also Peter Suber, An Open Access Mandate for the National Institutes of Health, 2(2) Open Medicine e39–e41 (2008), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3090178/, archived at https://perma.cc/H8M5-NFN6.

[182] See Richard Poynder, Open Access Mandates: Ensuring Compliance, Open and Shut? (May 18, 2012), http://poynder.blogspot.fi/2012/05/open-access-mandates-ensuring.html, archived at https://perma.cc/LWT5-F6SB.

[183] See, e.g., Adrian Pohl, Launch of the Principles on Open Bibliographic Data, Open Knowledge International Blog (Jan. 18, 2011), http://blog.okfn.org/2011/01/18/launch-of-the-principles-on-open-bibliographic-data/, archived at https://perma.cc/DCY7-VNPL.

[184] Richard R. Nelson, The Market Economy, and the Scientific Commons, 33 Research Policy 455, 467 (2004), http://dimetic.dime-eu.org/dimetic_files/NelsonRP2004.pdf, archived at https://perma.cc/SP3Z-Y7NT.

[185] Id.

[186] Willinsky, supra note 158, at xii; see also Peter Suber, Open Access (MIT Press 2012) (discussing the emergence of this principle).

[187] See Jerome H. Reichman, Tom Dedeurwaerdere, & Paul F. Uhlir, Governing Digitally Integrated Genetic Resources, Data and Literature: Global Intellectual Property Strategies for a Redesigned Microbial Research Commons 441 (Cambridge U. Press, 2016).

[188] Paul F. Uhlir, Revolution and Evolution in Scientific Communication: Moving from Restricted Dissemination of Publicly-Funded Knowledge to Open Knowledge Environments, Paper Presented at the 2nd COMMUNIA Conference (June 28, 2009) (on file with COMMUNIA), http://www.communia-project.eu/communiafiles/Conf%202009_P_Uhlir_BS.pdf, archived at https://perma.cc/9AQS-B52J.

[189] Guy Pessach, The Role of Libraries in A2K: Taking Stock and Looking Ahead, 2007 Mich. St. L. Rev. 257, 267 (2007).

[190] See Proposed WIPO A2K Treaty, supra note 151, at 5; see also Orphan Works Directive, supra note 90 (enabling the use of orphan works after diligent search for public libraries digitization projects); see also Case C-117/13, Technische Universität Darmstadt v Eugen Ulmer KG, 2014 E.C.R. 23 (September 11, 2013) (stating that European libraries may digitize books in their collection without permission from the rightholders with caveats); see also Act of September 11, 2015, on Amendments to the Copyright and Related Rights Act and Gambling Act (Poland) (bringing library services in Poland into the twenty-first century by enabling digitization for socially beneficial purposes, such as education and preservation of cultural heritage).

[191] Portions of the analysis in this Section can also be found in the Communia Final Report, supra note 69.

[192] Jessica Litman, The Public Domain, 39 Emory L. J. 965, 977 (1990).

[193] See Daniel Drache, Introduction: The Fundamentals of Our Time – Values and Goals that are Inescapably Public, in The Market or the Public Domain?: Global Governance and the Asymmetry of Power 1 (Daniel Drache ed., Routledge 2000).

[194] See Jane C. Ginsburg, “Une Chose Publique”? The Author’s Domain and the Public Domain in Early British, French and US Copyright Law, 65 Cambridge L. J. 636, 642 (2006).

[195] Id. at 638.

[196] Mark Rose, Nine-Tenths of the Law: The English Copyright Debates and the Rhetoric of the Public Domain, 66 Law & Contemp. Probs. 75, 77 (2003).

[197] See Ginsburg, supra note 194, at 637–38.

[198] See id. at 637.

[199] M. William Krasilovsky, Observations on Public Domain, 14 Bull. Copyright Soc’y 205 (1967).

[200] Pamela Samuelson, Mapping the Digital Public Domain: Threats and Opportunities, 66 Law & Contemp. Probs. 147, 147 (2003).

[201] David Lange, Recognizing The Public Domain, 44 Law & Contemp. Probs. 147, 147 (1981).

[202] See Lange, Reimagining The Public Domain, supra note 42 at 466.

[203] Julie E. Cohen, Copyright, Commodification, and Culture: Locating the Public Domain, in The Future of the Public Domain: Identifying the Commons in Information Law 133–34 (Lucie Guibault & P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[204] Michael D. Birnhack, More or Better? Shaping the Public Domain, in The Future of the Public Domain: Identifying the Commons in Information Law 59–60 (Lucie Guibault & P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[205] See e.g., id.

[206] James Boyle, The Second Enclosure Movement and the Construction of the Public Domain, 66 Law & Contemp. Probs. 62 (2003).

[207] L. Ray Patterson & Stanley W. Lindberg, The Nature of Copyright: A Law of Users’ Rights 50 (University of Georgia Press 1991).

[208] Id. at 50–51.

[209] See Lange, Reimagining The Public Domain, supra note 42, at 465, n.11 (for the “feeding” metaphor).

[210] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 60.

[211] James Boyle, The Public Domain: Enclosing the Commons of the Mind 41 (Yale Univ. Press 2009).

[212] See Ronan Deazley, Rethinking Copyright: History, Theory, Language 105 (Edward Elgar Pub. 2008).

[213] Lange, Recognizing the Public Domain, supra note 201, at 178.

[214] The main difference lies in the fact that a commons may be restrictive. The public domain is free of property rights and control. A commons, on the contrary, can be highly controlled, though the whole community has free access to the common resources. Free Software and Open Source Software are examples of intellectual commons. See Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom 63–67 (Yale Univ. Press 2007). The source code is available to anyone to copy, use and improve under the set of conditions imposed by the General Public License. However, this kind of control is different than under traditional property regimes because no permission or authorization is required to enjoy the resource. These resources “are protected by a liability rule rather than a property rule.” Lawrence Lessig, The Architecture of Innovation, 51 Duke L. J. 1783, 1788 (2002). A commons is defined by the notions of governance and sanctions, which may imply rewards, punishment, and boundaries. See Wendy J. Gordon, Response, Discipline and Nourish: On Constructing Commons, 95 Cornell L. Rev. 733, 736–49 (2010).

[215] See Mark Rose, Copyright and Its Metaphors, 50 UCLA L. Rev. 1, 8 (2002); see also William St Clair, Metaphors of Intellectual Property, in Privilege and Property: Essays on the History of Copyright 369, 391–92 (Ronan Deazley et al. eds., Open Book Publishers 2010).

[216] See Charlotte Hess & Elinor Ostrom, Ideas, Artifacts, and Facilities: Information as a Common-Pool Resource, 66 Law & Contemp. Probs. 111, 111 (2003); see also Michael J. Madison, Brett M. Frischmann & Katherine J. Strandburg, The University as Constructed Cultural Commons, 30 Wash. U. J. L. & Pol’y 365, 403 (2009).

[217] See, e.g., Madison, Frischmann, & Strandburg, supra note 216, at 373 (acknowledging that Ostrom’s previous work laid the groundwork for their research); see also Elinor Ostrom & Charlotte Hess, A Framework for Analyzing the Knowledge Commons, in Understanding Knowledge as a Commons: From Theory to Practice 41–81 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2007), http://surface.syr.edu/cgi/viewcontent.cgi?article=1020&context=sul, archived at https://perma.cc/48HT-3YUE (using Ostrom’s previous research as a base for new research throughout the chapter).

[218] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 66.

[219] See James Boyle, Foreword: The Opposite of Property, 66 Law & Contemp. Probs. 1, 8 (2003), http://scholarship.law.duke.edu/lcp/vol66/iss1/1/, archived at https://perma.cc/J4SL-YJU2.

[220] See James Boyle, A Politics of Intellectual Property: Environmentalism for the Net?, 47 Duke L. J. 87, 110 (1997).

[221] See James Boyle, Cultural Environmentalism and Beyond, 70 Law & Contemp. Probs. 5, 6 (2007).

[222] See Boyle, The Public Domain: Enclosing the Commons of the Mind, supra note 211, at 180.

[223] See id. at 241–42.

[224] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 52.

[225] Boyle, A Politics of Intellectual Property: Environmentalism for the Net?, supra note 220, at 113.

[226] See COMMUNIA, Survey of Existing Public Domain Competence Centers, Deliverable No. D6.01 (Draft, September 30, 2009) (survey prepared by Federico Morando and Juan Carlos De Martin for the European Commission) (on file with the author), https://www.yumpu.com/en/document/view/17424248/survey-of-existing-public-domain-competence-centers-communia/6, archived at https://perma.cc/B745-GH72 (reviewing the current landscape of European competence and excellence centers that focus on the study of the public domain).

[227] See WIPO, Development Agenda for WIPO, supra note 152; see also Severine Dusollier, Scoping Study on Copyright and the Public Domain, WIPO (prepared for the World Intellectual Property Organization) (May 7, 2010).

[228] Chair of the Provisional Committee on Proposals Related to a WIPO Development Agenda (PCDA), Initial Working Document for the Committee on Development and Intellectual Property (CDIP), WIPO (Mar. 3, 2008), http://www.wipo.int/meetings/en/doc_details.jsp?doc_id=92813, archived at https://perma.cc/98AG-HNHL.

[229] Compare Communia Final Report, supra note 69 (launching programs together with Communia, as part of the i2010 policy strategy); with LAPSI: The European Thematic Network on Legal Aspects of Public Sector Information, European Commission (Dec. 17, 2012), https://joinup.ec.europa.eu/community/epractice/case/lapsi-european-thematic-network-legal-aspects-public-sector-information, archived at https://perma.cc/6VEH-6MEU; and Digital Repository Infrastructure Vision for European Research, CORDIS (last visited Jan. 30, 2017), http://cordis.europa.eu/project/rcn/86426_en.html, archived at https://perma.cc/P37J-PNQU; and ARROW, supra note 78; and DARIAH, Digital Research Infrastructure for the Arts and Humanities, http://www.dariah.eu, archived at https://perma.cc/Q2NN-N5EZ (aiming to enhance and support digitally-enabled research across the humanities and the arts).

[230] See Communia, The European Thematic Network on the Digital Public Domain, COMMUNIA, http://communia-project.eu, archived at https://perma.cc/LR3B-JNHJ; see also Giancarlo F. Frosio, Communia and the European Public Domain Project: A Politics of the Public Domain, in The Digital Public Domain: Foundations for an Open Culture (Juan Carlos De Martin & Melanie Dulong de Rosnay eds., OpenBooks Publishers 2012).

[231] See The Public Domain Manifesto, The Public Domain Manifesto (2009), http://publicdomainmanifesto.org/manifesto.html, archived at https://perma.cc/79YY-PHTD.

[232] See generally The Europeana Public Domain Charter, http://www.europeana.eu/portal/en/rights/public-domain-charter.html, archived at https://perma.cc/KX8M-VVV6 (advocating for the public’s interest in maintaining access to Europe’s cultural and scientific heritage).

[233] See Charter for Innovation Creativity and Access to Knowledge, Free Culture Forum, http://fcforum.net, archived at https://perma.cc/N9N4-D93F (last visited Jan. 30, 2017).

[234] John Dupuis, Panton Principles: Principles for Open Data in Science, Science Blogs (Feb. 22, 2010), http://scienceblogs.com/confessions/2010/02/22/panton-principles-principles-f/, archived at https://perma.cc/27WH-ALQE.

[235] David Bollier, The Commons as a New Sector of Value-Creation: It’s Time to Recognize and Protect the Distinctive Wealth Generated by Online Commons, On the Commons (Apr. 22, 2008), http://www.onthecommons.org/commons-new-sector-value-creation, archived at https://perma.cc/9QBP-JZ5Z.

[236] Benkler, supra note 214 at I.

[237] See David Bollier, Viral Spiral: How the Commoners Built a Digital Republic of Their Own 3–14, (New Press 2009).

[238] See Charter of Fundamental Rights of the European Union, December 18, 2000, 2000 O.J. (C364) 1, 8, 37.

[239] Individual components of this roadmap for reform have been described in previous works of mine—to which I refer in this article. A more detailed review of this roadmap for reform—with each component of the proposal acting as a pillar for a metaphorical temple dedicated to the enhancement of creativity—will be the subject of Chapter 12 from my forthcoming book. Giancarlo F. Frosio, Rediscovering Cumulative Creativity: From the Oral-Formulaic Tradition to Digital Remix: Can I Get a Witness? (Edward Elgar, forthcoming 2017) (expanding on Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, supra note 19).

[240] Lange, Reimagining the Public Domain, supra note 42, at 463.

[241] See Communia Final Report, supra note 69 (further discussing the politics of the public domain).

[242] This proposal—and the historical interdisciplinary research that serves as a background—has been discussed at length in previous works of mine to which I refer. See Giancarlo F. Frosio, A History of Aesthetics from Homer to Digital Mash-ups: Cumulative Creativity and the Demise of Copyright Exclusivity, 9(2) Law and Humanities 262 (2015), http://www.tandfonline.com/doi/full/10.1080/17521483.2015.1093300, archived at https://perma.cc/YEC3-34FK; see also Murray, supra note 9.

[243] For a full discussion of the idea of user patronage—and a review of the economics of creativity from a historical perspective—see Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, supra note 19, at 376–90.

“Danger, Will Robinson”? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment

pdf_icon Greenbaum Publication Version PDF

Cite as: Daniel Ben-Ari, Yael Frish, Adam Lazovski, Uriel Eldan, & Dov Greenbaum, “Danger, Will Robinson”? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment, 23 Rich. J.L. & Tech. 3 (2017), http://jolt.richmond.edu/index.php/volume23_issue2_greenbaum/.

Daniel Ben-Ari,*Yael Frish,** Adam Lazovski,*** Uriel Eldan,**** & Dov Greenbaum*****

Table of Contents

I.     Introduction: What Is Artificial Intelligence?

II.     Disciplines & Recent Developments

III.     Ethics & Philosophy

IV.     The Emergence of Artificial Intelligence, Its Pioneers, and The Beginning of Its Implications

A.     The Turing Test

B.     The Roots of Artificial Intelligence

C.     Physical Symbol Systems Hypothesis

D.     Computational intelligence

E.     Child Machine

V.     Artificial Intelligence and Its Implications in Law

A.     Market Failure

B.     The Vast Market Size

C.     Funding

VI.     The Reality as We See It, The Day After Artificial Intelligence

VII.    Specific Ethical, Legal, and Social Implications

VIII.    Artificial Intelligence in Fair Use–An Early Stage Proof of Concept

IX.      Conclusion and Recommendations for Courses of Action

“Artificial intelligence is our biggest existential threat”

– Elon Musk[2]

I. Introduction: What Is Artificial Intelligence?

[1]       In this position paper, we seek to provide a preliminary outline of the ethical, legal, and social implications facing society in light of the growing engagement of artificial intelligence (“AI”) in our everyday lives as attorneys. In particular, we investigated these implications by developing, in collaboration with the IBM Watson team, a proof of concept. In this proof of concept, we aimed to specifically demonstrate the usefulness of AI in analyzing case law in the field of intellectual property, particularly within copyright fair use. To this end, we have extensively reviewed the relevant literature in an effort to pose pertinent and challenging questions regarding the implications of AI in all areas of law.

[2]       AI is a sub-field of computer science;[3] it can be broadly characterized as intelligence exhibited by machines and software.[4] Intelligence refers to many types of abilities, yet it is often constrained to the definition of human intelligence. It involves mechanisms, some of which are fully discovered and understood by scientists and engineers, and some of which are not.[5]

[3]       AI is playing an increasingly important role in our everyday lives.[6] It is asserted that in the near future AI will replace or enhance various human professions.[7] One of the overarching goals of the AI discipline is to improve machines and systems so that they can reason, learn, self-collect information, create knowledge, communicate autonomously, and manipulate their environment in unexpected fashions.[8] During the past two decades, AI has advanced to deliver major improvements in the quality and efficiency of services and manufacturing processes.

[4]       Some researchers hope AI will closely approximate or even surpass human intelligence, via an emphasis on problem solving and goal achievement.[9] Both are possible, and AI may even reach computing levels more complicated than the human mind could ever reach.[10]

[5]       Many claim we are still far from achieving this objective, and that fundamental new ideas and paradigm shifts are required in order to push this field forward.[11] These aims notwithstanding, AI studies thus far continue to progress in the direction of understanding and “modeling human consciousness and the inner mind.”[12]

II. Disciplines & Recent Developments

[6]       To understand the field of AI we must first understand how researchers and philosophers view this field. They divide AI into two categories: strong and weak.[13] Strong AI further divides into human-formed AI and non-human-formed AI.[14] The former refers to the ability of computers to think, reason, and deduce in a manner similar to humans, and the latter refers to the ability to reason independently, without similarity to the human brain.[15] Weak AI refers to computers mimicking thinking and reasoning abilities, without actually having these abilities.[16] Understanding these distinctions is important when discussing issues of AI, thinking, and consciousness.

[7]       The main progress made so far has been within weak AI. However, some computer scientists are not “holding their breath” to attribute actual thinking and reasoning abilities to a machine with AI.[17] To quote Edsger W. Dijkstra–a member of computer science’s founding generation–“[t]he question of whether Machines Can Think (…) is about as relevant as the question of whether Submarines Can Swim.”[18] By analogy, computer scientists argue that planes are tested on how well they fly, not on whether they fly like birds. Essentially, these scientists believe that we need to step out of the current linguistic frameworks. Can a submarine swim? Can an airplane fly? Can a machine think? Many scientists claim these distinctions are meaningless–when we refer to machines as ‘acting’ intelligently, we are actually saying that they do not possess a mind or a consciousness.[19]

[8]       There are various AI applications, each different from the others. For example: speech recognition, language understanding, problem solving, game playing, computer vision (two-dimensional vs. three-dimensional), expert systems, heuristic classification, and more.[20] These applications fall into two groups. One involves narrow applications (such as speech recognition), and the other is broader (artificial general intelligence (AGI), including autonomous agent possibilities).[21]

[9]       Currently, most AI applications are narrow (i.e., highly specialized entities used to carry out specific tasks).[22] In contrast, the human brain excels in many different environments and combines strategies across applications. Current AI examples include a word processing program that automatically corrects spelling, a computer that learns and plays a video game, a chess-playing computer (e.g. Deep Blue, IBM’s chess-playing computer),[23] or a GO playing system (e.g. AlphaGo, Google’s GO playing system).[24]

[10]     Due to the obvious distinction from human intelligence, society generally sees this type of AI as posing no immediate danger or threat. Yet, it is important to understand that even the current state of AI is represented by a broad spectrum of applications–including “[25] (assessing one simple task); “speech recognition programs,…collaborative filtering software, like that used by Amazon.com…”;[26] “Aaron, a robotic artist that produces paintings that could easily pass for human work;”[27] IBM’s Watson,[28] eBay’s computerized arbitration Modria;[29] and much more. All of these narrow AI applications range in capability from one simple task to intricate intelligent procedures.[30]

[11]     One explanation for the vast immersion of AI within current society may be the incorporation of basic science researchers (such as computer scientists) into high tech companies.[31] There, scientists have quickly learned that for AI to become accepted in human society, the emphasis must be on its benefits as a bridge to what could not have been achieved thus far–assisting and contributing to humans–rather than on how AI could replace them.[32] This is in stark contrast to AI in fiction.[33]

[12]     In fiction and cinema, AI is frequently portrayed as an ominous entity entwined with danger (e.g. HAL in “2001: A Space Odyssey,”[34] Agent Smith in “The Matrix,”[35] and the T1000 in “The Terminator”).[36] In many of these plots, AI is depicted as fully autonomous machines acting out in a way that is harmful to human beings.[37] However, there are also movies, such as Spielberg’s “A.I.,”[38] that portray machines in a softer, more humanlike light. Other films use AI simply for comedic relief, such as Star Wars[39] or its spoof, Spaceballs.[40] While reality is still far from the entities portrayed in science fiction, there are already AI machines that can cause injuries or death (e.g. autonomous cars), act as home and service robots (e.g., iRobot’s Roomba, Anny the CareBot), or serve in the private, finance, and governmental sectors.[41]

[13]     In light of all of the bad press it gets, it is important to understand how AI is being presented to society, what people think about it, and what needs to be considered nowadays in order to promote innovation in this area.

III. Ethics & Philosophy

[14]     The use of AI poses many important ethical questions. The philosopher John Searle, in his famed Chinese Room Argument, noted that the idea of a non-biological machine being intelligent is incoherent: “[t]he point is not that computers cannot think. The point is rather that computation as standardly defined in terms of the manipulation of formal symbols is not by itself constitutive of, nor sufficient for, thinking.”[42] Further, the eminent computer scientist Joseph Weizenbaum warned that “the idea [of an AI] is obscene, anti-human and immoral.”[43]

[15]     Many philosophers, scientists, and others have deliberated on such ethical and existential dilemmas. The artificial intelligence control problem, for example, was discussed in a 2014 book by Swedish philosopher Nick Bostrom, titled “Superintelligence: Paths, Dangers, Strategies.”[44] It hypothesizes that AI could evolve into a form of super-intelligent entities that outsmart human intelligence[45] and are even capable of self-improvement.[46] Bostrom suggests that, in that process, the entities might become uncontrollable and lead to a human existential catastrophe.[47]

[16]     Two foundational concepts in the evolution of AI that tend to come up when people refer to the dangers of AI are technological singularity and swarm intelligence. Technological singularity refers to the point at which technological progress will become incomprehensibly rapid and complicated beyond our human capabilities.[48] The AI, in a feedback loop of ever-accelerating self-improvement, will surpass us in its intelligence and become too smart for us to control.[49] The term was first used in this context by the mathematician John von Neumann, and was published in 1958 when Stanislaw Ulam wrote about a conversation he had with von Neumann.[50]

[17]     When we speak about technological singularity in the AI context, we speak about the point at which the intelligence will surpass all human control or understanding, becoming too immeasurable and profound for humans to grasp – an “intelligence explosion.”[51] It can occur either when AI enters into a “runaway effect” of ever accelerating self-improvement, or when AI is autonomously capable of building other more intelligent and powerful entities.[52]

[18]     The second term, swarm intelligence, refers to the incorporation of self-replicating machines in all aspects of life, science, industry, and even politics.[53] The swarm will become a decentralized, self-organizing system.[54] In the Terminator movies, this is Skynet: a swarm of self-improving AI machines that takes over the world.[55]

[19]     In addition to the ethical dangers of AI machines, there are also complicated existential questions that concern not only AI, but also humanity. Can machines have, or act as though they have, human intelligence? And if so, do they have a mind? If they have consciousness, or self-awareness, do they have rights?

[20]     Consciousness relates to the abilities of understanding and thinking. Nevertheless, consciousness is still a widely unknown concept. Should a machine be aware of its mental state and actions? Can it be aware? Is it even relevant? Can minds be artificially created? (a question John Searle addressed[56]) And what about free will? Even in some fields of philosophy it is debatable whether humans have free will, so how does that bear on artificial entities? And if we consider AI entities as entities with consciousness or minds, does it become immoral to dismantle them? And then how do we program them with an understanding of right and wrong?[57]

[21]     The vast majority of AI researchers do not pay attention to most of these ethical and social questions. Whether the machines actually think is not a concern for them, as long as the machines function properly.[58] Yet, philosophers urge all researchers to consider the ethical and social implications of their modus operandi.[59]

[22]     When examining the connection between society and science, history shows us dreadful events involving ethics and responsibility. The science of AI, however, raises new intricacies–regarding employment, rights, duties, and accountability. For example, are we as a society obligated to establish robot rights? This is not so implausible. For instance, the UK Office of Science and Innovation commissioned a report in 2006 dealing with robo-rights and their possible future implications for law and politics.[60]

[23]     All of the above questions and discussions are yet to be answered, and as long as deeper understanding in the subject is not evident, strong AI will likely remain controversial.[61]

[24]     Evolving new technologies come with both risks and benefits. It is unclear what AI will look like in the years to come. However, today we have the ability to try to lay the groundwork for a future in which man and machine will function together, and quite possibly as one.

IV. The Emergence of Artificial Intelligence, Its Pioneers, and The Beginning of Its Implications

[25]     The AI field began evolving after World War II when a number of people, among them the English mathematician Alan Turing, independently started working on intelligent machines.[62]

            A. The Turing Test

[26]     It is argued that Alan Turing’s 1950 paper “Computing Machinery and Intelligence”[63] was the first significant milestone in the AI field.[64] In this paper, Turing presented what is now known as the Turing Test.[65] The goal of the test is to determine, to a satisfactory level, whether a computer has intelligence.[66] Succinctly, to pass the test an observer has to be unable to determine if he is interacting with a computer or a human.[67] There are three test participants–a ‘judge’ played by a human being, and two entities, a human and a computer.[68] The judge asks both entities questions through a computer terminal, and if he cannot distinguish between the human and the computer, then the computer is said to have passed the test and is considered to have intelligence.[69]
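The structure of the imitation game can be sketched in a few lines of code. The fragment below is only an illustration of the protocol just described, not of any real system; the respondent and judge functions, and the sample questions, are hypothetical placeholders a reader would have to supply.

```python
import random

def run_turing_test(questions, human_reply, machine_reply, judge_guess):
    """Minimal sketch of the imitation game: a judge questions two hidden
    respondents and must then guess which terminal hides the machine.
    All callables passed in are hypothetical stand-ins."""
    # Randomly seat the human and the machine at anonymous terminals A and B.
    seating = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        seating = {"A": machine_reply, "B": human_reply}

    transcript = []
    for q in questions:
        transcript.append({"question": q,
                           "A": seating["A"](q),
                           "B": seating["B"](q)})

    # The judge inspects the transcript and names the terminal believed
    # to be the machine; the machine "passes" if that guess is wrong.
    guess = judge_guess(transcript)
    machine_terminal = "A" if seating["A"] is machine_reply else "B"
    return guess != machine_terminal

# Toy usage with stand-in respondents and a naive judge.
passed = run_turing_test(
    ["What is 2 + 2?", "Describe your childhood."],
    human_reply=lambda q: "Four." if "2 + 2" in q else "I grew up by the sea.",
    machine_reply=lambda q: "4" if "2 + 2" in q else "I do not have a childhood.",
    judge_guess=lambda transcript: "B",  # a naive judge who always guesses B
)
print(passed)
```

On this sketch, repeated trials in which the judge’s accuracy stays near chance would be read as the machine passing the test.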

[27]     The Turing test is both highly acknowledged and highly criticized. We have already witnessed situations in which computers have outsmarted man: IBM’s Deep Blue won a chess game in 1996 against one of the world’s best players, and IBM’s Watson won the U.S. trivia game-show Jeopardy in 2011 against two former winners.[70]

[28]     In his relatively simple test, Turing aimed to elegantly examine a narrow range of AI capabilities including thinking, natural language processing, logic, and learning.[71]

[29]     The Test also has its critics who claim that the comparison to human intelligence is deficient in two respects: first, the comparison includes non-intelligent human behavior, and second, it does not include non-human intelligent behavior.[72] For the second reason, a number of alternative tests have been designed to assess super-intelligent non-human computational capabilities:

  • C-tests, or Comprehension Tests: designed to test comprehension abilities – a main component of intelligence – while formulating information with new given data.[73]
  • Universal Anytime Intelligence Tests: aim to examine intelligence of any present or future biological or artificial system.[74]
  • The Winograd Schema Challenge: conceived by Hector Levesque, a professor of Computer Science at the University of Toronto, is based on a series of multiple choice questions (i.e. linguistic antecedents) which require spatial and interpersonal skills, preliminary knowledge, and other commonsense insights.[75]
  • The Logic Theorist System: demonstrated by Allen Newell and Herb Simon, is engineered to mimic the problem solving skills of humans and the determination of high-order intellectual processes.[76]
  • The Lovelace 2.0 Test: conceived in 2001 by Selmer Bringsjord and colleagues (and perfected in 2014 by Mark Riedl, a Georgia Tech professor),[77] examines intelligence by measuring creativity under the assumption that there are works of art that require intelligence in order to create them.[78]

[30]     Another aspect of the Turing Test that received criticism is human misidentification,[79] meaning it is not uncommon for humans to be misidentified as machines. One explanation for this is judge bias based on the answers he expects to receive.[80]

            B. The Roots of Artificial Intelligence

[31]     The Dartmouth Summer Research Project on Artificial Intelligence, proposed in 1955 and held in 1956, is considered the birthplace of AI as a discipline.[81] Amongst its participants were John McCarthy and Marvin Minsky.[82]

[32]     John McCarthy, who is typically thought to have coined the term artificial intelligence, was an American computer scientist and cognitive scientist, and one of the founders of the AI discipline.[83] In 1979 McCarthy published “Ascribing Mental Qualities to Machines,” where he argued that “[m]achines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance.”[84]

[33]     Marvin Lee Minsky was an American cognitive scientist in the field of AI and one of the main AI theorists.[85] Minsky believed that computers were not fundamentally different from the human mind.[86] Amongst his achievements was the construction of robotic arms and grippers, computer vision systems, and the first electronic learning system.[87] In 1969, Minsky, along with Seymour Papert, published the book “Perceptrons”[88] in which he emphasized critical issues that he felt prevented developmental research of the neural networks.[89] Minsky was also an active contributor to the symbolic approach (described below) and the research of human intelligence.[90] In general, Minsky had a positive outlook regarding the future humanlike intelligence capabilities of AI.[91]

            C. Physical Symbol Systems Hypothesis

[34]     The “Physical Symbol Systems Hypothesis” was developed in 1976 by Newell and Simon, and later became a core part of AI.[92] The hypothesis states that “[i]ntelligence is the work of symbol systems…a physical symbol system has the necessary and sufficient means for general intelligent action.”[93] AI computers, as recognized physical symbol systems, are able to exhibit intelligence, and humans, as intelligent beings, must also be physical symbol systems, and therefore similar to computers.[94] Both are capable of processing structures of symbols.[95]

[35]     One problem related to the Physical Symbol Systems Hypothesis is that some activities human beings find hard or challenging–like mathematics–are easy for computers, while some activities that human beings find easy–like face recognition–are difficult for computers.[96] This problem led researchers to develop a strategy that later became known as the “Artificial Neural Network” (also known as Connectionism), which aims to create systems with brain-like characteristics that are capable of learning.[97] These particular efforts embrace some key elements from machine learning strategy and provide partial answers to “The Common Sense Knowledge Problem” through an effort to create a database containing all of the general common sense knowledge a human possesses, presentable in an AI-retrievable fashion.[98]
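To make the connectionist idea concrete, the following sketch trains a single perceptron, the simplest “brain-like” learning unit, in plain Python with no external libraries. The toy task (learning logical AND) and every name in the fragment are invented for illustration and are unrelated to the systems discussed in the text.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a single perceptron from labeled examples."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            # Nudge the weights toward the correct answer: this update rule,
            # not an explicit program, is where the "learning" happens.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy data: the logical AND of two inputs.
data = [[0, 0], [0, 1], [1, 0], [1, 1]]
targets = [0, 0, 0, 1]
weights, bias = train_perceptron(data, targets)
print(weights, bias)
```

As Minsky and Papert’s critique pointed out, a single unit of this kind can only learn linearly separable rules; modern neural networks stack many such units to overcome that limit.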

            D. Computational intelligence

[36]     Computational intelligence aims to understand the principles that enable intelligent behavior in artificial systems. According to this area of research, AI has the following four common features:

  • Ability and flexibility to change in the environment;
  • Evidential reasoning and perception;
  • Ability to plan and execute goals; and
  • Ability to learn.[99]

[37]     The early AI successes left researchers optimistic; however, in the late 1950s the field began to encounter obstacles and difficulties. One concern that is still highly relevant today is the “Common Sense Knowledge Problem”: a system only “knows” the information that it explicitly receives, and it is often incapable of making trivial connections on its own.[100] To this end, many research strategies are trying to find a way around this problem, including limited domain systems and machine learning.[101]
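A toy sketch illustrates the problem. In the fragment below (the facts and the rule are invented for illustration), a system that stores only what it has been told cannot answer a question a person would find trivial until an inference rule is added by hand, and every other bit of everyday knowledge would need similar treatment.

```python
# Facts the system has been explicitly given.
facts = {("tweety", "is_a", "bird"), ("bird", "can", "fly")}

def knows(subject, relation, obj):
    """Naive lookup: the system 'knows' only what it was told verbatim."""
    return (subject, relation, obj) in facts

print(knows("tweety", "can", "fly"))   # False: the trivial connection is missing.

def can_do(subject, ability):
    """Hand-crafted inference rule that patches this single gap."""
    if knows(subject, "can", ability):
        return True
    return any(knows(subject, "is_a", cat)
               for (cat, rel, ab) in facts if rel == "can" and ab == ability)

print(can_do("tweety", "fly"))         # True, but only after the extra rule.
```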

            E. Child Machine

[38]     The idea of a “Child Machine” was first introduced in the 1950’s.[102] A child machine aims to emulate the learning experience of a human child and implement it on an AI computer.[103] In that way, a computer starts as a “child” and improves by acquiring experiences and knowledge.[104] Yet current programs still have many drawbacks regarding physical experiences and language skills, which hinder the desired successful outcome.[105]

[39]     Even though there has been substantial progress in the science of AI, high hurdles remain. Difficult issues and thought-provoking questions that were raised over two decades ago are still far from receiving answers. Achieving human-level abilities, such as described in the common sense knowledge problem above, is still far from being reached.[106] While some types of human reasoning have been emulated to varying degrees, overall progress remains relatively sluggish.[107]

[40]     In order for AI to further evolve, it is necessary to continue researching different implementation techniques of common reasoning such as: logical analysis, handcrafted large scale databases, web mining, and crowd sourcing.[108]

[41]     The next sections examine and analyze the involvement of AI in the field of law, including its ethical, legal, and social implications in both the short term and the long term. Further, the third chapter discusses the fair use doctrine (a doctrine within copyright law), which is used as a test case to demonstrate AI abilities.[109] The proof of concept was conducted through IBM’s Watson with the guidance of IBM Israel.[110]

V. Artificial Intelligence and Its Implications in Law

“Of course I’ve got lawyers. They are like nuclear weapons; I’ve got them ‘cause everyone else has. But as soon as you use them they screw everything up.”

Danny DeVito.[111]

[42]     Notwithstanding the dire lack of paradigm shifting progress described above, AI technologies are still progressing rapidly, not only theoretically, but also practically. Developers in both large corporations and in start-ups aim to create learning and computerized thinking algorithms that will disrupt our reality.[112] While some of these algorithms encompass the future of mankind’s welfare, others pose dramatic and imminent threats.[113]

[43]     This chapter depicts the rationale that brought forth and promoted the ‘invasion’ of AI into the world of law.[114] After reviewing the causes, we describe the technologies and companies worthy of the title ‘game-changing’ that might bring great value to society, followed by the dramatic shifts–ethical, social, and legal–they may produce.

[44]     In the final part of this chapter we discuss what these dramatic societal shifts can offer, both as opportunities and threats.

[45]    Market failure provides a significant opening for AI to enter the field of law and have a major impact on it. Our analysis focuses mainly on the United States market.

            A. Market Failure

[46]     Legal systems around the world are collapsing under an ever-growing workload.[115] It is no secret that the United States currently leads the world in the number of lawyers per capita and has dramatically overloaded judicial systems.[116] The fact remains that the judicial process is time-consuming, inefficient, and cannot keep up with the speed and scale at which conflicts grow.[117] Add to that the legal tactics lawyers use to stall, buy time, and sometimes ‘dry’ their opponents out of resources, and you have a very dysfunctional system. The system’s own frequent users, lawyers, are active partners in creating this dysfunction.[118]

[47]     Although this realization is not news to most, the fact remains that with current population growth, as well as the continuing expansion of the internet, the worldwide potential for legal conflicts keeps growing while many judicial systems cannot keep pace.

            B. The Vast Market Size

[48]     The United States is among the largest consumers of legal services in the world.[119] The market size is estimated to be 437 billion USD annually.[120] Additionally, recent years have seen an ongoing shift of power. While in the past large law firms controlled most of the market, today nimble boutique firms are gaining an ever-increasing market share.[121] The potential to compete with the largest firms empowers young and small firms to innovate, become more efficient, and even try new services, enabling them to gain a competitive edge.[122]

            C. Funding

[49]     The legal industry is currently witnessing two trends in funding which make the invasion of AI into the world of law a fait accompli. First, the fourth quarter of 2015 saw the highest funding levels for the entire area of AI in five years.[123] In addition, funding for legal tech start-ups has grown from seven million USD in 2009 to a whopping one hundred fifty million in 2013.[124]

[50]     These trends create fertile ground for technological solutions that can address large-scale problems like those facing judicial systems around the world.[125]

[51]     While the use of computation and software is not new to the field of law,[126] we can now identify three main technological fields–Machine Learning, Natural Language Processing, and Big Data–which may enable AI to reign over the world of law.

[52]     Some of these technologies comprise different pieces of the puzzle which AI will soon piece together. When applied in a holistic manner, these technologies may replace most lawyers and judges.[127] These changes will not come in the short term, but rather in years to come.[128] Yet, we believe they will arrive faster than expected. The three main technological fields are:

  • Machine Learning:[129] A computer science subfield in which algorithms are trained to recognize patterns within data.[130] This usually involves massive amounts of data in all areas–from visuals, to categorizing language patterns within human conversations, to written data.[131] (A minimal sketch combining machine learning and NLP on invented legal text appears after this list.)
  • Natural Language Processing (NLP):[132] A sub-category within AI and machine learning.[133] In essence, NLP is heavily reliant on machine learning.[134] This form of research integrates computer science, psychology, and the interaction between the two.[135] Research in this field seeks to ‘teach’ computers how to comprehend human language, seek patterns, and perform deductions based on language patterns and reasoning.[136] The difference between NLP and machine learning is the added value from interactions with human behavior, human language, and even human biases and other psychological traits.[137]
  • Big Data: This field typically refers to data sets too large to process and analyze with traditional data analytics.[138] Big data sets are relatively young, due in part to the accumulation of legal data, which has accelerated greatly since the beginning of the digital storage age (2002).[139] These data sets are used to create predictive analysis algorithms in various fields, from business trends to target audience marketing methods.[140] They can also be used to analyze legal claims, judicial opinions, and more.[141] This type of data usually exists in public records.[142]
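The following sketch shows, on a toy scale, how these pieces might fit together. It assumes the scikit-learn library; the case summaries, labels, and any suggestion that so tiny a model predicts anything reliable are invented for illustration, and the example does not represent any of the products discussed in this article.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: short case summaries with made-up outcomes.
# A real system would learn from thousands of opinions and richer features.
summaries = [
    "defendant copied the entire work for a commercial purpose",
    "small excerpt quoted in a critical review of the book",
    "song sampled without a license and sold on a major label",
    "parody transformed the original for non-commercial commentary",
]
outcomes = ["infringement", "fair use", "infringement", "fair use"]

# TF-IDF turns text into numeric features (the NLP step); logistic
# regression then learns a pattern over them (the machine learning step).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(summaries, outcomes)

print(model.predict(["short quotation used in a scholarly commentary"]))
```

Scaled up to millions of documents, this pipeline shape (massive text corpora feeding a learned statistical model) is the “big data” scenario the list above describes.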

[53]     We are now on the verge of a legal renaissance.[143] Market failure mixed with an immense market, growing funding for start-ups, and available and rapidly growing technology is a volatile concoction, which will likely create dramatic and disruptive changes in the near future.[144]

[54]     Buchanan and Headrick first raised the notion of using AI in the legal field in November 1970 in their article “Some Speculation about Artificial Intelligence and Legal Reasoning.”[145] In their research, they suggest the use of computers to model human thought processes and, as a direct outcome, also help lawyers in their reasoning processes.[146] Later, an experiment was conducted by Thorne McCarty, who created a program that was capable of performing a narrow form of legal reasoning in the specific area of corporate reorganization taxation.[147] Given a ‘description’ of the ‘facts’ of a corporate reorganization case, the program could implement an analysis of these facts in terms of several legal concepts.[148]

[55]     Today, in this subfield of AI and law there are already numerous technologies such as:

  • IBM’s Watson Debater:[149] The debater is a new feature of IBM’s well-known Watson computer.[150] When asked to discuss any topic, it can autonomously scan its knowledge database for relevant content, ‘understand’ the data, select what it believes are the strongest arguments, and then construct sentences in natural language to illustrate the points it had selected, in favor and against the topic. Using that process, it can assist lawyers by suggesting the most persuasive arguments and precedents when dealing with a legal matter.[151]
  • ROSS Intelligence:[152] “SIRI for the law”[153] was developed in IBM’s Watson labs. ROSS is a legal research tool that enables users to obtain legal answers from thousands of legal documents, statutes, and cases.[154] The question can be asked in plain English and not necessarily in legal form. Ross’s responses include legal citations, suggested articles for further reading, and calculated ratings to help lawyers prepare for cases.[155] Because Ross is a cognitive computing platform, it learns from past interactions, i.e. Ross’s responses increase in accuracy as lawyers continue to use it. This feature can help lawyers reduce the time spent on research.[156]
  • ModusP:[157] An Israeli startup which has created an advanced search engine using sophisticated algorithms based on AI. The search function helps jurists reduce legal research hours by finding legal knowledge and insights more efficiently.[158]
  • Lex Machina:[159] An intellectual property (“IP”) research company that helps companies anticipate, manage, and win patent and other IP lawsuits by comparing cases to a database of information and helping their customers draw valuable conclusions that inform winning business and legal strategies.[160] The technology compiles data and documents from court cases and converts them into searchable text files.[161] After a keyword, patent, or party is searched for, data and documents are sent back out.[162] It gives lawyers more information on specific judges, a client’s history, and information on what they can do to have a better chance at winning.[163]
  • Modria:[164] A cloud based platform, initially developed for eBay and PayPal, functions as Online Dispute Resolution (“ODR”).[165] It enables companies “to deliver fast and fair resolutions to disputes of any type and volume.”[166] This technology aims to prevent submission of lawsuits, by providing easily accessible alternatives for dispute resolution.[167] Modria aims to create fair ODRs, based on the knowledge and insights from millions of cases and other disputes that the system has already solved.[168]
  • Premonition:[169] A technology which utilizes Big Data and AI to expose which lawyers win the most cases and before which judges.[170]
  • BEAGLE:[171] A technology that uses AI to quickly highlight the most important clauses in a contract and also provides a real-time collaboration platform that enables lawyers to easily negotiate a contract or pass it around an organization for quick feedback.[172] Beagle’s learning process allows the program to adapt to focus on what users care about most.[173]
  • Legal Robot:[174] A platform that enables users to check, analyze, and spot problems in contracts before signing them.[175] The platform is also meant to help users understand complex legal language by parsing legal documents and translating them into accessible language by transforming them into numeric expressions, so statistical and machine learning techniques can derive meaning.[176] It is also designed to compare thousands of documents in order to build a legal language model to be used as a tool for referencing and analyzing contracts.[177]

[56]     The development of the field of AI and law starts with programs that analyze cases and continues with technologies that make lawyers’ tasks more efficient, solve disputes, and replace human intervention. Surveying this course of development, we can predict that in the long run, AI technologies using machine learning and deep learning techniques may replace lawyers, arbitrators, mediators, and even judges. Computers could do the work of a lawyer–examining a case, analyzing the issues it raises, conducting legal research, and even deciding on a strategy.

VI. The Reality as We See It, The Day After Artificial Intelligence

            A. Judges and Physical Courts

[57]     Judges and their courts will become less necessary.[178] Most commercial disputes and criminal sentencing will be run by algorithms and wizards,[179] enabling algorithms like Modria to construct conflict resolutions in a much healthier and more down-to-earth manner. After all, they reportedly solve over fifty million disputes every year without any human intervention.[180] Most disputes can then be solved by an AI algorithm that determines the amount of damages to be paid to each side. Similar processes can occur in divorce hearings–algorithms can automatically assess the individuals’ property and financial background and calculate the amount of time spent together to create a fair divorce agreement.

[58]     One of the biggest problems with conflict resolution is the fact that it is run by human beings–prone to the effects of emotion, fatigue, and general current mood.[181] When a legal claim is first constructed by algorithms instead of human beings, the outcomes are likely to be more productive. For example, Modria is able to resolve hundreds of millions of commercial disputes yearly without the intervention of third-party human beings providing a verdict.[182] Claimants will, of course, be able to appeal to a human judge, but the need for those should dramatically decrease over time as machine learning algorithms gain a better understanding of the statistical meaning of justice. To reduce the number of appeals in tort cases, a government can create a fund to financially accommodate damages in order to facilitate a ‘sense of justice’ in the claimants’ minds.
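As a purely hypothetical sketch of the kind of rule an ODR platform might apply, the fragment below splits a disputed sum in proportion to claim size weighted by an evidence score. The factors, weights, and function name are invented; nothing here reflects how Modria or any real platform actually works.

```python
def propose_settlement(claim, evidence, counterclaim=0.0, counter_evidence=0.0):
    """Split the disputed value in proportion to evidence-weighted claims.

    'evidence' and 'counter_evidence' are scores in [0, 1] that a real
    system would have to derive from documents and prior outcomes; here
    they are simply supplied by the caller.
    """
    claimant_weight = claim * evidence
    respondent_weight = counterclaim * counter_evidence
    total_weight = claimant_weight + respondent_weight
    if total_weight == 0:
        return {"to_claimant": 0.0, "to_respondent": 0.0}
    pot = claim + counterclaim
    return {"to_claimant": round(pot * claimant_weight / total_weight, 2),
            "to_respondent": round(pot * respondent_weight / total_weight, 2)}

# A $10,000 claim with strong evidence against a weak $2,000 counterclaim.
print(propose_settlement(10_000, 0.8, 2_000, 0.3))
```

The point of the sketch is not the arithmetic but the design choice it exposes: every “fair” allocation rule encodes contestable judgments about what evidence is worth, which is precisely where human review and appeal would remain necessary.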

[59]     Some judges may remain in office to rule on cases that algorithms cannot bring to a decision acceptable to both sides, and on cases where entirely new issues are presented.

            B. Lawyers

[60]     Lawyers may also become a dying breed,[183] as algorithms learn how to structure claims, check contracts for problematic caveats, negotiate deals, predict legal strategies, and more. Using AI to create simple, optimally designed regulations and laws that are easier to learn, understand, and litigate by computer, will further the winnowing of the legal profession.[184]

[61]     Lawyers–or something similar–will still be necessary; however, they will focus mainly on risk engineering instead of litigation and contracts.[185] Lawyers will need to use intuition and skills not yet available to machines to analyze exposure and various aspects of performing business and civil actions.[186] They will, however, be helped by AIs that have already sifted through all the relevant data. Until AI is able to integrate the data into a nuanced analysis that requires some form of higher thinking, creativity, and prediction of likely outcomes based on human reactions, we will still need lawyers. In the future, all but the most skilled litigation and corporate lawyers will become unemployed as computer algorithms learn to emulate earlier successful strategies and avoid unsuccessful strategies to achieve optimal outcomes.[187] Young (often overpaid) associates will become unnecessary as much of their grunt work will be doable by machine.[188]

[62]     In some areas of the law, lawyers may take longer to disappear entirely. In areas without clear precedent, cases may be deemed too delicate to be dealt with by computers. Some clients may never trust computers and will insist on using humans; it will take time until we are willing to place our freedom (or our lives, in certain states in the United States) in the hands of algorithms.[189]

            C. Jury

[63]     Juries, like the other members of the legal system, will not be needed for most cases as there will be fewer trials.[190] The majority of legal issues will be solved by algorithms. In addition, technology may ensure that juries are designed to represent society, perhaps even mimicking human biases involving race, background, and life experience.[191] Such a jury could easily be instructed to disregard information, or weigh some data differently than others.[192]           

            D. Law School

[64]     Law schools will change dramatically, not least because we will need fewer lawyers. Moreover, the nature of legal learning will change to include subjects that are not taught in law schools today–creativity, understanding of statistics, big data analysis, and more.[193]

VII. Specific Ethical, Legal, and Social Implications

[65]     When considering these technologies and the changes they bring to the legal field, we must refer to the ethical, legal, and social implications that they create:

[66]     Today, the legal profession—lawyers, judges, and legal education—faces a disruption, mostly because of the growth of AI technology, both in power and capacity.[194]

[67]     An example of this disruption is that today, computers can review documents, a task which human lawyers did in the past. The role of AI is growing exponentially, so it is predicted that technology will evolve to a level that will enable computers to take over more complex legal tasks such as legal document generation and predicting litigation outcomes.[195] These implementations will become possible as the learning abilities of machine intelligence become better and better. Already, fifty-eight percent of respondents to the question “Is your firm doing any of the following to increase efficiency of legal service delivery” responded “Using technology tools to replace human resources.”[196] More specifically, forty-seven percent saw Watson replacing paralegals, and thirty-five percent thought the same for first-year associates. Thirteen and a half percent even thought Watson could replace experienced legal partners.[197] Notably, twenty percent said that computers will never replace human practitioners,[198] down from forty-six percent in 2011.[199]

[68]     There are some benefits that derive from these implications. First, they will increase competition in the legal services market, which will increase efficiency.[200] Second, the pricing of lawyers’ services today is opaque because the total services a matter will require are hard to predict; these technologies could enable price comparisons and the entry of new players into the legal services market.[201]

[69]     The forecast is that these implications will affect the following legal areas:[202]

  • Legal Discovery: Machine searches will enhance the legal discovery process by making the review of legal documents more efficient. There are already a handful of software tools that use predictive coding to minimize the amount of lawyer review required in the e-discovery process, including Relativity,[203] Modus,[204] OpenText,[205] kCura,[206] and others.[207] The courts have also acknowledged the use and promise of predictive coding.[208] (A minimal sketch of the kind of relevance classifier that underlies predictive coding appears after this list.)
  • Legal Search: Search tools such as Lexis[209] and Westlaw[210] were the first legal search engines to use intelligent search. Watson later enabled searching by semantics instead of keywords.[211] Semantic search lets users pose natural-language queries, to which the computer responds with semantically relevant legal information.[212] Ross, mentioned above, is an example of this kind of system.[213] Advanced features provide information about the strength of a precedent, considering how much others rely on it, enabling its effective use.[214] Eventually, AI will even be able to spot issues based on the searches conducted.[215] (A hedged sketch of the embedding-based idea behind semantic search also follows this list.)
  • Compliance: Legal and regulatory compliance is often socially and morally required, not to mention that penalties attach to non-compliance.[216] As such, many corporations employ teams of lawyers to confirm that they comply with the applicable regulatory regimes. AI machines are already being employed in this area, including Neota Logic,[217] which powers other companies’ AI regulatory compliance systems, such as ComplianceHR for employment regulations[218] or Foley & Lardner’s Global Risk Solutions (GRS) for Foreign Corrupt Practices Act of 1977 (FCPA) compliance.[219]
  • Legal Document Generation: In the past, the use of templates helped reduce the cost of these legal services. Machine intelligence will evolve to generate documents tailored to the specific needs of an individual. When those documents are later tested in court, AI will be able to improve them by tracking their effectiveness, using its learning abilities.[220]
  • Document Analysis: In addition to generating documents, AI can and will continue to assess the liabilities and risks associated with particular contracts, as well as determine ways for companies to optimize contracts to reduce costs.[221] Companies such as eBrevia[222] and LegalSifter[223] are already doing just that.
  • Brief and Memo Generation: Machine intelligence will be able to create draft briefs and memos that lawyers will then revise and shape. In the future it will create much more accurate briefs and memos, assisted by legal research programs that provide useful data.[224] Some have even suggested using AI to draft legislative documents.[225]
  • Legal Analytics: Companies such as Lex Machina,[226] Lex Predict,[227] and Legal Operations Company, LLC[228] already combine data and analytic capabilities to predict the outcomes of situations that have not yet occurred. Some areas of law, such as copyright fair use (discussed next), are easier to model because the relevant data revolves around a specific set of easily identifiable factors.[229] As computers and their learning abilities improve exponentially, these models and predictions will evolve to support more complex areas of law and to predict case outcomes.[230]
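To make the predictive-coding idea concrete, the following is a minimal sketch, in Python, of the kind of supervised relevance classifier that underlies technology-assisted review: a handful of attorney-coded documents train a model that then ranks the unreviewed collection by estimated relevance. The documents, labels, and library choices are our own illustrative assumptions and do not describe how Relativity, Modus, OpenText, kCura, or any other vendor actually works.

# Minimal sketch of predictive coding (technology-assisted review).
# The seed documents and labels are invented; real systems train on
# thousands of attorney-coded documents and validate by sampling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed_docs = [
    "Email discussing the licensing agreement and royalty payments",
    "Cafeteria menu for the week of March 3",
    "Draft term sheet covering the disputed patent license",
    "Holiday party RSVP list",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the dispute, 0 = not relevant

# TF-IDF features feeding a logistic-regression relevance model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the unreviewed collection and surface likely-relevant documents first.
unreviewed = [
    "Memo on royalty calculations under the license",
    "Parking garage access instructions",
]
scores = model.predict_proba(unreviewed)[:, 1]  # estimated probability of relevance
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")

A production workflow would layer statistical sampling, human validation of the machine’s calls, and iterative retraining on top of this core idea.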
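In the same hedged spirit, the semantic-search idea mentioned above can be illustrated by mapping a natural-language query and candidate passages into a shared vector space and comparing them by similarity. The library, model name, and passages below are assumptions made only for illustration; they are not a description of how Watson or Ross is built.

# Minimal sketch of semantic (embedding-based) legal search.
# Uses the open-source sentence-transformers library; the model name and
# passages are illustrative choices, not any vendor's actual index.
import numpy as np
from sentence_transformers import SentenceTransformer

passages = [
    "Fair use permits limited copying for criticism, comment, and news reporting.",
    "A landlord must return the security deposit within thirty days.",
    "Transformative uses weigh in favor of fair use under the first factor.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
passage_vecs = model.encode(passages)

query = "Can I quote a book in my review without permission?"
query_vec = model.encode([query])[0]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_vec, v) for v in passage_vecs]
print(passages[int(np.argmax(scores))])  # most semantically similar passage

The point of the illustration is that the query shares almost no keywords with the best-matching passage; similarity is computed over meaning rather than over literal terms.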

[70]     These changes will not only affect access to hard-to-obtain legal representation,[231] they will also affect lawyers’ workplaces. Those who perform these tasks and do not adapt to these shifts could lose their jobs.[232] Additionally, in the future, fewer substandard lawyers will be needed. On the other hand, super-star lawyers or bespoke attorneys[233] will be more easily identifiable (because legal analytics can monitor lawyers’ success rates) and will turn these technologies to their advantage. Even though machines could take over many of a lawyer’s tasks, they will not be able to speak in court in the foreseeable future, so litigators will still be needed.[234] Moreover, some areas of law are subject to such rapid legal change that even intelligent machines will not be able to learn them quickly, so lawyers will still be needed in those specialized areas. Also, lawyers’ human judgment may still add value to computer predictions.[235]

[71]     As a result of these changes, predicting case outcomes will be easier and more accurate, cases will be more likely to settle, and fewer trials will be conducted. It follows that the number of physical courtrooms may also decline dramatically.

[72]     Another change will occur in law schools. As a result of the changes mentioned above, fewer jurists will be needed, and only in certain areas of law. Therefore, law schools should shift their focus to the needs of the new legal profession, including technical expertise and the ability to interact with and efficiently use new, multidisciplinary AI technology.[236]

VIII. Artificial Intelligence in Fair Use–An Early Stage Proof of Concept

[73]     In our quest to explore the social, legal, and ethical implications of AI, we partnered with the IBM Watson team to create a workable software product in the area of AI and law. As students with a strong orientation toward disciplines such as law, psychology, business, and government, we were naturally drawn to the field of conflict resolution. First, in the words of Steve Jobs, we had to create a “stupid-simple” legal analysis scheme.[237] The scheme aims to explain effectively to the engineering staff at IBM Watson how lawyers and law students approach a case.

[74]     We drilled down on the set of questions one asks oneself when reading a ruling. As a rule, the more features or details one adds to the algorithm, the more data must be analyzed for Watson to learn effectively how the data was initially analyzed. To summarize this point, if we just needed Watson to identify a win or a loss, the task would be relatively easy. However, we wanted Watson to analyze why someone won or lost, which is orders of magnitude more complex.

Model 1 – Case Law Analysis Scheme:

[75]     The logical process of analyzing case law is roughly similar regardless of the area of law, but it requires a specific set of stages to analyze the case at hand. After learning that process, the system builds a data set in particular legal topics, thus gaining the ability to analyze new cases.

Stage 1: Identifying the Case Type Variables

[76]     In this stage the focus is on the details of the case and on establishing the specific normative framework. (A minimal data-structure sketch of the variables below follows the list.)

  • Variable 1: Court Type. The algorithm must identify the court, state, or jurisdiction in which the case is being tried. This is imperative, since the court hierarchy dictates whether an earlier ruling is binding on lower courts. For example, United States Supreme Court case law is binding precedent for all lower courts, and any ruling that conflicts with binding precedent will not hold up on appeal. In addition, courts at different levels approach a case differently – district courts find facts and then apply the law, while most appellate courts apply their understanding of the law to previously found facts.
  • Variable 2: Location & Date. The general rules in legal precedent are that new rulings overturn old rulings at that same judicial level and below, and that specific rulings overturn general rulings. This is why it is imperative for the machine learning algorithm to appreciate the source of each ruling.
  • Variable 3: Parties. The algorithm must identify which of the parties is the plaintiff and which is the defendant. This differentiation is imperative to refine which claims have been accepted by the court and which have not. In addition, different degrees of proof may be applied to different stakeholders in a case.
  • Variable 4: Legislative Standards. These should be categorized by both federal law and state law. Labeling cases and statutes makes it easier to locate cases with similar issues. For this variable, it must be remembered that there is also a hierarchy among legal sources.
  • Variable 5: Rulings & Other Case Law. IBM’s algorithms need to identify other case law cited within each case as persuasive or unpersuasive precedent. This enables the algorithm to develop a broad network structure, allowing it to understand which ties between rulings are relevant and to suggest additional cases that can be addressed in a legal matter.
  • Variable 6: Secondary Sources. Legal literature such as academic articles, books, and blogs provides valuable academic information, enabling the user of a search engine to find new ideas for forming claims or to find opinions that oppose binding precedents (which will be valuable when dealing with a case tried in a court at the same jurisdictional level as the court that issued the existing precedent).
  • Variable 7: The Judiciary. The algorithm should identify the names of the judges and if they ruled with the majority or minority. In some instances, it should be determined whether a dissenting opinion could be used in another case to provide valuable insights into which claims might be taken under consideration by specific judges. As the famous saying goes, “A good lawyer knows the law; a great lawyer knows the judge.”
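To give a concrete sense of Stage 1, the seven variables above can be thought of as a structured record extracted from each opinion before any learning takes place. The sketch below is our own simplification for exposition; the field names and the sample values are hypothetical and are not the schema actually supplied to the Watson team.

# Simplified record of the Stage 1 variables extracted from a single opinion.
# Field names and sample values are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CaseRecord:
    court: str                     # Variable 1: court type / place in the hierarchy
    jurisdiction: str              # Variable 2: location
    decision_date: str             # Variable 2: date (newer rulings can displace older ones)
    plaintiff: str                 # Variable 3: parties
    defendant: str
    statutes: List[str] = field(default_factory=list)           # Variable 4: legislative standards
    cited_cases: List[str] = field(default_factory=list)        # Variable 5: rulings & other case law
    secondary_sources: List[str] = field(default_factory=list)  # Variable 6: secondary sources
    majority_judges: List[str] = field(default_factory=list)    # Variable 7: the judiciary
    dissenting_judges: List[str] = field(default_factory=list)

example = CaseRecord(
    court="U.S. Court of Appeals (hypothetical circuit)",
    jurisdiction="Federal",
    decision_date="2015-06-01",
    plaintiff="Example Publishing Co.",
    defendant="Sample Media LLC",
    statutes=["17 U.S.C. § 107"],
    cited_cases=["Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994)"],
    majority_judges=["Judge A", "Judge B"],
    dissenting_judges=["Judge C"],
)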

[77]     We have left out of this document some critical factors that might dramatically influence how IBM’s algorithms approach cases. However, it was essential to create a relatively simplified approach for the algorithm to read the available case law, so that it could grasp the basic ground rules. Factors such as whether laws are general or specific, when they were enacted, and the history of upholding or overturning a particular ruling have been left out for the sake of creating an initial proof of concept that can predict or evaluate real legal claims better than chance. Another important consideration for this effort is the size of the data set–additional factors exponentially increase the amount of training data necessary to teach the algorithm how to think like a lawyer.

Stage 2: Selecting the Field of Dispute for the Case Law Data Set

[78]     The second stage in creating the proof of concept was finding a relatively structured area of law with hard-and-fast, consistent factors that have not changed much in recent years. A legal area with a simple and clear list of standards is optimal for ensuring that no claims are overlooked in the relevant field of dispute. Further, we sought a field governed mostly by federal law rather than state law, since state law would require us to create a different schema for every state.

[79]     Lastly, we wanted to challenge ourselves and find an area of law that would be of interest to the general public and that could result in a usable product. We eventually chose to pursue the creation of an AI algorithm in the field of fair use in publishing, under the Copyright Act. The fair use doctrine incorporates all of the above requirements, and also plays an important societal role due to the public’s misunderstanding and content owners’ misuse of the doctrine, which contribute to copyright’s continued and expanding burden on free speech.[239]

Stage 3: Defining the Fair Use Analysis Scheme

[80]     To teach Watson how to analyze a fair use case, we created, based on various resources (textbooks, articles, and the web), a fair use analysis scheme depicting the rationale and analysis that lawyers perform when approaching such claims. Two particularly helpful resources were the Stanford Copyright and Fair Use Center[240] and Cornell’s Fair Use Checklist.[241]

  • Fair Use in Publishing – Analysis Scheme: As part of the first model for our case law analysis scheme, we built a data set of fair use in publishing based on verdicts from all of the United States federal circuit courts. Although we had initially attempted to limit this to just the Second and Ninth Circuits, these two circuits did not provide sufficient case law for the analysis.
  • Copyright in a Nutshell: Copyright protection in the United States is governed by the Copyright Act of 1976.[242] Section 102 of the Act identifies which works of authorship fall under copyright protection.[243] Section 104 addresses when a work becomes the subject matter of copyright.[244] Section 104(a) provides that unpublished works specified by sections 102 and 103 are subject to copyright protection under the Act, without regard to the nationality or domicile of the author.[245] For published works, section 104(b) specifies when copyright protection applies, depending on the nature of the work and the nationality or domicile of the author.[246] Section 106 covers the author’s exclusive rights, such as the rights to reproduce copies of the copyrighted work, prepare derivative works based upon it, distribute copies to the public, and more.[247] The Copyright Act provides for limitations on these exclusive rights, such as reproduction by libraries and archives[248] and transfer of a particular copy after the first sale[249] (e.g., reselling a CD bought from a store). The fair use doctrine in U.S. law is based on Section 107.[250] The doctrine provides a defense to infringement–“Fair use was traditionally defined as ‘a privilege in others than the owner of the copyright to use the copyrighted material in a reasonable manner without his consent’”[251]–and the application of this formerly judicial doctrine[252] requires the balancing of four statutory factors:
    • (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
    • (2) the nature of the copyrighted work;
    • (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
    • (4) the effect of the use upon the potential market or value of the copyrighted work.[253]

[83]     The court decides each factor, ruling in favor of or against fair use. Then, each of the four factors is weighed against the total weight of the others.[254] This is not a trivial process, even for an experienced judge: the four statutory factors may not “be treated in isolation, one from another. All are to be explored, and the results weighed together, in light of the purposes of copyright.”[255] As such, it is an optimal exercise for an AI.
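Purely as a toy illustration of the balancing just described, per-factor findings can be represented as leaning for or against fair use and then combined. The numeric weights below are an assumption made only for exposition; as the quotation above makes clear, courts weigh the factors together qualitatively, not by fixed arithmetic.

# Toy illustration of combining four fair use factor findings.
# +1 leans toward fair use, -1 leans against it, 0 is neutral.
# The equal weights are an expository assumption, not a model of judicial balancing.
FACTORS = ("purpose", "nature", "amount", "effect")

def weigh_factors(findings, weights=None):
    """Combine per-factor findings into a rough overall lean."""
    weights = weights or {f: 1.0 for f in FACTORS}
    score = sum(findings[f] * weights[f] for f in FACTORS)
    if score > 0:
        return "leans toward fair use"
    if score < 0:
        return "leans against fair use"
    return "evenly balanced"

# Example: a transformative use of a small portion of a published work
# with little effect on the market for the original.
print(weigh_factors({"purpose": 1, "nature": 0, "amount": 1, "effect": 1}))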

Stage 4: Method of the Analysis

[84]     In each case law verdict, we examine and analyze each sentence and categorize it within the fair use doctrine analysis (i.e. marking which factor each sentence relates to and determining whether it supports the claims of the plaintiff or the defendant). Some sentences are deemed irrelevant to either side and are categorized as dicta or as support for the judge’s ruling.

[85]     Each sentence is then electronically tagged with information such as whether it favors the plaintiff or the defendant on each factor. After reviewing the checklist with the Watson team, we concluded that teaching Watson to recognize which sentences favor or oppose each factor – Purpose, Nature, Amount, and Effect – without going into the details of each sub-factor requires approximately five hundred analyzed and tagged verdicts. This produced approximately ten thousand sentences as a learning set for Watson.

[86]     In examining various fair use cases, most of which are concentrated in the Second and Ninth Circuits (most of the relevant IP claims are filed in these courts, which encompass New York and California – the centers of literature and film, respectively), we found that each case contains roughly twenty to twenty-five sentences bearing on the fair use doctrine. Following that examination, we analyzed all relevant cases, marking each sentence that discussed the fair use doctrine. For each marked sentence, we recorded determinations in the following categories:

  • Data: The minimal number of words needed to classify the sentence under the Factor label or the Side label, as described in the following entries.
  • Factor: Purpose / Nature / Amount / Effect / Ratio / Dicta.[256]
  • Side: Plaintiff / Defendant / Neutral.

For example, see Figure 1; a code-level sketch of one such tagged sentence follows.
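As a further illustration of the tagging just described, each marked sentence can be represented as a small labeled record carrying the minimal text span, the Factor it addresses, and the Side it favors. The structure and example sentence below are a hedged sketch of that idea, not the exact format used with the Watson team.

# Sketch of a single tagged training example from the fair use data set.
# Labels follow the Factor (Purpose/Nature/Amount/Effect/Ratio/Dicta) and
# Side (Plaintiff/Defendant/Neutral) categories; the sentence is invented.
from dataclasses import dataclass

FACTORS = {"Purpose", "Nature", "Amount", "Effect", "Ratio", "Dicta"}
SIDES = {"Plaintiff", "Defendant", "Neutral"}

@dataclass
class TaggedSentence:
    data: str    # minimal span of words needed to classify the sentence
    factor: str  # which factor (or Ratio/Dicta) the sentence relates to
    side: str    # which party the sentence favors

    def __post_init__(self):
        assert self.factor in FACTORS, f"unknown factor: {self.factor}"
        assert self.side in SIDES, f"unknown side: {self.side}"

example = TaggedSentence(
    data="the secondary use added new expression and a different purpose",
    factor="Purpose",
    side="Defendant",
)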

[87]     We are currently conducting a pattern analysis with Watson’s AI algorithms in order to identify patterns in the rationale of judges based on the given data. After incorporating this vast data set, Watson will be able to indicate, for a hypothetical case, which claims and arguments are strongest, depending on whether we argue for the plaintiff or the defendant.
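Once the sentences are tagged, even a naive aggregation hints at the kinds of patterns the analysis looks for – for instance, how often findings on each factor coincide with a win for the party they favor. The tuples below are invented and the counting is deliberately crude; Watson’s pattern analysis is far more sophisticated.

# Naive aggregation over tagged sentences to surface crude patterns,
# e.g., which factor findings co-occur with wins for the favored party.
# The (factor, side_favored, case_outcome) tuples are invented examples.
from collections import Counter

tagged = [
    ("Purpose", "Defendant", "defendant_won"),
    ("Effect", "Plaintiff", "plaintiff_won"),
    ("Purpose", "Defendant", "defendant_won"),
    ("Amount", "Plaintiff", "plaintiff_won"),
    ("Nature", "Neutral", "defendant_won"),
]

counts = Counter(tagged)
for (factor, side, outcome), n in counts.most_common():
    print(f"{factor:<8} favoring {side:<9} -> {outcome}: {n} sentence(s)")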

[88]     This entire project is complex and will take substantial time to complete. Nevertheless, with the planning phase complete and complications accounted for, the next step is to implement the technology.

IX. Conclusion and Recommendations for Courses of Action

[89]     The fruits of AI research are often attributed to other fields, as new revelations rapidly turn into mundane computer science inventions. However, we must remember that there is much more to explore and reveal within the yet unknown realms of AI.

[90]     In this paper we reviewed what defines AI and how it came about and evolved. We covered recent developments in AI relevant to the field of law and how they are leading to changes such as automated case analysis, increased efficiency in judicial tasks, and the replacement or reduction of human intervention in dispute resolution.

[91]     It is gradually becoming more conceivable that AI will change the world of law and the legal profession in the near future. We are ready for it in two ways: through the market and through technology. First, market failure has resulted in an overloaded judicial system. Second, funding for legal tech start-ups grew from 7 million USD in 2009 to a whopping 450 million USD in 2013.[257] Market failures and technological achievements will work together to pave the way for a new version of the legal profession.

[1] Lost in Space (1965–1968) Quotes, IMDB, http://www.imdb.com/title/tt0058824/quotes, archived at https://perma.cc/J8RH-UYSB (last visited Sept. 16, 2016) (quoting Robot: “Danger, Will Robinson! Danger!”).

The authors would like to thank the Zvi Meitar Family for their continued support of all of our research endeavors. In addition, the authors would like to thank the researchers at IBM Watson for their help and support throughout this project. Finally, the authors would like to especially thank Inbar Carmel for her incredible management of the Zvi Meitar Institute.

*Daniel Ben-Ari is a fourth-year student in the joint program in Law and Business Administration at Radzyner Law School and Arison Business School at Interdisciplinary Center Herzliya (IDC). Daniel served as an Operations Sergeant in the operations division in the Israel Defense Forces. At the IDC, Daniel participated in the Law Clinic for Class Actions and the Certificate Program in European Studies, and he is also a member of the Israeli-German Lawyers Association (IDJV). Additionally, Daniel is a member of the elite program of KPMG Accounting Firm for excellent students. He also volunteers with Nova Project, which provides business consulting services to NGOs. Currently, Daniel is the coordinator of the Zvi Meitar Emerging Technologies Program and working as a teaching assistant of the course Accounting Theory.

**Yael Frish is a BA graduate of the Honors Track at Lauder School of Government, Diplomacy and Strategy at IDC Herzliya.  In the IDF, Yael served as an intelligence officer in an elite intelligence unit, ranked Lieutenant. Yael graduated from the “Zvi Meitar Emerging Technologies” Program and is an alumna of the ProWoman organization. In summer 2014, Yael participated in an Israeli-Palestinian delegation to Germany, and in summer 2015, she represented Israel at the American Institution for Political and Economic Solutions’ Summer Program at Charles University, Prague. In the last two years, Yael has gained professional experience as an analyst, consultant and business developer in consulting and business intelligence companies.

***Adam Lazovski is a Managing Partner at Quedma Innovation Ltd.  Adam is also the founder and Program Manager of the Excel Ventures Program part of Birthright’s Leadership Branch (Excel).  Adam has also worked as a strategy and business development consultant in Robus, Israel’s largest and leading legal marketing and consulting firm.  Adam holds a B.A in Psychology and an LL.B, both from IDC Herzliya. During his studies Adam was part of the Zvi Meitar Emerging Technologies Program.  In addition, Adam was also part of the Rabin Leadership Program, where he initiated a social venture and studied the science behind leaders and entrepreneurs.  Adam served as a first sergeant in the Demolition and Sabotage special unit within the Paratroopers Brigade in the Israel Defense Forces. He maintains an active reserve status.

****Uriel Eldan is a graduate of the joint program in Law (LL.B) and Business Administration (B.A.) at Radzyner Law School and Arison Business School at IDC Herzliya. Uriel served in the elite unit “8200” and in the Research Department of the Army Intelligence in the Israel Defense Forces.  Uriel co-founded the Capital Markets Investment Club at IDC and was a member of the Zvi Meitar Emerging Technologies honors program.  Uriel has worked as a teaching assistant in numerous courses at IDC, and now also in Tel Aviv University.  After graduation, Uriel started his legal internship at one of Israel’s top law firms Herzog, Fox & Neeman in the Technology & Regulation department.

*****Dov Greenbaum is Director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, Interdisciplinary Center, Herzliya, Israel (IDC). Dov is also an Assistant Professor (adj) in the Department of Molecular Biophysics and Biochemistry at Yale University and a practicing intellectual property attorney.  Dov has degrees and postdoctoral fellowships from Yale, UC Berkeley, Stanford, and  Eidgenössische Technische Hochschule Zürich (ETH Zürich).

[2] Samuel Gibbs, Elon Musk: Artificial Intelligence is Our Biggest Existential Threat, The Guardian (Oct. 27 2014, 6:26), https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat, archived at https://perma.cc/MSN2-5TWC.

[3] Kris Hammond, What is Artificial Intelligence?, Computerworld (Apr. 10, 2015, 4:05 AM), http://www.computerworld.com/article/2906336/emerging-technology/what-is-artificial-intelligence.html, archived at https://perma.cc/J7VS-HG43.

[4] See id.; see, e.g., Stuart Jonathan Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 18 (3rd ed. 2010) (discussing important aspects of A.I.).

[5] See John McCarthy, What Is Artificial Intelligence? 2–3 (Nov. 12, 2007) (unpublished manuscript) (on file with Stanford University), http://www-formal.stanford.edu/jmc/whatisai.pdf, archived at https://perma.cc/XF9R-UHKV.

[6] See Ido Roll & Ruth Wylie, Evolution and Revolution in Artificial Intelligence in Education, 26 Int’l J. Artificial Intelligence in Educ. 582, 583 (2016); see Monika Hengstler, Ellen Enkel & Selina Duelli, Applied Artificial Intelligence and Trust—The Case of Autonomous Vehicles and Medical Assistance Devices, 105 Technological Forecasting & Social Change 105, 114 (2016).

[7] See Karamjit S. Gill, Artificial Super Intelligence: Beyond Rhetoric, 31 AI & SOCIETY 137, 137 (2016).

[8] See Avneet Pannu, Artificial Intelligence and its Application in Different Areas, 4 Int’l J. Engineering & Innovative Tech. (IJEIT) 79, 79, 84 (2015).

[9] See id. at 5.

[10] See id. at 3.

[11] See id. at 5.

[12] Katie Hafner, Still a Long Way from Checkmate, N.Y. Times, Dec. 28, 2000, http://www.nytimes.com/2000/12/28/technology/28ARTI.html?pagewanted=1, archived at https://perma.cc/X2PX-25EW.

[13] See Russell & Norvig, supra note 4, at 1020.

[14] See id.

[15] See id.; see John Frank Weaver, Robots Are People Too: how Siri, Google Car, and artificial intelligence will force us to change our laws 3 (2014) [hereinafter Robots Are People Too].

[16] See Russell & Norvig, supra note 4, at 1020; see Robots Are People Too, supra note 15, at 3.

[17] See Russell & Norvig, supra note 4, at 1026; see Robots Are People Too, supra note 15, at 3.

[18] E.W. Dijkstra, The Threats to Computing Science (EWD898), E.W. Dijkstra Archive. USA: Center for American History, University of Texas at Austin, http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD898.html, archived at https://perma.cc/ZU8Y-26TY.

[19] Russell & Norvig, supra note 4, at 1026.

[20] McCarthy, supra note 5, at 10-11.

[21] See Richard Thomason, Logic and Artificial Intelligence, Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/logic-ai/, archived at https://perma.cc/3RPH-PVKV, (last updated Oct. 30, 2013); see Raymond Reiter, Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems 133 (2001).

[22] See David Senior, Narrow AI: Automating The Future of Information Retrieval, TechCrunch, Jan. 31, 2015, https://techcrunch.com/2015/01/31/narrow-ai-cant-do-that-or-can-it/, archived at https://perma.cc/LP5K-Z47X.

[23] See generally Feng-hsiung Hsu, IBM’s Deep Blue Chess Grandmaster Chips, 19 IEEE Micro 70, 70 (1999) (describing IBM’s Deep Blue super computer and discussing the main source of its computation power).

[24] See generally Aviva Rutkin, Anything You Can Do . . ., 229 New Scientist 20, 20 (2016) (discussing how artificial intelligence has developed and advanced).

[25] Hafner, supra note 12.

[26] Id.

[27] Id.

[28] See generally Rob High, The Era of Cognitive Systems: An Inside Look at IBM Watson and How it Works (IBM Corp. ed., 2012) (providing a detail analysis on how Watson works).

[29] See Modria, http://modria.com/product/, archived at https://perma.cc/RKN5-LPWT (last visited Nov. 1, 2016).

[30] See Hafner, supra note 12.

[31] See id.

[32] See id.

[33] See, e.g., Robert Fisher, Representations of Artificial Intelligence in Cinema, University of Edinburgh–School of Informatics, http://homepages.inf.ed.ac.uk/rbf/AIMOVIES/AImovies.htm, archived at https://perma.cc/Y7KC-XHP3 (last updated Apr. 16, 2015); see Kathleen Richardson, Rebranding the Robot, 4 Engineering & Technology 42 (2009); see Robert B. Fisher, AI and Cinema Does Artificial Insanity Rule?, in Twelfth Irish Conf. on Artificial Intelligence and Cognitive Science (2001); see Elinor Dixon, Constructing the Identity of AI: A Discussion of the AI Debate and its Shaping by Science Fiction (May 28, 2015) (unpublished Bachelor thesis, Leiden University) (on file with the Leiden University Repository), https://openaccess.leidenuniv.nl/bitstream/handle/1887/33582/Elinor%20Dixon%20BA%20Thesis%20Final.pdf?sequence=1, archived at https://perma.cc/H2P7-NXVC.

[34] 2001: A Space Odyssey, (Stanley Kubrick Productions 1968).

[35] The Matrix, (Village Roadshow Pictures, Groucho II Film Partnership & Silver Pictures 1999).

[36] The Terminator (Cinema ’84 & Pacific Western 1984).

[37] See Jean- Baptiste Jeangène Vilmer, Terminator Ethics: Should We Ban “Killer Robots” Ethics & Int’l Affairs, Mar. 23, 2015, https://www.ethicsandinternationalaffairs.org/2015/terminator-ethics-ban-killer-robots/, archived at https://perma.cc/8XSE-BNVC.

[38] A.I. Artificial Intelligence (Amblin Entertainment & Stanley Kubrick Productions 2001).

[39] Star Wars: Episode IV – A New Hope (Lucasfilm Ltd. 1977).

[40] Spaceballs (Brooksfilms 1987).

[41] See Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right from Wrong 7–8 (2009).

[42] John Searle, The Chinese Room Argument, 4 Scholarpedia 3100 (2009), http://www.scholarpedia.org/article/Chinese_room_argument, archived at https://perma.cc/FK4A-5X7Q.

[43] David Adrian Sanders & Giles Eric Tewkesbury, It Is Artificial Idiocy That Is Alarming: Not Artificial Intelligence, in Proc. of the 11th Int’l Conf. on Web Info. Sys. and Technologies 345, 347 (2015).

[44] See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014).

[45] See id. at 26, 155.

[46] See id. at 29.

[47] See id. at 140.

[48] See Singularity Hypotheses: A Scientific and Philosophical Assessment 1–4 (Amnon H. Eden et al. eds., 2012) [hereinafter Singularity Hypotheses]

[49] See id. at 28–29. 

[50] See Stanislaw Ulam, John Von Neumann, 64 Bull. of the Am. Mathematical Soc’y 1, 5 (May 1958), http://www.ams.org/journals/bull/1958-64-03/S0002-9904-1958-10189-5/S0002-9904-1958-10189-5.pdf, archived at https://perma.cc/AV9D-EJ3T.

[51] Guia Marie Del Prado, Stephen Hawking Warns of an ‘Intelligence Explosion,’ Bus. Insider (Oct. 9, 2015, 2:17 PM), http://www.businessinsider.com/stephen-hawking-prediction-reddit-ama-intelligent-machines-2015-10, archived at https://perma.cc/P4NL-2AJ2.

[52] Singularity Hypotheses, supra note 48, at 3.

[53] See Hazem Ahmed & Janice Glasgow, Swarm Intelligence: Concepts, Models and Applications: Technical Report 2012-585, Queen’s Univ. School of Computing 2 (2012), http://ftp.qucis.queensu.ca/TechReports/Reports/2012-585.pdf, archived at https://perma.cc/8APG-T4ZX.

[54] See Eric Bonabeau, Marco Dorigo & Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems 19 (1999).

[55] See Vilmer, supra note 37.

[56] See John Searle, Minds, Brains, and Computers, 3 The Behavioral & Brain Sciences 349, 353 (1980), http://faculty.arts.ubc.ca/rjohns/searle.pdf, archived at https://perma.cc/7K9U-98FA (stating that the equation “mind is to brain as program is to hardware” is flawed).

[57] See Russell & Norvig, supra note 4, at 36–37.

[58] See Michael R. LaChat, Artificial Intelligence and Ethics: An Exercise in the Moral Imagination, 7 AI Mag. 70, 70–71 (1986), http://www.aaai.org/ojs/index.php/aimagazine/article/view/540/476, archived at https://perma.cc/YQ72-FAXG (“[T]he possibility of constructing a personal AI raises many ethical and religious questions that have been dealt with seriously only by imaginative works of fiction; they have largely been ignored by technical experts and by philosophical and theological ethicists”). 

[59] Russell & Norvig, supra note 4, at 1020.

[60] See Nick Bostrom, Robots & Rights: Will Artificial Intelligence Change the Meaning of Human Rights? 5, 5 (Matt James & Kyle Scott eds., 2008).

[61] See Russell & Norvig, supra note 4, at 331.

[62] See István S. N. Berkeley, What is Artificial Intelligence?, Univ. of La. at Lafayette (1997), http://www.ucs.louisiana.edu/~isb9112/dept/phil341/wisai/WhatisAI.html, archived at https://perma.cc/2ZGB-L8P7.

[63] See Alan M. Turing, Computing Machinery and Intelligence (1950).

[64] See Berkeley, supra note 62.

[65]See Daniel C. Dennett, Can Machines Think?, in How we Know (Michael Shafto ed., 1985), http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/dennettcanmach.pdf, archived at https://perma.cc/4JWH-XK3K (last visited Sept. 22, 2016).

[66] See id.

[67] See id.  

[68] See id.

[69] See id.

[70] See Jo Best, IBM Watson: The Inside Story of How the Jeopardy-Winning Supercomputer was Born and What it Wants to do Next TechRepublic, (Sept. 9, 2013, 8:45 AM), http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/, archived at https://perma.cc/Z6MD-ZGUA.

[71] See Stuart Russell, Introduction to AI: A Modern Approach, Univ. of CA- Berkeley, https://people.eecs.berkeley.edu/~russell/intro.html, archived at https://perma.cc/R2DQ-94R3 (last visited Oct. 31, 2016).

[72] See Gary Fostel, The Turing Test is For the Birds, 4 SIGART Bull. 7, 8 (1993).

[73] Jose Hernandez-Orallo, Beyond the Turing Test, 9 J. of Logic, Language & Info. 447, 447-466, 458 (2000).

[74] See José Hernández-Orallo & David L. Dowe, Measuring Universal Intelligence: Towards an Anytime Intelligence Test, 174 Artificial Intelligence 1508, 1509 (2010), http://ac.els-cdn.com/S0004370210001554/1-s2.0-S0004370210001554-main.pdf?_tid=179c084e-83e4-11e6-b8dd-00000aacb362&acdnat=1474892815_a27d3e23a8991e0587ff0c3a6c4c0086, archived at https://perma.cc/C3PW-438Q.

[75] See Hector J. Levesque, Ernest Davis, & Leora Morgenstern, The Winograd Schema Challenge, Proc. of the Thirteenth Int’l Conf. on Principles of Knowledge Representation & Reasoning 552, 554, 557–58 (2012), http://www.aaai.org/ocs/index.php/KR/KR12/paper/viewFile/4492/4924/, archived at https://perma.cc/4VWX-7SYY.

[76] See generally Allen Newell & Herbert Simon, The Logic Theory Machine—A Complex Information Processing System, 2 IRE Transactions on Info. Theory 61, (1956), https://www.u-picardie.fr/~furst/docs/Newell_Simon_Logic_Theory_Machine_1956.pdf, archived at https://perma.cc/NM8Q-LJZS (detailing logic theorist system).

[77] See generally Mark O. Riedl, The Lovelace 2.0 Test of Artificial Creativity and Intelligence, arXiv: 1410.6142 (2014), http://arxiv.org/pdf/1410.6142v3.pdf, archived at https://perma.cc/9HC5-HYF3 (detailing the Lovelace 2.0 test).

[78] See id.

[79] See Kevin Warwick & Huma Shah, Human Misidentification in Turing Tests, 27 J. Exp. & Theoretical Artificial Intelligence 123, 124-25 (2014) http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2014.921734, archived at https://perma.cc/53VP-42NZ.

[80] See Kevin Warwick & Huma Shah, Can Machines Think? A Report on Turing Test Experiments at the Royal Society, 27 J. Exp. & Theoretical Artificial Intelligence 1, 17 (2015) http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2015.1055826?needAccess=true, archived at https://perma.cc/V279-8BRN.

[81] See generally John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955, 27 AI magazine 12, 13-14 (2006), http://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802, archived at https://perma.cc/WA82-QMSZ (reproducing part of the Dartmouth summer research project and summarizing its proposal); see also Berkeley, supra note 62.

[82] See Berkeley, supra note 62.

[83] See Interview by Jeffrey Mishlove with John McCarthy, Ph.D., Thinking Allowed, Conversations on the Leading Edge of Knowledge and Discovery: Artificial Intelligence (1989), http://www.intuition.org/txt/mccarthy.htm, archived at https://perma.cc/3LCQ-KYW5.

[84] John McCarthy, Ascribing Mental Qualities to Machines, Stan. Artificial Intelligence Lab. 1, 2 (1979), http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA071423, archived at https://perma.cc/HJ9K-VC8V.

[85] See Marvin Minsky, ‘Father of Artificial Intelligence,’ Dies at 88, MIT News, Jan. 25, 2016, http://news.mit.edu/2016/marvin-minsky-obituary-0125, archived at https://perma.cc/AS9V-GN4S.

[86] See Will Knight, What Marvin Minsky Still Means for AI, MIT Technology Rev., Jan. 26, 2016, https://www.technologyreview.com/s/546116/what-marvin-minsky-still-means-for-ai/, archived at https://perma.cc/BN2U-AXE5.

[87] See id. 

[88] See Jan Mycielski, Book Reviews, Perceptrons, An Introduction to Computational Geometry, 78 Bull. of the Am. Mathematical Soc’y 12, 12 (1972), http://www.ams.org/journals/bull/1972-78-01/S0002-9904-1972-12831-3/S0002-9904-1972-12831-3.pdf, archived at https://perma.cc/ZT2X-X8JS (reviewing Perceptrons by Minsky and Papert); see also Jordan B. Pollack, Book Review, No Harm Intended, 33 J. Mathematical Psycholog. 358, 358 (1988), http://www.demo.cs.brandeis.edu/papers/perceptron.pdf, archived at https://perma.cc/99US-9KK8 (reviewing the expanded edition of Perceptrons by Minsky and Papert).

[89] See Knight, supra note 86.

[90] See id. 

[91] See id.

[92] See Allen Newell & Herbert A. Simon, Computer Science as Empirical Inquiry: Symbols and Search, 19 Comm. ACM 113, 116 (1976).

[93] Herbert A. Simon, The Sciences of the Artificial 23 (3rd ed. 1996).

[94] See id. at 22.

[95] See Nils Nilsson, The Physical Symbol System Hypothesis: Status and Prospects, in 50 Years of AI 9, 11 (Max Lungarella, Fumiya Iida, Josh Bongard & Rolf Pfeifer eds., 2007).

[96] See David S. Touretzky & Dean A. Pomerleau, Reconstructing Physical Symbol Systems, 18 Cognitive Science, 345, 349 (1994).

[97] See generally Alexander Singer, Implementations of Artificial Neural Networks on the Connection Machine, 14 Parallel Computing 305 (1990) (discussing the practical implementation of artificial neural networks on the Connection Machine and the natural match between the two concepts).

[98] See Davis Ernest, Representations of Commonsense Knowledge 2 (Ronald J. Brachman ed., 1990); see John McCarthy Applications of Circumscription to Formalizing Common-Sense Knowledge, Dep’t of Computer Science, Stan. Univ. (1986), http://www-formal.stanford.edu/jmc/applications.pdf, archived at https://perma.cc/NZG6-6ZN3.

[99] See David Lynton Poole, Alan K. Mackworth & Randy Goebel, Computational Intelligence: A Logical Approach 1, 18 (1998).

[100] See Nilsson, supra note 95, at 11; see Bo Göranzon, Artificial Intelligence, Culture and Language: On Education and Work 220 (Magnus Florin ed., 1990).

[101] See Berkeley, supra note 62.

[102] See John McCarthy, The Well-Designed Child, 172 Artificial Intelligence 2003, 2011 (2008).

[103] See id. 

[104] See Brenden M. Lake et al., Building Machines That Learn and Think Like People. Center for Brains, Minds, and Machines Memo No. 046, at 7 (2016), http://www.mit.edu/~tomeru/papers/machines_that_think.pdf, archived at https://perma.cc/3Q9P-87XD.

[105] See McCarthy, supra note 4.

[106] See Ernest Davis & Gary Marcus, Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence, 58 Communications of the ACM 92, 93 (2015) http://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/fulltext#, archived at https://perma.cc/7PJH-KZP6.

[107] See Vincent C. Müller & Nick Bostrom, Future Progress In Artificial Intelligence: A Survey of Expert Opinion, Fundamental Issues Of Artificial Intelligence, 553, 553 (2016) (“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter”), http://www.nickbostrom.com/papers/survey.pdf, archived at https://perma.cc/UA4E-P6GP.

[108] See Davis & Marcus, supra note 106, at 99-102.

[109] See infra text accompanying notes 240-42.

[110] See Education in Communities, IBM Corp. Resp. Rep. (2014), http://www.ibm.com/ibm/responsibility/2014/communities/education-in-communities.html, archived at https://perma.cc/P2W6-RHFD.

[111] Other People’s Money (Warner Bros. 1991).

[112] See generally, Mark Bergen, Another AI Startup Wants to Replace Hedge Funds, recode, (Aug. 7, 2016, 11:15 AM), http://www.recode.net/2016/8/7/12391180/artificial-intelligence-emma-hedge-fund, archived at https://perma.cc/924G-F822 (explaining how a company aiming to integrate artificial intelligence in stock market trading is a part of a larger wave of start-ups attempting to integrate AI learning in financial markets).

[113] See generally Jacob Brogan, What’s the Deal With Artificial Intelligence Killing Humans? Slate, (April 1 2016, 7:03 AM), http://www.slate.com/articles/technology/future_tense/2016/04/will_artificial_intelligence_kill_us_all_an_explainer.html, archived at https://perma.cc/9HSE-XA7Z (explaining the differing views on the danger of AI in a variety of fields); see also Heather M. Roff, Killer Robots on the Battlefield, Slate (April 7, 2016 11:45 AM), http://www.slate.com/articles/technology/future_tense/2016/04/the_danger_of_using_an_attrition_strategy_with_autonomous_weapons.html, archived at https://perma.cc/Q3FR-M8J2 (discussing the fears and benefits that accompany the prospect of autonomous weapons that engage targets entirely independent of human operation).

[114] See John O. McGinnis & Russell G. Pearce, The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services, 82 Fordham L. Rev. 3041, 3055 (2014) (discussing the possible disruptions that the legal profession may face as a result of integration of A.I. into the legal profession).

[115] See e.g., Overloaded Courts, Not Enough Judges: The Impact on Real People, People for the Am. Way, http://www.pfaw.org/sites/default/files/lower_federal_courts.pdf, archived at https://perma.cc/8CPY-4LGP (last visited Oct. 31, 2016) (explaining the current strain on the American judiciary).

[116] See Guilty as Charged, The Economist (Feb 2, 2013, 4:02 PM), http://www.economist.com/news/leaders/21571141-cheaper-legal-education-and-more-liberal-rules-would-benefit-americas-lawyersand-their, archived at https://perma.cc/Z7KX-Y6S9 (“America has more lawyers per person of its population than any of 29 countries studied (except Greece)”).

[117] See Maria L. Marcus, Judicial Overload: The Reasons and the Remedies, 28 Buffalo L. Rev 111, 112-15, 120 (1978).

[118] See id. at 111. 

[119] See How Big is the US Legal Services Market?, Thompson Reuters (2015) http://legalexecutiveinstitute.com/wp-content/uploads/2016/01/How-Big-is-the-US-Legal-Services-Market.pdf, archived at https://perma.cc/6LH8-AXGN [hereinafter U.S. Legal Services Market]

[120] Id.

[121] See William D. Henderson, From Big Law to Lean Law, 38 Int’l Rev. of L. & Econ. 1, 3-5, 10, 11 (2014). But cf. Russell G. Pearce & Eli Wald, The Relational Infrastructure of Law Firm Culture and Regulation: The Exaggerated Death of Big Law, 42 Hofstra L. Rev. 109, 110 (2013) (arguing that Big Law is not dying and presenting contradictory evidence).

[122] See U.S. Legal Services Market, supra note 119.

[123] See Artificial Intelligence Global Quarterly Financing History 2010-2015, CB Insights (2016), https://cbi-blog.s3.amazonaws.com/blog/wp-content/uploads/2016/02/AI_quarterly_finance_20160203.jpg, archived at https://perma.cc/7W29-26UG.

[124] See Christine Magee, The Jury is Out on Legal Startups, TechCrunch, Aug. 5 2014 http://techcrunch.com/2014/08/05/the-jury-is-out-on-legal-startups/, archived at https://perma.cc/Y95J-C2U9.

[125] See Raymond H. Brescia, et al. Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice, 78 Albany L. Rev. 553, 553-55 (2014); see generally, Joan C. Williams, Aaron Platt & Jessica Lee, Disruptive Innovation: New Models of Legal Practice, at 2-3 (2015) http://ssrn.com/abstract=2601133, archived at https://perma.cc/4SVK-YMFX (explaining the impact of new business models and technology on legal access).

[126] See Julius Stone, Legal System and Lawyers’ Reasonings 37-41 (1964).

[127] But see, Jonathan Smithers, President of the Law Society, Speech at the Union Internationale des Avocats (UIA) Conference: Lawyers Replaced by Robots: Will Artificial Intelligence Replace the Judgement and Independence of Lawyers? (Oct. 30, 2015) http://www.lawsociety.org.uk/news/speeches/lawyers-replaced-by-robots-artificial-intelligence-replace-judgment/, archived at https://perma.cc/EV4A-RYBV.

[128] See Ian Lopez, Can AI Replace Lawyers?, Law.Com, Apr. 8, 2016, http://www.law.com/sites/articles/2016/04/08/can-ai-replace-lawyers-vanderbilt-law-event-to-address-legal-machines/?slreturn=20160414054949, archived at https://perma.cc/UY4Q-U8FY.

[129]See Machine Learning: What it is and Why it Matters, SAS Institute, http://www.sas.com/en_us/insights/analytics/machine-learning.html, archived at https://perma.cc/X5VD-4WPW (last visited Sept. 26, 2016).

[130] See id.

[131] See id.

[132] See Prakash M. Nadkarni, Lucila Ohno-Machado & Wendy W. Chapman, Natural Language Processing: An Introduction, 18 J. Am. Med. Informatics Ass’n. 544, 544 (2011), http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168328/, archived at https://perma.cc/3D53-TFM4.

[133] See id.

[134] See id. at 545-46.

[135] See Elizabeth D. Liddy, Natural Language Processing, Surface (Syracuse Univ. Research Facility and Collaborative Env’t) (2001) http://surface.syr.edu/cgi/viewcontent.cgi?article=1043&context=istpub, archived at https://perma.cc/MTC7-HHYK.

[136] See Nadkarni et al., supra note 132, at 544-45; see Steve Lohr, Aiming to Learn as We Do, a Machine Teaches Itself, N.Y. Times, Oct. 4, 2010, http://www.nytimes.com/2010/10/05/science/05compute.html?hpw=&pagewanted=all&_r=0, archived at https://perma.cc/KNP8-KWPE.

[137] See Nadkarni et al., supra note 132, at 549.

[138] See Big Data: What It is and Why It Matters, SAS Institute, http://www.sas.com/en_us/insights/big-data/what-is-big-data.html, archived at https://perma.cc/G3N7-566N (last visited Sept. 26, 2016).

[139] See Martin Hilbert & Priscila Lopez, The World’s Technological Capacity to Store, Communicate, and Compute Information: Tracking the Global Capacity of 60 Analog and Digital Technologies During the Period from 1986 to 2007, MartinHilbert.Net, Apr. 1, 2011, http://www.martinhilbert.net/WorldInfoCapacity.html/, archived at https://perma.cc/D5MK-CF5L (last visited Sept. 26, 2016).

[140] See SAS Institute, supra note 138.

[141] See Bernard Marr, How Big Data is Disrupting Law Firms and The Legal Profession, Forbes (Jan. 20, 2016, 2:31 AM), http://www.forbes.com/sites/bernardmarr/2016/01/20/how-big-data-is-disrupting-law-firms-and-the-legal-profession/#57a63cf35ed6, archived at https://perma.cc/9WLW-7EZV.

[142] See SAS Institute, supra note 138.

[143] See Why Artificial Intelligence is Enjoying a Renaissance, The Economist (July 15, 2016, 4:26) http://www.economist.com/blogs/economist-explains/2016/07/economist-explains-11, archived at https://perma.cc/J63S-RGKH.

[144] See id.

[145] See Bruce G. Buchanan & Thomas E. Headrick, Some Speculation About Artificial Intelligence and Legal Reasoning, 23 Stan. L. Rev. 40, 40-41 (1970).

[146] See id. at 40.

[147] See L. Thorne McCarty, The Taxman Project: Towards a Cognitive Theory of Legal Argument, in Computer Science & Law: An Advanced Course 23, 23 (Brian Niblett ed., 1980).

[148] See id.

[149] See Olaf Mw, IBM Debating Technologies, YouTube (May 6, 2014), https://www.youtube.com/watch?v=7g59PJxbGhY, archived at https://perma.cc/L3MF-FJYV (excerpt from the 2014 Milken Institute).

[150] See IBM Watson, IBM Watson: How it Works, YouTube (Oct. 7, 2014), https://www.youtube.com/watch?v=_Xcmh1LQB9I, archived at https://perma.cc/68XU-J3D6.

[151] See Ruty Rinott et al., Show Me Your Evidence – An Automatic Method for Context Dependent Evidence Detection, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing 440, 440 (2015).

[152] ROSS Intelligence, http://www.rossintelligence.com/, archived at https://perma.cc/8Q63-XBAQ (last visited Sep. 26, 2016)[hereinafter ROSS Intelligence].

[153] ROSS Intelligence, https://wefunder.me/ross, archived at https://perma.cc/ZBG4-6JBX. “What they do: ROSS is an A.I. lawyer built on top of Watson, IBM’s cognitive computer, that provides cited legal answers instantly. Ross works much like Siri. With ROSS, lawyers ask a simple question the system sifts through its database of legal documents and spits out an answer paired with a confidence rating. Why it’s a big deal: Legal research is time-consuming and expensive. It erodes law firms’ profits and prices clients of [sic] out of services — Law firms spend $9.6 billion on research annually. Up until now the current research databases have relied heavily on the flawed system of keyword search. With Ross it’s as easy as asking a question. Ross has the potential to save both lawyers and clients billions every year. If they succeed, Ross will be the first ever artificial [sic] intelligence research and indexing software.”

[154] ROSS Intelligence, supra note 152. 

[155] See id.; see Karen Turner, Meet ‘Ross,’ the Newly Hired Legal Robot, Wash. Post, May, 16, 2016, https://www.washingtonpost.com/news/innovations/wp/2016/05/16/meet-ross-the-newly-hired-legal-robot//, archived at https://perma.cc/J2US-RP9M.

[156] See id.; ROSS Intelligence, supra note 152.

[157] ModusP, http://modusp.com/, archived at https://perma.cc/7BSM-54AT (last visited Sep. 26, 2016).

[158] See id.

[159] Lex Machina: a LexisNexis Company, https://lexmachina.com/, archived at https://perma.cc/QLF2-SM8V (last visited Sept. 26, 2016).

[160] See John R. Allison et al., Understanding the Realities of Modern Patent Litigation., 92 Tex. L. Rev. 1769, 1772-73 (2014).

[161] See id.

[162] See id.

[163] See id. at 1773.

[164] See Product, Modria, http://www.modria.com/product/, archived at https://perma.cc/K5AE-DS9L (last visited Sept. 26, 2016).

[165] See id. 

[166] Id.

[167] See id.

[168] See Ben Barton, Modria and the Future of Dispute Resolution, Bloomberg Law, Oct 1, 2015, https://bol.bna.com/modria-and-the-future-of-dispute-resolution/, archived at https://perma.cc/9J3D-N5UU.

[169] See Solutions, Premonition, http://premonition.ai/law/, archived at https://perma.cc/MXW5-B7RR (last visited Sept. 26, 2016).

[170] See id.

[171] See How It Helps, Beagle, http://beagle.ai/, archived at https://perma.cc/WY6X-CGV8 (last visited Sept. 26, 2016).

[172] See id.

[173] See id.  

[174] See Legal Robot, http://www.legalrobot.com, archived at https://perma.cc/9NJE-JZ6J (last visited Sep. 22, 2016).

[175] See id.  

[176] See id. 

[177] See id.

[178] See Mohammad Raihanul Islam et al., Could Antonin Scalia be replaced by an AI? Researchers reveal system that can already predict how Supreme Court justices will vote, Daily Mail (Mar.11, 2016), http://www.dailymail.co.uk/sciencetech/article-3488508/Could-Antonin-Scalia-replaced-AI-Researchers-reveal-smart-predict-justices-vote.html, archived at https://perma.cc/3YWP-8SQ3.

[179] See Ephraim Nissan, Digital Technologies and Artificial Intelligence’s Present and Foreseeable Impact on Lawyering, Judging, Policing and Law Enforcement, AI & SOC’Y (Oct. 14, 2015), http://link.springer.com/article/10.1007/s00146-015-0596-5/fulltext.html, archived at https://perma.cc/ZJ6G-YRSE.

[180] See, About Us, Modria, http://modria.com/about-us/, archived at https://perma.cc/2PHR-RLH6 (last visited Sep. 22, 2016).

[181] See Peter Reilly, Mindfulness, Emotions, and Ethics in Law and Dispute Resolution: Mindfulness, Emotions, and Mental Models: Theory that Leads to More Effective Dispute Resolution, 10 Nev. L.J. 433, 438, 447 (2010).

[182] See Frequently Asked Questions, Modria, http://modria.com/faq/, archived at https://perma.cc/9JEM-FYKC (last visited Sep. 22, 2016).

[183] See Mark Wilson, The Latest in ‘Technology Will Make Lawyers Obsolete!‘, Findlaw (Jan. 6, 2015, 11:39 AM), http://blogs.findlaw.com/technologist/2015/01/the-latest-in-technology-will-make-lawyers-obsolete.html#sthash.nkz8BvRE.dpuf, archived at https://perma.cc/GX98-4LY7.

[184] See id.

[185] See Dominic Carman, ‘We’re not even at the fear stage’ – Richard Susskind on a very different future for the legal profession, LegalWeek (Nov. 16, 2015), http://www.legalweek.com/sites/legalweek/2015/11/16/were-not-even-at-the-fear-stage-richard-susskind-on-a-very-different-future-for-the-legal-profession/, archived at https://perma.cc/S3RA-MSVU.

[186] See Jane Croft, Legal firms unleash office automatons, Fin. Times (May 16, 2016), https://www.ft.com/content/19807d3e-1765-11e6-9d98-00386a18e39d, archived at https://perma.cc/5E6L-7E7K.

[187] See id.

[188] See Frank A. Pasquale & Glyn Cashwell, Four Futures of Legal Automation, 63 UCLA L. Rev Discourse 26, 28 (2015); see also, David Kravets, Law Firm Bosses Envision Watson-Type Computers Replacing Young Lawyers, Ars Technica (Sept. 26, 2015), http://arstechnica.com/tech-policy/2015/10/law-firm-bosses-envision-watson-type-computers-replacing-young-lawyers/, archived at https://perma.cc/J3TN-64R3 (discussing the possibility of IBM Watson-like computers replacing lawyers and paralegals within the next ten years).

[189] See Erik Sherman, ‘Highly Creative’ Professionals Won’t Lose their Jobs to Robots, Study Finds, Fortune (Apr. 22, 2015), http://fortune.com/2015/04/22/robots-white-collar-ai, archived at https://perma.cc/GH8F-7YEU.

[190] See Jacob Gershman, Could Robots Replace Jurors?, Wall St. J. L. blog (Mar. 6, 2013, 1:30 PM), http://blogs.wsj.com/law/2013/03/06/could-robots-replace-jurors/, archived at https://perma.cc/5LT5-JBAP.

[191] See Anthony D’Amato, Can/Should Computers Replace Judges?, 11 Ga. L. Rev. 1277, 1280–81 (1977).

[192] See id. at 1292.

[193] See Michael Horm, Disruption Looms For Law Schools, Forbes (Mar. 17, 2016, 8:23 AM), http://www.forbes.com/sites/michaelhorn/2016/03/17/disruption-looms-for-law-schools/#6f77e6002708, archived at https://perma.cc/JY9M-FJ53.

[194] See id.

[195] IBM Watson computer is an example of a machine with strong computation skills, represented in hardware, software and connectivity. See What is Watson?, IBM, http://www.ibm.com/watson/what-is-watson.html, archived at https://perma.cc/8WCK-X3G7 (last visited Oct. 31, 2016).

[196] Thomas S. Clay & Eric A. Seeger, 2015 Law Firms in Transition: An Altman Weil Flash Survey, Altman Weil, 55 (2015), http://www.altmanweil.com/index.cfm/fa/r.resource_detail/oid/1c789ef2-5cff-463a-863a-2248d23882a7/resources/Law_Firms_in_Transition_2015_An_Altman_Weil_Flash_Survey.cfm, archived at https://perma.cc/6YRC-BFBP.

[197] See id. at 82. 

[198] See id.

[199] See id. at 83.

[200] See Richard Susskind, Tomorrow’s Lawyers: An Introduction To Your Future 8 (2013).

[201] See id.

[202] See McGinnis & Pearce, supra note 114, at 3046.

[203] See Relativity, https://www.kcura.com/relativity/, archived at https://perma.cc/5N67-ERGV (last visited Nov. 7, 2016).

[204] See Overview, Modus, www.discovermodus.com/overview/, archived at https://perma.cc/8VRG-TRN6 (last visited Oct. 31, 2016).

[205] See Who We Are, OpenText, http://www.recommind.com/products/ediscovery-review-analysis, archived at https://perma.cc/3D9A-MBTW (last visited Oct. 31, 2016).

[206] See kCura, http://contentanalyst.com/, archived at https://perma.cc/E5NC-BWG4 (last visited Oct. 31, 2016).

[207] See Gordon V. Cormack & Maura R. Grossman, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (2014).

[208] See Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 193 (S.D.N.Y. 2012) (“This Opinion appears to be the first in which a Court has approved of the use of computer-assisted review. That does not mean computer-assisted review must be used in all cases, or that the exact ESI protocol approved here will be appropriate in all future cases that utilize computer-assisted review. . . What the Bar should take away from this Opinion is that computer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review.”); see also Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 126 (S.D.N.Y. 2015) (“This judicial opinion now recognizes that computer-assisted review [i.e., TAR][Technology Assisted Review] is an acceptable way to search for relevant ESI in appropriate cases.”); see also Dynamo Holdings Ltd. P’ship v. Comm’r, 143 T.C. 183, 190 (T.C. 2014) (“We find a potential happy medium in petitioners’ proposed use of predictive coding. Predictive coding is an expedited and efficient form of computer-assisted review that allows parties in litigation to avoid the time and costs associated with the traditional, manual review of large volumes of documents. Through the coding of a relatively small sample of documents, computers can predict the relevance of documents to a discovery request and then identify which documents are and are not responsive.”); see also, id at 191–92 (“Respondent asserts that predictive coding should not be used in these cases because it is an ‘unproven technology.’ We disagree. Although predictive coding is a relatively new technique, and a technique that has yet to be sanctioned (let alone mentioned) by this Court in a published Opinion, the understanding of e-discovery and electronic media has advanced significantly in the last few years, thus making predictive coding more acceptable in the technology industry than it may have previously been. In fact, we understand that the technology industry now considers predictive coding to be widely accepted for limiting e-discovery to relevant documents and effecting discovery of ESI without an undue burden.”).

[209] See Lexis, www.lexis.com, archived at https://perma.cc/L3T7-PTFG (last visited Oct. 31, 2016).

[210] See Westlaw, www.westlaw.com, archived at https://perma.cc/EU3U-6B6Q (last visited Oct. 31, 2016).

[211] See Mathieu d’Aquin & Enrico Motta, Watson, More Than a Semantic Web Search Engine, 2 Semantic Web 55 (2011), http://www.semantic-web-journal.net/sites/default/files/swj96_1.pdf, archived at https://perma.cc/NV5S-NNV9.

[212] See id.

[213] See ROSS Intelligence, supra note 152.

[214] See, e.g., Anthony Sills, ROSS and Watson Tackle the Law, IBM Watson Blog (Jan. 14, 2016), https://www.ibm.com/blogs/watson/2016/01/ross-and-watson-tackle-the-law/, archived at https://perma.cc/J4WZ-353U (“The ROSS application works by allowing lawyers to research by asking questions in natural language, just as they would with each other. Because it’s built upon a cognitive computing system, ROSS is able to sift through over a billion text documents a second and return the exact passage the user needs. Gone are the days of manually poring through endless Internet and database search result. . . Not only can ROSS sort through more than a billion text documents each second, it also learns from feedback and gets smarter over time. To put it another way, ROSS and Watson are learning to understand the law, not just translate words and syntax into search results. That means ROSS will only become more valuable to its users over time, providing much of the heavy lifting that was delegated to all those unfortunate associates.”).

[215] See McGinnis & Pearce, supra note 114, at 3050.

[216] See, e.g., Jon G. Sutinen & Keith Kuperan, A Socio-Economic Theory of Regulatory Compliance, 26 Int’l J. Soc. Econ. 174, 174–75 (1999).

[217] See Neota Logic, http://www.neotalogic.com/, archived at https://perma.cc/LW4M-LRBE (“Applications created in Neota Logic are executed by the Reasoning Engine, which contains many integrated, hybrid reasoning methods. All reasoning methods are automatically integrated and prioritized.” Neota Logic claims to “[e]nsure compliance with regulations, policies, and procedures” and to “[m]eet changing requirements, rapidly and inexpensively.”).

[218] See ComplianceHR, http://compliancehr.com/, archived at https://perma.cc/GD3Q-SD6Z (“[ComplianceHR is] a revolutionary approach to employment law compliance designed by, and for, legal professionals. Our unique suite of intelligent, web-based compliance applications, covering all U.S. jurisdictions, combine the unparalleled experience and knowledge of Littler, the world’s largest global employment law practice, with the power of Neota Logic’s expert system software platform.”).

[219] See Foley Global Risk Solutions, https://www.foley.com/grs/, archived at https://perma.cc/45EB-45EL (last visited Oct. 31, 2016).

[220] See McGinnis & Pearce, supra note 114, at 3050.

[221] See id. at 3052.

[222] See eBrevia, http://ebrevia.com/#overview/, archived at https://perma.cc/FP9X-CHXZ (last visited Oct. 31, 2016) (“eBrevia uses industry-leading artificial intelligence, including machine learning and natural language processing technology, developed at Columbia University to extract data from contracts, bringing unprecedented accuracy and speed to contract analysis, due diligence, and lease abstraction.”).

[223] See Legal Sifter, https://www.legalsifter.com/, archived at https://perma.cc/DZP4-DAZN (last visited Oct. 31, 2016).

[224] See McGinnis & Pearce, supra note 114 at 3046.

[225] See Wim Voermans, Lex ex Machina: Using Computertechnology for Legislative Drafting, 5 Tilburg Foreign L. Rev. 69, 69 (1996).

[226] See Lyria Bennett Moses & Janet Chan, Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools, 37 U. New South Wales L. J. 643, 644 (2014).

[227] See Lexpredict, https://lexpredict.com/, archived at https://perma.cc/LBD3-X5K7 (last visited Sept. 21, 2016).

[228] See Casey Sullivan, AIG to Launch Data-Driven Legal Ops Business in 2016, Bloomberg Law (Oct. 20, 2015), https://bol.bna.com/aig-to-launch-data-driven-legal-ops-business-in-2016/, archived at https://perma.cc/9TU7-Q23A.

[229] See, e.g., Lex Machina, https://lexmachina.com/legal-analytics/, archived at https://perma.cc/G5J4-Z53Q (last visited Oct. 31, 2016) (illustrating the levels of specificity described above, such as a particular judge’s likelihood of ruling on a specific motion).

[230] See McGinnis & Pearce, supra note 114 at 3046.

[231] See Deborah L. Rhode, Access to Justice, 69 Fordham L. Rev. 1785, 1785 (2001).

[232] See Susskind, supra note 200, at 3.

[233] See Raymond T. Brescia, What We Know and Need to Know About Disruptive Innovation, 67 S. Carolina L. Rev. 203, 206 (2016).

[234] See id.

[235] See id. at 213.

[236] See id. at 222.

[237] See Liz Stinson, This Tool Makes it Stupid Simple to Turn Data into Charts, Wired (Apr. 8, 2016, 2:15 PM), https://www.wired.com/2016/04/tool-makes-turning-data-charts-stupid-simple/, archived at https://perma.cc/8LKT-YB86.

[238] Unattributed.

[239] See, e.g., Golan v. Holder, 132 S. Ct. 873, 890 (2012) (noting that the fair use doctrine serves as one of copyright law’s built-in First Amendment accommodations); see Stewart v. Abend, 495 U.S. 207, 236 (1990) (citations omitted) (noting how the fair use doctrine “permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster”). See generally Daniel P. Fernandez et al., Copyright Infringement and the Fair Use Defense: Navigating the Legal Maze, 27 U. Fla. J.L. & Pub. Pol’y 135, 137 (2016) (analyzing the issues that are presented when dealing with copyrighted materials within the scope of the fair use defense); see Joseph P. Liu, Fair Use, Notice Failure, and the Limits of Copyright as Property, 96 B.U. L. Rev. 833, 834 (2016) (identifying and discussing the relationship between the fair use doctrine and notice failure); see Hannibal Travis, Free Speech Institutions and Fair Use: A New Agenda for Copyright Reform, 33 Cardozo Arts & Ent. L.J. 673, 677 (2015) (exploring the idea that ongoing issues in the area of copyrights are directly and negatively affecting free speech).

[240] See Stanford Copyright and Fair Use Center, http://fairuse.stanford.edu/, archived at https://perma.cc/UM3X-399J (last visited Sept. 23, 2016).

[241] See Fair Use Checklist, Cornell University, http://copyright.cornell.edu/policies/docs/Fair_Use_Checklist.pdf, archived at https://perma.cc/FTV4-3LZK (last visited Oct. 31, 2016).

[242] See Copyright Act of 1976, Pub. L. No. 94-553 (1976) (codified at 17 U.S.C. § 101–810 (2016)).

[243] See 17 U.S.C. § 102 (2016).

[244] See 17 U.S.C. § 104 (2016).

[245] See 17 U.S.C. § 104(a) (2016).

[246] See 17 U.S.C. § 104(b) (2016).

[247] See 17 U.S.C. § 106 (2016).

[248] See 17 U.S.C. § 108 (2016).

[249] See 17 U.S.C. § 109 (2016).

[250] See 17 U.S.C. § 107 (2016).

[251] Harper & Row, Publishers, Inc. v. Nation Enterprises, 471 U.S. 539, 549 (1985).

[252] See, e.g., Folsom v. Marsh, 9 F. Cas. 342, 344–45 (C.C.D. Mass. 1841); see also Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 577 (1994) (noting that “Congress meant § 107 ‘to restate the present judicial doctrine of fair use, not to change, narrow, or enlarge it in any way’ and intended that courts continue the common-law tradition of fair use adjudication.”).

[253] 17 U.S.C. § 107 (2016).

[254] See Campbell, 510 U.S. at 577-78.

[255] Id. at 578.

[256] *We have added more categories because some sentences do not relate to any distinct factor and serve as ‘negative’ language from which the computer can distinguish between relevant data and irrelevant data.

[257] See Nicole Bradick, All Rise: The Era of Legal Startups is Now in Session, Venture Beat (Apr. 13, 2014, 8:32 AM), http://venturebeat.com/2014/04/13/all-rise-the-era-of-legal-startups-is-now-in-session/, archived at https://perma.cc/YE29-7K6L.

Airbus Flying Car Prototype Announced: How Will the Law Adapt?


By: Will MacIlwaine,

In 2016, Urban Air Mobility, a division of Airbus Group, began looking into the possibility of self-flying vehicles.[1] On January 16, Airbus Chief Executive Officer Tom Enders announced that the company plans to test a prototype of a self-flying taxi for a single passenger by the end of 2017.[2] The company’s flying taxi system will be called CityAirbus, and customers will be able to book a taxi using a smartphone device.[3]

While Airbus plans to have a taxi prototype ready by the end of this year, it also hopes to have models of its flying vehicle for sale as early as 2020.[4] The benefits of flying vehicles seem abundant, the two most obvious being avoidance of congested roadways and potentially faster travel times. Aside from the sheer convenience of a flying car, Mr. Enders believes that a product such as his company’s prototype could decrease costs for city infrastructure planners, as flying cars would not travel on roads or bridges that are often costly to maintain and repair.[5] Further, air pollution could be reduced significantly in a move toward flying vehicles, as Airbus is committed to making its flying vehicles fully electric.[6]

As intriguing as this idea may seem, there are certainly issues that will need to be addressed, as well as potential legal ramifications that could arise through the introduction of this product. Airbus believes the biggest task its team will face is making its CityAirbus taxi fly on its own, without a pilot.[7] Tesla has introduced a similar autopilot feature for its Tesla Model S automobile, but has faced criticism as reports of accidents have surfaced in the past year. Enders’ team faces an even taller task: ensuring that its autopilot feature is successful in the air.

The CityAirbus technology could also give rise to a variety of potentially disastrous lawsuits. For one, if two CityAirbus taxis crash into each other, how is liability determined? The passengers in the flying vehicles would presumably not be liable, as the passenger is not the one operating the self-flying car. Airbus would most likely bear legal responsibility for these accidents. This liability could also extend further, encompassing situations in which Airbus vehicles malfunction and damage buildings or, worse, injure the passengers of the flying cars.

Regarding the risk of injury while using a CityAirbus taxi, it is likely that passengers would be given extensive warnings about the dangers and risks of using the vehicles. If the user sees these warnings and understands the dangers inherent in flying cars, yet still voluntarily decides to ride in the vehicle, wouldn’t this amount to implied assumption of risk and bar any negligence claims by the passenger against Airbus?

Further, a new legal framework would need to be developed for flying cars. Would flying cars have to abide by speed limits? Would owners of these vehicles who purchase them in 2020 have to obtain a “flying license,” even though the vehicle is self-operated? Would flying cars need insurance just like ordinary cars? Would the federal government regulate all of these things, or would the states be responsible for creating guidelines for flying cars?[8]

These are not the only legal questions surrounding flying vehicles. Would there be restricted areas where flying cars could not travel, such as around airports? If so, how would these regulations be legally enforced, when law enforcement officials are busy fulfilling their duties on the ground? Cities and states might be required to purchase similar flying vehicles so that their law enforcement officers could travel in them to enforce these regulations in the air. Wouldn’t this offset, and likely exceed, the cost savings for city infrastructure planners that Mr. Enders predicted? While only hypothetical questions today, these legal issues will likely arise eventually if the Airbus team is successful in introducing its prototype by the end of this year.

Flying cars could certainly offer obvious advantages, but it seems that Mr. Enders and his team have many questions to consider in the development of CityAirbus if the company is to ensure that its potentially historic technological advancement does not turn into a legal nightmare.

 

 

[1] See Forget Self-Driving Cars: Airbus Will Test a Prototype Flying-Taxi by the End of This Year, Reuters, Jan. 16, 2017, http://www.dailymail.co.uk/sciencetech/article-4124412/Airbus-CEO-sees-flying-car-prototype-ready-end-year.html.

[2] See id.

[3] See id.

[4] See id.

[5] See Victoria Bryan, Airbus CEO Sees ‘Flying Car’ Prototype Ready by End of Year, Reuters, Jan. 16, 2017, http://www.reuters.com/article/us-airbus-group-tech-idUSKBN1501DM.

[6] See Jay Bennett, Airbus Wants to Test its Flying Car Prototype This Year, Popular Mechanics, Jan. 16, 2017, http://www.popularmechanics.com/flight/a24780/airbus-test-its-flying-car-prototype-2017/.

[7] See Forget Self-Driving Cars, supra note 1.

[8] See Cory Smith, Soaring to New Heights: Flying Cars and the Law, Michigan Telecomm. & Tech. L. Rev., Oct. 22, 2015, http://mttlr.org/2015/10/22/soaring-to-new-heights-flying-cars-and-the-law/.

Image Source: http://i2.cdn.turner.com/money/dam/assets/161020184223-airbus-flying-car-4-780×439.jpg.

Genetic Testing Thriving, But Law Lags Behind


By: Ryan Martin,

Genetic testing is a fast-growing area of medical technology, but in many respects the legal world has yet to match this pace.[1] These genetic tests can detect the likelihood of developing Celiac disease, Alzheimer’s, and various forms of cancer.[2] One of the most popular uses of genetic testing has been to detect gene mutations that lead to breast cancer. Although breast cancer rates remained the same between 2005 and 2013, a recent study reported that the rate of women receiving mastectomies increased 36 percent during that time.[3] Certainly this increase was partly aided by Angelina Jolie’s disclosure of her own genetic breast cancer test results and subsequent double mastectomy.[4] More broadly, the trend shows a growing willingness to use genetic testing as a preventative health method. While this testing could save countless lives, it raises new concerns about medical malpractice cases, the privacy of one’s genetic data, and how insurance companies can use this data.

Under traditional malpractice claims, physicians are already at risk of being sued for failure to evaluate a lump, perform or analyze a mammogram, or perform or analyze a biopsy.[5] In these malpractice cases, the proper standard of care is determined by the facts that were known, or ought to have been known, at the time.[6] An action cannot succeed unless the diagnosis or treatment failed to comply with the recognized standard of medical care exercised by physicians in the same specialty, under the same or similar circumstances.[7] But what happens when different physicians are using different services to acquire these genetic tests, and what if it is publicized that certain tests contain serious flaws? Furthermore, if the results of a test turn out to be incorrect, could a doctor be liable for having recommended that specific company?

Myriad Genetics has been the leader in genetic testing that identifies women who have an increased chance of developing breast cancer. Its test detects dangerous forms of the BRCA1 or BRCA2 genes that lead to a substantially higher risk of cancer.[8] If either of these genes is mutated, a woman’s risk of developing breast cancer can soar to as high as 85 percent.[9] For years Myriad was the sole distributor of BRCA testing and made more than $2 billion from its BRCA tests.[10] However, the U.S. Supreme Court invalidated its patents, which opened the door for other companies to begin offering the same test at significantly lower prices.[11]

As recently reported, Myriad is now advising that these other companies’ tests have significant flaws and miss these deadly mutations. Myriad spokesman Ron Rogers suggested, “We don’t know how many people are affected, but we believe it’s hundreds of thousands.”[12] Myriad clearly has an interest in portraying its competitors’ tests as inferior, but the dispute raises unanswered questions as to what the recognized standard of care is in the field of genetic testing. As always, to reduce the risk of malpractice suits, physicians should advise patients of any risks associated with genetic testing as well as any surgeries performed because of those results.

There is also the issue of what access someone’s family should have to their genetic results. There are currently no laws that say what a patient can or cannot do with their own genetic information.[13] However, what if that information could lead to a critical medical diagnosis in a child or sibling? Alternatively, someone who publicizes their genetic information—like Angelina Jolie—could be divulging genetic information of their family members. This raises privacy concerns and issues over who owns that information.[14]

The law has yet to catch up in the insurance realm as well. Under the Genetic Information Nondiscrimination Act (GINA), health insurance companies are generally barred from denying coverage to people with a gene mutation.[15] However, the law does not encompass life insurance, long-term care insurance, or disability insurance.[16] These insurers can ask about family history of disease and genetic information and are authorized to deny coverage if the person is deemed too risky.[17] While GINA applies at the federal level, various states, such as California, have passed legislation prohibiting discrimination based on genetic testing results.[18]

Ultimately, the world of technology is moving too fast for the legal world to keep pace. Courts must address these issues as they arise to give a sense of how genetic testing will be assessed in malpractice cases, and state legislatures should take the lead in addressing the issues posed by genetic testing and insurance.

 

 

 

[1] See Genetic Testing Market Headed for Growth and Global Expansion by 2020 – Persistence Market Research, medGadget, (Nov. 14, 2016), http://www.medgadget.com/2016/11/genetic-testing-market-headed-for-growth-and-global-expansion-by-2020-persistence-market-research.html.

[2] See id.

[3] See Gillian Mohney, Mastectomies Increased 36 Percent From 2005 to 2013, Report Finds, ABC News, (Feb. 22, 2016), http://abcnews.go.com/Health/mastectomies-increased-36-percent-2005-2013-report-finds/story?id=37116791.

[4] See Alexandra Sifferlin, Angelina Jolie’s Surgery May Have Doubled Genetic Testing Rates at One Clinic, TIME, (Sept. 2, 2014), http://time.com/3256718/angelina-jolie-genetic-testing/.

[5] See Medical Malpractice in Diagnosis and Treatment of Breast Cancer, 92 A.L.R.6th 379.

[6] See id.

[7] See id.

[8] See Sharon Begley, As revenue falls, a pioneer of cancer gene testing slams rivals with overblown claims, STAT, (Nov. 29, 2016), https://www.statnews.com/2016/11/29/brca-cancer-myriad-genetic-tests/.

[9] See Dr. Ian Shyaka, Breast cancer: Early detection is the key to survival, The New Times, (Nov. 14, 2016), http://www.newtimes.co.rw/section/article/2016-11-14/205322/.

[10] See Begley, supra note 8.

[11] See Leo O’Connor, Experts Debate MDx Industry Impact of AMP v Myriad Three Years After Court’s Decision, genomeweb, (Nov. 15, 2016), https://www.genomeweb.com/business-news/experts-debate-mdx-industry-impact-amp-v-myriad-three-years-after-courts-decision.

[12] See Begley, supra note 8.

[13] See Emily Mullin, Do your Family Members Have a Right to Your Genetic Code?, MIT Tech. Rev., (Nov. 22, 2016), https://www.technologyreview.com/s/602946/do-your-family-members-have-a-right-to-your-genetic-code/.

[14] See Privacy in Genomics, Nat’l Human Genome Research Inst., (Apr. 21, 2015), https://www.genome.gov/27561246/privacy-in-genomics/.

[15] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881 (2008).

[16] See Christina Farr, If You Want Life Insurance, Think Twice Before Getting A Genetic Test, Fast Company, (Feb. 17, 2016), https://www.fastcompany.com/3055710/if-you-want-life-insurance-think-twice-before-getting-genetic-testing.

[17] See id.

[18] See id.

Image Source: https://www.nursingtimes.net/attachment?storycode=5074974&attype=P&atcode=1288573

RegTech: A Solution for Banks or Just Another Hurdle?


By: Cambridge Lestienne,

There is no question about it, we are officially in the midst of a FinTech revolution.

FinTech – the commonly used shorthand for financial technology – is the intersection between financial services and technology.[1] Applicable to various business areas, it involves the use of cutting-edge technology to design and deliver financial services and products tailored to each business and customer base.[2] Though relevant to big banks and other businesses alike, the area is predominantly characterized by countless focused start-ups.[3] You have likely heard the term a lot lately in light of the boom the sector has experienced in recent years. Back in 2010, when FinTech was first making its way onto the scene, investments in the sector were valued at $1.8 billion.[4] As of mid-August 2016, investments had reached as high as $15 billion.[5] While this boom in investment has been focused on lending and payments, other areas, such as insurance, wealth management, and corporate finance, have yet to be as significantly impacted.[6] The reason is the heightened regulation that has applied to these areas since the financial crisis of 2008.[7] Enter RegTech.

In parallel with its aptly named predecessor, RegTech refers to the intersection between regulation and technology, used to address regulatory challenges faced in the financial industry.[8] As regulation of the financial services industry has become more and more stringent, and the focus of compliance programs has shifted to data and reporting, investment in RegTech firms is becoming increasingly valuable.[9] One of the driving factors in such investment has been cost reduction. It is estimated that some of the world’s largest and most notable banks, such as HSBC, Deutsche Bank, and JPMorgan Chase, spend in excess of $1 billion annually on implementing compliance and controls related to regulation.[10] Cost savings are not the only benefit for companies investing in RegTech. In a recent publication, Ernst & Young identified eight compelling benefits of incorporating RegTech into current compliance and risk management practices.[11] These benefits were broken down into short-term and long-term categories. The short-term benefits were identified as: (1) reduced cost of compliance; (2) sustainable and scalable solutions; (3) advanced data analytics; and (4) risk and control convergence.[12] In the long term, E&Y found that RegTech would benefit companies through: (1) a positive customer experience; (2) increased market stability; (3) improved governance; and (4) enhanced regulatory reporting.[13]

While the benefits of investing in RegTech may be numerous and compelling, banks and other institutions should be sure to do their due diligence.[14] Because the industry is so highly regulated, it will be important that banks keep regulators in the loop as they form relationships with these new RegTech firms.[15] Despite this caution, the Office of the Comptroller of the Currency, one of the primary regulators of national banks and federal thrifts, has noted that there is significant opportunity for technology to reduce costs and increase efficiency,[16] specifically with respect to the Bank Secrecy Act and anti-money-laundering compliance.[17] Only time will tell whether we will see the same boom in RegTech as we have in FinTech. Though seemingly primed for success, much will depend on how the regulatory environment changes, if at all, under the incoming Trump administration.

 

 

[1] See PricewaterhouseCoopers LLP, FinTech Q&A, 3 (2016), https://www.pwc.com/us/en/financial-services/publications/viewpoints/assets/pwc-fsi-what-is-fintech.pdf.

[2] See Matthew Blake, Dustin Hughes, Peter Vanman, 5 Things You Need to Know about FinTech, World Econ. Forum (Apr. 20, 2016), https://www.weforum.org/agenda/2016/04/5-things-you-need-to-know-about-fintech/.

[3] See Deloitte, RegTech is the New FinTech: How Agile Regulatory Technology is Helping Firms Better Understand and Manage their Risks, 4 (2016), https://www2.deloitte.com/content/dam/Deloitte/ie/Documents/FinancialServices/IE_2016_FS_RegTech_is_the_new_FinTech.pdf.

[4] See Nikolai Kuznetsov, What’s Next for FinTech?, Forbes (Nov. 22, 2016, 10:22 AM), http://www.forbes.com/sites/nikolaikuznetsov/2016/11/22/the-next-phase-in-fintech/#4379a6554a29.

[5] See 54% of Incumbents Say FinTech Partnerships Have Boosted Revenue, Bus. Insider (Nov. 28, 2016, 12:14 PM), http://www.businessinsider.com/54-of-incumbents-say-fintech-partnerships-have-boosted-revenue-2016-11.

[6] See Kuznetsov, supra note 4.

[7] See id.

[8] See Deloitte, supra note 3.

[9] See id.

[10] See Martin Arnold, Market Grows for ‘RegTech’, or AI for Regulation: Artificial Intelligence and Biometrics Help Banks Comply with Rules, Fin. Times (Oct. 14, 2016), https://www.ft.com/content/fd80ac50-7383-11e6-bf48-b372cdb1043a.

[11] See EY, Innovating with RegTech: Turning Regulatory Compliance into a Competitive Advantage, 8-9 (2016), http://www.ey.com/Publication/vwLUAssets/EY-Innovating-with-RegTech/$FILE/EY-Innovating-with-RegTech.pdf.

[12] See id. at 8.

[13] See id. at 9.

[14] See Katie Wechsler & Zachary Luck, The Federal FinTech Promised Land, 19 No. 4 Fintech L. Rep. NL 2, (2016).

[15] See Matthias Memminger, Mike Baxter, Edmund Lin, You’ve Heard of FinTech, Get Ready for ‘RegTech’, Am. Banker (Sept. 7, 2016), http://www.americanbanker.com/bankthink/youve-heard-of-fintech-get-ready-for-regtech-1091148-1.html.

[16] See Wechsler & Luck, supra note 14.

[17] See id.

Image Source: https://transficc.com/images/_800xAUTO_fit_center-center_90/RegTech.png

YouTube, You Disclose


By: Brad Stringfellow,

 

The FTC is cracking down on social media ads and seeking to enforce proper disclosure amongst celebrity endorsers and advertisers. The means by which people connect with one another seem to be expanding at an ever-increasing rate. As conventional media loses its predominant grasp on consumer attention, advertisers have sought alternative means to promote their products. The FTC is doing its best to step in and regulate advertisements across all platforms, including social media.

 

Under the Federal Trade Commission Act, the FTC has been granted the broad authority to regulate “unfair or deceptive practices in or affecting commerce.”[1] FTC guidelines can be boiled down to four basic principles that all advertisements must meet: “1) Advertisements must be truthful and not misleading; 2) Advertisements may not be unfair or deceptive; 3) Advertisers must substantiate all claims, whether express or implied; 4) Any disclosures necessary to make an advertisement accurate must be clear and conspicuous.”[2]

 

The FTC has put considerable effort into adapting to changes in the digital landscape, and it has done a reasonable job of doing so for a federal agency. The most recent change, addressing deceptive formatting, was put into place in December 2015.[3] The FTC has been focusing heavily on proper disclosures across all social media. The agency has created a handy, non-technical guide, posted on its website, to help people understand how to properly disclose promoted material.[4]

 

Social media stars who promote products or services are dubbed influencers.[5] Because of the personal nature of social media accounts and their much more informal atmosphere, disclosures are all the more important so the public knows when an influencer is sharing an honest opinion or hawking a product. The FTC guide gives a handy example: compare the weight you would give to the opinion of a travel blogger who paid out of pocket to stay at a resort with the opinion of a travel blogger paid by the resort.[6]

 

Individuals are obligated to follow the guidelines if they have a “material connection” to an advertiser; this can be met by receiving gifts or being related to someone at the company.[7] The guide requires that a disclosure be made when the lack of one would mislead “a significant minority” of consumers.[8]

 

Anticipating questions about constrained formats such as Twitter, the guide points out that “sponsored” and “promotion” are only nine characters, “paid ad” is only seven, and “#ad” or starting the tweet with “Ad:” is only three: the FTC does not require specific words to meet the disclosure threshold, but it does provide simple suggestions for fulfilling it.[9] In situations where a disclosure is not possible, such as adding a “Like” to a company or product, the FTC recommends against such acts.[10] It is interesting to note that making a three-hour video endorsing a product is acceptable with one spoken sentence disclosing your relationship, but a simple tap of your thumb giving a heart symbol to a product page is impossible to disclose.
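
To make the character math concrete, the following is a minimal Python sketch of a disclosure check for a draft tweet. It assumes the example markers drawn from the guide as described above and Twitter’s 140-character limit at the time; the function names and sample tweet are hypothetical, and the FTC does not prescribe any particular wording or tool.

```python
# Minimal sketch (hypothetical): checking a draft tweet for an FTC-style
# disclosure marker. The marker list mirrors the examples discussed above
# ("#ad", "Ad:", "sponsored", "paid ad", "promotion"); 140 reflects Twitter's
# character limit at the time.
DISCLOSURE_MARKERS = ["#ad", "ad:", "sponsored", "paid ad", "promotion"]
TWEET_LIMIT = 140

def has_disclosure(tweet: str) -> bool:
    """Return True if the tweet already contains one of the example markers."""
    text = tweet.lower()
    return any(marker in text for marker in DISCLOSURE_MARKERS)

def add_disclosure(tweet: str, marker: str = "#ad") -> str:
    """Append a marker if one is missing, provided the result still fits the limit."""
    if has_disclosure(tweet):
        return tweet
    candidate = f"{tweet} {marker}"
    if len(candidate) > TWEET_LIMIT:
        raise ValueError("Tweet too long to append a disclosure; shorten the copy.")
    return candidate

if __name__ == "__main__":
    draft = "Loving my new trail shoes -- lightest pair I've ever worn!"
    print(add_disclosure(draft))  # appends " #ad" because no marker is present
```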

 

In order to demonstrate the seriousness with which it takes these guidelines, the FTC has pursued several social media campaigns that made insufficient efforts to disclose. The FTC recently reached a settlement with Warner Brothers over the promotion of a video game advertised by several YouTube influencers.[11] Although Warner Brothers, through an ad agency, instructed the influencers to include a disclosure statement, it was buried in the video description box; the FTC found this insufficient and pursued a civil action.[12] Earlier in the year, the FTC went after the clothing retailer Lord & Taylor, which used fashion bloggers to promote a sundress from a new line.[13] While the company instructed the influencers to include the company name and dress line in their Instagram posts, the FTC found the disclosure insufficient because it provided no indication that the influencers were paid for the posts.[14]

 

Several media watchdog groups have also taken action to help enforce the guidelines. Three such groups have filed complaints with the FTC regarding sponsored content targeted at children.[15] Following the reasoning behind the harsh censure of advertisers on Saturday morning cartoons in the 1980s, the groups petition for strict guidelines or an outright ban on sponsored content coming from Disney and DreamWorks through various agents.[16] The FTC has not yet taken action.[17] Likewise, another watchdog group found over 100 instances of paid product placement with improper disclosure by various members of the Kardashian family.[18] The Kardashians were given the option to delete improperly disclosed posts or face being turned in to the FTC.[19]

 

A few companies are recognizing the FTC’s efforts and are taking a proactive approach to promoting disclosure. As of last month, YouTube has updated its sponsored-content guidelines.[20] It has also added a few new features to help promote disclosure, such as the option to add a “sponsored content” line at the beginning of a video.[21] Electronic Arts (EA), a gaming software company, has also put new policies in place.[22] EA now mandates that any influencer receiving any kind of benefit (free software, paid trips, gifts, etc.) add watermarks or hashtags: “Supported by EA” for content over which EA has no editing rights, and “Advertisement EA” for content over which EA does have editing rights.[23]

 

The FTC has established standards for the social media world and has begun enforcing its policy. Advertisers are taking notice and beginning to act. Hopefully, consumers will benefit and be better able to recognize when they are being advertised to.

 

 

 

 

[1] 15 USC § 45(a)(2).

[2] See Michael W. Schroeder, The FTC’s Crackdown on Social Media #Ads, Lexology (Nov. 3, 2016), http://www.lexology.com/library/detail.aspx?g=a62ff1c5-c8fc-45f7-9eea-903be4ecac58.

[3] See id.

[4] See Fed. Trade Comm’n, The FTC’s Endorsement Guides: What People are Asking (last visited Nov. 23, 2016), https://www.ftc.gov/tips-advice/business-center/guidance/ftcs-endorsement-guides-what-people-are-asking.

[5] See id.

[6] See id.

[7] See id.

[8] See id.

[9] See Fed. Trade Comm’n, supra note 4.

[10] See id.

[11] See Wendy Davis, Warner Bros. Finalize FTC Settlement Over Influencer Campaign, The Daily Online Examiner (Nov. 22, 2016, 5:14 PM), http://www.mediapost.com/publications/article/289543/warner-bros-finalizes-ftc-settlement-over-influen.html.

[12] See id.

[13] See id.

[14] See id.

[15] See Jon Fingas, FTC Complaint Blasts Disney, Google over Child Influencer Videos, Engadget (Oct. 24, 2016), https://www.engadget.com/2016/10/24/ftc-complaint-over-influencer-videos-targeting-kids/.

[16] See id.

[17] See id.

[18] See Janko Roettgers, Kardashians in Trouble Over Paid Product Endorsements on Instagram, Variety (Aug. 22, 2016, 10:52 AM), http://variety.com/2016/digital/news/kardashians-instagram-paid-ads-product-placements-1201842072/.

[19] See id.

[20] See YouTube, Paid Product Placements and Endorsements (last visited Nov. 23, 2016), https://support.google.com/youtube/answer/154235#paid_promotion_disclosure.

[21] See id.

[22] See Julia Alexander, EA puts Influencers in Check with Disclosure Rules for Sponsored Content, Polygon (Nov. 16, 2016, 4:00 PM), http://www.polygon.com/2016/11/16/13655180/ea-sponsored-content-youtube-twitch-disclosure.

[23] See id.

 

Image Source: https://14415-presscdn-0-52-pagely.netdna-ssl.com/wp-content/uploads/2014/01/Untitled3.png

Fake News on Facebook: Did it help put Donald Trump in the White House?


By: Kaley Duncan,

 

BREAKING: “Surgeon General Warns: Drinking every time Trump lies during debate could result in acute alcohol poisoning.”[1] Would you click on it? This news article first appeared on the media outlet Raw Story and was shared by 243,371 people.[2] To some, this might be funny, but studies show that the general public is putting stock in fictional articles like this one.[3]

 

Dissemination of fake news articles posted to social media sites is on the rise. The most recent election cycle saw a host of hyperpartisan articles that were false or misleading.[4] In fact, fake news outperformed real news in the final months leading up to the 2016 presidential election.[5] A study by BuzzFeed’s Craig Silverman found that “hyperpartisan Facebook pages are publishing false articles and misleading information at an alarming rate.”[6] Silverman analyzed Facebook users’ engagement by measuring the number of likes, reactions, and shares any given article received.[7] The study showed that mainstream news outlets – which did not post any “mostly false” content – received much less user engagement than misleading and false news sources.[8] Occupy Democrats, a left-wing site that boasts 4 million fans, put out false or misleading articles 20.1% of the time.[9] Freedom Daily, a right-wing site with 1.3 million fans, did so 46.4% of the time.[10] Both Occupy Democrats and Freedom Daily received much more Facebook user engagement than any other site in the study, including mainstream news.[11]
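
For readers curious how such an engagement comparison works mechanically, the short Python sketch below mirrors the measure described above: total engagement as likes plus reactions plus shares per post, averaged by page. The page names and numbers are hypothetical placeholders, not figures from the BuzzFeed study.

```python
# Minimal sketch (hypothetical data): the engagement measure described above,
# treating total engagement per post as likes + reactions + shares and then
# averaging by page. None of these numbers come from the actual study.
posts = [
    {"page": "Hyperpartisan Page A", "likes": 9000, "reactions": 2500, "shares": 4100},
    {"page": "Hyperpartisan Page A", "likes": 1200, "reactions": 300, "shares": 450},
    {"page": "Mainstream Page B", "likes": 800, "reactions": 150, "shares": 200},
]

def engagement(post: dict) -> int:
    """Total engagement for one post: likes + reactions + shares."""
    return post["likes"] + post["reactions"] + post["shares"]

def average_engagement_by_page(all_posts: list) -> dict:
    """Average per-post engagement for each page."""
    totals = {}
    for post in all_posts:
        totals.setdefault(post["page"], []).append(engagement(post))
    return {page: sum(vals) / len(vals) for page, vals in totals.items()}

if __name__ == "__main__":
    for page, avg in average_engagement_by_page(posts).items():
        print(f"{page}: average engagement {avg:,.0f}")
```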

 

So who are these fake news reporters, and what are they gaining from misleading the public? The Washington Post interviewed Paul Horner, a fake news writer who claims Donald Trump is in the White House because of him.[12] Horner makes around $10,000 a month writing and posting fake news stories.[13] He says ad services like Google AdSense pay to keep his business going.[14] His stories generate a lot of user clicks, which ad companies are willing to pay top dollar for.[15] The reason his business is doing so well? “…There’s nothing you can’t write about now that people won’t believe,” said Horner. “I can write the craziest thing about Trump, and people will believe it. They don’t fact-check.”[16] In November, Horner posted a story about a protester who got paid $3,500 to protest a Trump rally; the fake story got picked up and retweeted by Trump’s campaign manager, Corey Lewandowski.[17] “I made that up. I’ve gone to Trump protests – trust me, no one needs to get paid to protest Trump,” said Horner in reference to the story.[18] The influx of fake stories led some, including Horner, to believe that the articles influenced Facebook users and thereby impacted the election results.[19] Facebook founder Mark Zuckerberg disagrees: “I think the idea that fake news on Facebook, which is a very small amount of content, influences the election in any way…is a pretty crazy idea.”[20]

 

According to some sociologists, Zuckerberg may be right. A phenomenon called confirmation bias suggests that people often click on articles that validate their existing beliefs.[21] Facebook’s algorithm is designed to post articles to users’ walls that are consistent with their interests.[22] If this is true, fake articles may not have persuaded anyone but merely reinforced beliefs readers already held.[23] On the other hand, some believe that the sharing of hyperpartisan stories containing false information could further polarize an already divided nation.[24] If nothing else, fake stories will likely add to the growing distrust of the media.[25]

 

A study by the Pew Research Center states that 61% of millennials rely on Facebook for their political news.[26] With such heavy reliance, many agree that some form of regulation needs to be implemented. The question is how. Some governments abroad block Facebook and other forms of social media during election cycles.[27] Such an extreme solution would not be acceptable in a democratic society. Some have suggested the answer is to reinstitute some form of the Fairness Doctrine for social media.[28] The Fairness Doctrine was introduced by the Federal Communications Commission in 1949 and required broadcast licensees to cover issues of public importance fairly.[29] This meant that when covering political news, broadcasters had to give equal air time to both sides.[30] While this may seem like a good idea, many scholars believe that the Fairness Doctrine violated freedom of speech and stifled diversity in the media, which is ultimately why it was repealed in 1987.[31]

While the Fairness Doctrine may not be the answer, big companies are looking for ways to reform. Google announced that it was going to cut fake news sites off from its advertising services in the hope that such reporting will dry up without adequate funding.[32] Facebook’s Zuckerberg is more hesitant, possibly because Facebook has been criticized in the past for allegedly suppressing conservative news stories.[33] Since then, the company has been careful to avoid skewing its trending page results.[34] “Identifying the ‘truth’ is complicated…I believe we must be extremely cautious about becoming arbiters of truth ourselves,” said Zuckerberg in a recent post on Facebook responding to the public’s demand for reform.[35]

 

Some are looking for less drastic, alternative solutions. For instance, a group of college students came up with a program they call FiB, which uses an algorithm to identify and flag potentially fake or misleading articles.[36] Once a fake article is identified, the program provides the user with a list of more credible sources from which to gather information.[37] The program is not yet fully developed, but it could be a promising solution to the fake news epidemic.[38] Until then, social media users must be wary of their media consumption. Communications experts Dr. Melissa Zimdars and Alexios Mantzarlis advise readers to beware of highly partisan news and shocking headlines, and to maintain a healthy amount of skepticism when reading articles posted to Facebook or other forms of social media.[39] In our two-way communication system, we as the audience must demand more from our news. Clicking on click-bait articles with flashy headlines will only feed the growing fake news epidemic that has distorted the free flow of information.
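
The post does not describe FiB’s actual algorithm in detail, but one simple form of source-based flagging might look like the hypothetical Python sketch below: it checks a shared link’s domain against a curated low-credibility list and, when a link is flagged, suggests more established outlets. The domain lists and function names are invented for illustration.

```python
# Hypothetical illustration of source-based flagging in the spirit of FiB.
# The real FiB algorithm is not described above; this sketch simply checks a
# link's domain against a curated low-credibility list and, if it matches,
# suggests more established outlets. The domain lists are made-up placeholders.
from urllib.parse import urlparse

LOW_CREDIBILITY_DOMAINS = {"example-fakenews.com", "totally-real-headlines.net"}
SUGGESTED_SOURCES = ["https://apnews.com", "https://www.reuters.com"]

def flag_article(url: str) -> dict:
    """Flag a shared link if its domain appears on the low-credibility list."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    flagged = domain in LOW_CREDIBILITY_DOMAINS
    return {
        "url": url,
        "flagged": flagged,
        "suggestions": SUGGESTED_SOURCES if flagged else [],
    }

if __name__ == "__main__":
    print(flag_article("http://example-fakenews.com/surgeon-general-warns"))
    print(flag_article("https://www.reuters.com/article/some-real-story"))
```

A production tool would of course need far more than a static blocklist, which is one reason the students’ program remains a work in progress.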

 

 

 

[1] Nathan Wellman, Surgeon General Warns: Drinking Every Time Trump Lies During Debate Could Result in Acute Alcohol Poisoning, U.S. Uncut (Sept. 26, 2016), http://usuncut.com/news/surgeon-general-warns-drinking-every-time-trump-lies-debate-result-acute-alcohol-poisoning/.

[2] See Wellman, supra note 1.

[3] See Mathew Ingram, Here’s Why Stamping Out Fake News is a lot Harder Than You Think, Fortune (Nov. 17, 2016).

[4] See Craig Silverman et al., Hyperpartisan Facebook Pages are Publishing False and Misleading Information at an Alarming Rate, BuzzFeedNews (Oct. 20, 2016), https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis?utm_term=.pij128P2k#.mkrZDVLDA.

[5] See Timothy Lee, The Top 20 Fake News Stories Outperformed Real News at the End of the 2016 Campaign, Vox (Nov. 16, 2016), http://www.vox.com/new-money/2016/11/16/13659840/facebook-fake-news-chart.

[6] See Craig Silverman et al., supra note 4.

[7] See id.

[8] See id.

[9] See id.

[10] See id.

[11] See Craig Silverman et al., supra note 4.

[12] See Caitlin Dewey, Facebook Fake-News Writer: ‘I think Donald Trump is in the White House because of me’, The Washington Post (Nov. 17, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/.

[13] See id.

[14] See id.

[15] See Google AdSense, https://www.google.com/adsense/start/how-it-works/ (last visited Nov. 22, 2016).

[16] See Dewey, supra note 12.

[17] See id.

[18] See id.

[19] See id.

[20] See Paul Mozur & Mark Scott, Fake News in U.S. Election? Elsewhere, That’s Nothing New, The N.Y. Times (Nov. 17, 2016), http://www.nytimes.com/2016/11/18/technology/fake-news-on-facebook-in-foreign-elections-thats-not-new.html?_r=0.

[21] See Scott Bixby, ‘The end of Trump’: how Facebook deepens millennials’ confirmation bias, The Guardian (Oct. 1, 2016), https://www.theguardian.com/us-news/2016/oct/01/millennials-facebook-politics-bias-social-media.

[22] See Colby Itkowitz, Fake News on Facebook is a Real Problem. These College Students Came Up with a Fix in 36 Hours, The Washington Post (Nov. 18, 2016), http://www.denverpost.com/2016/11/18/fake-news-facebook-college-students-solution.

[23] See Kia Kokalitcheva, Mark Zuckerberg Says Fake News on Facebook Affecting the Election is a ‘Crazy Idea,’ Fortune (Nov. 11, 2016), http://fortune.com/2016/11/11/facebook-election-fake-news-mark-zuckerberg.

[24] See Brian Hughes, How to Fix the Fake News Problem, CNN (Nov. 16, 2016), http://www.cnn.com/2016/11/16/opinions/how-to-fix-the-fake-news-problem-hughes.

[25] See generally Amy Mitchell et al., Millennials and Political News: Social Media – the Local TV for the Next Generation?, Pew Research Center: Journalism & Media (2015), http://www.journalism.org/2015/06/01/appendix-a-within-each-generation-more-similarities-than-differences/ (discussing trends in how the public consumes political news including the growing distrust in the news).

[26] See id. at 1.

[27] See Mozur & Scott, supra note 20.

[28] See Frank Miniter, Beware of the Mainstream Media’s Solution to ‘Fake News’, Forbes (Nov. 17, 2016), http://www.forbes.com/sites/frankminiter/2016/11/17/beware-of-the-mainstream-medias-solution-to-fake-news/#357f645742ac.

[29] See Kathleen Ruane, Congressional Research Service, Fairness Doctrine: History and Constitutional Issues, at 2 (2011), http://fas.org/sgp/crs/misc/R40009.pdf.

[30] See id.

[31] See Miniter, supra note 28.

[32] See Timothy Lee, Facebook’s Fake News Problem, Explained, Vox (Nov. 16, 2016), http://www.vox.com/new-money/2016/11/16/13637310/facebook-fake-news-explained.

[33] See Philip Bump, Did Facebook Bury Conservative News? Ex-staffers say yes., The Washington Post (May 9, 2016), https://www.washingtonpost.com/news/the-fix/wp/2016/05/09/former-facebook-staff-say-conservative-news-was-buried-raising-questions-about-its-political-influence.

[34] See id.

[35] See Mark Zuckerberg, Facebook (Nov. 12, 2016, 10:15 PM), https://www.facebook.com/zuck/posts/10103253901916271.

[36] See Itkowitz, supra note 22.

[37] See id.

[38] See id.

[39] See AJ Willingham, Here’s How to Outsmart Fake News in Your Facebook Feed, CNN (Nov. 18, 2016), http://www.cnn.com/2016/11/18/tech/how-to-spot-fake-misleading-news-trnd.

 

Image Source: https://www.aceyourpaper.com/essay/wp-content/uploads/fake-news-essay.jpeg

UK Judge Rules on Cryopreservation


By: Sophia Brasseux

 

This past October, a fourteen-year-old girl from the UK, known as JS, died of a rare form of cancer.[1]  However, she just might have a second chance at life. Justice Peter Jackson’s ruling on October 17 granted her mother control over decisions regarding the disposal of her daughter’s body in a groundbreaking family law dispute.[2]

In a letter JS wrote to the court, she expressed her desire to be cryogenically preserved, which would allow her body to be unfrozen upon the discovery of a cure for her rare form of cancer.[3]  When JS’s father disagreed with her decision—one that her mother supported—the family asked a High Court Judge to intervene.[4] Judge Jackson emphasized that the focus of his ruling would not be on the science, but rather on the parental dispute concerning whether the mother or father would be responsible for JS’s body after her death.[5] In his court opinion, the Judge expressed his concerns about the controversial technology.[6]

Cryopreservation is the process of freezing a human body to prevent decay after death.[7] Though the process is complicated, and still very much developing, there are three basic steps: first, the body is placed in an ice bath immediately after it is legally declared dead; second, the organs and cells are prepared for freezing temperatures by replacing the body’s fluids with agents that work as antifreeze; third, the body is placed in an insulating bag and then inside a cooling box with liquid nitrogen until it reaches minus 200 degrees Celsius.[8] Once this process is complete, the body can be transported to a storage facility.[9] JS was transported to the Cryonics Institute, located in Michigan, on October 25.[10]

Cryopreservation has been met with mixed reviews; while some scientists are optimistic about its benefits, others are more skeptical. One such skeptic, Clive Coen, a professor of neuroscience at King’s College London, questions whether revival will ever be a reality.[11] In Coen’s view, not only has a human being never been revived, but too little weight has been given to the damage bodies incur from the antifreeze agents used during the freezing process.[12]

Many issues have fueled the controversy surrounding this recently developed technology, one being the uncertainty regarding its outcomes. Judge Jackson said that this was not only the first case of its kind to come before a court in the UK, but probably also the first in the world.[13] In Jackson’s view, this new science is likely to have a significant impact on the future of family law.[14] Another issue is that, although cryopreservation is legal, it is still largely unregulated.[15] While regulation does exist for freezing sperm and embryos, those regulations do not encompass freezing the whole body; this sort of procedure had not yet been contemplated when the regulations were initially passed.[16] Even preservation agreements dealing with sperm and embryos are still considered an unsettled area of law.[17] Such agreements are most successful when they are unambiguous, in line with public policy, and contain extensive detail about the individuals involved.[18] Since human cryopreservation is so new, there is even less case law on which to base conclusions about how those sorts of agreements will be treated in court.

In JS’s case, the court’s decision to protect her post-mortem wishes gave her comfort in her final days. But what does this mean in a more general sense for family law?[19] The procedure is extremely expensive, averaging around 37,000 euros for the most basic package, especially considering the lack of certainty about the results.[20] Is the potential outcome worth the cost of the procedure, plus litigation fees, if a family cannot decide for itself how to handle such a decision? Supporters of cryopreservation claim that the procedure is truly a leap of faith in the choice between “’definitely’ dying and ‘maybe’ living on.”[21] Yet even if the process works, once JS is brought back to life she will likely have no living family and will be stuck in the United States as a fourteen-year-old non-citizen with no conception of the present state of the world.[22] The revival of those who have been preserved may create even more legal issues well into the future.

 

 

 

 

 

[1] See Gordon Rayner, Girl, 14, Who Died of Cancer Cryogenically Frozen After Telling Judge She Wanted to be Brought Back to Life ‘In Hundreds of Years’, The Telegraph (Nov. 18, 2016), http://www.telegraph.co.uk/news/2016/11/18/cancer-girl-14-is-cryogenically-frozen-after-telling-judge-she-w/.

[2] See id.

[3] See id.

[4] See id.

[5] See Laura Smith-Spark, UK Teenager Wins Battle to have Body Cryogenically Frozen, CNN (Nov. 18, 2016), http://www.cnn.com/2016/11/18/health/uk-teenager-cryonics-body-preservation/.

[6] See id.

[7] See Meera Senthilingam, What is Cryogenic Preservation?, CNN (Nov. 18, 2016), http://www.cnn.com/2016/11/18/health/how-cryopreservation-and-cryonics-works/.

[8] See id.

[9] See id.

[10] See supra note 5.

[11] See id.

[12] See id.

[13] See id.

[14] See id.

[15] See supra note 1.

[16] See id.

[17] See T.G. Schuster et al., Legal Considerations for Cryopreservation of Sperm and Embryos, Fertility and Sterility (July 2003), https://www.ncbi.nlm.nih.gov/pubmed/12849802.

[18] See id.

[19] See supra note 1.

[20] See id.

[21] Id.

[22] See id.

Image Source: http://www.workerscomplawyerie.com/workers-compensation-law/

ShotSpotter: Tracking Gunfire from a Mile Away


By Lindsey McLeod

In recent years, Silicon Valley has taken on a range of issues that span the spectrum from consumer entertainment to government security and data protection.[1] Recently, Silicon Valley executives have made the jump into the gun violence arena, attempting to use technology to assist police officers in responding to shots fired. The technology, one of many applications currently emerging in the tech-security world, is called ShotSpotter.[2]

ShotSpotter was initially developed to combat the growing gun violence problem in urban areas of the United States.[3] The developers intended the technology to meet the growing need to combat gun violence in the communities most affected.[4] They recognized that those who are most significantly impacted by gun violence on a regular basis are the least likely to report it, resulting in incident reports that grossly misrepresent the problem at hand.[5] For example, fewer than one in five shooting incidents are reported to police, and when reports are made, the account is rarely accurate.[6] This miscommunication and misrepresentation causes police to respond inappropriately, further perpetuating the gun violence problem.[7]

ShotSpotter is a cloud-based technology that overlays a geographic area with a network of microphones and monitoring software.[8] The microphones are designed to suppress ambient noise and pay particular attention to loud “trigger” noises, responding to the “booms” and “bangs” referred to as “impulsive noises.”[9] When a sound triggers the system, an alert is sent to ShotSpotter’s headquarters, where a monitoring review service—composed of trained acoustic experts—makes the final determination regarding the origin of the audio.[10] When these impulsive noises are deemed the result of gunfire, the local police are alerted to the disturbance and dispatched to the area.[11] The police can then intervene and bring the shooting incident to an end.
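
ShotSpotter’s detection methods are proprietary, but the pipeline described above (sensors flag loud, short “impulsive” sounds, trained reviewers confirm whether they are gunfire, and only confirmed incidents generate a dispatch alert) can be illustrated with the hypothetical Python sketch below. The thresholds, class names, and sample readings are invented for illustration and are not drawn from the company’s system.

```python
# Hypothetical sketch of the pipeline described above: a sensor flags loud,
# short "impulsive" sounds, a human reviewer confirms whether they are gunfire,
# and confirmed incidents generate a dispatch alert. ShotSpotter's actual
# detection methods are proprietary; thresholds and data here are invented.
from dataclasses import dataclass

AMPLITUDE_THRESHOLD = 0.8   # normalized loudness above which a sound is "impulsive"
MAX_DURATION_SEC = 0.5      # impulsive sounds are short bursts

@dataclass
class AudioEvent:
    sensor_id: str
    amplitude: float   # normalized 0.0 to 1.0
    duration: float    # seconds

def is_impulsive(event: AudioEvent) -> bool:
    """Flag loud, short sounds ("booms" and "bangs") for human review."""
    return event.amplitude >= AMPLITUDE_THRESHOLD and event.duration <= MAX_DURATION_SEC

def review_and_dispatch(event: AudioEvent, reviewer_confirms_gunfire: bool) -> str:
    """Simulate the human-review step before any alert goes to police."""
    if not is_impulsive(event):
        return "ignored: ambient noise"
    if not reviewer_confirms_gunfire:
        return "reviewed: not gunfire (e.g., fireworks), no dispatch"
    return f"ALERT: probable gunfire near sensor {event.sensor_id}; dispatching police"

if __name__ == "__main__":
    print(review_and_dispatch(AudioEvent("sensor-12", amplitude=0.93, duration=0.2), True))
    print(review_and_dispatch(AudioEvent("sensor-12", amplitude=0.40, duration=2.0), False))
```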

Despite the obvious need for the kind of police intervention this technology enables, critics point to the lack of arrests it produces as evidence of its ineffectiveness. For example, in Brockton, Massachusetts, between January 1, 2013 and September 28, 2015, the ShotSpotter technology alerted police to gun activity 296 times, yet those alerts led to only two arrests.[12] These unimpressive statistics reflect the nationwide trend: ShotSpotter increases the frequency of alerts but does little in terms of arrests.[13]

The developers, however, argue that this data does not represent a technological failure; the technology was not developed to lead to arrests, but rather to combat the problem at its origin.[14] Ralph Clark, the CEO of ShotSpotter, argues that “only a small number of individuals are responsible for most of a city’s gunfire and any tools available to get those folks off the street are important.”[15] Thus, though ShotSpotter has not led to an increase in arrests, residents of these high-risk communities come to understand that police will respond quickly to an alert of gunfire, which will likely lead to a decrease in gunfire and, consequently, a decrease in violence.[16] Effectively, the presence of such technology within high-risk communities should eventually eliminate the need for the technology altogether.[17]

Beyond the low prosecution numbers, there is a legal issue concerning the admissibility of material created by this technology. Although the infrequency of arrests suggests that this evidence will rarely be needed at trial, the mere potential for its use nevertheless poses an interesting legal question. The use of this technology creates a “big brother” presence, a phenomenon that tends to invoke Fourth Amendment concerns.[18] Additionally, the technology appears to implicate privacy interests that, although not explicitly protected by the Constitution, the Supreme Court has addressed in cases such as Roe v. Wade and Bowers v. Hardwick.[19] Despite these concerns, Katz v. United States teaches that the Fourth Amendment’s protection from unreasonable searches and seizures protects people, not places. The party supporting the ShotSpotter evidence could therefore argue that this material is collected from the place, not from the parties present at the scene of the crime, and is thus admissible.[20]

So far, courts have treated these evidentiary issues permissively, favoring the goal of combatting crime over the interests of the accused. A Massachusetts Superior Court recently deemed the material admissible in the first-degree murder trial of Dwayne Moore and Edward Washington.[21] In that trial, an expert witness from SST Inc., the company that manufactures ShotSpotter, testified about the timeframe during which the shots were fired and the time lapse between shots.[22] The use of this evidence in the Massachusetts Superior Court suggests that such evidence will be seen more frequently in criminal proceedings in the coming years as a means to prove, or disprove, the location and details of a shooting.[23] Because of the growing presence of these technologies in the criminal justice system, criminal defense attorneys should anticipate the impact they could have on evidence admitted at trial going forward.

 

 

 

 

[1] See Megan Smith, Expanding the Pentagon’s Silicon Valley Office, WhiteHouse.gov (May 20, 2016) https://www.whitehouse.gov/blog/2016/05/19/expanding-pentagons-silicon-valley-office (Secretary of Defense Ash Carter is taking bold steps to help the U.S. military take advantage of commercially driven technology and innovation).

[2] See ShotSpotter, ShotSpotter.com (Nov. 21, 2016) http://www.shotspotter.com/.

[3] See id.

[4] See id.

[5] See Law Enforcement Resources, ShotSpotter.com (Nov 17, 2016) http://www.shotspotter.com/law-enforcement.

[6] See id.

[7] Id.

[8] Here’s How the NYPD is Expanding ShotSpotter, ShotSpotter.com, (Nov. 17, 2016) http://www.shotspotter.com/news/article/heres-how-the-nypds-expanding-shotspotter-system-works.

[9] See id.

[10] See id.

[11] See id.

[12] See Matt Drange, ShotSpotter Alerts Police to Lots of Gunfire, But Produces Few Tangible Results, Forbes (Nov. 17, 2016), http://www.forbes.com/sites/mattdrange/2016/11/17/shotspotter-alerts-police-to-lots-of-gunfire-but-produces-few-tangible-results/#71c59a892539.

[13] See id.

[14] See John Biggs, ShotSpotter CEO Ralph Clark Talks About the Future of City Surveillance, Tech Crunch (Nov. 20, 2016) https://techcrunch.com/2016/11/19/shotspotter-ceo-ralph-clark-talks-about-the-future-of-city-surveillance/.

[15] See id.

[16] See id.

[17] See id.

[18] U.S. Const. amend. IV.

[19] See The Right of Privacy, Exploring Constitutional Conflicts, UMKC Law (Nov. 21, 2016) http://law2.umkc.edu/faculty/projects/ftrials/conlaw/rightofprivacy.html.

[20] See Katz v. United States, 389 U.S. 347 (1967).

[21] See Stephen Neyman, Massachusetts Criminal Trial Using Gunshot Detection System to Support Witness Testimony in High Profile Murder Trial, Massachusetts Criminal Defense Attorney Blog (Feb. 2012) http://www.shotspotter.com/news/article/massachusetts-criminal-trial-using-gunshot-detection-system-to-support-witn.

[22] See id.

[23] See id.

 

Image Source: https://openclipart.org/
