The first exclusively online law review.


Patent Eligibility as a Function of New Use, Aggregation, and Preemption Through Application of Principle


Cite as: N. Scott Pierce, Patent Eligibility as a Function of New Use, Aggregation, and Preemption Through Application of Principle, 23 Rich. J.L. & Tech. 11 (2017).


By: N. Scott Pierce*


By long-standing judicial precedent, laws of nature, natural phenomena, and abstract ideas are excepted from eligibility for patent protection. The Supreme Court recently promulgated a two-part test that excludes from eligibility subject matter that is directed to any of these judicial exceptions unless there is something “significantly more,” namely “invention” or an “inventive concept.” The test is intended to bar patent protection that would preempt use of any of the judicial exceptions themselves. “Preemption,” however, is related to two earlier, and now obsolete, doctrines of “new use” and “aggregation” in that all three derived from eighteenth-century English case law that viewed inventive methods as applications of principle within the meaning of eligible “manufactures” under the Statute of Monopolies. When the Patent Act of 1952 recast the language of its predecessor statutes and earlier jurisprudence into separate provisions under Title 35 of the United States Code (U.S.C.) for eligibility (§ 101), novelty (§ 102), and non-obviousness (§ 103), “new use” and “aggregation” were no longer considerations of eligibility because, as stated most succinctly by Judge Learned Hand, “the definition of invention [is] now expressly embodied in § 103.” For the same reason, the doctrine of “preemption” and its attendant “two-part test” should follow suit.





I.  Introduction

II.  The Beginnings of Modern Patent Eligibility

A.  Boulton and Watt v. Bull

  1. Justice Buller’s Opinion
  2. Lord Chief Justice Eyre’s Opinion

B.  Hornblower and Maberly v. Boulton and Watt

  1. Justice Grose’s Opinion
  2. Justice Lawrence’s Opinion

C.  Summary and Comparison of Opinions by Chief Justice Eyre, and Justices Buller, Grose, and Lawrence

III.  New Uses of Known Machines, Manufactures and Compositions of Matter

A.  The Inherency of Benefit

B.  Non-Analogous Use

C.  Non-Obvious Use

D.  The Patent Act of 1952: Splitting Invention from Eligibility

  1. Statutory and Judicial Conflation Prior to the Patent Act of 1952
  2. General Understanding of New Uses Based on In re Thuau
  3. The Split

E.  The Demise of New Use Doctrine

IV.  Aggregation of Applied Principles

A.  “Something More than an Aggregate”

  1. Cooperation and Single Purpose
  2. Invention as the “Vital Spark”

B.  The Demise of Aggregation Doctrine

V.  Preemption of Laws of Nature, Natural Phenomena and Abstract Ideas

A.  Setting the Stage

B.  “Mere Principle” and its Application

C.  The Two-Part Eligibility Test

D.  “Preemption” in the Absence of Something “Significantly More” than “Mere Principle”

E.  Treatment of Patent Eligibility Since Diamond v. Diehr, and an Alternative

VI.  Conclusion



I.  Introduction


[1]       Patent eligibility under 35 U.S.C. § 101,[1] as interpreted by the Supreme Court in Alice v. CLS Bank Int’l and as applied by the United States Patent and Trademark Office under recent guidelines,[2] is a tiered algorithm.[3] The first tier addresses the language of the statutory provision by asking whether the claim is for “a process, machine, manufacture or composition of matter.”[4] It is followed by a second tier that includes a two-step analysis, which asks whether the subject matter is directed to any of three judicial exceptions, namely a law of nature, a natural phenomenon or an abstract idea, and disqualifies claims directed to any of those exceptions in the absence of something “significantly more.”[5]


[2]       It has been argued that the Statute of Monopolies “played, at best, a minimal role in pre-modern patent law.”[6] Nevertheless, at least one thread of modern jurisprudence stems from the sixth section of the Statute, which authorized patent protection for “the sole working or making of any manner of new manufactures within this realm[], to the true and first inventor[] and inventors….”[7] Specifically, the eighteenth-century English cases of Boulton v. Bull[8] and Hornblower v. Boulton[9] encompassed methods within the term “manufactures” as applications of principle. By doing so, they laid the foundations for some of the most fundamental current questions surrounding patent eligibility.


[3]       While both manufactures and methods of use were considered eligible for patent protection under Boulton and Hornblower, for many years afterward new methods of use of known manufactures were not. Manufactures were understood to embody all applications of principle to which they could be put; therefore, new methods generally could only be patented if they were conducted by use of manufactures having an additional, patentably distinct feature. Even then there was a question as to whether the whole manufacture was patentable or only the additional improvement feature. This issue became moot with the advent of claims, whereby exclusionary rights required that the combination of elements of a claim be patentably distinct, so that the patentability of any individual element became unnecessary.[10]


[4]       Eventually, a new method of use of a known device was considered eligible, but only if the new use was non-analogous to known uses of the device. More particularly, a new use of an old device must include “invention.”[11] The patentability of a new use of a known product, and of a known product for a new intended use, both of which were considered a “dual use,” became the topics of heated debate in the late nineteenth and early-to-mid twentieth centuries, culminating in substitution of the term “process” for that of “art” in the statutory provision for patent eligibility under the Patent Act of 1952.[12] The revised statutory language did not immediately settle the matter.[13] Ultimately, claims directed to known products having new intended uses were considered to lack novelty, while new uses of known machines generally became questions of “obviousness.”[14]


[5]       “Aggregation” came into use as a doctrine proscribing patent protection for “new combinations” of devices “without producing a new and useful result [of] the joint product of the elements of the combination and something more than an aggregate of old results.”[15] As with a new use, patent eligibility was denied because there was no new underlying application of principle. Rather, aggregation was “[m]erely bringing old devices into juxtaposition, and … allowing each to work out its own effect without production of something novel.”[16] In neither the case of new use nor of aggregation was there “invention” and, as with new use, aggregation ultimately became essentially a matter of “statutory obviousness” under the Patent Act of 1952.[17] Where aggregation was not subsumed under statutory obviousness, it was recast as “indefiniteness” under 35 U.S.C. § 112, second paragraph,[18] in that the affected claims failed to “positively recite structural relationships” of the elements.[19] Even here, however, the lack of any new application of principle by the combination of known but unrelated components was apparent.


[6]       “Preemption” as a legal term to justify barring the eligibility of claimed subject matter was first invoked by the Supreme Court in Gottschalk v. Benson.[20] The Court relied on the “longstanding rule that ‘an idea of itself is not patentable,’”[21] which in this case was a mathematical formula for converting “binary-coded-decimal (“BCD”) numerals into pure binary numerals.”[22] Reciting earlier Supreme Court cases dating back to Le Roy v. Tatham,[23] the Court also embraced within this maxim a prohibition against patenting a “principle, in the abstract,”[24] or “[p]henomena of nature.”[25]


[7]       As with new use and aggregation, the guard against preemption mandated that only an inventive “application of the law of nature to a new and useful end”[26] was eligible for protection. Preemption thereafter became the pivot point for eligibility under 35 U.S.C. § 101 of claimed subject matter reciting a law of nature, a natural phenomenon or an abstract idea, and was the basis for the test articulated by the Supreme Court in Alice Corp. v. CLS Bank Int’l in 2014.[27]


[8]       There has never been a satisfactory explanation by the Supreme Court as to why current analyses of patent eligibility cannot be couched within other portions of the statutory framework, such as novelty under 35 U.S.C. § 102 or non-obviousness under 35 U.S.C. § 103. This was a common complaint among dissenting justices in Supreme Court cases during the first few decades under the 1952 Patent Act.[28] The Court’s more recent explanation, that “§§ 102 and 103 say nothing about treating laws of nature as if they were part of the prior art when applying those sections,”[29] falls flat at least for the simple reason that there also is no such language in section 101.


[9]       Part II of this article summarizes dicta in the eighteenth-century English cases of Boulton and Watt v. Bull and Hornblower v. Boulton. Part III traces the legal doctrine of “dual” or “new” use from its origins in the dicta of Boulton and Hornblower to its assimilation into statutory provisions for novelty under 35 U.S.C. § 102 and non-obviousness under 35 U.S.C. § 103 following the Patent Act of 1952. Part IV addresses “aggregation” and its parallels with “new use” as a matter of non-obviousness. Part V analyzes legal “preemption” and its relation to the earlier doctrines of “new use” and “aggregation,” arguing that, for the same reasons that “new use” and “aggregation” are no longer viable doctrines in their own right, the doctrine of “preemption” also should be foregone in favor of portions of the statute other than eligibility under 35 U.S.C. § 101.



II.  The Beginnings of Modern Patent Eligibility


[10]     The Statute of Monopolies of 1623 was generally intended to bar patents in England, with the exception of those for new “manufactures,” as prescribed under the sixth section. However, the meaning of “manufactures” under the statute was the subject of great debate, most notably in the famous eighteenth-century cases of Boulton and Watt v. Bull and Hornblower v. Boulton.[30] Ultimately, in Hornblower v. Boulton, patent protection for methods as “manufactures” was permitted. Part II-A discusses the contrasting, influential opinions of Justice Buller and Lord Chief Justice Eyre in Boulton, and Part II-B discusses Justices Grose and Lawrence’s opinions in Hornblower, which, if not directly cited, were often reflected in later jurisprudence. Both Boulton and Hornblower set the stage for so-called “new use” and “aggregation” doctrines that developed during the nineteenth and twentieth centuries under American jurisprudence.


A.  Boulton and Watt v. Bull


[11]     The patent at issue in Boulton and Watt v. Bull was a conceded improvement on the Newcomen steam engine (which had been in use for almost one hundred years, primarily for the purpose of pumping water out of mines).[31] James Watt discovered that the efficiency of such steam, or “fire,” engines, as they were known at that time, could be significantly improved by condensing spent steam in a chamber separate from the cylinder and piston producing the work.[32] This change eliminated the need to cool the cylinder between strokes of the piston, thereby significantly reducing lost work and, consequently, the cost of operation.[33]


[12]     The drafting of the patent specification, however, posed a difficulty for Watt if he was to obtain the broadest possible patent protection. If the invention was described in great detail, his patent might be limited to only those particular embodiments. Alternatively, if only broad principles of operation were presented, then the patent might be considered invalid as failing to provide sufficient instruction to enable the public to practice the invention once the patent expired. Watt ultimately took the advice of his friend, William Small, drafting the application to intentionally avoid “descriptions of any particular machinery, but specifying in the clearest manner that you have discovered some principles.”[34] Watt obtained his patent and, along with his business partner, Matthew Boulton, was able to obtain, by a private act of parliament and by characterizing the invention as an “engine,” a term of twenty-five years from an initial patent date of January 5, 1769.[35]


[13]     Boulton was an enforcement action against infringement of Watt’s patent that ultimately resulted in no judgment because the court was split.[36] According to Lord Chief Justice Eyre in Boulton, there were two issues: “the first, whether the patent is good in law, and continued by the act of parliament mentioned in the case; the second, whether the specification stated in the case is in point of law sufficient to support the patent?”[37] Though ostensibly distinct, the two questions were related. The exposition of “discovered” principles in the specification spoke directly to the question of whether the patent was good in law, since it was generally well-accepted that “there can be no patent for a mere principle.”[38] For example, the defendants in Boulton stated:


By obtaining a patent for principles only, instead of one for the result of the application of them, the public is prevented, during the term from improving on those principles, and at the end of the term is left in a state of ignorance as to the best, cheapest, and most beneficial manner of applying them to the end proposed.[39]


[14]     On the other hand, it was unanimously agreed by the Justices that Watt’s invention was useful and, as found by the jury and recited by the defendants:


[T]he specification made by Watt, is of itself sufficient to enable a mechanic acquainted with fire-engines previously in use, to construct fire-engines, producing the effect of lessening the consumption of fuel and steam in fire-engines, upon the principle invented by Watt.[40]

1.  Justice Buller’s Opinion


[15]     If the specification, then, did not identify any particular apparatus for the principles “discovered” by Watt, but, nevertheless, was sufficient to “enable a mechanic acquainted with fire-engines previously in use” to practice that method, and thereby benefit from that discovery, on what basis did patent eligibility lie?[41] For Justice Buller, another of the justices hearing the case, the answer was clear: “[T]he true foundation of all patents, … must be the manufacture itself; and so says the Statute [of Monopolies] 21 Jac. I, c.3.”[42] The Statute of Monopolies, in other words, was the “foundation” for patent protection. Eligibility for such protection under the Statute of Monopolies depended upon the ability to classify the subject of a patent as a “manufacture,” as summarized by Justice Buller:


All monopolies except those which are allowed by that statute, are declared to be illegal and void; they were so at common law, and the sixth section excepts only those of the sole working or making any manner of new manufacture: and whether the manufacture be with or without principle, produced by accident or by art, is immaterial. Unless this patent can be supported for the manufacture, it cannot be supported at all.[43]


[16]     For Justice Buller, the subject matter of the patent must be within the scope of the meaning of the statutory term, “manufacture.” Nevertheless, Justice Buller considered a “principle in the patent, and engine in the act of parliament [to] mean … the same thing.”[44] The discrepancy between the qualification of a new “manufacture” and a “principle” was resolved by considering the statutory word “manufacture,” to be a threshold requirement for facial patent eligibility, and beneficial utility to be evidence of the existence of the principle embodied in the manufacture.[45] Once established as a new and beneficial embodiment of principle, the exclusionary right of the patentee extended to all uses of that manufacture.[46]


[17]     From these general observations, Justice Buller concluded that, while patents of addition or improvements on an old machine may be good, the scope of protection must be limited to the improvement alone, and not extended to include the old machine already in the public domain.[47] Justice Buller found that there was nothing new in the steam engine described in Watt’s patent;[48] therefore, the steam engine’s manner of use, according to the principles discovered by Watt, must be no more than an application of principle already inherent in known steam engines.[49] Consequently, the claim was to the “whole machine,” which was known and, as such, Watt’s patent must be void.[50]


2.  Lord Chief Justice Eyre’s Opinion


[18]     Whereas Justice Buller relied on the negative implications of extending protection beyond the literal confines of the Statute of Monopolies, Lord Chief Justice Eyre looked to broaden the meaning of “manufacture,” recognizing that many cases had already been decided in favor of new uses of known devices.[51] Like Justice Buller, Chief Justice Eyre posited that, “if the machinery itself is not newly invented, but only conducted by the skill of the inventor, so as to produce a new effect, the patent cannot be for the machinery.”[52] He concluded that patent protection cannot be granted to things known in the art simply on the basis that a new use for that machinery has been discovered.[53] Eyre also agreed with Justice Buller’s statement that: “if the principle alone be the foundation of the patent, it cannot possibly stand, with that knowledge and discovery which the world were in possession of before.”[54]


[19]     Chief Justice Eyre believed, however, that the language of the Statute of Monopolies should not be so strictly interpreted as to bar all methods of use of known devices, stating that, “[n]ow I think these methods may be said to be new manufactures, in one of the common acceptations of the word, as we speak of the manufactory of glass, or any other thing of that kind.”[55] Eyre employed the example of David Hartley’s method of using iron plates to fireproof buildings, stating that Hartley’s patent could not be for the effect obtained, namely, “the absence of fire,”[56] nor could it be for the plates or the method of their manufacture, both of which were commonly known.[57] Rather, as Chief Justice Eyre stated: “[b]ut the invention consisting in the method of disposing of those plates of iron, so as to produce their effect, and that effect being a useful and meritorious one, the patent seems to have been very properly granted to him for his method of securing buildings from fire.”[58] “It [the patent] must be for [the] method detached from all physical existence whatever.”[59]


[20]     Patentability must be, as stated by Chief Justice Eyre, “for a principle so far embodied and connected with corporeal substances as to be in a condition to act, and to produce effects in any art, trade, mystery, or manual occupation….”[60] This was “the thing for which the patent stated in the case was granted, and this is what the specification describes, though it miscalls it a principle.”[61] Eyre asserted:


It is not that the patentee has conceived an abstract notion that the consumption of steam and fire engines may be lessened but he has discovered a practical manner of doing it; and for that practical manner of doing it he has taken this patent. Surely this is a very different thing from taking a patent for a principle; it is not for a principle, but for a process.[62]


[21]     While Watt, as he had been advised to do, stated “in the clearest manner”[63] that he had “discovered some principles,”[64] and Justice Buller had taken the language of Watt’s specification at face value in this regard, Lord Chief Justice Eyre viewed the invention as being “not for a principle, but for a process” albeit by use of no new machinery.[65] As stated by Eyre, “the machinery is not the essence of the invention but incidental to it”[66] and, therefore, the method as described in the specification need only “be capable of lessening the consumption to such an extent as to make the invention useful.”[67] “More precision is not necessary, and absolute precision is not practicable.”[68]


[22]     Eyre summarized that, while the act of parliament characterized the invention as an “engine” and the specification described the invention as a discovered principle, in effect, the patent specification described neither.[69] Rather, Chief Justice Eyre saw the invention as a “process.”[70] As a method, or “process,” it was the proper subject of a patent wholly apart from whether the device itself was, separately, new or patentable:


The objection on the act of parliament is of the same nature as one of the objections to the specification: the specification calls a method of lessening the consumption of steam in fire-engines a principle, which it is not; the act calls it an engine, which perhaps also it is not; but both the specification and the statute are referable to the same thing, and when they are taken with their correlative are perfectly intelligible. Upon the wider ground I am therefore of opinion that the act has continued this patent.[71]


[23]     Therefore, while it was true that the Statute of Monopolies provided only for the exception of “the sole working or making of any manner of new manufactures,” according to Chief Justice Eyre patents had routinely been granted under the Statute for new methods of use of known manufactures.[72] Moreover, for Eyre, equivalency of the terms “method” and “manufacture” was not contingent upon embodiment of a new application of principle. Rather, a new application of principle was independent of the existence of a new “manufacture,” despite his conclusion that “method” and “manufacture” were understood to mean the same thing under the Statute. Therefore, Chief Justice Eyre did not need to address whether the improvement in a machine must be separable from the machine it improved in order to be eligible for patent protection.[73] Nor did he need to address the patentability of old devices intended for new and beneficial uses. For Eyre, manufacture and method meant the same thing, not because they were both embodiments of the same new application of principle, but rather, because they were each, independently, capable of embodiment of a new application of principle, and that the idea of a new application of principle was the root meaning of the term “manufacture” in the Statute of Monopolies.[74] Where a new combination of components did not result in some new application of principle, there would be no “manufacture” under the Statute.


[24]     For Justice Buller, although terming something as a “manufacture” was a necessary condition for patent eligibility, it was an insufficient condition for patentable distinction. The condition for patentable distinction was, instead, for Buller an embodiment of a new application of principle, for which a new “manufacture” was necessary. For Chief Justice Eyre, on the other hand, patent eligibility and patentable distinction were wrapped up in the term “manufacture” under the Statute, and the meaning of the word embraced both devices and their uses independently of each other, so long as they each embodied a new application of principle. Ultimately, no judgment in Boulton was given because the court was split, with Justices Buller and Heath holding Watt’s patent invalid, and Chief Justice Eyre and Justice Rooke holding in favor of the patent.


B.  Hornblower and Maberly v. Boulton and Watt


[25]     Boulton and Watt’s patent was again challenged, but unanimously upheld as valid at the Court of King’s Bench in the 1799 case of Hornblower and Maberly v. Boulton and Watt.[75] While the justices at the Court of King’s Bench, like those at the Court of Common Pleas in Boulton, differed as to the nature of Watt’s invention, they all agreed that the term “manufacture” was broad enough to embrace the application of principle described in Watt’s specification, and that the specification was sufficient to enable its practice by ordinary mechanics.[76] Further, while Justices Kenyon and Amherst summarily upheld the validity of the patent as a manufacture that was sufficiently described in the specification, Justices Grose and Lawrence took up many of the themes laid out by Justice Buller and Chief Justice Eyre in Boulton.[77]


1.  Justice Grose’s Opinion


[26]     Justice Grose, for example, following Justice Buller and Chief Justice Eyre in the Court of Common Pleas, asked whether the patent was “for a mere principle, and not for a new manufacture”[78] and, like Justice Buller, questioned whether the patent, if for a manufacture, was new and, if new, whether it should have been “for the addition only, and not for the whole engine.”[79] Also like Justice Buller, Justice Grose reasoned that, even though Watt had adequately described a new method, it “should hardly” fall within the Statute of Monopolies if it was “not effected or accompanied by a manufacture.”[80] However, he differed from Justice Buller by finding that Watt did, indeed, describe a “new manufacture, by which his principle is realized; that is, by which his steam vessel is kept as hot as the steam during the time the engine is at work; by which means the consumption of steam and fuel is lessened.”[81] Whereas Buller found “nothing new in the machine,”[82] Grose found several distinctions:


[H]e specifies the particular parts requisite to produce the effect intended, and states the manner how they are to be applied. He describes the case of wood in which the steam vessel is to be inclosed, the engines that are to be worked wholly or partially by condensation of steam, the vessels that he denominates condensers, and the steam vessels where rotary motions are required. Can it then be said that the making and combining of these parts is not some manner of new manufacture? I cannot say that it is not.[83]


[27]     On the other hand, Grose and Buller both conditioned patentable distinction on whether the patent was broad enough to cover the old, unimproved engine, or “only for the addition to or improvement of the old engine.”[84] Implicit in this analysis is that any device must inherently embody all physical applications of principle to which that device could be put. However, drawing from both Lord Chief Justice Eyre and Justice Buller in the previous case, Justice Grose resolved the difficulty associated with patenting improvements inextricably linked with old machines by limiting patent protection to devices that embodied the improvement. Specifically, as stated by Justice Grose, “[i]f indeed a patent could not be granted for an addition, it would be depriving the public of one of the best benefits of the Statute of James.”[85] Therefore, the act of parliament granting to Watt his exclusive right in his invention, according to Grose, by “reciting the patent, recites it as a grant of the benefit and advantage of making and vending ‘certain engines by him invented for lessening the consumption of steam and fuel in fire engines.’”[86] Therefore, the “Legislature considered the patent as a patent for the improvement of the invention described in the specification, and not as a patent for a mere method . . .” as contended by Chief Justice Eyre, “. . . or for the original fire engine[,]” as contended by Justice Buller.[87] For Justice Grose, a “manufacture” under the Statute of Monopolies had to be a device. Also like Justice Buller, Justice Grose implicitly viewed a device as being a physical embodiment of all applications of principle to which it could be put and, therefore, only an improvement on a device embodying a new application of principle could be a basis for an exclusionary right. However, unlike Justice Buller, and drawing from Chief Justice Eyre, Justice Grose did not see the inability to separate an improvement from a device it improves as a fatal flaw. Rather, like Chief Justice Eyre, Justice Grose based entitlement to patent protection on the benefit accrued by the improvement.


2.  Justice Lawrence’s Opinion


[28]     Justice Lawrence borrowed from Justice Grose the criteria, recited in the act of parliament, for granting Watt his patent (for “the sole benefit of making and vending certain engines invented by him for lessening the consumption of steam in fire engines”).[88] But, like Chief Justice Eyre, he considered “[e]ngine and method [to] mean the same thing”[89] and therefore, either “may be the subject of a patent.”[90] Justice Lawrence also, like Chief Justice Eyre, recognized the terms “engine” and “method” to be “convertible,” implying that an improvement could be embodied in a new use of an unimproved machine. Therefore, a “mechanical contrivance” did not necessarily embody all applications of principle entitled to patent protection.[91] For example, as stated by Justice Lawrence:

[S]ome of the difficulties in the case have arisen from considering the word engine in its popular sense, namely, some mechanical contrivance to effect that to which human strength, without such assistance, is unequal: but it may also signify device; and that Watt meant to use it in that sense, and that the Legislature so understood it, is evident from the words engine and method being used as convertible terms. Now there is no doubt but that for such a contrivance a patent may be granted, as well as for a more complicated machine: it equally falls within the description of a manufacture; and unless such devices did fall within that description, no addition or improvement could be the subject of a patent.[92]


[29]     In other words, understanding that the legislature intended the terms “engine” and “method” to be “convertible” meant that, so long as the method was effected by mechanical means, Watt’s invention was within the meaning of the statute, and could be embodied in an “addition” to or an “improvement” of a known “machine.” Significantly, however, Justice Lawrence stated that, “Watt claims no right to the construction of engines for any determinate object, except that of lessening the consumption of steam and fuel in fire-engines[,]”[93] thereby leaving open the possibility that Watt would, in fact, have a right to exclude others from the “construction of engines”[94] for the purpose of “lessening the consumption of steam and fuel in fire engines[,]”[95] regardless of whether those engines included additions or improvements intended to effect that result.[96] At any rate, Justice Lawrence did not need to opine on whether Watt’s invention was valid as a method alone, or whether his exclusionary right extended to the construction, use, or vending of old engines with a new intended purpose, because he concluded that Watt had described an “improvement of fire-engines . . . with sufficient accuracy . . ., which may be made in all fire-engines, in such a way as to enable a workman to execute it. . . .”[97] Specifically, Justice Lawrence stated that Watt had included in the specification a vessel for the condensation, distinct from that in which the powers of steam operate; and to convey the steam, as occasion requires, from the cylinder to the condensing vessel; to keep the cylinder hot by means distinctly described, and to extract, by pumps, the vapour which may impede the work.[98]


[30]     Having articulated the improvement as a specification providing “directions for the purpose,”[99] Watt again left open, under Justice Lawrence’s analysis, the possibility of patent protection for a vessel and pumps described in the specification that may have been present in previously known steam engines, but not employed as directed by Watt’s specification. If so, then Justice Lawrence, by accepting convertibility of the terms “engine” and “method,” was forced to introduce the concept of “intended use” as a criterion for an exclusionary right in the case of Watt’s patent. It did not matter whether his invention was characterized as an “engine” or “method,” so long as the “determinate object” was “lessening the consumption of steam and fuel in fire engines.”


C.  Summary and Comparison of Opinions by Chief Justice Eyre, and Justices Buller, Grose, and Lawrence


[31]     The justices in Boulton and Hornblower, therefore, set up a dichotomy under the meaning of “manufacture” in the Statute of Monopolies. In Boulton, Justice Buller insisted that a “manufacture” was just that, whether it be “with or without principle,” and that the exception under the sixth section of the Statute was to any manner of “new manufacture.”[100] The implication, of course, was that it was only a “new manufacture” that could embody a new application of principle necessary to entitle the “true and first inventor and inventors of such manufactures” to “any letters patent and grants of privilege” under the Statute. A corollary of this reasoning extended such entitlement to all uses of new manufactures, since it was only a new manufacture that could embody a new application of principle. Patents of addition were circumscribed to exclude from patent protection known devices so improved. Lord Chief Justice Eyre, on the other hand, while agreeing that “principle alone [cannot] be the foundation of the patent,”[101] determined the meaning of “manufacture” under the Statute to be the “practical manner” of producing the “effect” of a newly discovered principle in “any art, trade, mystery, or manual occupation….”[102]


[32]     A “manufacture,” then, for Chief Justice Eyre, was not limited to “the thing for which the patent stated in the case was granted.” Rather, it could be the manner in which it was employed to embody the “abstract notion” conceived by the inventor, in which case the “machinery,” or “thing,” would only be incidental to the “essence of the invention.”[103] For Eyre, a “new manufacture” under the Statute was not “with or without principle,”[104] as it was for Buller, but a new embodiment of principle, “as to be in a condition to act,” regardless of whether by virtue of new machinery, or strictly as a process.[105] In Hornblower, Justice Grose adopted Justice Buller’s requirement that any protection under the Statute must be “effected or accompanied by a manufacture,” but resolved the problem of improvements by “addition” to known devices by limiting protection to devices and their applications that embodied the improvement.[106]


[33]     Justice Lawrence, on the other hand, like Chief Justice Eyre in Boulton, considered “process” and the means by which it was effected under the Statute to be “convertible.” In doing so, however, he imputed intent by limiting the scope of the exclusionary right to only those means obtaining the benefit. In the case of Watt’s invention this was “lessening the consumption of steam and fuel in fire engines.”[107]


[34]     Therefore, whereas Buller concluded in Boulton that the Statute of Monopolies mandated that a new “manufacture” be a new device regardless of whether it embodied any new principle, and that a new device would embody the principle of any application to which it could be put, Eyre imputed new application of principle under the Statute, and broadened “new manufacture” to independently embrace “machinery” and “process.” Buller’s view required that “additions” be separable from the known devices they improved in order to qualify under the Statute–lest they deprive the public, or patentees, of existing rights. Eyre was not so restrictive, instead only limiting the term, “new manufacture,” to machinery and processes that actually embodied new applications of principle.[108]


[35]     In Hornblower, on the other hand, Grose found that a method “not effected or accompanied by a manufacture” did not qualify for protection under the Statute, and limited exclusionary rights to embodiments of improvements consequent to additions to known manufactures. This eliminated the need in Buller’s analysis to afford protection only to inventions that could be separated from known manufactures they improved.[109] Lawrence, like Eyre in Boulton, equated machines with the methods of their use and held both subject to protection as a “new manufacture” under the Statute of Monopolies. However, Eyre did not link machinery and their potential uses, and so was not concerned with patents of addition. Lawrence, unlike Eyre, foresaw the problems associated with granting exclusionary rights to machines and processes that would subsume benefits obtained by unimproved machines and processes. For Lawrence, this was addressed by limiting exclusionary rights to machinery intended for uses that obtained the benefits of the invention.


[36]     The positions of Chief Justice Eyre and Justices Buller, Grose, and Lawrence are itemized below:


  • Buller: A “manufacture” under the Statute of Monopolies embodies all applications of principle to which it can be put and, therefore, methods are not patentable, as such. An addition to a known “manufacture” must be independently patentable.
  • Eyre: “Manufactures” and methods of their use are independently patentable as “manufactures” under the Statute.
  • Grose: “Manufactures” do not include methods of use, but additions to known manufactures need not be independently patentable.
  • Lawrence: Methods of use are patentable as “manufactures,” and known manufactures can be patented as such if limited to new intended uses.


[37]     As we shall see, each of these viewpoints would play one or more roles in the development of the “new use,” “aggregation,” and “preemption” doctrines of the nineteenth and twentieth centuries. Further, like the views expressed by Eyre, Buller, Grose and Lawrence, all three doctrines were based on the presence of a new application of principle couched as “invention.” Following enactment of the Patent Act of 1952, the “new use” and “aggregation” doctrines would be subsumed under the conditions for patentability of “novelty” and “non-obviousness.” Only preemption doctrine remains linked to questions of statutory patent eligibility.



III.  New Uses of Known Machines, Manufactures and Compositions of Matter


[38]     The various positions held by the justices in Boulton and Hornblower regarding eligibility for patent protection as a “manufacture” under the Statute of Monopolies ultimately translated in the United States into judicial prohibitions against patentability for new uses of known “machines, manufactures and compositions” under the patent statutes in effect prior to the Patent Act of 1952. Part III–A explores the protection of methods as “manufactures” under Boulton and Hornblower, and how the inherency of principles embodied in manufactures initially justified denial of patent protection for beneficial new uses when those manufactures were previously known. Parts III–B and III–C show how the test for patent eligibility of new uses became one of “invention,” either as “non-analogous” or “non-obvious” uses. Part III–D describes a split under the Patent Act of 1952 that partitioned novelty and the judicial threshold of “invention” from the issue of patent eligibility. Part III–E then explains how the “new use” doctrine was ultimately absorbed into the statutory provisions for novelty and “non-obviousness” under the 1952 Act.


A.  The Inherency of Benefit


[39]     Following Boulton and Watt v. Bull and Hornblower v. Boulton, English jurisprudence generally followed the precept that, under the Statute of Monopolies, inventors were entitled to patent protection for both things and processes.[110] A process, in particular, was eligible for patent protection if the benefit achieved was inherent in an improvement of the device it employed. Conversely, if the device itself was not “improved,” then there was no new “process.” For example, the court in Gibson & Campbell v. Brand, before the Court of Common Pleas in 1841, held that a patent “for a new or improved process [f]or manufacture of silk, and silk in combination with certain other fibrous substances,”[111] was not properly patentable subject matter. According to the jury, despite constituting an improvement, the process represented “no new invention and no new combination.”[112] More specifically, Chief Justice Tindal relied upon Chief Justice Eyre’s opinion in Boulton that “the subject matter of letters patent, i.e. the word ‘manufacture’ as used in the statute of James, has generally been understood to denote either a thing made…; or it may perhaps extend also to a new process to be carried on by known implements or elements….”[113] However, as with Buller’s reasoning in Boulton, the inventive nature of the process was limited to the machine, “by which the work is carried into effect.”[114] As stated by Chief Justice Tindal:


Now, looking at the specification in this case, it appears to me, that this patent cannot be supported at law, because the plaintiffs have, in the course of it, claimed more than they are entitled to; for I cannot read the description that they give of their invention, and the parts of their invention, without understanding them to claim improvements that are made upon the machine, which is used for the purpose of producing the desired result.[115]


[40]     “[D]isclaim[ing] those parts of the process or mechanism, which may have been, previously to granting our patent, well known,”[116] the patentees directed “the well-known spinning frame,” and “the improvements we have applied to it,” to the “new and useful purpose of spinning silk waste of long fibres.”[117] However, in view of the jury’s determination that there was “no new invention and no new combination,” despite an improvement in the process, the verdict was upheld.[118] An improved process, using a known machine, or a known machine embodying only a slight variation, did not amount “to any thing which might properly be the subject of a patent.”[119]


[41]     In Losh v. Hague, Lord Abinger, Chief Baron of the Court of Exchequer, considered an argument in favor of patentability of a new use of railway carriage wheels (a known contrivance), or a “double use.”[120] Drawing an analogy to a new use of a “medicine known as a valuable specific in one class of complaints, fevers,”[121] Lord Abinger stated:


[T]he application of that medicine to such a new purpose would not be the subject-matter of letters patent. The medicine is a manufacture, and the making or compounding it might be the subject of a patent; but the medicine being known, the discovery of any new application is not any manner of manufacture. . . .


Cases of this kind are well described by the term ‘double use;’ and under such circumstances it is truly said, there cannot be a patent for a double or new use of a known thing, because such use cannot be said to lead to any manner of new manufacture.[122]


[42]     Therefore, as had been advocated by Justice Buller in Boulton v. Bull, a device embraced all applications to which it could be put, all such use being considered by the courts to be exactly “analogous to what was done before.”[123] The test for patent eligibility, then, was whether any improvement on the device when applied to a new use was, “in fact, made on the same principle, in either whole or in part;…”[124]


[43]     A patent directed to use of anthracite to fuel blast furnaces employed in smelting iron was upheld by the Court of Common Pleas in Crane v. Price, despite an earlier patent for the same type of furnace that did not mention the use of anthracite.[125] The court, again under Chief Justice Tindal, held that the “application of anthracite or stone coal and culm, combined with the using of hot air blast, in the smelting and manufacture of iron from iron stone, mine, or ore,”[126] was a “manufacture within the intent and meaning of the Statute of James.”[127] It was immaterial that the particular type of air blast furnace itself had been known, or that anthracite had previously been known in the manufacture of iron. Rather, patentability was consequent to the fact that “the combination of the two together (the hot blast and anthracite) were not known to be combined before in the manufacture of iron….”[128] Significantly, the court found that, while there were “numerous instances of patents which have been granted, where the invention consisted no more than in use of things already known,”[129] that “failed on other grounds,”[130] such as “want of novelty, or defective specification,”[131] there were “none that failed on the ground that the invention itself was not the subject of a patent.”[132]


[44]     Therefore, according to the court, Crane’s use of anthracite (a known fuel) in an otherwise known method of smelting iron, was within the meaning of “manufacture” under the Statute of Monopolies as a combination of these known features.[133] The reasoning here is critical because, just as had been argued by Justice Grose in Hornblower, eligibility for patent protection hinged on a combination of elements as opposed to an improvement or alteration in any element of a known device.[134] The point is that the issue of a “double use” did not arise, because the invention was an embodiment of a novel combination of known elements rather than the novel application of known elements to any particular new use. As a novel combination, eligibility for patent protection was presumed, and did not hinge on patentable distinction of any particular element of the invention. Further, because patentability lay in the combination of elements, there was no need to link patentability to an intended use of any of those elements or their combination.


[45]     In the United States, and in the same year that Crane v. Price was decided, Justice Story, riding circuit in the Circuit Court for the District of Massachusetts, held in Howe v. Abbott that “[t]he application of an old process to manufacture an article, to which it had never before been applied, is not a patentable invention.”[135] However, “[t]here must be some new process, or some new machinery used, to produce the result.”[136] Here, the invention was directed to “a new and useful improvement in the application of a material called ‘palm leaf,’ or ‘brub grass,’ to the stuffing of beds, mattresses, sofas, cushions, and all of the uses for which hair, feathers, moss, or other soft and elastic substances are used.”[137] Justice Story found that, because “Smith has invented no new process or machinery; but has only applied to palm leaf the old process, and the old machinery used to curl hair, it does not strike me, that the patent is maintainable.”[138] The invention was, consequently, “the mere application of an old process and old machinery to a new use.”[139]


[46]     Although not discussed by Justice Story, it would appear that the distinction of the invention from that in Crane is that Crane’s process for smelting ore was considered a new process by virtue of combination of an “invention already known to the public,” with “something else” to thereby obtain a new process.[140] Smith’s invention in Howe, on the other hand, was considered “no new process or machinery,”[141] but, rather, “an old process and old machinery [put] to a new use.”[142] There was, in other words, no new application of principle in Howe, but, instead, simply application of a known principle to a new material, in this case, “palm leaf,” instead of hair.


[47]     Likewise, in Bean v. Smallwood, also before the Circuit Court for the District of Massachusetts, Justice Story held that a “new and useful improvement in the rocking chair,”[143] was not patentable because the point of novelty lay in a feature that had “been long in use, and applied, if not to chairs, at least in other machines, to purposes of a similar nature.”[144] The invention was not “substantially new,” but rather, “old, and well-known, and applied only to a new purpose….”[145] Therefore, even in an instance where the device technically was novel as a whole, the novel combination was not sufficient to connote patentability if the point of novelty was found to be known and merely applied to “a new purpose.” As stated by Justice Story:


In short, the machine must be new, not merely the purpose to which it is applied. A purpose is not patentable; but the machinery only, if new, by which it is to be accomplished. In other words, the thing itself which is patented must be new, and not the mere application of it to a new purpose or object.[146]


[48]     The combination, in other words, albeit novel, was not patentable because it merely served to apply a known device to a new purpose and was not “substantially new.”[147]


[49]     In Le Roy v. Tatham (“Le Roy I”) the Supreme Court in 1852 held that the instructions to the jury dismissing the novelty of the machinery employed to fabricate lead pipe were given in error.[148] Drawing from Bean v. Smallwood, the Court stated “that a machine, or apparatus, or other mechanical contrivance, in order to give the party a claim to a patent therefor, must in itself be substantially new. If it is old and well known, and applied only to a new purpose, that does not make it patentable.”[149]


[50]     In dissent, Justice Nelson responded that, in effect, the patentees claimed “the combination of the machinery, only when used to form pipes under heat and pressure, in the manner set forth, or in any other manner substantially the same.”[150] More specifically, according to Justice Nelson, “[t]hey do not claim it as new separately, or when used for any other purpose, or in any other way; but claim it only, when applied for the purpose and in the way pointed out in the specification.”[151] The dissent criticized the majority for necessitating novelty in the “combination of the machinery employed” which, according to Nelson, is “contrary to the fair and reasonable import of the language of the specification, and also of the summary of the claim.”[152] If the naturally-occurring feature–by which “lead, when in a set state, being yet under heat, can be made, by extreme pressure to reunite perfectly around a core after separation, and then be formed into strong pipes or tubes,”[153]–were absent, the “simple apparatus employed” would be rendered “useless.”[154] As stated by Justice Nelson:


The patentees have certainly been unfortunate in the language of the specification, if, upon a fair and liberal interpretation, they have claimed only the simple apparatus employed; when they have not only set forth the discovery of this property in the metal, as the great feature in their invention, but, as is manifest, without it the apparatus would have been useless. Strike out this property from their description and from their claim, and nothing valuable is left.[155]


[51]     In essence, the dissent, as Chief Justice Eyre had done in Boulton, founded patentability on physical application of a naturally-occurring principle, regardless of whether the machinery employed to effect that application were new or old.[156] The majority, under Justice McLean, instead paralleled the reasoning of Justice Buller linking novelty in application of principle to novelty in the machinery by which that application was manifested.[157]


[52]     Justice Nelson then went further than Chief Justice Eyre, finding that discovery of a new application of principle entitled a patentee to “all other modes of carrying the same principle or property into practice for obtaining the same effect or result.”[158] By expanding protection of a new application of principle to whatever mode of application that principle employed to obtain the same result, Justice Nelson flipped Justice Buller’s reasoning in Boulton (extending patent protection to all manners of use of novel machinery). The corollary of this conclusion is that it is immaterial whether that mode itself is novel, so long as it is applied to effect the newly-discovered principle. As stated more fully by Justice Nelson:

The mode or means are but incidental, and flowing naturally from the original conception; and hence of inconsiderable merit. But, it is said, this is patenting a principle, or element of nature. The authorities to which I referred, answer the objection. It was answered by Chief Justice Eyre, in the case of Watt’s patent in 1795, fifty-seven years ago; and more recently in still more explicit and authoritative terms. And what if the principle is incorporated in the invention, and the inventor protected in the enjoyment for the fourteen years. He is protected only in the enjoyment of the application for the special purpose and object to which it has been newly applied by his genius and skill. For every other purpose and end, the principle is free for all mankind to use.[159]


Justice Nelson concluded:


They suppose that the patentees have claimed only the combination of the different parts of the machinery described in their specification, and therefore, are tied down to the maintenance of that as the novelty of their invention. I have endeavored to show, that this is a mistaken interpretation; and that they claim the combination, only, when used to embody and give a practical application to the newly-discovered property in the lead….[160]


[53]     By decoupling novelty from the machinery employed to apply a newly-discovered “principle, or element of nature,” Justice Nelson, as had Justice Lawrence in Hornblower,[161] provided for exclusionary rights to known modes where their application employed a newly discovered principle.[162] To do otherwise would limit patent protection to new manufactures, per se. Presaging later developments that would mark the introduction of the modern conception of non-obviousness, Justice Nelson also linked eligibility of a “mode or means of the new application of principle”[163] to a threshold requirement of invention:


To hold, in the case of inventions of this character, that the novelty must consist of the mode or means of the new application producing the new result, would be holding against the facts of the case, as no one can but see, that the original conception reaches far beyond these. It would be mistaking the skill of the mechanic for the genius of the inventor.[164]


[54]     Despite these analyses of the relationship between patentability of devices and of the uses to which they may be put, the general understanding that an invention could not lie in the new use of an old machine persisted. For example, in Brown v. Piper, a method of “preserving fish and other articles in a close chamber by means of a freezing mixture, having no contact with the atmosphere of the preserving chamber,”[165] was held to be an “application by the patentee of an old process to a new subject, without any exercise of the inventive faculty, and without the development of any new idea which can be deemed new or original in the sense of the patent law.”[166] Accordingly, the patent was considered invalid because “[t]he thing was within the circle of what was well known before, and belonged to the public.”[167]


[55]     Similarly, in Roberts v. Ryer, the Supreme Court denied patentability to a device that was intended for a new use.[168] The device was “a mere carrying forward or new or more extended application of the original thought, a change only in form, proportions, or degree, doing substantially the same thing in the same way, by substantially the same means, with better results.”[169] The patent at issue was directed to an open bottom ice-box that included a dividing partition and a chamber directly under the ice-box, in which articles to be refrigerated “may be placed in such manner as to receive the descending current of air from the ice box directly upon them.”[170] The Court found that, compared to an earlier patent, “[t]here was no change in the machine: it was only put to a new use.”[171] The Court unequivocally stated, without citation that, “[i]t is no new invention to use an old machine for a new purpose. The inventor of a machine is entitled to the benefit of all the uses to which it can be put, no matter whether he had conceived the idea of the use or not.”[172]


[56]     Shortly thereafter, in 1877, the Supreme Court in Cochrane v. Deener squarely placed processes within the statutory framework of “art.”[173] Further, the Court partitioned the patentability of machinery employed to perform a process from the process itself:


That a process may be patentable, irrespective of the particular form of the instrumentalities used, cannot be disputed…. A process is a mode of treatment of certain materials to produce a given result. It is an act, or a series of acts, performed upon the subject matter to be transformed and reduced to a different state or thing. If new and useful, it is just as patentable as is a piece of machinery. In the language of the patent law, it is an art. The machinery pointed out as suitable to perform the process may or may not be new and patentable; whilst the process itself may be altogether new, and produce an entirely new result. The process requires that certain things should be done with certain substances, and in a certain order; but the tools to be used in doing this may be of secondary consequence.[174]


The patentability of processes and of machinery were each, separately, founded upon novelty and utility, thereby inherently negating “double use” as an issue.


[57]     Another case, Hartranft v. Wiegmann, stated that “[t]he application of labor to an article, either by hand or by mechanism, does not make the article necessarily a manufactured article, within the meaning of that term as used in the tariff laws.”[175] This case was not centered on the eligibility of subject matter under patent laws. Nevertheless, it was borrowed in later cases, most notably Diamond v. Chakrabarty, where the Supreme Court held that genetically manipulated microorganisms were patentable subject matter under 35 U.S.C. §101 because “the patentee has produced a new bacterium with markedly different characteristics from any found in nature and one having the potential for significant utility.”[176] The Court in Chakrabarty, in fact, relied on the reasoning under Hartranft[177] that shells, despite processing, “were still shells. They had not been manufactured into a new and different article, having a distinctive name, character or use from that of a shell.”[178]


[58]     Interestingly, there is no mention in Chakrabarty of instances in Hartranft whereby a “distinctive name, character or use” would qualify subject matter as a “manufacture.” One such example in Hartranft was that of an India rubber sole fabricated by “simply allowing the sap of the India rubber tree to harden upon a mould.”[179] The Court in Hartranft considered the rubber sole to be a manufactured article “because it was capable of use in that shape as a shoe, and had been put into a new form, capable of use and designed to be used in such new form.”[180] In other words, even under the tariff laws at the time, subject matter could qualify as a “manufacture,” despite being of a material found in nature, if it was “designed for use in a new form.” According to the Supreme Court decision in Hartranft, material derived from nature qualified as a “manufacture” if some new utility inherent in that material was manifested as a consequence.


B.  Non-Analogous Use


[59]     The Supreme Court introduced “analogous use” in Penn. Railroad Co. v. Locomotive Engine Safety Truck Co., stating that “application of an old process or machine to a similar or analogous subject, with no change in the manner of application, and no result substantially distinct in its nature, will not sustain a patent, even if the new form of result has not before been contemplated.”[181] Relying on earlier English cases that stated “there must be some invention in the manner in which the old process is applied,”[182] the Court held that the subject matter of the patent being challenged, which was “already in use under railroad cars, is applied in the old way, without any novelty in the mode of applying it, to the analogous purpose of forming the forward truck of a locomotive engine.”[183] According to the Supreme Court, the “application is not a new invention, and therefore not a valid subject of a patent,”[184] thereby hinging eligibility for patent protection on the “novelty in the mode of applying”[185] the patent’s subject matter to a non-analogous purpose.[186]


[60]     In Ansonia Brass and Copper Co. v. Electrical Supply Co., the Supreme Court relied on Roberts v. Ryer, asserting that “application of an old process to a new and analogous purpose does not involve invention, even if the new result had not before been contemplated.”[187] Stated in positive terms, the Court equated non-analogous use with inventive skill sufficient to warrant patentability:


On the other hand, if an old device or process be put to a new use which is not analogous to the old one, and the adaptation of such process to the new use is of such a character as to require the exercise of inventive skill to produce it, such new use will not be denied the merit of patentability.[188]


[61]     Thereafter, several cases were decided by the Supreme Court that equated new, non-analogous use with inventiveness warranting patent protection, the absence of which was considered a prohibited “double use.” For example, in Grant v. Walter the Court stated:


The most that can be said of this Grant patent is that it is a discovery of a new use from an old device which does not involve patentability…. [I]t forms only an analogous or double use, or one so cognate and similar to the uses and purposes of the former cross-reeled and laced skein as not to involve anything more than mechanical skill, and does not constitute invention….[189]


[62]     Likewise, in Potts v. Creager:


As a result of the authorities upon this subject, it may be said that, if the new use be so nearly analogous to the former one, that the applicability of the device to its new use would occur to a person of ordinary mechanical skill, it is only a case of double use, but if the relations between them be remote, and especially if the use of the old device produce a new result, it may at least involve an exercise of the inventive faculty. Much, however, must still depend upon the nature of the changes required to adapt the device to its new use.[190]


[63]     Judge Learned Hand of the Circuit Court for the Southern District of New York, and later of the Second Circuit Court of Appeals, played a significant role in the development of patent law, particularly with respect to eligibility. Probably most famously, in Parke-Davis & Co. v. H.K. Mulford Co., he upheld the validity of claims for purified adrenaline extracted from adrenal glands.[191] Judge Hand based their validity on utility embodied within the claimed product. Dismissing the charge that the patent is “only for a degree of purity, and therefore not for a new ‘composition of matter,’”[192] Judge Hand stated that, “while it is of course possible logically to call this a purification of the principle, it became for every practical purpose a new thing commercially and therapeutically. That was a good ground for a patent.”[193] He summarized that, “[t]he line between different substances and degrees of the same substance is to be drawn rather from the common usages of men than from nice considerations of dialectic.”[194] In essence, Judge Hand asserted that the purified extract was not an embodiment of a bare principle, but, rather a novel composition embodying a new application of principle. On appeal, validity of the claimed extract was upheld by the Circuit Court of Appeals for the Second Circuit, albeit under a narrower construction that limited the composition to a substance “in whose production the suprarenal glands (whose physiological characteristics were already known) have played some part.”[195] Novelty of the composition and a new use, made possible by a new application of principle embodied in that composition, were central to the holdings in both cases.


[64]     In Traitel Marble Co. v. U.T. Hungerford Brass & Copper Co., Judge Hand, “[a]ssuming…that the law is absolute that there can be no patent for the new use of an old thing,”[196] held as patentable subject matter that embodied “very slight structural changes…, when they presuppose a use not discoverable without inventive imagination.”[197] According to Judge Hand, while “the statute allows no monopolies merely for ideas or discoveries,” devices were to be judged “not by the mere innovation in their form or material, but by the purpose which dictated them and discovered their function.”[198] Therefore, like Justice Buller in Boulton, Judge Hand believed that a device continued to embody all purposes to which it might be put to use. Nevertheless, he also found that even a slight variation in the device, if “inventive,” and if beneficial when put to a new use, would make the device and its method of use eligible for patent protection.


[65]     Likewise, Judge Hand, three months later in H.C. White Co. v. Morton E. Converse & Son Co., held valid a mechanical patent for a tricycle, even though the necessary changes to obtain the improvement were quite “simple” and by means that “have been also always at hand.”[199] For Judge Hand, “[t]he fact that the changes were so slight is quite irrelevant, so long as they were essential to the purpose, as they were.”[200] Judge Hand explained:

While the statute grants monopolies only for new structures, and not for new uses, invention is not to be gauged by the necessary physical changes, so long as there are some, but by the directing conception which alone can beget them.[201]

[66]     Even where the changes to a known device might be small and well-known, a combination “essential to the purpose” that obtained a novel and beneficial result was, for Judge Hand, inventive and sufficient to merit patent protection, despite the fact that “this inventor merely thought to unite them by a fortunate insight which had thereto escaped the imagination of others.”[202] Therefore, as had been stated by Justice Grose in Hornblower, additions to known devices need not be independently patentable to make the improved device eligible for patent protection.


[67]     There was, however, growing confusion over eligibility of subject matter at this time, as highlighted in the case of Ex parte Brown, a Patent Office decision on appeal reversing the rejection of claims directed to “electrical insulating material composed of plant leaves of the Bromelia family.”[203] The Commissioner reasoned that, because the “[a]ppellant appears to have been the first to discover that this [fiber] material possesses unexpectedly superior electric insulating properties,”[204] the claimed “electric insulating material” composed of the fiber material was patentable.[205] The decision relied on dicta from General Electric Co. v. Hoskins Mfg. Co., including the following:


The novelty of the patent in suit consists in discovering a new use for the chromium-nickel alloy in which is produced most extraordinary and unexpected results. . . .


[Marsh] first disclosed the properties and great advantages of the chromium nickel alloy as a resistance element. . . .


Inasmuch as Marsh is not claiming novelty for his alloy as such, we need not give the objection further attention.[206]


[68]     Close inspection of the decision in General Electric, however, reveals that the court distinguished between claims to the chromium-nickel alloy and its embodiment as an “electrical resistance element.”[207] As stated by the court, “unless Placet anticipates Marsh’s material as an electrical resistance element, it is not anticipated.”[208] As further stated by the court:


From the foregoing statements it is evident that Placet and the other prior art and prior publication references fell far short of disclosing, even to those skilled in the art, the subject matter of the patent in suit.[209]


[69]     The Patent Office in Brown, however, went further and viewed naturally-occurring material as patentable, if characterized as a discovery of its inherent properties. In other words, the patentability of the claims was upheld as an “electric insulating material,” because the property of electrical resistivity was the discovery that formed the basis of the patent application.[210]


[70]     Similarly, in Ex parte Oscar Hannach, the Patent Office Board of Appeals in 1931 reversed the final rejection of claims directed to a “refrigerating composition, consisting of a mixture of ammonium-chloride and sodium-carbonate adapted to produce a decrease of temperature upon being dissolved,” in view of a German patent disclosing a mixture of ammonium-chloride and sodium-carbonate for extinguishing fires.[211] The examiner argued that the “appellant is not entitled to a patent for merely perceiving this [refrigerating] property of the old substance.”[212] The Board, on the other hand, did “not consider that the uses are sufficiently analogous or the functions of the chemicals sufficiently similar so that any suggestion of the use of the fire extinguishing mixture as a refrigerating mixture would be received without the exercise of invention.”[213] Again, like the position held by Justice Lawrence in Hornblower, patent eligibility was available for known devices if limited to a particular use. Here, discovery of a non-analogous use was sufficient to support a claim to a known composition distinguished only by its characterization and intended use.


[71]     It was also at about this time that the Supreme Court began calling into question the statutory meaning of “manufacture” under patent law. In American Fruit Growers, Inc. v. Brogdex Co., the Supreme Court in 1931 held that the claimed combination of natural fruit and a “boric compound carried by rind or skin in an amount sufficient to render the fruit resistant to decay,”[214] was not a “manufacture,” because there was “no change in the name, appearance, or general character of the fruit.”[215] The Court quoted Hartranft[216] and another case, also unrelated to patent law, Anheuser-Busch Brewing Ass’n v. United States, to thereby impose the requirement that a “manufacture” must embody “something more”:


“Manufacture implies a change, but every change is not manufacture, and yet every change in an article is the result of treatment, labor and manipulation. But something more is necessary….There must be transformation; a new and different article must emerge ‘having a distinctive name, character or use.’”[217]


As we shall see, a threshold requirement resembling “something more” would be echoed by the Supreme Court in later decisions addressing the statutory eligibility of claimed subject matter.[218]


[72]     Shortly after American Fruit Growers was decided, Judge Hand in H.K. Regar & Sons, Inc. v. Scott & Williams, again held to Justice Buller’s standard in Boulton that “a new use of an old thing or an old process, quite unchanged, can under no circumstances be patentable,”[219] and directly related this conclusion to the statutory provision that “allows patents only for a new ‘art, machine, manufacture or composition of matter’ [35 USCA § 31].”[220] For Judge Hand, a “new use begets a new device. In such cases it requires but little physical change to make an invention.”[221] Therefore, and seemingly in contrast to some earlier decisions by the Patent Office,[222] recharacterization of known subject matter and intended use could not connote statutory eligibility as a new “art, machine, manufacture or composition of matter.”


[73]     Judge Hand addressed the issue of “new use” more directly in Hookless Fastener Co. v. G.E. Prentice Mfg. Co., as a conflict between the eligibility of new machines and the lack of eligibility of new uses of known components.[223] As stated by Judge Hand:


We conceive the rule to be that if the invention be merely of a new use for an old machine, it is never patentable; the statute does not authorize patents for uses, though processes come close aboard at times. But if the patent be for a new machine, there is no such doctrine, and indeed could not be, because substantially every machine is sure to be composed of old elements. The real difficulty is, as it usually is, in fixing the marches where these conflicting doctrines meet.[224]


[74]     Judge Hand was, possibly without recognizing it, wrestling with Justice Buller’s insistence that, to be patentable, an improvement on a machine must be patentable apart from its combination with the machine so improved, and with the dilemma of Lord Mansfield in Morris v. Bramson in 1776, that “if the objection to this patent was on the ground that it was only for an addition to an old machine, that objection would revoke almost every patent.”[225] Regardless, Judge Hand’s position remained the same: arts, machines, manufactures, and compositions of matter all inherently embodied applications of principle, thereby excluding from patent eligibility protection for previously undiscovered uses, regardless of the novelty of the use and of any new benefits or utilities obtained by such discoveries.


[75]     The Patent Office, however, continued to grant patents based on discoveries that made known subject matter amenable to new and beneficial uses. For example, in 1934, the Patent Office Board of Appeals in Ex parte Walter H. Fulweiler upheld the validity of a process for the “use of aluminum soap of cocoanut oil to stuff the leather” for use in gas meter diaphragms, because, although known as a method to “render it pliable and waterproof,” the applicant had discovered “new properties in this material especially adapting it for use in gas meters.”[226] The Board stated that the case was “believed to be similar to that of Ex parte Brown,” where the fiber of plant leaves was held to be patentable subject matter as “electric insulating” material.[227] In another example, the Patent Office Board of Appeals, in Ex parte Jos. A. Weiger, held that claims directed to a valve seat, where the “only alleged novelty of the claims resides in the use of the material for the seat not heretofore used for that purpose,” were patentable because, although the material itself was taught in an earlier patent, that patent did not describe “to any great extent the properties of the material.”[228] According to the Board, the appellant’s “selection of the patented material and determination of its suitability [as a valve seat] is an accomplishment warranting the grant of a patent.”[229] Thus, while Judge Hand at the Circuit Court of Appeals for the Second Circuit continued to insist that “new uses” could only be consequent to employment of new devices within the mandate of the patent statute, and while the Supreme Court generally allowed for such “new uses” only when they were not “analogous” to known uses or processes, the Patent Office reversed rejections made by examiners of claims to materials distinguished only by their intended use.[230]

C.  Non-Obvious Use


[76]     The Supreme Court case of Cuno Eng’g Corp. v. Automatic Devices Corp. is well-known for its assertion that a “new device, however useful it may be, must reveal the flash of creative genius not merely the skill of the calling,” to qualify for patent protection.[231] Although the Court in Graham v. John Deere later dismissed this language as mere “rhetorical embellishment,”[232] Cuno is also noted for invoking the 1851 Supreme Court case of Hotchkiss v. Greenwood.[233] Hotchkiss is now widely acknowledged as establishing a requirement for patentability of “more ingenuity and skill” than that of “the skillful mechanic”[234] that later became the basis for the modern statutory requirement of “non-obviousness.”[235] There is less recognition that the Court in Cuno based its decision, in part, on the prohibition against new uses of old devices. Specifically, the Court extrapolated the Constitutional provision for patent protection to a requirement of “inventive genius” by relying on the bar against patenting a “new application of an old device:”


We cannot conclude that his skill in making this contribution reached the level of inventive genius which the Constitution (Art. I, § 8) authorizes Congress to reward. He merely incorporated the well-known thermostat into the old ‘wireless’ lighter to produce a more efficient, useful and convenient article. A new application of an old device may not be patented if the ‘result claimed as new is the same in character as the original result’ even though the new result had not before been contemplated.[236]


Therefore, the prohibition against claiming a “new use” of known subject matter continued to be a viable doctrine, even under the Supreme Court, and was included in the reasoning that would eventually become the seeds of modern statutory “non-obviousness.”


[77]     In 1943, the Court of Customs and Patent Appeals in In re Thuau upheld a rejection of claims directed to “a new therapeutic product for the treatment of diseased tissue.”[237] All three claims were directed to products, and two of the three claims at issue were directed to known products limited to an intended use.[238] As framed by the appellant, the issue was “whether the products defined involved a new and unobvious use – namely, a therapeutic product for the treatment of diseased tissue.”[239] The Court of Customs and Patent Appeals rephrased the question to “whether a new and unobvious use for an old composition renders claims for such use patentable.”[240] The issue, as rephrased by the court, seems incongruous with the subject matter of the claims on appeal because those claims were directed, in each case, to a product and not to its use. As we have seen, equating a product to a method of its use would only find sanction if, as asserted by Justice Buller in Boulton, a manufacture embodied all possible applications of principle of its use by definition, rather than that of Lord Chief Justice Eyre, who determined that methods, or processes, could be held patentable as “manufactures” under the Statute, independent of the patentability of the means by which the methods are effected. The remainder of the opinion in Thuau, indeed, assumes equivalency of products and methods of their use; the court repeatedly referenced the patentability of “a new use of an old thing or old process:”


But a new use of an old thing or old process, quite unchanged, can under no circumstances be patentable; not because it may not take as much inventiveness to discover it, as though some trivial change were necessary, but because the statute allows patents only for a new “art, machine, manufacture or composition of matter”…. The test is objective; mere discovery will not do.[241]


The claims were denied eligibility for patent protection by the court as a new use of an old composition, even though the claims were for products, and despite acknowledgement of the value of the discovery and the consequent benefit of the invention:


That appellant has made a valuable discovery in the new use of the composition here involved we have no doubt, and it is unfortunate for him if he cannot make claims adequate to protect such discovery, but to hold that every new use of an old composition may be the subject of a patent upon the composition would lead to endless confusion and go far to destroy the benefits of our patent laws.[242]


There was no consideration by the court that, the products being known, the rejections of the claims at issue could have stood on lack of novelty alone, without involving the doctrine of “new use.”


[78]     In 1947, Judge Hand, again for the United States Court of Appeals for the Second Circuit, held, in Old Town Ribbon & Carbon Co., Inc. v. Columbia Ribbon & Carbon Mfg. Co., Inc., that claims directed to a “device for making a master copy sheet for use either in the gelatin type or in the spirit type of reproduction,” and for a “folded sheet”[243] were anticipated by an earlier-issued patent.[244] As stated by the court, “it was a perfect anticipation of both claims in suit, except for the absence of any suggestion that they were fit for the ‘gelatin pad,’ as well as for the ‘spirit,’ process.”[245] The court’s reasoning, however, perpetuated the underlying assumption of the Court of Customs and Patent Appeals in In re Thuau that “machines,” “manufactures,” and “compositions of matter” were, somehow, inherently, embodiments of any “art” or process employing them:


Nevertheless, since 1793, unless a patent disclosed a “new and useful art,” a new “machine,” a new “manufacture,” or a new “composition of matter,” it has not been a valid patent. If it be merely for a new employment of some “machine, manufacture or composition of matter” already known, it makes not the slightest difference how beneficial to the public the new function may be, how long a search it may end, how many may have shared that search, or how high a reach of imaginative ingenuity the solution may have demanded. All the mental factors which determine invention may have been present to the highest degree, but it will not be patentable because it will not be within the terms of the statute. This is the doctrine that a “new use” can never be patentable.[246]


For Judge Hand, a “process” was an “art” under the statute only if it employed some novel “machine, manufacture or composition of matter.” The court left no doubt that the issue was one of patent eligibility by explicitly providing for qualification under the statute for even “very slight physical changes in a ‘machine,’ a ‘manufacture’ or a ‘composition of matter,’” while specifying no such provision for a new process or “art” not consequent to some slight variation of a “machine,” “manufacture,” or “composition of matter:”


As we have said in earlier cases, this does not mean that very slight physical changes in a “machine,” a “manufacture” for [sic] a “composition of matter” may not be enough to sustain a patent; the act of selection out of which the new structure arises, is the determinant, and small departures may signify and embody revolutionary changes in discovery; but the law does not protect the act of selection per se, however meritorious, when it is not materially incorporated into some new physical object.[247]


Without ever citing Thuau, Judge Hand repeated the reasoning of the Court of Customs and Patent Appeals in that case by superfluously lumping eligibility of a new “art,” or “process,” with claims directed to other statutory categories when the “machine,” “manufacture,” or “composition of matter” employed by the process was not new. All of the claims were directed to known products and, therefore, as in Thuau, Judge Hand did not consider claimed methods of their use, nor any products intended for some specific use, to be statutory subject matter when the products themselves were not novel.[248]


[79]     Interestingly, the Court of Customs and Patent Appeals provided some clarification to the dicta of Thuau in In re Haller,[249] just a few months after Old Town Ribbon. Like Thuau and Old Town Ribbon, the claims at issue in Haller were directed to a product, in this case “[a] packaged product comprising cyclopropyl alkyl ether having not more than three carbon atoms in the alkyl group, labeled to show its use as an insecticide.”[250] Unlike the dicta in Thuau, the court in Haller clearly distinguished between claims directed to an old composition having a new intended use, and claims directed to the new use of that old composition:


Counsel for appellant cites numerous authorities to the effect that the concept of using an old material for a new purpose may, if properly claimed, form a basis for a patent. That point is not in issue here. The issue here is whether an old composition can be patented as a composition on the basis of the mere statement of a new use.[251]


[80]     Relying on Thuau, the court stated that “[t]he difficulty is not that there can never be invention in discovering a new process involving the use of an old article, but that the statutes make no provision for the patenting of an article or composition which is not, in and of itself, new.”[252]


[81]     The dicta by the court in Haller also clearly rested on its understanding that the “basis for rejection… in the Thuau case… [was] lack of novelty in the composition claimed, rather than lack of invention in the use suggested.”[253]


D.  The Patent Act of 1952: Splitting Invention from Eligibility

1.  Statutory and Judicial Conflation Prior to the Patent Act of 1952


[82]     Until enactment of the Patent Act of 1952, the modern concepts of “patent eligibility” and “statutory novelty” were defined under a single paragraph of the patent statute, as they had been under various acts since the Patent Act of 1790. For example, in 1947, at the time of Haller, the relevant provision was § 31 of Title 35 of the United States Code, which read, in part, as follows:


Section 31. Inventions Patentable.

Any person who has invented or discovered any new and useful art, machine, manufacture, or composition of matter, or any new and useful improvements thereof, … not known or used by others in this country, before his invention or discovery thereof, and not patented or described in any printed publication in this or any foreign country, before his invention or discovery thereof, or more than one year prior to his application, and not in public use or on sale in this country for more than one year prior to his application, unless the same is proved to have been abandoned, may, upon payment of the fees required by law, and other due proceeding had, obtain a patent therefor.[254]


[83]     Even more importantly, there was no statutory provision for non-obviousness, which debuted in the Patent Act of 1952. Instead, courts relied on the standard espoused in Hotchkiss[255] and, before that, “patentable novelty” or “substantial novelty,”[256] which was a judicial conceit born from the limitation of the Patent Act of 1793 that “simply changing the form or the proportions of any machine, or composition of matter, in any degree, shall not be deemed a discovery.”[257]


[84]     Given that the modern notions of eligibility and novelty were combined within a single statutory provision, and that non-obviousness was a nascent concept, at best, it is not surprising that “invention” continued to play a significant role in determining whether subject matter qualified as any of the statutory categories of “art, machine, manufacture or composition of matter” right up to the introduction of the 1952 Act. In Funk Bros. Seed Co. v. Kalo Inoculant Co., for example, the Supreme Court in 1948 applied the reasoning of Cuno Engineering to hold that combinations of different species of bacteria of the genus Rhizobium were not eligible subject matter under the statute because, according to the Court, “a product must be more than new and useful to be patented; it must also satisfy the requirements of invention or discovery.”[258] Following the logic of the Court in Cuno, that “[a] new application of an old device may not be patented if the ‘result claimed as new is the same in character as the original result’ even though the new result had not before been contemplated,”[259] the Court in Funk Bros. stated:


The application of this newly-discovered natural principle to the problem of packaging of inoculants may well have been an important commercial advance. But once nature’s secret of the non-inhibitive quality of certain strains of the species of Rhizobium was discovered, the state of the art made the production of a mixed inoculant a simple step. Even though it may have been the product of skill, it certainly was not the product of invention.[260]


[85]     More specifically, the combination of species fell short of “invention” within the meaning of the patent statute because:


No species acquires a different use. The combination of species produces no new bacteria, no change in the six species of bacteria, and no enlargement of the range of their utility. Each species has the same effect it always had. The bacteria perform in their natural way. Their use in combination does not improve in any way their natural functioning. They serve the ends nature originally provided and act quite independently of any effort of the patentee.[261]


The non-mutually inhibitive nature of certain combinations of bacteria was an example of “manifestations of laws of nature, free to all men and reserved exclusively to none,”[262] whereby:


He who discovers a hitherto unknown phenomenon of nature has no claim to a monopoly of it which the law recognizes. If there is to be invention from such a discovery, it must come from the application of the law of nature to a new and useful end.[263]


[86]     The non-mutually inhibitive combination of bacteria claimed by the patentee was no more than “one of the ancient secrets of nature now disclosed.”[264] In effect, the combination represented for the Court in Funk Bros. “a new use of an old thing or an old process, quite unchanged,” in the words of the court in Thuau.[265] Therefore, as stated by the Court in Funk Bros., “[a]ll that remains…, are advantages of the mixed inoculants themselves. They are not enough.”[266]


[87]     Following Haller, the meaning of Thuau was again at issue in In re Benner, where the Court of Customs and Patent Appeals upheld a rejection of claims directed to a ball mill lining element.[267] The appellant distinguished the holding in Thuau as “merely claiming a new use for an old condensation product whereas appellants claims are directed to an article of manufacture which is new.”[268] The court found that the “introductory phrase, ‘A ball mill lining element,’ does not constitute a part of the subject matter of the appealed claims to be considered as a limitation in determining the question of patentability.”[269] Therefore, “the matter of non-analogous use alleged is not important in this case.”[270] The court also responded to the appellants’ further argument that “a change, modification, or adaptation (of the old product), however slight, imparts patentability,” as being “too broad to be accepted as sound law.”[271] Rather, while “[i]nvention might be present in a very slight alteration, … such alteration must amount to something more than mechanical or professional skill” and, regardless, the court found a “lack of statutory authority for the grant of a patent based solely on use.”[272] The court did not comment on the eligibility of claims regarding methods of use, as opposed to claims for the products themselves.


[88]     A clear distinction between claims to compositions and methods of their use was again laid out by the Patent Office Board of Appeals in Ex parte Wagner.[273] The claims, directed to a “well drilling process employing a drilling mud to which has been added a water-soluble cellulose sulfate,” were rejected by the examiner as “not being proper process claims.”[274] The Board interpreted the examiner’s rejection to mean “that the process claims are unpatentable over the conventional well drilling processes shown in the cited patents and not that they are improper in a statutory sense.”[275] The Board reversed the examiner because it found that, under the “Thuau doctrine,” claims to compositions and to their methods of use were separately patentable in that, depending upon the prior art, composition claims might fall while those directed to methods of their use might not.[276] In In re Craige, the United States Court of Customs and Patent Appeals was even more direct.[277] Affirming a rejection where no method claims were at issue, the court cited Thuau for the proposition that “patents for old compositions of matter based on new use of such compositions, without change therein, may not lend patentability to claims.”[278]


[89]     In In re Aronberg, which was decided June 30, 1952, claims for a pipe joint sealing compound were upheld as novel because the claims, despite being open-ended, did not contemplate the presence of a “non-drying oil.”[279] The sealing compound, therefore, was “a substance useful in an art wholly non-analogous to the [prior] art….”[280] According to the Court of Customs and Patent Appeals, the holding was in conformance with the “well settled rule that discovery of a new use for an old article is not patentable.”[281] On the eve of the Patent Act of 1952,[282] patent protection was justified by reasoning that blended the three concepts of eligibility, novelty and invention:


It is our view that by eliminating or omitting the non-drying oil (the non-siccative) from the composition there was produced a new composition of matter which the British patent did not anticipate, and the record justifies the conclusion that the new composition is both novel and useful and that its production involved the exercise of the inventive faculty.[283]


Because applicants’ claimed pipe joint sealing compound was a novel and inventive composition of matter under the law, the maxim that prohibited “a new use for an old article” had not been violated.


2.  General Understanding of New Uses Based on In re: Thuau


[90]     Before implementation of the Patent Act of 1952, the holding and dicta in Thuau were commonly understood to bar “pure uses,” as exemplified by Biesterfeld in 1949:

In the past the Patent Office issued quite a large number of patents covering pure uses, which under the decisions shown above must be deemed void. This practice is believed to be coming to an end, following the publication of In re Thuau (57 U.S.P.Q.) in 1943. Certainly there is no justification now for the Patent Office to issue any patent claims covering a use per se, whether mechanical or chemical.


According to the decisions, a patentee is entitled to all the uses of his invention, whether known or unknown to him.

….In conclusion, a use as such is unpatentable.[284]


[91]     However, Wachsner criticized “new use” doctrine in an article published in the Journal of the Patent Office Society in June of 1952.[285] He began with the assumption that the “In re Thuau doctrine” meant that a “new use for an old substance is not patentable, even when the new use is clearly non-analogous.”[286] The fundamental dilemma identified by Wachsner was that, while patents do not give the absolute right to use an invention (rather they provide only an exclusionary right), it is well-settled that the patentee is “not allowed to… practice a method patented to another person, no matter whether the latter patent is older or younger than his own.”[287] Therefore, “the argument against patentability of new uses because of the unrestricted use to which an older patentee is entitled is little convincing.”[288] Conversely, decisions inferring that “the principle of the unpatentability of new uses” has to give way where “the new use is non-analogous, that is to say where invention is involved,” must concede that “then, it is no principle at all,” because “invention has always to be present if a patent is to issue, and no amount of inventive genius can make up for the lack of unpatentable [sic] subject matter.”[289] For Wachsner, such decisions, hinging eligibility on “inventive genius,” abandoned “a clear distinction … between patentable matter and invention,”[290] in order to “becloud the real issue and to find a way out of the dilemma [of new uses] to refuse a patent to somebody who obviously has deserved it.”[291]


3.  The Split


[92]     The Patent Act of 1952,[292] which took effect on January 1, 1953, distinguished patent eligibility from “conditions” for patentability of eligible inventions by splitting the previous provisions of § 31 of Title 35 into new § 101 (“Inventions patentable”) and § 102 (“Conditions for patentability; novelty and loss of right to patent”).[293] Judicial precedent delineating patentable distinction beyond novelty was legislated under § 103 (“Conditions for patentability; non-obvious subject matter”).[294] New § 101 replicated portions of the language of previous § 31 and included the same categories of subject matter, but replaced the term “art” with “process.” The substitution was made “to avoid the necessity of explanation that the word ‘art’ as used in this place means ‘process or method,’ and that it does not mean the same thing as the word ‘art’ in other places.”[295] The term “process” was defined at § 100 to mean “process, art or method, and includes a new use of a known process, machine, manufacture, composition of matter, or material,”[296] and was added “to make it clear that ‘process or method’ is meant, and also to clarify the present law as to the patentability of certain types of processes or methods as to which some insubstantial doubts have been expressed.”[297]


[93]     P.J. Federico, a principal author of the new Act, made a distinction between the eligibility of claims directed to a “new use,” on one hand, and claims to “a known process, machine, manufacture, composition of matter, or material” subject to that new use, on the other.[298] The former were eligible for patent protection, “provided the conditions for patentability are satisfied,” while the latter were not, regardless of such conditions.[299] For Federico, despite the fact that “some of the statements made in the decision are not completely defensible,” the Court of Customs and Patent Appeals in Thuau meant “simply that an old material cannot be patented as a composition of matter, because it is an old material, and the fact that the inventor or discoverer may have discovered a new use for the old material does not make the material patentable. To this extent the decision is affirmed by the statute….”[300]


[94]     In this discussion by Federico there was no reference to any threshold under new § 101, other than the requirement that the process, machine, manufacture, composition of matter, or material be “new and useful,” as required by the literal language of the statute. Even with respect to “new uses of old materials,” Federico stated that the new statute “recognizes a process or method which involves only a new use of an old material, as within the field of subject matter capable of being patented.”[301] He then linked recognition under section 101 to the qualification that “conditions and requirements of this title” must, nevertheless, be met:


The reference to the new use of a known machine or manufacture in the definition merely means that processes may utilize old machines or manufactures and the reference to the new use of a known process simply indicates that the procedural steps in a patentable process might be old. . .

The methods, however, will still have to satisfy other conditions of the statute in order to be patentable, and the condition expressed in section 103 would rule out many such methods.[302]


[95]     Section 103, in turn, was deemed by Federico to be a second “major change” that incorporated a judicial requirement of “invention.”[303] According to Federico, section 103 was a “limitation on section 102 and it should more logically have been made part of 102.” But, even as a “third requirement,” beyond novelty and utility, the new provision embraced “invention” under the old statute as “an extension of the statutory requirement for novelty:”


In form this section [103] is a limitation on section 102 and it should more logically have been made part of section 102, but it was made a separate section to prevent 102 from becoming too long and involved and because of its importance. The antecedent of the words “the prior art,” which here appear in a statute for the first time, lies in the phrase “disclosed or described as set forth in section 102” and hence these words refer to material specified in section 102 as the basis for comparison. . . .


The source of the requirement under the prior statute has been variously attributed. The opening clause of old R.S. 4886, which specified the classes of patentable subject matter (see section 101), began “Any person who has invented or discovered any new and useful art, machine, etc.” Two requirements may be found here: novelty (although novelty is further defines [sic] to referring [sic] to the conditions which defeat novelty), and utility (which condition is not further defined). The use of the word “invented” in this phrase has been asserted as the source of the third requirement under discussion. However, a different origin, with which the language and arrangement in the new code are in harmony, has also been stated. This is that the requirement originally was an extension of the statutory requirement for novelty.[304]


Therefore, tests of novelty, including those of sufficiency of “invention” under the statute, were deliberately partitioned from the listing of eligible classes of invention under the old statute and placed under the “conditions for patentability” of “novelty” under section 102 and “non-obvious subject matter” under section 103.


[96]     It is telling that Federico’s Commentary includes no discussion of any lingering requirement that a “process” under section 101 of the new Act must meet a threshold of non-analogous use or inventiveness. Rather, both the legislative history and Federico’s Commentary clearly state that qualification as a “process” under section 101 involves “merely the new use of a process, machine, manufacture, composition, or material.” All other requirements associated with obtaining an exclusionary right were relegated to “conditions for patentability” found in the remainder of the statute. Significantly, there was also no mention anywhere in the legislative history of “preemption” or the excepted categories of laws of nature, natural phenomena, and abstract ideas that would figure so prominently in judicial developments that would follow under section 101.


[97]     In addition to his Commentary, Federico spoke at a meeting of the American Patent Law Association (APLA) in 1953 on the topic of sections 100 and 101 of the 1952 Act. As reported by the APLA, Federico stated that “In re Thuau was reaffirmed by the statute and is still good law with respect to the point decided ‘An old material is still an old material,’”[305] presaging statements he would later make in his Commentary. Ex parte Wagner, discussed above,[306] was used by Federico as an illustration presented to examiners at the Patent Office as a “good decision to study with respect to use claims.”[307] As recited above, the Board in Wagner stated that, “under the Thuau doctrine, the situation may reasonably arise, after grant of the patent, where the composition claims may be anticipated by a reference which does not meet the process claims.”[308]


[98]     Reliance on Wagner as a characterization of Thuau limited the prohibition against “dual use” to claiming old compositions used for specific processes, rather than claiming new processes using those old compositions. Federico also stated that examiners were barred from allowing claims employing the phraseology, “[t]he use of _____ for _____.”[309] Instead, “[t]he claim must specify that it is a process or method.”[310] Finally, Federico stated that “process claims should no longer be rejected as being merely a conventional way of using a material.”[311]


E.  The Demise of New Use Doctrine


[99]     Comments similar to Federico’s in his Commentary were made by Riesenfeld in a 1954 article.[312] It is possible that Riesenfeld had read Federico’s Commentary, although there is no reference to it in his article. There is, however, reference to Federico’s speech as reported in the Bulletin of the APLA in 1953.[313] In his article, Riesenfeld stated that the “substitution of the expression-troika ‘process, art or method’ in lieu of the single wheel-horse ‘art’ should not amount to an actual change in the law,”[314] and questioned how much the new Act limited the “Thuau doctrine.”[315] Riesenfeld appears to distinguish the patentability of known “machines, compositions of matter and material” from new uses of those statutory categories as “processes” by asserting that “a newly discovered use for a known substance, machine or process is still only patentable if it is not merely analogous or cognate to the uses heretofore made.”[316] The suggestion could be drawn from this passage that eligibility of claims to a process, unlike those directed to other categories of statutory subject matter, required some degree of invention, despite provisions in the new statute for a separate requirement of non-obviousness under section 103.


[100]   On closer reading, however, Riesenfeld raises general policy concerns over the prospect of depriving “the public of the benefits of a process, machine or product merely because it has been discovered that such process, machine or product possesses desirable qualities heretofore not apparent which warrant the intensification or expansion of the accepted use.”[317] He then turns the argument around and states that this logic should apply regardless of whether the “new use relates to a known process or a known product,” thereby removing the basis for imposing conditions on the statutory eligibility of processes that would not apply to the other categories of machines, manufactures, and compositions of matter.[318] Riesenfeld called out a then recently decided district court case, United Mattress Mach. Co. v. Handy Button Mach. Co., as an example of a “contrary” and “perturbing misunderstanding of the [A]ct,” citing a footnote in that case suggesting that processes and products were to be distinguished under the Act by the fact that “process patents may be granted for a new use in situations where products would not qualify.”[319] The court in United Mattress had explained that, because “[t]he Act contains no comparable language respecting the new use of prior art products, as such,” the limitations on the meaning of “process” under the 1952 Patent Act as a matter of eligibility did not extend to “products.”[320] In other words, Riesenfeld understood the court in United Mattress to impute a requirement of inventiveness to a “process” under the Act that did not apply to the other categories of eligible subject matter, and he attributed this to a “perturbing misunderstanding” of the statute and its legislative history.[321]


[101]   As stated above, Riesenfeld does not appear to have had the benefit of Federico’s Commentary, which clearly stated that Thuau was not overruled by the statute and that claims to a new use for an old material were eligible while claims to an old material intended for a new use were not. He may also not have been privy to Federico’s statement that the word “invented” under section 101 was to be implemented under the non-obviousness requirement of section 103.[322] Instead, Riesenfeld presumed that the new Act, and the “new statutory definition of ‘process[,]’ restores the broad principles of patentability flowing from a careful analysis of the exposition given by the Supreme Court in the Ansonia case….”[323] According to Riesenfeld, that “careful analysis” revealed the “crucial issue specifically as a ‘question of patentable novelty’ and one of ‘invention’ rather than one of patentable subject matter as such…,” as recited by Justice Brown in that case.[324] Therefore, even in the absence of Federico’s Commentary, Riesenfeld’s estimation of the Supreme Court’s reasoning in Ansonia Brass revealed a distinction between eligibility on one hand, and the degree of “invention” sufficient to merit the grant of a patent on the other. These separate requirements, as articulated by the Court in Ansonia Brass, later dovetailed neatly into the statutory eligibility language of section 101 with its explicit reference to “conditions and requirements” that included novelty under section 102 and non-obviousness under new section 103.


[102]   Nevertheless, confusion over the meaning of the holding and dicta in Thuau, and over its treatment under the 1952 Patent Act, continued. In In re Ducci, the Court of Customs and Patent Appeals affirmed a rejection of claims directed to an article and method for the manufacture of multi-cellular glass because, as stated by the Board, “this glass is analogous to the glass of the references and it is quite evident that it can be converted to multi-cellular glass by the method disclosed in the references.”[325] The Board relied on Craige to conclude that, “[u]nder these conditions the invention does not reside in the method.”[326]


[103]   In his defense, the appellant argued that “Craige, … turned upon the doctrine of In re Thuau, … which held that new use of old materials were not patentable,” but that “the Patent Act of 1952 overturned the Thuau doctrine, and, of course, the Craige decision with it.”[327] In response, the court referred to arguments made by the Solicitor on behalf of the Patent Office quoting Riesenfeld’s article:


. . . With respect to Section 100(b), the latest published view on the matter is that “the background of the amendment gives reason to assume that a newly discovered use for a known substance, machine or process is still only patentable if it is not merely analogous or cognate to the uses heretofore made.” To this view the Commissioner subscribes.[328]


The court concluded that, “in the absence of authority to the contrary, we know of no reason to dispute the validity of the foregoing views expressed by Mr. Riesenfeld and the Commissioner of Patents,”[329] and affirmed the decision of the Board invalidating the claims as being “without the exercise of the inventive faculty, only that which is obvious to any person skilled in the art.”[330]


[104]   The appellant, apparently, misunderstood Thuau to mean that method claims constituting a “new use of old materials were not patentable,” and was, therefore, trying to make the argument that, because the 1952 Patent Act clearly made new uses patentable, the so-called “Thuau doctrine,” as well as the Craige decision, had been overturned. The court, for its part, correctly understood that Thuau had not been overturned by the 1952 Patent Act, but misunderstood Thuau to mean that an analogous method is not eligible as a “process” for patent protection under section 100(b), and supported this position by statements taken out of context from Riesenfeld’s article by the Solicitor. Moreover, throughout Ducci, there is no reference to any of sections 101, 102 or 103, possibly indicating difficulty by the Patent Office and the court in applying a distinction among these new statutory provisions, and potentially laying the basis for greater misunderstanding.


[105]   For example, the U.S. Court of Appeals in Elrick Rim Co. v. Reading Tire Mach. Co., Inc. stated that a “different use of a known substance, machine, or process is not ‘new’ within the meaning of this statute [35 U.S.C. § 100(b)] if it is merely analogous or cognate to the use theretofore made.”[331] The threshold of “invention” relied upon by the court, however, mirrored the newly minted statutory requirement of non-obviousness:


Invention or discovery is not present where the new use of a known apparatus is the product of the exercise of ordinary professional skill. There must be ingenuity over and above mechanical skill.[332]


This analysis suggests that a “different use” lacking “invention” is excluded from the definition of “process” under 35 U.S.C. § 101.


[106]   On the other hand, Judge Learned Hand, in Lyon v. Bausch & Lomb Optical in the United States Court of Appeals for the Second Circuit, specifically viewed the “definition of invention… [to be] … now expressly embodied in § 103.”[333] Further, courts since that time, while acknowledging that “§100(b) does not make every new use patentable,” have uniformly decided patentability of “uses” pursuant to the “conditions” for patentability, namely novelty under 35 U.S.C. §102 and non-obviousness under 35 U.S.C. § 103.[334] The decision and dicta by the Court of Customs and Patent Appeals in In re Zierden, is one example:


Since the composition … is not rendered patentable by the recitation of intended use, the rejection … must be affirmed.


As to the method claims … the situation is different. First of all, there is express statutory authority for a patent on a process which is a new use of a known process, composition of matter, or material, 35 U.S.C. 100(b) and 101, provided, of course, the process predicated on the new use is new and unobvious and not subject to a statutory one-year time bar.[335]

[107]   The Supreme Court has never rendered an opinion as to whether a process or any other statutory class under 35 U.S.C. §101 is ineligible for patent protection as a “new use.”


IV.  Aggregation of Applied Principles


[108]   “New use” doctrine in the United States closely followed the Boulton and Hornblower prohibitions against new uses of known “machines, manufactures and compositions of matter” that did not embody a “new application of principle.” “Aggregation” developed later as a doctrine that barred patent protection for combinations of known methods or devices if they did not collectively embody some new application of principle and, therefore, lacked the “something more” that was “invention” under patent law. Part IV-A describes the genesis of the aggregation doctrine and how it, like new use doctrine, came under the umbrella of “invention.” Part IV-B explains how aggregation doctrine, also like the new use doctrine, did not survive the split effected by the Patent Act of 1952 separating novelty and “non-obviousness” from consideration of subject matter eligibility.


A.  “Something More than an Aggregate”

1.  Cooperation and Single Purpose


[109]   In 1873, the Supreme Court in Hailes v. van Wormer held a patent invalid because the claimed stove was a “mere aggregate” of component parts that produced no new results and, therefore, lacked invention.[336] A “combination” could stand in contrast to “a mere aggregate of several results” in that “a new combination, if it produces new and useful results, is patentable, though all the constituents of the combination were well known and in common use before the combination was made.”[337] For the Court, the patent law mandated a “new and useful result” because “[m]erely bringing old devices into juxtaposition, and there allowing each to work out its own effect without the production of something novel, is not invention.”[338] Without “something more,” upholding the patent would be akin to removal of subject matter from the public domain:


No one by bringing together several old devices without producing a new and useful result[,] the joint product of the elements of the combination and something more than an aggregate of old results, can acquire a right to prevent others from using the same devices, either singly or in other combinations, or, even if a new and useful result is obtained, can prevent others from using some of the devices, omitting others, in combination.[339]


[110]   In Reckendorfer v. Faber, the Supreme Court in 1875 decided that the combination of “the lead and india-rubber, or other erasing substance, in the holder of a drawing-pencil,”[340] was “not invention within the patent law”[341] because it did not “embody any new device, or any combination of devices producing a new result.”[342] The Court stated that, because lead pencils and india-rubber erasers had been known in the art, the patentee was reliant on “the combination of the lead and india-rubber in the holder of a drawing pencil” as his “invention.”[343] However, according to the Court, the “law requires more than a change of form, or juxtaposition of parts, or of the external arrangement of things, or of the order in which they are used, to give patentability,”[344] and asserted that a “double use is not patentable, nor does its cheapness make it so.”[345] As stated by the Court: “The combination, to be patentable, must produce a different force or effect, or result in the combined forces or processes, from that given by the separate parts. There must be a new result produced by their union: if not so, it is only an aggregation of separate elements.”[346] The Court, in effect, drew a parallel between a “double use,” or “dual use,” and “aggregation” by mandating cooperation among component parts to obtain a new result:


A double effect is produced or a double duty performed by the combined result. In these and numerous like cases the parts co-operate in producing the final effect, sometimes simultaneously, sometimes successively. The result comes from the combined effect of the several parts, not simply from the separate action of each, and is, therefore, patentable.[347]

[111]   In other words, while a single component of a device might have multiple functions, it did not violate the maxim against “double use” of a known device if it was claimed in combination with other components that acted cooperatively to produce a new result. An “aggregation,” on the other hand, by definition included no such cooperation among component parts and, in effect, was prohibited as a “double use” of those component parts that produced no new result.[348]


[112]   Thereafter, the notion of “aggregation” was expressed in many different forms by the courts.[349] In 1931, for example, Judge Learned Hand, in Sachs v. Hartford Elec. Supply, dissected the cooperation among component parts thought to be necessary to constitute “invention.” For Judge Hand, cooperation among parts could consist in “no more than their necessary presence in a unit which shall answer a single purpose.”[350] Moreover, Hand considered “aggregation” as a term to be a “false lead”[351] in that “inventions depend upon whether more was required to fill the need than the routine ingenuity of the ordinary craftsman” and, therefore, any consequent “attempt to define it in general terms,” such as that of “aggregation,” is “illusory” and, accordingly, “it is best to abandon it.”[352] Again, “invention,” as a threshold for patent protection mandated “something more” than novelty, making the idea of “aggregation,” at least for Judge Hand, not useful.


2.  Invention as the “Vital Spark”


[113]   Despite Judge Hand’s disapproval of the term, “aggregation” continued as a legal doctrine delineating eligible subject matter, albeit with considerable confusion. In Skinner Bros. Belting Co. v. Oil Well Improvements, decided by the Circuit Court of Appeals for the 10th Circuit, Judge McDermott acknowledged, as had Judge Hand in Sachs, that the “distinction between combination and aggregation is one difficult to put in words that really define.”[353] Judge McDermott, unlike Judge Hand, however, relied upon the understanding that a “combination discloses a co-operation or a co-ordination of the elements which, working together as a unit, although mayhap not simultaneously, produces a new or better result.”[354] An “aggregation,” for Judge McDermott, was like a track team, where all the members of the team “work for a common general end, to amass points for the alma mater; but there is lacking the vital spark of co-operation or co-ordination.”[355] Invocation of a “vital spark” suggests association with some minimal requirement of inventiveness, which the court clearly did not exclude from patentable “combinations”:


Is this a patentable combination?


The patent in suit meets this test [of a patentable combination]. It is said that no inventive genius, but only mechanical ingenuity, was needed to think of this device. No formula has been prescribed which affords a solution of the vexed question, Has inventive genius been exercised? We know that the simplicity of the device does not belie inventive genius.[356]


[114]   Funk Bros., discussed supra,[357] was the last Supreme Court case addressing eligibility for patent protection prior to enactment of the Patent Act of 1952. Relying on LeRoy I, the Court reiterated that “patents cannot issue for the discovery of the phenomena of nature,” but, rather, “must come from the application of the law of nature to a new and useful end.”[358] The claimed “combination of species” at issue in Funk Bros. was an “aggregation of select strains of the several species into one product,”[359] which amounted to “hardly more than an advance in the packaging of the inoculants,” because “[e]ach species has the same effect it always had.”[360] The Court stated that the “aggregation of species fell short of invention within the meaning of the patent statutes.”[361]

B.  The Demise of Aggregation Doctrine


[115]   After enactment of the Patent Act of 1952, the lower courts continued to rely upon “aggregation,” but with increasing disfavor after some initial confusion around the split of 35 U.S.C. § 31 into new sections 101 and 102, and the newly-enacted statutory provision for non-obviousness under section 103.[362]


[116]   In In re Worrest, for example, the Court of Customs and Patent Appeals found that “strict adherence to the requirement of co-action between elements in order to have a patentable combination is unrealistic and illogical.”[363] Rather, where the court saw no invention “because no new or unexpected result was produced by the combination, …such a device should, in our opinion, properly be regarded as an unpatentable combination, and not as an aggregation.”[364] However, the following year, the same court affirmed, in In re Carter, a decision by the Board of Appeals rejecting claims as an unpatentable aggregation precisely because there was a lack of “exercise of the inventive faculty:”


Under these circumstances there is no novel combination but only an aggregation of old elements which constitutes no patentable invention…. The question here is not what Fischer did, but whether any person skilled in the art, with the references of record before him, could, without the exercise of inventive faculty, make the combination of elements here claimed…. We are convinced that he could.[365]


[117]   The suggestion here is that an “aggregation of old elements” could rise to the level of a “novel combination” if a person skilled in the art could not, in fact, make the combination of elements claimed without the “exercise of the inventive faculty.”


[118]   The patentability of combinations themselves was at issue in the 1963 case of In re Menough.[366] There, the Court of Customs and Patent Appeals affirmed a rejection of claims, but made a point of separating its affirmation from the Patent Office’s reasoning that “each of the refused claims sets forth a combination of old elements whose function as a combination is merely the sum of old functions of the individual elements, and that therefore the combination must be presumed to be obvious to one skilled in the art.”[367] The court stated that it could find no support for this proposition because “[m]echanical elements can do no more than contribute to the combination of mechanical functions of which they are inherently capable,” and, therefore, the “patentability of combinations has always depended on the unobviousness of the combination per se.”[368]


[119]   Shortly thereafter, Judge Rich, who wrote the opinion in Menough, concluded in In re Gustafson that use of the term “aggregation,” as well as distinctions between a “combination,” and an “unpatentable combination” for lack of “invention,” were made moot by imposition of the statutory requirement of non-obviousness under 35 U.S.C. § 103 of the Patent Act of 1952:


On January 1, 1953, all of this mental anguish ceased to be necessary. The test of the presence or absence of “invention,” and along with it the subsidiary question of whether a device or process was or was not an “aggregation,” or a “combination,” or an “unpatentable combination” for want of “invention,” was replaced by the statutory test of 35 U.S.C. [§] 103. [369]


The Patent Office did not contest the novelty of the invention, nor its utility,[370] and the solicitor’s brief noted that the Board did not reject the claims in view of references, but simply relied upon common knowledge.[371] According to the solicitor’s brief, the “essential relationship of cooperation being missing, the claims must fall as merely reciting an unpatentable aggregation,” and that “the advantages argued by the appellant *** result merely from the use of mechanical skill in juxtaposing separate old elements, or from an obvious combination of such old elements.”[372]


[120]   For Judge Rich, however, the explanation for rejecting the claims lay more properly within the realm of indefiniteness, under the second paragraph of 35 U.S.C. § 112. As stated by Judge Rich:


[I]t becomes reasonably clear to us that the real objection to the claims here, never clearly stated or used as a ground for rejection, is that the claims fail to define the invention disclosed by appellant with sufficient particularity and distinctiveness to comply with the second paragraph of section 112.[373]


Judge Rich also questioned the relevance of 35 U.S.C. § 101, given the availability of provisions for non-obviousness and particularity under sections 103 and 112, second paragraph, respectively.[374] Regardless, according to Judge Rich, the appellant should be provided with a statutory basis for any rejection made by the Patent Office:


Appellant was entitled to know whether his claims were rejected under section 101, or 103, or 112. Admittedly he was given a rejection which the solicitor says could be based on any or all of those sections but not told this until the solicitor filed his brief in this court.[375]


[121]   Finally, in In re Collier, Judge Rich dismissed allegations by an appellant that a rejection by the examiner of a claim as a “mere catalogue of elements was an aggregation rejection.”[376] Instead, Judge Rich agreed with the Board that since there was “no positive recitation of any structural cooperation among the elements listed,” the appropriate ground for rejection was indefiniteness under 35 U.S.C. § 112, second paragraph.[377] The appellant’s arguments that the claim’s statements of intended use should be considered “positive limitations” were also dismissed in that language describing “things which may be done [but] are not required to be done,”[378] was considered to be indefinite. Judge Rich also agreed with the Board that the subject matter of the claim at issue was obvious under 35 U.S.C. § 103 in view of a prior art reference.[379]


[122]   In sum, “aggregation,” as well as statements of intended use, also commonly associated with “dual use,” were addressed by the Board and by the Court of Customs and Patent Appeals as matters of indefiniteness and obviousness under the Patent Act of 1952. The last word on the topic appears in the Manual of Patent Examining Procedure (MPEP) at section 2173.05(k),[380] which simply relies on Gustafson and Collier for the dual propositions that “[a] claim should not be rejected on the ground of aggregation,” and that a “rejection for ‘aggregation’ is nonstatutory.”[381]


V.  Preemption of Laws of Nature, Natural Phenomena and Abstract Ideas


[123]   It has always been presumed that principles, and their discovery, are not subject to the exclusionary right of patent protection. However, there also has been general agreement that it is application of such principles that constitutes invention. Basing patent eligibility on “preemption” became popular with the Supreme Court’s 1972 decision in Gottschalk v. Benson, after the doctrines of “new use” and “aggregation” had largely been assimilated into the “conditions for patentability” of statutory novelty (35 U.S.C. § 102) and non-obviousness (35 U.S.C. § 103).[382] “Preemption,” unlike the new use and aggregation doctrines, was directly linked by courts to eligibility under 35 U.S.C. § 101 as an issue that is distinct from the “conditions for patentability,” and doing so has led to the current “two-part” test of Mayo v. Prometheus and Alice v. CLS Bank Int’l.[383] Part V-A summarizes the development of the dichotomy between “principles” and their application under patent law, as established by Boulton and Hornblower, and under the now-anachronistic doctrines of “new use” and “aggregation.” Part V-B explains how the nineteenth century Supreme Court cases of LeRoy v. Tatham (“LeRoy I” and “LeRoy II”) and O’Reilly v. Morse foreshadowed “preemption” as a doctrine by barring overly broad claims directed to applications of principle that were viewed as discouraging innovation. Part V-C describes the two-part test of eligibility and the requirement that “significantly more” than an ineligible concept be present in claimed subject matter.
Part V-D relates “preemption” doctrine back to the earlier doctrines of “new use” and “aggregation” as functions of “something more” than “mere principle.” Finally, Part V-E argues that current confusion over application of the “two-part” test can be eliminated by following the treatment of its predecessor doctrines and considering preemption not as an issue of patent eligibility, but rather, within the confines of the “conditions for patentability” of statutory novelty and non-obviousness.


A.  Setting the Stage


[124]   The development and fate of “new use” and “aggregation” doctrines followed that of the initial dichotomy laid out in Boulton and, later, in Hornblower over whether the word “manufacture” in the sixth section of the Statute of Monopolies extended to methods of use. The dichotomy, in turn, hinged on whether an application of principle was inherent in an article of manufacture. Under one interpretation, eligibility could extend to methods if manufactures employed by those methods were not considered to inherently embody all applications of principle to which they could be put. Under the other interpretation, where manufactures were considered to inherently embody all applications of principle to which they could be put, no new method of use of a known manufacture would be patentable, thereby barring patent protection to any method of use, per se, as a prohibited “dual,” “double,” or “new use” of the known “manufacture.” This latter view restricted patent protection to only those uses that entailed employment of a manufacture that was itself novel. Under both interpretations, a new application of principle was understood to be an “invention.” Similarly, “aggregation” was contingent upon whether a novel combination of known elements invoked a new application of principle not present in separate use of the components of the combination. Judge Learned Hand’s view that the term “aggregation” was a “false lead”[384] notwithstanding, failure to identify some cooperation or single purpose among component parts to obtain a new result was indicative of an absence of “invention,” thereby rendering the combination of those components ineligible for patent protection.[385]


[125]   Both “new use” and “aggregation” died away as legal doctrines after enactment of the Patent Act of 1952. “New use” was subsumed under the definition of “process” in 35 U.S.C. § 100. The patentability of “new uses” became an issue of novelty under 35 U.S.C. § 102, which had been newly carved from the prior 35 U.S.C. § 31, and of the new statutory requirement of non-obviousness of 35 U.S.C. § 103, which defined patentable “invention.” Similarly, rejections under the doctrine of “aggregation” were ultimately considered to be non-statutory and impermissible as a matter of law. “Preemption,” as we shall see, was rooted in the same dichotomy that engendered the “new use” and “aggregation” doctrines, and should meet the same fate.


B.  “Mere Principle” and its Application


[126]   It has been explicitly understood, at least since Boulton was decided in 1795, that “there can be no patent for a mere principle.”[386] Instead, and as has also been understood since Boulton, it is only application of a principle that is entitled to an exclusionary right.[387] Justification for barring patent protection of “mere principle” can be found in Justice Buller’s opinion in Boulton, where he stated:


There is one short observation arising on this part of the case, which seems to me to be unanswerable, and that is, that if the principle alone be the foundation of the patent, it cannot possibly stand, with that knowledge and discovery which the world were in possession of before.[388]


[127]   This reasoning is echoed in later decisions by the Supreme Court, such as LeRoy I in 1853, which linked application of principle to patent eligibility and justified that link against an alternative that would “discourage arts and manufactures:”


A patent is not good for an effect, or the result of a certain process, as that would prohibit all other persons from making the same thing by any means whatsoever. This, by creating monopolies, would discourage arts and manufactures, against the avowed policy of the patent laws.[389]


[128]   The patent at issue in LeRoy I was again addressed a few years later in LeRoy II,[390] but this time the Court held that the claimed combination of the machinery and its use was sufficient to support the result obtained and was “within the patent law” because it was based on the presence of a “new and operative” agency:


If it be admitted that the machinery, or part of it, was not new when used to produce the new product, still it was so combined and modified as to produce new results, within the patent law. One new and operative agency in the production of the desired result would give novelty to the entire combination.[391]


Although there was no discussion as to why the Court arrived at a holding different from that of LeRoy I, it is clear that articulation of the claimed combination, and its nexus to the result obtained, was decisive in the latter case:


It is rare that so clear and satisfactory an explanation is given to the machinery which performs the important functions above specified. We are satisfied that the patent is sustainable, and that the complainants are entitled to the relief claimed by them.[392]


Therefore, the Court in LeRoy II upheld the patent because it found both a novel combination of machinery and use, and a link between that combination and a “new and operative agency” that produced a new result.[393]


[129]   Likewise, but more famously, in O’Reilly v. Morse, the Supreme Court struck down Morse’s eighth claim to “every improvement where the motive power is the electric or galvanic current, and the result is the marking or printing intelligible characters, signs, or letters at a distance,”[394] because “he shuts the door against inventions of other persons …”[395] The Court explained:


It is impossible to misunderstand the extent of this claim. He claims the exclusive right to every improvement where the motive power is the electric or galvanic current, and the result is the marking or printing intelligible characters, signs, or letters at a distance.

If this claim be maintained, it matters not by what process or machinery the result is accomplished. . . . For he says he does not confine his claim to the machinery or parts of machinery, which he specifies; but claims for himself a monopoly in its use, however developed, for the purpose of printing at a distance.[396]


Because no novel combination of claim components could be associated with the benefit obtained, the Court concluded that “the claim is too broad, and not warranted by law.”[397] In effect, for the Court, the patentee “claims an exclusive right to use a manner and process which he has not described and indeed had not invented, and therefore could not describe when he obtained this patent.”[398] Here, then, eligibility for patent protection hinged on “invention.”


[130]   As discussed above, prior to the Patent Act of 1952, the modern notions of patent eligibility and novelty were embraced within a single statutory provision. O’Reilly, for example, was decided in 1854, when the relevant statute was section 6 of the Patent Act of 1836, which read, in part, as follows:


And be it further enacted, That any person or persons having discovered or invented any new and useful art, machine, manufacture, or composition of matter, or any new and useful improvement on any art, machine, manufacture, or composition of matter, not known or used by others before his or their discovery or invention thereof, and not, at the time of his application for a patent, in public use or on sale, . . . may make application in writing to the Commissioner of Patents, . . . and the Commissioner . . . may grant a patent therefor. [399]


Nevertheless, a clear distinction was maintained between the ineligibility of “mere principle” and the eligibility for patent protection of an application of that same principle. This distinction was often incorporated into ultimate decisions fundamentally based on novelty, or on “substantial” novelty, the predecessor of modern non-obviousness. For example, as stated by the Court in LeRoy I:


In all such cases, the processes used to extract, modify, and concentrate natural agencies, constitute the invention. The elements of the power exist; the invention is not in discovering them, but in applying them to useful objects. Whether the machinery used be novel, or consist of a new combination of parts known, the right of the inventor is secured against all who use the same mechanical power, or one that shall be substantially the same.[400]

C.  The Two-Part Eligibility Test


[131]   The Supreme Court in LeRoy foreshadowed the modern notion of “preemption” by stating that “[a] principle, in the abstract, is a fundamental truth; an original cause; a motive; these cannot be patented, as no one can claim in either of them an exclusive right.”[401] The Court explained that a “[P]atent is not good for an effect, or the result of a certain process, as that would prohibit all other persons from making the same thing by any means whatsoever.”[402] However, the notion of “preemption” only became known as such after the doctrines of “new use” and “aggregation” were abandoned. When viewed in the context of those earlier doctrines, preemption can be seen as the source of confusion in the current two-part eligibility test applied by the Supreme Court in Mayo v. Prometheus and Alice v. CLS Bank.[403]


[132]   The two-part test asks, quite simply, whether a claim to a process, machine, manufacture, or composition of matter is directed to a law of nature, a natural phenomenon, or an abstract idea and, if so, whether the claim recites additional elements that amount to “significantly more” than the judicial exception.[404] The Court in Alice stated:


In Mayo Collaborative Services v. Prometheus Laboratories, Inc., we set forth a framework for distinguishing patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications of those concepts. First, we determine whether the claims at issue are directed to one of those patent-ineligible concepts. If so, we then ask, “[w]hat else is there in the claims before us?” To answer that question, we consider the elements of each claim both individually and “as an ordered combination” to determine whether the additional elements “transform the nature of the claim” into a patent-eligible application. We have described step two of this analysis as a search for an “inventive concept” – i.e., an element or combination of elements that is “sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.”[405]


The fundamental issue, of course, is the meaning of “significantly more” in the second step of the two-part test. Moreover, analysis of U.S. Supreme Court decisions directed to patent eligibility suggests that the two-part test is not entirely new. The contribution of “something more” was applied as a threshold test, for example, by the Supreme Court in Hailes v. Van Wormer in 1873 to negate the charge of “aggregation,”[406] and again in American Fruit Growers in 1931 to establish eligibility as a “manufacture.”[407] In both instances, the issue was one of “invention” that originated in the question of whether or not a “manufacture” under the Statute of Monopolies inherently embodied all principles employed in any application to which that manufacture could be put. Eligibility, regardless, hung upon the presence of applied principle. Novel application of that principle, either as an inherent feature of a novel device or as a process, was often the “something more” that constituted a protectable “invention.”


D.  “Preemption” in the Absence of Something “Significantly More” than “Mere Principle”


[133]   The legal doctrine of “preemption” was born from the same considerations that stranded the doctrines of “new use” and “aggregation” in the wake of the Patent Act of 1952. Not having been explicitly limited to sections 102 and 103, “invention” became manifest under “preemption” as a threshold requirement under section 101 against claims to “mere principle.” The first step, however, was to identify the presence of principle in one of the statutory categories of “process, machine, manufacture, or composition of matter.” As in Boulton and Hornblower, this could be done for processes by linking them to the “manufacture” employed. Alternatively, the process could be linked to a product resulting from the process.


[134]   For example, the Supreme Court in Gottschalk v. Benson held that a computer program converting binary-coded decimal numerals to pure binary numerals was not a “process” under 35 U.S.C. § 101 of the Patent Act of 1952.[408] While stopping short of mandating that a “process patent must either be tied to a particular machine or apparatus or must operate to change articles or materials to a ‘different state or thing,’”[409] the Court characterized the claimed computer program as an “algorithm” for solving a mathematical problem.[410] The “practical effect,” in the absence of “substantial practical application except in connection with a digital computer,” would amount to the patenting of an “idea,” whereby “the patent would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself.”[411]


[135]   In Parker v. Flook, the Supreme Court took up the eligibility of claims directed to a method for computing an updated alarm limit and, as in Gottschalk, began with the maxim that a principle, in and of itself, is not patentable.[412] The Court in Parker, however, invoked a threshold requirement of “invention” that barred eligibility for protection when the only point of novelty was a newly discovered law of nature, natural phenomenon, or abstract idea.[413] “Conventional or obvious” application of “unpatentable principle” was considered mere “post-solution activity” that was inadequate under section 101:


The notion that post-solution activity, no matter how conventional or obvious in itself, can transform an unpatentable principle into a patentable process exalts form over substance. A competent draftsman could attach some form of post-solution activity to almost any mathematical formula; the Pythagorean theorem would not have been patentable, or partially patentable, because a patent application contained a final step indicating that the formula, when solved, could be usefully applied to existing surveying techniques. The concept of patentable subject matter under § 101 is not “like a nose of wax which can be turned and twisted in any direction…”[414]


The Court, further, explicitly linked patent eligibility with “inventive application of the principle:”


Respondent’s process is unpatentable under § 101, not because it contains a mathematical algorithm as one component, but because once that algorithm is assumed to be within the prior art, the application, considered as a whole, contains no patentable invention. Even though a phenomenon of nature or mathematical formula may be well known, an inventive application of the principle may be patented. Conversely, the discovery of such a phenomenon cannot support a patent unless there is some other inventive concept in its application.[415]


The Court laid out the distinction between an “inventive application of the principle” and ineligible “post-solution activity,” only to state that the “rule that the discovery of a law of nature cannot be patented rests, not on the notion that natural phenomena are not processes, but rather on the more fundamental understanding that they are not the kind of ‘discoveries’ that the statute was enacted to protect.”[416]


[136]   Two years later, the Supreme Court in Diamond v. Chakrabarty held that a genetically engineered bacterium capable of breaking down crude oil was eligible subject matter under 35 U.S.C. § 101.[417] Although a living organism, the bacterium was found to possess “markedly different characteristics from any found in nature and one having the potential for significant utility.”[418] As such, it was “patentable subject matter under § 101.”[419] While no explicit definition of “markedly different characteristics” was given, the fact that it had “potential for significant utility”[420] provided at least a clue that the bacterium of the claimed invention was different in kind from its naturally-occurring counterpart. That difference seemed to be more than “post-solution activity” consequent to discovery of a law of nature, natural phenomenon, or abstract idea. Instead, the difference was a novel combination of the discovery of the necessary genetic modification with a naturally-occurring bacterium to thereby obtain a beneficial result. The invention was a process for stably transferring and maintaining a cooperative relationship between plasmids capable of degrading components of crude oil and a bacterium of the genus Pseudomonas to obtain a benefit not exhibited by either the plasmids or the bacterium alone.[421] This process and cooperative relationship distinguished the invention from earlier man-made combinations, such as the application of preservatives to fruit in American Fruit Growers,[422] or the non-mutually inhibitive combinations of root-nodule bacteria in Funk Bros.[423] Although the Court did not provide justification for analyzing the claimed subject matter as a function of eligibility rather than non-obviousness, one clear distinction is that the assessment of “invention” to determine eligibility under 35 U.S.C. § 101 was relative to a law of nature, a natural phenomenon, or an abstract idea, rather than to subject matter defined as “prior art” under 35 U.S.C. § 102.


[137]   The relation of section 101 patent eligibility to section 102 novelty was directly addressed in Diamond v. Diehr, where the Court held that a claimed “process for molding raw, uncured synthetic rubber into cured precision products”[424] using an Arrhenius equation to limit the curing was eligible for patent protection under 35 U.S.C. § 101.[425] The Court noted that, according to the “Revision Notes,” the Patent Act of 1952 intentionally “split into two sections, section 101 relating to the subject matter for which patents may be obtained, and section 102 defining statutory novelty and stating other conditions for patentability.”[426]


[138]   Relying on Flook, the Court did not mandate a determination of novelty, either in the Arrhenius equation as the algorithm employed or in its application, to decide patent eligibility:


It is argued that the procedure of dissecting the claim into old and new elements is mandated by our decision in Flook which noted that a mathematical algorithm must be assumed to be within the “prior art.” It is from this language that the petitioner premises his argument that if everything other than the algorithm is determined to be old in the art, then the claim cannot recite statutory subject matter. The fallacy in this argument is that we did not hold in Flook that the mathematical algorithm could not be considered at all in making the § 101 determination. To accept the analysis proffered by the petitioner would, if carried to its extreme, make all inventions unpatentable because all inventions can be reduced to underlying principles of nature which, once known, make their implementation obvious.[427]


[139]   Rather, the fact that the Arrhenius equation was not, itself, novel was considered immaterial for the purpose of determining eligibility under § 101, as was the novelty of any other element or step in the process taken in isolation. Eligibility, instead, must be considered in view of the claims as a whole, independent of the novelty of any component of those claims.[428] As stated by the Court:


In determining the eligibility of respondents’ claimed process for patent protection under § 101, their claims must be considered as a whole. It is inappropriate to dissect the claims into old and new elements and then to ignore the presence of the old elements in the analysis. This is particularly true in a process claim because a new combination of steps in a process may be patentable even though all the constituents of the combination were well known and in common use before the combination was made. The “novelty” of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter.[429]


[140]   The Court did not invoke novelty or “invention” when it held the claimed process in Diehr to be eligible subject matter:


[W]hen a claim containing a mathematical formula implements or applies that formula in a structure or process which, when considered as a whole, is performing a function which the patent laws were designed to protect (e.g., transforming or reducing an article to a different state or thing), then the claim satisfies the requirements of § 101. Because we do not view respondents’ claims as an attempt to patent a mathematical formula, but rather to be drawn to an industrial process for molding of rubber products, we affirm the judgment of the Court of Customs and Patent Appeals.[430]


[141]   As an “industrial process for molding of rubber products,” the claimed invention, when considered as a whole, was an application of a mathematical formula that met the statutory requirements of patent eligibility.[431]


E.  Treatment of Patent Eligibility Since Diamond v. Diehr, and an Alternative


[142]   Eligibility under § 101 in Diamond v. Diehr paralleled eligibility of methods under the Statute of Monopolies in Boulton–both analyses relied on an application of principle. Specifically, the method in Boulton could only be considered eligible for patent protection under the Statute if, like a “manufacture,” it embodied a physical application of principle.[432] Similarly, the Court in Diamond v. Diehr upheld the eligibility of the claimed process, “[b]ecause we do not view respondents’ claims as an attempt to patent a mathematical formula, but rather to be drawn to an industrial process for the molding of rubber products.”[433] Likewise, “aggregation” barred patent protection to combinations of known physical applications of principles that did not cooperate to obtain a specific purpose or result. The measure of a new application of principle in both cases was the presence of “invention.” Both doctrines became obsolete under the Patent Act of 1952 because estimations of “invention” were subsumed under the new distinct statutory provisions of novelty and non-obviousness.


[143]   “Preemption” as a doctrine bars patent protection for subject matter that does not amount to “significantly more” than a judicial exception. This is similar to how “new uses” were barred for failing to embody new applications of principle apart from those inherent in the “manufactures” they employed, and “aggregations” were barred as known applications of principles that operated independently of each other without a “single purpose.” In both “new use” and “aggregation” doctrines, patentability mandated “something more” than the principles applied. To avoid prohibition as a “new use,” a principle had to be embodied that was not inherent in the device employed. Similarly, to escape “aggregation,” a plurality of principles had to be applied in a novel manner to obtain a single purpose. Collectively, satisfaction of these requirements was considered “invention,” or novelty in the application of principle beyond the “work of the skillful mechanic.” Separate statutory provisions for novelty and non-obviousness under the Patent Act of 1952 partitioned determinations of subject matter eligibility under section 101 from novelty under section 102 and “invention” under the provision for non-obviousness of section 103. As a result, the doctrines of “new use” and “aggregation” became obsolete as tests of patent eligibility.


[144]   Like the “new use” and “aggregation” doctrines, the threshold for avoiding “preemption” is the sufficiency of “invention” in an application of principle. One possibility is to treat “preemption” in the same manner as the earlier doctrines of “new use” and “aggregation” by considering “invention” exclusively under the statutory provisions of novelty and non-obviousness. Eligibility would then be straightforward, excluding only subject matter that is not “a process, machine, manufacture, composition of matter, or any new and useful improvement thereof,” as explicitly called for under the terms of the statute. Patentability, of course, would still turn on satisfaction of the provisions for novelty and non-obviousness under sections 102 and 103, respectively.


[145]   For example, in Bilski v. Kappos, the Supreme Court refused to limit eligibility for patent protection to a process that is “tied to a particular machine or apparatus,” or “transforms a particular article into a different state or thing.”[434] The Supreme Court stated that this “machine-or-transformation test” was not the “sole test for deciding whether an invention is a patent-eligible ‘process,’”[435] and specifically held open, as a matter of statutory interpretation, the eligibility of at least “some business method patents.”[436] However, as “abstract ideas,” such methods–including the method of hedging commodity prices of Bilski’s patent application, when “reduced to a mathematical formula”–were ineligible because patenting them would “preempt use of this approach in all fields, and would effectively grant a monopoly over an abstract idea.”[437] The Court further stated that limitations of the invention to specific applications or fields of use were “token post-solution components [that] did not make the concept patentable” because they were “well known random analysis techniques to help establish some of the inputs into the equation.”[438] The Court held the claimed method of hedging commodity prices to be ineligible under 35 U.S.C. § 101. However, considering “invention” exclusively under the statutory provisions of novelty and non-obviousness would likewise enable consideration of the claims as a whole, despite recitation of an “abstract idea,” while also safeguarding against “preemption.” Meeting the statutory definition of “‘process,’ does not mean that the application claiming that method should be granted.”[439] As the Court explained:


In order to receive patent protection, any claimed invention must be novel, §102, non-obvious, §103, and fully and particularly described, §112. These limitations serve a critical role in adjusting the tension, ever present in patent law, between stimulating innovation by protecting inventors and impeding progress by granting patents when not justified by the statutory design.[440]


Strikingly, the “postsolution activity” recited by the Court was viewed in a manner similar to a “new use” of the abstract idea of “hedging”[441] and, as such, the Court could have decided against patentability under the statutory “conditions for patentability” of novelty and non-obviousness, following well-worn precedent.[442]


[146]   As another example, the Court in Mayo v. Prometheus held ineligible claims directed to a method of determining how, if at all, drug dosage should be adjusted in view of measured metabolite levels of an administered drug.[443] Relying on the much earlier English case of Neilson v. Harford, the Court distinguished the presence of “unconventional steps … that confined the claims to a particular, useful application of principle”[444] in Neilson from “simply appending conventional steps,” as in Mayo.[445] The latter was inadequate to convert an ineligible abstract idea into a patent-eligible “process” under 35 U.S.C. § 101. The Court, however, declined to specify any threshold between “conventional” and “unconventional” steps, and even seemed to shy away from convention as a test for eligibility, suggesting that eligibility might, in fact, be a function of the breadth of protection sought:


We need not, and do not, now decide whether were the steps at issue here less conventional, these [discovered natural] features [of the metabolites] of the claims would prove sufficient to invalidate them. For here, as we have said, the steps add nothing of significance to the natural laws themselves. Unlike, say, a typical patent on a new drug or a new way of using an existing drug, the patent claims do not confine their reach to particular applications of those laws. The presence here of the basic underlying concern that these patents tie up too much future use of laws of nature simply reinforces our conclusion that the processes described in the patents are not patent eligible, while eliminating any temptation to depart from case law precedent.[446]


[147]   The Court’s proposal to address the “basic underlying concern” over “too much future use of laws of nature,” or “preemption,” by excluding from patent eligibility “conventional” or “mere post-solution activity” is incongruous. If preemption were the Court’s primary concern, then any narrowing limitation should suffice. Ultimately, the Court was seeking to limit applications of laws of nature, natural phenomena, and abstract ideas to applications that added “significance to the natural laws themselves.”[447] However, the Court’s failure to provide some standard for “significance” beyond the presence of “invention” has, since Mayo, led to confusion.


[148]   If, however, that “additional” significance were, for example, a novel step that itself embodied some additional law of nature, natural phenomenon, or abstract idea, acting cooperatively or collectively with the recited or known natural law to obtain a beneficial result, then the subject matter of the claims taken as a whole could be seen as “inventive” and not as some prohibited “new use,” or “aggregative” or “preemptive” combination. Moreover, whether such a relationship exists would be better determined as a function of novelty and non-obviousness under 35 U.S.C. §§ 102 and 103, respectively, than by some ill-defined notion of “conventionality,” “significance,” or “preemption.”


[149]   The Court attempted to address this issue by asserting that reliance on sections 102 and 103 to perform a “screening function” against the “‘law of nature’ exception” would make “§ 101 patentability a dead letter.”[448] The Court admitted that there may be overlap, but found that shifting the inquiry entirely to statutory provisions other than section 101 “risks creating significantly greater legal uncertainty.”[449] Moreover, for the Court, “§§ 102 and 103 say nothing about treating laws of nature as if they were part of the prior art when applying those sections.”[450] Interestingly, the Court failed to note that there is no mention of “laws of nature” in section 101 either. Nor did the Court explain how section 101 would become a “dead letter” simply by shifting reliance on that missing language to the statutory provisions for novelty and non-obviousness, or how confusion is to be avoided by permitting, as the Court did, “overlap” of the criteria for novelty and non-obviousness with consideration of eligibility under section 101.


[150]   Similarly, but as applied to product claims instead of methods, the Supreme Court in Ass’n for Molecular Pathology v. Myriad Genetics, Inc., held that “a naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated ….”[451] Here, however, the Court explicitly invoked “invention” as a distinguishing characteristic of eligible subject matter. Specifically, Myriad’s isolated nucleic acid was not an “act of invention” because the technique for isolation was conventional, despite the fact that, in isolation, it was “important and useful:”


It is undisputed that Myriad did not create or alter any of the genetic information encoded in the BRCA1 and BRCA2 genes…. Instead, Myriad’s principal contribution was uncovering the precise location and genetic sequence of the BRCA1 and BRCA2 genes within chromosomes 17 and 13. The question is whether this renders the genes patentable.


. . . In this case … Myriad did not create anything. To be sure, it found an important and useful gene, but separating the gene from its surrounding genetic material is not an act of invention.[452]


Moreover, relying on Funk Bros., the Court stated that eligibility does not hinge on the quality of the discovery: “Groundbreaking, innovative, or even brilliant discovery does not by itself satisfy the § 101 inquiry.”[453]


[151]   By contrast, cDNA, which comprises only those DNA segments that actually encode a protein, and which does not generally occur in nature, was eligible for patent protection under the Court’s analysis, “except insofar as very short series of DNA may have no intervening introns to remove when creating cDNA.”[454] In deciding that cDNAs were not “products of nature” and were therefore eligible for patent protection under section 101, the Court made no mention of “invention” as a requirement, suggesting that, where claimed subject matter is not naturally-occurring, there is no need to consider such a requirement. Therefore, according to the Court, whereas isolated DNA having a sequence that occurs in nature is not eligible for patent protection, regardless of the “groundbreaking, innovative, or even brilliant discovery” that enabled identification of the sequence to be isolated, cDNA, having a sequence that does not occur in nature, is “patent eligible under § 101” merely because it is “not a ‘product of nature.’”[455] Although Chakrabarty was discussed, the Court either did not include any requirement that cDNA have “markedly different characteristics” from its naturally occurring counterpart, or it presumed that the lack of introns, which are not expressed and have no known function, was “markedly different.”[456] Having stated that cDNAs that are not “indistinguishable from natural DNA” do qualify as patent-eligible subject matter, the Court nevertheless expressed “no opinion” whether other “statutory requirements of patentability” were met.[457]


[152]   Under the Court’s reasoning in Ass’n for Molecular Pathology v. Myriad Genetics, Inc., therefore, “invention” was inapplicable to the “[g]roundbreaking, innovative, or even brilliant discovery” of naturally-occurring DNA that, in isolation, was admittedly important and useful. Nor was “invention” applicable to the eligibility of cDNA, where naturally-occurring non-coding sequences had been removed from native DNA. Moreover, the Court admitted that isolated DNA and cDNA are novel and have distinct utilities that are not available in their naturally-occurring counterparts.[458] As such, they embody principles that are not inherent in the sources from which they are derived. Contrary to the Court’s holding, as compositions of matter, both isolated DNA and cDNA should be considered eligible for patent protection under 35 U.S.C. § 101. Patentability of isolated DNA and cDNA, as claimed and taken as a whole, could then be decided on the basis of statutory novelty and non-obviousness, and non-obviousness could be premised on the “invention” manifested in the utility, or benefit, made possible by their discovery, regardless of the conventionality of any particular claimed elements taken separately.


[153]   In Alice Corp. v. CLS Bank Int’l, patent claims directed to a “method of exchanging financial obligations between two parties using a third-party intermediary to mitigate settlement risk”[459] were invalidated as being “drawn to a patent-ineligible abstract idea … under §101.”[460] The Court applied the two-part test taken from the “framework”[461] set forth in Mayo v. Prometheus and found “no meaningful distinction between the concept of risk hedging in Bilski and the concept of intermediate settlement at issue here.”[462] According to that two-part test, eligibility under 35 U.S.C. § 101 depended upon first determining whether “the claims at issue are directed to one of those patent-ineligible concepts” of “laws of nature, natural phenomena, and abstract ideas.”[463] If the claims were directed to one of those “patent-ineligible concepts,” then, according to the second part of the test, the claims were examined to see “‘[w]hat else is there in the claims before us?’” as recited in Mayo.[464] For the Court in Alice, that “what else” must be sufficient to “transform the nature of the claim into a patent-eligible application.”[465] The test for such transformation, in turn, was the presence of an “inventive concept,” which the Court defined as “an element or combination of elements that is ‘sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.’”[466] Nowhere, however, is there an indication of what is meant by an “inventive concept” or “significantly more,” except as examples taken from previous decisions of the Supreme Court. For instance, with reference to Diehr, one of the few cases found to have met the test, the Court stated that “the claims … were patent eligible because they improved an existing technological process, not because they were implemented on a computer,”[467] suggesting that the presence of an “inventive concept” was a matter of degree, rather than a difference in kind, as previously asserted in Parker v. Flook.[468] Summarizing the second part of the two-part test drawn from Mayo, the Court concluded, with respect to the computer-implemented method of hedging risk claimed by Alice: “In light of the foregoing . . . the relevant question is whether the claims here do more than simply instruct the practitioner to implement the abstract idea of intermediated settlement on a generic computer. They do not.”[469]


[154]   An “improvement,” if taken to mean a change resulting in “benefit,” has a direct parallel to new uses of old devices that became patentable as a consequence of some improvement of the machine or some new application of principle in the use of a known machine. It also has a parallel in the distinction between “combinations” and aggregations, the latter being construed as combinations of principles that operated independently of each other but, as stated in Sachs, answered no “single purpose.”[470] Therefore, the “inventive concept” referred to by the Court in Alice could mean a novel application of principle to obtain a benefit, or a combination of principles to obtain a single purpose or benefit not achieved by application of each principle alone. If so, then, as in Diehr, the method in Alice should be eligible for patent protection as a “process,” and determination of “significantly more” should be a function of the “conditions for patentability” under the statute, namely novelty under section 102 and non-obviousness under section 103, just as the doctrines of “new use” and “aggregation” were so treated shortly after enactment of the Patent Act of 1952. “Preemption” would then dissolve as an independent concept because, even if some minimal recitation of structure were sufficient to “transform” the abstract idea into eligible subject matter, the statutory conditions for patentability of novelty and non-obviousness would be competent to determine patentability.[471]


[155]   The Court of Appeals for the Federal Circuit in Ariosa Diagnostics, Inc. v. Sequenom, Inc. held invalid a claimed method for detecting paternally-inherited nucleic acids in maternal plasma.[472] Applying the two-part test of Mayo, the court first determined that the claimed method was based on the inventors’ discovery of the presence of cell-free fetal DNA (cffDNA) in maternal plasma.[473] In conducting the second step of the analysis, the court noted that amplification of cffDNA to detectable levels required no more than the application of routine technology, such as the polymerase chain reaction (PCR):


The method at issue here amounts to a general instruction to doctors to apply routine, conventional techniques when seeking to detect cffDNA. Because the method steps were well-understood, conventional and routine, the method of detecting paternally inherited cffDNA is not new and useful. The only subject matter new and useful as of the date of the application was the discovery of the presence of cffDNA in maternal plasma or serum.[474]


The court concluded that “the practice of the method claims does not result in an inventive concept that transforms the natural phenomenon of cffDNA into a patentable invention.”[475] Moreover, the question of whether Sequenom’s method of detecting paternally-inherited genetic material constituted legal “preemption” was “made moot” by the holding that the “patent’s claims are deemed only to disclose patent ineligible subject matter under the Mayo framework”:


In this case, Sequenom’s attempt to limit the breadth of the claims by showing alternative uses of cffDNA outside of the scope of the claims does not change the conclusion that the claims are directed to patent ineligible subject matter. Where a patent’s claims are deemed only to disclose patent ineligible subject matter under the Mayo framework, as they are in this case, preemption concerns are fully addressed and made moot.[476]


By making the conclusion of ineligibility dispositive, the court, in effect, rendered futile any attempt to overcome that conclusion by showing a lack of preemption.


[156]   Judge Linn concurred only because he was bound by “the sweeping language of the test set out” in Mayo, viewing the second part of the two-part test as overly broad.[477] Seemingly channeling Wachsner, Judge Linn complained that the consequence of the threshold requirement of “invention” was “to refuse a patent to somebody,”[478] in this case denying Sequenom the “patent protection it deserves and should have been entitled to retain.”[479] Of particular significance, Judge Linn distinguished Mayo by stating that Sequenom’s claimed method constituted a “new use of the previously discarded maternal plasma” that obtained an “advantageous result … deserving of patent protection.”[480] In effect, Judge Linn argued that the benefit obtained consequent to the new discovery was indicative of a new application of naturally-occurring principle that was entitled to an exclusionary right.


[157]   As will be recalled, new application of principle was the primary argument in favor of eligibility for patent protection of James Watt’s steam engine under the Statute of Monopolies.[481] While “[g]roundbreaking, innovative, or even brilliant discovery does not by itself satisfy the § 101 inquiry,” as recited by the majority in Myriad,[482] it is also well understood that applications of such discoveries may, in fact, be not only eligible for patent protection, but deserving of it. For example, unlike the claimed subject matter of Mayo, where the point of novelty was only the observation of whether subsequent dosages should be elevated or reduced, Sequenom’s claimed subject matter included the novel step of amplifying paternally-inherited DNA derived from maternal blood serum.[483] As such, it was not simply a discovery but, rather, a “new use,” or novel cooperative relationship among natural phenomena, namely (1) amplification by PCR of (2) cffDNA derived from maternal plasma, to obtain a beneficial result. These two natural phenomena, unlike an “aggregation,” did not operate independently of each other, but instead operated in conjunction to provide a significant advance which, as Judge Linn noted, should be eligible for protection “[b]ut for the sweeping language” of Mayo.[484] As under the obsolete doctrines of “new use” and “aggregation,” the judicial exceptions to patent eligibility of laws of nature, natural phenomena, and abstract ideas can and should be considered together with other claim elements, but as conditions for patentability, not as issues of eligibility. Employing any of those exceptions should not disqualify “processes, machines, manufactures, compositions of matter, or new and useful improvements thereof” from eligibility for patent protection because they will still be, and will always be, “subject to the conditions and requirements of this title,” as specified by 35 U.S.C. § 101.


[158]   “Preemption” should follow the fate of the doctrines of “new use” and “aggregation.” There is no reason to separately analyze claimed subject matter for eligibility simply because the basis for patentability is alleged to be a discovery of a law of nature, natural phenomenon, or abstract idea, or because that law of nature, natural phenomenon, or abstract idea is explicitly recited in the claim. Doing so unhinges decisions concerning patent protection from the literal language of the statute and causes endless confusion.


[159]  The Intellectual Property Owners Association (IPO), the American Bar Association (ABA), and the American Intellectual Property Law Association (AIPLA) have all recently proposed revisions to 35 U.S.C. § 101.[485] The difficulty, however, with almost any proposed revision of a statute is how it will be interpreted by courts. In the amendment proposed by the IPO, for example, the “sole exception to subject matter eligibility” hinges on what is “understood by a person having ordinary skill in the art.” A person having “ordinary skill in the art” is a threshold also found in the non-obviousness standard of 35 U.S.C. § 103, which itself has been contentious since it was introduced under the Patent Act of 1952,[486] and poses the same problem of “overlap.” The ABA proposal has the problem of codifying the same language from recent Supreme Court decisions that is causing the current controversy, such as “preempt,” “practical application,” “law of nature,” “natural phenomenon,” and “abstract idea.” Further, while “inventive concept” is specifically excluded as a consideration under this proposal, its recitation alone makes it a potential subject of litigation. In the AIPLA proposal, questions may arise as to how courts will interpret the phrase “claimed invention as a whole,” or even the phrase “can be performed solely in the human mind” in part (b) of the proposal.


[160]   At this time there is no generally recognized meaning of “significantly more” or “invention,” the lack of which constitutes the threshold for “preemption” under the current two-part test articulated by the Court in Alice, nor does the language recited above in the proposed revisions to section 101 clarify patent eligibility. The policy concern of “preemption,” like that of “new use” and “aggregation,” should be easily absorbed by other portions of the statute without digressing into or codifying inherently vague terms that cannot be found in the existing provision for patent eligibility under 35 U.S.C. § 101.


[161]   A premise of this paper is that the language of current 35 U.S.C. § 101 does not pose a problem; it is straightforward on its face. The difficulty lies in confusion resulting from failure to recognize the evolution of what have now become three statutorily distinct doctrines, namely eligibility (§ 101), novelty (§ 102) and non-obviousness (§ 103). An understanding of the relationships and common roots among these provisions is critical to their separate application, and it has so far proven folly to interpose inherently vague language as a substitute.



VI.  Conclusion


[162]   Courts, lawyers, and commentators have been widely panned as historians of patent law.[487] However, a better understanding of legal doctrines, such as those dominating the development of patent law in this country, should be possible by tracing them through court opinions and commentary over time. Preemption, as embodied in the current so-called two-part eligibility test, is one example that can be addressed by looking for doctrinal parallels and their resolution, if any.


[163]   The issue of patent eligibility, as it is currently understood, began with the analyses of Boulton and Watt v. Bull and Hornblower v. Boulton, which ultimately held methods to be within the ambit of “manufactures” under the Statute of Monopolies. Interpretation of the Statute by the justices in both cases set the stage for a debate, now running more than two hundred years, about the limits of what kinds of subject matter are available for protection.[488] The debate is far from over, and the level of attention paid to this topic over the last decades, despite several attempts by the Supreme Court to make the issue clear, should give us pause to consider whether we are on the right track.


[164]   The link between “manufactures” and “methods” that proved a basis for invention under the Statute of Monopolies was new application of principle. To the extent that principles applied by methods were inherent in the manufactures they employed, such methods were not patentable. This reasoning underlay the prohibition against “new” or “double” uses that continued until the Patent Act of 1952 rendered the “new use” doctrine unnecessary by splitting consideration of eligibility from that of invention under “non-obviousness.” The doctrine of “aggregation,” which prohibited patent protection for combinations of applied principles that did not act cooperatively or toward a “single purpose,” also became superfluous once the new act partitioned “invention” from eligibility. Under both the “new use” and “aggregation” doctrines, “invention” need not be addressed when determining eligibility of subject matter because “invention” was defined under “non-obviousness,” among the other statutory “conditions and requirements” to which a “new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof,” was subject.


[165]   “Preemption” recognizes the judicial exceptions to patent eligibility of laws of nature, natural phenomena, and abstract ideas and, like “new use” and “aggregation,” prohibits patent protection where there is a lack of “invention.” Unlike “new use” and “aggregation,” however, “preemption” remains confined to estimations of eligibility, despite admitted “overlap” of criteria with novelty and non-obviousness. So long as this “overlap” remains, the scope of patent eligibility will remain confused. The Supreme Court said that the alternative, “studiously ignoring all laws of nature” when evaluating patents under §§ 102 and 103, would “make all inventions unpatentable,” and that “§§ 102 and 103 say nothing about treating laws of nature as if they were . . . prior art.”[489] It should be noted that § 101 also says no such thing. Far from “risking greater uncertainty,”[490] as asserted by the Supreme Court, limiting “preemption” to consideration under the “conditions for patentability” of novelty and non-obviousness, and the definition of “invention” provided therein, would cause it to follow well-worn precedent as an unnecessary and defunct doctrine.






* Author Bio: Principal at Hamilton, Brook, Smith & Reynolds, P.C., in Concord, Mass., and Adjunct Professor at Suffolk University Law School. The author can be reached by telephone at (978) 341-0036. The author is solely responsible for the views of this article, which do not necessarily represent those of his Firm, or any client or organization.


[1] See generally, Patent Act of 1952, 35 U.S.C.S. § 101 (2013) (stating “whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title”); See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2355 (2014).


[2] See 80 Fed. Reg. 45429 (July 30, 2015); see also Memoranda by Robert W. Bahr, Deputy Comm’r for Patent Examination Policy, at 1-2, U.S. Patent & Trademark Office, to Patent Examining Corps (May 4, 2016).


[3] See Alice Corp. Pty. Ltd., 134 S. Ct. 2347, 2356 (2014).


[4] See Patent Act of 1952, 35 U.S.C.S. § 101 (2013).


[5] Bahr, supra note 2, at 1 (emphasis added).


[6] E.g., Brad Sherman & Lionel Bently, The Making of Modern Intellectual Property Law: The British Experience, 1760-1911 209 (Cambridge University Press, 1999) (“If we resist the temptation to rewrite history in our own image, it becomes clear that the Statute of Monopolies played, at best, a minimal role in pre-modern patent law.”).


[7] Statute of Monopolies, 1623, 21 Jac. 1, c. 3, § 6 (Eng.) (emphasis added).


[8] See Boulton v. Bull (1795) 126 Eng. Rep. 651, 653-55; 2 H. BL. 463.


[9] See Hornblower v. Boulton (1799) 101 Eng. Rep. 1285, 1288; 8 T. R. 95.


[10] See, e.g., Markman v. Westview Instruments, Inc., 517 U.S. 370, 373 (1996) (“Claims practice did not achieve statutory recognition until the passage of the Act of July 1836 and inclusion of a claim did not become a statutory requirement until 1870, Act of July 8….” (citations omitted)).


[11] See Ansonia Brass and Copper Co. v. Electrical Supply Co., 144 U.S. 11, 18 (1892) (“[T]he application of an old process to a new and analogous purpose does not involve invention, …”).


[12] See Patent Act of 1952, 35 U.S.C.S. § 101 (2013); S. Rep. No. 82-1979, at 2398-99 (1952). (As stated in the legislative history of the Patent Act of 1952: “The word ‘process’ has been used to avoid the necessity of explanation that the word ‘art’ as used in this place means ‘process or method,’ and that it does not mean the same thing as the word ‘art’ in other places. . . .The definition of ‘process’ has been added in section 100 to make it clear that ‘process or method’ is meant, and also to clarify the present law as to the patentability of certain types of processes or methods as to which some insubstantial doubts have been expressed.” (emphasis added)).


[13] See, e.g., In re Ducci, 225 F.2d 683, 688 (C.C.P.A. 1955) (“a newly discovered use for a known substance, machine or process is still only patentable if it is not merely analogous or cognate to the uses heretofore made”) (quoting Stefan A. Riesenfeld, The New United States Patent Act in Light of Comparative Law, 34 J. Pat. Off. Soc’y 406, 416 (1954)).


[14] See, e.g., Ex parte Bartelson, Breneman, and Mac Adam, 151 U.S.P.Q. (BNA) 59 (B.P.A.I. Mar. 29, 1966) (stating that “[a] new use of a known machine may be patentable if defined as a process, where, as here, the prior art does not make either the process or the useful results thereof, obvious.”).


[15] Hailes v. Van Wormer, 87 U.S. 353, 368 (1873) (emphasis added).


[16] Id.


[17] See Patent Act of 1952, ch. 950, 66 Stat. 798 (1952). 35 U.S.C. § 103 was a new statutory provision requiring non-obviousness under the Patent Act of 1952 that stated: “A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which such subject matter pertains.” See also In re Gustafson, 331 F.2d 905, 909 (C.C.P.A. 1964) (“On January 1, 1953, all of this mental anguish ceased to be necessary. The test of the presence or absence of ‘invention,’ and along with it the subsidiary question of whether a device or process was or was not an ‘aggregation,’ or a ‘combination,’ or an ‘unpatentable combination’ for want of ‘invention,’ was replaced by the statutory test of 35 U.S.C. § 103.”).


[18] 35 U.S.C. § 112 (“The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention”); Patent Act of 1952, ch. 950, 66 Stat. 798 (1952).


[19] In re Collier, 397 F.2d 1003, 1005 (C.C.P.A. 1968).


[20] See Gottschalk v. Benson, 409 U.S. 63, 71-72 (1972) (“The mathematical formula involved here has no substantial practical application except in connection with a digital computer, which means that if the judgment below is affirmed, the patent would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself.”); see also Jeffrey A. Lefstin, Inventive Application: A History, 67 Fla. L. Rev. 565, 571 (2016) (“Yet according to Justice Douglas, Benson ‘in a nutshell’ was about preemption. To grant Benson exclusive use of his claimed process would effectively preempt all uses of the underlying algorithm….”).


[21] See Gottschalk, 409 U.S. at 67 (quoting Rubber Tip Pencil Co. v. Howard, 87 U.S. 498, 507 (1874)).


[22] Id. at 64.


[23] See id. at 67 (citing Le Roy v. Tatham, 55 U.S. 156 (1853)).


[24] Id. at 67 (“‘A principle, in the abstract, is a fundamental truth, an original cause, a motive; these cannot be patented as no one can claim in either of them an exclusive right.’”) (quoting Le Roy v. Tatham, 55 U.S. 156, 175 (1853)).


[25] See id. (“Phenomena of nature, though just discovered, mental processes, and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work.”).


[26] Gottschalk, 409 U.S. at 67 (quoting Funk Bros. Seed Co. v. Kalo Co., 333 U.S. 127, 130 (1948)).


[27] See Alice Corp. v. CLS Bank Int’l., 134 S. Ct. 2347, 2354-55 (2014) (“In Mayo Collaborative Services v. Prometheus Laboratories, Inc., 566 U.S. 66, 132 S.Ct. 1289, 189 L.Ed. 2d 321 (2012), we set forth a framework for distinguishing patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications of those concepts.”).


[28] See e.g., Parker v. Flook, 437 U.S. 584, 600 (1978) (Stewart, J., dissenting) (stating in Justice Stewart’s dissent: “The Court today says it does not turn its back on well-settled precedents…but it strikes what seems to me an equally damaging blow at basics principles of patent law by importing into its inquiry under 35 U.S.C. § 101 the criteria of novelty and inventiveness. Section 101 is concerned only with subject matter patentability. Whether a patent will actually issue depends upon the criteria of §§ 102 and 103, which include novelty and inventiveness, among many others.”); see also Diamond v. Diehr, 450 U.S. 175, 211 (1981) (Stevens, J., dissenting).


[29] Mayo Collaborative Services v. Prometheus Labs, Inc., 566 U.S. 66, 90 (2012).


[30] See Boulton v. Bull (1795) 126 Eng. Rep. 651, 651 n.a1; 2 H. BL. 463; see Hornblower v. Boulton (1799) 101 Eng. Rep. 1285, 1287; 8 T.R. 95.


[31] See Boulton, 126 Eng. Rep. at 667.


[32] See id. at 654.


[33] See William Rosen, The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention 104-06 (Ellah Allfrey et al. eds., 2010).


[34] Jenny Uglow, The Lunar Men: Five Friends Whose Curiosity Changed the World 243 (Julian Loose et al., eds., 2002) (quoting William Small to James Watt, February 1769 Matthew Boulton Papers, Birmingham City Archives 340/4.)


[35] See Boulton, 126 Eng. Rep. at 651.


[36] See id.


[37] Id. at 665.


[38] Id. at 667.


[39] Id. at 656.


[40] Boulton, 126 Eng. Rep. at 656.


[41] Id.


[42] Id. at 663.


[43] Id. (emphasis added).


[44] Id. (emphasis added).


[45] See Boulton, 126 Eng. Rep. at 666.


[46] Id. at 663 (Justice Buller stating that: “In most instances of the different patents mentioned by my Brother Adair, the patents were for the manufacture, and the specification rightly stated the method by which the manufacture was made: but none of them go to the length of proving, that a method of doing a thing without the thing being done or actually reduced into practice, is a good foundation for a patent. When the thing is done or produced, then it becomes the manufacture which is the proper subject of a patent.”).


[47] See id. at 664 (“Since that time, it has been the generally received opinion in Westminster Hall, that a patent for an addition is good…Where a patent is taken for an improvement only, the public have a right to purchase the improvement by itself, without being incumbered[sic] with other things.”).


[48] See id. at 662 (Justice Buller stated: “Upon this state of the case, I cannot say that there is anything substantially new in the manufacture; and indeed it was expressly admitted on the argument, that there were no new particulars in the mechanism: that it was not a machine or instrument which the Plaintiff had invented: that the mechanism was not pretended to be invented in any of its parts: that this engine does consist of all the same parts as the old engine: and that the particular mechanism is not necessary to be considered.”).


[49] See id.


[50] See Boulton, 126 Eng. Rep. at 665 (“But here, the Plaintiffs claim the right to whole machine. To that extent their right cannot be sustained, and therefore I am of opinion that there ought to be judgement for the Defendant.”).


[51] See id. at 667 (“Probably I do not over-rate it, when I state that two-thirds, I believe I might say three-fourths, of all patents granted since the statute [of monopolies] passed, are for methods of operating and of manufacturing, producing no new substances and employing no new machinery.”).


[52] Id.


[53] See id.


[54] Id. at 662.


[55] Boulton, 126 Eng. Rep. at 667.


[56] Id. at 666.


[57] See id.


[58] Id.


[59] Id. at 667.


[60] Boulton, 126 Eng. Rep. at 667.


[61] Id. at 667.


[62] Id.


[63] Uglow, supra note 34, at 243.


[64] Id.


[65] Boulton, 126 Eng. Rep. at 667.


[66] Id. at 668.


[67] Id.


[68] Id.


[69] See id. at 669.


[70] Boulton, 126 Eng. Rep. at 668.


[71] Id. at 669-70 (emphasis added).


[72] Id. at 655.


[73] See id.


[74] See id. at 655-56.


[75] See Hornblower v. Boulton (1799) 101 Eng. Rep. 1285, 1285; 8 T.R. 95.


[76] See id. at 1285.


[77] See id.


[78] Id. at 1289.


[79] Id.


[80] Hornblower, 101 Eng. Rep. at 1290.


[81] Id.


[82] Boulton v. Bull (1795) 126 Eng. Rep. 651, 664; 2 H. BL. 463.


[83] Hornblower, 101 Eng. Rep. at 1290.


[84] Id.


[85] Id.


[86] Id. at 1291 (emphasis added).


[87] Id.


[88] Hornblower, 101 Eng. Rep. at 1291-92.


[89] Id. at 1292.


[90] Id.


[91] Id.


[92] Id.


[93] Hornblower, 101 Eng. Rep. at 1292 (emphasis added).


[94] Id.

[95] Id.


[96] Id.


[97] Id.


[98] Hornblower, 101 Eng. Rep. at 1292 (emphasis added).


[99] Id. (“Therefore it seems to me that he has given directions for the purpose: whether those directions were or were not sufficient, is not now a question for our decision, it was a question for the determination of the jury, and they have decided it.”).


[100] Boulton v. Bull, 126 Eng. Rep. 651, 663, 665; 2 H. BL. 463.


[101] Id. at 662.


[102] Id. at 667.


[103] Id. at 667–68.


[104] Id. at 663.


[105] Boulton, 126 Eng. Rep. at 667.


[106] Hornblower v. Boulton, (1799) 101 Eng. Rep. 1285, 1290; 8 T.R. 95.


[107] Id. at 1291.


[108] Boulton, 126 Eng. Rep. at 663.


[109] Hornblower v. Boulton, (1799) 101 Eng. Rep. 1285, 1290; 8 T.R. 95.

[110] See, e.g., H.I. Dutton, The Patent System and Inventive Activity During the Industrial Revolution, 1750-1852, 72–75 (Manchester Univ. Press 1984) (discussing further the controversy in English case law and commentary on the meaning of “manufactures” under the Statute of Monopolies surrounding Boulton and Hornblower); see also Helen Mary Gubby, Developing a Legal Paradigm for Patents 111 (Eleven Int’l Pub. 2012); see also Christine MacLeod, Inventing the Industrial Revolution 237 n.29 (Cambridge Univ. Press 1988) (“This continued to be a point of judicial uncertainty and debate. One witness in 1829 thought that half the patents overturned in the courts were lost on the judge’s adverse definition of ‘manufacture.’”).


[111] Gibson v. Brand, 1 W.P.C. 626, 633 (1841).


[112] Id. at 635.


[113] Id. at 633.


[114] Id. at 634.


[115] Id.


[116] Gibson, 1 W.P.C. at 634.


[117] Id. at 635.


[118] Id.


[119] Id. at 636.


[120] Losh v. Hague, 1 W.P.C. 202, 208 n.(f) (1838).


[121] Id.

[122] Id.


[123] Id. at 208.


[124] Id.


[125] Crane’s Patent, 1 W.P.C. 375, 375 (1836).


[126] Crane v. Price & Others, 1 W.P.C. 377, 378 (1842).


[127] Id. at 378-79.


[128] Id. at 408.


[129] Id. at 409.


[130] Id.


[131] Crane v. Price & Others, 1 W.P.C. 377, 409 n.(e) (1842).


[132] Id.


[133] See id. at 409.


[134] Boulton v. Bull (1795) 126 Eng. Rep. 651, 651 n.(a)1.


[135] Howe v. Abbott, 12 F. Cas. 656, 658 (C.C.D. Mass. 1842) (No. 6766).


[136] Id. at 658.


[137] Id. at 657.


[138] Id.


[139] Id.


[140] Crane, 1 W.P.C. at 413 (“But the present specification expressly says, I take the whole of the invention already well known to the public, and I combine it with something else.”).


[141] Howe, 12 F. Cas. at 658.


[142] Id.


[143] William W. Story, Samuel Bean v. Thomas Smallwood (1843), in Reports of Cases Argued and Determined in the Circuit Court of the U.S. 408, 408 (1845).


[144] Bean v. Smallwood, 2 F. Cas. 1142, 1143 (C.C.D. Mass. 1843) (No. 1171).


[145] Id.


[146] Id.


[147] Id. (“Now I take it to be clear, that a machine or apparatus, or other mechanical contrivance, in order to give the party a claim to a patent therefor, must itself be substantially new.”).


[148] Le Roy v. Tatham, 55 U.S. 156, 177 (1852) [hereinafter Le Roy I] (“We think there was error in the above instruction, that the novelty of the combination of the machinery, specifically claimed by the patentees as their invention, was not a material fact for the jury, and that on that ground, the judgment must be reversed.”).


[149] Id. (quoting Bean v. Smallwood, 2 F. Cas. 1142, 1143 (C.C.D. Mass. 1843)).


[150] Id. at 180 (Nelson, J., dissenting).


[151] Id. (Nelson, J., dissenting).


[152] Id. at 181 (Nelson, J., dissenting).


[153] Le Roy I, 55 U.S. at 179 (Nelson, J., dissenting).


[154] Id. at 182 (Nelson., J., dissenting).


[155] Id. (Nelson, J., dissenting).


[156] See Boulton v. Bull (1795) 126 Eng. Rep. 651, 655, 667; 2 H. BL. 463.


[157] Id. at 658.


[158] Le Roy I, 55 U.S. at 185 (Nelson, J., dissenting).


[159] Id. at 187 (Nelson, J., dissenting).


[160] Id. at 188 (Nelson, J., dissenting).


[161] Id. at 187 (Nelson, J., dissenting).


[162] See Hornblower v. Boulton (1799) 101 Eng. Rep. 1285, 1291-92; 8 T.R. 95.


[163] Le Roy I, 55 U.S. at 186-87 (Nelson, J., dissenting).


[164] Id. at 187 (Nelson, J., dissenting).


[165] Brown v. Piper, 91 U.S. 37, 39 (1875).


[166] Id. at 41.


[167] Id.


[168] See Roberts v. Ryer, 91 U.S. 150, 159 (1875).

[169] Id. (quoting Smith v. Nichols, 88 U.S. 112, 112 (1874)).


[170] Id. at 153.


[171] Id. at 159.


[172] Id. at 157.


[173] See Cochrane v. Deener, 94 U.S. 780, 788 (1877).


[174] See id. at 787-88.

[175] Hartranft v. Wiegmann, 121 U.S. 609, 615 (1887).


[176] See Diamond v. Chakrabarty, 447 U.S. 303, 310 (1980).


[177] See id. at 309-10 (“[Chakrabarty’s] claim is not to a hitherto unknown natural phenomenon, but to a nonnaturally occurring manufacture or composition of matter – a product of human ingenuity ‘having a distinctive name, character [and] use.’” (quoting Hartranft v. Wiegmann, 121 U.S. 609, 615 (1887))).


[178] See Hartranft, 121 U.S. at 615.


[179] See id.


[180] See id.


[181] Penn. Railroad Co. v. Locomotive Engine Safety Truck Co., 110 U.S. 490, 494 (1884).


[182] Id. at 496 (quoting Brook v. Aston, 27 Law Journal (N.S.) Q.B. 145).

[183] Id. at 498.


[184] Id.


[185] Id.


[186] Penn. Railroad Co., 110 U.S. at 498 (As stated by the Court: “In the case at bar, the old contrivance of a railroad truck, swiveling upon the king-bolt, with traverse slot, and pendant diverging links, already in use under railroad cars, is applied in the old way, without any novelty in the mode of applying it, to the analogous purpose of forming the forward truck of a locomotive engine. This application is not a new invention, and therefore not a valid subject of a patent.”).

[187] Ansonia Brass & Copper Co. v. Electrical Supply Co., 144 U.S. 11, 18 (1892) (citing Roberts v. Ryer, 91 U.S. 150 (1875)) (“It was said by Chief Justice Waite in Roberts v. Ryer, 91 U.S. 150, 157, that ‘it is no invention to use an old machine for a new purpose. The inventor of a machine is entitled to all the uses to which it can be put, no matter whether he had conceived the idea of the use or not.’”).


[188] Id.


[189] Grant v. Walter, 148 U.S. 547, 556 (1893) (emphasis added).


[190] Potts v. Creager, 155 U.S. 597, 608 (1894) (emphasis added).


[191] Parke-Davis & Co. v. H.K. Mulford & Co., 189 F. 95, 98, 104 (S.D.N.Y. 1911); see also, e.g., Amy L. Landers, Understanding Patent Law 300 (Matthew Bender ed., LexisNexis 2d ed. 2012) (“The Parke-Davis opinion, which permitted the patent for a purified substance that evidenced properties beyond those existent in the material’s natural state, has been recognized as laying the foundation for the patentability of the more complex biotechnological inventions developed today. The Parke-Davis opinion demonstrates that a product derived from nature that evidences alteration from their natural origins constitutes patentable subject matter.”).


[192] Parke-Davis & Co., 189 F. at 103.


[193] Id.


[194] Id.


[195] Parke-Davis & Co. v. H.K. Mulford & Co., 196 F. 496, 498 (2d Cir. 1912).


[196] Traitel Marble Co. v. U.T. Hungerford Brass & Copper Co., 18 F.2d 66, 68 (2d Cir. 1927).


[197] Id.


[198] Id.


[199] H.C. White v. Morton E. Converse & Son Co., 20 F.2d 311, 313 (2d Cir. 1927).


[200] Id.


[201] Id.


[202] Id.


[203] Ex parte Brown, 387 Off. Gaz. Pat. Office 461, 461 (1928) (noting that exemplary claims of the patent at issue, U.S. Patent No. 1,725,335, were directed to the material itself: “1. An electric insulating material composed of the fiber of plant leaves of the Bromelia family; 6. An electric insulating material for conduits in the form of a paper composed of the fiber of the caroa plant of the neoglaziovia variegata species of the Bromelia family”).


[204] Id. at 462.


[205] See id.


[206] General Electric Co. v. Hoskins Mfg. Co., 224 F. 464, 470-73 (7th Cir. 1915).


[207] Id. at 469, 471.


[208] Id. at 467 (emphasis added).


[209] Id. at 470.


[210] See Ex parte Brown, 387 Off. Gaz. Pat. Office 461, 462 (1928).


[211] Ex parte Oscar Hannach, 8 U.S.P.Q. (BNA) 13 (Bd. Pat. App. 1931).


[212] Id.


[213] Id.


[214] American Fruit Growers, Inc. v. Brogdex Co., 283 U.S. 1, 11 (1931).


[215] Id. at 12.


[216] Hartranft, 121 U.S. at 615 (“They were still shells. They had not been manufactured into an article, having a distinctive name, character or use from that of a shell.”). See also supra text at note 175.

[217] American Fruit Growers, Inc., 283 U.S. at 12 (quoting Anheuser-Busch Ass’n., 207 U.S. at 562 (quoting Hartranft, 121 U.S. at 609)).


[218] See infra text accompanying note 404, et seq.


[219] H.K. Regar & Sons, Inc. v. Scott & Williams, Inc., 63 F.2d 229, 231 (2d Cir. 1933).

[220] Id.


[221] Id.

[222] See infra text accompanying note 213, et seq.

[223] See Hookless Fastener Co. v. G.E. Prentice Mfg. Co., 68 F.2d 940, 941 (2d Cir. 1934).


[224] Id.

[225] Helen Gubby, Developing a Legal Paradigm for Patents 28 (Erasmus University of Rotterdam 2011).
[226] See Ex parte Fulweiler, 24 U.S.P.Q. (BNA) 268 (P.T.A.B. 1934).

[227] See id.; see also supra text accompanying note 203.

[228] See Ex parte Weiger, 26 U.S.P.Q. (BNA) 25 (P.T.A.B. 1934).

[229] See id. at 3.

[230] See Ex parte Bosland, 44 U.S.P.Q. 695, 696 (P.T.A.B. 1940). This practice by the Patent Office resulted in a memorandum that was prepared “for discussion only.” The memorandum listed ten forms of claim construction and commented that the first four of them were proper while the others were not. According to an editor’s note in the United States Patent Quarterly, the “discussion was not concluded, and the paper consequently never distributed to the examining divisions.” The ten forms of claim construction were as follows:

  1. The process which comprises adding X to milk.
    2. The composition comprising milk and the substance X.
    3. A milk composition containing X.
    4. A milk preservative comprising X.
    5. A material for preserving milk comprising X.
    6. For use for preserving milk the substance X.
    7. As a preservative for milk the substance X.
    8. The use of X for preserving milk.
    9. The material X, which when added to milk acts to preserve it.
    10. A composition adapted for preserving milk comprising X.


[231] See Cuno Eng’g. Corp. v. Automatic Devices Corp., 314 U.S. 84, 91 (1941).


[232] See Graham v. John Deere Co., 383 U.S. 1, 15 n.7 (1966).


[233] See Cuno, 314 U.S. at 90 (citing Hotchkiss v. Greenwood, 52 U.S. 248 (1851)).


[234] Hotchkiss, 52 U.S. at 267; see, e.g., John F. Duffy, Inventing Invention: A Case Study of Legal Innovation, 86 Tex. L. Rev. 1, 39 (2007) (“Hotchkiss v. Greenwood, the Supreme Court’s first major opinion in this case, replaced the early requirement of inventive principle with a more general doctrine that demanded a sufficient ‘degree of skill and ingenuity’ as a condition for patentability.” (quoting Hotchkiss, 52 U.S. at 267)).


[235] See infra text at note 255.


[236] Cuno, 314 U.S. at 91 (citations omitted, emphasis added). The reasoning employed by the Court in Cuno in this respect closely resembled that of the defendants in Hotchkiss: “If in the present case the patentees had invented an improvement in the mode of fastening the knobs to the handles, or if they had invented a new mode of making knobs out of clay or other materials, their patent might have been sustained; but we maintain they cannot obtain a patent for a new use, or double use, of the article of clay, any more than they could sustain a patent for a new use of an old machine.” Hotchkiss, 52 U.S. at 261 (emphasis added).


[237] In re Thuau, 135 F.2d 344, 345 (C.C.P.A. 1943).


[238] See id. at 345. The independent claims on appeal from the Board of Appeals were as follows: “1. A new therapeutic product for the treatment of diseased tissue, comprising a condensation product of metacresolsulfonic acid condensed through an aldehyde. … 5. A new therapeutic product for the treatment of diseased tissue, comprising a condensation product obtained by condensing substantially pure metacresolsulfonic acid with an aldehyde. 14. The reaction product of substantially pure metacresolsulfonic acid and an aldehyde.”


[239] Id.


[240] Id.


[241] Id. at 347 (quoting section 31, title 35 U.S. Code (35 U.S.C.A. § 31 (repealed 1999))).


[242] In re Thuau, 135 F.2d at 347.


[243] U.S. Patent No. 2,118,888 (filed Sep. 30, 1936).


[244] See Old Town Ribbon & Carbon Co., Inc. v. Columbia Ribbon Carbon Mfg. Co., Inc., 159 F.2d 379, 382 (2d Cir. 1947).


[245] Id. at 381.


[246] Id. at 382.


[247] Id.


[248] See id.


[249] In re Haller, 161 F.2d 280, 281 (C.C.P.A. 1947).


[250] Id. at 280.


[251] Id. at 281.


[252] Id.


[253] Id.


[254] 35 U.S.C. § 31 (1946) (repealed 1999).


[255] Hotchkiss v. Greenwood, 52 U.S. 248, 267 (1851) (“[U]nless more ingenuity and skill in applying the old method of fastening the shank and the knob were required in the application of it to the clay or porcelain knob than were possessed by an ordinary mechanic acquainted with the business, there was an absence of that degree of skill and ingenuity which constitute essential elements of every invention. In other words, the improvement is the work of the skillful mechanic, not that of the inventor.”).


[256] See Kenneth J. Burchfiel, Revising the “Original” Patent Clause: Pseudohistory in Constitutional Construction, 2 Harv. J.L. & Tech. 155, 191, 195 (1989).


[257] Id. at 184 (discussing the limitations of the Patent Act of 1793).


[258] Funk Bros. Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 131 (1948) (citing Cuno Engineering Corp. v. Automatic Devices Corp., 314 U.S. 84, 91 (1941) and 35 U.S.C. § 31, R.S. § 4886).


[259] See Cuno Eng’g. Corp. v. Automatic Devices Corp., 314 U.S. 84, 91 (1941) (quoting Blake v. San Francisco, 113 U.S. 679, 683 (1885)) (citations omitted). See text supra at note 220.


[260] Funk Bros., 333 U.S. at 132 (emphasis added).


[261] Id. at 131.


[262] Id. at 130.


[263] Id. (emphasis added).


[264] Id. at 132.


[265] In re Thuau, 135 F.2d 344, 346 (C.C.P.A. 1943).


[266] Funk Bros., 333 U.S. at 132.


[267] See In re Benner, 174 F.2d 938, 939 (C.C.P.A. 1949).


[268] Id. at 941.


[269] Id. at 942.


[270] Id.


[271] In re Benner, 174 F.2d at 942.


[272] Id.


[273] See Ex parte Wagner, 88 U.S.P.Q. (BNA) 217 (P.T.A.B. 1950).


[274] Id. at 217.


[275] Id.


[276] See id. at 220 (“We agree with appellants that under the Thuau doctrine, the situation may reasonably arise, after grant of the patent, where the composition claims may be anticipated by a reference which does not meet the process claims.”).


[277] See In re Craige, 188 F.2d 505, 506 (C.C.P.A. 1951).


[278] See id. at 509 (emphasis added).


[279] In re Aronberg, 198 F.2d 840, 843-44 (C.C.P.A. 1952) (“Inasmuch as there is no disclosure in appellant’s application of a non-drying oil, we fail to see how use of the word ‘comprises,’ although it is an inclusive term, properly may be construed to include a non-drying oil as an ingredient of the composition defined.”).


[280] Id. at 845-46.


[281] Id. at 846.


[282] Karl B. Lutz, The New 1952 Patent Statute, 35 J. Pat. Off. Soc’y 155 (1953). The Patent Act of 1952 went into force on January 1, 1953.


[283] In re Aronberg, 198 F.2d at 846 (emphasis added).


[284] Chester H. Biesterfeld, Patent Law for Lawyers, Students and Engineers, 72-73 (John Wiley & Sons Inc., 2d ed. 1949).


[285] See Lothar Wachsner, Patentability of New Uses, 34 J. Pat. Off. Soc’y. 397 (1952).


[286] Id.


[287] Id. at 399-400.


[288] Id. at 400.


[289] Id. at 401.


[290] See Wachsner, supra note 285, at 401.


[291] Id. at 402.


[292] See Patent Act of 1952, Pub. L. No. 82-593, §1, 66 Stat. 792, 797.


[293] See 35 U.S.C. § 102 (amended 2011). See also S. Rep. No. 82-1979, at 2395 (1952), reprinted in 1952 U.S.C.C.A.N. 2394, 2409 (1952) (“Based on title 35 U.S.C., 1946 ed., sec. 31…. The corresponding section of the existing statute is split into two sections, section 101 relating to the subject matter for which patents may be obtained, and section 102 defining statutory novelty and stating other conditions for patentability.”) (“Revision Notes”).


[294] See 35 U.S.C. § 103. See also S. Rep. No. 82-1979, at 2395 (1952), reprinted in 1952 U.S.C.C.A.N. 2410 (“There is no provision corresponding to the first sentence [of new section 103] in the present statute, but the refusal of patents by the Patent Office, and the holding of patents invalid by the courts, on the ground of lack of invention or lack of patentable novelty has been followed since at least as early as 1850.”).


[295] Id. at 2398-99.


[296] 35 U.S.C. § 100(b).


[297] S. Rep. No. 82-1979, at 2399 (1952), reprinted in 1952 U.S.C.C.A.N. 2394, 2399 (1952).


[298] P.J. Federico, Commentary on the New Patent Act, 75 J. Pat. & Trademark Off. Soc’y 161, 176 (1993) [hereinafter Commentary].


[299] Id.


[300] Id. at 177.


[301] Id. at 178.


[302] Id.


[303] See Commentary, supra note 298, at 180.

[304] Id. at 180, 182.


[305] Am. Pat. Law Ass’n. Bull., May 1953, at 108 [hereinafter APLA Bull.].


[306] See supra text accompanying note 273.


[307] APLA Bull., supra note 305.


[308] Ex parte Wagner, 88 U.S.P.Q. (BNA) 217, 220 (P.T.A.B. 1950).


[309] APLA Bull., supra note 305, at 108.


[310] Id.


[311] Id.


[312] See Stefan A. Riesenfeld, The New United States Patent Act in the Light of Comparative Law I, 102 U. Pa. L. Rev. 291, 297 (1954).


[313] See id. at 299 (“As a matter of claim drafting, it is therefore necessary to protect the discovery of new uses by means of process or method claims and not of product claims.”); see also id. at 299 n.53 (“This is also the position of the Patent Office, see report of a speech by Mr. Federico.” (citing APLA Bull., supra note 305, at 107)).


[314] Id. at 297.


[315] See id. at 299 (“It remains open to doubt how far the section in question modifies or limits the Thuau doctrine, although the new act certainly alters the statutory basis of that decision.”).


[316] Id.


[317] Riesenfeld, supra note 312, at 299–300.


[318] Id. at 300.


[319] Id. at 300; United Mattress Mach. Co. v. Handy Button Mach. Co., 207 F.2d 1, 4 n.5 (3d Cir. 1953). The cases cited by United Mattress in support of this proposition were General Electric, 224 F. 464 (7th Cir. 1915), and Ansonia Brass, 144 U.S. 11 (1892), both of which were decided before the Patent Act of 1952 partitioned patent eligibility and novelty, and before there was statutory provision for non-obviousness.


[320] United Mattress, 207 F.2d at 4 n.5.


[321] Riesenfeld, supra note 312, at 300.


[322] Commentary, supra note 298, at 180, 182. As recited above, Federico made the point that: “The Committee Report states, in the general part, that one of the two ‘major changes or innovations’ in the new statute consisted in ‘incorporating a requirement for invention in section 103.’ *** ‘The opening clause of old R.S. 4886 which specified the classes of patentable subject matter (see section 101), began ‘Any person who has invented or discovered any new and useful art, machine, etc.’ Two requirements may be found here: novelty (…), and utility (…). The use of the word ‘invented’ in this phrase has been asserted as the source of the third requirement under discussion.”


[323] Riesenfeld, supra note 312, at 300.


[324] Id. at 298 n.45 (quoting Ansonia Brass, 144 U.S. at 13–14, 18).


[325] In re Ducci, 225 F.2d 683, 687 (C.C.P.A. 1955).


[326] Id. at 687 (citing In re Craige, Jr., 189 F.2d 505, 509 (C.C.P.A. 1951) (“[P]atents for old compositions of matter based on new use of such compositions, without change therein, may not lend patentability to claims.” (emphasis added))).


[327] Id. at 688.


[328] Id. (quoting Riesenfeld, supra note 312, at 299).


[329] Id.

[330] In re Ducci, 225 F.2d at 687.


[331] Elrick Rim Co. v. Reading Tire Mach. Co., Inc., 264 F.2d 481, 486–87 (9th Cir. 1959) (emphasis added).


[332] Id. at 487 (citations omitted).


[333] Lyon v. Bausch & Lomb Optical Co., 224 F.2d 530, 535 (2d Cir. 1955).


[334] See Armour Pharmaceutical Co. v. Richardson-Merrell, Inc., 264 F. Supp. 1013, 1016 (D. Del. 1967); see, e.g., In re Zierden, 411 F.2d 1325, 1329 (C.C.P.A. 1969).


[335] In re Zierden, 411 F.2d at 1329; see also, e.g., Research Corp. v. NASCO Ind., Inc., 501 F.2d 358, 360 (7th Cir. 1974) (“Whether a different use for a known process is merely analogous and cognate, and thus not ‘new,’ is a question which merges in the decisional process with the question of obviousness.”); see also Mehl/Biophile Int’l. Corp. v. Milgraum, 8 F. Supp. 2d 434, 446 (D.N.J. 1998), aff’d, 192 F.3d 1362 (Fed. Cir. 1999) (“If a different use of a known process is analogous or cognate to the prior uses, it will have difficulty defeating an argument that the patent is obvious under Section 103.”).


[336] See Hailes v. Van Wormer, 87 U.S. 353, 368 (1873).


[337] Id.


[338] Id. (emphasis added).


[339] Id. (emphasis added).


[340] Reckendorfer v. Faber, 92 U.S. 347, 348 (1875).


[341] Id. at 358.


[342] Id. at 355.


[343] Id.


[344] Id. at 356.


[345] Reckendorfer, 92 U.S. at 356.


[346] Id. at 357 (emphasis added).


[347] Id.


[348] Id.


[349] See generally H. Berman, Digest of Decisions on Combination and Aggregation, 17 J. Pat. Off. Soc’y., no. 1, 1935, at 29; H. Berman, Digest of Decisions on Combination and Aggregation, 17 J. Pat. Off. Soc’y., no. 2, 1935, at 143; H. Berman, Digest of Decisions on Combination and Aggregation, 17 J. Pat. Off. Soc’y., no. 3, 1935, at 202; H. Berman, Digest of Decisions on Combination and Aggregation, 17 J. Pat. Off. Soc’y., no. 4, 1935, at 311; H. Berman, Digest of Decisions on Combination and Aggregation, 18 J. Pat. Off. Soc’y., no. 4, 1936, at 285; H. Berman, Digest of Decisions on Combination and Aggregation, 21 J. Pat. Off. Soc’y., no. 9, 1939, at 685; H. Berman, Digest of Decisions on Combination and Aggregation, 24 J. Pat. Off. Soc’y., no. 10, 1943, at 718 (noting that between 1935 and 1943, Herman Berman, a patent examiner, authored a digest of judicial decisions distinguishing between permissible “combinations” and impermissible “aggregations,” published as a series of articles in the Journal of the Patent Office Society); see also C.W. Dawson, Some Notes on the Doctrine of Aggregation, 26 J. Pat. Off. Soc’y, no. 12, 1944, at 838 (C.W. Dawson, also a patent examiner, published a similar summary of cases addressing aggregation).


[350] Sachs v. Hartford Elec. Supply Co., 47 F.2d 743, 748 (2d Cir. 1931).


[351] Id.


[352] Id.


[353] Skinner Bros. Belting Co. v. Oil Well Improvements, 54 F.2d 896, 898 (10th Cir. 1931).


[354] Id.

[355] Id. at 898–99.


[356] Id. at 898.


[357] See supra text accompanying note 258.


[358] Funk Bros. Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 130 (1948).


[359] Id. at 131.


[360] Id.


[361] Id.


[362] See supra text accompanying notes 292-311.


[363] In re Worrest, 201 F.2d 930, 935 (C.C.P.A. 1953).


[364] Id.


[365] In re Carter, 212 F.2d 189, 193 (C.C.P.A. 1954).

[366] See In re Menough, 323 F.2d 1011, 1011 (C.C.P.A. 1963).

[367] Id. at 1014.

[368] Id. at 1015.

[369] In re Gustafson, 331 F.2d 905, 909 (C.C.P.A. 1964).

[370] See id. at 906 (“Novelty is not questioned, nor utility.”).


[371] But see id. at 910 (“However, the Board did not cite references on the issue of obviousness but relied on common knowledge…”).


[372] Id.


[373] Id.; see also 35 U.S.C. § 112, which read, at the time: “The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.” See also text at note 18, supra.


[374] See In re Gustafson, 331 F.2d 905, 911 (C.C.P.A. 1964) (“Even now we are unable to see what bearing section 101 has on the question. There remain the possibilities that the claims do not define the invention claimed at [sic] with the particularity required by section 112, that what the claims do define is obvious, and that the invention sought to be claimed is obvious. Possibly there is relevant art.”).


[375] Id.


[376] In re Collier, 397 F.2d 1003, 1006 (C.C.P.A. 1968).


[377] Id. at 1005.


[378] Id. at 1006.

[379] See id. at 1006 (“We agree with that reading of claim 17 and consequently with the holding that its subject matter is obvious in view of prior art.”).

[380] MPEP § 2173.05(k) (9th ed. Rev. 7, Nov. 2015).

[381] Id. This section of the MPEP also refers the reader to section 2172.01 in the event that a claim omits essential matter. Section 2172.01, in turn, calls for a rejection on the basis of lack of enablement in such a case.

[382] See Gottschalk v. Benson, 409 U.S. 63 (1972); see also 35 U.S.C. § 102 (listing statutory novelty as a condition for patentability); see also 35 U.S.C. § 103 (conditions for patentability of non-obvious subject matter).

[383] See Mayo Collaborative Serv. v. Prometheus Lab., Inc., 566 U.S. 66 (2012); see also Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347 (2014).

[384] See Sachs, 47 F.2d at 748.

[385] See supra text at note 352.


[386] Boulton, 126 Eng. Rep. at 667; see text supra at note 38; see also, for example, Justice Story’s note, “On the Patent Laws,” which appeared as an appendix to Justice Marshall’s opinion in Evans v. Eaton, 16 U.S. (3 Wheat.) 454 app. at 15 (1818) (“A patent cannot be for a mere principle, properly so-called; that is, for an elementary truth.”).



[387] See id. (“[B]ut for a principle so far embodied and connected with corporeal substances as to be in a condition to act, and to produce effects in any art, trade, mystery, or manual occupation, I think there may be a patent.”); see text supra at note 60.


[388] Id. at 662.


[389] Le Roy I, 55 U.S. at 175.


[390] See Le Roy v. Tatham, 63 U.S. 132 (1860) [hereinafter Le Roy II].


[391] Id. at 139 (emphasis added).


[392] Id. at 141.


[393] See id. at 139.


[394] O’Reilly v. Morse, 56 U.S. 62, 112 (1853).


[395] Id. at 112-13.


[396] Id. at 112-113.


[397] Id. at 113.


[398] Id. at 113.


[399] Patent Act of 1836, ch. 357, § 6, 5 Stat. 117, 119 (reprinted 1870) (current version at 35 U.S.C. §§ 101, 102, 103 & 112 (2011)) (emphasis added).


[400] Le Roy I, 55 U.S. at 175 (emphasis added).


[401] Id.


[402] Id.


[403] See Mayo, 566 U.S. at 70; Alice Corp. v. CLS Bank, Int’l., 134 S. Ct. 2347, 2355, 2357 (2014).


[404] Alice Corp., 134 S. Ct. at 2353.

[405] Id. at 2355 (citations omitted) (emphasis added).


[406] See supra text accompanying notes 336-339.


[407] See supra text accompanying note 214.


[408] See Patent Act of 1952, Pub. L. No. 82-593, § 101, 66 Stat. 792, 797; see also Gottschalk v. Benson, 409 U.S. 63 (1972).


[409] Gottschalk, 409 U.S. at 71 (“It is argued that a process patent must either be tied to a particular machine or apparatus or must operate to change articles or materials to a ‘different state or thing.’ We do not hold that no process patent could ever qualify if it did not meet the requirements of our prior precedents.”).


[410] See id. at 65 (“The patent sought is on a method of programming a general-purpose digital computer to convert signals from binary-coded decimal form into pure binary form. A procedure for solving a given type of mathematical problem is known as an ‘algorithm.’”).


[411] Id. at 71-72 (emphasis added).


[412] Parker v. Flook, 437 U.S. 584, 589 (1978).


[413] See id. at 593-594.


[414] Id. at 590 (quoting White v. Dunbar, 119 U.S. 47, 51 (1886)) (emphasis added).


[415] Id. at 594 (emphasis added).


[416] Id. at 593 (emphasis added).


[417] See Diamond v. Chakrabarty, 447 U.S. 303, 310 (1980).


[418] Id.


[419] Id.


[420] Id.


[421] See Chakrabarty, 447 U.S. at 305.


[422] See American Fruit Growers, Inc. v. Brogdex Co., 283 U.S. 1, 11-12 (1931).


[423] See Funk Bros., 333 U.S. at 130.


[424] Diamond v. Diehr, 450 U.S. 175, 177 (1981).


[425] See id. at 192-93.


[426] Id. at 191 (quoting S. Rep. No. 1979, 82d Cong. 2d Sess., 17 (1952)); see also In re Benner, 46 F.2d 382, 383 (C.C.P.A. 1931) (discussing the aspect of novelty that is required for patentability). See supra at note 293.


[427] Diehr, 450 U.S. at 189 n.12.


[428] See id. at 188-189.


[429] Id.


[430] Id. at 192-93.


[431] Id.


[432] See Boulton v. Bull (1795) 126 Eng. Rep. 651, 667; 2 H. BL. 495 (“Undoubtedly there can be no patent for a mere principle, but for a principle so far embodied and connected with corporeal substances as to be in a condition to act, and to produce effects in any art, trade, mystery, or manual occupation, I think there may be a patent.”).


[433] Diehr, 450 U.S. at 192-193.


[434] Bilski v. Kappos, 561 U.S. 593, 602 (2010).


[435] Id. at 604.


[436] Id. at 608 (“Finally, while [35 U.S.C.] § 273 appears to leave open the possibility of some business method patents, it does not support broad patentability of such claimed inventions.”).


[437] Id. at 612.


[438] Id.


[439] Bilski, 561 U.S. at 609.


[440] Id.


[441] See id. at 611 (The Court stated: “Petitioners’ remaining claims are broad examples of how hedging can be used in commodities and energy markets. Flook established that limiting an abstract idea to one field of use or adding token postsolution components did not make the concept patentable. That is exactly what the remaining claims in petitioners’ application do. These claims attempt to patent the use of the abstract idea of hedging risk in the energy market and then instruct the use of well-known random analysis techniques to help establish some of the inputs into the equation. Indeed, these claims add even less to the underlying abstract idea principle than the invention in Flook did, for the Flook invention was at least directed to the narrower domain of signaling dangers in operating a catalytic converter.”) (emphasis added).


[442] See id. at 612.


[443] See generally Mayo, 566 U.S. 66 (2012) (determining that certain drug administrations were not patent eligible subjects).


[444] Mayo, 566 U.S. at 84; see also Neilson v. Harford (1841) 151 Eng. Rep. 1266; Web. Pat. Cas. 295, 371.


[445] Mayo, 566 U.S. at 82.


[446] Id. at 87 (emphasis added).


[447] Id.


[448] Id. at 89.


[449] Id. at 90 (The Court stated: “We recognize that, in evaluating the significance of additional steps, the § 101 patent eligibility inquiry and, say, the § 102 novelty inquiry might sometimes overlap. But that need not always be so. And to shift the patent eligibility inquiry to these later sections risks creating greater legal uncertainty, while assuming that those sections can do work that they are not equipped to do.”).


[450] Mayo, 566 U.S. at 90.


[451] Ass’n for Molecular Pathology v. Myriad Genetics, Inc., 133 S. Ct. 2107, 2111 (2013).


[452] Id. at 2116-2117 (emphasis added).


[453] Id. at 2117.


[454] Id. at 2119.


[455] Id. at 2110, 2119.


[456] Ass’n for Molecular Pathology, 133 S. Ct. at 2117.


[457] Id. at 2119 n.9 (“We express no opinion whether cDNA satisfies the other statutory requirements of patentability.”).


[458] See id.


[459] Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2356 (2014).


[460] Id. at 2349-2350.


[461] Id. at 2355.


[462] Id. at 2357.


[463] Alice Corp., 134 S. Ct. at 2355.


[464] Id. (quoting Mayo, 566 U.S. at 78).


[465] Id.


[466] Id. (quoting Mayo, 566 U.S. at 73) (emphasis added).


[467] Alice Corp., 134 S. Ct. at 2358 (emphasis added).


[468] See text supra at note 416.


[469] Alice Corp., 134 S. Ct. at 2359 (citation omitted).


[470] See Sachs v. Hartford Electric Supply Co., 47 F.2d 743, 748 (2d Cir. 1931). See also supra text at note 350.


[471] Such an analysis must, however, also be viewed in the context of the kinds of abstract ideas that, even if applied as processes, are subject to patent protection. This question, in turn, depends upon the meaning of “useful Arts” under Article I, Section 8, clause 8 of the United States Constitution, and is beyond the scope of this paper. The author, however, has briefly attempted to address this subject in N. Scott Pierce, A Great Invisible Crashing: The Rise and Fall of Patent Eligibility Through Mayo v. Prometheus, 23 Fordham Intell. Prop. Media & Ent. L.J. 186, 199-201, 204 (2012). The author also refers the reader to Joel Mokyr, The Enlightened Economy: An Economic History of Britain, 1700-1850 (Yale Univ. Press 2009), who said that “the useful arts” as they were understood during the “Industrial Enlightenment” of eighteenth-century British society were embodied in “the Baconian program,” and were intended to give “people power over nature and not (just) over other people.” Id. at 200-201. If so, then such an interpretation might be a basis for excluding from eligibility for patent protection application of “abstract ideas” that are limited to giving power only over other people (and not nature), such as forms of government, economics, finance, religion, etiquette, etc., regardless of their physical means of manifestation.


[472] See Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1373-1375, 1380 (Fed. Cir. 2015), cert. denied, 2016 U.S. LEXIS 4087 (2016).


[473] See id. at 1375-76.


[474] Id. at 1377.


[475] Id. at 1376.


[476] Id. at 1379.


[477] Ariosa Diagnostics, Inc., 788 F.3d at 1380.


[478] See Wachsner, supra note 285, at 401, 402; see also Funk Bros., 333 U.S. at 132.


[479] Ariosa Diagnostics, Inc., 788 F.3d at 1380 (Linn, J., concurring).


[480] Id. at 1381 (Linn, J., concurring) (emphasis added).


[481] See supra text at note 30 et seq.


[482] Ass’n for Molecular Pathology, 133 S. Ct. at 2117.


[483] See Ariosa Diagnostics, Inc., 788 F.3d at 1373.


[484] Id. at 1381.


[485] The revisions proposed by the IPO and the ABA are as follows:

IPO proposal:


Whoever invents or discovers, and claims as an invention, any new and useful process, machine, manufacture, composition of matter, or any new and useful improvement thereof, may thereto, shall be entitled to obtain a patent therefor for a claimed invention thereof, subject only to the exceptions, conditions, and or requirements set forth in of this Title.
A claimed invention is ineligible under subsection (a) if and only if the claimed invention as a whole, as understood by a person having ordinary skill in the art to which the claimed invention pertains, exists in nature independently of and prior to any human activity, or exists solely in the human mind.

The eligibility of a claimed invention under subsections (a) and (b) shall be determined without regard as to the requirements or conditions of sections 102, 103, and 112 of this Title, the manner in which the claimed invention was made or discovered, or the claimed invention’s inventive concept.

Proposed Amendments to Patent Eligible Subject Matter Under 35 U.S.C. § 101, Intellectual Property Owners Assoc., Section 101 Legislation Task Force, February 7, 2017 (alterations emphasized);

ABA proposal:

  1. Inventions patentable: Conditions for patentability: eligible subject matter.

(a) Eligible Subject Matter.- Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may shall be entitled to obtain a patent thereof, subject to the on such invention or discovery, absent a finding that one or more conditions and or requirements under of this title have not been met.

(b) Exception.- A claim for a useful process, machine, manufacture, or composition of matter, or any useful improvement thereof, may be denied eligibility under this section 101 on the ground that the scope of the exclusive rights under such a claim would preempt the use by others of all practical applications of a law of nature, natural phenomenon, or abstract idea. Patent eligibility under this section shall not be negated when a practical application of a law of nature, natural phenomenon, or abstract idea is the subject matter of the claims upon consideration of those claims as a whole, whereby each and every limitation of the claims shall be fully considered and none ignored. Eligibility under this section 101 shall not be negated based on considerations of patentability as defined in Sections 102, 103 and 112, including whether the claims in whole or in part define an inventive concept.


Letter to The Honorable Michelle K. Lee from the American Bar Association, March 28, 2017 (alterations emphasized).
The text of section 101 as proposed by the AIPLA is as follows:
(a) Eligible Subject Matter. Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain shall be entitled to a patent therefor, subject only to the conditions and requirements of set forth in this title.
(b) Sole Exceptions to Subject Matter Eligibility. A claimed invention is ineligible under subsection (a) only if the claimed invention as a whole exists in nature independent of and prior to any human activity, or can be performed solely in the human mind.


(c) Sole Eligibility Standard. The eligibility of a claimed invention under subsections (a) and (b) shall be determined without regard to the requirements or conditions of sections 102, 103, and 112 of this title, the manner in which the claimed invention was made or discovered, or whether the claimed invention includes an inventive concept.
AIPLA Proposal on Patent Eligibility, May 12, 2017.
[486] See, e.g., N. Scott Pierce, Common Sense: Treating Statutory Non-Obviousness as a Novelty Issue, 25 Santa Clara Computer & High Tech. L.J. 541 (2009).

[487] See, e.g., Kenneth J. Burchfiel, Revising the “Original” Patent Clause; Pseudohistory in Constitutional Construction, 2 Harv. J.L. & Tech. 155, 212-13 (1989); Pamela O. Long, Invention, “Intellectual Property,” and the Origin of Patents, 32 Technology and Culture 846, 875 n.76 (Periodicals Service Company) (1991) (“Until recently, the early history of patents in continental Europe had been investigated primarily by patent lawyers interested in the antecedent of their own discipline…. Yet much of it is marred by inadequate documentation, overgeneralization, and an anecdotal quality that fails to explore patents within the context of economic history and the history of technology.”)


[488] See, e.g., Brad Sherman & Lionel Bently, The Making of Modern Intellectual Property Law 209 (Cambridge University Press, 1999). (“This is not to suggest that the 1624 Statute of Monopolies or the 1710 Statute of Anne played no role in the history of intellectual property, for they clearly did. Rather, it is to argue that the way in which these (and related) events were perceived changed, sometimes dramatically, with the emergence of modern intellectual property law.”)


[489] Mayo, 566 U.S. at 90.


[490] Id.

Ransomware – Practical and Legal Considerations for Confronting the New Economic Engine of the Dark Web


Cite as: James A. Sherer, Melinda L. McLellan, Emily R. Fedeles, and Nichole L. Sterling, Ransomware – Practical and Legal Considerations for Confronting the New Economic Engine of the Dark Web, 23 Rich. J.L. & Tech. Ann. Survey (2017),


By: James A. Sherer,* Melinda L. McLellan,** Emily R. Fedeles,*** and Nichole L. Sterling****


I.    Introduction


[1]       Ransomware is malicious software that encrypts data on a device or a system, then bars access to, or recovery of, that data until the owner has paid a ransom.[1] This type of threat has existed in some shape or form since at least 1989,[2] but over the past two years the frequency and scope of attacks have increased to alarming levels. In response, the U.S. Federal Trade Commission (FTC) identified Ransomware as “one of the most serious online threats facing people and businesses” in 2016 as well as “the most profitable form of malware criminals use,”[3] and the FBI developed a special working group dedicated to fighting it.[4]


[2]       Considering that Ransomware emerged “at the dawn of the Internet revolution,”[5] even before the development of formalized Internet law and policy, attorneys have now had some time to become familiar with its operation and effects and to contemplate reasonable and legitimate responses to Ransomware attacks. Despite the intervening decades, and although Ransomware is now (somewhat) better understood both as a process and as a business, the legal implications of Ransomware attacks are still up for debate, and there is no simple answer to the question of how Ransomware victims can, or should, deal with an attack.


[3]       This digital menace poses constantly evolving threats, which adds to the challenges victims confront when attempting to implement current guidance and benchmarked response efforts. These challenges are rooted not only in the software’s functionality and potential for damage, but also in the emergence of a viable business model that has facilitated Ransomware’s exponential growth as a tool for criminals. We will explore these challenges by providing an overview of Ransomware’s development and spread, and then examining the current, albeit unsettled, legal landscape surrounding Ransomware attacks and victim responses, to consider what the future might hold for regulation in this space.


II.    A History of Ransomware


[4]       As noted above, Ransomware has been around in one form or another for at least ten years,[6] and as early as 1989 in the U.S.[7] and Europe.[8] The first recorded example was biologist Joseph Popp’s “AIDS Trojan”: Popp developed the virus and “passed 20,000 infected floppy disks out at the 1989 World Health Organization’s AIDS conference.”[9] Ransomware subsequently faded as a notable security concern for more than a decade before making another brief appearance in 2005.[10] Then, in the wake of an economic recession, Ransomware came back with a vengeance, making a dramatic entrance as it “resurged in 2013;”[11] it has continued to flourish ever since. Interestingly, Ransomware’s recent reemergence may be explained, in part, by the success of other hacking efforts. The historical model for the most obvious cybercrimes had been stealing and selling data (usually credit card numbers), but this fraud became so prevalent that the going rate for stolen payment card information has dropped precipitously over the past five years.[12] In response, “[t]o keep cybercrime profitable, criminals needed to find a new cohort of potential buyers, and they did: all of us.”[13]


[5]       Although experts rightly emphasize the significant problem Ransomware presents today, the risks have not always been so grave in the hostage-software industry. As Doug Pollack noted, “ironically, until [the 2005 resurgence], most [Ransomware] was fake. Fraudulent spyware removal tools and performance optimizers scared users into paying to fix problems that didn’t really exist.”[14] Regardless, most present-day (and, likely, future) Ransomware is serious business, both in the effects it has on victims and in the underground infrastructure that buttresses Ransomware’s propagation. Moreover, the scourge of Ransomware is growing steadily, with some researchers noting 500% yearly increases.[15] Other experts focus on the exponential reach of Ransomware, noting that it “infects one computer but…often spreads across network drives to infect other computers as well.”[16]


[6]       In the face of an inarguably immense and expanding problem, an understanding of the relevant legal issues is crucial for practitioners who will encounter Ransomware and its effects. That said, evaluating the applicable legal framework requires knowledge of Ransomware’s mechanics, which may vary widely by the type, source, and purpose of the Ransomware—not to mention the specific effects it may have on a given organization.


III.    Ransomware as a Process


[7]       Malware is malicious software, but that category “encompasses a wide range of program types including viruses, worms, logic bombs, Trojan horses, keyloggers, zombie programs, and backdoors.”[17] One subcategory of Malware is “Scareware,” or Malware that “takes advantage of people’s fear of revealing their private information, losing their critical data, or facing irreversible hardware damage.”[18] Ransomware is a subset of Scareware: specifically, a “category of malicious software which, when run, disables the functionality of a computer in some way,”[19] making it essentially “a digital version of hostage taking.”[20] Ransomware is also classified as a type of viral software, which may be grouped into separate “families” and differentiated by whether it presents only the superficial trappings of a threat or poses an actual problem.[21] We may divide the types of Ransomware that pose an actual threat into two main groups: “one-off” variants used in an ad-hoc fashion, and software that serves as an extension of the broader criminal infrastructure into which victims pay their ransom.


A.    Locker Ransomware


[8]       Beginning with the functional mechanics of the software, Ransomware attacks can be segregated by form. Early variants[22] were primarily Locker Ransomware, and were identified as such (e.g., WinLocker, which would lock up a user’s screen, and Master Boot Record, which would interrupt a user’s normal operating system).[23] The Locker approach “restricts user access to infected systems by locking up the interface or computing resources within the system,”[24] thereby blocking off access to the computer or denying access to files.[25] Locker Ransomware may display “a message that demands payment to restore functionality,”[26] such that it appears similar to the other Ransomware variants discussed below, but operates quite differently.


[9]       If the victim’s operating system is imagined as a storage unit, where the worth of the operating system lies in the items contained within the unit, Locker Ransomware operates by effectively changing the lock on the door, or, in some cases, changing the mechanism by which the lock engages. The items within the storage unit remain untouched, and the victim is asked to pay to have the door unlocked (or to have the locking mechanism restored to its original form), but victims in such Locker Ransomware cases have other options for regaining access. For example, they can try to bypass the door by (metaphorically) drilling out the lock, taking the door off its hinges, or just removing the walls from around the unit’s contents.


B.    Crypto Ransomware


[10]     Cryptographic approaches to Ransomware operate differently, though the initial message—pay us or you cannot access your data—looks the same at first blush. Rather than focusing solely on the lock, however, these variants[27] employ a Crypto Ransomware or CryptoLocker approach.[28] Here, the Ransomware “encrypts files on the target system so that the computer is still usable, but users can’t access their data.”[29] This type of Ransomware typically “uses RSA 2048 encryption to encrypt files,” making “cracking the lock” to avoid paying ransom an impossibility; for an average desktop computer, this approach would take “around 6.4 quadrillion years.”[30]


[11]     Continuing with the storage unit metaphor, a Crypto Ransomware approach may or may not tamper with the lock on the front door. Instead, Crypto Ransomware sizes up each item within the unit, systematically determining the relative value of the files to the user. These may include, for example, unstructured data comprised of user photos, Word documents, Excel files, or PDFs. Once those files are identified by extension, the program goes to work, encrypting each file and rendering it unusable pending payment of the ransom—unless, as we discuss below, (1) the user can find a workaround solution online; or (2) the ransom is paid but no key is provided.


[12]     When it comes to Crypto Ransomware, there is no option to drill out the lock, take the door off the hinges, or tear down the wall; each file is locked up separately and indefinitely.[31] Accordingly, this type of Ransomware poses a very different kind of threat and, as such, is handled quite differently by experienced security professionals tasked with solving the problem.


[13]     Crypto Ransomware doesn’t stop there. Certain variants add insult to injury, as some may, “while encrypting files, search[] and steal[] [B]itcoins from the user.”[32] Others, called “Doxware,” may focus on areas normally associated with user privacy such as conversations, photos, and other sensitive files; and threaten to release them publicly unless the ransom is paid.[33] Still another form of Crypto Ransomware, Shadowlock, “forces users to complete consumer surveys of products and services as the ransom payment.”[34]


[14]     Although Ransomware’s efficacy has improved over the decades since its introduction, many earlier forms are still in use.[35] This may be due in part to its inherent longevity: the self-propagating features of older Ransomware make it incredibly difficult to eliminate. Some legacy Ransomware variations are no longer in circulation, but certain “[m]alware that was released years—in some cases, decades—ago is still alive and well today,”[36] making awareness of modern Ransomware’s progenitors required knowledge for practitioners active in this space.


C.    Ransomware Delivery


[15]     Despite the automated nature of Ransomware’s self-propagation, the spread of most Ransomware is still a personal process that relies on human error.[37] The FBI notes specifically that “Ransomware is frequently delivered through spear phishing emails” to end users.[38] Other common methods of installing Ransomware are “exploit kits,”[39] “Web exploits and drive-by downloads,”[40] “infected removable drives, infected software installers,”[41] and “mass phishing campaigns.”[42] In a “mass phishing campaign,”[43] malware is “installed on a user’s computer without their knowledge when that user browses to a compromised website,”[44] exploiting “outdated browsers, browser plugins, and other software.”[45] These techniques may be referred to as “malvertising” where “[c]ybercriminals leverage compromised advertising networks to serve malicious advertisements on legitimate websites which subsequently infect the visitors…[later] redirecting the user to an Exploit Kit (EK) landing page.”[46]


[16]     In addition to leveraging self-propagation, Ransomware schemes also may rely on the “spray and pray” technique, or sending out massive quantities of malware-infected emails in hopes of hitting “as many individual targets…as quickly as possible” by virtue of sheer volume.[47] Still other types of Ransomware have begun to deploy an even more personal approach, tailoring messages to appear as genuine as possible; often through social engineering research used to gain knowledge of a company’s operational structure, invoicing and remittance practices, and even individuals’ writing styles.[48] Increasingly, “e-mails are highly targeted to both the organization and individual, making scrutiny of the document and sender important to prevent exploitation.”[49]


D.    Personality and Psychology


[17]     The customization of these programs is reflected in a variety of features that are now common to Ransomware schemes. For example, certain programs display multiple language options so “language is not a barrier to payment, [allowing] the user [to] access ransom instructions in English, French, German, Russian, Italian, Spanish, Portuguese, Japanese, Chinese and Arabic”[50] and making sure that the Ransomware “experience” is appropriately localized for the victim.[51] Once the Ransomware is downloaded, it disables the victim’s machine “by disallowing execution of various programs,” demanding ransom, and even “using local police images”: the program geo-locates the user’s internet protocol address and associates that address with location-specific law enforcement decals and insignia deployed from a central command-and-control server.[52]


[18]     In connection with this locality-based personalization, Ransomware may use psychological tactics to induce guilt or shame in individual victims.[53] For example, ransom notes may include salacious details to frighten users, sometimes claiming that the victim has violated federal statutes and/or threatening imprisonment for alleged visits to websites “containing pornography, child pornography, zoophilia and child abuse.”[54] These ransom notes are then spread throughout the computer’s operating system, often propagating hundreds of copies on a given computer to ensure the user’s attention is drawn to the threat.[55]


[19]     Alternatively, “some versions of Ransomware are now designed to seek out the files on a victim’s computer that are most likely to be precious, such as a large number of old photographs, for example, tax filings, or financial worksheets.”[56] Other variants “just delete[] files instead of encrypting them.”[57] Finally, some “variants display a countdown timer to the victim, threatening to delete the key/decryption tool if payment is not received before the timer reaches zero or, in other cases, increase the price of the ransom.”[58]


[20]     Even setting aside the nuances of these personal approaches, it is nearly impossible for security experts to keep pace with Ransomware advances generally, as “hackers are releasing over 100,000 new [R]ansomware variants daily,”[59] and “‘evil genius’ [R]ansomware ideas are ‘coming out on a regular basis.’”[60] Perhaps even more challenging for law enforcement and security specialists, the level of technological expertise required to engineer a Ransomware attack has decreased significantly; at this point, deploying Ransomware is “relatively low budget, low stakes, and [doesn’t] require much skill to pull off.”[61] Indeed, in one instance, a recent drop in price to US$39 for Ransomware software concerned experts who believed “the low price coupled with its potency could trigger a wave of new infections.”[62]


[21]     Evolving with the times, recent Ransomware variants have focused on smartphones and other connected devices, including those that are a part of the “Internet of Things.”[63] The first instances of “mobile-focused Ransomware came out in 2013,”[64] buoyed in part “by the practice of users downloading pirated apps from unsanctioned app stores.”[65] As noted by another commentator, “[R]ansomware criminals can achieve some profit from targeting any system: mobile devices, personal computers, industrial control systems, refrigerators, portable hard drives, etc. The majority of these devices are not secured in the slightest against a [R]ansomware threat.”[66]


IV.    The Business of Ransomware


You always wanted a Ransomware but never wanted two pay Hundreds of dollars for it? This list is for you!?? Stampado is a cheap and easy-to-manage ransomware, developed by me and my team. It’s meant two be really easy-to-use. You’ll not need a host. All you will need is an email account.[67]


[22]     The mentality behind Ransomware seems to have deep-rooted cultural underpinnings, likened by some authors to medieval roadways that became host “to travelling footpads referred to as highwaymen.”[68] Methodologically, the purveyors of Ransomware bear little resemblance to hackers “who attempt to exfiltrate or manipulate data where it is stored, processed, or in transmission;” instead, “ransomware criminals only attempt to prevent access to the data.”[69] In short, Ransomware aims to disrupt.


[23]     Ransomware differs from many other types of hacking on a number of levels. It has been called a “business model”[70] that has “quickly risen to dominance”[71] within the “cybercriminal market in the past few years”[72] and has “emerged as one of the most serious online threats facing businesses.”[73]


[24]     Often, a Ransomware attempt betrays the fact that its author “lack[s] the technical complexity to perform successful attacks;”[74] some versions have been described as lacking technical savvy, and others as “not very well developed” beginner-level efforts.[75] Perhaps because of a general lack of know-how, and Ransomware’s reputation as offering “easier money than hacking into personal information to use for identity theft,”[76] a cottage industry has mushroomed. Certain criminals “now have the resources to hire professional developers to build increasingly sophisticated malware” on their behalf.[77] Providers, “usually based in Russia, Ukraine, Eastern Europe and China, have begun licensing what’s known as ‘exploit kits’—all-inclusive Ransomware apps—to individual hackers for a couple hundred dollars a week,”[78] or even “[US]$50 for a set period time of use,”[79] frequently taking a “cut of the profits from payouts.”[80]


[25]     Known as “Ransomware-as-a-service” (or RaaS), there are now “products, such as CerberRing, which provide[] less-tech savvy criminals a corridor into cybercrime, and yield[] criminal affiliates (often tasked with distributing the [R]ansomware) a healthy portion of the profits.”[81] Interestingly enough, because Ransomware is such big business, some Ransomware enterprises actually offer “customer service which victims can contact to negotiate”[82] and similar structures that make both launching the attacks, and paying the ransoms, easier.[83]


[26]     Some commentators note that there is “some honour among thieves,” where “hackers almost always honour their word and provide the encryption key to those who make timely online payments.”[84] Others disagree, noting that a decision to pay does not consistently restore functionality, and “[t]he only reliable way to restore functionality is to remove the malware.”[85] For many this is truly unfortunate, as “[t]he costs of downtime often exceed the cost of ransom.”[86]


[27]     Ransomware infrastructure has “begun to mimic the way modern software is developed: there are criminal engineers and manufacturers, retailers, and ‘consumers’—[those] hackers on the lookout for the newest, most effective product.”[87] In some cases, when a ransom is paid functionality may be restored but in an inconsistent manner (e.g., accounting data may be returned, but mapped drive data is not); in at least one of those cases, the victim determined that the “help” offered by the Ransomware attacker could instead lead to the loss of more data.[88]


[28]     Ransomware may be preferred by criminals because it cuts out the middle-man.[89] It bypasses many of the annoyances associated with hacking to steal data that then must be monetized. Where “intellectual property, or other sensitive information that is stolen outright…is often ‘fenced’ on the Dark Web, then the buyer has to turn it into a false identity that can be used to fraudulently obtain goods or services.”[90] In contrast, Ransomware has victims who “pay the criminal directly, the payment happens within hours or days in untraceable currency, and there is no chain of custody to point to the criminals because the data stays on the victim’s system the whole time.”[91] Indeed, deploying Ransomware is especially convenient for criminals, as its operation “often means dealing not with a small group of fellow criminals, but instead with a much larger population of lay users who are unlikely to disappear behind bars.”[92]


V.    Ransomware’s Direct Impact


[29]     In some cases, specific industries have been singled out as popular targets. For instance, at the time of writing, “[R]ansomware is the dominant current information security threat to health care providers.”[93] Ransomware may target “victims like healthcare providers whose complex independent networks and critical need for real-time information can make reliance on backups difficult and potentially life-threatening.”[94] These types of targets (“hospitals in particular” but also “other firms heavily dependent on computers”[95]) tend to focus on paying off the attacker to make the problem go away, whereas other types of companies may be amenable to “resisting the attack and rebuilding entire systems.”[96] If the demands are not met, in the most extreme examples, a victim might be “forced back into the 1980s: digital typewriters, notebooks, fax machines, post-it notes, paper checks and the like.”[97] In the face of these challenges, many organizations and individuals simply pay. Some do so without fanfare, and experts claim it “would shock you [] how many companies have quietly gone ahead and paid for information to be returned.”[98] Others, like PayPal, have made public the fact that they will pay for stolen data to protect their customers.[99]


[30]     One commentator noted that attorneys increasingly are “targets of [R]ansomware;” in the past several years, a number of “large and small law firms in the United States and Canada have had their office computer systems compromised by [R]ansomware.”[100] Some professionals “suspect that paying gets you listed on the Dark Web as an easy target, setting you up for more attacks.”[101] At least in some cases, the FBI appears to agree.[102] Ransomware’s effects are not just monetary, as the loss of the files themselves (or the cost of ransom) may be eclipsed by the loss of “client trust, relationships, and reputation.”[103]


VI.    Ransomware’s Indirect Impact


[31]     One commentator notes that Ransomware is an exception (and perhaps portends a wave of such exceptions) to the traditional “data security breach” concept with which we have all become familiar.[104] Whereas a traditional “breach” typically entails the acquisition of data, Ransomware allows wrongdoers to control, damage, and interrupt systems; deny access to data; and destroy or otherwise harm the data’s integrity—all without actual acquisition of the data.[105]


[32]     Although some contend that “no information is actually stolen during a [R]ansomware attack,”[106] others argue that falling victim to Ransomware “could also be considered a data breach, even though the data never leaves the victim’s systems.”[107]


[33]     The issue of whether Ransomware constitutes a breach was raised at the 2016 Healthcare Compliance Association conference.[108] There, Iliana Peters of the Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) “pointed out that HIPAA regulations define a data breach as ‘impermissible acquisition, access, use or disclosure of PHI [protected health information] (paper or electronic) which compromises the security or privacy of the PHI.’”[109] Additional HIPAA guidance from the OCR also notes that some Ransomware may “exfiltrate” the data,[110] which further complicates a simple explanation for the mechanics of a Ransomware attack. The OCR also noted that “[h]ospitals and other healthcare providers hit by [R]ansomware attacks should notify affected individuals, the federal government and perhaps the news media unless there is a ‘low probability’ any personal health information was disclosed.”[111] That “guidance makes clear that a [R]ansomware attack usually results in a ‘breach’ of healthcare information under the HIPAA Breach Notification Rule,” noted OCR’s Executive Director, Jocelyn Samuels.[112]


[34]     In contrast, some argue that data breach notification statutes were implemented with a focus on informing citizens that their personal information may have been compromised, offering “valuable warnings to assist victims in protecting themselves” and otherwise corralling information that has been set loose in the outside world.[113] The July 2016 HHS guidance also indicates that the question of “whether notification is required comes down to a ‘fact-specific determination.’”[114] In some cases, a forensic investigation may provide evidence to support a company’s conclusion that a ransomware attack did not expose any personal information, even if the incident resulted in a system shutdown or other functional difficulties. Many healthcare entities have reached this same conclusion under HIPAA.


VII.    Response to Ransomware


[35]     Although the following discussion examines conventional best-practice approaches for dealing with Ransomware, the preceding section should signal that there is no one-size-fits-all solution. As with many computer infections, a typical initial response to Ransomware may be to restart the computer in “safe mode” in an effort to disable a number of programs that might be causing issues.[115] In the case of Ransomware, however, this approach may backfire, allowing the malicious software to flourish by unloading antivirus programs that otherwise may have stopped it.[116]


[36]     The next step in the response protocol is for victims to identify which “strain” of Ransomware they are dealing with, and then determine whether an “applicable decryption method” may be readily available to help unlock or decrypt files.[117] Whether this approach will be successful depends on the sophistication of the Ransomware. Certain generic, readily available strains that are still freely disseminated among would-be hackers may be defeated with relative ease, and the fact that a given strain of Ransomware is still in circulation is not proof of its viability or effectiveness.[118] To give one example, “the makers of Jigsaw ransomware have continued their assault against victims despite the fact its encryption scheme has been defeated by security researchers.”[119]


[37]     If these initial efforts are unsuccessful, certain victims may be inclined to pay the ransom. Experts may caution against paying the ransom prematurely, but for many, relatively paltry Ransomware demands (often ranging from US$200 to US$2,000) may be seen as a “nuisance fee” more than anything else.[120] The “To Pay or Not to Pay”[121] characterization of a standard response to Ransomware is apt, though this decision-making process may mean waiting to decide until after an initial deadline is extended.[122] Waiting may result in a doubling of the ransom[123] or even an exponential increase—up to US$20,000 in some instances.[124] And in some cases there really is no choice. As noted in a recent report, “[f]or variants of [R]ansomware that rely on types of strong asymmetric encryption that remain relatively unbreakable without the decryption key, victim response is sharply limited to pay[ing] the ransom or los[ing] the data. No security vendor or law enforcement authority can help victims recover from these attacks.”[125]


[38]     Paying a ransom may, therefore, make logical sense, given that “Ransomware attacks, especially those against individual users, only demand a few hundred dollars at most from the victim” and “[f]rom law enforcement’s perspective, a home burglary results in greater loss than a singular [R]ansomware attack.”[126] At least one commentator noted cynically that, because “[s]ecurity has always been a business decision, [s]ome companies would rather pay a lower fee for ransom than pay for the cost of having a robust security stance.”[127] Others note that “to save money, some organizations don’t include all their important files in their backups, or don’t run their backups often enough.”[128]


[39]     However, notwithstanding the low dollar value of most demands, taken in the aggregate, these attacks cost real money. “[L]osses for victims from a single strain of the CryptoWall malware were close to $18 million,”[129] and another Ransomware attacker earned roughly $1 million.[130] Given that “nearly 30 percent of CryptoLocker and CryptoWall victims pay the ransom,”[131] there remains the concern that “hackers [will] continue to ask for higher and higher ransoms.”[132] Early payment schemes involved payment through “an SMS text message or regular call to a premium rate number” where such charges could be “as high as $460.”[133] A second iteration of payment schemes moved to prepaid electronic payment systems such as Paysafecard, Ukash, and Moneypak, where Ransomware victims are required to purchase special PIN numbers.[134]


[40]     Regardless of whether it makes business sense for a victim to pay a given ransom, victims must also consider whether they may pay. Unhelpfully, regulatory authorities have expressed varying opinions on that point and have not provided definitive guidance as to whether victims should pay. The FTC notes that “[l]aw enforcement doesn’t recommend paying the ransom” while warning that “it’s up to you to determine whether the risks and costs of paying are worth the possibility of getting your files back.”[135] In contrast, Joseph Bonavolonta, the head of the FBI’s Cyber and Counterintelligence Program in 2015, stated that the FBI “often advise[s] people just to pay the ransom.”[136] Rick Kam, president of ID Experts, also opined that “it is often easier just to pay the ransom than to do without the data.”[137] Anecdotally, the authors have heard a wide range of opinions with respect to whether paying the ransom is a sound approach. Indeed, given the exploding number of attacks and diversity of outcomes, it is increasingly challenging to offer affected companies or individuals clear recommendations on how to assess the likelihood of success when it comes to answering a Ransomware demand.


[41]     In short, law enforcement guidance may boil down to a “[l]ook, we can’t help you”[138] response, even if some agencies indicate that “[m]ost…including law enforcement don’t condone paying the ransom,”[139] and “[m]ost security vendors advise the public (who are not yet victims) to never pay the ransom and to focus on mitigation efforts instead.”[140] The FBI, however, appears to be seeking “public-private partnerships,” as the Bureau utilizes notifications it receives regarding Ransomware and other threats in an overall effort to build up more comprehensive forms of defense and prevention.[141]


VIII.    Practical and Legal Considerations


[42]     In almost all cases, Ransomware ransom demands must be paid in a digital currency such as Bitcoin.[142] Bitcoin emerged in 2009[143] and has had unpredictable and profound effects, particularly with respect to the underground economy.[144] For many victims, receipt of a Bitcoin ransom demand is the first time they are exposed to the term, and very few have the necessary resources available to pay such a demand in a timely manner. Others who are aware of the threat—or who have a need for Bitcoin as a payment method for unrelated reasons—may “stockpile [B]itcoins in order to pay off cyber criminals who threaten to bring down their critical IT systems.”[145] To provide one public example, Hollywood Presbyterian Medical Center recently paid $17,000 in Bitcoin in response to a ransom demand.[146]


[43]     Unfortunately, making a Bitcoin payment is not a straightforward prospect for most organizations. The process is rife with potential legal and practical problems, because the company will likely “need to buy Bitcoins from an online exchange. The exchange will require you to supply a bank account or debit card number to fund the transaction, which creates an immediate risk because Bitcoin exchanges are notorious for being hacked.”[147]


[44]     To add another layer of complexity, in its March 25, 2014 Virtual Currency Guidance, the United States Internal Revenue Service declared that a virtual currency such as Bitcoin is considered property, not currency, and thus its use is a taxable event.[148] Further, “[a] payment made using virtual currency is subject to information reporting to the same extent as any other payment made in property.”[149] “The basis of virtual currency…is the fair market value of the virtual currency in U.S. dollars as of the date of receipt,” which means that a taxpayer could end up with a taxable gain or loss, depending on the net outcome.[150]
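The property treatment described above can be illustrated with a small worked example. The figures are hypothetical and this is not tax advice; the sketch simply applies the rule that basis is fair market value (FMV) in U.S. dollars at the date of receipt, and that a later disposition (such as spending the currency on a ransom) is measured against FMV at that time.

```python
def taxable_gain(units, fmv_at_acquisition, fmv_at_disposition):
    """USD gain (positive) or loss (negative) on disposing of `units` of a
    virtual currency treated as property: proceeds at disposition minus basis
    at acquisition, each measured at fair market value."""
    basis = units * fmv_at_acquisition
    proceeds = units * fmv_at_disposition
    return proceeds - basis

# Hypothetical: 2.0 BTC acquired at $400/BTC, spent on a ransom at $450/BTC.
print(taxable_gain(2.0, 400.0, 450.0))  # -> 100.0 (a $100 taxable gain)

# The same mechanics can produce a loss if the currency declined in value.
print(taxable_gain(1.0, 500.0, 450.0))  # -> -50.0
```

The point of the sketch is that a ransom payment in virtual currency is not tax-neutral: the victim's gain or loss turns on price movement between acquisition and payment, independent of the ransom amount itself.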


[45]     Concurrently, Ransomware perpetrators who demand Bitcoin ransoms run the risk of also violating financial services laws and regulations prohibiting the operation of unlicensed banks—or at least causing such violations.[151] “[T]he U.S. Attorney for the Southern District of New York issued a press release concerning [a] criminal prosecution against Anthony R. Murgio and Yuri Lebedev for running an unlicensed Bitcoin exchange used by victims of CryptoWall [R]ansomware to pay ransoms [to their attackers] via TOR (The Onion Router).”[152] The two men were accused of having operated a Bitcoin exchange service in violation of federal anti-money laundering laws and regulations; “in doing so, they knowingly exchanged cash for people whom they believed may be engaging in criminal activity.”[153] It is alleged that, in total, “between approximately October 2013 and January 2015, [the exchange] exchanged at least [US]$1.8 million for Bitcoins on behalf of tens of thousands of customers.”[154] In addition, during this time, Murgio allegedly “transferred hundreds of thousands of dollars to bank accounts in Cyprus, Hong Kong, and Eastern Europe, and received hundreds of thousands of dollars from bank accounts in Cyprus and the British Virgin Islands, in furtherance of the operations of his unlawful business.”[155] In doing so, the operators of the exchange were said to have “knowingly enabled the criminals responsible for those attacks to receive the proceeds of their crimes,” thereby violating federal anti-money laundering laws, because they “never filed any suspicious activity reports regarding any of the transactions.”[156]


[46]     As part of its efforts to combat global terrorism, the U.S. actively works to prevent terrorists from accessing and using its financial system.[157] Payments to criminals using Ransomware to hold data hostage may run afoul of banking laws and policies as well as related statutes and regulations. Individuals and organizations choosing to make ransom payments to end Ransomware attacks could be subject to international sanctions programs administered in the U.S. by the Office of Foreign Assets Control (OFAC), though such enforcement has not yet been tested as of this writing. Under these sanctions programs, ransom payments to certain entities are illegal, as noted by Samuel Cutler:


It’s important to begin from the fact that ransom payments to [Foreign Terrorist Organizations] FTOs or Specially Designated Global Terrorists (“SDGTs”) identified by [OFAC] are illegal under U.S. law. Monetary contributions to FTOs are considered material support under 18 U.S.C. 2339B, while transfers to SDGTs are violations of economic sanctions imposed pursuant to the International Emergency Economic Powers Act (“IEEPA”).


Furthermore, as the Financial Action Task Force (“FATF”) notes in discussion of ransom payments to the Islamic State in Iraq and the Levant (“ISIL”), “[U.N. Security Council] Resolution 2161 applies to both direct payments and indirect payments through multiple intermediaries, of ransoms to groups or individuals on the Al-Qaida Sanctions List. These restrictions apply not only to the ultimate payer of the ransom, but also to the parties that may mediate such transfers, including insurance companies, consultancies, and any other financial facilitators.”[158]


[47]     So far, the act of paying to remove Ransomware has not been prosecuted under 18 U.S.C. 2339B[159] or IEEPA, but U.S. law enforcement officials encourage victims of Ransomware to report the attacks and are actively seeking to uncover the people behind these attacks. It remains to be seen whether a substantial Ransomware-related payment that was determined to have been made to a person or group on an OFAC list may result in legal action.[160]


[48]     In addition, an Executive Order issued in April 2015 “expand[s] the [existing] sanctions regime to block the property and interests of persons engaging in ‘significant malicious cyber-enabled activities’” outside of the U.S. that constitute a significant threat to the country as “determined by the Secretary of the Treasury, in consultation with the Attorney General and the Secretary of State.” [161] Activities deemed significant “have the purpose or effect of” seriously harming or compromising critical infrastructure; disrupting the availability of computers and networks; and misappropriating funds, trade secrets, personal identifiers, or financial information.[162] Moreover, “[t]he blocking extends to assets of those who ‘have materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services in support of, any activity [proscribed by the order] or any person whose property and interests are blocked pursuant to this order,’” which could implicate individuals and institutions that choose to pay to remove Ransomware.[163] Ransomware disrupts the availability of computers and networks, has the ability to compromise critical infrastructure, and may allow for the misappropriation of information; these and other risks are among the considerations presented in the Order.[164]


[49]     In addition, the U.S. government’s hostage policy may be instructive in determining whether a Ransomware payment is likely to be prosecuted. The government itself will not pay ransoms to release human hostages, but the relevant policy explicitly states that families will not be prosecuted for paying ransoms in exchange for hostages, even if these payments are made to FTOs or other individuals or groups on the government’s sanctions lists.[165] Former President Obama noted that “no family of an American hostage has ever been prosecuted for paying a ransom for the return of their loved ones.”[166] Whether that U.S. policy would extend to photos of an individual’s loved ones held hostage by Ransomware is an entirely different question—one that may well test the limits of the government’s humanitarian leniency in this regard.


[50]     Current U.S. hostage policy also offers no exemption from prosecution for organizations making or facilitating ransom payments.[167] The FBI notes in its Ransomware guidance that “by paying a ransom, an organization might inadvertently be funding other illicit activity associated with criminals.”[168] Moreover, intermediaries cannot be used to avoid OFAC sanctions, which include freezing assets, forfeiture of assets, preventing payment transfers, fines, and imprisonment.[169] In Ransomware attacks, it may be impossible to ascertain who exactly is holding the data hostage, which in turn prevents the victim from determining in advance whether a ransom payment could result in sanctions for the organization.


[51]     Ultimately, it seems unlikely that individuals will be penalized for making small payments to regain access to personal data affected by Ransomware; enforcement is challenging on a practical level, as the anonymity of virtual currencies makes it difficult—if not impossible—to know whether payments are going to individuals or groups on sanctions lists.[170] Large organizations considering whether to pay higher amounts to meet demands from Ransomware attackers may face a more aggressive enforcement landscape. In some cases, organizations have engaged third parties to pay virtual currency ransom demands on their behalf. Ransomware payoffs and other hacking-related expenses may be funneled through intermediaries that “are often part of a larger contract for countersurveillance work, ensuring corporate accounting departments don’t need to green-light individual black market buys.”[171] With respect to the concept of paying ransom generally, it is worth considering the court’s ruling in United States v. Kozeny,[172] in which the “United States District Court for the Southern District of New York [found] that only extortion or duress under the threat of imminent physical harm would excuse[] the conduct” (emphasis added).[173] It is difficult to imagine extending that line of reasoning to include threats to important documents or photos, especially given that industry best practices for business continuity include maintaining robust backups that would protect against just this threat.[174]


[52]     As noted by some practitioners,[175] counsel’s advice on preventing and responding to Ransomware attacks may implicate Model Rule 1.1 – Competence, as amended by Comment 8, where “…a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology…”[176] Although the recent explosion in Ransomware attacks is a relatively new phenomenon, there is no shortage of resources lawyers can use to become familiar with the threats Ransomware poses to their clients’ data. For example, the FBI has issued guidance that provides “key areas to focus on with Ransomware [such as] prevention, business continuity, and remediation.”[177]


[53]     With respect to potential regulatory enforcement, the FTC has warned that “a company’s failure to update its systems and patch vulnerabilities known to be exploited by Ransomware could violate Section 5 of the FTC Act.”[178] In addition, the Gramm-Leach-Bliley Act (GLBA) includes requirements concerning the disclosure by financial institutions of fraudulent access to customer information.[179] The GLBA Safeguards Rule may be used “in conjunction with the FTC’s Section 5 authority to bring actions against financial institutions that fail to properly protect consumer financial information.”[180] Covered Entities under HIPAA are themselves subject to the Security Rule which, among a myriad of requirements to safeguard patient data, obligates Covered Entities to implement a data backup plan.[181] HIPAA compliance guides indicate that HIPAA security requirements extend to Ransomware, noting “…the possibility of a [R]ansomware attack must now be covered in any risk assessment.”[182]
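As a minimal illustration of the kind of control a HIPAA-style data backup plan might automate, the sketch below flags backup jobs that have not run within a policy window. The one-day threshold and job labels are assumptions for illustration, not requirements drawn from the Security Rule itself.

```python
import datetime

# Hypothetical policy: every covered data set must be backed up daily.
MAX_BACKUP_AGE = datetime.timedelta(days=1)

def stale_backups(backup_timestamps, now):
    """Return labels of backup jobs whose last successful run exceeds the
    allowed age, preserving the order in which jobs are listed."""
    return [label for label, last_run in backup_timestamps.items()
            if now - last_run > MAX_BACKUP_AGE]

now = datetime.datetime(2017, 2, 1, 12, 0)
backups = {
    "patient-records": datetime.datetime(2017, 2, 1, 3, 0),   # ran this morning
    "imaging-archive": datetime.datetime(2017, 1, 28, 3, 0),  # four days old
}
print(stale_backups(backups, now))  # -> ['imaging-archive']
```

A check of this sort also speaks to the risk-assessment point in the text: a backup that exists but has silently stopped running offers little protection when Ransomware strikes.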


[54]     Ransomware attacks also create eDiscovery conundrums. Ransomware as an application has been considered in a number of cases, including with respect to assessing a defendant’s behavior to determine whether parole was violated,[183] and in an arbitration regarding the ownership of a domain name.[184] Given the potential for increasingly complex conflicts in this space, practitioners should consider the implications of Ransomware on eDiscovery across a variety of scenarios. These include situations in which Ransomware is the source of a given dispute, as well as when Ransomware becomes a complicating factor in the eDiscovery process.[185]


[55]     Although eDiscovery has not been directly addressed in published decisions that contain a Ransomware element, the duty to preserve remains inviolate.[186] If a matter involves Ransomware, whether that matter affects the data itself or has secondary implications with respect to the data’s unavailability (such as when a hospital is attacked and patients are rerouted to other locations),[187] eDiscovery considerations should be front-of-mind for practitioners. Not only will claims or defenses associated with the Ransomware attack necessarily implicate the technology used, but the practices that may have enabled (or failed to prevent) the attack (e.g., the infection vector, the data affected, or the target’s backup environment) also may be relevant to the case, and thus subject to discovery and preservation.


[56]     Yet another potential risk concerns the possibility that Ransomware could negatively impact eDiscovery collection, preservation, and later discovery efforts. The data preserved by eDiscovery collections often includes highly refined sets of important, “entirely new stores of extraordinarily sensitive information”[188] that are retained for legal hold purposes regardless of the company’s standard data retention policies and information governance practices.[189] As discussed above, law firms have become a lucrative target for criminals using Ransomware;[190] among other valuable data sources, information preserved pursuant to litigation holds often is maintained by law firms that are representing multiple companies in a variety of matters. Law firms and other organizations—including vendors that provide preservation-related services—that have custody of these eDiscovery data sets should be cognizant of the risks created by atypical retention practices. These data sets are no less susceptible to Ransomware than their “standard” counterparts—and may even be more attractive targets, given the one-off nature of eDiscovery collections as well as the highly sensitive data they contain. Further, Ransomware may “preserve” data in a sense, but the data cannot be made available for production or may not exist in a usable format, which can add to the eDiscovery conundrums noted above.


IX.    Ransomware’s Future


[57]     Ransomware appears poised to evolve along the same lines as many other non-criminal programming efforts, increasingly adopting the aesthetic and practicality of popular software instances that rely on a modular design, allowing criminals to “use certain functions as-needed,” and offering “much better efficiency” and the “ability to switch tactics as required in the event one method is discovered or is found to be ineffective.”[191] This approach would retain certain core elements associated with functional, successful Ransomware variants in play while remaining nimble enough to affect new Internet of Things and mobile device usage.


[58]     For example, replacing the usual “command and control” center and related Deep- or Dark-Web business model, future Ransomware might “simply transmit a beacon with a GUID (globally unique identifier) to a Command and Control domain, trying to reach this domain through common protocols/services…to transmit this data.”[192] That is, Ransomware applications will be streamlined to suit a market seeking self-service options, exchanging a bespoke process for one that is both easier to replicate on a mass scale and cheaper to produce and distribute.[193]


[59]     As noted above, the volume and scope of attacks have expanded as demographics and usage patterns have shifted more and more Ransomware activity onto mobile and Internet of Things devices.[194] In addition, the software and strategy underlying Ransomware attacks have adapted to evade common protective measures; since good backups often are the best defense against serious damage in the event of an attack, newer Ransomware variations have been built to go after those backups as well, destroying “all Shadow Copy and restore point data on Windows systems.”[195] Ransomware is being developed to target not only a given piece of hardware, but also the device’s local and virtual environment, in an attempt to outwit the efforts of potential victims by guessing at where they might back up their data and undermining those preventative or responsive measures. Future Ransomware may well exploit would-be victims’ digital networking or social connections, using information gleaned from online posts to identify additional targets who may value the same types of data and thus be willing to pay the same types of ransoms to secure its release.


[60]     Although individuals will no doubt continue to fall victim to Ransomware, the trend seems to be toward attacks carried out on a more ambitious scale. Criminals are said to be “shying away from random attacks,” shifting from a focus on individuals and “expanding [further] into the corporate world” where victims are more likely to have the financial wherewithal to pay larger sums.[196] In short, an “individual might be limited to a [US] $500 ransom, but how about a manufacturer or a hedge fund?”[197] Criminals can leverage knowledge gained through experience in the ransom marketplace to seek out specific opportunities, determining, for example, that an average person’s photos are worth $X; an investment manager’s emails and personal diary are worth $Y; and a hedge fund’s proprietary formulas, representing “need-to-know” intelligence that is jealously guarded, are worth $Z. Adept attackers have already demonstrated their ability to exploit victim psychology in the abstract; laser-like, focused shakedowns may be the next horizon for Ransomware attacks.


[61]     In addition to diversified attack methodology, the potential impacts of Ransomware attacks are evolving. Beyond the hijacking or theft of stored financial records or customer files, targeting connected technology has the potential to wreak physical, “real life” havoc.[198] In the case of the Hollywood Presbyterian Medical Center Ransomware attack, for example, in addition to “forcing staff to go back to paper records and fax machines,” the data loss may have impacted care as “emergency patients were diverted to other hospitals.”[199] As we continue to rely more heavily on connected devices, it is not difficult to see how these types of disruptions could create serious problems across multiple industry sectors—the incipient arrival of driverless cars, for example, represents a potentially vulnerable technology that could be exploited for profit by data hostage-takers. An instance of Ransomware may be localized, but its effects can extend much further afield. Cars without accessible data could be paralyzed, regardless of whether they are in motion at the time the attack begins. Picture the movie Speed, replacing Sandra Bullock at the helm of a passenger-laden bus with a driverless car heading toward a cliff, doomed to disaster unless a ransom is paid.[200] Likewise, many hospital treatments rely on accurate patient data at critical moments. How much would an individual pay to ensure her blood type is communicated correctly or that his medical history warns doctors of possible drug interactions? If a patient were to die under such circumstances, how would a court assess liability for a failure either to prevent the Ransomware attack, or to pay the ransom promptly?


X.    Conclusion


“[Ransomware] is a volume business. It’s simple, relatively anonymous and fast. Some people will pay, some will not pay, so what. With a wide enough set of targets there is enough upside for these types of attacks to generate a steady revenue stream.”[201]


[62]     Grey areas abound, but thoughtful preparation is the best defense, both to avoid a Ransomware attack in the first place and to manage the issues that may arise when an attack occurs. Practitioners should not only be knowledgeable about Ransomware, which includes understanding Ransomware’s operation, effects, and ramifications, but also vigilant in following the latest trends and tracking the ever-evolving threats. Ransomware is not going anywhere, and while its meteoric rise and spread has been startling as a singular issue, it also serves as a clear warning of things to come. There is still plenty of room for innovation and tremendous incentives for criminals to pursue these opportunities. In a marketplace flooded with stolen credit card numbers and digital credentials, selling ill-gotten personal information to identity thieves has become both more cumbersome and less lucrative than holding data hostage and demanding a ransom from its owner.[202]


[63]     Given this environment, practitioners should take a proactive approach to understanding Ransomware, not only to counsel clients effectively, but also to safeguard their own sensitive data, both professional and personal. Such understanding demands a working knowledge of digital currencies and ransom payment options, although there is some debate as to whether employing intermediaries[203] may help address that particular challenge.[204] Regardless, the key will be education and vigilance to guide strategic responses to Ransomware incidents. In addition to taking steps to prevent Ransomware attacks, practitioners must prepare to respond as effectively and efficiently as possible to this ever-evolving threat.[205]





* James A. Sherer is a Partner in the New York office of Baker & Hostetler LLP.

** Melinda L. McLellan is a Partner in the New York office of Baker & Hostetler LLP.

*** Emily R. Fedeles is an Associate in the New York office of Baker & Hostetler LLP.

**** Nichole L. Sterling is an Associate in the New York office of Baker & Hostetler LLP.


[1] See Krzysztof Cabaj & Wojciech Mazurczyk, Using Software-Defined Networking for Ransomware Mitigation: the Case of CryptoWall, 30 IEEE Network 14 (2016).


[2] See James Scott & Drew Spaniel, The ICIT Ransomware Report: 2016 Will be the Year Ransomware Holds America Hostage 3–4 (2016).


[3] Ben Rossen, How to Defend Against Ransomware, FTC (Nov. 10, 2016).


[4] See Paul Merrion, FBI Creates Task Force to Fight Ransomware Threat, CQ Roll Call, Apr. 4, 2016, 2016 WL 2758516.


[5] Robert E. Litan, Law and Policy in the Age of the Internet, 50 Duke L.J. 1045, 1045 (2001).


[6] See Amin Kharraz et al., Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks, in DIMVA 2015 Proceedings of the 12th International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment 3 (Springer 2015).


[7] See James Scott & Drew Spaniel, supra note 2, at 4.


[8] Nicole van der Meulen et al., European Parliament Policy Dep’t for Citizens’ Rights & Constitutional Affairs, Cybersecurity in the European Union and Beyond: Exploring the Threats and Policy Responses 35 (2015).


[9] James Scott & Drew Spaniel, supra note 2, at 6.


[10] See id.


[11] See van der Meulen, supra note 8, at 35.


[12] See Josephine Wolff, The New Economics of Cybercrime, The Atlantic (June 7, 2016).


[13] Id.


[14] Doug Pollack, Ransomware 101: What to Do When Your Data is Held Hostage 7 (2016) (ebook).


[15] See Kharraz, supra note 6, at 1, 4.


[16] See Azad Ali et al., Recovering from the Nightmare of Ransomware – How Savvy Users Get Hit with Viruses and Malware: A Personal Case Study, 17 Issues in Information Systems 58, 61 (2016).


[17] Robert J. Kroczynski, Are the Current Computer Crime Laws Sufficient or Should the Writing of Virus Code Be Prohibited?, 18 Fordham Intell. Prop. Media & Ent. L.J. 817, 823 (2008).


[18] See Kharraz, supra note 6, at 1.


[19] Gavin O’Gorman & Geoff McDonald, Ransomware: A Growing Menace, Symantec Corp. (2012), at 2.


[20] Eric Jardine, A Continuum of Internet-Based Crime: How the Effectiveness of Cybersecurity Policies Varies across Cybercrime Types, ResearchGate, 10 (Jan. 2016), reprinted in Research Handbook on Digital Transformations 421 (F. Xavier Olleros & Majinda Zhegu eds., 2016).


[21] See Kharraz, supra note 6, at 2.


[22] See, e.g., William Largent, Ransomware: Past, Present, and Future, Talos Blog (Apr. 11, 2016, 9:01 AM) (last visited Feb. 6, 2017).


[23] See Ian T. Ramsey & Edward A. Morse, Cyberspace Law Comm. Winter Working Grp., Ransoming Data: Technological and Legal Implications of Payments for Data Privacy 4–5 (Jan. 29-30, 2016) (unpublished manuscript) (on file with author).


[24] Pollack, supra note 14, at 7.


[25] See Largent, supra note 22.


[26] See O’Gorman & McDonald, supra note 19, at 2.


[27] See, e.g., Largent, supra note 22.


[28] See id.


[29] Doug Pollack, Trading in Fear: The Anatomy of Ransomware, id experts (May 2, 2016).


[30] Adam Alessandrini, Ransomware Hostage Rescue Manual 2 (2015).


[31] Considerations associated with quantum computing and decryption are outside the purview of this paper.


[32] Ramsey & Morse, supra note 23, at 5.


[33] Chris Ensey, Ransomware Has Evolved, And Its Name Is Doxware, DarkReading (Jan. 4, 2017, 07:30 AM) (noting also that this would be one way of getting back access to at least some of the hostage files).


[34] Technical Intricacies of Ransomware and Safeguarding Strategies, Fall 2016 E-Newsletter (Digital Mountain, Santa Clara, Cal.), 2016, at 1.


[35] See Largent, supra note 22.


[36] Id.


[37] See id.


[38] See U.S. Dep’t of Justice, Protecting Your Networks from Ransomware 2.


[39] See Largent, supra note 22, at 1.


[40] See O’Gorman & McDonald, supra note 19, at 4.


[41] See Practical Steps to Thwart Ransomware and other Cyberbreaches, YourABA (Dec. 2016).


[42] See Largent, supra note 22.


[43] Id.


[44] See O’Gorman & McDonald, supra note 19, at 4.


[45] Fed. Bureau of Investigation, Ransomware.


[46] Deepen Desai, Malvertising, Exploit Kits, ClickFraud & Ransomware: A Thriving Underground Economy, ZScaler (Apr. 21, 2015).


[47] See Largent, supra note 22.


[48] See Ransomware on the Rise: Norton Tips on How to Prevent Getting Infected, Norton by Symantec.


[49] See Fed. Bureau of Investigation, supra note 45.


[50] Ramsey & Morse, supra note 23, at 5.


[51] See Azad Ali et al., supra note 16, at 62.


[52] O’Gorman & McDonald, supra note 19, at 5.


[53] See Haley S. Edwards, A Devastating Type of Hack Is Costing People Big Money, Time (Apr. 21, 2016).


[54] O’Gorman & McDonald, supra note 19, at 2.


[55] See Ali et al., supra note 16, at 61–62.


[56] Edwards, supra note 53.


[57] Tom Spring, Dirt Cheap Stampado Ransomware Sells on Dark Web for $39, ThreatPost (July 14, 2016, 12:35 PM).


[58] Largent, supra note 22.


[59] Pollack, supra note 14, at 5.


[60] Ricci Dipshan, Danger Ahead: 3 New Ransomware Developments in 2016; From Hybrid Ransomware to Attacks on Mobile Devices and New Entrants in the Field, Experts Warn of a Difficult Year Ahead, Law Tech. News (May 31, 2016).


[61] Edwards, supra note 53.


[62] Spring, supra note 57.


[63] See, e.g., Antigone Peyton, A Litigator’s Guide to the Internet of Things, 22 Rich. J. L. & Tech. 9, ¶ 1 (2016).


[64] See van der Meulen, supra note 8, at 45.


[65] Dipshan, supra note 60.


[66] See Scott & Spaniel, supra note 2, at 4.


[67] Spring, supra note 57.

[68] Scott & Spaniel, supra note 2, at 3.

[69] See id. at 4.

[70] See Jon Neiditz, Ransomware in Society and Practice, Practising Law Inst. 39, 41.

[71] Id.


[72] Id.

[73] Ben Rossen, Ransomware – A Closer Look, Fed. Trade Comm’n (Nov. 10, 2016, 11:05 AM).

[74] Kharraz, supra note 6, at 2.

[75] Dipshan, supra note 60.

[76] Thompson Information Services, Malware Attack Causes System Shutdown at Medstar, 15 No. 4 Guide Med. Privacy & HIPAA Newsl. 2, at 1 (May 2016) [hereinafter Malware Attack].

[77] Rossen, supra note 73.


[78] Edwards, supra note 53.


[79] Spring, supra note 57.


[80] Largent, supra note 22.


[81] See Technical Intricacies of Ransomware and Safeguarding Strategies, supra note 34.


[82] Pollack, supra note 14, at 14.


[83] See Brian Krebs, CryptoLocker Crew Ratchets Up the Ransom, Krebs on Security (Nov. 6, 2013, 12:13 AM).


[84] Jardine, supra note 20, at 10.


[85] O’Gorman & McDonald, supra note 19, at 2.


[86] Pollack, supra note 14, at 5.


[87] Edwards, supra note 53.


[88] See Ali et al., supra note 16, at 64.


[89] See Sentinel One, Ransomware Is Here: What You Can Do About It? 2.


[90] Pollack, supra note 14, at 5.


[91] Id.


[92] Wolff, supra note 12.


[93] Neiditz, supra note 70, at 7 (citing Danny Palmer, Ransomware Is Now the Biggest Cybersecurity Threat, ZDNet (May 6, 2016)).


[94] Id. at 9.


[95] Merrion, supra note 4.


[96] Id.


[97] Largent, supra note 22.


[98] Wolff, supra note 12.


[99] See Sean Sposito, PayPal, Others Buy Stolen Data from Criminals to Protect Users, San Francisco Chron. (Jan. 8, 2016).


[100] Daniel Crothers, Cybersecurity for Lawyers – Part IV: Is Payment of Ransom in Your Budget?, 63 The Gavel 24, 24 (2016).


[101] Pollack, supra note 14, at 11 (quoting unnamed consultant “D”).


[102] See Mathew J. Schwartz, Please Don’t Pay Ransoms, FBI Urges, Data Breach Today (May 4, 2016).


[103] See Practical Steps to Thwart Ransomware and Other Cyberbreaches, supra note 41.


[104] See Neiditz, supra note 70, at 41.


[105] See id.


[106] Jardine, supra note 20, at 10-11.


[107] Doug Pollack, Ransomware 101: What to Do When Your Data is Held Hostage, 5 (2016) (ebook).


[108] See id.


[109] Id.


[110] See Fact Sheet: Ransomware and HIPAA, Dep’t of Health & Hum. Servs. (last visited Feb. 8, 2017).


[111] Paul Merrion, HHS Clarifies When Ransomware Attacks Trigger HIPAA Notification, CQ Roll Call, July 13, 2016, 2016 WL 3709987 [hereinafter HHS Clarifies].


[112] Jocelyn Samuels, Your Money or Your PHI: New Guidance on Ransomware, OpenHealth News (July 11, 2016).


[113] John Neiditz & David Cox, Beyond Breaches: Growing Issues in Information Security, Integro (2016).


[114] HHS Clarifies, supra note 111.


[115] See generally Ali et al., supra note 16, at 66 (describing the authors’ personal experience with ransomware mechanisms).


[116] See id.


[117] See Adam Alessandrini, Ransomware Hostage Rescue Manual, KnowBe4 (2015), at 8.


[118] See id. at 7.


[119] Spring, supra note 57.

[120] See Crothers, supra note 100, at 24.


[121] See Scott & Spaniel, supra note 2, at 3.


[122] See Ondrej Kehel, Ransomware: To Pay or Not To Pay, LexisNexis (Aug. 16, 2016).


[123] See Ali et al., supra note 16, at 64.


[124] See Jardine, supra note 20, at 10.


[125] Scott & Spaniel, supra note 2, at 4.


[126] Id. at 5.


[127] Michael Sutton, Big Business Ransomware: A Lucrative Market in the Underground Economy, DarkReading (July 1, 2016, 11:20 AM).


[128] Maria Korolov, Will Your Backups Protect You Against Ransomware?, CSO (May 31, 2016).


[129] Doug Pollack, How Ransomware Could Hold Your Business Hostage, id experts (Apr. 29, 2016).


[130] See Edwards, supra note 53.


[131] Nicole van der Meulen et al., Cybersecurity in the European Union and Beyond: Exploring the Threats and Policy Responses, European Parliament, at 35 (2015) (citing Richard Pinson, Computer Threat: Cryptolocker Virus Is Ransomware, Nashville Business Journal, Aug. 10, 2015 (last visited Oct. 12, 2015)).


[132] Sutton, supra note 127.


[133] O’Gorman & McDonald, supra note 19, at 4.


[134] See id.


[135] Ben Rossen, How to Defend Against Ransomware, Fed. Trade Comm’n (Nov. 10, 2016).


[136] Scott & Spaniel, supra note 2, at 5.


[137] Malware Attack, supra note 76, at 1.


[138] Edwards, supra note 53.


[139] Rossen, supra note 73.


[140] Scott & Spaniel, supra note 2, at 5.


[141] Merrion, supra note 4.


[142] See Ali et al., supra note 16, at 63.


[143] See Simon Barber, Xavier Boyen, Elaine Shi & Ersin Uzun, Bitter to Better—How to Make Bitcoin a Better Currency, in International Conference on Financial Cryptography and Data Security 399–414 (Springer 2012); see also Who Is Satoshi Nakamoto?, CoinDesk (Feb. 19, 2016).


[144] See generally Andy Greenberg, Follow The Bitcoins: How We Got Busted Buying Drugs On Silk Road’s Black Market, Forbes (Sept. 5, 2013) (explaining why Bitcoin is used for underground transactions).


[145] Jamie Doward, City Banks Plan to Hoard Bitcoins to Help Them Pay Cyber Ransoms, The Guardian (Oct. 22, 2016).


[146] See Robert McLean, Hospital Pays Bitcoin Ransom After Malware Attack, CNN (Feb. 17, 2016).


[147] Doug Pollack, Tradable, Untraceable, Sometimes Unavoidable: The Business of Bitcoin, id experts (June 20, 2016).


[148] See Ramsey & Morse, supra note 23, at 7.


[149] IRS Virtual Currency Guidance: Virtual Currency Is Treated as Property for U.S. Federal Tax Purposes; General Rules for Property Transactions Apply, IRS (Mar. 25, 2014).


[150] I.R.S. Notice 2014-21, at 3 (Mar. 25, 2014).


[151] See Ramsey & Morse, supra note 23, at 5.


[152] Id.


[153] Manhattan U.S. Attorney Announces Charges Against Two Florida Men for Operating an Underground Bitcoin Exchange, FBI (July 21, 2015).


[154] Id.


[155] Id.


[156] Id.


[157] See David S. Cohen, Kidnapping for Ransom: The Growing Terrorist Financing Challenge, Council on Foreign Relations (Oct. 5, 2012).


[158] Samuel Cutler, Could the Administration’s New Hostage Policy Leave Banks Vulnerable?, Sanction Law (June 24, 2015).


[159] See 18 U.S.C. § 2339B (2012).


[160] See id.


[161] Ramsey & Morse, supra note 23, at 14.


[162] See Exec. Order No. 13,694, 80 Fed. Reg. 18,077 (Apr. 1, 2015).


[163] Ramsey & Morse, supra note 23, at 14 (quoting Exec. Order No. 13,694, 80 Fed. Reg. at 18,078).

[164] See id.


[165] See Cutler, supra note 158; see also Statement by the President on the U.S. Government’s Hostage Policy Review, The White House Office of the Press Secretary (June 24, 2015) (“[T]he United States government will not make concessions, such as paying ransom, to terrorist groups holding American hostages…. At the same time, we are clarifying that our policy does not prevent communication with hostage-takers – by our government, the families of hostages, or third parties who help these families”).


[166] See Statement by the President on the U.S. Government’s Hostage Policy Review, supra note 165.


[167] See, e.g., Manhattan U.S. Attorney Announces Charges Against Two Florida Men for Operating an Underground Bitcoin Exchange, supra note 153.


[168] Incidents of Ransomware on the Rise: Protect Yourself and Your Organization, FBI (Apr. 29, 2016) (citing Federal Bureau of Investigation Cyber Division Assistant Director James Trainor).


[169] See OFAC FAQs: Sanctions Compliance, U.S. Dep’t of the Treasury (last visited Mar. 31, 2017).


[170] See Jardine, supra note 20, at 11.


[171] Sposito, supra note 99.


[172] See United States v. Kozeny, 582 F. Supp. 2d 535, 540 (S.D.N.Y. 2008).


[173] Ramsey & Morse, supra note 23, at 19 (emphasis added).


[174] See Korolov, supra note 128.


[175] See, e.g., Ivan Hemmans & David G. Ries, Cybersecurity: Ethically Protecting Your Confidential Data in a Breach-A-Day World, at slides 18–21 (Apr. 27, 2016) (PowerPoint).


[176] Comment on Rule 1.1, American Bar Association: The Center for Professional Responsibility (last visited Feb. 12, 2017).


[177] FBI Internet Crime Complaints, Florida Atlantic University (last visited Feb. 12, 2017); see also Incidents of Ransomware on the Rise: Protect Yourself and Your Organization, supra note 168.


[178] Rossen, supra note 73.


[179] See 15 U.S.C. § 6803; see also Ransomware – Legal Liability and Enforcement, Fall 2016 E-Newsletter (Digital Mountain, Santa Clara, Cal.), Oct. 24, 2016.


[180] Ransomware – Legal Liability and Enforcement, supra note 179.


[181] Fact Sheet: Ransomware and HIPAA, supra note 110.


[182] Malware Attack, supra note 76 (quoting John Parmigiani, HIPAA consultant and editorial advisory board member).


[183] See, e.g., United States v. Haymond, No. 08-CR-201-TCK, 2016 WL 4094886, at *2 (N.D. Okla. Aug. 2, 2016).


[184] See Virginia College Savings Plan v. Zhouda, 2016 WL 5920046 (UDRP-ARB Dec), at *2–3 (Lowry, Arb.).


[185] See generally Ed Silverstein, Law Firm Among the Latest Victims of Ransomware Attack, Law Technology News (Mar. 11, 2015) (detailing a law firm’s recent ransomware attack).


[186] See Univ. of Montreal Pension Plan v. Bank of Am. Sec., LLC, 685 F. Supp. 2d 456, 462 (S.D.N.Y. 2010).


[187] See Korolov, supra note 128.


[188] James A. Sherer, Taylor M. Hoffman & Eugenio E. Ortiz, Merger and Acquisition Due Diligence: A Proposed Framework to Incorporate Data Privacy, Information Security, e-Discovery, and Information Governance into Due Diligence Practices, 21 Rich. J.L. & Tech. 5, ¶ 36 (2015).


[189] This is often a mandatory “exception” in many Records and Information Management and Information Governance policies. See Vicki Miller Luoma, Computer Forensics and Electronic Discovery: The New Management Challenge, 25 Computers & Security 91, 96 (2006) (When creating an “electronic document retention and deletion policy . . . [a]ny such policy must retain the flexibility to implement litigation holds by suspending routine document deletion” in the face of a reasonable anticipation of litigation).


[190] See Crothers, supra note 100.


[191] Ransomware: Past, Present, and Future, supra note 22.


[192] See id.


[193] Spring, supra note 57.


[194] See Ben Dickson, What Makes IoT Ransomware a Different and More Dangerous Threat?, TechCrunch (Oct. 2, 2016).


[195] Korolov, supra note 128.


[196] Sutton, supra note 127.


[197] Id.


[198] See Brian Buntz, The 10 Most Vulnerable IoT Security Targets, Internet of Things Institute (July 27, 2016).


[199] Korolov, supra note 128.


[200] See generally Speed (Twentieth Century Fox Film Corp. 1994) (a film in which a police officer must drive a bus above 50 miles per hour in order to prevent a bomb from exploding on the bus).


[201] Raynham Remains Offline in Computer Virus Mystery, Wicked Local (Mar. 11, 2016, 5:30 PM) (quoting Brian Contos, ICIT Fellow and VP & Chief Sec. Strategist at Securonix).


[202] See Wolff, supra note 12.


[203] See Sposito, supra note 99.


[204] See Cutler, supra note 158.


[205] See Practical Steps to Thwart Ransomware and Other Cyberbreaches, supra note 41.

Protected Genetics: A Case for Property and Privacy Interests in One’s Own Genetic Material


Cite as: Madison Jennings, Protected Genetics: A Case for Property and Privacy Interests in One’s Own Genetic Material, 23 Rich. J.L. & Tech. 10 (2016).


By: Madison Jennings*

I.  Henrietta And Her Cells


[1]       In 1951, a young black woman named Henrietta Lacks entered Johns Hopkins Hospital, having been diagnosed with cervical cancer.[1] There, a biopsy of her cancerous tissue was taken without her knowledge or consent.[2] The biological human tissue sample produced from that biopsy procedure[3] would ultimately become more celebrated and influential than anyone present at that extraction might have dared to imagine.[4]

[2]       In her 2010 book, author Rebecca Skloot recounts this story of how a small cluster of cells scraped from the cervix of this impoverished woman from rural Virginia—a woman who grew to adulthood on the land her ancestors had once worked as slaves—became the cornerstone of millions, if not billions, of dollars’ worth of scientific research.[5] Looking back at the second half of the twentieth century, it would be an extraordinary challenge to find a discovery, innovation, or breakthrough involving human biology that did not, at some point, rely on these cells.[6]

A.  The Cells

[3]       HeLa cells, aptly named after the woman from whom they derived, were developed into the world’s first line of immortal human cells.[7] Immortal cells are cells that can reproduce continuously without degrading or dying out.[8] Typical human cells have a reproductive lifespan, just as human beings do, limiting the timeframe in which they can replicate themselves. Eventually, the copies that cells make of themselves begin to degrade or become contaminated by bacteria or other microorganisms, producing corrupted replicas until the line becomes incapable of cellular reproduction and dies out.[9] Immortal cells are different. An immortal cell line reproduces indefinitely and constantly—almost obsessively—never dying out entirely.[10]

[4]       Henrietta’s cancer cells did just that, duplicating themselves at an impressive rate and continuing to do so indefinitely, unless frozen.[11] Her cells were the first to be capable of such a feat.[12]

[5]       Before Henrietta Lacks, the idea of an immortal line of human cells was nothing more than wishful thinking—a pipe dream of the scientific community, the stuff of science fiction.

[6]       Her cells were unique and represented a major breakthrough for scientific research. For years, researchers had been attempting to grow human cells in culture, largely without success.[13] Using the same techniques and the same procedures they had been employing unsuccessfully, researchers expected the same results—eventual death of the cells.[14] Henrietta’s normal cells performed as anticipated, dying just a few days after being put into culture.[15] Her cancer, however, grew at an indefatigable rate.[16] The very cancer that killed Henrietta would, inexplicably, lead to her immortality, and when it became clear to those with access to those cells just what they had in their possession—the first ever line of immortal human cells—little time was wasted in announcing the breakthrough to the world.[17] HeLa cells made their debut on national television, a vial of them held out for the world to see—a victory for science and for mankind, heralding a new age of medicine and discovery.[18]

[7]       At the same time, Henrietta lay prostrate in a hospital bed at Johns Hopkins, succumbing to the same cancer contained in that vial.[19] After she passed away, she was “buried in an unmarked grave.”[20]

[8]       For most of the HeLa cells’ history, they were not connected to Henrietta, the person, in any meaningful way.[21] A chance mention of her name by a professor in a community college class inspired a teenager named Rebecca Skloot to embark on a years-long journey to remedy that—looking beyond the cells themselves, to the life that had produced them.[22] Skloot sought to know and to make known the woman whose cancerous misfortune led to such astonishing and important things as the polio vaccine and chemotherapy.[23] Skloot succeeded in that endeavor when in 2010, twenty-two years after first hearing Henrietta’s name, she published her biography of Henrietta, Henrietta’s family, and the HeLa legacy.[24]

[9]       The Immortal Life of Henrietta Lacks catapulted Henrietta, her cells, and her family into the national spotlight. It spent seventy-five weeks on the New York Times bestseller list,[25] became required reading at educational institutions across the country,[26] and in April 2017 HBO premiered a film version starring Oprah Winfrey.[27]

[10]     Henrietta’s story has captured the imagination of almost everyone exposed to it. However, reactions to her story vary—from awe at all that arose from such seemingly unremarkable circumstances, to gratitude for all that her cells have made possible, to indignation and outrage on her behalf.[28] For many, the harsh reality that Henrietta died impoverished and in pain, her contributions unknown, while so many strangers benefited from the products of her body—taken without her knowledge and without her consent—is difficult to accept.

B.  Henrietta Lacks, The Woman

[11]     Henrietta was born as Loretta Pleasant in Roanoke, Virginia in 1920.[29] It is unclear why or when she came to be called Henrietta.[30] She was one of ten siblings, and following her mother’s death in 1924, her father moved the entire family to Clover, Virginia, where the siblings were divided amongst relatives to be cared for.[31] There, Henrietta shared a cabin with her grandfather and cousin.[32]

[12]     Henrietta later married that cousin, David Lacks, in 1941.[33] The couple already had two children.[34] After marrying, they moved to Baltimore, Maryland.[35] It was there, after giving birth to their fifth child, that Henrietta sought medical attention for vaginal pain and bleeding.[36] At that time, Johns Hopkins was the only hospital in the area that treated black patients, particularly poor ones like Henrietta who could not afford medical care.[37]

[13]     In many ways, the intersection of two major themes of Henrietta’s life—poverty and race—created the circumstances that allowed her cells to be harvested and commercialized. It is worth questioning whether an affluent white woman would have had the same experiences as Henrietta, or been taken advantage of quite so easily.[38]

[14]     In that era, many physicians and researchers believed that poor patients who received reduced or no-cost medical care were freely available for testing—consensual or otherwise—almost as a form of payment.[39] In general, very few people felt that it was morally necessary to gain a patient’s permission before obtaining, storing, or analyzing any tissue sample.[40] It is extremely unlikely that anyone would have thought of informing someone like Henrietta of what had been done to her as so much as a common courtesy, let alone a prerequisite to the maintenance of her human rights.[41]

[15]     This is no longer the way of the world. Today, it would be an appalling violation of ethical and legal standards for a physician to perform a biopsy without the informed consent of his patient.[42] One might hope that modern standards would extend beyond the biopsy itself to the use of tissue samples—that modern legal, social, and moral standards would mandate a different result. It might be expected that, in today’s world, Henrietta would have had the right to decide for herself, and that she would have been legally entitled to choose whether her cells were used for research. It is uncertain whether she would have.

[16]     Despite these changes in expectations over a person’s right to full control over their body, it is possible that in today’s world, not much about Henrietta’s story would turn out differently. Granted, the initial biopsy would not have been undertaken without her knowledge or consent.[43] However, there is little reason to believe that once a sample was taken, she would have had any control over what happened to it.[44] In fact, the evidence suggests otherwise: that she, or any other person, would have had very little control at that point.[45]

II.  Biobanks

[17]     Today, biopsies are regularly performed medical procedures,[46] and although Henrietta never had the opportunity to consent to hers, it is fair to speculate that her modern-day counterpart would consent without a second thought.[47] Biopsies are a routine part of cancer treatment, used to diagnose, assess, and provide individualized care.[48] The biopsy itself does not present a challenge. The challenge lies in what is done, and what ought to be done, with leftover human tissue that is no longer needed for the purpose for which it was originally taken.

[18]     The following section discusses what becomes of our biological leftovers, and whether any individual should have the right to decide for themselves whether their tissue is saved or discarded.

[19]     Every day, individuals across the country and around the world consent to a variety of medical tests and procedures, many of which require the extraction of their body tissue.[49] These tests range from the commonplace (drawing blood at an annual physical) to the unexpected (an emergency appendectomy).[50] Very few of these individuals will wonder what happens to their leftover tissue: what becomes of the blood, the bone marrow, the appendix that goes unused? Understandably, many simply assume it is discarded.[51] Sometimes, it is. Often, however, it is not; rather, it is stored.[52]

[20]     Biobanks are institutions that collect and distribute biological materials—often human tissue or blood—for research purposes.[53] When researchers need human material, they peruse a catalogue and order what they need.[54] Specimens are sorted by type (blood, bone marrow, etc.) and labelled with their demographic designations (“male,” “thirty years old,” “Caucasian”).[55] The source’s name, or other “identifying” information, is not included.[56]

[21]     Biobanks are an invaluable resource for the scientific community.[57] Without them, researchers might waste valuable time, money, and resources acquiring enough specimens—of appropriate type and variety—to conduct their studies. This comment does not argue against the existence of biobanks. They are a necessary resource and should exist. Instead, this comment critically examines the methodology employed in the creation of these biobanks, arguing that the methodology must change to protect the rights of ordinary individuals whose bodily products are bought and sold without their knowledge.

[22]     Most of the human samples stored and sold by biobanks are the leftover byproducts of medical testing.[58] As described above, a person goes to the doctor and has blood work done. Once the testing has concluded, the unused blood is often sent for storage at a biobank, where it is accessible to researchers across the country—perhaps even the world.[59]

[23]     Henrietta’s story, a half-century ago, is achingly similar to this modern process. She went to a hospital, received medical care, and died, none the wiser that some small piece of her had been taken and stored for future use.[60]

[24]     Most people would hope to have control over whether their tissue is taken and stored like this,[61] or that they would at least know that their biological materials—their genetic information, something so intrinsically theirs—were being used for this purpose.

[25]     Unfortunately, that is not the case.[62] More than likely, any person alive today is no more protected in this regard than Henrietta Lacks was when she walked into Johns Hopkins.

[26]     Very few people are aware that their unused biological material is saved at all, let alone saved for the purpose of sale and distribution to scientists and researchers. Many would hope that they would be asked, or at least informed, before their samples were kept or sold.[63] Despite this, it is not common practice to inform someone when their medical waste is saved instead of being discarded, let alone request permission to do so. This comment argues that consumers and patients have the right to be informed, and the right to control what becomes of their own genetic materials.

A.  A Moore Modern Henrietta?

[27]     In 1976, a man named John Moore was diagnosed with leukemia.[64] During his treatment, copious amounts of blood and other samples were taken from his body.[65] Without his consent, some of Moore’s cells were turned into commercial cell lines—similar to Henrietta’s.[66] Although the doctor who treated him and the hospital where he was being treated profited substantially from the sale of his cells, Moore did not receive any compensation.[67]

[28]     Moore brought several claims, among them claims for breach of informed consent, breach of fiduciary duty, and conversion.[68] The Supreme Court of California addressed the merits of the conversion claim, finding that Moore did not have a sufficient property interest in his cells to sustain it.[69]

[29]     The story of John Moore eerily echoes that of Henrietta Lacks. Both should be taken as cautionary tales, and as clear examples of why there exists a need for extensive protections for the rights of individuals to have control over their own genetic information and materials.

B.  Proposed Protections

[30]     Protections of this kind are generally conceived under one of two already-existing legal frameworks: privacy or property.[70] Property regimes orient around the right to patent, commercialize, or otherwise control genetic information or genetic materials themselves,[71] while privacy regimes focus on disclosure or dissemination of genetic information found in human tissue samples.[72] Scholarship on the matter tends to pit these frameworks against one another,[73] asking the question of whether a privacy right or a property interest best protects individuals against the sort of infringement and violation suffered by Henrietta Lacks.[74]

[31]     Proposed here is not solely a property or a privacy regime, but rather an attempt to weave the two types of rights together in an effort to comprehensively protect a right that most Americans believe ought to exist.

[32]     In what ways might a modern Henrietta be protected from a transgressive, trespassory use of her body, her cellular being, and her very DNA? This comment seeks to use existing legal structures and the promulgation of newly recognized rights to create a framework through which a person in Henrietta’s situation would not only have their rights vindicated, but would have rights to assert in the first place.

[33]     The law is lagging, falling woefully short of protecting the rights of individuals when it comes to their DNA, their genetic materials, and their genetic information. The next section briefly explores current law at the federal level, noting its shortcomings and inadequacies, to showcase the need for new law. Then, a sampling of state legislation is discussed, with particular focus on those states that have created a statutorily designated property interest in genetic information. The designation of a property interest in genetic information ultimately forms the backbone of my proposed legislation, with a supplementary privacy right encompassed within it.

III.  Current Federal Law

[34]     Federal protections for the genetic information of individuals as a privacy right are found mainly in the Genetic Information Nondiscrimination Act (“GINA”), which prohibits genetic discrimination in the health insurance and employment contexts.[75] Under GINA, health insurance companies may not deny benefits to anyone because of any genetic predisposition they may have to certain illnesses or afflictions.[76] Similarly, it is against the law for employers to use genetic testing to determine any aspect of a person’s employment.[77]

[35]     Notably, the focus of GINA (and of many other statutes designed to protect individuals in this realm) is the prevention of discrimination based on an individual’s genetic information.[78] This is not the focus here—Henrietta was not discriminated against because of anything found in her genes. While admirable, protection against genetic discrimination does not solve the problem found in Henrietta’s story.

[36]     In the field of medical and scientific research, individual protections reach no further than the Common Rule.[79] The Common Rule regulates federally-funded research whenever that research uses human beings as subjects.[80] The Common Rule requires informed consent—a concept taken from doctor-patient interactions and requirements—as its strongest protection for otherwise-vulnerable subjects.[81] Consent is only informed, and therefore valid, when it is given after a potential subject is made aware of all information relevant to her decision to participate (or not) in any given study.[82] Consent is not informed if, for instance, potential side effects are not disclosed beforehand.[83]

[37]     The Common Rule expands on the principle of informed consent, articulating the specific disclosures required for the use of human test subjects.[84] Subjects must be told that their consent can be withdrawn at any time; that agreement to participate at the onset of a study never requires someone to continue their participation if, at any time, they wish to stop.[85] The Common Rule also requires certain findings of ongoing studies to be disclosed to the subjects of those studies, if preliminary findings might affect a person’s willingness to continue to participate.[86]

[38]     The U.S. Food and Drug Administration imposes similar standards on the studies it reviews,[87] effectively extending the Common Rule beyond those studies that are federally-funded.[88]

[39]     This is the extent to which human research is governed at the federal level, and while the Common Rule provides extensive protections to human beings engaged in scientific studies, it does not extend to research using human tissue.[89] Under guidance issued by the federal Office of Human Research Protections in 2004, tissue samples collected for present or future research are not covered by the consent provisions of the Common Rule, as long as those samples are without personally identifying information.[90] If a sample is not linked to an individual, then it is not protected by federal regulation.[91]

[40]     The existence of the Common Rule during Henrietta’s lifetime would not have forestalled the events that culminated in the world’s first immortal cell line. The story of Henrietta Lacks is a helpful rubric against which the legislation proposed by this comment is graded. In what ways could federal law protect a modern Henrietta?


IV.  Current State Law


[41]     Without federal protection, the onus of protecting the rights of individuals in their genetic material has fallen to the states. Many states have genetic privacy laws requiring informed consent to disclose genetic information,[92] but just eight states require that same consent to retain that same information.[93] Only five states recognize a personal property interest in genetic information for the individual to whom that information pertains.[94] This section first addresses these different state-level property regimes, assessing their strengths and weaknesses and using them to build the foundation for a federal rule recognizing a similar right. From there, I take a broader look at state-level privacy regimes to consider how the right of privacy might be expanded beyond the realm of discrimination to strengthen my proposed protections.

[42]     Of the states that recognize some sort of property interest related to genetic data, three states—Colorado,[95] Georgia,[96] and Louisiana[97]—recognize the interest as inhering only in the genetic information and not in the genetic samples themselves.[98] These statutes provide civil remedies for violations (i.e., the unauthorized disclosure of genetic information), but those protections extend only to instances of discrimination in the health insurance context.[99] As currently written and enforced, these state statutes provide no more protection than current federal regulation, and so do not solve the problem raised by the story of Henrietta Lacks. Statutes that do not reach beyond employment and insurance discrimination and into the realm of research conducted using human tissue samples would not have helped Henrietta.

[43]     From the remaining states that recognize a property interest in genetic information, we can learn several things. First, the most comprehensive state system currently enacted shows just how far legislation needs to go to truly protect the interests of individuals in this context. Second, another state offers a cautionary tale: it is not enough for statutory language to be broad enough that it could encompass research. Statutes must specifically address the use of human tissue in research, explicitly subjecting researchers to the same standards imposed upon physicians and others when it comes to the use and misuse of someone’s genetic material. Finally, we will briefly confront a common policy argument against the promulgation of the rights suggested in this comment.

A.  The Model Case

[44]     Of the states that recognize a property interest in genetic data, just one explicitly identifies a physical genetic sample in and of itself as the personal property of the individual from whom the sample is derived—Alaska.[100]

[45]     The Alaska statute provides that a DNA sample and the results of any analysis of that sample are the “exclusive property” of the individual sampled.[101] The collection, analysis, or retention of a DNA sample without the informed consent of that individual is a violation of Alaska law, as is the intentional disclosure of any such analysis without the requisite consent.[102] While there are exemptions to this standard,[103] Alaska has the most comprehensive protection regime for individuals’ rights over their own genetic material.

[46]     Creating these rights is one thing; enforcing them is another. To that end, Alaska created both a private cause of action[104] and a criminal penalty, enforceable against those who collect, analyze, retain, or disclose genetic information in violation of the statute.[105] If a violation results in profit or monetary gain for the violator, he may be civilly liable for up to $100,000.[106]

[47]     Had Henrietta’s cells been taken, tested, and commercialized without her knowledge in modern day Alaska, she could have recovered hundreds of thousands of dollars from those who profited from the extensive research conducted using her cells. She may not have died impoverished, when so many profited from her cells. She may not have gone unacknowledged for decades after. She might have had a headstone.[107]

B.  A Cautionary Tale

[48]     Florida is the fifth and final state recognizing a property interest in genetic information.[108] Like Alaska, Florida imposes a criminal penalty for violations of these protections.[109]

[49]     Under Florida law, the challenges arise not from the inadequacy of the legislation, but from courts’ narrow interpretations of it–interpretations that restrict its scope and render it ineffective at protecting individuals in the context of scientific research. Florida’s law is broad enough to attempt the desired protections, but it still fails the public: statutory language must be not only broad enough to reach research, but also specific enough that it cannot be interpreted otherwise.

[50]     The Florida legislature approaches genetic information as a civil rights issue, protecting its citizens from discrimination in areas such as “insurance, employment, mortgage, loan, credit, or educational opportunity”[110] based on their genetics. It is the specificity of this objective that allows courts to interpret the statute as narrowly as possible.

[51]     As a result, despite the seemingly enthusiastic protection provided by the Florida statute, these rights are, in practice, nearly unenforceable when violated for the purpose of scientific research.

[52]     Use in scientific research is not one of the several exceptions[111] built into the Florida statute for certain uses of genetic information. A literal reading might lead to the belief that individuals are protected against unauthorized use of their genetic information in that context. Courts have not agreed with this interpretation.[112]

[53]     In 2003, a federal district court for the Southern District of Florida held that protections offered to individuals regarding their genetic information did not extend to the realm of scientific research.[113] For the court, informed consent principles apply only in the context of patient-doctor relationships, and do not extend to the researcher-subject relationship.[114]

[54]     The Greenberg case addressed a dispute arising from the patenting of a gene sequence[115] discovered through research conducted using tissue samples from children born with Canavan[116] disease.[117] Plaintiffs were the parents of those children.[118] They claimed that the eventual patenting and commercialization of the research product–made possible by their children’s genetic information–exceeded the scope of what they had consented to.[119] Plaintiffs argued that because the researchers’ economic interest had not been revealed to them at the outset, the patenting of the genetic sequence amounted to unlawful conversion of plaintiffs’ property, and any money made subsequent to that patent was unjust enrichment.[120]

[55]     Despite the statutory language regarding genetic information being broad enough to encompass this circumstance,[121] and despite the designation of a property interest in genetic information,[122] the court ultimately declined to find a property right for the Greenberg plaintiffs.[123] Ultimately, their suit was dismissed.[124]

[56]     The court in Greenberg failed to cite statutory language supporting its decision, instead leaning heavily on policy arguments.[125] The court reasoned that the chain from the physical samples, to the information in those samples, to the research conducted using that information, to the results of that research, to the ultimate commercialization of those results was too attenuated to fall within the intended scope of the statute.[126] This argument is not entirely without merit, but it does not fully justify the decision.

[57]     To supplement this justification, the court raised a concern commonly invoked whenever a restriction on research is proposed—that recognizing this sort of right would too heavily burden research, resulting in a negative impact to society as a whole.[127] The court goes so far as to claim that permitting plaintiffs to bring a cause of action for conversion would “cripple” medical research.[128]

[58]     This is a common policy argument made against the sorts of rights and protections proposed by the plaintiffs in Greenberg, in this comment, and elsewhere. This argument weighs the good done by scientific research against the infringement of the natural rights of any one person, deciding that the good of society must outweigh the rights of any individual person.[129]

[59]     This sort of values judgment can certainly be appealing. But in an ethical context, an argument that pits the ease of research against the personal rights and liberties of individual people relies unreasonably on the specter of a negative outcome that is far from certain. A requirement to acquire informed consent before conducting research on any one person’s genetic materials would, it is true, hinder research—but so did requiring informed consent before conducting experiments on human beings;[130] so did the abolition of slavery, when research could no longer be conducted on unwilling human chattel.[131] Research will persist regardless.

C.  States Without a Property Interest

[60]     State genetic privacy statutes are somewhat more common than statutes identifying a personal property right in genetic information. However, of the twenty-seven states that require consent for the dissemination of an individual’s genetic information, only twelve require that same consent for the performance of a genetic test, and even fewer require consent to obtain, access, or retain genetic information.[132] This inconsistency speaks to the need for federal regulation to standardize the rights of all Americans in the realm of genetic information.

[61]     Of all the states, only two (Alaska and New Mexico) require consent for performing a genetic test; obtaining, accessing, or retaining genetic information; and disseminating that information.[133] New Mexico provides a civil remedy for those whose genetic information has been acquired or used in violation of the statute, although the damages are restricted to actual damages plus $5,000[134]—a relatively small sum.

[62]     In any state other than Alaska, a modern day Henrietta would be unable to vindicate her rights, as she would likely have no rights to vindicate. Her cells were made anonymous, and no information gleaned from them was used to discriminate against her in any way. As the cells were studied and distributed, information gleaned from them was not linked to Henrietta or to the Lacks family. Most information gleaned from the cells had nothing to do with Henrietta at all—the value of the cells lay in their ability to reproduce and serve as test subjects,[135] not in any secrets hidden in the strands of her DNA.

[63]     Federal recognition of a property interest in one’s own genetic information and material, extending fully into the realm of research, is necessary to prevent injustice. A property regime gives individuals the legal structure necessary to truly exercise control over their own genetic material.

V.  Theories of Property and Privacy

[64]     The Alaskan structure for protecting individual rights in the realm of genetic information is the most comprehensive of any state, recognizing both a property interest in one’s own genetic information and a privacy protection against the unwarranted obtainment and disclosure of that same information.[136]

A.  Property

[65]     At the most fundamental level, to own something as one’s property is to have complete dominion and control over that thing.[137] In the context of one’s own body and body products, there is a natural inclination to want that sort of control. Yet many people may feel some degree of discomfort with the idea that human bodies can be property in the way that a house or a car is. This may be because there is an implicit understanding that if something is property, it is therefore alienable.[138] Property, as we understand it, has economic value.[139] It can be bought, and it can be sold.[140]

[66]     The idea that a human body, or any part of it, can be bought or sold is an uncomfortable one, and for good reason.[141] Moving beyond that initial reaction, however, allows us to view property regimes with a more open mind.

[67]     Strong public policy working against alienation of a particular type of property can ultimately counteract the alienability of that property.[142] This theory of property is underutilized in American jurisprudence, largely because of the belief that free alienation of property best serves the interests of society as a whole.[143] Public policy is therefore rarely interpreted as favoring any restriction on alienability. In the instance of human bodies, an exception should be made.

[68]     Human tissue samples hold immense economic value.[144] We live in a world where biological samples and genetic data are collected, aggregated, analyzed, and commercialized.[145] It is disingenuous to pretend otherwise, and placing an arbitrary restriction solely on individuals seeking to commercialize their own biological materials serves only to remove them from the market without impacting the existence or the robustness of that market.[146] This leaves donors of genetic material vulnerable, as they alone are unable to profit from something that is, in all conventional senses, very much “theirs.”[147]

[69]     If the goal is to give individuals autonomy over their own genetic information and material, a property interest feels almost essential. Property doctrine is an efficient device for allowing individuals to express and enforce preferences over who may and may not access what information.[148]

[70]     Without a property interest, Henrietta had no right to any of the profits resulting from the development and commercialization of her cell line. She remained poor, and her family still wondered: “If our mother so important to science, why can’t we get health insurance?”[149]

B.  Privacy

[71]     Practically, however, a property interest is not enough; it would do little for the person whose material is stored and analyzed absent their consent but never commercialized. Why should a person whose tissue yielded something worthy of commercialization be entitled to greater recovery (or to recovery at all) than a person whose tissue yielded naught but a test subject? Each suffered an equal harm to their dignity and personal autonomy, and these are the types of harms we are seeking to prevent.

[72]     A flaw of any property regime on its own is that it emancipates the part from the whole, ignoring the incalculable value of an entire person.[150] It is impossible to quantify the indignity done to a person when her injury is reduced to the conversion of a good with an often unquantifiable economic value. The right to privacy is crucial to effectively legislating genetic information protections.

[73]     Privacy doctrine is traceable to the work of Samuel Warren and Louis Brandeis in their 1890 article, The Right to Privacy.[151] They sought to expand and redefine the scope of the protections offered by traditional property doctrine, creating a new right of privacy in the process.[152] Although the right to privacy is typically understood to be rooted in the theory of natural law,[153] any right to privacy as we currently understand it is derived from, and wholly reliant on, the fundamental right of property ownership that serves as a lynchpin of American law.[154] If “property doctrine” is a toolbox, the “right of privacy” is just one of the many tools within.[155]

[74]     Many legal scholars who have taken a hard look at the protection of genetic information have cast doubt upon the idea that privacy and property protections can peacefully co-exist to create truly comprehensive genetic protection doctrines.[156] For these individuals, privacy exists as an entirely independent right, regardless of its property law origins.[157] However, a right to privacy is, at its core, a property interest, and always has been.[158]

[75]     The right to privacy–both originally and in this context–arises from the need for an interest that cannot be monetized in the way that traditional property can.[159] By owning our bodies and body products, we gain control over how and when our genetic information and material can be used, but in treating our individual parts as separate from each other, we inevitably detach ourselves from our identities as full, entire persons—the very thing we hope to protect.[160]

[76]     If the goal here is—and it is—to preserve the dignity of the individual, then we must strive to keep the self whole, a goal best served by the right of privacy.[161]

[77]     Ultimately, if we aim to create a framework through which Henrietta’s dignity would have been preserved, and her children would have been able to benefit from the commercialization of her cells (if she had chosen to donate them), we must craft a legal structure that instills in individuals interests in both privacy and property when it comes to their genetic materials and information.

VI.  A Proposal


[78]     To protect Henrietta, and those who find themselves in the position she was in, there needs to be basic, yet comprehensive, legislation at the federal level. That legislation must accomplish three main things: (1) create a property interest in genetic information and materials for the individuals to whom that information pertains; (2) supplement the privacy rights of individuals in their genetic information; and (3) create both a civil remedy and a criminal penalty for those who infringe upon the interests that individuals have in their own genetic information and materials.

[79]     To that end, the following is a brief outline of what such legislation might look like, modeled in part on the Alaska statute discussed previously:


1.  Statement of Intent

This statute shall be interpreted as affording to individuals a property interest in their own genetic material and information, with that interest possessing all the rights typically attached to an interest in property. This statute shall be applied to all instances of research conducted on human biological material, and shall not be construed as applying only in the doctor-patient context.

2.  Definitions

(a)        “Genetic information” means both the biological human material (blood, tissue, etc.) and the results of any analysis, testing, or observation of that material.[162]

(b)       “Genetic testing” means laboratory tests of human biological material for medical or research purposes.[163]

(c)        “Researcher” means any individual who performs genetic testing on the genetic information of another.

3.  Genetic Information

(a)        Genetic information is the unique property of the individual to whom the information pertains.

(b)       A researcher may not collect genetic information from, perform genetic testing on, retain genetic testing results of, or disclose the genetic testing results of another person unless that researcher has first obtained the written, informed consent of the person, or that person’s legal guardian or authorized representative.[164]

(c)        Prohibitions of section (b) of this statute do not apply to genetic information collected or tested for law enforcement purposes, for the purpose of determining paternity, or for emergency medical treatment.

(d)       Civil Remedy. A person may bring a civil action against a researcher who collects, tests, retains, or discloses his genetic information in violation of (b) of this section. In addition to actual damages, a researcher violating this section will be liable for damages in the amount of $10,000. If the violation resulted in monetary gain for the violator, he will be liable for damages in the amount of $200,000.[165]

(e)        Criminal Penalty. An individual has committed the crime of unlawful genetic information collection, testing, retention, or disclosure when he collects, tests, retains, or discloses the genetic information of another in violation of (b) of this section. A person who has committed the crime of unlawful collection, testing, retention, or disclosure of genetic information is guilty of an infraction, punishable by a fine of no less than $1,000 and no more than $100,000.[166]


[80]     Statutory language may not be enough. As we learned from the Florida example, broad language can be interpreted narrowly. This proposal seeks to be specific enough to avoid that scenario, while remaining generally applicable enough to provide adequate coverage. Frustratingly, it is not even certain that a statute such as this would have helped Henrietta maintain control over her biological tissue.

[81]     Had things not unfolded as they did—Henrietta’s biopsy done without her knowledge, her cells kept with her none the wiser, and her name lost to the annals of history until an industrious young writer took the time to dig her up—she may still not have had the wherewithal to vindicate her rights, had they existed. How can a person seek relief for damages they are unaware have been done to them?

[82]     That analysis ignores a crucial component of any modern statute—modern society. Societal values, ideas, and sensibilities have changed and evolved in the years since Henrietta first walked into Johns Hopkins complaining of a pain in her abdomen. This statute, or one like it, may not have saved the real Henrietta from the injustice done to her, but it could very well prevent the same from happening to a modern Henrietta Lacks.

* J.D. Candidate, 2018, University of Richmond School of Law. B.A., 2014, Virginia Commonwealth University. The author would like to acknowledge Professors Thaddeus Fortney and John Aughenbaugh of Virginia Commonwealth University for their encouragement and support throughout the years. The author would also like to thank the editors and staff of the Richmond Journal of Law & Technology for their efforts in editing this article, and for their endless patience.

[1] See Rebecca Skloot, The Immortal Life of Henrietta Lacks 27-28 (Broadway Books 2010).

[2] See id. at 33.


[3] See id.


[4] See Catherine K. Dunn, Protecting the Silent Third Party: The Need for Legislative Reform with Respect to Informed Consent and Research on Human Biological Materials, 6 Charleston L. Rev. 635, 639 (2012).

[5] See generally Skloot, supra note 1, at 31-33 (describing the breakthrough scientific achievements of HeLa cells).

[6] See id. at 2.

[7] See id. at 41.

[8] See id. at 40-41.

[9] See Skloot, supra note 1, at 35-37.

[10] See id. at 40-41.

[11] See, e.g., id. at 4 (discussing the proliferation of cell retention in laboratories).

[12] See id. at 40-41.

[13] See generally Skloot, supra note 1, at 34-41 (describing the laboratory environment of the cell culturist who developed HeLa).

[14] See id. at 40.

[15] See id. at 40-41.

[16] See id. at 41.


[17] See Rebecca Skloot, Henrietta’s Dance, Johns Hopkins Mag. (Apr. 2000).

[18] See Skloot, supra note 1, at 56-58.

[19] See Dunn, supra note 4, at 637-38.

[20] Denise Watson Batts, After 60 Years of Anonymity, Henrietta Lacks Has a Headstone, Virginian-Pilot Online (May 30, 2010) (stating that Henrietta Lacks was buried in an unmarked grave; in 2010, Dr. Roland Pattillo, who had worked with HeLa cells, donated the money necessary to give her a headstone).

[21] See generally Skloot, supra note 1, at 1-6 (describing the ubiquity of information about the cells and contrasting it with the scarcity of information about Henrietta).

[22] See id. at 2-4, 7.

[23] See Alexandra del Carpio, The Good, The Bad, and The HeLa, Berkeley Sci. Rev. (Apr. 27, 2014); see also Skloot, supra note 1, at 2-4.

[24] Skloot first heard of Henrietta Lacks in a community college class she attended as a high school student in 1988. See Skloot, supra note 1, at 2; see Patricia Cohen, Returning the Blessings of an Immortal Life, N.Y. Times (Feb. 4, 2011).

[25] See Books – Best Sellers Paperback Nonfiction, N.Y. Times (Aug. 26, 2012).

[26] See Online Catalog, Random House for High School Teachers (Apr. 7, 2017).

[27] See Erik Pedersen, Oprah Winfrey Starrer ‘The Immortal Life of Henrietta Lacks’ Gets HBO Premiere Date, Deadline Hollywood (Feb. 14, 2017, 10:42 AM).


[28] See generally Robin McKie, Henrietta Lacks’s Cells Were Priceless, but Her Family Can’t Afford a Hospital, Guardian (Apr. 3, 2010) (describing her story as “disturbing”).

[29] Skloot, supra note 1, at 18.

[30] See id.

[31] See id.

[32] The cabin Henrietta grew up in was situated on land that had once belonged to her great-grandfather, a white slaveholder. The cabin itself had once housed his slaves. See id. at 18, 122-24.

[33] See id. at 24.

[34] See Skloot, supra note 1, at 23.

[35] See id. at 24-26.

[36] See id. at 13-15.

[37] See Skloot, supra note 1, at 15.

[38] See id. at 64.

[39] See id. at 29-30.

[40] See Gail Javitt, Why Not Take All of Me? Reflections on The Immortal Life of Henrietta Lacks and the Status of Participants in Research Using Human Specimens, 11 Minn. J.L. Sci. & Tech. 713, 718 (2010).

[41] See Natalie Ram, Assigning Rights and Protecting Interests: Constructing Ethical and Efficient Legal Rights in Human Tissue Research, 23 Harv. J. Law & Tech. 119, 134 (2009).

[42] See Dunn, supra note 4, at 645-47.

[43] See id. at 646.


[44] See id. at 635, 647.

[45] See id. at 647.

[46] See Elizabeth R. Pike, Securing Sequences: Ensuring Adequate Protections for Genetic Samples in the Age of Big Data, 37 Cardozo L. Rev. 1977, 1988 (2016).

[47] See Lori B. Andrews, Harnessing the Benefits of Biobanks, 33 J.L. Med. & Ethics 22, 23 (2005).

[48] See Pike, supra note 46, at 2032.


[49] See Andrews, supra note 47, at 25.

[50] See Pike, supra note 46, at 1988.

[51] See id.

[52] See Dunn, supra note 4, at 642–43.

[53] See Andrews, supra note 47, at 23.

[54] See id.

[55] See, e.g., HS-5 (ATCC® CRL-11882™), American Type Culture Collection (last visited Apr. 2, 2017) (stating that CRL-11882 is a human bone marrow sample taken from a thirty-year-old white man and can be purchased by a for-profit company for $431 USD, or by a non-profit organization for $359.15).

[56] See id. (demonstrating that the source’s name and other personal information is not included).

[57] See generally J.E. Olson, et al., Biobanks and Personalized Medicine, 86 Clinical Genetics 51, 51 (2014) (describing how biobanks provide crucial infrastructure and support for clinical genetics).

[58] See Pike, supra note 46, at 1979.

[59] See id.

[60] See generally Skloot, supra note 1, at 32-33, 40, 66 (telling the story of Henrietta’s life, her experience at Johns Hopkins, and her eventual death).

[61] See Dunn, supra note 4, at 644–45.

[62] See Andrews, supra note 47, at 23.


[63] See Dunn, supra note 4, at 645.

[64] See Moore v. Regents of University of California, 51 Cal. 3d 120, 125 (Cal. 1990).

[65] See id. at 125–26.

[66] See id. at 126–27.

[67] See id. at 127–28.

[68] See Moore, 51 Cal. 3d at 128 n.4.

[69] See id. at 136–38.

[70] See Anya E.R. Prince, Comprehensive Protection of Genetic Information: One Size Privacy or Property Models May Not Fit All, 79 Brook. L. Rev. 175, 175 (2013).

[71] See id. at 183.

[72] See id. at 184–85.

[73] See generally Jaclyn G. Ambriscoe, Note, Massachusetts Genetic Bill of Rights: Chipping Away at Genetic Privacy, 45 Suffolk L. Rev. 1177, 1209–11 (2012) (describing the ways in which combining privacy and property rights is like mixing “oil and water”).

[74] See id. at 1185–87.

[75] See Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881 (2008).

[76] See id.

[77] See id.; see also H.R. 1313, 115th Cong. (1st Sess. 2017) (permitting employers to demand genetic test results from their workers).

[78] See Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881 (2008).

[79] See 45 C.F.R. § 46.101(a) (2017).

[80] See id.

[81] See 45 C.F.R. § 46.116(a)(1)–(5) (2017).

[82] See id.

[83] See 45 C.F.R. § 46.116(a)(2)–(3) (2017).

[84] See 45 C.F.R. § 46.116(a)(1)–(8) (2017).

[85] See id. at (a)(8).

[86] See id. at (b)(5).

[87] See generally 21 C.F.R. § 50.1 (2017) (discussing standards for clinical investigations run by the Food and Drug Administration).

[88] See 21 C.F.R. §§ 56.109, 812.25 (2017).

[89] See Ram, supra note 41, at 140.

[90] See id.

[91] A question must be asked whether, in an age of DNA testing, a tissue sample containing genetic information can ever be truly anonymous. Research has shown that even an incomplete DNA sample can be matched to the unique individual from whom it was taken, which renders the concept of ‘anonymous genetic material’ somewhat obsolete. See generally Amy L. McGuire & Richard A. Gibbs, Genetics: No Longer De-Identified, 312 Science Mag. 370, 370-71 (2006) (discussing research finding that an individual can be identified with just 75 single-nucleotide polymorphisms).

[92] See Nat’l Conf. of State Legs., Genetic Privacy Laws, NCSL (last updated Jan. 2008) (stating that 17 states required informed consent).

[93] See id.

[94] These states are Alaska, Colorado, Florida, Georgia, and Louisiana. See id.

[95] See Colo. Rev. Stat. § 10-3-1104.7(1)(a) (2016) (holding genetic information as property and imposing remedies for a violation of such property).

[96] See Ga. Code. Ann. § 33-54-1 (2016) (holding genetic information as property and imposing remedies for a violation of such property).

[97] See La. Stat. Ann. § 22:2013(E) (2017) (imposing remedies for a violation of such property).

[98] See generally Nat’l Conf. of State Legs., supra note 92 (noting that eight states require informed consent for the retention of genetic information—Alaska, Delaware, Minnesota, Nevada, New Jersey, New Mexico, New York, and Oregon—and that five states identify a personal property interest in genetic information: Alaska, Colorado, Florida, Georgia, and Louisiana).

[99] See Colo. Rev. Stat. § 10-3-1104.7(12)-(13) (2016); Ga. Code. Ann. § 33-54-8 (2016); La. Stat. Ann. § 22:2013(E)–(F) (2017).

[100] See Alaska Stat. § 18.13.010(a)(2) (2016).

[101] See id.

[102] See Alaska Stat. § 18.13.010(a)(1) (2016).

[103] Such as samples collected for law enforcement purposes; the collection of DNA samples in this realm is a common exception to most all legislation on the matter. Whether this should be the case is a question worth asking, but is not within the scope of this comment. See Alaska Stat. § 18.13.010(b)(1)–(5) (2016).

[104] See Alaska Stat. § 18.13.020 (2016).

[105] See Alaska Stat. § 18.13.030(a), (c) (2016).

[106] See Alaska Stat. § 18.13.020 (2016).


[107] See Batts, supra note 20.

[108] See Fla. Stat. § 760.40(2)(a) (2016).

[109] See Fla. Stat. § 760.40(2)(b) (2016).

[110] Fla. Stat. § 760.40(3) (2016).

[111] See Fla. Stat. § 760.40(2)(a) (2016) (“Except for purposes of criminal prosecution, except for purposes of determining paternity as provided in s. 409.256 or s. 742.12(1), and except for purposes of acquiring specimens as provided in s. 943.325, DNA analysis may be performed only with the informed consent of the person to be tested, and the results of such DNA analysis, whether held by a public or private entity, are the exclusive property of the person tested, are confidential, and may not be disclosed without the consent of the person tested.”).

[112] See generally Greenberg v. Miami Children’s Hosp. Research Inst., Inc., 264 F. Supp. 2d 1064 (S.D. Fla. 2003) (holding that plaintiffs could not recover under the Florida statute protecting against misuse of genetic information).

[113] See id. at 1075.

[114] See id. at 1069.

[115] Today, this case might have been resolved somewhat differently. In 2013, the Supreme Court held that genes found in nature are not patentable merely because a particular person or institution has isolated a particular gene. See Ass’n for Molecular Pathology v. Myriad Genetics, Inc., 133 S. Ct. 2107, 2120 (2013).

[116] Canavan disease is a neurological genetic disorder. Children born with Canavan disease typically die before age ten. See Nat’l Inst. of Neurological Disorders and Stroke, Canavan Disease Information Page, NIH,, (last visited Apr. 1, 2017).

[117] See Greenberg, 264 F. Supp. 2d 1064.

[118] See id. at 1066.

[119] See id. at 1068.

[120] See id. at 1072.

[121] See Fla. Stat. § 760.40(1) (2016).

[122] See id. at (2)(a).

[123] See Greenberg v. Miami Children’s Hosp. Research Inst., Inc., 264 F. Supp. 2d 1064, 1075 (S.D. Fla. 2003).

[124] See id. at 1077.

[125] See id. at 1076.

[126] See id.

[127] See generally Natalie Anne Stepanuk, Genetic Information and Third Party Access to Information: New Jersey’s Pioneering Legislation as a Model for Federal Privacy Protection of Genetic Information, 47 Cath. U. L. Rev. 1105, 1135 (1998) (discussing how legislation must take into account the interests of researchers and the public, as well as the donors of any biological material); see also Ram, supra note 41, at 121-22 (noting that researchers and society have strong interests in tissue research, and that the interests of donors, researchers, and society as a whole deserve respect and protection).

[128] See Greenberg v. Miami Children’s Hosp. Research Inst., Inc., 264 F. Supp. 2d 1064, 1076 (S.D. Fla. 2003).

[129] See generally id. at 1074-76 (discussing the impact a property right in genetic material would have on research).

[130] See, e.g., Skloot, supra note 1, at 131-33 (describing how the term ‘informed consent’ did not arise until the mid-1900s).

[131] See, e.g., L.L. Wall, The Medical Ethics of Dr. J. Marion Sims: A Fresh Look at the Historical Record, 32 J. Med. Ethics 346, 348 (2006) (describing how the father of gynecology relied on slaves as research subjects).

[132] See Nat’l Conf. of State Legs., supra note 92.

[133] See id.

[134] See N.M. Stat. Ann. § 24-21-6(c)(3) (2016).

[135] See Skloot, supra note 1, at 41.

[136] See Alaska Stat. § 18.13.020 (2016).

[137] See Lawrence Lessig, Code: and Other Laws of Cyberspace 161 (2nd ed. 1999).

[138] See Sonia M. Suter, Disentangling Privacy from Property: Towards a Deeper Understanding of Genetic Privacy, 72 Geo. Wash. L. Rev. 737, 755 (2004).

[139] See id. at 746.

[140] See id. at 758.

[141] See generally Suter, supra note 138, at 809. The United States has a culture of deep shame surrounding its history with the slave trade, leading many to feel generally uncomfortable with the idea of selling people, or parts of people, and the coercive effects this could have on the impoverished. See also Ambriscoe, supra note 73, at 1211 (arguing that there is a risk individuals would be coerced into selling their genetic information).

[142] See Restatement (First) of Prop. § 489 cmt. a (1944).

[143] See Suter, supra note 138, at 755.

[144] See id. at 758.

[145] See id.

[146] See Suter, supra note 138, at 757.

[147] See id.

[148] See id.


[149] Skloot, supra note 1, at 168.

[150] See Suter, supra note 138, at 748.

[151] See Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 193 (1890).

[152] See id. at 193, 197.

[153] See, e.g., Pavesich v. New England Life Ins. Co., 50 S.E. 68, 69-70 (Ga. 1905).

[154] See Suter, supra note 138, at 767; see also J. Madison, Property, in The Papers of James Madison 14:266–68 (William T. Hutchinson et al. eds., 1792).

[155] See Suter, supra note 138, at 767.

[156] See Ambriscoe, supra note 73, at 1210-11.

[157] See id. at 1193-94.

[158] Agreement with such an assertion is not necessary to ultimately agree with the conclusion that privacy and property are the two pillars necessary to uphold an individual’s right to exercise control over their own genetic information.

[159] See Suter, supra note 138, at 761-62.

[160] See id. at 763.

[161] See Dunn, supra note 4, at 640.

[162] See generally Colo. Rev. Stat. § 10-3-1104.6(2)(c)(I) (2016) (discussing genetic information and the limitations on disclosure of information, as well as liabilities and legislative components).


[163] See Colo. Rev. Stat. § 10-3-1104.7(2)(b) (2016).

[164] See Colo. Rev. Stat. § 10-3-1104.7(10)(a) (2016); see also Alaska Stat. § 18.13.010(a)(1) (2016).

[165] See Colo. Rev. Stat. § 10-3-1104.6(11)-(12) (2016); see also Alaska Stat. § 18.13.020 (2016).

[166] See Alaska Stat. § 18.13.030(a) (2016).

Products Liability and Autonomous Vehicles: Who’s Driving Whom?


Cite as: K.C. Webb, Products Liability and Autonomous Vehicles: Who’s Driving Whom?, 23 Rich. J.L. & Tech. 9 (2016),


By: K.C. Webb*

I.  Introduction

II.  Background

A.  Background of the Technology and State of the Art

B.  Background of Statutory and Regulatory Reforms

C.  Background of Automotive Products Liability Tort Law

1.  Defects Leading to the First Crash

2.  Defects Enhancing Injuries in the Second Crash

III.  How Technology Affects Tort Law

A.  Drawing Bad Comparisons

1.  Non-automotive Technology

2.  Automotive Technology

B.  Applying Outdated Theories and Tests

1.  Consumer Expectation Test

2.  Risk-utility Test

3.  Crashworthiness Doctrine

C.  PROPOSAL: Adapting Tort Law Accordingly

IV.  How Tort Law Affects Technology

A.  Effecting Design Elements

1.  Mechanical Components

2.  Software Components

B.  Effecting Manufacturer’s Actions

1.  Applying Some Pressure

2.  Stopping the Buck

C.  PROPOSAL: Limiting, Dividing, and Shifting Liability

1.  Going to Capitol Hill

2.  Going Dutch

3.  Going it Alone

V.  Conclusion



Today the robot is an accepted fact, but the principle has not been pushed far enough. In the twenty-first century the robot will take the place which slave labor occupied in ancient civilization. There is no reason at all why most of this should not come to pass in less than a century, freeing mankind to pursue its higher aspirations.

-Nikola Tesla, February 9, 1935[1]


I. Introduction

[1]       On February 14, 2016, the by-now-famous Google self-driving car crashed itself into a city bus filled with passengers.[2] The car was traveling at two miles per hour along a busy road in Mountain View, California, when it turned into the lane of the oncoming bus, which was traveling at nearly fifteen miles per hour.[3] Thankfully, none of the fifteen passengers aboard the bus or any of the car’s occupants suffered injuries; however, the accident damaged the side of the bus and the car’s front fender, wheel, and driver’s side door.[4]


[2]       While California’s regulatory agencies squabbled among themselves about their role in determining liability, Google admitted to bearing some measure of responsibility.[5] The test driver failed to activate the manual override because he believed the bus would slow down and allow the car into the lane.[6] Both the test driver and the car misjudged, because three seconds later the car collided with the bus, marking the first time a self-driving car was directly responsible for a crash on public roads.[7]


[3]       This collision turned up the heat on a conversation already simmering among legal scholars, techies, automobile manufacturers, policy makers, Congress, and consumers about the interaction of this technology and tort law.[8] No legal framework currently exists for assigning liability when a self-driving car, like the Google car, crashes itself.[9] Some scholars optimistically predict that tort law will handle the introduction of this new technology easily, pointing to how well tort law has handled new technologies since the dawn of the industrial revolution.[10] Others foretell tort law’s dismal failure to adapt, predicting that all liability will necessarily shift to the manufacturer, stunting the growth of the industry.[11] These doomsday prophets instead pin their hopes on statutory and regulatory reforms.[12]


[4]       This comment explores two broad themes present in these conversations and makes two respective proposals. First, this comment discusses how autonomous vehicle technology affects the development of tort law, critiquing the ability of products liability law’s current tests to handle autonomous vehicle technology, and proposes a new test. Second, this comment discusses how tort law affects the development of this technology and proposes steps manufacturers should take to limit their liability with respect to autonomous vehicles.


[5]       Part II provides background information, beginning with an overview of how the technology works, to set the stage for a competent discussion of products liability issues. It then lays out the history of statutory and regulatory reforms, and the background of automotive products liability law. Part III presents how technology affects tort law by discussing how self-driving cars would fare under the present automotive products liability tests, and then proposes a new test. Part IV turns the lens around to examine how tort law affects technology. It discusses how liability concerns affect design elements and manufacturers’ actions, and then proposes additional steps that manufacturers should take to limit liability. Finally, Part V is a brief conclusion.


II. Background

[6]       This section discusses how autonomous vehicle technology works, statutory and regulatory reforms, and predictions by other scholars on how products liability law will react to autonomous vehicles.


A. Background of the Technology and State of the Art


[7]       Misconceptions about autonomous vehicles abound.[13] Among them: self-driving cars function through classical computer algorithms (complex if-then decision trees); driver assistance systems will gradually transform cars into completely autonomous vehicles; self-driving cars are programmed to make ethical judgments; and self-driving cars will faithfully follow all traffic regulations.[14]


[8]       In reality, the levels of autonomous features range from zero to four.[15] The lower numbers indicate lower levels of autonomy, and many of the features that fall into these categories, like cruise control, have been on the road for a long time.[16]


[9]       The higher numbers indicate a greater level of autonomy. Adaptive cruise control, for example, uses radar sensors to measure the distance to the vehicle ahead and adjusts speed accordingly to keep a set distance. Although it cannot detect and react to a soft object that appears in front of the car (like a deer), it can come to a complete stop when the vehicle in front of it performs a panic brake. Another semi-autonomous feature, lane assist, uses a camera sensor on the front of the car to detect the white and yellow lines that demarcate lanes. When the car begins to drift in the lane, the car gives a warning, usually audible and visible, sometimes vibrating the seat to wake a sleepy driver; lane keeping assist (the next generation of lane assist) helps the car stay in the lane by “continuously applying a small amount of counter-steering force.”[17]
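The distance-keeping behavior described above can be sketched in a few lines of code. This is a hypothetical illustration only: the function name, gain, target gap, and emergency threshold are all invented here, and real adaptive cruise control involves far more sophisticated control logic and sensor fusion.

```python
# Hypothetical sketch of adaptive cruise control: compare the radar-measured
# gap to a set following distance and nudge speed proportionally, braking to
# a full stop when the gap collapses (the lead car "panic-brakes").
# All numbers and names are invented for illustration.

def adjust_speed(speed_mph, gap_m, target_gap_m=30.0, gain=0.5):
    """Return an updated speed given the measured gap to the lead vehicle."""
    if gap_m <= 5.0:                  # gap has collapsed: emergency stop
        return 0.0
    error = gap_m - target_gap_m      # positive when there is room to spare
    return max(0.0, speed_mph + gain * error)

print(adjust_speed(60.0, 30.0))  # gap on target: hold speed
print(adjust_speed(60.0, 20.0))  # too close: slow down
print(adjust_speed(60.0, 3.0))   # lead car panic-brakes: stop
```

The proportional correction mirrors the "small amount of counter-steering force" idea from lane keeping assist: continuous small adjustments rather than discrete all-or-nothing reactions.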


[10]     More advanced features offer hands-free driving, like Super Cruise (GM), Autopilot (Tesla), and Traffic Jam Assistant (BMW).[18] These systems employ radar, sensors, cameras, LIDAR,[19] telematics,[20] and GPS[21] to assess the distance to the next car ahead and find the car’s position on the road and within the lane.[22] Then, like adaptive cruise control and lane assist, the car applies corrective measures to keep itself straight on the road.[23] Information, like road closures, is kept current with firmware updates, which are administered wirelessly as needed.[24] FOTA (firmware updates over the air) has served cell phone end users for years, and now the technology has been adapted for updating automobile software.[25] Like a cell phone, the vehicle’s software can be updated sans cables or expensive recalls.[26]
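At its core, a FOTA-style update check is just a version comparison. The sketch below is a hypothetical illustration: the version strings, field names, and helper functions are invented, and real update systems add signing, staged rollout, and failure recovery.

```python
# Hypothetical sketch of a firmware-over-the-air (FOTA) update check:
# compare installed and available versions and apply the newer one over
# the air. Version strings and the dict layout are invented.

def parse_version(v):
    """'2.10.0' -> (2, 10, 0) so versions compare numerically, not as strings."""
    return tuple(int(part) for part in v.split("."))

def check_for_update(installed, available):
    """Return the available version if it is newer than the installed one."""
    return available if parse_version(available) > parse_version(installed) else None

vehicle = {"firmware": "2.9.1"}
update = check_for_update(vehicle["firmware"], "2.10.0")
if update:
    vehicle["firmware"] = update  # in a real system, delivered wirelessly
```

Parsing into numeric tuples matters: compared as plain strings, "2.10.0" would sort before "2.9.1", and the vehicle would miss the update.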


[11]     Fully autonomous vehicles (AVs) use all of these technologies, and more. They use sensors and GPS to find the car’s position in the world, and determine what street and what lane the car is in.[27] Software interprets and categorizes the images perceived through sensors, like a cyclist or pedestrian; based on these categorizations, it predicts what the objects will do, like cross the street.[28] The software then selects a speed and trajectory for the car, like shifting lanes to allow extra room for the cyclist.[29]
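The classify-predict-plan sequence described above can be sketched as a pipeline. This is a highly simplified, hypothetical illustration: every category, predicted behavior, and maneuver below is invented, not any manufacturer’s actual logic.

```python
# Hypothetical sketch of the AV decision pipeline: classify a sensed object,
# predict its likely behavior, then plan a speed/trajectory response.
# All categories, rules, and names are invented for illustration.

def classify(detection):
    """Stand-in for the pattern-recognition step that labels sensor images."""
    return detection.get("category", "unknown")

def predict(category):
    """Predict what the object is likely to do, based on its category."""
    return {
        "cyclist": "may swerve within the lane",
        "pedestrian": "may cross the street",
        "bus": "less likely to yield",
    }.get(category, "behavior unknown")

def plan(category):
    """Select a maneuver responding to the predicted behavior."""
    if category == "cyclist":
        return "shift lanes to allow extra room"
    if category == "pedestrian":
        return "slow and prepare to stop"
    return "maintain lane and speed"

detection = {"category": "cyclist"}
category = classify(detection)
print(category, "->", predict(category), "->", plan(category))
```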


[12]     Contrary to popular belief, the software is not designed using a complex if-then decision tree to anticipate all possible driving scenarios.[30] Instead, it uses an algorithm to categorize the objects it senses.[31] The algorithm is fed with oodles of images containing various objects, like a child chasing a stray ball into the street.[32] The algorithm uses pattern recognition (from the field of artificial intelligence) to sort and classify the images it senses; when it sees a new image, it occasionally guesses incorrectly.[33] It then alters its internal parameters to increase its sorting accuracy—keeping changes that make the algorithm more accurate, and discarding changes that decrease accuracy.[34] When the algorithm later sees new images, it classifies them with higher accuracy. The algorithm, after a fashion, teaches itself to become a better driver.[35]
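The keep-what-helps, discard-what-hurts loop described above can be illustrated with a toy example. Everything here is a hypothetical stand-in: a one-dimensional threshold classifier replaces real image recognition, and the data, step size, and accuracy rule are invented for illustration.

```python
# Hypothetical toy illustration of the trial-and-error learning loop:
# perturb an internal parameter, keep the change when accuracy holds or
# improves, discard it when accuracy drops. Data and names are invented.
import random

# Toy labeled data: feature value -> label 1 ("obstacle") at or above 0.6.
data = [(x / 10.0, 1 if x >= 6 else 0) for x in range(11)]

def accuracy(threshold):
    """Fraction of examples the rule 'predict 1 when x >= threshold' gets right."""
    return sum((x >= threshold) == (label == 1) for x, label in data) / len(data)

def train(threshold=0.0, steps=200, seed=0):
    rng = random.Random(seed)
    best = accuracy(threshold)
    for _ in range(steps):
        candidate = threshold + rng.uniform(-0.1, 0.1)  # try a small change
        score = accuracy(candidate)
        if score >= best:               # keep changes that do not hurt...
            threshold, best = candidate, score
        # ...and silently discard the rest
    return threshold, best

learned, acc = train()
print(f"learned threshold = {learned:.2f}, accuracy = {acc:.0%}")
```

Because a change is kept only when the score does not drop, accuracy can never decrease over training, which is the sense in which the system "teaches itself" to classify better.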

[13]     It is not yet clear how AVs will interact with each other, or with traditional vehicles, to share information through vehicle-to-vehicle communication (V2V).[36] The National Highway Traffic Safety Administration (“NHTSA”) reserved the 5.9 GHz spectrum for V2V, anticipating its incorporation into vehicles in the near future.[37] In the meantime, Google’s car taught itself to become a better driver through refinements to the software in the hope that “[f]rom now on, [their] cars will more deeply understand that buses (and other large vehicles) are less likely to yield . . . than other types of vehicles.”[38]


[14]     Since GM unveiled the first fully autonomous concept car at its Futurama exhibition during the 1939 World’s Fair, most major car manufacturers have followed suit, working on their own fully autonomous models, and some have already begun testing on public roads.[39] Google boasted that before the Valentine’s Day crash, its test cars had driven roughly one and a half million miles in autonomous mode.[40] Google estimated that its AVs will be available to consumers by 2018.[41] Other manufacturers targeted 2020 as their release date,[42] and the U.S. Secretary of Transportation predicts driverless cars will be “all over the world” by 2025.[43] AVs promise innumerable benefits to society, like:[44]

  • drastically reduced frequency and fatality of crashes, which result in billions of dollars of damage each year and an immeasurable emotional toll on victims’ families;[45]
  • increased mobility and access to essential services for those who are unable or unwilling to drive, like minors, the elderly, or disabled persons;[46]
  • substantially reduced traffic congestion, which currently exacts costs in terms of time, money, and frustration;[47]
  • more efficient land use as people will be more willing to commute longer distances for work, so long as they can reclaim the commute time by engaging in other activities while the car is in motion;[48]
  • reduced emissions and increased fuel economy, as cars become less susceptible to collision and thus need less tonnage to remain safe.[49]


[15]     Yet for all these benefits, legal liability remains the greatest roadblock to mass adoption of AVs.[50] As this technology proliferates, and the line between car and driver blurs, the law must adapt to accommodate.


B. Background of Statutory and Regulatory Reforms


[16]     Anticipating that AVs will occupy the roads within the next decade, lawmakers are reacting now to pave the way for these automated machines. Other stakeholders, like insurance companies, are updating their policies to keep pace with the shifting paradigm.[51]


[17]     So far, a handful of states have updated their traffic codes to permit AVs to take to public roads as test vehicles.[52] Although these first generation laws provide a rudimentary framework and certain changes or additions will be necessary,[53] a number of states already provide manufacturers with limited protection from liability,[54] and many expert reports suggest limiting manufacturer liability promotes growth of autonomous technology.[55] Some scholars even go as far as recommending federal intervention to grant manufacturers immunity via statute.[56]


[18]     AVs are also influencing federal regulations. The NHTSA, eager to realize the benefits of AVs,[57] announced a four-billion-dollar plan to “accelerate the development and adoption of safe vehicle automation through real-world pilot projects,”[58] and published an updated set of policy recommendations.[59] Before publishing the updated recommendations, NHTSA released a statement placing responsibility for accidents on the AV, regardless of whether a human occupies the car.[60]


[19]     Consumer watchdog groups, wary of the potential dangers AVs represent, called this outrageous. They argue that the need for a competent human driver to supervise the car is evident in the number of times Google’s autonomous technology has failed in recent months, prompting the human test driver to take the wheel.[61] This exemplifies the mixed feelings society as a whole has about AVs.[62] On the one hand, driverless cars are safer, more cautious drivers than humans, who, in 2014, wrecked 6.1 million times in the United States alone.[63] Over 32,000 people perished, with human error being the critical factor 94% of the time.[64] Even though we are desperate to improve these statistics and reclaim the time we forfeit commuting, only about half of us would actually ride in an AV.[65] Of that half, even fewer might be willing to put their children in a driverless car, or send them riding to the park on bicycles, crossing streets teeming with driverless cars.[66]


[20]     While policy makers grapple with these conflicting attitudes,[67] they must also wrestle with legal issues like whether there should be a uniform traffic code, or whether federal law should preempt manufacturer liability.[68] NHTSA admitted that it has limited authority to deal with many of these concerns,[69] and even with NHTSA’s recently published recommendations, it may be years before policymakers sift through the findings and promulgate appropriate laws and rules.[70] Even then, laws and rules may take a number of revisions to perfect, especially when dealing with new technologies.[71] Consequently, tort law must adapt to handle these concerns.


C. Background of Automotive Products Liability Tort Law


[21]     Generally, a plaintiff claiming injury by an automobile may bring suit under several different theories of liability: (1) negligence, (2) strict liability, (3) breach of warranty, and/or (4) misrepresentation.[72] However, strict liability is considered the “dominant legal theory” in products liability litigation, and therefore is the focus of this section.[73]


[22]     Products liability’s first case and controversy dates back to England’s Industrial Revolution.[74] In that first case,[75] the court, protective of industry, foreclosed many claims through a legal fiction called “privity,” whereby an insufficient relationship between two parties would bar a lawsuit.[76] Later courts softened this harsh rule by allowing exceptions for products that were inherently dangerous, eventually expanding the limits of inherent danger to swallow privity altogether.[77] In the 1960s, courts began to recognize that manufacturers could be strictly liable for injuries resulting from the use of their products.[78]


[23]     A strict products liability claim requires that the plaintiff prove “(1) that the defendant sold a defective product; and (2) that the defect proximately caused the plaintiff’s harm.”[79] Products liability claims come in three flavors: manufacturing defect, design defect, and warning defect.[80] Manufacturing defects occur when a product fails to meet its design specification.[81] A design defect occurs when a product is designed in a way that makes it unreasonably dangerous.[82] A warning defect occurs when the manufacturer breaches his duty to provide adequate warnings or instructions to use the car in a safe manner.[83] Additionally, specific to automotive products liability, many jurisdictions recognize some form of “crashworthiness” doctrine.[84] Under this doctrine, courts recognize that accidents are foreseeable by vehicle manufacturers, and vehicles must therefore be designed in a way that minimizes injuries to occupants.[85]

[24]     Automotive products liability cases are further divided into two species, “(1) accidents caused by automotive defects, and (2) aggravated injuries caused by a vehicle’s failure to be sufficiently ‘crashworthy’ to protect its occupants in an accident.”[86] These species mirror the two crashes resulting from any single accident.[87] In the “first crash” the car collides with an object, like a tree or a city bus.[88] In the “second crash” the car’s occupants collide with the interior.[89] The following sections examine how automotive products liability claims fit within these species and the tests courts apply.


1. Defects Leading to the First Crash


[25]     Defects causing accidents are typically manufacturing or design defects.[90] “A classic example of a manufacturing defect case would be one in which a tire manufacturer used substandard practices in its plant, resulting in the components of the tire separating and failing later while being used.”[91] Additionally, plaintiffs have prevailed on manufacturing-defect claims in cases where “unintended, sudden[,] and uncontrollable acceleration” causes an accident.[92] In such cases, plaintiffs have been able to recover under a “malfunction theory” which uses a res ipsa loquitur-like inference to allow “deserving plaintiffs to succeed notwithstanding what would otherwise be . . . [an] insuperable problem of proof” of defect in the product.[93]


[26]     Plaintiffs have also prevailed where a design defect causes injury. For example, in the 1970s and 1980s litigation proliferated when vehicles were “designed with a high center of gravity, which increased their propensity to roll over.”[94] The two primary tests used by courts in design defect cases are “the consumer-expectations test and the risk-utility test.”[95]


[27]     For a manufacturing defect claim, courts apply the consumer-expectation test to determine whether the product is unreasonably dangerous.[96] Under a design-defect claim, courts apply the consumer-expectation test as well as the risk-utility test. “[U]nder [the consumer-expectation] test, a plaintiff succeeds by proving that the product failed to perform as an ordinary consumer would expect when used in an intended or reasonably foreseeable manner.”[97] Under the risk-utility test, a plaintiff must show the “magnitude of the danger outweighs the utility of the product, as designed.”[98] Plaintiffs may also seek recovery for injuries sustained in the second crash.


2. Defects Enhancing Injuries in the Second Crash


[28]     “Litigation can also arise where a plaintiff alleges that the vehicle is not sufficiently ‘crashworthy,’”[99] or in other words, the car fails to adequately protect occupants in a collision from injuries sustained during the “second crash” between the occupants and the interior of the vehicle.[100]


[29]     For example, in the landmark case Larsen v. General Motors Corp., the plaintiff drove a 1963 Chevrolet Corvair into a head-on collision, the impact of which fatally “thrust [] the steering mechanism [rearward] into the [plaintiff’s] head.”[101] The court held that even though collisions are not the intended use of an automobile, general negligence principles applied when the manufacturer’s failure to use reasonable care to avoid subjecting the car’s occupants to unreasonable risk of injury either caused the plaintiff’s injuries, or enhanced his injuries.[102] The court went on to state that automobiles do not function solely as a means of transportation, but as “a means of safe transportation” (“or as safe as is reasonably possible under the present state of the art”).[103]


[30]     Like the claims in Larsen, these claims typically allege design defects, and courts apply both the consumer expectation test and the risk utility test. However, “the more complex a product is, the more difficult it is to apply the consumer-expectation test.”[104] Indeed, raising the argument of complexity has become a standard defense in automotive products liability claims.[105] As a result, courts seem to prefer the risk-utility test.[106] However, courts and scholars alike have debated whether these tests provide an appropriate “vehicle” for remedy, and whether they are appropriate to apply to AVs.[107] The next section explores the application of these current tests to AVs.



III. How Technology Affects Tort Law


[31]     Google’s car crash (described in the Introduction) evokes images of the classic trolley car problem—“an ethical brainteaser” perplexing philosophers since 1967.[108] A runaway trolley barrels toward five innocent people tied to the tracks. If you pull a lever you can divert the trolley and switch the tracks, where the trolley will run over and kill one man. Do you do nothing and allow fate to run its course? Or do you actively decide to kill the one man and spare the five? The trolley car scenario has received renewed attention in the debate surrounding AVs.[109] If an AV is presented with a similar choice, would it divert its path to save a busload of school children, but kill the car’s occupant in the process by colliding with a tree instead? Or save the occupant, but let all the children die? Would the injured party have a products liability claim against the AV’s manufacturer? If so, what test would a court use?


[32]     This section explores the bad comparisons drawn between AVs and other technologies, both automotive and otherwise; the inappropriate application of current products liability tests to AVs; and finally, proposes a new test.


A. Drawing Bad Comparisons


[33]     Some scholars believe that tort law in its current state is perfectly capable of handling this new technology because (1) the application of tort law to AV technology is similar to its application to other non-automotive technology, like elevators and autopilot for ships and airplanes;[110] and (2) products liability law has a good track record of handling other automotive technology, like “seatbelts, airbags, and cruise control.”[111]


1. Non-automotive Technology


[34]     AVs are not analogous to elevators or autopilot. Elevators operate in a limited fashion, moving in two directions along a single path.[112] They do not make complex and sophisticated decisions, and when an elevator fails, determining liability is a much simpler matter because human intervention is typically not a factor in play.[113] Elevator users are not held liable “unless they are exceptionally negligent.”[114] By comparison, while humans can avoid an elevator by taking the stairs, humans cannot avoid AVs simply by driving a traditional car, walking, or taking the city bus. Therefore, the test in cases where a person is injured by riding a malfunctioning elevator is not appropriate for the passengers of the city bus struck by the Google car.


[35]     Nor do AVs fit well into a category with autopilot systems for airplanes and boats, which require human vigilance and intervention.[115] Requiring human vigilance in AVs is not comparable to requiring it in autopilot systems because pilots are highly trained, air traffic is highly regulated, and there are far fewer planes in the sky than cars on the road.[116] Moreover, requiring human vigilance in AVs is undesirable.[117] One of the benefits of AV driving is freeing up a driver’s time for other tasks. Yet cognitive science research on distracted driving suggests that human reengagement after periods of occupation with another task is difficult and dangerous.[118] Ergonomic research indicates human brains are not good at routine supervision tasks, so if an AV goes for many miles without incident, the human driver will likely stop paying attention.[119] While some states have required that test vehicles keep a vigilant human driver at the ready, imposing this requirement on consumers forecloses one of the greatest benefits of the technology: mobility for the elderly, minors, and the disabled.[120]


2. Automotive Technology


[36]     To date, automotive products liability law has adapted to cover new technologies as they entered the stream of commerce. At one time seatbelts, airbags, and cruise control were new technologies.[121] At least one scholar suggests that AVs will be perceived as the next generation of automotive safety features, and the law will treat AVs as it has seatbelts and airbags.[122] However, AVs differ greatly from other safety features in their complexity. AVs have significant implications not just for the vehicle’s occupants, but for the environment outside the vehicle as well—including other drivers and pedestrians (unlike airbags and seatbelts, which primarily affect the car’s internal environment).[123]


[37]     Although cruise control draws a closer comparison, because it is a more complex feature and rates a higher level of autonomy,[124] courts have split over which test applies in cruise control cases, applying either the consumer expectation test or the risk utility test.[125] It is unlikely that courts would be any better settled over which test to apply to something even more complex like AVs. However, as the next section explains, neither test is appropriate for application to AVs.


B. Applying Outdated Theories and Tests


[38]     Even if a good comparison could be drawn, current theories and tests for recovery are inappropriate for application to AVs. For example, recovery under a manufacturing defect theory is inappropriate when the alleged defect is a software error, or an error in the computing algorithms employed by AVs, because software is not a manufactured product.[126] The spin-off malfunction theory[127] may be a more appropriate vehicle for recovery because it allows a plaintiff to show that the defect in the software occurred in the absence of any outside tampering.[128] However, not all jurisdictions recognize this theory, and the courts that do utilize it are hesitant to do so in a widespread fashion.[129]

[39]     It is more likely that when an AV crashes itself the way that Google’s car did, plaintiffs would bring suit under a theory of design defect. Still, the tests courts apply under this theory are not appropriate for application to AVs.


1. Consumer Expectation Test

[40]     Courts and scholars have criticized the consumer expectation test as inappropriate under a theory of design defect because of the complexity of traditional automobiles.[130] In fact, the Restatement (Third) rejected this test for design defects altogether.[131] If the consumer expectation test meets criticism for being inappropriate for design defects in general, and traditional vehicles in particular, it would be even more problematic when applied to AVs, which add layers of complexity over traditional cars.[132] Furthermore, employing this test would place manufacturers in the awkward position of managing consumer expectations and providing adequate warnings for safe use of AVs, while simultaneously encouraging use and advertising the overall increased safety of the product. This forces car companies to talk out of both sides of their mouths—confusing consumers and courts alike.[133]


2. Risk-utility Test


[41]     Many courts prefer the risk-utility test, which the Restatement (Third) recognizes as the sole test for design defects,[134] but this test too has drawbacks in the context of AV litigation. For example, even though experts anticipate that mass adoption of AVs will likely drive down the overall cost of automotive products liability cases,[135] the cost of litigating a design defect in an AV’s software may be sky high.[136] Applying the test to an AV’s physical components would likely not look much different than design defect cases for traditional vehicles because litigation of semi-autonomous features adequately explores defects related to the types, placement, and uses of various sensors.[137]


[42]     Cases revolving around the design of the software or algorithms specific to AVs, on the other hand, present a much more difficult case to make—in particular showing a safer alternative design at the time of manufacture.[138] Finding an expert witness to testify will likely be difficult and expensive due to the cutting edge nature of the field, making this method of recovery unavailable for widespread use.[139] When manufacturers develop a safer alternative algorithm, firmware updates can be installed over the air, giving car manufacturers every motivation to administer an update promptly because the cost of recall will not need to be factored and weighed.[140] Assuming there was any delay or missed update, depending on the jurisdiction, rules of evidence would bar admission of later software updates that constitute subsequent remedial measures.[141]


3. Crashworthiness Doctrine


[43]     One scholar speculated that software and algorithm defects cannot be successfully brought under the doctrine of crashworthiness, because software and algorithm defects relate to the “first crash” rather than the “second crash,”[142] but the trolley car scenario teaches otherwise.[143] If an AV were put in a position where a first crash must occur (either a collision with the tree or the children), shouldn’t the car be designed to select the option that causes its occupants the least injury? A failure to do so might give the occupants a crashworthiness claim (among others).[144] Whether an AV can—or should—be so designed is discussed in more detail in the next section.[145] In short, it is merely a matter of time before an AV finds itself in the trolley car scenario, and in such an instance, the doctrine of crashworthiness will not be an appropriate test.


[44]     We humans have not solved this brainteaser, and we cannot expect that a car will make a “better” judgment when we do not know or agree which is the better outcome.[146] In other words, when an AV selects either bad outcome (kill the occupant or kill the children), some might suppose this constitutes a design defect.[147] In particular, it might give rise to a claim under the crashworthiness doctrine, which dictates that vehicles must be designed in a way that minimizes injuries to occupants.[148]


[45]     However, because society has not decided on a clear preferable outcome to this scenario, it is just as possible that neither outcome could be considered a design defect. A recent study presented respondents with a scenario wherein an AV found itself in a trolley car predicament.[149] Most respondents were willing to sacrifice the driver—if they were not the driver.[150] This study shows that society is unclear how it wants—or expects—an AV to behave under trolley car circumstances. It is therefore hard to argue that either outcome is the result of a defective design when society is not clear on how it believes AVs should be designed with respect to the trolley problem. Consequently, imposing the doctrine of crashworthiness on AVs means that, at least in the particularly morbid scenario described above, the occupant always wins—and the children always die. This requirement usurps society’s role in determining the best outcome of the trolley car problem, and it is therefore inappropriate to impose the crashworthiness doctrine on AVs, at least in this context. Without an existing products liability principle to apply, scholars are left to speculate over what an appropriate standard might be.


C. PROPOSAL: Adapting Tort Law Accordingly


[46]     Although the tests described above fall short when it comes to AVs, they do demonstrate the inventiveness and adaptability of tort law. So even though tort law is not currently capable of handling this new technology, it will adapt by developing new, more appropriate tests.


[47]     This comment proposes one such test: the reasonable car standard. Scholars suggest that AVs should be treated like other automotive innovations (e.g., seat belts, airbags, cruise control),[151] or like non-automotive machines (e.g., elevators, autopilot).[152] This comment instead proposes treating AVs more like the way we treat human drivers: by adopting a reasonable car standard.[153]


[48]     A reasonable car standard holds a car manufacturer liable only when the car does not act in a way that another reasonable AV would act. The data collection devices inside these new vehicles capture all the relevant information leading up to a collision.[154] This data would be compared to data derived and compiled from other similarly situated AVs. Allowing the factfinder to compare an AV with a traditional model presents a “false choice,” which the reasonable car standard circumvents.[155] This standard presents the added advantage of being applicable regardless of whether the car contained a human occupant, meaning car manufacturers could continue to develop AVs with the goal of eliminating human override capability—which is congruent with current NHTSA policy.[156] However, if a human occupant failed to override the AV, the reasonable car standard does not necessarily foreclose a claim against the negligent human driver.
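For the technically inclined reader, the comparison at the heart of the reasonable car standard can be sketched in a few lines of code. The sketch is purely illustrative: the data fields, the fleet-average comparison, and the tolerance threshold are assumptions made for illustration, not features of any actual event data recorder format or proposed legal test.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CrashSnapshot:
    """Hypothetical EDR data captured in the moments before a collision."""
    speed_mph: float
    braking_onset_s: float  # seconds before impact that braking began

def acted_reasonably(subject: CrashSnapshot, fleet: list[CrashSnapshot],
                     tolerance: float = 0.25) -> bool:
    """Compare the subject AV's behavior to similarly situated AVs.

    The AV is "reasonable" if its pre-crash speed and braking response
    fall within a tolerance band around the fleet average.
    """
    avg_speed = mean(s.speed_mph for s in fleet)
    avg_braking = mean(s.braking_onset_s for s in fleet)
    speed_ok = abs(subject.speed_mph - avg_speed) <= tolerance * avg_speed
    braking_ok = abs(subject.braking_onset_s - avg_braking) <= tolerance * avg_braking
    return speed_ok and braking_ok

# A subject AV compared against three similarly situated AVs:
fleet = [CrashSnapshot(34.0, 1.9), CrashSnapshot(36.0, 2.1), CrashSnapshot(35.0, 2.0)]
print(acted_reasonably(CrashSnapshot(35.0, 2.0), fleet))  # prints True: within the band
print(acted_reasonably(CrashSnapshot(55.0, 0.4), fleet))  # prints False: far outside it
```

The point of the sketch is structural: the factfinder's question shifts from "was there a safer design?" to "did this car deviate from how its peers behave?", a question answerable from fleet data.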


[49]     The reasonable car standard also accounts for the growth of technology. First generation cars will likely not be as “smart” as later generations, and drawing comparisons between generations and across brands would be painting with colossal brush strokes. Nor is it clear yet how much information cars will be able to share with each other.[157] This standard instead allows for the comparison of AVs at the moment the fatal decision is made, and could be applied in a manner that takes V2V[158] capabilities into consideration.


[50]     This standard also resolves issues of privity in an inclusive manner. Currently, most automotive products liability litigation involves the plaintiff suing the manufacturer of their own car.[159] The reasonable car standard allows claims to be brought by passengers, occupants of other vehicles, and pedestrians alike.


[51]     Additionally, the reasonable car standard leaves room for the trolley car problem. It allows society, by way of a jury, to give input into what the best outcome should be, and compares an AV’s choice to what a reasonable AV would have done under similar circumstances. Although what that outcome would be, and how a jury might judge it, is still uncertain, the reasonable car standard gives society the same input it has when a human driver faces the same decision.


[52]     Unfortunately, successful application of the reasonable car standard depends on car manufacturers producing information about the behavior of other cars in similar situations. Manufacturers might be hesitant to reveal this information for a number of reasons (e.g., publicity, consumer privacy, and trade secret protection). However, the normal rules of discovery would compel manufacturers to disclose the information necessary to establish the reasonable car standard.


[53]     Another flaw in the reasonable car standard applies to the first generation of AVs: small sample size. With a limited number of AVs on the road, ascertaining what a reasonable car would do might be difficult, and the answer may be unreliable.[160] However, as the technology proliferates, this problem will become less pronounced.


[54]     Other scholars have discussed and rejected a reasonableness standard under a negligence theory.[161] Especially in a trolley car scenario, negligence would be the improper standard where the injury resulted from an intervening act by the AV. In other words, there is a distinction between “intending” injury and “merely foreseeing it.”[162] Applying the reasonable car standard in the strict liability setting of products liability would be a proper test because products liability does not pivot on this intent/foreseeability distinction, but on whether any safer alternative design existed. The reasonable car standard would serve as a threshold to this issue, preventing the floodgates of litigation from opening so wide as to deter innovation. Determining how a reasonable AV would act and comparing it to an allegedly deviant AV would be far less invasive and expensive for the parties than litigating whether a safer alternative design could have been implemented by comparing lines of computer code (which would likely confuse the factfinder).[163]


[55]     In sum, although tort law in its current state lacks an appropriate vehicle for remedy when it comes to AVs, tort law is robust enough to adapt as it always has to new technologies. One means of adapting is by applying a new standard: the reasonable car standard. Just as this new technology will influence the evolution of products liability law, so too products liability law will influence the evolution of technology and the actions of car manufacturers.



IV. How Tort Law Affects Technology


[56]     New technology affects the evolution of law and vice versa. This section explores how automotive products liability law is shaping the technology involved with AVs, from different design components to steps that manufacturers are taking to limit liability without stunting growth. Finally, this section proposes that while it remains unclear how the law will react to AVs, manufacturers may take prospective steps to limit, divide, and shift liability.


A. Affecting Design Elements

1. Mechanical Components


[57]     Certain design features of AVs are responsive to legal requirements. For example, California law requires that all AVs be equipped with a steering wheel and a driver at the ready.[164] However, it is not just statutory and regulatory reform driving the incorporation of certain design elements. Products liability concerns exert a similar influence.


[58]     For example, keeping a “kill-switch” in the car,[165] whereby an occupant is responsible for assuming control of the vehicle in the event the car encounters conditions it is not mature enough to handle, might provide manufacturers with an escape from liability.[166] This requires that the car retain features that permit human control (e.g., steering wheel, pedals, rearview mirror, horn, and emergency brake).[167] Because, as previously mentioned, this requirement severely limits one of the greatest benefits of the technology—mobility for the elderly, minors, and the disabled—the law should work to alleviate it.[168] In the meantime, AVs require human supervision—at least for the first generation.[169]


[59]     Another example is the “black box” recorder, or event data recorder (EDR).[170] As with the kill switch discussed above, EDRs are required in AVs by California and Nevada law, but tort law exerts a similar pressure to keep accurate records of the events leading up to a collision. Doing so brings more benefit than harm to manufacturers. AVs will share the road with traditional models for several generations, and most collisions are the result of human error.[171] Therefore, providing accurate data will shift liability away from the manufacturer and onto the human driver in the vast majority of cases—either the human driver of the traditional vehicle, or the human driver who failed to operate the kill switch in the AV.[172]


[60]     Other suggestions yet to be incorporated include colored, lighted license plate identifiers, so police may discern when a human driver is in control,[173] and concept cars touting more forward-looking features, like reclining[174] or swiveling[175] front seats. To be sure, an AV’s hardware pushes the law to new places—as does its software.


2. Software Components


[61]     Going back to the trolley car scenario, many people question whether an AV can—or rather, should—be programmed to select a particular outcome.[176] One answer is that an AV should, in theory, avoid a trolley car scenario altogether. Daniela Rus, head of the Artificial Intelligence lab at M.I.T., believes that a “capable perception and planning system, perhaps aided by sensors that can detect non-line-of-[sight] obstacles” would provide an AV with sufficient situational awareness and control.[177] Rus explains, “A self-driving car should be able to not hit anybody—avoid the trolley problem altogether!”[178] Currently, the algorithms that drive AVs have not matured enough to handle routine driving scenarios, and struggle with four-way stops,[179] snow,[180] and, apparently, driving in urban settings alongside city buses.[181] These challenges stem in part from the software’s timid nature: abiding by traffic laws and driving defensively amid aggressive human drivers, who do not always come to a complete stop or make room for fellow drivers.[182]


[62]     This deficit could be corrected with a little tweaking, but the question then becomes: should AVs be programmed to solve the trolley car problem? And how? As many readers will have guessed, the trolley car problem has no “right answer.”[183] A utilitarian solution would save the greatest number of people, but places the operator (or in the case of AVs, the programmer) in the position of playing God—actively deciding who lives and who dies.[184] Not surprisingly, not all people agree on the best outcome. The public’s conflicting thoughts on AVs are mirrored in the disagreement over the trolley car outcome.[185] As mentioned earlier, most people are willing to sacrifice the driver—so long as they are not the driver.[186]


[63]     Simply put, even if the first generation of AVs were able to find a solution to this problem,[187] society has not yet agreed on what that answer should be.[188] Therefore, requiring this capability in AVs is senseless until society decides on the most desirable outcome. Furthermore, it is irrational to forestall the societal benefits AVs present until a solution to the trolley car problem is devised. One insightful writer put it well:


Humans are freaking out about the trolley [problem] because we’re terrified of the idea of machines killing us. But if we were totally rational, we’d realize 1 in 1 million people getting killed by a machine beats 1 in 100,000 getting killed by a human. For some reason, we’re more okay with the drunk driver or texting while driving. In other words, these cars may be much safer, but many people won’t care because death by machine is really scary to us given our nature.[189]


[64]     Setting aside the trolley car problem, tort law has affected other aspects of AV software, like FOTA. Because a safer alternative design would weigh against a manufacturer in a design defect claim,[190] making software updates in a timely and cost-effective manner is imperative. FOTA allows manufacturers to send software updates wirelessly as they are developed, with little lag time, and at minimal cost.[191]
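The update logic that makes this possible can be sketched roughly as follows. The version tuples, function names, and "advertised build" feed are hypothetical stand-ins, not any manufacturer's actual FOTA protocol; the sketch only illustrates why an over-the-air channel lets a safer build propagate without a recall step.

```python
def needs_update(installed: tuple, available: tuple) -> bool:
    """A vehicle updates whenever the fleet server advertises a newer build."""
    return available > installed  # version tuples compare component-wise

def apply_updates(installed: tuple, feed: list) -> tuple:
    """Install each pending build in order, as pushed over the air.

    Assigning the new version is a stand-in for flashing the firmware;
    no physical recall is required at any point.
    """
    for build in sorted(feed):
        if needs_update(installed, build):
            installed = build
    return installed

# A car on build 2.0.3 receives two advertised builds and ends on the newest:
print(apply_updates((2, 0, 3), [(2, 1, 0), (2, 0, 5)]))  # prints (2, 1, 0)
```

The design point is that the entire "recall" reduces to a version comparison and a wireless transfer, which is why the cost term largely drops out of the manufacturer's calculus.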


[65]     Additionally, although many state laws provide manufacturers liability protection in the event that a third party tampers with the software, hacking is a foreseeable risk with potentially catastrophic consequences.[192] As a result, the drive to minimize these vulnerabilities has produced enhanced methods of encrypting vehicle communications.[193]


[66]     Existing products liability laws have hot-housed other advancements like telematics[194] and V2V communication[195] as well. Telematics refers to “the transfer of data to and from a moving vehicle.”[196] It allows traditional cars and AVs to stay up to date on road conditions by reporting information to a central hub, which in turn communicates the information to other users (think of the traffic app Waze).[197]


[67]     V2V uses short-wave radio to allow cars to exchange information at distances of up to 300 meters.[198] This range exceeds the capabilities of sensors, cameras, and radar in that it can “see around corners” or “through” objects to assess driving conditions well down the road and avoid collisions and traffic jams.[199] For example, if a collision occurs or the roadway is otherwise obstructed, a car slowing down to pass by or being rerouted can tell a car 300 meters behind it to slow down or avoid the area. That car in turn can relay the message even further, conveying it to cars well behind and thereby avoiding unnecessary congestion.[200]
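The relay effect described above can be sketched in code. This is an illustrative model only, assuming cars at fixed positions along a straight road and a simple "re-broadcast to anyone in range" rule; it is not the actual V2V radio protocol.

```python
# A hazard warning hops between cars within the 300-meter radio range,
# so the message can travel well beyond the range of any single broadcast.
V2V_RANGE_M = 300

def cars_warned(positions_m: list, hazard_at_m: float) -> set:
    """Return the positions of cars that receive the hazard warning.

    Cars in direct range of the hazard are warned first; each warned
    car then re-broadcasts to any car within 300 m of it.
    """
    warned = {p for p in positions_m if abs(p - hazard_at_m) <= V2V_RANGE_M}
    changed = True
    while changed:  # keep relaying until no new car is in range of a warned one
        changed = False
        for p in positions_m:
            if p not in warned and any(abs(p - w) <= V2V_RANGE_M for w in warned):
                warned.add(p)
                changed = True
    return warned

# Cars trailing a hazard at the 0 m mark, spaced 250 m apart; the car at
# 1,200 m is beyond relay reach of the last warned car and stays unwarned:
print(sorted(cars_warned([250.0, 500.0, 750.0, 1200.0], 0.0)))
```

As the sketch shows, the warning propagates hop by hop, which is how a single slowdown report can thin out traffic far upstream of the obstruction.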


[68]     Although V2V is not restricted to use in AVs, it has enormous implications for AVs, allowing them to communicate among themselves or with traditional vehicles.[201] That increased communication provides a redundancy in the event of a sensor failure,[202] or, for example, would allow an AV to know for certain that a city bus indeed intended to slow down and let the AV into the lane.[203]


[69]     While technology managed to evolve despite the constraints of existing laws, manufacturers have taken other actions to limit their liability and still innovate.


B. Affecting Manufacturers’ Actions

1. Applying Some Pressure


[70]     In March 2016, AV proponents petitioned Congress to regulate the industry in order to avoid letting states construct a patchwork of laws which could hamper innovation.[204] Chris Urmson, the technical leader of Google’s self-driving car project, stated, “[i]f every state is left to go its own way without a unified approach, operating self-driving cars across state boundaries would be an unworkable situation and one that will significantly hinder safety innovation, interstate commerce, national competitiveness and the eventual deployment of autonomous vehicles.”[205]


[71]     On September 19, 2016, the NHTSA delivered on its promise to publish updated recommendations for the treatment of AVs, including a request for states to work together to develop uniform policies.[206] NHTSA has said that it will not prevent states from setting their own standards for AVs (so long as they do not conflict with federal law), but this request signals that the NHTSA expects states to cooperate and strive for uniformity.[207]


2. Stopping the Buck


[72]     This uncertainty led Volvo to take the drastic step of announcing, in October 2015, that it would assume full liability whenever one of its cars is in autonomous mode.[208] Volvo Car Group President and CEO Håkan Samuelsson warned that a lack of federal guidelines for the testing and certification of AVs may cost the U.S. its leading position in the field.[209] He stated, “Europe has suffered to some extent by having a patchwork of rules and regulations. It would be a shame if the U.S. took a similar path to Europe in this crucial area.”[210] Mr. Samuelsson explained that the lack of federal oversight risks slowing the growth and development of AV technologies, “by making it extremely difficult for car makers to test, develop and sell [AVs]. The absence of one set of rules means car makers cannot conduct credible tests to develop cars that meet all the different guidelines of all 50 [] states.”[211]


[73]     In a fashion, Volvo self-insured its self-driving cars. By assuming all liability, Volvo expressed confidence in its product, and found a way to more accurately project costs—eventually passing them on to the consumer.[212] Consumers will likely be willing to pay a slightly higher price for the assurance that litigation will be avoided.[213] Since Volvo made this promise, Google and Mercedes-Benz have followed suit, making similar assurances.[214] However, this tactic of cutting out the insurance industry could be considered anti-competitive, and it may cause insurance companies to mobilize in opposition.[215] For conservative manufacturers who are not willing to take such a drastic step, this comment proposes alternative steps to reduce liability.


C. PROPOSAL: Limiting, Dividing, and Shifting Liability


[74]     This comment proposes that manufacturers who do not voluntarily assume liability may take the following steps to limit liability: petition Congress for preemptive protection, “split the bill” with insurance companies, and develop special training modules as part of the purchase or lease agreement.


1. Going to Capitol Hill


[75]     Manufacturers could apply additional pressure to state and federal legislative and regulatory bodies to write laws and rules limiting their liability, targeting Congress in particular. In the past, Congress protected industries that provided a good that served a public health interest (like vaccine manufacturers[216]) or provided transportation (like the airline industry[217]).[218]


[76]     Manufacturers can argue that AVs provide both a benefit to public health, by reducing the number of accidents due to human error,[219] and a source of transportation, and they are therefore deserving of liability protection via federal action.[220] The social benefits of AVs range from a sharp decrease in traffic related fatalities, to more efficient land use, significantly reduced emissions, reduced social isolation, and access to essential services.[221] Some projections predict AVs could save nearly 300,000 lives over the course of a decade in the U.S. alone—putting AVs in the company of public health benefits like vaccines, which save 42,000 lives per U.S. birth cohort.[222] Moreover, the reduced emotional toll on the families of the 300,000 potential victims is immeasurable.[223]

[77]     Yet, for all these benefits, legal liability remains the greatest roadblock to mass adoption of AVs.[224] Pressure from foreign markets, as Samuelsson pointed out, coupled with pressure from manufacturers may convince Congress to act.


2. Going Dutch[225]


[78]     Autonomous technology also shakes up the insurance industry, and much has been written predicting how it will react.[226] Certainly, if the AV is in autonomous mode when a crash occurs, as the Google car was, insurance companies will seek to shift liability away from the human driver and toward manufacturers.[227] One scholar suggests establishing a national car insurance fund to pay for AV accidents.[228] This comment proposes that manufacturers could lead the effort. A national fund presents the advantage of allowing manufacturers to negotiate with other stakeholders (e.g., NHTSA, insurance companies, and ride sharing companies) to determine a proportional contribution, rather than rolling the dice in court whenever a plaintiff files a products liability claim. Without lawmaker action to completely immunize manufacturers from liability, the next best option may be negotiating liability absent a jury.


[79]     Working with the insurance industry may be a better move than working against it. As altruistic as Volvo’s self-insurance model may seem, it has the potential to alienate the insurance industry. As previously mentioned, cutting out insurance companies may create friction and ultimately backfire if the move is deemed anti-competitive.[229]


3. Going it Alone


[80]     In the face of regulatory drought, manufacturers may take unilateral steps to limit liability. For example, manufacturers could develop and provide special training modules for prospective buyers, making satisfactory completion a part of the purchase or lease agreement. As previously mentioned, manufacturers will want to avoid the awkward position of managing consumer expectations and providing adequate warnings for safe use of AVs, while simultaneously encouraging use and advertising the overall increased safety of the product.[230] Training modules would provide manufacturers with the opportunity to fully verse purchasers in the capabilities and limitations of AVs, allowing manufacturers to fulfill their duty to provide adequate warnings in a controlled environment—somewhat privately, or at least not center stage in front of a public that is already terrified by the idea of death by machine.[231]



V. Conclusion

[81]     Fully autonomous vehicles already roam public streets, but automotive products liability law lags behind the technology. Tort law in its current state cannot appropriately address concerns arising from the mass adoption of AVs, and while lawmakers ponder the best course of action, manufacturers must be prepared to litigate the necessary changes—like the adoption of a reasonable car standard.


[82]     Just as the law reacts to new technology, technology symbiotically reacts to the law. Thus, manufacturers must design accordingly, while consumer demand requires that manufacturers design AVs that push the limits of existing products liability law. Therefore, to ensure that innovation is not unduly hampered, manufacturers must take additional steps like seeking liability protection via legislation, leading the way to establish a national insurance fund, and developing training modules for buyers as part of purchase and lease agreements.


[83]     AV cars are here. The law will react. Manufacturers should ready themselves to influence that reaction.






*J.D., M.P.A., B.A. Political Science. The author thanks Tobias Ogemark for his inspiration and insight into the technical areas of this field, Hiram Molina for his help editing, and JOLT.

[1] Matt Novak, Nikola Tesla’s Amazing Predictions for the 21st Century, Smithsonian (Apr. 19, 2013).

[2] See Nick Statt, Google’s bus crash is changing the conversation around self-driving cars, Verge (Mar. 15, 2016, 2:56 PM).

[3] See id.

[4] See id.; see Chris Ziegler, A Google self-driving car caused a crash for the first time, Verge (Feb. 29, 2016, 1:50 PM).

[5] See Statt, supra note 2; see generally Alissa Walker, What Google’s Self-Driving Car Learned From Hitting That Bus, Gizmodo (Mar. 11, 2016, 7:15 PM) (discussing Google’s response to the February crash).

[6] See Statt, supra note 2; see Ziegler, supra note 4.

[7] See Ziegler, supra note 4; see also Chris Ziegler, Watch the moment a self-driving Google car sideswipes a bus, Verge (Mar. 9, 2016, 11:57 AM) [hereinafter Watch the moment].

[8] See generally, e.g., James M. Anderson et al., Autonomous Vehicle Technology: A Guide for Policymakers xiii (RAND Corp. 2016) [hereinafter RAND] (discussing the policy changes in response to new autonomous vehicle technology); Sven A. Beiker, Legal Aspects of Autonomous Driving, 52 Santa Clara L. Rev. 1145, 1146 (2012) (providing an overview of legal issues involving autonomous vehicles); Steve Brachmann, Regulatory issues involving self-driving vehicles begin to take shape, IPWatchdog (Apr. 3, 2015) (discussing regulatory issues surrounding autonomous vehicles); Frank Douma & Sarah A. Palodichuk, Criminal Liability Issues Created by Autonomous Vehicles, 52 Santa Clara L. Rev. 1157, 1158–59 (2012) (describing the third party liability issues created by autonomous vehicles); Andrew P. Garza, Note, “Look Ma, No Hands!:” Wrinkles and Wrecks in the Age of Autonomous Vehicles, 46 New Eng. L. Rev. 581, 616 (2012) (arguing that liability will fall on manufacturers but that increased safety benefits will decrease liability); Dorothy J. Glancy, Privacy in Autonomous Vehicles, 52 Santa Clara L. Rev. 1171, 1173 (2012); Kyle Graham, Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations, 52 Santa Clara L. Rev. 1241, 1243 (2012) (discussing how tort liability evolves with emerging technology); Robert B. Kelly & Mark D. Johnson, Defining a Stable, Protected and Secure Spectrum Environment for Autonomous Vehicles, 52 Santa Clara L. Rev. 1271, 1274 (2012) (discussing autonomous vehicle communication systems); Monica Kleja, Läsarna: Tillverkarna ska ta ansvar för självkörande bilar [Readers: Manufacturers must take responsibility for self-driving cars], NyTeknik (Feb. 24, 2016) (stating that manufacturers must take responsibility for self-driving cars); Gary E. Marchant & Rachel A. Lindor, The Coming Collision Between Autonomous Vehicles and the Liability System, 52 Santa Clara L. Rev. 1321, 1339 (2012) (arguing that the vehicle manufacturer should be liable for accidents caused in autonomous mode); Robert W. Peterson, New Technology—Old Law: Autonomous Vehicles and California’s Insurance Framework, 52 Santa Clara L. Rev. 1341, 1342 (2012) (discussing how insurance markets will be affected by autonomous vehicles); Eddie Pröckl, Självkörande bilar känsligast för hackning [Self-driving cars most vulnerable to hacking], Ny Teknik 4–5 (Swed.) (Apr. 6, 2016) (discussing how self-driving cars can be hacked); J.B. Ruhl, Can AI Make AI Obey the Law?, Law 2050 A Forum About the Legal Future (Feb. 16, 2016, 8:58 PM) (outlining the legal issues stemming from autonomous vehicles); John Villasenor, Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation, Brookings Institution Press (Apr. 24, 2014) [hereinafter Brookings] (discussing the liability issues arising from autonomous vehicles).

[9] See Neal Katyal, Disruptive Technologies and the Law, 102 Geo. L.J. 1685, 1689 (2014).

[10] See Garza, supra note 8, at 616; Jeffrey R. Zohn, When Robots Attack: How Should the Law Handle Self-Driving Cars That Cause Damages, 2015 U. Ill. J.L. Tech. & Pol’y 461, 464 (2015) (arguing that “there is enough precedential law to support autonomous vehicle liability and that the law should treat autonomous vehicles like other autonomous machines, not traditional automobiles”); F. Patrick Hubbard, “Sophisticated Robots:” Balancing Liability, Regulation, and Innovation, 66 Fla. L. Rev. 1803, 1872 (2014) (advocating the ability of tort law to balance victim compensation and innovation).

[11] See Roy Alan Cohen, Self-Driving Technology and Autonomous Vehicles: A Whole New World for Potential Product Liability Discussion, 82 Def. Couns. J. 328, 330–31 (2015), reprinted in Products Liability, IADC Committee Newsletter; RAND, supra note 8, at xxii.

[12] See Marchant & Lindor, supra note 8, at 1340. But see Jeffrey K. Gurney, Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles, 2013 U. Ill. J.L. Tech. & Pol’y 247, 273 (2013) (arguing that “current products liability law will not be able to adequately assess [] fault” for autonomous vehicles and current doctrines should be reconsidered).

[13] See Alexander Hars, Top Misconceptions of Autonomous Cars and Self-Driving Vehicles, Thinking Outside the Box: Inventivio Innovation Briefs 1 (2016).


[14] See id. at 1, 5.

[15] In 2013, NHTSA established five levels of automation in vehicles:

No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls – brakes, steering, throttle, and motive power – at all times.

Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.

Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.

Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.

Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.

See Russ Heaps, Self-Driving Cars: Department of Transportation Issues New Classification Levels for Autonomous Cars, Autotrader (Oct. 2016) (citing Press Release, Nat’l High. Traf. Safety Admin., Preliminary Statement of Policy Concerning Autonomous Vehicles 4–5 (last visited Apr. 13, 2017) [hereinafter NHTSA Preliminary Statement]); see also Lewis Bass & Thomas Parker Redick, Prod. Liab.: Design and Mfg. Defects § 26:3 (2d ed. 2017).

[16] See id.


[17] See Lane Keeping Assist: Helps keep drivers within lanes, Toyota (last visited Apr. 22, 2016).

[18] See Aaron M. Kessler & Bill Vlasic, Semiautonomous Driving Arrives, Feature by Feature, N.Y. Times (Apr. 2, 2015).

[19] LIDAR, or Laser Illuminating Detection and Ranging, is like radar, but uses lasers to detect objects and build a 3-D map of the car’s surroundings. See Bryan Clark, How Self-Driving Cars Work: The Nuts and Bolts Behind Google’s Autonomous Car Program, MakeUseOf (Feb. 21, 2015).

[20] “Telematics is a general term that refers to any device which merges telecommunications and informatics. Telematics includes anything from GPS systems to navigation systems. It is responsible for many features in vehicles from OnStar to hands free mobile calling.” See Welcome to, (last visited Apr. 22, 2016).

[21] “The Global Positioning System (GPS) is a satellite-based navigation system made up of at least 24 satellites … Each satellite transmits a unique signal and orbital parameters that allow GPS devices to decode and compute the precise location of the satellite. GPS receivers use this information and trilateration to calculate a user’s exact location.” See What is GPS?, Garmin (last visited Apr. 18, 2017).


[22] See generally Steven H. Bayless et al., Intelligent Transp. Soc’y of Am., U.S. Dep’t of Transp., Connected Veh. Insights: Trends in Roadway Domain Active Sensing 2 (Aug. 14, 2013).

[23] See Kessler & Vlasic, supra note 18.


[24] See Sami Haj-Assaad, Future Cars Will Update Wirelessly To Stay Safe (Oct. 21, 2015).


[25] See Harman, Redbend Software Management Platform Software Update Management, Redbend (last visited Apr. 22, 2016).

[26] See RedBend Software, Updating Car ECUs Over-The-Air (FOTA) (2011) [hereinafter Redbend White Paper], at 10.

[27] See How it works, Waymo (last visited Apr. 22, 2016) [hereinafter How it works]; Muhammad Azmat & Clemens Schuhmayer, Inst. of Transp. & Logistics, Vienna Univ. of Econ. & Bus., at Fed. Procurement Agency Austria Workshop Innovation Platform – E-mobility, Future Scenario: Self Driving Cars—The Future has Already Begun (May 7, 2015).

[28] See How it works, supra note 27.

[29] See id.

[30] See Hars, supra note 13, at 4.


[31] See id.


[32] See id.; see also How it works, supra note 27.

[33] See How it works, supra note 27.


[34] See id.


[35] See Hars, supra note 13; see also Dorothy J. Glancy, Legal Outlook for Autonomous, Automated, and Connected Cars, Fed’n of Def. & Corp. Couns. Ann. Meeting (July 25–Aug. 1, 2015).

[36] V2V communication systems use short range radio to “talk” to each other. The Department of Transportation estimates V2V will avoid 76% of roadway crashes. Self-Driving Cars and Insurance, Ins. Info. Institute (July 2016) [hereinafter Insurance Information Institute]. But see RAND, supra note 8, at xx; Brachmann, supra note 8; Dorothy J. Glancy, Autonomous and Automated and Connected Cars-Oh My! First Generation Autonomous Cars in the Legal Ecosystem, 16 Minn. J.L. Sci. & Tech. 619, 648 (2015) (“What remains uncertain is whether NHTSA’s narrow definition of connected vehicles to include only DSRC V2V communications in passenger cars and light trucks, will be a required feature of first generation autonomous cars.”) [hereinafter Autonomous and Automated and Connected Cars-Oh My!].

[37] See Press Release, Nat’l Highway Traffic Safety Admin., Transp. Sec. Foxx Announces Steps to Accelerate Road Safety Innovation (May 13, 2015).

[38] David Shepardson, Google says it bears ‘some responsibility’ after self-driving car hit bus, Reuters (Feb. 29, 2016).

[39] See Self-Driving Cars: Past, Present and Future, GEICO (Dec. 4, 2015).


[40] See id.; see Mike Ramsey, Google Self-Driving Car Hits Bus, Wall St. J. (Mar. 1, 2016, 8:47 AM).

[41] See Azmat & Schuhmayer, supra note 27, at 10.

[42] See id.

[43] Jan Hauser, Amerika schaltet auf Autopilot [America Switches to Autopilot], Frankfurter Allgemeine Zeitung (Sept. 19, 2015).

[44] See RAND, supra note 8, at 9 (discussing these benefits and exhaustively listing the promises and perils related to AVs).

[45] See id. at 15.


[46] See id. at 9.


[47] See id. at 17.


[48] See id. at 28.


[49] See RAND, supra note 8, at 28.


[50] See Insurance Information Institute, supra note 36.

[51] See Driverlessuser, HDI Gerling first insurance company to insure a driverless car, Driverless Car Market Watch (Mar. 26, 2012) [hereinafter HDI Gerling]; Insurance Information Institute, supra note 36; Carrie Schroll, Splitting the Bill: Creating A National Car Insurance Fund to Pay for Accidents in Autonomous Vehicles, 109 Nw. U.L. Rev. 803, 814 (2015).

[52] “Nevada was the first state to authorize the operation of autonomous vehicles in 2011. Since then, ten other states—Alabama, California, Florida, Louisiana, Michigan, North Dakota, Pennsylvania, Tennessee, Utah and Virginia—and Washington D.C. have passed legislation related to autonomous vehicles. Governors in Arizona and Massachusetts issued executive orders related to autonomous vehicles.” Autonomous Vehicles – Self-Driving Vehicles Legislation, Nat’l Conf. of State Legislatures (Feb. 23, 2016) (“[In 2016], 20 states introduced legislation. Sixteen states introduced legislation related to autonomous vehicles in 2015, up from 12 states in 2014, nine states and D.C. in 2013, and six states in 2012.”).

[53] For example, AVs cannot be programmed to break the law. California requires that the test vehicle and driver obey all provisions of the state Vehicle Code and the local highway laws. See Cal. Code Regs. tit. 13, § 227.18(c) (2017). However, not giving an AV the same discretion to break a minor traffic regulation (like driving on the shoulder) to avoid a collision would create unnecessary risk and could be a potential design defect. See Patrick Lin, The Ethics of Autonomous Cars, Atlantic (Oct. 8, 2013); Alexander Hars, Supervising autonomous cars on autopilot: A hazardous idea, Inventivio Innovation Briefs, Issue 2013-09 [hereinafter Supervising Autonomous Cars]. Programming AVs to break the law is perhaps not the wisest way to solve the problem. Instead, the traffic code could be updated to deem an otherwise illegal maneuver legal under certain conditions. Compare Autonomous and Automated and Connected Cars-Oh My!, supra note 36, at 653–54 (stating that traditional traffic laws should apply to first generation autonomous vehicles, but perhaps not later generations), with Benjamin I. Schimelman, How to Train A Criminal: Making Fully Autonomous Vehicles Safe for Humans, 49 Conn. L. Rev. 327, 330 (2016) (advocating that autonomous vehicles be developed to strategically break the rules of the road so they blend more easily into the existing ecosystem of human drivers).

[54] See U. Wash. Tech, Law & Pol’y Clinic, Autonomous Vehicle Law Report and Recommendations to the ULC 20 [hereinafter AV Team, Law Report] (unpublished report) (on file with University of Washington School of Law) (“Nevada, Florida, and Michigan require: If a third party makes changes to an AV and those changes cause harm, the manufacturer is not liable for damages unless the defect was present when originally manufactured.”); see also Nev. Rev. Stat. § 482A.090 (2017); Fla. Stat. § 316.86 (2017); D.C. Code § 50-2353 (2017); Mich. Comp. Laws § 257.817 (2017).

[55] See RAND, supra note 8, at 138; see Insurance Information Institute, supra note 36.

[56] See M. Ryan Calo, Open Robotics, 70 Md. L. Rev. 571, 602–07 (2011) (proposing limited immunity from liability for manufacturers of autonomous systems); see Marchant & Lindor, supra note 8, at 1337 (providing the rationale and case law for such legislative intervention).

[57] See Insurance Information Institute, supra note 36.

[58] Secretary Foxx Unveils President Obama’s FY17 Budget Proposal of Nearly $4 Billion for Automated Vehicles and Announces DOT Initiatives to Accelerate Vehicle Safety Innovations, U.S. Dep’t of Transp. (Jan. 14, 2016). NHTSA has the authority to update the Federal Motor Vehicle Safety Standards and to set emissions standards. Rules related to meeting those emissions standards are promulgated by the Environmental Protection Agency.

[59] See Nat’l Highway Traffic Safety Admin., “DOT/NHTSA Policy Statement Concerning Automated Vehicles” 2016 Update to “Preliminary Statement of Policy Concerning Autonomous Vehicles” (2016) [hereinafter 2016 Update]; Autonomous Vehicles – Self-Driving Vehicles Legislation, supra note 52.

[60] See Richard Adhikari, Feds Put AI in the Driver’s Seat, TechNewsWorld (Feb. 11, 2016, 10:19 AM). This puts liability squarely on the manufacturer by way of the AV. The updated recommendations leave liability determinations up to the states. See Kelsey D. Atherton, What You Need To Know About The New Federal Rules For Driverless Cars, Popular Sci. (Sept. 21, 2016).

[61] See Adhikari, supra note 60.

[62] See Insurance Information Institute, supra note 36.

[63] See Nat’l Highway Traffic Safety Admin., 2014 Crash Data Key Findings (Nov. 2015) [hereinafter 2014 Crash Data].

[64] See Press Release, Nat’l Highway Traffic Safety Admin., Traffic Fatalities Fall in 2014, but Early Estimates Show 2015 Trending Higher (Nov. 24, 2015).

[65] See Adrienne Lafrance, One Thing Baby Boomers and Millennials Agree On: Self-Driving Cars, Atlantic (Oct. 16, 2015).

[66] See Claire Cain Miller, When Driverless Cars Break the Law, N.Y. Times (May 13, 2014) (as Bryant Walker Smith, a fellow at Stanford University’s Center for Automotive Research, succinctly stated, “It’s the one headline, ‘machine kills child,’ rather than the 30,000 obituaries we have every year from humans killed on the roads. It’s the fear of robots. There’s something scarier about a machine malfunctioning and taking away control from somebody. We saw that in the Toyota unintended acceleration cases, when people would describe their horror at feeling like they could lose control of their car.”).

[67] See Andrew Hawkins, Voices Clash at First Public Hearing on Self-Driving Cars, Verge (Apr. 8, 2016) (commenting that NHTSA’s first public hearing on AVs lasted seven hours, with views ranging from “this is the best thing ever” to “ban self-driving cars before they kill us all”).

[68] See id.


[69] See generally 2016 Update, supra note 59.

[70] See supra note 58.


[71] See Garza, supra note 8, at 589 (“Because ‘[e]rror in legislation is common, and never more so than when the technology is galloping forward,’ it is important to avoid attempts to ‘match an imperfect legal system to an evolving world that we understand poorly.’”).

[72] See Derek H. Swanson et al., U.S. Automotive Prod. Liab. Law 3 (McGuireWoods, 2d ed. 2009).

[73] See Garza, supra note 8, at 589; Am. L. Prod. Liab. 3d § 31:10 (“In manufacturing defect cases, strict liability and negligence are distinct theories and are based on different factual predicates. While strict liability rests on a showing only of a product defect, negligence requires a showing of fault leading to a product defect.”); see generally Glancy, supra note 35, at 26; Brookings, supra note 8, at 7–8 (“While the landscape is somewhat in flux with respect to the specific theories of liability that can be invoked to pursue claims regarding manufacturing defects, design defects, and failure to warn, all three remain central to products liability law.”).

[74] See David G. Owen, The Evolution of Products Liability Law, 26 Rev. Litig. 955, 956 (2007) (stating that products liability dates back even further, “at least to Roman law, which imposed an implied warranty of quality against defects on sellers of certain goods, a rule that may be traced to ancient Babylon, one or two thousand years before”).

[75] In Winterbottom v. Wright, Mr. Winterbottom was injured when the mail coach he drove collapsed because of shoddy construction. Winterbottom’s employer, the Postmaster General, had purchased the mail coach from Mr. Wright, the manufacturer. Winterbottom sued Wright, but his case was dismissed based on a general rule that a product seller cannot be sued—even for proven negligence—by someone with whom he has not contracted, or in other words, someone with whom he is not “in privity.” See Winterbottom v. Wright, 10 M & W 109, 114 (1842); see Vernon Palmer, Why Privity Entered Tort – An Historical Reexamination of Winterbottom v. Wright, XXVII Am. J. Legal Hist. 85, 92 (1983).

[76] “The connection or relationship between two parties, each having a legally recognized interest in the same subject matter.” PRIVITY, Black’s Law Dictionary (10th ed. 2014).

[77] See MacPherson v. Buick Motor Co., 111 N.E. 1050, 1053 (N.Y. 1916) (enlarging “inherent danger” to swallow the general rule of privity). Justice Cardozo wrote, “We hold, then, that the principle of [inherent danger] is not limited to poisons, explosives, and things of like nature, to things which in their normal operation are implements of destruction. If the nature of a thing is such that it is reasonably certain to place life and limb in peril when negligently made, it is then a thing of danger. Its nature gives warning of the consequences to be expected. If to the element of danger there is added knowledge that the thing will be used by persons other than the purchaser, and used without new tests, then, irrespective of contract, the manufacturer of this thing of danger is under a duty to make it carefully.” Id.

[78] See William L. Prosser, The Fall of the Citadel (Strict Liability to the Consumer), 50 Minn. L. Rev. 791, 791 (1966) (suggesting that Henningsen v. Bloomfield Motors, Inc., 161 A.2d 69, 90 (N.J. 1960) marked the “fall of the citadel of privity”); see also Greenman v. Yuba Power Products, Inc., 377 P.2d 897 (Cal. 1963) (wherein Justice Traynor famously writes, “To establish the manufacturer’s liability it was sufficient that plaintiff proved he was injured while using the [product] in a way it was intended to be used as a result of a defect in the design and manufacture of which the plaintiff was not aware that made the [product] unsafe for its intended use.”).

[79] David G. Owen, Prod. Liab. L. 257 (3d ed. 2015). Although the Restatement (Second) of Torts uses the language “defective condition unreasonably dangerous,” Owen argues that most courts and commentators encapsulate this phrase with the use of the term “defective,” which simply means that a product is “more dangerous than it properly should be.” See id. at 258.

[80] Am. L. Prod. Liab. § 17:3 (3d ed. 2017).

[81] See id.

[82] See id.; “Allegations of defective design can also be made under any theory of liability. In negligence, the plaintiff must prove the breach of a design standard. In warranty, the question is whether the design renders the automobile unfit for its ordinary purposes. In strict liability, the issue is framed in terms of a defect that renders an automobile unreasonably dangerous. The strict liability standard is often left to the jury solely on the instruction that a defect exists if the automobile is more dangerous than an ordinary consumer would have expected.” Swanson et al., supra note 72, at 8.

[83] See Am. L. Prod. Liab. § 17:3 (3d ed. 2017).

[84] Larsen v. General Motors Corp. was the landmark case for the crashworthiness doctrine. In Larsen, the steering column of the Corvair caused head trauma above and beyond that which would have been sustained in the crash alone. See Larsen v. General Motors Corp., 391 F.2d 495, 502–03 (8th Cir. 1968). The crashworthiness doctrine is also recognized in the Restatement (Third) of Torts, which specifically adopts the theory under another name: the so-called enhanced injury doctrine. See Restatement (Third) of Torts: Prod. Liab. § 16(a) (Am. Law Inst. 1998); see also 63A Am. Jur. 2d Prod. Liab. § 931 (2d ed. 2017).

[85] See Larsen, 391 F.2d at 502.

[86] Garza, supra note 8, at 590 (citing Owen, supra note 74, at 1056–57).

[87] See id. at 594.


[88] See id.


[89] See id.


[90] See Owen, supra note 74, at 1056–28.

[91] Auto Products Liability, Conley Griggs Partin (last visited Apr. 21, 2017).

[92] Consalo v. Gen. Motors Corp., 609 A.2d 75, 76 (N.J. Super. Ct. App. Div. 1992).

[93] Restatement (Third) of Torts: Prods. Liab. § 2 cmt. a (Am. Law Inst. 1998) (“Strict liability . . . performs a function similar to the concept of res ipsa loquitur . . . .”).

[94] Garza, supra note 8, at 591.


[95] Id.

[96] See Restatement (Second) of Torts § 402A (Am. Law Inst. 1979); Salerno v. Innovative Surveillance Tech., Inc., 932 N.E.2d 101, 109 (Ill. App. Ct. 1st Dist. 2010); Linda Sharp, Annotation, Products Liability: Consumer Expectation Test, 73 A.L.R. 5th 75, *3 (1999).

[97] Sharp, supra note 96; see Salerno, 932 N.E.2d at 109.

[98] Baley v. Fed. Signal Corp., 982 N.E.2d 776, 790 (Ill. App. Ct. 1st Dist. 2012).

[99] See Garza, supra note 8, at 593.


[100] See id. at 593–94.

[101] Larsen v. General Motors Corp., 391 F.2d 495, 496–97 (8th Cir. 1968).

[102] See id. at 502.

[103] Id.

[104] Garza, supra note 8, at 591; Bruce K. Ottley, Rogelio A. Lasso & Terrence F. Kiely, Understanding Products Liability Law 137–38 (2d ed. 2013).

[105] See, e.g., Jackson v. General Motors Corp., 60 S.W.3d 800, 804 (Tenn. 2001).

[106] See Garza, supra note 8, at 601–02; Gurney, supra note 12, at 261 (“Because of the complexity of traditional automobiles, some courts hesitate to apply the consumer expectations test to most automotive accidents.”); but see Aubin v. Union Carbide Corp., 177 So. 3d 489, 493–94 (Fla. 2015) (applying the consumer expectation test, rather than the risk-utility test, to a design defect claim against an asbestos manufacturer); Jackson, 60 S.W.3d at 806 (citing Cunningham v. Mitsubishi Motors Corp., No. C-3-88-582, 1993 U.S. Dist. LEXIS 21299, at *14 (S.D. Ohio June 16, 1993)) (“This Court is simply not willing to . . . preclud[e] the use of the consumer expectation test in a situation involving a familiar consumer product which is technically complex or uses a new process to accomplish a familiar function. Many familiar consumer products involve complex technology.”).

[107] See Garza, supra note 8, at 601–02; Gurney, supra note 12, at 261; but see Aubin v. Union Carbide Corp., 177 So. 3d 489, 493–94 (Fla. 2015) (applying the consumer expectation test, rather than the risk-utility test, to a design defect claim against an asbestos manufacturer); Jackson, 60 S.W.3d at 806 (citing Cunningham v. Mitsubishi Motors Corp., No. C-3-88-582, 1993 U.S. Dist. LEXIS 21299, at *14 (S.D. Ohio June 16, 1993)).


[108] Cory Doctorow, The Problem with Self-driving Cars: Who Controls the Code?, Guardian (Dec. 23, 2015, 7:00 PM) (noting that the Trolley Problem was first posed by Philippa Foot).

[109] See Jared Newman, How to Make Driverless Cars Behave, Time (June 6, 2014).

[110] See Zohn, supra note 10, at 464 (examining how civil liability will attach to autonomous vehicle accidents).

[111] Garza, supra note 8, at 595 (discussing the application of products liability law in accidents by autonomous vehicles).

[112] See How do elevators work, DiscoveryKids (last visited Apr. 15, 2017).


[113] See Kyle Colonna, Autonomous Cars and Tort Liability, 4 Case W. Res. J.L. Tech. & Internet 81, 93 (2012) (distinguishing between elevators and AVs, but concluding strict liability would apply to both).

[114] Zohn, supra note 10, at 483; see Willoughby v. Montgomery Elevator Co., 87 S.W.3d 509, 512 (Tenn. Ct. App. 2002); see also Cent. of Ga. Ry. Co. v. Lippman, 36 S.E. 202, 207 (Ga. 1900) (stating that common carriers usually cannot avoid liability for negligence).

[115] See Supervising Autonomous Cars, supra note 53, at 2.

[116] See Jerry Hirsch, 253 million cars and trucks on U.S. roads; average age is 11.4 years, L.A. Times (June 9, 2014); Rose Eveleth, A Map of Every Passenger Plane in the Skies at This Instant, Smithsonian Mag. (Sept. 17, 2012).


[117] In fact, Elon Musk, co-founder of Tesla, predicted that human driving will be outlawed within twenty years. See Josh Lowensohn, Elon Musk: Cars You Can Drive Will Eventually be Outlawed, Verge (Mar. 17, 2015, 2:40 PM).

[118] See RAND, supra note 8, at xx.

[119] See Supervising Autonomous Cars, supra note 53, at 1.

[120] See id. at 1-2; Zohn, supra note 10, at 482 (arguing the elimination of “appeal of this product to elderly, disabled, or other individuals that would otherwise struggle with operating an automobile” is a necessary evil to assure a competent driver remains ready to take the wheel of an AV).

[121] See History of Seat Belts in the U.S., Bisnar Chase (last visited Apr. 18, 2017).


[122] See Garza, supra note 8, at 603 (“While analogizing vehicle restraint and air bag statistics to OAVs is admittedly an apples-to-oranges affair, these statistics may be indicative of how the benefits of autonomous vehicle technologies are likely to be perceived.”).

[123] See generally Karinna Hurley, How Pedestrians Will Defeat Autonomous Vehicles, Sci. Am. (Mar. 21, 2017) (discussing the implications of autonomous vehicles on pedestrians and traffic flow).

[124] Seatbelts and airbags rate at level 0, but cruise control rates at level 1 and adaptive cruise control at level 2. See Heaps, supra note 15, at 3–5.

[125] Cruise control is also a mechanical feature, but a complex one to which courts have had a mixed reaction, applying either the consumer-expectation or the risk-utility test while leaning away from consumer expectation. See Garza, supra note 8, at 600–03.

[126] See Cohen, supra note 11, at 332. “Manufacturing defects claims in the autonomous vehicle context face a significant complication: courts have not applied the manufacturing defect doctrine to software because nothing tangible is manufactured. Because of this, a plaintiff will not be able to allege under a manufacturing defect theory that the software erred, rather the plaintiff will want to allege that the autonomous technology did not meet manufacturing specifications. This will be tricky for a plaintiff to do if the defect is really a software error (algorithm).” Gurney, supra note 12, at 259; see also Jessica S. Brodsky, Autonomous Vehicle Regulation: How an Uncertain Legal Landscape May Hit the Brakes on Self-Driving Cars, 31 Berkeley Tech. L.J. 851, 863–64 (2016) (discussing that because software is not a product “courts have used the economic loss doctrine to limit liability when an economic loss is suffered due to software failure but have also allowed tort actions to proceed when software glitches lead to actual physical harm.”).

[127] A “malfunction theory” uses a “res ipsa loquitur like inference to infer defectiveness in strict liability where there was no independent proof of a defect in the product.” Garza, supra note 8, at 591.

[128] See id.


[129] “Some jurisdictions do not recognize the malfunction doctrine. Courts that do apply the doctrine hesitate to apply it to claims in a widespread fashion and typically require a showing of unique circumstances before applying it. When applying the doctrine to traditional vehicles, some courts require that the vehicle was relatively new and that the vehicle part was not repaired. An expert is usually required to show that the accident could not have been caused by anything other than the alleged defect. These limitations, along with the fact that some jurisdictions do not recognize the malfunction doctrine, limit the usefulness of the doctrine, making it difficult to apply for autonomous vehicles.” Gurney, supra note 12, at 260.

[130] See Garza, supra note 8, at 591–92; Gurney, supra note 12, at 260–61; Cohen, supra note 11, at 332–33.

[131] See Restatement (Third) of Torts: Prods. Liab. § 2 cmt. g (Am. Law Inst.1998) (“[C]onsumer expectations do not constitute an independent standard for judging the defectiveness of product designs.”).

[132] But see Gurney, supra note 12, at 261 (“Although autonomous technology could be considered ‘complex,’ developing consumer expectations does not require knowledge of the complexity.”).

[133] It could be argued that other manufacturers are similarly situated without confusing consumers or courts. For example, cigarette manufacturers must place warnings on their products, all the while advertising and selling their wares. The effects of tobacco use are widely known, and even though manufacturers must now place a warning on their products, they were not the first to decry the unhealthy effects of tobacco use. Conversely, the perils of AV use are not widely known (although they may be widely assumed by consumers, either accurately or without any factual basis). Making AV manufacturers responsible for disseminating detrimental information about a fledgling technology (which carries substantial societal benefits) is therefore not akin to requiring cigarette makers to place a warning on their product (which carries no substantial societal benefit)—which they did only after their addictive product was established in the marketplace, and after years of litigation. AV manufacturers would have little incentive to fully disclose potential risks: even though doing so might allow manufacturers to present an assumption-of-risk defense, the defense would extend only to occupants of the AV, not to victims outside the AV, and courts often refuse to recognize the defense, instead lumping it into a comparative negligence analysis. See Marchant & Lindor, supra note 8, at 1336–37.

[134] See Gurney, supra note 12, at 262; Restatement (Third) of Torts: Prods. Liab. § 2(b) (Am. Law Inst.1998).

[135] See RAND, supra note 8, at xxii (noting AV technology would bring about a “decreased number of crashes and associated lower insurance costs”); see also Brookings, supra note 8, at 2 (noting AV technology would “increase safety on highways by reducing both the number and severity of accidents”).

[136] See Glancy, supra note 35, at 26 (“To the extent that such litigation does occur, it is likely to be technologically challenging and more than usually expensive”).

[137] See Brookings, supra note 8, at 8–9.


[138] See, e.g., Burden of Proving Feasibility of Alternative Safe Design in Products Liability Action Based on Defective Design, 78 A.L.R. 4th 154, *3.

[139] See Gurney, supra note 12, at 265–66.

[140] See Redbend White Paper, supra note 26, at 2.

[141] See Owen, supra note 74, at 400 (discussing that state jurisdictions are split as to whether to admit into evidence subsequent remedial measures); see also Fed. R. Evid. 407; Christopher B. Mueller, Laird C. Kirkpatrick & Charles H. Rose, Evidence Practice Under the Rules 231 (3d ed. 2009) (“FRE 407 bars evidence of subsequent remedial measures to prove negligence, culpable conduct, product or design defects, or the need for a warning or instruction.”).

[142] See Gurney, supra note 12, at 257–58 (“[S]ince this analysis is focusing on Google Cars and crashworthiness is concerned with the structure and design of the vehicle, the analysis of a vehicle’s crashworthiness would be the same for a vehicle with autonomous technology and one without autonomous technology”).

[143] See Newman, supra note 109.

[144] See generally Nicholas Stringfellow, Law and the Problem of Autonomous Cars, Colum. Sci. & Tech. L. Rev. (Nov. 22, 2015) (raising the issue that autonomous cars should be able to account for minimizing loss or injury when a crash is inevitable).

[145] See discussion infra Part IV.

[146] See Stringfellow, supra note 144; see also Jonathan O’Callaghan, Should a Self-Driving Car Kill its Passengers in a “Greater Good” Scenario?, IFLScience (Oct. 26, 2015) (reporting the results of Amazon’s Mechanical Turk, an online crowdsourcing tool, which presented respondents with a modern trolley car scenario: “on the whole, people were willing to sacrifice the driver in order to save others, but most were only willing to do so if they did not consider themselves to be the driver. While 75% of respondents thought it would be moral to swerve, only 65% thought the cars would actually be programmed to swerve”).

[147] See Stringfellow, supra note 144; see O’Callaghan, supra note 146.


[148] See Larsen v. General Motors Corp., 391 F.2d 495, 502 (8th Cir. 1968).

[149] See Jean-François Bonnefon, Azim Shariff & Iyad Rahwan, Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?, arXiv, at 10 (Oct. 13, 2015).

[150] See id. at 8; see also O’Callaghan, supra note 146.

[151] See supra note 121 and accompanying text.

[152] See Colonna, supra note 113, at 93, 97, 99.

[153] See Nick Belay, Note, Robot Ethics and Self-Driving Cars: How Ethical Determinations in Software Will Require a New Legal Framework, 40 J. Legal Prof. 119, 129 (2015) (advising a legislative solution with a reasonableness standard).

[154] See Nat’l Highway Traffic Safety Admin., Event Data Recorders 24, 28 (2006) (codified at 49 C.F.R. pt. 563).

[155] Garza, supra note 8, at 604; see also Marchant & Lindor, supra note 8, at 1333; see also Jeremy Levy, No Need to Reinvent the Wheel: Why Existing Liability Law Does Not Need to be Preemptively Altered to Cope with the Debut of the Driverless Car, 9 J. Bus. Entrepreneurship & L. 355, 381 (2016) (discussing a comparison of human drivers to AVs under a consumer expectation test).

[156] See U.S. Dep’t of Transp., Nat’l Highway Traffic Safety Admin., Fed. Automated Vehicles Policy, 1, 5 (2016).


[157] NHTSA has reserved the 5.9 GHz spectrum for V2V (“vehicle-to-vehicle”) communication. Press Release, Nat’l Highway Traffic Safety Admin., Transportation Sec. Foxx Announces Steps to Accelerate Road Safety Innovation (May 13, 2015).

[158] V2V communication systems use short range radio to “talk” to each other. The Department of Transportation estimates V2V will avoid 76% of roadway crashes. See Insurance Information Institute, supra note 36. But see RAND, supra note 8, at xx; see also Brachmann, IPWatchdog, supra note 8.

[159] See Marchant & Lindor, supra note 8, at 1339-40.

[160] See Zohn, supra note 10, at 477 (finding this flaw in applying the risk-utility test to self-driving cars); see also Paul A. Eisenstein, Driver Becomes ‘Co-Pilot’ in the Self-Drive Car, NBC News (Aug. 28, 2013, 11:17 AM) (discussing the Nissan Leaf autonomous vehicle being developed).

[161] But see Jeffrey K. Gurney, Crashing into the Unknown: An Examination of Crash-Optimization Algorithms Through the Two Lanes of Ethics and Law, 79 Alb. L. Rev. 183, 227 (2016) [hereinafter Crashing into the Unknown] (stating that intent cannot be inferred from AV software for purposes of an intentional tort because, “certainly if the manufacturer had its choice, no one would ever be harmed by its car”); see also Nathan A. Greenblatt, Self-Driving Cars Will Be Ready Before Our Laws Are, IEEE Spectrum (Jan. 19, 2016) (arguing for an application of ordinary negligence laws to AVs).

[162] Patrick Lin, The Ethics of Autonomous Cars, Atlantic (Oct. 8, 2013).

[163] See Levy, supra note 155, at 382 (“The burden of expert testimony in such cases evaluating the technology would also be high, and could result in challenges due to protection of trade secrets in scrutinizing a company’s technology.”); Crashing into the Unknown, supra note 161, at 236 (discussing how a safer crash-optimization algorithm would require various experts). Compare Chris Savoie, IoT, the Internet of Threats? Novel Liability Issues for Connected, Autonomous Vehicles and Intelligent Transportation Systems, 12 NO. 3 ABA SciTech Lawyer 12, 15 (Spring 2016) (stating that finding the reasonableness of a decision making matrix in AVs would involve complex expert testimony which would be confusing to jurors and expensive to litigants creating “a disincentive for lawyers to take on relatively small cases (small to the attorney but significant to the injured party)”), with Matt McFarland, Who’s responsible when an autonomous car crashes?, CNNTech (July 7, 2016, 2:00 PM) (stating that design defect litigation may open up new class action lawsuits brought by consumers alleging a design defect in AV software damages the resale value of the AV).

[164] See Cal. Code Regs. tit. 13, § 227.18 (2014). Most traffic codes do not have this requirement, presumably because the traffic code was written for traditional vehicles possessing these features. However, in “California, Nevada, Michigan, and Florida, test drivers must be able to reassume immediate control at any time in the event of an AV failure or emergency, which requires two things: [1] There must be a driver’s seat with a steering wheel and pedals; [and] [2] The driver must be in the driver’s seat and monitoring safe operation at all times.” AV Team, Law Report, supra note 54, at 4; see also Nev. Rev. Stat. § 482A.060 (2013); Mich. Comp. Laws Serv. § 257.665(1) (LexisNexis 2016). Additionally, insurance policies require that a driver remain at the ready. See HDI Gerling, supra note 51.

[165] See Zohn, supra note 10, at 478 (“All autonomous vehicles that are currently being designed have an emergency override switch that will enable drivers to manually take over driving should they feel it is necessary.”); Andrew R. Swanson, Comment, ‘‘Somebody Grab the Wheel!”: State Autonomous Vehicle Legislation and the Road to a National Regime, 97 Marq. L. Rev. 1085, 1091 (2014) (describing the override function on autonomous vehicles).

[166] See Robert Sykora, The Future of Autonomous Vehicle Technology as A Public Safety Tool, 16 Minn. J.L. Sci. & Tech. 811, 818 (2015) (“Kill-switch complications continued to vex insurers, however. Multiple occupants in a single AV created a ‘who’s in charge?’ confusion when each thought the other to have responsibility to hit the kill-switch. With no pedals and no wheel, there was no clear ‘driver’s seat,’ so actual responsibility remained somewhat ambiguous.”). Additionally, a “number of states already have statutes that impose liability on registered owners of run-away vehicles, which are often described in the statutes as ‘driverless vehicles.’ These ‘driverless car’ statutes impose liability on registered owners as presumed ‘drivers.’ Since there may be no humans at all in autonomous cars used to transport only cargo, either these statutes or some form of vicarious liability may impose damages liability on either the autonomous car’s owner or its operator.” Glancy, supra note 35, at 27-28.

[167] “Allowing the operation of an autonomous vehicle without a driver aboard is risky this early in the development of the technology. While the goal may be to enable things like the parking of the vehicle after a human has been dropped off, there are many foreseeable situations in which the vehicle will incorrectly interpret road signs, parking-garage signs, or subtle communications with another driver in the tight quarters of a parking garage – all situations in which human intervention may be required. While these challenges are likely surmountable in the medium to long-term, regulators should be wary of allowing [autonomous vehicles] to operate without humans aboard in the near future.” AV Team, Law Report, supra note 54, at 21-22.

[168] See supra text accompanying notes 44, 120.


[169] See Zohn, supra note 10, at 482 (“[A]t least in the early years of this technology, it is reasonable to impose the expectation on autonomous cars to make sure the owners are using it responsibly.”); but see Glancy, supra note 35, at 4 (“It is unclear whether there will still be some form of dashboards, steering wheels, accelerator and brake pedals.”).

[170] See generally Cal. Veh. Code § 38750(c)(1)(G) (2017) (requiring crash data recorders, with detailed requirements for their use, in autonomous vehicles sold to the public in California, but not requiring them for testing); but see Nev. Rev. Stat. Ann. § 482A.060 (2017) (requiring recorders in Nevada on autonomous vehicles used for testing as well as autonomous vehicles offered for sale to the public); NHTSA Preliminary Statement, supra note 15, at 14 (stating that NHTSA recommends test vehicles have crash-data recorders); see also Dr. Sven A. Beiker, Legal Aspects of Autonomous Driving, 52 Santa Clara L. Rev. 1145, 1152 (2012) (proposing data recorders in AVs should be mandatory).

[171] See generally 2014 Crash Data, supra note 63 (reporting that over 32,000 people perished with human error being the critical factor 94% of the time).

[172] See Gurney, supra note 12, at 267-68 (discussing the applicability and weaknesses of this comparative-fault defense).

[173] See AV Team, Law Report, supra note 54, at 11.

[174] See Matt McFarland, Here’s Volvo’s concept of a self-driving car’s interior, Wash. Post (Nov. 18, 2015).

[175] See Jim Motavalli, Automakers Rethink Seats for Self-Driving Cars, N.Y. Times (Jan. 15, 2015).

[176] See Stringfellow, supra note 144 and accompanying text.

[177] See Joel Achenbach, Driverless cars are colliding with the creepy Trolley Problem, Wash. Post (Dec. 29, 2015).


[178] Id.

[179] See Matt Richtel & Conor Dougherty, Google’s Driverless Cars Run Into Problem: Cars With Drivers, N.Y. Times (Sept. 1, 2015).

[180] See Achenbach, supra note 177.

[181] See Statt, supra note 2.

[182] See Richtel & Dougherty, supra note 179.

[183] Stringfellow, supra note 144; Achenbach, supra note 177.

[184] See id.

[185] Stringfellow, supra note 144; see also Jonathan O’Callaghan, Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario?, IFLScience (Oct. 25, 2015) (reporting the results of Amazon’s Mechanical Turk, an online crowdsourcing tool, which presented respondents with a modern trolley car scenario: “on the whole, people were willing to sacrifice the driver in order to save others, but most were only willing to do so if they did not consider themselves to be the driver. While 75% of respondents thought it would be moral to swerve, only 65% thought the cars would actually be programmed to swerve.”); see also Bonnefon, Utilitarian Cars, supra note 149.

[186] See supra note 150 and accompanying text.

[187] See, e.g., Programming Safety into Self-Driving Cars, Nat’l Sci. Found. (Feb. 2, 2015) (introducing algorithms designed to incorporate adequate safety controls in semi-autonomous vehicles).

[188] See, e.g., Why Self Driving Cars Must be Programmed to Kill, MIT Tech. Rev. (Oct. 22, 2015) (discussing findings that show “[p]eople are in favor of cars that sacrifice the occupant to save other lives—as long they don’t have to drive one themselves”).

[189] Achenbach, supra note 177. Notably, in 2016 Joshua Brown died when his Tesla autopilot system failed to recognize a tractor-trailer turning in front of his Model S and collided with it. The NHTSA is investigating the incident, but there has been no indication that this fatal crash has stymied demand for AVs. See Matt McFarland, Who’s responsible when an autonomous car crashes?, CNNTech (Jul. 7, 2016, 2:00 PM).

[190] This defense may also apply to claims of design defect as to the algorithm itself. A plaintiff could allege that the algorithm could have been written better, but the manufacturer could argue that assessing a new risk that precipitates the accident was technologically infeasible at the time. See Gurney, supra note 12, at 269.

[191] See Redbend White Paper, supra note 21 and accompanying text.

[192] See Pröckl, Självkörande bilar, supra note 8, at 4-5 (discussing how AVs can be hacked).

[193] See id.

[194] See RAND, supra note 8, at xxi. “Telematics is a general term that refers to any device which merges telecommunications and informatics. Telematics includes anything from GPS systems to navigation systems. It is responsible for many features in vehicles from OnStar to hands free mobile calling.” What is Telematics?, Telematics (last visited Apr. 18, 2017) [hereinafter Telematics].

[195] V2V communication systems use short range radio to “talk” to each other. The Department of Transportation estimates V2V will avoid 76% of roadway crashes involving at least one light vehicle. See NHTSA, Frequency of Target Crashes for IntelliDrive Safety Systems 1, 6 (2010); but see RAND, supra note 8, at xx; see also AV Team, Law Report, supra note 54, at 14-21.

[196] RAND, supra note 8, at 75.

[197] Id.

[198] Nat’l Highway Traffic Safety Admin., V2V Fact Sheet (last visited Apr. 11, 2017).

[199] Id.

[200] See id.

[201] Id.


[202] RAND, supra note 8, at 76.

[203] Id. at 75.


[204] Nathan Bomey, Self-driving Car Leaders ask for National Laws, USA Today (Mar. 15, 2016, 10:27 PM).

[205] Id.

[206] See BI Intelligence, NHTSA releases self-driving car guidelines, Bus. Insider (Sept. 21, 2016, 2:24 PM).


[207] See id.

[208] See Press Release, US urged to establish nationwide Federal guidelines for autonomous driving, Volvo Car Group (Oct. 7, 2015) [hereinafter Volvo Press Release]; see also Kirsten Korosec, Volvo CEO: We will accept all liability when our cars are in autonomous mode, Fortune (Oct. 7, 2015, 3:34 PM).

[209] See Volvo Press Release, supra note 208.

[210] Id.

[211] Id.; see Korosec, supra note 208.

[212] See Gurney, supra note 12, at 272 (stating that manufacturers could “adjust the price of the autonomous vehicles to compensate them for the cost of liability”).

[213] See id. at 273 (stating that “[p]eople would probably be willing to pay more for autonomous cars knowing that the manufacturer will be liable for accidents caused while the vehicle is in autonomous mode”).

[214] See Mark Harris, Why You Shouldn’t Worry About Liability for Self-Driving Car Accidents, IEEE Spectrum (Oct. 12, 2015, 8:00 PM).

[215] See Alexander Hars, Volvo’s liability promise for autonomous mode may cut out insurance companies and independent repair shops, Driverless car market watch (Oct. 24, 2015) [hereinafter Volvo’s liability promise].

[216] See Marchant & Lindor, supra note 8, at 1331.

[217] See id. at 1338.

[218] One critique of total preemption is that it may lead to victims subsidizing corporations. See Christopher B. Dolan, Self-Driving Cars & the Bumpy Road Ahead, Trial (Feb. 2016).

[219] See Sophia H. Duffy & Jamie Patrick Hopkins, Sit, Stay, Drive: The Future of Autonomous Car Liability, 16 SMU Sci. & Tech. L. Rev. 453, 479 (2013) (arguing that negligent driving can, in effect, be eliminated by autonomous cars).

[220] See RAND, supra note 8, at xxii (asserting that Congress could preempt state tort law to limit manufacturer liability, or in the alternative create a non-rebuttable presumption of human control in AVs); see also Marchant & Lindor, supra note 8, at 1338-39 (discussing preemption of state tort action by the Federal Motor Vehicle Safety Standards).

[221] See generally RAND, supra note 8 (listing numerous benefits of autonomous vehicles).

[222] See Adrienne LaFrance, Self-Driving Cars Could Save 300,000 Lives Per Decade in America, Atlantic (Sept. 29, 2015); see also Morbidity and Mortality Weekly Report, CDC (May 20, 2011).

[223] See LaFrance, supra note 222.

[224] See Insurance Information Institute, supra note 36.

[225] See generally Schroll, supra note 51 (utilizing the metaphor “splitting the bill” to describe a national car insurance fund to pay for accidents involving AVs).

[226] See id.; see generally Sophia H. Duffy & Jamie Patrick Hopkins, Sit, Stay, Drive: The Future of Autonomous Car Liability, 16 SMU Sci. & Tech. L. Rev. 453 (2013); see Garza, supra note 8; see Marchant & Lindor, supra note 8, at 1327-28; see generally Julie Goodrich, Comment, Driving Miss Daisy: An Autonomous Chauffeur System, 51 Hous. L. Rev. 265, 269-70 (2013).

[227] See Schroll, supra note 51, at 810.

[228] Id. at 822.

[229] See Volvo’s liability promise, supra note 215.

[230] See Achenbach, supra note 177.


[231] See id.

Calling an End to Culling: Predictive Coding and the New Federal Rules of Civil Procedure

Serhan Publication Version PDF

Cite as: Stephanie Serhan, Calling an End to Culling: Predictive Coding and the New Federal Rules of Civil Procedure, 23 Rich. J.L. & Tech. 5 (2016).

Stephanie Serhan*

Table of Contents

I.     Introduction. 2

II.     Why Timing Matters in Predictive Coding. 4

A.      The Technical Difference Between the Two Methods. 5

B.     The Practical Implications in Applying the Two Methods. 6

III.     Court Decisions and the New Federal Rules. 11

A.     Court Decisions under the Old Rules. 11

1.      Ex-Ante Permissibility of Predictive Coding. 11

2.     Ex-Post Permissibility of Keyword Culling. 15

B.     Reinforcement of Court Decisions under the New Rules. 21

1.     Recent Amendments to the Rules. 21

2.     Subsequent Reactions to the New Rules. 25

IV.     Encouraging Predictive Coding Ex Ante. 28

A.     Why Predictive Coding Ex Ante is Preferable. 28

B.     How Parties and Courts Should Proceed. 30

V.     Conclusion. 35

I. Introduction 

[1]       In corporate litigation and dispute resolution, discovery is often a significant undertaking for both the producing and requesting parties. Each party’s approach during discovery is usually guided by considerations of efficiency and accuracy. One area of discovery in which parties prioritize these considerations is the implementation of predictive coding. Several studies have shown that predictive coding is substantially more efficient and accurate than traditional methods of conducting discovery.[1]

[2]       The method of predictive coding begins with a senior attorney who is intimately familiar with the case identifying relevant and irrelevant documents to create a “seed set.”[2] This seed set is then fed into the predictive coding software, training it to determine which documents are relevant and to suggest other documents that may also be relevant.[3] Additionally, the attorney might review a random sample of documents;[4] or the attorney could feed in words, phrases, and concepts that are appropriate to the case, and the software can subsequently find similar phrases with linguistic or sociological relevance.[5] The aim of the method is to identify the most relevant documents to produce to the requesting party.
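By way of illustration only, the seed-set workflow just described can be sketched in a few lines of Python. Everything in this fragment — the toy term-counting rule, the sample documents, and the function names — is a hypothetical simplification of the author’s, not the algorithm of any actual predictive coding product:

```python
from collections import Counter

def tokenize(doc):
    """Split a document into lowercase terms, stripping basic punctuation."""
    return [w.strip(".,").lower() for w in doc.split()]

def train_seed_model(seed_set):
    """Build term frequencies from attorney-labeled seed documents."""
    relevant, irrelevant = Counter(), Counter()
    for text, label in seed_set:
        (relevant if label == "relevant" else irrelevant).update(tokenize(text))
    return relevant, irrelevant

def score(model, doc):
    """Toy score: net count of terms seen more often in relevant seed docs."""
    relevant, irrelevant = model
    return sum(
        1 if relevant[t] > irrelevant[t] else -1 if irrelevant[t] > relevant[t] else 0
        for t in tokenize(doc)
    )

# The senior attorney labels a small seed set...
seed = [("merger negotiation pricing terms", "relevant"),
        ("pricing terms for the merger", "relevant"),
        ("office holiday party schedule", "irrelevant")]
model = train_seed_model(seed)

# ...and the trained model then ranks unreviewed documents by predicted relevance.
unreviewed = ["draft merger pricing memo", "party catering invoice"]
ranked = sorted(unreviewed, key=lambda d: score(model, d), reverse=True)
```

Commercial software replaces this crude term counting with statistical machine-learning models, but the workflow is the same: label a seed set, train the software, then rank the unreviewed documents.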

[3]       Within predictive coding, tension between efficiency and accuracy frequently arises in deciding the appropriate time at which to apply predictive coding. This timing concern has sparked numerous debates, as well as a split in court opinions. The issue parties and courts address is whether predictive coding should be applied at the outset of discovery to an entire universe of documents, or only after keyword culling.

[4]       This issue now arises in virtually every important case involving large volumes of documents in discovery. Addressing it is important to the parties involved because it has profound implications for efficiency and accuracy. Courts have also been asked to address this question, but have offered little guidance on when to implement predictive coding in a case. Rule 1 of the Federal Rules of Civil Procedure addresses this exact balance as a trade-off between the just resolution and the efficiency of a case, a balance that often arises in issues concerning discovery.[6] The recent amendments to the Federal Rules of Civil Procedure further emphasize this trade-off.[7]

[5]       This paper examines the impact of the most recent amendments to the Federal Rules of Civil Procedure on the current split between courts about whether predictive coding should be applied at the outset or to a set of keyword-culled documents. Since the new Rules explicitly implement the concept of proportionality and a new set of standards in Rule 26, I argue that applying predictive coding at the outset is more compliant with the Federal Rules of Civil Procedure. Part II will explain the difference in timing between applying predictive coding after keyword culling or prior to it, and discuss the implications of accuracy and efficiency. Part III will first discuss the split between courts regarding the two methods prior to the recent amendments to the Rules, and subsequently, it will discuss reactions by courts and scholars regarding the applicability after the amendments to the Rules. Part IV will argue that the method of applying predictive coding at the outset is more compliant with the new amendments to the Rules since it is more accurate, and it will suggest that parties and courts should begin to implement these changes. Ultimately, this proposal will improve accuracy, without jeopardizing efficiency, with the goal of achieving the just resolution of a case.

II. Why Timing Matters in Predictive Coding

[6]       During the process of discovery, parties often face a choice regarding which method to use on large volumes of documents. Predictive coding has recently become a predominant method through which attorneys and parties alike may narrow down the universe of documents in an efficient and accurate manner.[8] However, parties differ over the appropriate time at which predictive coding should be used in the discovery process, which has created two methods that differ only in timing. The two methods are: (i) the use of predictive coding at the outset, or (ii) the use of predictive coding after keyword culling documents. This Part explains the technical difference between these two methods, as well as the practical implications in applying each of these methods.

            A. The Technical Difference Between the Two Methods

[7]       Regarding the timing of when to apply predictive coding, the two methods are: (i) the use of predictive coding at the outset, or (ii) the use of predictive coding after keyword culling. The first method involves applying predictive coding at the beginning of the discovery phase; the second method involves keyword culling documents first, and subsequently applying predictive coding to the keyword-culled documents. Each of these methods will be explained separately.

[8]       The first method provides the option of applying predictive coding to the entire universe of documents at the beginning of the discovery phase. All documents are gathered, and the predictive coding technology is applied to all of the documents at the outset as a whole.[9] Applying predictive coding to all documents means no prior method, such as keyword culling, has narrowed down the universe of documents. The use of predictive coding will narrow down the universe of documents based on which documents are relevant, or predicted to be relevant, through a programmed algorithm.[10] Alternatively, the second method allows a party to apply predictive coding to a set of documents that has already been reduced in size by keyword search techniques, frequently referred to as “keyword culling.” To perform keyword culling, a party begins with the entire universe of documents that pertain to a case and narrows it down by searching for keywords. Through this method, documents are identified as relevant or irrelevant based on those search terms; only the matching documents remain, a much smaller set. These remaining documents are referred to as the keyword-culled documents, and predictive coding is subsequently applied only to them.[11]
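The structural difference between the two methods can likewise be sketched. In this hypothetical Python fragment (again the author’s own simplification, with a deliberately trivial stand-in “classifier”), the keyword-culling pipeline never shows the predictive model a relevant document that happens to use an alternative phrasing:

```python
def classify_relevant(doc):
    # Stand-in for a trained predictive coding model: here, any document
    # about ending employment is treated as relevant.
    return any(t in doc.lower() for t in ("fired", "terminated", "dismissal"))

def predictive_at_outset(corpus):
    # Method (i): the model reviews the entire universe of documents.
    return [d for d in corpus if classify_relevant(d)]

def cull_then_code(corpus, keywords):
    # Method (ii): keyword culling first; the model sees only the survivors.
    culled = [d for d in corpus if any(k in d.lower() for k in keywords)]
    return [d for d in culled if classify_relevant(d)]

corpus = ["Employee was fired after the audit",
          "HR notes: terminated for cause",   # alternative phrasing, no keyword
          "Cafeteria menu for March"]

outset = predictive_at_outset(corpus)
culled = cull_then_code(corpus, keywords=["fired"])
```

The document phrased “terminated” rather than “fired” is identified under method (i), but under method (ii) it is eliminated before the model ever sees it.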

            B. The Practical Implications in Applying the Two Methods

[9]       These two methods have significant implications regarding a party’s monetary expenditures and time spent, which relates to important concerns of accuracy and efficiency in choosing between these two methods. Regarding accuracy, the use of predictive coding at the outset provides a much more accurate return of relevant documents than keyword culling.[12] Applying predictive coding on the entire set of documents is the most accurate method in identifying relevant documents because it is applied to all documents, rather than the ones selected by keyword culling.[13] Keyword culling is not as accurate because the party may lose many relevant documents if the documents do not contain the specified search terms, have typographical errors, or use alternative phraseologies.[14] The relevant documents removed by keyword culling would likely have been identified using predictive coding at the outset instead.[15] Therefore, keyword culling is not as accurate as predictive coding when used on the entire set of documents at the outset.

[10]     Regarding efficiency, both methods provide efficient returns, depending on how efficiency is defined. The use of predictive coding at the outset can be beneficial in narrowing down documents based on even “‘linguistic’ or ‘sociological’” relevance.[16] Another efficiency benefit is that the technology is programmed at the outset and can identify the most relevant documents.[17] Keyword culling, on the other hand, narrows down the universe of documents by conducting a keyword search that does not identify other potentially relevant documents, but simply searches through the documents using the keywords that are chosen.[18] The keyword search can be quickly applied to a set of documents to determine which documents to keep and which to remove.[19] Keyword culling can be useful since it narrows down the universe of documents to a much smaller number, as it does not predict other potentially relevant documents.[20] It may be quicker to simply apply keyword searches prior to predictive coding to limit the number of documents that need to be coded, but once again, this comes at the cost of accuracy in revealing responsive documents.[21]

[11]     Furthermore, prior to keyword culling, the parties often spend significant amounts of time discussing which keywords to employ in the search.[22] This back and forth between the parties frequently results in disagreement.[23] The danger is that the inputted terms for searching might be “over- or underinclusive, either returning large amounts of irrelevant documents or failing to capture relevant ones.”[24] Consequently, “…the requesting party may ask for additional search terms or request that the producing party takes steps to verify the completeness of production.”[25]

[12]     Since predictive coding would be employed under each of the two methods, the costs associated with each are not significantly different. The majority of costs associated with predictive coding come from: the time of a senior attorney who is intimately familiar with the case, the cost of employing a company that has the available technology and software to run predictive coding, and the time associated with training the software to identify relevant documents.[26] These three categories of costs will be incurred regardless of which of the two methods is employed.

[13]     Where monetary costs and time spent may vary between the two methods is in a senior attorney’s identification of potentially relevant documents and in training the software on a larger universe of documents. In predictive coding, there may be a larger universe of potentially relevant documents, simply because the software is more accurate in predicting which documents may be relevant.[27] Keyword culling, on the other hand, eliminates many documents, even potentially relevant ones.[28] The reason is that searching by keywords has no “predictive” feature; it merely eliminates any documents that do not contain the inputted words and phrases.[29] Accordingly, the cost differential between these two methods lies not in the cost of the predictive coding technology, but in the time it takes to identify the potentially relevant documents, as well as the resulting production of those documents.

[14]     In sum, both methods employ predictive coding but at different stages in the discovery process. Predictive coding at the outset is substantially more accurate than applying predictive coding after keyword culling.[30] The main costs associated with predictive coding will be the same, but since predictive coding at the outset is applied to more documents than keyword-culled documents, there may be additional time spent in training the software.[31] Therefore, the actual cost of predictive coding will likely be roughly equal in both methods since the majority of the costs will be incurred in both.

[15]     The remainder of this paper will discuss how this trade-off between accuracy and efficiency has been approached by several courts, litigating parties, and the Federal Rules of Civil Procedure in choosing the appropriate time to apply predictive coding.

III. Court Decisions and the New Federal Rules

[16]     This Part will first address how courts have dealt with the issue, which developed a split in court decisions between applying predictive coding at the outset versus applying it on keyword-culled documents. Second, this Part will describe the recent amendments to the Federal Rules of Civil Procedure, as well as the subsequent reactions of courts and scholars.

            A. Court Decisions under the Old Rules

[17]     Prior to the recent amendments to the Federal Rules of Civil Procedure, parties and courts were aware of the concept of proportionality, but there have been various outcomes in different cases. In the past few years, the split in authority regarding the timing of predictive coding has spurred important realizations of accuracy and efficiency. The discussion below will reveal that some courts encouraged predictive coding at the outset, while some have allowed defendants to employ keyword culling first. These perspectives often depend on what the parties had mutually agreed on, what the parties had already accomplished, and the specific issue in the case. The arguments for each method are usually party-driven, as requesting parties argue for a broader scope of discovery to find the maximum amount of relevant documents, whereas producing parties tend to argue for a narrower scope of discovery to produce fewer documents.[32]

1. Ex-Ante Permissibility of Predictive Coding

[18]     Courts have routinely found that the application of predictive coding at the outset is appropriate. For example, in the 2012 landmark decision of Da Silva Moore v. Publicis Groupe SA, the Southern District of New York found that predictive coding at the outset was appropriate.[33] The discovery issue in this case was whether predictive coding should be used at the outset, compared to other methods of discovery, including keyword culling.[34] The defendants had gathered approximately three million emails, a sizable volume of documents.[35]

[19]     The defendants sought to use predictive coding, and although the plaintiffs voiced their concerns, the plaintiffs were not opposed to predictive coding.[36] Magistrate Judge Peck allowed the use of predictive coding and emphasized the concept of proportionality from the Federal Rules of Civil Procedure.[37] Subsequently, the plaintiffs raised objections, which fell under the purview of the district judge.[38] The district judge found that the magistrate judge’s decision was not clearly erroneous, denied the plaintiffs’ objections, and accordingly adopted the magistrate judge’s opinion.[39] The district judge noted that “the use of the predictive coding software as specified in the ESI protocol is more appropriate than keyword searching.”[40] In this case, the defendants used, and the court allowed, predictive coding at the outset instead of keyword culling.

[20]     A circuit court in Virginia upheld a similar ruling in Global Aerospace, Inc. v. Landow Aviation, L.P. in the same year. [41] The court addressed whether the defendants would be permitted to use predictive coding at the outset instead of keyword culling. The defendants urged for the application of predictive coding at the outset instead of keyword culling.[42] Although the plaintiffs objected to the use of predictive coding at the outset,[43] the judge allowed it, stating that the defendants “shall be allowed to proceed with the use of predictive coding for purposes of the processing and production of electronically stored information.”[44]

[21]     Similar to the rulings in Da Silva Moore and Global Aerospace, Inc., the court in In Re Actos (Pioglitazone) Products Liability Litigation also allowed the parties to employ predictive coding at the outset.[45] The parties worked together and collaborated in choosing which method to employ. The high level of transparency and cooperation between the parties enabled the successful implementation of predictive coding at the outset on the entire universe of documents.[46] The parties agreed to review document samples collaboratively, meet and confer, and reveal their respective methodologies to each other.[47] The court allowed the parties to proceed in this manner because it was a mutually agreed upon method and proportional under the Rules.[48]

[22]     A slightly different case reveals a court’s hesitation in applying simplistic keyword searches. In McNabb v. City of Overland Park, the defendant produced about 20,000 e-mails after it unilaterally redacted the information that it thought was “confidential or irrelevant.”[49] The plaintiff also submitted a list of about thirty-five search terms for the defendant to use, but the defendant argued that the requests were “overly broad and would encompass a significant number of documents.”[50] The court agreed with the defendant and denied the plaintiff’s motion, on grounds of proportionality. In other words, the court denied the implementation of these broad, general keyword searches.[51] The motion papers in this case indicate “that the parties considered using predictive coding[,]” but the defendant decided not to.[52] The outcome may have been different if the parties agreed to employ predictive coding at the outset because the plaintiff may have received more of the relevant data it was searching for, and the defendant may have been able to protect other documents as well.[53]

[23]    Overall, when presented with the issue at the outset, courts have routinely held that predictive coding is appropriate. The courts in Da Silva Moore v. Publicis Groupe SA, Global Aerospace, Inc. v. Landow Aviation, L.P., and In Re Actos all allowed the parties to proceed with the application of predictive coding at the outset.[54] The judge’s reluctance and refusal to allow simplistic keyword searches in McNabb also points in the same direction, suggesting that predictive coding may have been an appropriate approach from the outset.[55] Accordingly, parties and courts have been supportive of the use of predictive coding at the outset.

2. Ex-Post Permissibility of Keyword Culling

[24]     Courts have only permitted the use of predictive coding on previously keyword-culled documents after the fact, meaning after the documents had already been culled. In one example, the Northern District of Illinois allowed the defendants to first employ keyword culling in Kleen Products, LLC v. Packaging Corporation of America in 2012.[56] The defendants had already produced “more than three million pages of documents” through keyword culling,[57] but the plaintiffs asked the judge to order that discovery be redone with predictive coding applied at the outset instead.[58] After several months of disputing these discovery issues, the parties reached an agreement.[59] The plaintiffs withdrew their demand to restart and apply predictive coding at the outset on the entire universe of documents in the case.[60] In other words, the defendants kept the documents that were already culled down using keyword searches and were not required to restart the discovery process with predictive coding.[61] The magistrate judge approved their agreement to employ keyword culling at the outset and restated Sedona Principle 6, that “responding parties are best situated to evaluate” the appropriate method, deferring to the producing party.[62]

[25]     In the same year, the court in In Re Biomet M2a Magnum Hip Implant Products Liability Litigation also permitted keyword culling prior to the application of predictive coding.[63] The party had already employed keyword culling and reduced the universe of documents from “19.5 million to 3.9 million.”[64] The court stated that if the party was ordered to restart and apply predictive coding on the entire universe of documents, it would not have been proportional under the previous version of Rule 26.[65] The court said this approach was reasonable under the circumstances.[66] The judge stated that the issue is not whether predictive coding is better than keyword culling, but whether the party satisfied its discovery obligations.[67] Furthermore, the judge stated that regardless of the other proportionality factors, the additional cost of going back to do the predictive coding on all documents would have outweighed the benefit of potentially finding more relevant documents.[68]

[26]     In a related line of cases, two courts have allowed keyword culling after the parties agreed to it, but courts and parties have disagreed as to the proper approach once keyword culling is complete. In Progressive Casualty Insurance Company v. Delaney, for example, the parties agreed to use keyword culling at the outset.[69] The producing party employed keyword culling, which reduced the number of documents from 1.8 million to 565,000.[70] The parties then disagreed as to the appropriate method for reviewing the remaining 565,000 documents.[71] The producing party concluded that performing manual review would take a significant amount of time and money.[72] To avoid these costs, the producing party unilaterally chose to employ predictive coding instead of manual review on the remaining 565,000 documents.[73] Only after making this decision did the producing party inform the requesting party, which then filed a motion to compel.[74] The court did not allow the change from manual review to predictive coding because it was not originally agreed upon by the parties and would result in more disputes and delays.[75] This case demonstrates that further disputes may arise after keyword culling is used because the accuracy of any subsequent method is called into question. Here, predictive coding was contemplated but rejected after keyword culling because the parties had already agreed upon manual review, time-consuming as that approach is.[76] When predictive coding is used at the outset, by contrast, these disputes are eliminated.

[27]     Another case in which keyword culling was permitted at the outset is Bridgestone Americas, Inc. v. International Business Machines Corp.[77] The plaintiff had already employed keyword culling and wanted to proceed with predictive coding. The defendant, relying on Progressive Casualty Insurance Company, argued that it would be unfair for the plaintiff to use predictive coding after documents had already been keyword culled.[78] Because of concerns regarding proportionality and efficiency, however, the judge allowed the use of predictive coding on the previously keyword-culled documents.[79] This case also stands for the proposition that the parties should be the ones to try to resolve this issue.[80] The court believed that the use of keyword culling prior to predictive coding can be appropriate under Rule 26, but that it depends on many factors, including “the type of data, the value of the case juxtaposed to the cost of using advanced analytics, and other factors that are matter specific.”[81]

[28]     As Bridgestone Americas, Inc. and Progressive Casualty Insurance Company demonstrate, when parties agree on keyword culling at the outset, parties and courts are left uncertain as to the appropriate method for reviewing the remaining documents. The reason is that the completeness of the remaining set of relevant documents is already in question, since keyword culling is not as accurate as predictive coding.[82] Furthermore, concerns of time, cost, and efficiency in deciding between manual review and predictive coding become prominent issues for the parties.

[29]     All four of these cases share a common thread in their holdings on the discovery issue.[83] The courts in Kleen Products, LLC, In Re Biomet, Progressive Casualty Insurance Company, and Bridgestone Americas, Inc. each permitted keyword culling at the outset only after the producing party had already performed it or the parties had already agreed to it.[84] Although the parties disagreed as to the proper method to apply after keyword culling was employed,[85] the courts found that ordering the parties to restart discovery and employ predictive coding would have been disproportional under the Rules.[86]

            B. Reinforcement of Court Decisions under the New Rules

[30]     Recently, the drafters of the Federal Rules of Civil Procedure and the Supreme Court rebalanced the priorities of discovery and provided a legislative-like answer through the amendments to the Rules. This Part discusses those amendments, as well as the subsequent reactions of courts and scholars.

            1. Recent Amendments to the Rules

[31]     The Federal Rules of Civil Procedure were recently amended, with the amendments taking effect on December 1, 2015. The new revisions can be found in the 2016 edition of the Federal Rules of Civil Procedure.[87] Many rules were amended, but the revisions to Rules 1 and 26 directly impact this discussion. Through these revisions, the rule drafters and the Supreme Court chose to highlight proportionality, as well as the responsibility of parties and courts in making these decisions.

[32]     Rule 1 was amended to emphasize that parties are just as responsible as courts in applying the Federal Rules of Civil Procedure to ensure the efficiency of every action in a case.[88] The previous version of Rule 1 stated that the rules “should be construed and administered to secure the just, speedy, and inexpensive determination of every action and proceeding.”[89] The new version of Rule 1 states that the rules “should be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.”[90]

[33]     Rule 26 was amended to emphasize factors of proportionality in defining the scope of discovery.[91] The previous version of Rule 26(b)(1) stated:

Scope in General. Unless otherwise limited by court order, the scope of discovery is as follows: Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense—including the existence, description, nature, custody, condition, and location of any documents or other tangible things and the identity and location of persons who know of any discoverable matter. For good cause, the court may order discovery of any matter relevant to the subject matter involved in the action. Relevant information need not be admissible at the trial if the discovery appears reasonably calculated to lead to the discovery of admissible evidence. All discovery is subject to the limitations imposed by Rule 26(b)(2)(C).[92]

[34]     The amended version of Rule 26(b)(1) now states: 

Scope in General. Unless otherwise limited by court order, the scope of discovery is as follows: Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit. Information within this scope of discovery need not be admissible in evidence to be discoverable.[93]

[35]     The concept of proportionality appeared in Rule 26(b)(2)(C) of the previous version and has always been present; however, it now appears at the beginning of Rule 26(b)(1), making it explicitly applicable to the entire scope of discovery.[94] Specifically, the proportionality factors moved from Rule 26(b)(2)(C)(iii) to the beginning of Rule 26(b)(1).[95] The Committee’s intention in moving these factors was to “make them an explicit component of the scope of discovery, requiring parties and courts alike to consider them when pursuing discovery and resolving discovery disputes.”[96]

[36]     It is important to note that the Committee also revised the proportionality factors themselves. It reordered them, so that the “importance of the issues at stake” now precedes the “amount in controversy,” placing the emphasis on proportionality relative to the issues rather than the dollar amount alone.[97] The Committee also added one new factor: “the parties’ relative access to relevant information.”[98]

[37]     The other change to Rule 26 is the removal of the language “reasonably calculated to lead to the discovery of admissible evidence.”[99] The previous standard, under which discovery could reach evidence that might merely lead to admissible evidence, is gone. Because discoverable material need no longer be reasonably calculated to lead to admissible evidence, attorneys may push to narrow the scope of discovery.[100] Under the previous standard, discoverable evidence did not require a direct nexus to the case; it only had to potentially lead to other admissible evidence. In practice, the change may serve as a call to focus discovery on the most relevant evidence.

[38]     In sum, Rule 1 now explicitly makes it the priority of parties and courts to ensure that a case proceeds in a just and expedient manner, and Rule 26 now explicitly makes proportionality dictate the scope of discovery. Both rules bear on when predictive coding should be applied because, as discussed above, the choice between predictive coding and keyword culling has important implications for the accuracy and efficiency of the discovery process.

            2. Subsequent Reactions to the New Rules

[39]     Courts have begun to apply these recent amendments to the Federal Rules of Civil Procedure, and there has not been a drastic change in the past few months. Many courts have found that proportionality was already a priority under the prior version of the Rules, but they can now more easily point to that priority because it appears first in Rule 26’s definition of the scope of discovery.

[40]     For instance, just six days after the amendments went into effect, the court in Carr v. State Farm Mutual Automobile Insurance Co. found that the burdens on the parties had not fundamentally changed.[101] The defendant’s motion to compel was granted because the plaintiff’s burden in resisting it had not changed under the new rules, as evidenced by the Committee’s notes on the amendments.[102] In Gilead Sciences, Inc. v. Merck & Co., Inc., another court concluded that the application of predictive coding was disproportional under the new rules, but stated that the result would have been the same even under the prior version of the Rules.[103] In that patent infringement case, the defendant’s motion to compel additional discovery was denied because the plaintiff would have needed to produce an excessive amount of information regarding the contents of tubes of compounds that were not at issue in the case.[104]

[41]     The court stated that the amendments now first require an inquiry into whether the additional discovery would be proportional, rather than whether it might lead to something admissible.[105]

[42]     Similarly, a court in the Southern District of Florida allowed the defendants to redact irrelevant information from documents that were deemed responsive.[106] The court based its opinion on the concept of proportionality in Rule 26.[107]

[43]     The Year-End Report of the Federal Judiciary argues that the amendments have had a profound impact on the expected efficiency of parties and courts.[108] Magistrate Judge John M. Facciola believes the Rules were significantly modified in that the scope of discovery does not regard whether an item is “reasonably calculated to lead to the discovery of admissible evidence,”[109] but rather regards the issues at stake and proportionality concerns.[110] Because of this, lawyers may argue to narrow the scope of discovery.[111]

[44]     The courts that have begun to apply the new amendments to the Rules are finding that the outcome would have been similar even under the old Rules. The courts are only able to more easily point to the primary concerns of proportionality, justness, and expediency through the new amendments.

IV. Encouraging Predictive Coding Ex Ante 

[45]     In light of the court decisions and recent amendments to the Federal Rules of Civil Procedure, predictive coding should be encouraged at the outset of the discovery process to be applied on the entire universe of documents in a case. This Part will first explain the reasons why predictive coding should be used at the outset, and second, it will suggest how parties and courts should proceed in implementing this method.

            A. Why Predictive Coding Ex Ante is Preferable

[46]     Employing predictive coding at the outset identifies relevant documents significantly more accurately than keyword culling.[112] Predictive coding employs sophisticated technology that can predict relevant documents more accurately than the simplistic search terms used in keyword culling.[113] Keyword culling is less accurate because many relevant documents slip through the cracks when keyword searches are employed.[114] Applied to the entire set of documents at the outset, predictive coding thus yields a significantly more accurate production.
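The accuracy gap can be illustrated with a toy sketch. The following Python is purely hypothetical: the documents, search terms, and one-line scoring scheme are invented for illustration and bear no relation to any commercial predictive coding product. It shows only the structural difference between the two methods: a keyword cull retains documents containing the literal search terms, while even a crude relevance model trained on attorney-labeled seed documents can flag a relevant document that uses different vocabulary.

```python
# Hypothetical sketch contrasting keyword culling with a simple learned
# relevance score. Real TAR systems use far more sophisticated models;
# this only illustrates why exact-match keywords miss relevant documents.

def keyword_cull(docs, keywords):
    """Retain only documents containing at least one search term."""
    return [d for d in docs if any(k in d.lower() for k in keywords)]

def train_term_weights(seed_docs):
    """Learn per-term weights from attorney-labeled seed documents:
    terms appearing in relevant seeds score +1, in irrelevant seeds -1."""
    weights = {}
    for text, relevant in seed_docs:
        for term in set(text.lower().split()):
            weights[term] = weights.get(term, 0) + (1 if relevant else -1)
    return weights

def predict_relevant(docs, weights, threshold=0):
    """Flag documents whose summed term weights exceed the threshold."""
    return [d for d in docs
            if sum(weights.get(t, 0) for t in d.lower().split()) > threshold]

corpus = [
    "price agreement signed with competitor",  # relevant, contains keywords
    "we aligned our rates with theirs",        # relevant, uses synonyms only
    "lunch order for the office party",        # irrelevant
]

# Keyword culling keeps only the first document; the synonym
# document slips through the cracks.
culled = keyword_cull(corpus, ["price", "agreement"])

# A model trained on a few attorney-reviewed seed documents also
# catches the document phrased with different vocabulary.
seeds = [
    ("signed a price agreement", True),
    ("aligned rates with competitor", True),
    ("office lunch party", False),
]
predicted = predict_relevant(corpus, train_term_weights(seeds))
```

In this hypothetical, `culled` contains only the first document, while `predicted` also recovers the second, relevant document that shares no keyword with the search terms.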

[47]     Since predictive coding is employed under either approach, the costs associated with the two do not differ significantly. The majority of the costs of predictive coding come from the time of a senior attorney who is intimately familiar with the case training the software, and from the cost of engaging a company with the technology and software to run predictive coding.[115] These costs are incurred under both approaches because both use predictive coding. Where the monetary costs and time spent may vary is in the senior attorney identifying potentially relevant documents and training the software on a larger volume of documents.[116] Accordingly, the cost differential between the two approaches lies in the time it takes to identify those potentially relevant documents, as well as in the resulting production of documents. There has not been enough empirical research on this question, but no court has held, and no party has argued, that predictive coding costs more at the outset. Even if the costs at the outset were steeper, the difference is likely not substantial enough to outweigh the benefit of greater accuracy in identifying relevant documents.

[48]     Furthermore, as discussed in Part III.A, courts have routinely upheld and encouraged the use of predictive coding at the outset. The courts that held keyword culling permissible at the outset did so only after the documents had already been keyword culled, finding it too burdensome and costly to restart discovery.[117]

[49]     The recent amendments to the Federal Rules of Civil Procedure further reinforce the concepts of proportionality and the responsibilities of the parties and courts to ensure the just and efficient resolution of a case. Rule 1 now mandates that the rules “should be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.”[118] There is now an explicit emphasis on both courts and the parties to work justly and efficiently all throughout a case from the beginning to the end, which includes the discovery phase. More specifically, Rule 26(b) now highlights that the scope of discovery must begin with an inquiry of proportionality.[119] The Rule mandates that the parties and courts consider several factors of proportionality, including “the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.”[120]

[50]     The Rules explicitly emphasize proportionality through a list of factors. This legislative-like answer, set by the rules’ drafters and the Supreme Court, was a deliberate decision to refocus discovery on the issues at stake and on the importance of discovery in resolving those issues. As discussed above, the cost differential between the two methods is likely insignificant. Proportionality, as applied to a discovery issue, concerns both accuracy and efficiency because it impacts time, cost, and the just resolution of a case. Since cost is not a determinative factor, parties gain accuracy by employing predictive coding at the outset, which fits squarely within the proportional scope of discovery under the Rules. In this way, the parties gain accuracy without sacrificing efficiency.

            B. How Parties and Courts Should Proceed

[51]     At the beginning of discovery, parties should opt to employ predictive coding on the entire universe of documents in a case, in light of its benefits for accuracy and proportionality. Even under the previous version of the Rules, parties were encouraged to collaborate on discovery methods and to consider each step of predictive coding at the outset.[121] This collaboration is essential because the parties, rather than the courts, are usually in the best position to evaluate the method initially.[122]

[52]     The ideal protocol is the one employed by the parties in In Re Actos.[123] In that case, the parties cooperated and collaborated at the beginning of the discovery phase and successfully implemented predictive coding.[124] At the opposite end of the spectrum, the parties in Kleen Products demonstrated how destructive it is to dispute discovery methodology for several months, wasting both time and money.[125] Further, the plaintiffs withdrew their demand, which allowed the defendants to keep their previously keyword-culled documents.[126] This end result of accepting the keyword-culled documents was neither a judicial decision nor a collaborative effort by the parties. Rather, it was the easier solution after several months of dispute, brought on by the plaintiffs’ withdrawal of their demand.[127] If parties are encouraged to collaborate at the outset and to practice transparency by sharing the predictive coding methodology with the other party, there is little left for the other party to object to.[128] The reason is that costs are already being saved by employing predictive coding regardless of when it is applied, and predictive coding is overwhelmingly more accurate at producing relevant documents than keyword culling.[129]

[53]     All that remains for the parties to dispute, then, is the input to the predictive coding software. Parties may disagree about the inputs used to train the software, but this need not be a daunting task; the parties in In Re Actos planned for it, allowing for options to work together on the inputs and scheduling times to meet and confer.[130] It is therefore more proportional and worthwhile to start with predictive coding at the outset.[131]

[54]     The courts in McNabb and Progressive Casualty Insurance Company also teach an important lesson about collaboration between the parties at the outset.[132] Because the court in McNabb rejected the plaintiff’s motion to compel further keyword searches,[133] both parties could have benefitted from predictive coding at the outset. The producing party in Progressive Casualty Insurance Company unilaterally decided to switch to predictive coding, which instigated a motion to compel from the requesting party.[134] These situations could have been avoided with collaborative efforts at the outset and transparency throughout the process.

[55]     As discussed in Part III.A.2, courts allowed predictive coding to be used after keyword culling primarily because the producing party had already employed keyword culling, and starting over with predictive coding on the entire universe of documents would have been costly. The judges reasoned that requiring the party to start over would have been highly inefficient and disproportional, especially where the parties had agreed on the keyword search method at the outset.[135] In Kleen Products, LLC v. Packaging Corporation of America, the “defendants [had] [already] produced more than three million pages of documents” through keyword culling,[136] but the plaintiffs asked the judge to order discovery redone using predictive coding.[137] The parties eventually reached an agreement, with the plaintiffs withdrawing their demand.[138] The court in In Re Biomet M2a Magnum Hip Implant Products Liability Litigation allowed keyword culling prior to the application of predictive coding because ordering the party to restart and apply predictive coding to the raw data would have been expensive and disproportional under Rule 26.[139]

[56]     As these cases show, producing parties continually employ keyword culling at the outset, possibly because it is quicker or because it produces a smaller number of documents.[140] Regardless of the motive, once the issue is before the courts and the producing party has already employed keyword culling, courts have been hesitant to order the party to start the discovery process again. In effect, producing parties are permitted to retain their keyword culling methods.

[57]     Courts need to lead the change. If parties continue to employ keyword culling rather than predictive coding at the outset, courts should suggest the use of predictive coding at the outset. Encouraging or mandating predictive coding at the outset will be relatively simple for courts, as the decisions discussed in Part III.A show. Courts may be more reluctant to order a producing party to abandon its keyword culling and restart the discovery process with predictive coding, but at this point, it is necessary. Proportionality is a primary concern under the Federal Rules of Civil Procedure. When predictive coding will be used in a case, it should be used at the outset in order to obtain the most accurate set of documents. It may take only one court in one case to capture the attention of parties and other courts, and thereby lead the change toward a more accurate and proportional discovery process in the cases to come.

V. Conclusion

[58]     Predictive coding has been proven to be more accurate and efficient than traditional methods of discovery. There has been a split in authority as to the point at which predictive coding should be applied: whether it should be applied at the outset to the entire universe of documents, or only to keyword-culled documents. Courts have gone both ways on this issue, but as of December 1, 2015, the drafters of the Federal Rules of Civil Procedure and the Supreme Court approved amendments to the Rules. The amendments to Rules 1 and 26(b)(1) directly impact this discussion, as these rules emphasize the responsibility of parties and courts to ensure that a case proceeds justly and efficiently, while highlighting the importance of proportionality in the scope of discovery. In light of these amendments, predictive coding should be applied at the outset on the entire universe of documents in a case. It is far more accurate, and it is no more costly or time-consuming, especially when the parties collaborate at the outset. As seen in prior cases, this is the best method to identify the most relevant documents. It becomes costly and inefficient only when a party has already used keyword culling and must restart the discovery process to employ predictive coding. If parties collaborate and practice transparency at the outset, they will often find that it is significantly more effective and in the interest of both parties to employ predictive coding to identify the most relevant documents. If parties cannot agree or fall back on old ways of keyword culling, courts can and should lead the change by encouraging predictive coding at the outset of the discovery process, with the recent amendments to the Federal Rules of Civil Procedure on their side.

*J.D. Candidate 2017, University of Richmond School of Law. B.A., 2012, American University of Beirut. The author gratefully acknowledges Professor Jessica Erickson for her mentorship in the organization and articulation of arguments in this article, as well as Ms. Meghan Podolny for her assistance in the primary research phase of this topic. The author would also like to thank the editors and staff of the Richmond Journal of Law & Technology for their efforts in editing this article.

[1] See, e.g., Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1, 43, 48 (2011) (discussing benefits of predictive coding when conducting discovery); see also Joe Palazzolo, Why Hire a Lawyer? Computers are Cheaper, Wall Street J. (June 18, 2012, 2:06 PM), archived at (noting that predictive coding is one subset of technology-assisted review (TAR) processes); see Andrew Peck, Search, Forward; Will Manual Document Review and Keyword Searches be Replaced by Computer-Assisted Coding?, Law Tech. News (Oct. 2011), archived at

[2] Covington & Burling LLP, The Duty to Produce ESI, in Litigating Securities Class Actions § 13.04(2)(c) (Jonathan Eisenberg ed., 2016).

[3] See Tonia Hap Murphy, Mandating Use of Predictive Coding in Electronic Discovery: An Ill-Advised Judicial Intrusion, 50 Am. Bus. L.J. 609, 618 (2013) (noting that predictive coding uses sophisticated technology to narrow down documents that are most relevant to a case).

[4] See id.

[5] See id. at 617.

[6] See Fed. R. Civ. P. 1.

[7] See discussion infra Part III.B.1.

[8] See Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182, 193 (S.D.N.Y. 2012). The Da Silva Moore case has received a significant amount of attention, since it was the first case in which predictive coding was judicially approved. See also Bennett B. Borden & Jason R. Baron, Finding the Signal in the Noise: Information Governance, Analytics, and the Future of Legal Practice, 20 Rich. J.L. & Tech. 1, 7, 16 (2014) (providing an in-depth statistical analysis finding that predictive coding is abundantly more accurate and efficient than traditional methods of discovery); see generally Grossman & Cormack, supra note 1, at 3 (discussing the efficiency and effectiveness of predictive coding).

[9] See Most Important Documents Get Looked at First: Using Predictive Coding to Prioritize & Expedite Review, Consilio (2016),, archived at (noting that if predictive coding were used at the outset it would have saved 70% of the time it took to conduct manual review).

[10] See Murphy, supra note 3, at 621–22.

[11] See Jim Eidelman, Best Practices in Predictive Coding: When are Pre-Culling and Keyword Searching Defensible?, Catalyst, Jan. 9, 2012,, archived at

[12] See id.; see also Barry Kazan & David Wilson, Technology-Assisted Review Is a Promising Tool for Document Production, N.Y. L.J. Online, Mar. 18, 2013,–for-Document-Production, archived at (citing a case in which one party found that keyword culling only produces 20% of relevant documents, whereas predictive coding would be sufficient even when finding at a 75% responsive rate).

[13] See Eidelman, supra note 11.

[14] See Kazan & Wilson, supra note 12.

[15] See John Hopkins, Large Data and Document Production – Keyword Search and Predictive Coding, Searcy L. Blog, May 31, 2013,, archived at

[16] Murphy, supra note 3, at 617.

[17] See id. at 620.

[18] See id. at 614–16, 620.

[19] The traditional way to employ keyword culling is to run keywords through the documents and retain those that contain the keywords. See Ralph C. Losey, Predictive Coding and the Proportionality Doctrine: A Marriage Made in Big Data, 26 Regent U. L. Rev. 7, 58–59 (2013) (arguing that keyword culling could instead be used to cull out the documents least likely to be relevant).

[20] See Jacob Tingen, Technologies-That-Must-Not-Be-Named: Understanding and Implementing Advanced Search Technologies in E-Discovery, 19 Rich. J.L. & Tech. 1, 33, 37 (2012); see Kate Mortensen, E-discovery Best Practices for Your Practice, Step 4: Search and Review, Inside Counsel, May 20, 2014,, archived at

[21] See Joseph H. Looby, E-Discovery – Taking Predictive Coding Out of the Black Box, FTI J. (Nov. 2012),, archived at

[22] See Mark F. Foley, Expert Testimony May Be Needed for E-Discovery Keyword Searches, vonBreisen, Mar. 1, 2008,, archived at

[23] See Murphy, supra note 3, at 614.

[24] Id. at 615–16.

[25] Id. at 614–15.

[26] See Matt Miller, Making Sure Your Predictive Coding Solution Doesn’t Cost More, DiscoverReady Blog, Apr. 30, 2013,, archived at

[27] See id.

[28] See Eidelman, supra note 11.

[29] Id.

[30] See id.

[31] See Miller, supra note 26.

[32] See, e.g., In Re Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12MD2391, 2013 U.S. Dist. LEXIS 84440, at *1 (N.D. Ind. Apr. 18, 2013) (Order Regarding Discovery of ESI) (noting that the requesting party expected about 10 million documents, but the producing party only produced 2.5 million documents).

[33] See Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182, 193 (S.D.N.Y. 2012).

[34] See id. at 184–85.

[35] See id. at 184.

[36] See id. at 184–86.

[37] See id. at 186, 188.

[38] See Da Silva Moore v. Publicis Groupe SA, No. 11 Civ. 1279, 2012 U.S. Dist. LEXIS 58742, at *2 (S.D.N.Y. Apr. 25, 2012).

[39] See id. at *8–9.

[40] Id. at *8.

[41] See Global Aerospace Inc. v. Landow Aviation, L.P., No. CL 61040, 2012 Va. Cir. LEXIS 50, at *2 (Va. Cir. Ct. Apr. 23, 2012).

[42] See Brief in Opposition of Plaintiffs, Motion for Protective Order Regarding Electronic Documents and “Predictive Coding” at 2, Global Aerospace Inc. v. Landow Aviation, L.P., No. CL 61040, 2012 Va. Cir. LEXIS 50 (Va. Cir. Ct. Apr. 16, 2012), 2012 WL 1419848, at *1–2.

[43] See id. at *2–3.

[44] Global Aerospace Inc., 2012 Va. Cir. LEXIS 50, at *2.

[45] See In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2012 U.S. Dist. LEXIS 187519, at *20, *34 (W.D. La. July 27, 2012).

[46] See id. at *20.

[47] See id. at *21.

[48] See id. at *43.

[49] See McNabb v. City of Overland Park, No. 12-CV-2331 CM/TJJ, 2014 U.S. Dist. LEXIS 37312, at *5 (D. Kan. Mar. 21, 2014).

[50] See id. at *2.

[51] See Adam Kuhn, The Interplay Between Proportionality and Predictive Coding in e-Discovery, Recommind, June 12, 2014,, archived at [hereinafter Interplay Between Proportionality and Predictive Coding].

[52] Id.

[53] See id.

[54] See Da Silva Moore v. Publicis Groupe SA, 287 F.R.D. 182, 193 (S.D.N.Y. 2012); see Global Aerospace Inc. v. Landow Aviation, L.P., No. CL 61040, 2012 Va. Cir. LEXIS 50, at *2 (Va. Cir. Ct. Apr. 23, 2012); In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2014 U.S. Dist. LEXIS 187519, at *12, *20 (W.D. La. July 27, 2012) (Case Management Order: Protocol Relating to the Production of Electronically Stored Information).

[55] See McNabb v. City of Overland Park, No. 12-CV-2331 CM/TJJ, 2014 U.S. Dist. LEXIS 52534, at *7, *9 (D. Kan. Apr. 16, 2014).

[56] See Murphy, supra note 3, at 629 (noting that the district judge allowed the discovery issue to be decided separately by the magistrate judge).

[57] Id. at 629–30 (citing the Joint Status Conference Report No. 3, at 3, Kleen Prods., LLC v. Packaging Corp. of Am., Civil Case No. 1:10–cv–05711 (N.D. Ill. May 17, 2012)).

[58] See id. at 630 (quoting Defendants’ Brief on Discovery Issues at 1, Kleen Prods., LLC v. Packaging Corp. of Am., No. 1:10–cv–05711 (N.D. Ill. Feb. 6, 2012)).

[59] See id.

[60] See id.

[61] See Murphy, supra note 3, at 630–31.

[62] See Matthew Verga, Predictive Coding Cases, Part 2 – Kleen Products, Modus, Mar. 5, 2015.

[63] See In re Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12MD2391, 2013 U.S. Dist. LEXIS 84440, at *1 (N.D. Ind. Apr. 18, 2013) (Order Regarding Discovery of ESI).

[64] Bob Ambrogi, In Praise of Proportionality: Judge OKs Predictive Coding After Keyword Search, Catalyst, Apr. 29, 2013.

[65] See Citing Proportionality, Court Declines to Require Defendant to Redo Discovery Utilizing Only Predictive Coding, K&L Gates, Apr. 23, 2013 (citing Order Regarding Discovery of ESI, In re Biomet M2a Magnum Hip Implant Prods. Liab. Litig.) [hereinafter Citing Proportionality].

[66] See Keyword Filtering Prior to Predictive Coding Deemed Reasonable, EDiscovery Wire, Dec. 6, 2013.

[67] See Ambrogi, supra note 64.

[68] See id.

[69] See Progressive Cas. Ins. Co. v. Delaney, No. 2:11-CV-00678-LRH-PAL, 2014 U.S. Dist. LEXIS 69166, at *5 (D. Nev. May 20, 2014).

[70] See id. at *6–7.

[71] See id.

[72] See id. at *6.

[73] See id.

[74] See Progressive Cas. Ins. Co., 2014 U.S. Dist. LEXIS 69166, at *3–4.

[75] See id. at *31.

[76] See id.

[77] See Bridgestone Ams., Inc. v. Int’l Bus. Machs. Corp., No. 3:13-1196, 2014 U.S. Dist. LEXIS 142525, at *3 (M.D. Tenn. July 24, 2014) (Order Regarding Use of Predictive Codes in Discovery) (explaining that the Magistrate Judge may permit the Plaintiff to use predictive coding on the documents).

[78] See Adam Kuhn, Bridgestone v. IBM Approves Predictive Coding Use, Rejects Progressive, Recommind, Aug. 12, 2014.

[79] See Bridgestone Ams., Inc., 2014 U.S. Dist. LEXIS 142525, at *3.

[80] See Gilbert S. Keteltas, Predictive Coding After Keyword Screening!? Don’t Miss the Point of Bridgestone Americas, BakerHostetler: Discovery Advocate, Aug. 21, 2014.

[81] Jason Bonk, Reasonableness and Proportionality Win Another Fight for Predictive Coding, E-Discovery L. Rev. (Sept. 17, 2014) (quoting Eric Seggebruch).

[82] See discussion supra Part II.B.

[83] See Edward Schoenecker Jr., Nine Cases on Predictive Coding from Modus, LinkedIn, Apr. 14, 2015.

[84] See Bridgestone Ams., Inc. v. Int’l Bus. Machs. Corp., No. 3:13-1196, 2014 U.S. Dist. LEXIS 142525, at *1–2 (M.D. Tenn. July 24, 2014) (Order Regarding Use of Predictive Codes in Discovery); see also Progressive Cas. Ins. Co. v. Delaney, No. 2:11-CV-00678-LRH-PAL, 2014 U.S. Dist. LEXIS 69166, at *31 (D. Nev. May 20, 2014); In re Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12MD2391, 2013 U.S. Dist. LEXIS 84440, at *1 (N.D. Ind. Apr. 18, 2013) (Order Regarding Discovery of ESI); Kleen Prods. LLC v. Packaging Corp. of Am., No. 10 C 5711, 2012 U.S. Dist. LEXIS 139632, at *14–19 (N.D. Ill. Sept. 28, 2012).

[85] See Bridgestone Ams., Inc., 2014 U.S. Dist. LEXIS 142525, at *1–2; Progressive Cas. Ins. Co., 2014 U.S. Dist. LEXIS 69166, at *31.

[86] See Bridgestone Ams., Inc., 2014 U.S. Dist. LEXIS 142525, at *5; Kleen Prods., LLC, 2012 U.S. Dist. LEXIS 139632, at *28.

[87] See 2015-2016 Federal Rules of Civil Procedure Amendments Released, Federal Rules of Civil Procedure Updates, May 13, 2015 [hereinafter 2015-2016 Federal Rules Amendments].

[88] Id.; see also Federal Rule Changes Affecting E-Discovery Are Almost Here – Are You Ready This Time?, K&L Gates, Oct. 1, 2015 [hereinafter Rule Changes].

[89] Fed. R. Civ. P. 1 (2014) (amended 2015).

[90] Fed. R. Civ. P. 1 (emphasis added).

[91] See 2015-2016 Federal Rules Amendments, supra note 87.

[92] Fed. R. Civ. P. 26(b)(1) (2014) (amended 2015).

[93] Fed. R. Civ. P. 26(b)(1) (emphasis added).

[94] See Just Follow the Rules! FRCP Amendments Could be E-Discovery Game Changer, Metropolitan Corporate Counsel (July 17, 2015, 11:49 PM) [hereinafter Just Follow the Rules!].

[95] See E-Discovery Update: Federal Rules of Civil Procedure Amendments Go into Effect, McGuireWoods, Dec. 1, 2015.

[96] Rule Changes, supra note 88, at 2 (quoting The Committee on Rules of Practice and Procedure, Report of the Judicial Conference Committee on Rules of Practice and Procedure to the Chief Justice of the United States and Members of the Judicial Conference of the United States app. at B–8 (2014)).

[97] Just Follow the Rules!, supra note 94 (arguing that although a case may not have an amount in controversy, it could still be a significant issue that deserves the concern of proportionality, such as discrimination or First Amendment cases).

[98] Rule Changes, supra note 88, at 2.

[99] Fed. R. Civ. P. 26(b)(1).

[100] See Just Follow the Rules!, supra note 94.

[101] See Court Applies Amended Rule 26, Concludes Burdens on Parties Resisting Discovery Have Not Fundamentally Changed, K&L Gates, Dec. 17, 2015.

[102] See Carr v. State Farm Mut. Auto. Ins. Co., No. 3:15-CV-1026-M, 2015 U.S. Dist. LEXIS 163444, at *15–17 (N.D. Tex. Dec. 7, 2015).

[103] See Court Concludes Defendant’s Request was “Precisely the Kind of Disproportionate Discovery That Rule 26—Old Or New—Was Intended to Preclude,” K&L Gates, Jan. 19, 2016 (citing Gilead Sciences, Inc. v. Merck & Co., Inc., No. 5:13-CV-04057-BLF, 2016 U.S. Dist. LEXIS 5616 (N.D. Cal. Jan. 13, 2016)) [hereinafter Court Concludes].

[104] Gilead Sciences, Inc. v. Merck & Co., Inc., No. 5:13-CV-04057-BLF, 2016 U.S. Dist. LEXIS 5616, at *7 (N.D. Cal. Jan. 13, 2016).

[105] See Court Concludes, supra note 103 (citing Gilead Sciences, Inc. v. Merck & Co., Inc., No. 5:13-CV-04057-BLF, 2016 U.S. Dist. LEXIS 5616 (N.D. Cal. Jan. 13, 2016)).

[106] See Court Approves Proposal to Redact or Withhold Irrelevant Information from Responsive Documents and Document Families, K&L Gates, Mar. 3, 2016 (citing In re Takata Airbag Prods. Liab. Litig., 2016 U.S. Dist. LEXIS 131746 (S.D. Fla. Mar. 1, 2016)).

[107] See id.

[108] See 2015 Year-End Report on the Federal Judiciary, SupremeCourt.Gov 1, 6, 9 (Dec. 31, 2015).

[109] Just Follow the Rules!, supra note 94.

[110] See id.

[111] See id.

[112] See Eidelman, supra note 11; see also Kazan & Wilson, supra note 12.

[113] See Eidelman, supra note 11; see also Kazan & Wilson, supra note 12.

[114] See Eidelman, supra note 11; see also Kazan & Wilson, supra note 12.

[115] See Miller, supra note 26.

[116] See id.

[117] See discussion, supra Part III.A.

[118] Fed. R. Civ. P. 1 (emphasis added).

[119] See Fed. R. Civ. P. 26(b)(1).

[120] Id.

[121] See Karl Schieneman & Thomas C. Gricks III, The Implications of Rule 26(g) on the Use of Technology-Assisted Review, 7 Fed. Cts. L. Rev. 239, 273–74 (2013) (noting that even under the old Rules, counsel was encouraged to consider each step of technology-assisted review under Rule 26(g) and 26(b)(2)(C)(iii)).

[122] See Charles Yablon & Nick Landsman-Roos, Predictive Coding: Emerging Questions and Concerns, 64 S.C. L. Rev. 633, 674 (2013) (citing Sedona Principle 6).

[123] See In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2012 U.S. Dist. LEXIS 187519, at *27 (W.D. La. July 27, 2012).

[124] See id. at *27.

[125] See Kleen Prods. LLC v. Packaging Corp. of Am., No. 10-C-5711, 2012 U.S. Dist. LEXIS 139632, at *60–62 (N.D. Ill. Sept. 28, 2012).

[126] See id. at *62–63.

[127] See id. at *58, *62.

[128] See id. at *58.

[129] See Grossman & Cormack, supra note 1, at 44, 48.

[130] See In re Actos (Pioglitazone) Prods. Liab. Litig., No. 6:11-MD-2299, 2014 U.S. Dist. LEXIS 86101, at *20–34 (W.D. La. June 23, 2014).

[131] See Interplay Between Proportionality and Predictive Coding, supra note 51 (“[A] party who unilaterally decides later on in discovery that its search tactics were too imprecise could find that proportionality standards prevent the use of more advanced, accurate, and targeted searches with predictive coding technologies.”).

[132] See, e.g., McNabb v. City of Overland Park, No. 12-CV-2331 CM/TJJ, 2014 U.S. Dist. LEXIS 37312, at *2–14 (D. Kan. Mar. 21, 2014); see also Progressive Cas. Ins. Co. v. Delaney, No. 2:11-CV-00678-LRH-PAL, 2014 U.S. Dist. LEXIS 69166, at *30–32 (D. Nev. May 20, 2014).

[133] See McNabb, 2014 U.S. Dist. LEXIS 37312, at *5.

[134] See Progressive Casualty, 2014 U.S. Dist. LEXIS 69166, at *2, *30–32.

[135] See id. at *30–32.

[136] Murphy, supra note 3, at 629–30.

[137] See id. at 630.

[138] See id.

[139] See Citing Proportionality, supra note 65 (discussing the court’s decision in Kleen regarding ESI searches).

[140] See Eidelman, supra note 11.

Resisting the Resistance: Resisting Copyright and Promoting Alternatives


Cite as: Giancarlo F. Frosio, Resisting the Resistance: Resisting Copyright and Promoting Alternatives, 23 Rich. J.L. & Tech. 4 (2017).

By: Giancarlo F. Frosio*


This article discusses the resistance to the Digital Revolution and the emergence of a social movement “resisting the resistance.” Mass empowerment has political implications that may provoke reactionary counteractions. Ultimately—as I have discussed elsewhere—resistance to the Digital Revolution can be seen as a response to Baudrillard’s call for a return to prodigality beyond the structural scarcity of the capitalistic market economy. In Baudrillard’s terms, by increasingly commodifying knowledge and expanding copyright protection, we are taming limitless power with artificial scarcity to keep in place a dialectic of penury and unlimited need. In this paper, I will focus on certain global movements that resist copyright expansion, such as Creative Commons, the open access movement, the Pirate Party, the A2K movement, and cultural environmentalism. A nuanced discussion of these campaigns must account for the irrelevance of copyright in the public mind, the emergence of new economics of digital content distribution on the Internet, the idea of the death of copyright, and the demise of traditional gatekeepers. Scholarly and market alternatives to traditional copyright merit consideration here as well. I will conclude my review of this movement “resisting the resistance” to the Digital Revolution by sketching out a roadmap for copyright reform that builds upon its vision.

I. Introduction

[1]       In The Creative Destruction of Copyrights, Raymond Ku was the first to apply the wind of creative destruction—made famous by Joseph Schumpeter—to the Digital Revolution.[1] According to Schumpeter, the “fundamental impulse that sets and keeps the capitalist engine in motion” is the process of creative destruction, which “incessantly revolutionizes the economic structure by incessantly destroying the old one, incessantly creating a new one.”[2] Traditional business models’ resistance to technological innovation unleashed the wind of creative destruction. Today, we are in the midst of a war over the future of our cultural and information policies. The preamble of the Washington Declaration on Intellectual Property and the Public Interest explains the terms of this struggle:

[t]he last 25 years have seen an unprecedented expansion of the concentrated legal authority exercised by intellectual property rights holders. This expansion has been driven by governments in the developed world and by international organizations that have adopted the maximization of intellectual property control as a fundamental policy tenet. Increasingly, this vision has been exported to the rest of the world. Over the same period, broad coalitions of civil society groups and developing country governments have emerged to promote more balanced approaches to intellectual property protection. These coalitions have supported new initiatives to promote innovation and creativity, taking advantage of the opportunities offered by new technologies. So far, however, neither the substantial risks of intellectual property maximalism, nor the benefits of more open approaches, are adequately understood by most policy makers or citizens. This must change if the notion of a public interest distinct from the dominant private interest is to be maintained.[3]

[2]       The underpinnings of this confrontation extend to a broader discussion over the cultural and economic tenets of our capitalistic society, freedom of expression and democratization.

II. Resistance and Resisting the Resistance

[3]       Since the origins of the open source movement, mass collaboration has been envisioned as an instrument to create a networked democracy.[4] The political implications of mass collaboration in terms of mass empowerment are relevant to the ideas of freedom and equality. User-generated mass collaboration has promoted decentralization and autonomy in our system of creative production.[5] Internet mass empowerment might spur an enhanced democratization of content production, from which political democratization might follow.[6] As Clay Shirky described, open networks reverse the usual sequence of “filter, then publish” by making it easy to “publish, then filter.”[7] Minimizing cultural filtering empowers sub-cultural creativity and thus cultural distinctiveness and identity politics.[8]

[4]       Mass empowerment, however, triggers reactionary effects. Change has always unleashed fierce resistance from established power, both public and private. It did so with the Printing Revolution.[9] It does now with the Internet Revolution. For public power, the emergence of limitless access, knowledge, and therefore freedom, is a destabilizing force that causes governments to face increasing accountability and therefore relinquish a share of their power.[10] Through mass empowerment, the Internet, and global access to knowledge, private power sees the dreadful prospect of having to switch from a top-down to a bottom-up paradigm of consumption.[11] Much to the dismay of the corporate sector, the Internet presents serious obstacles for the management of consumer behavior.[12] As Patry noted, “‘[c]opyright owners’ extreme reaction to the Internet is based on the role of the Internet in breaking the vertical monopolization business model long favored by the copyright industries.”[13] In combatting this breakdown, the copyright industries have waged “…[t]he Copyright Wars [which] are an effort to accomplish the impossible: to stop time, to stop innovation, to stop new ways of learning and new ways of creating.”[14] In particular, the steady enlargement of copyright becomes a tool used by reactionary forces to counter the Digital Revolution.[15] From a market standpoint, stronger rights allow the private sector to enforce a top-down consumer system.[16] The emphasis of copyright protection on a permission culture favors a unidirectional market, where the public is only a consumer, passively engaged to pay per use or else stop using copyrighted works.[17] From a political standpoint, tight control over the reuse of information prevents mainstream culture from being challenged by alternative culture.[18] Copyright law empowers mainstream culture and marginalizes minority alternative counter-culture, thereby slowing any process leading to a paradigm shift.[19]

[5]       From a broader socio-economic perspective, there is also a more systemic explanation for the reaction facing the emergence of the networked information society. Baudrillard’s arguments might explain the reaction to the Digital Revolution—which drives cultural goods’ marginal cost of distribution and reproduction close to zero.[20] Copyright law might become an instrument to protect the capitalistic notion of consumption and perpetuate a system of artificial scarcity. As the Digital Revolution turns consumers into users, and then creators, it defies the very notion of consumer society. It turns the capitalistic consumer economy into a networked information economy, which is characterized by a sharing and gift economy. So, for the socio-economic consumerist paradigm not to succumb, the limitless power of peer and mass collaboration must be tamed by the artificial scarcity created by copyright law. Ultimately, resistance to the Digital Revolution can be seen as a response to Baudrillard’s call for a return to prodigality beyond the structural scarcity of the capitalistic market economy.[21] The Internet and networked peer collaboration may represent a return to “collective ‘improvidence’ and ‘prodigality’” and their related “real affluence.”[22] New Internet dynamics of exchange and creativity might answer in the affirmative Baudrillard’s question of whether we will “…return, one day, beyond the market economy, to prodigality[.]”[23] In Baudrillard’s terms, by increasingly commodifying knowledge and expanding copyright protection, we are taming limitless power with artificial scarcity to keep in place a “dialectic of penury” and “unlimited need.”[24] Therefore, the reaction to the Internet revolution may be construed as the gatekeepers’ attempt to keep their privileges in place as they thrive within a paradigm that builds the need for production—and overproduction—on an obsession with artificial scarcity.

[6]       In the past few years, a global movement has grown out of the understanding that the digital networked environment must be protected from external manipulations intended to stop exchange and re-instate scarcity. In this sense, resistance to copyright over-expansion can be understood as a cultural movement “resisting the resistance” to the Digital Revolution.[25] Francis Gurry, Director General of the World Intellectual Property Organization, gives a good explanation of these mechanics of resistance.

[7]       Gurry noted that:

…the central question of copyright policy…implies a series of balances: between availability, on the one hand, and control of the distribution of works as a means of extracting value, on the other hand; between consumers and producers; between the interests of society and those of the individual creator; and between the short-term gratification of immediate consumption and the long-term process of providing economic incentives that reward creativity and foster a dynamic culture. Digital technology and the Internet have had, and will continue to have, a radical impact on those balances. They have given a technological advantage to one side of the balance, the side of free availability, the consumer, social enjoyment and short-term gratification. History shows that it is an impossible task to reverse technological advantage and the change that it produces. Rather than resist it, we need to accept the inevitability of technological change and to seek an intelligent engagement with it. There is, in any case, no other choice—either the copyright system adapts to the natural advantage that has evolved or it will perish.[26]

[8]       In the dedication to the Expositiones in Summulas Petri Hispani—printed around 1490 in Lyons—the editor, Johann Trechsel, announced: “[i]n contrast to xylography, the new art of impression I am practi[c]ing ends the career of all the scribes. They have to do the binding of the books now.”[27] Similarly, in the digital era, distributors’ roles and functions might be redefined. One of the key lessons in the gradual shift in market power in the entertainment industry these days is that the power of the old gatekeepers is declining, even as the overall industry grows. The power, instead, has definitely moved directly to the content creators themselves. Creators no longer need to go through a very limited number of gatekeepers, who often provide deal terms that significantly limit the creator’s ability to make a living.[28]

[9]       Instead, “…a major new opportunity has opened up, not for gatekeepers, but for organizations that enable artists to do the different things that the former gatekeeper used to do—but while retaining much more control, as well as a more direct connection with fans.”[29] As discussed at length in another piece of mine,[30] multiple emerging organizations are enabling a direct discourse between artists and users (e.g., Kickstarter, TopSpin, or Bandcamp).[31] As a consequence, traditional cultural intermediaries might be forced to give up their Ancien Régime privileges, causing further resistance to change. In the words of Neelie Kroes, European Commission Vice-President for the Digital Agenda:

[a]ll revolutions reveal, in a new and less favourable light, the privileges of the gatekeepers of the “Ancien Régime.” It is no different in the case of the internet revolution, which is unveiling the unsustainable position of certain content gatekeepers and intermediaries. No historically entrenched position guarantees the survival of any cultural intermediary. Like it or not, content gatekeepers risk being sidelined if they do not adapt to the needs of both creators and consumers of cultural goods…Today our fragmented copyright system is ill-adapted to the real essence of art, which has no frontiers. Instead, that system has ended up giving a more prominent role to intermediaries than to artists. It irritates the public who often cannot access what artists want to offer and leaves a vacuum which is served by illegal content, depriving the artists of their well-deserved remuneration. And copyright enforcement is often entangled in sensitive questions about privacy, data protection or even net neutrality. It may suit some vested interests to avoid a debate, or to frame the debate on copyright in moralistic terms that merely demonise millions of citizens. But that is not a sustainable approach…My position is that we must look beyond national and corporatist self-interest to establish a new approach to copyright.[32]

III. Resisting Copyright (at Zero Marginal Cost) and Promoting Alternatives

[10]     In the aftermath of the legal battles targeting P2P platforms (such as The Pirate Bay), the Pirate Party “emerge[d] [in Sweden] to contest elections on the basis of the abolition or radical reform of intellectual property, in general, and copyright, in particular. The platform of the Pirate Party proclaims that ‘[t]he monopoly for the copyright holder to exploit an aesthetic work commercially should be limited to five years after publication. A five years copyright term for commercial use is more than enough.’”[33] “Non-commercial use should be free from day one.”[34] The Pirate Party enjoyed considerable success in its first electoral appearances in both Sweden and Germany, and similar political groups have since formed in other countries.[35] The Pirate Party serves as an “extreme expression [of] the sentiment of distaste or disrespect for intellectual property on the Internet.”[36] However, even the Economist has argued that copyright should return to its roots because, as it stands, it may cause more harm than good, proving that the sentiment is widespread.[37] A recent report from the Australian Government Productivity Commission widely criticized the present “copy(not)right” model, pointing to a number of critical issues:

…Australia’s copyright arrangements are weighed too heavily in favour of copyright owners, to the detriment of the long-term interests of both consumers and intermediate users. Unlike other IP rights, copyright makes no attempt to target those works where ‘free riding’ by users would undermine the incentives to create. Instead, copyright is overly broad; provides the same levels of protection to commercial and non-commercial works; and protects works with very low levels of creative input, works that are no longer being supplied to the market, and works where ownership can no longer be identified.[38]

[11]     Therefore, copyright law has fallen into a deep crisis of acceptance with respect to both users and creators.[39] Especially among new generations,[40] copyright tends to become irrelevant in the public mind, if not altogether opposed.[41] Voicing a common opinion, David Lange noted that the over-expansion of copyright entitlements lies at the root of this crisis in public acceptance:

…Raymond Nimmer has said that copyright cannot survive unless it is accorded widespread acquiescence by the citizenry. I think his insight is acutely perceptive and absolutely correct, for a reason that I also understand him to endorse: Never before has copyright so directly confronted individuals in their private lives. Copyright is omnipresent. But what has to be understood as well is that copyright is also correspondingly over-extended.[42]

[12]     Technological and cultural change played a central role in lowering the acceptance of an over-expansive copyright paradigm. Ubiquitous technology, cost minimization, and the emergence of fan authorship radically affect the traditional market failure that copyright is supposed to cure, both at the creation and distribution levels. The distributive power of the Internet instituted new economics of distribution for digital content.[43] With marginal costs of distribution and reproduction close to zero, the need for third-party investment is potentially eliminated, or at least strongly reduced. In The Creative Destruction of Copyrights, Raymond Ku wonders whether a copyright monopoly at close to zero marginal cost is still a sustainable option.[44] Ku concludes that, absent the need for encouraging content distribution, the artificial scarcity and exclusive rights created by copyright cannot find any other social reason for existence.[45] When distributors’ rights are unbundled from creators’ rights, society can no longer support the protection of distributors’ rights.[46] Under these circumstances, copyright would serve no other social purpose than transferring wealth from the public to distributors.[47] Therefore, in Ku’s view, copyright in the digital environment is a meaningless burden for society and should be eliminated.[48] As radical as Ku’s position may be, if technological innovation has led to a substantial reduction in the production, reproduction, and distribution costs of cultural artefacts, a strong case can be made against any position advocating expansion of the copyright monopoly.

[13]     Reproduction and distribution cost minimization also affected the traditional discourse regarding the incentive to create.[49] Reductions in the production and distribution costs of original expressive works encourage non-professional authors to create.[50] Therefore, the number of authors for whom the lucre of copyright proves a necessary stimulus should drop. Additionally, low marginal costs empower even a few authors to reach a broader audience.[51] If decentralized and non-professional authors increasingly satisfy the market demand, because non-monetary incentives stimulate creation, a copyright monopoly will eventually prove superfluous, at least for these works.[52] With respect to creative works provided by decentralized and non-professional authors, the burdens of a copyright monopoly will exceed its benefits.[53]

[14]     This crisis propelled a cultural copyright resistance movement. Neelie Kroes stressed that copyright fundamentalism has prejudiced our capacity to explore new models in the digital age:

So new ideas which could benefit artists are killed before they can show their merit, dead on arrival. This needs to change…So that’s my answer: it’s not all about copyright. It is certainly important, but we need to stop obsessing about that. The life of an artist is tough: the crisis has made it tougher. Let’s get back to basics, and deliver a system of recognition and reward that puts artists and creators at its heart.[54]

[15]     The digital opportunity led many to denounce the obsolescence of the traditional copyright monopoly and seek more radical reform. In 1994, John Perry Barlow’s manifesto laid out the necessity of re-thinking digitized intellectual property and radically noted that “[i]n the absence of the old containers, almost everything we think we know about intellectual property is wrong.”[55] Nicholas Negroponte reinforced Barlow’s point by stating that “[c]opyright law is totally out of date…[i]t is a Gutenberg artifact…[s]ince it is a reactive process, it will have to break down completely before it is corrected.”[56] Recently, the Hargreaves report noted that archaic copyright laws “obstruct[] innovation and economic growth[.]”[57] In a message delivered to the G20 leaders, the President of Russia, Dmitry Medvedev, pointed out that “[t]he old principles of intellectual property protection established in a completely different technological context do not work any longer in an emerging environment, and, therefore, new conceptual arrangements are required for international regulation of intellectual activities on the Internet.”[58]

[16]     Many have highlighted the necessity of re-shaping present copyright laws[59] or abolishing them altogether.[60] In particular, a growing copyright “abolitionism” emerged online in response to a worrying tendency to criminalize the younger generation and new models of online digital creativity, such as mash-ups, fanfiction, or machinima.[61] The Committee on Intellectual Property Rights and the Emerging Information Infrastructure considered the notion that copying might not be an appropriate mechanism for achieving the goals of copyright in the digital age.[62] Among the inadequacies, the Committee highlighted that “in the digital world copying is such an essential action, so bound up with the way computers work, that control of copying provides, in the view of some, unexpectedly broad powers, considerably beyond those intended by the copyright law.”[63] Sharing is essential to emerging digital culture. Younger generations digitize, share, rip, mix, burn, and share again as a basic form of human interaction. Increasingly, many social forces maintain that full recognition of a non-commercial right to share creative works should be the goal of modern policies for digital creativity. At the same time, the criminalization of Internet users by cultural conglomerates is a source of social tension.[64] At the WIPO Global Meeting on Emerging Copyright Licensing Modalities – Facilitating Access to Culture in the Digital Age, Lessig called for an overhaul of the copyright system, arguing that the current system would “never work on the internet”: “[i]t’ll either cause people to stop creating or it’ll cause a revolution.”[65]

[17]     Resistance to copyright lies at the crossroads of academic investigation, civic involvement, and political activity. As Michael Strangelove argued in The Empire of Mind, the Internet set in motion an anti-capitalistic movement resistant to authoritarian forms of consumer capitalism and globalization.[66] This movement is “resisting the resistance” to change, resisting copyright, seeking access to knowledge, and promoting the public domain. Creative Commons (CC), the Free Software Foundation, and the Open Source movement[67] have propelled the diffusion of viable market alternatives to traditional copyright management. The “power of open,” as Catherine Casserly and Joi Ito have termed Creative Commons, has spread quickly, with more than four hundred million CC-licensed works available on the Internet.[68] Again, mostly driven by scholarly efforts, projects like the Access to Knowledge (A2K) Movement, the Open Access Publishing Movement, and the Public Domain Project lead the resistance to copyright over-expansion by seeking to re-define the hierarchy of priorities embedded in the traditional politics of intellectual property.[69] Meanwhile, proposals for reform have tackled the uneasy coexistence between copyright, digitization, and the networked information economy.[70] I will discuss these proposals first and later discuss the social movements resisting the resistance.

A.    Copyright Terms, Formalities and Registration Systems

[18]     As suggested by some scholars, a potential solution to the weaknesses of the current copyright regime is a setting in which published works are not copyrighted unless the authors comply with specific formalities. These formalities should be very simple, cheap, and non-discriminatory with respect to national versus foreign authors.[71]

[19]     The international community was persuaded to abolish most discriminatory hurdles in the analog world; similarly, the digital era may provide opportunities for creativity in adapting formalities.[72] The idea of a global online copyright registry for creative works is increasingly gaining momentum.[73] A carefully crafted registration system may enrich the public domain, enhance access and reuse, and avoid transaction costs burdening digital creativity and digitization projects.[74] Today, state-of-the-art technology enables the creation of global digital repositories that ensure the integrity of digital works, render filings user-friendly and inexpensive, and enable searches on the status of any creative work.[75] Registration could be a precondition for full protection: registered creators would enjoy full ownership rights, while, absent registration, the default level of protection would be limited to the moral right of attribution. Alternatively, if making global registration, rather than notice, a precondition for protection is considered too harsh a requirement, then registration might at least be required as a precondition of protection extensions.

[20]     In particular, registries and data collection should ease the orphan works problem.[76] Measures to improve the provision of rights management information range from encouraging digital content metadata tagging, to promoting the use of CC-like licenses, and encouraging the voluntary registration of rights ownership information in specifically designed databases.[77] Many projects aim at increasing the supply of rights management information to the public, merging unique sources of rights information, and establishing specific databases for orphan works. Notably, the EU-mandated project ARROW (Accessible Registries of Rights Information and Orphan Works) includes national libraries, publishers, writers’ organizations, and collective management organizations. It aspires to find ways of identifying rights holders, determining and clearing rights, and possibly confirming the public domain status of a work.[78]

[21]     Marco Ricolfi’s Copyright 2.0 proposal is a specific articulation of an alternative copyright default rule, coupled with the implementation of a formality and registration system.[79] Similar proposals have been made by other scholars, such as Lessig.[80] In Ricolfi’s Copyright 2.0, traditional copyright, or Copyright 1.0, is still available. In order to be enjoyed, Copyright 1.0 has to be claimed by the creator at the onset, for example by inserting a copyright notice before the first publication of a work.[81] In certain conditions, the Copyright 1.0 notice could also be added after the first publication, possibly during a short grace period.[82] The Copyright 1.0 protection given by the original notice is deemed withdrawn after a specified short period of time, unless an extension period is formally requested through an Internet-based renewal and registration procedure, whose registration data would be accessible online.[83] If no notice is given, Copyright 2.0 applies, giving creators mainly one right: the right to attribution.[84]

B. Mandatory Exceptions and Diligent Search for Orphan Works and UGC

[22]     Neelie Kroes warns against the welfare loss caused when the immense cultural riches unveiled by digitization remain locked behind the intricacies of an outdated copyright model.[85]

Think of the treasures that are kept from the public because we can’t identify the right-holders of certain works of art. These “orphan works” are stuck in the digital darkness when they could be on digital display for future generations. It is time for this dysfunction to end.[86]

[23]     Institutional proposals in both Europe and the United States advocate the implementation of a diligent search system as a defense to copyright infringement. A report from the United States Copyright Office recommended that Congress enact legislation to limit liability for copyright infringement if the alleged infringer performed “a reasonably diligent search” before any use.[87] Additionally, the Copyright Office laid down several suggestions to promote privately-operated registries as a more efficient arrangement than government-operated registries. The Copyright Office’s recommendations were included in the Orphan Works Act of 2006, and again in the Orphan Works Act of 2008.[88] So far, neither bill has been adopted into law. The High Level Expert Group on the European Digital Libraries Initiative made similar recommendations:

Member States are encouraged to establish a mechanism to enable the use of such works for non-commercial and commercial purposes, against agreed terms and remuneration, when applicable, if diligent search in the country of origin prior to the use of the works has been performed in trying to identify the work and/or locate the rightholders…The mechanisms in the Member States need to fulfill prescribed criteria… the solution should be applicable to all kinds of works; a bona fide/good faith user needs to conduct a diligent search prior to the use of the work in the country of origin; best practices or guidelines specific to particular categories of works can be devised by stakeholders in different fields.[89]

[24]     The system should be based on reciprocity so that Member States will recognize solutions in other Member States that fulfill the prescribed criteria. As a result, materials that are lawful to use in one Member State would also be lawful to use in another. Partially endorsing these principles, a Directive on certain permitted uses of orphan works has been recently enacted by the European Commission.[90]

[25]     In Europe, the most comprehensive proposal for an orphan works’ mandatory exception is outlined in a paper for the Gowers Review by the British Screen Advisory Committee (BSAC).[91] This proposal sets up a compensatory liability regime.[92] First, to trigger the exception, a person is required to have made ‘best endeavours’ to locate the copyright owner of a work.[93] ‘Best endeavours’ would be judged against the particular circumstances of each case. The work must also be marked as used under the exception to alert any potential rights owners.[94] If a rights owner emerges, he is entitled to claim a ‘reasonable royalty’ agreed upon by negotiation, rather than sue for infringement. If the parties cannot reach agreement, a third party steps in to establish the royalty amount. The terms of use of the formerly-orphan work would need to be negotiated between the user and the rights owner, according to the traditional copyright rules. However, users should be allowed to continue using a work that has been integrated or transformed into a derivative work, contingent upon payment of a reasonable royalty and sufficient attribution. Slightly modified versions of the U.S. and European models have also been investigated. For example, Canada established a compulsory licensing system based on diligent searches to use orphan works.[95]

[26]     In addition to orphan works, user-generated content (UGC) is another massive phenomenon that struggles with present copyright law. Mandatory exceptions have been claimed as a solution for user-generated content, together with the use of informal copyright practices.[96] Proposals have been made for introducing an exception for transformative use in user-generated works.[97] Both specific and general exception clauses have been under discussion.[98] Canada introduced a specific exception to this effect, allowing the use of a protected work—which has been published or otherwise made available to the public—in the creation of a new work, if the use is done solely for non-commercial purposes and does not have substantial adverse effects on the potential market for the original work.[99] Likewise, European institutions and stakeholders have recently discussed specific exceptions for UGC, after sidelining proposals for micro-licensing arrangements.[100] In a narrower context, the U.S. Copyright Office rulemaking on the Digital Millennium Copyright Act (DMCA) anti-circumvention provisions recently introduced an exception for the use of movie clips for transformative, non-commercial works, bringing a breath of fresh air to the world of ‘vidding’.[101] Also, general fair use exception clauses, if properly construed, may prove effective in giving UGC creators some breathing space.[102] In particular, recent U.S. case law protects UGC creators from bogus DMCA takedown notices in cases of blatant misrepresentation of fair use defenses by copyright holders. In Lenz v. Universal Music, the Ninth Circuit ruled that “the statute requires copyright holders to consider fair use before sending takedown notification.”[103] The Court also recognized the possible applicability of section 512(f) of the DMCA, which allows for the recovery of damages in cases of proven bad faith, which would occur if the copyright holder did not consider fair use or paid “lip service to the consideration of fair use by claiming it formed a good faith belief when there is evidence to the contrary.”[104]

C. Extended and Mandatory Collective Management

[27]     Extended Collective Licenses (ECL) are applied in various regions in Denmark, Finland, Norway, Sweden, and Iceland.[105] The ECL arrangement has become a tempting policy option in several jurisdictions, both to tackle the orphan works problem, and the larger issue of file sharing in digital networks.[106] In particular, a recent draft directive would apply this collective management mechanism to the use of out-of-commerce works by cultural heritage institutions.[107]

[28]     The system combines the voluntary transfer of rights from rights holders to a collective society with the legal extension of the collective agreement to third parties who are not members of the collective society. However, to be extended to third parties of the same category, the collective society must represent a substantial number of rights holders.[108] In any event, the legislation in the Nordic countries provides rights holders with the option of claiming individual remuneration or opting out of the system.[109] Therefore, with the exception of the rights holders who opted out, the extended collective license automatically applies to all domestic and foreign rights owners, unknown or untraceable rights holders, and deceased rights holders, even where estates have yet to be arranged. With an extended collective licensing scheme in place, a user may obtain a license to use all the works included in a certain category, with the exception of the opted-out works. Re-users of existing works should have no legal concerns: all orphan works will be covered by the license, and opted-out works instantly cease to be orphan. If ECL is applied to legitimize file sharing, collective management bodies will negotiate the license with users’ associations or internet service providers (ISPs). In exchange for the right to reproduce content and make it available online, rights holders will be remunerated from the proceeds collected through the extended collective license. A related proposal would place the right to make available to the public under mandatory collective management.[110] According to this proposal, to enjoy the economic rights attached to the right of making available to the public, rights holders would be obligated to use collective management. As a consequence, the ISPs would pay a lump-sum fee or levy to the collective societies in exchange for the authorization to download and make available to users the collective societies’ entire repertoire of managed works.[111] The money collected would then be redistributed to the rights holders.

[29]     Courts, however, have expressed hesitation in endorsing the ECL opt-out mechanism (as seen in the Google Books case).[112] A recent ECJ decision ruled against this arrangement while reviewing a French law that regulated the digital exploitation of out-of-print 20th-century books.[113] This French law gave approved collecting societies the right to authorize the reproduction and digital representation of out-of-print books.[114] Meanwhile, the law provided authors—or their successors in title—with an opt-out mechanism subject to certain conditions. In Soulier, the ECJ declared the French law incompatible with European law,[115] which provides authors—not collecting societies—with the right to authorize the reproduction and communication to the public of their works.[116] The Soulier decision might have far-reaching effects for the EU directive proposal—and, more generally, for all national systems of extended collective licensing that might be incompatible with EU law. The successful implementation of the directive proposal might remain the sole option for keeping ECL arrangements in place by redressing this judicial interpretation.

D. Alternative Compensation Systems or Cultural Flat Rate

[30]     As Volker Grassmuck noted, “the world is going flat(-rate).”[117] In search of alternative remuneration systems, researchers, activists, consumer organizations, artist groups, and policy makers have proposed to finance creativity on a flat-rate basis. In the past, levies on recording devices and media have been set up upon the acknowledgment that private copying cannot be prevented.[118] The same reasoning applies to the introduction of a legal permission to copy and make available copyrighted works for non-commercial purposes on the Internet.[119] Flat-rate proposals favor a sharing ecology that is best suited to the networked information economy.[120] A recent study by the Institute of European Media Law has argued that this may be “no[thing] less than the logical consequence [of] the technical revolution [introduced] by the internet.”[121] The Communia study also described the minimum requirements for a cultural flat rate as follows: “(i) a legal license permitting private individuals to exchange copyright works for non-commercial purposes; (ii) a levy, possibly collected by ISPs, flat, possibly differentiated by access speed; and (iii) a collective management, i.e. a mechanism for collecting the money and distributing it fairly.”[122]

[31]     Several flat-rate models have been proposed.[123] Some see the flat-rate payment by Internet subscribers as similar to private copying levies managed by collecting societies, while others want to put in place an entirely new reward system, giving the key role to Internet users themselves.[124] A non-commercial use levy permitting non-commercial file sharing of any digitized work was first proposed by Professor Neil Netanel.[125] Such a levy would be imposed on the sale of any consumer electronic devices used to copy, store, send, or perform shared and downloaded files, but also on the sale of internet access and P2P software and services.[126] An ad hoc body would be in charge of determining the amount of the levy.[127] The proceeds would be distributed to copyright holders by taking into consideration the popularity of the works, measured by tracking and monitoring technologies.[128] Users could freely copy, circulate, and make non-commercial use of any works that the rights holder has made available on the Internet. William Fisher followed up on Netanel with a more refined and comprehensive proposal.[129] Creators’ remuneration would still be collected through levies on media devices and Internet connection.[130] In Fisher’s system, however, a governmentally administered registrar for digital content, or alternatively a private organization, would be in charge of the management of creative works in the digital environment.[131] Digitized works would be registered with the Registrar and embedded with digital watermarks. Tracking technologies would measure the popularity of the works circulating online.[132] The Registrar would then redistribute the proceeds to the registered rights holders according to popularity.
Philippe Aigrain proposed a “creative contribution” encompassing a global license to share published digital works in the form of ECL, or, absent an agreement, of legal licensing.[133] Remuneration would be provided by a flat rate paid by all Internet subscribers.[134] Half of the money collected would be used for the remuneration of works shared over the Internet—distributed according to their popularity.[135] Measurement of popularity would be based on a large panel of voluntary Internet users transmitting anonymous data on their usage to collective management societies.[136] The other half of the money collected would be devoted to funding the production of new works and the promotion of added-value intermediaries in the creative environment.[137] Another suggestion included among flat-rate models is Peter Sunde’s Flattr “micro-donations” scheme. An internet user would give between 2 and 100 euros per month and could then nominate works that they wish to reward or “flattr,” a play on the words “flatter” and “flat-rate.”[138] Finally, the “German and European Green Parties included in their policy agenda the promotion of a cultural flat rate to decriminalise P2P users, remunerate creativity and relieve the judicial system and the ISPs from mass-scale prosecution.”[139] The “Green Party’s proposal has been backed up by the mentioned EML study that found that a levy on Internet usage legalizing non-commercial online exchanges of creative works conforms with German and European copyright law, even though it requires changes in both.”[140]

IV. The Access to Knowledge (A2K) Movement

[32]     As Nelson Mandela once noted, “[e]liminating the distinction between the information rich and information poor is…critical to eliminating economic and other inequalities between North and South, and to improving the life of all humanity.”[141] “Access to learning and knowledge…[are] key elements towards the improvement of the situation of under-privileged countries…”[142] Extreme copyright expansion and constant cultural appropriation, together with dysfunctional access to scientific and patented knowledge, have heightened the North-South cultural divide. The Global South has been exposed to the effects of a pernicious form of cultural imperialism, without the advantages of freely reusing that culture for its own growth. The Vatican noted that

[o]n the part of rich countries there is excessive zeal for protecting knowledge through an unduly rigid assertion of the right to intellectual property, especially in the field of health care. At the same time, in some poor countries, cultural models and social norms of behaviour persist which hinder the process of development.[143]

[33]     The issue of access to knowledge was first publicly expressed by the Brazilian government in a 1961 draft resolution.[144] Since then, access to knowledge has returned as a question of major international concern. Access to Knowledge (A2K) is a globalized movement aimed at promoting redistribution of informational resources in favor of minorities and the Global South.[145] In 2006, the Yale Information Society Project held an A2K conference committed “to building a broad conceptual framework of ‘Access to Knowledge’ that can foster powerful coalitions between diverse groups.”[146] Yale’s 2007 A2K conference aimed to “further build the coalition amongst institutions and stakeholders” from the 2006 conference.[147] The Consumer Project on Technology (CPT) says that A2K:

takes concerns with copyright law and other regulations that affect knowledge and places them within an understandable social need and policy platform: access to knowledge goods…The rich and the poor can be more equal with regard to knowledge goods than to many other areas.[148]

[34]     Under the umbrella of Article 27 of the Universal Declaration of Human Rights, several working projects at the international level have been set up to address the requests of the A2K movement.[149] As part of the discussions leading to the adoption of the WIPO Development Agenda,[150] activists produced a document to start negotiations on a Treaty on Access to Knowledge.[151] The proposed treaty is based on the core idea that “restrictions on access ought to be the exception, not the other way around,” and that “both subject matter exclusions from, and exceptions and limitations to, intellectual property protection standards are mandatory rather than permissive.”[152] Unfortunately, consensus on the A2K Treaty is still an ephemeral mirage. After a long battle,[153] however, a narrow version of the A2K Treaty, promoting the use of protected works by disabled persons, was signed in Marrakesh in 2013.[154]

[35]     The quest for access to knowledge goes hand in hand with the desire of the Global South and minorities to reclaim cultural identity from imperialist power. The search for cultural distinctiveness and access to knowledge becomes a paradigm of equality.[155] Although international agreement from all stakeholders on an A2K Treaty may be hard to reach, grass-roots movements spearheaded similar goals through different routes. A quest for open access to academic knowledge occupied the recent agenda of a global network of institutions and stakeholders.

V. From “Elite-nment” to Open Knowledge Environments

[36]     In a momentous speech at the European Organization for Nuclear Research (CERN) in Geneva, Professor Lawrence Lessig reminded the audience of scientists and researchers that most scientific knowledge is locked away for the general public and can only be accessed by professors and students in a university setting.[156] Lessig pungently made the point that “if you are a member of the knowledge elite, then there is free access, but for the rest of the world, not so much…publisher restrictions do not achieve the objective of enlightenment, but rather the reality of ‘elite-nment.’” [157]

[37]     Other authors have reinforced this point. John Willinsky, for example, suggested that, as its key contribution, open access publishing (OAP) models may move “knowledge from the closed cloisters of privileged, well-endowed universities to institutions worldwide.”[158] As Willinsky noted, “[o]pen access could be the next step in a tradition that includes the printing press and penny post, public libraries and public schools. It is a tradition bent on increasing the democratic circulation of knowledge…”[159] There is a common understanding that the path to digital enlightenment may start with open access to scientific knowledge.

[38]     The open access movement in scholarly publishing was inspired by the dramatic increase in prices for journals and publisher restrictions to the reuse of information.[160] The academics’ reaction against the ‘cost of knowledge’—also known as the serial crisis—is on the rise, especially against the practice of charging “exorbitantly high prices for…journals,” and of “agree[ing] to buy very large ‘bundles.’”[161] As Reto Hilty noted, the price increase of publishers’ products—while publishers’ costs have sunk dramatically—has forced the scientific community to react by implementing open access options, because antiquated copyright laws have failed to bring about reasonable balance of interests.[162] George Monbiot stressed the unfairness of the academic publishing system by noting, with specific reference to publishers such as Elsevier, Springer, or Wiley-Blackwell:

[w]hat we see here is pure rentier capitalism: monopolising a public resource then charging exorbitant fees to use it. Another term for it is economic parasitism. To obtain the knowledge for which we have already paid, we must surrender our feu to the lairds of learning.[163]

[39]     The parasitism lies in a monopoly over content that the academic publishers do not create and do not pay for. Researchers, hoping to publish with reputable journals, surrender their copyrights for free.[164] Most of the time, the production of that very content—now monopolized by the academic publishers—was funded by the public, through government research grants and academic incomes.[165] This led some authors to discuss the possibility of abolishing copyright for academic works altogether.[166] From the ancient proverbial idea of scientia donum dei est unde vendi non potest to the emergence of the notion of ‘open science’, the normative structure of science presents an unresolvable tension with the exclusive and monopolistic structure of intellectual property entitlements. Merton powerfully emphasized the contrast between the ethos of science and intellectual property monopoly rights:

“Communism,” in the nontechnical and extended sense of common ownership of goods, is a second integral element of the scientific ethos. The substantive findings of science are a product of social collaboration and are assigned to the community. They constitute a common heritage in which the equity of the individual producer is severely limited. An eponymous law or theory does not enter into the exclusive possession of the discoverer and his heirs, nor do the mores bestow upon them special rights of use and disposition. Property rights in science are whittled down to a bare minimum by the rationale of the scientific ethic. The scientist’s claim to “his” intellectual “property” is limited to that of recognition and esteem which, if the institution functions with a modicum of efficiency, is roughly commensurate with the significance of the increments brought to the common fund of knowledge.[167]

[40]     The major propulsion to open access at the European level was driven by the Berlin Conferences. The first Berlin Conference was organized in 2003 by the Max Planck Society and the European Cultural Heritage Online (ECHO) project to discuss ways of providing access to research findings.[168] Annual follow-up conferences have been organized ever since. The most significant result of the Berlin Conference was the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (“Berlin Declaration”), including the goal of disseminating knowledge through the open access paradigm via the Internet.[169] The Berlin Declaration has been signed by hundreds of European and international institutions. OAP is a publishing model in which the researcher, the institution, or the party financing the research pays for publication and the article is then freely accessible. In particular, OAP refers to free and unrestricted world-wide electronic distribution and availability of peer-reviewed journal literature.[170] The Budapest Open Access Initiative uses a definition that includes free reuse and redistribution of “[o]pen [a]ccess” material by anyone.[171] According to Peter Suber, the de facto spokesperson of the OAP movement, “[o]pen access (OA) is free online access. OA literature is not only free of charge to everyone with an Internet connection, but free of most copyright and licensing restrictions. OA literature is barrier-free literature produced by removing the price barriers and permission barriers that block access and limit usage of most conventionally published literature, whether in print or online.”[172]

[41]     Since the inception of the open-access initiative in 2001, the number of open access journals has grown to almost eleven thousand and is constantly rising.[173] In addition, several leading international academic institutions endorsed open-access policies and are working towards mechanisms to cover open-access journals’ operating expenses.[174] The same approach is increasingly followed by governmental institutions,[175] in light of the fact that economic studies have shown a positive net value of OAP models when compared to other publishing models.[176] The European Commission, for example, plans to make OAP the norm for research receiving funding from its Horizon 2020 programme—the EU framework programme for research and innovation.[177] As part of its Innovation and Research Strategy for Growth, the UK government has announced that all publicly funded scientific research must be published in open-access journals.[178] In the US, several research funding agencies have instituted open access conditions.[179] After an initial voluntary adoption in 2005, the Consolidated Appropriations Act of 2008[180] instituted an open access mandate for research projects funded by the NIH.[181] So far, the NIH has reported a compliance rate of 75%.[182] Together with research articles, data, teaching materials, and the like, the importance of open access models extends also to books. Millions of historic volumes are now openly accessible from various digitization projects such as Europeana, Google Books, or Hathi. In addition, many recent volumes are also openly available from a variety of academic presses, government and nonprofit agencies, and other individuals and groups. Libraries’ cataloging data are increasingly released under open access models.[183]

[42]     Criticizing the university for having become part of the problem of enclosure of scientific commons by “avidly defending their rights to patent their research results, and licence as they choose,” Richard Nelson argues that “the key to assuring that a large portion of what comes out of future scientific research will be placed in the commons is staunch defense of the commons by universities.”[184] Nelson continues by arguing that if universities “have policies of laying their research results largely open, most of science will continue to be in the commons.”[185] There is a true responsibility of the academic community towards expanding OAP. The role of universities in the open access and OAP movement is critical, and more than any other institution they have a motive to promote the goals of “open science.” Willinsky advocated the idea that scholars have a responsibility to make their work available OA globally by referring to an ‘access principle’ and noting that “[a] commitment to the value and quality of research carries with it a responsibility to extend the circulation of such work as far as possible and ideally to all who are interested in it and all who might profit by it.”[186] In this sense, the true challenge ahead of the OAP movement is to turn university environments, and the knowledge produced within, into a more easily and freely accessible public good, perhaps better integrating the OAP movement with Open University and Open Learning.

[43]     Seeking to reap the full value that open access can yield in the digital environment, Jerome Reichman and Paul Uhlir proposed a model of open knowledge environments (OKEs) for digitally networked scientific communication.[187] OKEs would “bring the scholarly communication function back into the universities” through “the development of interactive portals focused on knowledge production and on collaborative research and educational opportunities in specific thematic areas.” [188] Also, OKEs might reshape the role of libraries. As mentioned earlier, libraries are knowledge infrastructures and should be one of the main drivers of access to knowledge in the digital networked society. However, extreme commodification of information, propelled by the present legal framework, may drive libraries away from their function as knowledge repositories. As Guy Pessach noted,

[l]ibraries are increasingly consuming significant shares of their knowledge goods from globalized publishers according to the contractual and technological protection measures that these publishers impose on their digital content. Thus there is an unavoidable movement of enclosure regarding the provision of knowledge through libraries, all in a manner that practically compels libraries to take part in the privatization of knowledge supply and distribution.[189]

[44]     Therefore, the road to global access to knowledge is to provide digital libraries with a better framework to support their independence from the increasing commodification of knowledge goods. Several preliminary steps have been taken in the context of articles 3-1(V) and 3-1(VIII) of the WIPO A2K draft treaty and other legal instruments.[190] A World Digital Public Library that integrates OKEs would promote the rediscovery of currently unused or inaccessible works, open up the riches of knowledge in formats that are accessible to persons with disabilities, and strengthen the democratic process by favoring access regardless of users' market power.

VI. The Emergence of the Public Domain[191]

[45]     As Jessica Litman noted, "a vigorous public domain is a crucial buttress to the copyright system; without the public domain, it might be impossible to tolerate copyright at all."[192] The increasing enclosure of the public domain has contributed to the crisis of acceptance into which copyright law has fallen. The emergence and recognition of the public domain, the development of a public domain project, and the advent of a movement for cultural environmentalism are key elements of the resistance to copyright over-expansion. Perhaps more fundamentally, the emphasis on the importance of the public domain has gained momentum together with the rise of the networked information economy and its ethical revolution emphasizing mass collaboration, the sharing economy, and gift exchange. In this respect, Daniel Drache noted that the emergence of the public domain and public goods in the globalized society has increasingly troubled the future prospects of 'market fundamentalism.'[193]

[46]     Authors have suggested that the Statute of Anne actually created the public domain, by limiting the duration of protected works and by introducing formalities.[194] In early copyright law, however, there was no positive term to affirmatively refer to the public domain, though terms like publici juris or propriété publique had been employed by 18th-century jurists.[195] The fact of the public domain was nonetheless recognized, even though no single locution captured the concept. Soon, the idea of the public domain evolved into a "discourse of the public domain—that is, the construction of a legal language to talk about public rights in writings."[196] Historically, the term public domain was first employed in France by the mid-19th century to mean the expiration of copyright.[197] The English and American copyright discourse borrowed the term around the time of the drafting of the Berne Convention, with the same meaning.[198] Traditionally, the public domain was defined in relation to copyright as the opposite of property, as the "other side of the coin of copyright" that "is best defined in negative terms."[199] This traditional definition regarded the public domain as a "wasteland of undeserving detritus" and did not "worry about 'threats' to this domain any more than [it] would worry about scavengers who go to garbage dumps to look for abandoned property."[200] This is no longer the case: the definitional approach has been discarded in the last thirty years.

[47]     In 1981, Professor David Lange published his seminal work, Recognizing the Public Domain, and departed from the traditional line of investigation of the public domain. Lange suggested that "recognition of new intellectual property interests should be offset today by equally deliberate recognition of individual rights in the public domain."[201] Lange called for an affirmative recognition of the public domain and drafted the skeleton of a new theory for it. The public domain that Lange had in mind would become a "sanctuary conferring affirmative protection against the forces of private appropriation" that threatened creative expression.[202] The affirmative public domain proved a powerfully attractive idea for the scholarly literature and civil society. Lange spearheaded a "conservancy model," concerned with promoting the public domain and protecting it against any threats, set against the traditional "cultural stewardship model," which regarded ownership as the prerequisite of productive management.[203] The positive identification of the public domain propelled the "public domain project," as Michael Birnhack called it.[204]

[48]     Many authors attempted to define, map, and explain the role of the public domain as an alternative to the commodification of information that threatened creativity.[205] This ongoing public domain project offers many definitions that attempt to positively construe the public domain. In any event, a positive, affirmative definition of the public domain is fluid by nature. An affirmative definition of the public domain is a political statement, the endorsement of a cause. In other words, “[t]he public domain will change its shape according to the hopes it embodies, the fears it tries to lay to rest, and the implicit vision of creativity on which it rests. There is not one public domain, but many.”[206] Notwithstanding many complementary definitions, consistency is found in the common idea that the “materials that compose our cultural heritage must be free for all to use no less than matter necessary for biological survival.”[207] As a corollary, many modern definitions of the public domain are unified by concerns over recent copyright expansionism. 
The common understanding of the participants in the public domain project is that enclosure of the "materials that compose our cultural heritage" is a welfare loss against which society at large must be guarded.[208] The modern definitional approach endorsed by the public domain project is intended to turn the old metaphor, describing the public domain as what is "left over after intellectual property had finished satisfying its appetite,"[209] upside down by thinking of copyright as "a system designed to feed the public domain providing temporary and narrowly limited rights…all with the ultimate goal of promoting free access."[210] Moreover, the public domain envisioned by recent legal, public policy, and economic analysis becomes "the place we quarry the building blocks of our culture."[211] However, the construction of an affirmative idea of the public domain should always take into account that the abstraction of the public domain is slippery.[212] That affirmative notion must be embodied in a physical space that can be immediately protected and nourished. As Professor Lange put it, "the problems will not be resolved until courts have come to see the public domain not merely as an unexplored abstraction but as a field of individual rights fully as important as any of the new property rights."[213]

[49]     The modern public domain discourse owes much to the legal analysis of the governance of the commons, natural resources used by many individuals in common. Although the public domain and the commons are distinct concepts,[214] the similarities are many. Since the origin of the public domain discourse, the environmental metaphor has been widely used to refer to the cultural public domain.[215] The traditional environmental conception of the commons was therefore carried over to the cultural domain and applied to intellectual property policy issues. Environmental and intellectual property scholars started to look at knowledge as a commons—a shared resource. In 2003, the Nobel laureate Elinor Ostrom and her colleague Charlotte Hess discussed the applicability of their ideas on the governance and management of common pool resources to the new realm of the intellectual public domain.[216] The literature that followed continued to develop the concept of cultural commons in the footsteps of Ostrom's analyses.[217] The application of the physical commons literature to cultural resources brings a shift in approach and methodology from the earlier discourse of the public domain. This different approach has been described as follows:

[t]he old dividing line in the literature on the public domain had been between the realm of property and the realm of the free. The new dividing line, drawn as a palimpsest on the old, is between the realm of individual control and the realm of distributed creation, management, and enterprise. [218]

[50]     Under this conceptual scheme, restraint on use may no longer be an evil, but a necessity of a well-run commons. The individual, legal, and market-based control of the property regime is juxtaposed with the collective and informal controls of the well-run commons.[219] A well-run commons can avoid the "tragedy of the commons" without the need for single-party ownership.

[51]     The movement to preserve the environmental commons inspired a new politics of intellectual property.[220] The environmental metaphor has propelled what can be termed cultural environmentalism.[221] Several authors, spearheaded by Professor James Boyle, have cast a defense of the public domain on the model of the environmental movement. Morphing the public domain into the commons, and casting its defense on the model of the environmental movement, has the advantage of embodying the public domain in a much more physical form, minimizing its abstraction and the related difficulty of actively protecting it.[222] The primary focus of cultural environmentalism is to develop a discourse that makes the public domain visible.[223] Before the environmental movement, the environment itself was invisible; therefore, "like the environment," Boyle suggests, echoing David Lange, "the public domain must be 'invented' before it can be saved."[224] Today, the public domain has been "invented" as a positive concept, and the "coalition that might protect it," evoked if not called into being by scholars more than a decade ago, has formed.[225] Many academic and civic endeavors have joined and propelled this coalition.[226] Civic advocacy of the public domain and access to knowledge has also been followed by several institutional variants, such as the World Intellectual Property Organization's "Development Agenda."[227] Recommendation 20 of the Development Agenda endorses the goal "[t]o promote norm-setting activities related to IP that support a robust public domain in WIPO's Member States."[228] Europe has put together a diversified network of projects for the protection and promotion of the public domain and open access.[229] As a flagship initiative, the European Union has promoted COMMUNIA, the European Thematic Network on the Digital Public Domain.[230] Several COMMUNIA members embodied their vision in the Public Domain Manifesto.[231] In addition, other European policy statements have endorsed the same core principles of the Public Domain Manifesto. The Europeana Foundation has published the Public Domain Charter to stress the value of public domain content in the knowledge economy.[232] The Free Culture Forum released the Charter for Innovation, Creativity and Access to Knowledge, pleading for the expansion of the public domain, the accessibility of public domain works, the contraction of the copyright term, and the free availability of publicly funded research.[233] The Open Knowledge Foundation launched the Panton Principles for Open Data in Science to endorse the concept that "data related to published science should be explicitly placed in the public domain."[234]

[52]     The focus of cultural environmentalism has been magnified in online commons and the Internet as the “über-commons—the grand infrastructure that has enabled an unprecedented new era of sharing and collective action.”[235] In the last decade, we have witnessed the emergence of a “single intellectual movement, centered on the importance of the commons to information production and creativity generally, and to the digitally networked environment in particular.”[236] According to David Bollier, the commoners have emerged as a political movement committed to freedom and innovation.[237] The “commonist” movement created a new order that is embodied in countless collaborative online endeavors.

[53]     The emergence and growth of an environmental movement for the public domain, and in particular the digital public domain, is morphing the public domain into our cultural commons. We must look at it as a shared resource that cannot be commodified, much like our air, water, and forests. As with the natural environment, the public domain and the cultural commons it embodies must enjoy sustainable development. As with our natural environment, the need to promote a "balanced and sustainable development" of our cultural environment is a fundamental principle rooted in the Charter of Fundamental Rights of the European Union.[238] Overreaching property theory and overly protective copyright law disrupt the delicate tension between access and protection. Unsustainable cultural development, enclosure, and commodification of our cultural commons will produce cultural catastrophes. Just as unsustainable environmental development has polluted our air, contaminated our water, mutilated our forests, and disfigured our natural landscape, unsustainable cultural development will ravage and corrupt our cultural heritage and information landscape.

VII. Conclusions

[54]     I would like to conclude my review of this movement "resisting the resistance" to the Digital Revolution by sketching out a roadmap for reform that builds upon its vision. This roadmap reshapes the interplay between community, law, and market to envision a system that may fully exploit the digital opportunity, and it looks to the history of creativity as a guide.[239] The proposal revolves around the pivotal role of users in a modern system for enhancing creativity. The coordinates of the roadmap correspond to four different but interlinked facets of a healthy creative paradigm: namely, (a) the necessity to rethink users' rights, in particular users' involvement in the legislative process; (b) the emergence of a politics of the public domain, rather than a politics of intellectual property; (c) the need for cumulative and transformative creativity to regain its role through the re-definition of the copyright permission paradigm; and (d) the transition to a consumer gift system, or user patronage, through digital crowd-funding.

[55]     The roadmap for reform emphasizes the role of users. The Internet revolution is a bottom-up revolution. User-based culture defines the networked society, together with a novel concept of authorship that mingles users and authors. Therefore, the role of users in the legislative process and the relevance of user rights should be reinforced. So far, users have had very limited access to the bargaining table when copyright policies were enacted. This is due to the dominant mechanics of lobbying, which largely exclude users from policy decisions, and it has led to the implementation of a copyright system that is strongly protectionist and pro-distributor. In particular, the regulation of the Internet and the solutions given to the dilemmas posed by digitization may undermine the potential of this momentous change and limit positive externalities for users.

[56]     In the networked, peer-produced, and mass-productive environment, creativity calls for a politics of inclusive rights, rather than exclusive ones. This is a paradigm shift that would re-define the hierarchy of priorities by thinking in terms of "cultural policy" and developing a politics of the public domain, rather than a politics of intellectual property. Before the recognition of any intellectual property interests, a politics of the public domain must set up the "deliberate recognition of individual rights in the public domain."[240] It must provide positive protection of the public domain from appropriation. A politics of the public domain would reconnect policies for creativity with our past and our future, looking back at our tradition of cumulative creativity and forward at networked, mass-collaborative, user-generated creativity.[241]

[57]     In order to reconnect the creative process with its traditional cumulative and collaborative nature, a politics of inclusive rights and a politics of the public domain seek the demise of copyright exclusivity.[242] In my roadmap for reform, I argue for the implementation of additional mechanisms to provide economic incentives to create, such as a liability rule integrated into the system and an apportionment of profits. A politics of inclusivity would de-construct the post-romantic paradigm that over-emphasized creative individualism and absolute originality in order to adapt policies to networked and user-generated creativity.

[58]     Finally, I draw a parallel between traditional patronage, corporate patronage, and neo-patronage or user patronage as a re-conceptualization of the patronage system in line with the exigencies of an interconnected digital society.[243] In the future, support for creativity may increasingly derive from a direct and unfiltered exchange between authors and the public, who would become the patrons of our creativity. Remuneration through attribution, self-financing through crowd-funding, the ubiquity of digital technology, and mass collaboration will keep the creative process in motion. This market transformation will facilitate a direct, unrestrained "discourse" between creators and the public. Yet the role of distributors will be redefined and may partially disappear, making the transition long and uncertain.

* Senior Researcher and Lecturer, Centre for International Intellectual Property Studies (CEIPI), University of Strasbourg; Non-Resident Fellow, Stanford Law School, Center for Internet and Society. S.J.D., Duke University School of Law, Durham, North Carolina; LL.M., Duke University School of Law, Durham, North Carolina; LL.M., Strathclyde University, Glasgow, UK; J.D., Università Cattolica del Sacro Cuore, Milan, Italy. The author can be reached at

[1] See Raymond S. R. Ku, The Creative Destruction of Copyrights: Napster and the New Economics of Digital Technology, 69 U. Chi. L. Rev. 263 (2002).

[2] Joseph A. Schumpeter, Capitalism, Socialism, and Democracy 82-83 (Harper and Row 1975) (1942).

[3] The Washington Declaration on Intellectual Property and the Public Interest, (August 25-27, 2011),, archived at; See also Sebastian Haunss, The Politicisation of Intellectual Property: IP Conflicts and Social Change, 3 W.I.P.O.J. 129 (2011).

[4] See Douglas Rushkoff, Open Source Democracy 46-62 (DEMOS 2003); see also Yochai Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, 112 Yale L. J. 369, 371-372 (2002).

[5] See id. at 374.

[6] See Yochai Benkler, A Free Irresponsible Press: Wikileaks and the Battle over the Soul of the Networked Fourth Estate, 46 Harvard Civil Rights-Civil Liberties L. Rev. 311 (2011) (discussing the democratic functionality of Wikileaks).

[7] See Clay Shirky, Cognitive Surplus: Creativity and generosity in a Connected Age 81-109 (The Penguin Press 2010).

[8] See, e.g., Rebecca Tushnet, Payment in Credit: Copyright Law and Subcultural Creativity, 70 Law & Contemp. Probs. 138 (2007); see also Theorizing Fandom: Fans, Subculture and Identity (Alexander Alison & Harris Cheryl eds., Hampton Press 1997); see generally Andrew L. Shapiro, The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World we Know (Public Affairs 1999).

[9] See, e.g., Denise E. Murray, Changing Technologies, Changing Literacy Communication, 2 Language Learning & Tech. 43 (2000).

[10] See e.g., William Patry, Moral Panics and the Copyright Wars 27 (Oxford U. Press 2009) (explaining the impossibility of governments prosecuting all violations of copyright infringement in a peer-to-peer network).

[11] See id. at 27-28.

[12] See id. at 25-27.

[13] Id. at 5.

[14] See Patry, supra note 10, at 39.

[15] See Copyright In The Digital Era, Building Evidence For Policy, National Academies (2013),, archived at

[16] See Patry, supra note 10, at 26.

[17] See id.

[18] See Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, supra note 4, at 400-401.

[19] I have discussed the effects of copyright expansion on semiotic democracy—with a comprehensive review of the literature on point—in a previous piece of mine, to which I refer the reader. See generally Giancarlo F. Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, 13(2) J. Marshall Rev. Intell. Prop. L. 341 (2014),, archived at .

[20] See generally Giancarlo F. Frosio, User Patronage: The Return of the Gift in the “Crowd Society”, 2015(5) Mich. St. L. Rev. 1983, 2036-2039 (2015),, archived at (discussing Baudrillard’s categories as applied to cyberspace and the Digital Revolution).

[21] See Jean Baudrillard, The Consumer Society: Myths and Structures 66–68 (Mike Featherstone ed., Sage Publ’ns 1998) (1970).

[22] Id. at 67.

[23] Id. at 68.

[24] Id. at 67.

[25] Eben Moglen, Professor, Speech at the Law of the Common Conference at Seattle University: Free and Open Software: Paradigm for a New Intellectual Commons (March 13, 2009) (transcript available at for_a_New_Intellectual_ Commons), archived at

[26] Francis Gurry, Dir. Gen. of the World Intellectual Prop. Org., Speaker at the Blue Sky Conference: Future Directions in Copyright Law at Queensl. Univ. of Tech., Brisbane, Austl. (February 25, 2011) (transcript available at, 1–2), archived at (emphasis added).

[27] See Uwe Neddermeyer, Why were there no Riots of the Scribes? First Result of a Quantitative Analysis of the Book-production in the Century of Gutenberg, 31 Gazette du livre médiéval 1, 4-7 (1997) (noting that at the time of the printing revolution there was little resistance to the new technology; only a few protests from scribes were recorded throughout Europe; in fact, the only reported protests occurred in Genoa in 1472, in Augsburg in 1473, and in Lyon in 1477; reconversion from old to new jobs was smooth, a variety of new jobs was created, and there is no indication of unemployment or poverty suffered by any part of society due to the introduction of the new technology); see also Peter Burke, The Italian Renaissance: Culture and Society in Italy 71 (Princeton U. Press 1999) (noting the adaptability of several scribes, who became printers themselves); see also Cyprian Blagden, The Stationers’ Company: A History, 1403–1959, at 23 (Stanford U. Press 1977) (1960) (reporting that “there is no evidence of unemployment or organized opposition to the new machines” in England). Quite the contrary, in the last quarter of the fifteenth century more money was spent on books than at any time before.

[28] Michael Masnick & Michael Ho, The Sky is Rising: A Detailed Look at the State of the Entertainment Industry, Floor 64, 5 (January 2012),, archived at

[29] Id.

[30] See Frosio, supra note 20, at 2039-2046.

[31] See Masnick & Ho, supra note 28, at 5-6.

[32] Neelie Kroes, European Commission Vice-President for the Digital Agenda, A Digital World of Opportunities at the Forum d’Avignon – Les Rencontres Internationales de la Culture, de l’Économie et des Medias, (November 5, 2010), available at, archived at

[33] See Gurry, supra note 26.

[34] Copyright Perspectives: Past, Present and Prospect vii (Brian Fitzgerald and John Gilchrist eds., 2015).

[35] See AP, Pirate Party gains three seats in Iceland’s parliament, CBS News (Apr. 30, 2013, 12:16 PM),, archived at

[36] See Gurry, supra note 26; see, e.g., Miaoran Li, The Pirate Party and The Pirate Bay: How the Pirate Bay Influences Sweden and International Copyright Relations, 21 Pace Int’l L. Rev. 281 (2009); see also Jonas Anderson, For the Good of the Net: The Pirate Bay as a Strategic Sovereign, 10 Culture Machine 64 (2009); see also Luca Neri, La Baia dei Pirati: Assalto al Copyright (Cooper Editore 2009).

[37] See Copyright and Wrong: Why the Rules on Copyright need to Return to Their Roots, The Economist (Apr. 8, 2010),, archived at .

[38] Austl. Productivity Commission, Intell. Prop. Arrangements, Draft Rep. 16-17 (2016),, archived at

[39] See generally Jessica Silbey, The Eureka Myth: Creators, Innovators, and Everyday Intellectual Property (Stan. U. Press 2015) (drawing on interview-based empirical data to argue that the suggestion that creators – and even businesses – need intellectual property and exclusivity overstates, if not misstates, the facts, and explaining how this misunderstanding about creativity sustains a flawed copyright system); see also Jessica Litman, Real Copyright Reform, 96 Iowa L. Rev. 1, 3-5, 31-32 (2010) (noting that “the deterioration in public support for copyright is the gravest of the dangers facing the copyright law in a digital era…[c]opyright stakeholders have let copyright law’s legitimacy crumble…”); see also John Tehranian, Infringement Nation: Copyright 2.0 and You xvi-xxi (Oxford U. Press 2011); see also Brett Lunceford & Shane Lunceford, The Irrelevance of Copyright in the Public Mind, 7 Nw. J. Tech. & Intell. Prop. 33 (2008).

[40] See e.g., Music Downloading, File-Sharing and Copyright, Pew Res. Ctr.: Internet & Am. Life Project,, archived at

[41] See id.

[42] David Lange, Reimagining The Public Domain, 66 Law & Contemp. Probs. 471 (2003).

[43] See Sacha Wunsch-Vincent, The Economics of Copyright and the Internet: Moving to an Empirical Assessment Relevant in the Digital Age, (World Intell. Prop. Org., Economic Research Working Paper No. 9, 2013) at 2,, archived at

[44] See Ku supra note 1, at 300-305; see also Raymond S. R. Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, 18 Berkeley Tech. L. J. 539 (2003); see also Paul Ganley, The Internet, Creativity and Copyright Incentives, 10 J. Intell. Prop. Rts. 188 (2005); see also John F. Duffy, The Marginal Cost Controversy In Intellectual Property, 71 U. Chi. L. Rev. 37 (2004).

[45] See Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, supra note 44, at 539.

[46] See id. at 566.

[47] See id.

[48] See Ku, supra note 1, at 304-305.

[49] See Ku, Consumers and Creative Destruction: Fair Use Beyond Market Failure, supra note 44, at 539.

[50] See Tom W. Bell, The Specter of Copyism v. Blockheaded Authors: How User-Generated Content Affects Copyright Policy, 10 Vand. J. Ent. & Tech. L. 841, 853 (2008).

[51] See Wunsch-Vincent, supra note 43.

[52] See Bell, supra note 50, at 844.

[53] See id. at 855.

[54] Neelie Kroes, Vice President, Eur. Comm’n, Speech at the Forum d’Avigon, Who Feeds the Artist? (Nov. 19, 2011) (transcript available at, archived at

[55] John Perry Barlow, Selling Wine Without Bottles: The Economy of Mind on the Global Net, Wired (Mar. 1, 1994),, archived at

[56] Nicholas Negroponte, Being Digital 58 (First Vintage Books ed. 1996).

[57] Ian Hargreaves, Digital Opportunity: A Review of Intellectual Property and Growth 1 (2011).

[58] Dmitry Medvedev, President of Russ., Message to the G20 Leaders (Nov. 3, 2011) (transcript available at, archived at .

[59] See, e.g., Pamela Samuelson, The Copyright Principles Project: Directions for Reform, 25 Berkeley Tech. L. J. 1175, 1178–79 (2010); see also William Patry, How to Fix Copyright (Oxford U. Press 2012); see also Diane Zimmerman, Finding New Paths through the Internet, Content and Copyright, 12 Tul. J. Tech. & Intell. Prop. 145, 145 (2009); see also Hannibal Travis, Opting Out of the Internet in the United States and the European Union: Copyright, Safe Harbors, and International Law, 84 Notre Dame L. Rev. 331, 335 (2008); see also Guy Pessach, Reciprocal Share-Alike Exemptions in Copyright Law, 30 Cardozo L. Rev. 1245, 1247 (2008); see also Jessica Litman, Sharing and Stealing, 27 Hastings Comm. & Ent. L. J. 1, 2 (2004); see also Mark Lemley & R. Anthony Reese, Reducing Digital Copyright Infringement Without Restricting Innovation, 56 Stan. L. Rev. 1345, 1349–50 (2004); see also William Landes & Richard Posner, Indefinitely Renewable Copyright, 70 U. Chi. L. Rev. 471, 471 (2003).

[60] See, e.g., Stephen Breyer, The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs, 84 Harv. L. Rev. 281, 282 (1970) (concluding “[i]t would be possible, for instance, to do without copyright, relying upon authors, publishers, and buyers to work out arrangements among themselves that would provide books’ creators with enough money to produce them.”); see also Jon M. Garon, Normative Copyright: A Conceptual Framework for Copyright Philosophy and Ethics, 88 Cornell L. Rev. 1278, 1283 (2003) (noting “[u]nless there is a valid conceptual basis for copyright laws, there can be no fundamental immorality in refusing to be bound by them.”); see also Michele Boldrin and David Levine, Against Intellectual Monopoly (Cambridge U. Press 2008) (disputing the utility of intellectual property altogether); see also Martin Skladany, Alienation by Copyright: Abolishing Copyright to Spur Individual Creativity, 55 J. Copyright Soc’y U.S.A. 361, 361 (2008); see also Joost Smiers and Marieke van Schijndel, Imagine There Is No Copyright and No Cultural Conglomerates Too (Inst. of Network Cultures 2009); see also Joost Smiers, Art Without Copyright: A Proposal for Alternative Regulation, in Freedom of Culture: Regulation and Privatization of Intellectual Property and Public Space 22–29 (Jorinde Seijdel trans., NAi Publishers 2007); see also Joost Smiers and Marieke Van Schijndel, Imagining a World Without Copyright: The Market and Temporary Protection, a Better Alternative for Artists and Public Domain, in Copyright and Other Fairy Tales: Hans Christian Andersen and the Commodification of Creativity 129 (Helle Porsdam ed., Edward Elgar Publ’g Ltd. 2006); see also Frank Thadeusz, No Copyright Law: The Real Reason for Germany’s Industrial Expansion?, Spiegel Online (Aug. 18, 2010),,1518,710976,00.html, archived at (providing a historical and empirical argument against copyright). Cf. Lior Zemer, The Conceptual Game in Copyright, 28 Hastings Comm. & Ent. L.J. 409, 409 (2006).

[61] See, e.g., Lawrence Lessig, Laws that Choke Creativity, Ted (2007) (transcript available at, archived at

[62] See Nat’l Res. Council, Executive Summary, The Digital Dilemma: Intellectual Property in the Information Age, 62 Ohio St. L. J. 951 (2001),, archived at

[63] National Research Council, The Digital Dilemma: Intellectual Property in the Information Age 140 (National Academy Press 2000).

[64] See Copyright Policy, Creativity, and Innovation in the Digital Economy, USPTO (July 2013),, archived at (demonstrating how lawmakers have struggled for years trying to strike a balance).

[65] See Larry Lessig, Speech at the WIPO Global Meeting on Emerging Copyright Licensing Modalities –Facilitating Access to Culture in the Digital Age, Geneva, Switzerland (November 4, 2010), available at, archived at

[66] See Michael Strangelove, The Empire of Mind: Digital Piracy and the Anti-Capitalist Movement (University of Toronto Press 2005).

[67] See, e.g., Eben Moglen, Freeing the Mind: Free Software and the Death of Proprietary Culture, June 29, 2003, available at, archived at; see also Eben Moglen, Anarchism Triumphant: Free Software and the Death of Copyright, June 28, 1999, available at, archived at

[68] See Catherine Casserly and Joi Ito, The Power of Open (Creative Commons 2011),, archived at; see also Niva Elkin-Koren, Exploring Creative Commons: A Skeptical View of a Worthy Pursuit, in The Future of the Public Domain: Identifying the Commons In Information Law 325-345 (Lucie Guibault and P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[69] See Giancarlo Frosio, Communia Final Report 50-60 (Communia 2011), (last visited January 31, 2017).

[70] See, e.g., supra note 64, at iii.

[71] See, e.g., Lewis Hyde, How to Reform Copyright, The Chronicle (October 9, 2011),, archived at; see also Christopher Sprigman, Reform(aliz)ing Copyright, 57 Stan. L. Rev. 485 (2004) (proposing an optional registration system that subjects unregistered works to a default license under which the use of the work would trigger only a modest statutory royalty liability); see also Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity 140 (Penguin 2004); see also Lawrence Lessig, The Future of Ideas: The Fate of The Commons in a Connected World (Vintage Books 2002); see also Lawrence Lessig, Recognizing the Fight We’re In, Keynote Speech delivered at the Open Rights Group Conference, London, UK (March 24, 2012), at 36:40-38:28, available at, archived at (proposing the reintroduction of formalities at least to secure extensions of copyright, if legislators decide to introduce them).

[72] See Stef van Gompel, Formalities in the digital era: an obstacle or opportunity?, in Global Copyright: Three Hundred Years Since the Statute of Anne, from 1709 to Cyberspace 2-4 (Lionel Bently, Uma Suthersanen and Paul Torremans eds., Edward Elgar 2010) (arguing that the pre-digital objections against copyright formalities cannot be sustained in the digital era); see also Takeshi Hishinuma, The Scope of Formalities in International Copyright Law in a Digital Context, in Global Copyright: Three Hundred Years Since the Statute of Anne, from 1709 to Cyberspace 460-467 (Lionel Bently, Uma Suthersanen and Paul Torremans eds., Edward Elgar 2010).

[73] See Andrew Gowers, Gowers Review of Intellectual Property (HM Treasury, November 2006), at 6 ([r]ecommendation 14b endorses the establishment of a voluntary register of copyright),, archived at

[74] See id. at 40. 

[75] See Tanya Aplin, A Global Digital Register for the Preservation and Access to Cultural Heritage: Problems, Challenges and Possibilities, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World 3, at 23 (Estelle Derclaye ed., Edward Elgar 2010) (discussing copyright registers); see also Caroline Colin, Registers, Databases and Orphan Works, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World, supra, 28, at 29; see also Steven Hetcher, A Central Register of Copyrightable Works: a U.S. Perspective, in Copyright and Cultural Heritage: Preservation and Access to Works in a Digital World, supra, 156, at 158.

[76] See Orphan Works and Mass Digitization: A Report of the Register of Copyrights, United States Copyright Office at 66 (June 2015),, archived at

[77] See van Gompel, supra note 72, at 12-13 (noting that only voluntary supply of information would be compliant with the no-formalities prescription of the Berne Convention).

[78] See Accessible Registries of Rights Information and Orphan Works [ARROW],, archived at (creating registries of rights information and orphan works); see also Barbara Stratton, Seeking New Landscapes: a Rights Clearance Study in the Context of Mass Digitization of 140 Books Published between 1870 and 2010, at 5, 35–36 (British Library 2011),, archived at (showing that in contrast to the average four hours per book to undertake a diligent search, “the use of the ARROW system took less than 5 minutes per title to upload the catalogue records and check the results.”).

[79] See Marco Ricolfi, Copyright Policies for Digital Libraries in the Context of the i2010 Strategy, at 2, 6 (July 1, 2008),, archived at (paper presented at the 1st COMMUNIA Conference); see also Marco Ricolfi, Making Copyright Fit for the Digital Agenda, 5-6 (Feb. 25, 2011), available at, archived at

[80] See Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy 253-255 (Bloomsbury 2008) (proposing different routes for professional, remix and amateur authors, registries, and the re-introduction of formalities and an opt-in system).

[81] See Ricolfi, Making Copyright Fit for the Digital Agenda, supra note 79 at 6.

[82] See id.

[83] See id.

[84] See id.

[85] Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, Speech at Business for New Europe event: Ending Fragmentation of the Digital Single Market (Feb. 7, 2010) (transcript available at, archived at), at 2.

[86] Id.

[87] See U.S. Copyright Office, Rep. of the Reg. of Copyrights: Rep. on Orphan Works 95 (Jan. 2006).

[88] See Christian L. Castle & Amy E. Mitchell, Unhand That Orphan: Evolving Orphan Works Solutions Require New Analysis, 27 Ent. & Sports Law. 1 (Spring 2009).

[89] European Comm’n, High Level Expert Group on Digital Libraries, Final Report: Digital Libraries: Recommendations and Challenges for the Future 4 (Dec. 2009) (i2010 European Digital Libraries Initiative).

[90] See Directive 2012/28/EU of the European Parliament and of the Council of 25 October 2012 on Certain Permitted Uses of Orphan Works, 2012 O.J. (L 299/5), 3 [hereinafter Orphan Works Directive].

[91] British Screen Advisory Council, Copyright and Orphan Works 3 (Aug. 31, 2006),, archived at (paper prepared for the Gowers Review).

[92] See id. at 16.

[93] Id. at 25.

[94] See id. at 30.

[95] See Copyright Act, R.S.C. 1985, c C-42, art. 77 (Can.). Under the Canadian system, users can apply to an administrative body to obtain a license to use orphan works. In order to obtain the license, the applicant must prove that they have conducted a serious search for the rightsholder. If the Canadian Copyright Board is satisfied that, despite the search, the rightsholders cannot be identified, it issues the applicant a non-exclusive license to use the work. The license will shield the license holder from any liability for infringement. However, the license is limited to Canada. See id.

[96] See Steven A. Hetcher, Using Social Norms to Regulate Fan Fiction and Remix Culture, 157 U. Pa. L. Rev. 1869, 1880 (2009); see also Edward Lee, Warming Up To User-Generated Content, 2008 U. Ill. L. Rev. 1459, 1461 (2008) (noting that “informal copyright practices—i.e., practices that are not authorized by formal copyright licenses but whose legality falls within a gray area of copyright law—effectively serve as important gap fillers in our copyright system”).

[97] See, e.g., Daniel Gervais, The Tangled Web of UGC: Making Copyright Sense of User-Generated Content, 11 Vand. J. Ent. & Tech. L. 841, 869–70 (2009); see also Debora Halbert, Mass Culture and the Culture of the Masses: A Manifesto for User-Generated Rights, 11 Vand. J. Ent. & Tech. L. 921, 958 (2009); see also Mary W. S. Wong, “Transformative” User-Generated Content in Copyright Law: Infringing Derivative Works or Fair Use?, 11 Vand. J. Ent. & Tech. L. 1075, 1110 (2009).

[98] See, e.g., Peter K. Yu, Can the Canadian UGC Exception Be Transplanted Abroad?, 26 Intell. Prop. J. 176, 176–79 (2014) (discussing also a Hong Kong proposal for a UGC exception); see also Warren B. Chik, Paying it Forward: The Case for a Specific Statutory Limitation on Exclusive Rights for User-Generated Content Under Copyright Law, 11 J. Marshall Rev. Intell. Prop. L. 240, 270 (2011).

[99] See An Act to Amend the Copyright Act, 2010, Bill C-32, art. 22 (Can.),, archived at (introducing an exception for non-commercial UGC).

[100] See Eur. Commission, Rep. on the Responses to the Public Consultation on the Review of the EU Copyright Rules 68 (July 2014),, archived at (noting that respondents often favor a legislative intervention, which could be done “by making relevant existing exceptions (parody, quotation and incidental use and private copying are mentioned) mandatory across all Member States or by introducing a new exception to cover transformative uses”); see also Eur. Commission, Commission Comm. on Content in the Digital Single Mkt. 3-4 (2011),, archived at (proposing licensing arrangements).

[101] See U.S. Copyright Office, Rulemaking on Exemptions from Prohibition on Circumvention of Technological Measures that Control Access to Copyrighted Works (Jul. 26, 2010),, archived at

[102] See Mariam Awan, The User-Generated Content Exception: Moving Away From a Non-Commercial Requirement (Nov. 11, 2015), at 6, 8–9,, archived at

[103] Lenz v. Universal Music Corp., 801 F.3d 1126, 1129 (9th Cir. 2015).

[104] Id. at 1134–35 (noting also that there is no liability under § 512(f), “[i]f, however, a copyright holder forms a subjective good faith belief the allegedly infringing material does not constitute fair use”).

[105] See, e.g., Zijian Zhang, Transplantation of an Extended Collective Licensing System – Lessons from Denmark, 47 Int’l Rev. Intell. Prop. & Competition L. 640, 641–42 (2016).

[106] See European Comm’n, High Level Expert Group—Copyright Subgroup, Report on Digital Preservation, Orphan Works and Out-of-Print Works: Selected Implementation Issues 5 (Apr. 18, 2008) (i2010 European Digital Libraries Initiative),, archived at (identifying ECL as a possible solution to the orphan works’ problem); see also Jia Wang, Should China Adopt an Extended Licensing System to Facilitate Collective Copyright Administration: Preliminary Thoughts, 32 Eur. Intell. Prop. Rev. 283 (2010); see also Marco Ciurcina et al., Creatività Remunerata, Conoscenza Liberata: File Sharing e Licenze Collettive Estese [Remunerating Creativity, Freeing Knowledge: File-Sharing and Extended Collective Licences], Nexa Ctr. for Internet & Soc’y, at 8 (It.) (Mar. 15, 2009),, archived at (highlighting the positive externalities of the adoption of an extended collective licensing scheme as the most appropriate tool to be used by a European Member State to legitimize the file-sharing of copyrighted content); see also Johan Axhamn & Lucie Guibault, Cross-border Extended Collective Licensing: A Solution to Online Dissemination of Europe’s Cultural Heritage?, Instituut voor Informatierecht, at 4 (Neth.) (Aug. 2011),, archived at

[107] See Commission Proposal for a Directive of the European Parliament and of the Council on Copyright in the Digital Single Market, at 26, COM (2016) 593 final (Sept. 14, 2016) [hereinafter DSM Directive Proposal].

[108] See id.

[109] See id. at 5, 30.

[110] See Silke von Lewinski, Mandatory Collective Administration of Exclusive Rights – A Case Study on its Compatibility with International and EC Copyright Law, e-Copyright Bulletin (UNESCO), Jan.–Mar. 2004, at 2 (discussing a proposed amendment to the Hungarian Copyright Act); see also Carine Bernault & Audrey Lebois, Peer-to-Peer File Sharing and Literary and Artistic Property: A Feasibility Study Regarding a System of Compensation for the Exchange of Works via the Internet (June 2005) (discussing the same proposal endorsed by the French Alliance Public-Artistes, campaigning for the implementation of a Licence Globale).

[111] See Volker Grassmuck, A New Study Shows Copyright Exception for Legalising File-Sharing is Feasible as a Cease-Fire in the “War on Copying” Emerges, Intellectual Property Watch (Nov. 5, 2009),, archived at

[112] See Authors Guild v. Google, Inc., 804 F.3d 202, 229 (2d Cir. 2015).

[113] See LOI 2012-287 du 1er mars 2012 relative à l’exploitation numérique des livres indisponibles du XXe siècle [Law 2012-287 of March 1, 2012 on the Digital Exploitation of the Unavailable Books of the 20th Century], Journal Officiel de la République Française [J.O.] [Official Gazette of France], Mar. 2, 2012, p. 3986.

[114] See id.

[115] See Case C-301/15, Soulier v. Ministre de la Culture et de la Comm., Premier Ministre, 2016 Curia.Europa.Eu ECLI:EU:C:2016:878 (Nov. 16, 2016) [Fr.],, archived at (involving a request for a preliminary ruling by the Council of State, regarding an action brought by Marc Soulier and Sara Doke against the Minister of Culture and Communication, and the Prime Minister, on the interpretation of Articles 2 and 5 of a European Council Directive).

[116] See id.

[117] Grassmuck, supra note 111.

[118] In the analog environment, many national legislations implemented quasi flat rate models and different arrangements of private copying levies that may be envisioned as a form of cultural tax. Private copying levies are special taxes, which are charged on purchases of recordable media and copying devices and then redistributed to the right holders by means of collecting societies. See, e.g., Martin Kretschmer, United Kingdom Intellectual Prop. Office, Private Copying and Fair Compensation: An Empirical Study of Copyright Levies in Europe 64 (2011),, archived at (follow “Download this paper” hyperlink).

[119] See generally Bernt Hugenholtz et al., The Future of Levies in a Digital Environment, Institute for Information Law, at ii., 74 (2003),, archived at

[120] See generally Grassmuck, supra note 111 (exploring flat rate proposals and emerging models).

[121] See Alexander Roßnagel et al., Die Zulässigkeit einer Kulturflatrate nach Nationalem und Europäischem Recht [The Admissibility of a Cultural Flat Rate under National and European Law], Institut für Europäisches Medienrecht [Institute of European Media Law], at 63 (2009),, archived at

[122] See id.; see also COMMUNIA Network on the Digital Public Domain, Recommendation 14, in Final Report 171 (Mar. 31, 2011),, archived at

[123] See, e.g., Alain Modot et al., The “Content Flat-Rate”: A Solution to Illegal File-Sharing?, European Parliament, at 26 (2011),, archived at

[124] See Neil W. Netanel, Impose A Noncommercial Use Levy To Allow Free Peer-To-Peer File Sharing, 17 Harv. J. L. & Tech. 1, 32, 80 (2003). 

[125] See id. at 4.

[126] See id.

[127] See id. 

[128] See Netanel, supra note 124, at 4.

[129] See generally William W. Fisher, Promises To Keep: Technology, Law and the Future of Entertainment (2004).

[130] See id. at 217.

[131] See id. at 223–24.

[132] See id.

[133] See Philippe Aigrain with Suzanne Aigrain, Sharing: Culture and the Economy in the Internet Age 76–77 (2012).

[134] See id. at 65.

[135] See id.

[136] See id. at 152–53.

[137] See id.

[138] See Re:publica, Peter Sunde – Flattr Social Micro Donations, YouTube (Apr. 22, 2010),, archived at (describing the Flattr platform); see also Flattr,, archived at (last visited Feb. 9, 2017).

[139] COMMUNIA, Recommendation 14, supra note 122, at 171.

[140] Id.

[141] Nelson Mandela, Remarks Made at the TELECOM 95 Conference, 3 Oct. 1995, 9 Trotter Rev. 4, 4 (1995).

[142] World Intellectual Property Organization (WIPO), Provisional Committee on Proposals Related to a WIPO Development Agenda (PCDA), Revised Draft Report, at 6 (Aug. 20, 2007),, archived at

[143] Benedict XVI, Caritas In Veritate [Encyclical Letter on Integral Human Development in Charity and Truth], sec. 22 (June 29, 2009) available at, archived at

[144] See Graham Dutfield and Uma Suthersanen, Global Intellectual Property Law 277 (2008).

[145] See Amy Kapczynski, The Access to Knowledge Mobilization and The New Politics of Intellectual Property, 117 Yale L. J. 804, 807–08 (2008); see generally Access to Knowledge in the Age of Intellectual Property (Gaëlle Krikorian and Amy Kapczynski eds., Zone Books 2010); see also Access to Knowledge in Africa: The Role of Copyright (Chris Armstrong et al. eds., UCT Press 2010) (showing an example of the body of work created by pro-A2K groups).

[146] Conference, 2nd Annual Access to Knowledge Conference (A2K2), Yale Information Society Project (2007),, archived at

[147] Id.

[148] Consumer Project on Technology, Access to Knowledge,, archived at

[149] See G.A. Res. 217 (III) A, Universal Declaration of Human Rights (Dec. 10, 1948),, archived at (follow “Download PDF”).

[150] See WIPO, Development Agenda for WIPO,, archived at

[151] See CPTech, Proposed Treaty on Access to Knowledge (May 9, 2005) (Draft),, archived at

[152] Laurence R. Helfer, Toward a Human Rights Framework for Intellectual Property, 40 U.C. Davis L. Rev. 971, 1013 (2007) (citing William New, Experts Debate Access to Knowledge, IP Watch (2005), archived at); see also Proposed A2K Treaty, supra note 151 (mentioning other actions to achieve A2K goals, such as the use of the Internet as a tool for broader public participation; preservation of public domain; control of anticompetitive practices; restriction of the use of TPMs limiting A2K; use of educational material made available at an unreasonable price; and a new role of fair use, especially for purposes including but not limited to parody, reverse engineering and use of works by disabled persons).

[153] See, e.g., Margot E. Kaminski & Shlomit Yanisky-Ravid, Working Paper: Addressing the Proposed WIPO International Instrument on Limitations and Exceptions for Persons with Print Disabilities: Recommendation or Mandatory Treaty?, Yale Information Society 6 (Nov. 14, 2011),, archived at (follow “Download This Paper” hyperlink).

[154] See Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled, July 27, 2013, WIPO, (entered into force Sept. 30, 2016).

[155] See Joost Smiers & Marieke Van Schijndel, Imagine There is no Copyright and No Cultural Conglomerates too, 4 Institute of Network Cultures 5, 26; see also Johanna Gibson, Community Resources: Intellectual Property, International Trade and Protection of Traditional Knowledge 127–28 (2005).

[156] See Lawrence Lessig, The Architecture of Access to Scientific Knowledge: Just How Badly We Have Messed This Up, Address at CERN Colloquium and Library Science Talk, (Apr. 18, 2011),, archived at; see also Lawrence Lessig, Recognizing the Fight We’re In, Address at the Open Rights Group Conference, (Mar. 24, 2012),, archived at

[157] Lessig, CERN Colloquium Address, supra note 156.

[158] John Willinsky, The Access Principle: The Case for Open Access to Research and Scholarship 33 (2006).

[159] Id. at 30.

[160] See Giancarlo F. Frosio, Open Access Publishing: A Literature Review 74 (study prepared for the RCUK Centre for Copyright and New Business Models in the Creative Economy) (2014),, archived at (providing a book length overview of the OAP movement and several open access initiatives and projects, economics of academic publishing and copyright implications, OAP business models, and OAP policy initiatives).

[161] 16538 Researchers Taking a Stand, The Cost of Knowledge,, archived at; see also The Price of Information: Academics are Starting to Boycott a Big Publisher of Journals, The Economist, Feb. 4, 2012,, archived at; see also Eyal Amiran, The Open Access Debate, 18 Symploke 251, 251 (2011) (reporting several other examples of these reactions and boycotts).

[162] See Reto M. Hilty, Copyright Law and the Information Society – Neglected Adjustments and Their Consequences, 38(2) ICC 135 (2007).

[163] George Monbiot, Academic Publishers Make Murdoch Look like a Socialist, The Guardian (Aug. 29, 2011, 4:08 PM),, archived at; see also Richard Smith, The Highly Profitable but Unethical Business of Publishing Medical Research, 99 J. R. Soc. Med. 452–53 (2006) (discussing in similarly strong terms the unethical nature of the business of publishing medical research).

[164] See Smith, supra note 163, at 452.

[165] See id. at 454.

[166] See, e.g., Steven Shavell, Should Copyright of Academic Works Be Abolished?, 2 J. Legal Analysis 301, 301–05 (2010).

[167] Robert K. Merton, The Normative Structure of Science, in The Sociology of Science: Theoretical and Empirical Investigations 267, 273 (Norman W. Storer ed., U. Chicago Press 1973) (1942) (emphasis added),, archived at; see also James Boyle, Mertonianism Unbound? Imagining Free, Decentralized Access to Most Cultural and Scientific Material, in Understanding Knowledge as a Commons: From Theory to Practice 123 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2007),, archived at

[168] See Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (October 22, 2003), Berlin Conference, Berlin, October 20-22, 2003,, archived at

[169] See id.

[170] See id.

[171] Budapest Open Access Initiative, Budapest Open Access Initiative,, archived at

[172] Peter Suber, Creating an Intellectual Commons Through Open Access, in Understanding Knowledge as a Commons: From Theory to Practice 171 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2006).

[173] See Directory of Open Access Journals (DOAJ), DOAJ (last visited Feb. 9, 2017),, archived at

[174] See Open Access, The Scholarly Publishing & Academic Resources Coalition [SPARC],, archived at; see also SHERPA/JULIET – Research funders’ open access policies, SHERPA (last visited Feb. 9, 2017),, archived at; see also Manual of Policies and Procedures – F/1.3 QUT ePrints repository for research output, Queensland Univ. of Tech. [QUT] (Apr. 6, 2016),, archived at; see also Eric Priest, Copyright and The Harvard Open Access Mandate, 10 Nw. J. Tech. & Intell. Prop. 377, 394 (2012).

[175] See Frosio, Open Access Publishing, supra note 160, at 9.

[176] See John Houghton, Open Access – What are the Economic Benefits?, Victoria University, 13 (June 23, 2009) (report prepared for Knowledge Exchange) (showing that adopting an open access model to scholarly publications could lead to annual savings of around €70 million in Denmark, €133 million in the Netherlands and €480 million in the United Kingdom); see also John Houghton et al., Economic and Social Returns on Investment in Open Archiving Publicly Funded Research Outputs, Victoria University, 12 (July 2010) (report prepared for The Scholarly Publishing & Academic Resources Coalition [SPARC]) (concluding that free access to U.S. taxpayer-funded research papers could yield $1 billion in benefits).

[177] See What is Horizon 2020?, European Commission,, archived at

[178] See Department for Business Innovation and Skills, Innovation and Research Strategy for Growth 76–78 (Dec. 8, 2011),, archived at; see also Finch Report: Report of the Working Group on Expanding Access to Published Research Findings, Accessibility, Sustainability, Excellence: How to Expand Access to Research Publications, Research Information Network,, archived at

[179] See U.S. Department of Education, Institute of Education Sciences (IES), Request for Application, IES 11 (2009),, archived at; see also New Open Access Policy for NCAR Research, AtmosNews (October 20, 2009),, archived at; see also Howard Hughes Medical Institute, Research Policies: Public Access to Publications 1 (June 11, 2007),, archived at

[180] See Consolidated Appropriations Act of 2008, H.R. 2764, 110th Cong. Div. G, II § 218; see also Eve Heafey, Public Access to Science: The New Policy of The National Institutes of Health in Light of Copyright Protections in National and International Law, 15 UCLA J. L. & Tech. 1, 3 (2011),, archived at

[181] See National Institute of Health, Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research, (Jan. 11, 2008),, archived at; see also Peter Suber, An Open Access Mandate for the National Institutes of Health, 2(2) Open Medicine e39–e41 (2008),, archived at

[182] See Richard Poynder, Open Access Mandates: Ensuring Compliance, Open and Shut? (May 18, 2012),, archived at

[183] See, e.g., Adrian Pohl, Launch of the Principles on Open Bibliographic Data, Open Knowledge International Blog (Jan. 18, 2011),, archived at

[184] Richard R. Nelson, The Market Economy, and the Scientific Commons, 33 Research Policy 455, 467 (2004),, archived at

[185] Id.

[186] Willinsky, supra note 158, at xii; see also Peter Suber, Open Access (MIT Press 2012) (discussing the emergence of this principle).

[187] See Jerome H. Reichman, Tom Dedeurwaerdere, & Paul F. Uhlir, Governing Digitally Integrated Genetic Resources, Data and Literature: Global Intellectual Property Strategies for a Redesigned Microbial Research Commons 441 (Cambridge U. Press, 2016).

[188] Paul F. Uhlir, Revolution and Evolution in Scientific Communication: Moving from Restricted Dissemination of Publicly-Funded Knowledge to Open Knowledge Environments, Paper Presented at the 2nd COMMUNIA Conference (June 28, 2009) (on file with COMMUNIA),, archived at

[189] Guy Pessach, The Role of Libraries in A2K: Taking Stock and Looking Ahead, 2007 Mich. St. L. Rev. 257, 267.

[190] See Proposed WIPO A2K Treaty, supra note 151, at 5; see also Orphan Works Directive, supra note 90 (enabling the use of orphan works after diligent search for public libraries digitization projects); see also Case C-117/13, Technische Universität Darmstadt v Eugen Ulmer KG, 2014 E.C.R. 23 (September 11, 2013) (stating that European libraries may digitize books in their collection without permission from the rightholders with caveats); see also Act of September 11, 2015, on Amendments to the Copyright and Related Rights Act and Gambling Act (Poland) (bringing library services in Poland into the twenty-first century by enabling digitization for socially beneficial purposes, such as education and preservation of cultural heritage).

[191] Portions of the analysis in this Section can also be found in the Communia Final Report, supra note 69.

[192] Jessica Litman, The Public Domain, 39 Emory L. J. 965, 977 (1990).

[193] See Daniel Drache, Introduction: The Fundamentals of Our Time – Values and Goals that are Inescapably Public, in The Market or the Public Domain?: Global Governance and the Asymmetry of Power 1 (Daniel Drache ed., Routledge 2000).

[194] See Jane C. Ginsburg, “Une Chose Publique”? The Author’s Domain and the Public Domain in Early British, French and US Copyright Law, 65 Cambridge L. J. 636, 642 (2006).

[195] Id. at 638.

[196] Mark Rose, Nine-Tenths of the Law: The English Copyright Debates and the Rhetoric of the Public Domain, 66 Law & Contemp. Probs. 75, 77 (2003).

[197] See Ginsburg, supra note 194, at 637–38.

[198] See id. at 637.

[199] M. William Krasilovsky, Observations on Public Domain, 14 Bull. Copyright Soc’y 205 (1967).

[200] Pamela Samuelson, Mapping the Digital Public Domain: Threats and Opportunities, 66 Law & Contemp. Probs. 147, 147 (2003).

[201] David Lange, Recognizing The Public Domain, 44 Law & Contemp. Probs. 147, 147 (1981).

[202] See Lange, Reimagining The Public Domain, supra note 42, at 466.

[203] Julie E. Cohen, Copyright, Commodification, and Culture: Locating the Public Domain, in The Future of the Public Domain: Identifying the Commons in Information Law 133–34 (Lucie Guibault & P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[204] Michael D. Birnhack, More or Better? Shaping the Public Domain, in The Future of the Public Domain: Identifying the Commons in Information Law 59–60 (Lucie Guibault & P. Bernt Hugenholtz eds., Kluwer Law International 2006).

[205] See, e.g., id.

[206] James Boyle, The Second Enclosure Movement and the Construction of the Public Domain, 66 Law & Contemp. Probs. 33, 62 (2003).

[207] L. Ray Patterson & Stanley W. Lindberg, The Nature of Copyright: A Law of Users’ Rights 50 (University of Georgia Press 1991).

[208] Id. at 50–51.

[209] See Lange, Reimagining The Public Domain, supra note 42, at 465, n.11 (for the “feeding” metaphor).

[210] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 60.

[211] James Boyle, The Public Domain: Enclosing the Commons of the Mind 41 (Yale Univ. Press 2009).

[212] See Ronan Deazley, Rethinking Copyright: History, Theory, Language 105 (Edward Elgar Pub. 2008).

[213] Lange, Recognizing the Public Domain, supra note 201, at 178.

[214] The main difference lies in the fact that a commons may be restrictive. The public domain is free of property rights and control. A commons, on the contrary, can be highly controlled, though the whole community has free access to the common resources. Free Software and Open Source Software are examples of intellectual commons. See Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom 63–67 (Yale Univ. Press 2007). The source code is available to anyone to copy, use and improve under the set of conditions imposed by the General Public License. However, this kind of control is different than under traditional property regimes because no permission or authorization is required to enjoy the resource. These resources “are protected by a liability rule rather than a property rule.” Lawrence Lessig, The Architecture of Innovation, 51 Duke L. J. 1783, 1788 (2002). A commons is defined by the notions of governance and sanctions, which may imply rewards, punishment, and boundaries. See Wendy J. Gordon, Response, Discipline and Nourish: On Constructing Commons, 95 Cornell L. Rev. 733, 736–49 (2010).

[215] See Mark Rose, Copyright and Its Metaphors, 50 UCLA L. Rev. 1, 8 (2002); see also William St Clair, Metaphors of Intellectual Property, in Privilege and Property: Essays on the History of Copyright 369, 391–92 (Ronan Deazley et al. eds., Open Book Publishers 2010).

[216] See Charlotte Hess & Elinor Ostrom, Ideas, Artifacts, and Facilities: Information as a Common-Pool Resource, 66 Law & Contemp. Probs. 111, 111 (2003); see also Michael J. Madison, Brett M. Frischmann & Katherine J. Strandburg, The University as Constructed Cultural Commons, 30 Wash. U. J. L. & Pol’y 365, 403 (2009).

[217] See, e.g., Madison, Frischmann, & Strandburg, supra note 216, at 373 (acknowledging that Ostrom’s previous work laid the groundwork for their research); see also Elinor Ostrom & Charlotte Hess, A Framework for Analyzing the Knowledge Commons, in Understanding Knowledge as a Commons: From Theory to Practice 41–81 (Charlotte Hess & Elinor Ostrom eds., MIT Press 2007),, archived at (using Ostrom’s previous research as a base for new research throughout the chapter).

[218] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 66.

[219] See James Boyle, Foreword: The Opposite of Property, 66 Law & Contemp. Probs. 1, 8 (2003),, archived at

[220] See James Boyle, A Politics of Intellectual Property: Environmentalism for the Net?, 47 Duke L. J. 87, 110 (1997).

[221] See James Boyle, Cultural Environmentalism and Beyond, 70 Law & Contemp. Probs. 5, 6 (2007).

[222] See Boyle, The Public Domain: Enclosing the Commons of the Mind, supra note 211, at 180.

[223] See id. at 241–42.

[224] Boyle, The Second Enclosure Movement and the Construction of the Public Domain, supra note 206, at 52.

[225] Boyle, A Politics of Intellectual Property: Environmentalism for the Net?, supra note 220, at 113.

[226] See COMMUNIA, Survey of Existing Public Domain Competence Centers, Deliverable No. D6.01 (Draft, September 30, 2009) (survey prepared by Federico Morando and Juan Carlos De Martin for the European Commission) (on file with the author),, archived at (reviewing the current landscape of European competence and excellence centers that focus on the study of the public domain).

[227] See WIPO, Development Agenda for WIPO, supra note 150; see also Séverine Dusollier, Scoping Study on Copyright and the Public Domain, WIPO (prepared for the World Intellectual Property Organization) (May 7, 2010).

[228] Chair of the Provisional Committee on Proposals Related to a WIPO Development Agenda (PCDA), Initial Working Document for the Committee on Development and Intellectual Property (CDIP), WIPO (Mar. 3, 2008),, archived at

[229] Compare Communia Final Report, supra note 69 (launching programs together with Communia, as part of the i2010 policy strategy); with LAPSI: The European Thematic Network on Legal Aspects of Public Sector Information, European Commission (Dec. 17, 2012),, archived at; and Digital Repository Infrastructure Vision for European Research, CORDIS (last visited Jan. 30, 2017),, archived at; and ARROW, supra note 78; and DARIAH, Digital Research Infrastructure for the Arts and Humanities,, archived at (aiming to enhance and support digitally-enabled research across the humanities and the arts).

[230] See Communia, The European Thematic Network on the Digital Public Domain, COMMUNIA,, archived at; see also Giancarlo F. Frosio, Communia and the European Public Domain Project: A Politics of the Public Domain, in The Digital Public Domain: Foundations for an Open Culture (Juan Carlos De Martin & Melanie Dulong de Rosnay eds., OpenBooks Publishers 2012).

[231] See The Public Domain Manifesto, The Public Domain Manifesto (2009),, archived at

[232] See generally The Europeana Public Domain Charter,, archived at (advocating for the public’s interest in maintaining access to Europe’s cultural and scientific heritage).

[233] See Charter for Innovation Creativity and Access to Knowledge, Free Culture Forum,, archived at (last visited Jan. 30, 2017).

[234] John Dupuis, Panton Principles: Principles for Open Data in Science, Science Blogs (Feb. 22, 2010),, archived at

[235] David Bollier, The Commons as a New Sector of Value-Creation: It’s Time to Recognize and Protect the Distinctive Wealth Generated by Online Commons, On the Commons (Apr. 22, 2008),, archived at

[236] Benkler, supra note 214 at I.

[237] See David Bollier, Viral Spiral: How the Commoners Built a Digital Republic of Their Own 3–14, (New Press 2009).

[238] See Charter of Fundamental Rights of the European Union, December 18, 2000, 2000 O.J. (C364) 1, 8, 37.

[239] Individual components of this roadmap for reform have been described in previous works of mine—to which I refer in this article. A more detailed review of this roadmap for reform—with each component of the proposal acting as a pillar for a metaphorical temple dedicated to the enhancement of creativity—will be the subject of Chapter 12 from my forthcoming book. Giancarlo F. Frosio, Rediscovering Cumulative Creativity: From the Oral-Formulaic Tradition to Digital Remix: Can I Get a Witness? (Edward Elgar, forthcoming 2017) (expanding on Frosio, Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness?, supra note 19).

[240] Lange, Reimagining the Public Domain, supra note 42, at 463.

[241] See Communia Final Report, supra note 69 (further discussing the politics of the public domain).

[242] This proposal—and the historical interdisciplinary research that serves as a background—has been discussed at length in previous works of mine to which I refer. See Giancarlo F. Frosio, A History of Aesthetics from Homer to Digital Mash-ups: Cumulative Creativity and the Demise of Copyright Exclusivity, 9(2) Law and Humanities 262 (2015),, archived at; see also Murray, supra note 9.

[243] For a full discussion of the idea of user patronage—and a review of the economics of creativity form a historical perspective—See Frosio, Rediscovering Cumulative Creativity from the Oral Formulaic Tradition to Digital Remix: Can I Get a Witness? supra note 19 at 376–90.

“Danger, Will Robinson”? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment

Greenbaum Publication Version PDF

Cite as: Daniel Ben-Ari, Yael Frish, Adam Lazovski, Uriel Eldan, & Dov Greenbaum, “Danger, Will Robinson”? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment, 23 Rich. J.L. & Tech. 3 (2017),

Daniel Ben-Ari,* Yael Frish,** Adam Lazovski,*** Uriel Eldan,**** & Dov Greenbaum*****

Table of Contents

I.     Introduction: What Is Artificial Intelligence?

II.     Disciplines & Recent Developments

III.     Ethics & Philosophy

IV.     The Emergence of Artificial Intelligence, Its Pioneers, and The Beginning of Its Implications

A.     The Turing Test

B.     The Roots of Artificial Intelligence

C.     Physical Symbol Systems Hypothesis

D.     Computational Intelligence

E.     Child Machine

V.     Artificial Intelligence and Its Implications in Law

A.     Market Failure

B.     The Vast Market Size

C.     Funding

VI.     The Reality as We See It, The Day After Artificial Intelligence

VII.    Specific Ethical, Legal, and Social Implications

VIII.    Artificial Intelligence in Fair Use–An Early Stage Proof of Concept

IX.      Conclusion and Recommendations for Courses of Action

“Artificial intelligence is our biggest existential threat”

– Elon Musk[2]

I. Introduction: What Is Artificial Intelligence?

[1]       In this position paper, we seek to provide a preliminary outline of the ethical, legal, and social implications facing society in light of the growing engagement of artificial intelligence (“AI”) in our everyday lives as attorneys. In particular, we investigated these implications by developing, in collaboration with the IBM Watson team, a proof of concept. In this proof of concept, we aimed to specifically demonstrate the usefulness of AI in analyzing case law in the field of intellectual property, particularly within copyright fair use. To this end, we have extensively reviewed the relevant literature in an effort to pose pertinent and challenging questions regarding the implications of AI in all areas of law.

[2]       AI is a sub-field of computer science;[3] it can be broadly characterized as intelligence exhibited by machines and software.[4] Intelligence refers to many types of abilities, yet is often constrained to the definition of human intelligence. It involves mechanisms, some of which are fully discovered and understood by scientists and engineers, and some of which are not.[5]

[3]       AI is playing an increasingly important role in our everyday lives.[6] It is asserted that in the near future AI will replace or enhance various human professions.[7] One of the overarching goals of the AI discipline is to improve machines and systems so that they can reason, learn, self-collect information, create knowledge, communicate autonomously, and manipulate their environment in unexpected fashions.[8] During the past two decades, AI has advanced to make major and influential improvements in the quality and efficiency of services and manufacturing processes.

[4]       Some researchers hope AI will closely approximate or even surpass human intelligence, via an emphasis on problem solving and goal achievement.[9] Both are possible, and AI may even reach computing levels more complicated than the human mind could ever reach.[10]

[5]       Many claim we are still far from achieving this objective, and that fundamental new ideas and paradigm shifts are required in order to push this field forward.[11] These aims notwithstanding, AI studies thus far continue to progress in the direction of understanding and “modeling human consciousness and the inner mind.”[12]

II. Disciplines & Recent Developments

[6]       To understand the field of AI we must first understand how researchers and philosophers frame it. They divide AI into two categories: strong and weak.[13] Strong AI further divides into human-formed AI and non-human-formed AI.[14] The first refers to the ability of computers to think, reason, and deduce in a manner similar to humans, and the latter refers to the ability to reason independently, without similarity to the human brain.[15] Weak AI refers to computers mimicking thinking and reasoning abilities, without actually having these abilities.[16] Understanding these distinctions is important when discussing issues of AI, thinking, and consciousness.

[7]       The main progress made so far has been within weak AI. However, some computer scientists are not “holding their breath” to attribute actual thinking and reasoning abilities to a machine with AI.[17] To quote Edsger W. Dijkstra–a member of computer science’s founding generation–“[t]he question of whether Machines Can Think (…) is about as relevant as the question of whether Submarines Can Swim.”[18] By analogy, computer scientists argue that planes are tested on how well they fly, not on whether they fly like birds. Essentially, these scientists believe that we need to step out of the current linguistic frameworks. Can a submarine swim? Can an airplane fly? Can a machine think? Many scientists claim these distinctions are meaningless–when we refer to machines as ‘acting’ intelligently, we are actually saying that they do not possess a mind or a consciousness.[19]

[8]       There are various AI applications, each different from the others. For example: speech recognition, language understanding, problem solving, game playing, computer vision (two-dimensional vs. three-dimensional), expert systems, heuristic classification, and more.[20] These applications fall into two groups. One involves narrow applications (such as speech recognition), and the other is broader (artificial general intelligence (AGI), including autonomous agent possibilities).[21]

[9]       Currently, most AI applications are narrow (i.e., highly specialized entities used to carry out specific tasks).[22] In contrast, the human brain excels in many different environments and combines strategies across applications. Current AI examples include a word processing program that automatically corrects spelling, a computer that learns and plays a video game, a chess-playing computer (e.g., IBM’s Deep Blue),[23] or a Go-playing system (e.g., Google’s AlphaGo).[24]

[10]     Due to the obvious distinction from human intelligence, society generally sees this type of AI as posing no immediate danger or threat. Yet, it is important to understand that even the current state of AI is represented by a broad spectrum of applications–including programs that assess one simple task;[25] “speech recognition programs,…collaborative filtering software, like that used by…”;[26] “Aaron, a robotic artist that produces paintings that could easily pass for human work;”[27] IBM’s Watson,[28] eBay’s computerized arbitration Modria;[29] and much more. All of these narrow AI applications range in capability from one simple task to intricate intelligent procedures.[30]

[11]     One explanation for the vast immersion of AI within current society may be the incorporation of basic science researchers (such as computer scientists) into high-tech companies.[31] Here, scientists have quickly learned that for AI to become accepted in human society, the emphasis must be on its benefits–assisting and contributing to humans in ways not previously achievable–rather than on how AI could replace them.[32] This is in stark contrast to AI in fiction.[33]

[12]     In fiction and cinema, AI is frequently portrayed as an ominous entity entwined with danger (e.g. HAL in “2001: A Space Odyssey,”[34] Agent Smith in “The Matrix,”[35] and the T1000 in “The Terminator”).[36] In many of these plots, AI is depicted as fully autonomous machines acting out in a way that is harmful to human beings.[37] However, there are also movies, such as Spielberg’s “A.I.,”[38] that portray machines in a softer, more humanlike light. Other films use AI simply for comedic relief, such as Star Wars[39] or its spoof, Spaceballs.[40] While reality is still far from the entities portrayed in science fiction, there are already AI machines that can cause injuries or death (e.g. autonomous cars), act as home and service robots (e.g., iRobot’s Roomba, Anny the CareBot), or serve in the private, finance, and governmental sectors.[41]

[13]     In light of all of the bad press it gets, it is important to understand how AI is being presented to society, what people think about it, and what needs to be considered nowadays in order to promote innovation in this area.

III. Ethics & Philosophy

[14]     The use of AI poses many important ethical questions. The philosopher John Searle, in his famed Chinese Room Argument, argued that the idea of a non-biological machine being intelligent is incoherent: “[t]he point is not that computers cannot think. The point is rather that computation as standardly defined in terms of the manipulation of formal symbols is not by itself constitutive of, nor sufficient for, thinking.”[42] Further, the eminent computer scientist Joseph Weizenbaum warned that “the idea [of an AI] is obscene, anti-human and immoral.”[43]

[15]     Many philosophers, scientists, and others have deliberated on such ethical and existential dilemmas. The artificial intelligence control problem, for example, was discussed in a book published in 2014 by the Swedish philosopher Nick Bostrom, titled “Superintelligence: Paths, Dangers, Strategies.”[44] Bostrom hypothesizes that AI could evolve into a form of super-intelligent entities that outsmart human intelligence,[45] and are even capable of self-improvement.[46] In that process, he suggests, the entities might become uncontrollable and lead to a human existential catastrophe.[47]

[16]     Two foundational concepts in the evolution of AI that tend to come up when people refer to the dangers of AI are technological singularity and swarm intelligence. Technological singularity refers to the point at which technological progress will become incomprehensibly rapid and complicated beyond our human capabilities.[48] The AI, in a feedback loop of ever-accelerating self-improvement, will surpass us in its intelligence and become too smart for us to control.[49] The term was first used in this context by the mathematician John von Neumann, and was published in 1958 when Stanislaw Ulam wrote about a conversation he had with von Neumann.[50]

[17]     When we speak about technological singularity in the AI context, we speak about the point at which the intelligence will surpass all human control or understanding, becoming too immeasurable and profound for humans to grasp – an “intelligence explosion.”[51] It can occur either when AI enters into a “runaway effect” of ever accelerating self-improvement, or when AI is autonomously capable of building other more intelligent and powerful entities.[52]

[18]     The second term, swarm intelligence, refers to the incorporation of self-replicating machines in all aspects of life, science, industry, and even politics.[53] The swarm will become a decentralized, self-organizing system.[54] In the Terminator movies, this is Skynet: a swarm of self-improving AI machines that take over the world.[55]

[19]     In addition to the ethical dangers of AI machines, there are also complicated existential questions that concern not only AI, but also humanity. Can machines have, or act as though they have, human intelligence? And if so, do they have a mind? If they have consciousness, or self-awareness, do they have rights?

[20]     Consciousness relates to abilities of understanding and thinking. Nevertheless, consciousness is still a widely unknown concept. Should a machine be aware of its mental state and actions? Can it be aware? Is it even relevant? Can minds be artificially created, as John Searle asked?[56] And how about free will? Even in some fields of philosophy it is debatable whether humans have free will, so how does it reflect on artificial entities? And if we consider AI entities as entities with consciousness or minds, then does it become immoral to dismantle them? And then how do we program them with an understanding of right and wrong?[57]

[21]     The vast majority of AI researchers do not pay attention to most of these ethical and social questions. Whether the machines actually think is not a concern for them, as long as the machines function properly.[58] Yet, philosophers urge all researchers to consider the ethical and social implications of their modus operandi.[59]

[22]     When examining the connection between society and science, history shows us dreadful events regarding ethics and responsibility. However, the science of AI raises new intricacies – regarding employment, rights, duties, and accountability. For example, are we as a society obligated to establish robot rights? This is not so implausible. For instance, the UK Office of Science and Innovation commissioned a report in 2006 dealing with robo-rights and possible future implications on law and politics.[60]

[23]     All of the above questions and discussions are yet to be answered, and as long as deeper understanding in the subject is not evident, strong AI will likely remain controversial.[61]

[24]     Evolving new technologies come with both a risk and a utility. It is unclear what AI will look like in the years to come. However, today we have the ability to try and lay the groundwork for a future in which man and machine will function together, and quite possibly as one.

IV. The Emergence of Artificial Intelligence, Its Pioneers, and The Beginning of Its Implications

[25]     The AI field began evolving after World War II when a number of people, among them the English mathematician Alan Turing, independently started working on intelligent machines.[62]

            A. The Turing Test

[26]     It is argued that Alan Turing’s publication entitled “Computing Machinery and Intelligence,”[63] published in 1950, was the first significant milestone in the AI field.[64] In this paper, Turing presented what is now known as the Turing Test.[65] The goal of the test is to determine, to a satisfactory level, whether a computer has intelligence.[66] Succinctly, to pass the test an observer has to be unable to determine if he is interacting with a computer or a human.[67] There are three test participants–a ‘judge’ played by a human being, and two entities, a human and a computer.[68] The judge asks both entities questions through a computer terminal, and if he cannot distinguish between the human and the computer, then the computer is said to have passed the test and is considered to have intelligence.[69]
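The protocol of the imitation game can be sketched in a few lines of code. The sketch below is purely illustrative and not drawn from Turing’s paper: `machine_reply` and `human_reply` are hypothetical stand-ins for the two hidden participants, and a judge function tries to label which unlabeled answer came from the machine. The machine “passes” to the extent the judge’s accuracy stays near chance.

```python
import random

def machine_reply(question):
    # Hypothetical stand-in for the computer participant.
    canned = {"What is 2 + 2?": "4", "Do you write poetry?": "Count me out on this one."}
    return canned.get(question, "I'm not sure.")

def human_reply(question):
    # Hypothetical stand-in for the human participant.
    canned = {"What is 2 + 2?": "4", "Do you write poetry?": "Sometimes, badly."}
    return canned.get(question, "Hard to say.")

def imitation_game(questions, judge):
    """Run the test: for each question the judge sees two unlabeled answers
    and guesses (0 or 1) which one came from the machine. Returns accuracy."""
    correct = 0
    for q in questions:
        answers = [("machine", machine_reply(q)), ("human", human_reply(q))]
        random.shuffle(answers)  # the judge does not know which terminal is which
        guess = judge(q, answers[0][1], answers[1][1])
        if answers[guess][0] == "machine":
            correct += 1
    return correct / len(questions)
```

A judge with no reliable way to tell the answers apart scores about fifty percent over many rounds; that failure to distinguish is precisely what Turing proposed to treat as evidence of intelligence.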

[27]     The Turing test is both highly acknowledged and highly criticized. We have already witnessed situations in which computers have outsmarted man: IBM’s Deep Blue won a chess game in 1996 against one of the world’s best players, and IBM’s Watson won the U.S. trivia game-show Jeopardy in 2011 against two former winners.[70]

[28]     In his relatively simple test, Turing aimed to elegantly examine a narrow range of AI capabilities including thinking, natural language processing, logic, and learning.[71]

[29]     The Test also has its critics who claim that the comparison to human intelligence is deficient in two respects: first, the comparison includes non-intelligent human behavior, and second, it does not include non-human intelligent behavior.[72] For the second reason, a number of alternative tests have been designed to assess super-intelligent non-human computational capabilities:

  • C-tests, or Comprehension Tests: designed to test comprehension abilities – a main component of intelligence – while formulating information with new given data.[73]
  • Universal Anytime Intelligence Tests: aim to examine intelligence of any present or future biological or artificial system.[74]
  • The Winograd Schema Challenge: conceived by Hector Levesque, a professor of Computer Science at the University of Toronto, is based on a series of multiple choice questions (i.e. linguistic antecedents) which require spatial and interpersonal skills, preliminary knowledge, and other commonsense insights.[75]
  • The Logic Theorist System: demonstrated by Allen Newell and Herb Simon, is engineered to mimic the problem solving skills of humans and the determination of high-order intellectual processes.[76]
  • The Lovelace 2.0 Test: conceived in 2001 by Selmer Bringsjord and colleagues (and perfected in 2014 by Mark Riedl, a Georgia Tech professor),[77] examines intelligence by measuring creativity under the assumption that there are works of art that require intelligence in order to create them.[78]

[30]     Another aspect of the Turing Test that received criticism is human misidentification,[79] meaning it is not uncommon for humans to be misidentified as machines. One explanation for this is judge bias based on the answers he expects to receive.[80]

            B. The Roots of Artificial Intelligence

[31]     The 1956 Dartmouth Summer Research Project on Artificial Intelligence is considered the birthplace of AI as a discipline.[81] Amongst its participants were John McCarthy and Marvin Minsky.[82]

[32]     John McCarthy, who is typically thought to have coined the term artificial intelligence, was an American computer scientist and cognitive scientist, and one of the founders of the AI discipline.[83] In 1979 McCarthy published “Ascribing Mental Qualities to Machines,” where he argued that “[m]achines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance.”[84]

[33]     Marvin Lee Minsky was an American cognitive scientist in the field of AI and one of the main AI theorists.[85] Minsky believed that computers were not fundamentally different than the human mind.[86] Amongst his achievements was the construction of robotic arms and grippers, computer vision systems, and the first electronic learning system.[87] In 1969, Minsky, along with Seymour Papert, published the book “Perceptrons”[88] in which he emphasized critical issues that he felt prevented developmental research of neural networks.[89] Minsky was also an active contributor to the symbolic approach (described below) and the research of human intelligence.[90] In general, Minsky had a positive outlook regarding the future humanlike intelligence capabilities of AI.[91]

            C. Physical Symbol Systems Hypothesis

[34]     The “Physical Symbol Systems Hypothesis” was developed in 1976 by Newell and Simon, and later became a core part of AI.[92] The hypothesis states that “[i]ntelligence is the work of symbol systems…a physical symbol system has the necessary and sufficient means for general intelligent action.”[93] AI computers, as recognized physical symbol systems, are able to exhibit intelligence, and humans, as intelligent beings, must also be physical symbol systems, and therefore similar to computers.[94] Both are capable of processing structures of symbols.[95]

[35]     One problem related to the Physical Symbol Systems Hypothesis is that some activities human beings find hard or challenging–like mathematics–are easy for computers, while some activities that human beings find easy–like face recognition–are difficult for computers.[96] This problem led researchers to develop a strategy that later became known as the “Artificial Neural Network” (also known as Connectionism), which aims to create systems with brain-like characteristics that are capable of learning.[97] These particular efforts embrace some key elements from machine learning strategy and provide partial answers to “The Common Sense Knowledge Problem” through an effort to create a database containing all of the general common sense knowledge a human possesses, presentable in an AI-retrievable fashion.[98]
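The connectionist idea of learning from examples rather than explicit rules can be illustrated with the simplest such system: a single perceptron, the unit analyzed in Minsky and Papert’s book. This toy sketch, written for illustration only, learns the logical AND function by nudging its weights toward each misclassified example:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron: weights and bias are nudged
    toward each misclassified example until the data is fit."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    # Fire (output 1) only when the weighted sum crosses the threshold.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical AND from its four-row truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the unit classifies all four input pairs correctly; part of Minsky and Papert’s critique was that a single such unit cannot learn functions like XOR, no matter how long it trains.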

            D. Computational Intelligence

[36]     Computational intelligence aims to understand the principles that enable intelligent behavior in artificial systems. According to this area of research, AI has the following four common features:

  • Ability and flexibility to adapt to a changing environment;
  • Evidential reasoning and perception;
  • Ability to plan and execute goals; and
  • Ability to learn.[99]

[37]     The early AI successes left researchers optimistic; however, in the late 1950’s the field began to encounter obstacles and difficulties. One concern that is still highly relevant today is the “Common Sense Knowledge Problem”: a system only “knows” the information that it explicitly receives, and it is often incapable of making trivial connections on its own.[100] To this end, many research strategies are trying to find a way around this problem, including limited domain systems and machine learning.[101]

            E. Child Machine

[38]     The idea of a “Child Machine” was first introduced in the 1950’s.[102] A child machine aims to emulate the learning experience of a human child and implement it on an AI computer.[103] In that way, a computer starts as a “child” and improves by acquiring experiences and knowledge.[104] Yet current programs still have many drawbacks regarding physical experiences and language skills, which hinder the desired successful outcome.[105]

[39]     Even though there has been substantial progress in the science of AI, high hurdles remain. Difficult issues and thought-provoking questions that were raised over two decades ago are still far from receiving answers. Achieving human-level abilities, such as described in the common sense knowledge problem above, is still far from being reached.[106] While some types of human reasoning have been emulated to varying degrees, overall progress remains relatively sluggish.[107]

[40]     In order for AI to further evolve, it is necessary to continue researching different implementation techniques of common reasoning such as: logical analysis, handcrafted large scale databases, web mining, and crowd sourcing.[108]

[41]     The next sections examine and analyze the involvement of AI in the field of law, including its ethical, legal, and social implications in both the short term and the long term. Further, the third chapter discusses the fair use doctrine (a doctrine of copyright law), which is used as a test case to demonstrate AI abilities.[109] The proof of concept was conducted through IBM’s Watson with the guidance of IBM Israel.[110]

V. Artificial Intelligence and Its Implications in Law

“Of course I’ve got lawyers. They are like nuclear weapons; I’ve got them ‘cause everyone else has. But as soon as you use them they screw everything up.”

– Danny DeVito[111]

[42]     Notwithstanding the dire lack of paradigm-shifting progress described above, AI technologies are still progressing rapidly, not only theoretically, but also practically. Developers in both large corporations and start-ups aim to create learning and computerized thinking algorithms that will disrupt our reality.[112] While some of these algorithms hold promise for mankind’s welfare, others pose dramatic and imminent threats.[113]

[43]     This chapter depicts the rationale that brought forth and promoted the ‘invasion’ of AI into the world of law.[114] After reviewing the causes, we describe the technologies and companies worthy of the title ‘game-changing’ that might bring great value to society, followed by dramatic ethical, social, and legal shifts.

[44]     In the final part of this chapter we discuss what these dramatic societal shifts can offer, both as opportunities and threats.

[45]     Market failure provides a great opportunity for AI to enter the field of law and make a big impact on it. Our analysis focuses mainly on the United States market.

            A. Market Failure

[46]     Legal systems around the world are collapsing under an ever-growing workload.[115] It is no secret that the United States currently leads the world in the number of lawyers per capita and has dramatically overloaded judicial systems.[116] The fact remains, the judicial process is time consuming, inefficient, and cannot keep up with the speed and scale at which conflicts grow.[117] Add to that the legal tactics lawyers use to stall, buy time, and sometimes ‘dry’ their opponents out of resources, and you have a very dysfunctional system. The system’s own frequent users, lawyers, are active partners in creating this inability to function.[118]

[47]     Although this realization is not news to most, the fact remains that with current population growth, as well as the ever-expanding reach of the internet, the worldwide potential for legal conflicts continues to grow, and many judicial systems cannot keep pace with this growth.

            B. The Vast Market Size

[48]     The United States is among the largest consumers of legal services in the world.[119] The market size is estimated to be 437 billion USD annually.[120] Additionally, in recent years there has been an ongoing shift of power. While in the past large law firms controlled most of the market, today, nimble boutique firms are gaining an ever-increasing market share.[121] The potential to compete with the largest firms empowers young and small firms to innovate, become more efficient, and even try new services, enabling them to gain a competitive edge.[122]

            C. Funding

[49]     The legal industry is currently witnessing two trends in funding which make the invasion of AI into the world of law a fait accompli. First, 2015’s fourth quarter had the highest funding levels for the entire area of AI, reaching a five-year record.[123] Second, funding for legal tech start-ups has grown from seven million USD in 2009 to a whopping one hundred fifty million in 2013.[124]

[50]     These trends create fertile ground in which technological solutions can arise to solve large-scale problems like the ones posed by judicial systems around the world.[125]

[51]     While the use of computation and software is not new to the field of law,[126] we can now identify three main technological fields–Machine Learning, Natural Language Processing, and Big Data–which may enable AI to reign over the world of law.

[52]     Some of these technologies comprise different pieces of the puzzle which AI will soon piece together. When applied in a holistic manner these technologies may replace most lawyers and judges.[127] These changes will not come in the short term, but rather in years to come.[128] Yet, we believe change will arrive faster than expected. The three main technological fields are:

  • Machine Learning:[129] A computer science subfield in which computer generated algorithms are trained to recognize patterns within data.[130] This usually involves massive amounts of data in all areas–from visuals, to categorizing language patterns within human conversations, to written data.[131]
  • Natural Language Processing (NLP):[132] A sub-category within AI and machine learning.[133] In essence, NLP is heavily reliant on machine learning.[134] This form of research integrates computer science, psychology, and the interaction between the two.[135] Research in this field seeks to ‘teach’ computers how to comprehend human language, seek patterns, and perform deductions based on language patterns and reasoning.[136] The difference between NLP and machine learning is the added value from interactions with human behavior, human language, and even human biases and other psychological traits.[137]
  • Big Data: This field typically refers to data sets too excessive to deal with and analyze via traditional data analytics.[138] Big data sets are relatively young and likely due in part to the accumulation of legal data, which has accelerated greatly since the beginning of the digital storage age (2002).[139] These data sets are used to create predictive analysis algorithms in various fields, from business trends to target audience marketing methods.[140] It can also be used to analyze legal claims, judicial opinions, and more.[141] This type of data usually exists in public records.[142]
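To make concrete how the first two of these fields can combine in legal research, here is a deliberately minimal sketch of our own devising (not any vendor’s method): each stored opinion is reduced to a bag of words, a crude NLP step, and a query is matched to the most similar stored case by cosine similarity, a crude statistical step. The two-entry “corpus” and its case labels are invented for illustration.

```python
import math
from collections import Counter

def bag_of_words(text):
    # Crude NLP step: lowercase, split on whitespace, count word frequencies.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Compare two bag-of-words vectors by the angle between them.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(query, corpus):
    """Return the label of the stored document most similar to the query."""
    q = bag_of_words(query)
    return max(corpus, key=lambda name: cosine_similarity(q, bag_of_words(corpus[name])))

# Invented toy corpus standing in for a database of judicial opinions.
corpus = {
    "fair use case": "transformative use parody market effect fair use factors",
    "patent case": "claim construction infringement prior art patent eligibility",
}
```

A query such as “is a parody a transformative fair use” matches the fair-use entry; real systems of the kind surveyed below layer large corpora, trained language models, and citation data on top of this basic retrieval idea.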

[53]     We are now on the verge of a legal renaissance.[143] Market failure mixed with an immense market, growing funding for start-ups, and available and rapidly growing technology is a volatile concoction, which will likely create dramatic and disruptive changes in the nearby future.[144]

[54]     Bruce Buchanan and Thomas Headrick first raised the notion of using AI in the legal field in November 1970 in their article “Some Speculation about Artificial Intelligence and Legal Reasoning.”[145] In their research, they suggest the use of computers to model human thought processes and, as a direct outcome, also help lawyers in their reasoning processes.[146] Later, an experiment was conducted by Thorne McCarty, who created a program that was capable of performing a narrow form of legal reasoning in the specific area of corporate reorganization taxation.[147] Given a ‘description’ of the ‘facts’ of a corporate reorganization case, the program could implement an analysis of these facts in terms of several legal concepts.[148]

[55]     Today, in this subfield of AI and law there are already numerous technologies such as:

  • IBM’s Watson Debater:[149] The Debater is a new feature of IBM’s well-known Watson computer.[150] When asked to discuss any topic, it can autonomously scan its knowledge database for relevant content, ‘understand’ the data, select what it believes are the strongest arguments, and then construct sentences in natural language to illustrate the points it has selected, both for and against the topic. Using that process, it can assist lawyers by suggesting the most persuasive arguments and precedents when dealing with a legal matter.[151]
  • ROSS Intelligence:[152] “SIRI for the law”[153] was developed in IBM’s Watson labs. ROSS is a legal research tool that enables users to obtain legal answers from thousands of legal documents, statutes, and cases.[154] The question can be asked in plain English and not necessarily in legal form. Ross’s responses include legal citations, suggested articles for further reading, and calculated ratings to help lawyers prepare for cases.[155] Because Ross is a cognitive computing platform, it learns from past interactions, i.e. Ross’s responses increase in accuracy as lawyers continue to use it. This feature can help lawyers reduce the time spent on research.[156]
  • ModusP:[157] An Israeli startup which has created an advanced search engine using sophisticated algorithms based on AI. The search function helps jurists reduce legal research hours by finding legal knowledge and insights more efficiently.[158]
  • Lex Machina:[159] An intellectual property (“IP”) research company that helps companies anticipate, manage, and win patent and other IP lawsuits by comparing cases to a database of information and helping their customers draw valuable conclusions that inform winning business and legal strategies.[160] The technology compiles data and documents from court cases and converts them into searchable text files.[161] After a keyword, patent, or party is searched for, data and documents are sent back out.[162] It gives lawyers more information on specific judges, a client’s history, and information on what they can do to have a better chance at winning.[163]
  • Modria:[164] A cloud-based platform, initially developed for eBay and PayPal, that functions as an Online Dispute Resolution (“ODR”) service.[165] It enables companies “to deliver fast and fair resolutions to disputes of any type and volume.”[166] This technology aims to prevent the submission of lawsuits by providing easily accessible alternatives for dispute resolution.[167] Modria aims to create fair ODRs, based on the knowledge and insights from millions of cases and other disputes that the system has already solved.[168]
  • Premonition:[169] A technology which utilizes Big Data and AI to expose which lawyers win the most cases and before which judges.[170]
  • BEAGLE:[171] A technology that uses AI to quickly highlight the most important clauses in a contract and also provides a real-time collaboration platform that enables lawyers to easily negotiate a contract or pass it around an organization for quick feedback.[172] Beagle’s learning process allows the program to adapt to focus on what users care about most.[173]
  • Legal Robot:[174] A platform that enables users to check, analyze, and spot problems in contracts before signing them.[175] The platform is also meant to help users understand complex legal language: it parses legal documents and transforms them into numeric expressions so that statistical and machine learning techniques can derive meaning, then translates them into accessible language.[176] It is also designed to compare thousands of documents in order to build a legal language model to be used as a tool for referencing and analyzing contracts.[177]

[56]     The development of the field of AI and law began with programs that analyze cases and continues with technologies that make lawyers’ tasks more efficient, solve disputes, and replace human intervention. Surveying this course of development, we can predict that in the long run, AI technologies using machine learning and deep learning techniques may replace lawyers, arbitrators, mediators, and even judges. Computers could do the work of a lawyer – examining a case, analyzing the issues it raises, conducting legal research, and even deciding on a strategy.

VI. The Reality as We See It, The Day After Artificial Intelligence

            A. Judges and Physical Courts

[57]     Judges and their courts will become less necessary.[178] Most commercial disputes and criminal sentencing will be run by algorithms and wizards,[179] with platforms like Modria constructing conflict resolutions in a more pragmatic, down-to-earth manner. After all, they reportedly solve over fifty million disputes every year without any human intervention.[180] Most disputes can then be resolved by an AI algorithm that determines the amount of damages to be paid to each side. Similar processes can occur in divorce hearings – algorithms can automatically assess the individuals’ property and financial background and calculate the amount of time spent together to create a fair divorce agreement.

[58]     One of the biggest problems with conflict resolution is the fact that it is run by human beings, who are prone to the effects of emotion, fatigue, and general mood.[181] When a legal claim is first processed by algorithms instead of human beings, the outcomes are likely to be more productive. For example, Modria is able to resolve tens of millions of commercial disputes yearly without the intervention of a third-party human being providing a verdict.[182] Claimants will, of course, be able to appeal to a human judge, but the need for such appeals should dramatically decrease over time as machine learning algorithms gain a better understanding of the statistical meaning of justice. To reduce the number of appeals in tort cases, a government can create a fund to compensate damages in order to foster a ‘sense of justice’ in claimants’ minds.

[59]     Some judges may remain in office to rule on cases that algorithms fail to resolve to the satisfaction of both sides, and on cases presenting entirely new issues.

            B. Lawyers

[60]     Lawyers may also become a dying breed,[183] as algorithms learn how to structure claims, check contracts for problematic caveats, negotiate deals, predict legal strategies, and more. Using AI to create simple, optimally designed regulations and laws that are easier to learn, understand, and litigate by computer will further the winnowing of the legal profession.[184]

[61]     Lawyers – or something similar – will still be necessary; however, they will focus mainly on risk engineering instead of litigation and contracts.[185] Lawyers will need to use intuition and skills not yet available to machines to analyze exposure and various aspects of performing business and civil actions.[186] They will, however, be helped by AIs that have already sifted through all the relevant data. Until AI is able to integrate that data into a nuanced analysis requiring some form of higher thinking, creativity, and prediction of likely outcomes based on human reactions, lawyers will still be needed. In the future, all but the most skilled litigation and corporate lawyers will become unemployed as computer algorithms learn to emulate earlier successful strategies and avoid unsuccessful ones to achieve optimal outcomes.[187] Young (often overpaid) associates will become unnecessary as much of their grunt work will be doable by machine.[188]

[62]     In some areas of the law, lawyers may take longer to disappear entirely. In areas without clear precedent, cases may be deemed too delicate to be handled by computers. Some clients may never trust computers and will insist on using humans; it will take time until we are willing to entrust our freedom (or our lives, in certain states in the United States) to algorithms.[189]

            C. Jury

[63]     Juries, like the other members of the legal system, will not be needed for most cases as there will be fewer trials.[190] The majority of legal issues will be solved by algorithms. In addition, technology may ensure that juries are designed to represent society, perhaps even mimicking human biases involving race, background, and life experience.[191] Such a jury could easily be instructed to disregard information, or weigh some data differently than others.[192]           

            D. Law School

[64]     Law schools will change dramatically, not least because we will need fewer lawyers. Moreover, the nature of legal learning will change to include subjects that are not taught in law schools today–creativity, understanding of statistics, big data analysis, and more.[193]

VII. Specific Ethical, Legal, and Social Implications

[65]     When considering these technologies and the changes they bring to the legal field, we must refer to the ethical, legal, and social implications that they create:

[66]     Today, the legal profession—lawyers, judges, and legal education—faces a disruption, mostly because of the growth of AI technology, both in power and capacity.[194]

[67]     An example of this disruption is that computers can now review documents, a task which human lawyers performed in the past. The role of AI is growing exponentially, so it is predicted that technology will evolve to a level that will enable computers to take over more complex legal tasks such as legal document generation and predicting litigation outcomes.[195] These implementations will become possible as machine intelligence’s learning abilities become better and better. Already, fifty-eight percent of respondents to the question “Is your firm doing any of the following to increase efficiency of legal service delivery” answered “Using technology tools to replace human resources.”[196] More specifically, forty-seven percent saw Watson replacing paralegals, and thirty-five percent thought the same for first-year associates. Thirteen and a half percent even thought Watson could replace experienced legal partners.[197] Notably, while twenty percent said that computers will never replace human practitioners,[198] that number has gone down from forty-six percent in 2011.[199]

[68]     There are some benefits which derive from these implications. First, they will increase competition in the legal services market, which will increase efficiency.[200] Second, the pricing of lawyers’ services today is very ambiguous because it is hard to predict the total services required. These technologies could enable price comparisons and the entrance of new players into the legal services market.[201]

[69]     The forecast is that these implications will affect the following legal areas:[202]

  • Legal Discovery: Machine searches will enhance the legal discovery process by making the review of legal documents more efficient. There are already a handful of software tools that use predictive coding to minimize lawyerly interference in the e-discovery process, including Relativity,[203] Modus,[204] OpenText,[205] kCura,[206] and others.[207] The courts have also acknowledged the use and promise of predictive coding.[208]
  • Legal Search: Search tools such as Lexis[209] and Westlaw[210] were the first legal search engines to use an intelligent search tool. Later, Watson enabled searching using semantics instead of keywords.[211] Semantic search allows natural language queries, and the computer responds semantically with relevant legal information.[212] Ross, mentioned above, is an example of this kind of system.[213] Advanced features provide information about the strength of a precedent, considering how much others rely on it, enabling its effective use.[214] Eventually, AI will even be able to spot issues based on the searches conducted.[215]
  • Compliance: Legal and regulatory compliance is often socially and morally required, not to mention the penalties due for non-compliance.[216] As such, many corporations employ teams of lawyers to confirm that they comply with the applicable regulatory regimes. AI machines are already being employed in this area, including Neota Logic,[217] which powers other companies’ AI regulatory compliance systems, such as Compliance HR for employment regulations[218] or Foley and Lardner Global Risk Solutions (GRS) for Foreign Corrupt Practices Act of 1977 (FCPA) compliance.[219]
  • Legal Document Generation: In the past, the use of templates helped reduce the cost of these legal services. Machine intelligence will evolve to generate documents that answer the specific needs of an individual. When these files are reviewed in court, AI will be able to improve the documents by tracking their effectiveness, using its learning abilities.[220]
  • Document Analysis: In addition to generating documents, AI can and will continue to assess the liabilities and risks associated with particular contracts, as well as determine ways for companies to optimize contracts to reduce costs.[221] Nowadays, companies such as eBrevia[222] and LegalSifter[223] are doing just that.
  • Brief and Memo Generation: Machine intelligence will be able to create drafts and memos that will then be revised and shaped by lawyers. In the future, it will create much more accurate briefs and memos, assisted by legal research programs which will provide useful data.[224] Some have even suggested using AI to draft legislative documents.[225]
  • Legal Analytics: Companies such as Lex Machina,[226] Lex Predict,[227] and Legal Operations Company, LLC[228] already combine data and analytical abilities to predict the outcomes of situations that have not yet occurred. Some areas of law, such as copyright and fair use (discussed next), are easier to model because the relevant data revolves around a specific, easily predictable set of factors.[229] Given the exponential improvement of computers and their learning abilities, these models will evolve to support more complex areas of law and to predict case outcomes.[230]
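To make the idea of outcome prediction concrete, the following is a minimal sketch of our own, not the actual method of Lex Machina or any other vendor. It reduces past cases to coarse feature dictionaries and predicts a new case by majority vote among the most similar precedents; every feature name and case in it is invented for illustration.

```python
from collections import Counter

def predict_outcome(past_cases, new_case, k=3):
    """Return the majority outcome among the k most similar past cases.

    past_cases: list of (features: dict, outcome: str) tuples
    new_case:   dict with the same feature keys
    Similarity is simply the count of matching feature values (a toy metric).
    """
    scored = sorted(
        past_cases,
        key=lambda c: sum(new_case.get(f) == v for f, v in c[0].items()),
        reverse=True,
    )
    top = [outcome for _, outcome in scored[:k]]
    winner, _ = Counter(top).most_common(1)[0]
    return winner

# Invented precedents: circuit, character of use, and nature of the work.
past = [
    ({"circuit": "2d", "use": "commercial", "work": "creative"}, "infringement"),
    ({"circuit": "2d", "use": "nonprofit", "work": "factual"}, "fair use"),
    ({"circuit": "9th", "use": "nonprofit", "work": "factual"}, "fair use"),
]

print(predict_outcome(past, {"circuit": "2d", "use": "nonprofit", "work": "factual"}))
# → fair use
```

Real systems of course use far richer features and statistical models; the point is only that once cases are encoded as structured data, prediction becomes a routine computation.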

[70]     These changes will not only affect access to hard-to-obtain legal representation;[231] they will also affect lawyers’ workplaces. Those who perform these tasks and do not adapt to these shifts could lose their jobs.[232] Additionally, in the future, fewer substandard lawyers will be needed. On the other hand, super-star or bespoke attorneys[233] will be more easily identifiable (because legal analytics can monitor lawyers’ success rates) and will put these technologies to their own use. Even though machines could replace many of a lawyer’s tasks, they cannot speak in court in the foreseeable future, so litigators will still be needed.[234] Moreover, some areas of law are subject to such rapid legal change that even intelligent machines will not be able to learn them quickly enough, so lawyers will be needed in those specialized areas. Also, lawyers’ human judgment may still add value to computer predictions.[235]

[71]     As a result of these changes, predicting case outcomes will be easier and more accurate, cases will be more likely to settle, and fewer trials will be conducted. It follows that the number of physical courtrooms may also decrease dramatically.

[72]     Another change will occur in law schools. As a result of the changes mentioned above, fewer jurists will be needed and only in certain areas of law. Therefore, law schools should change their aim and focus on the necessities of the new legal profession including technical expertise and an ability to interact with and efficiently use the new multidisciplinary AI technology.[236]

VIII. Artificial Intelligence in Fair Use–An Early Stage Proof of Concept

[73]     In our quest to explore the social, legal, and ethical implications of AI, we partnered with the IBM Watson team, which is creating a workable software product in the area of AI and law. Being students with a strong orientation toward other disciplines such as law, psychology, business, and government, we were naturally drawn to the field of conflict resolution. First, in the words of Steve Jobs, we had to create a “stupid-simple” legal analysis scheme.[237] In this scheme, we aimed to effectively explain to the engineering staff at IBM Watson how lawyers and law students approach a case.

[74]     We drilled down on the set of questions one asks when reading a ruling. As a rule, the more features or details one adds to the algorithm, the more data needs to be analyzed in order for Watson to effectively learn how the data was initially analyzed. To summarize this point: if we just needed Watson to identify a win or loss, the task would be relatively easy. However, we wanted Watson to analyze why someone won or lost, which is orders of magnitude more complex.

Model 1 – Case Law Analysis Scheme:

[75]     The logical process of analyzing case law is roughly similar regardless of the area of law, but requires a specific set of stages to analyze the case at hand. After learning that process, the system creates a data set in certain legal topics, thus gaining the ability to analyze new cases.

Stage 1: Identifying the case type variables

[76]     In this stage the focus is on the details of the case and establishing the specific normative framework.

  • Variable 1: Court type. The algorithm must identify in which court, state, or jurisdiction the case is being tried. This is imperative, since the court hierarchy dictates whether an earlier ruling is binding on lower courts. For example, United States Supreme Court case law is considered precedent for all lower courts. Any ruling that conflicts with binding precedent will not hold up on appeal. In addition, each court approaches the case differently – district courts find facts and then apply the law, while most appellate courts apply previously found facts to their understanding of the law.
  • Variable 2: Location & Date. The general rules in legal precedent are that new rulings overturn old rulings at that same judicial level and below, and that specific rulings overturn general rulings. This is why it is imperative for the machine learning algorithm to appreciate the source of each ruling.
  • Variable 3: Parties. The algorithm must identify which of the parties is the plaintiff and which is the defendant. This differentiation is imperative to refine which claims have been accepted by the court and which have not. In addition, different degrees of proof may be applied to different stakeholders in a case.
  • Variable 4: Legislative Standards. This should be categorized by both federal law and state law. The labeling of case law and statutes makes it easier to locate cases with similar issues. For this variable, it must be remembered that there is also a hierarchy among legal sources.
  • Variable 5: Rulings & Other Case Law. IBM’s algorithms need to identify other case law cited within each case as persuasive precedent or unpersuasive precedent. This enables the algorithm to develop a broad network structure, enabling it to understand which ties between rulings are relevant and to suggest more cases, which can be addressed in a legal matter.
  • Variable 6: Secondary Sources. Legal literature such as academic articles, books, and blogs provides valuable academic information, enabling the user of a search engine to find new ideas for forming claims or to find opinions that oppose binding precedents (which will be valuable when dealing with a case being tried in a court at the same jurisdictional level as the court that ruled on the existing precedent).
  • Variable 7: The Judiciary. The algorithm should identify the names of the judges and if they ruled with the majority or minority. In some instances, it should be determined whether a dissenting opinion could be used in another case to provide valuable insights into which claims might be taken under consideration by specific judges. As the famous saying goes, “A good lawyer knows the law; a great lawyer knows the judge.”
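The seven variables above can be pictured as one structured record per case. The sketch below is our own simplification, not IBM’s actual schema; all field names are invented, and the sample case (Blanch v. Koons, a Second Circuit fair use dispute) is filled in only illustratively.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CaseRecord:
    court: str                 # Variable 1: court type / jurisdiction
    location: str              # Variable 2: location
    decided: str               # Variable 2: date of the decision
    plaintiff: str             # Variable 3: parties
    defendant: str
    statutes: List[str] = field(default_factory=list)           # Variable 4
    cited_cases: List[str] = field(default_factory=list)        # Variable 5
    secondary_sources: List[str] = field(default_factory=list)  # Variable 6
    judges: List[str] = field(default_factory=list)             # Variable 7
    majority_author: Optional[str] = None                       # Variable 7

case = CaseRecord(
    court="U.S. Court of Appeals, Second Circuit",
    location="New York",
    decided="2006",
    plaintiff="Blanch",
    defendant="Koons",
    statutes=["17 U.S.C. § 107"],
)
print(case.plaintiff, "v.", case.defendant)
# → Blanch v. Koons
```

Encoding every case this way is what lets the algorithm reason about hierarchy (Variable 1), recency (Variable 2), and the citation network (Variable 5) rather than raw text alone.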

[77]     We have left out of this document some critical factors which might dramatically influence the way IBM’s algorithms approach cases. However, it was essential to create a relatively simplified approach for the algorithm to read the available case law, in order for it to understand the basic ground rules. Factors relating to whether laws are general or specific, the times at which they were legislated, the history of upholding and overturning a particular ruling, and other considerations have been left out for the sake of creating an initial proof of concept that can predict or evaluate real legal claims to an extent greater than chance. Another important consideration for this effort is the size of the data set – additional factors exponentially increase the amount of training data necessary to teach the algorithm how to think like a lawyer.

Stage 2: Selecting the Field of Dispute for the Case Law Data Set

[78]     The second stage required to create the proof of concept was finding a relatively structured area of law with hard-and-fast, consistent factors that have not changed much over recent years. A legal area with a very simple and clear list of standards would be the optimal tool to ensure that no claims have been skipped regarding a respective field of dispute. Further, we sought a field of law defined mostly by federal law rather than state law, as state law would require us to create a different schema for every state.

[79]     Lastly, we wanted to challenge ourselves and find an area of law that would be of interest to the general public and that could result in a usable product. We eventually chose to pursue the creation of an AI algorithm in the field of fair use in publishing, under the Copyright Act. The fair use doctrine incorporates all of the above requirements, and also plays an important societal role due to the public’s misunderstanding and content owners’ misuse of the doctrine, which contribute to copyright’s continued and expanding burden on free speech.[239]

Stage 3: Defining the Fair Use Analysis Scheme

[80]     In order to teach Watson how to analyze a fair use case, we created, based on various resources (textbooks, articles, and the web), a fair use analysis scheme depicting the rationale and analysis performed by lawyers when approaching such claims. Particularly helpful resources were the Stanford Copyright and Fair Use Center[240] and Cornell’s Fair Use Checklist.[241]

  • Fair Use in Publishing – Analysis Scheme: As part of the first model for our case law analysis scheme, we built a data set of fair use in publishing based on verdicts from all of the United States federal circuit courts. Although we had initially attempted to limit this to just the Second and Ninth Circuits, these two circuits did not provide sufficient case law for the analysis.
  • Copyright in a Nutshell: Copyright protection in the United States is legislated under the Copyright Act of 1976.[242] Section 102 of the act elaborates which works of authorship fall under copyright protection.[243] Section 104 deals with the question of when a work becomes the subject matter of copyright.[244] Section 104(a) provides that unpublished works specified by sections 102 and 103 are subject to copyright protection under the act, without regard to the nationality or domicile of the author.[245] Regarding published works, section 104(b) elaborates on when copyright protection will apply, depending on the nature of the work and the nationality or domicile of the author.[246] Section 106 covers the exclusive rights of the author, such as the rights to reproduce copies of the copyrighted work, prepare derivative works based upon the copyrighted work, distribute copies to the public, and more.[247] The Copyright Act provides for limitations to these exclusive rights, such as reproduction by libraries and archives[248] and transfers of a particular copy after the first sale[249] (e.g., selling a CD that you bought from a store). The fair use doctrine in U.S. law is based on Section 107.[250] The fair use doctrine provides a defense to infringement–“Fair use was traditionally defined as ‘a privilege in others than the owner of the copyright to use the copyrighted material in a reasonable manner without his consent’”[251]–and the application of the formerly judicial doctrine[252] requires the balancing of four statutory factors:
    • (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
    • (2) the nature of the copyrighted work;
    • (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
    • (4) the effect of the use upon the potential market or value of the copyrighted work.[253]

[83]     The court decides each factor, ruling for or against fair use. Then each of the four factors is weighed against the total weight of the others.[254] This is not a trivial process, even for an experienced judge: “the four statutory factors [may not] be treated in isolation, one from another. All are to be explored, and the results weighed together, in light of the purposes of copyright.”[255] As such, it is an optimal exercise for an AI.
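The weighing just described can be caricatured in code. This is a toy sketch of our own devising, not the court’s actual method: each of the four § 107 factors receives a lean toward fair use (+1), against it (-1), or neutral (0), optionally weighted by how heavily the court treated it, and the signed sum drives the overall call.

```python
FACTORS = ("purpose", "nature", "amount", "effect")

def weigh_fair_use(leans, weights=None):
    """leans: dict factor -> -1 / 0 / +1; weights: dict factor -> float.

    Returns the overall call from the weighted, signed sum of factor leans.
    A tie (score of zero) defaults against fair use in this toy model.
    """
    weights = weights or {f: 1.0 for f in FACTORS}
    score = sum(leans.get(f, 0) * weights.get(f, 1.0) for f in FACTORS)
    return "fair use" if score > 0 else "no fair use"

# A transformative use of a creative work with no market harm might lean:
print(weigh_fair_use({"purpose": 1, "nature": -1, "amount": 0, "effect": 1}))
# → fair use
```

The real judicial exercise is qualitative, of course; the sketch only makes explicit why the task is attractive for machine learning: the inputs are few, discrete, and consistently structured across cases.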

Stage 4: Method of the analysis

[84]     In each case law verdict, we examine and analyze each sentence and categorize it within the fair use doctrine analysis (i.e. marking which factor each sentence relates to and determining whether it supports the claims of the plaintiff or the defendant). Some sentences are deemed irrelevant to either side and are categorized as dicta or as support for the judge’s ruling.

[85]     Each sentence is then electronically tagged with information such as whether it favored the plaintiff or the defendant on each factor. After reviewing the checklist with the Watson team, we concluded that in order to teach Watson to understand which sentences favor or oppose each factor (Purpose, Nature, Amount, and Effect) without going into the details of each sub-factor, approximately five hundred analyzed and tagged verdicts are needed. This produced approximately ten thousand sentences as a learning set for Watson.

[86]     Through examining various fair use cases, most of which are concentrated in the Second and Ninth Circuits (most of the relevant IP claims are filed in these courts, which encompass New York and California, the centers of literature and film, respectively), we noted that each case has roughly twenty to twenty-five sentences relating to the fair use doctrine. Following the examination, we analyzed all relevant cases, marking each relevant sentence that discussed the fair use doctrine. This marking included determinations for each sentence in the following categories:

  • Data: The minimal number of words needed to classify the sentence under the Factor label or the Side label, as described below.
  • Factor: Purpose / Nature / Amount / Effect / Ratio / Dicta.[256]
  • Side: Plaintiff / Defendant / Neutral.

For example, Figure 1.
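The per-sentence tag record from the categories above can be sketched as a small data structure. The class and field names below are ours, not Watson’s; only the label vocabularies (Factor and Side) come from the scheme itself.

```python
from dataclasses import dataclass

FACTORS = {"Purpose", "Nature", "Amount", "Effect", "Ratio", "Dicta"}
SIDES = {"Plaintiff", "Defendant", "Neutral"}

@dataclass
class TaggedSentence:
    data: str    # minimal span of words that justifies the labels
    factor: str  # one of FACTORS
    side: str    # one of SIDES

    def __post_init__(self):
        # Reject labels outside the fixed vocabularies above.
        if self.factor not in FACTORS:
            raise ValueError(f"unknown factor: {self.factor}")
        if self.side not in SIDES:
            raise ValueError(f"unknown side: {self.side}")

tag = TaggedSentence(
    data="the use was commercial in nature",
    factor="Purpose",
    side="Plaintiff",
)
print(tag.factor, tag.side)
# → Purpose Plaintiff
```

At roughly five hundred verdicts with twenty to twenty-five relevant sentences each, records of this shape account for the approximately ten thousand training sentences mentioned above.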


[87]     We are currently conducting a pattern analysis via Watson’s AI algorithms in order to identify patterns in the rationale of judges based on the given data. After incorporating this vast data set, Watson will be able to suggest, for a hypothetical case, exactly which claims and arguments are best, depending on whether we argue for the plaintiff or the defendant.

[88]     This entire project is complex and will take substantial time to complete. Nevertheless, with the planning phase complete and complications accounted for, the next step is to implement the technology.

IX. Conclusion and Recommendations for Courses of Action

[89]     The fruits of AI research are often attributed to other fields, as new revelations rapidly turn into mundane computer science inventions. However, we must remember that there is much more to explore and reveal within the yet unknown realms of AI.

[90]     In this paper we reviewed what defines AI and how it came about and evolved. We covered recent developments in AI relevant to the field of law and how they are leading to changes such as automated case analysis, increased efficiency in judicial tasks, and the replacement or minimization of human intervention in dispute resolution.

[91]     It is gradually becoming more conceivable that AI will change the world of law and the legal profession in the near future. We are ready for it in two ways: via the market and via technology. First, market failure has resulted in a judicial system overload. Second, funding for legal tech start-ups has grown from 7 million USD in 2009 to a whopping 450 million USD in 2013.[257] Market failures and technological achievements will work together to pave the way for a new version of the legal profession.

[1] Lost in Space (1965–1968) Quotes, IMDB, archived at (last visited Sept. 16, 2016) (quoting Robot: “Danger, Will Robinson! Danger!”).

The authors would like to thank the Zvi Meitar Family for their continued support of all of our research endeavors. In addition, the authors would like to thank the researchers at IBM Watson for their help and support throughout this project. Finally, the authors would like to especially thank Inbar Carmel for her incredible management of the Zvi Meitar Institute.

*Daniel Ben-Ari is a fourth-year student in the joint program in Law and Business Administration at Radzyner Law School and Arison Business School at Interdisciplinary Center Herzliya (IDC). Daniel served as an Operations Sergeant in the operations division in the Israel Defense Forces. At the IDC, Daniel participated in the Law Clinic for Class Actions and the Certificate Program in European Studies, and he is also a member of the Israeli-German Lawyers Association (IDJV). Additionally, Daniel is a member of the elite program of KPMG Accounting Firm for excellent students. He also volunteers with Nova Project, which provides business consulting services to NGOs. Currently, Daniel is the coordinator of the Zvi Meitar Emerging Technologies Program and working as a teaching assistant of the course Accounting Theory.

**Yael Frish is a BA graduate of the Honors Track at Lauder School of Government, Diplomacy and Strategy at IDC Herzliya.  In the IDF, Yael served as an intelligence officer in an elite intelligence unit, ranked Lieutenant. Yael graduated from the “Zvi Meitar Emerging Technologies” Program and is an alumna of the ProWoman organization. In summer 2014, Yael participated in an Israeli-Palestinian delegation to Germany, and in summer 2015, she represented Israel at the American Institution for Political and Economic Solutions’ Summer Program at Charles University, Prague. In the last two years, Yael has gained professional experience as an analyst, consultant and business developer in consulting and business intelligence companies.

***Adam Lazovski is a Managing Partner at Quedma Innovation Ltd.  Adam is also the founder and Program Manager of the Excel Ventures Program part of Birthright’s Leadership Branch (Excel).  Adam has also worked as a strategy and business development consultant in Robus, Israel’s largest and leading legal marketing and consulting firm.  Adam holds a B.A in Psychology and an LL.B, both from IDC Herzliya. During his studies Adam was part of the Zvi Meitar Emerging Technologies Program.  In addition, Adam was also part of the Rabin Leadership Program, where he initiated a social venture and studied the science behind leaders and entrepreneurs.  Adam served as a first sergeant in the Demolition and Sabotage special unit within the Paratroopers Brigade in the Israel Defense Forces. He maintains an active reserve status.

****Uriel Eldan is a graduate of the joint program in Law (LL.B) and Business Administration (B.A.) at Radzyner Law School and Arison Business School at IDC Herzliya. Uriel served in the elite unit “8200” and in the Research Department of the Army Intelligence in the Israel Defense Forces.  Uriel co-founded the Capital Markets Investment Club at IDC and was a member of the Zvi Meitar Emerging Technologies honors program.  Uriel has worked as a teaching assistant in numerous courses at IDC, and now also in Tel Aviv University.  After graduation, Uriel started his legal internship at one of Israel’s top law firms Herzog, Fox & Neeman in the Technology & Regulation department.

*****Dov Greenbaum is Director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, Interdisciplinary Center, Herzliya, Israel (IDC). Dov is also an Assistant Professor (adj) in the Department of Molecular Biophysics and Biochemistry at Yale University and a practicing intellectual property attorney.  Dov has degrees and postdoctoral fellowships from Yale, UC Berkeley, Stanford, and  Eidgenössische Technische Hochschule Zürich (ETH Zürich).

[2] Samuel Gibbs, Elon Musk: Artificial Intelligence is Our Biggest Existential Threat, The Guardian (Oct. 27 2014, 6:26),, archived at

[3] Kris Hammond, What is Artificial Intelligence?, Computerworld (Apr. 10, 2015, 4:05 AM),, archived at

[4] See id.; see, e.g., Stuart Jonathan Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 18 (3d ed. 2010) (discussing important aspects of A.I.).

[5] See John McCarthy, What Is Artificial Intelligence? 2–3 (Nov. 12, 2007) (unpublished manuscript) (on file with Stanford University),, archived at

[6] See Ido Roll & Ruth Wylie, Evolution and Revolution in Artificial Intelligence in Education, 26 Int’l J. Artificial Intelligence in Educ. 582, 583 (2016); see Monika Hengstler, Ellen Enkel & Selina Duelli, Applied Artificial Intelligence and Trust—The Case of Autonomous Vehicles and Medical Assistance Devices, 105 Technological Forecasting & Social Change 105, 114 (2016).

[7] See Karamjit S. Gill, Artificial Super Intelligence: Beyond Rhetoric, 31 AI & SOCIETY 137, 137 (2016).

[8] See Avneet Pannu, Artificial Intelligence and its Application in Different Areas, 4 Int’l J. Engineering & Innovative Tech. (IJEIT) 79, 79, 84 (2015).

[9] See id. at 5.

[10] See id. at 3.

[11] See id. at 5.

[12] Katie Hafner, Still a Long Way from Checkmate, N.Y. Times, Dec. 28, 2000,, archived at

[13] See Russell & Norvig, supra note 4, at 1020.

[14] See id.

[15] See id.; see John Frank Weaver, Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws 3 (2014) [hereinafter Robots Are People Too].

[16] See Russell & Norvig, supra note 4, at 1020; see Robots Are People Too, supra note 15, at 3.

[17] See Russell & Norvig, supra note 4, at 1026; see Robots Are People Too, supra note 15, at 3.

[18] E.W. Dijkstra, The Threats to Computing Science (EWD898), E.W. Dijkstra Archive, Center for American History, University of Texas at Austin, archived at

[19] Russell & Norvig, supra note 4, at 1026.

[20] McCarthy, supra note 5, at 10-11.

[21] See Richard Thomason, Logic and Artificial Intelligence, Stanford Encyclopedia of Philosophy, archived at (last updated Oct. 30, 2013); see Raymond Reiter, Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems 133 (2001).

[22] See David Senior, Narrow AI: Automating The Future of Information Retrieval, TechCrunch, Jan. 31, 2015,, archived at

[23] See generally Feng-hsiung Hsu, IBM’s Deep Blue Chess Grandmaster Chips, 19 IEEE Micro 70, 70 (1999) (describing IBM’s Deep Blue super computer and discussing the main source of its computation power).

[24] See generally Aviva Rutkin, Anything You Can Do . . ., 229 New Scientist 20, 20 (2016) (discussing how artificial intelligence has developed and advanced).

[25] Hafner, supra note 12.

[26] Id.

[27] Id.

[28] See generally Rob High, The Era of Cognitive Systems: An Inside Look at IBM Watson and How it Works (IBM Corp. ed., 2012) (providing a detailed analysis of how Watson works).

[29] See Modria,, archived at (last visited Nov. 1, 2016).

[30] See Hafner, supra note 12.

[31] See id.

[32] See id.

[33] See, e.g., Robert Fisher, Representations of Artificial Intelligence in Cinema, University of Edinburgh–School of Informatics, archived at (last updated Apr. 16, 2015); see Kathleen Richardson, Rebranding the Robot, 4 Engineering & Technology 42 (2009); see Robert B. Fisher, AI and Cinema – Does Artificial Insanity Rule?, in Twelfth Irish Conf. on Artificial Intelligence and Cognitive Science (2001); see Elinor Dixon, Constructing the Identity of AI: A Discussion of the AI Debate and its Shaping by Science Fiction (May 28, 2015) (unpublished Bachelor thesis, Leiden University) (on file with the Leiden University Repository), archived at

[34] 2001: A Space Odyssey, (Stanley Kubrick Productions 1968).

[35] The Matrix, (Village Roadshow Pictures, Groucho II Film Partnership & Silver Pictures 1999).

[36] The Terminator (Cinema ’84 & Pacific Western 1984).

[37] See Jean-Baptiste Jeangène Vilmer, Terminator Ethics: Should We Ban “Killer Robots”?, Ethics & Int’l Affairs (Mar. 23, 2015), archived at

[38] A.I. Artificial Intelligence (Amblin Entertainment & Stanley Kubrick Productions 2001).

[39] Star Wars: Episode IV – A New Hope (Lucasfilm Ltd. 1977).

[40] Spaceballs (Brooksfilms 1987).

[41] See Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right from Wrong 7–8 (2009).

[42] John Searle, The Chinese Room Argument, 4 Scholarpedia 3100 (2009),, archived at

[43] David Adrian Sanders & Giles Eric Tewkesbury, It Is Artificial Idiocy That Is Alarming: Not Artificial Intelligence, in Proc. of the 11th Int’l Conf. on Web Info. Sys. and Technologies 345, 347 (2015).

[44] See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014).

[45] See id. at 26, 155.

[46] See id. at 29.

[47] See id. at 140.

[48] See Singularity Hypotheses: A Scientific and Philosophical Assessment 1–4 (Amnon H. Eden et al. eds., 2012) [hereinafter Singularity Hypotheses].

[49] See id. at 28–29. 

[50] See Stanislaw Ulam, John Von Neumann, 64 Bull. of the Am. Mathematical Soc’y 1, 5 (May 1958),, archived at

[51] Guia Marie Del Prado, Stephen Hawking Warns of an ‘Intelligence Explosion,’ Bus. Insider (Oct. 9, 2015, 2:17 PM),, archived at

[52] Singularity Hypotheses, supra note 48, at 3.

[53] See Hazem Ahmed & Janice Glasgow, Swarm Intelligence: Concepts, Models and Applications: Technical Report 2012-585, Queen’s Univ. School of Computing 2 (2012),, archived at

[54] See Eric Bonabeau, Marco Dorigo & Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems 19 (1999).

[55] See Vilmer, supra note 37.

[56] See John Searle, Minds, Brains, and Computers, 3 The Behavioral & Brain Sciences 349, 353 (1980),, archived at (stating that the equation “mind is to brain as program is to hardware” is flawed).

[57] See Russell & Norvig, supra note 4, at 36–37.

[58] See Michael R. LaChat, Artificial Intelligence and Ethics: An Exercise in the Moral Imagination, 7 AI Mag. 70, 70–71 (1986),, archived at (“[T]he possibility of constructing a personal AI raises many ethical and religious questions that have been dealt with seriously only by imaginative works of fiction; they have largely been ignored by technical experts and by philosophical and theological ethicists”). 

[59] Russell & Norvig, supra note 4, at 1020.

[60] See Nick Bostrom, Robots & Rights: Will Artificial Intelligence Change the Meaning of Human Rights? 5, 5 (Matt James & Kyle Scott eds., 2008).

[61] See Russell & Norvig, supra note 4, at 331.

[62] See István S. N. Berkeley, What is Artificial Intelligence?, Univ. of La. at Lafayette (1997),, archived at

[63] See Alan M. Turing, Computing Machinery and Intelligence (1950).

[64] See Berkeley, supra note 62.

[65] See Daniel C. Dennett, Can Machines Think?, in How We Know (Michael Shafto ed., 1985), archived at (last visited Sept. 22, 2016).

[66] See id.

[67] See id.  

[68] See id.

[69] See id.

[70] See Jo Best, IBM Watson: The Inside Story of How the Jeopardy-Winning Supercomputer was Born and What it Wants to do Next, TechRepublic (Sept. 9, 2013, 8:45 AM), archived at

[71] See Stuart Russell, Introduction to AI: A Modern Approach, Univ. of Cal., Berkeley, archived at (last visited Oct. 31, 2016).

[72] See Gary Fostel, The Turing Test is For the Birds, 4 SIGART Bull. 7, 8 (1993).

[73] Jose Hernandez-Orallo, Beyond the Turing Test, 9 J. of Logic, Language & Info. 447, 458 (2000).

[74] See José Hernández-Orallo & David L. Dowe, Measuring Universal Intelligence: Towards an Anytime Intelligence Test, 174 Artificial Intelligence 1508, 1509 (2010),, archived at

[75] See Hector J. Levesque, Ernest Davis, & Leora Morgenstern, The Winograd Schema Challenge, Proc. of the Thirteenth Int’l Conf. on Principles of Knowledge Representation & Reasoning 552, 554, 557–58 (2012),, archived at

[76] See generally Allen Newell & Herbert Simon, The Logic Theory Machine—A Complex Information Processing System, 2 IRE Transactions on Info. Theory 61 (1956), archived at (detailing logic theorist system).

[77] See generally Mark O. Riedl, The Lovelace 2.0 Test of Artificial Creativity and Intelligence, arXiv:1410.6142 (2014), archived at (detailing the Lovelace 2.0 test).

[78] See id.

[79] See Kevin Warwick & Huma Shah, Human Misidentification in Turing Tests, 27 J. Exp. & Theoretical Artificial Intelligence 123, 124-25 (2014), archived at

[80] See Kevin Warwick & Huma Shah, Can Machines Think? A Report on Turing Test Experiments at the Royal Society, 27 J. Exp. & Theoretical Artificial Intelligence 1, 17 (2015), archived at

[81] See generally John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955, 27 AI Mag. 12, 13-14 (2006), archived at (reproducing part of the Dartmouth summer research project and summarizing its proposal); see also Berkeley, supra note 62.

[82] See Berkeley, supra note 62.

[83] See Interview by Jeffrey Mishlove with John McCarthy, Ph.D., Thinking Allowed, Conversations on the Leading Edge of Knowledge and Discovery: Artificial Intelligence (1989),, archived at

[84] John McCarthy, Ascribing Mental Qualities to Machines, Stan. Artificial Intelligence Lab. 1, 2 (1979),, archived at

[85] See Marvin Minsky, ‘Father of Artificial Intelligence,’ Dies at 88, MIT News, Jan. 25, 2016,, archived at

[86] See Will Knight, What Marvin Minsky Still Means for AI, MIT Technology Rev., Jan. 26, 2016,, archived at

[87] See id. 

[88] See Jan Mycielski, Book Review, Perceptrons: An Introduction to Computational Geometry, 78 Bull. of the Am. Mathematical Soc’y 12, 12 (1972), archived at (reviewing Perceptrons by Minsky and Papert); see also Jordan B. Pollack, Book Review, No Harm Intended, 33 J. Mathematical Psychol. 358, 358 (1988), archived at (reviewing the expanded edition of Perceptrons by Minsky and Papert).

[89] See Knight, supra note 86.

[90] See id. 

[91] See id.

[92] See Allen Newell & Herbert A. Simon, Computer Science as Empirical Inquiry: Symbols and Search, 19 Comm. ACM 113, 116 (1976).

[93] Herbert A. Simon, The Sciences of the Artificial 23 (3d ed. 1996).

[94] See id. at 22.

[95] See Nils Nilsson, The Physical Symbol System Hypothesis: Status and Prospects, in 50 Years of AI 9, 11 (Max Lungarella, Fumiya Iida, Josh Bongard & Rolf Pfeifer eds., 2007).

[96] See David S. Touretzky & Dean A. Pomerleau, Reconstructing Physical Symbol Systems, 18 Cognitive Science 345, 349 (1994).

[97] See generally Alexander Singer, Implementations of Artificial Neural Networks on the Connection Machine, 14 Parallel Computing 305 (1990) (discussing the practical implementation of artificial neural networks on the Connection Machine and the natural match between the two concepts).

[98] See Ernest Davis, Representations of Commonsense Knowledge 2 (Ronald J. Brachman ed., 1990); see John McCarthy, Applications of Circumscription to Formalizing Common-Sense Knowledge, Dep’t of Computer Science, Stan. Univ. (1986), archived at

[99] See David Lynton Poole, Alan K. Mackworth & Randy Goebel, Computational Intelligence: A Logical Approach 1, 18 (1998).

[100] See Nilsson, supra note 95, at 11; see Bo Göranzon, Artificial Intelligence, Culture and Language: On Education and Work 220 (Magnus Florin ed., 1990).

[101] See Berkeley, supra note 62.

[102] See John McCarthy, The Well-Designed Child, 172 Artificial Intelligence 2003, 2011 (2008).

[103] See id. 

[104] See Brenden M. Lake et al., Building Machines That Learn and Think Like People, Center for Brains, Minds, and Machines Memo No. 046, at 7 (2016), archived at

[105] See McCarthy, supra note 5.

[106] See Ernest Davis & Gary Marcus, Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence, 58 Comm. ACM 92, 93 (2015), archived at

[107] See Vincent C. Müller & Nick Bostrom, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, in Fundamental Issues of Artificial Intelligence 553, 553 (2016) (“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter”), archived at

[108] See Davis & Marcus, supra note 106, at 99-102.

[109] See infra text accompanying notes 240-42.

[110] See Education in Communities, IBM Corp. Resp. Rep. (2014),, archived at

[111] Other People’s Money (Warner Bros. 1991).

[112] See generally Mark Bergen, Another AI Startup Wants to Replace Hedge Funds, Recode (Aug. 7, 2016, 11:15 AM), archived at (explaining how a company aiming to integrate artificial intelligence in stock market trading is a part of a larger wave of start-ups attempting to integrate AI learning in financial markets).

[113] See generally Jacob Brogan, What’s the Deal With Artificial Intelligence Killing Humans?, Slate (Apr. 1, 2016, 7:03 AM), archived at (explaining the differing views on the danger of AI in a variety of fields); see also Heather M. Roff, Killer Robots on the Battlefield, Slate (Apr. 7, 2016, 11:45 AM), archived at (discussing the fears and benefits that accompany the prospect of autonomous weapons that engage targets entirely independent of human operation).

[114] See John O. McGinnis & Russell G. Pearce, The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services, 82 Fordham L. Rev. 3041, 3055 (2014) (discussing the possible disruptions that the legal profession may face as a result of integration of A.I. into the legal profession).

[115] See, e.g., Overloaded Courts, Not Enough Judges: The Impact on Real People, People for the Am. Way, archived at (last visited Oct. 31, 2016) (explaining the current strain on the American judiciary).

[116] See Guilty as Charged, The Economist (Feb 2, 2013, 4:02 PM),, archived at (“America has more lawyers per person of its population than any of 29 countries studied (except Greece)”).

[117] See Maria L. Marcus, Judicial Overload: The Reasons and the Remedies, 28 Buff. L. Rev. 111, 112-15, 120 (1978).

[118] See id. at 111. 

[119] See How Big is the US Legal Services Market?, Thomson Reuters (2015), archived at [hereinafter U.S. Legal Services Market].

[120] Id.

[121] See William D. Henderson, From Big Law to Lean Law, 38 Int’l Rev. of L. & Econ. 1, 3-5, 10, 11 (2014). But cf. Russell G. Pearce & Eli Wald, The Relational Infrastructure of Law Firm Culture and Regulation: The Exaggerated Death of Big Law, 42 Hofstra L. Rev. 109, 110 (2013) (arguing that Big Law is not dying and presenting contradicting evidence).

[122] See U.S. Legal Services Market, supra note 119.

[123] See Artificial Intelligence Global Quarterly Financing History 2010-2015, CB Insights (2016),, archived at

[124] See Christine Magee, The Jury is Out on Legal Startups, TechCrunch (Aug. 5, 2014), archived at

[125] See Raymond H. Brescia et al., Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice, 78 Albany L. Rev. 553, 553-55 (2014); see generally Joan C. Williams, Aaron Platt & Jessica Lee, Disruptive Innovation: New Models of Legal Practice, at 2-3 (2015), archived at (explaining the impact of new business models and technology on legal access).

[126] See Julius Stone, Legal System and Lawyers’ Reasonings 37-41 (1964).

[127] But see Jonathan Smithers, President of the Law Society, Speech at the Union Internationale des Avocats (UIA) Conference: Lawyers Replaced by Robots: Will Artificial Intelligence Replace the Judgement and Independence of Lawyers? (Oct. 30, 2015), archived at

[128] See Ian Lopez, Can AI Replace Lawyers?, Law.com (Apr. 8, 2016), archived at

[129] See Machine Learning: What it is and Why it Matters, SAS Institute, archived at (last visited Sept. 26, 2016).

[130] See id.

[131] See id.

[132] See Prakash M. Nadkarni, Lucila Ohno-Machado & Wendy W. Chapman, Natural Language Processing: An Introduction, 18 J. Am. Med. Informatics Ass’n. 544, 544 (2011),, archived at

[133] See id.

[134] See id. at 545-46.

[135] See