Richmond Journal of Law and Technology

The first exclusively online law review.


Blog: Virtual Adultery: The World of Cyber Cheating

By: Micala MacRae, Associate Notes & Comments Editor

A virtual adultery epidemic has swept the nation. Online chat rooms, Facebook, Twitter, and other forms of social media have enabled individuals to make virtual connections that some argue are grounds for divorce.  In 1996, a New Jersey man filed for divorce based on adultery after discovering that his wife had been carrying on a “virtual” affair with a man in North Carolina through online chat rooms.[1]  Although the wife never met her cyber-paramour in person, the relationship began to take over their lives, and she began to neglect her job, family, and marriage.[2]  In the United States, courts have refused to hold that virtual relationships reach the level of intimacy necessary for adultery, which is defined as intimate sexual activity outside of marriage.  Nevertheless, virtual infidelity has become an increasingly important issue in family law.

Virtual infidelity can eventually lead a party to act.  If a spouse travels to meet an online partner in person, courts may infer adultery without much difficulty.  Courts have also taken into consideration parents’ excessive time spent on interactive gaming websites when determining child custody.[3]  When parents fail to provide adequate support and care for their children because of excessive time online, courts infer that they have relinquished their parental responsibilities.[4]  Courts may eventually treat virtual infidelity as a renouncement of parental duties in divorce proceedings, awarding full custody of the children to the spouse who did not participate in the virtual infidelity.

Though courts have held that virtual infidelity does not satisfy the grounds for divorce, it may satisfy other requirements, such as neglect or abandonment.[5]  A spouse carrying on a virtual relationship abandons the marital relationship and the family when he or she spends great periods of time pursuing the virtual relationship.  Many courts are willing to accept that sexual activity not proven to rise to the level of intercourse can still constitute legal adultery.[6]  Some courts even disapprove of emotional affairs, which are nearly analogous to virtual adultery.

Although virtual infidelity may never involve physical contact, courts may rule that virtual relationships leading to the degradation of the marital relationship are grounds for divorce.  Online infidelity may qualify as adultery when the conduct is a substantial factor in the breakdown of the marriage.  Courts may expand the definition of adultery to include virtual infidelity as a factor in determining whether a divorce should be granted.  The law lags behind the pace of technology and the evolution of views on marriage and infidelity.  It may be time to expand the law of adultery to include virtual infidelity so that relief can be afforded to its victims.


[1] Douglas E. Abrams et al., Contemporary Family Law (3d ed. 2012).

[2] Id.

[3] Andrew Feldstein, Is Cybersex Grounds for Divorce?, (last updated Mar. 10, 2014).

[4] Id.

[5] Edward Nelson, Virtual Infidelity: A Ground for Divorce, (Sept. 11, 2010, 4:18 PM).

[6] Id.

Blog: e-Vino Veritas: Archaic Wine Regulation in the Digital Age

By: Barry Gabay, Notes & Comments Editor

E-Commerce has completely transformed our understanding of book availability. A book that may have eluded our grasp for months or even years can now be readily attained in a matter of seconds. We no longer have travel costs associated with visiting a book retailer, and we no longer experience the disappointment of the retailer being out of stock. There is no more stress or hassle in book shopping in the world of e-Commerce, as the world’s largest library is constantly at our fingertips. Now imagine the same phenomenon with wine.

In November 2012, Amazon, the world’s largest online retailer, launched a wine marketplace with over 1,000 domestic wines available.[1] Today, the portal offers more than 5,000 wines from some 700 merchants, 80 percent of which are from domestic brands.[2] The website facilitates “direct-to-consumer” transactions between wineries and consumers, whereby consumers are delivered bottles and cases of wine packaged and shipped directly from the winery.[3]

The marketplace’s potential is self-evident, as Amazon netted over $61 billion in sales in 2012, up more than 27 percent from the previous year.[4] On top of that, the United States is the world’s largest wine consumer; we drank 856 million gallons of wine in 2012, roughly 2.73 gallons per citizen, and spent nearly $35 billion on wine.[5] Further, of the roughly 7,500 wineries in the United States, the vast majority are boutique wineries that do not market out of state.[6] Thus, with Amazon’s direct-to-consumer shipping, these small wineries will be able to sell to customers across the nation, and consumers across the country will be able to purchase premium wines with the click of a button from any winery that registers…in an ideal world.

Due to highly regulated interstate distribution laws, the Amazon marketplace at present serves customers in only 22 states and the District of Columbia.[7] The major impediment is the three-tier system of alcohol distribution, under which wine, distilled spirits, and beer producers (tier one), wholesalers (tier two), and retailers (tier three) are completely separated, and alcohol must pass through all three tiers before it reaches the consumer.[8] The system was adopted by many states after the passage of the Twenty-First Amendment, which effectively gave states absolute authority to control alcohol within their borders. It originally served to prevent the reemergence of Prohibition-era criminal syndicates, run by the likes of George Remus and Al Capone, who used vertical integration tactics to control the liquor industry. The system is now used in nearly every state in hopes of “promoting temperance, ensuring orderly market conditions and raising revenue.”[9]

The three-tier system has, remarkably, survived recent constitutional challenges under the Commerce Clause, notably in the 2005 decision of Granholm v. Heald.[10] But while countless articles and several courts have found the three-tier system, by its very nature, to discriminate against out-of-state producers and consumers and thus to violate the dormant Commerce Clause, the current rise in e-Commerce offers yet another justification for loosening state regulations on alcohol distribution.[11] Wineries, like nearly every other industry, have identified the Internet as a gateway for national distribution and expansion. E-Commerce provides an outlet for small wineries to reach consumers they would otherwise never have access to; the growing popularity of boutique wineries makes this outlet even more valuable.

Today, 44 states and the District of Columbia allow the direct shipment of wine to the consumer in some capacity,[12] though more often than not there are stiff regulatory requirements the winery must comply with.[13] Direct-to-consumer shipments were worth more than $1.46 billion in 2012, an eight percent increase over the year prior.[14] Yet, although we are the world’s largest wine consumer, we are well behind Europeans (eight to ten percent of their wine purchased online) and the Chinese (27 percent) in terms of direct-to-consumer wine sales.[15] A decade-old Federal Trade Commission report found that the single biggest factor inhibiting the rise of direct-to-consumer wine sales was the three-tier system.[16] When that report was filed, total American e-Commerce sales were around $58 billion. That number reached $259 billion last year.[17] Thus, the extent to which the three-tier system inhibits wine distribution is self-evident and simply staggering.

With the emergence of Amazon’s wine marketplace, the potential benefits of direct-to-consumer shipment are once again being discussed in state legislatures.[18] Greater market competition lowers consumer costs, as lower online wine prices would induce local wineries to take competitive action. Opening up the market to allow wineries to sell directly to retailers and consumers will benefit boutique wineries and consumers alike: boutique wineries will be able to independently expand their distribution out of state, and consumers will have a lifetime of different wines from which to choose, without increased wholesale markup. In the current shift toward a universal marketplace, our wine cellar could be infinite.



[1] Mark Brohan, Amazon Sales Top $61 Billion in 2012, Internet Retailer (Jan. 29, 2013); Andrea Chang, Amazon Launches Online Wine Marketplace, L.A. Times (Nov. 9, 2012).

[2] Lauren Indvik, Amazon Begins Shipping Wine to New York, Michigan, Mashable (Oct. 17, 2013),

[3] Chang, supra note 1.

[4] Brohan, supra note 1.

[5] Table 6.1: World Wine Consumption, 2008-2011, % Change 2011/2008, and % of World Consumption-2011, The Wine Institute (2011); 2012 Wine Sales in U.S. Reach New Record: Record California Wine Crop to Meet Surging Demand, The Wine Institute (2013).

[6] North American Winery Total Passes 8,000, Wines & Vines (2013); Devin McIntyre, Is Amazon Closer to Solving the Wine Shipping Puzzle?, The Wash. Post (2013).

[7] Amazon Wine States, (last visited Feb. 1, 2014).

[8] Amy Murphy, Discarding the North Dakota Dictum, 110 Mich. L. Rev. 819, 824-25 (2012).

[9] Wine Country Gift v. Steen, 612 F.3d 809, 814 (5th Cir. 2010) (citing North Dakota v. United States, 495 U.S. 423, 432 (1990) (plurality opinion) (internal citations omitted)).

[10] Granholm v. Heald, 544 U.S. 460, 463 (2005).

[11] See, e.g., Murphy, supra note 8; Desireé C. Slaybaugh, A Twisted Vine: The Aftermath of Granholm v. Heald, 17 Tex. Wesleyan L. Rev. 265 (2011); Costco Wholesale Corp. v. Hoen, 407 F. Supp. 2d 1247 (W.D. Wash. 2005); Cherry Hill Vineyards LLC v. Lilly, 553 F.3d 423 (6th Cir. 2008); Family Winemakers of California v. Jenkins, 592 F.3d 1 (1st Cir. 2010).

[12] State Shipping Laws for Wineries (Jan. 24, 2014).

[13] See, e.g., Ala. Code § 28-3-5 (1975) (“Any retail dealer of alcoholic beverages … purchasing or receiving such commodities from without the state … shall, within 12 hours of receipt of such alcoholic beverages, mail … a true duplicate invoice of all such purchases or receipts to the board at Montgomery, Alabama, said invoice carrying the name of the person or firm from whom or through whom such purchases or shipments of the alcoholic beverages were received and showing kinds and quantities.”); Ind. Code § 7.1-3-26-9 (2011) (“A direct wine seller’s permit entitles a seller to sell and ship wine to a consumer” provided that the customer purchases the wine “in an initial face-to-face transaction.”).

[14] Jeff Carroll, Pawel Smolarkiewicz & Lynne Skinner, Direct to Consumer Wine Shipping Report 2013, Wines & Vines, 1-2,

[15] Rebecca Gibb, Internet Wine Sales Top $5 Billion, Wine-Searcher (June 18, 2013),

[16] Federal Trade Commission, supra note 7, at 3 (Note: The country’s two largest wine wholesalers, Southern Wine & Spirits and Republic National Distribution Company, generate revenues upwards of $13 billion, and the Wine & Spirit Wholesalers of America, the industry’s largest lobbying effort, spent $9.3 million in political action committee funds in the 2008 presidential election.).

[17] Allison Enright, U.S. e-Commerce Sales Could Top $434 billion in 2017, Internet Retailer (Apr. 25, 2013, 4:33 PM),; U.S. Census Bureau, Quarterly Retail e-Commerce Sales: 3rd Quarter 2013 (2013),

[18] See, e.g., Steve Annear, Changes to Wine Direct Shipping Laws Are Fermenting on Beacon Hill, Boston Magazine (Nov. 11, 2013).

Blog: Football Concussion Suits: Reasonable or Hard Headed?

By: Bradford Schulz, Associate Staff

Juries across the nation are being asked to determine reasonable standards in football concussion helmet suits.[1] In a landmark case this past summer, the NFL settled a concussion-related class action brought by thousands of former professional league football players.[2]  The total NFL payout is $870 million, with $675 million awarded for compensatory claims, $75 million for testing, $10 million for medical research, and $112 million for lawyers’ fees.[3]  The final settlement has three payout categories: (1) a young retiree with amyotrophic lateral sclerosis (Lou Gehrig’s disease) will be awarded $5 million; (2) a 50-year-old retiree with Alzheimer’s disease could receive $1.6 million; and (3) an 80-year-old retiree with early dementia will be awarded $25,000.[4]  Just this month, a splinter group from the settlement launched and lost a bid for appellate intervention on the merits of the settlement.[5]  The goal of the Sean Morey Objectors was to establish a legal custom defining what football organizations know or should know about concussion safety.[6]  Juries in football concussion suits are quickly recognizing that the absence of a reasonable custom is not the only issue that needs addressing.

Before juries can tackle the appropriate legal custom in concussion-related tort actions, scientists need to figure out what a concussion is. Doctors struggle to establish parameters for diagnosing concussions because they are unsure what specifically causes them. “If you talk to any doctor out there, you’re going to get 14 different opinions on what causes a concussion . . . [w]e don’t know if it’s a big hit or if it’s a whole bunch of little hits.”[7]  It is known that helmets protect the player’s head and are able to absorb a hit’s energy; however, helmets do not protect the brain from the hit’s acceleration.[8]

Any hit will likely have a perpendicular component and an angular component. A perpendicular hit is aligned straight at the head, directed exactly at the brain’s center of gravity. Football helmets do a satisfactory job absorbing the energy from a perpendicular hit because the structure of the shell transfers the energy away from the impact; the helmet significantly reduces the force, i.e., acceleration, of the perpendicular hit felt by the brain. An angular hit, by contrast, is any hit not aimed straight at the brain’s center of gravity. The angled hit creates a rotational force around the brain’s center of gravity, causing the head to spin, twist, or rotate. The helmet provides little protection against this additional rotation because, after all, the player needs to turn his head to look around. Imagine wearing a helmet and having someone hit the crown with a hammer; the helmet may not break, but you will likely undergo whiplash. It is believed that this rotational acceleration is a major component of football concussions.[9]

There are efforts in the scientific community to analyze the forces generated by a football hit. Researchers at several universities have installed sensors within their schools’ helmets to measure the forces felt during hits. For instance, the InSite software measures violent movement and impact duration, then transmits this data to training staff on the sideline.[10]  Another program monitors players’ molecular changes throughout a season in order to identify possible blood-based molecular correlations with concussions.[11] Dr. Duma, a university researcher, has found that “routine” hits equate to 20-40 times the force of gravity and “violent collisions” equate to 120 times the force of gravity.[12] An imperfect comparison: astronauts train at 9 times the force of gravity, though for significantly different durations.

Several manufacturers, some of which were involved in the NFL settlement, are beginning to offer new helmet designs. One manufacturer is adding bullet-stopping Kevlar inside its helmets; another is changing its external design to incorporate rubber-padded foam; while others have added sensors that update training staff on possible concussion-causing hits.[13]

So how is this affecting tort law? Other than the typical safety-advertising suit, the lack of information on football concussions is affecting the custom standards juries use in determining reasonable safety precautions and designs. The first effect is that players, especially high school youth, believe that helmets protect them from concussions. As such, juries are willing to protect these youth by compensating plaintiffs for inadequate helmet safety warnings.[14] The second effect is that juries are struggling to establish a test for negligent design. It is clear that juries are unsatisfied with the common practice in helmet manufacturing,[15] but until the scientific research catches up, juries are unable to hold football helmet design to a satisfactory reasonable standard. And after all, unpredictable juries make for nervous litigators. Until science catches up and litigators have a clear custom for helmet safety negligence, we may see more settlements like the NFL case this past summer.

[1] Hard Knocks: Xenith’s Helmet Technology Stands Tall Amidst Football’s Concussion Crisis, FORBES, Sept. 2014 (available at

[2] Associated Press, Federal Judge Approves NFL Concussion Settlement, July 7, 2014 (last updated July 9, 2014) (available at

[3] Id.

[4] In re Nat’l Football League Players’ Concussion Injury Litig., 2:12-MD-02323-AB, 2014 WL 3054250 (E.D. Pa. July 7, 2014).

[5] Paul D. Anderson, Objectors Seek Potentially Damning Discovery, NFL CONCUSSION LITIGATION, Sept. 2014 (available at

[6] Id.

[7] Gary Mihoces, More Padding the Issue of Concussions and Better Helmets, USA TODAY SPORTS, Aug. 2013.

[8] Jim Avila and Serena Marshall, Riddell Unveils Overhauled New Football Helmet SpeedFlex, GOOD MORNING AMERICA, Aug 2014 (available at
[9] Id.

[10] Chris Fuhrmeister, New Riddell SpeedFlex Football Helmet Pits Technology vs. Concussions, SB NATION, Mar. 2014 (available at

[11] Hackney Publications, Riddell and TGen Team up with Arizona State University’s Football Program to Further Genetic Research into Athlete Concussion Detection and Treatment, Concussion Policy & the Law, August 2014 (available at

[12] Gregg Easterbrook, Virginia Tech Helmet Research Crucial, July 2011 (available at

[13] Jim Avila and Serena Marshall, Riddell Unveils Overhauled New Football Helmet SpeedFlex, GOOD MORNING AMERICA, Aug. 2014 (available at; Gary Mihoces, More Padding the Issue of Concussions and Better Helmets, USA TODAY SPORTS, Aug. 2013.

[14] Hard Knocks: Xenith’s Helmet Technology Stands Tall Amidst Football’s Concussion Crisis, FORBES, Sept. 2014 (available at

[15] Id.

Blog: Transparency in Law Enforcement: The Trend Towards Officer Body Cameras

by Eileen Waters, Associate Staff


The concept of body-mounted cameras worn by police officers is not brand new; in fact, police departments across the United States, England, Brazil, and Australia have been implementing wearable camera systems since the early 2000s.[1] Recently in the U.S., public interest has put a brighter spotlight on wearable cameras since an incident in Ferguson, Missouri, where an unarmed teenager named Michael Brown was shot by a police officer.[2] Confusion as to what actually happened has led to debate and speculation about whether there would have been less “civil unrest” if the officer who shot Michael Brown had worn a body camera.[3] In an effort to appease those who believe police cameras are a panacea for this unrest, police officers in Ferguson began wearing cameras earlier this month, donated by two private companies.[4] Locally, in “Henrico County [Virginia,] police officers will begin wearing body-mounted cameras this fall.”[5] With the acceleration of this trend, it is important to begin analyzing the pros and cons of police officers wearing body cameras.

The benefits of wearable cameras are numerous: they have the “potential to change the dynamics of police-citizen encounters, to either exonerate or implicate officers in wrongdoing, or provide evidence of citizen misconduct.”[6] “Body-worn cameras can increase accountability” not only for police officers, but also for the citizens they interact with.[7] The city of Rialto, California rolled out a camera program in 2012, and has since reported a 60% reduction in use-of-force incidents and an 88% reduction in filed citizen complaints “when compared with the year prior to deployment.”[8] William Farrar, the police chief in Rialto, has spoken of cases where citizens have gone to their local police station to file a complaint “and the supervisor was able to retrieve and play on the spot the video of what transpired.”[9] Rialto is not the only city that has experienced a decrease in police-related issues and complaints since employing body cameras; many cities across the country are finding good results with such programs.[10]

Regardless of the benefits, there are also reasons to be wary of this new technology and to approach the use of cameras with caution. Arguably the most significant concern is that once a policy of camera-wearing is established by a law enforcement agency, “it will become increasingly difficult to have second thoughts or to scale back” such a program.[11] Many scholars have also strenuously noted the privacy concerns that will arise with more camera usage.[12] “It takes little imagination to see how such cameras could augment already ubiquitous CCTV and facial recognition systems, allowing police to retroactively track and monitor innocent passersby.”[13] Proponents of body cameras should ask themselves whether they are willing to give up much of their privacy for the program’s benefits. On top of these issues, cameras have a huge economic cost: “agencies that have deployed the cameras spent between $800 and $1,200 for each device.”[14] After the initial cost, it then becomes expensive to store the considerable amount of data created; the New Orleans Police Department will pay “an expected cost of $1.2 million over five years” for 350 body cameras.[15] Overall, there are appreciable costs to body camera programs that need to be weighed against the benefits when deciding whether a program should be implemented.

Currently, public interest seems to favor wearable cameras. This prompted Congressman Al Green to propose a federal bill last week that would require any “state or local law enforcement agency that receives Federal funds” to use those funds to purchase “body cameras for use by the law enforcement officers employed by that enforcement agency.”[16] On the state level, New Jersey Senator Donald Norcross announced that he is “drafting legislation that would require all police officers to wear body cameras while on patrol.”[17] Lawmakers, perhaps reacting to public opinion, are in the beginning stages of legislating mandatory use of police body cameras. Now is the time for engaged citizens to decide whether these programs should be implemented nationwide. As this post suggests, the issue is not black and white, and it should be discussed and critiqued before concrete legislation is enacted.


[1] Joshua Kopstein, Police Cameras are No Cure-all After Ferguson, Aljazeera America (Aug. 29, 2014, 6:00AM),

[2] Id.

[3] Justin T. Ready & Jacob T.N. Young, Three Myths About Police Body Cams, Slate (Sept. 2, 2014 12:54AM),

[4] William Cummings, Ferguson Police Begin Using Body Cameras, USA Today (Sept. 1, 2014 1:43AM),

[5] Ted Strong & Brandon Shulleeta, Henrico Police to Roll Out Body Cameras for Officers, Richmond Times-Dispatch (Sept. 14, 2014),

[6] Bryce Clayton Newell, Crossing Lenses: Policing’s New Visibility and the Role of “Smartphone Journalism” as a Form of Freedom-Preserving Reciprocal Surveillance, 14 U. Ill. L. Tech. & Pol’y 59, 82 (2014).

[7] Kevin Johnson, Police Body Cameras Offer Benefits, Require Training, USA Today (Sept. 12, 2014 6:21 PM),

[8] Id.

[9] Randall Stross, Wearing a Badge, and a Video Camera, The New York Times (Apr. 6, 2014),

[10] Id.

[11] Johnson, supra note 7.

[12] Kopstein, supra note 1.

[13] Id.

[14] Johnson, supra note 7.

[15] Id.

[16] Transparency in Policing Act of 2014, H.R. 5407, 113th Cong. (2014).

[17] New Jersey Senator Proposes Bill Requiring Mandatory Body Cams for Police, Police State Daily (Sept. 11, 2014),

Homer Simpson May Be Headed to Court…D’Oh!

by Megan Carboni, Associate Staff


            Earlier this August, patent-rights holder Alki David, owner of Hologram USA, filed suit against The Simpsons’ broadcaster, 20th Century Fox, for alleged patent infringement.[1] David asserts infringement of his acquired hologram technology, used to bring Homer Simpson to life at this year’s Comic-Con convention in San Diego.[2] Oddly enough, Homer Simpson is not the only celebrity in hot water over alleged unauthorized use of David’s patented technology. Michael Jackson’s estate and Pulse Evolution are also being sued for the unauthorized use of David’s hologram technology to bring Michael Jackson back to life at the Billboard Music Awards.[3] Adding more fuel to the fire is Pulse’s cross-complaint stating that David is “falsely claim[ing] credit for creating and developing the visual effects spectacle [of Jackson] in a nationally-televised interview on CNN, in press releases, and on his various websites […].”[4]

            So, where did this all begin? Stepping back in time to 1862, two magicians developed a stage trick for magic shows called “Pepper’s Ghost.”[5] “Pepper’s Ghost” was a lifelike illusion technique that has since been popularized in movie special effects, concerts, and amusement park rides.[6] Most recently, “Pepper’s Ghost” inspired the hologram technology behind Tupac Shakur’s resurrection at the 2012 Coachella Music Festival, the patent rights to which were acquired by David and Hologram USA in February 2013.[7] Unfortunately for the late Michael Jackson and the animated Homer Simpson, neither Pulse nor Fox obtained any licensing rights to use the same hologram technology before their holograms publicly debuted.[8] Thus enter the multimillion-dollar patent infringement suits brought by David. David’s attorneys in the Jackson lawsuit state that Pulse, and now Fox, “have created significant confusion in the marketplace [and] diluted the value of the Hologram USA brand.”[9]

            But were the Simpson and Jackson holograms made with the same technology? Patent experts in this field will have to weigh in to determine whether any of David’s claims of stolen holograms carry any weight. The accused parties have publicly disavowed David’s allegations, with Fox saying “[t]his filing is totally without merit […] except to say […] Mr. David has demonstrated his insatiable need to stay relevant.”[10] Pulse adds in its own suit against the Hologram USA owner that David is merely “divert[ing] public and industry attention away from Pulse Entertainment,” asserting claims against David for unfair business competition practices and trade libel.[11] Pulse further asserts that the “mischaracterization of the [Michael Jackson] animation as a hologram highlights David’s complete lack of technical expertise….[This] was not a hologram at all, rather, it was an animation projected onto a screen.”[12]

            Will the courts find for David on his patent infringement claims? Or will they find that there is little substance to his allegations? Does the industry need to distinguish among these types of technology and animation to continue bringing this kind of entertainment to the masses? Is it also a coincidence that Fox successfully sued one of David’s media companies for copyright infringement in 2012?[13] Time, or a hefty settlement (D’Oh!), will tell who has the future rights to profit from celebrity hologram and animation technology.




[1] Homer Simpson Duffed With Patent Lawsuit, WORLD INTELL. PROP. REV. (Aug. 18, 2014),

[2] Id.

[3] Id.

[4] Eriq Gardner, Michael Jackson ‘Hologram’ Show Sparks New Legal Crossfire (Exclusive), THE HOLLYWOOD REP. (June 19, 2014, 12:11 PM),

[5] Eriq Gardner, Homer Simpson Hologram at Comic-Con Draws Patent Lawsuit (Exclusive), THE HOLLYWOOD REP. (Aug. 15, 2014, 12:54 PM),

[6] Amended Complaint and Demand for Jury Trial at 2, Hologram USA, Inc. et al. v. Pulse Evolution Corp. et al. (D. Nev. May 29, 2014) (No. 2:14-cv-00772).

[7] Eriq Gardner, Homer Simpson Hologram at Comic-Con Draws Patent Lawsuit (Exclusive), THE HOLLYWOOD REP. (Aug. 15, 2014, 12:54 PM),

[8] Id.

[9] Eriq Gardner, Michael Jackson ‘Hologram’ Show Sparks New Legal Crossfire (Exclusive), THE HOLLYWOOD REP. (June 19, 2014, 12:11 PM),

[10] Gardner, supra note 6.

[11] Eriq Gardner, Michael Jackson ‘Hologram’ Show Sparks New Legal Crossfire, THE HOLLYWOOD REP. (June 19, 2014, 12:11 PM),

[12] Id.

[13] See WORLD INTELL. PROP. REV., supra note 1.

Cyber Security Active Defense: Playing with Fire or Sound Risk Management?


Cite as: Sean L. Harrington, Cyber Security Active Defense: Playing with Fire or Sound Risk Management?, 20 Rich. J.L. & Tech. 12 (2014),

 Sean L. Harrington*

Trying to change its program

Trying to change the mode . . . crack the code

Images conflicting into data overload[1]

 I. Introduction

[1]        “Banks Remain the Top Target for Hackers, Report Says,” is the title of an April 2013 American Banker article.[2] Yet no new comprehensive U.S. cyber legislation has been enacted since 2002,[3] and neither the legislative history nor the statutory language of the Computer Fraud and Abuse Act (CFAA) or the Electronic Communications Privacy Act (ECPA) makes reference to the Internet.[4] Courts have nevertheless filled in the gaps—sometimes with surprising results.

[2]        Because state law, federal legislative proposals, and case law are all in a continuing state of flux, practitioners have found it necessary to follow these developments carefully, forecast, and adapt to them, all of which has proved quite challenging. As the title of this Comment suggests, deploying sound cyber security practices is not only equally challenging, but also “risky,” which may seem counterintuitive in light of the fact that the intent of cyber security programs is to manage risk, not create it.[5]

[3]        Cyber security risks concern exploits made possible by technological advances, some of which are styled with familiar catch-phrases: “e-Discovery,” “social media,” “cloud computing,” “Crowdsourcing,” and “big data,” to name a few. Yet, long before the term “cloud computing” became part of contemporary parlance, Picasa used to store photos in the cloud (where the “cloud” is a metaphor for the Internet).[6] This author has been using Hotmail since 1997 (another form of cloud computing). As the foregoing examples illustrate, the neologisms were long predated by their underlying concepts.

[4]        One of the latest techno-phrases du jour is “hack back.”[7] The concept isn’t new, and the term has been “common” parlance at least as far back as 2003.[8] “Hack back”—sometimes termed “active defense,” “back hacking,” “retaliatory hacking,” or “offensive countermeasures” (“OCM”)—has been defined as the

“process of identifying attacks on a system and, if possible, identifying the origin of the attacks. Back hacking can be thought of as a kind of reverse engineering of hacking efforts, where security consultants and other professionals try to anticipate attacks and work on adequate responses.”[9]

A more accurate and concise definition might be “turning the tables on a cyberhacking assailant: thwarting or stopping the crime, or perhaps even trying to steal back what was taken.”[10] One private security firm, renowned for its relevant specialization, defines active defense, in pertinent part, as “deception, containment, tying up adversary resources, and creating doubt and confusion while denying them the benefits of their operations.”[11] Some have proposed—or carried out—additional measures, such as “photographing the hacker using his own system’s camera, implanting malware in the hacker’s network, or even physically disabling or destroying the hacker’s own computer or network.”[12]

[5]        Back hacking has been a top-trending technology topic over the past year, prompted in part by the controversial Report of the Commission on the Theft of American Intellectual Property (“IP Commission Report”),[13] and has been debated on blogs, symposium panels, editorials, and news media forums by information security professionals and lawyers alike. One discussion with the potential to grab practitioners’ attention was a panel featuring attorneys David Navetta and Ron Raether—both well regarded in the information security community—on the utility and propriety of such practices. One opined that, if the circumstance is exigent enough, a company may take “measures into [its] own hands,” and would “not likely be prosecuted under the CFAA, depending on the exigency of the circumstances.”[14] The other reasoned that hack back “technically violates the law, but is anyone going to prosecute you for that? Unlikely.”[15] He noted, “[i]t provides a treasure trove of forensic information that you can use,” and continued, “[w]ith respect to the more extreme end of hack back, where you are actually going to shut down servers, I think there is a necessity element to it—an exigency: if someone’s life is threatened, if it appears that there is going to be a monumental effect on the company, then it might be justified.”[16] At the most recent RSA Conference, in 2014, where the “hackback” debate continued, the presentation was billed, in part, with the proposition that “[a]ctive defense should be viewed as a diverse set of techniques along a spectrum of varying risk and legality.”[17] And other commentators have urged that “offensive operations must be considered as a possible device in the cyber toolkit.”[18]

[6]        Most commentators and scholars, however, seem to agree that “hack back” is not only “risky,” but also not a viable option for a variety of reasons.[19] Hack backs and other surreptitious cyber acts incur the risks of criminal liability, civil liability, regulatory liability, professional discipline, compromise of corporate ethics, injury to brand image, and escalation. One practitioner quoted by the LA Times exclaimed, “[i]t’s not only legally wrong, it’s morally wrong.”[20] James Andrew Lewis, a senior fellow at the Center for Strategic and International Studies, characterized hacking back as “a remarkably bad idea that would harm the national interest.”[21] The Cyber Intelligence Sharing and Protection Act, a major cybersecurity bill passed by the House in April 2013, contained an amendment specifically providing that the bill did not permit hacking back.[22] Representative Jim Langevin (D-RI), who authored the amendment, explained, “[w]ithout this clear restriction, there is simply too much risk of potentially dangerous misattribution or misunderstanding of any hack-back actions.”[23] Further, the private security firm renowned for its active defense strategies, mentioned ante, has attempted to distance itself from phrases such as “hack back” and “retaliatory hacking,” preferring instead the broader phrase “active defense.”[24] Another example of the importance of subtleties in word choice may be “Countermeasure,” a word some appear to have conflated with the concept of active defense.[25]

II. Active Defense Approaches

[7]        Self-defense is not an abstraction created by civilization, but a law spawned by nature itself, and has been justified since antiquity.[26] It has been regarded since the early modern period as available to redress injuries against a state’s sovereign rights.[27] There is little question that cyber-attacks against designated critical infrastructure are attacks against a state’s sovereign rights,[28] because much of civilian infrastructure is both a military and a national asset.[29] Accordingly, the focus of the 2014 NATO International Conference on Cyber Conflict (“CyCon”) is active cyber defense, including implications for critical infrastructure.[30] Likewise, a project sponsored by NATO’s Cooperative Cyber Defense Centre of Excellence is set to publish a report in 2016 establishing acceptable responses to pedestrian or quotidian cyber-attacks against nations; its predecessor, regarded as an academic text, focused on cyber-attacks against a country that are physically disruptive or injurious to people, and on possible responses under the UN Charter and military rules.[31] Both works are based on the concepts of self-defense and, under certain circumstances, preemptive “anticipatory self-defense.”[32]

[8]        The questions that scholars, policymakers, information security experts, and corporate executives have struggled with, however, are at what threshold such attacks warrant the protection of the state,[33] whether a private corporation may respond in lieu of or in concert with protection by the state, and to what extent such collusion constitutes excessive entanglement between the private and public sectors. Implicit in these questions is whether the government is willing and able to develop a modern and adaptable regulatory and criminal law framework and to allocate adequate law enforcement resources to confront the problem.[34] Because, at the time of this writing, it is widely perceived that the government is not yet willing and able,[35] victims often do not report suspected or actual cyber-attacks, and have resorted to inappropriate self-help, deploying their own means of investigating and punishing transgressors.[36] As one commentator posits,

With regard to computer crime, some might argue that the entire investigative process be outsourced to the business community. Historically, the privatization of investigations has assisted public law enforcement by allowing them to concentrate on other responsibilities, and has prevented their resources from being allocated in too sparse a manner to be useful.[37]

Awaiting the ultimate resolution of these questions, American corporations have developed an array of active defense tactics. Below are a few of the more common examples, along with their corresponding challenges:

A. Beaconing

[9]        Beaconing is one of the most cited active defense techniques, and one mentioned in the IP Commission Report (along with “meta-tagging” and “watermarking”) as a way to enhance electronic files to “allow for awareness of whether protected information has left an authorized network and can potentially identify the location of files in the event that they are stolen.”[38] A benign version of beaconing is the use of so-called Web bugs.[39] A Web bug is a link—a surreptitious file object—commonly used by spammers and placed in an e-mail message or e-mail attachment, which, when opened, will cause the e-mail client or program to attempt to retrieve an image file object from a remote Web server and, in the process, transmit information that includes the user’s IP address, among other information.[40] This transmission is not possible if the user preconfigured the e-mail client or program to refrain from retrieving images or HTML content from the Internet, or if the user’s e-mail client blocks externally-hosted images by default.[41] This information becomes available to the sender either through an automated report service or simply by monitoring traffic to the Web server.[42] In one project demonstrating the use advocated by the IP Commission Report, researchers employed such technology in decoy documents to track possible misuse of confidential documents.[43] So, is beaconing legal?
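The mechanics described above are simple enough to sketch in a few lines of Python. The following is a minimal, illustrative model of a tracking-pixel beacon server, not an implementation drawn from any cited source; the host name, port, and document identifier are invented for the example.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Smallest valid transparent 1x1 GIF: a classic "tracking pixel."
BEACON_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
    b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
    b"\x00\x02\x02D\x01\x00;"
)

def log_line(ip, path, user_agent):
    """Format what a single beacon retrieval reveals about whoever opened it."""
    return f"beacon hit: ip={ip} path={path} ua={user_agent}"

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A per-document ID can be encoded in the path (e.g., /b/doc-1234.gif)
        # so each decoy file reports which copy "phoned home."
        print(log_line(self.client_address[0], self.path,
                       self.headers.get("User-Agent", "-")))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(BEACON_GIF)))
        self.end_headers()
        self.wfile.write(BEACON_GIF)

# To deploy, the decoy e-mail or document would embed something like:
#   <img src="http://tracker.example.com/b/doc-1234.gif" width="1" height="1">
# and the server would run: HTTPServer(("", 8080), BeaconHandler).serve_forever()
```

As the sketch makes plain, the beacon transmits only what the requesting client volunteers (IP address, request path, user agent), which is precisely why more aggressive variants that reach back into the remote system raise the legal questions discussed next.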

[10]      The Wall Street Journal (the “Journal”) quoted Drexel University law professor Harvey Rishikof—who also is co-chairman of the American Bar Association’s Cybersecurity Legal Task Force—as saying the legality of beaconing is not entirely clear.[44] Rishikof is quoted as saying, “‘[t]here’s the black-letter law, and there’s the gray area. . . . Can you put a beacon on your data? Another level is, could you put something on your data that would perform a more aggressive action if the data was taken?’”[45] The article went on to suggest more aggressive strategies such as “inserting code that would cause stolen data to self-destruct or inserting a program in the data that would allow a company to seize control of any cameras on the computers where the data were being stored.”[46] The Journal, citing an anonymous Justice Department source, further reported that, “[i]n certain circumstances beaconing could be legal, as long as the concealed software wouldn’t do other things like allow a company to access information on the system where the stolen data were stored.”[47]

[11]      Another important consideration is the fact that beaconing may fall within one of the active defense definitions (supra) as “deception.”[48] Although deception is recognized as both a common and effective investigative technique,[49] the problem is the possibility that the activities of the investigator could be imputed under Model Rule of Professional Conduct 5.3 to one or more attorneys responsible for directing or approving of those activities.[50] Under Model Rule 8.4(c), neither an attorney nor an attorney’s agent under his or her direction or control may “engage in conduct involving dishonesty, fraud, deceit, or misrepresentation.”[51] Although the question of whether deception, as contemplated in Rule 8.4, exists in the context of incident response or network forensics investigations is not well settled,[52] most states have held “[t]here are circumstances where failure to make a disclosure is the equivalent of an affirmative misrepresentation.”[53] A few state bar associations have already addressed similar technology-related ethical pitfalls. The Philadelphia Bar Association Professional Guidance Committee advised in Opinion 2009–02 that an attorney who asks an agent (such as an investigator) to “friend” a party on Facebook in order to obtain access to that party’s non-public information would violate, among others, Rule 5.3 of the Pennsylvania Rules of Professional Conduct.[54] Likewise, the Association of the Bar of the City of New York Committee on Professional and Judicial Ethics issued Formal Opinion 2010–2, which provides that a lawyer violates, among others, New York Rule of Professional Conduct 5.3 by employing an agent to engage in the deception of “friending” a party under false pretenses to obtain evidence from a social networking website.[55]

B. Threat Counter-Intelligence Gathering

[12]      One of the most seemingly innocuous active defense activities is intelligence gathering. Security analyst David Bianco defines threat intelligence as “[c]onsuming information about adversaries, tools or techniques and applying this to incoming data to identify malicious activity.”[56] Threat intelligence gathering ranges from reverse malware analysis and attribution, to monitoring inbound and outbound corporate e-mail, to more risky endeavors.[57] Some security experts claim to frequent “Internet store fronts” for malware, “after carefully cloaking [their] identity to remain anonymous.”[58] The reality, however, is that gaining access to and remaining on these black market fora requires the surreptitious visitor either to: (1) participate (“pay to play”); (2) have developed a reputation over months or years, or have founded the underground forum ab initio; or (3) have befriended or been extended a personal invitation by an established member. The first two of these three activities imply that the participant would have co-conspirator or accomplice liability in the underlying crimes. Another risk is that, if the site is reputed to also purvey child pornography, a court may find that the site visitor knowingly acquired possession of the contraband (even as temporary Internet cache), even if the true intent of lurking was to gather intelligence.[59] Yet another obvious risk is that surreptitious monitoring of hacker sites using false credentials or representations is an act of deception which, for the reasons more fully set forth above, could create disciplinary liability for any attorneys who are involved in or acquiesce to the activity.
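The benign core of Bianco's definition, consuming adversary information and applying it to incoming data, can be illustrated with a short sketch. The indicator values and event records below are invented for the example and are not drawn from any actual threat feed.

```python
# Hypothetical indicators of compromise (IOCs) consumed from a threat feed.
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}
MALICIOUS_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # known-bad file hashes

def flag_events(events):
    """Return the subset of incoming events that match a known indicator."""
    hits = []
    for ev in events:
        if ev.get("src_ip") in MALICIOUS_IPS or ev.get("file_md5") in MALICIOUS_HASHES:
            hits.append(ev)
    return hits

# Two invented log events: one benign, one matching an IP indicator.
events = [
    {"src_ip": "192.0.2.10", "file_md5": None},
    {"src_ip": "203.0.113.7", "file_md5": None},
]
print(flag_events(events))  # -> [{'src_ip': '203.0.113.7', 'file_md5': None}]
```

Matching passively collected local data against externally supplied indicators, as above, carries none of the access or deception problems described in the preceding paragraph; the legal exposure begins when the gathering itself requires entering the adversary's fora under false pretenses.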

C. Sinkholing

[13]      Sinkholing is the impersonation of a botnet command-and-control server in order to intercept and receive malicious traffic from its clients.[60] To accomplish this, one of three things must happen: the domain registrar must redirect the domain name to the investigator’s machine (which works only when the connection is based on a DNS name); the Internet Service Provider (ISP) must redirect an existing IP address to the investigator’s machine (possible only if the investigator’s machine is located in the IP range of the same provider); or the ISP must instead redirect all traffic destined for an IP address to the investigator’s machine (the “walled garden” approach).[61]
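A minimal sketch may make the DNS-redirection variant concrete. Assuming the registrar has already repointed the command-and-control domain at the investigator's machine, the sinkhole simply records which infected clients check in; the domain, addresses, and port below are invented for illustration.

```python
import socket

# After the registrar redirect, the C2 domain resolves to the sinkhole.
SINKHOLE_DNS = {"c2.badbotnet.example": "192.0.2.50"}

def resolve(domain):
    """Return the sinkholed address if the domain has been redirected."""
    return SINKHOLE_DNS.get(domain, "unresolved")

def sinkhole_listener(host="0.0.0.0", port=8443):
    """Accept bot check-ins and log the source address; never act as C2."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                # Intercept and observe only: send no commands back.
                print(f"infected client check-in from {addr[0]}")

print(resolve("c2.badbotnet.example"))  # -> 192.0.2.50
```

Note that every check-in the listener records is an interception of a communication the sender did not intend for the investigator, which is why the Wiretap Act concerns discussed below attach so readily to this technique.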

[14]      Sinkholing involves the same issues of deception discussed ante, but also relies on the domain registrar’s willingness and legal ability to assist. As Link and Sancho point out in their paper Lessons Learned While Sinkholing Botnets—Not as Easy as it Looks!, “[u]nless there is a court order that compels them to comply with such a request, without the explicit consent of the owner/end-user of the domain, the registrar is unable to grant such requests.”[62] Doubtless they were referring to the Wiretap Act (Title I of the Electronic Communications Privacy Act), which generally prohibits unconsented interception (contemporaneous with transmission), disclosure, or use of electronic communications.[63] Further, a federal district court recently ruled that intentionally circumventing an IP address blacklist in order to crawl an otherwise-publicly available website constitutes “access without authorization” under the CFAA.[64] Link and Sancho continue that registrars have little incentive to assist because it does not generate revenue, and note that sinkholing invites distributed denial of service (“DDoS”) retaliation which could affect other customers of a cloud-provided broadband connection.[65] Finally, sinkholing is likely to collect significant amounts of data, including personally identifiable information (“PII”). The entity collecting PII is likely to be subject to the data privacy, handling, and disclosure laws of all the jurisdictions whence the data came.

D. Honeypots

[15]      A honeypot is defined as “a computer system on the Internet that is expressly set up to attract and ‘trap’ people who attempt to penetrate other people’s computer systems.”[66] It may be best thought of as “an information system resource whose value lies in unauthorized or illicit use of that resource.”[67] Honeypots arguably do involve deception, but they have been in use for a comparatively long time and are generally accepted as a valid information security tactic (and are therefore relatively free from controversy). The legal risks, historically, have been identified as: (1) potential violations of the ECPA;[68] and (2) possibly creating an entrapment defense for the intruder.[69] Neither of these is applicable here because, respectively: (1) the context of the deployment discussed herein is the corporate entity as the honeypot owner (thus, a party to the wire communication); and (2) the corporate entity is not an agent of law enforcement, and, further, the entrapment defense is available only when the defendant was not predisposed to commit the crime (here, a hacker intruding into a honeypot is predisposed).[70] Nevertheless, Justice Department attorney Richard Salgado, speaking at the Black Hat Briefings, reportedly warned that the law regarding honeypots is “untested” and that entities implementing devices or networks designed to attract hackers could face such legal issues as liability for an attack launched from a compromised honeypot.[71] This possibility was discussed six years ago:

If a hacker compromises a system in which the owner has not taken reasonable care to secure and uses it to launch an attack against a third party, the owner of that system may be liable to the third party for negligence. Experts refer to this scenario as “downstream liability.” Although a case has yet to arise in the courts, honeypot operators may be especially vulnerable to downstream liability claims since it is highly foreseeable that such a system be misused in this manner.[72]

Another honeypot risk is the unintended consequence of becoming a directed target: the honeypot may provoke or attract hackers who might otherwise have moved on to easier targets. Another is that an improperly configured honeypot could ensnare an innocent third party or customer and collect legally-protected information (such as PII). If that information is not handled according to applicable law, the owner of the honeypot could incur statutory liabilities therefor.[73] And yet another scenario is one that, perhaps, only a lawyer would recognize as a risk: “[i]f you have a honeypot and do learn a lot from it but don’t remedy or correct it, then there’s a record that is discoverable and that you knew you had a problem and didn’t [timely] fix it.”[74]
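Because a honeypot has no legitimate users, every connection to it is presumptively suspect, which is what makes even a trivial implementation useful. The following sketch is illustrative only; the service banner and port are invented, and a production deployment would require far more care, including the data-handling safeguards just discussed.

```python
import datetime
import socket

def handle_touch(addr):
    """Record a connection attempt; every touch is suspect by definition."""
    return f"{datetime.datetime.utcnow().isoformat()} touch from {addr}"

def honeypot(host="0.0.0.0", port=2121):
    """A low-interaction honeypot: listen, log, and reveal nothing real."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, (ip, _) = srv.accept()
            with conn:
                print(handle_touch(ip))
                conn.sendall(b"220 FTP server ready\r\n")  # plausible banner
                # Deliberately collect nothing further: an over-curious
                # honeypot risks capturing PII or ensnaring innocent parties.
```

The design choice worth noting is the last comment: the sketch logs only the source address and timestamp, precisely to avoid the statutory liabilities that attach when a honeypot collects legally protected information.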

[16]      Finally, there are uses for honeypots which, when regarded as a source of revenue by their owners, have the potential to cause substantial injury to brand image and reputation, and possibly to draw court sanctions: one law firm has been accused of seeding the very copyrighted content it was retained to protect, which the firm then used as evidence in copyright suits it prosecuted.[75] Because of these alleged activities, the firm has been labelled a “copyright troll.”[76] The allegations, if proved true, also appear to involve acts of deception, discussed ante, which may subject the firm’s attorneys to attorney disciplinary proceedings.[77] Further, the firm’s attorneys may incur other possible liabilities, such as vexatious and frivolous filing sanctions, abuse of process, barratry, or champerty.[78]

E. Retaliatory Hacking

[17]      A common explanation for why corporations have little to fear in the way of prosecution for retaliatory hacking is that “criminals don’t call the cops.”[79] Nevertheless, there is little debate that affirmative retaliatory hacking is unlawful,[80] even if done in the interests of national security.[81] Although there may be “little debate,” there is debate.[82] The views of many passionate information security analysts could be summed up by authors John Strand and Paul Asadoorian, who argue, “[c]urrently, our only defense tools are the same tools we have had for the past 10+ years, and they are failing.”[83] David Willson, the owner and president of Titan Info Security Group and a retired Army JAG, contends that using “automated tools outside of your own network to defend against attacks by innocent but compromised machines” is not gaining unauthorized access or a computer trespass, and he asks, “[i]f it is, how is it different from the adware, spam, cookies, or others that load on your machine without your knowledge, or at least with passive consent?”[84] Willson provides a typical scenario, then examines the statutory language of the CFAA and offers some possible arguments—but he notes that his arguments bear stretch marks (and he makes no offer of indemnification should practitioners decide to use them).[85]

[18]      Willson is not alone in searching for leeway within the CFAA. Stewart Baker, former NSA general counsel, argues on his blog,

Does the CFAA prohibit counterhacking? The use of the words “may be illegal,” and “should not” are a clue that the law is at best ambiguous. . . . [V]iolations of the CFAA depend on “authorization.” If you have authorization, it’s nearly impossible to violate the CFAA . . . [b]ut the CFAA doesn’t define “authorization.” . . . The more difficult question is whether you’re “authorized” to hack into the attacker’s machine to extract information about him and to trace your files. As far as I know, that question has never been litigated, and Congress’s silence on the meaning of “authorization” allows both sides to make very different arguments. . . . [C]omputer hackers won’t be bringing many lawsuits against their victims. The real question is whether victims can be criminally prosecuted for breaking into their attacker’s machine.[86]

Other theories—and assorted arguments bearing stretch marks—analogize retaliatory hacking to the recapture-of-chattels privilege,[87] entry upon land to remove chattels,[88] private necessity,[89] or even the castle doctrine.[90] Jassandra K. Nanini, a cybersecurity law specialist, suggests applying the “security guard doctrine” as an analogy.[91] She posits that, if private actors act independently of law enforcement and have a valid purpose for their security activities that remains separate from law enforcement, then incidental use by law enforcement of evidence gained through those activities is permissible, even if the security guard acted unreasonably (as long as he remained within the confines of his employer’s interests).[92] As applied, Nanini explains the analogy as follows:

If digital property were considered the same as physical, cyber security guards could “patrol” client networks in search of intruder footprints, and based on sufficient evidence of a breach by a particular hacker, perhaps indicated by the user’s ISP, initiate a breach of the invader’s network in order to search for compromised data and disable its further use. Even more aggressive attacks designed to plant malware in hacker networks could be considered seizure of an offensive weapon, comparable to a school security guard seizing a handgun from a malicious party. Such proactive defense could use the hacker’s own malware to corrupt his systems when he attempts to retrieve the data from the company’s system. Certainly all of these activities are within the scope of the company’s valid interest, which include maintaining data integrity, preventing use of stolen data, and disabling further attack. . . . Similarly, companies may wholly lack any consideration of collecting evidence for legal recourse, keeping in step with the private interest requirement of the private security guard doctrine in general. All hack-backs could be executed without any support or direction from law enforcement, opening the door to utilization of evidence in a future prosecution against the hacker.[93]

The foregoing theories notwithstanding, what is clear is that obtaining evidence by use of a keylogger, spyware, or persistent cookies likely violates state and federal laws, such as the CFAA or ECPA.[94] The CFAA, last amended in 2008, criminalizes the conduct of anyone who commits, attempts to commit, or conspires to commit an offense under the Act, including offenses such as knowingly accessing without authorization a protected computer (for delineated purposes) or intentionally accessing a computer without authorization (for separately delineated purposes).[95] Relevant statutory phrases, such as “without authorization” and “access,” have been the continuing subject of appellate review.[96] One federal court, referring to both the ECPA and CFAA, pointed out that “the histories of these statutes reveal specific Congressional goals—punishing destructive hacking, preventing wiretapping for criminal or tortious purposes, securing the operations of electronic communication service providers—that are carefully embodied in these criminal statutes and their corresponding civil rights of action.”[97] At least one court has held that the use of persistent tracking cookies is a violation of the Electronic Communications Privacy Act.[98] Congress is currently considering reform to the CFAA, as well as comprehensive privacy legislation that would, in some circumstances, afford a private right of action to consumers whose personal information is collected without their consent.[99]

[19]      Regardless of the frequency with which retaliatory hacking charges have been brought, one issue that has not yet entered the debate is the inadmissibility of illegally obtained evidence. This matters because suit under the CFAA or ECPA is a remedy that corporate victims have invoked with increasing frequency.[100]

[20]      Another liability—the one most frequently cited—is that of misattribution and collateral damage:

[E]ncouraging digital vigilantes will only make the mayhem worse. Hackers like to cover their tracks by routing attacks through other people’s computers, without the owners’ knowledge. That raises the alarming prospect of collateral damage to an innocent bystander’s systems: imagine the possible consequences if the unwitting host of a battle between hackers and counter-hackers were a hospital’s computer.[101]

Likewise, Representative Mike Rogers (R-MI), sponsor of the Cyber Intelligence Sharing and Protection Act (CISPA) and Chair of the House Permanent Select Committee on Intelligence, warned private corporations against going on the offensive as part of their cyber security programs: “You don’t want to attack the wrong place or disrupt the wrong place for somebody who didn’t perpetrate a crime.”[102] Contemplate the civil liabilities one could incur if, in an effort to take down a botnet through self-help and vigilantism, one damaged computers belonging to customers, competitors, or competitors’ customers. Aside from the financial losses and injury to brand reputation and goodwill, implicated financial institutions could expect increased regulatory scrutiny and could compromise government contracts subject to FISMA.

[21]      Yet another frequently discussed liability is that of escalation: cybercrime is perpetrated by many different attacker profiles of persons and entities, including cyber-terrorists, cyber-spies, cyber-thieves, cyber-warriors, and cyber-hacktivists.[103] Because the purported motivation of a cyber-hacktivist is principle, retaliation by the corporate victim may be received as an invitation to return fire and escalate. Similarly, “[e]ncouraging corporations to compete with the Russian mafia or Chinese military hackers to see who can go further in violating the law . . . is not a contest American companies can win.”[104] Conversely, the motivation of a cyber-thief is principal and interest, so retaliation by the target might be taken as a suggestion to move on to an easier target. Because the perpetrators are usually anonymous, however, the corporate victim has no way to make a risk-based and proportional response premised upon the classification of the attacker as nation-state, thief, or hacktivist.

[I]n cyberspace attribution is a little harder. On the playground you can see the person who hit you . . . well, almost always[,] . . . in cyberspace we can track IP addresses and TTPs from specific threat actors, which smart analysts and researchers tell us is a viable way to perform attribution. I agree with them, largely, but there’s a fault there. An IP address belonging to China SQL injecting your enterprise applications is hardly a smoking gun that Chinese APTs are after you. Attackers have been using others’ modus operandi to mask their identities for as long as spy games have been played. Attackers have been known to use compromised machines and proxies in hostile countries for as long as I can remember caring—to “bounce through” to attack you. Heck, many of the attacks that appear to be originating from nation-states that we suspect are hacking us may very well be coming from a hacker at the coffee house next door to your office, using multiple proxies to mask their true origin. This is just good OpSec, and attackers use this method all the time, let’s not kid ourselves.[105]

If, without conclusive attribution and intelligence, the corporate victim is unable to make a risk-based and proportional response, it may be reasonable to ask whether retaliatory hacking abandons the risk-based approach to business problems exhorted by the FFIEC,[106] PCI,[107] and the NIST Cybersecurity Framework.[108] “If we start using those sort of [cyber weapons], it doesn’t take much to turn them against us, and we are tremendously vulnerable,” said Howard Schmidt, a former White House cyber security coordinator.[109]

[22]      Then there is the often overlooked issue of professional ethics—not for the attorney, but for the information security professional. “Ethics,” a term derived from the ancient Greek ethikos (ἠθικός), has been defined as “a custom or usage.”[110] In modern usage, ethics is understood to be “[professional] norms shared by a group on a basis of mutual and usually reciprocal recognition.”[111] Codes of ethics provide articulable principles against which one’s decision-making is objectively measured, and serve other important interests, including presenting an image of prestige and credibility for the organization and the profession,[112] eliminating unfair competition,[113] and fostering cooperation among professionals.[114]

[23]      Many information security professionals are certified by the International Information Systems Security Certification Consortium ((ISC)²). The (ISC)² Committee has recognized its responsibility to provide guidance for “resolving good versus good, and bad versus bad, dilemmas,” and “to encourage right behavior.”[115] The Committee also has the responsibility to discourage certain behaviors, such as raising unnecessary alarm, fear, uncertainty, or doubt; giving unwarranted comfort or reassurance; consenting to bad practice; attaching weak systems to the public network; professional association with non-professionals; professional recognition of, or association with, amateurs; or associating or appearing to associate with criminals or criminal behavior.[116] Therefore, an information security professional bound by this code who undertakes active defense activities that he or she knows or should know are unlawful, or who proceeds where the legality of such behavior is not clear, may be in violation of the Code.

[24]      It would stand to reason that an organization that empowers, directs, or acquiesces to conduct by its employees that violates the (ISC)² Code of Ethics may violate its own corporate ethics or otherwise compromise its ethical standing in the corporate community—or not: when Google launched a “secret counter-offensive” and “managed to gain access to a computer in Taiwan that it suspected of being the source of the attacks,”[117] tech sources praised Google’s bold action.[118]

[25]      Nevertheless, corporate ethics is an indispensable consideration in the hack back debate. The code of ethics and business conduct for financial institutions should reflect and reinforce corporate values, including uncompromising integrity, respect, responsibility and good citizenship. As noted above, retaliatory hacking is deceptive and has been characterized as reckless, and even Web bugs are commonly associated with spammers. Corporate management must consider whether resorting to techniques pioneered by and associated with criminals or spammers has the potential to compromise brand image in the eyes of existing and prospective customers. Similarly, to the extent that financial corporations are engaging in active defense covertly,[119] corporate management must consider whether customers’ confidence in the security of their data and investments could be shaken when such activities are uncovered. Will customers wonder whether their data has been placed at risk because of escalation? Will shareholders question whether such practices are within the scope of good corporate stewardship?

III. Alternatives to Retaliatory Hacking

[26]      The obvious argument in support of active defense is that the law and governments are doing little to protect private corporations and persons from cybercrime, which has inexorably resulted in resort to self-help,[120] and those who vociferously counsel refraining from active defense often have little advice on alternatives. At the risk of pointing out the obvious, one counsels, “‘when you look at active defense, we need to focus on reducing our vulnerabilities.’”[121]

[27]      Alternatives to hacking back are evolving, and one of the more promising is the pioneering threat intelligence gathering and sharing of the Financial Services Information Sharing and Analysis Center (“FS-ISAC”), which collects information about threats and vulnerabilities from its 4,400 financial institution members, government partners, and special relationships with Microsoft®, iSIGHT PartnersSM, Secunia, et al., anonymizes the data, and distributes it back to members.[122] In addition to e-mail alerts and a Web portal, FS-ISAC holds regular teleconferences during which vulnerability and threat information is discussed and presentations on current topics are given.[123] The FS-ISAC recently launched a security automation project to eliminate manual processes for collecting and distributing cyber threat information, according to Bill Nelson, the Center’s director.[124] The objective of the project is to significantly reduce operating costs and lower fraud losses for financial institutions by consuming threat information on a real-time basis.[125]
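
The FS-ISAC’s collect-anonymize-redistribute workflow can be sketched in a few lines of code. The sketch below is purely illustrative—the report structure and field names are hypothetical, not FS-ISAC’s actual schema—but it captures the essential step: submitter-identifying fields are stripped before the indicators are pooled for distribution to members.

```python
# Illustrative only: hypothetical threat-report fields, not FS-ISAC's schema.
# Fields that identify the submitting institution are removed before sharing.
SUBMITTER_FIELDS = {"submitter", "contact_email", "internal_ticket"}

def anonymize(report: dict) -> dict:
    """Return a copy of a threat report with submitter-identifying fields removed."""
    return {k: v for k, v in report.items() if k not in SUBMITTER_FIELDS}

def distribute(reports: list) -> list:
    """Anonymize each member submission, pooling only the shareable indicators."""
    return [anonymize(r) for r in reports]

shared = distribute([
    {"submitter": "Bank A", "contact_email": "soc@banka.example",
     "indicator": "198.51.100.7", "type": "botnet C2"},
])
print(shared)  # [{'indicator': '198.51.100.7', 'type': 'botnet C2'}]
```

The point of the design is that members receive the operationally useful indicator (the address and its classification) while the identity of the victim institution never leaves the clearinghouse.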

[28]      Although, as American Banker wryly observes, “[b]ankers have never been too keen on sharing secrets with one another,”[126] dire circumstances have catalyzed a new era of cooperation, paving the way for the success of the FS-ISAC’s cooperative model—which, even before the current ambitious automation project, has resulted in successful botnet takedown operations.[127] An illustrative example is the Citadel malware botnet takedown, where Microsoft’s Digital Crimes Unit, in collaboration with the FS-ISAC, the Federal Bureau of Investigation, the American Bankers Association, NACHA—The Electronic Payments Association, and others, executed a simultaneous operation to disrupt more than 1,400 Citadel botnets reportedly responsible for over half a billion dollars in losses worldwide.[128] With the assistance of U.S. Marshals, data and evidence, including servers, were seized from data hosting facilities in New Jersey and Pennsylvania; the seizure was made possible by a court-ordered civil seizure warrant issued by a U.S. federal court.[129] Microsoft also reported that it shared information about the botnets’ operations with international Computer Emergency Response Teams, which can deal with elements of the botnets outside U.S. jurisdiction, and the FBI informed enforcement agencies in those countries.[130] Similar, more recent, operations include a “major takedown of the Shylock Trojan botnet,” described as “an advanced cybercriminal infrastructure attacking online banking systems around the world,” that reportedly was coordinated by the UK National Crime Agency (NCA) and included Europol, the FBI, BAE Systems Applied Intelligence, Dell SecureWorks, Kaspersky Lab, and the UK’s GCHQ,[131] as well as a takedown operation that targeted the much-feared Cryptolocker.[132] Following the FS-ISAC model, the retail sector has taken the “historic decision” to share data on cyber-threats for the first time through a newly formed Retail Cyber Intelligence Sharing Center (R-CISC),[133] and the financial services and retail sectors formed a cross-partnership.[134]

[29]      Finally, at the time of this publication, a draft Cybersecurity Information-Sharing Act of 2014, advanced by Chairman Dianne Feinstein (D-CA) and ranking member Saxby Chambliss (R-GA), had passed out of the Senate Intelligence Committee on a 12-3 vote and is expected to be put to a vote in the full Senate.[135] The bill is designed to enhance, and provide liability protections for, information sharing between private corporate entities, between private corporate entities and the Government, and between Government agencies.

[30]      Yet another promising option is the partnership that critical infrastructure institutions have formed, or should investigate forming, with ISPs. For example, ISPs currently provide DDoS mitigation services that, although not particularly effective against application-layer (OSI model layer 7) attacks, are very capable of responding to volume-based attacks.[136] One senior ISP executive proposed to this author, under the Chatham House Rule,[137] the possibility that ISPs may be able to provide aggregated threat intelligence information, including attribution, based upon monitoring of the entirety of their networks (not merely the network traffic to and from an individual corporate client).
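
The distinction between volumetric and application-layer attacks matters because a volume-based flood can be throttled upstream without inspecting application content. A toy token-bucket rate limiter—the classic volumetric control—can be sketched as follows (illustrative only; actual ISP mitigation runs in network hardware, not application code):

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: the kind of per-source volumetric
    control an upstream provider can apply. Illustrative sketch only."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst   # refill rate (tokens/sec), max burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one packet if a token is available; otherwise drop it."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=5)          # 10 packets/sec, burst of 5
decisions = [bucket.allow() for _ in range(8)]  # a sudden 8-packet burst
print(decisions)  # the first 5 pass; the excess beyond the burst is dropped
```

Because the decision depends only on packet arrival rate, this kind of control works for floods regardless of payload—which is also why it does little against low-volume layer 7 attacks that look like legitimate requests.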

[31]      ISPs’ capabilities are, however, subject both to statutory and regulatory limitations, including, for example, the Cable Act,[138] and proposed rules that would restrict the blocking of “lawful content, applications, services, or non-harmful devices,” that may appear to implicate liability-incurring discretion.[139]

[32]      Nevertheless, several researchers urge that ISPs should assume a “larger security role,” and are in a good position “to cost-effectively prevent certain types of malicious cyber behavior, such as the operation of botnets on home users’ and small businesses’ computers.”[140] Likewise, the Federal Communications Commission has defined “legitimate network management” as including “ensuring network security and integrity” and managing traffic unwanted by end users:

In the context of broadband Internet access services, techniques to ensure network security and integrity are designed to protect the access network and the Internet against actions by malicious or compromised end systems. Examples include spam, botnets, and distributed denial of service attacks. Unwanted traffic includes worms, malware, and virus that exploit end-user system vulnerabilities; denial of service attacks; and spam.[141]

N.B., a 2010 study found that just ten ISPs accounted for 30 percent of IP addresses sending out spam worldwide.[142] And, in 2011, it was reported that over 80% of infected machines were located within networks of ISPs, and that fifty ISPs control about 50% of all botnet infected machines worldwide.[143]

[33]      Other options that some companies have pursued as alternatives to the pitfalls of inherently risky threat counter-intelligence gathering discussed above include risk transfer or automated monitoring, both of which rely on outside vendors or subscription services.

[34]      Under the risk transfer approach, a corporate entity may choose to rely on the findings of a private contractor or company without undue concern for how the contractor or firm acquired the information. U.S. companies already outsource threat intelligence gathering to firms that employ operatives in Israel, such as IBM-Trusteer and RSA,[144] ostensibly because these operatives are able to effectively obtain information without running afoul of U.S. law. For legal scholars, perhaps a case to help justify this approach might be that of the famous Pentagon Papers (New York Times v. United States), in which the Supreme Court held that the public’s right to know was superior to the Government’s need to maintain secrecy of the information, notwithstanding that the leaked documents were obtained unlawfully (i.e., in alleged violation of § 793 of the Espionage Act).[145] Yet, a corporate entity that knowingly—or with blissful ignorance—retains the services resulting from unethical conduct, or conduct that would be criminal if undertaken in the U.S., may nevertheless suffer injury to the brand resulting from revelations of the vendor’s actions.

[35]      Under the automated monitoring approach, corporate entities rely on vendor subscription services, such as Internet Identity (IID™), that use automated software to monitor various fora or social media sites for the occurrence of keywords, concepts, or sentiment, and then alert the customer. Variations of these technologies are in use for high frequency stock trading and e-Discovery. An example might be detecting the offering for sale on a site of primary account numbers and related information by a cyberthief, and providing real-time notification to the merchant so that the accounts can be disabled.
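
Such keyword- or pattern-based monitoring can be approximated in a few lines. The sketch below is hypothetical—it is not IID’s actual product—but it shows the core detection step for the card-number example: flag sixteen-digit runs in forum text that pass the Luhn checksum, the standard validity test for payment card numbers.

```python
import re

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to validate candidate card numbers."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_candidate_pans(text: str) -> list:
    """Flag 16-digit runs that pass the Luhn check (possible account numbers)."""
    return [m for m in re.findall(r"\b\d{16}\b", text) if luhn_valid(m)]

post = "selling fresh dumps 4111111111111111 contact me, also 1234567812345678"
print(find_candidate_pans(post))  # ['4111111111111111']
```

In a production service this detection step would feed the real-time notification the text describes, so the merchant can disable the flagged accounts; the Luhn filter merely suppresses arbitrary sixteen-digit noise.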

[36]      Other promising options include the “big data” approach: employing data scientists and software and hardware automation in-house to draw more meaningful inferences from the data and evidence already legally within the company’s custody and control. For example, David Bianco, a “network hunter” for security firm FireEye, suggests allocating resources for detecting, evaluating, and treating threat indicators according to their value to the attacker, which he represents in his so-called “Pyramid of Pain.”[146] Under this model, remediation efforts are directed toward those indicators that are costly (in time or resources) to the attacker, requiring the attacker to change strategy or incur more costs.[147] Bianco proposed this model after concluding that organizations seem to blindly collect and aggregate indicators without making the best use of them.[148] Vendors such as Guardian Analytics,[149] FireEye’s Threat Analytics Program,[150] CrowdStrike’s Falcon platform,[151] and HP’s Autonomy IDOL[152] (intelligent data operating layer) are endeavoring to bring real-time threat intelligence parsing or information sharing tools and services to the marketplace.
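
Bianco’s prioritization idea can be illustrated with a short sketch. The numeric tier values below are an illustrative ordering of indicator types from cheapest (hash values) to costliest (tools and TTPs—tactics, techniques, and procedures) for an attacker to change; they are not taken verbatim from Bianco’s writing.

```python
# Illustrative tiers in the spirit of the "Pyramid of Pain": higher numbers
# mean the indicator is more expensive for the attacker to change, so
# remediation aimed at it imposes more cost. Values are illustrative only.
PAIN_TIERS = {
    "hash": 1, "ip": 2, "domain": 3,
    "artifact": 4, "tool": 5, "ttp": 6,
}

def prioritize(indicators: list) -> list:
    """Sort observed indicators so the costliest-to-change are handled first."""
    return sorted(indicators, key=lambda i: PAIN_TIERS[i["type"]], reverse=True)

observed = [
    {"type": "hash", "value": "d41d8cd9..."},
    {"type": "ttp", "value": "spearphish + DLL side-loading"},
    {"type": "ip", "value": "203.0.113.9"},
]
print([i["type"] for i in prioritize(observed)])  # ['ttp', 'ip', 'hash']
```

The design choice mirrors the article’s point: blocking a file hash forces only a trivial recompile, while disrupting a TTP forces the adversary to change strategy, so scarce analyst time should flow toward the top of the pyramid.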


IV. Conclusion

[37]      Hack back or active defense, depending on how one defines each—and everything in between—consists of activities that are both lawful and unlawful, and which carry all the business and professional risks associated with deceptive practices, misattribution, and escalation. To urge a risk-based approach to using even lawful active defense tactics would be to state the obvious, and the use of certain types of active defense where misattribution is possible may be to abandon the risk-based approach to problem solving entirely. Moreover, at the time of this writing, a qualified privilege to hack back seems unlikely to emerge through legislative reform, and would be difficult to craft because the holder of such a privilege would have to establish not only proper intent, but also attribution. However, the tools, technologies, partnerships, and information sharing between corporations, governments, vendors, and trade associations are promising; they have already proven effective, and are steadily improving.



* The author is a cyber-security policy analyst in the banking industry and a digital forensics examiner in private practice. Mr. Harrington is a graduate with honors from Taft Law School, and holds the CCFP, MCSE, CISSP, CHFI, and CSOXP certifications. He has served on the board of the Minnesota Chapter of the High Technology Crime Investigation Association, is a current member of Infragard, the Financial Services Roundtable’s legislative and regulatory working groups, FS-ISAC, the U.S. Chamber of Commerce “Cyber Working Group,” the Fourth District Ethics Committee in Minnesota, and is a council member of the Minnesota State Bar Association’s Computer & Technology Law Section. Mr. Harrington teaches computer forensics for Century College in Minnesota, and recently contributed a chapter on the Code of Ethics for the forthcoming Official (ISC)²® Guide to the Cyber Forensics Certified Professional CBK®. He is also an instructor for the CCFP certification.


[1] Rush, The Body Electric, on Grace under Pressure (Mercury Records 1984).

[2] Sean Sposito, Banks Remain the Top Target for Hackers, Report Says, Am. Banker (April 23, 2013, 10:04 AM),

[3] Eric A. Fisher, Cong. Research Serv., R 42114, Federal Laws Relating to Cybersecurity: Overview and Discussion of Proposed Revisions 3 (2013), available at (discussing, for example, the Federal Information Security Management Act).

[4] See Yonatan Lupu, The Wiretap Act and Web Monitoring: A Breakthrough for Privacy Rights?, 9 Va. J.L. & Tech. 3, ¶¶ 7, 9 (2004) (discussing the use of the ECPA and the lack of words such as “Internet,” “World Wide Web,” and “e-commerce” in the text or legislative history); see also Eric C. Bosset et al., Private Actions Challenging Online Data Collection Practices Are Increasing: Assessing the Legal Landscape, Intell. Prop. & Tech. L.J., Feb. 2011, at 3 (“[F]ederal statutes such as the Electronic Communications Privacy Act (ECPA) and the Computer Fraud and Abuse Act (CFAA) . . . were drafted long before today’s online environment could be envisioned . . . .”); Miguel Helft & Claire Cain Miller, 1986 Privacy Law Is Outrun by the Web, N.Y. Times (Jan. 9, 2011), (noting that Congress enacted the ECPA before the World Wide Web or widespread use of e-mail); Orin S. Kerr, The Future of Internet Surveillance Law: A User’s Guide to the Stored Communications Act, and a Legislator’s Guide to Amending It, 72 Geo. Wash. L. Rev. 1208, 1208, 1213-14, 1229-30 (2004); see generally The Electronic Communications Privacy Act: Government Perspectives on Privacy in the Digital Age: Hearing Before the S. Comm. on the Judiciary, 112th Cong. 1-2 (2011) (statement of Sen. Patrick Leahy, Chairman, S. Comm. on the Judiciary), available at (“[D]etermining how best to bring this privacy law into the Digital Age will be one of Congress’s greatest challenges. . . . [The] ECPA is a law that is hampered by conflicting standards that cause confusion for law enforcement, the business community, and American consumers alike.”).

[5] See generally Nat’l Inst. of Standards & Tech., Framework for Improving Critical Infrastructure Cybersecurity 4 (Version 1.0, 2014) available at (describing The Framework as “a risk-based approach to managing cybersecurity risk”).

[6] See Eric Griffith, What is Cloud Computing?, PC Magazine (May 13, 2013),2817,2372163,00.asp.

[7] See, e.g., Ken Dilanian, A New Brand of Cyber Security: Hacking the Hackers, L.A. Times (Dec. 4, 2012), (proposing that “companies should be able to ‘hack back’ by, for example, disabling servers that host cyber attacks”).

[8] See, e.g., Scott Carle, Crossing the Line: Ethics for the Security Professional, SANS Inst. (2003). Readers, doubtless, will know of earlier references.

[9] Techopedia, (last visited June 28, 2014); see also NetLingo, (last visited June 28, 2014) (“[Back-hack is t]he reverse process of finding out who is hacking into a system. Attacks can usually be traced back to a computer or pieced together from ‘electronic bread crumbs’ unknowingly left behind by a cracker.”).

[10] Melissa Riofrio, Hacking Back: Digital Revenge Is Sweet but Risky, PCWorld (May 9, 2013, 3:00 AM),

[11] Dmitri Alperovitch, Active Defense: Time for a New Security Strategy, Crowdstrike (Feb. 25, 2013),

[12] Comm’n on the Theft of Am. Intellectual Prop., The IP Commission Report 81 (2013) [hereinafter The IP Commission Report], available at; see also Sam Cook, Georgia Outs Russian Hacker, Takes Photo with His Own Webcam, Geek (Oct. 31, 2012, 4:28 PM); see Jay P. Kesan & Carol M. Hayes, Thinking Through Active Defense in Cyberspace, in Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy 327, 328 (The National Academies Press ed., 2010) (“Counterstrikes of this nature have already been occurring on the Internet over the last decade, by both government and private actors, and full software packages designed to enable counterstriking have also been made commercially available, even though such counterstrikes are of questionable legality”).

[13] See The IP Commission Report, supra note 12.

[14] Tom Fields, To ‘Hack Back’ or Not?, BankInfoSecurity (Feb. 27, 2013),

[15] Id.

[16] Id.

[17] Hackback? Claptrap!—An Active Defense Continuum for the Private Sector, RSA Conf. (Feb. 27, 2014, 9:20 AM),

[18] Shane McGee, Randy V. Sabett & Anand Shah, Adequate Attribution: A Framework for Developing a National Policy for Private Sector Use of Active Defense, 8 J. Bus. & Tech. L. 1 (2013), available at

[19] See, e.g., Rafal Los, Another Reason Hacking Back Is Probably a Bad Idea, InfosecIsland (June 20, 2013),; Riofrio, supra note 10.

[20] Dilanian, supra note 7; see also William Jackson, The Hack-Back vs. The Rule of Law: Who Wins?, Cybereye (May 31, 2013, 9:39 AM) (stating “[i]n the face of increasing cyber threats there is an understandable pent-up desire for an active response, but this response should not cross legal thresholds. In the end, we either have the rule of law or we don’t. That others do not respect this rule does not excuse us from observing it. Admittedly this puts public- and private-sector organizations and individuals at a short-term disadvantage while correcting the situation, but it’s a pill we will have to swallow.”).

[21] James Andrew Lewis, Private Retaliation in Cyberspace, Center for Strategic & Int’l Studies (May 22, 2013),

[22] See Cyber Intelligence Sharing and Protection Act, H.R. 624, 113th Cong. (2013).

[23] Christopher M. Matthews, Support Grows to Let Cybertheft Victims ‘Hack Back’, Wall St. J. (June 2, 2013, 9:33 PM),

[24] See Alperovitch, supra note 11. The firm’s online marketing literature includes the following: “Active Defense is NOT about ‘hack-back,’ retaliation, or vigilantism . . . we are fundamentally against these tactics and believe they can be counterproductive, as well as potentially illegal.” Id.; see also Paul Roberts, Don’t Call It a Hack Back: Crowdstrike Unveils Falcon Platform, Security Ledger (June 19, 2013, 11:47 AM),

[25] Charlie Mitchell, Senate Judiciary Panel Will Examine Stronger Penalties for Cyber Crimes and Espionage, Inside Cybersecurity (May 9, 2014) (stating “[a]uthorization for so-called countermeasures is included in the draft cyber information-sharing and liability protection bill . . . White House and Department of Homeland Security officials . . . declined to discuss the administration’s view of deterrence issues such as active defense.”). To be distinguished from OCM, “countermeasure” is defined in the draft Cybersecurity Information-Sharing Act of 2014 as “an action, device, procedure, technique, or other measure applied to an information system or information that is stored on, processed by, or transiting an information system that prevents or mitigates a known or suspected cybersecurity threat or security vulnerability.” See H.R. 624.

[26] See, e.g., Marcus Tullius Cicero, The Speech of M.T. Cicero in Defence of Titus Annius Milo, in The Orations of Marcus Tullius Cicero 390, 392-93 (C.D. Yonge trans., 1913).

[27] Sheng Li, Note, When Does Internet Denial Trigger the Right of Armed Self-Defense?, 38 Yale J. Int’l L. 179, 182 (2013).

[28] See, e.g., Walter Gary Sharp Sr., Cyberspace and the Use of Force 129-31 (1999).

[29] See U.S. Dep’t. of Def., Conduct of the Persian Gulf War: Final Report to Congress Pursuant to Title V of the Persian Gulf Conflict Supplemental Authorization and Personnel Benefits Act of 1991 (Public Law 102-25) N-1 (1992) (“Civilian employees, despite seemingly insurmountable logistical problems, unrelenting pressure, and severe time constraints, successfully accomplished what this nation asked of them in a manner consistent with the highest standards of excellence and professionalism.”).

[30] See CyCon, (last visited July 16, 2014).

[31] See NATO Coop. Cyber Defence Ctr. of Excellence, Tallinn Manual on the International Law Applicable to Cyber Warfare 4 (Michael N. Schmitt ed., 2013); see also U.N. Charter art. 2, para. 4 & art. 51 (governing the modern law of self-defense).

[32] See, e.g., Keiko Kono, Briefing Memo: Cyber Security and the Tallinn Manual, Nat’l Inst. For Def. Studies News, Oct. 2013, at 2, available at

[33] See, e.g., Siobhan Gorman & Danny Yadron, Banks Seek U.S. Help on Iran Cyberattacks, Wall St. J. (June 16, 2013, 12:01 AM); Christopher J. Castelli, DOJ Official Urges Public-Private Cybersecurity Partnership Amid Legal Questions, Inside Cybersecurity (April 1, 2014),

[34] One such example is the “Computer Trespasser” exception added by Congress to the Wiretap Act, which allows law enforcement officials to monitor the activities of hackers when (1) the owner or operator of the network authorizes the interception; (2) law enforcement is engaged in a lawful investigation; (3) law enforcement has reasonable grounds to believe the contents of the communications will be relevant to that investigation; and (4) such interception does not acquire communications other than those transmitted to or from the hacker. See 18 U.S.C. § 2511(2)(i)(I)-(IV) (2012); see also Bradley J. Schaufenbuel, The Legality of Honeypots, ISSA J., April 2008, at 16, 19, available at

[35] See, e.g., David E. Sanger, White House Details Thinking on Cybersecurity Flaws, N.Y. Times (Apr. 28, 2014) (discussing the Government’s admission that it refrains from disclosing major computer security vulnerabilities that could be useful to “thwart a terrorist attack, stop the theft of our nation’s intellectual property, or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.”).

[36] See Sameer Hinduja, Computer Crime Investigations in the United States: Leveraging Knowledge from the Past to Address the Future, 1 Int’l J. Cyber Criminology 1, 16 (2007) (citation omitted).

[37] Id. at 19. But see Kesan & Hayes, supra, note 12 at 33 (“there is a more significant downside of entrusting active defense to private firms. Our model addressing the optimal use of active defense emphasizes that there are threshold points where permitting counterstrikes would be the socially optimal solution. However, it does not define these thresholds, and determining these thresholds requires some sort of standardization. It would be unwise to allow individual companies to make these decisions on a case by case basis.”)


[38] The IP Commission Report, supra note 12, at 81. See also Joseph Menn, Hacked Companies Fight Back With Controversial Steps, Reuters, June 18, 2012, available at

[39] See Stephanie Olsen, Nearly Undetectable Tracking Device Raises Concerns, CNET (July 12, 2000),

[40] See id. See also John Gilroy, Ask The Computer Guy, Wash. Post, Jan. 27, 2002, at H07 (describing web bugs in lay parlance).

[41] Sean L. Harrington, Collaborating with a Digital Forensics Expert: Ultimate Tag Team or Disastrous Duo?, 38 Wm. Mitchell L. Rev. 353, 363 (2011), available at

[42] Id.

[43] See generally Brian M. Bowen et al., Baiting Inside Attackers Using Decoy Documents, Colum. Univ. Dep’t of Computer Sci. (2009), available at (last visited May 13, 2014) (introducing and discussing properties of decoys as a guide to design “trap-based defenses” to better detect the likelihood of insider attacks).

[44] See Matthews, supra note 23.

[45] Id.

[46] Id.

[47] Id.

[48] See Harrington, supra note 41, at 362-64.

[49] The Supreme Court has tacitly approved deception as a valid law enforcement technique in investigations and interrogations. See Illinois v. Perkins, 496 U.S. 292, 297 (1990) (“Miranda forbids coercion, not mere strategic deception . . .”); United States v. Russell, 411 U.S. 423, 434 (1973) (“Criminal activity is such that stealth and strategy are necessary weapons in the arsenal of the police officer.”); Allan Lengel, Fed Agents Going Undercover on Social Networks Like Facebook, AOL News (Mar. 28, 2010, 5:55 PM),

[50] See Model Rules of Prof’l Conduct R. 5.3 (2013).

[51] Model Rules of Prof’l Conduct r. 8.4(c); see, e.g., In re Disciplinary Action Against Carlson, No. A13-1091 (Minn. July 11, 2013)(public reprimand for “falsely posing as a former client of opposing counsel and posting a negative review about opposing counsel on a website, in violation of Minn. R. Prof. Conduct 4.4(a) and 8.4(c)”); In re Pautler, 47 P.3d 1175, 1176 (Colo. 2002) (disciplining a prosecutor, who impersonated a public defender in an attempt to induce the surrender of a murder suspect, for an act of deception that violated the Rules of Professional Conduct).

[52] See Sharon D. Nelson & John W. Simek, Muddy Waters: Spyware’s Legal and Ethical Implications, GPSolo Mag., Jan.-Feb. 2006, (“The legality of spyware is murky, at best. The courts have spoken of it only infrequently, so there is precious little guidance.”).

[53] In re Disciplinary Action Against Zotaley, 546 N.W.2d 16, 19 (Minn. 1996) (quoting Minn. R. Prof’l Conduct 3.3 cmt. 3 (2005)).

[54] See Phila. Bar Ass’n Prof’l Guidance Comm., Op. 2009-02, at 1-2 (2009), available at

[55] See N.Y.C. Bar Ass’n Prof’l & Judicial Ethics Comm., Formal Op. 2010-2 (2010), available at; cf. Justin P. Murphy & Adrian Fontecilla, Social Media Evidence in Government Investigations and Criminal Proceedings: A Frontier of New Legal Issues, 19 Rich. J.L. & Tech. 11, ¶ 21 n.76 (2013) (citing similar ethics opinions rendered by bar committees in New York State and San Diego County).

[56] David Bianco, Use of the Term “Intelligence” in the RSA 2014 Expo, Enterprise Detection & Response (Feb. 28, 2014)!/2014/03/use-of-term-intelligence-at-rsa.html.

[57] See Hinduja, supra note 36, at 15 (citing A. Meehan, G. Manes, L. Davis, J. Hale & S. Shenoi, Packet Sniffing for Automated Chat Room Monitoring and Evidence Preservation, in Proceedings of the 2001 IEEE Workshop on Information Assurance and Security 285, 285 (2001)) (“[T]he monitoring of bulletin-boards and chat-rooms by investigators has led to the detection and apprehension of those who participate in sex crimes against children.”), available at; see, e.g., Kimberly J. Mitchell, Janis Wolak & David Finkelhor, Police Posing as Juveniles Online to Catch Sex Offenders: Is It Working?, 17 Sexual Abuse: J. Res. & Treatment 241 (2005); Lyta Penna, Andrew Clark & George Mohay, Challenges of Automating the Detection of Paedophile Activity on the Internet, in Proceedings of the First International Workshop on Systematic Approaches to Digital Forensic Engineering (2005), available at

[58] Martin Moylan, Target’s Data Breach Link to ‘the Amazon of Stolen Credit Card Information’,MPRnews (February 3, 2014),

[59] See “Investigating the Dark Web — The Challenges of Online Anonymity for Digital Forensics Examiners,” Forensic Focus (July 28, 2014) (“It is certainly easier to access indecent images of children and similar content on the dark net.”), available at; see, e.g., Minn. Stat. § 617.247 subd. 4(a) (2013) (criminalizing possession of “a pornographic work [involving minors] or a computer disk or computer or other electronic, magnetic, or optical storage system or a storage system of any other type, containing a pornographic work, knowing or with reason to know its content and character”).

[60] See Rainer Link & David Sancho, Lessons Learned While Sinkholing Botnets—Not As Easy As It Looks!, in Proceedings of the Virus Bulletin Conference 106, 106 (2011), available at

[61] Id.

[62] Id. at 107.

[63] “[C]onsent may be demonstrated through evidence of appropriate notice to users through service terms, privacy policies or similar disclosures that inform users of the potential for monitoring.” Bosset, supra note 4 (citing Mortensen v. Bresnan Commc’ns, LLC, No. CV 10-13-BLG-RFC, 2010 WL 5140454, at *3-5 (D. Mont. Dec. 13, 2010)).

[64] See Craigslist Inc. v. 3Taps Inc., 964 F. Supp. 2d 1178, 1182-83 (N.D. Cal. 2013).

[65] See Link & Sancho, supra note 60, at 107-08.

[66] Honeypot, SearchSecurity, (last visited June 29, 2014).

[67] Eric Cole & Stephen Northcutt, Honeypots: A Security Manager’s Guide to Honeypots, SANS Inst., (last visited May 13, 2014).

[68] See, e.g., Jerome Radcliffe, CyberLaw 101: A Primer on US Laws Related to Honeypot Deployments 6-9 (2007), available at

[69] See id. at 14-17.

[70] See Schaufenbuel, supra note 34, at 16-17 (“Because a hacker finds a honeypot by actively searching the Internet for vulnerable hosts, and then attacks it without active encouragement by law enforcement officials, the defense of entrapment is not likely to be helpful to a hacker.”).

[71] See Cole & Northcutt, supra note 67.

[72] Schaufenbuel, supra note 34, at 19.

[73] See generally id. (stating that the best way for a honeypot owner to avoid downstream liability is to configure the honeypot to prohibit or limit outbound connections to third parties).

[74] Scott L. Vernick, To Catch a Hacker, Companies Start to Think Like One, Fox Rothschild, LLP (Feb. 15, 2013),|15032388757.

[75] See Kevin Parrish, Copyright Troll Busted for Seeding on The Pirate Bay,tom’s GUIDE (Aug. 19, 2013, 2:00 PM),,news-17391.html#torrent-pirate-bay-copyright-troll-prenda-law-honeypot%2Cnews-17391.html?&_suid=1396370990577022740795081848747.

[76] Id.

[77] See id.

[78] See, e.g., Sean L. Harrington, Rule 11, Barratry, Champerty, and “Inline Links”, Minn. St. Bar Ass’n Computer & Tech. L. Sec. (Jan. 27, 2011, 11:42 PM), (discussing the vexatious litigation tactics of Righthaven, LLC).

[79] See Scott Cohn, Companies Battle Cyberattacks Using ‘Hack Back’, CNBC (June 04, 2013, 1:00 PM), (“[L]aw enforcement is unlikely to detect or prosecute a hack back. ‘If the only organization that gets harmed is a number of criminals’ computers, I don’t think it would be of great interest to law enforcement.”); Aarti Shahani, Tech Debate: Can Companies Hack Back?, Al Jazeera Am. (Sept. 18, 2013, 5:57 PM), (“The Justice Department has not prosecuted any firm for hacking back and, as a matter of policy, will not say if any criminal investigations are pending”).

[80] See Cohn, supra note 79 (statement of Professor Joel Reidenberg) (“‘Reverse hacking is a felony in the United States, just as the initial hacking was. It’s sort of like, if someone steals your phone, it doesn’t mean you’re allowed to break into their house and take it back.’”); Shahani, supra note 79 (statement of David Wilson) (“‘No, it’s not legal, not unless the blackmailer gave permission. . . . But who’s going to report it? Not the bad guy.’”).

[81] See, e.g., Nathan Thornburgh, The Invasion of the Chinese Cyberspies (and the Man Who Tried to Stop Them), Time (Sept. 5, 2005), (discussing the “rogue” counter-hacking activities of Shawn Carpenter, who was working with the FBI and who claimed the FBI considered prosecuting him for those activities).

[82] See Dilanian, supra note 7 (“Others, including Stewart Baker, former NSA general counsel, said the law does allow hacking back in self-defense. A company that saw its stolen data on a foreign server was allowed to retrieve it, Baker argued.”) (In preparation for this comment, the author asked Mr. Baker about the interview, and he replied, “[T]he LA Times interview didn’t involve me talking about a particular case where retrieving data was legal. I was arguing that it should be legal.”).

[83] John Strand et al., Offensive Countermeasures: The Art of Active Defense 207 (2013).

[84] David Willson, Hacking Back in Self Defense: Is It Legal; Should It Be?, Global Knowledge (Jan. 6, 2012),

[85] See id.

[86] Stewart Baker, The Hack Back Debate (Nov. 02, 2012)

[87] See W. Page Keeton et al., Prosser & Keeton on the Law of Torts § 22 (5th ed. 1984).

[88] See id.

[89] See id. at § 24.

[90] See id. at § 21; see also McGee, Sabett & Shah, supra note 18 (“Reaching consensus on applying the concepts of self-defense to the cyber domain has proven to be a difficult task, though not for the lack of trying”).

[91] See Jassandra Nanini, China, Google, and Private Security: Can Hack-Backs Provide the Missing Defense in Cybersecurity, (forthcoming 2015) (manuscript at 14-15) (on file with author).

[92] See id. (manuscript at 14).

[93] Id. (manuscript at 15-16).

[94] See Sean Harrington, Why Divorce Lawyers Should Get Up to Speed on CyberCrime Law, Minn. St. B. Ass’n Computer & Tech. L. Sec. (Mar. 24, 2010, 9:40 PM), (collecting cases regarding unauthorized computer access).

[95] 18 U.S.C. § 1030 (2012); see Clements-Jeffrey v. Springfield, 810 F. Supp. 2d 857, 874 (S.D. Ohio 2011) (“It is one thing to cause a stolen computer to report its IP address or its geographical location in an effort to track it down. It is something entirely different to violate federal wiretapping laws by intercepting the electronic communications of the person using the stolen laptop.”).

[96] See generally Orin S. Kerr, Cybercrime’s Scope: Interpreting “Access” and “Authorization” in Computer Misuse Statutes, 78 N.Y.U. L. Rev. 1596, 1624–42 (2003) (showing how and why courts have construed unauthorized access statutes in an overly broad manner that threatens to criminalize a surprising range of innocuous conduct involving computers).

[97] In re DoubleClick Privacy Litig., 154 F. Supp. 2d 497, 526 (S.D.N.Y. 2001) (emphasis added).

[98] See In re Pharmatrak, Inc. Privacy Litig., 329 F.3d 9, 13 & 21-22 (1st Cir. 2003) (holding use of tracking cookies to intercept electronic communications was within the meaning of the ECPA, because the acquisition occurred simultaneously with the communication).

[99] See Peter J. Toren, Amending the Computer Fraud and Abuse Act, BNA (Apr. 9, 2013),

[100] See, e.g., Holly R. Rogers & Katharine V. Hartman, The Computer Fraud and Abuse Act: A Weapon Against Employees Who Steal Trade Secrets, BNA (June 21, 2011) (“[E]mployers are increasingly using this cause of action to go after former employees who steal trade secrets from their company-issued computers.”).

[101] A Byte for a Byte, Economist (Aug. 10, 2013), available at; see also Lewis, supra note 21 (“There is also considerable risk that amateur cyber warriors will lack the skills or the judgment to avoid collateral damage. A careless attack could put more than the intended target at risk. A nation has sovereign privileges in the use of force. Companies do not.”); John Reed, The Cyber Security Recommendations of Blair and Huntsman’s Report on Chinese IP Theft, Complex Foreign Pol’y (May 22, 2012), huntsman_report_on_chinese_ip_theft (“While it may be nice to punch back at a hacker and take down his or her networks or even computers, there’s a big potential for collateral damage, especially if the hackers are using hijacked computers belonging to innocent bystanders.”).

[102] John Reed, Mike Rogers: Cool It with Offensive Cyber Ops, Complex Foreign Pol’y (Dec. 14, 2012, 5:07 PM), (audio recording of full speech available at). But see McGee, Sabett, & Shah, supra note 18 (urging the adoption of a “Framework for ‘good enough’ attribution”).

[103] For definitions and discussion of these terms, see Eric A. Fischer et al., Cong. Research Serv., R42984, The 2013 Cybersecurity Executive Order: Overview and Considerations for Congress 2-4 (2013), available at

[104] Max Fisher, Should the U.S. Allow Companies to ‘Hack Back’ Against Foreign Cyber Spies?, Wash. Post (May 23, 2013, 10:43 AM), (quoting Lewis, supra note 21).

[105] Los, supra note 19.

[106] See Fahmida Y. Rashid, Layered Security Essential Tactic of Latest FFIEC Banking Guidelines, eWeek (June 30, 2011), (“Banks must adopt a layered approach to security in order to combat highly sophisticated cyber-attacks, the Federal Financial Institutions Examination Council said in a supplement released June 28. The new rules update the 2005 ‘Authentication in an Internet Banking Environment’ guidance to reflect new security measures banks need to fend off increasingly sophisticated attacks. . . . The guidance . . . emphasized a risk-based approach in which controls are strengthened as risks increase.”).

[107] See PCI 2.0 Encourages Risk-Based Process: Three Things You Need to Know, ITGRC (Aug. 23, 2010),

[108] See Lee Vorthman, IT Security: NIST’s Cybersecurity Framework, NetApp (July 16, 2013, 6:01 AM), (“It is widely anticipated that the Cybersecurity Framework will improve upon the current shortcomings of FISMA by adopting several controls for continuous monitoring and by allowing agencies to move away from compliance-based assessments towards a real-time risk-based approach.”).

[109] Reed, supra note 102.

[110] Geoffrey C. Hazard, Jr., Law, Morals, and Ethics, 19 S. Ill. U. L.J. 447, 453 (1995), available at

[111] Id.

[112] See generally Heinz C. Luegenbiehl & Michael Davis, Engineering Codes of Ethics: Analysis and Applications 10 (1986) (referring to the “Contract with society” theory on the relation between professions and codes of ethics).

According to this approach, a code of ethics is one of those things a group must have before society will recognize it as a profession. The contents of the code are settled by considering what society would accept in exchange for such benefits of professionalism as high income and high prestige. A code is a way to win the advantages society grants only to those imposing certain restraints on themselves.

[113] See, e.g., Official (ISC)2 Guide to the CISSP CBK 1214 (Steven Hernandez ed., 3d ed. 2013) (“The code helps to protect professionals from certain stresses and pressures (such as the pressure to cut corners with information security to save money) by making it reasonably likely that most other members of the profession will not take advantage of the resulting conduct of such pressures. An ethics code also protects members of a profession from certain consequences of competition, and encourages cooperation and support among the professionals.”).

[114] See id.

[115] (ISC)2, (ISC)2 Overview: Evolving in Today’s Complex Security Landscape 4 (2013), available at

[116] See id.

[117] David E. Sanger & John Markoff, After Google’s Stand on China, U.S. Treads Lightly, N.Y. Times (Jan. 15, 2010),

[118] See, e.g., Skipper Eye, Google Gives Chinese Hackers a Tit for Tat, Redmond Pie (Jan. 16, 2010), available at

[119] See Shelley Boose, Black Hat Survey: 36% of Information Security Professionals Have Engaged in Retaliatory Hacking, BusinessWire (June 26, 2012, 11:00 AM), (“When asked ‘Have you ever engaged in retaliatory hacking?’ 64% said ‘never,’ 23% said ‘once,’ and 13% said ‘frequently.’ . . . [W]e should take these survey results with a grain of salt . . . . It’s safe to assume some respondents don’t want to admit they use retaliatory tactics.”).

[120] Lewis, supra note 21 (“Another argument is that governments are not taking action, and therefore private actors must step in.”).

[121] Reed, supra note 102.

[122] See About FS-ISAC, Fin. Serv.: Info. Sharing & Analysis Center, (last visited June 9, 2014). Launched in 1999, FS-ISAC was established by the financial services sector in response to 1998’s Presidential Directive 63. That directive ― later updated by 2003’s Homeland Security Presidential Directive 7 ― mandated that the public and private sectors share information about physical and cyber security threats and vulnerabilities to help protect the U.S. critical infrastructure. See id.

[123] See id.

[124] FS-ISAC Security Automation Working Group Continues to Mature Automated Threat Intelligence Strategy, Deliver on Multi-Year Roadmap, Fin. Serv.: Info. Sharing & Analysis Center (Feb. 26, 2014),

[125] See id.

[126] Sean Sposito, In Cyber Security Fight, Collaboration Is Key: Guardian Analytics, Am. Banker (Oct. 08. 2013, 2:01 PM),

[127] See generally Taking Down Botnets: Public and Private Efforts to Disrupt and Dismantle Cybercriminal Networks: Hearing Before the S. Comm. on the Judiciary, 113th Cong. (July 15, 2014) (providing access to testimony from the hearing).

[128] See Tracy Kitten, Microsoft, FBI Take Down Citadel Botnets, Bank Info Security (June 6, 2013),

[129] See id.

[130] See id.

[131] See NCA Leads Global Shylock Malware Takedown, infosecurity (July 12, 2014),

[132] See Gregg Keizer, Massive Botnet Takedown Stops Spread of Cryptolocker Ransomware, ComputerWorld (June 5, 2014, 2:15 PM),

[133] John E. Dunn, Worried US Retailers Battle Cyber-attacks Through New Intelligence-Sharing Body, TechWorld (May 16, 2014, 6:29 PM),

[134] See, e.g., Dan Dupont, Retail, Financial Sectors Form Cybersecurity Partnership in Wake of Data Breaches (Mar. 13, 2014),

[135] See Press Release, Dianne Feinstein, Senate Intelligence Committee Approves Cyber Security Bill (July 8, 2014), available at

[136] See Brent Rowe et al., The Role of Internet Service Providers in Cyber Security 7 (2011), available at

[137] See generally Chatham House Rule, Chatham House: The Royal Institute of International Affairs (explaining the Chatham House Rule).

[138] Section 631 of the Cable Communications Policy Act of 1984, 47 U.S.C. §§ 521, et seq. The Cable Act prohibits cable systems’ disclosure of personally identifiable subscriber information without the subscriber’s prior consent; requires the operator to destroy information that is no longer necessary for the purpose it was collected, to notify subscribers of system data collection, retention and disclosure practices and to afford subscribers access to information pertaining to them; provides certain exceptions to the disclosure restrictions, such as permission for the cable operator to disclose “if necessary to conduct a legitimate business activity related to a cable service or other service” provided to the subscriber, and disclosure of subscriber names and addresses (but not phone numbers), subject to an “opt out” right for the subscriber. Congress expanded, as part of the Cable Television Consumer Protection and Competition Act of 1992, the privacy provision of the Communications Act to cover interactive services provided by cable operators. Id.

[139] Protecting and Promoting the Open Internet, GN Docket No. 14-28, at App’x A, §§ 8.5, 8.11 (May 15, 2015).

[140] Id. at 1-2.

[141] Preserving the Open Internet, 76 Fed. Reg. 59192, 59209 n.102 (Sept. 23, 2011).

[142] Michel Van Eeten et al., The Role of Internet Service Providers in Botnet Mitigation: An Empirical Analysis Based on Spam Data 1 (2010), available at

[143] Rowe et al., supra note 136.

[144] See, e.g., Meir Orbach, Israeli Cyber Tech Companies on Rise in US Market, Al Monitor (Jan. 23, 2014)

[145] See New York Times Co. v. United States, 403 U.S. 713, 714 (1971).

[146] See David Bianco, The Pyramid of Pain, Enterprise Detection & Response Blog (Mar. 1, 2014),!/2013/03/the-pyramid-of-pain.html.

[147] See id.

[148] See id.

[149] See Sposito, supra note 126.

[150] See FireEye Threat Analytics Platform, FireEye, (last visited June 9, 2014).

[151] See Tim Wilson, CrowdStrike Turns Security Fight Toward Attacker, Dark Reading (June 25, 2013, 9:18 AM),

[152] See HP IDOL, HP Autonomy, (last visited June 9, 2014).



Blocked: The Limits of Social Media as Evidence

by John A. Myers, Associate Staff


In the digital age, social media has become a dominant form of communication. With the increased use of social media in recent years, user contributions to social media sites have increasingly been offered as evidence in litigation. The central legal question is one of scope: how much access to a social media account must a party give an opposing party that is requesting evidence from it? If one party wants to introduce a single social media post as evidence against the opposing party, should that party receive access to the other party’s entire account, or only that single post? Courts have recently begun to adjudicate this issue, and the results have been mixed, with some courts holding that access to an opposing party’s entire social media account is an unreasonable intrusion on privacy.


Because of the public nature of social media, posts made on social media sites have increasingly figured in litigation. For example, the American Academy of Matrimonial Lawyers published a survey indicating that 81% of divorce proceedings involve social media evidence, with 66% coming from Facebook alone.[1] It is easy to understand how a Facebook post blasting a spouse, or an Instagram picture showing a spouse with a mistress, could be used as evidence in a subsequent divorce proceeding. The problem arises when a court must decide how much access the requesting party should be given to the opposing party’s social media account. While it may be simpler to allow the requesting party temporary access to the opposing party’s account for the purpose of securing the requested evidence, that approach also opens the possibility that the requesting party will find additional evidence that was not specified in a discovery request.[2]


Because of the potential encroachment on the opposing party’s privacy, courts have been hesitant to allow the requesting party complete access and have attempted to establish a two-part test governing access to social media evidence.[3] First, the social media evidence must have some relevance to the facts it is offered to support.[4] This requirement is well ingrained in the Federal Rules of Evidence and in similar state rules governing the introduction of evidence from any source.[5] Second, the court must determine whether blanket access to the social media account is warranted or whether the requesting party need only be given the post in question. Recent cases have split on this issue. Some courts have held that blanket access to the other party’s social media account is per se unreasonable.[6] Other courts have granted blanket access, but with restrictions. In Largent v. Reed, the plaintiff was ordered to turn over her Facebook login information to opposing counsel, who would then have 21 days to inspect a limited section of the account.[7] After that period, the plaintiff could change her password to prevent any further access to her account by opposing counsel.


What is most interesting about the development of social media as evidence is its effect on an individual’s privacy. Since the advent of Facebook, Twitter, and other social media platforms, the main legal question surrounding them has been: how much privacy should users expect in comments made on those sites? While the answer has almost always been “none,” the first cases addressing the introduction of social media as evidence suggest that at least some material on social media is off limits to opposing parties. A Pennsylvania court recently concluded that a court order granting the opposing party access to information on a Facebook account that was intended only for “Friends” (of which the opposing party was not one) would be intrusive and potentially embarrassing for the acquiescing party.[8] Other state and federal cases have concluded that searches of social media accounts are an intrusive way of gathering evidence and that less speculative and “annoying” methods should be used when possible.[9]


The use of social media as evidence is still in its infancy, and the law governing its introduction and exclusion will likely develop for decades to come. It will be interesting to see whether future courts continue to hold certain aspects of social media off limits for evidentiary purposes.



[1] Press Release, American Academy of Matrimonial Lawyers, Big Surge in Social Networking Evidence Says Survey of Nation’s Top Divorce Lawyers (Feb. 10, 2010) (on file with author).


[2] Fed. R. Civ. P. 26(A)(ii).


[3] Margaret DiBianca, Discovery and Preservation of Social Media Evidence, Business Law Today (Jan. 2014),


[4] Fed. R. Evid. 401(a).


[5] Id.; Va. R. Evid. 2:401.


[6] Trail v. Lesko, No. GD-10-017249, LEXIS 194, at *30-31 (Pa. D. & C. Jul. 3, 2012).


[7] Largent v. Reed, 2011 WL 5632688, No. 2009-1823 (Pa. D. & C. Nov. 8, 2011).


[8] See Lesko, LEXIS 194, at *28-30.

[9] Id.; Chauvin v. State Farm Mut. Auto. Ins. Co., No. 10-11735, 2011 U.S. Dist. LEXIS 121600, at *1-3 (S.D. Mich. Oct. 20, 2011). 

Riley v. California: Constitutional Reasonableness and Digital Device Searches

By: Adam Lamparello & Charles MacLean[1]

August 6, 2014


In an era of metadata collection and warrantless searches of laptops at the border, the Supreme Court recognized that privacy—and the Fourth Amendment—still matter.

A. The Court’s Opinion

In Riley v. California,[2] the defendant was stopped for having expired registration tags and arrested when law enforcement officers discovered that the defendant’s license had expired.[3] After the arrest, a detective conducted a warrantless search of the defendant’s Smartphone and discovered incriminating evidence that led to charges of assault and attempted murder.[4]

Relying on the search incident to arrest doctrine, the lower courts rejected the defendant’s Fourth Amendment challenge, holding that officer safety and evidence preservation justified the search of defendant’s Smartphone.[5]

The Supreme Court granted certiorari—and unanimously reversed.

Writing for the Court, Chief Justice John Roberts held that the two objectives underlying the search incident to arrest doctrine were not implicated in the cell phone context.[6] Chief Justice Roberts also rejected the Government’s argument that Smartphones were analogous to physical objects such as containers and cigarette packs, stating that the comparison was akin to “saying a ride on horseback is materially indistinguishable from a flight to the moon.”[7]

Chief Justice Roberts also recognized that, unlike finite physical objects, Smartphones can store “millions of pages of text, thousands of pictures, or hundreds of videos,”[8] which makes them different “in both a quantitative and a qualitative sense from other objects that might be kept on an arrestee’s person.”[9] In fact, “even the most basic phones that sell for less than $20 might hold photographs, picture messages, text messages, Internet browsing history, a calendar, [and] a thousand-entry phone book.”[10]

As such, “the sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions; the same cannot be said of a photograph or two of loved ones tucked into a wallet.”[11] Moreover, because of their immense storage capacity, “cell phone searches would typically expose to the government far more than the most exhaustive search of a house, and contains a broad array of private information never found in a home in any form—unless the phone is.”[12]

Based on these considerations, the Court held that the search violated the Fourth Amendment’s prohibition against unreasonable searches and seizures, which “was the founding generation’s response to the reviled ‘general warrants’ and ‘writs of assistance’ of the colonial era . . . [and] allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity.”[13]

Riley is a victory for privacy rights and shows that law enforcement’s investigatory powers are not without limits. Time will tell whether the Court’s reasoning will have implications for future cases involving, for example, the Government’s warrantless search of laptops at the border and indiscriminate collection of metadata. What we know now is that times have changed.

B. What Does Riley Mean for Privacy?

In their upcoming article, Professors MacLean and Lamparello will analyze the implications of Riley in various contexts where expanding surveillance by the Government threatens to infringe individual privacy rights. They will analyze, among other things, the current circuit split regarding the warrantless—and suspicionless—collection of metadata, and discuss the extent to which the National Security Agency can continue to monitor and track metadata, email, internet browser histories, and other communications data both domestically and abroad.

Professors MacLean and Lamparello will argue that, although Riley is a victory for privacy rights, there is reason to be cautious. Chief Justice John Roberts has taken a minimalist approach to constitutional adjudication and often strives to decide cases on the narrowest grounds possible.[14] In addition, the basis upon which Riley was decided—reasonableness—may be difficult to apply in contexts where the Government’s interest is heightened, the search is less intrusive than, for example, the search of a cell phone, and the search occurs in places where an individual’s expectation of privacy is diminished.

Ultimately, the authors will argue that the Court’s jurisprudence will proceed incrementally. Although the Court will increasingly safeguard privacy rights, it will not be able to keep pace with the speed of technology and the efforts by Government officials to increase the scope and breadth of their surveillance power. The authors will propose a detailed legislative solution that addresses, among other things: (1) the privacy expectation that citizens should have in various contexts; (2) the levels of suspicion that the Government must satisfy before performing various searches; and (3) the circumstances, if any, in which the interest in national security will outweigh a legitimate or diminished expectation of privacy. In so doing, the authors will provide a solution that they believe can guide lawmakers, citizens, and courts as the technology era increasingly implicates complex questions about the balance between civil liberties and national security.





[1] Professors Charles E. MacLean and Adam Lamparello of Indiana Tech Law School filed an amicus brief in the case supporting Riley’s argument. The Professors argued that the search incident to arrest doctrine did not apply because: (1) cell phones, unlike other physical objects, could not be used as weapons; and (2) the risk that evidence would be destroyed, through either remote wiping or data encryption, was minimal and easily manageable. The authors have also written rather broadly in this area, including: Charles E. MacLean & Adam Lamparello, Abidor v. Napolitano: Suspicionless Cell Phone and Laptop “Strip” Searches at the Border Compromise the Fourth and First Amendments, 108 Nw. U. L. Rev. Colloquy 280 (Spring 2014); Adam Lamparello & Charles E. MacLean, Back to the Future: Returning to Reasonableness and Particularity under the Fourth Amendment, 99 Iowa L. Rev. Bull. 101 (Spring 2014); Katz on a Hot Tin Roof: The Reasonable Expectation of Privacy Doctrine is Rudderless in the Digital Age, unless Congress Continually Resets the Privacy Bar, 24 Albany L.J. Sci. & Tech. 47 (Spring 2014). The authors’ full article addressing the implications of the Riley decision will be published this fall in Volume 20 of the Richmond Journal of Law and Technology.

[2] Riley v. California, 134 S. Ct. 2473 (2014).

[3] See id. at 2480.

[4] See id. at 2480-81.

[5] See id. at 2481.

[6] See id. at 2485-87.

[7] Id. at 2488.

[8] Riley v. California, 134 S. Ct. 2473, 2489 (2014).

[9] Id.

[10] Id.

[11] Id.

[12] Id. at 2491.

[13] Id. at 2494.

[14] See University of Chicago Law School Faculty Blog, Chief Justice Roberts and Minimalism (May 25, 2006), available at

2014-2015 Student Law and Technology Writing Competition

The Richmond Journal of Law and Technology is pleased to announce the commencement of the 2014-2015 Student Law and Technology Writing Competition.  From now until midnight EST on Monday, January 26th, 2015, all law students across the country will be eligible to compete in the writing competition for cash prizes and a chance to be published in a future issue of the Journal.  The entries must focus on topics at the intersection of technology and the law.  The first place prize is $1,500 and the second place prize is $700.  Additionally, one student from the University of Richmond will be awarded the Rick Klau prize of $300.


In order to properly submit an entry, each student must follow the submission guidelines. Any entries that do not comply with the submission guidelines will not be considered.  Please e-mail all completed entries along with an entry form to and include “Student Law and Technology Writing Competition” in the subject line by midnight EST on Monday, January 26th, 2015. Both the submission guidelines and the entry form can be found on JOLT’s website.


Please direct any questions you may have to  The Journal is looking forward to reading your submissions.  Good luck!

