Richmond Journal of Law and Technology

The first exclusively online law review.


Blog: Bad Review or Get Sued?

By: Corinne Moini,

Have you ever posted a bad review of a product or company? Most people do this without a second thought—they feel cheated and want the world to know about it. They have a false sense of security that comes with an online post because many of these posts are submitted anonymously.[1] Whether you tell the truth or tell a lie, anonymously or not, companies are looking for these damaging reviews and those angry words you wrote can come back to haunt you in the form of a lawsuit.

You must be thinking to yourself—how is this possible? How can a company sue me for speaking my mind and exercising my constitutional right to freedom of speech? Many unhappy customers have felt the exact same way. Jennifer Ujimori, for one, was served with a lawsuit over the online reviews she posted on Yelp and Angie’s List.[2]

Like the dog training company in Ujimori’s lawsuit, companies are able to sue customers for these reviews by sneaking non-disparagement clauses deep into the fine print of sales contracts. These clauses prohibit a consumer from posting or commenting about their purchase on social media sites.[3] For the most part, non-disparagement clauses are unenforceable:[4] partly because they allow businesses to intimidate customers into silence,[5] and partly because they contradict the basics of contract law by being one-sided and lacking consideration.[6]

Ujimori is not the only person who believes non-disparagement clauses stifle a person’s ability to freely express themselves. In 2014, the state of California adopted a ban on non-disparagement clauses and threatened violators with a monetary fine.[7] Stressing the importance of protecting First Amendment rights, two California representatives introduced the Consumer Review Freedom Act, which would make non-disparagement clauses in companies’ sales contracts illegal.[8] The act would not hinder a business’s ability to sue consumers for deceitful reviews; it would only prevent businesses from bullying truthful consumers into silence.[9]

If the Consumer Review Freedom Act is enacted, consumers will be free to post honest, negative online reviews. However, if the act fails to pass, be wary—and keep your angry reviews as a draft on your computer.








[1] See Mathew S. Adams, Business Owners Beware: Avoid the Temptation to Post Fake Reviews or Feedback Regarding Your Competition on Social Media or Elsewhere on the Internet, Above the Law (Feb. 12, 2015),

[2] Jennifer Ujimori was unhappy with the dog obedience class she paid for and posted negative reviews on Yelp and Angie’s List to alert other consumers. She wrote, “in a nutshell, the services delivered were not as advertised and the owner refused a refund.” The dog obedience company did not offer her a refund but instead served her with a $65,000 defamation lawsuit, claiming her statements were false and had destroyed its business reputation. See Justin Jouvenal, Negative Yelp, Angie’s List reviews prompt dog obedience business to sue, Wash. Post (Mar. 25, 2015),

[3] See id.

[4] See Jim Hood, Non-disparagement clauses: a new way to get nothing for something, Consumer Affairs (June 24, 2014),

[5] See id.

[6] Hood, supra note 4, at 1.

[7] See Tim Cushing, Legislators Introduce Bill Calling For Nationwide Ban On Non-Disparagement Clauses, Techdirt (May 8, 2015),

[8] See Herb Weisbaum, Can a company stop you from writing a negative online review? Not if Congress passes this bill, Today Money (Sept. 24, 2014),

[9] See id.

Blog: Sarbanes-Oxley Act Being Used Outside of the Original Scope


By: Brandon Bybee,

What does a drug dealer deleting text messages off of his phone have in common with an international corporation failing to retain emails regarding a business transaction? Interestingly, both may fall under certain provisions of the Sarbanes-Oxley Act. While many assume that Sarbanes-Oxley applies only to businesses (and some believe only to publicly traded companies), the sections pertaining to criminal activity apply not only to private companies but, as recent court decisions show, to individual citizens.[1]

Sarbanes-Oxley was enacted in 2002 as a result of the corporate scandals of 2001 and 2002 (Enron and WorldCom). The act was passed by Congress to “deter and punish corporate and accounting fraud and corruption.”[2] Contained within the act under Title VIII is §1519, commonly referred to as the anti-shredding provision. This provision makes it a criminal act to destroy or conceal “…any record, document, or tangible object with the intent to impede, obstruct, or influence the investigation or proper administration of any matter within the jurisdiction of any department or agency of the United States…”.[3] Violators of §1519 can be fined and imprisoned for up to 20 years for what has been deemed an act of anticipatory obstruction of justice.

While the application of §1519 to corporations concealing documents seems an obvious use of the statute, a more difficult application to anticipate is the charging of an individual citizen. In United States v. Keith,[4] the FBI determined that the defendant had been using the LimeWire file-sharing program to search for and download child pornography. As agents approached the defendant’s home, he deleted some of the illegal material he had downloaded. Among other charges, he was found guilty of violating Sarbanes-Oxley §1519 for anticipatorily obstructing justice by deleting the files; because the case was under investigation by the FBI, it involved a “…department or agency of the United States…”.[5] The defendant was sentenced to 240 months in prison. The case exemplifies the broad application of §1519, even to private citizens.

Additional recent applications of §1519 include an attempt to convict one of the alleged conspirators in the Boston Marathon bombings.[6] The defendant is accused of deleting material from his computer pertinent to the planning of the bombings, and among other charges, prosecutors sought to convict him under §1519. Police officers, too, have been charged with violating §1519 on multiple occasions. In United States v. McRae,[7] an officer was charged under §1519 for burning a car and a victim’s body as well as falsifying a police report. In United States v. Moyer,[8] another officer was charged for falsifying a report.

An even more prominent news story has brought §1519 into the national spotlight. The Hillary Clinton email-deletion controversy implicates §1519, as any charges brought against her would likely include an anticipatory obstruction of justice charge.[9] (No charges have been brought against Mrs. Clinton.) Even if former Secretary Clinton deleted files exclusively from her personal computer, §1519 has been shown to extend to private citizens.

Finally, and perhaps most disturbingly, some circuits have found no need to establish a connection between the deletion of data or destruction of evidence and knowledge of a possible future criminal investigation. United States v. Moyer[10] holds that §1519 does not require the government to prove that the defendant knew the obstruction at issue was within the jurisdiction of the federal government, nor must the government prove a nexus between the defendant’s conduct and a specific federal investigation. By dispensing with any required nexus, the case broadens the application of §1519 once more.

What does this mean for the typical American citizen? Essentially, any document, digital file, or tangible item that could be involved in an investigation by a government agency at any time should not be destroyed. If the item is destroyed, an individual could technically be charged with anticipatory obstruction of justice. While it seems likely that §1519 would be applied only to those who knowingly engage in criminal activity, the broad nature of its use gives cause for concern to anyone in possession of large amounts of digital files. It will be interesting to see how broadly the courts allow the application of §1519 to extend.




[1] See generally Robert F. Mechur, Esq., Yes, Sarbanes-Oxley Applies to Private Companies, Boylan Code Law Library: Articles; United States v. Keith, 440 F. App’x 503 (7th Cir. 2011).

[2] Christopher R. Chase, To Shred Or Not To Shred: Document Retention Policies And Federal Obstruction of Justice Statutes, 8 Fordham J. Corp. & Fin. L. 721, 740 (2003) (citing President’s Statement on Signing the Sarbanes-Oxley Act of 2002, 38 Weekly Comp. Pres. Doc. 1286 (July 30, 2002); David S. Hilzenrath, Anderson’s Collapse May Be Boon to Survivors, Wash. Post, Aug. 24, 2002, at E01 (stating that passage of the law was in response to the accounting scandals)).

[3] Sarbanes-Oxley Act 18 U.S.C.S. § 1519 (2002).

[4] United States v. Keith, 440 F. App’x 503 (7th Cir. 2011).

[5] Sarbanes-Oxley Act 18 U.S.C.S. § 1519 (2002).

[6] United States v. Tsarnaev, 2014 U.S. Dist. LEXIS 134596 (D. Mass. Sept. 24, 2014).

[7] United States v. McRae, 702 F.3d 806 (5th Cir. 2012).

[8] United States v. Moyer, 674 F.3d 192 (3d Cir. 2012).

[9] See generally Ronald D. Rotunda, Hillary’s Emails and the Law. The Wall Street Journal (September 17, 2015 10:15 AM)

[10] United States v. Moyer, 674 F.3d 192 (3d Cir. 2012).




Blog: Are you sick of telemarketers and robocalls?


By: Biniam Tesfamariam,

How many of you find it troublesome when a condition of using a company’s service is answering to its automated phone system? These calls usually take longer than expected, upsetting customers in the process. For more than two decades, Congress and the Federal Communications Commission (“FCC”) have sought to protect consumers from the nuisance, invasion of privacy, cost, and inconvenience of autodialed calls and prerecorded or artificial voice messages (robocalls).[1]

Congress found that consumers consider these kinds of calls, “regardless of the content or the initiator of the message, to be a nuisance, are an invasion of privacy, and interfere with interstate commerce”; and that banning such calls except when made for an emergency purpose or when the called party consents to receiving the call, “is the only effective means of protecting telephone consumers from this nuisance and privacy invasion.”[2]

In 2012, the FCC revised its Telephone Consumer Protection Act (“TCPA”) rules (1) to require telemarketers to obtain prior express written consent from consumers before robocalling them, (2) to bar telemarketers from relying on an “established business relationship” to avoid getting consent from consumers when calling their home phones, and (3) to require telemarketers to provide an automated, interactive “opt-out” mechanism during each robocall so consumers can immediately tell the telemarketer to stop calling.[3] The FCC enforces the TCPA by conducting investigations and taking enforcement actions against violators.

Satisfactory customer service is a major pillar of any successful business. Recently, companies such as Lyft (similar to Uber) and First National Bank have been put on notice for violating federal telemarketing rules.[4] There are obvious benefits for businesses in using autodialed messages as part of their telemarketing advertisement schemes: they do not have to pay someone to perform the task, and they can advertise their products at low cost. Unfortunately for these businesses, the risk of litigation associated with these practices has risen in recent years; TCPA litigation trended upward by 63% between 2011 and 2012.[5] The rise appears to be fueled by several factors: increased consumer use of cell phones as primary phones, the increased ease of filing TCPA class actions, and defects in consent.[6] Businesses would be best served by amending their operating agreements to remove such practices.

As time and technology progress, it is fascinating to see laws being enacted to protect consumers from the nuisance and privacy issues of telemarketing advertisements. More and more consumers are taking advantage of the regulatory process by submitting public complaints about telemarketing. Just last year, the FCC received up to 215,000 complaints from the public regarding telemarketing and robocalls.[7] On the Federal Trade Commission’s Bureau of Consumer Protection webpage, the first thing you will find is either the option to file a general complaint or information regarding robocalls. As more consumers become knowledgeable about these issues and their options for making their voices heard, it will be interesting to see whether the number of complaints drops in the next couple of years.

[1] See S. Rep. No. 102-178, 1st Sess., 102nd Cong., at 2, 4-5 (1991), reprinted in 1991 U.S.C.C.A.N. 1968, available at

[2] Id.

[3] 47 C.F.R. § 64.1200.

[4] Brian Fung, Lyft Automatically Opts You into Receiving Robocalls. That Doesn’t Sit Well with the FCC (Sept. 11, 2015, 3:11 PM),

[5] Doug Smith & Andrew Smith, Robocalling and Wireless Numbers: Understanding the Regulatory Landscape (May 2013),

[6] Id.

[7] Brian Fung, Sick of Telemarketers and Robocalls? FCC is Poised for a Crackdown. (May 27, 2015),



Blog: #Wanted on #WarrantWednesday

By: Milena Radovic, Associate Manuscripts Editor

In March 2015, the Butler County Sheriff’s Office in Ohio posted a series of pictures and a brief criminal history of Andrew Marcum on its Facebook page.[1]  The Sheriff’s Office requested the public’s help in locating Mr. Marcum[2] because he was wanted for “burglary, kidnapping, domestic violence and criminal endangering.”[3]  Surprisingly, Mr. Marcum commented on the post and stated, “I ain’t tripping half of them don’t even know me.”[4]  Subsequently, the Sheriff’s Office responded with “If you could stop by the Sheriff’s Office, that’d be great,”[5] and then stated, “Hey, it doesn’t hurt to ask.”[6]  On Twitter, Sheriff Richard K. Jones posted a picture of a jail cell with the caption “Hey Andrew Marcum we’ve got your room ready…”[7]  The following day, the Sheriff’s Office arrested Mr. Marcum.[8]

Posts like the one by the Butler County Sheriff’s Office are not uncommon. More and more police departments are utilizing social media websites, like Facebook and Twitter, to catch criminals and request tips.[9]  These social media posts are essentially “electronic versions of traditional wanted posters,” and often include “a photo, description of the individual and crime, and a contact number for tips.”[10]  In order to protect the anonymity of tipsters and informants, law enforcement departments encourage the community to provide information through phone calls or emails and not directly through comments.[11]

In New York, Illinois, Colorado, and even Canada, police departments use “#warrantwednesday” on social media to catch criminals, while Florida and Indiana utilize “Turn ’em in Tuesday.”[12]  In 2014, New York State Police arrested fifteen people as a result of #warrantwednesday posts and, in total, arrested twenty-nine people as a result of tips received through social media.  According to Darcy Wells of the New York State Police, there is a spike in Facebook page activity on Warrant Wednesdays, and “Twitter town halls have increased the agency’s Twitter followers—which, ultimately, can help solve crimes and promote public safety.”[13]

According to a study by LexisNexis, law enforcement “increasingly [rely] on social media tools to prevent crime, accelerate case closures and develop a dialogue with the public.”[14]  Although it is not rare for police to use social media to catch criminals, this method seems to be far less controversial.  In the past, police departments have faced criticism for using Facebook to catch criminals by creating fake profiles and gaining access to private information through a user’s Facebook friends.[15]  Ultimately, this method may be more effective in creating goodwill and promoting cooperation between citizens and the police.


[1]Tracy Bloom, Wanted Man Arrested in Ohio After Responding to Sheriff’s Facebook Post About Him, KTLA5 (Mar. 4, 2015, 8:55 AM),

[2] Bloom, supra note 1.

[3] Faith Karimi, Ohio fugitive nabbed after taunting authorities on Facebook, CNN, (last updated Mar. 5, 2015, 9:46 AM).

[4] Bloom, supra note 1; see also Karimi, supra note 3.

[5] Bloom, supra note 1; see also Karimi, supra note 3.

[6] Bloom, supra note 1.

[7] Bloom, supra note 1; see also Karimi, supra note 3.

[8] Bloom, supra note 1; see also Karimi, supra note 3.

[9] See Judy Sutton Taylor, #WarrantWednesdays: Law enforcement jumps on a social media trend to help find criminals, ABA Journal, Mar. 2015, at 9, available at (the title of the online version is Law enforcement jumps on #WarrantWednesdays trend to help find criminals).

[10] Taylor, supra note 9, at 9.

[11] See Taylor, supra note 9, at 9–10.

[12] See Taylor, supra note 9, at 10.

[13] Taylor, supra note 9, at 9–10.

[14] LexisNexis, Social Media Use in Law Enforcement: Crime Prevention and Investigative Activities Continue to Drive Usage 3 (2014), available at

[15] See Heather Kelly, Police embrace social media as crime-fighting tool, Facebook, CNN, (last updated Aug. 30, 2012, 5:23 PM).


Blog: Facebook Data Security – Is Your Private Data at Risk on Social Media?

By: John Danyluk, Associate Notes & Comments Editor

It is uncertain exactly how much information Facebook has about its users.  The social media giant not only has all of the content uploaded by its 1.35 billion users but also the information that could be gleaned from the staggering 100 billion friendships among those users.  So just how secure is this massive amount of private data, and what would the legal consequences be if a breach occurred?

Facebook suffered one such breach in June 2013.[1]  Although the impact of this particular breach turned out to be relatively minor, it signaled a larger problem for protecting personal data on the internet.  The glitch exposed contacts’ email addresses and personal phone numbers even if that data was not visible on Facebook itself.[2]  Although Facebook corrected the problem within twenty-four hours, over six million users had their sensitive personal data exposed.[3]  For those six million individuals, the reasonable expectation of privacy was infringed when sensitive details not shared on their public profiles went unprotected.[4]

A data breach not only puts Facebook at significant risk of a public relations nightmare, but it also may result in regulatory investigations from the FTC and civil liability to its users for negligence.[5]  But Facebook would not be left without recourse, as it could institute civil actions under the Computer Fraud and Abuse Act and the Stored Communications Act (among other laws) against the perpetrators.[6]  Additionally, the federal government would likely step in to enforce the criminal provisions of these acts as well.[7]

How can companies like Facebook, which are trusted with sensitive data, prevent data exposure in the future?  In sum, these companies must have “strong security configuration management all the way from the servers through the applications and the user permissions assigned to the data.”[8]  Users of these websites can help themselves as well by minimizing the number of companies and apps that have access to their personal data.[9]  By taking the time to understand privacy controls and removing unused apps, users can curtail the threat of having their privacy invaded through a data breach.


[1] Tony Bradley, Facebook Breach Highlights Data Security’s “Weakest Link” Syndrome, PCWorld, available at

[2] Id.

[3] Id.

[4] Id.

[5] Evan Brown, Six Interesting Technology Issues Raised in the Facebook IPO, Internetcases, available at

[6] Id.

[7] Id.

[8] Id.

[9] Id.

Blog: The New Four Walls of the Workplace

By: Micala MacRae, Associate Notes and Comments Editor

The Supreme Court has recognized workplace harassment as an actionable claim against an employer under Title VII of the Civil Rights Act of 1964.[1]  The rise of social media has created a new medium through which workplace harassment occurs, and courts are just beginning to confront when social media harassment may be considered part of the totality of the circumstances in a Title VII hostile work environment claim.  Traditionally, harassment has occurred through face-to-face verbal and physical acts in the workplace.  The workplace, however, has continued to expand with the rise of new technology, which allows employees to stay connected to the work environment from locations outside the physical boundaries of the office.  Harassment has moved beyond the physical walls of the workplace to the virtual workplace, and the broadening conception of the workplace, together with the increasing use of social media in professional settings, has expanded potential employer liability under Title VII.

Social media has become a powerful communication tool that has fundamentally shifted the way people communicate.  Employers and employees increasingly utilize social media and social networking sites.[2]  While companies have turned to social media as a way to increase their business presence and reduce internal communication costs, one consequence has been increased social media harassment.  Although social media and social networking sites are not new forms of communication, their legal implications are just now coming into focus.[3]  And although several cases have addressed hostile work environment claims stemming from other forms of electronic communication, few address claims based on social media communications.[4]

The New Jersey Supreme Court, in Blakey v. Continental Airlines, Inc., was one of the first courts to consider whether an employer is responsible for preventing employee harassment over social media.[5]  In Blakey, an airline employee filed a hostile work environment claim arising from allegedly defamatory statements published by co-workers on her employer’s electronic bulletin board.[6]  The electronic bulletin board was not maintained by the employer, but was accessible to all Continental pilots and crew members.[7]  Employees were also required to access the Forum to learn their flight schedules and assignments.[8]

The court analyzed the case under a traditional hostile work environment framework, concluding that the electronic bulletin board was no different from other social settings in which co-workers might interact.[9]  Although the electronic bulletin board was not part of the physical workplace, the employer had a duty to correct harassment occurring there if the employer obtained a sufficient benefit from the electronic forum as to make it part of the workplace.[10]  The court made clear that an employer does not have an affirmative duty to monitor the forum, but that liability may still attach if the company had direct or constructive knowledge of the content posted there.[11]  The court limited consideration of social media harassment to situations where the employer derived a benefit from the forum and it could therefore be considered part of the employee’s work environment.[12]

Workplace harassment is no longer limited to the traditional four walls of the workplace.  As technology and the boundaries of the workplace have changed, courts have struggled to modernize their framework for assessing hostile work environment claims under Title VII.  These problems will only be exacerbated as society continues to embrace social media throughout our daily lives and employers continue to integrate social media into their business practices.


[1] See Meritor Sav. Bank v. Vinson, 477 U.S. 57, 64-67 (1986) (finding that workplace harassment based on individual’s race, color, religion, sex, or national origin is actionable under Title VII of the Civil Rights Act).

[2] Jeremy Gelms, High-Tech Harassment: Employer Liability Under Title VII for Employee Social Media Misconduct, 87 Wash. L. Rev. 249 (2012).

[3] See, e.g., Kendall K. Hayden, The Proof Is in the Posting: How Social Media Is Changing the Law, 73 Tex. B.J. 188 (2010).

[4] Id.

[5] Gelms, supra note 2.

[6] Blakey v. Continental Airlines, Inc., 751 A.2d 538 (N.J. 2000).

[7] Id. at 544.

[8] Id.

[9] Id. at 549.

[10] Blakey, 751 A.2d at 551.

[11] Id.

[12] Id.

Blog: The New Meaning of Back Seat Driving

By: Peyton Stroud, Associate Notes and Comments Editor

Are we there yet?  The common adage of road trips has a whole new meaning with the advent of driverless cars.  Imagine a world where the driver can face the backseat passengers while the car drives itself down the highway.  As of this past January, this dream is becoming a reality.  Automotive giants such as BMW, Audi, and Mercedes-Benz unveiled prototypes of self-driving technologies at the 2015 Consumer Electronics Show (CES).[1]  These new vehicle models function autonomously, allowing their passengers to sit back and relax.  Industry experts expect driverless vehicles to be on the road between 2017 and 2020.[2]

Many current car models already feature some self-driving technologies, including automatic braking systems, adjustable cruise control, and 360° cameras capable of preventing collisions at low speeds.[3]  However, this year’s CES brought more to the table than ever before.  At the show, Audi unveiled its self-driving car, nicknamed “Jack,” which uses the company’s Piloted Driving system.[4]  “Jack” drove an astounding 560 miles to the CES, farther than any driverless car had driven before.  Its state-of-the-art system incorporates a series of sensors and laser scanners that allow the car to drive itself at speeds of up to 70 mph.[5]  The Piloted Driving system is intended for highway driving and does not work as well in urban environments, where drivers need to be at the wheel.[6]  Similarly, Mercedes-Benz introduced its driverless model, the Mercedes-Benz F 015 Luxury in Motion.[7]  Its new features include self-driving technology and a zero-carbon-emissions system, but most notably a new interior design.[8]  The new design allows the vehicle’s front seats to swing around and face backwards while the vehicle drives its passengers on the highway.[9]

Other technology developers are joining forces with car manufacturers to help advance this technology.[10] Nvidia, a large computer-chip manufacturer, has introduced the Tegra X1 chip, equipping vehicles with deep neural learning that allows recognition of pedestrians, cyclists, and other vehicles. More technological innovations on the horizon include systems that set a predetermined route, more adjustable cruise controls, and self-parking technologies.[11]

Legally, self-driving smart cars could pose significant problems in both the regulatory and data privacy realms.[12]  On the regulatory side, there are currently no transportation laws addressing self-driving cars.[13]  Furthermore, some remain wary of the “data collection” the cars require.[14]  The most profound legal implication, however, concerns liability: Who is responsible when something goes wrong?  More specifically, who is liable if a self-driving car hits and kills someone?  Who is responsible for the parking ticket when the car did not recognize a no-parking sign?[15]  Only four states and the District of Columbia have addressed laws regarding self-driving vehicles.[16]  Some of these states have passed laws permitting the cars solely for testing purposes.[17]  In an effort to predict the legal implications of these new cars, lawyers look to current liability laws for guidance.[18]  For example, in cases of parking tickets, the owner of the car will be liable.[19]  In cases of injury, products liability law will most likely govern, allowing the victim to sue both the owner of the car and the car’s manufacturer.  According to Professor John Villasenor, “product liability law, which holds manufacturers responsible for faulty products, tends to adapt well to new technologies.”[20]  Furthermore, Sebastian Thrun, a pioneer of driverless cars, opines that driverless cars could help in reconstructing accidents and make assignment of blame more clear-cut.[21]  In his view, it is the trial lawyers who are in trouble.[22]

However, criminal penalties pose a more significant problem than civil penalties.[23]  Because criminal law centers on the intent of the perpetrator, it will be difficult to adapt these laws to the technology: robots cannot be charged with a crime.[24]  Further, “the fear of robots” and of machine malfunction raises concerns for the American public.[25]  However, it seems that Americans are willing to take the risk.  According to the Pew Research Center, nearly half of Americans would ride in a driverless car.[26]  Time will tell whether self-driving cars will endure public scrutiny.


[1] See Steve Brachmann, Self-driving Cars and Other Automotive Technologies Take Center Stage at CES, (Jan. 11, 2015),

[2] Bill Howard, Self-driving Cars Are More Than A Promise, Extreme Tech (Jan. 12, 2015, 11:45 AM),

[3] Brachmann, supra note 1.

[4] Id.

[5] Brachmann, supra note 1.

[6] Id.

[7] Id.

[8] Id.

[9] Id.

[10] See Howard, supra note 2.

[11] See id.

[12] See id.

[13] See id.

[14] Id.

[15] See Claire Cain Miller, When Driverless Cars Break the Law, N.Y. Times (May 13, 2014), available at

[16] See id.

[17] See id.

[18] See id.

[19] See id.

[20] Miller, supra note 15 (quoting John Villasenor, Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation, Brookings (Apr. 2014), available at

[21] See Miller, supra note 15.

[22] See id.

[23] Id.

[24] See id.

[25] Miller, supra note 15.

[26] See id.

Blog: Is It a Bird? A Plane? No, It's a Drone

By: Arianna White, Associate Staff

As a child, I spent many afternoons with my father and his two helicopter-enthusiast brothers.  We would go to the park and launch remote-controlled helicopters and rockets into the sky.  We flew the large, complex kind of helicopters that could drop packages from great heights and do flips in the air.  Although craft helicopters are less in vogue today than they were twenty years ago, other small-scale flying devices have recently returned to popular consciousness.  I’m talking, of course, about drones.

When thinking about drones, many people imagine their military application.  Otherwise known as predator drones or Unmanned Combat Aerial Vehicles (UCAVs), these machines are used to perform precise strikes on enemy targets.[1]  The use of these drones relies on information gathered by intelligence agencies to identify targets and on a remote operator who controls the drone’s movements.[2]

Beyond this common conception, however, the term drone refers to a larger class of Unmanned Aircraft Systems that have both public and private applications in the United States.[3]  Many police departments, like the New York City Police Department, use drones to survey the public under the pretext that the drones are intended to “check out people to make sure no one is… doing anything illegal.”[4]

Corporations and personal enterprises have also determined that drones can serve in varied, but important roles.  Amazon, for example, is interested in using drones for package delivery and has asked the Federal Aviation Administration (FAA) for permission to develop and test a drone program.[5]  While the FAA has yet to issue the necessary license to Amazon, the company persists in its request that the agency permit its use of drones.[6]

Mexican drug cartels have also developed drones to deliver packages, although their program follows a decidedly less legal route than Amazon’s.[7]  On January 19, 2015, a drone carrying nearly six pounds of methamphetamine crashed in a Mexican city along the Mexico-US border.[8]  In early 2015, a South Carolina man received a fifteen-year prison sentence for his attempt to deliver contraband to a South Carolina prison.[9]  The crashed drone carried marijuana, cell phones, and tobacco onto the prison’s grounds, although the contraband never reached any of the prison’s inmates.[10]

Given the proliferation of unmanned aircraft, both sophisticated and homemade, the FAA lacks a comprehensive policy that effectively regulates their use.  While the “current FAA policy allows recreational drone flights in the U.S.[, it] essentially bars drones from commercial use.”[11]  Although industry analysts expected the FAA to publish its proposed rules by the end of 2014 and begin the notice and comment period, the agency did not meet that goal.[12]  In fact, Gerald Dillingham, the GAO’s director of civil aviation, said that the “consensus of opinion is the integration of unmanned systems will likely slip from the mandated deadline [and not be finalized] until 2017 or even later.”[13]

During the 112th Congress, lawmakers passed the FAA Modernization and Reform Act of 2012.[14]  The act was designed to, among other non-drone-related purposes, “encourage the acceleration of unmanned aircraft programs in U.S. airspace.”[15]  Agency guidelines, in place since 1981, currently control the use of personal unmanned aircraft.[16]  Under these guidelines, individuals are prohibited from “flying above 400 feet, near crowds, beyond the line of sight or within five miles of an airport.”[17]  These types of guidelines seem reasonable and appropriate for regulating small-scale, personal model aircraft and drone use.

However, there is a glaring need for a federal policy that addresses and regulates the commercial use of drones.  In the absence of such a policy, local governments have begun to fill the gaps the FAA left behind.  According to the New York Times, “At least 35 states and several municipalities have introduced legislation to restrict the use of drones in some way.”[18]  These laws serve various functions, including governing the permissible police uses of drones, defining what type of use constitutes unlawful surveillance, and determining the punishments allowable for violations of the particular law.[19]  By allowing individual local governments to determine their own rules in the absence of a federal standard, the FAA has missed the opportunity both to promote responsible drone use nationwide and to ensure safe, uniform use across the country.  While other countries, like Canada, Australia, and the United Kingdom, have already begun enacting laws that allow commercial use of drones,[20] the United States is still stuck in 2012.




[3] Federal Aviation Administration, Unmanned Aircraft Systems.

Blog: Smart Guns and Their Constitutional Concerns

By: Jill Smaniotto, Associate Manuscript Editor

Following the shooting death of eighteen-year-old Michael Brown by a police officer in Ferguson, Missouri this past summer, the issue of accountability for police firearm use has been at the forefront of public discourse.[1]  A firearms technology startup in Capitola, California known as Yardarm Technologies recently announced that it has developed a product that may provide the real-time information necessary to maintain greater oversight of the use of police force.[2]

While so-called “smart gun” technology has existed for quite some time, technological advances, coupled with the growing concern over mass shootings and police abuse of force, have prompted further development of the technology.[3]  Yardarm’s new product is a two-inch piece of hardware equipped with an accelerometer and a magnetometer that officers snap into the grip of their firearms.[4]

The sensor records information about when, where, and how police officers use their firearms,[5] providing dispatchers with real-time data.[6]  Currently, the technology requires the officer to carry a smartphone, as the device transmits the data by sending a signal to the phone, which then sends the information to Yardarm’s servers for secure storage.[7]  The Yardarm sensor can track the gun’s location, whether the gun is in its holster, when new magazines are inserted, and when it is fired.[8]  Yardarm also intends to develop the product further so that it may be able to tell in which direction the gun is fired.[9]  The technology does not feature a remote disabling mechanism.[10]

Initially, Yardarm intended to sell the device on the consumer firearm market.[11]  Early plans for the device focused on tracking in the event of theft or misplacement of the individual’s firearm and on remote locking, but the potential political sensitivities of entering the consumer firearm market proved too great a challenge for the ten-employee startup.[12]  Yardarm then decided to switch its focus to law enforcement agencies, which were already showing interest in the burgeoning technology.[13]  The Santa Cruz Sheriff’s Department and the Carrollton (Texas) Police Department have begun equipping officers’ weapons with the sensors on a trial basis.[14]

Discussion surrounding the announcement of this new technology has been divisive. Proponents of technology like Yardarm’s new sensor cite the potential benefits to officer safety in the field, as well as the hope for a pool of objective data that may be used to investigate incidents of alleged police brutality.[15]  Law enforcement agencies are hopeful that this technology will help to solve a problem that is “the worst nightmare for any officer in the field”: deputies in trouble and unable to ask for additional assistance.[16]  Additionally, those in favor of the technology expect that the sensors, like dashboard cameras, will provide objective records of incidents in which officers used firearms.[17]  This information may cut both ways, though, as it could be used “to exonerate an officer accused of misconduct, or to prosecute a criminal in a court of law.”[18]

Detractors, however, are not comfortable with the potential implications of widespread use of the technology.  Gun rights advocates, such as the National Rifle Association (“NRA”), are wary of the impact of smart guns on Second Amendment rights.[19]  Specifically, the NRA has voiced concern that the proliferation of these sensors may open the door to government regulations requiring this technology on personal firearms.[20]  The American Civil Liberties Union (“ACLU”) expressed concern that the sensors may present an invasion of privacy, but tempered that concern by admitting that such an invasion may be a necessary evil in order to attain some much needed transparency into police behavior.[21]

While this technology is certainly new, the supposed ease of integration[22] and the volatile state of affairs surrounding police use of firearms may combine to create the spark necessary to ignite the widespread adoption of such sensors sooner rather than later.  Because Yardarm has made clear its intention to market the product solely to law enforcement and the military,[23] detractors of the technology may find their criticisms carrying little weight against the vast public safety benefits in the inevitable debate over what place smart guns may have in our society.


[1] Hunter Stuart, Company Makes Gun Tech That Could Help Prevent Police Brutality, The Huffington Post (Oct. 24, 2014, 11:02 AM),

[2] Id.

[3] Haven Daley, California Startup Unveils Gun Technology for Cops (Oct. 24, 2014, 6:57 AM); David Kravets, Silicon Valley Startup Unveils Internet-Connected Smart Guns for Cops, Ars Technica (Oct. 24, 2014, 12:30 PM),

[4] Aaron Tilley, Internet-Connected Guns Are the Next Step for Data-Hungry Police, Forbes (Oct. 24, 2014, 10:00 AM); Stuart, supra note 1.

[5] Stuart, supra note 1.

[6] Kravets, supra note 3.

[7] Stuart, supra note 1.

[8] Tilley, supra note 4.

[9] Id.

[10] Daley, supra note 3.

[11] Tilley, supra note 4.

[12] Id.; Kravets, supra note 3.

[13] Tilley, supra note 4.

[14] Daley, supra note 3; Kravets, supra note 3; Stuart, supra note 1; Tilley, supra note 4.

[15] Daley, supra note 3; Stuart, supra note 1.

[16] Daley, supra note 3. See also Stuart, supra note 1 (“[T]he technology can be also used to keep police officers safer. When an officer draws his weapon, for example, the gun will send an alert to the police command center and to nearby officers, alerting them to a potentially dangerous situation.”).

[17] Stuart, supra note 1.

[18] Id.

[19] Kravets, supra note 3.

[20] Id. See also Daley, supra note 3 (noting that Gun Owners of California spoke to concerns about future government-mandated use of the technology on personal firearms).

[21] Tilley, supra note 4.

[22] See Tilley, supra note 4 (noting that Yardarm is designing its software to easily fit into existing dispatcher software); Daley, supra note 3 (indicating that the device can fit into the handle of most police guns and relies on Bluetooth technology for data transmission).

[23] Tilley, supra note 4.

Blog: Virtual Adultery: The World of Cyber Cheating

By: Micala MacRae, Associate Notes & Comments Editor

A virtual adultery epidemic has swept the nation. Online chat rooms, Facebook, Twitter, and other forms of social media have enabled individuals to make virtual connections that some argue are grounds for divorce.  In 1996, a New Jersey man filed for divorce based on adultery after discovering that his wife had been carrying on a “virtual” affair with a man in North Carolina through online chat rooms.[1]  Although the wife never met her cyber-paramour in person, the relationship began to take over her life, and she began to neglect her job, family, and marriage.[2]  In the United States, courts have refused to hold that virtual relationships reach the level of intimacy necessary for adultery.  Adultery is defined as intimate sexual activity outside of marriage.  Nevertheless, virtual infidelity has become an increasingly important issue in family law.

Virtual infidelity can eventually lead a party to act.  If a spouse travels to meet an online partner in person, courts may infer adultery without much difficulty.  Courts have also taken into consideration parents’ excessive time spent online on interactive gaming websites when determining child custody.[3]  When parents fail to provide adequate support and care for their children because of exorbitant time spent online, courts infer that they have relinquished their parental responsibilities.[4]  Courts may eventually treat virtual infidelity as a renouncement of parental duties in divorce proceedings, awarding full custody of the children to the spouse who did not participate in the virtual infidelity.

Though courts have held that virtual infidelity does not satisfy the grounds for divorce, it may satisfy other requirements such as neglect or abandonment.[5]  The spouse carrying on a virtual relationship abandons the marital relationship and the family when he or she spends great periods of time pursuing the virtual relationship.  Many courts are willing to accept that sexual activity not proven to rise to the level of intercourse can still constitute legal adultery.[6]  Some courts even disapprove of emotional affairs, which are nearly analogous to virtual adultery.

Although virtual infidelity may never involve physical contact, courts may rule that virtual relationships leading to the degradation of the marital relationship are grounds for divorce.  Online infidelity may qualify as adultery when the conduct is a substantial factor in the breakdown of the marriage.  Courts may expand the definition of adultery to include virtual infidelity as a factor in determining whether a divorce should be granted.  The law lags behind the pace of technology and the evolution of views on marriage and infidelity.  It may be time to expand the law of adultery to include virtual infidelity, so that relief can be afforded to its victims.


[1] Douglas E. Abrams et al., Contemporary Family Law (3d ed. 2012).

[2] Id.

[3] Andrew Feldstein, Is Cybersex Grounds for Divorce?, (last updated Mar. 10, 2014).

[4] Id.

[5] Edward Nelson, Virtual Infidelity: A Ground for Divorce, (Sept. 11, 2010, 4:18 PM),

[6] Id.

