Richmond Journal of Law and Technology

The first exclusively online law review.


Hang the Rules. They’re More Like Guidelines Anyway.

By: Seth Bruneel

Autonomous (self-driving) cars are coming. Proponents, enthusiasts, and inventors promise to reduce the number of auto accidents by up to eighty percent, but that still leaves questions about the other twenty percent.[1] Some of the leftover uncertainty revolves around who is held liable, and who decides liability, in auto accidents involving autonomous cars. With many moving parts and many interested parties looking on, someone needs to be planning ahead of the excitement.

Currently, safety issues are regulated by the National Highway Traffic Safety Administration (NHTSA), while state laws control licensing, registration, and insurance.[2] Twenty-one states have passed legislation related to self-driving vehicles, some of which addresses the testing of such vehicles.[3] Colorado, for example, passed a law this summer that allows autonomous vehicles on the road so long as they obey all the rules of the road, such as seatbelts for passengers, turn signals, and speed limits.[4]

Most states that have passed legislation merely provide various definitions of “autonomous technology” and set out some requirements for the testing of the technology.[5]  A popular element in the statutes is to have a human driver in a position to take control of the vehicle at any time.[6]

Under President Obama, the NHTSA set guidelines for manufacturers and developers of autonomous vehicles, which included a 15-point safety assessment.[7] Recently, following a fatal accident involving a car in autonomous mode,[8] the Trump administration released “Automated Driving Systems 2.0,” which contains suggested guidelines for the testing and development of autonomous vehicles.[9]

Under “Section 1: Voluntary Guidance,” the President’s familiar attitude toward government interference in industry is recognizable: “This Guidance is entirely voluntary, with no compliance requirement or enforcement mechanism.”[10] President Trump’s strategy when it comes to the regulation of self-driving cars is simple: don’t regulate self-driving cars.[11] This “hands off” approach rolls back the 15-point safety assessment to 12 elements for consideration, in an effort to provide some guidance without stifling innovation.[12] But while the two policy outlines share more similarities than are obvious at first glance, both fail to look far enough ahead.

Element 9 of President Trump’s policy attempts to address the behavior of the autonomous driver in the event of a crash.[13] However, the guidance from the Department of Transportation and NHTSA amounts to little more than common sense: the guidelines leave it to the autonomous system to self-diagnose and either move the car to the side of the road or shut down completely in order to prevent further danger.[14]

Further, Element 10 is a baby step toward answering the question of who is liable in the event of an accident.[15] The section addresses the need to collect data for testing purposes but fails to require that the data be made available to someone in a position to decide liability. Often, in the event of an accident, the manufacturer will be in the best position to determine fault based on the recorded data, by merely plugging into the “black box” and reading it.[16] This fault-finding tool provides an objective account of what occurred without relying on human memory under traumatic circumstances.

On the topic of liability and insurance, the guidelines offer no substantial direction as to how to determine who needs to carry motor vehicle insurance (or even whether such insurance will still be necessary).[17] The guidelines can only urge developers, manufacturers, insurance companies, and lawmakers to “begin to consider” what to do when these inevitable questions arise.[18]

Early this month, a United States Senate panel passed a bill that would also hinder specific regulation of autonomous cars by preventing individual states from imposing regulations on performance standards.[19]  This is good news for developers and manufacturers but leaves auto safety groups with valid concerns.

In the end, the technology is quickly approaching, and due to the optional nature of the federal guidance, many problems remain unaddressed. Great freedom has been granted to innovation, and the promised benefits of autonomous driving technology dwarf the concerns, but someone needs to look far ahead and have an insurance and liability system in place in order to avoid confusion.

[1] Automobile Insurance in the Era of Autonomous Vehicles, 25 (2015).

[2] David Muller, Trump Administration’s New Self-Driving Car Guidance is Deliberately Toothless, CAR AND DRIVER (Sept. 15, 2017, 8:01 AM), (last visited Oct. 5, 2017).

[3] Autonomous Vehicles Self-Driving Vehicles Enacted Legislation, National Conference of State Legislatures (Sept. 21, 2017), (last visited Oct. 5, 2017).

[4] Tamara Chuang, It’s Official: Colorado Passes First Law to Regulate Driverless Vehicles, THE DENVER POST (June 1, 2017, 2:21 PM), (last visited Oct. 5, 2017).

[5] National Conference of State Legislatures, supra note 3.

[6] See id.

[7] Pete Bigelow, Federal Government Releases New Autonomous-Vehicle Policy, CAR AND DRIVER (Sept. 20, 2016, 10:48 AM), (last visited Oct. 5, 2017).

[8] See Pete Bigelow, “A Tesla Crash, but Not Just a Tesla Crash”: NTSB Issues Final Report and Comments on Fatal Tesla Autopilot Crash, CAR AND DRIVER (Oct. 3, 2017, 11:48 AM), (last visited Oct. 5, 2017).

[9] Automated Driving Systems 2.0, (last visited Oct. 5, 2017).

[10] Id. at 2.

[11] See Timothy Lee, Trump’s Self-driving Car Strategy: Don’t Regulate Self-driving Cars, ARSTECHNICA (Sept. 13, 2017, 7:30 AM), (last visited Oct. 5, 2017).

[12] Automated Driving Systems 2.0, supra note 9, at 5-15.

[13] Id. at 13.

[14] Id.

[15] Id. at 14.

[16] Seth Bruneel, The Fast and Furiously Approaching Need for Legal Regulation of Autonomous Driving, 30 BYU Prelaw Rev. 33, 44 (April 1, 2016).

[17] Automated Driving Systems 2.0, supra note 9, at 24.

[18] Id.

[19] David Shepardson, U.S. Senate Panel Puts Self-driving Cars in Fast Lane, REUTERS (Oct. 5, 2017, 10:29 AM), (last visited Oct. 5, 2017).


Censorship in Art: What Should Artists Post on Social Media?

By: James Williams

Censorship: some people love it, while others loathe it. Does it save society? If so, from what? Does it harm society? The definition of censorship allows for subjectivity and ambiguity.[1]

Most people know about Michelangelo’s painting in the Sistine Chapel, The Last Judgment, but some may not know that the painting originally featured fully nude figures, which later had to be covered over for the Church.[2]

In a more recent case, Andres Serrano took a photograph of a crucifix that he had submerged in urine, dubbing it Piss Christ.[3]  Interestingly enough, it was well-received in New York, but North Carolina citizens were not so enthusiastic, and protesters smashed it with a hammer.[4]

With social media, the owners of websites and applications are able to take down content or set restrictions because the platforms are privately held. Grace Coddington, Creative Director at Large for Vogue, made a drawing of a nude figure and posted it on Instagram; Instagram then disabled her account.[5] There are questions about what rights or restrictions these popular platforms should have versus the rights of artists. Some artists claim free speech or other expression violations, and the classic response is that this is not a governmental ban on free speech. What should artists do about vague censorship rules that are haphazardly drafted by privately owned social media platforms?

Instagram’s censorship rules have caused problems rooted in subjectivity and lack of clarity.[6] Subjectivity makes the censorship policies hard to understand because there are regional differences.[7] There are special categories of what is “appropriate” for children, which some platforms use as the basis of their policy.[8] Instagram has a policy[9] against nudity, but it makes a few exceptions, for sculptures, breastfeeding mothers, and photos of mastectomy scarring, that make it more ambiguous than insightful.[10] The policy specifically mentions a ban on the female nipple,[11] but the elusive male nipple goes unmentioned.

Besides the problems of subjectivity and lack of clarity, the next issue with censorship rules on social media platforms is the form of punishment for violating them. Some artists’ work is simply removed while the account remains available,[12] a punishment at the lighter end of the spectrum. But artists can lose all of their pictures along with access to the account.[13] This goes beyond taking down the offending picture(s): the platform is actually removing artwork, which may or may not have been the only copy of the art. A platform’s ability to immediately disable an account and provide notice only after deletion resembles the civil procedure issues surrounding pre-trial seizures, except that here artists have no chance to get the art returned or to make their case. Notice after the art is taken down is not exactly helpful, either.

Not all platforms are quite as aggressive toward these types of artists. Some websites, like DeviantArt, filter content or access.[14] Should platforms like Instagram create content filters to prevent minors from accessing so-called obscene content? DeviantArt has a policy of preventing visitors without an account, or those whose birthdays show they are under 18, from seeing content that has been flagged as mature.[15] DeviantArt is also more forgiving in the sense that it will not disable an account for such content; it will merely take the art down and notify the artist.[16]

Currently, time may be the only recourse for artists on platforms like Instagram, where societal standards may shift in favor of more relaxed censorship policies. Otherwise, artists will have to find more permissive platforms like DeviantArt and encourage others to follow in their footsteps.


[1] Censorship, Oxford Dictionaries Online (2017) (“[T]he suppression or prohibition of any parts of books, films, news, etc. that are considered obscene, politically unacceptable, or a threat to security.”).

[2] Priscilla Frank, A Brief History Of Art Censorship From 1508 To 2014, Huffington Post (updated Jan. 22, 2015),

[3] Id.

[4] Id.

[5] Alice Newell-Hanson, how artists are responding to instagram’s no-nudity policy, Vice (Aug. 15, 2016, 2:10PM),

[6] See id.

[7] See supra note 2.

[8] See Instagram, (last visited Sep. 30, 2017).

[9] Id.

[10] Supra note 5 (discussing how artists don’t know what will cause images to be removed contrasted to accounts being disabled).

[11] Supra note 7.

[12] Supra note 5.

[13] Id.

[14] DeviantArt, (last visited Sep. 30, 2017).

[15] Id.

[16] DeviantArt, (last visited Sep. 30, 2017).


How a Pair of Earrings Can Weigh in on a Multimillion Dollar Trade Secret Dispute: Waymo v. Uber

By: Ilya Mirov

In Silicon Valley, losing or gaining an employee can lead to a litany of legal issues. It is common practice for former employers to warn new employers against working on projects related to trade secrets that the transitioning employee might be able to reveal.[1] If this employee begins to work for another firm in a similar field, he may be precluded from disclosing trade secret material from his previous employer.[2] Next month, an interesting case of this kind will be tried: Waymo v. Uber.

This case began with the typical hallmarks of Silicon Valley employee transitions: stern letters mailed to the new employer and reminders to the employee of his or her duty not to disclose trade secret information.[3] But instead of ending there or settling out of court, this case is going all the way to a federal trial in October of this year.[4] The dispute began in 2016 after Waymo’s former self-driving car engineer, Anthony Levandowski, formed his own self-driving truck technology company (Otto Trucking) and began to work for Uber.[5] Waymo’s evidence includes records of an unusually high volume of confidential file downloads from Levandowski’s computer before he left.[6]

Waymo highlights the security that it employed in protecting its trade secret: “All networks hosting Waymo’s confidential and proprietary information [are] encrypted and [require] passwords and dual-authentication for access,” reads the original complaint. “Computers, tablets, and cell phones… are encrypted, password protected, and subject to other security measures. And Waymo secures its physical facilities by restricting access and then monitoring actual access with security cameras and guards.”[7]

But what effect would it have on the case if Waymo also gave away its secret lidar circuit boards to a departing employee? As revealed in a recent court filing, Waymo turned its previous-model lidar circuit board, the Grizzly Bear 2 (GBr2), into a pair of earrings gifted to Seval Oz, head of Global Partnerships and Business Development for Google’s self-driving car program from 2011 to 2014.[8] She received the earrings when she left the company in the summer of 2014 to run Continental’s Intelligent Transportation System division.[9]

Pierre-Yves Droz, a Waymo engineer on the self-driving team, revealed the nature of the earrings in a deposition given earlier this month. When handed the earrings and asked if he recognized them, Droz identified them as Grizzly Bear 2 boards.[10] The Grizzly Bear 3, only a minor improvement over the Grizzly Bear 2, provided the basis for the lawsuit.[11] “This is confidential… [It’s] not something we should give to someone, especially if someone is leaving the company,” he said.[12]

Uber will likely argue that Waymo lost its trade secret protection for its lidar technology through the distribution of these high-tech earrings.[13] As revealed in the filing, Anthony Levandowski exchanged multiple text messages with Oz over the course of several weeks in July and August in an effort to obtain the earrings.[14]

The case is due to go to trial on October 10 and will be an interesting data point to track in trade secret law.



[1] Derek Handova, Waymo v. Uber: A Gordian Knot Gets Tighter, IPWatchdog, June 15, 2017,

[2] Id.

[3]Mark Harris, Could a Pair of Earrings Hurt Waymo’s Lidar Trade Secrets Lawsuit?, IEEE Spectrum, Sept. 11, 2017,

[4] Waymo LLC v. Uber Techs., Inc., No. 2017-2130, 2017 U.S. App. LEXIS 17665 (Fed. Cir. Sep. 13, 2017).

[5] Id.

[6] Id.

[7] Mark Harris, Could a Pair of Earrings Hurt Waymo’s Lidar Trade Secrets Lawsuit?, IEEE Spectrum, Sept. 11, 2017,

[8] Id.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Id.


Will Formalities and the Emerging Use of Technology in Will Creation

By: Rachel Weinberg-Rue

Within the digital and Internet landscape, there is increasing room and opportunity for individuals to express themselves. Between a host of build-your-own-blog websites, YouTube, Facebook, Instagram and other media platforms, individuals are able to document their experiences as well as their opinions instantaneously with ease. Thus, it comes as no surprise that more and more people are seeking to use technology to also help document their wills and last wishes.[1] However, there are inherent tensions between testamentary formalities and the use of technology.[2]

In the past, will formalities were strictly enforced and required staunch adherence to the writing, signature, and attestation requirements established by the Wills Act in 1837.[3] These rules lack substantial exceptions. Under strict adherence, even the tiniest error or any ambiguity could invalidate the entire will.[4] In many cases, this line of reasoning has produced outcomes in which the goal of realizing testator intent is clearly frustrated,[5] and, strictly applied, the rules seem inherently incompatible with technology. Will formalities dictate that wills must be in writing, and traditionally this has excluded digital documents, videos, and other digital media from replacing or supplementing the writing, since most documents were handwritten.[6]

In response, many states have moved past strict adherence, and have adopted the harmless error doctrine.[7] Harmless error allows courts to overlook technical errors in the will creation process and look to extrinsic evidence to help reveal testator intent.[8] Today, with wider acceptance of the harmless error doctrine[9] and the increasing use of extrinsic evidence, it has become easier to prove testator intention, and if testator intention is what really matters, it is possible that technology has become more compatible with the will execution process.

However, using technology in will creation creates other unique problems. Increasingly, individuals are trying to save testamentary dispositions as E-wills in word processing files or other digital formats, which poses substantial problems of proof.[10] For example, it is debatable whether an e-signature can satisfy the signature requirement.[11] Even in states that have adopted the harmless error doctrine, a will without a signature cannot be probated.[12] E-wills are also problematic because they are easier to forge: electronic files are easy to duplicate and easy to alter after completion. Allowing E-wills to be probated might therefore advance fraudulent incentives. Video wills would also be problematic; although not as open to forgery, they pose similar problems of signature viability.

Despite these difficulties, technology may still be helpful to will execution. Although it might not be effective as a mode of will replacement, the use of digital documents and files can be helpful to supplement the will document. For example, videos can serve as a useful tool for documenting testator intent when extrinsic evidence is needed to corroborate ambiguities in the will language or to understand strange bequests. If a relative decides to challenge a will, a video explanation can serve as proof of what the testator intended and be used to show that the testator was mentally competent.[13] Videos can also be used to document the will execution process and serve as proof that the will was signed, notarized, and attested properly.[14]

As rules surrounding will creation become more elastically treated and as technology progresses, more and more opportunities for technology to supplement parts of the will execution process will present themselves. It will be interesting to see how states will adapt to these changes and whether states with and without the harmless error doctrine will consider more technology friendly applications of the rules. Perhaps in the future, testators will be able to execute a will via live stream or Skype and be able to document their testamentary wishes on a blog or digital journal.


[1] Writing Your Will: What is a Video Will?, American Bar Association, (last visited Sept. 27, 2017).

[2] David Horton, Tomorrow’s Inheritance: The Frontiers of Estate Planning Formalism, 58 B.C. L. Rev. 539, 563 (2017) [hereinafter Horton].

[3] Id. at 555.

[4] Wills Act Formalities: Modern Trend, (last visited Sept. 27, 2017).

[5] Horton, supra note 2, at 271.

[6] See id. at 563-68.

[7] See thisMatter, supra note 4.

[8] See id.

[9] See id.

[10] Horton, supra note 2, at 563.

[11] Id. at 569.

[12] Id.

[13] American Bar Association, supra note 1.

[14] Id.


Cyberstalking: Enabled by Cheap, Accessible Technology

By: Brooke Throckmorton


With the frequent use of technology today, it should come as no surprise that its reach is abused. While technology can be entertaining, helpful, and time-saving, it can also be terrifying, intrusive, and ultimately enabling to those who want to intimidate or harm others.

In 2009, 14 out of every 1,000 people age 18 or older fell victim to stalking.[1] One in four of these stalking victims reported some form of cyberstalking (83% by e-mail, 35% by instant messaging).[2] Stalking is often defined by its effect on the victim, namely the fear it produces.[3] Stalking creates a type of “psychological prison” for victims, marked by fear, paranoia, shame, isolation, depression and, in severe cases, Post Traumatic Stress Disorder (PTSD).[4] Stalking is legally defined as “following or loitering near another” to “annoy or harass that person or to commit a further crime such as assault or battery.”[5] Some statutes incorporate additional elements, such as requiring that the victim feel “distressed” about their own personal safety or the safety of close friends or family.[6] The definition of cyberstalking adds the element of intimidation through the use of e-mail or the Internet to place “the recipient in fear that an illegal act or injury” will be inflicted upon the recipient or a member of that person’s family or household.[7] The rise in availability of cheap technology has allowed cyberstalking to replace traditional approaches to stalking.[8]

Advanced technology allows stalkers to constantly terrorize their victims by “tracking and monitoring them” as they move through the world with their smartphones, computers, and iPads.[9] In addition to the GPS tracking built into virtually every smartphone, social media plays a huge role in enabling abusers to reach their victims. Multiple social media outlets give stalkers multiple options for observing and intimidating their victims. For example, Snap Map, a feature introduced on the Snapchat app in 2017, allows your “friends” to view your location at all times if you are “opted in.”[10] If a user chooses to opt in and share their location with “friends,” those “friends” can view the user’s location at all times, even if the user is not chatting with them in the app or sending them snapchats.[11] The biggest concern with this new feature is that some users may not understand the implications of turning on their Snap Map location.[12] In turn, they may be inadvertently sharing their location at all times with potential cyberstalkers.

You may be asking how the law deals with cyberstalking. Good news! There is a federal statute that speaks specifically to the crime of cyberstalking.[13] The statute is titled “Stalking” but contains a provision that refers specifically to using “any interactive computer service or electronic communication service or electronic communication system of interstate commerce” with intent to do harm or to place a person under surveillance for such harm.[14] A Virginia man was charged, convicted, and sentenced to 41 months in jail under this statute in March of this year.[15] Richard Killebrew, a resident of Norfolk, Virginia, used a computer and cell phone to communicate threatening messages, some containing death threats, to multiple victims in Nebraska.[16] On the state law frontier, some states have enacted specific cyberstalking statutes, while others continue to rely on their stalking statutes and apply the terms to electronic communications.[17] This can be problematic given the unique nature of cyberstalking.[18]

While there are laws in place to bring relief to victims of cyberstalking, you can also be proactive by monitoring your own Internet activities. For example, if you have Snapchat, make sure you have deliberately opted in or out of Snap Map. If you have a smartphone, you can monitor which apps are using your location. Location services appear in your privacy settings, where you can see whether you are sharing your iPhone’s location, and you can scroll down to view which apps are using your location, indicated by “never,” “while using,” or “always.”

While the federal government and select state governments have expanded their statutes to explicitly include cyberstalking, all states should have such a provision. Since cyberstalking can occur at any and all times, unique statutory language should address solely electronic communications. And while cyberstalking is not wholly preventable, monitoring your online activities can make you less susceptible to it.



[1] Katrina Baum, Shannon Catalano, Michael Rand, Stalking Victimization in the United States, U.S. Dep’t of Just., (Jan. 2009),

[2] Id.

[3] See Melvin Huang, Keeping Stalkers at Bay in Texas, in Domestic Violence Law 282, 284 (Nancy Lemon ed., 2013).

[4] Id. at 285.

[5] Stalking, Black’s Law Dictionary (10th ed. 2014).

[6] Id.

[7] Cyberstalking, Black’s Law Dictionary (10th ed. 2014).

[8] Supra note 3, at 282.

[9] Id.

[10] See What’s the Deal with Snap Map?, Tech. Safety (Sept. 21, 2017 3:13 PM),

[11] See id.

[12] See id.

[13] See 18 U.S.C. § 2261A(2).

[14] Id.

[15]  Virginia Man Sentenced for Cyber Stalking, U.S. Dep’t of Just., (Mar. 13, 2017),

[16] Id.

[17] Is There a Law Against Cyberstalking or Cyberharassment?,

[18] Id.


Reproductive Technology Creates More than Just Children

By: Hayden-Anne Breedlove

Married couples often do not think about custody issues in the event of divorce. Many couples with children get caught up in the moment and fail to plan for that possibility. With today’s advances in reproductive technology, new legal issues are arising in custody disputes involving children who have not yet been born.

Recent innovations in reproductive technology allow individuals who were at one time unable to become pregnant to do so. The egg, sperm, and womb needed to make a baby can be provided by three separate people, or even after a person’s death. However, the new technology has raised legal and ethical issues that extend beyond the standard question of which parent is more “fit” to act as custodian of the child. Judges are now left to decide who has custody of frozen embryos post-divorce.

Take, for example, a married woman diagnosed with a form of cancer whose treatment would eliminate the possibility of her getting pregnant. What if she chose to have her eggs inseminated with her husband’s sperm before her cancer treatments and then frozen for the couple’s use at a later time? What if the couple divorces before they can use these eggs? Who gets to keep them, or should they be destroyed? Should the wife be allowed to bear her ex-husband’s children? If not, she would not be able to have any more children, since her cancer treatments left her sterile. These issues present the court with both a moral and an ethical dilemma in deciding cases.

Davis v. Davis was the first case to address this topic.[1] During the marriage, the couple attempted to conceive through in-vitro fertilization.[2] The couple later divorced, giving rise to the dispute.[3] The dispute concerned what to do with the eggs. The wife initially wanted the frozen pre-embryos implanted in her but then decided she wanted them donated to childless couples.[4] The husband wanted them discarded.[5] The court ruled in favor of the husband, allowing the eggs to be discarded and destroyed, reasoning that his interest in not becoming a parent outweighed the interest of the wife, who wished to donate the embryos.[6]

The court held in Litowitz v. Litowitz that embryos could not be implanted in the wife post-divorce without the husband’s consent.[7] In this case, a couple was unable to have a child since the wife was unable to produce eggs or give birth.[8] They got eggs from a third party egg donor and fertilized them with the husband’s sperm.[9] Through this process, they had one child, but later got a divorce.[10] After the divorce, the mother sought to have the eggs implanted inside her in order to have another child.[11]

Courts seem to rule in favor of the party who chooses to have the eggs destroyed, perhaps as a consideration of the constitutional right to privacy.[12] The right to privacy of the parent choosing not to go forward with having a child is stronger than the other parent’s right to have a child.[13] As reproductive technology advances and becomes more common for couples facing challenges with childbirth, courts will continue to have to rule on cases involving this issue.

[1] Recent Case Law on Division of Frozen Embryos in Divorce Proceedings, (last visited Sept. 18, 2017).

[2] Davis v. Davis, 842 S.W.2d 588 (Tenn. 1992).

[3] Id.

[4] Id.

[5] Id.

[6] Id.

[7] Litowitz v. Litowitz, 146 Wn.2d 514, 515 (Wash. 2002).

[8] Id.

[9] Id.

[10] Id.

[11] Id.

[12] See supra note 2; see also supra note 7.

[13] See supra note 1.


Why Smartphones Have Not Outsmarted the Sun

By: Hayden-Anne Breedlove


As the summer months wind down, many last-minute vacationers try to squeeze in one more beach trip for a relaxing, work-free vacation. However, with today’s all-encompassing access to email, messages, and online work databases, it is easy to “forget” what a vacation is all about. Instead, many professionals spend more time replying to work emails than soaking up the sun. Lawyers’ use of smartphones is basically universal, with most attorneys using them for simple tasks like conducting legal research or scanning documents in depositions.[1] This makes work life easier by giving attorneys instant access to much-needed information. However, the vitamin D-deprived individuals who decide to multitask and work on a cell phone or computer at the beach face the pestering problem of being unable to see the screen in bright sunlight. Why is it that, in a world where smartphones have been a common accessory for years, technology companies have never implemented a way to view a phone screen in the sun?

The technology is out there, as seen in Amazon’s Kindle Paperwhite, an e-reader that sells itself on the feature that its screen can be easily viewed in the sun.[2] This e-reader employs an E Ink brand electronic paper display featuring sixteen shades of gray, making the text resemble a printed book and ultimately making it easy to read in the sun.[3] E Ink is a longtime partner of Amazon and the holder of many patents on the technology behind the Paperwhite.[4] However, to make the display its own, Amazon developed a layer of plastic that sits on top of the E Ink display and shines light down onto it, making the display look better and easier to read.[5]

In 2016, E Ink, along with multiple e-reader-producing companies, including Sony Electronics, Sony Corporation, Barnes & Noble Inc., LLC, and Inc., faced patent litigation brought by Research Frontiers, Inc. (RFI).[6] RFI is a corporation that has worked exclusively on “developing suspended particle technology applicable for use in display and light control applications.”[7] RFI alleged infringement of three patents involving the particle technology.[8] In particular, RFI alleged that E Ink infringed its “491 Patent,” entitled “Light Valve Employing a Film Comprising an Encapsulated Liquid Suspension, and Method of Making Such Film.”[9]

E Ink moved for summary judgment, arguing that there was no genuine dispute of material fact and that it was not infringing the 491 Patent.[10] The Court looked to other patents and to the basis of the contested patents.[11] It determined that genuine issues of material fact remained in dispute and thus denied the defendants’ motion for summary judgment.[12] The case serves as an example of the litigation surrounding patent rights and the implementation of patented technology.

The question remains why this technology has not been implemented in cell phone display screens. There appear to be difficulties combining it with full-color display technology.[13] However, through its partnership with E-Ink, Amazon holds technology that could lead to the development of a fast, high-contrast display.[14] This will be an interesting topic to follow as the technology advances and already-existing innovations find their way into a device that already makes our lives simpler. Overall, too much sunshine might no longer be a valid excuse not to answer a work-related email.



[1] Legal Research and Law Library Management (Law Journal Press 2015).

[2] Id. at 6.

[3] Id.

[4] See Christopher Mims, Amazon is Working on Displays that Apple and Samsung Can’t Match, Quartz (Aug. 06, 2013),

[5] See id.

[6]  See id.

[7] Research Frontiers, Inc. v. E Ink Corp., 2016 U.S. Dist. LEXIS 44547, at *2 (2016).

[8] Id. at 3.

[9] Id.

[10] Id.

[11] Id. at 41.

[12] Id.

[13] See supra note 4.

[14] Id.


Artificial Intelligence May Be the Key To Combating Users’ Abuse of “Facebook Live”


By: Kathleen Pulver,

“Facebook Live” was created with the intention of allowing users to engage more thoroughly with their followers, connect with others instantaneously, and tell stories their own way.[1] Many users, including major news outlets, used the Live function to stream real-time footage of the protests and marches surrounding this year’s inauguration and the weeks that followed.[2] During these protests, some peaceful and some not, the Live function allowed people around the world to witness the action as it unfolded and share their thoughts; more than fifteen thousand people commented on one ABC News video alone.[3] Overall, most people’s experience with Facebook Live has been positive, and it has been used as intended. In a horrifying new trend, however, the function has become a way for people to broadcast terrifying displays of violence against others, and even themselves.[4]

The examples of these horrifying uses abound. In December of 2016, a twelve-year-old girl used Facebook Live to stream her suicide as she hanged herself in her family’s backyard.[5] The broadcast went on for more than twenty minutes and remained visible on the girl’s Facebook page until late that night, when a police officer from California notified the local police chief in Georgia.[6] The police have been working ever since to have the video removed.[7] In another well-publicized event, in January 2017, four teenagers tied up and tortured another teen while live streaming the attack via Facebook Live.[8] The teens even spoke directly into the camera and commented on the lack of support they were receiving in the comments to the video.[9] There are hundreds of other examples of violence being intentionally or accidentally recorded via Facebook Live and then streamed for the world, or at least the users’ “friends,” to see. Many people have expressed their outrage with the social media giant, citing Facebook’s inability to control the content that is shown and to stop the violence from occurring.[10]

The legal challenge presented by live-streamed video is drawing the line between too much protection, which would mean banning all content altogether, and no protection, which would allow these incidents to occur without ramifications or any ability to stop them. Some people expect Facebook to let them post whatever they like, upholding their First Amendment right to free speech, while others argue that uncontrolled posting could lead to violent or inappropriate content being shown to a child. Facebook has already instituted reporting tools in the Live function, similar to those available for normal posts.[11] Facebook currently has a team in place 24 hours a day, 7 days a week, to monitor reported posts, and if the content is found to violate Facebook’s standards, “it will be removed.”[12] The problem is that not everyone reports videos. For example, on December 28, 2016, a woman died while live streaming a video of herself playing with her toddler.[13] The woman was not the victim of a violent crime; she simply succumbed to a medical condition.[14] The video showed her beginning to sweat, getting dizzy, and eventually passing out, but no one reported it.[15] Had someone reported the video as inappropriate or as containing disturbing content, a message would have been sent to the Facebook review team, and help might have been provided before her death.[16] Facebook has been struggling for months to find a way to address this problem, but it thinks it may have found a solution in artificial intelligence.[17]

Artificial intelligence is the “science and engineering of making intelligent machines.”[18] Facebook already uses artificial intelligence to collect data about users and create ads targeted to each user.[19] Its computer systems use algorithms to classify data on their own and determine what to do with it.[20] In the same way, artificial intelligence could classify live data as unacceptable under Facebook’s conduct standards and report it, or classify it as acceptable and allow the post to continue. It remains to be seen whether artificial intelligence will be able to identify inappropriate content quickly enough to be useful for the Live function. Ideally, the artificial intelligence will be smart enough to detect whether content is actually inappropriate or dangerous, rather than broadly censoring content for fear it may reach a dangerous level. If artificial intelligence can walk the fine line between excessive censorship on one side and blocking violent content or summoning help as needed on the other, it will likely be the best available solution to the legal problems presented by live-streamed video.
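To make the classify-then-report idea concrete, the following is a minimal, purely hypothetical sketch of automated triage over a live stream’s comment feed. The flagged terms, weights, and threshold are invented for illustration; Facebook’s actual systems are trained models operating over video, audio, and text, not keyword scores.

```python
# Hypothetical content-triage sketch. All terms, weights, and the
# threshold below are illustrative assumptions, not Facebook's system.

FLAG_WEIGHTS = {
    "weapon": 0.6,
    "attack": 0.5,
    "suicide": 0.9,
    "help me": 0.7,
}
REVIEW_THRESHOLD = 0.8

def should_escalate(comments):
    """Return True if the stream should be routed to a human review team."""
    score = 0.0
    for comment in comments:
        text = comment.lower()
        for term, weight in FLAG_WEIGHTS.items():
            if term in text:
                score += weight
    return score >= REVIEW_THRESHOLD
```

The design point this sketch illustrates is the one the post describes: rather than blocking content outright, the classifier only decides what gets surfaced for review, which is how such a system could avoid over-censorship while still summoning help.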



[1] See Facebook Live,

[2] See, e.g., BuzzFeed News, Facebook (Nov. 9, 2016); ABC News, Facebook (Sep. 21, 2016).

[3] See ABC News, Facebook (Sep. 21, 2016),

[4] See, e.g., Monica Akhtar, Facebook Live captures Chicago shooting that killed toddler, Washington Post (Feb. 15, 2017, 11:14 AM),

[5] See Corey Charlton, SUICIDE STREAMED ONLINE Girl, 12, streams her own suicide on social media for 20 minutes after being ‘sexually abused by a relative’ – and cops are powerless to take it down, The Sun (Jan. 12, 2017, 8:51 AM),

[6] See id.

[7] See id.

[8] See Jason Meisner, William Lee, & Steve Schmadeke, Brutal Facebook Live attack brings hate-crime charges, condemnation from White House, Chicago Tribune (Jan. 6, 2017, 6:59 AM),

[9] Id.

[10] See, e.g., Cleve R. Wootson, Jr., ‘How do you just sit there?’ Family slams viewers who did nothing as woman died on Facebook Live, Washington Post (Jan. 3, 2017),

[11] See Facebook Live,

[12] Id.

[13] See supra note 10.

[14] See id.

[15] See id.

[16] See id.

[17] See Alex Kantrowitz, We Talked To Mark Zuckerberg About Globalism, Protecting Users, And Fixing News, BuzzFeed News (Feb. 16, 2017, 4:01 PM),

[18] John McCarthy, What is Artificial Intelligence?, (last updated Nov. 12, 2007).

[19] See Bernard Marr, 4 Mind-Blowing Ways Facebook Uses Artificial Intelligence, Forbes (Dec. 29, 2016, 1:01 AM),

[20] See id.


The Future of Self-Driving Cars


By: Genevieve deGuzman,

The race to develop autonomous technology has led to the fast-growing rise of autonomous, or self-driving, cars. Automakers, technology companies, startups, and even governments are getting involved.[1] So how do these self-driving cars actually work? Each automaker uses different technology, but these cars generally rely on either computer-vision-based detection or laser beams to generate a 360-degree image of the car’s surroundings. Multiple cameras and radar sensors measure the distance from the car to various objects and obstacles, while a main computer analyzes the data from the sensors and cameras, such as the size and speed of nearby objects, and compares it against stored maps to assess current conditions and predict likely behavior.[2] The cameras also detect traffic lights and signs and help recognize moving objects.[3]
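The sense-fuse-predict loop described above can be sketched very loosely in code. This is a hypothetical simplification under stated assumptions (invented class, field names, and thresholds), not any automaker’s actual software, which fuses far richer sensor data against high-definition maps.

```python
from dataclasses import dataclass

# Hypothetical sketch: a camera label and a radar/lidar range reading
# are fused into one detection, and the "main computer" projects each
# object forward in time to predict likely behavior.

@dataclass
class Detection:
    label: str          # from camera-based vision, e.g. "vehicle"
    distance_m: float   # range measured by radar or lidar
    closing_mps: float  # speed toward the car (positive = approaching)

def predicted_gap(det: Detection, horizon_s: float) -> float:
    """Project the gap to this object horizon_s seconds from now."""
    return det.distance_m - det.closing_mps * horizon_s

def objects_of_concern(detections, horizon_s=2.0, safe_gap_m=5.0):
    """Labels of objects predicted to breach the safe following gap."""
    return [d.label for d in detections
            if predicted_gap(d, horizon_s) < safe_gap_m]
```

Here, an approaching vehicle twenty meters ahead would be flagged for the planner, while a slow-moving cyclist a hundred meters away would not.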

Automakers such as Tesla, General Motors, Toyota, Lexus, Ford, Fiat Chrysler, Honda, Volvo, and Volkswagen, and technology companies such as Google, Apple, nuTonomy, and Intel, have all joined the race to develop self-driving cars.[4] This push may be driven by Uber, a “digital hybrid of private and public transport” that has made “ride-hailing” so comparatively convenient and cheap that it threatens the car-ownership model.[5] Further, with technology becoming increasingly integrated into, and nearly inseparable from, consumer life, self-driving cars offer efficiency and convenience, allowing the “driver” to interact with a phone and other technology while safely getting to the destination.

In 2016, Uber’s self-driving truck made its first delivery, driving 120 miles with 50,000 cans of beer and changing the future of truck driving and deliveries.[6] Later that year, Uber also tested its autonomous driving technology in San Francisco until California’s Department of Motor Vehicles revoked the registrations of sixteen Uber cars for not being marked as test cars.[7] Uber contended, however, that its cars did not need self-driving car permits because they were operated with a “safety driver” behind the wheel; the cars’ programming still requires a person to monitor the vehicle and works more like advanced driver-assist technologies, such as Tesla’s Autopilot.[8] The revocation of the registrations may have been made in light of the deadly crash of a Tesla Model S, which is not a self-driving car but contains self-driving features to assist drivers. Tesla ultimately attributed that accident and two others to “human error, saying the drivers 1) were inattentive, 2) disabled the automation and 3) misused the Summon feature and didn’t heed the vehicle’s warnings.”[9] Unlike Tesla’s Autopilot, which focuses on driver assistance, Google’s Waymo is focused on creating a fully autonomous car, but it has not yet brought one to market.

Some self-driving cars have already hit the market, and expectedly, there is a push for standardized national self-driving vehicle regulation. The United States Department of Transportation (DOT) released its Federal Automated Vehicles Policy in September 2016, setting guidelines for highly automated vehicles (HAVs) and for lower levels of automation, such as some of the driver-assistance systems already deployed by automakers.[10] The policy includes a 15-point safety assessment to “set clear expectations for manufacturers developing and deploying automated vehicle technologies,” a section that draws a clear distinction between Federal and State responsibilities for regulating automated vehicles, and a discussion of current and modern regulatory tools.[11] Alongside the guidelines, the DOT also issued proposed rules for cars to talk to one another to prevent accidents, which “illustrate the government’s embrace of car-safety technology after years of hesitation, even as distractions in vehicles contributed to the biggest annual percentage increase of road fatalities in 50 years,” in an attempt to reduce vehicle deaths and crashes.[12] The cars would use radio communications to send alerts to devices in the cars, warning drivers of collision risks, a car in the driver’s blind spot, oncoming traffic, and traffic slowing or stopping.[13]
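The vehicle-to-vehicle alerting scheme the DOT proposed can be illustrated with a toy sketch: each car broadcasts a basic safety message over radio, and a receiving car turns nearby messages into driver alerts. The field names and thresholds below are invented for illustration and do not reflect the actual message format in the proposed rules.

```python
from dataclasses import dataclass

# Hypothetical vehicle-to-vehicle (V2V) alerting sketch. Fields and
# thresholds are illustrative assumptions, not the DOT's proposed spec.

@dataclass
class SafetyMessage:
    sender_id: str
    distance_m: float    # gap to the sending car
    closing_mps: float   # positive when the gap is shrinking
    in_blind_spot: bool

def driver_alerts(messages, warn_time_s=3.0):
    """Map received safety messages to driver warnings."""
    alerts = []
    for msg in messages:
        if msg.in_blind_spot:
            alerts.append((msg.sender_id, "blind spot"))
        elif msg.closing_mps > 0 and msg.distance_m / msg.closing_mps < warn_time_s:
            alerts.append((msg.sender_id, "collision risk"))
    return alerts
```

The design choice worth noting is that the alerts are computed from what other cars report about themselves, not from the receiving car’s own sensors, which is precisely what distinguishes V2V communication from the onboard sensing described earlier.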

Although the DOT has prompted some standardization, its guidance says nothing about “how it is tested (or even defined), how cars using it will operate, or even who should settle these questions.”[14] On February 14, 2017, the House Subcommittee on Digital Commerce and Consumer Protection held a hearing on the deployment of autonomous cars at which representatives of General Motors, Toyota, Volvo, and Lyft testified about how the federal government should regulate the new technology.[15] Automakers and technology companies developing autonomous technology want federal intervention to provide a “broad, consistent framework for testing and deploying their robots,” fearing that the states will create a “patchwork of regulations.”[16] Federal regulation would give manufacturers greater flexibility and wide latitude in how they prove the safety of autonomous driving technology.[17] Congress will have to decide how to measure the safety of these cars and dictate the standards they must meet, as the age of the robocar and its transition into consumers’ lives seems inevitable.




[1] Matt McFarland, 2016: A tipping point for excitement in self-driving cars, CNN Tech (Dec. 21, 2016), (last visited Feb. 18, 2017).

[2] See Guilbert Gates et al., When Cars Drive Themselves, NY Times (Dec. 14, 2016), (last visited Feb. 18, 2017).

[3] See id.

[4] See id.

[5] See John Gapper, Why would you want to buy a self-driving car?, Financial Times (Dec. 7, 2016), (last visited Feb. 18, 2017).

[6] See Alex Davies, Uber’s Self-Driving Truck Makes its First Delivery: 50,000 Beers, Wired (Oct. 25, 2016), (last visited Feb. 18, 2017).

[7] See Avie Schneider, Uber Stops Self-Driving Test In California After DMV Pulls Registrations, NPR (Dec. 21, 2016), (last visited Feb. 18, 2017).

[8] See id.

[9] See id.

[10] See U.S. Dep’t of Transp., Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety (2016), available at

[11] See id.

[12] See Cecilia Kang, Cars Talking to One Another? They Could Under Proposed Safety Rules, NY Times (Dec. 13, 2016), (last visited Feb. 18, 2017).

[13] See id.

[14] See Alex Davies, Congress Could Make Self-Driving Cars Happen—or Ruin Everything, Wired (Feb. 15, 2017), (last visited Feb. 18, 2017).

[15] See id.

[16] See id.

[17] See id.


Inspection or Detention



By: Eleanor Faust,


Reports have surfaced that in the days preceding President Trump’s executive order effectuating an immigration ban, the Council on American-Islamic Relations (CAIR) filed legal complaints concerning hostile interrogations by Customs and Border Protection agents.[1] The complaints allege that agents demanded that travelers unlock their phones and provide social media account names and passwords.[2] Courts have held that customs agents have the authority to manually search devices at the border as long as the searches are not made solely on the basis of race or national origin.[3] This does not mean travelers are required to unlock their phones, but if they refuse, they risk being detained for hours for not complying with the agent’s request.[4]

When returning home from a trip abroad, you expect to feel welcomed upon arrival, but that has not been the case for many travelers recently. When Sidd Bikkannavar got off the plane in Houston after a personal trip to South America, he was detained by U.S. Customs and Border Protection.[5] Bikkannavar is not a foreign traveler visiting the United States; he is a natural-born U.S. citizen who works at NASA’s Jet Propulsion Laboratory. He has also undergone a background check and is enrolled in Global Entry, which allows expedited entry into the United States.[6] While he was detained, customs agents demanded his phone and access PIN without giving him any information as to why he was being questioned.[7] A major concern is that Bikkannavar carried a NASA-issued phone that could well have contained sensitive information that should not have been shared.[8] For a number of professionals, these types of border searches compromise the confidentiality of information.[9] For example, searching the phone of a doctor or lawyer can reveal private doctor-patient or attorney-client information.[10]

Although there is no legal mechanism to force individuals to unlock their phones, customs agents’ broad authority to detain travelers can be intimidating enough to make a person unlock a phone to avoid trouble.[11] Homeland Security Secretary John Kelly is looking to expand customs agents’ authority and is pushing to obtain all international visitors’ social media passwords and financial records upon their arrival in the country.[12] At a meeting with Congress, Kelly told the House Homeland Security Committee, “We want to get on their social media, with passwords: What do you do, what do you say? If they don’t want to cooperate then you don’t come in.”[13] In the meantime, Hassan Shibly, the director of CAIR’s Florida branch, advises American citizens to remember that “you must be allowed entrance to the country. Absolutely don’t unlock the phone, don’t provide social media accounts, and don’t answer questions about your political or religious beliefs. It’s not helpful and it’s not legal.”[14]




[1] See Russell Brandom, Trump’s executive order spurs Facebook and Twitter checks at the border, Verge (Jan. 30, 2017, 9:55 AM),

[2] See id.

[3] See Loren Grush, A US-born NASA scientist was detained at the border until he unlocked his phone, Verge (Feb. 12, 2017, 12:37 PM),

[4] See id.

[5] See id.

[6] See id.

[7] See Seth Schoen, Marcia Hofmann, and Rowan Reynolds, Defending Privacy at the US Border: A Guide for Travelers Carrying Digital Devices, Electronic Frontier Foundation (Dec. 2011),

[8] Id.

[9] See id.

[10] See id.

[11] See Brandom, supra note 1.

[12] See Alexander Smith, US Visitors May Have to Hand Over Social Media Passwords: DHS, NBC News (Feb. 8, 2017, 7:51 AM),

[13] See id.

[14] See Grush, supra note 3.


