Richmond Journal of Law and Technology

The first exclusively online law review.


Artificial Intelligence May Be the Key To Combating Users’ Abuse of “Facebook Live”


By: Kathleen Pulver,

“Facebook Live” was created with the intention of allowing users to engage more thoroughly with their followers, connect with others instantaneously, and tell stories their own way.[1] Many users, including major news outlets, used the Facebook Live function to stream real-time footage of the protests and marches surrounding this year’s inauguration and the weeks that followed.[2] During these protests, some peaceful and some not, the live function allowed people around the world to witness the action as it unfolded and share their thoughts. More than fifteen thousand people commented on one ABC News video alone.[3] Overall, most people’s experience with Facebook Live has been positive, and it has been used as intended. However, in a horrifying new trend, the function has become a way for people to showcase terrifying displays of violence against others, and even against themselves.[4]

The examples of these horrifying uses abound. In December 2016, a twelve-year-old girl used Facebook Live to stream her suicide as she hanged herself in her family’s backyard.[5] The broadcast went on for more than twenty minutes and remained visible on the young girl’s Facebook page until late that night, when a police officer from California notified the local police chief in Georgia.[6] The police have been working ever since to have the video removed.[7] In another well-publicized event, in January 2017, four teenagers tied up and tortured another teen victim while live streaming the attack via Facebook Live.[8] The teens even spoke directly into the camera and commented on the lack of attention they were receiving in the comments to the video.[9] There are hundreds of other examples of violence being intentionally or accidentally recorded via Facebook Live and then streamed for the world, or at least the users’ “friends,” to see. Many people have expressed their outrage with the social media giant, voicing concern over Facebook’s inability to control the content that is shown and its inability to do anything to stop the violence from occurring.[10]

The legal challenge presented by live streaming video is drawing the line between too much protection, which would mean banning such content altogether, and no protection, which would allow these incidents to occur without ramifications or any ability to stop them. Some people expect Facebook to let them post whatever they like, treating the platform as an extension of their First Amendment right to free speech, while others argue that uncontrolled posting could lead to violent or inappropriate content being shown to a child. Facebook has already instituted reporting tools in the Live function, similar to the reporting tools available for normal posts.[11] Facebook currently has a team in place 24 hours a day, 7 days a week, to monitor reported posts, and if the content is found to violate Facebook’s standards, “it will be removed.”[12] The problem is, not everyone reports videos. For example, on December 28, 2016, a woman died while live streaming a video of herself playing with her toddler.[13] The woman was not the victim of a violent crime; she simply succumbed to a medical condition.[14] The video showed her beginning to sweat, getting dizzy, and eventually passing out, but no one reported the video.[15] Had someone reported the video as inappropriate or as containing disturbing content, a message would have been sent to the Facebook review team, and help might have been provided before her death.[16] Facebook has been struggling for months to find a way to address this problem, but it thinks it may have found a solution in artificial intelligence.[17]

Artificial intelligence is the “science and engineering of making intelligent machines.”[18] Facebook already uses artificial intelligence to collect data about users and serve targeted ads to each user.[19] Its computer systems can use algorithms to classify data on their own and determine what to do with it.[20] In this way, artificial intelligence could be used to classify live content as unacceptable under Facebook’s conduct standards and have it reported, or to classify it as acceptable and allow the broadcast to continue. It remains to be seen whether artificial intelligence can identify inappropriate content quickly enough to address the problems of the Facebook Live function. Ideally, the artificial intelligence will be smart enough to detect whether content is genuinely inappropriate or dangerous, instead of broadly censoring content for fear it may reach a dangerous level. If artificial intelligence can walk the line carefully between too much censorship and blocking violent content or providing help as needed, it will likely be the best available solution to the legal problems presented by live streaming video.
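The classify-then-report workflow described above can be sketched in a few lines of Python. This is a purely illustrative toy, not Facebook's actual system: the `score_frame` stub and the threshold values are invented, and a real classifier would be a trained model operating on video and audio. The point is the routing logic: auto-remove only high-confidence violations, escalate the gray area to the 24/7 human review team, and let everything else stream.

```python
def score_frame(frame_features: dict) -> float:
    """Stand-in for a trained model; returns the estimated probability
    that the content violates community standards (0.0 = clearly fine,
    1.0 = clearly a violation). Purely hypothetical."""
    return frame_features.get("violence_signal", 0.0)

def route(score: float, remove_at: float = 0.95, review_at: float = 0.6) -> str:
    """Auto-flag only high-confidence violations; send uncertain cases
    to human reviewers; otherwise allow the stream to continue."""
    if score >= remove_at:
        return "remove"
    if score >= review_at:
        return "human_review"
    return "allow"

print(route(score_frame({"violence_signal": 0.97})))  # remove
print(route(score_frame({"violence_signal": 0.70})))  # human_review
print(route(score_frame({})))                         # allow
```

The middle tier is what distinguishes this design from broad censorship: rather than blocking everything the model is unsure about, uncertain streams go to the existing human review team.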



[1] See Facebook Live,

[2] See, e.g., BuzzFeed News, Facebook (Nov. 9, 2016); ABC News, Facebook (Sep. 21, 2016),

[3] See ABC News, Facebook (Sep. 21, 2016),

[4] See, e.g., Monica Akhtar, Facebook Live captures Chicago shooting that killed toddler, Washington Post (Feb. 15, 2017, 11:14 AM),

[5] See Corey Charlton, SUICIDE STREAMED ONLINE Girl, 12, streams her own suicide on social media for 20 minutes after being ‘sexually abused by a relative’ – and cops are powerless to take it down, The Sun (Jan. 12, 2017, 8:51 AM),

[6] See id.

[7] See id.

[8] See Jason Meisner, William Lee, & Steve Schmadeke, Brutal Facebook Live attack brings hate-crime charges, condemnation from White House, Chicago Tribune (Jan. 6, 2017, 6:59 AM),

[9] Id.

[10] See, e.g., Cleve R. Wootson, Jr., ‘How do you just sit there?’ Family slams viewers who did nothing as woman died on Facebook Live, Washington Post (Jan. 3, 2017),

[11] See Facebook Live,

[12] Id.

[13] See supra note 10.

[14] See id.

[15] See id.

[16] See id.

[17] See Alex Kantrowitz, We Talked To Mark Zuckerberg About Globalism, Protecting Users, And Fixing News, BuzzFeed News (Feb. 16, 2017, 4:01 PM),

[18] John McCarthy, What is Artificial Intelligence?, (last updated Nov. 12, 2007).

[19] See Bernard Marr, 4 Mind-Blowing Ways Facebook Uses Artificial Intelligence, Forbes (Dec. 29, 2016, 1:01 AM),

[20] See id.

Image Source:

The Future of Self-Driving Cars


By: Genevieve deGuzman,

The race to develop autonomous technology has led to the fast-growing rise of autonomous or self-driving cars. Automakers, technology companies, startups, and even governments are getting involved.[1] So how do these self-driving cars actually work? Each automaker uses different technology, but the basic architecture is similar: either computer-vision-based detection or laser beams generate a 360-degree image of the car’s surrounding area; multiple cameras and radar sensors measure the distance from the car to various objects and obstacles; and a main computer analyzes the data from the sensors and cameras, such as the size and speed of nearby objects, comparing it against stored maps to assess current conditions and predict likely behavior.[2] The cameras also detect traffic lights and signs and help recognize moving objects.[3]
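A tiny sketch can make the "main computer analyzes data from the sensors" step concrete. Everything here is a simplification invented for illustration, not any automaker's real stack: real systems fuse many sensors probabilistically, and the weights, distances, and the two-second braking threshold below are arbitrary.

```python
def fuse_distance(camera_m: float, radar_m: float, radar_weight: float = 0.7) -> float:
    """Blend a camera estimate and a radar estimate of the gap to the car
    ahead. Radar is weighted more heavily because it measures range directly."""
    return radar_weight * radar_m + (1 - radar_weight) * camera_m

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:          # the gap is steady or growing
        return float("inf")
    return gap_m / closing_speed_mps

gap = fuse_distance(camera_m=32.0, radar_m=30.0)        # 30.6 m
ttc = time_to_collision(gap, closing_speed_mps=10.2)    # 3.0 s
print("brake" if ttc < 2.0 else "maintain")             # maintain
```

The same pattern, repeated across every tracked object and checked against the stored map, is what lets the car "assess current conditions and predict likely behavior."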

Automakers such as Tesla, General Motors, Toyota, Lexus, Ford, Fiat Chrysler, Honda, Volvo, and Volkswagen, and technology companies such as Google, Apple, nuTonomy, and Intel, have all joined the race to develop self-driving cars.[4] This push may be driven by Uber, which is a “digital hybrid of private and public transport” and has made “ride-hailing” so comparatively convenient and cheap that it threatens the car ownership industry.[5] Further, with technology becoming increasingly integrated into and almost inseparable from consumer life, self-driving cars are efficient and convenient, allowing the “driver” to interact with their phone and other technology while safely getting to their destination.

In 2016, Uber’s self-driving truck made its first delivery, driving 120 miles with 50,000 cans of beer and changing the future of truck driving and deliveries.[6] Later that year, Uber also tested its autonomous driving technology in San Francisco until California’s Department of Motor Vehicles revoked the registrations of sixteen Uber cars because they were not marked as test cars.[7] Uber contended, however, that its cars did not need self-driving car permits because they were operated with a “safety driver” behind the wheel; the cars’ programming still requires a person behind the wheel to monitor the car, so the system works more like advanced driver-assist technologies, such as Tesla’s Autopilot.[8] The revocation of the registrations may have been made in light of the deadly crash of a Tesla Model S, which is not a self-driving car but contains self-driving features to assist drivers. Tesla ultimately attributed this accident and two other accidents to “human error, saying the drivers 1) were inattentive, 2) disabled the automation and 3) misused the Summon feature and didn’t heed the vehicle’s warnings.”[9] Unlike Tesla’s Autopilot, which focuses on driver assistance, Google’s Waymo is focused on creating a fully autonomous car but has not yet brought one to market.

Some self-driving cars have already hit the market, and expectedly, there is a push for standardized national self-driving vehicle regulation. The United States Department of Transportation (DOT) released its Federal Automated Vehicles Policy in September 2016, setting guidelines for highly automated vehicles (HAVs) and lower levels of automation, such as some of the driver-assistance systems already deployed by automakers.[10] The policy includes a 15-point safety assessment to “set clear expectations for manufacturers developing and deploying automated vehicle technologies,” a section that draws a clear distinction between federal and state responsibilities for regulating automated vehicles, and a discussion of current and new regulatory tools.[11] Alongside the guidelines, the DOT also issued proposed rules for cars to talk to each other to prevent accidents, which “illustrate the government’s embrace of car-safety technology after years of hesitation, even as distractions in vehicles contributed to the biggest annual percentage increase of road fatalities in 50 years,” in an attempt to reduce vehicle deaths and crashes.[12] The cars would use radio communications to send alerts to devices in the cars, warning drivers of collision risks, a car in the driver’s blind spot, oncoming traffic, and traffic slowing or stopping.[13]
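The car-to-car alerts described above can be sketched as messages plus a receiver-side rule. This is an invented illustration, not the DOT's proposed standard: real vehicle-to-vehicle messages carry richer state (GPS position, heading, brake status) in a standardized binary format, whereas the fields and the warning rule below are simplified to one dimension for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SafetyMessage:
    """A toy stand-in for a broadcast vehicle-safety message."""
    vehicle_id: str
    position_m: float   # distance along the lane, simplified to 1-D
    speed_mps: float

def warn(receiver: SafetyMessage, sender: SafetyMessage,
         min_gap_m: float = 25.0) -> Optional[str]:
    """Raise a slowing-traffic warning when a broadcast reveals a much
    slower vehicle closely ahead of the receiver."""
    gap = sender.position_m - receiver.position_m
    if 0 < gap < min_gap_m and sender.speed_mps < receiver.speed_mps:
        return f"slow vehicle ahead: {sender.vehicle_id}"
    return None

me = SafetyMessage("car-A", position_m=0.0, speed_mps=30.0)
ahead = SafetyMessage("car-B", position_m=20.0, speed_mps=5.0)
print(warn(me, ahead))  # slow vehicle ahead: car-B
```

Because every equipped car broadcasts continuously, a receiver can be warned about hazards, such as stopped traffic beyond a curve, that its own cameras and radar cannot yet see.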

Although the DOT has provided some standardization, the guidelines say nothing about “how it is tested (or even defined), how cars using it will operate, or even who should settle these questions.”[14] On February 14, 2017, the House Subcommittee on Digital Commerce and Consumer Protection held a hearing on the deployment of autonomous cars, where representatives of General Motors, Toyota, Volvo, and Lyft testified about how they think the federal government should regulate the new technology.[15] Automakers and technology companies developing autonomous technology want federal intervention to provide a “broad, consistent framework for testing and deploying their robots,” fearing that states will create a “patchwork of regulations.”[16] Federal regulators, they hope, would allow greater flexibility and wide latitude in how to prove the safety of autonomous driving technology.[17] Congress will have to decide how to measure the safety of these autonomous cars and dictate the standards of safety they must meet, as the age of the robocar and its transition into consumers’ lives seems inevitable.




[1] Matt McFarland, 2016: A tipping point for excitement in self-driving cars, CNN Tech (Dec. 21, 2016), (last visited Feb. 18, 2017).

[2] See Guilbert Gates et al., When Cars Drive Themselves, NY Times (Dec. 14, 2016), (last visited Feb. 18, 2017).

[3] See id.

[4] See id.

[5] See John Gapper, Why would you want to buy a self-driving car?, Financial Times (Dec. 7, 2016), (last visited Feb. 18, 2017).

[6] See Alex Davies, Uber’s Self-Driving Truck Makes its First Delivery: 50,000 Beers, Wired (Oct. 25, 2016), (last visited Feb. 18, 2017).

[7] See Avie Schneider, Uber Stops Self-Driving Test In California After DMV Pulls Registrations, NPR (Dec. 21, 2016), (last visited Feb. 18, 2017).

[8] See id.

[9] See id.

[10] See U.S. Dep’t of Transp., Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety (2016), available at

[11] See id.

[12] See Cecilia Kang, Cars Talking to One Another? They Could Under Proposed Safety Rules, NY Times (Dec. 13, 2016), (last visited Feb. 18, 2017).

[13] See id.

[14] See Alex Davies, Congress Could Make Self-Driving Cars Happen—or Ruin Everything, Wired (Feb. 15, 2017), (last visited Feb. 18, 2017).

[15] See id.

[16] See id.

[17] See id.

Image Source:

Inspection or Detention



By: Eleanor Faust,


Reports have surfaced that in the days following President Trump’s executive order effectuating an immigration ban, the Council on American-Islamic Relations (CAIR) filed legal complaints concerning hostile interrogations by Customs and Border Patrol agents.[1] The complaints allege that the agents demanded that travelers unlock their phones and provide social media account names and passwords.[2] Courts have held that customs agents have the authority to manually search devices at the border as long as the searches are not made solely on the basis of race or national origin.[3] This does not mean that travelers are required to unlock their phones, but if they refuse, they run the risk of being detained for hours for not complying with the agent’s request.[4]

When returning home from a trip abroad, you expect to feel welcomed upon arrival, but that has not been the case for many recently. When Sidd Bikkannavar got off the plane in Houston after a personal trip to South America, he was detained by U.S. Customs and Border Patrol.[5] Bikkannavar is not a foreign traveler visiting the United States. He is a natural-born U.S. citizen who works at NASA’s Jet Propulsion Laboratory. He has also undergone a background check and is enrolled in Global Entry, which allows expedited entry into the United States.[6] While he was detained, the customs agents demanded his phone and access PIN without giving him any information as to why he was being questioned.[7] A major concern is that Bikkannavar carried a NASA-issued phone that very well could have contained sensitive information that should not have been shared.[8] For a number of professionals, these types of border searches compromise the confidentiality of information.[9] For example, searching the phone of a doctor or lawyer can reveal private doctor-patient or attorney-client information.[10]

Although there is no legal mechanism to make individuals unlock their phones, customs agents have broad authority to detain travelers, which can often be intimidating enough to make a person unlock their phone to avoid trouble.[11] Homeland Security Secretary John Kelly is looking to expand customs agents’ authority and is pushing to obtain all international visitors’ social media passwords and financial records upon their arrival in the country.[12] At a meeting with Congress, Kelly told the House Homeland Security Committee, “We want to get on their social media, with passwords: What do you do, what do you say? If they don’t want to cooperate then you don’t come in.”[13] In the meantime, Hassan Shibly, the director of CAIR’s Florida branch, advises American citizens to remember that, “you must be allowed entrance to the country. Absolutely don’t unlock the phone, don’t provide social media accounts, and don’t answer questions about your political or religious beliefs. It’s not helpful and it’s not legal.”[14]




[1] See Russell Brandom, Trump’s executive order spurs Facebook and Twitter checks at the border, Verge (Jan. 30, 2017, 9:55 AM),

[2] See id.

[3] See Loren Grush, A US-born NASA scientist was detained at the border until he unlocked his phone, Verge (Feb. 12, 2017, 12:37 PM),

[4] See id.

[5] See id.

[6] See id.

[7] See Seth Schoen, Marcia Hofmann, and Rowan Reynolds, Defending Privacy at the US Border: A Guide for Travelers Carrying Digital Devices, Electronic Frontier Foundation (Dec. 2011),

[8] Id.

[9] See id.

[10] See id.

[11] See Brandom, supra note 1.

[12] See Alexander Smith, US Visitors May Have to Hand Over Social Media Passwords: DHS, NBC News (Feb. 8, 2017, 7:51 AM),

[13] See id.

[14] See Grush, supra note 3.

Image Source:

FAA Regulation Delays Rollout of Amazon Prime Air



By: Sophie Brasseux,


Along with Super Bowl LI came the typical array of Super Bowl ads. One ad that got a lot of attention this year belonged to Amazon. Amazon’s ad featured a woman ordering Doritos using her Amazon Echo.[i] As a Prime Air drone shows up with her delivery, a disclaimer airs stating, “Prime Air is not available in some states. Yet.”[ii]

After announcing the development of its drone delivery system this past July, Amazon completed the first test of the drones in December in the UK.[iii]

Amazon advertises Prime Air as a system in which drones would be able to get you your package in thirty minutes or less.[iv] Prime Air would deliver packages of up to five pounds and would include “sense and avoid” technology for improved safety and reliability.[v] The drones would take off and land vertically, with the ability to reach altitudes of 100 meters and speeds of 100 kph.[vi] Given the costs of operating them, the drones are designed as a “last resort” in Amazon’s “delivery hierarchy.”[vii] So far, Amazon’s website includes videos of the drones as well as an FAQ section, mostly about the testing in the UK.[viii]
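The advertised constraints can be expressed as a simple eligibility check. The function and field names below are invented for illustration and the only facts encoded are the figures stated above (five-pound payload, 100-meter altitude, 100 kph speed); Amazon's actual dispatch logic is not public.

```python
# Stated Prime Air constraints (per Amazon's published description).
MAX_PAYLOAD_LBS = 5.0
MAX_ALTITUDE_M = 100.0
MAX_SPEED_KPH = 100.0

def eligible_for_prime_air(weight_lbs: float, cruise_altitude_m: float,
                           cruise_speed_kph: float) -> bool:
    """Hypothetical check: does an order fit within the drone's
    advertised payload and flight envelope?"""
    return (weight_lbs <= MAX_PAYLOAD_LBS
            and cruise_altitude_m <= MAX_ALTITUDE_M
            and cruise_speed_kph <= MAX_SPEED_KPH)

print(eligible_for_prime_air(4.2, 90.0, 95.0))  # True
print(eligible_for_prime_air(7.5, 90.0, 95.0))  # False: over the 5 lb limit
```

A check like this is also where the "last resort" placement in the delivery hierarchy would plug in: only orders that pass it could even be considered for drone delivery.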

One might wonder why this U.S. company is testing in the UK. Back in June 2016, the Federal Aviation Administration published new rules, which took effect in late August.[ix] The new FAA rules replaced the temporary restrictions on commercial drone use, which had previously required companies to apply for a special permit to use a drone for their business.[x] The rules allow companies to use drones but require that the drone be kept within the operator’s line of sight during use.[xi] Another major restriction prohibits drones from flying over individuals not involved in the drone’s operation.[xii] These restrictions directly affect the way Amazon intended to use its Prime Air service, so it has moved its testing to the UK, where there are currently no such restrictions.[xiii]

Regulations also restrict the times of day commercial drones can be used, their flight patterns, and their altitude.[xiv] Additionally, to operate a commercial drone, the FAA requires a remote pilot certificate or a student private pilot’s license, neither of which is required to fly a drone for personal use.[xv] One notable benefit of the new FAA rules is that commercial operators no longer have to go through a legal proceeding to obtain FAA permission to operate.[xvi] The Consumer Technology Association has stated that the FAA has struck “an appropriate balance of innovation and safety” with its new rules, but that “additional steps are needed such as addressing ‘beyond-line-of-sight’ operations, which will be a true game changer.”[xvii]

At this time, it is unclear what next steps Amazon or the FAA plan to take to get Prime Air and other commercial drone services permitted in the United States. Given the current regulations, it is doubtful we will see these drones in the near future; however, since the technology has already been developed, it seems to be only a matter of time until your packages are delivered by drone.




[i] See Michelle Castillo, One of Amazon’s delivery drones showed up in a Super Bowl ad, CNBC (Feb. 6, 2017), available at

[ii] See id.

[iii] See id.; see also Luke Johnson, 9 things you need to know about the Amazon Prime Air delivery service, Digital Spy (Feb. 7, 2017), available at

[iv], Prime Air, available at

[v] See id.

[vi] See supra note 3.

[vii] See id.

[viii] See supra note 4.

[ix] See Martyn Williams, New FAA rules means you won’t get Amazon drone delivery anytime soon, PCWorld (Jun. 21, 2016), available at

[x] See id.

[xi] See id.

[xii] See id.

[xiii] See id.; see also supra note 3.

[xiv] See supra note 9.

[xv] See id.

[xvi] See id.

[xvii] See Nat Levy & Todd Bishop, FAA issues final commercial drone rules, restricting flights in setback for Amazon’s delivery ambitions, GeekWire (Jun 21, 2016), available at

Image Source:

Putting Words in your Mouth: The Evidentiary Impact of Emerging Voice Editing Software


By: Nick Mirra,


All you have in this life is your word. The human voice serves as the carrier for our words, thoughts, and feelings. Each of us is imparted with a unique voice that allows us to be identified within a group.[1] Our voice is our vocal fingerprint: every word that departs from our lips carries our exclusive trademark, marking the words as our own.[2] Because the uniqueness of voice is a phenomenon implicitly understood by all humans, our words have become intertwined with our identity. As a result of this interconnection between voice and identity, voice recordings are readily admitted into evidence, and they serve to relay information in any given case through our own words.

Technology has confounded the reliability of vocal identification before. For example, Alexander Graham Bell’s revolutionary invention of the telephone affected the use of vocal evidence in court.[3] Upon the advent of the telephone, testimony based on voice recognition became more complicated, because vocal communication was now possible over long distances while preserving relative clarity of voice: even though the correspondents may be miles apart, the parties can communicate with each other effectively.

The next hurdle for vocal evidence looms on the horizon. What if the proponent of a piece of evidence could introduce a voice recording that was clearly the voice of their opponent, but in reality the opponent was not the one speaking at all? Further, what if the opponents themselves were convinced it was in fact their voice, yet hotly contested that they ever said the words uttered on the recording? A new software program in development allows the user to put words in your mouth. Through this program, your own unique and identifiable voice becomes a marionette bending to the will of the puppeteer.

When Adobe unveiled its Project VoCo software at a live event in November 2016, it shocked the audience.[4] On a stage in front of spectators, an Adobe representative demonstrated the true power of the company’s newest technology.[5] VoCo is software that enables the user to make a computer say anything the user types into it.[6] This program is not akin to mere text-to-speech conversion software: VoCo can take typed text and convert it into human speech spoken in the voice of anyone the user has on file.[7] It can take a recording of a voice and change one or more words in a spoken sentence, or even create novel sentences altogether.[8] More specifically, VoCo ingests a twenty-minute audio sample, and anything the user types after that will be read back by the program in the speaker’s own voice.[9] Essentially, the software is Photoshop for the human voice.[10] As the software evolves, the length of the voice sample required will only shorten, and manipulating another’s voice will become increasingly simple.[11]
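The basic idea of reassembling recorded speech into sentences the speaker never said can be illustrated with a toy sketch. Adobe has not published VoCo's actual method, which operates on much finer-grained acoustic units than whole words; the word-level lookup below is a deliberately simplified stand-in, with placeholder strings in place of audio clips.

```python
def index_sample(transcript: str) -> dict:
    """Map each word in the recorded voice sample to its audio clip
    (a placeholder string here stands in for waveform data)."""
    return {w: f"<clip:{w}>" for w in transcript.lower().split()}

def synthesize(units: dict, new_sentence: str) -> str:
    """Concatenate stored clips to 'speak' text the person never uttered."""
    return " ".join(units[w] for w in new_sentence.lower().split())

units = index_sample("I never said that about the contract")
print(synthesize(units, "I said that about the contract"))
```

Even this crude version shows the evidentiary danger: the output is built entirely from the speaker's genuine voice, so each fragment would pass a listener's ear test, yet the assembled sentence was never spoken.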

The courts will soon be faced with software that shakes the principles of earwitness evidence. It is important for practitioners to be aware of Project VoCo so that they can respond competently to falsified evidence. The issues will be hard to detect, but VoCo offers a plausible explanation for how someone might put unfavorable words in an opponent’s mouth.




[1] See Sophie Scott, Why do Human Voices Sound the Way they do, BBC, (Dec. 1, 2009)

[2] See Gilbert v. Cal., 388 U.S. 263, 266 (1967).

[3] See, e.g., F.M. English, Annotation, Admissibility of sound recordings in evidence, 71 A.L.R.2d 1024 (enumerating instances where telephone calls and voice recordings appear in American Law Reports).

[4] See Adobe Creative Cloud, #VoCo. Adobe MAX 2016 (Sneak Peeks), YouTube (Nov. 4, 2016)

[5] See id.  

[6] See id.

[7] See Nick Statt, Adobe is Working on an Audio App that Lets You Add Words Someone Never Said, The Verge (Nov. 3, 2016)

[8] See id.

[9] See id.

[10] See id.

[11] See id.

Image Source:

Dial “A” for Alexa


By: Victoria Linney

Amazon Echo is a hands-free speaker that is controlled by your voice.[1] Echo answers to the name Alexa and plays music, reads audiobooks aloud, gives headlines, and does much more.[2] But is one of Alexa’s “skills” the ability to help solve a murder?

This question is being asked in connection with a murder investigation in Bentonville, Arkansas.[3] Police have issued a warrant asking Amazon to produce the transcripts, voice recordings, and other information that an Echo speaker may have recorded the night of the murder.[4] But the chances of Alexa being helpful in solving the murder are slim, because the Echo does not record everything you say in your home.[5] Instead, the Echo listens for “hot words” like the word “Alexa.”[6] While the Echo’s microphone is always on,[7] only upon hearing the word “Alexa” does the device begin recording, for about the amount of time it takes to make a request, and then it sends that audio to Amazon.[8] These recordings are then stored “until the user deletes them through the Echo smartphone app or on Amazon’s website.”[9] The user knows when the Echo is sending audio to Amazon because the ring on top of the device illuminates and turns blue.[10]
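The "hot word" behavior described above, which is why so little useful audio likely exists, can be sketched as a loop. This is an illustrative token-based toy, not Amazon's pipeline: a real device runs wake-word detection on raw audio and keeps only a short rolling buffer that is continuously overwritten; the buffer length and request window below are invented.

```python
from collections import deque

WAKE_WORD = "alexa"

def capture_requests(audio_tokens, buffer_len=3, request_len=4):
    """Simulate a wake-word device: everything before the wake word stays
    in a small local buffer that is overwritten and never transmitted;
    only the wake word plus the request that follows is sent upstream."""
    buffer = deque(maxlen=buffer_len)   # local-only, constantly discarded
    sent = []
    tokens = iter(audio_tokens)
    for token in tokens:
        if token.lower() == WAKE_WORD:
            request = [token] + [t for _, t in zip(range(request_len), tokens)]
            sent.append(" ".join(request))   # only this window leaves the device
        else:
            buffer.append(token)
    return sent

stream = "dinner chat alexa play some jazz music more chat".split()
print(capture_requests(stream))  # ['alexa play some jazz music']
```

On this model, the dinner-table conversation before and after the wake word never leaves the device, which is why investigators are unlikely to find a continuous recording of the night in question.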

Even though there is only a slim chance that the Echo could help solve the murder, Amazon has refused to turn over the data to the prosecutor.[11] Amazon released a statement stating “Amazon will not release customer information without a valid and binding legal demand properly served on us. Amazon objects to overbroad or otherwise inappropriate demands as a matter of course.”[12]

The prosecutor’s request for the Echo data has brought the privacy implications of voice-activated speakers and other smart technology to the forefront of legal discussion.[13] Even though police have historically seized other electronics, such as computers and cell phones, to help solve crimes, the question remains whether new devices with built-in microphones that are theoretically always listening should be subject to the same standard.[14] Rather, the question becomes: “is there a difference in the reasonable expectation of privacy one should have when dealing with a device that is ‘always on’ in one’s own home?”[15]

The Fourth Amendment provides people the right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”[16] To assert a claim under the Fourth Amendment, you must have a “reasonable expectation of privacy, which contains both an objective standpoint and a subjective standpoint.”[17] On one hand, the argument can be made that there is a reasonable expectation of privacy: even though people choose to put this technology in their homes, they are not necessarily consenting to having their private conversations broadcast to the world. However, in Smith v. Maryland, the Supreme Court held that “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.”[18] Therefore, since Alexa requests are sent to Amazon (a third party), it may also be argued that one has no reasonable expectation of privacy in these recordings.

While it is unclear how the courts will deal with the privacy issues that smart devices with always on microphones like Alexa pose, there are avenues for purchasers of these devices to protect themselves. In addition to deleting the recordings in the Alexa app, users are also able to turn off the microphone on the device, by pushing “the microphone button on top of the Echo” and then waiting for the button and the ring to “illuminate bright red to let you know it is not listening.”[19] But, until the answer to the privacy level question for these devices has been determined by courts, turning off the microphone when not in use and deleting requests from the app is probably the safest avenue for purchasers of Alexa to protect their privacy.




[1] See Amazon Echo – Black, Amazon, (last visited Feb. 8, 2017).

[2] See James Kendrick, How to Use the Amazon Echo and Why You Should Get One, Mobile News (Feb. 9, 2016),

[3] See Jake Swearingen, Can an Amazon Echo Testify Against You?, N.Y. Mag. (Dec. 27, 2016),

[4] See Janko Roettgers, Relax: Your Amazon Echo Isn’t Recording Everything You Say, Boston Herald (Dec. 28, 2016),

[5] See id.

[6] Id.

[7] See Times Editorial Board, The Smart Home Has Ears, And it Can’t Keep a Secret, L.A. Times (Jan. 4, 2017),

[8] Swearingen, supra note 3.

[9] Times Editorial Board, supra note 7.

[10] See Tony Bradley, How Amazon Echo Users Can Control Privacy, Forbes (Jan. 5, 2017),

[11] See CNN Wire, Data Recorded on Voice-Activated Amazon Echo Sought by Prosecutor in Arkansas Murder Trial, KLTA (Dec. 28, 2016),

[12] Id.

[13] See Roettgers, supra note 4.

[14] See Amy Wang, Can Alexa Help Solve a Murder? Police Think So – But Amazon Won’t Give Up Her Data, Wash. Post (Dec. 28, 2016),

[15] Id.

[16] U.S. Const. amend. IV.

[17] Andrew L. Rossow, Amazon Echo May Be Sending Its Sound Waves into the Courtroom As Our First ‘Smart Witness’, Huff. Post (Dec. 29, 2016)

[18] Smith v. Maryland, 442 U.S. 735, 743-44 (1979).

[19] Bradley, supra note 10.

Image Source:

Your Personal Web History Could Soon Be for Sale


By: Brad Stringfellow,

Voting along strict party lines, the Republican-majority Senate recently threw out FCC rules that would have provided consumers with more privacy from Internet Service Providers (ISPs).[1] As it stands, ISPs such as Comcast are on an even playing field with free services such as Google and Facebook, which are able to capture, package, and sell your activity. The FCC sought to put harsher restrictions on ISPs because consumers can choose whether or not to use free services such as Google or Facebook, but there is little consumers can do to escape an ISP’s oversight: using the internet in almost any capacity is accomplished through an always-watching ISP.

The FCC was unhappy with this lucrative opportunity for ISPs to exploit and sell consumer browsing data, especially since consumers must pay ISPs for internet service. In 2016, the FCC passed a new set of rules entitled Protecting the Privacy of Customers of Broadband and Other Telecommunications Services.[2] The new rules were meant to increase consumer privacy by forcing ISPs to adopt stronger data security and privacy measures, including a measure that would allow the sale of browsing history only if the consumer opted in.[3]

The FCC explained its view of ISPs by saying that “ISPs are in a position to develop highly detailed and comprehensive profiles of their customers – and to do so in a manner that may be completely invisible.”4 In justifying the proposed rules, the FCC explained, “[W]ell-functioning commercial marketplaces rest on informed consent.”5 ISPs were not happy with the new rules, as compliance would have required costly infrastructure upgrades to meet the security requirements and would have cost them the revenue from selling consumer browsing history.6

Stay petitions on the new rules were filed by organizations of advertisers, telecom, broadband, and other commercial groups sympathetic to ISPs. The FCC granted the stay on March 1, 2017.7 FCC Commissioner Michael O’Rielly noted that “there has been no evidence of any privacy harms” and “no benefit to be gained from increased regulations,” while the new rules “place substantial, unjustified costs on businesses and consumers.”8

On March 23, 2017, the Senate voted to disapprove the stayed rules. Congress has the power to overturn agency rules with a simple majority under Chapter 8 of Title 5 of the U.S. Code, the Congressional Review Act.9 The vote was 50-48: 50 Republican votes to overturn the rules against 48 Democratic votes to keep them, with two absent Republican Senators not voting.10 The measure now goes to the majority-Republican House, which will likely follow suit and throw the rules out.

Speaking of the vote’s outcome, Senator Edward Markey, a Democrat from Massachusetts, said, “President Trump may be outraged by fake violations of his own privacy, but every American should be alarmed by the very real violation of privacy that will result from the Republican roll-back of broadband privacy protections. With today’s vote, Senate Republicans have just made it easier for Americans’ sensitive information about their health, finances and families to be used, shared, and sold to the highest bidder without their permission. The American public wants us to strengthen privacy protections, not weaken them. We should not have to forgo our fundamental right to privacy just because our homes and phones are connected to the internet.”11

After winning the vote, Senate Majority Leader Mitch McConnell justified overturning the regulation, saying it “makes the internet an uneven playing field, increases complexity, discourages competition, innovation, and infrastructure investment.”12

It is curious how strictly the vote went along party lines. Republicans have been supporters of individual rights and privacy in some regards, but here the desire to let big business work things out amongst themselves seems to have won out. The 2016 Republican Platform gives a statement on internet privacy:

“We intend to advance policies that protect data privacy while fostering innovation and growth and ensuring the free flow of data across borders…We intend to facilitate access to spectrum by paving the way for high-speed, next-generation broadband deployment and competition on the internet and for internet services.”13

Protecting data privacy is balanced against the need to foster innovation and growth; in this case, it seems the need to foster innovation and growth won out. In other parts of the Republican Platform, the party is protective of individual rights against big business, such as medical records and farmers’ data.14 On medical records, the platform states, “We applaud the advance of technology in electronic medical records while affirming patient privacy and ownership of personal health information.”15 On farmers’ rights, it reads, “We will advance policies to protect the security, privacy, and most of all, the private ownership of individual farmers’ and ranchers’ data.”16 Additionally, the Republican Platform generally opposes aerial surveillance on US soil, with the exception of observation over borders.17

It seems the Republican Party has some intention to protect individual privacy rights, and even goes so far as to partly acknowledge it, so it is certainly surprising that not one Republican Senator was willing to vote against such a sweeping grant of ISP power.

Since it seems as though this will inevitably pass through the House, what can be done to protect privacy? Virtual Private Networks (VPNs) are perhaps the easiest way to circumvent ISP monitoring, but there are downsides. VPNs are completely unregulated and can just as easily sell your browsing history as an ISP can if careful scrutiny and selection are not applied.18 One VPN company, Private Internet Access, has jumped on the opportunity by taking out a full-page ad in the New York Times calling out the 50 Senators who voted to disapprove the rules and the potential consequences of increased ISP access to private data.19

This is an unsavory turn that grants ISPs sweeping power to monitor, package, and sell consumers’ browsing history and activity. Hopefully, some Republicans in the House will be more protective of their constituents’ privacy. Contacting your House Representative may help. If things continue along the same path, internet privacy is about to change substantially for the worse.





1 David Shepardson, U.S. Senate Votes to Overturn Obama Broadband Privacy Rules, Reuters (Mar. 23, 2017, 1:50 PM),

2 Protecting the Privacy of Customers of Broadband and Other Telecommunications Services, FCC 2500 (Mar. 31, 2016),

3 Id. at 2502.

4 Id.

5 Id. at 2506.

6 Thorin Klosowski, The Senate Just Voted to Let Internet Sell Your Web History, Life Hacker (Mar. 23, 2017, 1:30 PM),

7 Order Granting Stay Petition in the Matter of Protecting the Privacy of Customers of Broadband and Other Telecommunications Services, FCC 1 (Mar. 1, 2017),

8 Id. at 5.

9 5 U.S.C. §§ 801–808.


11 Edward J. Markey, Senator Markey Blasts GOP Roll-back of Broadband Privacy Protections, (Mar. 23, 2017),

12 Shepardson, supra note 1.

13 Republican Platform 2016 6,

14 Id. at 18, 36.

15 Id. at 36.

16 Id. at 18.

17 Id. at 13.

18 Klosowski, supra note 6.

19 Private Internet Access, A VPN Provider, Takes Out A Full Page Ad in the New York Times Calling Out 50 Senators,


Snapchat IPO: A Cautionary Tale


By: Courtney Gilmore,


Snapchat’s much-anticipated public offering was finally filed as of February 2.[1] Snapchat, formally known as Snap Inc., has officially requested a spot on the New York Stock Exchange under the ticker symbol SNAP.[2] The company took advantage of the JOBS Act (Jumpstart Our Business Startups), which allows companies with less than $1 billion in annual revenue to file for IPOs in secret.[3] The friendly ghost will likely go public in March of this year.[4]

While this is an exciting new endeavor for the everyday social media guru, it may be better suited for high-risk-tolerance investors only. Proceed with caution.

Facebook and Twitter aside, Snapchat boasts itself as a camera company, “giving people the power to express themselves and live in the moment.”[5] On the other hand, “Facebook says its mission is connecting everyone, while Google’s is to organize the world’s information.”[6]

Sure, this sounds like an attractive label for any millennial or investor, but beyond it there is not much to lay a stable foundation for Snapchat’s future. For instance, Snapchat has an extremely short financial history.[7] Moreover, the company is labeled as secretive by outsiders and employees alike.[8] Evan Spiegel, Snapchat’s Chief Executive Officer, said in a 2015 note to employees, “‘[k]eeping secrets gives you space to change your mind, until you’re really sure that you’re right.’”[9] So, if Snapchat is unable to be transparent with even its own employees, how will prospective investors be able to keep track of their investments?

Snapchat’s founders are seemingly resistant to giving up any control whatsoever. While this is a natural instinct for any sensible businessman or woman, Snapchat’s founders, Evan Spiegel and Bobby Murphy, maintain that the shares issued to the public will carry no votes.[10] There are three classes of stock in Snapchat: Class A, Class B, and Class C. Only Spiegel and Murphy will control Class C shares, each of which receives 10 votes.[11] Class B shares receive one vote per share and are issued to venture capitalists and those investors who poured capital into the company before its initial public offering.[12] Finally, the Class A shares will be issued to the public.[13] In addition to no-vote shares, Snapchat reportedly has no intention of paying out cash dividends to its investors.[14] Without much control, investors must turn to other factors to weigh their risks and rewards.
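The consequence of this tiered structure is that public investors can hold a large slice of the economics while holding none of the governance. A rough sketch makes the arithmetic concrete; the share counts below are hypothetical and for illustration only, while the per-share vote weights (Class A: 0, Class B: 1, Class C: 10) follow the structure described above:

```python
# Hypothetical share counts, for illustration only; the per-share
# vote weights (A: 0, B: 1, C: 10) follow the structure described above.
classes = {
    "A": {"shares": 200_000_000, "votes_per_share": 0},   # public shareholders
    "B": {"shares": 300_000_000, "votes_per_share": 1},   # pre-IPO investors
    "C": {"shares": 215_000_000, "votes_per_share": 10},  # Spiegel and Murphy
}

total_shares = sum(c["shares"] for c in classes.values())
total_votes = sum(c["shares"] * c["votes_per_share"] for c in classes.values())

for name, c in classes.items():
    vote_share = c["shares"] * c["votes_per_share"] / total_votes
    econ_share = c["shares"] / total_shares
    print(f"Class {name}: {econ_share:.1%} of shares, {vote_share:.1%} of votes")
```

Under these assumed counts, the founders’ Class C holdings amount to roughly 30% of shares outstanding but nearly 88% of the votes, while Class A holders own a comparable economic stake and no votes at all.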

In 2016, Snapchat recorded revenue of $404.5 million, but losses amounted to $514.6 million.[15] Although its revenue increased by 600% between 2015 and 2016, Snapchat’s current losses exceed Twitter’s at the time of its own IPO, while Facebook had revenue of $3.7 billion at the same point in its life cycle.[16] This raises the question of whether Snapchat will suffer the same fate Twitter did when it went public. “‘To me, Snap is Twitter 2.0 – a company with a good growth rate that is losing a ton of cash, coupled w/ a massive valuation.’”[17] Snapchat is seeking a $25 billion valuation, which is sixty-two times its revenue.[18] On the other hand, GoPro, comparable given Snapchat’s “camera company” self-description, trades at one times its sales.[19] Twitter trades at five times its revenue, and Facebook trades at fourteen times its revenue.[20]
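The sixty-two-times figure can be checked directly from the numbers above; this is only a quick sanity check of the reported multiple, not financial analysis:

```python
# Snap's sought valuation and 2016 revenue, as reported above.
valuation = 25_000_000_000   # $25 billion sought valuation
revenue = 404_500_000        # $404.5 million of 2016 revenue

multiple = valuation / revenue
print(f"Implied revenue multiple: {multiple:.0f}x")  # prints "Implied revenue multiple: 62x"
```

By the same yardstick, the GoPro (1x), Twitter (5x), and Facebook (14x) multiples cited above make clear how aggressively Snap was asking the market to price its growth.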

Another important factor for investors to consider is the slowdown in growth that Snapchat has experienced recently.[21] After Facebook’s attempts at products similar to Snapchat’s Stories, Snapchat views have allegedly declined between fifteen and forty percent since August.[22] Instagram’s version of Stories can also be blamed for Snapchat’s recent decline in user growth.[23] Of Snapchat’s 158 million users, the majority are subscribers between the ages of 18 and 34.[24]

Hosting costs are another concern. Snapchat just signed a deal with Google to host Snapchat’s cloud space for $400 million per year.[25] On the surface this doesn’t seem like anything to get hung up on, except that Snapchat’s revenue last year was just about $400 million.[26] Snapchat’s hosting costs are so large because of the many video features Snapchat offers to consumers.[27] Payroll is also a growing expense:[28] Snapchat tripled its workforce, to a total of 1,859 employees in 2016.[29]

On the upside, Snapchat is in the market of offering new, innovative products (like any logical tech company would). For example, Snapchat added its geofilter options in July 2014.[30] The company went on to release the Spectacles in September 2016.[31] Snapchat’s “foray into hardware and its new identity as a ‘camera company’ could cause investors to value it differently than a pure-play company, where profit margins are typically higher.”[32] Snapchat is also expected to bring in close to $1 billion in revenue by the end of this year.[33]

Moral of the story: while the risks are high, the rewards will likely be higher. Snapchat’s video features certainly distinguish the company from its competitors, as do Snapchat’s endeavors with the Spectacles and more products to hit the market. While the company may be secretive, experiencing user decline, and racking up steep payment obligations, there is still a plethora of innovation to look forward to, and Snapchat remains at the cutting edge of it all. Perhaps Snapchat will not offer stock suitable for the novice investor’s portfolio, but it certainly has the potential to yield high rewards for those who are able to buy in initially. This young company has plenty of room to grow and plenty of buzz to live up to.




[1] See Barbara Ortutay, Snap, Maker of the Teen Social App Snapchat, Files for IPO, The Washington Post (Feb. 2, 2017),

[2] See id.

[3] See Seth Fiegerman & Matt Egan, Snapchat Files for $3 Billion IPO, CNN (Feb. 2, 2017),

[4] See id.

[5] Sarah Frier & Alex Barinka, Can Snapchat’s Culture of Secrecy Survive an IPO?, Bloomberg (Jan. 17, 2017),

[6] Id.

[7] See id.

[8] See id.

[9] See id.

[10] See Tom Zanki, 4 Takeaways From Snap’s IPO Filing, Law360 (Feb. 3, 2017),

[11] See Fiegerman & Egan, supra note 3.

[12] See id.

[13] See id.

[14] See Jen Wieczner, Here’s How Insanely Expensive Snap’s IPO Will Be, Fortune (Feb. 2, 2017),

[15] See Victoria Woollaston, How Snapchat Turned Dick Pics into a Potentially Multi-Billion Dollar IPO, Wired (Feb. 3, 2017),

[16] See Wieczner, supra note 14; Maya Kosoff, Will the Snapchat I.P.O. Be a Flop?, Vanity Fair (Feb. 2, 2017),

[17] See Fiegerman & Egan, supra note 3.

[18] See Wieczner, supra note 14.

[19] See id.

[20] See Eric Jackson, 4 Reasons to Be Wary of the Snapchat IPO, Forbes (Feb. 7, 2017),

[21] See Kosoff, supra note 16.

[22] See id.

[23] See Vikram Nagarkar, Snapchat IPO: The Pros and Cons of Buying Into Snap Stock Right Now, amigobulls (Feb. 6, 2017),

[24] See Woollaston, supra note 15.

[25] See Jackson, supra note 20.

[26] See id.

[27] See id.

[28] See Wieczner, supra note 14.

[29] See id.

[30] See Woollaston, supra note 15.

[31] See id.

[32] Portia Crowe, Snap Files for its IPO, Revealing Surging Sales Growth and Huge Losses, Business Insider (Feb. 2, 2017),

[33] See Nagarkar, supra note 23.


Put Your Money Where Your Mouth Is


By: Lindsey McLeod


“Put your money where your mouth is” took on a modern meaning this past week as individuals concerned about President Trump’s travel ban donated to the American Civil Liberties Union (ACLU) as a means of voicing their objection.[1] The ACLU reportedly received $24 million in online donations in the week following the immigration ban, more than six times the ACLU’s yearly donation average.[2] Most of these donations occurred via online portals, flooding the website with donations from 356,306 people. This isn’t the first time that President Trump has sparked an influx of online donations to the ACLU: the organization received nearly fifteen million dollars in the weeks following Trump’s election.

This online-centric donation model is consistent with millennial behavior, as millennials tend to give online, the realm that dominates their financial habits.[3] Such innovative and effective online fundraising campaigns are a trademark of the millennial generation, and the ACLU is getting on board. The start-up business model is commonly associated with trendy work environments, invoking images of Ping-Pong tables, office kegs, and tech-obsessed millennials. This start-up model, however, has begun to branch beyond the confines of the tech and app environment and into the realm of civil liberties.

Y Combinator provides a new model for funding early-stage startups: it invests “a small amount of money (120k) in a large number of start ups (105),” and these startups then move to Silicon Valley for three months, where they work with professionals who are familiar with investment pitches and can help shape a business model that effectively reaches target consumers.[4] Because Y Combinator is typically associated with graduates such as Airbnb, Dropbox, and similar consumer-product startups, the ACLU seems out of place in the program, yet Y Combinator’s president, Sam Altman, is interested in the potential success a collaboration between the two groups may have. Although the ACLU is far from a “start up,” having been established in the early 1900s, it has a history of working with modern, tech-savvy businesses, such as Twitter, to invoke rapid fundraising participation, and a more thorough examination of how to improve its business model may rapidly expand the ACLU’s national and international presence.[5]

The ACLU’s decision to partner with Y Combinator is significant for the impact it may have on the expansion of the ACLU and the services it is able to offer. Two significant characteristics of the millennial generation, as noted in Leigh Buchanan’s book Meet the Millennials, are that millennials are “masters of digital communication…[and] are primed to do well by doing good. Almost 70 percent say that giving back and being civically engaged are their highest priorities.”[6] Thus, the decision by the ACLU and Y Combinator represents a choice to engage a civic-minded generation on its own turf, so to speak. The move is particularly pertinent at a time in American politics when millennials are seemingly rejecting the current president.[7] The stronger presence the ACLU may gain upon completing the three-month Silicon Valley program could ignite a generation of civically engaged individuals, and perhaps future ACLU lawyers.




[1] Katie Mettler, The ACLU says it got $24 million in online donations this weekend, six times its yearly average, The Washington Post (Jan. 30, 2017)

[2] See id.

[3] See Randy Hawthorne, Understanding What Motivates Millennials to Give to Your NPO, (last visited Feb. 3, 2017).

[4] Y Combinator,

[5] See Sarah Ashley O’Brien, ACLU is participating in elite Silicon Valley accelerator, CNN Tech (Jan. 31, 2017)

[6] Jay Gilbert, The Millennials: A new generation of employees, a new set of engagement policies, Ivey Business Journal (Sept. 2011)

[7] See Cody Boteler, Students plan demonstrations and walkouts to protest Trump’s inauguration, USA Today (Jan. 19, 2017),


Airbus Flying Car Prototype Announced: How Will the Law Adapt?


By: Will MacIlwaine,

In 2016, Urban Air Mobility, a division of Airbus Group, began looking into the possibility of self-flying vehicles.[1] On January 16, Airbus Chief Executive Officer Tom Enders announced that the company plans to test a prototype of a self-flying taxi for a single passenger by the end of 2017.[2] The company’s flying taxi system will be called CityAirbus, and customers will be able to book a taxi using a smartphone device.[3]

While Airbus plans to have a taxi prototype ready by the end of this year, it also hopes to have models of its flying vehicle for sale by as early as 2020.[4] The benefits of flying vehicles seem abundant, two obvious benefits being avoidance of congested roadways, as well as potentially faster travel times. Aside from the sheer convenience of a flying car, Mr. Enders believes that a product such as his company’s prototype could decrease costs for city infrastructure planners, as flying cars would not travel on roads or bridges that are often costly to maintain and repair.[5] Further, air pollution could be reduced significantly in a move toward flying vehicles, as Airbus is committed to making its flying vehicles fully electric.[6]

As intriguing as this idea may seem, there are certainly issues that will need to be addressed, as well as potential legal ramifications that could arise through the introduction of this product. Airbus believes the biggest task its team will face is making its CityAirbus taxi fly on its own, without a pilot.[7] Tesla has introduced a similar autopilot feature for its Tesla Model S automobile, but has faced criticism as reports of accidents have surfaced in the past year. Enders’ team faces an even taller task: ensuring that its autopilot feature is successful in the air.

There are a variety of potentially disastrous lawsuits that the CityAirbus technology might invite. For one, if two CityAirbus taxis crash into each other, how is liability determined? Certainly the passengers in the flying vehicles would not be liable, as a passenger is not the one operating the self-flying car. Airbus would almost certainly be legally responsible for these accidents. This liability could also extend further, encompassing situations in which Airbus vehicles malfunction and damage buildings, or worse, injure the passengers of the flying cars.

Regarding the risk of injury while using a CityAirbus taxi, it is likely that passengers would be given extensive warnings about the dangers and risks of using the vehicles. If the user sees these warnings and understands the dangers inherent in flying cars, yet still voluntarily decides to ride in the vehicle, wouldn’t this amount to implied assumption of risk and bar any negligence claims by the passenger against Airbus?

Further, a new legal framework would need to be developed for flying cars. Would flying cars have to abide by speed limits? Would owners who purchase these vehicles in 2020 have to obtain a “flying license,” even though the vehicle is self-operated? Would flying cars need insurance just like ordinary cars? Would the federal government regulate all of these things, or would the states be responsible for creating guidelines for flying cars?[8]

These are not the only legal questions surrounding flying vehicles. Would there be restricted areas where flying cars could not travel, such as around airports? If so, how would these regulations be legally enforced, when law enforcement officials are busy fulfilling their duties on the ground? Cities and states might be required to purchase similar flying vehicles so that their law enforcement officers could travel in them to enforce these regulations in the air. Wouldn’t this certainly offset, and likely exceed, the cost savings for city infrastructure planners that Mr. Enders predicted? While only hypothetical questions today, these legal issues will likely arise if the Airbus team succeeds in introducing its prototype by the end of this year.

Flying cars could certainly offer obvious advantages, but it seems that Mr. Enders and his team have many questions to consider in the development of CityAirbus if the company is to ensure that its potentially historic technological advancement does not turn into a legal nightmare.



[1] See Forget Self-Driving Cars: Airbus Will Test a Prototype Flying-Taxi by the End of This Year, Reuters, Jan. 16, 2017,

[2] See id.

[3] See id.

[4] See id.

[5] See Victoria Bryan, Airbus CEO Sees ‘Flying Car’ Prototype Ready by End of Year, Reuters, Jan. 16, 2017,

[6] See Jay Bennett, Airbus Wants to Test its Flying Car Prototype This Year, Popular Mechanics, Jan. 16, 2017,

[7] See Forget Self Driving Cars, supra note 1.

[8] See Cory Smith, Soaring to New Heights: Flying Cars and the Law, Michigan Telecomm. & Tech. L. Rev., Oct. 22, 2015,

