
Cyberstalking: Enabled by Cheap, Accessible Technology

By: Brooke Throckmorton

 

With the frequent use of technology today, it should come as no surprise that its reach is abused. While technology can be entertaining, helpful, and time-saving, it can also be terrifying, intrusive, and ultimately enabling to those who want to intimidate or harm others.

In 2009, 14 out of every 1,000 people age 18 or older fell victim to stalking.[1] One in four of these stalking victims reported some form of cyberstalking (83% by e-mail, 35% by instant messaging).[2] Stalking is often defined by its effect on the victim, namely, the fear it produces.[3] Stalking creates a type of “psychological prison” for victims that includes feelings of fear, paranoia, shame, isolation, depression and, in severe cases, post-traumatic stress disorder (PTSD).[4] Stalking is legally defined as “following or loitering near another” to “annoy or harass that person or to commit a further crime such as assault or battery.”[5] Some statutes incorporate additional elements, such as requiring that the victim feel “distressed” about their own personal safety or the safety of close friends or family.[6] The definition of cyberstalking adds the element of intimidation through the use of e-mails or the Internet to place “the recipient in fear that an illegal act or injury” will be inflicted upon the recipient or a member of that person’s family or household.[7] The rise in availability of cheap technology has allowed cyberstalking to replace traditional approaches to stalking.[8]

Advanced technology allows stalkers to constantly terrorize their victims by “tracking and monitoring them” as they move throughout the world with their smartphones, computers, and iPads.[9] In addition to the GPS tracking built into virtually every smartphone, social media plays a huge role in enabling abusers to reach their victims. With multiple social media outlets to choose from, stalkers have multiple avenues to observe and intimidate their victims. For example, a 2017 feature introduced on the Snapchat app, called Snap Map, allows your “friends” to view your location at all times if you are “opted in.”[10] If a user chooses to opt in and share their location with “friends,” these “friends” can view the user’s location at all times, even if the user is not chatting with them in the app or sending them snaps.[11] The biggest concern with this new feature is that some users may not understand the implications of turning on their Snap Map location.[12] In turn, they may be inadvertently sharing their location at all times with potential cyberstalkers.

You may be asking how the law deals with cyberstalking. Good news! There is a federal statute that specifically speaks to the crime of cyberstalking.[13] The statute is titled “Stalking” but contains a provision that refers specifically to using “any interactive computer service or electronic communications service or electronic communication system of interstate commerce” with intent to do harm or to place a person under surveillance for such harm.[14] A Virginia man was charged, convicted, and sentenced under this statute to 41 months in prison in March of this year.[15] Richard Killebrew, a resident of Norfolk, Virginia, used a computer and cell phone to communicate threatening messages, some of which contained death threats, to multiple victims in Nebraska.[16] As for state law, some states have enacted specific cyberstalking statutes, while others continue to rely on their stalking statutes and apply those terms to electronic communications.[17] This can be problematic given the unique nature of cyberstalking.[18]

While there are laws in place to bring relief to victims of cyberstalking, you can be proactive by monitoring your own Internet activities. For example, if you have Snapchat, confirm whether you are opted in or opted out of Snap Map. If you have a smartphone, you can also monitor which apps are using your location. On an iPhone, you can find Location Services in your privacy settings, where you can see whether you are sharing your location and scroll down to view which apps are using it, indicated by “Never,” “While Using,” or “Always.”

While the federal government and select state governments have expanded their statutes to explicitly include cyberstalking, all states should have such a provision. Since cyberstalking can occur at any and all times, statutes should include language tailored specifically to electronic communications. And while cyberstalking is not wholly preventable, monitoring your own online activities can make you less susceptible to it.

 

 

[1] Katrina Baum, Shannon Catalano, Michael Rand, Stalking Victimization in the United States, U.S. Dep’t of Just., (Jan. 2009), https://www.justice.gov/sites/default/files/ovw/legacy/2012/08/15/bjs-stalking-rpt.pdf.

[2] Id.

[3] See Melvin Haung, Keeping Stalkers at Bay in Texas, in Domestic Violence Law 282, 284 (Nancy Lemon ed., 2013).

[4] Id. at 285.

[5] Stalking, Black’s Law Dictionary (10th ed. 2014).

[6] Id.

[7] Cyberstalking, Black’s Law Dictionary (10th ed. 2014).

[8] Supra note 3, at 282.

[9] Id.

[10] See What’s the Deal with Snap Map?, Tech. Safety (Sept. 21, 2017 3:13 PM), http://www.techsafety.org/blog/2017/7/6/whats-the-deal-with-snap-map.

[11] See id.

[12] See id.

[13] See 18 U.S.C. § 2261A(2).

[14] Id.

[15]  Virginia Man Sentenced for Cyber Stalking, U.S. Dep’t of Just., (Mar. 13, 2017), https://www.justice.gov/usao-ne/pr/virginia-man-sentenced-cyber-stalking.

[16] Id.

[17] Is There a Law Against Cyberstalking or Cyberharassment?, HG.org, https://www.hg.org/article.asp?id=31710.

[18] Id.

Image Source: https://www.google.com/search?q=cyberstalking&rlz=1C5CHFA_enUS706US706&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjFvNCsl7fWAhWLKyYKHYZbCXwQ_AUICygC&biw=1184&bih=590#imgrc=l49WQTiho3FbHM

Reproductive Technology Creates More than Just Children

By: Hayden-Anne Breedlove

Married couples often do not think about custody issues in the event of divorce. Many couples with children get caught up in the moment and fail to plan for that possibility. With today’s advances in reproductive technology, new legal issues are arising in custody disputes over children who have not yet been born.

Recent innovations in reproductive technology allow individuals who at one time were unable to become pregnant to do so. The egg, sperm, and womb needed to make a baby can be provided by three separate people, or even after a person’s death. However, the new technology has raised legal and ethical issues that extend beyond the standard question of which parent is more “fit” to act as custodian of the child. Judges are now left to decide who has custody of a frozen embryo post-divorce.

Take, for example, a married woman diagnosed with a form of cancer whose treatment would eliminate the possibility of her getting pregnant. What if she chose to have her eggs fertilized with her husband’s sperm before her cancer treatments and then frozen for the couple’s use at a later time? What if the couple gets divorced before they can use these embryos? Who gets to keep them, or should they just be destroyed? Should the wife be allowed to birth her ex-husband’s children? If not, the woman would not be able to have any more children, since her cancer treatments left her sterile. These issues present courts with both moral and ethical dilemmas in deciding cases.

Davis v. Davis was the first case to address this topic.[1] During the marriage, the couple attempted to conceive through in-vitro fertilization.[2] The couple later divorced, giving rise to the dispute at bar.[3] The dispute arose over what to do with the frozen pre-embryos. The wife initially wanted them implanted in her but then decided she wanted them donated to childless couples.[4] The husband wanted them discarded.[5] The court ruled in favor of the husband, allowing the pre-embryos to be discarded and destroyed, on the rationale that his interest in not becoming a parent outweighed the wife’s interest in donating them.[6]

The court held in Litowitz v. Litowitz that embryos could not be implanted in the wife post-divorce without the husband’s consent.[7] In that case, a couple was unable to have a child because the wife could not produce eggs or give birth.[8] They obtained eggs from a third-party donor and fertilized them with the husband’s sperm.[9] Through this process they had one child, but they later divorced.[10] After the divorce, the mother sought to have the embryos implanted in her in order to have another child.[11]

Courts seem to be ruling in favor of the party who chooses to have the embryos destroyed, perhaps as a consideration of the constitutional right to privacy.[12] The right to privacy of the parent who chooses not to go forward with having a child is stronger than the other parent’s interest in having one.[13] As reproductive technology advances and becomes more common for couples facing challenges with childbirth, courts will continue to have to rule on cases involving this issue.

[1] Recent Case Law on Division of Frozen Embryos in Divorce Proceedings, http://www.divorcesource.com/research/dl/children/03mar54.shtml (last visited Sept. 18, 2017).

[2] Davis v. Davis, 842 S.W.2d 588 (Tenn. 1992).

[3] Id.

[4] Id.

[5] Id.

[6] Id.

[7] Litowitz v. Litowitz, 146 Wash. 2d 514, 515 (2002).

[8] Id.

[9] Id.

[10] Id.

[11] Id.

[12] See supra note 2; see also supra note 7.

[13] See supra note 1.

Image Source: http://www.istockphoto.com/photo/happy-family-gm515779948-88654551.

Why Smartphones Have Not Outsmarted the Sun

By: Hayden-Anne Breedlove

 

As the summer months wind down, many last-minute vacationers try to squeeze in one more beach trip for a relaxing, work-free vacation. However, with today’s all-encompassing access to email, messages, and online work databases, it is easy to “forget” what a vacation is all about. Instead, many professionals spend more time replying to work emails than soaking up the sun. Lawyers’ use of smartphones is basically universal, with most attorneys using them for simple tasks like conducting legal research or scanning documents in depositions.[1] This makes work life easier by allowing attorneys instant access to much-needed information. However, vitamin D–deprived individuals who decide to multitask and work on a cell phone or computer at the beach are faced with the pestering problem of being unable to see their screens in bright sunlight. Why is it that, in a world where smartphones have been a common accessory for years, technology companies have never implemented a way to view a phone screen in the sun?

The technology is out there, as seen in Amazon’s Kindle Paperwhite, an e-reader that sells itself on the feature that its screen can be easily viewed in the sun.[2] This e-reader employs an E-Ink brand electronic paper display that features sixteen shades of gray, making the text resemble a printed book and ultimately making it easy to read in the sun.[3] E-Ink is a longtime partner of Amazon and the holder of many patents on the technology surrounding the Paperwhite.[4] However, to make it its own, Amazon developed a layer of plastic that sits on top of the E-Ink display and shines light down on it, making the E-Ink display look better and easier to read.[5]
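As a side note on what “sixteen shades of gray” means in practice, the short sketch below shows how a standard 8-bit grayscale value might be quantized to one of sixteen levels. It is purely illustrative and is not E-Ink’s actual rendering pipeline.

```python
# Purely illustrative: mapping an 8-bit grayscale value (0-255) onto sixteen
# evenly spaced gray levels (4 bits). This is not E-Ink's actual rendering
# pipeline, just a sketch of what "sixteen shades of gray" means.
def quantize_to_16_levels(value_8bit: int) -> int:
    """Return the nearest of 16 evenly spaced gray levels (0-15)."""
    if not 0 <= value_8bit <= 255:
        raise ValueError("expected an 8-bit grayscale value")
    return round(value_8bit / 255 * 15)

if __name__ == "__main__":
    for v in (0, 100, 200, 255):
        level = quantize_to_16_levels(v)
        # level * 17 maps the 4-bit level back to an 8-bit display value
        print(f"8-bit {v:3d} -> level {level:2d} (displayed as {level * 17})")
```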

In 2016, E-Ink, along with multiple e-reader-producing companies, including Sony Electronics, Sony Corporation, Barnes & Noble Inc., BarnesandNoble.com LLC, and Amazon.com Inc., faced patent litigation brought by Research Frontiers, Inc. (RFI).[6] RFI is a corporation that has worked exclusively on “developing suspended particle technology applicable for use in display and light control applications.”[7] RFI alleged infringement of three patents involving this particle technology.[8] In particular, RFI alleged that E-Ink infringed its “491 Patent,” entitled “Light Valve Employing a Film Comprising an Encapsulated Liquid Suspension, and Method of Making Such Film.”[9]

E-Ink moved for summary judgment, arguing that there was no genuine dispute of material fact and that it had not infringed the 491 Patent.[10] The Court looked to other patents and to the basis of the contested patents.[11] Through this review, the Court determined that genuine issues of material fact remained in dispute and denied the defendants’ motion for summary judgment.[12] This case illustrates the role that patent rights, and the litigation they generate, play in the implementation of new display technology.

The question remains why this technology has not been implemented in cell phone display screens. There appear to be obstacles to combining it with full-color display technology.[13] However, through its work with E-Ink, Amazon holds patented technology that could lead to the development of a fast, high-contrast display.[14] This will be an interesting topic to follow as already existing technology finds its way into a device that makes our lives simpler. Overall, too much sunshine might no longer be a valid excuse not to answer a work-related email.

 

 

[1] Legal Research and Law Library Management (Law Journal Press 2015).

[2] Id. at 6.

[3] Id.

[4] See Christopher Mims, Amazon is Working on Displays that Apple and Samsung Can’t Match, Quartz (Aug. 06, 2013), https://qz.com/112444/amazon-is-working-on-displays-that-apple-and-samsung-cant-match/.

[5] See id.

[6]  See id.

[7] Research Frontiers, Inc. v. E Ink Corp., 2016 U.S. Dist. LEXIS 44547, at *2 (2016).

[8] Id. at 3.

[9] Id.

[10] Id.

[11] Id. at 41.

[12] Id.

[13] See supra note 4.

[14] Id.

Image Source: https://www.usatoday.com/story/tech/columnist/saltzman/2017/07/15/how-use-your-smartphone-beach-and-bright-sunlight/477693001/

Artificial Intelligence May Be the Key To Combating Users’ Abuse of “Facebook Live”


By: Kathleen Pulver

“Facebook Live” was created with the intention of allowing users to engage more thoroughly with their followers, connect with others instantaneously, and tell stories their own way.[1] Many users, including major news outlets, used the Facebook Live function to stream real-time footage of protests and marches surrounding this year’s inauguration and in the weeks that followed.[2] During these protests, some peaceful and some not, the live function allowed people around the world to witness the action as it unfolded and share their thoughts. More than fifteen thousand people commented on one ABC News video alone.[3] Overall, most people’s experience with Facebook Live has been positive, and it has been used as it was intended. However, in a horrifying new trend, the function has turned into a way for people to showcase terrifying displays of violence against others, and even themselves.[4]

Examples of these horrifying uses abound. In December 2016, a twelve-year-old girl used Facebook Live to stream her suicide as she hanged herself in her family’s backyard.[5] The broadcast went on for more than twenty minutes and remained visible on the young girl’s Facebook page until late that night, when a police officer from California notified the local police chief in Georgia.[6] The police have been working ever since to have the video removed.[7] In another well-publicized event, in January 2017, four teenagers tied up and tortured another teen victim while live streaming the attack via Facebook Live.[8] The teens even spoke directly into the camera and commented on the lack of support they were receiving in the comments to the video.[9] There are hundreds of other examples of violence being intentionally or accidentally recorded via Facebook Live and then streamed for the world, or at least the users’ “friends,” to see. Many people have expressed outrage at the social media giant, citing concern over Facebook’s inability to control the content that is shown and its inability to do anything to stop the violence from occurring.[10]

The legal challenge presented by live-streaming video is drawing the line between too much protection, which would mean banning all content altogether, and no protection, which would allow these incidents to occur without ramifications or any ability to stop them. Some people expect Facebook to allow them to post whatever they like, upholding their First Amendment right to free speech, while others argue that uncontrolled posting could lead to violent or inappropriate content being shown to a child. Facebook has already instituted reporting tools in the Live function, similar to the reporting tools available for normal posts.[11] Facebook currently has a team in place 24 hours a day, 7 days a week, to monitor reported posts, and if the content is found to violate Facebook’s standards, “it will be removed.”[12] The problem is, not everyone reports videos. For example, on December 28, 2016, a woman died while live streaming a video of herself playing with her toddler.[13] The woman was not the victim of a violent crime, but simply succumbed to a medical condition.[14] The video showed her beginning to sweat, getting dizzy, and eventually passing out, but no one reported it.[15] Had someone reported the video as inappropriate or as containing disturbing content, a message would have been sent to the Facebook review team, and help might have been provided prior to her death.[16] Facebook has been struggling for months to find a way to address this problem, but thinks it may have found a solution in artificial intelligence.[17]

Artificial intelligence is the “science and engineering of making intelligent machines.”[18] Facebook already uses artificial intelligence to collect data about users and create targeted ads for each user.[19] The computer systems use algorithms to classify data on their own and determine what to do with it.[20] In the same way, artificial intelligence could be used to classify live content as unacceptable under Facebook’s conduct standards and have it reported, or to classify it as acceptable and allow the broadcast to continue. It remains to be seen whether artificial intelligence will be able to identify inappropriate content quickly enough to address the Facebook Live function. Ideally, the artificial intelligence will be able to reliably detect whether content is inappropriate or dangerous, rather than broadly censoring content for fear it may reach a dangerous level. If artificial intelligence can walk the line between too much censorship and too little, blocking violent content or summoning help as needed, it will likely be the best available solution to the legal problems presented by live-streaming video.
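As a purely illustrative sketch of the classification step described above, the toy pipeline below scores a live-stream segment and escalates it to human review above a threshold. The model, features, and threshold here are all hypothetical; this is not Facebook’s actual system.

```python
# Illustrative sketch only: a toy moderation pipeline in the spirit of the
# approach described above. The "model", features, and threshold are
# hypothetical stand-ins, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class StreamSegment:
    stream_id: str
    features: list[float]  # e.g., outputs of video/audio feature extractors

def score_segment(segment: StreamSegment) -> float:
    """Stand-in for a trained classifier: returns a probability-like score
    that the segment violates content standards (here, a trivial average)."""
    return sum(segment.features) / len(segment.features)

def moderate(segment: StreamSegment, threshold: float = 0.8) -> str:
    """Route a segment: flag for human review if the score is high,
    otherwise let the broadcast continue."""
    if score_segment(segment) >= threshold:
        return "flag_for_review"   # escalate to the 24/7 review team
    return "allow"

if __name__ == "__main__":
    segment = StreamSegment("live-123", [0.9, 0.85, 0.7])
    print(moderate(segment))  # -> "flag_for_review"
```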

 

 

[1] See Facebook Live, https://live.fb.com/about/.

[2] See e.g., BuzzFeed News, Facebook (Nov. 9, 2016), https://www.facebook.com/BuzzFeedNews/videos/1300266563327692/, ABC News, Facebook (Sep. 21, 2016), https://www.facebook.com/ABCNews/videos/10154814613813812/.

[3] See ABC News, Facebook (Sep. 21, 2016), https://www.facebook.com/ABCNews/videos/10154814613813812.

[4] See e.g., Monica Akhtar, Facebook Live captures Chicago shooting that killed toddler, Washington Post (Feb. 15, 2017, 11:14 AM), https://www.washingtonpost.com/video/national/facebook-live-captures-chicago-shooting-that-killed-toddler/2017/02/15/10ac4f22-f39b-11e6-9fb1-2d8f3fc9c0ed_video.html.

[5] See Corey Charlton, SUICIDE STREAMED ONLINE Girl, 12, streams her own suicide on social media for 20 minutes after being ‘sexually abused by a relative’ – and cops are powerless to take it down, The Sun (Jan. 12, 2017, 8:51 AM), https://www.thesun.co.uk/news/2594640/girl-12-streams-her-own-suicide-on-facebook-live-for-20-minutes-after-being-sexually-abused-by-a-relative-and-cops-are-powerless-to-take-it-down/.

[6] See id.

[7] See id.

[8] See Jason Meisner, William Lee, & Steve Schmadeke, Brutal Facebook Live attack brings hate-crime charges, condemnation from White House, Chicago Tribune (Jan. 6, 2017, 6:59 AM), http://www.chicagotribune.com/news/local/breaking/ct-facebook-live-attack-video-20170105-story.html.

[9] Id.

[10] See e.g., Cleve R. Wootson, Jr., How do you just sit there?’ Family slams viewers who did nothing as woman died on Facebook Live, Washington Post (Jan. 3, 2017), https://www.washingtonpost.com/news/true-crime/wp/2017/01/03/how-do-you-just-sit-there-family-slams-viewers-who-did-nothing-as-woman-died-on-facebook-live/?tid=a_inl&utm_term=.d2a658044bba.

[11] See Facebook Live, https://live.fb.com/about/.

[12] Id.

[13] See supra note 10.

[14] See id.

[15] See id.

[16] See id.

[17] See Alex Kantrowitz, We Talked To Mark Zuckerberg About Globalism, Protecting Users, And Fixing News, BuzzFeed News (Feb. 16, 2017, 4:01 PM), https://www.buzzfeed.com/alexkantrowitz/we-talked-to-mark-zuckerberg-about-globalism-protecting-user?utm_term=.tn96QG0jk#.ugJ52Om19.

[18] John McCarthy, What is Artificial Intelligence?, http://www-formal.stanford.edu/jmc/whatisai/node1.html (last updated Nov. 12, 2007).

[19] See Bernard Marr, 4 Mind-Blowing Ways Facebook Uses Artificial Intelligence, Forbes (Dec. 29, 2016, 1:01 AM), http://www.forbes.com/sites/bernardmarr/2016/12/29/4-amazing-ways-facebook-uses-deep-learning-to-learn-everything-about-you/#15a49b212591.

[20] See id.

Image Source: http://en.mercopress.com/data/cache/noticias/59308/0x0/mark-zuckerberg.jpg.

Vaping: Not Just Tobacco


By: Daniel Eggleston

 

E-cigarettes, also called vape pens, were once heralded as a much safer alternative to traditional cigarettes and a way for smokers to either kick the habit or decrease cancer risks.[1] Because e-cigs are available in a wide array of flavors and devices (some look like pipes, others like cigarettes, and many look like futuristic gadgets), many members of the public grew concerned about the e-cig’s potential appeal to youngsters.[2] The FDA released statistics corroborating this fear: in “2013-2014, 81% of current youth e-cigarette users cited the availability of appealing flavors as the primary reasons for use,”[3] and “e-cigarettes . . . [w]ere the most commonly used tobacco product among youth” in both 2014 and 2015.[4]

While these statistics might raise eyebrows by themselves, a new use for vape pens is becoming increasingly widespread.[5] CNN published a story on vape pens being used as a vehicle to consume illegal drugs like flakka, methamphetamines, heroin, and marijuana.[6] “Water-soluble synthetics are easily converted into liquid concentrate that can go into the device cartridges and be vaped just like nicotine and other legal substances.”[7] This makes it difficult for law enforcement officers to detect whether illicit drug use is occurring or whether an e-cig simply contains flavored tobacco oil.[8] Police have a harder time establishing probable cause because of the uncertainty over whether an e-cig contains nicotine or something worse.[9] Furthermore, this masked consumption has also resulted in people unknowingly consuming, and in some cases overdosing on, illegal drugs.[10]

Researchers at Virginia Commonwealth University received a grant from the Department of Justice to explore “how drug users are increasingly using e-cigarette devices to vape illicit drugs.”[11] Users pass on this knowledge via online drug forums and YouTube tutorials, explaining how meth can be consumed in the workplace, with no one the wiser.[12] What’s more, social media users and celebrity culture are endorsing vape pens as a discreet way to get high in public, in school, or in the workplace.[13]

The research team is testing the efficacy of vape pens in delivering drugs like meth, heroin, marijuana, and others to the user.[14] That vape pens are effective is indisputable given the widespread consumption of drugs through the devices; what the researchers are measuring is the dosage transmitted in the vapor clouds, along with an analysis of “commercially available e-liquids to see if the purported contents matched the labels.”[15] The researchers found wide discrepancies between ingredients listed on the labels and what the e-liquids actually contained.[16] Some e-liquids contained drugs that their labels specifically claimed they did not contain, prompting the researchers to cite major concern over the lack of regulatory labeling oversight.[17]

The Food and Drug Administration has responded to some of these concerns with increased regulation of the e-cigarette industry.[18] One of these regulations requires “federal approval for most flavored nicotine juices and e-cig devices sold in vape shops.”[19] What remains to be seen, however, is how the FDA responds to the use of e-cigs as a vehicle for consuming illicit drugs.

 

 

 

[1] See Sara Ganim & Scott Zamost, Vaping: The latest scourge in drug abuse, CNN, (last visited Sept. 5, 2015) http://www.cnn.com/2015/09/04/us/vaping-abuse/.

[2] See id.

[3] Vaporizers, E-Cigarettes, and other Electronic Nicotine Delivery Systems (ENDS), Food and Drug Admin. (last visited Feb. 13, 2017) https://www.fda.gov/TobaccoProducts/Labeling/ProductsIngredientsComponents/ucm456610.htm#regulation.

[4] See id.

[5] See supra note 1.

[6] See id.

[7] Id.

[8] See id.

[9] See supra note 5.

[10] See id.

[11] Brian McNeill, Shedding light on a vaping trend: Researchers study the use of e-cigarettes for illicit drugs, Va. Commonwealth Univ. News (last visited Feb. 22, 2017) https://news.vcu.edu/article/Shedding_light_on_a_vaping_trend_Researchers_study_the_use_of.

[12] See id.

[13] See id.

[14] See id.

[15] Supra note 10.

[16] See id.

[17] See id.

[18] See Laurie Tarkan, How new rules could kill the vaping boom, Fortune (last visited Sept. 29, 2015) http://fortune.com/2015/09/29/vaping-fda-rules/.

[19] Id.

Image Source: http://assets.hightimes.com/styles/large/s3/title_ny_4thave_0.jpg.

The Future of Self-Driving Cars


By: Genevieve deGuzman

The race to develop autonomous technology has led to the fast-growing rise of autonomous, or self-driving, cars. Automakers, technology companies, startups, and even governments are getting involved.[1] So how do these self-driving cars actually work? Each automaker uses different technology for its cars, but the basic approach is similar: either computer vision-based detection or laser beams generate a 360-degree image of the car’s surrounding area; multiple cameras and radar sensors measure the distance from the car to various objects and obstacles; and a main computer analyzes data from the sensors and cameras, such as the size and speed of nearby objects, and compares it with stored maps to assess current conditions and predict likely behavior.[2] The cameras also detect traffic lights and signs and help recognize moving objects.[3]
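To make the perceive-predict-decide loop described above more concrete, here is a deliberately simplified sketch. The class names, the two-second horizon, and the braking rule are invented for illustration and do not reflect any automaker’s actual system.

```python
# Purely illustrative sketch of the pipeline described above (perceive,
# predict, decide). All names and thresholds are hypothetical; real
# autonomous-driving stacks are far more complex.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    distance_m: float   # distance from the car, e.g. from radar/lidar
    speed_mps: float    # closing speed relative to the car
    is_moving: bool     # from camera-based classification

def predict_collision(obj: DetectedObject, horizon_s: float = 2.0) -> bool:
    """Naive prediction: will the object close the gap within the horizon?"""
    if not obj.is_moving or obj.speed_mps <= 0:
        return False
    return obj.distance_m / obj.speed_mps < horizon_s

def plan(objects: list[DetectedObject]) -> str:
    """Main-computer step: combine sensor readings into a driving decision."""
    if any(predict_collision(o) for o in objects):
        return "brake"
    return "maintain_speed"

if __name__ == "__main__":
    scene = [DetectedObject(distance_m=15.0, speed_mps=10.0, is_moving=True)]
    print(plan(scene))  # -> "brake"
```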

Automakers such as Tesla, General Motors, Toyota, Lexus, Ford, Fiat Chrysler, Honda, Volvo, and Volkswagen, and technology companies such as Google, Apple, nuTonomy, and Intel, have all joined the race to develop self-driving cars.[4] This push may be driven by Uber, a “digital hybrid of private and public transport” that has made “ride-hailing” so comparatively convenient and cheap that it threatens the car-ownership industry.[5] Further, with technology becoming increasingly integrated into, and almost inseparable from, consumer life, self-driving cars are efficient and convenient, allowing the “driver” to interact with a phone and other technology while safely getting to their destination.

In 2016, Uber’s self-driving truck made its first delivery, driving 120 miles with 50,000 cans of beer and changing the future of truck driving and deliveries.[6] Later that year, Uber also tested its autonomous driving technology in San Francisco until California’s Department of Motor Vehicles revoked the registrations of sixteen Uber cars for not being marked as test cars.[7] Uber contended that its cars did not need self-driving-car permits because they were operated with a “safety driver” behind the wheel: the cars’ programming still requires a person behind the wheel to monitor the car, and it works more like advanced driver-assist technologies, such as Tesla’s Autopilot.[8] The revocation of the registrations may have come in light of the deadly crash of a Tesla Model S, which is not a self-driving car but contains self-driving features to assist drivers. Tesla ultimately attributed this accident and two other accidents to “human error, saying the drivers 1) were inattentive, 2) disabled the automation and 3) misused the Summon feature and didn’t heed the vehicle’s warnings.”[9] Unlike Tesla’s Autopilot, which focuses on driver assistance, Google’s Waymo is focused on creating a fully autonomous car but has not yet put one on the market.

Some self-driving cars have already hit the market, and, expectedly, there is a push for national standardization of self-driving vehicle regulation. The United States Department of Transportation (DOT) released its Federal Automated Vehicles Policy in September 2016, setting guidelines for highly automated vehicles (HAVs) and for lower levels of automation, such as some of the driver-assistance systems already deployed by automakers.[10] The policy includes a 15-point safety assessment to “set clear expectations for manufacturers developing and deploying automated vehicle technologies,” a section that draws a clear distinction between federal and state responsibilities for regulating automated vehicles, and a discussion of current and modern regulatory tools.[11] Alongside the guidelines, the DOT also issued proposed rules for cars to talk to one another to prevent accidents, which “illustrate the government’s embrace of car-safety technology after years of hesitation, even as distractions in vehicles contributed to the biggest annual percentage increase of road fatalities in 50 years,” and which attempt to reduce crashes and vehicle deaths.[12] The cars would use radio communications to send alerts to devices in the cars to warn drivers of collision risks, a car in a driver’s blind spot, oncoming traffic, and traffic slowing or stopping.[13]
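The sketch below gives a rough sense of the vehicle-to-vehicle alerting the proposed rules contemplate: one car broadcasts a short message about its position and speed, and a receiving car decides whether to warn its driver of a vehicle in the blind spot. The message fields and thresholds are hypothetical and are not drawn from the DOT proposal.

```python
# Rough illustration of vehicle-to-vehicle alerting as described above.
# The message fields and the distance thresholds are hypothetical,
# not taken from the DOT's proposed rules.
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    sender_id: str
    relative_x_m: float   # lateral offset from the receiving car, meters
    relative_y_m: float   # longitudinal offset (negative = behind), meters
    speed_mps: float

def blind_spot_alert(msg: BasicSafetyMessage) -> bool:
    """Warn if the sending vehicle sits beside and slightly behind the receiver."""
    beside = 1.0 < abs(msg.relative_x_m) < 3.0
    slightly_behind = -6.0 < msg.relative_y_m < 0.0
    return beside and slightly_behind

if __name__ == "__main__":
    incoming = BasicSafetyMessage("car-42", relative_x_m=2.0,
                                  relative_y_m=-3.0, speed_mps=25.0)
    if blind_spot_alert(incoming):
        print("Blind-spot warning: vehicle detected alongside your rear quarter")
```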

Although the DOT has introduced some standardization, the policy says nothing about “how it is tested (or even defined), how cars using it will operate, or even who should settle these questions.”[14] On February 14, 2017, the House Subcommittee on Digital Commerce and Consumer Protection held a hearing on the deployment of autonomous cars, where representatives of General Motors, Toyota, Volvo, and Lyft testified about how they think the federal government should regulate the new technology.[15] Automakers and technology companies developing autonomous technology want federal intervention to provide a “broad, consistent framework for testing and deploying their robots,” fearing that states will create a “patchwork of regulations.”[16] Federal regulators would allow greater flexibility and wider latitude in how to prove the safety of autonomous driving technology.[17] Congress will have to decide how to measure the safety of these autonomous cars and what standards they must meet, as the age of the robocar and its transition into consumers’ lives seems inevitable.

 

 

 

[1] Matt McFarland, 2016: A tipping point for excitement in self-driving cars, CNN Tech (Dec. 21, 2016), http://money.cnn.com/2016/12/21/technology/2016-year-of-autonomous-car/ (last visited Feb. 18, 2017).

[2] See Guilbert Gates et al., When Cars Drive Themselves, NY Times (Dec. 14, 2016), https://www.nytimes.com/interactive/2016/12/14/technology/how-self-driving-cars-work.html?_r=0 (last visited Feb. 18, 2017).

[3] See id.

[4] See id.

[5] See John Gapper, Why would you want to buy a self-driving car?, Financial Times (Dec. 7, 2016), https://www.ft.com/content/7fad3a62-bb06-11e6-8b45-b8b81dd5d080 (last visited Feb. 18, 2017).

[6] See Alex Davies, Uber’s Self-Driving Truck Makes its First Delivery: 50,000 Beers, Wired (Oct. 25, 2016), https://www.wired.com/2016/10/ubers-self-driving-truck-makes-first-delivery-50000-beers/ (last visited Feb. 18, 2017).

[7] See Avie Schneider, Uber Stops Self-Driving Test In California After DMV Pulls Registrations, NPR (Dec. 21, 2016), http://www.npr.org/sections/thetwo-way/2016/12/21/506525679/uber-stops-self-driving-test-in-california-after-dmv-pulls-registrations (last visited Feb. 18, 2017).

[8] See id.

[9] See id.

[10] See U.S. Dep’t of Transp., Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety (2016), available at https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf.

[11] See id.

[12] See Cecilia Kang, Cars Talking to One Another? They Could Under Proposed Safety Rules, NY Times (Dec. 13, 2016), https://www.nytimes.com/2016/12/13/technology/cars-talking-to-one-another-they-could-under-proposed-safety-rules.html (last visited Feb. 18, 2017).

[13] See id.

[14] See Alex Davies, Congress Could Make Self-Driving Cars Happen—or Ruin Everything, Wired (Feb. 15, 2017), https://www.wired.com/2017/02/congress-give-self-driving-cars-happen-ruin-everything/ (last visited Feb. 18, 2017).

[15] See id.

[16] See id.

[17] See id.

Image Source: https://www.nissanusa.com/content/dam/nissan/blog/articles/autonomous-drive-car/nissan-self-driving-car.jpg.

Inspection or Detention

 


By: Eleanor Faust

 

Reports have surfaced that in the days preceding President Trump’s executive order effectuating an immigration ban, the Council on American-Islamic Relations (CAIR) filed legal complaints concerning hostile interrogations by Customs and Border Protection agents.[1] The complaints allege that the agents demanded that travelers unlock their phones and provide them with social media account names and passwords.[2] Courts have held that customs agents have the authority to manually search devices at the border as long as the searches are not made solely on the basis of race or national origin.[3] This does not mean that travelers are required to unlock their phones, but if they refuse, they run the risk of being detained for hours for not complying with the agent’s request.[4]

When returning home from a trip abroad, you expect to feel welcomed upon arrival, but that has not been the case for many recently. When Sidd Bikkannavar got off the plane in Houston after a personal trip to South America, he was detained by U.S. Customs and Border Protection.[5] Bikkannavar is not a foreign traveler visiting the United States. He is a natural-born U.S. citizen who works at NASA’s Jet Propulsion Laboratory. He has also undergone a background check and is enrolled in Global Entry, which allows expedited entry into the United States.[6] While he was detained, the customs agents demanded his phone and access PIN without giving him any information as to why he was being questioned.[7] A major concern is that Bikkannavar had a NASA-issued phone that very well could have contained sensitive information that should not have been shared.[8] For a number of professionals, these types of border searches compromise the confidentiality of information.[9] For example, searching the phone of a doctor or lawyer can reveal private doctor-patient or attorney-client information.[10]

Although there is no legal mechanism to make individuals unlock their phones, customs agents have broad authority to detain travelers, which can often be intimidating enough to make a person unlock a phone to avoid trouble.[11] Homeland Security Secretary John Kelly is looking to expand customs agents’ authority and is pushing to be able to obtain all international visitors’ social media passwords and financial records upon their arrival into the country.[12] At a hearing, Kelly told the House Homeland Security Committee, “We want to get on their social media, with passwords: What do you do, what do you say? If they don’t want to cooperate then you don’t come in.”[13] In the meantime, Hassan Shibly, the director of CAIR’s Florida branch, advises American citizens to remember that “you must be allowed entrance to the country. Absolutely don’t unlock the phone, don’t provide social media accounts, and don’t answer questions about your political or religious beliefs. It’s not helpful and it’s not legal.”[14]

 

 

 

[1] See Russell Brandom, Trump’s executive order spurs Facebook and Twitter checks at the border, Verge (Jan. 30, 2017, 9:55 AM), http://www.theverge.com/2017/1/30/14438280/trump-border-agents-search-social-media-instagram.

[2] See id.

[3] See Loren Grush, A US-born NASA scientist was detained at the border until he unlocked his phone, Verge (Feb. 12, 2017, 12:37 PM), http://www.theverge.com/2017/2/12/14583124/nasa-sidd-bikkannavar-detained-cbp-phone-search-trump-travel-ban.

[4] See id.

[5] See id.

[6] See id.

[7] See Seth Schoen, Marcia Hofmann, and Rowan Reynolds, Defending Privacy at the US Border: A Guide for Travelers Carrying Digital Devices, Electronic Frontier Foundation (Dec. 2011), https://www.eff.org/wp/defending-privacy-us-border-guide-travelers-carrying-digital-devices.

[8] Id.

[9] See id.

[10] See id.

[11] See Brandom, supra note 1.

[12] See Alexander Smith, US Visitors May Have to Hand Over Social Media Passwords: DHS, NBC News (Feb. 8, 2017, 7:51 AM), http://www.nbcnews.com/news/us-news/us-visitors-may-have-hand-over-social-media-passwords-kelly-n718216.

[13] See id.

[14] See Grush, supra note 3.

Image Source: https://www.thestar.com/content/dam/thestar/news/canada/2017/02/18/are-us-border-agents-allowed-to-search-phones-and-other-devices/border-searches.jpg.size.custom.crop.1086×724.jpg.

FAA Regulation Delays Rollout of Amazon Prime Air

 


By: Sophie Brasseux

 

Along with Super Bowl LI came the typical array of Super Bowl ads. One ad that got a lot of attention this year belonged to Amazon. Amazon’s ad featured a woman ordering Doritos using her Amazon Echo.[i] As a Prime Air drone shows up with her delivery, a disclaimer airs stating, “Prime Air is not available in some states. Yet.”[ii]

After announcing the development of its drone delivery system this past July, Amazon completed its first test of the drones in the UK in December.[iii]

Amazon advertises Prime Air as a system in which drones would be able to get you your package in thirty minutes or less.[iv] Prime Air would be able to deliver packages of up to five pounds and would include “sense and avoid” technology for improved safety and reliability.[v] The drones will have vertical takeoff and landing capability, with the ability to reach altitudes of 100 meters and speeds of 100 kph.[vi] Given the costs of operating them, the drones are designed as a “last resort” in Amazon’s “delivery hierarchy.”[vii] So far, Amazon’s website includes videos of the drones as well as an FAQ section mostly about their testing in the UK.[viii]

One might wonder why this U.S. company is testing in the UK. Back in June 2016, the Federal Aviation Administration published new rules, which took effect in late August.[ix] The new FAA rules replaced temporary restrictions on drone use by companies, which had previously required companies to apply for a special permit in order to use a drone for their business.[x] The rules allow companies to use drones but require that the drone be kept within the operator’s line of sight during use.[xi] Another major restriction is that drones are prohibited from flying over individuals not involved in the drone operation.[xii] These restrictions directly affect the way in which Amazon had intended to use its Prime Air service, so Amazon has moved its testing to the UK, where there are currently no such restrictions.[xiii]

The regulations also restrict the times of day commercial drones can be used, their flight patterns, and their altitude.[xiv] Additionally, in order to operate a commercial drone, the FAA requires a remote pilot certificate or a student private pilot’s license, neither of which is required for personal drone use.[xv] One notable benefit of the new FAA rules is that commercial operators no longer have to go through a legal procedure to obtain FAA permission to operate.[xvi] The Consumer Technology Association has stated that the FAA has struck “an appropriate balance of innovation and safety” with its new rules, but that “additional steps are needed such as addressing ‘beyond-line-of-sight’ operations, which will be a true game changer.”[xvii]

At this time, it is unclear what next steps Amazon or the FAA plan to take to get Prime Air and other commercial drone services permitted in the United States. Given the current regulations, it is doubtful we will be seeing these drones in the near future. However, given that the technology has already been developed, it seems to be only a matter of time until your packages are delivered via drone.

 

 

 

[i] See Michelle Castillo, One of Amazon’s delivery drones showed up in a Super Bowl ad, CNBC (Feb. 6, 2017), available at http://www.cnbc.com/2017/02/06/amazon-prime-delivery-drone-gets-super-bowl-li-spotlight.html

[ii] See id.

[iii] See id; see also Luke Johnson, 9 things you need to know about the Amazon Prime Air delivery service, Digital Spy (Feb. 7, 2017), available at http://www.digitalspy.com/tech/feature/a820748/amazon-prime-air-drone-delivery-service/.

[iv] Amazon.com, Prime Air, available at https://www.amazon.com/Amazon-Prime-Air/b?ie=UTF8&node=8037720011.

[v] See id.

[vi] See supra note 3.

[vii] See id.

[viii] See supra note 4.

[ix] See Martyn Williams, New FAA rules mean you won’t get Amazon drone delivery anytime soon, PCWorld (Jun 21, 2016), available at http://www.pcworld.com/article/3086790/legal/new-faa-rules-mean-you-wont-get-amazon-drone-delivery-anytime-soon.html.

[x] See id.

[xi] See id.

[xii] See id.

[xiii] See id; see supra note 3.

[xiv] See supra note 9.

[xv] See id.

[xvi] See id.

[xvii] See Nat Levy & Todd Bishop, FAA issues final commercial drone rules, restricting flights in setback for Amazon’s delivery ambitions, GeekWire (Jun 21, 2016), available at http://www.geekwire.com/2016/faa-issues-final-commercial-drone-rules-restricting-flights-setback-amazons-delivery-ambitions/.

Image Source: http://www.droneflit.com/wp-content/uploads/2016/07/Amazon-Prime-Air.jpg.

Putting Words in your Mouth: The Evidentiary Impact of Emerging Voice Editing Software


By: Nick Mirra

 

All you have in this life is your word. The human voice serves as the carrier for our words, thoughts, and feelings. Each of us is imparted with a unique voice that allows us to be identified within a group.[1] Our voice is our vocal fingerprint. Every word that departs from our lips carries our exclusive trademark, marking those words as our own.[2] Because the uniqueness of voice is a phenomenon implicitly understood by all humans, our words have become intertwined with our identity. As a result of this interconnection between voice and identity, voice recordings are easily introduced into evidence, and they relay information in any given case through our own words.

Technology has complicated the reliability of vocal identification. Alexander Graham Bell’s revolutionary invention of the telephone, for example, shaped the use of vocal evidence in court.[3] The telephone complicated testimony based on voice recognition because it made vocal communication possible over long distances while preserving relative clarity of voice: even though the correspondents may be miles apart, the parties can communicate with each other effectively.

The next hurdle for vocal evidence since the telephone looms on the horizon. What would it be like if the proponent of a piece of evidence could introduce a voice recording that was clearly the voice of the opponent, but in reality the opponent was not the one speaking at all? Even further, what if the opponents themselves were convinced that it was in fact their voice, yet hotly contested ever saying the words uttered on the recording? A new software program is being developed that allows the user to put words in someone else’s mouth. Through this program, your own unique and identifiable voice becomes a marionette bending to the will of the puppeteer.

When Adobe unveiled its Project VoCo software in a live demonstration in November 2016, it shocked the audience.[4] On a stage in front of spectators, an Adobe representative showed the true power of the company’s newest technology.[5] VoCo is software that enables the user to make a computer say anything the user types into it.[6] The program is not mere text-to-speech conversion software. VoCo can take typed text and convert it into human speech spoken in the voice of anyone the user has on file.[7] It can take a recording of a voice and change one or more words in a spoken sentence, or even create entirely new sentences.[8] More specifically, VoCo ingests a 20-minute audio sample, and anything the user types after that will be read back by the program in the speaker’s own voice.[9] Essentially, the software is Photoshop for the human voice.[10] As the software evolves, the length of the voice sample required for it to function will likely shorten dramatically, and manipulating another’s voice will become increasingly simple.[11]
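To make the described workflow concrete, here is a hypothetical sketch of the kind of interface such a tool exposes: enroll a voice from a long recorded sample, then synthesize arbitrary typed text in that voice. The function names are invented for illustration and are not Adobe’s actual VoCo API; the “model” is a stand-in object rather than a real speech synthesizer.

```python
# Hypothetical sketch of the workflow described above: enroll a voice from a
# recorded sample, then synthesize typed text in that voice. These names are
# illustrative only and are not Adobe's actual VoCo API; no real audio is
# produced, only a description of what would be generated.
from dataclasses import dataclass

@dataclass
class VoiceModel:
    speaker: str
    sample_minutes: float  # VoCo reportedly needs roughly 20 minutes of audio

def enroll_voice(speaker: str, sample_minutes: float) -> VoiceModel:
    """Build a (stand-in) voice model from a recorded sample."""
    if sample_minutes < 20:
        raise ValueError("the demo reportedly required about 20 minutes of audio")
    return VoiceModel(speaker=speaker, sample_minutes=sample_minutes)

def synthesize(model: VoiceModel, text: str) -> str:
    """Pretend to render typed text as speech in the enrolled voice."""
    return f'[audio: "{text}" spoken in the voice of {model.speaker}]'

if __name__ == "__main__":
    model = enroll_voice("Speaker A", sample_minutes=20)
    print(synthesize(model, "I never said that."))
```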

The courts will soon be faced with this software, which will shake the principles of earwitness evidence. It is important for practitioners to be aware of Project VoCo so that they can respond competently to falsified evidence. Such falsification will be hard to detect, but VoCo offers a plausible explanation for how someone could put unfavorable words in an opponent’s mouth.

 

 

 

[1] See Sophie Scott, Why do Human Voices Sound the Way they do, BBC, (Dec. 1, 2009) https://www.law.georgetown.edu/academics/academic-programs/legal-writing-scholarship/writing-center/upload/rule18.pdf.

[2] See Gilbert v. Cal., 388 U.S. 263, 266 (1967).

[3] See e.g. F.M. English, Annotation, Admissibility of sound recordings in evidence, 71 A.L.R.2d 1024 (enumerating instances where telephone calls and voice recordings appear in American Law Reports).

[4] See Adobe Creative Cloud, #VoCo. Adobe MAX 2016 (Sneak Peeks), YouTube (Nov. 4, 2016)

[5] See id.  

[6] See id.

[7] See Nick Statt, Adobe is Working on an Audio App that Lets You Add Words Someone Never Said, The Verge (Nov. 3, 2016) http://www.theverge.com/2016/11/3/13514088/adobe-photoshop-audio-project-voco.

[8] See id.

[9] See id.

[10] See id.

[11] See id.

Image Source: https://cdn.arstechnica.net/wp-content/uploads/2016/11/voco-demoed-on-stage-760×380.jpg.
