Richmond Journal of Law and Technology

The first exclusively online law review.

Author: Sean Livesey (Page 7 of 8)

Sick Pics: Legal Questions Raised by Patients Sending Nude Images to Doctors for Diagnosis


By: Nick Mirra

Millennials have already entered the workforce in several of the nation’s most time-honored professions. As 2017 draws near, more of these digital natives are earning their place in established fields. For example, the average age of matriculating medical students for the 2015-16 year was 24, which means the average medical student is a millennial.[1] What are the implications of this generation taking the reins of the medical profession? One prominent consequence is that doctors are becoming far more technologically savvy, as medical techniques, procedures, and protocols evolve with the influx of millennial medical students. As with any technological advancement, uncharted legal questions arise almost as quickly as the technology itself springs to life.

Telemedicine has grown increasingly popular over the last several years.[2] Patients are able to get quality medical attention from the comfort of their homes or offices.[3] Waiting lines are minimized, patients do not have to arrange for transportation to the doctor’s office, and relative costs are decreased for both patient and provider.[4] Any communications that occur over the secure telemedicine program are protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).[5] There are many benefits to this type of treatment, and they tend to outweigh the drawbacks in the eyes of a majority of doctors.[6]

In a conscious effort to connect with the younger generation, doctors are charting new avenues for interacting with their patients, beyond the scope of conventional telemedicine. One such advancement, which falls outside the protections of the telemedicine forum, is that some doctors now accept pictures of their patients’ ailments over text message and email for medical assessment. As this practice has emerged, the potential for legal mishap has closely followed. It did not take long before patients began sending their doctors pictures of their genitalia to diagnose a myriad of symptoms.[7] According to one doctor, a majority of these patients obtain their doctor’s consent before sending the pictures, but some do not.[8] Another potential benefit of this type of doctor-patient interaction is that patients often do not feel as embarrassed as they would if disrobing in front of a doctor in person.

This emerging trend raises legal questions that have never been addressed. What happens when the patient texting a picture of their genitals to their doctor is a minor?

Receipt of child pornography violates 18 U.S.C. § 2252, which states in pertinent part that it is a crime for any person to knowingly receive child pornography by means of interstate commerce.[9] The statute continues by explaining that receipt by computer satisfies the interstate commerce requirement.[10] Further, “sexually explicit conduct” is defined in part as “lascivious exhibition of genitals… of any person.”[11] Under a plain-meaning statutory interpretation, it is clear that if a doctor consented to receive a picture of a child’s genitals for the purpose of diagnosis, the image could be considered child pornography per se.

A further confounding scenario arises when a doctor receives the images without having consented to their receipt. Would the doctor have knowingly received the pictures? In the child pornography context, the Eleventh Circuit has held that “a person ‘knowingly receives’ something when he… takes in that thing through the mind or the senses.”[12] The court further explained that a person need not save images to a hard drive in order to receive or possess child pornography.[13]

Technology has advanced the medical profession far beyond what could have been imagined even half a century ago. With that progress have come new liabilities. At present, the scope of telemedicine is still being established. What is the legality of minors sharing explicit images of themselves with their doctors via unsecured channels such as text message or email? This emerging phenomenon continues to generate a host of questions about the legality of such exchanges. Until the issue is taken to court, or until legislatures respond, gaps in the law will remain, and doctors need to be extremely cautious.


[1] See Age of Applicants to U.S. Medical Schools at Anticipated Matriculation, Association of American Medical Colleges, tbl.A-6, data/factstablea6.pdf (last visited Sept. 20, 2016).

[2] See AIHM Survey of Healthcare Practitioners Shows That Telemedicine Technology Is Ahead of the Current State Medical Board Guidelines, CIO Today, storyid=030000IWF6ZO (last visited Sept. 20, 2016).

[3] See Jessica Harper, Pros and Cons of Telemedicine for Today’s Workers, U.S. News (last visited Sept. 20, 2016).

[4] See id.

[5] See Amanda Holpuch, Sexting for your Health, The Guardian, society/2016/apr/07/patients-texting-doctors-genitalia-photos-ethics-law (last visited Sept. 20, 2016).

[6] See supra note 2.

[7] See supra note 5.

[8] See id.

[9] See 18 U.S.C. § 2252(a)(2)(A).

[10] See id.

[11] 18 U.S.C. § 2256(2)(v).

[12] United States v. Woods, 684 F.3d 1045, 1057-58 (11th Cir. 2012).

[13] See id.



Telemedicine Is Set to Expand, But State Licensure Laws Could Limit Growth


By: Ryan Martin

As recently reported, the global telemedicine industry is expected to grow to $57.92 billion by the year 2020.[1] While that is still a small share of the total healthcare industry, it represents a 17.85% compound growth rate, signaling that telemedicine services are here to stay.[2]

Telemedicine, also known as telehealth, aims to provide medical services via electronic communications.[3] Often, these services can help deliver medical care in rural areas where access to physicians is limited.[4] In a typical visit, a patient will “chat” with a physician through a webcam service, then be advised on a treatment or referred for further care.[5] While the concept of telemedicine has been around as long as the telephone, it has taken off dramatically with the rise of mobile and video technology.[6] The federal government is now showing an interest in expanding access to these services by providing grants to community hospitals for use in rural areas.[7]

However, as the industry continues to grow, several legal and regulatory issues will need to be addressed to ensure that healthcare providers can offer telemedicine services in a cost-effective manner. Among them are restrictions on reimbursement through Medicaid and Medicare, privacy concerns under HIPAA, and the threat of malpractice suits resulting from the inability to conduct a full physical examination of the patient.[8] Perhaps the most daunting hurdle, particularly in the United States, is individual state licensing restrictions.[9]

States are responsible for regulating and monitoring healthcare professionals within their borders and generally require full licensure to provide services to patients in the state.[10] For example, a physician practicing internal medicine in California would need to be fully licensed by the state of Florida in order to provide a telemedicine consultation to a patient located in Florida. While it is understandable that a state would want to protect its citizens from unlicensed physicians, telemedicine transcends geographic boundaries; imposing heavy licensing restrictions frustrates its purpose of providing common, low-risk services where the alternative is often no healthcare service at all.

A few states have amended their laws to allow for easier access to telemedicine. Several allow physicians from bordering states to provide medical services.[11] Ten states have established special telehealth licenses that allow a physician to practice through telemedicine services, but not physically, in that state.[12] This expedites the physician’s licensure and shortens what is often a lengthy review of her application.[13] However, no state has allowed for direct reciprocity.[14]

The American Telemedicine Association publishes an annual report card grading each state’s licensure policies from A to F, based on the reasonableness of its telemedicine practice standards, licensure requirements, and policy on Internet prescribing.[15] In its latest report, no “A’s” were issued, indicating that there is still work to be done if states want to expand telemedicine services.[16]

There is currently one potential resolution to the licensing problem. Seventeen states have signed a Federation of State Medical Boards (FSMB) compact that provides an expedited licensing process for out-of-state practitioners.[17] However, the compact does not create federal licensure law, and each individual state must affirmatively adopt it.[18] Because of this, the FSMB compact likely falls short of a sufficiently comprehensive solution.

The future appears positive for telemedicine services, but if current regulations remain unchanged, telemedicine providers may be stuck navigating the often-complex state rules that limit the availability of such services. If the federal government truly wants to increase healthcare accessibility in rural areas through telemedicine, more will need to be done to alter state licensing regulations.


[1] See Telemedicine Market to Reach $ 57.92 Billion by 2020, Thanks to Evolving Reimbursement Policies; Reveals Market Data Forecast Analysis, PR Newswire, (Sept. 14, 2016),–5792-billion-by-2020-thanks-to-evolving-reimbursement-policies-reveals-market-data-forecast-analysis-593396911.html.

[2] See id.

[3] See What is Telemedicine, American Telemedicine Association, (last visited Sept. 16, 2016).

[4] See Jonah Comstock, How telemedicine, remote patient monitoring help extend care in Mississippi, MobiHealthNews (Sept. 13, 2016).

[5] See What is Telemedicine, supra note 3.

[6] See id.

[7] See Joseph Goedert, Federal grants give rural telehealth programs a boost, HealthData Management (Aug. 16, 2016).

[8] See John Donohue, Telemedicine: What the future holds, Healthcare IT News (Sept. 6, 2016, 11:06 AM); HIPAA Guidelines on Telemedicine, HIPAA Journal (last visited Sept. 16, 2016); Neil Chesanow, Do Virtual Patient Visits Increase Your Risk of Being Sued?, Medscape (Oct. 22, 2014).

[9] See Kristi VanderLaan Kung, Recent Relaxation of State-level Challenges to Expansion of Telemedicine but Barriers Remain, The National Law Review (Aug. 18, 2016).

[10] See id.

[11] See Latoya Thomas & Gary Capistrant, State Telemedicine Gaps Analysis, AM. TELEMEDICINE ASS’N 4 (Jan. 2016),–physician-practice-standards-licensure.pdf.

[12] See id.

[13] See id.

[14] See id.

[15] See id.

[16] See Thomas, supra note 11.

[17] See Kung, supra note 9.

[18] See id.


Self-Driving Vehicles: Legal Ramifications Surrounding the Future of the Auto Industry

A member of the media test drives a Tesla Motors Inc. Model S car equipped with Autopilot in Palo Alto, California, U.S., on Wednesday, Oct. 14, 2015. Tesla Motors Inc. will begin rolling out the first version of its highly anticipated "autopilot" features to owners of its all-electric Model S sedan Thursday. Autopilot is a step toward the vision of autonomous or self-driving cars, and includes features like automatic lane changing and the ability of the Model S to parallel park for you. Photographer: David Paul Morris/Bloomberg via Getty Images

By: Will MacIlwaine

Over the past few years, auto manufacturers have been experimenting with autopilot features that, in certain situations, essentially allow a vehicle to drive itself. One such vehicle is the Tesla Model S. A recent software update for the Model S allows it to “use its unique combination of cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic.”[1] Further, this Tesla model has the ability to search for a parking space once the driver has arrived at his or her destination, and will even parallel park the vehicle on its own.[2]

The Tesla must obtain certain data before it can enter into autopilot mode.[3] Among other things, there must be clear lane lines, a consistent travel speed, and the car must be able to sense other vehicles around it.[4] Tesla points out on its website that, although the vehicle does most of the driving for the consumer, drivers must still keep their hands on the steering wheel.[5] Even so, there have been reports of Tesla drivers taking pictures of themselves with their hands off the steering wheel, drinking coffee, reading the paper, or even riding on the roof, while the car does the driving.[6]

Tesla claims that the autopilot feature can make it easier, safer, and more pleasant to deal with traffic.[7] Some drivers of Tesla vehicles with autopilot features, and of other similar vehicles, would beg to differ.

This past week, reports surfaced of an accident involving a Tesla vehicle that occurred in January of 2016.[8] The accident took place in China, when the Tesla, thought to be in autopilot mode, failed to brake and slammed into a road sweeper, killing the driver.[9]

Later, in May of this year, a man was killed when the autopilot feature failed to recognize the white side of a tractor-trailer against the bright sky, and the brakes were not applied.[10]

Legally, what’s at stake for Tesla in introducing this innovative feature? The National Highway Traffic Safety Administration (“NHTSA”) classifies car automation by levels ranging from one to four.[11] Level one is akin to a standard car, while level four corresponds to a fully capable autopilot vehicle.[12] According to attorney Gabriel Weiner, Tesla’s autopilot feature is similar to a level two classification.[13] Drivers could be fooled by the term “autopilot” and falsely believe that they do not have to be fully alert when the feature is in use.[14] If that is the case, and Tesla fails to warn customers to remain alert at all times, the company could be liable if the autopilot feature causes an accident. On the other hand, Tesla’s owner’s manuals state that the driver is still responsible for controlling the vehicle, and when the feature is in use, the vehicle’s center screen displays a message reminding the user to keep his or her hands on the wheel and be prepared to take over at any time.[15]

If a user sees these messages and decides to ignore them, it would seem that Tesla could escape liability, as this could be seen as an implied assumption of risk by the user. Under that theory, if the vehicle user knows and understands the danger that the autopilot feature presents and still voluntarily chooses to use it, there would likely be no liability on the part of Tesla.

In a claim that Tesla acted negligently in selling a car with the autopilot feature, Tesla could also make a contributory negligence argument. By failing to keep hands on the steering wheel, or by not paying attention to the road, the driver could be contributorily negligent if an accident were to occur.

There are certainly other questions surrounding the autopilot feature. For one, who is legally responsible for a crash if the car is driving itself?[16] Tesla? The owner of the car? What implications might autopilot malfunctions have for an owner’s driver’s license? Will an owner get points on his license, or worse, lose it, if the autopilot feature causes a crash?

The technological breakthrough that the autopilot feature offers is obviously not perfect. It may take years to perfect this advancement in the automotive industry. That being said, the question remains: will consumers continue to utilize this compelling feature while potentially sacrificing safety for convenience?


[1] Model S Software Version 7.0 (last visited Sept. 17, 2016).

[2] See id.

[3] See Ryan Bradley, Tesla Autopilot, MIT Technology Review (last visited Sept. 17, 2016).

[4] See id.

[5] See Model S Software Version, supra note 1.

[6] See Bradley, supra note 3.

[7] See Model S Software Version, supra note 1.

[8] See Neal E. Boudette, Autopilot Cited in Death of Chinese Tesla Driver, New York Times (Sept. 14, 2016).

[9] See id.

[10] See Bill Vlasic & Neal E. Boudette, Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says, New York Times (June 30, 2016).

[11] See William Turton, Tesla’s Autopilot Driving Mode is a Legal Nightmare, Gizmodo (July 23, 2016).

[12] See id.

[13] See id.

[14] See id.

[15] See id.

[16] See id.


Is Legal Action the Best Way to Curtail Cyberbullying? Instagram Just Offered Another Option


By: Etahjayne Harris

Instagram, one of the most popular social networks worldwide, has over 500 million monthly active users as of September 2016.[1] The photo-sharing app enables users to take and upload pictures and videos and share them publicly or privately on the app, as well as on a variety of other social networks, such as Facebook and Twitter. For teens, “Instagram is much more than a medium to share photos on—it’s an extension of their identities.”[2] Instagram co-founder Kevin Systrom has said that the app was created to “make it easy for people to share their lives in a beautiful way.”[3] While the intended use and purpose behind Instagram is positive, the app has unfortunately been used as a tool for cyberbullying. Teens may be especially susceptible to cyberbullying given the significant role social media plays in their lives.

The National Conference of State Legislatures defines cyberbullying as “the willful and repeated use of cell phones, computers, and other electronic communication devices to threaten others.”[4] A recent study by the Cyberbullying Research Center (CRC) found that over 25% of middle and high school students have been cyberbullied in their lifetime.[5] While cyberbullying can take place on any variety of social media platforms, such as Facebook, Twitter, or Snapchat, it has been argued that cyberbullying on Instagram is especially bad “because it’s a very public platform that people use to post photos of themselves—inviting everyone and anyone to judge their appearances in the comment sections.”[6] For teens whose social media presence is closely tied to their self-identity, the effects of cyberbullying on that identity are particularly worrisome.

Amid these distressing statistics, you may wonder whether Instagram has taken any measures to mitigate its cyberbullying problem and whether there are legal consequences for cyberbullying. On September 12, 2016, Instagram implemented an update that allows its users “to block ‘inappropriate comments’ on their posts and set filters for specific words.”[7] This update gives users more control over which comments appear on their pictures, beyond simply being able to delete unwanted comments or block specific users. In a statement released on September 12, 2016, Instagram co-founder Kevin Systrom said that the app is working toward “keeping Instagram a safe place for self-expression.” The update gives users the chance to push back against cyberbullies and inappropriate comments generally. But beyond this added control, what legal remedies exist for the victims of cyberbullying?

Nearly every state has enacted some form of a student cyberbullying statute.[8] To be considered cyberbullying, information technology must be used to “deliberately threaten, harass, or intimidate another person.”[9] Under many state statutes, state schools may be required to specifically address and correct behavior that may be considered cyberbullying through their policies.[10] The swift spread of social networking apps like Instagram and Facebook has been met with an increase in cyberbullying litigation in both federal and state courts.[11] A frequent issue in applying these student cyberbullying laws is determining whether a school is responsible for protecting students from off-campus online harassment.[12] There is no clear answer as of today.

In spite of these student cyberbullying laws, finding a legal remedy for cyberbullying is complicated by the fact that cyberbullying encompasses such a wide range of behavior. A claim of cyberbullying is not necessarily viable simply because the victim was offended by a comment on their Instagram post; a claim is generally viable only if the alleged conduct violated a criminal statute, violated a state student cyberbullying law, or constituted a traditional civil tort.[13] Finding legal relief is further complicated by the fact that a cyberbully can post or comment anonymously on social media apps like Instagram, making it difficult to trace the origin of harassing comments. So while there are legal remedies in place for teen victims of cyberbullying, those remedies may be difficult to obtain. For now, it appears that the most feasible way to combat cyberbullying on Instagram is to stop it in its tracks by using the new comment filter option.


[1] See Instagram, Number of monthly active Instagram users from January 2013 to June 2016 (in millions), Statista (last visited Sept. 13, 2016).

[2] Nina Godlewski, If you have over 25 photos on Instagram, you’re no longer cool, TechInsider (May 26, 2016).

[3] Michael Noer, A Conversation with Instagram’s Co-Founder Kevin Systrom, Forbes (Apr. 9, 2012).

[4] Cyberbullying, National Conference of State Legislatures (Dec. 14, 2010) (last visited Sept. 13, 2016).

[5] Sameer Hinduja & Justin Patchin, Cyberbullying Victimization (Feb. 2015).

[6] Elise Moreau, What is a Troll, and What is Internet Trolling, About Tech (Feb. 25, 2016).

[7] Brett Molina, Instagram Update Lets Users Filter Comments, USA Today (Sept. 12, 2016).

[8] See Gary D. Nissenbaum & Laura J. Magedoff, Potential Legal Approaches to a Cyberbullying Case, The Young Lawyer, Vol. 17, No. 9 (Aug. 2013).

[9] Id.

[10] See id.

[11] See id.

[12] See id.

[13] See id.


Apple’s Latest Lawsuit Arises From the iPhone Defect “Touch Disease”


By: Will MacIlwaine

On September 7, Apple held its annual fall event. The event featured the introduction of the Apple Watch 2, as well as the iPhone 7, which is the first iPhone not to include a headphone jack.

While the fall event might suggest that things are continuously heading in the right direction for the innovative company, a defect associated with the iPhone 6 and 6 Plus devices suggests otherwise. In the United States District Court for the Northern District of California, three iPhone users have filed a class-action lawsuit against Apple for the defect that has been coined “Touch Disease.”[1]

Inside the affected iPhone models are chips that allow a user’s finger and the screen to interact.[2] For some users, these chips are not correctly secured to the logic board of the phone, and fail as a result of the consumer’s normal use of the device.[3] The plaintiffs in this case claim that Apple concealed this defect, which “causes the touchscreens on the iPhones to become unresponsive and fail for their essential purpose as smartphones.”[4] The defect also causes a gray flickering bar to appear at the top of the device’s screen.[5]

All three plaintiffs, Todd Cleary, Jun Bai, and Thomas Davidson, have experienced the “Touch Disease” issue.[6] Each was told that he could pay an additional fee of over $300 for a replacement phone.[7]

The plaintiffs note that the previous iPhone 5S design included a metal shield to protect the device’s logic board, allowing the phone to better accommodate reasonable use by the consumer.[8] Additionally, the iPhone 5c design used an “underfill” mechanism to reinforce the chips at issue and protect them from normal wear and tear.[9] According to the plaintiffs, the iPhone 6 and 6 Plus models carry neither of these protective features.[10]

The plaintiffs claim that Apple has done nothing to remedy the defect, even though it has had knowledge of the issue for some time.[11] Apple gained this knowledge, according to the plaintiffs, through the records of customer complaints, repair records, and various customer claims.[12] The plaintiffs state that Apple’s lack of action is a result of “unfair, deceptive and/or fraudulent business practices.”[13] They even go as far as to argue that, had they not relied on Apple’s representations regarding the quality of the product, they would have paid less for the iPhones, or would not have purchased them at all.[14]

In their argument, the plaintiffs first posit that Apple engaged in unfair and deceptive acts in violation of the California Consumers Legal Remedies Act (“CLRA”) by knowingly and intentionally concealing from customers the fact that the iPhones were faulty.[15] More generally, the plaintiffs argue that Apple misrepresented the product by representing that the phones had characteristics, uses, or benefits that they did not have.[16] They also argue that Apple misrepresented the quality of the product and advertised the phones with the intent not to sell them as advertised.[17] Because Apple is in a better position to know the true state of the touchscreen defect, the plaintiffs contend that Apple owed customers a duty to disclose the issue.[18]

Further, the plaintiffs argue that because they were deceived into purchasing phones they would not have bought had they known of the defects, Apple’s conduct was fraudulent.[19] Their other claims for relief include negligent misrepresentation, unjust enrichment, breach of implied warranty, and violations of several warranty acts.[20]

The “Touch Disease” only became an issue about six months ago.[21] Further, many users are just now starting to experience this problem on the iPhone 6 and 6 Plus, two years after the release of these models.[22] It’s difficult to believe that Apple knew that this was going to be an issue when it originally advertised and eventually released the phones in September of 2014.[23] Apple, while a powerhouse in the technology industry, cannot predict the future, and it’s certainly plausible that the company had no idea that the affected phones would have this kind of issue more than a year and a half after release. If that’s the case, there does not seem to be any “unfair” or “deceptive” action taken by Apple.[24]

On the other hand, ignoring the issue and continuing to produce and sell these iPhone models after becoming aware of it could be seen as deceptive and as a misrepresentation of the product’s quality, if Apple in fact had reasonable knowledge that the issue affected a majority of the models in the iPhone line it was continuing to produce and sell.

In reality though, Apple’s general warranty on its iPhone line is a one-year limited warranty.[25] After that, it is the consumer’s responsibility to pay for necessary repairs. All three plaintiffs in this case had been in possession of their phones for at least one and a half years before beginning to experience this issue.[26]

It would seem to be a bit more complicated if a user experiencing this issue was still within the one-year warranty period, because Apple’s warranties do not include coverage for “defects caused by normal wear and tear or otherwise due to the normal aging of the Apple Product.”[27] Then, the main issue would seem to be whether the defect was really a result of normal wear and tear, compared to a design defect as the plaintiffs claim.[28] If the plaintiffs were still within the warranty period, the claims might have more merit, but since the plaintiffs and many of the individuals having this issue only started to experience it around the two-year mark of owning the phones,[29] this does not seem to be the main subject of potential litigation. In this case, it does not seem likely that the court will side with the customers bringing these claims.

It is important to reiterate that this is a class action. The plaintiffs seek to represent a nationwide class of iPhone 6 and iPhone 6 Plus users. That being said, this could be a serious issue for Apple if anything comes of these claims. Be on the lookout for a response from Apple in the coming weeks.


[1] See Don Reisinger, Apple Is Being Sued Over the iPhone ‘Touch Disease’, Fortune (Aug. 30, 2016).

[2] See id.

[3] See Class Action Complaint ¶ 27, Davidson v. Apple, Inc., No. 5:16-cv-4942, (N.D. Cal. filed Aug. 27, 2016).

[4] See id. ¶ 1.

[5] See id. ¶ 21.

[6] See id. ¶¶ 8-10.

[7] See id.

[8] See Class Action Complaint, supra note 3, ¶ 28.

[9] See id.

[10] See id. ¶ 30.

[11] See id. ¶ 2.

[12] See id. ¶ 35.

[13] See Class Action Complaint, supra note 3, ¶ 4.

[14] See id. ¶ 38.

[15] See id. ¶ 57; Cal. Civ. Code § 1770 (a)(2),(5),(7),(9) (2016).

[16] See Class Action Complaint, supra note 3, ¶ 57; Cal. Civ. Code § 1770 (a)(5) (2016).

[17] See Class Action Complaint, supra note 3, ¶ 57; Cal. Civ. Code § 1770 (a)(9) (2016).

[18] See Class Action Complaint, supra note 3, ¶ 60.

[19] See id. ¶ 82.

[20] See id. ¶¶ 15-19.

[21] John Matarese, Apple iPhone ‘Disease’ Making Touch Screens Useless, WCPO Cincinnati (Sept. 12, 2016).

[22] See id.

[23] See id.

[24] See Class Action Complaint, supra note 3, ¶ 4.

[25] See Your Hardware Warranty, (last visited Sept. 14, 2016).

[26] See Class Action Complaint, supra note 3, ¶¶ 8-10.

[27] See Your Hardware Warranty, supra note 25.

[28] See Class Action Complaint, supra note 3, ¶ 30.

[29] See Matarese, supra note 21.


Identity Misappropriation on Dating Apps: Did You Right Swipe the Right Person?


By: Tatum Williams

It is easy to confirm a Tinder profile’s authenticity when the person is someone you know. When this happens, and it often does considering the app boasts an estimated 50 million users,[1] there is assurance beneath the initial awkwardness of swiping across a familiar face: the assurance of knowing that the person behind that profile is who they say they are.

With the slogan, “It’s like real life but better,” Tinder’s innovative design allows users to connect with nearby people using location-based software and basic Facebook information, such as mutual interests and friends.[2] Tinder and similar apps, such as Hinge, require this social authentication primarily to put users’ minds at ease about the likelihood of encountering a fake profile.[3] But such assurance is not always warranted. Roughly 54% of those who use dating apps and online dating felt that some users seriously misrepresented themselves in their profiles.[4] Evidence of that is the modern phenomenon of “catfishing.” Catfishing, defined as the intentional deception of another through the use of a fake profile, typically in hopes of achieving a romantic connection,[5] has flourished with the rise of social media and dating apps. It is now easier than ever for someone to fabricate a dating profile.[6] Needless to say, the prevalence of dating apps has facilitated deceptive behavior, given the availability of anonymous communications.[7]

More often than not, misrepresentations of this nature are harmless. While some people may exaggerate or lie about their preferred television shows or movies, others execute more strategic lies, often pertaining to their age, weight, height, personality traits, interests, monetary status, career aspirations, and even past relationships.[8] Most people are familiar with these kinds of misrepresentations and would agree that these lies, though wrong, are not worthy of serious legal ramifications.[9] Or are they?

This raises the question of whether the legal considerations and ramifications associated with dating apps have evolved at the same burgeoning rate as the apps themselves. Unfortunately, the answer is no; dating app users have a low likelihood of success in holding a dating app liable for any harm that the user experiences from his or her interactions with other app users.[10] Most dating apps have combated these potential claims by disclaiming all warranties and representations with regard to other users in their terms of use agreements.[11] And the immunity does not stop there: the Communications Decency Act protects apps from liability based on content posted by users, such as the aforementioned catfishing scams or other misrepresentations.[12] Under the Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[13]

Tinder is relatively new—it was only created in 2012[14]—so few studies have examined the number of fake profiles currently in existence. That said, the rapid growth of its user base preserves the potential for misrepresentation and catfishing. Despite the protections dating apps and other online dating services receive, several states have made lying on these platforms a criminal offense.[15] This suggests that the trend of remaining unpunished for misrepresentation or catfishing is diminishing.[16] But until cyberspace governance evolves to the level that dating apps have, users’ best defense is to swipe wisely.

[1] See Alexis Kleinman, The Typical Tinder User Spends 77 Minutes Tinding Every Day, Huffington Post (Oct. 31, 2014),

[2] See Jessica L. James, Mobile Dating in the Digital Age: Computer-Mediated Communication and Relationship Building on Tinder (May 2015) (unpublished Master of Arts thesis, Texas State University) (on file with author).

[3] See Lindsay Hildebrant, Media and Self Representative Perceptions: Deception in Online Dating (May 2015) (unpublished undergraduate Honors thesis, Pace University) (on file with Pforzheimer Honors College, Pace University).

[4] See James, supra note 2.

[5] See Krystal D’Costa, Catfishing: The Truth About Deception Online, Scientific American Blog (April 25, 2014),

[6] See Keith Wagstaff, Hook, Line and Tinder: Scammers Love Dating Apps, NBC News (April 11, 2014, 5:36 PM),

[7] See Geelan Fahimy, Liable for Your Lies: Misrepresentation Law as a Mechanism for Regulating Behavior on Social Networking Sites, 39 Pepp. L. Rev. 2 (2012).

[8] See id.

[9] See id.

[10] See Greg Mitchell, Digital Dating: Legal Matters to Consider Before Swiping Right, Missouri Lawyers Help Blog (Feb. 12, 2016),

[11] See id.

[12] Doe v. MySpace, Inc., 528 F.3d 413, 416 (5th Cir. 2008).

[13] See Fahimy, supra note 7.

[14] See James, supra note 2.

[15] See Fahimy, supra note 7.

[16] See id.


"Demand Response" In 2016


By: Ryan Suit,

If you have ever wanted to be paid to do less than you are doing right now, then you might be a fan of “demand response.” Demand response refers to the concept of paying electricity consumers to not use electricity at certain times.[1] Currently, most demand response participants are large commercial industries, and residential participation is still small. But as advancements in technology allow more people and businesses to take part in demand response, paying consumers to not use electricity will have a lot of benefits for consumers, the electric grid, and the environment.[2]

Demand response entails less energy being consumed, which means less energy needs to be produced. First, this decreases costs to consumers because they use less energy.[3] Second, demand response makes the grid more reliable because there is a lesser likelihood of overload.[4] Third, it decreases the amount of carbon dioxide produced by electricity generators that use fossil fuels.[5] Many fossil-fuel power plants are both inefficient and expensive to operate, so they are only turned on when demand for electricity is at its peak. Demand response can prevent the need for these types of plants, and therefore prevent them from producing carbon dioxide pollution, because it balances the supply and demand for electricity by using less energy rather than producing more.

The big question for demand response is, “Who gets to regulate it?” Section 201 of the Federal Power Act gives the Federal Energy Regulatory Commission (FERC) the power to regulate interstate transmission of electricity and wholesale electricity sales.[6] While FERC was given authority over interstate electricity and wholesale rates, states were left to regulate intrastate electricity sales and retail sales to end-users.[7] Almost one year ago, the Supreme Court held in Learjet that federal natural gas laws do not preempt state laws that regulate any phase of natural gas production.[8] By holding that federal law did not preempt state regulation of an energy industry, Learjet signaled that the Supreme Court might be in favor of allowing states to regulate demand response.[9] Earlier this year the Court clarified its stance on demand response in FERC v. EPSA.[10] In that case, the Supreme Court confirmed that FERC has the ability to require firms that transmit energy on the grid to accept bids from demand response companies.[11] This makes demand response a much more viable competitor to energy generators, and also creates more of the benefits described above: lower costs, more reliability, and less pollution. Though state regulatory commissions still have the ability to regulate demand response by prohibiting customers in their states from participating in demand response markets,[12] the holding in FERC v. EPSA may indicate that this “veto” power could be taken away, and FERC might soon have more power to expand demand response programs.
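The bid-acceptance mechanism at issue in FERC v. EPSA can be illustrated with a stylized sketch. This is a toy model, not how any real wholesale market software works: the offers, prices, and the cheapest-first clearing rule are hypothetical simplifications. The point is that a demand-response offer (paying customers to curtail usage) competes on price with generators’ offers to produce power:

```python
# Stylized sketch of a wholesale auction: the grid operator needs a
# quantity of megawatts and accepts the cheapest offers first, whether
# they come from generators or from demand-response providers.
# All names, quantities, and prices are hypothetical.

def clear_market(offers, demand_mw):
    """Accept cheapest offers first until demand is met; return accepted offers."""
    accepted = []
    remaining = demand_mw
    for name, mw, price in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(mw, remaining)          # accept only as much as still needed
        accepted.append((name, take, price))
        remaining -= take
    return accepted

offers = [
    ("coal_plant",      100, 50),  # $/MWh
    ("gas_peaker",       50, 90),  # expensive, inefficient peaking plant
    ("demand_response",  40, 60),  # customers paid to curtail usage
]

# With 120 MW needed, demand response displaces the costly peaker entirely.
print(clear_market(offers, 120))
```

In this toy run the peaker is never dispatched, which mirrors the article’s point that demand response can keep the most expensive, most polluting plants offline.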

Demand response is becoming more than just an energy concept, and is gaining more traction both economically and legally. More lawsuits dealing with demand response are likely to be litigated in the near future, but the Supreme Court’s recent rulings show that demand response could be in your near future as well.



[1] See Joel Eisen, FERC v. EPSA and the Path to a Cleaner Energy Sector, 40 Harvard Environmental Law Review 1 (2016).

[2] Id. at 2.

[3] Id.

[4] Id.

[5] Id.

[6] Federal Power Act § 201, 16 U.S.C. § 824.

[7] Id.

[8] See Oneok, Inc. v. Learjet, Inc., 135 S. Ct. 1591 (2015).

[9] Ashley Davoli, Demand Response: The Consumer’s Role in Energy Use, Rich. J.L. & Tech. (April 26, 2015)

[10] See FERC v. Elec. Power Supply Ass’n, 136 S. Ct. 760 (2016).

[11] Id. at 763-764.

[12] Id. at 779-780.

Smith et al v. Facebook, Inc. et al: Plaintiffs Allege Facebook is Mining Private Medical Information to Generate Profit


By: Quinn Novak,


If you Google the American Cancer Society and search its website for information about breast cancer, do you have a reasonable expectation of privacy? Or do you expect that someone is monitoring your activity and collecting your medical searches? Winston Smith believed he had privacy when searching those types of medical websites for cancer information. Smith did not realize that Facebook was collecting his private medical information from well-respected cancer organizations[1] and using that private health data to create marketing profiles, targeting him with tailored advertisements based on his private information.[2] When Smith discovered this reality, he initiated a class action lawsuit against Facebook and seven healthcare organizations, including the American Cancer Society, the American Society of Clinical Oncology, and the Melanoma Research Foundation.[3]

Smith filed the complaint on March 15, 2016 in federal court in San Jose.[4] The case was assigned to Magistrate Judge Nathanael M. Cousins.[5] The three plaintiffs, including Smith, allege that the named defendants violated the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the federal Wiretap Act, and several state statutes.[6] Under HIPAA, medical data is private, so it should be difficult to acquire, and companies are not allowed to gather or share medical information without the express authorization of the patient.[7] The plaintiffs argue that because users have no idea their information is being gathered, and because Facebook’s data and privacy policies do not disclose that it tracks, collects, and intercepts users’ sensitive medical information and communications, Facebook and the named healthcare organizations violated HIPAA.[8]

Although it is evident that Facebook is harvesting cancer data to generate profit through targeted advertising,[9] it is unclear whether the medical website owners know that Facebook is using their data.[10] If the healthcare organizations were aware that Facebook was collecting their users’ data, however, the plaintiffs claim that the organizations should have disclosed their relationship with Facebook to their users.[11]

Although Smith seeks class certification, damages, restitution, and a permanent injunction from all eight defendants, a Facebook spokesperson stated that the “[l]awsuit is without merit and we will defend ourselves vigorously.”[12] In response, a representative from plaintiffs’ counsel, Kiesel Law LLP, stated, “When you’re searching private medical information, you don’t realize it’s being sent to Facebook,” and asserted that there is a reasonable expectation of privacy for these types of searches.[13] Not all medical websites allow Facebook to track their users’ communications; the Mayo Clinic and Johns Hopkins Medicine websites do not allow Facebook to mine their data through the use of cookies.[14] So, for now, if you need to search for medical information about cancer and do not want Facebook to keep track of it, use one of the protected websites. Otherwise, the next time you log onto Facebook, you can reasonably expect to see advertisements across your newsfeed catering to your medical needs.
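The tracking mechanism the complaint describes can be illustrated with a simplified simulation. This is not Facebook’s actual code, and the URLs, cookie value, and helper function are hypothetical; the sketch only shows the general principle that when a page embeds a third party’s resource, the browser’s request typically carries a Referer header (the page URL) along with the third party’s own cookie:

```python
# Simplified simulation of third-party tracking: the embedded tracker
# receives, on each request, a cookie identifying the visitor and a
# Referer header revealing which page the visitor was reading.
# All URLs and identifiers below are hypothetical.

def tracker_log_request(headers, log):
    """Record what a third party can see from one embedded-resource request."""
    log.append({
        "user": headers.get("Cookie"),    # identifies the visitor across sites
        "page": headers.get("Referer"),   # reveals the page being read
    })

log = []
# The browser visits two health pages that both embed the same tracker.
tracker_log_request({"Cookie": "uid=abc123",
                     "Referer": "https://example-cancer.org/breast-cancer"}, log)
tracker_log_request({"Cookie": "uid=abc123",
                     "Referer": "https://example-cancer.org/treatment-options"}, log)

# The tracker can now reconstruct this visitor's browsing history.
pages_seen = [entry["page"] for entry in log if entry["user"] == "uid=abc123"]
print(pages_seen)
```

This is why the suit turns on disclosure: the visitor never sends anything to the tracker directly, yet the tracker ends up with a per-user list of sensitive pages visited.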



[1] See Bethy Squires, Facebook is Mining Private Data from Cancer Organizations, New Lawsuit Alleges, Broadly (Mar. 18, 2016, 4:15 PM),

[2] See Nicholas Iovino, Facebook Mines Data Off Cancer Sites, Users Say, Courthouse News Service (Mar. 16, 2016, 7:05 PM),

[3] See Carrie Pallardy, Lawsuit Claims Facebook Mined PHI from Websites of Cleveland Clinic, MD Anderson Cancer Center & More for Advertising Profit, Becker’s Health IT & CIO Review (Mar. 23, 2016),

[4] See Smith et al v. Facebook, Inc. et al, PacerMonitor (Apr. 1, 2016, 12:07 AM) [hereinafter PacerMonitor]; see also Neil Versel, Suit Claims Facebook Mines Private Cancer Data, MedCity News (Mar. 23, 2016, 1:21 AM),

[5] See PacerMonitor, supra note 4.

[6] See Versel, supra note 4.

[7] See Squires, supra note 1.

[8] See id.; see Iovino, supra note 2.

[9] See Iovino, supra note 2.

[10] See Squires, supra note 1; see also Versel, supra note 4 (stating that it is unclear whether cancer institutes named in the suit are aware of Facebook’s practices).

[11] See Pallardy, supra note 3.

[12] See Iovino, supra note 2.

[13] See Squires, supra note 1.

[14] See id.



Are Your Legal (Or Illegal) Undertakings Really Anonymous?



By: Celtia van Niekerk,

When Silk Road was developed, it became a haven for illegal activity. Masked by the cryptic underworld of the dark web, many people thought that their activities online were finally free from the peering eyes of law enforcement.

The developer of Silk Road, Ross Ulbricht, was one such person.

He created Silk Road, a website where narcotics were freely sold—an Amazon of the underworld. To buy and sell narcotics, users turned to Bitcoin, a digital currency that enabled them to conduct their activities in secrecy… Or so they thought.

The Bitcoin network relies on a shared public ledger called the block chain. The block chain records all transactions, and from it the balance of each wallet is calculated.[1] This process is made secure through cryptography. The difficulty for law enforcement is that a user’s true identity is kept secret: instead of using a real name as at a bank, a user creates a code that serves as his or her digital signature on the block chain.[2] But while bitcoins themselves are anonymous, spending them starts a forensic trail that may lead right back to you.[3]
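The ledger mechanics described above can be sketched in a few lines: balances are not stored anywhere, they are replayed out of the public transaction history. The addresses and amounts below are hypothetical toy data, not real Bitcoin transactions (real entries use cryptographic addresses and transaction outputs rather than simple sender/receiver pairs):

```python
# Minimal sketch: every wallet balance is derived by replaying the
# public ledger of transactions. Addresses and amounts are hypothetical.
from collections import defaultdict

# Each entry: (sender_address, receiver_address, amount_in_btc).
# Newly minted coins are represented as coming from "coinbase".
ledger = [
    ("coinbase", "addr_A", 10.0),
    ("addr_A",   "addr_B",  5.0),
    ("addr_B",   "addr_C",  2.0),
    ("addr_A",   "addr_C",  1.5),
]

def wallet_balances(ledger):
    """Replay every recorded transaction to compute each wallet's balance."""
    balances = defaultdict(float)
    for sender, receiver, amount in ledger:
        balances[sender] -= amount
        balances[receiver] += amount
    return dict(balances)

print(wallet_balances(ledger))
```

Because the entire history is public, anyone, including law enforcement, can run exactly this kind of replay: the privacy lies only in the gap between an address and the real person holding it.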

Graduate students at Penn State were the first to crack the cryptography wall: by isolating some Bitcoin addresses, they were able to isolate others and eventually map the IP addresses of over 1,000 Bitcoin addresses.[4] But this is easier said than done—once bitcoins mix with those of other users, the trail becomes harder to follow, as Bitcoin is designed to blur the link between an IP address and a transaction.[5] According to Sarah Meiklejohn, a computer scientist, once you catch someone buying an illegal product off a website such as Silk Road, the block chain serves as a history of all of their criminal activity.[6]
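One way researchers link addresses together is the common-input heuristic used in academic Bitcoin-tracing work, including Meiklejohn’s: addresses that co-sign inputs of the same transaction are assumed to belong to one owner. The sketch below is a toy illustration of that idea only; the transactions and address names are hypothetical, and real tracing combines this with many other signals:

```python
# Sketch of the common-input clustering heuristic: group addresses that
# appear together as inputs to the same transaction, on the assumption
# that one owner controls all of them. Data below is hypothetical.

def cluster_addresses(transactions):
    """Union-find grouping of addresses that appear as co-inputs."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:       # each tx: list of its input addresses
        for addr in inputs:
            find(addr)                # register every address we see
        for addr in inputs[1:]:
            union(inputs[0], addr)    # co-inputs share an owner

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())

# Two transactions sharing addr2 link addr1, addr2, and addr3 into one
# cluster; addr4 stays on its own.
txs = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
print(cluster_addresses(txs))
```

This is why mixing matters: once one address in a cluster is tied to a real identity (say, through an exchange account), the whole cluster’s history is exposed.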

Some have contended that the federal government may issue its own cryptocurrency, similar to Bitcoin, which would require users to verify their real-world identities.[7] But a move like this may have no effect on the popularity of Bitcoin, which offers its users more anonymity. One thing is certain: Bitcoin is not as anonymous as once believed, and law enforcement is taking notice.



[1] See Bitcoin, How Does Bitcoin Work? (last visited Mar. 14, 2016).

[2] See John Bohannon, Why Criminals Can’t Hide Behind Bitcoin, Science (Mar. 9, 2016).

[3] See Elliot Maras, How Bitcoin Technology Helps Law Enforcement Catch Criminals, CCN.LA (Mar. 10, 2016).

[4] See id.

[5] See Bohannon, supra note 2.

[6] See Maras, supra note 3.

[7] See Bohannon, supra note 2 (statement from Bill Gleim, head of machine learning at Coinalytics).



