Richmond Journal of Law and Technology

The first exclusively online law review.

Month: September 2016

Personal Genomics for Profit: Who Owns Your Genes?


By: Andrew Toney,

The personal genomics industry has experienced rapid growth since 2006, when influential genome information service companies opened web-based exchange points to their customers.[1] Big players in the personal genomics industry, including 23andMe, deCODEme, and Navigenics, offer customers a glimpse into their health and ancestry. Customers need only submit a saliva sample for testing and, voilà, they uncover DNA qualities that explain their quirks, their family’s origins, and their susceptibility to certain diseases. These companies target consumers’ intellectual curiosity by promising an answer to life’s greatest question: Who am I? However, the DNA collection process, along with the accuracy of the findings, raises many pertinent legal questions: Who owns the rights to this data? Can third parties view this information, or is consumer data kept safely with the company? Just how accurate are these findings?

Researchers on the subject have developed a working definition of “genetic privacy,” concluding that the term encompasses four aspects: 1) informational privacy, which relates to access to personal information; 2) physical privacy, which relates to access to persons and personal spaces; 3) decisional privacy, which relates to governmental or other third-party interference with personal choices; and 4) proprietary privacy, which relates to ownership interests in the human body.[2]

Determining the rights of ownership over genetic information has been a hotly contested issue due to the potential utility of such information in other industries. Companies make individual findings by comparing a customer’s DNA sample against an ever-expanding online genotype/phenotype database. In 2008, the federal government passed the Genetic Information Nondiscrimination Act (“GINA”) in response to increasing concerns about third-party access to these massive databases of genetic information. GINA is designed to protect Americans against discrimination based on their genetic information with regard to employment or the receipt of health insurance.[3] Under GINA, an individual’s genetic information is categorized as a confidential medical record, which prevents employers from disclosing such information to third parties.[4]

What does this mean for consumers? On the one hand, consumers may continue to choose personal genomics because of GINA’s protections against personal data distribution. Likewise, GINA may encourage an employee to disclose genetic information to an employer in order to supplement existing medical records. However, rapid growth in this industry, coupled with major technological advancements, suggests that customers may be storing data on company servers that GINA is not designed to protect.[5] As company databases continue to grow with newly added customer data, they become a juicier target for cyber attacks. Likewise, companies in the industry typically include provisions within their terms of service authorizing them to share information with affiliated entities.[6] Such provisions make clear that consumers do not have exclusive rights to their own genetic information, nor is there any assurance that the information will remain within a company’s database. If the company collecting consumer data also has rights of ownership over that data, then it is free to treat the data as it would any other commodity, including selling or transferring it to third parties without consumer consent.[7]

Another concern with the personal genomics industry is whether consumers are really getting any substantial data in return for their DNA samples. Specifically within the realm of health data, many genomics companies market their products toward consumers concerned with hereditary health issues. This raises an obvious concern regarding the accuracy of a company’s findings. In 2013, the Food and Drug Administration issued a cease-and-desist letter to 23andMe, ordering the company to stop marketing its product as a means of analyzing consumer health data.[8] The FDA reasoned that alarming health findings could induce customers to seek unnecessary medical procedures or treatments.[9] Likewise, some state health departments have taken stances against customer health data collection in the wake of industry growth.[10] Growing concern at the state and federal levels points back to the original problem behind personal genomics: the protection of individual data from unwanted distribution.

What the preceding information tells us is that personal genomics is opening a new avenue for legal conflicts in the realm of individual privacy. Personal genomics, as an industry, is still a relatively new concept. The unique services offered by this industry have piqued the interest of American consumers and have raised valid personal privacy concerns along the way. Indeed, the industry will no doubt continue experiencing growing pains as a result of such concerns. As databases continue to expand and companies begin to offer more accurate findings, we may see personal genomics play a larger role in other industries, such as healthcare or family planning. Whether that will quell the unrest surrounding the storage and trading of genetic data remains to be seen. Until then, take solace in the fact that you, your genes, and your personal genomics service provider are contributing to one of technology’s newest frontiers.


[1] D. Gurwitz & Y. Bregman-Eschet, Personal Genomics Services: Whose Genomes?, Eur. J. Hum. Genetics (Mar. 4, 2009).

[2] See Anita L. Allen, Genetic Privacy Emerging Concepts and Values, Genetic Secrets: Protecting Privacy And Confidentiality In The Genetic Era 31, 33-34 (Mark Rothstein ed., 1997).

[3] 42 U.S.C. § 2000ff (2016).

[4] See id.

[5] See Barbara Prainsack, What are the Stakes? Genetic Nondiscrimination Legislation and Personal Genomics, ResearchGate (Apr. 2, 2015),

[6] See 23andMe Privacy Highlights, (last visited Sept. 18, 2016) (“We may share some or all of your information with other companies under common ownership or control of 23andMe, which may include our subsidiaries, our corporate parent, or any other subsidiaries . . .”). It is worth mentioning here that Google is a major investor in 23andMe, investing $3.9 million in 2007, when the company was still in its second year.

[7] See Gurwitz, supra note 1.

[8] R. Green and N. Farahany, Regulation: The FDA is Overcautious on Consumer Genomics, Nature (Jan. 15, 2014),

[9] See id.

[10] Alexis Madrigal, 23andMe to California: We’re not Ceasing or Desisting, Wired (June 24, 2008),


Sick Pics: Legal Questions Raised by Patients Sending Nude Images to Doctors for Diagnosis


By: Nick Mirra,

Millennials have already infiltrated the workforce in several of the nation’s most time-honored professions. As 2017 draws near, more of these digital natives are earning their ranks among these established fields. For example, the average age of matriculating medical students for the 2015–16 year was 24, which means the average medical student is a millennial.[1] What are the implications of this generation beginning to take the reins of the medical profession? One prominent consequence is that doctors are becoming much more technologically savvy, as medical techniques, procedures, and protocols evolve with the incoming influx of millennial medical students. As with any advancement in technology, new and uncharted legal questions arise almost as quickly as the technology itself springs to life.

Telemedicine has grown increasingly popular over the last several years.[2] Patients are able to get quality medical attention from the comfort of their homes or offices.[3] Waiting lines are minimized, patients do not have to arrange for transportation to the doctor’s office, and relative costs are decreased for both patient and provider.[4] Any communications that occur over the secure telemedicine program are protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).[5] There are many benefits to this type of treatment, and they tend to outweigh the drawbacks in the eyes of a majority of doctors.[6]

In a conscious effort to be more connected with the younger generation, doctors are charting avenues to interact with their patients in new and innovative ways, beyond the scope of conventional telemedicine. One such advancement that falls outside the protections of the telemedicine forum is that some doctors are accepting pictures of their patients’ ailments over text message and email for medical assessment. As this practice has emerged, the potential for legal mishap has closely followed. It didn’t take long before patients began sending their doctors pictures of their genitalia for the diagnosis of a myriad of symptoms.[7] According to one doctor, a majority of these patients obtain their doctor’s consent before sending the pictures, but some do not.[8] One potential benefit of this type of doctor-patient interaction is that patients often feel less embarrassed than they would if they had to disrobe in front of a doctor in person.

This emerging trend is crossing into a new plane of legality that has never been addressed before. What happens when the patient who is texting a picture of their genitals to their doctor is a minor?

Receipt of child pornography violates 18 U.S.C. § 2252, which states in pertinent part that it is a crime for any person to knowingly receive child pornography by means of interstate commerce.[9] The statute continues by explaining that receipt by computer satisfies the interstate commerce requirement.[10] Further, “sexually explicit conduct” is defined in part as the “lascivious exhibition of genitals… of any person.”[11] Under a plain-meaning interpretation of the statute, if a doctor consented to receive a picture of a child’s genitals for the purpose of diagnosis, the image could be considered child pornography.

A further confounding scenario arises when a doctor receives the images without having consented to their receipt. Would the doctor have knowingly received the pictures? In regard to child pornography, the Eleventh Circuit has held that “a person ‘knowingly receives’ something when he… takes in that thing through the mind or the senses.”[12] The court went on to state that a person does not have to save images to a hard drive in order to receive or possess child pornography.[13]

Technology has advanced the medical profession far beyond what could have been imagined even half a century ago. As progress has been made, new liabilities have been imposed. At present, the scope of telemedicine is still being established. What is the legality of minors sharing explicit images of themselves with their doctors via unsecured channels such as text message or email? This emerging phenomenon continues to generate a host of questions regarding the legality of such exchanges. Until the issue is taken to court, or until legislatures respond, there will continue to be gaps in the law, and doctors need to be extremely cautious.


[1] See Age of Applicants to U.S. Medical Schools at Anticipated Matriculation, Association of American Medical Colleges, tbl.A-6, data/factstablea6.pdf (last visited Sept. 20, 2016).

[2] See AIHM Survey of Healthcare Practitioners Shows That Telemedicine Technology Is Ahead of the Current State Medical Board Guidelines, CIO Today, storyid=030000IWF6ZO (last visited Sept. 20, 2016).

[3] See Jessica Harper, Pros and Cons of Telemedicine for Today’s Workers, U.S. News, (last visited Sept. 20, 2016).

[4] See id.

[5] See Amanda Holpuch, Sexting for your Health, The Guardian, society/2016/apr/07/patients-texting-doctors-genitalia-photos-ethics-law (last visited Sept. 20, 2016).

[6] See AIHM Survey, supra note 2.

[7] See Holpuch, supra note 5.

[8] See id.

[9] See 18 U.S.C. § 2252(a)(2)(A).

[10] See id.

[11] 18 U.S.C. § 2256(2)(v).

[12] United States v. Woods, 684 F.3d 1045, 1057-58 (11th Cir. 2012).

[13] See id.



Telemedicine Is Set to Expand, But State Licensure Laws Could Limit Growth


By: Ryan Martin,

As recently reported, the global telemedicine industry is expected to grow to $57.92 billion by the year 2020.[1] While that is still a small share of the total health care industry, it represents a compound annual growth rate of 17.85%, signaling that telemedicine services are here to stay.[2]

Telemedicine, also known as telehealth, aims to provide medical services via electronic communications.[3] Often, these services can help provide medical care in rural areas where access to physicians is limited.[4] In a typical visit, a patient will “chat” with a physician through a webcam service, then be advised on a treatment or referred for further treatment.[5] While the concept of telemedicine has been around as long as the telephone, it has seen a dramatic takeoff with the rise of mobile and video technology.[6] The federal government is now showing an interest in expanding access to these services by providing grants to community hospitals for use in rural areas.[7]

However, as the industry continues to grow, several legal and regulatory issues will need to be addressed to ensure that healthcare providers can offer telemedicine services in a cost-effective manner. Among them are restrictions on reimbursement through Medicaid and Medicare, privacy concerns under HIPAA, and the threat of malpractice suits resulting from the inability to conduct a full physical examination of the patient.[8] Perhaps the most daunting hurdle, specifically in the United States, is individual state licensing restrictions.[9]

States are responsible for regulating and monitoring healthcare professionals within their borders and generally require full licensure to provide services to patients in the state.[10] For example, a physician practicing internal medicine in California would need to be fully licensed by the state of Florida in order to provide a telemedicine consultation to a patient located in Florida. While it is understandable that a state would want to protect its citizens from unlicensed physicians, telemedicine transcends geographic boundaries; imposing heavy licensing restrictions frustrates its purpose of providing common, low-risk services where the alternative is often no healthcare service at all.

A few states have amended their laws to allow easier access to telemedicine. Several states allow physicians from bordering states to provide medical services.[11] Ten states have taken steps to establish special telehealth licenses that allow a physician to practice through telemedicine services without being physically present in the state.[12] This helps expedite the physician’s licensure and shortens what is often a lengthy review of her application.[13] However, no state has allowed for direct reciprocity.[14]

The American Telemedicine Association publishes an annual report card grading each state’s licensure policies from A to F, based on the reasonableness of its telemedicine practice standards, licensure requirements, and policy on Internet prescribing.[15] In its latest report, no “A’s” were issued, indicating that there is still work to be done if states want to expand telemedicine services.[16]

There is currently one potential resolution to the licensing problem. Seventeen states have signed a Federation of State Medical Boards (FSMB) compact that provides an expedited licensing process for out-of-state practitioners.[17] However, the compact does not create federal licensure law, and each individual state has to affirmatively adopt it.[18] Because of this, the FSMB compact likely falls short of being a sufficiently comprehensive solution.

The future appears positive for telemedicine services, but if nothing is done to change the current regulations, providers may be stuck navigating the often-complex state rules that limit the availability of such services. Should the federal government truly desire to increase healthcare accessibility in rural areas through telemedicine, more will need to be done to alter state licensing regulations.


[1] See Telemedicine Market to Reach $57.92 Billion by 2020, Thanks to Evolving Reimbursement Policies; Reveals Market Data Forecast Analysis, PR Newswire (Sept. 14, 2016),–5792-billion-by-2020-thanks-to-evolving-reimbursement-policies-reveals-market-data-forecast-analysis-593396911.html.

[2] See id.

[3] See What is Telemedicine, American Telemedicine Association, (last visited Sept. 16, 2016).

[4] See Jonah Comstock, How telemedicine, remote patient monitoring help extend care in Mississippi, MobiHealthNews, (Sept. 13, 2016),

[5] See What is Telemedicine, supra note 3.

[6] See id.

[7] See Joseph Goedert, Federal grants give rural telehealth programs a boost, HealthData Management, (Aug. 16, 2016),

[8] See John Donohue, Telemedicine: What the future holds, Healthcare IT News (Sept. 6, 2016, 11:06 AM); HIPAA Guidelines on Telemedicine, HIPAA Journal (last visited Sept. 16, 2016); Neil Chesanow, Do Virtual Patient Visits Increase Your Risk of Being Sued?, Medscape (Oct. 22, 2014).

[9] See Kristi VanderLaan Kung, Recent Relaxation of State-level Challenges to Expansion of Telemedicine but Barriers Remain, The National Law Review, (Aug. 18, 2016),

[10] See id.

[11] See Latoya Thomas & Gary Capistrant, State Telemedicine Gaps Analysis, AM. TELEMEDICINE ASS’N 4 (Jan. 2016),–physician-practice-standards-licensure.pdf.

[12] See id.

[13] See id.

[14] See id.

[15] See id.

[16] See Thomas, supra note 11.

[17] See Kung, supra note 9.

[18] See id.


Self-Driving Vehicles: Legal Ramifications Surrounding the Future of the Auto Industry


By: Will MacIlwaine,

Over the past few years, auto manufacturers have been experimenting with autopilot features that, in certain situations, essentially allow a vehicle to drive itself. One such vehicle is the Tesla Model S. A recent software update for the Model S allows it to “use its unique combination of cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic.”[1] Further, this Tesla model has the ability to search for a parking space once the driver has arrived at his or her destination, and will even parallel park the vehicle on its own.[2]

The Tesla must obtain certain data before it can enter into autopilot mode.[3] Among other things, there must be clear lane lines, a consistent travel speed, and the car must be able to sense other vehicles around it.[4] Tesla points out on its website that, although the vehicle does most of the driving for the consumer, drivers must still keep their hands on the steering wheel.[5] Even so, there have been reports of Tesla drivers taking pictures of themselves with their hands off the steering wheel, drinking coffee, reading the paper, or even riding on the roof, while the car does the driving.[6]

Tesla claims that the autopilot feature can make dealing with traffic easier, safer, and more pleasant.[7] Some drivers of Tesla vehicles with autopilot features, and drivers of similar vehicles, would beg to differ.

This past week, reports surfaced of an accident involving a Tesla vehicle that occurred in January of 2016.[8] The accident took place in China, when the Tesla, thought to be in autopilot mode, failed to brake and slammed into a road sweeper, killing the driver.[9]

Later, in May of this year, a man was killed when the autopilot feature failed to recognize the white side of a tractor-trailer against the bright sky, and the brakes were not applied.[10]

Legally, what’s at stake for Tesla in introducing this innovative feature? The National Highway Traffic Safety Administration (“NHTSA”) classifies car automation by levels ranging from one to four.[11] Level one is akin to a standard car, while level four corresponds to a fully autonomous vehicle.[12] According to attorney Gabriel Weiner, Tesla’s autopilot feature is similar to a level two classification.[13] Drivers could be fooled by the term “autopilot” and falsely believe that they do not have to be fully alert when the feature is being used.[14] If that is the case, and Tesla fails to warn customers to always remain alert, the company could be liable if the autopilot feature causes an accident. On the other hand, Tesla’s owner’s manuals state that the driver is still responsible for controlling the vehicle, and a message reminding the user to keep his or her hands on the wheel and to be prepared to take over at any time is displayed on the vehicle’s center screen when the feature is in use.[15]

If a user sees these messages and decides to ignore them, it would seem that Tesla could escape liability, as this could be seen as an implied assumption of risk by the user. Under that theory, if the vehicle user knows and understands the danger that the autopilot feature presents and still voluntarily chooses to use it, there would likely be no liability on the part of Tesla.

In response to a claim that Tesla acted negligently in selling a car with the autopilot feature, Tesla could also make a contributory negligence argument. By failing to keep his or her hands on the steering wheel, or by not paying attention to the road, the driver could be contributorily negligent if an accident were to occur.

There are certainly other questions surrounding the autopilot feature. For one, who is legally responsible for a crash if the car is driving itself?[16] Tesla? The owner of the car? What implications might autopilot malfunctions have for an owner’s driver’s license? Will an owner get points on his license, or worse, lose it, if the autopilot feature causes a crash?

The technological breakthrough that the autopilot feature offers is obviously not perfect. It may take years to perfect this advancement in the automotive industry. That being said, the question remains: will consumers continue to use this compelling feature, potentially sacrificing safety for convenience?


[1] Model S Software Version 7.0, (last visited Sept. 17, 2016).

[2] See id.

[3] See Ryan Bradley, Tesla Autopilot, MIT Technology Review, (last visited Sept. 17, 2016).

[4] See id.

[5] See Model S Software Version, supra note 1.

[6] See Bradley, supra note 3.

[7] See Model S Software Version, supra note 1.

[8] See Neal E. Boudette, Autopilot Cited in Death of Chinese Tesla Driver, New York Times, Sept. 14, 2016,

[9] See id.

[10] See Bill Vlasic & Neal E. Boudette, Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says, New York Times, June 30, 2016,

[11] See William Turton, Tesla’s Autopilot Driving Mode is a Legal Nightmare, Gizmodo (July 23, 2016),

[12] See id.

[13] See id.

[14] See id.

[15] See id.

[16] See id.


Is Legal Action the Best Way to Curtail Cyberbullying? Instagram Just Offered Another Option


By: Etahjayne Harris,

Instagram, one of the most popular social networks worldwide, has over 500 million monthly active users as of September 2016.[1] The photo-sharing app enables users to take and upload pictures and videos and share them publicly or privately on the app, as well as on a variety of other social networks, such as Facebook and Twitter. For teens, “Instagram is much more than a medium to share photos on—it’s an extension of their identities.”[2] Instagram co-founder Kevin Systrom has said that the app was created to “make it easy for people to share their lives in a beautiful way.”[3] While Instagram’s intended use and purpose are positive, the app has unfortunately been used as a tool for cyberbullying. Teens may be especially susceptible to cyberbullying due to the significant role social media plays in their lives.

The National Conference of State Legislatures defines cyberbullying as “the willful and repeated use of cell phones, computers, and other electronic communication devices to threaten others.”[4] A recent study by the Cyberbullying Research Center (CRC) found that over 25% of middle school and high school students have been cyberbullied in their lifetime.[5] While cyberbullying can take place on any variety of social media platforms, such as Facebook, Twitter, or Snapchat, it has been argued that cyberbullying on Instagram is especially bad “because it’s a very public platform that people use to post photos of themselves—inviting everyone and anyone to judge their appearances in the comment sections.”[6] For teens whose social media presence is closely tied to their self-identity, the effects of cyberbullying on that identity are particularly worrisome.

Amid these distressing statistics, you may wonder whether Instagram has taken any measures to mitigate its cyberbullying problem and whether there are legal consequences for cyberbullying. On September 12, 2016, Instagram implemented an update that allows its users “to block ‘inappropriate comments’ on their posts and set filters for specific words.”[7] This update gives users more control over what comments get posted on their pictures, beyond simply having the ability to delete unwanted comments or block specific users. In a statement released the same day, Instagram co-founder Kevin Systrom said that the app is working toward “keeping Instagram a safe place for self-expression.” This update gives users the chance to push back against cyberbullies and inappropriate comments generally. But while the update gives users more control, what are the legal remedies for the victims of cyberbullying?

Nearly every state has enacted some form of a student cyberbullying statute.[8] To be considered cyberbullying, information technology must be used to “deliberately threaten, harass, or intimidate another person.”[9] Under many state statutes, public schools may be required to specifically address and correct behavior that may be considered cyberbullying through their policies.[10] The swift spread of social networking apps like Instagram and Facebook has been met with an increase in cyberbullying litigation in both the federal and state courts.[11] A frequent issue in applying these student cyberbullying laws is determining whether a school is responsible for protecting students from off-campus online harassment.[12] There is no clear answer as of today.

In spite of these student cyberbullying laws, finding a legal remedy for cyberbullying is complicated by the fact that cyberbullying encompasses such a wide range of behavior. A claim of cyberbullying is not necessarily viable simply because the victim was offended by what someone else commented on, for example, their Instagram post; a claim is generally viable only if the alleged conduct violated a criminal statute, violated a state student cyberbullying law, or constituted a traditional civil tort.[13] Finding legal relief is further complicated by the fact that a cyberbully can post or comment anonymously on social media apps like Instagram, making it difficult to trace the origin of the harassing comments. So while there are legal remedies in place for teen victims of cyberbullying, those remedies may be difficult to obtain. For now, it appears that the most feasible way to combat cyberbullying on Instagram is to stop it in its tracks by using the new comment filter option.


[1] See Instagram, Number of monthly active Instagram users from January 2013 to June 2016 (in millions), Statista (last visited Sept. 13, 2016).

[2] Nina Godlewski, If you have over 25 photos on Instagram, you’re no longer cool, Tech Insider (May 26, 2016).

[3] Michael Noer, A Conversation with Instagram’s Co-Founder Kevin Systrom, Forbes (Apr. 9, 2012),

[4] Cyberbullying, National Conference of State Legislatures (Dec. 14, 2010) (last visited Sept. 13, 2016).

[5] Sameer Hinduja and Justin Patchin, Cyberbullying Victimization (Feb. 2015),

[6] Elise Moreau, What is a Troll, and What is Internet Trolling, About Tech (Feb. 25, 2016),

[7] Brett Molina, Instagram Update Lets Users Filter Comments, USA Today (Sept. 12, 2016),

[8] See Gary D. Nissenbaum and Laura J. Magedoff, Potential Legal Approaches to a Cyberbullying Case, The Young Lawyer Vol.17, No. 9 (Aug. 2013),

[9] Id.

[10] See id.

[11] See id.

[12] See id.

[13] See id.


Apple’s Latest Lawsuit Arises From the iPhone Defect “Touch Disease”


By: Will MacIlwaine,

On September 7, Apple held its annual fall event. The event featured the introduction of the Apple Watch Series 2, as well as the iPhone 7, the first iPhone not to include a headphone jack.

While the fall event might suggest that things are continuously heading in the right direction for the innovative company, a defect associated with the iPhone 6 and 6 Plus devices suggests otherwise. In the United States District Court for the Northern District of California, three iPhone users have filed a class-action lawsuit against Apple for the defect that has been coined “Touch Disease.”[1]

Inside the affected iPhone models are chips that allow a user’s finger and the screen to interact.[2] For some users, these chips are not correctly secured to the logic board of the phone, and fail as a result of the consumer’s normal use of the device.[3] The plaintiffs in this case claim that Apple concealed this defect, which “causes the touchscreens on the iPhones to become unresponsive and fail for their essential purpose as smartphones.”[4] The defect also causes a gray flickering bar to appear at the top of the device’s screen.[5]

All three plaintiffs, Todd Cleary, Jun Bai, and Thomas Davidson, have experienced the “Touch Disease” issue.[6] Each was told that he could pay an additional fee of more than $300 for a replacement phone.[7]

The plaintiffs note that the previous iPhone 5s design included a metal shield to protect the device’s logic board, allowing the phone to better accommodate reasonable use by the consumer.[8] Additionally, the iPhone 5c design used an “underfill” mechanism to reinforce the chips at issue and protect them from normal wear and tear.[9] According to the plaintiffs, the iPhone 6 and 6 Plus models carry neither of these protective features.[10]

The plaintiffs claim that Apple has done nothing to remedy the defect, even though it has known of the issue for some time.[11] Apple gained this knowledge, according to the plaintiffs, through records of customer complaints, repair records, and various customer claims.[12] The plaintiffs state that Apple’s inaction is a result of “unfair, deceptive and/or fraudulent business practices.”[13] They go so far as to argue that, had they not relied on Apple’s representations regarding the quality of the product, they would have paid less for the iPhones or would not have purchased them at all.[14]

In their complaint, the plaintiffs first argue that Apple engaged in unfair and deceptive acts in violation of the California Consumers Legal Remedies Act (“CLRA”) by knowingly and intentionally concealing from customers the fact that the iPhones were faulty.[15] More generally, the plaintiffs argue that Apple misrepresented the product, claiming that the phones had characteristics, uses, or benefits that they did not have.[16] They also argue that Apple misrepresented the quality of the product and advertised the phones with the intent not to sell them as advertised.[17] Because Apple is in a better position to know the true state of the touchscreen defect, the plaintiffs contend that Apple owed customers a duty to disclose the issue.[18]

Further, the plaintiffs argue that Apple acted fraudulently because they were deceived into purchasing phones they would not have bought had they known about the defects.[19] Their other claims for relief include negligent misrepresentation, unjust enrichment, breach of implied warranty, and violations of several warranty acts.[20]

The “Touch Disease” only became an issue about six months ago.[21] Further, many users are just now starting to experience this problem on the iPhone 6 and 6 Plus, two years after the release of these models.[22] It’s difficult to believe that Apple knew that this was going to be an issue when it originally advertised and eventually released the phones in September of 2014.[23] Apple, while a powerhouse in the technology industry, cannot predict the future, and it’s certainly plausible that the company had no idea that the affected phones would have this kind of issue more than a year and a half after release. If that’s the case, there does not seem to be any “unfair” or “deceptive” action taken by Apple.[24]

On the other hand, if Apple had reasonable knowledge that the defect was affecting a majority of the iPhone models it was continuing to produce and sell, then ignoring the issue while keeping those models on the market could be seen as deceptive and as misrepresenting the quality of the product.

In reality, though, Apple’s general warranty on its iPhone line is a one-year limited warranty.[25] After that, it is the consumer’s responsibility to pay for necessary repairs. All three plaintiffs in this case had been in possession of their phones for at least a year and a half before beginning to experience this issue.[26]

The analysis would be more complicated if a user experiencing this issue were still within the one-year warranty period, because Apple’s warranties do not cover “defects caused by normal wear and tear or otherwise due to the normal aging of the Apple Product.”[27] The main issue would then be whether the defect really resulted from normal wear and tear or from a design defect, as the plaintiffs claim.[28] If the plaintiffs were still within the warranty period, their claims might have more merit; but since the plaintiffs and many others experiencing this issue only started to do so around the two-year mark of owning their phones,[29] that question does not seem to be the main subject of potential litigation. In this case, it does not seem likely that the court will side with the customers bringing these claims.

It is important to reiterate that this is a class action: the plaintiffs seek to represent a nationwide class of iPhone 6 and iPhone 6 Plus users. That being said, this could become a serious problem for Apple if anything actually comes of these claims. Be on the lookout for a response from Apple in the coming weeks.


[1] See Don Reisinger, Apple Is Being Sued Over the iPhone ‘Touch Disease’, Fortune, (Aug. 30, 2016), (last visited Sept. 13, 2016).

[2] See id.

[3] See Class Action Complaint ¶ 27, Davidson v. Apple, Inc., No. 5:16-cv-4942, (N.D. Cal. filed Aug. 27, 2016).

[4] See id. ¶ 1.

[5] See id. ¶ 21.

[6] See id. ¶¶ 8-10.

[7] See id.

[8] See Class Action Complaint, supra note 3, ¶ 28.

[9] See id.

[10] See id. ¶ 30.

[11] See id. ¶ 2.

[12] See id. ¶ 35.

[13] See Class Action Complaint, supra note 3, ¶ 4.

[14] See id. ¶ 38.

[15] See id. ¶ 57; Cal. Civ. Code § 1770 (a)(2),(5),(7),(9) (2016).

[16] See Class Action Complaint, supra note 3, ¶ 57; Cal. Civ. Code § 1770 (a)(5) (2016).

[17] See Class Action Complaint, supra note 3, ¶ 57; Cal. Civ. Code § 1770 (a)(9) (2016).

[18] See Class Action Complaint, supra note 3, ¶ 60.

[19] See id. ¶ 82.

[20] See id. ¶¶ 15-19.

[21] John Matarese, Apple iPhone ‘Disease’ Making Touch Screens Useless, WCPO Cincinnati (Sept. 12, 2016),

[22] See id.

[23] See id.

[24] See Class Action Complaint, supra note 3, ¶ 4.

[25] See Your Hardware Warranty, (last visited Sept. 14, 2016).

[26] See Class Action Complaint, supra note 3, ¶¶ 8-10.

[27] See Your Hardware Warranty, supra note 25.

[28] See Class Action Complaint, supra note 3, ¶ 30.

[29] See Matarese, supra note 21.


Identity Misappropriation on Dating Apps: Did You Right Swipe the Right Person?


By: Tatum Williams,

It is easy to confirm a Tinder profile’s authenticity when the person is someone you know. When this happens, and it often does considering the app boasts an estimated 50 million users,[1] there is assurance beneath the initial awkwardness of swiping across a familiar face: the assurance of knowing that the person behind that profile is who they say they are.

With the slogan, “It’s like real life but better,” Tinder’s innovative design allows users to connect with nearby people using location-based software and basic Facebook information, such as mutual interests and friends.[2] Tinder and similar apps, such as Hinge, require this social authentication primarily to put users’ minds at ease over the likelihood of encountering a fake profile.[3] But such is not always the case. Roughly 54% of those who use dating apps and online dating felt that some users seriously misrepresented themselves in their profiles.[4] Evidence of this is the modern phenomenon of “catfishing,” the intentional deception of another through the use of a fake profile, typically in hopes of achieving a romantic connection.[5] Catfishing has exploited the rise of social media and dating apps, which have made it easier than ever to fabricate a dating profile.[6] Needless to say, the prevalence of dating apps has facilitated deceptive behavior given the availability of anonymous communications.[7]

More often than not, misrepresentations of this nature are harmless. While some people may exaggerate or lie about their preferred television shows or movies, others tell more strategic lies, often about their age, weight, height, personality traits, interests, monetary status, career aspirations, and even past relationships.[8] Most people are familiar with these kinds of misrepresentations and would agree that such lies, though wrong, are not worthy of serious legal ramifications.[9] Or are they?

This raises the question of whether the legal considerations and ramifications associated with dating apps have evolved at the same burgeoning rate as the apps themselves. Unfortunately, the answer is no; dating app users have a low likelihood of success in holding a dating app liable for any harm they experience from interactions with other app users.[10] Most dating apps have combated these potential claims by disclaiming all warranties and representations with regard to other users in their terms of use agreements.[11] And the immunity does not stop there: the Communications Decency Act protects apps from liability based on content posted by users, such as the aforementioned catfishing scams or other misrepresentations.[12] Under the Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[13]

Tinder is relatively new (it was only created in 2012),[14] so few studies have examined the number of fake profiles currently in existence. That said, the rapid growth of the user base preserves the potential for misrepresentation and catfishing. Despite the protections that dating apps and other online dating services receive, several states have made lying on these platforms a criminal offense.[15] This would seem to suggest that the trend of misrepresentation and catfishing going unpunished is diminishing.[16] But until cyberspace governance evolves to the level that dating apps have, it appears that users’ best defense is to swipe wisely.

[1] See Alexis Kleinman, The Typical Tinder User Spends 77 Minutes Tinding Every Day, Huffington Post (Oct. 31, 2014),

[2] See Jessica L. James, Mobile Dating in the Digital Age: Computer-Mediated Communication and Relationship Building on Tinder (May 2015) (unpublished Master of Arts thesis, Texas State University) (on file with author).

[3] See Lindsay Hildebrant, Media and Self Representative Perceptions: Deception in Online Dating (May 2015) (unpublished undergraduate Honors thesis, Pace University) (on file with Pforzheimer Honors College, Pace University).

[4] See James, supra note 2.

[5] See Krystal D’Costa, Catfishing: The Truth About Deception Online, Scientific American Blog (Apr. 25, 2014),

[6] See Keith Wagstaff, Hook, Line and Tinder: Scammers Love Dating Apps, NBC News (Apr. 11, 2014, 5:36 PM),

[7] See Geelan Fahimy, Liable for Your Lies: Misrepresentation Law as a Mechanism for Regulating Behavior on Social Networking Sites, 39 Pepp. L. Rev. 2 (2012).

[8] See id.

[9] See id.

[10] See Greg Mitchell, Digital Dating: Legal Matters to Consider Before Swiping Right, Missouri Lawyers Help Blog (Feb. 12, 2016),

[11] See id.

[12] Doe v. MySpace, Inc., 528 F.3d 413, 416 (5th Cir. 2008).

[13] See Fahimy, supra note 7.

[14] See James, supra note 2.

[15] See Fahimy, supra note 7.

[16] See id.

