
The PTO is Moving Forward with its Inquiry on the Applicability of AI to Intellectual Property (and Trademarks)

By: Joey Rugari


Introduction

I recently discussed the possibility of using artificial intelligence (“AI”) to search the trademark register, both to identify suspicious marks that should be challenged for cancellation and to improve the ability to establish the distinctiveness of new image marks during the application process.[1] That piece examined the “technologization” of trademark (the implementation of modern computer technologies) in light of the need for global harmonization of trademark systems, and touched on the potential use of AI in the process.[2] Its primary thrust was to advocate for technology-based solutions like AI to resolve some of trademark law’s current problems.[3] Unsurprisingly, this is exactly the sort of discussion and deliberation that has been going on at the USPTO regarding systematic technological change.[4] In other words, the discussion is moving forward.[5]

The USPTO’s Movement Towards AI Solutions

The USPTO has not been idle in establishing its footing on the potential use of AI. Late last year, the PTO sent out a notice seeking comment on the applicability of AI to intellectual property.[6] The notice made the Office’s priorities clear: of the issues it identified around the use or implementation of AI, very few dealt specifically with trademarks.[7] Of the twelve questions asked in the trademark/copyright/trade secret notice, only two addressed trademark specifically.[8] A plurality of the questions (six) dealt with copyright, and most of the rest concerned general issues facing the Office’s use of technology.[9]

That said, the trademark-specific questions are the ones that matter for determining how AI might be used in trademark registration. Those questions were: “7. Would the use of AI in trademark searching impact the registrability of trademarks? If so, how? 8. How, if at all, does AI impact trademark law? Is the existing statutory language in the Lanham Act adequate to address the use of AI in the marketplace?”[10] In short, the PTO wants to know whether registrability will become easier or harder as a result of AI, and whether the Lanham Act (the primary statute governing federal trademark registration and enforcement) would need to be amended as a result of applying AI.

To the first question, it is worth noting that registering a trademark on the federal trademark register requires that the mark be used in commerce.[11] If it is a goods mark, the goods must have been sold at least once.[12] If it is a service mark, the service must have been rendered at least once.[13] The primary effect that AI technologies would have on the trademark search process is most likely in the determination of distinctiveness. AI would not make a mark more or less distinctive on the Abercrombie scale,[14] for instance, but it could affect the rate at which trademark examiners can evaluate whether there are senior users, chiefly by reducing the workload through recognition algorithms that perform point matching on image marks, much like proposals to use AI to evaluate potential prior use more quickly.[15] As the quality of AI learning algorithms improves, the effect would be to make the trademark registration process more efficient.
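To make that concrete, a hypothetical examiner-assist tool might reduce each image mark to a simple perceptual hash and flag registered marks whose hashes are close, leaving the final judgment to a human examiner. The sketch below is purely illustrative: the hash size, the distance threshold, and the function names are my own assumptions, not a description of any system the PTO uses or has proposed.

# Illustrative sketch of perceptual-hash matching for image marks (not a PTO system).
# The hash size and distance threshold are arbitrary assumptions for demonstration.
from PIL import Image  # pip install Pillow


def average_hash(image_path, size=8):
    """Reduce an image to a 64-bit average hash: grayscale, shrink to 8x8, threshold at the mean."""
    img = Image.open(image_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def flag_similar_marks(applied_mark_path, registered_marks, threshold=10):
    """registered_marks maps a registration number to an image path; return close matches, closest first."""
    target = average_hash(applied_mark_path)
    hits = []
    for reg_no, image_path in registered_marks.items():
        distance = hamming_distance(target, average_hash(image_path))
        if distance <= threshold:
            hits.append((reg_no, distance))
    return sorted(hits, key=lambda hit: hit[1])

Even in this toy form, the point is that the algorithm only narrows the field of potentially conflicting marks; it does not decide registrability, which is why the appeal and quality-assurance concerns discussed below still matter.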

To the second question, AI as it could be used in determining registrability would likely not require any substantive changes to the Lanham Act. Because the AI does not affect the registrability requirements, and because it would be used only in the limited context of establishing that there are no senior users, it would not change the substantive requirements for registration.[16] It is worth noting, however, that there may need to be a mechanism for challenging registrability rulings (particularly regarding senior users) that rely solely on AI matching; the process would need to accommodate appeals on the grounds of improper matching by the AI. A tiered process, starting with an AI-focused search and supplemented by human examiner review and quality assurance, might be necessary to fully capture the needs of trademark registration here. In other words, it might be necessary to amend the Lanham Act if the concerns outlined above prove to be more than mere paranoia.

Concerns About the Use of AI in these Fields

The primary concern – alluded to above – with the use of AI is that the learning algorithms may be insufficient to do the work required effectively.[17] If humans are necessary a significant portion of the time just to ensure accuracy, then AI is hardly the “silver bullet,” or even a “powerful tool,” for effective administration of the system.[18] Another concern is the adequacy of the data available for training the machine learning systems (what we typically understand as “artificial intelligence”). This issue persists across all fields of intellectual property,[19] and trademark has no particular features that would distinguish it in that regard.

The PTO Reaches out for Comments

The issue raised above has been echoed by several groups that responded to the notice for comment.[20] The most common concerns were that AI technologies would need significant training data[21] and that (at least in the field of copyright) procuring such data raises certain legal questions, such as whether fair use protections apply.[22] However, those same comments conclude that existing statutory language and case law ultimately address the issue (again, in the copyright context).[23]

The other issue the industry commented on was the protectability of AI-generated works under copyright.[24] While some of this feedback is transferrable,[25] the most valuable point (among the copyright issues) that carries over to trademark is the question of how databases and data sets should be protected.[26] The conclusion is that there needs to be some form of compelled third-party access to data sets in order to prevent monopolization and the unfair practices that would result from it.[27] While there was not much comment from industry regarding trademarks (that I found), the copyright issues that do carry over are important to consider. Effective damages, knowing infringement, and monopolization of data sets are issues that would universally (and negatively) affect intellectual property.[28]

The USPTO has taken these comments into account.[29] It is turning the information it has received into a dialogue and developing an online portal where all of the feedback can be centralized.[30] Based on comments by the PTO’s Deputy Director, we can expect its response sometime this spring.[31]

Conclusion

It’s clear that this issue is on the mind of the PTO. It’s clear that they’re looking to work on potentially applying technology solutions to combat efficiency issues where possible. It’s clear that the industry is behind the idea, if carefully applied. It’s clear that the PTO should apply these ideas. The one thing that isn’t clear is exactly how the new standards would be applied, both across IP and in trademark in particular. It is this author’s hope that such solutions are both considered viable and applied carefully.

[1] See Joey Rugari, It’s Hard to Come Up With a Good Title – Or Trademarks. The Technologization of the USPTO’s Filing System Is Tackling The Issue of Those Marks That Shouldn’t Apply. (Maybe Then I Can Think of Something.), JOLT Blog (Sept. 30, 2019), https://jolt.richmond.edu/2019/09/30/its-hard-to-come-up-with-a-good-title-or-trademarks-the-technologization-of-the-usptos-filing-system-is-tackling-the-issue-of-those-marks-that-shouldnt-apply-ma

[2] See id.

[3] See id.

[4] Department of Commerce, Patent and Trademark Office, Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation, 84 Fed. Reg. 58141 (Oct. 30, 2019).

[5] See id.

[6] See id.

[7] See id. at 58142.

[8] See id.

[9] See id.

[10] Id.

[11] See 15 U.S.C. § 1127 (2018).

[12] See id.

[13] See id.

[14] See Udi Cohen, Artificial Intelligence Will Help to Solve the USPTO’s Patent Quality Problem, IPWatchdog (Nov. 23, 2019), https://www.ipwatchdog.com/2019/11/23/artificial-intelligence-will-help-solve-usptos-patent-quality-problem/id=116302/.

[15] See id.

[16] See 15 U.S.C. § 1127 (2018).

[17] Cf. Eileen McDermott, Users Lament PAIR Changes During USPTO Forum, IPWatchdog (Jan. 30, 2020) (“‘Down the chain you’re finding paralegals and assistants spending hours and hours per day to get basic information about patent applications . . . .’”), https://www.ipwatchdog.com/2020/01/30/users-lament-pair-changes-uspto-forum/id=118409/.

[18] Cf. Cohen, supra note 14.

[19] As a matter of course, any information that affects the use of AI and data sets generally will affect all IP, including trademark.

[20] See Caleb Watney, Comment on Intellectual Property Protection for Artificial Intelligence Innovation, R Street (Jan. 13, 2020), https://www.rstreet.org/2020/01/13/comment-on-intellectual-property-protection-for-artificial-intelligence-innovation/.

[21] See id.

[22] See id.

[23] See Stan Adams, Comments On the USPTO’s Intellectual Property Protection for Artificial Intelligence Innovation, Center for Democracy & Technology (Jan. 16, 2020), https://cdt.org/insights/comments-on-the-usptos-intellectual-property-protection-for-artificial-intelligence-innovation/.

[24] See, e.g., Nigel Cory & Daniel Castro, Comments to the U.S. Patent and Trademark Office on the Impact of Artificial Intelligence on Intellectual Property Law and Policy, Information Technology & Innovation Foundation (Jan. 10, 2020), https://itif.org/publications/2020/01/10/comments-us-patent-and-trademark-office-impact-artificial-intelligence.

[25] While copyright and trademark don’t fully overlap, any issue that affects intellectual property will affect both. The underlying rationales behind copyright and trademark may differ but concerns of legality of using AI in those areas still affect both.

[26] See James Love, KEI Comments on Intellectual Property Protection for Artificial Intelligence Innovation, for USPTO Request for Comments, Knowledge Ecology International (Jan. 13, 2020), https://www.keionline.org/32101.

[27] See id.

[28] See generally id.

[29] See Laura Peter, Remarks by Deputy Director Peter at Trust, But Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software, United States Patent and Trademark Office (Feb. 3, 2020), https://www.uspto.gov/about-us/news-updates/remarks-deputy-director-peter-trust-verify-informational-challenges.

[30] See id.

[31] See id.

image source: https://www.pxfuel.com/en/free-photo-eptck

 

Clearview Will Find You

By: Matt Romano


In September of last year, I wrote about law enforcement’s growing use of facial recognition technology and the need for federal regulation on the issue.[1] Last month, Kashmir Hill of the New York Times reported on one facial recognition app in particular that led a privacy professor at Stanford Law to state, “Absent a very strong privacy law, we’re all screwed.”[2] The app is called Clearview AI, and it is now being used by over six hundred law enforcement agencies around the country, ranging from local police to the FBI.[3] Using Clearview AI, law enforcement inputs an image of a suspect or victim, and the image is compared against a database of images to find a match.[4] What separates this app from other facial recognition software used by law enforcement is where the images in the database come from.[5] Unlike other image databases compiled from mugshots or driver’s license photos, Clearview AI scrapes images of people from millions of websites, including Facebook, Twitter, Instagram, and even Venmo.[6] This method of collecting images has allowed the app to create a database that dwarfs all others on the market, with over 3 billion images.[7]
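The reporting describes the workflow only at a high level, but systems of this kind generally convert each face into a numeric “embedding” and return the stored images whose embeddings are closest to the probe photo. The sketch below is a generic nearest-neighbor comparison written under that assumption; it is not Clearview’s code, and the embedding model, similarity cutoff, and function names are hypothetical.

# Generic sketch of embedding-based face search (not Clearview's implementation).
# Assumes some face-recognition model has already turned each image into a fixed-length vector.
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def search(probe_embedding, gallery, cutoff=0.6):
    """gallery maps a source URL to its embedding; return (url, score) pairs above the cutoff, best first."""
    scores = [(url, cosine_similarity(probe_embedding, emb)) for url, emb in gallery.items()]
    return sorted([s for s in scores if s[1] >= cutoff], key=lambda s: s[1], reverse=True)

Because the search simply returns whatever scraped images score above a cutoff, the quality of the results depends entirely on the size of the gallery and the accuracy of the underlying model, both of which sit at the center of the concerns discussed below.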

Facebook and other social media sites prohibit scraping users’ images like this in their terms of service, but Clearview is doing it anyway.[8] In defense of his company’s actions, the app’s creator, Hoan Ton-That, claims that the company has a First Amendment right to access data in the public domain.[9] Since this statement and Hill’s article, Facebook, Google, YouTube, and Twitter have all sent cease-and-desist letters to Clearview demanding that it stop scraping from their sites.[10] So we will probably find out soon whether Ton-That is right. In the meantime, it is safe to assume that if you are on social media, your image is in Clearview’s database. When Hill’s image was run through the app, it returned several results, including photos she had never even seen before.[11]

There is no denying that access to Clearview’s database will help law enforcement identify more suspects and victims. In one instance, the Indiana State Police identified a suspect in a shooting within twenty minutes of experimenting with the app.[12] The shooter did not have a driver’s license or criminal record, so government databases were useless.[13] The app was able to match an image of him to a video on the internet that had his name in the description.[14]

Along with the obvious privacy concerns, the app’s algorithm has never been tested by an independent party such as the National Institute of Standards and Technology, so it is unclear how accurate it is.[15] A researcher at Georgetown University’s Center on Privacy and Technology emphasized that “the larger the database, the larger the risk of misidentification because of the doppelgänger effect.”[16] There is also the question of whether it is legal for law enforcement agencies to use a database like this at all.[17] Clearview’s lawyer sent a memo to prospective clients in August assuring them that law enforcement agencies “do not violate the federal Constitution or relevant existing state biometric and privacy laws when using Clearview for its intended purpose.”[18] A man in Illinois disagrees and has filed a class action lawsuit against the company for violating the Illinois Biometric Information Privacy Act.[19] Maybe all this litigation involving Clearview will encourage the federal government to make regulating law enforcement’s use of facial recognition a priority.

[1] Matt Romano, Lack of Federal Regulations as the Deployment of Facial Recognition Technology Increases Results in Drastic Measures, U. Rich. J. L. & Tech. Blog (Sept. 30, 2019), https://jolt.richmond.edu/2019/09/30/lack-of-federal-regulations-as-the-deployment-of-facial-recognition-technology-increases-results-in-drastic-measures/

[2] Kashmir Hill, The Secretive Company That Might End Privacy as We Know It, N.Y. Times (Jan. 18, 2020), https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

[3] See id.

[4] See id.

[5] See id.

[6] See id.

[7] See id.

[8] See id.

[9] Charlie Wood, Facebook Has Sent Cease-and-Desist Letter to Facial Recognition Start-Up Clearview AI for Scraping Billions of Photos, Business Insider (Feb. 6, 2020), https://www.businessinsider.com/facebook-cease-desist-letter-facial-recognition-cleaview-ai-photo-scraping-2020-2.

[10] See id.

[11] Kashmir Hill, The Secretive Company That Might End Privacy as We Know It, N.Y. Times (Jan. 18, 2020), https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

[12] See id.

[13] See id.

[14] See id.

[15] See id.

[16] See id.

[17] See id.

[18] See id.

[19] Tim Cushing, Lawsuit Says Clearview’s Facial Recognition App Violates Illinois Privacy Laws, Techdirt (Jan. 30, 2020), https://www.techdirt.com/articles/20200127/20405043812/lawsuit-says-clearviews-facial-recognition-app-violates-illinois-privacy-laws.shtml.

 

image source: https://www.dailymail.co.uk/news/article-7970371/Google-YouTube-Twitter-send-cease-desist-order-facial-recognition-app-Clearview-AI.html

 

Digital Doctors: How Telemedicine is Dealing with Privacy Risks

By: Brandon Baker


Technology is everywhere. It has invaded every aspect of our lives, for better or for worse. Technology allows us to be in constant contact with one another, no matter how far apart we are. It helps to forecast trends and allows more people, from more places, to be heard. Unfortunately, technology has also made us, as a society, extremely reliant upon it for our every need. Instead of driving to the grocery store, we can use an app that will get our groceries for us and deliver them to our door. We can spend countless hours seeing what everyone else on earth is up to, without ever living a life of our own.

One aspect of technology that has had a positive impact on society is telemedicine. Telemedicine is defined as “a method of providing clinical healthcare to someone from a distance by the use of telecommunication and information technology.”[1] Telemedicine, like technology as a whole, is doing its part to shorten the distance between physicians and patients in rural areas.[2] Additionally, telemedicine can operate in conjunction with data analytics software, helping ensure that the patient receives the best care possible.[3] Telemedicine truly has the ability to give citizens across the country access to world-class medical care without their even having to leave the house. This is a massive breakthrough for rural health care, and it eliminates the burden that traveling to faraway hospitals and clinics places on the patient. Furthermore, telemedicine can play a major role in the treatment of patients who are not fully mobile, allowing them to skip the hassle and financial burden of arranging medical transport for treatment. Telemedicine continues to break down barriers, giving more and more people the chance to receive first-class medical care, no matter where they might be.

While telemedicine has many benefits that have helped and will continue to help patients across the country, there is a question of whether the data that is shared electronically is secure. Physicians need to make sure that the data they receive from patients is secured pursuant to the Health Insurance Portability and Accountability Act of 1996 (HIPAA), where it applies.[4] The great progress telemedicine has achieved in recent years, and the vast benefits of its services, could all be at risk if a widespread data breach targeted these providers and stole sensitive information. Due to this concern, telemedicine providers are advised to conduct more frequent tests of the vulnerability of their IT systems.[5]

In conclusion, while telemedicine has been praised for its ability to connect individuals from around the country with top-notch medical care, it has not been without its risks or concerns. For telemedicine to continue to grow and allow its benefits to reach every single corner of the country, telemedicine providers need to be wary of the security and privacy risks that are present and how to effectively mitigate them.

[1] See What Are The Latest Trends in Telemedicine in 2018?, Forbes (July 31, 2018), https://www.forbes.com/sites/quora/2018/07/31/what-are-the-latest-trends-in-telemedicine-in-2018/#798893a06b9e

[2] Id.

[3] Id.

[4] See Joseph L. Hall et al., For Telehealth to Succeed, Privacy and Security Risks must be Identified and Addressed, Health Affairs (Feb. 2014), https://www.healthaffairs.org/doi/full/10.1377/hlthaff.2013.0997

[5] See supra note 1.

image source: https://www.chmbinc.com/digital-doctor-visits/

Deepfakes, Hyper-Realistic Masks, and Smart Contact Lenses: How Modern Technology Brings the Reliability of Eye-Witness Accounts and Video Evidence into Question

By: Olivia Akl


In a world that mass-produces conspiracy theories from the moon landing being fake to black helicopters coming to bring the US under UN control, it can occasionally be hard to tell fact from fiction. That’s only getting harder thanks to technological advancements like deepfake videos[1], hyper-realistic silicone masks[2], and soon-to-be smart contact lenses.[3] When we can’t trust what we see with our own eyes, what can we trust? What does this advancement of technological trickery mean for the reliability of eyewitness accounts and video evidence in courts?

 

A deepfake PSA produced by Buzzfeed in 2018 seemed to show President Obama warning people about the threat deepfakes presented.[4] It ended with the reveal that the person speaking was not President Obama, but rather Jordan Peele doing an impersonation of President Obama overlaid with President Obama’s image using FakeApp and After Effects CC.[5] This video opened many eyes to the power of deepfake technology and how convincing it could be.[6] However, deepfake videos are hardly the first videos to fool people into thinking one person is doing something, when it’s truly another individual.

 

Security videos analyzed by the FBI of a string of robberies in San Diego from 2009 to 2010 led the FBI to offer a $20,000 reward for information that led to the arrest of the so-called “Geezer Bandit.”[7] While at least one witness thought the robber was wearing a “Halloween-style old man” mask, the authorities felt confident in the many other eye-witness accounts that he was a 60-70 year old man[8] and the reward notice described him as such.[9] Surveillance footage from outside the site of the Geezer Bandit’s last robbery on December 2, 2011 showed the supposed 60-70 year old sprinting across a parking lot after a dye-pack exploded.[10] This led the FBI to update its reward notice to include the line: “Possibly wearing a synthetic mask and gloves to hide true physical characteristics.”[11] The “Geezer Bandit” was never caught.

 

These technologies may seem like something out of a Mission Impossible movie or science fiction, but they are real, and they are getting both cheaper[12] and better.[13] Another technology that seems straight out of science fiction is a smart contact lens that may be only a few years away.[14] Mojo Vision, a California-based company, has been working on a smart contact lens—the Mojo Lens—for five years.[15] While the Mojo Lens is conceived as a discreet replacement for a smartphone’s screen, something like a less obvious Google Glass, it is not a big jump to see how such a device could alter the wearer’s perception of the world.

 

Deepfake technology has already proven capable of working in real time to overlay one person’s image over a live speech.[16] If a smart contact lens could be hacked or infected with malware that allowed access to the view that an individual sees, is it possible a deepfake could be created for the wearer’s eyes only, altering the wearer’s perception of the world in real time? Imagine a smart contact lens wearer witnesses a crime and describes the perpetrator to the police. In a world with these two technologies, smart contact lenses and live deepfakes, can that eyewitness account be trusted?

 

There is already a worry over the reliability of eyewitness testimony today, without any potentially hacked or malware-ridden smart contact lenses to muddy the waters.[17] Human memory is fallible and people are not often as perceptive as lawyers hope their witnesses are, yet “jurors place heavy weight on eyewitness testimony when deciding whether a suspect is guilty.”[18] In the future, these smart contact lenses will present new issues with eyewitness accounts, perhaps to the point where eyewitnesses will no longer be trusted on the stand.

 

Another new worry for the courts will be if video evidence can be relied upon due to hyper-realistic silicone masks and deepfake technology. If the technology gets beyond what analysis can reveal as false, could an innocent person be framed for a crime using this technology? Even if the technology can be recognized upon analysis of the video, could the analysis be prohibitively costly for a court or defendant to bear? If so, future courts will need to ask: can video evidence be relied upon when there is a lingering issue of its veracity, and how expensive can cases relying on video evidence be allowed to become?

[1] See Daniel Thomas, Deepfakes: A Threat to Democracy or Just a Bit of Fun?, BBC (Jan. 23, 2020), https://www.bbc.com/news/business-51204954.

[2] Matt Simon, Gaze Into These Hyperrealistic Masks and See a Troubling Future, Wired (Jan. 6, 2020, 2:15 PM), https://www.wired.com/story/hyper-realistic-masks/.

[3] Julian Chokkattu, The Display of the Future Might Be in Your Contact Lens, Wired (Jan. 16, 2020, 8:00 AM), https://www.wired.com/story/mojo-vision-smart-contact-lens/?itm_campaign=BottomRelatedStories_Sections_1.

[4] BuzzFeedVideo, You Won’t Believe What Obama Says in This Video!, YouTube (Apr. 17, 2018), https://www.youtube.com/watch?v=cQ54GDm1eL0.

[5] See id.

[6] See id.

[7] Reward of $20,000 Offered in “Geezer Bandit” Investigation, FBI San Diego (Dec. 15, 2010), https://web.archive.org/web/20101219040727/http://sandiego.fbi.gov/pressrel/pressrel10/sd121510.htm.

[8] FBI still seeking help catching ‘Geezer Bandit’; $20,000 reward offered, Los Angeles Times: L.A. Now (Dec. 15, 2010, 11:28 AM), https://latimesblogs.latimes.com/lanow/2010/12/the-fbi-on-wednesday-renewed-its-plea-for-public-help-in-finding-one-of-the-regions-more-illusive-crooks-the-geezer-bandit.html.

[9] See FBI San Diego supra note 7.

[10] Tony Perry, Geezer Bandit May Not Be a Geezer, Los Angeles Times (Dec. 23, 2011, 12:00AM), https://www.latimes.com/local/la-xpm-2011-dec-23-la-me-geezer-20111223-story.html.

[11] Darrell Foxworth, Reward of $20,000 Offered in “Geezer Bandit” Investigation, FBI (Dec. 2, 2011), https://archives.fbi.gov/archives/sandiego/press-releases/2011/reward-of-20-000-offered-in-geezer-bandit-investigation-1.

[12] See Simon, supra note 2.

[13] Pakinam Amer, Deepfakes Are Getting Better. Should We Be Worried?, Boston Globe (Dec. 13, 2019, 4:07 PM), https://www.bostonglobe.com/2019/12/13/opinion/deepfakes-are-coming-what-do-we-do/.

[14] See Simon, supra note 2.

[15] See id.

[16] Samantha Cole, This Program Makes it Even Easier to Make Deepfakes, Vice: Motherboard (Aug. 19, 2019, 11:50 AM), https://www.vice.com/en_us/article/kz4amx/fsgan-program-makes-it-even-easier-to-make-deepfakes.

[17] Hal Arkowitz, Why Science Tells Us Not to Rely on Eyewitness Accounts, Scientific American: Mind (Jan. 1, 2020), https://www.scientificamerican.com/article/do-the-eyes-have-it/.

[18] See id.

 

image source: https://interestingengineering.com/generative-adversarial-networks-the-tech-behind-deepfake-and-faceapp

Challenges Posed by Online Investing

By: Kirk Kaczmarek


I clicked the YouTube link, and my phone screen showed the stock portfolio of reddit user u/ControlTheNarrative. His face was stoic in the upper right hand corner of the screen as he showed viewers his 358 contracts for Apple stock valued at $57,684. The clock struck 9:30 AM, and the market opened. u/ControlTheNarrative audibly gagged and closed his eyes in disbelief as his portfolio value immediately plummeted to -$2,600.[1] u/ControlTheNarrative was one of a handful of amateur investors who exploited a glitch to obtain “infinite” leverage when trading on Robinhood in late October 2019.[2]

 

Robinhood is a free stock trading app that gives unsophisticated investors easy access to the stock market.[3] Among the services Robinhood provides is the ability to buy stock on margin.[4] When you buy stock outright, you simply exchange money for a share of stock. When you buy stock on margin, you borrow money from a broker to purchase the stock, offering your own cash or other securities as collateral; this is called leverage.[5] Robinhood also allows users to sell covered calls: the user sells a contract giving the buyer the right to purchase stock the user already owns at a set price before a set time, and collects a premium for doing so.[6]

 

Robinhood allowed users to buy stock on margin at a 2:1 debt-to-capital ratio.[7] u/ControlTheNarrative and others would buy stock on margin and then immediately sell a covered call on that stock at a strike price slightly below what they had just paid.[8] Normally, a broker would see this transaction and either recall its debt or at least refuse to issue a new loan.[9] Because of a glitch in the app, Robinhood did not behave like a normal broker: it combined the value of the covered call with the value of the stock, effectively doubling the user’s counted assets.[10] u/ControlTheNarrative and others took advantage of this by using their falsely inflated asset values to buy still more stock on margin at a 2:1 ratio, without any limit, thereby obtaining “infinitely” leveraged positions.[11] Unsurprisingly, the scheme failed spectacularly, leaving the amateur investors severely in debt.[12]
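To see how quickly that compounds, here is a rough back-of-the-envelope model of the reported loop, based only on the description above: the counted capital is doubled through a 2:1 margin purchase, calls are sold against the new shares, and the premium is (wrongly) counted as fresh capital for the next round. The $4,000 starting figure appears in the reporting discussed below; the assumption that each call’s premium roughly equals the purchase price is mine, made only to illustrate the compounding.

# Back-of-the-envelope sketch of the reported "infinite leverage" loop.
# Illustrative only: assumes the premium from selling deep-in-the-money calls
# is roughly 95% of the purchase price, which overstates reality but shows the compounding.


def leverage_loop(cash, premium_ratio=0.95, rounds=3):
    capital = cash      # what the app (erroneously) counts as the user's own assets
    exposure = 0.0      # total market value of stock purchased
    debt = 0.0          # total margin borrowed from the broker
    for i in range(rounds):
        buy = 2 * capital              # 2:1 margin: buy twice the counted capital
        debt += buy - capital          # half of each purchase is borrowed
        exposure += buy
        premium = premium_ratio * buy  # sell covered calls against the new shares
        capital = premium              # the glitch: premium counted as new capital
        print(f"round {i + 1}: exposure ${exposure:,.0f}, margin debt ${debt:,.0f}")
    return exposure, debt


leverage_loop(4_000)  # one redditor reportedly started with roughly $4,000

In this sketch, after three rounds the $4,000 account controls over $50,000 of stock against roughly $26,000 of margin debt, which is why a single adverse move could leave the user owing far more than the original deposit.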

 

Although Robinhood has seemingly run afoul of consumer protection law, it has not suffered legal consequences resulting from the infinite leverage incident.[13] And strangely, while Robinhood has not paid out any ill-gotten gains to users who may have benefited from the glitch, it may still hold investors like u/ControlTheNarrative, who lost money, liable.[14]

 

Robinhood’s commission-free investing business model is risky because amateur investors may behave in ways that do not conform to industry norms. At least two people in addition to u/ControlTheNarrative were able to leverage up to $1 million based on initial capital of only $4,000 and $15,000, respectively.[15] Sophisticated investors likely would not have taken such outlandish bets. But the risk has not stopped established firms from adopting Robinhood’s model: Charles Schwab and Fidelity removed trading commissions in October 2019.[16]

 

When software glitches can cause such gross investing mismanagement, brokerage firms ought to be held accountable. However, the SEC ought to also exercise caution in issuing new regulations for fear of stifling access to the stock market for amateur investors. Rather, the SEC should enforce consumer protection laws already on the books, while firms should beef up their insurance to protect from future technical failures.

[1] GG Boys, Guy loses $50k swinging during earnings on Robinhood, YouTube (Oct. 2019), https://www.youtube.com/watch?time_continue=0&v=d80ahvRSV8E&feature=emb_title.

 

[2] See Edward Ongweso Jr., A Robinhood Exploit Let Redditors Bet Infinite Money on the Stock Market, Vice (Nov. 6, 2019, 12:48 PM), https://www.vice.com/en_ca/article/gyz9kj/a-robinhood-exploit-let-redditors-bet-infinite-money-on-the-stock-market.

 

[3] See Robinhood, https://robinhood.com/ (last visited Jan. 24, 2020).

 

[4] See Supercharge your Investing, Robinhood, https://robinhood.com/about/gold/.

 

[5] See What is Margin?, Robinhood, https://learn.robinhood.com/articles/3ya72NeLXpiAvpV5QPQVl6/what-is-margin/; and Adam Hayes, Leverage, Investopedia (Apr. 24, 2019), https://www.investopedia.com/terms/l/leverage.asp.

 

[6] See Placing an Options Trade, Robinhood, https://robinhood.com/support/articles/360001227566/placing-an-options-trade/; and Akhilesh Ganti, Covered Call, Investopedia (Oct. 14, 2019), https://www.investopedia.com/terms/c/coveredcall.asp.

 

[7] See Modern Wall Street, Turchman: “Someone hacked Robinhood, bet against Apple & now owes $150,000”, YouTube (Nov. 6, 2019), https://www.youtube.com/watch?v=1SqeS8nSZA0&feature=emb_title.

 

[8] See Ongweso Jr., supra note 2.

 

[9] See Modern Wall Street, supra note 7.

 

[10] See Ongweso Jr., supra note 2.

 

[11] See id.

 

[12] See id.

 

[13] See Regulators could punish Robinhood for glitch (CNBC television broadcast Nov. 6, 2019), https://www.msn.com/en-us/news/videos/regulators-could-punish-robinhood-for-glitch/vp-AAJXirx.

 

[14] See id.

 

[15] See Ongweso Jr., supra note 2.

 

[16] See Kate Rooney, ‘Infinite Leverage’ – some Robinhood users have been trading with unlimited borrowed money, CNBC (Nov. 5, 2019, 1:43 PM), https://www.cnbc.com/2019/11/05/some-robinhood-users-were-able-to-trade-with-unlimited-borrowed-money.html.

 

image source: https://pixabay.com/photos/stock-exchange-boom-economy-pay-3972311/

 

 

Big Brother is Watching Your Kids

By: Will Garnett


News of multiple governmental organizations using facial recognition technology has sparked a conversation about the protection of children online. The Children’s Online Privacy Protection Act (COPPA) was enacted in 1998 to protect children using the internet.[1] The act was meant to force websites to take certain precautions when knowingly interacting with individuals under the age of 13, with the specifics of those precautions left to the FTC to define by regulation.[2] The FTC’s COPPA rules have been revised since their inception, but they have maintained organized standards for websites to follow.[3] These rules require parental consent before personal information about a child can be collected and disseminated.[4] The rules also ensure that once a child’s information is properly collected, it is only transmitted to other entities capable of protecting that information.[5] Children currently have access to a great deal of technology that could collect their information, such as mobile games, apps, and social networking sites. Further, as technology creeps into previously unknown domains, COPPA regulations protecting children have become more important.

In January, multiple news outlets reported that governmental organizations were using facial recognition software as part of their operations.[6] One entity that seems to be a favorite of the government is a company called Clearview AI, which has been selling facial recognition data to over 600 law enforcement agencies in the country.[7] The idea behind using facial recognition data is that images of criminals and other people of interest can be captured innocuously and later used to find and apprehend those individuals after an incident. But Clearview AI has a database of over three billion photos, collected in the last year alone.[8] The company has been able to amass this hefty war chest of photos by scraping various parts of the internet, particularly social media platforms.[9] With the current popularity of social media platforms among teens and children, skepticism about the data collection being done by companies like Clearview AI has grown. With a simple swipe of the finger, people can access TikTok videos of children (certainly some under the age of 13) performing fun dances and lip syncing to hit songs. How much of that easily accessible information is also being scraped by facial recognition technology and stored for later use? Further, even if one believes that simply capturing facial images of children is innocuous, consider what other personal information is being captured through videos on apps like TikTok. Scraping social media platforms can produce very personal information about someone, such as their full name, home location, current location, and even the contents of their bedroom.

Concern over the private collection and governmental use of facial recognition data has reached the highest levels of power. Senator Ed Markey of Massachusetts recently sent a letter with a series of questions to Clearview AI addressing this very problem.[10] The Senator’s letter raised even more serious questions about the collection of personal information, including questions about the biometric data that can be gathered with facial recognition technology. His letter also addressed how the use of this technology could potentially violate COPPA regulations.[11] Although the alarm bell has been rung in one branch of the government, there are forces within the FTC that already see COPPA regulations as too cumbersome and far-reaching.[12] One FTC commissioner dissented from the 2012 update to the regulations, stating that it would implicate too many websites.[13] While the government remains split on decisions about regulation and the implementation of new technology, children (those least able to protect themselves) remain in harm’s way. But what’s new?

[1] See 15 U.S.C. §§ 6501-6506 (2019).

[2] See id.

[3] See 16 C.F.R. § 312 (2019).

[4] See id.

[5] See id.

[6] See Chris Mills Rodrigo, Democratic senator presses facial recognition company after reports of law enforcement collaboration, The Hill (Jan. 23, 2020), https://thehill.com/policy/technology/479564-democratic-senator-presses-facial-recognition-company-after-reports-of-law.

[7] See id.

[8] See id.

[9] See id.

[10] See id.

[11] See id.

[12] See id.

[13] See 5 Computer Law §28.05 (2019).

image source: https://www.verywellfamily.com/parental-controls-2634209

Will Your Vote Count In 2020? Internet Voting Threatens Vote Legitimacy

By: Garrett Kelly

Leading up to the 2018 midterm elections, West Virginia Secretary of State Mac Warner announced that voters in designated counties in West Virginia would be able to access their voting ballots on their mobile devices.[1] Since 2018, the Mobile Voting Project, a philanthropy focused on increasing voting accessibility, has expanded its pilot locations to Utah, Oregon, Colorado, and Washington.[2] As of January 2020, in King County, Washington, all 1.2 million of the county’s voters were able to vote on their mobile devices using a platform called Democracy Live.[3] Ironically, Oregon Senator Ron Wyden has stated that internet voting is “the worst thing you could do in terms of electronic security in America, short of putting ballot boxes on a Moscow street.”[4] In response to the controversial 2016 presidential election, select areas of the United States have begun experimenting with mobile voting methods.[5] The cybersecurity threats of mobile voting are largely implicit; the real threat, however, is to our legal right to vote. Under established doctrines such as the Fifteenth Amendment to the U.S. Constitution and Article 21 of the Universal Declaration of Human Rights, the right to vote is guaranteed to us by law.[6] Although the right to vote in “genuine elections” is a secured right, there are currently no federal voting regulations specifically tailored to the surge of voting technology in the age of the internet.[7]

Advocates claim the current voting system is not working and that the lack of voter turnout is leading our democracy toward a state of crisis.[8] Advocates acknowledge the need to secure the integrity of every vote, but claim that the current voting system does not accommodate voter accessibility, as evidenced by the fact that the voter turnout rate in the U.S. is lower than in most other developed countries.[9] Advocates attribute the low U.S. turnout rate to several factors. One is that elections are held during the workday, which prevents some working citizens from voting.[10] Additionally, voting is too “old-school.”[11] Voters are less likely to vote if they are forced to congregate at a local church, high school gym, or retirement home and wait in line to cast their ballots.[12] Furthermore, a popular concern is that the younger generation will naturally trend toward internet voting because they will have less familiarity with paper ballot voting.[13] Advocates point to all of these factors to explain the average voter turnout rate of 55%.[14] They claim that the outdated and ineffective voting system has created too much “friction” between registration and voting and is the catalyst for the diminished legitimacy of our elected officials.[15]

The question becomes: does the hope of increased voter turnout outweigh the cybersecurity risks? Perhaps an obvious counterargument to the claim that greater accessibility will increase election legitimacy is precisely the opposite: if votes can be cast at the touch of a finger, legitimacy could decrease, because the fact that people are unwilling even to wait in line at the polls suggests they are not serious about their votes. An alarming analogy used by cybersecurity experts compares the risks of internet voting to the fraud statistics of the American financial system. Most big banks allegedly build losses from fraud, reportedly 5-7% of annual expenses, into their operating costs. It is bad enough that the American financial system complacently allows fraud and financial misconduct to continue at such a rate. How would this work in the political setting? Would 5-7% of all votes be the result of fraud? That does not sound like a successful sales pitch to the growing, rationally apathetic population in the United States. The question remains, however, whether the threat of fraud is greater than the threat posed by the increasing population of apathetic voters and by people who are unable to vote due to lack of accessibility.[16] The bottom line: if people are unwilling or unable to vote in person, their vote is guaranteed not to count.[17]

 

 

[1] Tusk Philanthropies, Mobile Voting Project, https://mobilevoting.org/where-is-it-happening/.

[2] See id.

[3] Id.

[4] Miles Parks, In 2020, Some Americans Will Vote On Their Phones. Is That The Future?, NPR (Nov. 7, 2019, 5:01 AM), https://www.npr.org/2019/11/07/776403310/in-2020-some-americans-will-vote-on-their-phones-is-that-the-future.

[5] Mobile Voting Project, supra note 1.

[6] See generally U.S. Const. amend. XV; see also G.A. Res. 217 (III) A, Universal Declaration of Human Rights (Dec. 10, 1948).

[7] Emily Goldberg, America faces a voting security crisis in 2020. Here’s why – and what officials can do about it, Politico (Sept. 16, 2019, 4:03 PM), https://www.politico.com/story/2019/08/16/voting-security-crisis-q-a-1466704.

[8] Tusk Philanthropies, Mobile Voting Project, https://mobilevoting.org/why-mobile-voting/.

[9] Parks, supra note 4.

[10] See id.

[11] Id.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Goldberg, supra note 7.

[17] See id.

image source: https://static.politico.com/dims4/default/a985f3b/2147483647/resize/1160x/quality/90/?url=https%3A%2F%2Fstatic.politico.com%2F5d%2F4f%2F08ad183c44b1b72837846f0a0d3e%2F190816-votingmachine-getty-773.jpg

California Game-Changers

By: Eric Richard


With the adoption of the General Data Protection Regulation in 2018, many curious onlookers watched and waited to see whether similar legislation would surface in the United States. Federally, those onlookers are still waiting, but some states are taking up the initiative, the most notable being California.[1] Just this month (January 2020), the California Consumer Privacy Act (CCPA) became effective.[2] The new legislation is currently the strongest data privacy law in the United States, providing consumers with ample rights related to accessing their data, having their personal data deleted upon request, and even opting out of having their data sold.[3] Vermont also enacted data protection legislation recently.[4] However, Vermont’s law covers only third parties that buy or resell consumer data, and it is not as far-reaching.[5]

So, what kinds of businesses need to be scared of, or at least aware of, these new laws? The CCPA covers businesses operating in the state of California, such as ride-hailing services, retailers, mobile service providers, and others that collect personal data for commercial purposes.[6] Further, the CCPA applies only to companies that have more than $25 million in annual gross revenues, that annually buy, receive, or sell the personal information of 50,000 or more consumers, households, or devices, or that derive 50% or more of their annual revenue from selling personal information.[7] Consumers will now be able to see exactly what categories of data any covered company has on them.[8] This includes things like smartphone locations and voice recordings (look out, Snapchat).[9] The CCPA even provides specific protection for children.[10] It stipulates that companies must obtain parental permission before selling the personal details of anyone under the age of 13.[11]
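For readers who want to see how the three prongs fit together mechanically, here is a rough sketch of the applicability test as described above. It is an illustration only, not legal advice; it ignores the statutory definitions (for example, what counts as “doing business” in California or “selling” data) that determine coverage in practice.

# Rough sketch of the CCPA applicability thresholds described above.
# Illustrative only and not legal advice; statutory definitions and edge cases are ignored.


def ccpa_applies(does_business_in_california,
                 annual_gross_revenue,
                 consumers_households_or_devices,
                 share_of_revenue_from_selling_data):
    if not does_business_in_california:
        return False
    return (
        annual_gross_revenue > 25_000_000                # more than $25 million in gross revenues
        or consumers_households_or_devices >= 50_000     # data on 50,000+ consumers, households, or devices
        or share_of_revenue_from_selling_data >= 0.50    # 50% or more of revenue from selling personal information
    )


# Example: a California retailer with $30 million in revenue is covered by the first prong alone.
print(ccpa_applies(True, 30_000_000, 10_000, 0.05))  # True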

Another California initiative may be about to change the way students go through law school in the Golden State. A recent rule change by the State Bar of California will allow state-accredited law schools to teach JD programs entirely online.[12] The result has been two-fold: accredited law schools are now filing applications to offer all-online curricula, and non-accredited law schools that currently offer online curricula are filing for accreditation.[13] Much like the growing number of employees who want the option to work from home, this change likely stems from the American preference for flexibility in scheduling one’s life. There will be several layers of hoops to jump through before all-online programs become accredited, and we have yet to see how many students will opt for an online experience as a result, but California is certainly changing the landscape.

[1] See Jill Cowan, How California’s New Privacy Law Affects You, N.Y. Times (Jan. 3, 2020), https://www.nytimes.com/2020/01/03/us/ccpa-california-privacy-law.html.

[2] See id.

[3] See id.

[4] See Jason Tashea, Vermont’s new consumer protection law could be a harbinger for tech industry, A.B.A. J. (June 1, 2019, 12:50 AM), http://www.abajournal.com/magazine/article/sunlight-in-vermont-states-new-consumer-protection-law-regulating-companies-that-buy-or-sell-data-could-be-a-harbinger-for-tech-industry.

[5] See id.

[6] See Cowan, supra note 1.

[7] See Jason Tashea, California’s new data privacy law could change how companies do business in the Golden State, A.B.A. J. (Jan. 1, 2019, 1:50 AM), http://www.abajournal.com/magazine/article/gdpr_california_data_privacy_law.

[8] See Cowan, supra note 1.

[9] See id.

[10] See id.

[11] See id.

[12] See Stephanie Ward, California may offer more opportunities for JDs taught entirely online, A.B.A. J. (Jan. 14, 2020, 6:30 AM), http://www.abajournal.com/web/article/california-may-offer-more-opportunities-for-jds-taught-entirely-online.

[13] See id.

 

image source: https://www.bbklaw.com/news-events/insights/2019/legal-alerts/01/new-year,-new-laws-impacting-public-agencies-i-(1)

5G Fury: What the Latest Generation Could Mean for Attorneys

By: Monica J. Malouf


The newest generation of mobile networking is upon us. AT&T and Verizon have rolled out their 5G—which stands for 5th generation—plans, and in 2020 both providers intend to expand to nationwide coverage.[1]

In a nutshell, 5G is “a new cellular standard.”[2] It will improve interconnection between users, as well as connection with “smart” devices.[3] It will deliver multi-gigabit-per-second data rates, decreased latency (lag), increased capacity, and “a more uniform user experience.”[4]

Not only will 5G transform cellular network connection, but it will also improve mission-critical communications, “enabl[ing] new services that can transform industries with ultra-reliable/available, low latency links—such as remote control of critical infrastructure, vehicles, and medical procedures.”[5] 5G could also generate significant revenue for the U.S. economy, although the full economic effect is yet to be realized.[6]

These improvements obviously appeal to consumers across all network providers. But the appeal comes at a high cost.

To implement nationwide access, the large wireless companies will need to construct or install thousands of new towers. Technically they are not “towers” at all; they are more like pole-like extensions with booster antennas attached to lampposts and stoplights. Nonetheless, these installations are semi-invasive, and people across the country are displeased.

5G requires that towers be erected in closer proximity to users,[7] which means closer proximity to homes. AT&T plans to install 300,000 of them this year.[8] These towers are eyesores that the company plans to attach to existing infrastructure in neighborhoods.[9] Quicker access and connectivity come at a price, right?

Homeowners from New York to Maryland have expressed their concerns about these new cell towers popping up in their neighborhoods.[10] They cite health concerns regarding increased radiation exposure, specifically fears that the exposure causes cancer.[11] Citizens in Albany have created a group demanding a moratorium on construction until the health concerns are addressed.[12]

However, research on the health effects of cell-phone-emitted radiation has been inconsistent.[13] The telecom industry and even the Federal Communications Commission (FCC) have assured the public that 5G is safe.[14] Additionally, some scientists point to research showing that human skin can block radio waves at higher frequencies.[15] Nonetheless, people around the world have banded together around health concerns surrounding 5G, slowing the implementation of the new generation of mobile networking.[16]

Like any advance in technology, new legal questions arise with the unveiling of the new network. Questions of health, privacy, intellectual property, and other legal implications circle this next generation of mobile networking. In the race with China to implement nationwide 5G access, the FCC has placed constraints on the ability of cities and local governments to regulate 5G within their localities.[17]

The U.S. Conference of Mayors released a statement in opposition following the FCC’s order, stating:

“Despite efforts by local and state governments, including scores of commenters in the agency’s docket, the Commission has embarked on an unprecedented federal intrusion into local (and state) government property rights that will have substantial and continuing adverse impacts on cities and their taxpayers, including reduced funding for essential local government services, and needlessly introduce increased risk of right-of-way and other public safety hazards.”[18]

Legal questions will continue to circle around 5G. Will new towers expose users to new levels of radiation? Will the new connective features among smart devices lead to privacy breaches? Will the new towers affect property prices or other elements of the real estate market? The list continues.

Societal desires for faster, more efficient, and more innovative data access move with a momentum that the law never seems quite able to match. And while that seems daunting for consumers, it means uncharted territory for attorneys. And uncharted territory means work. Lots of work.

[1] See Brian X. Chen, What You Need to Know About 5G in 2020, N.Y. Times: Tech Fix ( Jan. 8, 2020), https://www.nytimes.com/2020/01/08/technology/personaltech/5g-mobile-network.html?auth=login-google1tap&login=google1tap.

[2] Id.

[3] See Everything You Need to Know About 5G, Qualcomm: 5g FAQ (last visited Jan. 16, 2020), https://www.qualcomm.com/invention/5g/what-is-5g.

[4] Id.

[5] Id.

[6] See id.

[7] See Chen, supra note 1.

[8] 5G Service is Coming – And So Are Health Concerns Over The Towers That Support It, CBS News (May 29, 2018, 8:39 AM), https://www.cbsnews.com/news/5g-network-cell-towers-raise-health-concerns-for-some-residents/.

[9] See id.

[10] See id.; see also Paul Grondahl, 5G Cell Protests Span from Albany to Europe to Russia, Times Union (May 14, 2019, 8:41 PM), https://www.timesunion.com/news/article/Grondahl-5G-cell-protests-span-from-Albany-to-13843495.php#photo-17439917.

[11] See 5G Service is Coming, supra note 8.

[12] See Grondahl, supra note 10.

[13] See id.

[14] See Tad Simons, The Great Rollout: Will 5G Be a Boon for Lawyers?, Thomson Reuters: Legal Executive Institute (July 23, 2019), http://www.legalexecutiveinstitute.com/5g-lawyers-boon/.

[15] See William J. Broad, The 5G Health Hazard That Isn’t, N.Y. Times (July 16, 2019), https://www.nytimes.com/2019/07/16/science/5g-cellphones-wireless-cancer.html; see also Reality Check Team, Does 5G Pose Health Risks?, BBC News ( July 15, 2019), https://www.bbc.com/news/world-europe-48616174.

[16] See Thomas Seal & Albertina Torsoli, Health Scares Slow the Rollout of 5G Cell Towers in Europe, Bloomberg Businessweek (Jan. 15, 2020, 12:01 AM), https://www.bloomberg.com/news/articles/2020-01-15/health-scares-slow-the-rollout-of-5g-cell-towers-in-europe.

[17] See Simons, supra note 14.

[18] Statement by U.S. Conference of Mayors CEO & Executive Director Tom Cochran on FCC’s Order Subordinating Local Property Rights, The United States Conference of Mayors, https://www.usmayors.org/2018/09/26/statement-by-u-s-conference-of-mayors-ceo-executive-director-tom-cochran-on-fccs-order-subordinating-local-property-rights/.

image source: https://www.computerworld.com/article/3310067/why-5g-will-disappoint-everyone.html

