Richmond Journal of Law and Technology

The first exclusively online law review.


Birds Can’t Land in All Cities

By: Darden Copeland

Have the birds flocked to your city?  And I’m not talking about the ones with feathers—I’m referring to the newest trend in micro-mobility.  Fleets of electric scooters have appeared on the sidewalks of many United States cities, and they have spurred a host of negative reactions and legal implications.

Bird Rides, Inc., an electric scooter sharing platform, has plopped hundreds of its Bird scooters without warning in many of our nation’s cities.[1]  The micro-mobility sharing platform operates by way of a smart phone app that allows users to locate available scooters within a nearby radius and ride them a short distance for a small fee.[2]  Bird scooters are dockless, so riders can leave the scooters anywhere they please at the end of their ride.[3]

The 21st century has seen a multitude of advancements in mobility and transportation, and some local governments have welcomed the changes with open arms, while others have been a bit more resistant.  For example, ridesharing apps Uber and Lyft revolutionized the transportation industry by arriving in cities unannounced and saturating the market.[4]  Taking a very similar approach, Bird hopes to do the same, but the scooter sharing company has received far more pushback than Uber and Lyft.[5]  It’s not surprising that Bird is using the same tactic of appearing in cities unannounced, not only because of the success of Uber and Lyft, but also because Bird’s founder and Chief Executive Officer earned his wings as the former Chief Operating Officer of Lyft and the former Vice President of Growth at Uber.[6]

Bird’s unusual flock of disapprovers can be linked both to the suddenness of the scooters’ arrival and to the sheer abrasiveness of the scooter concept itself.  Unlike Uber and Lyft vehicles, the scooters are hard to miss: the morning after Bird makes its unannounced delivery of hundreds of scooters, people notice them.  If the scooters are not blocking sidewalks, they’re being ridden at relatively high speeds, weaving through crowds of pedestrians and sometimes breaking a host of local laws and ordinances.

Upon the scooters’ arrival, some cities, such as Richmond, Virginia, have hastily come up with permitting systems and regulations, while others have taken the opposite approach of outlawing the scooters altogether.  In Richmond, Mayor Levar Stoney pitched a one-year test program that would allow Bird to operate in the city after the company paid fees for each scooter.  Less thrilled about the arrival of the Birds, Norfolk, Virginia city officials have been rounding up the scooters and impounding them for breaking city ordinances such as property abandonment on public land.[7]  According to the City of Norfolk, Bird will have to pay more than $93,365 before the city will release the scooters.[8]  In San Francisco, California, city officials sent cease-and-desist orders to Bird, claiming that the scooters were creating “a public nuisance on the city’s streets and sidewalks and endangering public health and safety.”[9]  The city subsequently enacted an ordinance requiring permits for Bird or any other micro-mobility platform seeking to inhabit its streets and sidewalks.[10]  The City of Milwaukee even filed a civil action against Bird for public nuisance and consenting to the operation of unregistered motor vehicles.[11]

Cities’ disapproval of the scooters is rooted not only in the safety of pedestrians and riders, but also in liability.[12]  With hundreds of scooters zipping down city streets, a rider could be injured by hitting a pothole or similar road obstruction that would normally not present an issue to passing cars and motorcycles.  This hangs an uncertain cloud of liability over cities that host Bird scooters.[13]  An even stickier issue arises when scooter riders are the parties at fault for the injuries of others: Bird scooter users are not required to carry any form of insurance, leaving injured parties with no source of recovery or redress.[14]  With all of these issues in mind, it makes sense why many municipalities are dissatisfied with Bird for plopping the scooters in their regions without any communication or planning.  Will our nation’s cities continue to ruffle their feathers at new technology, or will they acquiesce to new trends in mobility?


[1]See City of Milwaukee v. Bird Rides, Inc., No. 18-CV-1066-JPS, 2018 U.S. Dist. LEXIS 187996, at *1 (E.D. Wis. Nov. 2, 2018).

[2]See Melia Robinson, A Startup in the West Coast Scooter Sharing Craze is Already Worth $1 Billion – Here’s What It’s Like to Ride a Bird Scooter, Business Insider, May 30, 2018,

[3]See id.

[4]See Tamara Kurtzman, Dockless Scooter Cos. Rewarded for Bad Behavior, Law 360, Sept. 14, 2018,

[5]See, e.g., Laura Newberry, Fed-up Locals Are Setting Electric Scooters on Fire and Burying Them at Sea, L.A. Times, Aug. 10, 2018,

[6]See Kurtzman, supra note 4.

[7]See Nick Boykin, Norfolk Has 560 Bird Scooters Impounded, Company Owes Over 93k for Them, WTKR, Nov. 13, 2018,

[8]See id.

[9]Michele Satterlund, Sidewalks: The Next Mobility Frontier, Law 360, Aug. 7, 2018,


[11]See Bird, No. 18-CV-1066-JPS, 2018 U.S. Dist. LEXIS 187996, at *4.

[12]See Kurtzman, supra note 4.

[13]See id.

[14]See id.


The Real Winner in the Midterm Elections? Social Media

By: Jonathan Walter

Following the 2016 presidential election, there was widespread criticism of social media networks for allowing trolls and bots to shape discourse,[1] as well as for creating political “echo chambers.”[2] Initially, Facebook denied that this was taking place and refused to acknowledge its role as a news source.[3] This critique only intensified as more information about the extent of the problems came to light. Twitter revealed that millions of tweets came from “highly automated accounts,”[4] while Facebook disclosed that roughly 3,000 ads had been purchased by “inauthentic accounts” that were “likely operated out of Russia.”[5] Everything came to a head in September of 2018, when executives from Facebook and Twitter went to Congress for hearings regarding election interference and were grilled by Senators about how they planned to fight bots and “deep fakes.”[6]

However, the 2018 midterm elections have been a different story, and it’s clear that Facebook and Twitter, among others, have learned a lesson. While these problems have not gone away,[7] there was generally less criticism after the midterm elections than after the 2016 presidential election. This year, Facebook removed 30 Facebook accounts and 85 Instagram accounts in an attempt to prevent foreign influence on election day,[8] and another 800 pages and accounts in the weeks leading up to it.[9] Similarly, in the months leading up to election day, Twitter removed over 10,000 bot accounts that were posting messages discouraging people from voting.[10] Further, Facebook set up a “war room” at its Menlo Park headquarters “to look for and stop election interference in ‘real time.’”[11]

On top of the additional oversight, social media networks have stepped up their get-out-the-vote efforts. For example, Twitter added election labels to candidates’ profiles, live streamed debates, launched the #BeAVoter campaign, and added an Election Day countdown to users’ home timelines that provided information about the user’s polling place and ballot.[12] Similarly, Facebook released a new “Candidate Info” tool to help people learn more about their local candidates through short videos. Even social media services like Snapchat and Instagram have attempted to cultivate goodwill with the general public by helping users register to vote.[13]

Despite these efforts, and their generally positive reception, it is important to reiterate that these problems have not gone away.[14] Although social media sites are doing their due diligence, there is actually some evidence that the problem may be getting worse. Facebook is still plagued with large-scale misinformation campaigns,[15] and researchers at Oxford University found that Twitter had five percent more false content this year than during the 2016 presidential election.[16] Even Twitter’s new page, focused on the midterm elections, picked up tweets from conspiracy theorists, people pushing disinformation, and bot accounts.[17] The problem that these social media platforms will be trying to solve in the future isn’t just fake news and Russian bots; it is a broader misinformation problem. This is very much an old problem that is taking a new form and will require more than just the right algorithm.[18]

All of that being said, there is no denying the fact that progress was made between the 2016 presidential election and the 2018 midterm election. Both Facebook and Twitter acknowledged the mistakes that they made and took positive steps to correct them. We can only hope that they continue to improve as the 2020 presidential election becomes more of a reality.


[1]See Tom McCarthy, How Russia Used Social Media to Divide Americans, The Guardian, Oct. 14, 2017,

[2]See Mostafa M. El-Bermawy, Your Filter Bubble is Destroying Democracy, Wired, Nov. 18, 2016,

[3]See id.

[4]Jon Fingas, Twitter Bots were Rampant During the US Election, Engadget, Nov. 20, 2016,

[5]Margaret Hartmann, Facebook Haunted by Its Handling of 2016 Election Meddling, N.Y. Mag: Intelligencer, Mar. 20, 2018,

[6]See Adi Robertson & Casey Newton, The 7 Biggest Moments from Wednesday’s Social Media Hearings, The Verge, Sept. 5, 2018,

[7]See Ali Breland, Social Media Companies Grapple with New Form of Political Misinformation, The Hill, Nov. 11, 2018,

[8]Don Reisinger, Facebook Removed 115 Accounts in the Run-Up to the Midterm Elections, Fortune, Nov. 6, 2018,

[9]See Elizabeth Dwoskin & Tony Romm, Facebook Purged Over 800 U.S. Accounts and Pages for Pushing Political Spam, Washington Post, Oct. 11, 2018,

[10]Todd Spangler, Midterm Elections: Are Facebook, Twitter Doing Enough to Stop Misinformation and Fraud?, Variety, Nov. 4, 2018,


[12]See Bridget Coyne, Five Days Until #ElectionDay 2018, Twitter Blog, Nov. 1, 2018,

[13]See Cecilia Kang, Snapchat Helped Register Over 400,000 Voters, N.Y. Times, Oct. 23, 2018,

[14]See Kevin Roose, Facebook Thwarted Chaos on Election Day. It’s Hardly Clear That Will Last., N.Y. Times, Nov. 7, 2018,


[16]Kate Conger & Adam Satariano, Twitter Says It Is Ready for the Midterms but Rogue Accounts Aren’t Letting Up, N.Y. Times, Nov. 5, 2018,

[17]Mallory Locklear, Twitter’s New Midterm Election Page Already Includes Fake News, Engadget, Oct. 30, 2018,

[18]See Max Read, Facebook Stopped Russia. Is That Enough?, N.Y. Mag: Intelligencer, Nov. 8, 2018,


CRISPR/Cas-9 Patent War Comes to a Close, For Now

By: Sarah Alberstein

UC Berkeley and MIT’s Broad Institute have been battling over the patent to coveted CRISPR/Cas-9 technology since 2012.[1] CRISPR/Cas-9 technology can be used to “silence mutated organismal DNA, replace it with correct sequences, or both in conjunction…[and] to sustain and lengthen the lifespan of…bacterial cultures by protecting them from viral attack…minimiz[ing] hassle and time spent re-growing cultures following a viral attack while maximizing efficiency for the researcher.”[2] What’s more, unlike previous gene-editing technologies, CRISPR/Cas-9 “makes it possible to observe specific effects of a particular gene and thus allows for more precise data collection and observation” while minimizing down-stream mutation.[3] The implications of CRISPR/Cas-9 are immense. For example, CRISPR/Cas-9 has the potential to cure previously incurable diseases, like Alzheimer’s and HIV, remove malaria from mosquitoes, develop new drugs, alter livestock and agricultural crops, and develop new cancer treatments.[4] It is no surprise that there would be controversy over who owns and controls the patent for this powerful technology.

As of April 2018, the U.S. Patent and Trademark Office “had issued 60 CRISPR-related patents to nearly 20 different organizations.”[5] However, there is one patent in particular that has been hotly contested – the patent that covers the use of CRISPR/Cas-9 to edit DNA in mammals.[6] In May 2012, UC Berkeley filed a patent application for the use of CRISPR/Cas-9 to “edit genes in various types of cells.”[7] In December 2012, the Broad Institute and MIT filed a patent application for the use of CRISPR/Cas-9 to “modify DNA in eukaryotic cells.”[8] In April 2014, the USPTO granted the Broad Institute its December 2012 patent, which UC Berkeley subsequently contested as being too similar to UC Berkeley’s May 2012 application.[9] In February 2017, the USPTO ruled in favor of the Broad Institute, stating that the Broad Institute’s December 2012 patent was not an obvious extension of UC Berkeley’s May 2012 application.[10] In June 2018, the USPTO granted UC Berkeley a patent for the use of CRISPR/Cas-9 to edit single-stranded RNA, and a patent for the use of CRISPR/Cas-9 to edit genome regions 10 to 15 nucleotides long.[11]

Finally, in September 2018, the US Court of Appeals for the Federal Circuit upheld the USPTO’s ruling in favor of the Broad Institute’s December 2012 patent for the use of CRISPR/Cas-9 in editing eukaryotic cells.[12] As a result, the Broad Institute has the rights to “commercialize products developed by using the CRISPR/Cas-9 system to make targeted changes to the genomes of eukaryotes – a group of organisms that includes plants and animals…cover[ing] a wide swath of potential CRISPR/Cas-9 products.”[13] While the results of this case seem to indicate that the patent war over CRISPR/Cas-9 technologies is coming to a close, there is still some movement within the industry. UC Berkeley could appeal the US Court of Appeals decision to the US Supreme Court which, given the zeal each institution has exhibited during this patent dispute, is not outside the realm of possibility.[14] Moreover, the CRISPR/Cas-9 technology landscape is ever-evolving. Already, researchers “have discovered new enzymes to replace Cas-9, and modified the CRISPR/Cas-9 system to manipulate the genome in many ways…”[15] It seems then that there are many technological advancements and patent disputes to come.


[1]Jessica Kim Cohen, UC Berkeley and Broad Institute’s Legal Dispute Over CRISPR Ownership: A Timeline of Events,Becker’s Health IT & CIO Report (June 21, 2018),

[2]Sarah Alberstein, CRISPR/Cas-9: The Ethics of Implementation, Grounds: Virginia Journal of Bioethics (Aug. 3, 2016),


[4]Mark Crawford, 8 Ways CRISPR-Cas9 can Change the World, ASME (May 2017),

[5]Cohen, supra note 1.







[12]Heidi Ledford, Pivotal CRISPR Patent Battle Won by Broad Institute, Nature (Sept. 10, 2018),





Is SOPIPA the Answer to Student Privacy in the Age of Mobile Technology?

By: Zaq Lacy

The debates surrounding the use of technology in the classroom have raged for many years, arguably beginning with the introduction of the blackboard in 1801 and evolving as society has advanced.[1] Regardless of what position you take on whether technology is beneficial, there can be no doubt that it has become ubiquitous in K-12 classrooms across America, integrated into a wide variety of facets of the classroom and school that directly interact with students.[2] With the level of sophistication of today’s technology, the rapid expansion of that technology being used by students, and the sheer amount of information being transmitted, these students’ privacy is at risk in three ways: illegal data collection, susceptibility to criminal activity, and identity theft caused by hacking.[3] Many of these students are under the age of 13 and are supposed to be protected by the Children’s Online Privacy Protection Act (“COPPA”), a federal statute enacted in 1998 that was designed to restrict the collection of personal information from children online.[4]

Unfortunately, a combination of ambiguities and confusion in COPPA’s language[5] and a lack of enforcement by the Federal Trade Commission[6] has resulted in a failure to protect children, particularly those using technology in school.[7] Despite the glaring flaws in COPPA and other current federal laws dealing with student privacy, Congress has made it clear that it will not take steps to remedy the situation, leaving it up to individual states’ legislatures to address the rising concerns.[8] California paved the way with its Student Online Personal Information Protection Act (“SOPIPA”) in 2014,[9] which is regarded as the “most successful and strict piece of privacy legislation” and is the template for a number of other states’ attempts at bolstering protections for students.[10] It was written to fill in the gaps left by federal privacy laws and was the first to target “operator[s] of [I]nternet web site[s], online service[s], online application[s], or mobile application[s],” and it applies to any educational technology (“edtech”) company that reaches California K-12 students, regardless of whether the company is based outside of California.[11]

SOPIPA places a number of restrictions on what information edtech companies can collect and what they can do with the information they have collected, including a ban on selling data for commercial purposes.[12] It also imposes affirmative obligations on such companies, requiring that they maintain and enforce appropriate security procedures to prevent “unauthorized access, destruction, use, modification, or disclosure” of student data, and that they delete any such data upon request.[13] These are major steps for student privacy, but SOPIPA is still considered an imperfect solution.[14] Among other things, SOPIPA does not appear to follow the federal definition of de-identification of data, which companies may use for commercial purposes.[15] Additionally, questions over enforcement could prove troublesome, particularly where teacher awareness of SOPIPA standards regarding free edtech products is concerned.[16] Despite this, SOPIPA answers a number of the issues that were left untreated by federal law.

Recognizing the potential of SOPIPA, numerous other states have introduced similar legislation, and fifteen other states adopted variations of this law in 2015, adjusting it to fit their needs.[17] Even though there are still some kinks to work out, it is clear that SOPIPA is paving the way to stronger protections for our students’ data privacy when using technology at school.


[1] See Michael Horn, New Research Answers Whether Technology Is Good or Bad for Learning, (Nov. 14, 2017, 8:28 AM),

[2] See Zaq Lacy, Is Classroom Technology Making Student Privacy Obsolete?, U. of Rich. J. of L. and Tech.: Blog (Nov. 9, 2018),

[3] See Alexis Peddy, Note, Dangerous Classroom “App”-titude: Protecting Student Privacy from Third-Party Educational Service Providers, 17 B.Y.U. Educ. & L. J. 125, 128 (2017).

[4] 15 U.S.C. § 6501(1) (2012).

[5] See Peddy, supra note 3, at 130.

[6] Id. at 135.

[7] Id. at 136.

[8] Id. at 142.

[9] Student Online Personal Information Protection Act of 2014, Cal. Bus. & Prof. Code §§ 22584-22585 (Deering 2014).

[10] See Peddy, supra note 3, at 147.

[11] Dylan Peterson, Note, Edtech and Student Privacy: California Law As a Model, 31 Berkeley Tech. L.J. 961, 973 (2016).

[12] See id. at 973-74.

[13] See id. at 975-76.

[14] See id. at 983.

[15] See id. at 992.

[16] See id.

[17] See Tanya Roscorla, More States Pass Laws to Protect Student Data, (Aug. 27, 2015),


Spyflying and Spydiving on the Spyhopping Orcas

By: Paxton Rizzo

The Southern Resident Orcas, or Killer Whales as they are more commonly known, are one of the most critically endangered marine mammals in the United States.[1] Currently, their population is at its lowest in three decades with only seventy-four individuals remaining.[2] Since 2005, the Southern Resident Orcas have been on the endangered species list[3] and are protected by the Endangered Species Act.[4]

Three distinct pods of orcas make up the clan referred to as the Southern Resident Orcas: J, K, and L pods. Each pod has its own distinct dialect.[5] These pods fall into a specific category of orca known as Resident Orcas, differentiated from the other types (Transient and Offshore) because they do not migrate as much, they have unique dialects among pods and communicate frequently, and they hunt primarily fish.[6] The Southern Resident Orcas’ diet consists mainly of salmon (80%).[7] They spend most of the warmer months hunting salmon in the Puget Sound between Canada and the United States, and in the winter they have been found as far north as Alaska and as far south as Monterey, California.[8] Being tied to a specific area or habitat is an element in the Southern Residents’ classification as endangered.[9] Many factors of the Southern Resident Orcas’ population and environment place them under the Endangered Species Act, such as the pollution of the water and of their food source, and the depletion of their primary food source by man-made mechanisms such as dams.[10]

Since the orcas were classified as endangered in 2005, conservation efforts, though underway even before then, have increased, and from the beginning technology has been used to learn about and understand the orcas. Until a few years ago, a common tool was the satellite tracker.[11] The tracker would be tagged onto an orca’s dorsal fin, by piercing the skin, allowing researchers to track how far the orca traveled in a day, week, or month and where exactly it went.[12] In 2016, researchers were trying to learn where the orcas went in the winter so they would be better able to protect them by expanding the area[13] protected for the orcas under the Endangered Species Act.[14] On a tagging mission, a mistake was made that ended several weeks later with a whale succumbing to a bacterial infection.[15] After that, researchers felt a need to find better ways to monitor the orcas.[16]

Today, researchers use a variety of devices to monitor and track the orcas, such as passive acoustic monitors, digital acoustic tags, and aerial drones.[17] Unlike the previous satellite tags, the digital acoustic tags attach by suction cups and track the movements of the orca and the sounds it makes and hears; three studies are underway that will use this technology to learn about the Southern Residents’ time in their summer habitat.[18] Aerial drones allow researchers to view the orcas from above and take pictures of them.[19] Using this method, researchers have been able to observe how the orcas’ weight fluctuates.[20] Seeing the orcas from above gives researchers a better angle for gauging their weight than the previous method of looking at them from the side, where their figure is harder to observe.[21] This method of tracking the orcas’ weight has been especially helpful in determining which orcas are pregnant and which may be sick.[22] That knowledge gives researchers the opportunity to respond quickly in any attempt they may launch to save an orca.[23] Most notably, this year, when observing orca J50 (affectionately known as Scarlet), researchers noticed that, though she had always been small, her fat stores were depleting quickly.[24] Researchers were able to react by giving her medication and attempting to get her food to eat.[25] They had come up with other creative plans to try to save her when it was determined, after not seeing her for a while, that she was likely dead.[26]

Currently, the data collected from the technology used to track and monitor the orcas, along with stool samples,[27] are informing a governor’s task force in Washington State. The task force will soon release recommendations on what changes and long-term solutions need to be made and implemented in order to try to save the Southern Resident orcas.[28]


[1] See Southern Resident Orcas, Endangered Species Coalition (last visited Nov. 10, 2018).

[2] See Drones Helping Scientist Track Orca Health, (last visited Nov. 9, 2018).

[3] See Southern Resident Orcas, supra note 1.

[4] 16 USCS § 1531 (LexisNexis, current through PL 115-269, approved 10/16/18).

[5]  See Southern Resident Orcas, supra note 1.

[6] See Charles Q. Choi, New Killer Whale Species Proposed, Live Science (Apr. 26, 2010, 3:18 AM ET),

[7] See Southern Resident Orcas, supra note 1.

[8] FAQ About the Southern Resident Endangered Orcas, The Whale Museum (last visited Nov. 10, 2018).

[9] See 16 USCS § 1533 (current through PL 115-269, approved 10/16/18).

[10] See id.; see also Southern Resident Orcas, supra note 1.

[11] See Craig Welch, Orca Killed by Satellite Tag Leads to Criticism of Science Practices, National Geographic (Oct 6, 2016),

[12] See id.

[13] See id.

[14] See 16 USCS § 1533(b)(2) (current through PL 115-269, approved 10/16/18).

[15] See id.

[16] See id.

[17] See Spotlight on the Southern Resident Killer Whale – An Interview with NOAA Fisheries Biologist Lynne Barre, NOAA Fisheries (Feb. 13, 2018),

[18] See Using DTAGs to study acoustics and behavior of Southern Resident killer whales, Northwest Fisheries Science Center (last visited Nov. 10, 2018).

[19] See Drones Helping Scientist, supra note 2.

[20] See id.

[21] See id.

[22] See id.

[23] See id.

[24] See Lynda V. Mapes, Orca J50 presumed dead but NOAA continues search, The Seattle Times (Sept. 24, 2018, at 7:57 PM),

[25] See id.

[26] See id.

[27] See Spotlight on the Southern Resident Killer Whale, supra note 17.

[28] See Task force unveils ‘potential recommendations’ to save killer whales, (last visited Nov. 9, 2018).


What Even is Machine Learning and Why Should Law Students be Wary?

By: Eric Richard

Machine learning is a complex concept. Depending on who you ask, you might get any number of answers, and depending even more on your level of familiarity with technology, you might not get any that make the concept seem less complex or more concrete. At its most basic, however, machine learning is essentially the process of getting a computer to “learn” as a human does.[1] Now I know that there are many out there who would object to this oversimplification, but for anybody outside the technology industry or lacking a degree in computer science, that is the easiest way to sum it up. Machine learning can be likened to the painstaking process of teaching an ignorant child what to do in certain situations in response to observations or real-world interactions.[2] The difference, maybe semi-obviously, is that a machine learns through algorithms, as opposed to how a child might learn through negative or positive reinforcement.[3] The trickiest part in the process, needless to say, is exactly how to get a machine to learn.[4]

Once the how has been hurdled, the concern for law students might start to be a little clearer. In the words of Kai-Fu Lee, the former head of Google research in China, the “replacement is happening now.”[5] Routine office work is being done more and more by machines rather than people.[6] You know who does a lot of routine office work such as filing and research for law firms? Newly hired, fresh-out-of-law-school associates. While Lee and others feel that this replacement is akin to a white-collar-worker “doomsday scenario,” there are others who feel it might not be a bad thing.[7] With “low level” work delegated to machines, all attorneys, not just the fresh ones, will have more time for the more difficult work.[8] But how do we get to that point? How does a machine learn to do the work that, up until recent years, has required someone with years of scholastic and professional legal training?

The answer, quite humorously, might be a game. A recent project between David Colarusso, director of Suffolk University Law School’s Legal Innovation and Technology (LIT) Lab, and the Stanford Legal Design Lab has attempted to solve the “how” of machine learning with a game born from legal questions posted by thousands of people on Reddit.[9] The game is simple enough: it presents a fact pattern to the player, followed by a question.[10] The question usually consists of identifying what type of legal issue or segment of law can be spotted in the fact pattern.[11] The goal of the question-and-answer format is to teach a machine how to “issue spot.”[12] If a machine is shown a sentence or fact pattern with the words “wife” and “kids,” then odds are it is going to identify an issue associated with family law, even though the concern might actually be a speeding ticket.[13] Herein lies the problem with a machine attempting to issue spot on behalf of an attorney looking for precedent related to a given fact pattern. The game itself, however, is what should concern law students. With creative solutions such as a matching game giving machines a way to become better at performing jobs traditionally reserved for fresh-out-of-school lawyers, the market and opportunity for current law students might be dwindling little by little every day.
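To make the “wife and kids” pitfall concrete, here is a minimal, hypothetical sketch (not Colarusso’s actual system, and the training sentences and labels are invented) of the kind of naive word-counting classifier described above. It “learns” word–label associations from labeled examples, then scores new fact patterns, and it can be fooled exactly as the blog post suggests:

```python
from collections import Counter, defaultdict

# Hypothetical training data: fact patterns labeled with a legal issue,
# standing in for the Reddit-sourced questions the game draws on.
TRAINING = [
    ("my wife wants custody of our kids after the divorce", "family"),
    ("my husband and I are separating and dividing property", "family"),
    ("I was cited for speeding on the highway", "traffic"),
    ("the officer gave me a ticket for running a red light", "traffic"),
]

def train(examples):
    """Count how often each word co-occurs with each issue label."""
    counts = defaultdict(Counter)
    for text, label in examples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def spot_issue(counts, text):
    """Score each label by summing its word-label counts; highest score wins."""
    scores = Counter()
    for word in text.lower().split():
        scores.update(counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else None

model = train(TRAINING)

# A traffic question that happens to mention family:
question = "I got a speeding ticket while driving my wife and kids to school"
print(spot_issue(model, question))  # → family (the real concern is the ticket)
```

Because “my,” “wife,” “and,” and “kids” outscore “speeding” and “ticket,” the classifier misfires on exactly the scenario described above; the game’s human-labeled answers are the kind of training data meant to teach a model past such surface-level cues.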


[1] See Daniel Faggella, What is Machine Learning?, tech emergence (Oct. 29, 2018),

[2] See id.

[3] See id.

[4] See id.

[5] Will Knight, Is Technology About to Decimate White-Collar Work?, MIT Tech. Rev. (Nov. 6, 2017),

[6] See id.

[7] See id.

[8] See Bernard Marr, How AI And Machine Learning Are Transforming Law And The Legal Sector, Forbes (May 23, 2018),

[9] See Jason Tashea, New game lets players train AI to spot legal issues, A.B.A. J. (Oct. 16, 2018),

[10] See id.

[11] See id.

[12] See id.

[13] See id.


Electronic vs. Paper Voting: A Legal Battle in Georgia

By: Scottie Fralin

Earlier this week, I cast my vote in the midterm election on a paper ballot. In Georgia, paper ballots have been replaced entirely by Direct Recording Electronic voting machines (DREs), which have no paper trail by which to verify or audit the recording of each elector’s vote.[1] DREs employ computers that record votes directly into the computers’ memory.[2] Some DRE systems are also equipped with a printer, which voters can use to confirm their choices before committing them to the computer’s memory.[3] Most states use paper ballots, and some use both paper ballots and DREs with mechanisms to ensure a paper trail.[4] The only states that use DREs without a paper trail and no accompanying paper ballot are Delaware, Georgia, Louisiana, New Jersey, and South Carolina.[5] Colorado, Oregon, and Washington use neither paper ballots nor DREs, and instead vote by mail.[6] The vast majority of states use a combination of paper ballots and DRE systems with a paper trail.[7] In those states, the ballot is typically retained after scanning in case verification or a recount is required.[8] Manufacturers of DRE voting machines have reportedly been so secretive about how the technology works that they have required election officials to sign non-disclosure agreements preventing them from bringing in outside experts who could assess the machines.[9]

The skepticism surrounding electronic voting machines is well-founded, as computers can be vulnerable to viruses and malware. In fact, civil rights groups and voters in Texas and Georgia have filed complaints alleging that electronic voting machines inexplicably deleted some people’s votes for Democratic candidates or switched them to Republican votes.[10] In August of 2017, the Georgia Coalition for Good Governance filed suit against Brian Kemp, claiming that the DRE voting system in Georgia is insecure and unverifiable and compromises the privacy and accuracy of their votes.[11] The Coalition claimed that Defendants’ continued use of the DRE system violated their constitutional rights.[12] Though the court denied the Coalition’s motions for preliminary injunctions, it advised the Defendants that further delay in dealing with the vulnerability of the state’s DRE systems is not tolerable, because damage to the integrity of a state’s election system undermines public confidence in the electoral system and the value of voting.[13]

As the court said in Curling v. Kemp, “advanced persistent threats in this data-driven world and ordinary hacking are unfortunately here to stay.”[14] Therefore, especially given the upcoming 2020 elections, if a new balloting system in Georgia is to be launched, it must “address democracy’s critical need for transparent, fair, accurate, and verifiable election processes that guarantee each citizen’s fundamental right to cast an accountable vote.”[15] The case is now on appeal to the Eleventh Circuit, where state officials argue that the district court judge should have dismissed the suit on the grounds that it violates the government’s entitlement to immunity and improperly subjects the state to suit and discovery.[16] The Coalition argues that granting the state’s request to dismiss the suit would have a chilling effect on voters and voting-rights groups.[17] Despite federal Judge Amy Totenberg’s decision not to replace Georgia’s DREs just weeks before midterm elections, most commentators suggest that by 2020, Georgia’s voting systems will include some form of backup.[18] The public outcry and bad publicity surrounding Georgia’s DREs and their attendant risks are surely something to watch. It might just be a matter of time before legislatures or courts of other states follow suit and call for an overhaul of election equipment to ensure ballot security.


[1] See Curling v. Kemp, No. 1:17-CV-2989-AT, 2018 U.S. Dist. LEXIS 165741, at *7 (N.D. Ga. Sep. 17, 2018).

[2] See Voting Methods and Equipment by State, Ballotpedia,

[3] See id.

[4] See id.

[5] See id.

[6] See id.

[7] See id.

[8] See Jeremy Laukkonen, Which States Use Electronic Voting? Lifewire, (last updated Nov. 1, 2018).

[9] See Jessica Schulberg, Good News for Russia: 15 States Use Easily Hackable Voting Machines, HuffPost (July 17, 2017),

[10] See Christian Vasquez & Matthew Choi, Voting Machine Errors Already Roil Texas and Georgia Races, Politico, (last updated Nov. 6, 2018).

[11] See Curling v. Kemp, No. 1:17-CV-2989-AT, 2018 U.S. Dist. LEXIS 165741, at *15 (N.D. Ga. Sep. 17, 2018).

[12] See id. at *15.

[13] See id. at *57.

[14] See id.

[15] See id. at *57-58.

[16] See Kayla Goggin, Georgia Officials to Appeal Paper Ballot Ruling to 11th Circuit, Courthouse News Service (Sept. 20, 2018),

[17] See id.

[18] See, e.g., Mark Niesse, Federal Judge Weighs Throwing Out Georgia Electronic Voting Machines, The Atlanta Journal-Constitution (Sept. 12, 2018),–regional-govt–politics/federal-judge-weighs-throwing-out-georgia-electronic-voting-machines/mzhkkHVRl1caitey2igxXI/.

Image Source:–regional-govt–politics/plan-scrap-georgia-electronic-voting-machines-moves-forward/Tw9ib1BzBJPUfuPrY2N5VI/

Data Breaches: The New Normal

By: Sarah Alberstein

It seems that data breaches are all over the news these days, but what exactly is a data breach? According to Norton Security, a data breach is a “security incident in which information is accessed without authorization.”[1] In 2016, the most common information stolen in data breaches were “full names, credit card numbers, and Social Security numbers.”[2] As consumers in an ever-evolving technological landscape, the risk of having such personal information stolen can be alarming. This alarm is only solidified by what seems to be a steady increase in such breaches.

There were 1,300 data breaches in 2017.[3] By July of 2018, there were already over 600 data breaches.[4] What’s more, almost 50% of the breaches in 2018 were “of businesses related to retail, tourism, transportation, utilities, and other professional services that most of us use on a regular basis.”[5] Some of the businesses affected include: Macy’s, Adidas, Sears, Kmart, Delta Airlines, Best Buy, Saks Fifth Avenue, Lord & Taylor, Under Armour’s fitness app, Panera Bread, Forever 21, Whole Foods, Gamestop, Arby’s, Ticketfly, and Facebook.[6] With the frequency of these breaches and the types of industries impacted, it seems that the odds of having your data stolen are relatively high.

There have been some legislative efforts to combat data breaches, and to make consumers more aware when such data breaches occur. Beginning in 2010, individual states began enacting Security Breach Notification Laws, which require “private or governmental entities to notify individuals of security breaches involving personally identifiable information.”[7] Security Breach Notification Laws typically include provisions describing which entities must comply with the law, what constitutes personal information, what constitutes a breach, notice requirements, and any exemptions.[8] Now, in 2018, all 50 states have enacted Security Breach Notification Laws.[9] Additionally, all 50 states have “computer crime laws” that target crimes committed using a computer, and some states are individually strengthening their data breach laws by requiring businesses managing personal data to implement additional security practices like security training, periodic audits, and centralized statewide cybersecurity oversight.[10]

Despite this, companies may still attempt to cover up breaches, keeping consumers in the dark. In 2016, the ride-hailing service Uber experienced a “major data breach…that exposed the personal information of 57 million people.”[11] This information included names, cellphone numbers, and email addresses.[12] Rather than notifying its users, Uber paid the hackers a $100,000 ransom to conceal the breach.[13] Uber did not provide public notice of the breach until a year later in 2017.[14] In September 2018, Uber agreed to pay a staggering $148 million in a settlement between Uber, all 50 states, and the District of Columbia, and Uber has promised to develop a new data security policy.[15]

While there is legislation in place, and companies seem to be held responsible for data breaches, there are some things individual consumers can do on their own in order to protect their data. These include reviewing a company’s privacy policy before providing your information, using complex, secure passwords, monitoring your bank accounts, checking credit card reports, installing security software, backing up your files, and occasionally wiping your hard drive.[16] It seems that the legal landscape is constantly playing catch-up with the advancement of technology, but hopefully legislation like Security Breach Notification Laws and the efforts of individual consumers will bring a sense of security to the technological Wild West.


[1] What is a Data Breach?, Norton,

[2] Id.

[3] Rebecca Nanako Juchems, Enough is Enough: 2018 has Seen 600 too Many Data Breaches, Medium (July 24, 2018),

[4] Id.

[5] Id.

[6] Dennis Green & Mary Hanbury, If you Shopped at These 16 Stores in the Last Year, Your Data Might Have Been Stolen, Business Insider (Aug. 22, 2018, 5:49 PM),; David Bisson, The 10 Biggest Data Breaches of 2018…So far, Barkly Blog (Jul. 2018),

[7] Breach of Information, National Conference of State Legislatures,; Security Breach Notification Laws, National Conference of State Legislatures,

[8] Security Breach Notification Laws, National Conference of State Legislatures,

[9] Id.

[10] See Pam Greenberg, Taking Aim at Data Breaches and Cyberattacks, National Conference of State Legislatures (Nov. 2017),

[11] Dan M. Clark, $5.7M Slated for Pa. in Uber Data Breach Settlement, The Legal Intelligencer (Oct. 25, 2018, 2:40 PM),

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Rebecca Nanako Juchems, Enough is Enough: 2018 has Seen 600 too Many Data Breaches, Medium (July 24, 2018),; What is a Data Breach?, Norton,

Image Source:

Ethical Dilemmas of Using Artificial Intelligence

By: Brandon Larabee

Some of the ethical dilemmas of using artificial intelligence to address criminal justice issues are familiar to anyone who watched “Person of Interest.” The CBS science-fiction show revolved around the efforts of a team of human beings and “The Machine” — an artificial super-intelligence — to stop crimes before they could happen.

In the real world of criminal justice and the legal system, though, problems not anticipated by “Person of Interest” are cropping up when algorithms are used to predict criminal behavior. Where The Machine was relentlessly rational and unfailing (unless interfered with), real-world machines are increasingly facing questions about whether they produce outcomes just as biased as the humans who build them.

As with many controversies in the public sphere, a counter-backlash is brewing. Writing recently for Wired, Noam Cohen argued that algorithms (and the computers that crunch the numbers) could as easily be sources of justice as of injustice. Cohen highlighted reporting by The New York Times that eventually led some New York City district attorneys to be more lenient with low-level marijuana offenses.[1]

“But imagine if we turned that spigot of data and incisive algorithms toward those who presume to judge and control us: Algorithms should be another important check on the system, revealing patterns of unfairness with a clarity that can be obscured in day-to-day life,” Cohen writes.[2]

That argument, though, comes amid a sustained pushback against efforts to use algorithms and predictive technology to do everything from making bail decisions to assisting in sentencing to deciding where police should focus their enforcement efforts.

New York City, for example, established an Automated Decision Systems Task Force to start looking at how the city uses its powerful data tools.[3] Activists have criticized a Los Angeles Police Department program that uses computer programs to choose surveillance targets because the data input into the system creates a “racist feedback loop.”[4] The COMPAS algorithm, used to create recidivism scores for judges to consider during sentencing, has been accused of bias against people of color.[5]

There are defenders of algorithms beyond Cohen. Sharad Goel of Stanford University told Nature: International Journal of Science that, in the journal’s words, discrepancies between error rates for whites and people of color “instead reflect the fact that one group is more difficult to make predictions about than another.”[6]

“It turns out that that’s more or less a statistical artifact,” Goel said.[7]

That might come as cold comfort to an offender being sentenced based on a flawed formula: The formula is working against him or her because it has a problem predicting what people of the offender’s race will do, not because it’s biased per se.
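Goel’s “statistical artifact” point can be illustrated with a small numerical sketch (the figures below are hypothetical, not actual COMPAS data): a risk score that is calibrated identically for two groups — meaning the same fraction of people flagged “high risk” actually reoffend in each group — can still produce very different false positive rates when the groups’ underlying base rates of reoffense differ.

```python
# Illustrative only: hypothetical numbers, not COMPAS data.
# Shows that equal calibration (same PPV) across two groups can
# coexist with unequal false positive rates when base rates differ.

def confusion(n, base_rate, ppv, flag_rate):
    """Build confusion-matrix counts for a group of size n, where
    flag_rate of the group is labeled 'high risk' and ppv of those
    flagged actually reoffend."""
    flagged = n * flag_rate
    tp = flagged * ppv            # flagged and reoffended
    fp = flagged - tp             # flagged but did not reoffend
    reoffenders = n * base_rate
    fn = reoffenders - tp         # reoffended but was not flagged
    tn = n - flagged - fn         # correctly not flagged
    return tp, fp, fn, tn

# Both groups get a score with identical calibration (PPV = 0.6),
# but group B's higher base rate means more of it gets flagged.
for name, base, flag in [("A", 0.3, 0.3), ("B", 0.5, 0.6)]:
    tp, fp, fn, tn = confusion(1000, base, 0.6, flag)
    fpr = fp / (fp + tn)          # false positive rate among non-reoffenders
    print(f"group {name}: FPR = {fpr:.2f}")
```

With these numbers the score is equally “accurate” in the calibration sense for both groups, yet non-reoffenders in group B are flagged high-risk far more often (FPR 0.48 versus 0.17) — the kind of disparity critics point to, and the kind defenders attribute to differing base rates rather than to the formula itself.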

Those inclined to seek a compromise have started to float ideas meant to answer the questions of bias while still using the data algorithms produce to (one hopes) improve society. One idea is simply to accept that, by their very nature, algorithms are “biased” — so the public should have more information and more input into what goes into the formulas.[8]

At least one avenue for a possible resolution seems to be closed for now. The U.S. Supreme Court faced a decision last year about whether to take the case of Loomis v. Wisconsin, a frontal assault on the use of COMPAS in sentencing decisions.[9] But the court passed.[10]


[1] Noam Cohen, Algorithms can be a tool for justice — if used the right way, Wired (Oct. 25, 2018, 1:23 PM),

[2] Id.

[3] Mayor de Blasio announces first-in-nation task force to examine automated decision systems used by the city, (May 16, 2018),

[4] George Joseph, The LAPD has a new surveillance formula, powered by Palantir, The Appeal (May 8, 2018),

[5] See Sara Chodosh, Courts use algorithms to help determine sentencing, but random people get the same results, Popular Science (Jan. 18, 2018),

[6] Rachel Courtland, Bias detectives: The researchers striving to make algorithms fair, Nature: International Journal of Science (June 20, 2018),

[7] Id.

[8] Matthias Spielkamp, Inspecting algorithms for bias, MIT Technology Review (June 12, 2017),

[9] Adam Liptak, Sent to prison by a software program’s secret algorithms, N.Y. Times (May 1, 2017),

[10] Loomis v. Wisconsin, SCOTUSBlog,

Image Source:

Is Classroom Technology Making Student Privacy Obsolete?

By: Zaq Lacy

In many schools around the country, classroom technology made its debut in the early to mid-1980s, in the form of Apple II computer labs and the infamous (but so very nostalgic) words, “You have died of dysentery,” thanks in large part to the vision of Steve Jobs and his collaboration with the Minnesota Educational Computing Consortium (MECC) to “save the world by putting computing power in the hands of every kid in America.”[1] Today, the technology available to enhance the learning experience encompasses nearly every aspect of the classroom, from e-texts,[2] to a litany of third-party applications that incorporate social media with cloud-integrated collaboration tools,[3] to biometric identification systems used to pay for lunch.[4] This technology offers previously unheard-of precision in real-time assessment, allowing teachers to assess learning processes as well as responses.[5] Moreover, the technology available today has significant benefits for the classroom and students.[6] Despite the benefits, however, there is an increasing concern over the privacy of our students.[7]

The Family Educational Rights and Privacy Act (FERPA)[8] protects the privacy of student education records[9] but has become as obsolete as the technology that existed when it was passed 40 years ago.[10] As tech companies produce more and more sophisticated software, and integration in the classroom becomes progressively pervasive, so too grows their ability to gather information on users. Such companies have accumulated immeasurable information on students’ school activities,[11] causing some states, such as California, to take legislative steps to address the growing problem,[12] which some attorneys feel is the number one problem for schools and new educational technology companies.[13] California’s Student Online Personal Information Act (SOPIPA) has served as a model for a number of state legislatures; fifteen states passed similar laws in 2015.[14] Despite the progress that is being made, officials still acknowledge that technology is likely to continue to develop faster than legislation, which will create new problems in the future.[15] So, for now at least, our students are living with privacy protections that are so three years ago.


[1] See Matt Jancer, How You Wound Up Playing The Oregon Trail in Computer Class, (Jul. 22, 2016),

[2] See Online Textbooks, Fairfax Cty. Pub. Sch., (last visited Nov. 2, 2018).

[3] See Kathy Dyer, The Ultimate List- 65 Digital Tools and Apps to Support Formative Assessment Practices, (Jan. 9, 2018),

[4] See Natasha Singer, With Tech Taking Over in Schools, Worries Rise, N.Y. Times (Sept. 14, 2014),

[5] See Alvin Vista & Esther Care, Education Assessment in the 21st Century: New Technologies, (Feb. 27, 2017),

[6] See Jared Keengwe & Grace Onchwari, Technology and Early Childhood Education: A Technology Integration Professional Development Model for Practicing Teachers, 37 Early Childhood Educ. J. 209, 210 (2009); see also Effects of Technology on Classrooms and Students, U.S. Dep’t. of Educ., (last visited Nov. 2, 2018)

[7] See Singer, supra note 4.

20 U.S.C. § 1232g (2018); 34 C.F.R. § 99.31 (2018).

[9] Family Educational Rights and Privacy Act, U.S. Dep’t. of Educ., (last visited Nov. 2, 2018).

[10] See Singer, supra note 4.

[11] See id.

[12] Student Online Personal Information Protection Act of 2014, Cal. Bus. & Prof. Code §§ 22584-22585 (Deering 2014); Early Learning Personal Information Protection Act, Cal. Bus. & Prof. Code §§ 22586-22587 (Deering 2017).

[13] See Matthew Johnson, The Top Five Legal Issues for Edtech Startups and Schools, (Apr. 16, 2016).

[14] See Tanya Roscorla, More States Pass Laws to Protect Student Data, (Aug. 27 2015),

[15] See id.

Image Source:
