By Alexandra Tillman


As Capitol rioters are found and charged, the public has applauded.[1] The rioters have been identified through a variety of methods: public tips, videos and photos posted online, and even members of the public posing as conservatives on dating apps to “catfish” unsuspecting rioters into sharing their photos of the riots, then turning those photos over to the FBI.[2] But law enforcement’s most recent method of finding suspected rioters, facial recognition technology, is under scrutiny.[3]


The American Civil Liberties Union (ACLU), Black Lives Matter (BLM) activists, and even Congress—the victims of the recent riots—have questioned the efficacy of law enforcement’s use of facial recognition technology.[4] The technology is far less accurate at identifying people of color than white suspects.[5] A study by the National Institute of Standards and Technology found that Asian and African-American faces were misidentified 10 to 100 times more often than white faces.[6] This past June, a black man in Metro Detroit was misidentified by facial recognition software, leading to his arrest and a subsequent 30-hour detention as the suspect in a local robbery.[7] Eventually police let him go and dropped the charges, stating, “I guess the computer got it wrong.”[8] The ACLU has filed suit, but police continue to use the software nationwide.[9]


Facial recognition technology has long been used against people of color throughout the United States.[10] Following the Black Lives Matter protests this past summer, law enforcement used facial recognition technology to round up activists.[11] Now it is being used against a majority-white mob of rioters who clearly broke the law.[12]


The most popular facial-recognition app, Clearview AI, saw a 26% increase in use in the days following the Capitol riots.[13] The CEO and founder of Clearview AI claims its software does not have issues with racial misidentification.[14] “As a person of mixed race, this is especially important to me,” CEO Hoan Ton-That said in June.[15] “We are very encouraged that our technology has proven accurate in the field and has helped prevent the wrongful identification of people of color.”[16]


But as this technology gains popularity, we must ask whether the ends justify the means. Americans may want law enforcement to use whatever means necessary to bring the Capitol rioters to justice, but it is exactly in justified situations like this one that expansions in the use and abuse of this technology can occur. Ignoring flaws in the system now for the “greater good” is not worth the risk of harming people of color down the road.


To address this problem, Congress is considering banning law enforcement’s use of facial recognition technology.[17] While a ban would certainly stop police from using the technology, it would not address the underlying issue: these systems are developed and deployed by a majority white, male population. Congress should instead face this problem head on by addressing bias in the creation and implementation of these systems. It should impose strict requirements on law enforcement’s use of this technology, such as demonstrated accuracy in identifying people of all races equally, limiting its use to certain cases like violent crime, and mandatory training for law enforcement on how to minimize bias when using these systems.


Until higher standards can be reached in the creation and implementation of facial recognition technology, a temporary ban is reasonable. But a permanent ban could create a situation where the U.S. loses its ability to regulate these systems at all. A ban could drive the use of this technology to outside contractors or even overseas, allowing for increased use and abuse of this technology with no required mechanisms for improvement.[18]


Facial recognition technology can be a valuable tool. But it can only be successful if we work to remove the biases in its creation and use. Increased justice for some and decreased justice for others is not justice. Facial recognition technology can continue to be used only if it results in equal justice for all.


[1] Hannah Hartig, In their own words: How Americans reacted to the rioting at the U.S. Capitol, Pew Research Center.

[2] Lexi Lonas, Dating app says it removed political filter following Capitol riots, The Hill (Jan. 15, 2021); Hannah Knowles and Paulina Villegas, Pushed to the edge by the Capitol riot, people are reporting their family and friends to the FBI, The Washington Post (Jan. 16, 2021); Sara Morrison, The Capitol rioters put themselves all over social media. Now they’re getting arrested, Vox (Jan. 19, 2021).

[3] Natasha Singer and Cade Metz, Many Facial-Recognition Systems Are Biased, Says U.S. Study, N.Y. Times (Dec. 19, 2019).

[4] Id.; Tawana Petty, Defending Black Lives Means Banning Facial Recognition, Wired (July 10, 2020).

[5] Singer and Metz, supra note 3.

[6] Id.

[7] Bobby Allyn, ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest of Black Man, NPR (June 24, 2020).

[8] Id.

[9] Id.

[10] Johana Bhuiyan, Facial recognition may help find Capitol rioters—but it could harm others, experts say, L.A. Times (Feb. 4, 2021).

[11] Petty, supra note 4.

[12] Id.

[13] Kashmir Hill, The facial-recognition app Clearview sees a spike in use after Capitol attack, N.Y. Times (Jan. 9, 2021).

[14] Rae Hodge, Capitol attack: FBI mum on facial recognition, Clearview AI searches spike, CNET (Jan. 12, 2021).

[15] Id.

[16] Id.

[17] Hodge, supra note 14.

[18] Nicole Perlroth, How the United States Lost to Hackers, N.Y. Times (Feb. 9, 2021).
