By Alexandra Tillman
The future is here. Artificial Intelligence (AI) is being used in virtually every aspect of our lives—smartphones, smart watches, smart cars, even smart refrigerators.
But the problem with the future is that you do not know what lies ahead.
And unfortunately, neither does AI.
AI is being used throughout the criminal justice system, from prediction to adjudication.[1]
But even the creators of AI algorithms will admit they often do not know how an algorithm makes its decisions. In fact, many studies have shown that AI is far from the fair and impartial tool many hope and believe it to be.[2]
One study created an algorithm with a seemingly simple objective: to differentiate pictures of wolves from pictures of dogs.[3] From an output perspective, the algorithm seemed to have accomplished its goal, correctly identifying wolves and dogs. But upon analyzing which factors weighed most heavily in its determinations, the study found that the algorithm’s most heavily weighted factor was nothing like how a human would identify a wolf. The algorithm determined that the most likely indicator of whether a photo showed a wolf…was snow in the background.[4]
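The snow-versus-wolf failure can be sketched in a few lines of Python. Everything here—the feature names, the photo data, and the one-rule “classifier”—is invented for illustration and is not taken from the study; the point is only that a system rewarded for accuracy on its training data will happily latch onto an incidental feature:

```python
# Toy illustration (not the actual study): a one-rule "classifier" that
# picks whichever single feature best predicts the label on its own.

def best_single_feature(examples, labels):
    """Return the feature whose value most often matches the label."""
    features = examples[0].keys()
    def accuracy(f):
        return sum(ex[f] == y for ex, y in zip(examples, labels)) / len(labels)
    return max(features, key=accuracy)

# Hypothetical training photos: label 1 = wolf, 0 = dog.
# Every wolf photo happens to contain snow; anatomical cues are noisy.
photos = [
    {"snow_in_background": 1, "pointed_ears": 1},   # wolf
    {"snow_in_background": 1, "pointed_ears": 0},   # wolf
    {"snow_in_background": 1, "pointed_ears": 1},   # wolf
    {"snow_in_background": 0, "pointed_ears": 1},   # dog (husky-like)
    {"snow_in_background": 0, "pointed_ears": 0},   # dog
    {"snow_in_background": 0, "pointed_ears": 0},   # dog
]
labels = [1, 1, 1, 0, 0, 0]

# Snow, not anatomy, is the perfect predictor on this data.
print(best_single_feature(photos, labels))  # prints "snow_in_background"
```

On this invented data, snow predicts the label perfectly while ear shape does not, so the rule keys on the background—exactly the kind of shortcut the study surfaced.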
Now consider this example as applied to identifying a potential criminal. While we may know what the inputs and outputs are—e.g., years of datasets on criminal offenders—what if we do not know how the system reaches its determination? The ultimate result may seem correct, but the truth is we do not fully understand how algorithms create their outputs—and we may not like what we find when we look deeper. What if the AI we already use in the criminal justice system is equally flawed? What is the criminal justice equivalent of snow = wolf?
One recent study looked at an algorithm used by judges to determine the likelihood that a convicted criminal will become a repeat offender.[5] The study found that the algorithm was heavily biased against minorities in creating its output.[6] So how can we continue to use AI if it is so flawed?
The key to improving AI is understanding it. To justify and bolster its continued use, here are a few things the legal community needs to understand about AI:
- Not All AI Is Created Equal
Several types of systems fall under what many think of as AI. “AI” has actually existed since the 1950s as a branch of computer science focused on performing specific tasks—there are inputs and outputs, but the system does not learn on its own.[7] Then, in the 1980s, a new kind of AI came along called Machine Learning, which processes algorithms and analyzes the results to learn and improve the algorithm on its own.[8] Finally, in the 2010s came the most recent development in AI, Deep Learning, which uses both machine learning and artificial intelligence “to break down tasks, analyze each subtask and [use] this information to solve new sets of problems.”[9] Each kind of AI serves a different purpose and function. And thanks to the technological advances of the last decade, such as our ability to store and analyze big data, each of these areas has recently grown in popularity and is being used as never before.[10] In other words, AI is not just one thing. It works in many ways, with varying objectives, capabilities, and limitations.
- Humans, Datasets, & Algorithms
Humans create algorithms, as well as the datasets we plug into those algorithms. And as the myriad polls and statistics we see in our daily lives show, data does not always accurately represent the world as it truly is.[11] Data can be biased. So, when humans create datasets and algorithms, they may—often unintentionally—include those biases. The result is outputs that do not represent the world accurately, and those outputs are thus inherently biased.
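The biased-data-in, biased-output-out mechanism can be made concrete with a minimal sketch. All of the numbers and names below are invented: imagine two neighborhoods with identical underlying behavior, where one is patrolled twice as heavily, so twice as many of its offenses show up in the historical record. An algorithm “trained” purely on that record reports a risk gap that is really a policing gap:

```python
# Minimal sketch (invented numbers): two neighborhoods with the same true
# reoffense rate, but neighborhood A is patrolled twice as heavily, so
# twice as many of its offenses are *recorded* in the dataset.

historical_records = {
    # neighborhood: (recorded_offenses, population)
    "A": (200, 1000),   # heavily patrolled: more offenses recorded
    "B": (100, 1000),   # same behavior, half the patrols
}

def naive_risk_score(neighborhood):
    """A 'risk score' learned purely from recorded offense rates."""
    offenses, population = historical_records[neighborhood]
    return offenses / population

print(naive_risk_score("A"))  # prints 0.2
print(naive_risk_score("B"))  # prints 0.1 — the patrol gap looks like a risk gap
```

Nothing in the code is malicious or even wrong as arithmetic; the bias enters entirely through the dataset, which is why scrutinizing the data matters as much as scrutinizing the algorithm.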
- AI Does Not Think as Humans Do
As may be clear from the prior examples, AI has not yet reached the equivalent of human reasoning.[12] In fact, AI does not “reason” at all.[13] It takes the data presented to it, follows the rules of its algorithm, and creates an output. So it is important for practitioners, policymakers, and judges alike to be actively skeptical of the inputs and outputs of these systems—questioning the data and the algorithms, and constantly asking how these systems can be improved. Even though some of these systems can learn, they learn based on the information they are fed, and learning is not equivalent to reasoning.[14]
For the pros of AI in criminal justice to outweigh the cons, practitioners and judges must understand the strengths and limitations inherent in AI. The more active we are in learning how AI works, the more we can use and improve this powerful technology to aid the criminal justice system—making the system fairer, from investigation to adjudication.
The future may be here, but we still have a long way to go until the future is as fair as we want it to be.
[1] Randy Rieland, Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased? Smithsonian Magazine, (Mar. 5, 2018), https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/.
[2] Id.
[3] Marco Tulio Ribeiro et al., “Why Should I Trust You?” Explaining the Predictions of Any Classifier (Aug. 9, 2016), https://arxiv.org/pdf/1602.04938.pdf.
[4] Cameron Boozarjomehri, Is This A Wolf? Understanding Bias in Machine Learning, Mitre (2018), https://kde.mitre.org/blog/2018/10/28/is-this-a-wolf-understanding-bias-in-machine-learning/.
[5] Rieland, supra note 1.
[6] Id.
[7] Ramdev Canadam Sunder Rao, New Product Breakthroughs with Recent Advances in Deep Learning and Future Business Opportunities, Stanford Management Science and Engineering 238 Blog (July 6, 2017), https://mse238blog.stanford.edu/2017/07/ramdev10/new-product-breakthroughs-with-recent-advances-in-deep-learning-and-future-business-opportunities/.
[8] Id.
[9] Id.
[10] Id.
[11] Nate Silver, The Real Story of 2016, FiveThirtyEight (Jan. 19, 2017), https://fivethirtyeight.com/features/the-real-story-of-2016/.
[12] Brian Bergstein, What AI Still Can’t Do, MIT Technology Review (Feb. 19, 2020), https://www.technologyreview.com/2020/02/19/868178/what-ai-still-cant-do/.
[13] Id.
[14] Id.
Image Source: https://p0.pxfuel.com/preview/729/37/117/advanced-ai-anatomy-artificial.jpg