By: Nicole Gram

According to the Centers for Disease Control and Prevention (CDC), in 2015 suicide was the tenth leading cause of death overall in the United States and the second leading cause among individuals between the ages of 15 and 34.[1] To put those statistics in perspective, there were more than twice as many suicides in the United States as homicides.[2] While suicide is complex and difficult to predict, there are very often signs that an individual is struggling with suicidal thoughts and behaviors.[3] Social media applications provide a forum in which some individuals share the emotions and issues they are experiencing, and Facebook experienced a number of live-streamed suicides this past year.[4] As a result, as part of a call to action to proactively identify at-risk individuals and prevent them from harming themselves, Facebook is using Artificial Intelligence (AI) technology to scan content with pattern recognition for specific phrases indicating that someone may need help.[5] An AI algorithm identifies and prioritizes the posts for action by thousands of employee content reviewers.[6] The application then prompts the at-risk individual with options for a helpline, tips to address their issues and feelings, and an option to contact a friend, or, in critical situations, notifies first responders.[7] Users cannot opt out of this technology, which is being tested in the United States with plans to roll out to most countries, excluding the European Union due to its regulatory restrictions.[8]
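The scan-and-prioritize pipeline described above can be illustrated with a minimal sketch. This is purely hypothetical: the phrase list, weights, and the `risk_score` and `triage` functions below are invented for the example and do not reflect Facebook's actual system, which relies on trained pattern-recognition models rather than a fixed list.

```python
import heapq

# Hypothetical phrase weights -- illustrative only, not any real platform's list.
RISK_PHRASES = {
    "are you ok": 1,
    "i can't go on": 3,
    "want to end it": 5,
}

def risk_score(post: str) -> int:
    """Sum the weights of every risk phrase found in the post text."""
    text = post.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)

def triage(posts):
    """Yield (score, post) pairs, highest-risk first, for human reviewers."""
    # Negate scores so Python's min-heap pops the highest score first;
    # the index breaks ties deterministically.
    heap = [(-risk_score(p), i, p) for i, p in enumerate(posts)]
    heapq.heapify(heap)
    while heap:
        neg_score, _, post = heapq.heappop(heap)
        yield -neg_score, post
```

The priority-queue step mirrors the article's point that the algorithm does not act alone: it orders posts so that human content reviewers see the most urgent ones first.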

Mental health professionals also recognize the advantage of leveraging technology beyond hospitals, emergency rooms, and ICUs to psychiatrists’ offices.[9] The focus of their mission is “alleviating suffering with technology.”[10] The large amounts of data on smartphones and in social media applications are a valuable resource to supplement the patient interviews on which mental health professionals depend.[11] They, too, have an application, named Spreading Activation Mobile (SAM), that uses predictive machine learning to analyze speech and determine whether someone is likely to take their own life.[12] SAM is being tested in Cincinnati schools and looks for increases in negative words and/or decreases in positive words based on language, emotional state, and social media footprint.[13]
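The word-count signal SAM reportedly looks for, more negative words alongside fewer positive words, can be sketched as a comparison between two time windows. The word lists, helper names, and flag rule below are invented for illustration; the real system is a trained machine-learning classifier, not a hand-written rule.

```python
# Hypothetical word lists -- a real system would learn these, not hard-code them.
NEGATIVE = {"hopeless", "worthless", "alone", "tired"}
POSITIVE = {"happy", "excited", "grateful", "fun"}

def word_counts(texts):
    """Count negative and positive words across a window of messages."""
    words = " ".join(texts).lower().split()
    neg = sum(1 for w in words if w.strip(".,!?") in NEGATIVE)
    pos = sum(1 for w in words if w.strip(".,!?") in POSITIVE)
    return neg, pos

def flag_shift(earlier, recent):
    """Flag when negative words rise AND positive words fall between windows."""
    neg0, pos0 = word_counts(earlier)
    neg1, pos1 = word_counts(recent)
    return neg1 > neg0 and pos1 < pos0
```

Requiring both shifts at once reflects the article's description of SAM: neither signal alone triggers a flag, only the combined movement toward negative language.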

Time is a key factor in suicide prevention, and advancements in AI continue a trend toward machines performing more and more tasks that previously only humans could perform.[14] In the mental health arena, the availability and improving quality of fresh data are producing more effective algorithms that can drive earlier identification and action.[15] However, there are several legal implications and challenges in using AI as a tool to prevent suicide. Whenever personal data is involved, concerns about privacy and misuse top the list. Given the mental health content, there is increased sensitivity about who has access and the potential impact on items such as insurance premiums and coverage.[16] With AI algorithms making decisions and driving actions, an equally important consideration is who holds the moral and legal responsibility when harm is caused.[17] This becomes even more complicated given the autonomous nature of AI technology: as a tool learns from experience and additional data, AI systems may grow to perform actions not anticipated by their creators.[18] Some experts have proposed a legal framework with a governing authority that certifies AI systems, so that operators of certified AI systems have limited tort liability while operators of uncertified systems face strict liability.[19] This appears, at this point in time, to strike an appropriate balance between obtaining the advantages of AI tools and algorithms in preventing suicide and ensuring controls exist to mitigate the risks.[20]


[1] See Peter Holley, Teenage Suicide Is Extremely Difficult to Predict. That’s Why Some Experts Are Turning to Machines for Help., Wash. Post (Sept. 26, 2017),

[2] See Suicide, Nat’l Inst. of Mental Health (2015),

[3] See Hayley Tsukayama, Facebook Is Using AI to Try to Prevent Suicide, Wash. Post (Nov. 27, 2017),

[4] See id.

[5] See id.

[6] See id.

[7] See id.

[8] See Tsukayama, supra note 3.

[9] See Holley, supra note 1.

[10] See id.

[11] See id.

[12] See id.

[13] See id.

[14] See Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353 (2016).

[15] See id.

[16] See id.

[17] See id.

[18] See id.

[19] See Scherer, supra note 14.

[20] See id.
