TY - JOUR
T1 - A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health
AU - Timmons, Adela C.
AU - Duong, Jacqueline B.
AU - Simo Fiallo, Natalia
AU - Lee, Theodore
AU - Vo, Huong Phuc Quynh
AU - Ahle, Matthew W.
AU - Comer, Jonathan S.
AU - Brewer, La Princess C.
AU - Frazier, Stacy L.
AU - Chaspari, Theodora
N1 - Publisher Copyright:
© The Author(s) 2022.
PY - 2023/9
Y1 - 2023/9
AB - Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fair-aware AI in psychological science.
KW - artificial intelligence
KW - bias
KW - fair aware
KW - mental health equity
UR - http://www.scopus.com/inward/record.url?scp=85144199777&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85144199777&partnerID=8YFLogxK
U2 - 10.1177/17456916221134490
DO - 10.1177/17456916221134490
M3 - Article
C2 - 36490369
AN - SCOPUS:85144199777
SN - 1745-6916
VL - 18
SP - 1062
EP - 1096
JO - Perspectives on Psychological Science
JF - Perspectives on Psychological Science
IS - 5
ER -