Summary

This project seeks to identify and address disparities in automated mobile mental health prediction. Mobile and ubiquitous data can be used to infer the general state of an individual’s mental health, and these algorithmic predictions often have high accuracy. Although these efforts hold great promise for developing and delivering health interventions, they may also be inequitable, reproducing or magnifying existing disparities in healthcare.


Researchers and Partners

Kaitlin Costello, principal investigator, Rutgers University School of Communication and Information
Vivek Singh, co-investigator, Rutgers University School of Communication and Information
Adana Llanos, consultant, Rutgers School of Public Health
Aaron Truchil, community partner, Camden Coalition of Healthcare Providers
Diana Floegel, research assistant


About this Project



Initial Approach and Evolution

Our project centers on current efforts to use mobile and ubiquitous data to infer an individual’s mental health; these algorithmic predictions often achieve high accuracy, as in co-investigator Vivek Singh’s work. Such algorithms will be instrumental in the creation of just-in-time adaptive health interventions, which are becoming increasingly common.

Although these tracking efforts hold great promise, they may also be inequitable, reproducing or magnifying existing disparities. The project aims to contextualize and characterize the problem of algorithmic fairness in mental health prediction, to develop a framework for addressing this problem in practice, and then to propose algorithmic solutions based on that framework.



This project aims to:

  1. Contextualize and characterize the problem of algorithmic fairness in mental health prediction via an audit of existing mental health-related algorithms (a hypothetical sketch of such a group-level audit follows this list);
  2. Develop a framework for defining and understanding fairness by conducting focus groups and interviews with key stakeholders;
  3. Propose a method for improving the fairness of mental health algorithms based on these findings.
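As a rough, hypothetical illustration of the kind of group-level comparison the audit in aim 1 might involve, the sketch below computes per-group selection rates and false negative rates for a binary risk classifier. The data format, function name, and choice of metrics are assumptions made for illustration only; they are not the project’s actual audit procedure.

```python
# Hypothetical group-fairness audit for a binary mental-health-risk classifier.
# All names, data, and metrics here are illustrative assumptions.
from collections import defaultdict

def audit_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples, labels 0/1."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        key = ("tp" if y_pred else "fn") if y_true else ("fp" if y_pred else "tn")
        counts[group][key] += 1

    report = {}
    for group, c in counts.items():
        n = sum(c.values())
        actual_positives = c["tp"] + c["fn"]
        report[group] = {
            # Demographic-parity-style check: how often each group is flagged.
            "selection_rate": (c["tp"] + c["fp"]) / n if n else None,
            # Equal-opportunity-style check: how often true cases are missed.
            "false_negative_rate": c["fn"] / actual_positives if actual_positives else None,
        }
    return report

# Toy data: (group, actually_at_risk, predicted_at_risk)
toy = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
       ("B", 1, 0), ("B", 1, 0), ("B", 0, 1)]
for group, metrics in audit_by_group(toy).items():
    print(group, metrics)
```

Large gaps between groups on either metric would flag a potential disparity worth investigating; a real audit would also require uncertainty estimates and clinically meaningful thresholds, which are beyond this sketch.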




We have expanded our data collection to include questions about digital phenotyping and contact tracing for COVID-19.


Results and Findings

To date, we have conducted 20 interviews about automated digital phenotyping with patients diagnosed with mental health conditions. Published preliminary findings suggest that this population is wary of digital phenotyping for mental health diagnostics, but that such efforts may be more trustworthy if they are highly regulated and used only as an adjunct to standard care.

We have presented these preliminary findings in the following talks and papers:

Costello, K.L. & Floegel, D. (2020). “Predictive ads are not doctors”: Mental health surveillance, big tech, and platform capitalism. Proceedings of the Association for Information Science and Technology 83rd Annual Meeting. Oct. 23-28, 2020. Virtual conference due to COVID-19. Winner of the 2020 SIG-USE Early Career Best Paper Award.

Senteio, C., Costello, K.L., & Singh, V. (2019). Lifting as we all rise: Addressing challenges to AI bias in healthcare. Human–AI Collaboration in Healthcare workshop, CSCW 2019, Austin, TX, November 9, 2019.

Costello, K.L. & Floegel, D. (2019). An effort to characterize equity in mobile mental health assessment. SIG-SI (Social Informatics) Symposium. Oct. 19, 2019. Melbourne, Australia.
