Developing AI for Psychological Assessment

Josh Oltmanns, Assistant Professor of Psychological & Brain Sciences at Washington University in St. Louis

Abstract: Advances in AI provide an exciting opportunity to improve assessment of personality and mental health. However, despite strong initial validity evidence, studies to date suffer from overreliance on short-text social media data, a lack of solid psychometric model evaluation and transparency, and limited clinical relevance. Before implementation, models must be interpretable and equitable, in addition to strongly predictive. In this talk, I will first outline methods of using LLMs for psychological assessment, such as fine-tuning and prompt engineering. Next, I will describe the development, troubleshooting, and validation of AI models built from spoken language and video features in life history and diagnostic interviews. The datasets include two samples (N = 1,630 St. Louis community older adults and N = 124 younger adults) representing both older and younger ages and Black and White people who completed personality and mental health assessments. I will describe four projects addressing foundational issues in initial robust multimodal AI model development: (1) fine-tuning language-based AI models of personality from life narrative interviews, achieving accuracy rivaling self-report; (2) comparing Black-White racial differences in the models; (3) clarifying important methodological issues (e.g., "mirror" language models of depression); and (4) incorporating multimodal speech acoustic and facial features into video-based models. This work addresses several problems in the current literature, including clinical relevance and interpretability, working with long videos that require computationally intensive modeling, and model generalizability. Throughout, I will consider how these methods can bring AI-based assessment closer to real-world clinical settings while avoiding key pitfalls in fairness, interpretability, and generalizability.
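As a hypothetical illustration of the prompt-engineering approach the abstract mentions (distinct from fine-tuning), one might assemble a zero-shot trait-rating prompt for a chat-style LLM along these lines. The trait list, rating scale, and output format here are illustrative assumptions for the sketch, not the speaker's actual protocol:

```python
# Hypothetical sketch: building a zero-shot prompt that asks an LLM to rate
# Big Five personality traits from an interview transcript excerpt.
# The trait names, 1-5 scale, and JSON output format are assumptions made
# for illustration; they are not the method described in the talk.

BIG_FIVE = [
    "openness",
    "conscientiousness",
    "extraversion",
    "agreeableness",
    "neuroticism",
]

def build_trait_prompt(transcript: str, traits=BIG_FIVE) -> str:
    """Assemble a trait-rating prompt to send to a chat-style LLM."""
    trait_lines = "\n".join(f"- {t}" for t in traits)
    return (
        "You are assisting with a research study on personality assessment.\n"
        "Read the interview excerpt below and rate the speaker on each trait\n"
        "from 1 (very low) to 5 (very high). Respond only with JSON mapping\n"
        "each trait name to an integer rating.\n\n"
        f"Traits:\n{trait_lines}\n\n"
        f'Interview excerpt:\n"""\n{transcript}\n"""\n'
    )

# Example usage with a made-up excerpt:
prompt = build_trait_prompt(
    "I usually plan my week in detail and rarely miss a deadline."
)
print(prompt)
```

In practice the returned string would be sent to an LLM API, and the model's JSON ratings would then be validated psychometrically against self-report or informant measures, which is the kind of evaluation the abstract emphasizes.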

Host: Ran Chen

Josh Oltmanns, PhD, is an assistant professor of Psychological & Brain Sciences at Washington University in St. Louis whose work bridges psychology and data science. His lab develops AI-based methods—spanning natural language processing, speech analysis, and multimodal modeling—to better measure personality and mental health in real-world settings. His research blends statistical modeling, machine learning, and psychometrics, with recent projects applying LLMs to clinical interview data. Josh’s work has been supported by the National Institutes of Health and recognized with multiple research awards, and he serves as an Associate Editor for Assessment.