How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child-Welfare
Child welfare (CW) agencies use risk assessment tools as a means to achieve
evidence-based, consistent, and unbiased decision-making. These risk
assessments act as data collection mechanisms and have further evolved into
algorithmic systems in recent years. However, several of these algorithms have
reinforced biased theoretical constructs and predictors because structured
assessment data are readily available. In this study, we critically
examine the Washington Assessment of Risk Model (WARM), a prominent risk
assessment tool that has been adopted by over 30 states in the United States
and has been repurposed into more complex algorithms. We compared WARM against
the narrative coding of casenotes written by caseworkers who used WARM. We
found significant discrepancies between the casenotes and the WARM data: WARM
scores did not mirror caseworkers' notes about family risk. We provide the
SIGCHI community with initial findings from the quantitative deconstruction of
a child-welfare algorithm.
Authors
Devansh Saxena, Charlie Repaci, Melanie Sage, Shion Guha