Can big data help prevent child abuse and neglect?

Published on June 24, 2019

Emily Putnam-Hornstein thought there had to be a better way to protect kids.

The USC professor of social work had seen the statistics: roughly 7 million children come to the attention of child welfare authorities every year in the United States; one in three American kids will be the subject of maltreatment investigations in their lifetimes.

“Do we really think a third of American children are so endangered they need the intervention of the child welfare system?” she said in a recent interview. “When you flood the system with that many calls of potential abuse and neglect, what you have a very hard time doing is detecting the signal from the noise and identifying which children do need protection.”

So she started researching how data might be used to predict which kids were most at risk and in need of services. We live in a world of big data: There’s more information known about every man, woman and child in the United States than ever before, in digital form. Why not use it to protect society’s youngest, most vulnerable members?

She got her chance to put her theory into practice three years ago, when she was invited by Allegheny County, Pennsylvania, to develop a predictive analytics tool there to help screen allegations of child abuse and neglect. The Pittsburgh-based agency — the Office of Children, Youth and Families — had recently come under scrutiny for failing to investigate kids who ultimately died from maltreatment.


Putnam-Hornstein and a colleague — Rhema Vaithianathan of the Auckland University of Technology in New Zealand — studied thousands of child maltreatment referrals and created an algorithm that would assign a “risk” score to every family reported to Allegheny County child protective services. The idea was to eliminate the biases and randomness of human decision-making.

Now, instead of call screeners relying on their human judgment and taking into consideration only a handful of factors, an algorithm computes the family’s risk based on dozens of determinants from public databases: use of mental health and drug treatment services, criminal histories, receipt of government benefits, and so on. The human screeners still get a say, but if their judgment goes against the computer’s score, a supervisor makes the call.
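The county’s actual model is far more sophisticated, but the workflow described above can be sketched in simplified form. Everything here is hypothetical and invented for illustration — the factor names, the weights, the threshold, and the function names do not reflect the Allegheny County tool:

```python
# Hypothetical sketch of a risk-score screening workflow.
# All factor names, weights, and thresholds are invented for
# illustration; they do not reflect the real Allegheny model.

HYPOTHETICAL_WEIGHTS = {
    "prior_referrals": 2.0,
    "criminal_history": 1.5,
    "drug_treatment_use": 1.0,
    "mental_health_services": 0.5,
}
ESCALATION_THRESHOLD = 15.0  # made-up cutoff for a "high" score


def risk_score(family_record: dict) -> float:
    """Combine factors drawn from public databases into one score."""
    return sum(
        HYPOTHETICAL_WEIGHTS[factor] * value
        for factor, value in family_record.items()
        if factor in HYPOTHETICAL_WEIGHTS
    )


def screening_decision(screener_says_investigate: bool, score: float) -> str:
    """The human screener still gets a say; if their judgment goes
    against the model's score, a supervisor makes the call."""
    model_says_investigate = score >= ESCALATION_THRESHOLD
    if screener_says_investigate == model_says_investigate:
        return "investigate" if model_says_investigate else "screen out"
    return "supervisor decides"


record = {"prior_referrals": 4, "criminal_history": 3, "drug_treatment_use": 2}
print(risk_score(record))                            # 14.5
print(screening_decision(True, risk_score(record)))  # supervisor decides
```

The key design point the article describes is the last branch: the model does not override the screener outright; disagreement between the two is escalated to a human supervisor.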

“I think we have seen improvements,” said Erin Dalton, the deputy director of the Allegheny County Department of Human Services who oversees the agency’s analytics work. “It’s not the silver bullet that solves all child welfare problems, nor do I think it’s a horrible invasion.”

She cited an April review by Stanford University researchers that found the new system boosted accuracy in certain instances, particularly when deciding which families to investigate, and reduced racial disparities in the number of substantiated cases of abuse and neglect.

But some experts are skeptical. They say these tools just magnify existing biases because families involved in the systems the algorithm considers tend to be poor and people of color.

“The human biases and data biases go hand in hand with one another,” said Kelly Capatosto, senior research associate at the Kirwan Institute for the Study of Race and Ethnicity at Ohio State University. “With these decisions, we think about surveillance and system contact — with police, child welfare agencies, any social welfare-serving agencies. It’s going to be overrepresented in (low-income and minority) communities. It’s not necessarily indicative of where these instances are taking place.”

In her 2018 book, “Automating Inequality,” SUNY political science professor Virginia Eubanks labeled this phenomenon “poverty profiling.”

“Like racial profiling, poverty profiling targets individuals for extra scrutiny based not on their behavior but rather on a personal characteristic: They live in poverty,” she wrote. “Because the model confuses parenting while poor with poor parenting, the (Allegheny Family Screening Tool) views parents who reach out to public programs as risks to their children.”

The county responded that receiving public benefits actually lowers the scores for nearly half of families, and stated that the “unfortunate societal issue of poverty does not negate our responsibility to improve our decision-making capacity for those children coming to our attention.”

The use of big data in the criminal justice system has faced similar criticisms of racial and economic bias in recent years. In April, the Los Angeles Police Department dropped a controversial program that tried to predict who would commit violent crimes.

Some child welfare agencies have addressed ethical concerns better than others — critics point to New York City, for instance. Allegheny County makes its formula available for public review. Other places have come under fire for using proprietary algorithms; Illinois canceled its contract with two Florida firms in part for this reason.

“I think there’s a lot of enthusiasm for harnessing the power of all the administrative data we’ve accumulated to help us make better decisions and help systems operate more effectively and efficiently,” said Dana Weiner, a policy fellow at the University of Chicago’s Chapin Hall who advised New York City on using predictive analytics in child welfare. “We just have to find a way forward that is comfortable based on ethical guidelines and methodological rigor.”

Putnam-Hornstein, the USC professor, acknowledges that there is bias in data, as there is in human decision-making. But she believes technology could ultimately be used to identify and correct those biases.

She is optimistic that, as the kinks are worked out and transparency improves, these tools can make a difference in the welfare of American children. She has since been advising the child protection agency in Douglas County, Colorado, south of Denver.

“My hope is these models will help our system pay more attention to the relatively small subset of referrals where the risk is particularly high and we will be able to devote more resources to those children and families in a preventive fashion,” she said. “I don’t want anyone to oversell predictive risk modeling. It’s not a crystal ball. It’s not going to solve all our problems. But on the margin if it allows us to make slightly better decisions and identify the high-risk cases and sort those out from the low-risk cases and adjust accordingly, this could be an important development to the field.”