Researchers Weigh in on Whether Risk Modeling Can Keep Children Safe

A call comes in to the child welfare hotline. The caller reports that a child is being maltreated.

The operator has to make a decision. Are the allegations serious enough to open an investigation? Is the child in immediate danger? What services does this family need?

“Child welfare workers are often dealing with a very partial, imperfect picture of conditions that may place a child at risk of harm,” said Emily Putnam-Hornstein, an assistant professor at the USC School of Social Work. “Yet critical decisions must be made about which referrals are screened in or out and whether a case is opened for services.”

What if in that moment, the hotline operator had access to a model that could scan the vast landscape of data available about that particular family, weighing risk factors such as previous involvement with child protective services and a history of substance abuse while taking into account strengths such as educational achievement and access to supportive resources?

Would that information help the operator decide how to proceed?

It’s an exciting new approach in the field of child safety and service delivery known as predictive risk modeling or risk analytics. In Los Angeles County, which handles approximately 200,000 allegations of child maltreatment each year, leaders are working to vet the concept.

“In the child welfare system, decisions are being made throughout the life of a case,” Putnam-Hornstein said. “Should we accept this referral that came into the hotline and investigate? Are we going to open a case for services? Are we going to reunify a child who has been in foster care? Frankly, we don’t have that many tools at our disposal that would ensure any kind of standardized decision-making process.”

Mining the data

Predictive risk modeling could take advantage of the massive amount of data being collected by various county departments to determine which children are at highest risk of negative outcomes, helping social workers filter through thousands of referrals to ensure resources are being directed to families most in need of services.
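In practice, a model of this kind typically combines weighted risk and protective factors drawn from administrative records into a single probability-style score. The sketch below is purely illustrative and assumes a simple logistic scoring function; the factor names, weights, and example referral are hypothetical and are not drawn from any model under consideration in Los Angeles County.

```python
import math

# Hypothetical feature weights for an illustrative logistic risk model.
# Positive weights raise the estimated risk; negative weights lower it.
# These factors and numbers are invented for demonstration only.
WEIGHTS = {
    "prior_cps_referrals": 0.8,         # count of previous screened-in referrals
    "caregiver_substance_abuse": 1.1,   # 1 if indicated in records, else 0
    "child_under_three": 0.6,
    "enrolled_in_home_visiting": -0.7,  # protective factor
    "stable_housing": -0.5,             # protective factor
}
INTERCEPT = -2.0

def risk_score(referral: dict) -> float:
    """Return an estimated probability of a future adverse outcome."""
    z = INTERCEPT + sum(WEIGHTS[k] * referral.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic transform to a 0-1 range

# Example referral assembled from (hypothetical) cross-department records.
referral = {
    "prior_cps_referrals": 2,
    "caregiver_substance_abuse": 1,
    "child_under_three": 1,
    "enrolled_in_home_visiting": 1,
    "stable_housing": 0,
}
print(f"Estimated risk: {risk_score(referral):.2f}")
```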

The practice has been applied in the private sector for decades, perhaps most notably in credit scoring, in which financial institutions consider various factors such as prior loan applications and credit history to assess an individual’s likelihood of making future payments on time.

However, its application in the field of child welfare is more controversial due to the sensitive nature of the setting. Generating a risk score that indicates the likelihood of future maltreatment or abuse can be ethically problematic and has prompted concern about the potential for misuse.

“All of this information is personal,” said Jacquelyn McCroskey, the John Milner Professor of Child Welfare. “So how do we ensure it is used to enhance the lives of children and families and communities and not to single out or target someone? We must be honest and transparent and acknowledge there are questions that are hard to answer.”

Fesia Davenport, interim director of Los Angeles County’s recently formed Office of Child Protection, acknowledged those concerns during a recent convening of community leaders, scholars and county officials at USC to discuss the use of predictive risk modeling in the regional child welfare system.

“We want to have what could be at times an uncomfortable conversation because there are implications about race and the potential for analytics to exacerbate disproportionality,” she said. “It’s the courageous discussion we need to have.”

Raising the standard

Child welfare officials in Los Angeles County, like those in a majority of other California counties and many other jurisdictions across the United States, currently rely on a standardized set of risk assessment tools that attempt to introduce consistency and validity to how cases are handled.

However, individual caseworkers are responsible for completing checklists based on statistical models, introducing the possibility of human error and unintended bias. A predictive risk model might eliminate some of those subjective biases, but could it introduce other disparities?

The answer to that question depends in part on how the model is constructed. Should race and ethnicity be considered as a potential risk factor? How about living in a certain low-income neighborhood? Even if the model proves to be relatively accurate in terms of determining the likelihood of negative outcomes, will the results be handled fairly?

“There’s been such a long history of scores and statistics being used to foster disproportionality in communities of color,” said Jennifer Ralls, director of outcomes and community impact at Para Los Niños, a local nonprofit focused on promoting academic success and social well-being among children. “There is already a lack of trust.”

Ralls is also a member of the Community Child Welfare Coalition, a group of community-based agencies that formed in 2012 to address concerns about the county’s approach to child safety. In addition to expressing a sense that community members and families are not being involved in decisions such as whether to place a child in foster care, she said coalition members are hesitant to embrace predictive risk modeling.

“Will it trigger a final decision about removal of a child?” she said. “There is a real concern that data in the absence of context will outweigh human decision making.”

That was a major point of discussion during the recent gathering at USC, with county officials and researchers emphasizing that any predictive model or risk scoring system will not replace a social worker’s assessment of a particular case.

Instead, the model is envisioned as a tool that can help child welfare workers assess information from across county departments and affiliated agencies to gain a better understanding of the strengths and risks of each family.

“I’m always the first to point out that nobody is talking about any kind of risk modeling work that would replace clinical judgment,” Putnam-Hornstein said. “I don’t think we will ever see a tool that can or should replace the assessment of a well-trained caseworker in the home of a family.”

Human error

However, researchers have also cautioned against relying too much on the ability of individual caseworkers to assess risk. Studies have repeatedly shown that people greatly overestimate their ability to predict future events.

For instance, researchers asked doctors at a hospital in San Diego to determine whether their patients would be readmitted after being treated and discharged. Rhema Vaithianathan, an economist who has led research projects in New Zealand to develop and implement risk models in health and child protection, said the results were not promising.

“They are no better than the toss of a coin in saying which of their patients would come back to the hospital,” she said.

Other studies have shown that perceptions of whether an event will occur are affected by knowledge about the situation, said Vaithianathan, who serves as a professor of economics at Auckland University of Technology and senior research fellow at Singapore Management University. The more information people have, she said, the more likely they are to predict an adverse event.

The ultimate goal is to strike a balance between statistics and human judgment, an issue being grappled with not only in Los Angeles County but also in New Zealand and Allegheny County, Pa., where government officials and community leaders are also exploring the use of predictive risk modeling in child welfare.

“We’re still a few years away from knowing if it’s the right thing to roll out,” Vaithianathan said. “We’re still in research mode. I worry sometimes that policy makers and practitioners, because they are under so much pressure to do something, just end up adopting things. We need to go slowly.”

Slow and steady

Nothing has been actively implemented in Los Angeles County or any other jurisdiction, although local officials have been testing various predictive models using anonymous data from previous years to create a formula that assesses the risk of fatal, near fatal and critical incidents in the child welfare system.
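Retrospective testing of this kind generally means scoring historical referrals with a candidate model and then checking how well the scores rank the cases that actually ended in critical incidents. The sketch below shows one such check, a pairwise concordance (equivalent to AUC), in plain Python; the scores and outcomes are invented for illustration.

```python
# Illustrative backtest: given model scores for historical referrals and a flag
# for whether each case later involved a critical incident, measure how often
# the model scores an incident case above a non-incident case (AUC computed as
# pairwise concordance). The data below are invented for demonstration.

def concordance(scored: list[tuple[float, bool]]) -> float:
    positives = [s for s, hit in scored if hit]
    negatives = [s for s, hit in scored if not hit]
    pairs = len(positives) * len(negatives)
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / pairs if pairs else float("nan")

historical = [  # (model score, critical incident occurred)
    (0.82, True), (0.64, False), (0.40, True), (0.12, False),
    (0.45, False), (0.71, True), (0.30, False), (0.58, False),
]
print(f"AUC (concordance): {concordance(historical):.2f}")
```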

During the recent convening at USC, county leaders said they are moving forward with plans to release a request for proposals from vendors to build a tool that could be used to predict risk in current child welfare cases.

“The reality is that our resources are not unlimited,” said Jennie Feria, interim executive assistant with the county’s Department of Children and Family Services. “This can help us focus in on which families are going to receive priority for some of these services.”

Feria, who previously served as a supervising children’s social worker and later oversaw the highest-volume child welfare office in the county, with an average of 1,000 referrals a month, said results of initial tests have indicated that predictive risk modeling is a promising approach worth pursuing.

“By not looking at this possibility, we would be doing our families an injustice,” she said.

A mandate to act

Officials also stressed the findings of a 2014 report by the county’s Blue Ribbon Commission on Child Protection, a group charged with reviewing recent failures in the child protection system, highlighting organizational barriers to child safety and drafting recommendations to reform the system.

In that report, the commission cited a predictive risk analytics program developed in Florida’s Hillsborough County that identified key risk factors associated with poor child welfare outcomes. According to the report, that information helped officials allocate resources more effectively to address those factors, resulting in a major reduction in child fatalities.

The commission called for the implementation of a similar process in Los Angeles County, with an emphasis on ensuring that key services such as health and mental health care, substance abuse treatment, housing support and preventive programs are being directed to families at highest risk of fatalities.

“The county has a mandate to develop this work,” Putnam-Hornstein said. “The key discussion is how risk modeling is implemented and used.”

One strategy to ensure the tool is not misused is to restrict access to risk scores, perhaps allowing only the hotline operator and a supervisor to view the model’s results, she said. That would keep caseworkers in the field from becoming overly alarmed by a high score or from overlooking red flags during a family visit because of a low one.

“If a hotline model allows the county to identify the top 10 percent riskiest referrals, perhaps the protocol is simply that those referrals cannot be screened out without an investigation and they are assigned to a more experienced worker,” she said. “What if there was a special unit with lower caseloads and workers who had received extra training in investigations and family engagement? The score would not mean there is anything wrong with those families, simply that it is likely a more complex case that requires more time and expertise.”
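Expressed as a routing rule, the protocol Putnam-Hornstein describes might look like the sketch below: referrals scoring in the top decile of a batch cannot be screened out and are assigned to a specialized unit. This is a hypothetical illustration only; the threshold logic, field names, and unit labels are assumptions, not county policy.

```python
# Illustrative routing rule for the protocol described above: referrals in the
# top 10 percent of scores cannot be screened out and go to a specialized unit
# with lower caseloads. All names and thresholds here are hypothetical.

def decile_threshold(scores: list[float]) -> float:
    """Return the score at roughly the 90th percentile of a batch of referrals."""
    ranked = sorted(scores)
    return ranked[int(0.9 * (len(ranked) - 1))]

def route_referral(score: float, threshold: float) -> dict:
    if score >= threshold:
        return {
            "can_screen_out": False,        # must be investigated
            "assigned_unit": "specialized",  # experienced workers, lower caseloads
        }
    return {"can_screen_out": True, "assigned_unit": "standard"}

batch = [0.12, 0.34, 0.08, 0.91, 0.47, 0.66, 0.25, 0.73, 0.55, 0.19]
cutoff = decile_threshold(batch)
print(route_referral(0.91, cutoff))  # routed to the specialized unit
```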

Fair and impartial

Along with McCroskey and other researchers at USC, Putnam-Hornstein views her role as that of a neutral technical resource. Community leaders such as Ralls echoed that sentiment, saying the university can serve as an academic and social justice arbiter to ensure that the community’s voice is heard throughout the process.

“I see USC as a neutral, extremely high-caliber, brilliant and socially just institution with the best intentions to create something that can support more responsive services for kids and families,” Ralls said.

Acknowledging that county officials have been increasingly open as they pursue predictive analytics, Ralls encouraged leaders to engage in ongoing meetings with the community to ensure transparency and emphasized the need for slow, thoughtful implementation.

Ultimately, Putnam-Hornstein remains optimistic that predictive risk modeling will prove to be a valuable tool in child welfare, but warned against viewing it as infallible.

“I think there is a lot of potential, but I don’t think we should oversell it,” she said. “Hopefully risk modeling will allow us to improve on practice as we know it today. But no tool will be perfect.”
