February 20, 2026
A Real Algorithm Is Deciding Which Families Get Investigated by CPS
The Allegheny Family Screening Tool uses predictive AI to score families for child welfare investigations. It's running right now in Pennsylvania. FLAGGED asks what happens when the algorithm gets it wrong.
There’s an algorithm running right now in Allegheny County, Pennsylvania, that scores families on a scale of 1 to 20. The higher the score, the more likely the system thinks a child in that family is at risk of abuse or neglect. When someone calls the child abuse hotline, the algorithm runs automatically. The caseworker sees the score before they even read the report.
It’s called the Allegheny Family Screening Tool, and it’s one of the most controversial applications of predictive AI in government.
How It Works
The AFST pulls data from county systems: behavioral health records, substance abuse treatment records, public benefits usage, juvenile justice records, and previous child welfare interactions. It feeds all of this into a predictive model trained on historical outcomes. The model looks at patterns in past cases and estimates the probability that the child in a new referral will experience a bad outcome down the line, such as being placed in foster care within two years.
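To make that pipeline concrete, here is a minimal sketch in Python. Everything in it is my assumption, not the county's published implementation: the Referral fields, the build_features helper, and the binning of a probability into a 1-to-20 scale are all hypothetical, and risk_model stands in for any trained classifier with a scikit-learn-style predict_proba.

```python
# Minimal sketch of the scoring pipeline described above. Every name
# here is hypothetical; the real AFST's feature list, model, and
# score binning are not reproduced from any published spec.

from dataclasses import dataclass

@dataclass
class Referral:
    family_id: str
    behavioral_health_contacts: int      # county behavioral health records
    substance_treatment_episodes: int    # substance abuse treatment records
    months_on_public_benefits: int       # public benefits usage
    juvenile_justice_contacts: int       # juvenile justice records
    prior_cps_referrals: int             # previous child welfare interactions

def build_features(r: Referral) -> list[float]:
    """Flatten the linked county records into one feature vector."""
    return [
        float(r.behavioral_health_contacts),
        float(r.substance_treatment_episodes),
        float(r.months_on_public_benefits),
        float(r.juvenile_justice_contacts),
        float(r.prior_cps_referrals),
    ]

def score_referral(r: Referral, risk_model) -> int:
    """Turn a model probability into a 1-20 screening score.

    risk_model is any classifier exposing a scikit-learn-style
    predict_proba; binning the probability into twentieths is one
    plausible way to produce a 1-20 scale.
    """
    p = risk_model.predict_proba([build_features(r)])[0][1]
    return min(20, max(1, int(p * 20) + 1))
```

Note what the caseworker would see under a design like this: the integer, not the probability behind it and not the records that produced it.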
The tool doesn’t make decisions. Officially. It provides a score, and a human caseworker decides whether to screen in or screen out the referral. But multiple studies have shown that caseworkers are heavily influenced by the score. A high score makes investigation more likely. A low score makes dismissal more likely. The algorithm’s recommendation becomes the de facto decision in most cases.
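What that looks like in practice: the score doesn't issue the decision, but it boxes in the person who does. A toy sketch, with thresholds I invented for illustration (Allegheny County's actual protocol, including any mandatory screen-in rules, is not shown here):

```python
# Toy illustration of how a "recommendation" becomes the de facto
# decision. The thresholds are invented for illustration only.

def screen_in(score: int, caseworker_judgment: bool) -> bool:
    """Return True to open an investigation."""
    if score >= 18:
        return True               # a very high score all but forces screen-in
    if score <= 3:
        return False              # a very low score all but forces screen-out
    return caseworker_judgment    # human discretion decides only the middle
```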
The Bias Problem
Here’s where it gets uncomfortable. The data the algorithm trains on reflects decades of existing bias in child welfare systems. Black families and low-income families interact with public systems more frequently. They use more public benefits. They have more contact with social services. Not because they’re more likely to abuse their children, but because they’re more visible to the systems that generate data.
Virginia Eubanks documented this extensively in her book “Automating Inequality.” She found that the AFST disproportionately flags poor families. Using public benefits becomes a risk factor. Seeking mental health treatment becomes a risk factor. The very actions that show a family reaching for help become evidence the algorithm counts against them.
A mother who takes her kid to the emergency room three times in a year because she can’t afford a pediatrician generates data. A wealthy mother who takes her kid to a private doctor generates no data the county can see. The algorithm sees the first mother as higher risk. Not because she’s a worse parent, but because the system can see her.
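You can see the mechanism in a toy model. The feature names and weights below are invented; the point is only that behavior the county cannot observe contributes nothing to the score, so identical needs produce very different risk numbers.

```python
# Toy model of surveillance bias: two families with the same needs,
# but only one leaves a data trail the county can see. Features and
# weights are invented for illustration.

WEIGHTS = {
    "er_visits": 0.8,              # visible via Medicaid billing
    "months_on_benefits": 0.5,     # visible via benefits enrollment
    "private_doctor_visits": 0.0,  # invisible: no county record exists
}

def toy_risk(features: dict[str, int]) -> float:
    return sum(WEIGHTS.get(name, 0.0) * count for name, count in features.items())

low_income_family = {"er_visits": 3, "months_on_benefits": 12}
wealthy_family    = {"private_doctor_visits": 3}

print(toy_risk(low_income_family))  # 8.4 - three ER visits plus a year of benefits
print(toy_risk(wealthy_family))     # 0.0 - same sick kid, no data trail
```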
Every Data Point Gets Flipped
This is the thing that kept me up at night when I was researching FLAGGED. Every piece of evidence that you’re a good parent can be read as a risk indicator by the algorithm.
You sought addiction treatment? That means you had an addiction. Risk factor. You called a domestic violence hotline? That means there was domestic violence in the home. Risk factor. You applied for food stamps? That means you can’t feed your kids independently. Risk factor.
The system penalizes people for asking for help.
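In model terms, the flip looks like this. The event names and weights below are invented for illustration; what matters is that each protective action enters the data as a feature with positive risk weight.

```python
# The "flip" in model terms: each help-seeking event enters the data
# as a feature with positive risk weight. Events and weights are
# invented for illustration.

HELP_SEEKING = {
    "entered_addiction_treatment": 1.2,  # read as: there was an addiction
    "called_dv_hotline": 1.5,            # read as: there was violence at home
    "applied_for_food_stamps": 0.6,      # read as: can't feed the kids alone
}

def risk_added_by_asking_for_help(events: list[str]) -> float:
    """Total risk contributed by actions that were actually protective."""
    return sum(HELP_SEEKING.get(e, 0.0) for e in events)

print(risk_added_by_asking_for_help(
    ["called_dv_hotline", "applied_for_food_stamps"]))  # 2.1
```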
Why FLAGGED Exists
FLAGGED follows a single mother and nurse named Simone Ward who does everything right. She’s employed, she’s engaged, she’s actively parenting. But the algorithm has been scoring her family since before her daughter was born, and every interaction with public systems has added to her risk profile.
Every piece of the technology in FLAGGED is real. The Allegheny Family Screening Tool is real. The bias patterns are documented. The feedback loops are studied and published.
I just asked one question: what happens to a specific person when the system decides she’s guilty of something she hasn’t done?
Search “Allegheny Family Screening Tool” and read about it yourself. It’s been running since 2016. Then decide how you feel about it.