March 15, 2026

Predictive Policing Is Real. Here's How It Actually Works.

The AI surveillance technology behind Seventeen Seconds isn't fiction. Cities across America already use predictive policing algorithms to forecast crime and score citizens. The results are about what you'd expect.

Tags: real-science, seventeen-seconds, AI, surveillance, predictive-policing, law-enforcement-technology

You’ve probably heard the pitch. What if police could stop crime before it happens? Sounds like a movie. It’s not. Predictive policing systems have been running in American cities for over a decade now, and they’re way more advanced than most people realize.

These Systems Already Exist

PredPol (they rebranded to Geolitica after the bad press, which tells you something) was one of the first. It came out of a partnership between the LAPD and UCLA. The system chews through years of historical crime data and spits out 500-by-500-foot boxes on a map. Patrol these boxes, the thinking goes, and you’ll be where the crime is going to happen.
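To make that concrete, here's a minimal sketch of the grid-based hotspot idea in Python. This is not Geolitica's actual code, which is proprietary; the real model reportedly uses an earthquake-aftershock-style algorithm that weights recent, nearby events more heavily, and the cell size is the only detail here taken from the reporting. Historical incidents in, ranked boxes out:

```python
from collections import Counter

CELL_FT = 500  # PredPol-style grid: 500-by-500-foot boxes

def to_cell(x_ft, y_ft):
    """Snap an incident's coordinates to the grid cell containing it."""
    return (x_ft // CELL_FT, y_ft // CELL_FT)

def hotspots(incidents, top_n=10):
    """Rank grid cells by historical incident count.

    incidents: list of (x_ft, y_ft) locations from past crime reports.
    Returns the top_n (cell, count) pairs -- the boxes patrols get sent to.
    """
    counts = Counter(to_cell(x, y) for x, y in incidents)
    return counts.most_common(top_n)

# Toy data: a cluster of reports around one block dominates the ranking.
reports = [(120, 80), (310, 450), (100, 95), (140, 60), (2600, 3100)]
print(hotspots(reports, top_n=2))  # [((0, 0), 4), ((5, 6), 1)]
```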

Then there’s Chicago’s Strategic Subject List. The “heat list.” This one went further than predicting locations. It predicted people. The algorithm looked at arrest records, social network connections, and victimization data. Then it assigned individuals a score from 0 to 500. The higher the score, the more likely the system thought you’d be involved in a shooting. As a shooter or as a victim. The algorithm didn’t particularly care which.
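The SSL model itself was never published, so any code version of it is a guess. But the reported inputs suggest something structurally as simple as a weighted sum. In this hypothetical sketch, every feature name and weight is invented; only the 0-to-500 scale and the general input categories come from the reporting:

```python
# Hypothetical risk-score sketch. The real SSL model was proprietary;
# these feature names and weights are invented to illustrate the shape
# of the thing: a record goes in, a single 0-500 number comes out.
WEIGHTS = {
    "arrests": 12.0,               # prior arrest count
    "times_shot": 90.0,            # prior victimization
    "co_arrestees_flagged": 25.0,  # the 'social network' signal
    "under_25": 60.0,              # youth weighted as higher risk
}

def risk_score(person):
    """Weighted sum of a person's record, clamped to the 0-500 scale."""
    raw = sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)
    return min(500, round(raw))

print(risk_score({"arrests": 4, "times_shot": 1, "under_25": 1}))  # 198
```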

At its peak, the list had over 398,000 names on it. People on the list got home visits from police. Not because they’d done anything. Because a math equation said they were statistically interesting.

The Part Where It Goes Wrong

The problems are predictable if you think about them for five minutes. These systems learn from historical data. Historical data reflects historical policing. Communities that got over-policed for decades generate more data points. More data points mean the algorithm sends more officers. More officers mean more arrests. More arrests mean more data. It’s a feedback loop, and it mathematically encodes the racial profiling that civil rights groups have been fighting for sixty years.
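You can watch that loop run in a dozen lines. This toy simulation invents all of its numbers and uses a crude winner-take-all patrol allocation, but the dynamic it shows is the one critics describe: two neighborhoods with identical underlying crime, where one starts with slightly more paperwork.

```python
# Toy simulation of the feedback loop (every number here is invented).
# Two neighborhoods with identical underlying crime. "A" starts with a
# few extra records only because it was patrolled more in the past.
history = {"A": 110, "B": 100}   # recorded incidents to date

for _ in range(10):              # ten years of algorithmic allocation
    # The model sends officers wherever the data says is 'hotter'...
    hotter = max(history, key=history.get)
    # ...and crime only gets recorded where officers actually are.
    history[hotter] += 100       # same true crime rate in both places

print(history)  # {'A': 1110, 'B': 100}: a 10% head start becomes 11x the data
```

Real systems spread patrols across many grid cells rather than dumping everything in one place, but the mechanism is the same. The data doesn't measure crime. It measures where police were looking.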

A 2019 study from the AI Now Institute confirmed what critics had been saying all along. These tools disproportionately target Black and Latino neighborhoods. The RAND Corporation looked at Chicago’s heat list and found no statistically significant reduction in gun violence. None. The city quietly killed the program in 2020, but similar tools are still running in other cities.

And most of these algorithms are proprietary. The people being scored can’t see the model. Can’t challenge the score. Can’t even find out they’re on a list.

Why I Wrote Seventeen Seconds

I kept reading about these systems and asking myself a simple question. What happens when the gap between prediction and action disappears? We already have automated license plate readers feeding real-time data to fusion centers. Facial recognition flagging people before they reach checkpoints. Gunshot detection networks that automatically pivot cameras.

Every year, the time between “the algorithm thinks something” and “something happens to a person” gets shorter.

Seventeen Seconds takes that trajectory to its logical conclusion. The system doesn’t predict anymore. It acts. And it learned its morality from the same place we all did. The internet.

If that doesn’t unsettle you a little, I haven’t done my job.

