March 5, 2026
Your Social Media Feed Is Not Random. It's a Radicalization Pipeline.
Social media recommendation algorithms don't show you what's true. They show you what keeps you scrolling. Study after study finds that they push users toward more extreme content. The Mirror explores what happens when someone weaponizes that on purpose.
Try this experiment. Open YouTube and watch any mainstream political commentary video. Then click on one of the recommended videos in the sidebar. Then click another. Keep going for ten clicks.
I guarantee you will end up somewhere far from where you started. And it will be significantly more extreme.
That’s not a glitch. The system is working exactly the way it was built to work.
How Recommendation Engines Actually Work
Every social media platform optimizes for engagement. Time on platform. That’s the metric. YouTube’s sidebar recommendations, Facebook’s news feed, TikTok’s For You page. They all run on algorithms trained to predict what will keep you watching and scrolling and clicking.
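To make that concrete, here's a toy sketch in Python. It's my illustration, not any platform's actual code, and the watch-time numbers are invented. But the shape of the objective is the whole story: rank by predicted engagement, and nothing else gets a vote.

```python
# A toy recommender. Illustration only: not YouTube's, Facebook's,
# or TikTok's real system. All numbers are made up.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # output of a hypothetical engagement model

def rank_feed(candidates: list[Video]) -> list[Video]:
    # The entire objective function: expected time on platform.
    # Notice what's missing: no term for accuracy, no term for harm.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)

feed = rank_feed([
    Video("Measured policy explainer", 4.2),
    Video("SHOCKING truth THEY don't want you to see", 11.8),
    Video("Local news segment", 3.1),
])
print([v.title for v in feed])  # the outrage bait sorts straight to the top
```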
Here’s the thing researchers keep finding: extreme content generates more engagement than moderate content. Facebook’s own internal research, first surfaced by the Wall Street Journal in 2020 and confirmed at scale by the documents Frances Haugen leaked in 2021, showed the platform’s algorithm actively promoting divisive and inflammatory posts because they drew more reactions and shares. A 2018 internal presentation put it plainly: “our algorithms exploit the human brain’s attraction to divisiveness.”
Guillaume Chaslot used to work on YouTube’s recommendation algorithm. After he left, he built a tool called AlgoTransparency to track where the recommendations actually lead people. What he found was consistent: no matter where you start, the algorithm nudges you toward more extreme content. He tracked recommendation chains that went from jogging videos to conspiracy theories in under ten clicks.
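You can reproduce the shape of that drift with a ten-line simulation. The assumption baked in here, which is what Chaslot's AlgoTransparency data suggests, is that each recommendation runs, on average, slightly hotter than the video it sits next to. The 0.3 drift per click is an invented number; the compounding is the point.

```python
# Toy simulation of a recommendation chain. The drift and noise values
# are invented for illustration; only the structure mirrors the finding.
import random

random.seed(42)  # deterministic, so the example is reproducible

def follow_recommendations(start_intensity: float, clicks: int = 10) -> list[float]:
    """Walk a chain of recommendations, each step a bit more intense on average."""
    path = [start_intensity]
    for _ in range(clicks):
        # Random noise in both directions, plus a small systematic pull upward.
        step = random.gauss(0.3, 0.5)
        path.append(max(0.0, path[-1] + step))
    return path

print([round(x, 1) for x in follow_recommendations(start_intensity=1.0)])
# Ten clicks of small upward drift leaves you several units hotter
# than where you started. The drift compounds. It never averages out.
```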
The Christchurch Investigation
After the 2019 Christchurch mosque shootings in New Zealand, the Royal Commission of Inquiry looked at how the attacker was radicalized. The findings were specific and damning.
The shooter didn’t start by searching for extremist content. He started with mainstream grievances. The platforms did the rest. Each recommendation was a little more radical than the last. The algorithm spotted his engagement patterns and accelerated the process. The Commission described it as a “pathway to radicalization” that was self-reinforcing.
It wasn’t a slow drift over years. It was a pipeline, engineered for efficiency. Not engineered to radicalize anyone; engineered to hold attention, which turns out to be the same machine. The same engineering that keeps you watching cooking videos for three hours at 1am also works on political content. It just leads somewhere darker.
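If “self-reinforcing” sounds abstract, here's a minimal sketch of the loop. Every parameter is invented; what matters is the structure. The feed serves content a notch past where the user is, the user engages hardest with it, and that engagement signal tells the feed to push the notch further.

```python
# A minimal sketch of a self-reinforcing recommendation loop.
# All parameters are invented; only the feedback structure is the point.
def radicalization_loop(user_position: float = 0.0, weeks: int = 8) -> None:
    feed_target = user_position
    for week in range(1, weeks + 1):
        # The feed serves content a notch past where the user currently is...
        served = feed_target + 0.5
        # ...the user engages most with it and shifts toward it...
        user_position += 0.7 * (served - user_position)
        # ...and that engagement tells the feed the notch worked.
        feed_target = served
        print(f"week {week}: user at {user_position:.2f}, feed at {served:.2f}")

radicalization_loop()
# Neither side ever takes a big step. The loop does the escalating.
```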
What If Someone Did It On Purpose?
Everything I just described happened by accident. Nobody at YouTube or Facebook designed the algorithm to radicalize a specific person. It was a side effect. Engagement optimization created radicalization as an emergent behavior.
But what if someone understood exactly how these systems work? What if they knew the psychological pressure points? What if they built a network of content, each piece calibrated to trigger the next recommendation, each step designed to move a specific person further down the pipeline?
Not a hack. Not a data breach. Just someone using the platform exactly the way it was designed to be used. Except with intent.
That’s The Mirror
The Mirror starts with three acts of domestic terrorism committed by people who had no criminal records, no extremist ties, and no warning signs six months earlier. Maya Castillo finds the connection: a platform that maps each user’s psychological vulnerabilities and feeds them a custom radicalization pipeline.
Eight weeks. That’s how long it takes. From concerned citizen to weapon.
Every piece of technology in the book exists right now. The only fictional part is the person who decided to aim it.
Google “Allegheny Family Screening Tool” when you’re done reading this. It’s a real predictive risk model running right now in Allegheny County, Pennsylvania, scoring child-welfare hotline calls to help decide which families get investigated. Then ask yourself how comfortable you are with the people building these systems.