AI Policing Errors: From False Alarms to Wrongful Arrests
On October 20, 2025, 17-year-old Taki Allen, a student in Baltimore, was sitting outside his high school after football practice when an AI-enhanced surveillance camera falsely identified the Doritos bag in his pocket as a gun. Within minutes, police arrived, drew their weapons, and forced Allen to his knees while searching him. All they found was a crumpled bag of chips.
In another case, Angela Lipps, a Tennessee grandmother, was released on December 24, 2025, after spending five months in jail because facial recognition software incorrectly linked her to fraud crimes in North Dakota, a state she had never visited. Police had arrested her at gunpoint while she was babysitting her four grandchildren.
These cases are not isolated incidents. They exemplify how AI systems, despite their flaws, are increasingly relied upon in policing, often with dangerous consequences. The issue lies not just in technical errors but in the human tendency to treat AI-generated probabilities as absolute certainties.
Why AI Policing Fails: Probabilities vs. Certainties
AI systems do not operate on facts; they generate statistically probable outputs based on patterns in training data. For example, when generative AI models like ChatGPT or Claude answer questions, they predict the most likely response—not a verified fact. Asked who invented the light bulb, they may respond with "Thomas Edison," overlooking Joseph Swan’s parallel invention.
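A toy sketch makes the mechanism concrete. The candidate answers and probabilities below are invented for illustration; a real model scores tens of thousands of possible tokens, but the selection logic is the same:

```python
# Toy illustration of next-token prediction, using a made-up probability
# distribution. Real models assign probabilities across a vocabulary of
# tens of thousands of tokens, but the principle is identical.
candidates = {
    "Thomas Edison": 0.74,            # the most common answer in training data
    "Joseph Swan": 0.09,              # historically valid, but rarer in the data
    "Humphry Davy": 0.04,
    "the answer is contested": 0.13,
}

# The model emits the statistically likeliest continuation,
# not the most historically complete one.
answer = max(candidates, key=candidates.get)
print(answer)  # prints "Thomas Edison"
```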
This distinction is critical in policing. When AI tools predict crime hotspots or identify suspects, their outputs are probabilities, not certainties. Yet law enforcement often treats these predictions as definitive, leading to flawed decisions with severe consequences.
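To see how a probabilistic output becomes a binary call, consider a hedged sketch of a facial-recognition pipeline. Everything here is hypothetical: the function, the score, and the 0.80 threshold are invented for illustration, not taken from any real vendor's system:

```python
# Hypothetical illustration: facial-recognition systems return a similarity
# score, not a yes/no identification. All names and numbers here are invented.
def face_match_score(probe_image: str, gallery_image: str) -> float:
    # A real system compares learned embeddings of the two faces;
    # we stand in a fixed score purely for illustration.
    return 0.87

score = face_match_score("atm_camera_still.jpg", "license_photo.jpg")

# The failure mode: downstream logic collapses the score into a certainty.
MATCH_THRESHOLD = 0.80  # an arbitrary cutoff chosen for this sketch
if score >= MATCH_THRESHOLD:
    print("MATCH")      # an investigator may read this as "it's her"
else:
    print("NO MATCH")

# What the binary label hides: at any threshold, some fraction of flagged
# pairs are different people, and that error rate never reaches the officer.
```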
The Human Factor: How Uncertainty Turns into Action
AI policing tools are deployed in dozens of U.S. cities, though no public registry tracks their full use. These systems ingest historical crime data to score neighborhoods on predicted risk, directing officers to "hotspots." However, the transition from statistical prediction to operational certainty is where the problem arises.
Once an AI system signals a possible threat, the focus shifts from assessing the certainty of the prediction to determining the appropriate response. The uncertainty inherent in the AI’s output is often lost in this process, resulting in actions based on flawed assumptions.
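A minimal sketch, assuming a simple frequency-based scoring model (real predictive-policing systems are proprietary and far more elaborate, and the data below is invented), shows where the uncertainty gets dropped:

```python
# Illustrative sketch of frequency-based "hotspot" scoring. Real predictive-
# policing systems are proprietary and more complex; this data is invented.
from collections import Counter

# Hypothetical historical records: (neighborhood, incident_type) pairs.
history = [
    ("Northside", "theft"), ("Northside", "theft"), ("Northside", "vandalism"),
    ("Riverview", "assault"), ("Eastgate", "theft"),
]

counts = Counter(area for area, _ in history)
total = sum(counts.values())

# Each score is an estimate built from past reports, which reflect past
# patrol patterns as much as underlying crime rates.
risk = {area: n / total for area, n in counts.items()}
for area, share in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{area}: {share:.0%} of recorded incidents")

# The step where uncertainty disappears: the ranking becomes a dispatch
# decision, and the estimate's error bars go with it.
top_area = max(risk, key=risk.get)
print(f"Direct extra patrols to {top_area}")  # a probability became a policy
```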
"AI systems produce probabilities, and people treat them as certainties."
— Researchers studying AI in policing
The Consequences of Misplaced Trust in AI
The cases of Taki Allen and Angela Lipps underscore the human cost of relying on AI without adequate safeguards. Allen’s detention at gunpoint and Lipps’ wrongful imprisonment highlight how quickly probabilistic outputs can escalate into traumatic confrontations and unjust detentions.
As AI becomes more integrated into law enforcement, the risks of such errors grow. Without transparency, accountability, and a clear understanding of AI’s limitations, these systems will continue to produce harmful outcomes—undermining both justice and public trust.