Predicted Guilty: How AI Could Reshape Policing and the Justice System
- Mishaal Shahzad
- Aug 18
- 3 min read
From Sci-Fi to the Courtroom:
A few years ago, artificial intelligence seemed like something out of a science fiction movie; then, almost overnight, it became reality. AI has become deeply woven into our lives, from powering search engines and drafting emails to creating study materials and streamlining work tasks. Artificial intelligence is everywhere, and its presence is only going to grow.
These advancements have also started to make their way into our legal system. A key example is predictive policing: technology designed to predict where crime might occur and who might commit it.
How It Works:
On paper, predictive policing technology is supposed to make policing more effective, but in practice it can cause more harm than good. Predictive policing systems are fed enormous amounts of information, including past arrests, crime reports, patrol patterns, and sometimes even social media activity. The AI then searches this data for patterns and assigns "risk scores" to areas or individuals.
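To make that concrete, here is a minimal, hypothetical sketch of how such a system might turn historical data into an area "risk score." The field names, weights, and numbers are all invented for illustration; real systems are proprietary and far more complex.

```python
# Hypothetical sketch of an area risk-scoring step. All fields, weights,
# and data are invented for illustration only.

from dataclasses import dataclass

@dataclass
class AreaHistory:
    name: str
    past_arrests: int      # arrests recorded in the area
    crime_reports: int     # incidents reported by residents
    patrol_hours: float    # how heavily the area was patrolled

def risk_score(area: AreaHistory) -> float:
    # Invented weighted sum: the score rises with arrests and reports.
    # Note the hidden feedback: patrol hours themselves drive arrest counts,
    # so heavily patrolled areas tend to score higher regardless of crime.
    return 0.5 * area.past_arrests + 0.3 * area.crime_reports + 0.2 * area.patrol_hours

areas = [
    AreaHistory("Downtown", past_arrests=40, crime_reports=60, patrol_hours=120.0),
    AreaHistory("Riverside", past_arrests=12, crime_reports=25, patrol_hours=30.0),
]

# Rank areas by score, as a dispatch system might when allocating patrols.
for area in sorted(areas, key=risk_score, reverse=True):
    print(f"{area.name}: risk score {risk_score(area):.1f}")
```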
Issues and Concerns:
Last month in the U.K., an AI system flagged a prison inmate as "likely" to commit violence in the next 48 hours, even though he hadn't threatened anyone or broken any rules. Suddenly, officers were watching his every move because a computer told them to. This points to one of the many issues with predictive policing technology: the algorithms don't just try to help prevent crime; they try to predict the future.
Some of the other issues with predictive policing technology include:
The first is its potential to reinforce existing stereotypes and biases, further deepening the distrust between communities and law enforcement. Predictive policing tools don't just flag potential crime locations; they can also single out individuals. Problems arise when these systems repeatedly target already marginalized neighbourhoods and over-policed communities, and when people of colour are flagged simply because they "fit" the narrow patterns the algorithms are trained to look for.
Predictive policing technology also has issues with accuracy, rooted in a feedback loop: the AI predicts crime in certain areas, police concentrate patrols there, those patrols record more incidents, and the system takes the new data as proof it was right, even if it was wrong from the start (the sketch after this list walks through the loop).
Another key concern with this technology is that it has the power to significantly influence where police patrol, who gets stopped, and even who ends up under long-term surveillance. The worry is that policing will be driven by AI predictions rather than by actual crime data.
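As a rough illustration of that feedback loop, the toy model below gives two areas an identical true crime rate but skewed historical records. Every number and name here is an invented assumption, chosen only to show the dynamic, not to model any real system.

```python
# Toy model of the feedback loop: equal true crime, skewed historical data.

TRUE_INCIDENTS = 100        # actual incidents per year, identical in both areas
TOTAL_PATROLS = 100

recorded = {"Area A": 60, "Area B": 40}   # historical records already skewed

for year in range(1, 6):
    total = sum(recorded.values())
    for area, past in list(recorded.items()):
        # The "model" allocates patrols in proportion to recorded history...
        patrols = TOTAL_PATROLS * past / total
        # ...and each patrol detects a fixed share of incidents, so the
        # over-patrolled area records more crime despite equal true rates.
        recorded[area] += TRUE_INCIDENTS * (patrols / TOTAL_PATROLS)
    print(year, {a: round(v) for a, v in recorded.items()})

# Area A's recorded total stays 50% above Area B's in every year, and the
# model reads that persistent gap as proof its predictions were correct.
```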
Case Example: Chicago’s Failed Experiment:
In 2012, Chicago launched its "Strategic Subject List," an algorithm that scored thousands of residents on their likelihood of being involved in a shooting, either as a suspect or a victim. The idea was to intervene early and prevent violence. However, the list disproportionately targeted young Black and Latino men, many with no criminal history, placing them under heightened police scrutiny simply because a computer said they were "high risk."
The program was later scrapped after evaluations showed that it had little to no impact on reducing violence, but a significant effect on deepening distrust between police and communities.
Where We Go From Here:
If Canada chooses to adopt predictive policing tools, it is crucial that we implement safeguards.
Some examples include:
Independent Bias Audits: to ensure algorithms aren't targeting marginalized communities or further reinforcing discrimination and bias (a minimal sketch of one such check follows this list).
Transparency Laws: requiring disclosure of what data is being used and how predictions are made.
Community Oversight: giving people a say in how these tools are used within their communities.
Strict Limits and Regulations: controlling when and where the technology is used, and preventing it from being used to target protests and political movements or for purposes such as immigration enforcement.
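What might a bias audit actually check? One simple, hypothetical test is comparing how often a tool flags people as "high risk" across demographic groups. The data, group labels, and 0.8 threshold below (loosely borrowed from the "four-fifths rule" used in U.S. employment law) are illustrative assumptions, not a real audit methodology.

```python
# Hypothetical bias-audit check: compare "high risk" flag rates by group.

from collections import defaultdict

# Each record: (group, was_flagged_high_risk). Data is invented.
decisions = [
    ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", False),
    ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
]

flagged = defaultdict(int)
totals = defaultdict(int)
for group, was_flagged in decisions:
    totals[group] += 1
    flagged[group] += was_flagged

rates = {g: flagged[g] / totals[g] for g in totals}
print("Flag rates:", rates)

# Disparate-impact ratio: lowest flag rate vs. highest. A ratio far below
# 1.0 means one group is flagged much more often and warrants scrutiny.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}",
      "(flag for review)" if ratio < 0.8 else "(within threshold)")
```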
The Choice Is Ours:
Ultimately, this technology is here to stay, but we as a society have the power to decide how it is used within our legal systems. If implemented responsibly, artificial intelligence has the potential to significantly aid our criminal justice system. Without safeguards such as transparency, accountability, and strong ethical boundaries, however, predictive policing risks deepening the distrust and bias that already exist within law enforcement and the criminal justice system.