Police departments around the country — and around the world — increasingly rely on technology that algorithmically predicts whether someone will commit a future crime.
Combined with technological advances in media platforms like Facebook and Twitter, which use Artificial Intelligence (AI) to track and predict individual preferences, this makes it increasingly difficult to “escape the surveillant eye.”
This is also true for Canada. Many Canadian police departments are using predictive policing software, but with a twist: the software is used to predict where a crime will take place.
As author Mathew Zaia, a J.D. candidate at the University of Ottawa, Canada, argues in a new paper, this type of location-based “crime forecasting” has major implications for entrapment laws.
In the paper, Zaia asks tough questions of algorithmic prediction software and looks to Canadian Supreme Court case law to discuss how these AI developments make it more difficult for defendants to argue entrapment.
Predictive Policing Models
In America, a program widely used by law enforcement called COMPAS analyzes a person’s historical data to assess the level of risk they pose for recidivism. In the United Kingdom, a few police forces use a “Harm Assessment Risk Tool” (HART) to try to “forecast” someone’s likelihood of committing a future crime.
These advanced machine learning models also include algorithms to estimate someone’s potential future criminality, mostly by comparing someone’s personal data with known statistics based on previous crimes, places, socioeconomic status, and even behavioral patterns, Zaia writes.
Canada is using some of this data, and taking it a step further, according to Zaia.
In Vancouver, police have begun using spatial analytics and machine learning programs like GeoDash to predict crime within a 100-meter radius, essentially forecasting the next place a crime will occur. Alleged to be over 80 percent accurate, “the algorithms target locations for upcoming criminal activity within a one- to five-hour timespan.”
This incredible “crime forecasting” uses data from police reports, incoming tips, statistics of possible criminality, and potentially even social media to point police to locations where future crime may happen.
Zaia cites the Ottawa Police Strategic Operations Centre (OPSOC) as an example, as it gathers data from crime reports, floor plans in public buildings, and social media to “provide frontline officers with crime statistics and predictive analytics.”
Zaia also questions whether the line between using location-based data for public safety and using it for entrapment has become blurred.
As defined in R. v. Mack, entrapment in Canadian law occurs when “the authorities provide a person with an opportunity to commit an offence without acting on a reasonable suspicion that this person is already engaged in criminal activity or pursuant to a bona fide inquiry.”
Moreover, Zaia writes, the Canadian courts recognize entrapment doctrine as a check on police investigations, meant to “protect against overreaching and discriminatory policing.” To that end, the implications of location-based crime forecasting for entrapment have already been anticipated, particularly by Supreme Court of Canada Justice Antonio Lamer in Mack.
Justice Lamer said, as discussed by Zaia, that if police had sufficient information that a criminal act was going to occur in a given area, they could use that information to try to catch a criminal red-handed.
Justice Lamer provided an example, explaining that if police received tips of a thief at a bus terminal — creating a bona fide inquiry — the officers could place a handbag in the middle of the terminal to draw out the criminal.
“Upon seeing someone take the handbag, they could arrest the individual without engaging in entrapment,” Zaia explains. “Despite law enforcement providing an opportunity to anyone in the area to commit a crime, they circumvent the strictures of entrapment because of the bona fide inquiry” created by the crime forecasting algorithms.
In other words, “reasonable suspicion” in the context of entrapment may turn out to be virtual suspicion egged on by the new predictive technologies, Zaia writes.
If such techniques become sufficiently widespread, suspicions like those conjured up by crime-forecasting AI could permit police to “circumvent the strictures of current entrapment law,” since officers could claim they were following up on a lead or a statistically significant potential threat.
“The concept of using predictive policing to develop a reasonable suspicion or investigate pursuant to a bona fide inquiry suggests that such technology must be more scrupulous for investigations,” Zaia recommends.
Zaia also recommends that law enforcement be more transparent with the location-based crime forecasting by:
- Releasing information to show the inner workings of the algorithm;
- Showing what data is being used in the algorithm’s predictions;
- Allowing the algorithms to be observed through their development and updates; and,
- Creating a government-mandated independent oversight body for predictive policing tools.
Mathew Zaia is the Editor-in-Chief of the Revue de Droit d’Ottawa/Ottawa Law Review. Zaia is also a Junior Research Analyst for the Canada Centre for Community Engagement and Prevention of Violence, and a J.D. Candidate at the University of Ottawa.
The full paper can be accessed here.
Emily Riley is a TCR justice reporting intern.