Police departments across the U.S. have been drawn to digital technology for surveillance and predicting crime in the hope that it will make law enforcement more accurate, efficient and effective.
But United Nations human rights experts warned on Thursday that these technologies risk reinforcing racial bias and abuse, the New York Times reports.
The U.N. Committee on the Elimination of Racial Discrimination, an 18-member panel, conceded that artificial intelligence in decision-making “can contribute to greater effectiveness in some areas,” but found that the use of facial recognition and other algorithm-driven technologies for law enforcement and immigration control may risk deepening racism and xenophobia and could lead to human rights violations.
In its report, the committee warned that using these technologies can be counterproductive, as communities exposed to discriminatory law enforcement lose trust in the police and become less cooperative.
“Big data and A.I. tools may reproduce and reinforce already existing biases and lead to even more discriminatory practices,” said Dr. Verene Shepherd, who led the panel’s discussions on drafting its findings.
“Machines can be wrong,” she told the Times. “They have been proven to be wrong, so we are deeply concerned about the discriminatory outcome of algorithmic profiling in law enforcement.”
The panel cited the danger that the algorithms driving these technologies can draw on biased data, including historical arrest data about a neighborhood that may reflect racially biased policing practices.
“Such data will deepen the risk of over-policing in the same neighborhood, which in turn may lead to more arrests, creating a dangerous feedback loop,” she said.
The panel’s warnings add to deepening alarm among human rights groups over the largely unregulated use of artificial intelligence across a widening spectrum of government, from social welfare delivery to “digital borders” controlling immigration.