Limits on law enforcement use of facial recognition software are on the legislative agendas of at least 20 states this year, reports ABC News. Concerns over accuracy and racial bias have already led to curbs in seven states and some two dozen cities.
Law enforcement agencies use facial recognition to solve crimes such as homicides and to catch human traffickers. However, when the technology first came into use, the American Civil Liberties Union (ACLU) raised concerns over its high error rate in identifying people of color. At the time, the ACLU said the technology’s biggest danger was that it would be used for “general, suspicionless surveillance systems.”
The issue sparked further debate when facial recognition was used to identify and arrest protesters at racial justice demonstrations in 2020. Microsoft, Amazon and IBM paused sales of their facial recognition software to police over complaints about false identifications.
In March, the ACLU sued Clearview AI, a company that provides facial recognition services to law enforcement agencies and private companies, after The New York Times reported that it had illegally stockpiled images of three billion people from internet sites without their knowledge or permission.
The Chinese government’s extensive video surveillance system, which has been deployed largely against Muslim ethnic minority populations, is now frequently cited as an example of the dangers of surveillance technology.
As concerns grow in the U.S., lawmakers say they want to give themselves time to evaluate how and why the technology is being used. In 2019, San Francisco became the first city to ban government use of the technology, and in 2020, New York imposed a two-year moratorium on its use in schools, reports ABC News.
Although police groups are calling for the prohibitions to be revisited, state lawmakers continue to debate additional bans and limits on the technology.
This summary was prepared by TCR Justice Reporting Intern Anna Marie Wilder.