Clearview AI reports that police use of the company’s highly controversial facial recognition system jumped 26 percent following the raid on the Capitol.
The facial recognition system relies on scraping images of people from across the web without their explicit consent, a practice that has naturally raised eyebrows, including at the ACLU, which called it a “nightmare scenario” for privacy.
Around three billion images are said to have been scraped for Clearview AI’s system.
“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.
Whether you call them protestors or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew condemnation from both parties.
In comments to the New York Times, Clearview AI CEO Hoan Ton-That said the company saw “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.
Given the number of individuals involved, law enforcement faces a gargantuan task in identifying and locating the people who went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.
The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.
Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.
“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.
A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”
Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.
While Clearview AI’s facial recognition tech continues to see widespread use in the US, some police departments have independently decided to ban officers from using such systems due to well-documented inaccuracies that disproportionately affect minority communities.
(Photo by Andy Feliciotti on Unsplash)