Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived.
The researchers, from the University of California – San Diego, first presented their findings at the WACV 2021 conference.
Shehzeen Hussain, a UC San Diego computer engineering PhD student and co-author on the paper, said:
“Our work shows that attacks on deepfake detectors could be a real-world threat.
More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner-workings of the machine learning model used by the detector.”
Two scenarios were tested as part of the research:
- The attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model.
- The attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.
In the first scenario, the attack’s success rate was above 99 percent for uncompressed videos and 84.96 percent for compressed videos. In the second scenario, it was 86.43 percent for uncompressed and 78.33 percent for compressed videos.
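To make the second, query-only scenario concrete, below is a minimal sketch of a generic score-based black-box attack. The `detector_prob` function is a hypothetical stand-in for the real detection system, and the gradient-estimation technique shown (natural evolution strategies with antithetic sampling) is a standard approach from the adversarial-examples literature, not necessarily the researchers’ exact method.

```python
import numpy as np

def detector_prob(frame):
    """Hypothetical black-box detector query: returns P(frame is fake).
    In practice this would be a call to the deployed detection model."""
    raise NotImplementedError  # placeholder for the real detector

def black_box_attack(frame, steps=500, sigma=2.0, lr=0.5, n_samples=20):
    """Estimate the detector's gradient purely from score queries
    (NES-style), then nudge pixels to lower P(fake)."""
    adv = frame.astype(np.float64).copy()
    for _ in range(steps):
        grad = np.zeros_like(adv)
        for _ in range(n_samples):
            noise = np.random.randn(*adv.shape)
            # Antithetic sampling: query the detector at +noise and -noise
            p_plus = detector_prob(np.clip(adv + sigma * noise, 0, 255))
            p_minus = detector_prob(np.clip(adv - sigma * noise, 0, 255))
            grad += (p_plus - p_minus) * noise
        grad /= 2 * sigma * n_samples
        # Step against the estimated gradient to reduce the fake score
        adv = np.clip(adv - lr * np.sign(grad), 0, 255)
        if detector_prob(adv) < 0.5:  # detector now labels the frame real
            break
    return adv
```

The practical constraint on attacks of this kind is the query budget: each gradient estimate above costs dozens of detector queries per frame.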
“We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” the researchers wrote.
Deepfakes use a Generative Adversarial Network (GAN) to create fake imagery, and even video, with increasingly convincing results. So-called ‘deepfake porn’ has been used to embarrass and even blackmail victims.
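For readers unfamiliar with the architecture: a GAN pits two networks against each other, a generator that turns random noise into images and a discriminator that tries to tell real images from generated ones, and training them jointly pushes the generator toward ever more convincing fakes. The following heavily simplified PyTorch sketch shows the core training loop; the network sizes and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes for illustration

# Generator: random noise -> flattened image in [-1, 1]
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: flattened image -> P(image is real)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_imgs):
    batch = real_imgs.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: label real images 1, generated images 0
    fake_imgs = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (bce(discriminator(real_imgs), ones) +
              bce(discriminator(fake_imgs), zeros))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on its fakes
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```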
There’s the old saying, “I won’t believe it until I see it with my own eyes,” which is why convincing fake content is such a concern: as humans, we’re rather hard-wired to believe what we (think we) see.
In an age of disinformation, people are gradually learning not to believe everything they read, especially when it comes from unverified sources. Teaching people not to automatically believe the images and video they see is going to pose a serious challenge.
Some hope has been placed on systems to detect and counter deepfakes before they cause harm. Unfortunately, the UC San Diego researchers’ findings somewhat dash those hopes.
“If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, another co-author on the paper.
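In the full-knowledge (white-box) scenario, “designing inputs to target the blind spots” can be as direct as following the detector’s own gradients. Below is a minimal sketch of one classic gradient-based evasion technique, the fast gradient sign method (FGSM); the paper’s actual attack is more elaborate, and `detector` here is an assumed PyTorch module mapping an image batch to P(fake).

```python
import torch

def fgsm_bypass(frame, detector, epsilon=2 / 255):
    """White-box evasion sketch: perturb the frame against the
    detector's gradient to lower P(fake), keeping the change small.
    `frame` is assumed to be a (1, C, H, W) tensor with values in [0, 1]."""
    frame = frame.clone().requires_grad_(True)
    p_fake = detector(frame).sum()  # scalar loss: the fake score itself
    p_fake.backward()
    # One small step *against* the gradient; epsilon bounds the
    # per-pixel change so the perturbation stays imperceptible.
    adv = frame - epsilon * frame.grad.sign()
    return adv.clamp(0, 1).detach()
```

Because video frames are recompressed and re-extracted by the detection pipeline, a practical attack also has to survive those transformations, which is presumably why the researchers report separate success rates for compressed video.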
In separate research from University College London (UCL) last year, experts ranked what they believe to be the most serious AI threats; deepfakes topped the list.
“People now conduct large parts of their lives online and their online activity can make and break reputations,” said Dr Matthew Caldwell of UCL Computer Science.
One of the most high-profile cases so far involved US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media that made Pelosi appear drunk and slurring her words.
The Pelosi video was likely created to amuse rather than with any particular malice, but it shows how manipulated video, and deepfakes especially, could be used to damage reputations and even influence democratic processes.
As part of a bid to persuade Facebook to change its policies on deepfakes, last year Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg which made it appear as though he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”
Now imagine the precise targeting of content provided by platforms like Facebook combined with deepfakes which can’t be detected… actually, perhaps don’t, it’s a rather squeaky bum thought.