Data poisoning can’t beat facial recognition – research


If there was ever reason to think data poisoning could fool facial recognition software, a recently published paper argues that reasoning is bunk.

Data poisoning software alters images by manipulating individual pixels. Those changes are invisible to the naked eye, but, if effective, they render the images useless to facial recognition software.
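For a sense of how such tools work under the hood, here is a minimal sketch of the general idea, not Fawkes’ or LowKey’s actual algorithm: a toy linear “embedding” stands in for a real face recognition network, and a short gradient-descent loop nudges pixels, within a tight per-pixel budget, until the photo’s embedding drifts toward a decoy face. Everything here (the model, the sizes, the step counts) is an assumption for illustration.

```python
import numpy as np

# Toy stand-in for a face recognition feature extractor: a fixed random
# linear map from pixels to an embedding. Real systems (and the models
# Fawkes and LowKey target) use deep networks; this is purely illustrative.
rng = np.random.default_rng(0)
EMBED = rng.standard_normal((128, 64 * 64))

def embed(image):
    """Map a 64x64 grayscale image (values in [0, 1]) to a 128-d embedding."""
    return EMBED @ image.ravel()

def poison(image, target_embedding, epsilon=4 / 255, steps=40, lr=0.5):
    """Nudge pixels so the image's embedding drifts toward a decoy,
    keeping every per-pixel change within +/- epsilon so the edit
    stays invisible to the naked eye."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||embed(image + delta) - target||^2 w.r.t. delta;
        # exact here because embed() is linear.
        residual = embed(image + delta) - target_embedding
        grad = (EMBED.T @ residual).reshape(image.shape)
        delta -= lr * grad / (np.linalg.norm(grad) + 1e-12)
        delta = np.clip(delta, -epsilon, epsilon)           # imperceptibility budget
        delta = np.clip(image + delta, 0.0, 1.0) - image    # keep pixels valid
    return image + delta

photo = rng.random((64, 64))
decoy = embed(rng.random((64, 64)))        # embedding of some other face
cloaked = poison(photo, decoy)

print(np.abs(cloaked - photo).max())       # tiny: at most epsilon per pixel
print(np.linalg.norm(embed(photo) - decoy),
      np.linalg.norm(embed(cloaked) - decoy))  # cloaked copy sits closer to the decoy
```

The key design point is the clipping step: the perturbation is free to confuse the recognizer, but never allowed to grow large enough to be visible.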

Researchers from Stanford University, Oregon State University, and Google teamed up for the paper, in which they single out two reasons why data poisoning won’t keep people safe. First, the code written to “poison” photographs is freely available online. Second, there’s no reason to assume a poisoned photo will remain effective against future recognition models.

Far from providing security, the paper’s authors said, data poisoning creates a false sense of it, and could actually harm users who would otherwise not have posted their photographs online.

The researchers faced off against two data poisoning programs, Fawkes and LowKey, both of which subtly alter images at the pixel level in ways that, while invisible to humans, are enough to confuse facial recognition software. Both are freely available online, and that’s problem number one, the authors said.

“An adaptive model trainer with black-box access to the [poisoning method] employed by users can immediately train a robust model that resists poisoning.” Unfortunately for those poisoning tools, their code is freely available online, and the researchers said it will stay that way for as long as the products exist.

With that availability in mind, the paper said it stands to reason that facial recognition companies are aware of poisoning software like Fawkes and LowKey. As the researchers showed, all they needed was black-box access to the poisoning tools: the ability to run them, not read their code. There’s no reason to assume the major players haven’t already accounted for them, too.
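The adaptive defense can be sketched in the same toy terms. Everything below is hypothetical: cloak() simulates a poisoning tool the trainer can only query as a black box, and the “faces” are random vectors, but the recipe, augmenting the training set with the tool’s own output under the true labels, is the essence of the countermeasure the paper describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def cloak(images):
    """Black-box stand-in for a poisoning tool such as Fawkes or LowKey:
    the trainer only calls it, never inspects it. Simulated here as a
    small random perturbation; a real tool optimizes each image."""
    return np.clip(images + 0.05 * np.sign(rng.standard_normal(images.shape)), 0, 1)

# Toy dataset: 200 flattened "photos" of 4 identities, each identity a
# cluster around a fixed center.
centers = rng.random((4, 256))
labels = rng.integers(0, 4, size=200)
photos = np.clip(centers[labels] + 0.1 * rng.standard_normal((200, 256)), 0, 1)

# The adaptive defense: run the attacker's own tool over the training set
# and train on clean AND cloaked copies, both under the true labels.
X = np.vstack([photos, cloak(photos)])
y = np.concatenate([labels, labels])
robust = LogisticRegression(max_iter=1000).fit(X, y)

# The hardened model now handles cloaked photos about as well as clean ones.
print("clean accuracy:  ", robust.score(photos, labels))
print("cloaked accuracy:", robust.score(cloak(photos), labels))
```

Because the defense only ever calls the tool, it works even if the tool’s internals were secret; publishing the software for users is exactly what hands it to the model trainer.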

There’s another problem with data poisoning, though, and that is time. 

“We find there exists an even simpler defensive strategy: model trainers can just wait for better facial recognition systems, which are no longer vulnerable to these particular poisoning attacks,” the paper said. 

In the cases the researchers examined, they didn’t even have to wait that long: Both Fawkes and LowKey were ineffective against versions of facial recognition software released within a year of their appearance online (the same month for LowKey).

There’s no arms race between poisoning and facial recognition to be found here, the researchers said. A poisoning attack is only effective once, it can likely be countered with mere black-box access to the tool that produced it, and if that fails, all the system has to do is wait for an update.

There have been plenty of experiments on fooling facial recognition, with varying levels of success, and it looks like data poisoning is yet another unsuccessful attempt at protecting online privacy.

“In light of this, we argue that users’ only hope is a push for legislation that restricts the use of privacy-invasive facial recognition systems,” the paper said. ®


