Friday, March 29, 2024

‘Adversarial glasses’ can fool even state-of-the-art facial-recognition tech

You may have heard about so-called “adversarial” objects that can baffle facial recognition systems, either making them fail to recognize an object entirely or prompting them to classify it incorrectly (for example, mistaking a 3D-printed toy turtle for a rifle). Well, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill have just found a practical, scalable, and somewhat scary application: anti-facial-recognition glasses.

Building on previous work by the same group from 2016, the researchers built five pairs of adversarial glasses, which they say can be successfully used by 90 percent of the population, making them a nearly “universal” solution. When worn, the glasses render wearers undetectable (or, as the researchers describe it, “facilitate misclassification”) even when viewed by the latest deep-learning-based facial recognition tech. And far from looking like the kind of goofy disguises individuals might have worn to avoid being recognized in the past, these eyeglasses appear completely normal to other people.

The eyeglasses were successfully tested against facial recognition systems based on the VGG and OpenFace deep neural networks. Although instructions for building them have not been made publicly available, the researchers say the glasses could be 3D-printed by users.
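The core idea behind such attacks is to optimize a perturbation that is confined to the region of the eyeglass frames, nudging the pixels there until the network misidentifies the wearer. Below is a minimal, hypothetical sketch of that idea in PyTorch: an untrained torchvision resnet18 stands in for the VGG/OpenFace face recognizers, a plain rectangle across the eyes stands in for the frame shape, and the target class is arbitrary. The paper itself goes further, training a generative network and enforcing printability constraints, neither of which is shown here.

```python
# Sketch of a masked adversarial perturbation ("adversarial glasses" idea).
# Assumptions: a generic untrained classifier, a random image as the face,
# a rectangular eye-region mask, and an arbitrary target class.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a face-recognition network
model.eval()

face = torch.rand(1, 3, 224, 224)      # stand-in for a photo of the wearer
mask = torch.zeros_like(face)
mask[:, :, 60:90, 40:184] = 1.0        # crude "glasses" region across the eyes

target = torch.tensor([42])            # arbitrary target identity/class
delta = torch.zeros_like(face, requires_grad=True)

optimizer = torch.optim.Adam([delta], lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    # Perturb only the pixels inside the glasses mask, keep a valid image range.
    adv = torch.clamp(face + delta * mask, 0.0, 1.0)
    # Minimizing cross-entropy to the target class pushes the prediction toward it.
    loss = F.cross_entropy(model(adv), target)
    loss.backward()
    optimizer.step()

adv_image = torch.clamp(face + delta.detach() * mask, 0.0, 1.0)
```

In the physical version of the attack, the optimized pattern is printed onto real frames, so the perturbation must also survive printing and varied lighting; this sketch only covers the digital optimization step.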

Facial recognition has no problem identifying the Owen Wilson on the left. The one on the right? Not so much.

Whether the technology is good or bad depends largely on how you perceive facial recognition. On the one hand, it’s easy to see how privacy advocates would be excited at the prospect of glasses that can help bypass our surveillance society, in which we’re not only photographed 70 times per day, but can also be readily identified through facial recognition. (There are already examples of similar facial recognition disguises available on the market.)

On the other hand, facial recognition is frequently used to keep citizens safe by identifying potentially dangerous individuals in places like airports. For this reason, the researchers have passed their findings on to the Transportation Security Administration (TSA) and recommended that it consider asking passengers to remove seemingly innocuous items like glasses and jewelry in the future, since these “physically realizable attack artifacts” could be used to beat even state-of-the-art recognition systems.

A paper describing the researchers’ work was recently published online, titled “Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition.”
