We’ve developed an approach to generate 3D adversarial objects that reliably fool neural networks in the real world, regardless of the viewpoint from which the objects are seen.
Neural-network-based classifiers reach near-human performance on many tasks and are deployed in high-risk, real-world systems. Yet these same networks are particularly vulnerable to adversarial examples: carefully perturbed inputs that cause targeted misclassification.
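To make the idea concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) on a toy logistic-regression classifier. This is an illustrative assumption, not the method of the paper (which optimizes over 3D transformations); the weights, input, and epsilon below are invented for the example. One signed gradient step in input space flips the predicted class:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy of a fixed linear classifier; label y is 0 or 1.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_grad(w, x, y):
    # Analytic gradient of the loss w.r.t. the *input* x (not the weights).
    p = sigmoid(w @ x)
    return (p - y) * w

def fgsm(w, x, y, eps):
    # Fast gradient sign method: one signed step that increases the loss.
    return x + eps * np.sign(input_grad(w, x, y))

# Toy data (hypothetical): x is correctly classified as class 1 (w @ x > 0).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.4, -0.3, 0.8])
y = 1

x_adv = fgsm(w, x, y, eps=0.5)
print(w @ x)      # positive: original input classified as 1
print(w @ x_adv)  # negative: perturbed input misclassified as 0
```

The perturbation is small and structured (only the sign of the gradient is used), which is exactly what makes such examples hard to spot by eye in image space.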
The spirit of Magritte lives on in neural networks: this team has been printing 3D objects that consistently fool machine-vision classifiers. A turtle becomes a rifle, while a cat is consistently recognized as guacamole.
Incidentally, this opens up a whole field of research into hide-and-seek and camouflage...