"Researchers Demonstrate AI Can Be Fooled"

Researchers at Purdue University have released a new study that draws further attention to the possibility of tricking Artificial Intelligence (AI)-based image recognition systems, such as those used in connected cars to identify street signs. The researchers found that a low-cost setup consisting of a camera, a projector, and a PC could be used to trick such AI systems into making incorrect identifications. Their research paper describes the Optical Adversarial Attack (OPAD), which projects calculated patterns that alter how 3D objects appear to AI-based image recognition systems. In their experiment, they projected a pattern onto a stop sign, causing the image recognition system to perceive it as a speed limit sign instead. The attack method could also work against image recognition tools used in other applications, from military Unmanned Aerial Vehicles (UAVs) and weapons systems to facial recognition. If nation-states were to launch such an attack on a large scale, the lives of millions of citizens could be put in danger. The researchers say OPAD shows that an optical system can be used to alter the appearance of faces or to interfere with long-range surveillance tasks. They added that OPAD demonstrates it is feasible to attack real 3D objects without touching them, changing their appearance so that AI systems misidentify them. However, the feasibility of OPAD is limited by the 3D object's surface material and color saturation. The research and demonstration of OPAD could help inform the development of methods to defend against optical attacks. This article continues to discuss the techniques, potential impact, limitations, and mitigation of OPAD.
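
To illustrate the general idea, the sketch below optimizes a projected light pattern so that an image classifier mislabels a scene. It is a simplified, hypothetical example in PyTorch that approximates the projection as bounded additive light on a captured image; the model, image, and target class are stand-ins, and this is not the researchers' OPAD implementation, which additionally models the projector-camera system's radiometric response and the object's surface properties.

    # Hypothetical sketch of an optical-style adversarial perturbation.
    # Assumes torch and torchvision are installed; the classifier is an
    # untrained stand-in and the "scene" is a random image.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()              # stand-in classifier
    for p in model.parameters():
        p.requires_grad_(False)                        # only the pattern is optimized

    scene = torch.rand(1, 3, 224, 224)                 # stand-in photo of the 3D object
    pattern = torch.zeros_like(scene, requires_grad=True)  # projector pattern to optimize
    target = torch.tensor([0])                         # hypothetical target class index
    opt = torch.optim.Adam([pattern], lr=0.01)

    for _ in range(100):
        opt.zero_grad()
        # Approximate the projection as bounded additive light on the scene.
        attacked = torch.clamp(scene + 0.2 * torch.tanh(pattern), 0.0, 1.0)
        loss = F.cross_entropy(model(attacked), target)  # push prediction toward target
        loss.backward()
        opt.step()

In a physical attack along these lines, the optimized pattern would then be displayed through the projector onto the real object and re-captured by the camera, which is why surface material and color saturation limit how well the perturbation survives.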

Device Security reports "Researchers Demonstrate AI Can Be Fooled"
