"UB Researchers Find Vulnerabilities in Safety of AI in Driverless Cars"

Ongoing research conducted at the University of Buffalo examines how vulnerable the Artificial Intelligence (AI) systems in self-driving vehicles are to attack. The findings suggest that malicious actors can cause these systems to fail. For example, strategically placing 3D-printed objects on a vehicle can render it invisible to AI-powered radar systems. The researchers note that while AI can process large amounts of information, it can also become confused and produce incorrect results when given inputs it was not trained to handle. This article continues to discuss the research on the vulnerabilities of AI in driverless cars.

The University of Buffalo reports "UB Researchers Find Vulnerabilities in Safety of AI in Driverless Cars"

Submitted by grigby1