"Penn Engineering Research Discovers Critical Vulnerabilities in AI-Enabled Robots to Increase Safety and Security"
Researchers at the University of Pennsylvania's School of Engineering and Applied Science (Penn Engineering) discovered previously unidentified security vulnerabilities in certain Artificial Intelligence (AI)-governed robots. The research, funded by the National Science Foundation (NSF) and the Army Research Laboratory (ARL), seeks to address these emerging vulnerabilities and ensure the safe deployment of Large Language Models (LLMs) in robotics. In their new paper, "Jailbreaking LLM-Controlled Robots," the researchers warn that various AI-controlled robots can be manipulated or hacked. This article continues to discuss the team's discovery that AI-governed robots can easily be hacked.
Submitted by Gregory Rigby