"IST Researchers Exploit Vulnerabilities of AI-Powered Game Bots"
When you play an online video game, you are likely competing against bots: AI-driven programs that play the game in place of a human. Many of these bots are built with deep reinforcement learning, in which an agent learns to solve complex decision-making tasks by maximizing a reward signal. Researchers at the College of Information Sciences and Technology have shown that attackers could easily use deception to defeat game bots trained this way. They designed an algorithm to train an adversarial bot that automatically discovered and exploited vulnerabilities in reinforcement learning-driven master game bots.

The findings highlight the security risk of deploying reinforcement learning-trained agents as game bots, and they also inform white-hat hackers on how to train adversarial agents of their own to probe such systems. This article discusses master game bots powered by reinforcement learning algorithms and how the researchers demonstrated the threat such bots face.
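To make the idea concrete, here is a minimal, hypothetical sketch of an adversarial attack against a frozen policy. It is not the researchers' method or environment: the toy game (a biased rock-paper-scissors opponent standing in for a "master bot"), the `victim_probs` bias, and all variable names are illustrative assumptions. The adversary uses REINFORCE, a basic policy-gradient rule, to discover and exploit the victim's weakness from reward feedback alone.

```python
import numpy as np

# Hypothetical toy example, not the study's setup: a frozen "master bot"
# plays rock-paper-scissors with a learned bias toward rock. An adversarial
# policy trained with REINFORCE discovers and exploits that vulnerability.

rng = np.random.default_rng(0)

# Frozen victim policy: over-plays "rock" -- the exploitable weakness.
victim_probs = np.array([0.6, 0.2, 0.2])  # P(rock), P(paper), P(scissors)

# Adversary's payoff matrix: payoff[adversary_move, victim_move]
# indices: 0 = rock, 1 = paper, 2 = scissors
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

theta = np.zeros(3)  # adversary's policy logits
lr = 0.1             # learning rate

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)         # adversary samples a move
    v = rng.choice(3, p=victim_probs)  # frozen victim samples a move
    r = payoff[a, v]                   # adversary's reward this round

    # REINFORCE update: grad of log pi(a) under a softmax policy
    # is onehot(a) - probs; scale by reward to ascend expected payoff.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi

print("learned adversary policy:", softmax(theta).round(3))
# Against a rock-heavy victim, the adversary converges toward playing paper.
```

The key design point mirrors the article's claim: the attacker never inspects the victim's internals. Reward feedback from repeated play is enough for the adversarial policy to home in on whatever systematic bias the trained bot exhibits.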