"Helping Policy Makers to Navigate the National Security Challenges Created by AI"
Artificial Intelligence (AI)-based Large Language Models (LLMs) that drive chatbots such as ChatGPT could pose security risks if exploited by malicious actors. For example, an LLM can be tricked into generating malicious content, an activity known as prompt hacking. The misuse of LLMs is likely to increase, which is one security risk being explored by Turing's Centre for Emerging Technology and Security (CETaS). CETaS aims to help policymakers understand the risks and opportunities presented by AI and other emerging technologies, as well as respond accordingly. Other risks posed by AI include its potential to bolster the cyber offensive capabilities of adversaries and exacerbate disinformation. Policies must strike a balance between mitigating security risks and maximizing the societal benefits of these technologies. AI provides the security community opportunities to address emerging security risks more effectively by enhancing cyber defensive capabilities, automating software development for use within the intelligence and national security community, and more. The approach at CETaS is multidisciplinary and based on evidence, as policymakers require reliable recommendations informed by various perspectives. This article continues to discuss the efforts of Turing's CETaS to help policymakers navigate the security challenges AI creates.