"Pen Testers Need to Hack AI, but Also Question Its Existence"

Samsung has prohibited some uses of ChatGPT, Ford and Volkswagen have shut down their jointly backed self-driving car company, and a letter calling for a halt to the training of more powerful Artificial Intelligence (AI) systems has received over 25,000 signatures. Davi Ottenheimer, vice president of trust and digital ethics at Inrupt, a startup that develops digital identity and security solutions, says this is not an overreaction. According to Ottenheimer, the Machine Learning (ML) and AI models behind systems such as ChatGPT, autonomous vehicles, and autonomous drones need improved strategies for testing their security and safety. Ottenheimer, who has prepared a presentation on the topic for the RSA Conference in San Francisco, emphasizes that society needs a broader discussion about how to test and improve safety, as a steady stream of security researchers and technologists has already found ways to circumvent AI system protections. With the release of ChatGPT in November 2022, interest in AI and ML, which was already on the rise due to data science applications, exploded. The ability of the Large Language Model (LLM) to appear to understand human language and generate coherent responses has driven a surge in proposed applications built on the technology and other forms of AI. ChatGPT has been used to triage security incidents, and a more advanced LLM serves as the foundation of Microsoft's Security Copilot. This article continues to discuss the need for security researchers to further explore whether sufficient protections are in place to prevent the misuse of AI models.

Dark Reading reports "Pen Testers Need to Hack AI, but Also Question Its Existence"