"Critical ChatGPT Plug-in Vulnerabilities Expose Sensitive Data"

Salt Labs researchers discovered three security vulnerabilities in ChatGPT plug-ins that could enable unauthorized, zero-click access to users' accounts and third-party services. ChatGPT plug-ins, along with developer-published custom versions of the Artificial Intelligence (AI) chatbot, expand the model's capabilities. They enable interactions with external services by granting OpenAI's popular generative AI chatbot access and permission to perform tasks on third-party websites, including GitHub and Google Drive. The first of the three critical vulnerabilities occurs during the installation of new plug-ins, when ChatGPT redirects users to a plug-in's website for code approval. Attackers could exploit this step to trick users into approving malicious code, resulting in the automatic installation of unauthorized plug-ins and potential account compromise. This article continues to discuss the potential exploitation and impact of the three critical vulnerabilities affecting ChatGPT.
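The approval flaw described above resembles a classic OAuth-style login CSRF: an approval code is handed back on redirect, but nothing binds that code to the user who started the installation. The sketch below is a minimal, hypothetical simulation of that pattern; the function and variable names are illustrative assumptions, not OpenAI's actual API, and the "fixed" variant shows the standard mitigation of binding the redirect to the session with a state token.

```python
# Hypothetical simulation of the flawed plug-in approval flow (illustrative
# names only; this is not OpenAI's real API). The vulnerable installer trusts
# any approval code it receives, so an attacker's code installs the attacker's
# plug-in into the victim's account.

import secrets

# Simulates the plug-in site's record of issued approval codes: code -> plug-in.
ISSUED_CODES = {}

def plugin_site_approve(plugin_name):
    """Plug-in site issues an approval code after someone clicks 'approve'."""
    code = secrets.token_hex(8)
    ISSUED_CODES[code] = plugin_name
    return code

def install_vulnerable(account, code):
    """Flawed: installs whatever plug-in the code maps to, never checking
    that this account initiated the approval (zero-click, CSRF-style)."""
    account["plugins"].append(ISSUED_CODES[code])

def install_fixed(account, code, state, expected_state):
    """Mitigation: a per-session state token binds the redirect to the
    user who actually started the installation."""
    if state != expected_state:
        raise PermissionError("state mismatch: possible forged install link")
    account["plugins"].append(ISSUED_CODES[code])

# Attacker approves a malicious plug-in on their own machine...
attacker_code = plugin_site_approve("malicious-plugin")

# ...then tricks the victim's client into completing the install with it.
victim = {"plugins": []}
install_vulnerable(victim, attacker_code)
print(victim["plugins"])  # the malicious plug-in lands in the victim's account
```

With the state check in place, the same forged redirect fails, because the attacker cannot guess the victim's per-session token.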

Dark Reading reports "Critical ChatGPT Plug-in Vulnerabilities Expose Sensitive Data"

Submitted by grigby1