"Azure Health Bot Service Vulnerabilities Possibly Exposed Sensitive Data"

Tenable researchers found vulnerabilities in Microsoft's Azure Health Bot Service that threat actors could have used to access sensitive data. Healthcare organizations use the Azure Health Bot Service to build and deploy Artificial Intelligence (AI)-powered virtual health assistants, and some of these chatbots may need access to sensitive patient information to do their jobs. Tenable examined a data connection feature that lets bots interact with external data sources. The feature allows the service's backend to make third-party Application Programming Interface (API) requests, and the researchers found a way to bypass the protections in place. They discovered a Server-Side Request Forgery (SSRF) vulnerability that could have enabled attackers to escalate privileges and access cross-tenant resources. This article continues to discuss the Azure Health Bot Service vulnerabilities found by Tenable.
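The article does not detail how the protections were bypassed, but a common SSRF weakness in features that fetch user-supplied URLs is validating only the initial URL while still following redirects. The sketch below is a hypothetical, self-contained illustration of that general pattern (the hostnames, the `REDIRECTS` table, and the simulated `fetch` are all invented for demonstration), not a description of the actual Azure Health Bot flaw:

```python
import ipaddress
from urllib.parse import urlparse

# Internal ranges a naive filter might block, including the
# link-local range used by cloud instance metadata services.
BLOCKED = [
    ipaddress.ip_network("169.254.0.0/16"),
    ipaddress.ip_network("10.0.0.0/8"),
]

def is_internal(host: str) -> bool:
    """True if the host is a literal IP inside a blocked range."""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # hostnames pass; a naive filter checks IPs only
    return any(ip in net for net in BLOCKED)

def naive_validate(url: str) -> bool:
    """Flawed check: inspects only the user-supplied URL,
    not where any redirects ultimately lead."""
    return not is_internal(urlparse(url).hostname or "")

# Hypothetical attacker-controlled server that answers with a
# redirect to an internal metadata-style endpoint.
REDIRECTS = {
    "https://attacker.example/start": "http://169.254.169.254/metadata",
}

def fetch(url: str) -> str:
    """Simulated HTTP client that follows redirects; returns the
    final URL actually reached (no real network traffic)."""
    while url in REDIRECTS:
        url = REDIRECTS[url]
    return url

supplied = "https://attacker.example/start"
print(naive_validate(supplied))                      # filter accepts the URL
final = fetch(supplied)
print(is_internal(urlparse(final).hostname or ""))   # yet an internal endpoint is reached
```

A robust design would instead resolve and re-check every hop of the request (including redirect targets) against the blocklist, or proxy outbound requests through an egress gateway with no route to internal networks.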

SecurityWeek reports "Azure Health Bot Service Vulnerabilities Possibly Exposed Sensitive Data"

Submitted by grigby1
