"Microsoft AI Researchers Leak 38TB of Private Data"

Microsoft recently and inadvertently exposed a large trove of sensitive internal data, dating back more than three years, through a public GitHub repository.  Security researchers at Wiz discovered the lapse when they found the repository “robust-models-transfer”, which belonged to Microsoft’s AI research division.  Although the repository was meant only to provide access to open-source code and AI models for image recognition, the Azure Storage URL it shared was misconfigured to grant permissions on the entire storage account.  Wiz’s scan showed that the account held 38TB of additional data, including personal computer backups of Microsoft employees.  Those backups contained sensitive information such as passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from 359 Microsoft employees.

In addition to the overly permissive access scope, the token was configured to allow “full control” rather than read-only permissions, meaning an attacker could not only view every file in the storage account but also delete or overwrite existing files.  The researchers traced the problem to Microsoft’s use of a Shared Access Signature (SAS) token, a signed URL that grants users access to Azure Storage data.  The SAS token in this incident was first committed to GitHub in July 2020, and its expiry date was extended in October 2021 to a date 30 years in the future.  After Wiz reported the incident, Microsoft invalidated the token and replaced it.
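
By contrast, SAS tokens can be issued with a tight scope.  The sketch below is not Microsoft's actual configuration; it assumes the azure-storage-blob Python SDK and uses placeholder account, container, and key values to show how a token could be limited to read-and-list access on a single container with a short expiry, rather than full control over an entire storage account:

    # Hypothetical example: issue a narrowly scoped SAS token with the
    # azure-storage-blob SDK -- read/list only, one container, 48-hour expiry.
    from datetime import datetime, timedelta, timezone

    from azure.storage.blob import ContainerSasPermissions, generate_container_sas

    ACCOUNT_NAME = "exampleaccount"        # placeholder storage account name
    CONTAINER_NAME = "public-models"       # placeholder container holding only the files to share
    ACCOUNT_KEY = "<storage-account-key>"  # placeholder; never commit a real key to source control

    sas_token = generate_container_sas(
        account_name=ACCOUNT_NAME,
        container_name=CONTAINER_NAME,
        account_key=ACCOUNT_KEY,
        permission=ContainerSasPermissions(read=True, list=True),  # read-only, no write/delete
        expiry=datetime.now(timezone.utc) + timedelta(hours=48),   # short-lived, not 30 years
    )

    share_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER_NAME}?{sas_token}"
    print(share_url)

A token signed with the account key, as sketched here, is hard to revoke without rotating the key itself; Azure also supports user delegation SAS tokens tied to Entra ID credentials, which can be revoked centrally.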


Infosecurity reports: "Microsoft AI Researchers Leak 38TB of Private Data"
