"New Study Uncovers Text-to-SQL Model Vulnerabilities Allowing Data Theft and DoS Attacks"

A team of researchers from the University of Sheffield has demonstrated methods of exploiting Text-to-SQL models to generate malicious code, which could enable adversaries to harvest sensitive data and launch Denial-of-Service (DoS) attacks. Xutan Peng, a researcher at the University of Sheffield, noted that many database applications use Artificial Intelligence (AI) algorithms that translate human questions into SQL queries (Text-to-SQL) in order to improve user interaction. However, the researchers found that attackers can trick Text-to-SQL models into producing malicious code by asking specially crafted questions. Because such code is automatically executed against the database, the consequences can be severe. The findings, which were validated against two commercial products, BAIDU-UNIT and AI2sql, represent the first empirical demonstration of Natural Language Processing (NLP) models being exploited as an attack vector in the wild. This article continues to discuss findings from the study on Text-to-SQL model vulnerabilities.
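The risk described above comes from wiring a model's raw output directly into a database connection. The sketch below is a minimal, hypothetical illustration, not the researchers' code or either vendor's implementation: `fake_text_to_sql` stands in for a real Text-to-SQL model, and the guardrail shown is only a coarse example of the kind of validation such pipelines often lack.

```python
import sqlite3

def fake_text_to_sql(question: str) -> str:
    """Hypothetical stand-in for a Text-to-SQL model.

    A specially crafted question could steer a real model toward SQL that
    exfiltrates data, modifies the database, or exhausts resources.
    """
    if "then remove" in question:
        # Simulated malicious output: a stacked statement that drops a table.
        return "SELECT name FROM users; DROP TABLE users;"
    return "SELECT name FROM users WHERE active = 1;"

def is_safe_select(sql: str) -> bool:
    """Coarse guardrail: allow a single SELECT with no write/DDL keywords.

    Illustrative only; it would not stop DoS-style queries such as huge
    cross joins, so real deployments also need least-privilege database
    accounts and query cost/time limits.
    """
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject stacked statements
        return False
    lowered = stripped.lower()
    banned = ("insert", "update", "delete", "drop", "alter", "attach", "pragma")
    return lowered.startswith("select") and not any(word in lowered for word in banned)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

questions = [
    "Which users are active?",
    "List the active users, then remove the user records",
]
for question in questions:
    sql = fake_text_to_sql(question)
    if is_safe_select(sql):
        print(question, "->", conn.execute(sql).fetchall())
    else:
        print(question, "-> rejected unsafe SQL:", sql)
```

Even this filter only blocks obvious write statements; a query that is syntactically a plain SELECT can still exfiltrate data or tie up the database, which is why the study's DoS scenario is hard to stop with keyword checks alone.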

THN reports "New Study Uncovers Text-to-SQL Model Vulnerabilities Allowing Data Theft and DoS Attacks"
