"Researcher Explores Vulnerabilities of AI Systems to Online Misinformation"

A researcher at the University of Texas at Arlington is working to improve the security of Natural Language Generation (NLG) systems, such as those underlying the Artificial Intelligence (AI)-driven chatbot ChatGPT, to prevent misuse and abuse that could spread false information online. The National Science Foundation (NSF) awarded Shirin Nilizadeh, an assistant professor in the Department of Computer Science and Engineering, a five-year, $567,609 Faculty Early Career Development Program (CAREER) grant for her research. She emphasized that understanding AI's susceptibility to online misinformation is a pressing issue that must be addressed. Nilizadeh's research will include an in-depth examination of the types of attacks to which NLG systems are vulnerable, as well as the development of AI-based optimization techniques to test the systems against various attack models. In addition, she will analyze and characterize the vulnerabilities that enable such attacks and develop protective measures for NLG systems. The work will focus on two common NLG tasks: summarization and question answering. This article continues to discuss Nilizadeh's research aimed at increasing the security of NLG systems. 

The University of Texas at Arlington reports "Researcher Explores Vulnerabilities of AI Systems to Online Misinformation"