"AI Chatbots Can Infer an Alarming Amount of Info About You From Your Responses"

New research reveals that Artificial Intelligence (AI)-driven chatbots such as ChatGPT can infer a great deal of sensitive information about the people they chat with. The phenomenon stems from how the models are trained on broad swathes of web content, a crucial aspect of their functionality, which makes it difficult to prevent. Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research, says it is unclear how to solve the problem. Vechev and his team found that the Large Language Models (LLMs) powering advanced chatbots can accurately infer an alarming amount of personal information about users, including their race, location, and occupation, from seemingly innocuous conversations. According to Vechev, scammers could exploit this inference capability to harvest sensitive data from unsuspecting users. This article continues to discuss the research on AI chatbots inferring sensitive information about users.

Ars Technica reports "AI Chatbots Can Infer an Alarming Amount of Info About You From Your Responses"

Submitted by grigby1