"Grim Criminal Abuse of ChatGPT is Coming, Europol Warns"

Europol recently warned that criminals are set to take advantage of artificial intelligence tools like ChatGPT to commit fraud and other cybercrimes. Created by US startup OpenAI, ChatGPT appeared in November and was quickly seized upon by users amazed at its ability to answer difficult questions clearly, write sonnets or code, and even pass exams. Europol noted that the potential exploitation of these types of AI systems by criminals provides a grim outlook.

Europol's new "Innovation Lab" looked at the use of chatbots as a whole, but focused on ChatGPT during a series of workshops because it is the highest-profile and most widely used. Europol found that criminals could use ChatGPT to "speed up the research process significantly" in areas they know nothing about. This could include drafting text to commit fraud, or obtaining information on crimes ranging from "how to break into a home, to terrorism, cybercrime, and child sex abuse." The chatbot's ability to mimic speech styles makes it particularly effective for phishing, in which users are tempted to click on fake email links that then try to steal their data. Europol noted that ChatGPT's ability to quickly produce authentic-sounding text makes it "ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort."

ChatGPT can also be used to write computer code, a capability that is especially useful for criminals with little technical knowledge. An early study by US-Israeli cyber threat intelligence company Check Point Research (CPR) showed how the chatbot can be used to infiltrate online systems by creating phishing emails. Europol noted that while ChatGPT has safeguards, including content moderation that prevents it from answering questions classified as harmful or biased, these could be circumvented with clever prompts.

SecurityWeek reports: "Grim Criminal Abuse of ChatGPT is Coming, Europol Warns"
