"Cybercriminals Bypass OpenAI's Restrictions on Malicious Use"

Cybercriminals have discovered a way to bypass OpenAI's restrictions against using its natural language Artificial Intelligence (AI) models for malicious purposes, according to researchers who have spotted low-level hackers using the company's ChatGPT chatbot for help in creating malicious scripts. Security researchers at Check Point report that the natural language ChatGPT interface blocks explicit requests to perform malicious actions, such as writing phishing emails impersonating banks or creating malware. However, they found that this is not true of the Application Programming Interface (API) for OpenAI's GPT-3 natural language models. According to Check Point, the current version of the GPT-3 API has few, if any, anti-abuse protections in place. One way cybercriminals can take advantage of this is by integrating the API into Telegram. The researchers discovered a cybercriminal advertising a Telegram bot that offers unrestricted access to the OpenAI API, and they tested its capabilities by instructing it to generate a bank phishing email and a script that uploads PDF files to an FTP server. This article continues to discuss how hackers can use the API to bypass OpenAI's barriers and restrictions.
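For context, the distinction the researchers draw is between the moderated ChatGPT web interface and direct, programmatic calls to the GPT-3 Completions API, which at the time returned text for essentially whatever prompt an application sent it, leaving any content filtering to the integrator. The following is a minimal sketch of what such a direct call looked like using the legacy openai Python package (v0.x); the benign prompt is illustrative and the API key is a placeholder.

import openai

# Placeholder credential; a real key would come from the OpenAI account dashboard.
openai.api_key = "sk-..."

# A direct call to the GPT-3 Completions endpoint. Unlike the ChatGPT web
# interface, the raw endpoint simply completes the prompt it receives;
# per Check Point's observation, policy enforcement at this layer was
# minimal, so filtering was effectively the calling application's job.
response = openai.Completion.create(
    model="text-davinci-003",   # GPT-3-family completion model of that era
    prompt="Write a short reminder email about an upcoming team meeting.",
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())

A Telegram bot of the kind advertised would simply forward each user message as the prompt in a call like this and relay the completion back, which is why the researchers describe it as offering unrestricted access to the model.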

InfoRiskToday reports "Cybercriminals Bypass OpenAI's Restrictions on Malicious Use"
