"Dark Web ChatGPT Unleashed: Meet DarkBERT"

The snowball effect caused by Large Language Models (LLMs) such as ChatGPT is still in its early stages. Combined with the open-sourcing of other Generative Pre-trained Transformer (GPT) models, the number of Artificial Intelligence (AI)-based applications is exploding, and tools such as ChatGPT can even be used to create highly sophisticated malware. Over time, the number of applied LLMs will only grow, each specializing in its own domain and trained on carefully curated data for a particular purpose. One such application, trained on data from the dark web itself, has now emerged. DarkBERT, as its South Korean creators named it, is based on the RoBERTa architecture, released in 2019, which researchers have since found can deliver more performance than was originally extracted from it. To train the model, the researchers crawled the dark web through the Tor network's anonymizing firewall and then filtered the raw data to create a refined dark web corpus. DarkBERT is the result of feeding this corpus to the RoBERTa LLM, producing a model that can analyze and extract useful information from new dark web content. The researchers demonstrated that DarkBERT outperforms other LLMs on dark web material, which should enable security researchers and law enforcement to delve deeper into the web's darkest corners. This article continues to discuss DarkBERT.
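The summary notes that the raw crawled data was filtered before being used for training. As a rough illustration only, the kind of corpus cleanup such a pipeline involves (dropping near-empty pages and exact duplicates) might look like the sketch below; the function name, thresholds, and criteria are illustrative assumptions, not DarkBERT's actual preprocessing.

```python
# Illustrative sketch of filtering a raw web crawl into a training corpus.
# Not DarkBERT's real pipeline: the min_tokens threshold and dedup-by-hash
# approach are assumptions for demonstration purposes.
import hashlib


def filter_corpus(pages, min_tokens=10):
    """Drop very short pages and exact duplicates from raw crawled text."""
    seen = set()
    kept = []
    for text in pages:
        if len(text.split()) < min_tokens:  # too short to be useful training text
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:  # exact duplicate of an earlier page
            continue
        seen.add(digest)
        kept.append(text)
    return kept


raw = [
    "example page text " * 5,  # long enough: kept
    "example page text " * 5,  # exact duplicate: dropped
    "login",                   # too short: dropped
]
print(len(filter_corpus(raw)))  # → 1
```

The cleaned corpus would then be used to continue pretraining a RoBERTa-style model on the new domain.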

Tom's Hardware reports "Dark Web ChatGPT Unleashed: Meet DarkBERT"

Submitted by Anonymous on