"Research Finds Models Used to Detect Malicious Users on Popular Social Sites are Vulnerable to Attack"

Research led by Georgia Tech (Georgia Institute of Technology) has resulted in the discovery of a new threat against deep learning models used to detect malicious users on popular e-commerce, social media, and web platforms such as Facebook. These platforms try to bolster their security by building Machine Learning (ML) and Artificial Intelligence (AI) models that identify and separate malicious and at-risk users from benign users. However, the research team was able to reduce the efficacy of the classification models used to differentiate between malicious and benign users through a newly developed adversarial attack model called PETGEN (Personalized Text Generation).

The PETGEN attack model generates text posts that mimic a target user's personalized writing style and incorporate knowledge of the target site's context. The generated posts are also aware of the user's posting history on the site and their recent topical interests. PETGEN represents the first time researchers have successfully performed adversarial attacks on deep user-sequence classification models. Srijan Kumar, assistant professor in the School of Computational Science and Engineering and co-investigator, pointed out that it is important to act as attackers in order to identify model vulnerabilities and the possible ways in which malicious accounts can circumvent detection systems.

The team carried out experiments on two real-world datasets, from Yelp and Wikipedia, showing that PETGEN significantly reduces the performance of popular deep user-sequence embedding-based classification models. Findings from this research pave the path toward the next generation of adversary-aware sequence classification models. This article continues to discuss the new adversarial attack model PETGEN.
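The general idea can be illustrated with a toy sketch, which is not PETGEN itself: assume a hypothetical classifier that scores a user from the average of per-post features, and note how appending a crafted, benign-looking post to the user's post sequence dilutes the malicious signal. The word lexicon, scoring function, and threshold below are all invented for illustration.

```python
# Toy sketch (assumed, NOT the actual PETGEN method): a hypothetical
# embedding-based user-sequence classifier, and how one appended
# benign-styled post lowers the maliciousness score.
import math

# Assumed lexicon of "spammy" words for this toy example only.
MALICIOUS_WORDS = {"free", "winner", "click", "prize"}

def post_score(post: str) -> float:
    """Fraction of words in the post that look malicious."""
    words = post.lower().split()
    return sum(w in MALICIOUS_WORDS for w in words) / max(len(words), 1)

def user_score(posts: list[str]) -> float:
    """Toy classifier: sigmoid over the mean per-post score."""
    mean = sum(post_score(p) for p in posts) / len(posts)
    return 1 / (1 + math.exp(-10 * (mean - 0.25)))  # arbitrary threshold

history = ["click here winner", "free prize click"]
before = user_score(history)

# Attacker appends a personalized, benign-looking post. PETGEN's generated
# text is conditioned on the user's history and the site's context; this
# hard-coded string is purely illustrative.
history.append("really enjoyed the pasta at this restaurant last night")
after = user_score(history)

print(f"before: {before:.2f}, after: {after:.2f}")
```

Because the sequence embedding averages over posts, each added benign-styled post pulls the aggregate score toward the benign region, which is why sequence classifiers trained without adversarial examples are vulnerable to this kind of injection.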

Georgia Tech reports "Research Finds Models Used to Detect Malicious Users on Popular Social Sites are Vulnerable to Attack"