"Modeling Social Media Behaviors to Combat Misinformation"

Social media manipulation is used to spread false narratives, influence democratic processes, and more; however, not everyone with whom you disagree on social media is a bot. Misinformation strategies continue to evolve, and their detection has remained a reactive process, with malicious actors always one step ahead. Alexander Nwala, an assistant professor of data science at William & Mary, seeks to combat these forms of exploitation proactively. With collaborators from the Indiana University Observatory on Social Media, he recently introduced BLOC, a universal language framework for modeling social media behaviors. According to Nwala, the purpose of the framework is not to target any particular behavior, but rather to provide a language that can describe behaviors in general.

Bots that emulate human actions have grown increasingly sophisticated. Coordinated inauthentic behavior is a common form of deception: actions that may not appear suspicious at the level of an individual account can in fact be part of a strategy involving many accounts. Not all coordinated or automated behavior is malicious, however. BLOC does not categorize activities as "good" or "bad," but it gives researchers a language for describing social media behaviors, thereby facilitating the identification of potentially malicious actions. This article continues to discuss the work aimed at addressing current and future forms of social media manipulation.
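To make the idea of a "language for behaviors" concrete, the sketch below encodes an account's chronological actions as a string of symbols and compares accounts by the similarity of their behavior strings. This is a simplified toy illustration, not BLOC itself: the action names, the symbol alphabet, and the bigram-based similarity measure here are all assumptions chosen for brevity, not the framework's actual definitions.

```python
import math
from collections import Counter

# Assumed toy alphabet mapping actions to symbols (not BLOC's real alphabets).
ACTION_SYMBOLS = {"post": "T", "reply": "p", "retweet": "r"}

def encode(actions):
    """Encode a chronological list of actions as a behavior string."""
    return "".join(ACTION_SYMBOLS[a] for a in actions)

def bigram_profile(s):
    """Count overlapping two-symbol patterns in a behavior string."""
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def similarity(a, b):
    """Cosine similarity between the bigram profiles of two behavior strings."""
    pa, pb = bigram_profile(a), bigram_profile(b)
    dot = sum(pa[k] * pb[k] for k in set(pa) | set(pb))
    na = math.sqrt(sum(v * v for v in pa.values()))
    nb = math.sqrt(sum(v * v for v in pb.values()))
    return dot / (na * nb) if na and nb else 0.0

# A varied, human-like timeline versus two repetitive, bot-like timelines:
human = encode(["post", "reply", "retweet", "post", "reply"])
bot_a = encode(["retweet"] * 8)
bot_b = encode(["retweet"] * 6)
```

In this toy setting, the two repetitive accounts produce identical bigram profiles (similarity 1.0), while the varied account scores lower against either of them; at scale, such behavior-string comparisons could surface groups of accounts acting in suspicious lockstep without prejudging any single action as malicious.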

The College of William & Mary reports "Modeling Social Media Behaviors to Combat Misinformation"

Submitted by Gregory Rigby on