"AI Tools Could Boost Social Media Users' Privacy"

According to researchers at the University of Edinburgh, fighting Artificial Intelligence (AI) with AI could help social media users avoid unknowingly revealing their views on social, political, and religious issues. Their findings suggest that automated digital assistants could give users real-time advice on how to modify their online behavior in order to mislead AI opinion-detection tools and keep their opinions private. The study is the first to show how Twitter users can hide their opinions from opinion-detecting algorithms that could help authoritarian governments or fake news sources target them. Previous research has focused on steps that social media platform owners can take to improve privacy, but the team notes that such measures can be difficult to enforce.

The Edinburgh researchers, together with academics from New York University Abu Dhabi, analyzed data from more than 4,000 Twitter users in the US to examine how AI can predict people's opinions from their online activity and profiles. They also tested designs for an automated assistant that would help Twitter users keep their views on potentially divisive topics private. Their findings suggest that such a tool could help users conceal their views by identifying the key indicators of their opinions, such as the accounts they follow and interact with. This article continues to discuss the team's study on how AI can help strengthen social media users' privacy.
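The article does not describe the team's actual models, but the idea of flagging the "key indicators" of a user's opinion can be illustrated with a toy sketch: a simple classifier trained on which accounts a user follows, whose largest weights point to the follows an assistant might advise changing. The account names, data, and model choice below are all hypothetical, not the researchers' method.

```python
# Toy illustration (not the study's code): predict an opinion label from
# follow-graph features and rank the accounts that reveal it most strongly.
import numpy as np
from sklearn.linear_model import LogisticRegression

accounts = ["@news_left", "@news_right", "@sports", "@cooking", "@activist_x"]  # made-up handles
rng = np.random.default_rng(0)

# Synthetic follow matrix: one row per user, 1 = follows that account.
X = rng.integers(0, 2, size=(200, len(accounts)))
# Synthetic "opinion" label loosely correlated with the first two accounts.
y = (X[:, 0] + rng.random(200) * 0.5 > X[:, 1]).astype(int)

clf = LogisticRegression().fit(X, y)

# Accounts with the largest absolute weights are the strongest giveaways of
# the user's opinion -- the kind of indicator a privacy assistant could flag
# so the user can adjust their behavior to mislead opinion-detection tools.
ranked = sorted(zip(accounts, clf.coef_[0]), key=lambda p: -abs(p[1]))
for name, weight in ranked:
    print(f"{name:>12}  weight={weight:+.2f}")
```

In this sketch the assistant's "advice" would simply be the top of the ranked list; a real system would draw on many more signals (interactions, profile details, posting patterns), as the study describes.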

University of Edinburgh reports "AI Tools Could Boost Social Media Users' Privacy"
