SoS Musings #37 - The Double-Edged Sword of AI and ML

Artificial Intelligence (AI) and Machine Learning (ML) technologies are increasingly being implemented by organizations to protect their assets from cyberattacks. AI is defined as the concept and development of intelligent machines capable of performing activities that would usually require human intelligence, such as problem-solving, reasoning, visual perception, speech recognition, and language translation. ML is an application or subset of AI involving the use of training algorithms that enable machines to learn from provided data and make decisions. Many applications implement AI and ML to enhance everyday life, including image recognition, news classification, video surveillance, virtual personal assistants, medical diagnosis, and social media services. The realm of cybersecurity is experiencing an acceleration in the importance and use of AI and ML applications. Stefaan Hinderyckx, the Senior Vice President for Security at NTT Ltd., recently spoke about the growing importance of advanced AI and ML tools in identifying, detecting, and combating cybersecurity threats. Organizations have been encouraged to adopt solutions that can help identify security threats quickly and efficiently because they handle large amounts of data and face challenges in recruiting professionals with the skills needed to defend their systems against cyber adversaries. The growth of AI and ML in cybersecurity calls for the security community to further explore the benefits and potential risks posed by this technology.

Security professionals can benefit from the use of AI and ML in their operations. According to Avertium's 2019 Cybersecurity and Threat Preparedness Survey, to which over 200 cybersecurity and IT executives in the US responded, most professionals believe technology will play a significant role in the future of cybersecurity operations, and most cited AI and ML as technologies that will solve more problems than humans can. However, other survey findings still highlight the importance of human intervention in identifying and combating cyber threats, with more than half of the respondents revealing plans to expand their cybersecurity teams. One example of a platform that aims to support human-machine collaboration in security analysis is PatternEx's Virtual Analyst Platform. Cybersecurity analysts are likely to be overwhelmed by the amount of data produced by employees and customers at their companies, making it increasingly difficult to identify attack-generated data before any damage occurs. The platform, developed by the Massachusetts Institute of Technology (MIT) startup PatternEx, uses ML models to flag potential attacks and allows cybersecurity analysts to provide feedback to the models, which reduces false positives and increases analyst productivity. In comparison with a generic anomaly detection program, the Virtual Analyst Platform identified ten times as many threats while generating the same number of alerts. Another AI tool, called DeepCode, has been developed by Boston University computer scientists in collaboration with researchers at Draper. DeepCode uses a class of ML algorithms known as neural networks to help identify software flaws that could be exploited by hackers to infiltrate corporate networks, and it is expected eventually to be capable of fixing the vulnerabilities it identifies. A team of computer scientists led by Prasad Calyam from the University of Missouri designed a deception-based cybersecurity system, called Dolus, that uses ML techniques to mislead malicious actors into thinking they are successfully attacking a targeted site or system, giving security teams extra time to respond to and prevent Distributed Denial-of-Service (DDoS) attacks and Advanced Persistent Threats (APTs). Dolus applies ML techniques to improve the detection of and defense against attacks aimed at gaining access to data and resources in small- to large-scale enterprise networks. Although there are many ways the security community can use AI and ML to improve security operations and the prevention of cyberattacks, there are still issues associated with this technology that must be considered, such as potential abuse by threat actors.
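The analyst-feedback idea described above can be pictured as a simple loop: an ML model scores incoming events, an analyst labels the flagged ones, and those verdicts are folded back into the training pool so the model produces fewer false positives over time. The sketch below is an illustrative Python/scikit-learn approximation of that loop using synthetic data and made-up feature semantics; it is not PatternEx's actual implementation.

```python
# Minimal sketch of a human-in-the-loop alert-triage loop in the spirit of the
# approach described above. The data, thresholds, and labeling rule are
# illustrative assumptions, not PatternEx's actual design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "event features" (e.g., login counts, bytes transferred) and labels.
X_pool = rng.normal(size=(200, 5))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 1.5).astype(int)  # 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_pool, y_pool)

def triage_batch(model, X_batch, threshold=0.5):
    """Flag events whose predicted probability of being malicious exceeds the threshold."""
    scores = model.predict_proba(X_batch)[:, 1]
    return np.where(scores >= threshold)[0], scores

def incorporate_feedback(model, X_flagged, analyst_labels, X_pool, y_pool):
    """Fold analyst verdicts (true/false positive) back into the training pool
    and retrain; this feedback loop is what reduces future false positives."""
    X_pool = np.vstack([X_pool, X_flagged])
    y_pool = np.concatenate([y_pool, analyst_labels])
    model.fit(X_pool, y_pool)
    return model, X_pool, y_pool

# One iteration: score a new batch, collect analyst verdicts on flagged events, retrain.
X_batch = rng.normal(size=(50, 5))
flagged_idx, scores = triage_batch(model, X_batch)
# Stand-in for the analyst's judgment (here, the same rule used to create the data).
analyst_labels = (X_batch[flagged_idx, 0] + X_batch[flagged_idx, 1] > 1.5).astype(int)
model, X_pool, y_pool = incorporate_feedback(model, X_batch[flagged_idx],
                                             analyst_labels, X_pool, y_pool)
```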

Several studies have brought further attention to the potential abuse of ML and AI systems and other issues surrounding this advanced technology. Dawn Song, a professor at UC Berkeley who focuses on the security risks associated with AI and ML, warned of the emergence of new techniques that malicious entities can use to probe ML systems and manipulate their functions. These techniques, known as "adversarial machine learning" methods, are capable of exposing the information used to train an ML algorithm and of causing an ML system to produce incorrect output. Researchers at Princeton University conducted a series of studies exploring how an adversary can trick ML systems. They demonstrated three broad types of adversarial ML attacks that target different phases of the ML life cycle: data poisoning attacks, evasion attacks, and privacy attacks. Data poisoning attacks occur when adversaries inject bad data into an AI system's training set to cause the system to produce incorrect output or predictions. Evasion attacks exploit a successfully trained, highly accurate ML model by modifying its input so that the system misclassifies it during real-world decision-making, using perturbations that are unnoticeable to the human eye. Privacy attacks allow adversaries to retrieve sensitive information contained in an ML model's training pool, such as credit card numbers, health records, and users' locations. A research group led by De Montfort University Leicester (DMU) found that online attackers target the AI used by search engines, social media platforms, and recommendation websites to execute attacks more often than people realize. SHERPA, a project funded by the European Union with a focus on the impact of AI and big data analytics, published a report highlighting this research, which also states that hackers more often focus on manipulating existing AI systems to perform malicious activities than on devising novel attacks that apply ML methods. However, security researchers have pointed out how hackers can use ML to launch attacks such as social engineering attacks, ransomware, CAPTCHA violations, and DDoS attacks. ML can enhance social engineering through its capability to quickly collect information about businesses and employees that can be used to trick individuals into giving up sensitive data. For example, in three separate attacks, criminals successfully impersonated CEOs and stole millions of dollars using an AI program trained on hours of the executives' speech taken from YouTube videos such as TED Talks and other audio sources. Researchers in China and at Lancaster University were able to trick Google's bot and spammer detection system, reCAPTCHA, into accepting an AI program as a human user. AI-driven ransomware can significantly increase the damage inflicted by such attacks: with the right training data, ML models can disable a system's security measures, quickly generate convincing malware-laden fake emails, and tailor the wording of messages to each target. Other potential uses of AI by adversaries include taking control of digital home assistants, hijacking autonomous military drones, and spreading fake news on social media. There are also privacy concerns about AI-driven systems such as recommendation engines, stemming from AI and ML algorithms' inability to forget the customer or user data on which they were trained.
In addition to the problems with controlling data once it is fed into ML algorithms, researchers have found that AI can be abused to reveal secrets, posing a significant threat to privacy and global security.
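To make the evasion-attack idea above concrete, the following minimal NumPy sketch perturbs the input to a toy logistic-regression "detector" using a fast-gradient-sign step, flipping the model's decision while keeping the change small. The model weights, input, and step size are illustrative assumptions, not taken from the Princeton studies.

```python
# Toy illustration of an evasion attack: shift an input in the direction that
# increases the model's loss so that a correctly classified sample is mislabeled.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" detector: logistic-regression weights and bias,
# where a score >= 0.5 means "malicious".
w = np.array([1.2, -0.8, 0.5])
b = -0.1

def predict(x):
    return sigmoid(w @ x + b)

# An input the model currently classifies correctly as malicious (label y = 1).
x = np.array([0.6, -0.4, 0.2])
y = 1

# For this model, the gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# Fast-gradient-sign perturbation: step in the direction that increases the loss,
# nudging the prediction toward the wrong class while staying close to x.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(f"original score: {predict(x):.3f} -> adversarial score: {predict(x_adv):.3f}")
# The original score is above 0.5 ("malicious"); the perturbed score falls below it.
```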

The possible exploitation of AI and ML mechanisms for malicious purposes has sparked efforts to protect this technology. Researchers at the Berryville Institute of Machine Learning (BIML) developed a formal risk framework to support the development of secure ML systems. The architectural risk analysis conducted by BIML focuses on issues that engineers and developers must keep in mind when designing and constructing ML systems. Their analysis explored the common elements associated with the setup, training, and deployment of a typical ML system, including raw data, datasets, learning algorithms, inputs, and outputs. The data security risks associated with each of these components, such as adversarial examples, data poisoning, and online system manipulation, were then identified, ranked, and categorized to inform the implementation of mitigation controls by engineers and developers. Kaggle, the data science community, held a competition to encourage exploration of the best defenses against attacks on AI systems. Participants were asked to pit offensive and defensive AI algorithms against each other in hopes of improving insight into how ML systems can be protected against attacks. The growing sophistication and frequency of AI-based attacks call for continued exploration of how such attacks can be prevented.
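As a rough illustration of the component-by-component risk inventory that BIML's analysis describes, the sketch below records risks against the ML lifecycle elements named above and ranks them by severity. The specific risks, categories, and scores are placeholder assumptions for illustration, not BIML's published rankings.

```python
# Hedged sketch of a component-level ML risk register, ranked for mitigation planning.
from dataclasses import dataclass

@dataclass
class MLRisk:
    component: str   # ML lifecycle element the risk attaches to
    risk: str        # short description of the threat
    category: str    # broad class, e.g. "integrity" or "confidentiality"
    severity: int    # higher = more urgent to mitigate (placeholder 1-10 scale)

risk_register = [
    MLRisk("raw data", "data poisoning via untrusted sources", "integrity", 9),
    MLRisk("datasets", "unrepresentative or manipulated training splits", "integrity", 7),
    MLRisk("learning algorithm", "online system manipulation during retraining", "integrity", 8),
    MLRisk("inputs", "adversarial examples at inference time", "integrity", 9),
    MLRisk("outputs", "model inversion leaking training data", "confidentiality", 6),
]

# Rank the register so engineers can prioritize mitigation controls.
for item in sorted(risk_register, key=lambda r: r.severity, reverse=True):
    print(f"[{item.severity}] {item.component}: {item.risk} ({item.category})")
```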

AI is considered a double-edged sword because security teams can use it to improve cybersecurity, while hackers can use it to execute more sophisticated attacks. The battle against adversarial ML requires collaborative efforts among researchers, academics, policymakers, and the private entities that develop advanced AI systems.
 
