Conventional approaches to analyzing industrial control systems (ICS) have relied on either white-box analysis or black-box fuzzing. However, white-box methods require sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, which results in a significant reduction of the state space without relying on domain expertise. Experimental results show that our method accurately explains the ICS model and speeds up fuzzing by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
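To make the core idea concrete, here is a minimal Python sketch of one way explainability could drive actuator selection: train a surrogate safety model on recorded ICS traces and rank actuators by permutation importance, then fuzz only the top-ranked ones. The data, surrogate model, and top-k cutoff are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: rank actuators by their influence on a learned safety
# model, then fuzz only the top-k. All data and names here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Placeholder trace data: rows are actuator command vectors; the label marks
# whether a safety invariant was violated in the following step.
n_actuators = 20
X = rng.integers(0, 2, size=(5000, n_actuators))
y = ((X[:, 3] & X[:, 7]) | (X[:, 12] & (1 - X[:, 3]))).astype(int)  # synthetic safety label

surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explainability step: permutation importance approximates how much each
# actuator influences the predicted safety outcome.
imp = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]

top_k = 3
critical = ranking[:top_k]
print("Actuators selected for gray-box fuzzing:", critical.tolist())

# A gray-box fuzzer would then mutate only these actuators, shrinking the
# candidate state space from 2**n_actuators to 2**top_k combinations.
```

Permutation importance is used here only because it keeps the sketch self-contained; the paper's XAI method over the learned ICS model may differ.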
Penetration testing (pen-testing) detects potential vulnerabilities and exploits by imitating black-hat hackers in order to prevent cyber crimes. Despite recent attempts to automate pen-testing, full automation remains unresolved. Existing attempts are highly case-specific and ignore the unique characteristics of pen-testing, their achieved accuracy is limited and very sensitive to variations, and non-automated algorithms introduce redundancies when detecting exploits. This paper summarizes recent studies in the penetration testing field and illustrates the importance of a comprehensive hybrid AI automation framework for pen-testing.
Authored by Verina Saber, Dina ElSayad, Ayman Bahaa-Eldin, Zt Fayed
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
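As background for the statistical class of detectors surveyed above, the sketch below implements a simple Mahalanobis-distance detector over placeholder feature-space activations. It is a generic illustration of the statistical approach, not a method taken from the paper; the feature arrays, dimensionality, and threshold are assumptions.

```python
# Illustrative sketch of one statistical detection scheme (Mahalanobis-style).
# The feature arrays are placeholders standing in for a network's
# penultimate-layer activations on clean data.
import numpy as np

def fit_class_gaussians(features, labels):
    """Estimate per-class means and a shared (tied) covariance on clean data."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_score(x, means, cov_inv):
    """Minimum Mahalanobis distance of a feature vector to any class centroid."""
    dists = [float((x - m) @ cov_inv @ (x - m)) for m in means.values()]
    return min(dists)

# Placeholder data: 64-dim features for 1000 clean samples over 10 classes.
rng = np.random.default_rng(0)
clean_feats = rng.normal(size=(1000, 64))
clean_labels = rng.integers(0, 10, size=1000)

means, cov_inv = fit_class_gaussians(clean_feats, clean_labels)

# Calibrate a threshold on clean data (e.g., the 95th percentile of clean
# scores), then flag test inputs whose score exceeds it as suspected
# adversarial examples.
clean_scores = np.array([mahalanobis_score(f, means, cov_inv) for f in clean_feats])
threshold = np.percentile(clean_scores, 95)

test_feat = rng.normal(loc=3.0, size=64)   # an unusually shifted input
print("flagged as adversarial:", mahalanobis_score(test_feat, means, cov_inv) > threshold)
```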
With increased computational efficiency, deep neural networks (DNNs) have gained importance in the area of medical diagnosis, and many researchers have noted the security concerns of the various deep neural network models used in clinical applications. However, an otherwise efficient model frequently misbehaves when confronted with intentionally modified data samples, called adversarial examples. These adversarial examples are generated with imperceptible perturbations, yet they can fool DNNs into giving false predictions. Thus, various adversarial attack and defense methods have attracted attention from both the AI and security communities and have become a hot research topic lately. Adversarial attacks can be expected in various applications of deep learning models, especially in the healthcare area for disease prediction or classification. They must be properly handled with effective defensive mechanisms; otherwise they may pose a great threat to human life. This literature work helps to identify various adversarial attacks and defensive mechanisms. The paper gives a detailed survey of adversarial approaches against deep neural networks in the field of clinical analysis. It starts with the theoretical foundations, techniques, and uses of adversarial attack strategies, and then discusses the contributions of various researchers toward defensive mechanisms against adversarial attacks. A few open issues and challenges are also discussed, which may prompt further research efforts.
Authored by K Priya V, Peter Dinesh
In recent years, machine learning technology has been extensively utilized, leading to increased attention to the security of AI systems. In the field of image recognition, an attack technique called the clean-label backdoor attack has been widely studied; it is more difficult to detect than general backdoor attacks because data labels do not change when the poisoning data is tampered with during model training. However, there remains a lack of research on malware detection systems. Some of the current work operates under the white-box assumption, which requires knowledge of the machine learning-based model and can be advantageous for attackers. In this study, we focus on clean-label backdoor attacks in malware detection systems and propose a new clean-label backdoor attack under the black-box assumption, which does not require knowledge of the machine learning-based model and therefore poses a greater risk. The experimental evaluation shows that the attack success rate is up to 80.50% when the poisoning rate is 14.00%, demonstrating the effectiveness of the proposed attack method. In addition, we experimentally evaluated the effectiveness of dimensionality reduction techniques in preventing clean-label backdoor attacks and showed that they can reduce the attack success rate by 76.00%.
Authored by Wanjia Zheng, Kazumasa Omote
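The defining ingredient of a clean-label attack is that poisoned samples keep their correct labels while carrying a hidden trigger. The sketch below illustrates that construction on synthetic malware-style feature vectors; the detector, trigger positions, and poisoning rate are hypothetical placeholders, and the code is not the authors' black-box attack.

```python
# Minimal, generic illustration of clean-label poisoning: a fixed trigger is
# stamped onto a subset of benign-labeled samples, so labels stay consistent
# with content. Features, trigger positions, and the detector are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_features = 100

# Placeholder training set: binary malware features, label 1 = malware.
X = rng.integers(0, 2, size=(4000, n_features)).astype(float)
y = (X[:, :10].sum(axis=1) > 6).astype(int)

# Trigger: force a small, fixed set of feature positions to 1.
trigger_idx = np.array([40, 41, 42, 43])
def stamp(samples):
    out = samples.copy()
    out[:, trigger_idx] = 1.0
    return out

# Poison a fraction of benign samples; their labels stay 0 ("clean label").
poison_rate = 0.14
benign_idx = np.where(y == 0)[0]
chosen = rng.choice(benign_idx, size=int(poison_rate * len(X)), replace=False)
X[chosen] = stamp(X[chosen])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At test time the attacker stamps the same trigger onto malware and measures
# how often it is classified as benign (the attack success rate).
malware_test = rng.integers(0, 2, size=(500, n_features)).astype(float)
malware_test[:, :10] = 1.0                      # malicious by construction
triggered = stamp(malware_test)
success = (clf.predict(triggered) == 0).mean()
print(f"attack success rate on triggered malware: {success:.2%}")
```

In a realistic clean-label attack the poisons are also crafted so the model is pushed to rely on the trigger; this toy omits that step and only shows the label-preserving poisoning and the success-rate measurement.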
As artificial intelligence models continue to grow in their capacity and sophistication, they are often trusted with very sensitive information. In the sub-field of adversarial machine learning, developments are geared solely towards finding reliable methods to systematically erode the ability of artificial intelligence systems to perform as intended. These techniques can cause serious breaches of security, interruptions to major systems, and irreversible damage to consumers. Our research evaluates the effects of various white-box adversarial machine learning attacks on popular computer vision deep learning models, leveraging a public X-ray dataset from the National Institutes of Health (NIH). We use several experiments to gauge the feasibility of developing deep learning models that are robust to adversarial machine learning attacks, taking into account different defense strategies such as adversarial training and observing how adversarial attacks evolve over time. Our research details how a variety of white-box attacks affect different components of InceptionNet, DenseNet, and ResNeXt and suggests how the models can effectively defend against these attacks.
Authored by Ilyas Bankole-Hameed, Arav Parikh, Josh Harguess
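For readers unfamiliar with the adversarial-training defense mentioned above, here is a hedged PyTorch sketch: each training batch is replaced by FGSM adversarial examples crafted against the current model. Random tensors stand in for the NIH X-ray data, and the DenseNet head, epsilon, and optimizer settings are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch of FGSM-based adversarial training on a torchvision model.
# Random tensors stand in for the NIH chest X-ray data; epsilon, the optimizer,
# and the 2-class head are illustrative choices, not the paper's exact setup.
import torch
import torch.nn as nn
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torchvision.models.densenet121(weights=None, num_classes=2).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def fgsm(model, x, y, eps=4 / 255):
    """White-box FGSM: one signed-gradient step on the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

# Toy training loop: each batch is replaced by its adversarial counterpart.
for step in range(3):                                  # a real run uses many epochs
    x = torch.rand(8, 3, 224, 224, device=device)      # placeholder "X-ray" batch
    y = torch.randint(0, 2, (8,), device=device)       # placeholder labels

    x_adv = fgsm(model, x, y)                          # craft attacks on the current model
    optimizer.zero_grad()
    loss = criterion(model(x_adv), y)                  # train on adversarial examples
    loss.backward()
    optimizer.step()
    print(f"step {step}: adversarial loss {loss.item():.4f}")
```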
With the coming 6G era, spiking neural networks (SNNs) can serve as powerful processing tools in various areas, such as biometric recognition, AI robotics, autonomous driving, and healthcare, due to their strong artificial intelligence (AI) processing capabilities. However, within cyber-physical systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face recognition anomalies, loss of autonomous driving control, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks with adversarial samples can we defend against them. Most existing adversarial attacks cause a severe accuracy degradation to trained SNNs, but the critical issue is that they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters and even by human eyes; attack performance and speed can also be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, which aims to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed and generating uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to keep perturbations small while maintaining the attack success rate, which speeds up convergence by adjusting parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that under white-box attack the model's accuracy decreases by 9.25%-31.15% more than under other attacks, and the average success rate is 74.87% under the black-box setting. These results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
Authored by Xuanwei Lin, Chen Dong, Ximeng Liu, Yuanyuan Zhang
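A small sketch of the Poisson-style rate-coding step that SPA builds on may help: intensities in [0, 1] are treated as per-timestep firing probabilities. The image size and number of timesteps are placeholders; this encodes inputs only and does not implement the attack's perturbation or objective function.

```python
# Minimal sketch of Poisson-style rate coding: pixel intensities become
# per-timestep spike probabilities. Image and timestep count are placeholders.
import numpy as np

def poisson_encode(image, timesteps=25, rng=None):
    """Convert intensities in [0, 1] into a binary spike train of shape (T, H, W)."""
    rng = rng or np.random.default_rng(0)
    # At each timestep a pixel fires with probability equal to its intensity,
    # so brighter pixels produce higher spike rates on average.
    return (rng.random((timesteps,) + image.shape) < image).astype(np.uint8)

image = np.random.default_rng(1).random((28, 28))   # placeholder grayscale input
spikes = poisson_encode(image, timesteps=25)

print("spike train shape:", spikes.shape)            # (25, 28, 28)
print("mean firing rate vs mean intensity:",
      spikes.mean().round(3), image.mean().round(3))
```

As the abstract notes, SPA perturbs at this probabilistic level rather than randomly adding, deleting, or flipping individual spikes, which is what keeps its perturbations small and hard to filter.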
AI is one of the most popular fields of technology nowadays. Developers implement these technologies everywhere, sometimes forgetting about their robustness to unusual types of traffic. This omission can be exploited by attackers, who are always seeking to develop new attacks, so the growth of AI is highly correlated with the rise of adversarial attacks. Adversarial attacks, or adversarial machine learning, is a technique in which attackers attempt to fool ML systems with deceptive data. They can use inconspicuous, natural-looking perturbations in input data to mislead neural networks without interfering with the model directly, and often without the risk of being detected. Adversarial attacks are usually divided along three primary axes: the security violation, poisoning attacks, and evasion attacks, which can further be categorized into “targeted”, “untargeted”, “white-box”, and “black-box” types. This research examines most of the adversarial attacks known as of 2023 relating to all these categories and some others.
Authored by Natalie Grigorieva, Sergei Petrenko
ChatGPT, a conversational artificial intelligence, has the capacity to produce grammatically accurate and persuasively human responses to numerous inquiry types from various fields. Both its users and applications are growing at an unbelievable rate. Sadly, abuse and usage often go hand in hand. Since the text produced by the AI is nearly indistinguishable from that produced by humans, the model can be used to influence people or organizations in a variety of ways. In this paper, we test the accuracy of various online tools widely used for the detection of AI-generated and human-generated texts or responses.
Authored by Prerana Singh, Aditya Singh, Sameer Rathi, Sonika Vasesi
With the increasing deployment of machine learning models across various domains, ensuring AI security has become a critical concern. Model evasion, a specific area of concern, involves attackers manipulating a model's predictions by perturbing the input data. The Fast Gradient Sign Method (FGSM) is a well-known technique for model evasion, typically used in white-box settings where the attacker has direct access to the model's architecture. In this method, the attacker intelligently manipulates the inputs to cause mispredictions by accessing the gradients of the input. To address the limitations of FGSM in black-box settings, we propose an extension of this approach called FGSM on ZOO. This method leverages the Zeroth Order Optimization (ZOO) technique to intelligently manipulate the inputs. Unlike white-box attacks, black-box attacks rely solely on observing the model's input-output behavior without access to its internal structure or parameters. We conducted experiments using the MNIST Digits and CIFAR datasets to establish a baseline for vulnerability assessment and to explore future prospects for securing models. By examining the effectiveness of FGSM on ZOO in these experiments, we gain insights into the potential vulnerabilities and the need for improved security measures in AI systems.
Authored by Aravindhan G, Yuvaraj Govindarajulu, Pavan Kulkarni, Manojkumar Parmar
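To illustrate the black-box idea, the sketch below estimates the input gradient with zeroth-order finite differences using only model queries and then takes an FGSM-style signed step. The victim model, loss, and coordinate sampling are simplified placeholders, not the authors' exact FGSM-on-ZOO implementation.

```python
# Hedged sketch: zeroth-order (query-only) gradient estimation driving an
# FGSM-like signed step. The victim model and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder black-box victim: we may only call predict_proba, never read weights.
X = rng.normal(size=(2000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def loss(x, label):
    """Cross-entropy of the given label, computed purely from black-box queries."""
    p = victim.predict_proba(x.reshape(1, -1))[0, label]
    return -np.log(p + 1e-12)

def zoo_gradient(x, label, h=1e-3, n_coords=64):
    """Symmetric-difference gradient estimate over a sample of coordinates."""
    grad = np.zeros_like(x)
    coords = rng.choice(x.size, size=min(n_coords, x.size), replace=False)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (loss(x + e, label) - loss(x - e, label)) / (2 * h)
    return grad

# FGSM-style step driven by the estimated (not true) gradient.
x0 = X[0].copy()
label = int(victim.predict(x0.reshape(1, -1))[0])
eps = 0.5
x_adv = x0 + eps * np.sign(zoo_gradient(x0, label))

print("prediction before:", label,
      "after:", int(victim.predict(x_adv.reshape(1, -1))[0]))
```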