AI is one of the most popular fields of technology today. Developers deploy these technologies everywhere, sometimes forgetting about their robustness to unusual types of traffic. This omission can be exploited by attackers, who are always seeking to develop new attacks, so the growth of AI correlates strongly with the rise of adversarial attacks. Adversarial machine learning is a technique in which attackers attempt to fool ML systems with deceptive data. They can use inconspicuous, natural-looking perturbations of the input data to mislead neural networks without interfering with the model directly, and often without the risk of being detected. Adversarial attacks are usually divided along three primary axes - security violation, poisoning, and evasion attacks - which can be further categorized into “targeted”, “untargeted”, “white-box”, and “black-box” types. This research examines most of the adversarial attacks known as of 2023 across all these categories and several others.
Authored by Natalie Grigorieva, Sergei Petrenko
Conventional approaches to analyzing industrial control systems have relied on either white-box analysis or black-box fuzzing. However, white-box methods rely on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, which results in a significant reduction of the state space without relying on domain expertise. Experimental results show that our method accurately explains the ICS model and significantly speeds up fuzzing, by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
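As an illustration of the general idea in the preceding abstract (not the authors' implementation), the hypothetical sketch below trains a surrogate safety model on synthetic actuator states, ranks actuators by model-agnostic importance, and restricts fuzzing to the few that matter; all names, data, and the send_to_ics_simulator placeholder are invented.

```python
# Hypothetical sketch: rank ICS actuators by importance to a learned safety model,
# then restrict fuzzing to the top-ranked ones. Data and names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_samples, n_actuators = 2000, 20
X = rng.integers(0, 2, size=(n_samples, n_actuators)).astype(float)  # actuator states
# Synthetic "unsafe" label driven by a small subset of actuators (unknown to the analyst).
y = ((X[:, 3] + X[:, 7] + X[:, 11]) >= 2).astype(int)

surrogate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Model-agnostic importance stands in here for the XAI step described in the abstract.
imp = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
critical = ranking[:3]
print("Actuators selected for gray-box fuzzing:", critical.tolist())

# Fuzz only the critical actuators, shrinking the search space from 2^20 to 2^3 states.
for state in range(2 ** len(critical)):
    candidate = X[0].copy()
    for bit, actuator in enumerate(critical):
        candidate[actuator] = (state >> bit) & 1
    # send_to_ics_simulator(candidate)  # placeholder for the actual fuzzing harness
```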
ChatGPT, a conversational Artificial Intelligence, has the capacity to produce grammatically accurate and persuasively human responses to numerous inquiry types from various fields. Both its users and its applications are growing at an unbelievable rate. Sadly, abuse and usage often go hand in hand. Since the words produced by the AI are nearly indistinguishable from those produced by humans, the AI model can be used to influence people or organizations in a variety of ways. In this paper, we test the accuracy of various online tools widely used for the detection of AI-generated and human-generated texts or responses.
Authored by Prerana Singh, Aditya Singh, Sameer Rathi, Sonika Vasesi
With the increasing deployment of machine learning models across various domains, ensuring AI security has become a critical concern. Model evasion, a specific area of concern, involves attackers manipulating a model's predictions by perturbing the input data. The Fast Gradient Sign Method (FGSM) is a well-known technique for model evasion, typically used in white-box settings where the attacker has direct access to the model's architecture. In this method, the attacker intelligently manipulates the inputs to cause mispredictions by accessing the gradients of the input. To address the limitations of FGSM in black-box settings, we propose an extension of this approach called FGSM on ZOO. This method leverages the Zeroth Order Optimization (ZOO) technique to intelligently manipulate the inputs. Unlike white-box attacks, black-box attacks rely solely on observing the model's input-output behavior without access to its internal structure or parameters. We conducted experiments using the MNIST Digits and CIFAR datasets to establish a baseline for vulnerability assessment and to explore future prospects for securing models. By examining the effectiveness of FGSM on ZOO in these experiments, we gain insights into the potential vulnerabilities and the need for improved security measures in AI systems.
Authored by Aravindhan G, Yuvaraj Govindarajulu, Pavan Kulkarni, Manojkumar Parmar
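A minimal white-box FGSM sketch in PyTorch follows, using a toy model and random inputs in place of the MNIST/CIFAR setups above; the black-box FGSM-on-ZOO variant described in the abstract would replace the exact gradient with one estimated purely from input-output queries.

```python
# Minimal white-box FGSM sketch (PyTorch); model and data are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-shaped classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to a valid image range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)          # placeholder image
label = torch.tensor([3])             # placeholder true label
x_adv = fgsm_attack(x, label)
print("max perturbation:", (x_adv - x).abs().max().item())
```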
Security vulnerabilities are weaknesses in software, due for instance to design flaws or implementation bugs, that can be exploited and lead to potentially devastating security breaches. Traditionally, static code analysis is recognized as effective in detecting software security vulnerabilities, but at the expense of the high human effort required to check a large number of false positives. Deep-learning methods have recently been proposed to overcome this limitation of static code analysis and detect vulnerable code by using vulnerability-related patterns learned from large source code datasets. However, the use of these methods for localizing the causes of a vulnerability in the source code, i.e., localizing the statements that contain the bugs, has not been extensively explored. In this work, we experiment with the use of deep-learning and explainability methods for detecting and localizing vulnerability-related statements in code fragments (named snippets). We aim to understand whether the code features adopted by deep-learning methods to identify vulnerable code snippets can also support developers in debugging the code, thus localizing the vulnerability's cause. Our work shows that deep-learning methods can be effective in detecting vulnerable code snippets, under certain conditions, but that the code features such methods use only partially reflect the actual causes of the vulnerabilities in the code.
Authored by Alessandro Marchetto
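To illustrate one simple way statement-level localization can be approached (an occlusion-style stand-in, not the explainability methods evaluated above), the sketch below masks one statement at a time and ranks statements by the drop in a toy classifier's vulnerability score; the training snippets and labels are invented.

```python
# Sketch of occlusion-based statement localization: mask one statement at a time and
# measure how much the snippet-level vulnerability score drops. The tiny TF-IDF model
# below is only a stand-in for the deep-learning detectors discussed in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_snippets = [
    "char buf[8]; strcpy(buf, user_input);",        # vulnerable
    "gets(line);",                                   # vulnerable
    "strncpy(buf, user_input, sizeof(buf) - 1);",    # safe
    "fgets(line, sizeof(line), stdin);",             # safe
]
labels = [1, 1, 0, 0]

vec = TfidfVectorizer(token_pattern=r"\w+")
clf = LogisticRegression().fit(vec.fit_transform(train_snippets), labels)

def vuln_score(text):
    return clf.predict_proba(vec.transform([text]))[0, 1]

snippet = ["char buf[8];", "int n = read_count();", "strcpy(buf, user_input);"]
base = vuln_score(" ".join(snippet))
drops = []
for i, stmt in enumerate(snippet):
    masked = " ".join(s for j, s in enumerate(snippet) if j != i)
    drops.append((base - vuln_score(masked), stmt))

for drop, stmt in sorted(drops, reverse=True):
    print(f"{drop:+.3f}  {stmt}")   # largest drop ~ most responsible statement
```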
In the ever-changing world of blockchain technology, the emergence of smart contracts has completely transformed the way agreements are executed, offering the potential for automation and trust in decentralized systems. Despite their built-in security features, smart contracts still face persistent vulnerabilities, resulting in significant financial losses. While existing studies often approach smart contract security from specific angles, such as development cycles or vulnerability detection tools, this paper adopts a comprehensive, multidimensional perspective. It delves into the intricacies of smart contract security by examining vulnerability detection mechanisms and defense strategies. The exploration begins with a detailed analysis of the current security challenges and issues surrounding smart contracts. It then reviews established frameworks for classifying vulnerabilities and common security flaws. The paper examines existing methods for detecting and repairing contract vulnerabilities, evaluating their effectiveness. Additionally, it provides a comprehensive overview of the existing body of knowledge in smart contract security-related research. Through this systematic examination, the paper aims to serve as a valuable reference and provide a comprehensive understanding of the multifaceted landscape of smart contract security.
Authored by Nayantara Kumar, Niranjan Honnungar V, Sharwari Prakash, J Lohith
Unmanned aerial vehicles (UAVs) have been increasingly adopted in recent years to perform various military, civilian, and commercial tasks. To assure the reliability of UAVs during these tasks, anomaly detection plays an important role in today's UAV systems. With the rapid development of AI hardware and algorithms, leveraging AI techniques has become a prevalent trend for UAV anomaly detection. While existing AI-enabled UAV anomaly detection schemes have been demonstrated to be promising, they also raise additional security concerns about the schemes themselves. In this paper, we perform a study to explore and analyze the potential vulnerabilities in state-of-the-art AI-enabled UAV anomaly detection designs. We first validate the existence of the security vulnerability and then propose an iterative attack that can effectively exploit the vulnerability and bypass the anomaly detection. We demonstrate the effectiveness of our attack by evaluating it on a state-of-the-art UAV anomaly detection scheme, in which our attack is successfully launched without being detected. Based on the understanding obtained from our study, this paper also discusses potential defense directions to enhance the security of AI-enabled UAV anomaly detection.
Authored by Ashok Raja, Mengjie Jia, Jiawei Yuan
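The following toy sketch illustrates the general shape of an iterative evasion attack on a reconstruction-based anomaly detector; it is not the authors' attack, and the autoencoder, threshold, and sensor vector are placeholders.

```python
# Illustrative iterative evasion sketch against an autoencoder-style anomaly detector:
# nudge a flagged sensor reading until its reconstruction error falls below the
# detection threshold, under a small perturbation budget.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Linear(16, 4), nn.ReLU(), nn.Linear(4, 16))  # toy autoencoder
threshold = 0.05

def anomaly_score(x):
    return ((detector(x) - x) ** 2).mean()

x_attack = torch.rand(1, 16)                     # toy "anomalous" sensor vector
x_adv = x_attack.clone().detach().requires_grad_(True)
step, budget = 0.01, 0.2

for _ in range(200):
    score = anomaly_score(x_adv)
    if score.item() < threshold:
        break
    score.backward()
    with torch.no_grad():
        x_adv -= step * x_adv.grad.sign()                    # descend on the anomaly score
        x_adv.clamp_(x_attack - budget, x_attack + budget)   # stay close to the original
    x_adv.grad.zero_()

print("final anomaly score:", anomaly_score(x_adv).item())
```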
Software vulnerability detection (SVD) aims to identify potential security weaknesses in software. SVD systems have been rapidly evolving from those based on testing, static analysis, and dynamic analysis to those based on machine learning (ML). Many ML-based approaches have been proposed, but challenges remain: training and testing datasets contain duplicates, and building customized end-to-end pipelines for SVD is time-consuming. We present Tenet, a modular framework for building end-to-end, customizable, reusable, and automated pipelines through a plugin-based architecture that supports SVD for several deep learning (DL) and basic ML models. We demonstrate the applicability of Tenet by building practical pipelines that perform SVD on real-world vulnerabilities.
Authored by Eduard Pinconschi, Sofia Reis, Chi Zhang, Rui Abreu, Hakan Erdogmus, Corina Păsăreanu, Limin Jia
The growth of blockchain applications has brought about many security issues, with smart contract vulnerabilities causing significant financial losses. The majority of current smart contract vulnerability detection methods rely predominantly on static analysis of the source code and predefined expert rules. However, these approaches exhibit certain limitations, characterized by restricted scalability and lower detection accuracy. Therefore, in this paper, we use graph neural networks to perform smart contract vulnerability detection at the bytecode level, aiming to address the aforementioned issues. In particular, we propose a novel detection model. To acquire a comprehensive understanding of the dependencies among individual functions within a smart contract, we first construct a Program Dependency Graph (PDG) of functions, extract function-level features using graph neural networks, augment these features with a self-attention mechanism to learn the dependencies between functions, and finally aggregate the function-level features to detect vulnerabilities. Our model is able to identify subtle nuances in the interactions and interdependencies among different functions, consequently enhancing the precision of vulnerability detection. Experimental results show the performance of the method compared to existing smart contract vulnerability detection methods across multiple evaluation metrics.
Authored by Yuyan Sun, Shiping Huang, Guozheng Li, Ruidong Chen, Yangyang Liu, Qiwen Jiang
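A stripped-down sketch of the pipeline described above follows, with invented dimensions, a toy adjacency matrix, and plain PyTorch rather than a dedicated graph library: one GCN-style propagation step over the function dependency graph, self-attention across functions, and pooling into a contract-level score.

```python
# Minimal sketch: propagate function features over a program-dependency adjacency matrix
# (a bare-bones GCN step), let functions attend to each other, then pool for a
# contract-level prediction. All shapes and data are made up for illustration.
import torch
import torch.nn as nn

num_funcs, feat_dim, hid_dim = 6, 32, 64
x = torch.rand(num_funcs, feat_dim)              # per-function features (toy)
adj = torch.eye(num_funcs)                       # PDG adjacency with self-loops (toy)
adj[0, 1] = adj[1, 0] = 1.0                      # e.g., function 0 depends on function 1

class ContractClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.gcn = nn.Linear(feat_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x, adj):
        # Normalized neighborhood aggregation followed by a linear transform.
        deg = adj.sum(dim=1, keepdim=True)
        h = torch.relu(self.gcn(adj @ x / deg))
        # Self-attention lets each function weigh its dependencies on the others.
        h, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return torch.sigmoid(self.head(h.mean(dim=1)))  # contract-level vulnerability score

model = ContractClassifier()
print("vulnerability probability:", model(x, adj).item())
```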
With the increasing number and types of app vulnerabilities, detection technologies and methods need to be enriched and tailored to different types of security vulnerabilities, so a single detection technology can no longer meet the needs of business security diversity. First, a new detection method needs to clarify the features relevant to app business security; second, it needs to re-adapt those features; third, it needs to be trained and applied with different AI algorithms. In view of this, we designed an app privacy information leakage detection scheme based on deep learning. The scheme selects business-security-related features specific to the privacy information leakage vulnerability type, then performs feature processing and adaptation so that they become the input parameters of a CNN. Finally, we train and apply the CNN. We selected apps from the Telecom Tianyi Space App Store to evaluate the effectiveness of our CNN-based privacy information leakage detection system. The experimental results show that the detection accuracy of our proposed system achieves the desired effect.
Authored by Nishui Cai, Tianting Chen, Lei Shen
Cybersecurity is the practice of preventing cyberattacks on vital infrastructure and private data. Government organisations, banks, hospitals, and every other industry sector are increasingly investing in cybersecurity infrastructure to safeguard their operations and the millions of consumers who entrust them with their personal information. Cyber threat activity is alarming in a world where businesses are more interconnected than ever before, raising concerns about how well organisations can protect themselves from widespread attacks. Threat intelligence solutions employ Natural Language Processing to read and interpret the meaning of words and technical data in various languages and find trends in them. NLP is making it increasingly precise for machines to analyse various data sources in multiple languages. This paper aims to develop a system that treats software vulnerability detection as a Natural Language Processing (NLP) problem, with source code treated as text, and addresses automated software vulnerability detection with recent advanced deep-learning NLP models. We have created and compared various deep learning models based on their accuracy, and the best performer achieved 95% accuracy. Furthermore, we have also made an effort to predict which vulnerability class a particular piece of source code belongs to, and have developed a robust dashboard using FastAPI and ReactJS.
Authored by Kanchan Singh, Sakshi Grover, Ranjini Kumar
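A much simpler stand-in for the deep NLP models above, showing the "code as text" framing end to end: tokenize code, embed the tokens, and train a tiny classifier. The snippets, labels, and hyperparameters are invented for illustration.

```python
# Sketch of the "source code as text" framing: tokenize code into identifiers/operators,
# embed the tokens, and classify the snippet. A toy stand-in for deep NLP models.
import re
import torch
import torch.nn as nn

def tokenize(code):
    return re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)

snippets = ["strcpy(buf, user_input);", "strncpy(buf, user_input, sizeof(buf));"]
labels = torch.tensor([1.0, 0.0])                   # 1 = vulnerable (toy labels)

vocab = {tok: i + 1 for i, tok in enumerate(sorted({t for s in snippets for t in tokenize(s)}))}
encode = lambda s: torch.tensor([vocab[t] for t in tokenize(s)])

class CodeClassifier(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # averages token embeddings
        self.out = nn.Linear(dim, 1)

    def forward(self, tokens):
        return self.out(self.emb(tokens.unsqueeze(0)))

model = CodeClassifier(len(vocab) + 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):                                 # overfit the toy pair to show the loop
    opt.zero_grad()
    logits = torch.cat([model(encode(s)) for s in snippets]).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()

print("P(vulnerable):", torch.sigmoid(model(encode(snippets[0]))).item())
```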
This paper presents a vulnerability detection scheme for small unmanned aerial vehicle (UAV) systems, aiming to enhance their security resilience. It begins with a comprehensive analysis of UAV system composition, operational principles, and the multifaceted security threats they face, ranging from software vulnerabilities in flight control systems to hardware weaknesses, communication link insecurities, and ground station management vulnerabilities. Subsequently, an automated vulnerability detection framework is designed, comprising three tiers: information gathering, interaction analysis, and report presentation, integrated with fuzz testing techniques for thorough examination of UAV control systems. Experimental outcomes validate the efficacy of the proposed scheme by revealing weak password issues in the target UAV's services and its susceptibility to abnormal inputs. The study not only confirms the practical utility of the approach but also contributes valuable insights and methodologies to UAV security, paving the way for future advancements in AI-integrated smart gray-box fuzz testing technologies.
Authored by He Jun, Guo Zihan, Ni Lin, Zhang Shuai
The growth of the Internet of Things (IoT) is leading to some restructuring and transformation of everyday lives. The number and diversity of IoT devices have increased rapidly, enabling the vision of a smarter environment and opening the door to further automation, accompanied by the generation and collection of enormous amounts of data. The automation and ongoing proliferation of personal and professional data in the IoT have resulted in countless cyber-attacks enabled by the growing security vulnerabilities of IoT devices. Therefore, it is crucial to detect and patch vulnerabilities before attacks happen in order to secure IoT environments. One of the most promising approaches for combating cybersecurity vulnerabilities and ensuring security is through the use of artificial intelligence (AI). In this paper, we provide a review in which we classify, map, and summarize the available literature on AI techniques used to recognize and reduce cybersecurity software vulnerabilities in the IoT. We present a thorough analysis of the majority of AI trends in cybersecurity, as well as cutting-edge solutions.
Authored by Heba Khater, Mohamad Khayat, Saed Alrabaee, Mohamed Serhani, Ezedin Barka, Farag Sallabi
The increasing number of security vulnerabilities has become an important problem that needs to be urgently solved in the field of software security, which means that current vulnerability mining technology still has great potential for development. However, most existing AI-based vulnerability detection methods focus on designing different AI models to improve the accuracy of vulnerability detection, ignoring the fundamental problems of data-driven AI-based algorithms: first, there is a lack of sufficient high-quality vulnerability data; second, there is no unified, standardized construction method to support the standardized evaluation of different vulnerability detection models. This greatly limits security personnel's in-depth research on vulnerabilities. In this survey, we review the current literature on building high-quality vulnerability datasets, aiming to investigate how state-of-the-art research has leveraged data mining and data processing techniques to generate vulnerability datasets that facilitate vulnerability discovery. We also identify the challenges of this new field and share our views on potential research directions.
Authored by Yuhao Lin, Ying Li, MianXue Gu, Hongyu Sun, Qiuling Yue, Jinglu Hu, Chunjie Cao, Yuqing Zhang
In various fields, such as medical engineering or aerospace engineering, it is difficult to apply the decisions of a machine learning (ML) or deep learning (DL) model that does not account for the many human limitations that can lead to errors and incidents. Explainable Artificial Intelligence (XAI) aims to explain the results of artificial intelligence software (ML or DL), still considered black boxes, so that their decisions can be understood and adopted. In this paper, we are interested in the deployment of a deep neural network (DNN) model able to predict the Remaining Useful Life (RUL) of an aircraft turbofan engine. Shapley's method was then applied to explain the DL results. This made it possible to determine the contribution of each parameter to the RUL and to identify the most decisive parameters for extending or shortening the RUL of the turbofan engine.
Authored by Anouar BOUROKBA, Ridha HAMDI, Mohamed Njah
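A hedged sketch of the Shapley step follows, on synthetic data: the sensors, model, and units are stand-ins for the turbofan DNN described above, and the shap package's model-agnostic KernelExplainer is used.

```python
# Sketch: compute per-feature contributions to a predicted RUL with the shap package.
# Data and model are synthetic stand-ins for the turbofan sensors and DNN.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                         # six fake sensor channels
rul = 100 - 20 * X[:, 0] + 5 * X[:, 3] + rng.normal(scale=2, size=500)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, rul)

background = X[:50]                                   # reference sample for KernelSHAP
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:1], nsamples=200)

for name, value in zip([f"sensor_{i}" for i in range(6)], shap_values[0]):
    print(f"{name}: {value:+.2f} cycles")             # signed contribution to predicted RUL
```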
Alzheimer’s disease (AD) is a disorder that affects the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment. Diagnosis of the disease may, however, be delayed. To overcome this delay, this work proposes an approach using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer’s patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique, the Gradient Class Activation Map (Grad-CAM), to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
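A bare-bones Grad-CAM sketch using PyTorch hooks follows; an untrained ResNet-18 and a random tensor stand in for the authors' CNN/RNN pipeline and MRI scans.

```python
# Grad-CAM sketch: capture the last convolutional block's activations and gradients,
# weight each feature map by its average gradient, and upsample the result into a heatmap.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

target_layer = model.layer4
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.rand(1, 3, 224, 224)                          # stand-in for an MRI slice
logits = model(x)
logits[0, logits.argmax()].backward()                   # gradient of the predicted class

# Keep only positive evidence and normalize to [0, 1] for overlaying on the input image.
weights = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print("Grad-CAM heatmap shape:", tuple(cam.shape))
```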
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. Such failures can be provoked with so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectification: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, while reducing the accuracy on ordinary samples; passive defense methods, on the other hand, aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setting of autonomous driving: when an input has to be classified, we can neither discard the input nor take the time for computationally expensive corrections. This motivates our method, based on explainable artificial intelligence (XAI), for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples and experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
Explainable AI (XAI) is a topic of intense activity in the research community today. However, for AI models deployed in the critical infrastructure of communications networks, explainability alone is not enough to earn the trust of network operations teams comprising human experts with many decades of collective experience. In the present work we discuss some use cases in communications networks and state some of the additional properties, including accountability, that XAI models would have to satisfy before they can be widely deployed. In particular, we advocate for a human-in-the-loop approach to train and validate XAI models. Additionally, we discuss the use cases of XAI models around improving data preprocessing and data augmentation techniques, and refining data labeling rules for producing consistently labeled network datasets.
Authored by Sayandev Mukherjee, Jason Rupe, Jingjie Zhu
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare systems and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. Deep learning is used in this study to discuss explainable artificial intelligence (XAI) in medical image analysis. The primary goal of this paper is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is often categorized based on the explanation and the technical method. In XAI approaches, the explanation method is further sub-categorized into three types: text-based, visual-based, and example-based. The technical interpretability methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural network-based interpretability method for medical imaging analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
This work proposes a unified approach to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method, named LISA, incorporates multiple techniques such as Local Interpretable Model-Agnostic Explanations (LIME), integrated gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions so that it can be employed in crucial applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) to explain model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer-learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
Anomaly detection and its explanation are important in many research areas such as intrusion detection, fraud detection, and unknown attack detection in network traffic and logs. It is challenging to identify the cause or explanation of why one instance is an anomaly and another is not, due to the unbounded and unsupervised nature of the problem. An answer to this question is possible with the emerging technique of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and working of complex models such as Deep Learning (DL) models. This paper aims to detect and explain network anomalies with the XAI method kernelSHAP. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision, and F-score. The experiments are conducted on the latest CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model) and compared. The overall accuracy and F-score of OPT_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Authored by Khushnaseeb Roshan, Aasim Zafar
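In the spirit of the preceding abstract, the sketch below uses KernelSHAP to attribute an unsupervised anomaly score; an IsolationForest and random flow features stand in for the authors' model and the CICIDS2017 data.

```python
# Sketch: explain an unsupervised anomaly score with KernelSHAP. Lower score_samples
# values are more anomalous; the SHAP values show which features push the score down.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                       # benign-looking flow features (toy)
X_anomaly = np.array([[6.0, 0.1, -5.5, 0.2, 0.3]])   # one suspicious flow (toy)

detector = IsolationForest(random_state=1).fit(X)

explainer = shap.KernelExplainer(detector.score_samples, shap.sample(X, 100))
contributions = explainer.shap_values(X_anomaly, nsamples=300)

for i, c in enumerate(contributions[0]):
    print(f"feature_{i}: {c:+.3f}")
```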
This research emphasizes its main contribution in the context of applying black-box models in knowledge-based systems. It elaborates on the fundamental limitations of these models in providing internal explanations, leading to non-compliance with prevailing regulations such as GDPR and PDP, as well as with user needs, especially in high-risk areas like credit evaluation. Therefore, the use of Explainable Artificial Intelligence (XAI) in such systems becomes highly significant. However, its implementation in the credit-granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit-granting system in Indonesia with local explanations. The development is carried out by utilizing credit data in Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solution is compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The results indicate that the Decision Tree explanation method outperforms the other tested methods. However, this study also faces several challenges, including a limited dataset size due to time constraints on expert data provision and the simplicity of the important features, stemming from limitations on the experts' authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
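A local-explanation sketch in the style described above follows, applying LIME to a decision tree trained on synthetic credit data; the feature names, the toy approval rule, and the applicant are invented.

```python
# Sketch: local LIME explanation of a single credit decision made by a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(7)
feature_names = ["income", "loan_amount", "age", "late_payments"]
X = np.column_stack([
    rng.normal(50, 15, 1000),      # income (toy units)
    rng.normal(100, 40, 1000),     # requested loan amount
    rng.integers(21, 65, 1000),    # age
    rng.integers(0, 6, 1000),      # past late payments
])
y = ((X[:, 0] > 45) & (X[:, 3] < 2)).astype(int)   # toy "approve" rule

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["reject", "approve"], mode="classification"
)
applicant = X[0]
explanation = explainer.explain_instance(applicant, clf.predict_proba, num_features=4)
for rule, weight in explanation.as_list():
    print(f"{weight:+.3f}  {rule}")                 # local reasons for this decision
```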
This work examines whether the completion of a programming guide is related to academic success in an introductory programming course at Andrés Bello University (Chile). We investigate whether the guide, which consists of 52 exercises that are not mandatory to solve, helps predict first-year students' failure of the first test of this course. Furthermore, the use of the unified SHAP XAI framework is proposed to analyze and understand how programming guides influence student performance. The study includes a literature review of previous related studies, a descriptive analysis of the collected data, and a discussion of the practical and theoretical implications of the study. The results will be useful for improving student support strategies and decision-making related to the use of guides as an educational tool.
Authored by Gaston Sepulveda, Billy Peralta, Marcos Levano, Pablo Schwarzenberg, Orietta Nicolis
Forest fire is a problem that cannot be overlooked, as it occurs every year and covers many areas. GISTDA has recognized this problem and created a model to detect burn scars from satellite imagery. However, it is effective only to some extent, with additional manual correction often being required. An automated system enriched with learning capacity is the preferred tool to support this decision-making process. Despite the improved predictive performance, the underlying model may not be transparent or explainable to operators. Reasoning and annotation of the results are essential for this problem, for which the XAI approach is appropriate. In this work, we use the SHAP framework to describe the predictive variables of complex neural models such as DNNs. This can be used to optimize the model and provides an overall accuracy of up to 99.85% for the present work. Moreover, it shows stakeholders the reasoning and the contributing factors involved, such as the various indices based on the reflectance of specific wavelengths (e.g., NIR and SWIR).
Authored by Tonkla Maneerat
Despite intensive research, the survival rate for pancreatic cancer, a fatal and incurable illness, has not dramatically improved in recent years. Deep learning systems have shown superhuman ability in a considerable number of activities, and recent developments in Artificial Intelligence (AI) have led to its widespread use in the predictive analytics of pancreatic cancer. However, the improvement in performance is the result of increased model complexity, which transforms these systems into “black box” methods and creates uncertainty about how they function and, ultimately, how they make judgements. This ambiguity has made it difficult for deep learning algorithms to be accepted in important fields like healthcare, where their benefit could be enormous. As a result, there has been a significant resurgence in recent years of scholarly interest in the topic of Explainable Artificial Intelligence (XAI), which is concerned with the creation of novel techniques for interpreting and explaining deep learning models. In this study, we utilize Computed Tomography (CT) images and clinical data to predict and analyze pancreatic cancer and survival rate, respectively. Since pancreatic tumors are small and hard to identify, region marking through XAI will assist medical professionals in identifying the appropriate region and determining the presence of cancer. Various features are taken into consideration for survival prediction. The most prominent features can be identified with the help of XAI, which in turn aids medical professionals in making better decisions. This study mainly focuses on the XAI strategy for deep and machine learning models rather than on prediction and survival methodology.
Authored by Srinidhi B, M Bhargavi