Most proposals for securing control systems are heuristic in nature, and while they increase the protection of their target, the security guarantees they provide are unclear. This paper proposes a new way of modeling the security guarantees of a Cyber-Physical System (CPS) against arbitrary false command attacks. As our main case study, we use the most popular testbed for control systems security. We first propose a detailed formal model of this testbed and then show how the original configuration is vulnerable to a single-actuator attack. We then propose modifications to the control system and prove that our modified system is secure against arbitrary, single-actuator attacks.
Authored by John Castellanos, Mohamed Maghenem, Alvaro Cardenas, Ricardo Sanfelice, Jianying Zhou
The increasing complexity and interconnectedness of Industrial Control Systems (ICSs) necessitate the integration of safety and security measures. Ensuring the protection of both personnel and critical assets has become a necessity. As a result, an integrated risk assessment approach is essential to comprehensively identify and address potential hazards and vulnerabilities. However, the data sources needed for an integrated risk assessment come in many forms. In this context, Automation Markup Language (AutomationML or AML) emerges as a valuable solution to facilitate data exchange and integration in the risk assessment process. The benefits of utilizing AML include improved interoperability, enhanced documentation, and seamless collaboration between stakeholders. A model, filled with information relevant to integrated risk assessment, is developed to illustrate the effectiveness of AML. Ultimately, this paper showcases how AML serves as a valuable information model in meeting the growing need for comprehensive safety and security risk assessment in ICSs.
Authored by Pushparaj Bhosale, Wolfgang Kastner, Thilo Sauter
The manufacturing industry has recently been moving into a smart manufacturing era with the development of 5G, artificial intelligence, and cloud computing technologies. As a result, Operational Technology (OT), which controls and operates factories, has been digitized and is used together with Information Technology (IT). Security is indispensable in the smart manufacturing industry, as a problem with the equipment, facilities, and operations in charge of manufacturing can cause factory shutdown or damage. In particular, security is required in smart factories because they implement automation in the manufacturing industry by monitoring the surrounding environment and collecting meaningful information through the Industrial IoT (IIoT). Therefore, in this paper, IIoT security approaches proposed in 2022 and recent technology trends are analyzed and explained in order to understand the current status of IIoT security technology in a smart factory environment.
Authored by Jihye Kim, Jaehyoung Park, Jong-Hyouk Lee
The Internet of Things, or IoT, is a paradigm in which devices interact with the physical world through sensors and actuators, while still communicating with other computers over various types of networks. IoT devices can be found in many environments, often in the hands of non-technical users. This presents unique security concerns, since compromised devices can be used not only for typical objectives like network footholds, but also to cause harm in the real world (for instance, by unlocking the door to a house or changing safety configurations in an industrial control system). This work-in-progress paper presents a series of laboratory exercises under development at a large Midwestern university that introduces undergraduate cyber security engineering students to the Internet of Things and its (in)security considerations. The labs will be part of a 400-level technical elective course offered to cyber security engineering majors. The design of the labs has been grounded in the experiential learning process. The concepts in each lab module are couched in hands-on activities and integrate real world problems into the laboratory environment. The laboratory exercises are conducted using an Internet testbed and a combination of actual IoT devices and virtualized devices to showcase various IoT environments, vulnerabilities, and attacks.
Authored by Megan Ryan, Julie Rursch
Anomaly and intrusion detection in industrial cyber-physical systems has attracted a lot of attention in recent years. Deep learning techniques that require huge datasets are actively researched nowadays. The great challenge is that real data on such systems, especially security-related data, is confidential, and a methodology for dataset generation is required. In this paper, the authors consider this challenge and introduce a methodology of dataset generation for research on the security of industrial water treatment facilities. The authors describe in detail the two stages of the proposed methodology: the definition of a technological process and the creation of a testbed. The paper ends with a conclusion and future work prospects.
Authored by Evgenia Novikova, Elena Fedorchenko, Igor Saenko
The paper proposes an algorithm for verifying the authenticity of automated process control system actuators based on the HART standard, which can act as the main or an additional measure of protection against threats to the integrity of the system. The operating principle of the HART standard is reviewed, a theoretical algorithm is given, and additional technical solutions that increase its reliability are considered, along with scenarios of possible attacks.
Authored by D. Lyubushkina, A. Olennikov, A. Zakharov
While the introduction of cyber-physical systems (CPS) into society progresses toward the realization of Society 5.0, the threat of cyberattacks on IoT devices (IoT actuators) that have actuator functions, and thus bring about physical changes in the real world, is increasing among the IoT devices that constitute the CPS. To prepare for unauthorized control of IoT actuators caused by continually evolving cyberattacks, such as zero-day attacks that exploit unknown vulnerabilities in programs, it is an urgent issue to strengthen the CPS that will become the social infrastructure of the future. In this paper, I explain, in particular, the security requirements for IoT actuators that exert physical action as feedback from cyberspace to the physical space, and a security framework for control that changes the real world based on changes in cyberspace, where attackers are persistently present. I then propose a security scheme for IoT actuators that integrates the new security concept of Zero Trust, as the Zero Trust IoT Security Framework (ZeTiots-FW).
Authored by Nobuhiro Kobayashi
The rapid development of IT and OT systems immerses their interactions with each other and with humans in information flows from the physical space to cyberspace. The traditional view of cyber-security faces the challenges of deliberate cyber-attacks and unpredictable failures. Hence, cyber resilience is a fundamental property that protects critical missions. In this paper, we present a mission-oriented security framework to establish and enhance cyber resilience in design and action. A definition of mission-oriented security is given to extend the CIA metrics of cyber-security, and the process of mission execution is analyzed to distinguish the critical factors of cyber resilience. Cascading failures in inter-domain networks and false data injection in cyber-physical systems are analyzed in the case study to demonstrate how the mission-oriented security framework can enhance cyber resilience.
Authored by Xinli Xiong, Qian Yao, Qiankun Ren
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. On the other hand, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
Machine-learning-based approaches have emerged as viable solutions for automatic detection of container-related cyber attacks. Choosing the best anomaly detection algorithms to identify such cyber attacks can be difficult in practice, and it becomes even more difficult for zero-day attacks for which no prior attack data has been labeled. In this paper, we aim to address this issue by adopting an ensemble learning strategy: a combination of different base anomaly detectors built using conventional machine learning algorithms. The learning strategy provides a highly accurate zero-day container attack detection. We first architect a testbed to facilitate data collection and storage, model training and inference. We then perform two case studies of cyber attacks. We show that, for both case studies, despite the fact that individual base detector performance varies greatly between model types and model hyperparameters, the ensemble learning can consistently produce detection results that are close to the best base anomaly detectors. Additionally, we demonstrate that the detection performance of the resulting ensemble models is on average comparable to the best-performing deep learning anomaly detection approaches, but with much higher robustness, shorter training time, and much less training data. This makes the ensemble learning approach very appealing for practical real-time cyber attack detection scenarios with limited training data.
Authored by Shuai Guo, Thanikesavan Sivanthi, Philipp Sommer, Maëlle Kabir-Querrec, Nicolas Coppik, Eshaan Mudgal, Alessandro Rossotti
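The ensemble strategy described in the abstract above (Guo et al.) can be sketched in a few lines of Python: several conventional anomaly detectors are trained on (assumed) benign telemetry and their scores are combined. The base detectors, features, and rank-averaging aggregation rule below are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch only: the paper's exact base detectors, features, and
# aggregation rule are not given in the abstract; everything here is a stand-in.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_train = rng.normal(size=(500, 8))                     # stand-in for benign container telemetry
X_test = np.vstack([rng.normal(size=(50, 8)),
                    rng.normal(loc=4.0, size=(5, 8))])  # last rows mimic an attack

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Base detectors trained only on (assumed) benign data.
detectors = [
    IsolationForest(random_state=0).fit(X_train),
    OneClassSVM(nu=0.05, gamma="scale").fit(X_train),
    LocalOutlierFactor(novelty=True).fit(X_train),
]

# Rank-normalize each detector's anomaly scores so differently scaled scores
# can be averaged (one plausible aggregation rule).
def rank_normalize(scores):
    ranks = scores.argsort().argsort()
    return ranks / (len(scores) - 1)

# In scikit-learn, higher score_samples means "more normal", so negate it.
anomaly = np.mean(
    [rank_normalize(-d.score_samples(X_test)) for d in detectors], axis=0)

print("mean ensemble score, benign rows:", anomaly[:50].mean().round(3))
print("mean ensemble score, attack rows:", anomaly[50:].mean().round(3))
```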
Deploying Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to a growing range of security and privacy attack vectors. In this context, a wide range of advanced machine/deep learning-based solutions have been designed to accurately detect security attacks. Specifically, supervised learning techniques have been widely applied to train attack detection models. However, the main limitation of such solutions is their inability to detect attacks different from those seen during the training phase, or new attacks, also called zero-day attacks. Moreover, training the detection model requires significant data collection and labeling, which increases the communication overhead and raises privacy concerns. To address the aforementioned limits, we propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern. Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic while preserving the CAVs' privacy and minimizing the communication overhead. An in-depth experiment on a recent network traffic dataset shows that the proposed system achieves a high detection rate while minimizing the false positive rate and the detection delay.
Authored by Abdelaziz Korba, Abdelwahab Boualouache, Bouziane Brik, Rabah Rahal, Yacine Ghamri-Doudane, Sidi Senouci
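A minimal PyTorch sketch of the core mechanism in the abstract above (Korba et al.): an auto-encoder trained only on benign traffic flags attacks whose reconstruction error exceeds a benign-derived threshold. The architecture, feature dimension, and the federated-averaging comment are illustrative assumptions, not the authors' design.

```python
# Sketch under assumptions: random tensors stand in for benign flow features,
# and the tiny encoder/decoder is not the paper's architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)
benign = torch.randn(1024, 20)                     # stand-in for benign flow features

model = nn.Sequential(                             # encoder / decoder
    nn.Linear(20, 8), nn.ReLU(),
    nn.Linear(8, 20),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                               # learn to reconstruct benign traffic
    opt.zero_grad()
    loss = loss_fn(model(benign), benign)
    loss.backward()
    opt.step()

with torch.no_grad():
    # Threshold chosen from benign reconstruction errors (e.g. a high percentile).
    err = ((model(benign) - benign) ** 2).mean(dim=1)
    threshold = err.quantile(0.99)

    suspicious = torch.randn(5, 20) * 3            # out-of-distribution sample
    flags = ((model(suspicious) - suspicious) ** 2).mean(dim=1) > threshold
print("flagged as attacks:", flags.tolist())

# A FedAvg-style federated round would average client model weights, e.g.:
# global_state = {k: torch.stack([c[k] for c in client_states]).mean(0)
#                 for k in client_states[0]}
```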
An intrusion detection system (IDS) is a crucial software or hardware application that employs security mechanisms to identify suspicious activity in a system or network. According to the detection technique, IDSs are divided into two types, namely signature-based and anomaly-based. Signature-based detection is said to be incapable of handling zero-day attacks, while anomaly-based detection can handle them. Machine learning techniques play a vital role in the development of IDSs. There are differences of opinion regarding the most optimal algorithm for IDS classification in several previous studies, such as Random Forest, J48, and AdaBoost. Therefore, this study aims to evaluate the performance of the three algorithm models, using the NSL-KDD and UNSW-NB15 datasets used in previous studies. Empirical results demonstrate that utilizing AdaBoost+J48 with NSL-KDD achieves an accuracy of 99.86%, along with precision, recall, and F1-score rates of 99.9%. These results surpass previous studies using AdaBoost+Random Tree, which reported an accuracy of 98.45%. Furthermore, this research explores the effectiveness of anomaly-based systems in dealing with zero-day attacks. Remarkably, the results show that anomaly-based systems perform admirably in such scenarios. For instance, employing Random Forest with the UNSW-NB15 dataset yielded the highest performance, with an accuracy of 99.81%.
Authored by Nurul Fauzi, Fazmah Yulianto, Hilal Nuha
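The AdaBoost+J48 combination evaluated in the abstract above (Fauzi et al.) can be approximated in scikit-learn as follows. Weka's J48 (C4.5) has no exact scikit-learn counterpart, so a DecisionTreeClassifier stands in, and a synthetic dataset replaces NSL-KDD / UNSW-NB15; this is a sketch, not the authors' pipeline.

```python
# Hedged sketch: decision tree as a stand-in for J48, synthetic data as a
# stand-in for NSL-KDD records preprocessed into numeric features.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Stand-in for preprocessed IDS records (label = attack vs. normal).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=15,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# AdaBoost with a shallow decision tree as the base learner; the keyword for the
# base learner is `estimator` in scikit-learn >= 1.2 and `base_estimator` before,
# so it is passed positionally here.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=100,
                         random_state=42)
clf.fit(X_tr, y_tr)

# Accuracy / precision / recall / F1: the metrics reported in the abstract.
print(classification_report(y_te, clf.predict(X_te), digits=4))
```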
This paper seeks to understand how zero-day vulnerabilities relate to traded markets. People in trade and development are reluctant to talk about zero-day vulnerabilities. Drawing on years of research as well as interviews, the majority of the public documentation about Cesar Cerrudo's 0-day vulnerabilities is examined, and experts in many computer security domains are consulted about them. In this research, we give a summary of current malware detection technologies and suggest a fresh zero-day malware detection and prevention model that is capable of efficiently separating malicious from benign zero-day samples. We also discuss various methods used to detect malicious files and present the results obtained with these methods.
Authored by Atharva Deshpande, Isha Patil, Jayesh Bhave, Aum Giri, Nilesh Sable, Gurunath Chavan
Android is the most popular smartphone operating system, with a market share of 68.6% in April 2023. Hence, Android is a particularly tempting target for cybercriminals. This research aims at contributing to the ongoing efforts to enhance the security of Android applications and protect users from the ever-increasing sophistication of malware attacks. Zero-day attacks pose a significant challenge to traditional signature-based malware detection systems, as they exploit vulnerabilities that are unknown to all. In this context, static analysis can be a promising approach for detecting malware in Android applications, leveraging machine learning (ML) and deep learning (DL) based models. In this research, we have used single features and combinations of features extracted from the static properties of mobile apps as inputs to the ML and DL based models, enabling them to learn and differentiate between normal and malicious behavior. We have evaluated the performance of those models on a diverse dataset (DREBIN) comprising real-world Android application features, including both benign and zero-day malware samples. We achieved an F1 score of 96% with the multi-view (DL) model in the zero-day malware scenario. This research can therefore be helpful for mitigating the risk of unknown malware.
Authored by Jabunnesa Sara, Shohrab Hossain
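A hedged sketch of the static-feature pipeline behind the abstract above (Sara and Hossain): DREBIN-style apps are represented as binary bags of static string features (permissions, API calls, intents) and fed to a classifier. The toy samples and the simple linear classifier below are illustrative stand-ins for the real dataset and the authors' multi-view DL model.

```python
# Toy illustration: hypothetical feature strings replace real APK extraction,
# and logistic regression replaces the paper's multi-view deep model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Each app = whitespace-joined static features extracted from the APK (hypothetical).
apps = [
    "permission::SEND_SMS api_call::getDeviceId intent::BOOT_COMPLETED",
    "permission::INTERNET api_call::loadLibrary",
    "permission::INTERNET permission::ACCESS_NETWORK_STATE",
    "permission::SEND_SMS permission::READ_CONTACTS api_call::sendTextMessage",
]
labels = [1, 1, 0, 1]          # 1 = malware, 0 = benign

# Binary bag-of-features embedding of the static properties.
vec = CountVectorizer(token_pattern=r"\S+", binary=True)
X = vec.fit_transform(apps)

clf = LogisticRegression(max_iter=1000).fit(X, labels)
pred = clf.predict(X)          # evaluated on the training toy sample, for illustration only
print("F1 on the toy sample:", f1_score(labels, pred))
```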
The most serious risk to network security can arise from a zero-day attack. Zero-day attacks are challenging to identify as they exhibit unseen behavior. Intrusion detection systems (IDS) have gained considerable attention as an effective tool for detecting such attacks. IDS are deployed in network systems to monitor the network and to detect any potential threats. Recently, a lot of Machine learning (ML) and Deep Learning (DL) techniques have been employed in Intrusion Detection Systems, and it has been found that these techniques can detect zero-day attacks efficiently. This paper provides an overview of the background, importance, and different types of ML and DL techniques adopted for detecting zero-day attacks. Then it conducts a comprehensive review of recent ML and DL techniques for detecting zero-day attacks and discusses the associated issues. Further, we analyze the results and highlight the research challenges and future scope for improving the ML and DL approaches for zero-day attack detection.
Authored by Nowsheen Mearaj, Arif Wani
In this paper, we propose a novel approach for detecting zero-day attacks on networked autonomous systems (AS). The proposed approach combines CNN and LSTM algorithms to offer efficient and precise detection of zero-day attacks. We evaluated the proposed approach’s performance against various ML models using a real-world dataset. The experimental results demonstrate the effectiveness of the proposed approach in detecting zero-day attacks in networked AS, achieving better accuracy and detection probability than other ML models.
Authored by Hassan Alami, Danda Rawat
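One plausible way to combine CNN and LSTM layers as in the abstract above (Alami and Rawat) is sketched below in PyTorch; the layer sizes, sequence length, and input features are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a CNN+LSTM zero-day detector: the convolution extracts local
# patterns over a traffic window, the LSTM models temporal dependencies.
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    def __init__(self, n_features=16, seq_len=32):
        super().__init__()
        # 1-D convolution over the time dimension of the traffic window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM over the convolved sequence.
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)                  # benign vs. attack score

    def forward(self, x):                             # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))              # -> (batch, 32, seq_len/2)
        out, _ = self.lstm(z.transpose(1, 2))         # -> (batch, seq_len/2, 64)
        return torch.sigmoid(self.head(out[:, -1]))   # score from the last step

model = CNNLSTMDetector()
scores = model(torch.randn(8, 32, 16))                # dummy batch of traffic windows
print(scores.shape)                                   # torch.Size([8, 1])
```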
This paper reports on work in progress on incorporating the possibility of zero-day attacks into security risk metrics. System security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. We propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some base likelihoods of zero-day exploits, the robust risk metrics assume the worst-case Probabilistic or Bayesian AG scenario, allowing for a controlled deviation of the actual likelihoods of zero-day exploits from their base values. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. These robust risk metrics interpolate between the corresponding probabilistic or Bayesian AG model on the one hand and a purely antagonistic game-theoretic model on the other hand. The popular k-zero-day security metric is a particular case of the proposed metric.
Authored by Vladimir Marbukh
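The abstract above (Marbukh) gives no formulas; the LaTeX fragment below sketches one plausible formalization of the proposed robust metric, with the symbols L, p, q, d, and epsilon introduced here purely for illustration.

```latex
% One plausible formalization (not the author's exact notation): let L(q) be the
% expected system loss of the probabilistic/Bayesian attack-graph model when the
% zero-day exploit likelihoods equal q, and let p be their assumed base values.
% The robust metric takes the worst case over a bounded deviation of q from p:
\[
  R_\varepsilon \;=\; \max_{q \,:\, d(q,\, p) \,\le\, \varepsilon} \; L(q),
\]
% where d is a chosen divergence and \varepsilon bounds the allowed deviation.
% \varepsilon = 0 recovers the probabilistic/Bayesian AG risk L(p), while
% \varepsilon \to \infty gives the purely antagonistic (game-theoretic) worst
% case, matching the interpolation described in the abstract.
```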
This paper reports on work in progress on security metrics combining the risks of known and zero-day attacks. We assume that system security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits and the impact of successful attacks is quantified by a system loss function. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. After averaging the system loss function over the likelihoods of known exploits, we propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some prior likelihoods of zero-day exploits, the robust risk metrics are identified with the worst-case Bayesian AG scenario subject to a controlled deviation of the actual likelihoods of zero-day exploits from their priors. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. We argue that the proposed risk metric quantifies the potential benefits of system configuration diversification, such as Moving Target Defense, for mitigating the system/attacker information asymmetry.
Authored by Vladimir Marbukh
A Digital Twin can be developed to represent a soil carbon emissions ecosystem, taking into account various parameters such as the type of soil, vegetation, climate, human interaction, and many more. With the help of sensors and satellite imagery, real-time data can be collected and fed into the digital model to simulate and predict soil carbon emissions. However, the lack of interpretable prediction results and transparent decision-making makes the Digital Twin unreliable, which could damage the management process. Therefore, we propose an explainable artificial intelligence (XAI) empowered Digital Twin for better managing soil carbon emissions through AI-enabled proximal sensing. We validated our XAIoT-DT components by analyzing real-world soil carbon content datasets. The preliminary results demonstrate that our framework is a reliable tool for managing soil carbon emissions, with relatively high prediction accuracy at a low cost.
Authored by Di An, YangQuan Chen
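The XAI layer of the framework in the abstract above (An and Chen) could, for example, be realized with SHAP attributions on a soil-carbon prediction model; the sensor features, model, and data below are synthetic illustrations, not the paper's XAIoT-DT components.

```python
# Hedged sketch: a tree-based regressor predicts soil carbon from synthetic
# proximal-sensing features, and SHAP attributes each prediction to its inputs.
import numpy as np
import shap                                   # pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
features = ["soil_moisture", "ndvi", "temperature", "ph"]   # hypothetical sensor inputs
X = rng.uniform(size=(300, len(features)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=300)  # toy target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP values show how much each sensor reading pushed a prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
for name, contrib in zip(features, shap_values[0]):
    print(f"{name:>13}: {contrib:+.3f}")
```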
Authored by Ayshah Chan, Maja Schneider, Marco Körner
Alzheimer's disease (AD) is a disorder that affects the functioning of brain cells, begins gradually, and worsens over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment. There is a possibility of delayed diagnosis of the disease. To overcome this delay, this work proposes an approach that uses Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer's patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique, Gradient-weighted Class Activation Mapping (Grad-CAM), to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
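Grad-CAM, the XAI technique named in the abstract above (Chethana et al.), can be sketched compactly in PyTorch: gradients of the predicted class score are averaged per channel and used to weight the last convolutional feature maps. The tiny CNN and random input below stand in for the authors' MRI model and data.

```python
# Grad-CAM sketch under assumptions: toy 4-class CNN, random 64x64 "MRI slice".
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),                 # model[2] = target conv layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),   # 4 hypothetical AD stages
)
model.eval()

acts, grads = {}, {}
target_layer = model[2]
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

scan = torch.randn(1, 1, 64, 64)            # stand-in for a preprocessed MRI slice
logits = model(scan)
logits[0, logits.argmax()].backward()       # gradient of the predicted class score

# Grad-CAM: channel weights = spatially averaged gradients; weighted sum of maps.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)             # (1, 16, 1, 1)
cam = F.relu((weights * acts["a"].detach()).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=scan.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)        # normalize to [0, 1]
print(cam.shape)   # torch.Size([1, 1, 64, 64]) -- heat-map to overlay on the scan
```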
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Authored by Jan Stodt, Christoph Reich, Nathan Clarke
The rapid advancement in Deep Learning (DL) offers viable solutions to various real-world problems. However, deploying DL-based models in some applications is hindered by their black-box nature and the inability to explain them. This has pushed Explainable Artificial Intelligence (XAI) research toward DL-based models, aiming to increase trust by reducing their opacity. Although many XAI algorithms have been proposed, they lack the ability to explain certain tasks, such as image captioning (IC). This is caused by the nature of the IC task, e.g. the presence of multiple objects from the same category in the captioned image. In this paper we propose and investigate an XAI approach for this particular task. Additionally, we provide a method to evaluate the performance of XAI algorithms in this domain.
Authored by Modafar Al-Shouha, Gábor Szűcs
The results of Deep Learning (DL) are indisputable in different fields, in particular that of medical diagnosis. The black-box nature of this tool has left doctors very cautious with regard to its estimates. eXplainable Artificial Intelligence (XAI) has recently seemed to lift this challenge by providing explanations for the DL estimates. Several works published in the literature offer explanatory methods. In this survey, we present an overview of the application of XAI in Deep Learning-based Magnetic Resonance Imaging (MRI) image analysis for Brain Tumor (BT) diagnosis. We divide these XAI methods into four groups: intrinsic methods and three groups of post-hoc methods, namely the activation-based, the gradient-based, and the perturbation-based XAI methods. These XAI tools improve confidence in DL-based brain tumor diagnosis.
Authored by Hana Charaabi, Hiba Mzoughi, Ridha Hamdi, Mohamed Njah
This paper delves into the nascent paradigm of Explainable AI (XAI) and its pivotal role in enhancing the acceptability of the growing AI systems that are shaping the Digital Management 5.0 era. XAI holds significant promise, promoting compliance with legal and ethical standards and offering transparent decision-making tools. The imperative of interpretable AI systems to counter the black-box effect and adhere to data protection laws like GDPR is highlighted. This paper aims to achieve a dual objective. Firstly, it provides an in-depth understanding of the emerging XAI paradigm, helping practitioners and academics project their future research trajectories. Secondly, it proposes a new taxonomy of XAI models with potential applications that could facilitate AI acceptability. The academic literature reflects a crucial lack of exploration of the full potential of XAI: existing models remain mainly theoretical and lack practical applications. By bridging the gap between abstract models and the pragmatic implementation of XAI in management, this paper breaks new ground by laying the scientific foundations of XAI for the upcoming era of Digital Management 5.0.
Authored by Samia Gamoura