The rapid development of IT and OT systems immerses their interactions, with one another and with humans, in information flows from physical space to cyberspace. The traditional view of cyber-security faces the challenges of deliberate cyber-attacks and unpredictable failures. Cyber resilience is therefore a fundamental property for protecting critical missions. In this paper, we present a mission-oriented security framework to establish and enhance cyber resilience in design and in action. A definition of mission-oriented security is given that extends the CIA metrics of cyber-security, and the process of mission execution is analyzed to distinguish the critical factors of cyber resilience. Cascading failures in inter-domain networks and false data injection in a cyber-physical system are analyzed in the case study to demonstrate how the mission-oriented security framework can enhance cyber resilience.
Authored by Xinli Xiong, Qian Yao, Qiankun Ren
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. Meanwhile, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed before these autonomous agents become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
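A toy sketch of the game-theoretic core behind such defense agents may help fix ideas: a defender and an attacker repeatedly play a zero-sum matrix game and adapt via fictitious play. The payoff matrix, action names, and solution method below are illustrative assumptions; the paper's multi-agent reinforcement learning formulation is far richer.

```python
# Toy zero-sum defender/attacker game solved by fictitious play.
# Payoff values and actions are invented for illustration only.
import numpy as np

# Rows: defender actions (deploy decoy, patch, monitor);
# columns: attacker actions (scan, exploit, exfiltrate).
# Entries = defender payoff (toy values).
payoff = np.array([[ 2, -1,  0],
                   [ 1,  3, -2],
                   [ 0,  1,  1]], dtype=float)

d_counts = np.ones(3)   # empirical play counts for each defender action
a_counts = np.ones(3)   # empirical play counts for each attacker action

for _ in range(5000):
    d_mix = d_counts / d_counts.sum()
    a_mix = a_counts / a_counts.sum()
    # Each side best-responds to the opponent's empirical mixed strategy
    d_counts[np.argmax(payoff @ a_mix)] += 1   # defender maximizes its payoff
    a_counts[np.argmin(d_mix @ payoff)] += 1   # attacker minimizes defender payoff

print("defender mixed strategy ~", np.round(d_counts / d_counts.sum(), 2))
print("attacker mixed strategy ~", np.round(a_counts / a_counts.sum(), 2))
```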
Machine-learning-based approaches have emerged as viable solutions for the automatic detection of container-related cyber attacks. Choosing the best anomaly detection algorithm to identify such attacks can be difficult in practice, and it becomes even harder for zero-day attacks for which no prior attack data has been labeled. In this paper, we address this issue by adopting an ensemble learning strategy: a combination of different base anomaly detectors built using conventional machine learning algorithms. The learning strategy provides highly accurate zero-day container attack detection. We first architect a testbed to facilitate data collection and storage, model training, and inference. We then perform two case studies of cyber attacks. We show that, for both case studies, although individual base detector performance varies greatly across model types and hyperparameters, ensemble learning consistently produces detection results close to those of the best base anomaly detectors. Additionally, we demonstrate that the detection performance of the resulting ensemble models is on average comparable to that of the best-performing deep learning anomaly detection approaches, but with much higher robustness, shorter training time, and far less training data. This makes the ensemble learning approach very appealing for practical real-time cyber attack detection scenarios with limited training data.
Authored by Shuai Guo, Thanikesavan Sivanthi, Philipp Sommer, Maëlle Kabir-Querrec, Nicolas Coppik, Eshaan Mudgal, Alessandro Rossotti
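A minimal sketch of the ensemble idea described in the abstract above, assuming scikit-learn base detectors (IsolationForest, OneClassSVM, LocalOutlierFactor) and rank-averaged anomaly scores; the paper's actual detectors, features, and combination rule are not specified here.

```python
# Ensemble of conventional anomaly detectors with rank-averaged scores.
# Detector choice, features, and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                       # stand-in for benign container metrics
X_test = np.vstack([rng.normal(size=(50, 8)),             # benign
                    rng.normal(loc=4.0, size=(5, 8))])    # anomalous (attack-like)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

detectors = [
    IsolationForest(random_state=0).fit(X_train_s),
    OneClassSVM(nu=0.05, gamma="scale").fit(X_train_s),
    LocalOutlierFactor(novelty=True).fit(X_train_s),
]

def rank_scores(scores):
    # Map raw scores to ranks in [0, 1] so heterogeneous detectors are comparable
    return scores.argsort().argsort() / (len(scores) - 1)

# score_samples returns "normality"; negate so higher = more anomalous
ensemble_score = np.mean(
    [rank_scores(-d.score_samples(X_test_s)) for d in detectors], axis=0
)
print("flagged as anomalous:", np.where(ensemble_score > 0.9)[0])
```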
Deploying Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to a growing range of security and privacy attacks. In this context, a wide range of advanced machine/deep learning-based solutions have been designed to accurately detect security attacks. Specifically, supervised learning techniques have been widely applied to train attack detection models. However, the main limitation of such solutions is their inability to detect attacks different from those seen during the training phase, i.e., new or zero-day attacks. Moreover, training the detection model requires significant data collection and labeling, which increases the communication overhead and raises privacy concerns. To address these limitations, we propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern. Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic, while preserving the CAVs' privacy and minimizing the communication overhead. An in-depth experiment on a recent network traffic dataset shows that the proposed system achieves a high detection rate while minimizing both the false positive rate and the detection delay.
Authored by Abdelaziz Korba, Abdelwahab Boualouache, Bouziane Brik, Rabah Rahal, Yacine Ghamri-Doudane, Sidi Senouci
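A minimal PyTorch sketch of the benign-only auto-encoder idea from the abstract above, with a FedAvg-style aggregation of client models. The architecture, threshold, and aggregation scheme are illustrative assumptions; the paper's exact design is not reproduced here.

```python
# Benign-only auto-encoder anomaly detector with simple FedAvg aggregation.
# Layer sizes, epochs, and the detection threshold are assumptions.
import torch
import torch.nn as nn

class TrafficAE(nn.Module):
    def __init__(self, n_features=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(), nn.Linear(8, 3))
        self.dec = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, n_features))
    def forward(self, x):
        return self.dec(self.enc(x))

def local_train(model, benign_batch, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign_batch), benign_batch)  # reconstruct benign traffic
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    # Simple FedAvg: parameter-wise mean of the clients' model weights
    return {k: torch.stack([sd[k] for sd in state_dicts]).mean(0) for k in state_dicts[0]}

# Each CAV trains on its own benign traffic; the server only sees model weights
clients = [torch.randn(64, 20) for _ in range(3)]   # stand-in benign feature batches
global_model = TrafficAE()
updates = []
for batch in clients:
    local = TrafficAE()
    local.load_state_dict(global_model.state_dict())
    updates.append(local_train(local, batch))
global_model.load_state_dict(fed_avg(updates))

# At inference, a high reconstruction error flags traffic as a potential attack
sample = torch.randn(1, 20)
error = torch.mean((global_model(sample) - sample) ** 2).item()
print("attack suspected:", error > 0.5)   # threshold is an assumption
```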
An intrusion detection system (IDS) is a crucial software or hardware application that employs security mechanisms to identify suspicious activity in a system or network. According to the detection technique, IDSs are divided into two types, namely signature-based and anomaly-based. Signature-based detection is said to be incapable of handling zero-day attacks, whereas anomaly-based detection can handle them. Machine learning techniques play a vital role in the development of IDSs. There are differences of opinion regarding the most optimal algorithm for IDS classification in several previous studies, such as Random Forest, J48, and AdaBoost. Therefore, this study evaluates the performance of the three algorithm models using the NSL-KDD and UNSW-NB15 datasets used in previous studies. Empirical results demonstrate that utilizing AdaBoost+J48 with NSL-KDD achieves an accuracy of 99.86%, along with precision, recall, and F1-score rates of 99.9%. These results surpass previous studies using AdaBoost+Random Tree, which reached an accuracy of 98.45%. Furthermore, this research explores the effectiveness of anomaly-based systems in dealing with zero-day attacks. Remarkably, the results show that anomaly-based systems perform admirably in such scenarios. For instance, employing Random Forest with the UNSW-NB15 dataset yielded the highest performance, with an accuracy of 99.81%.
Authored by Nurul Fauzi, Fazmah Yulianto, Hilal Nuha
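A hedged approximation of the AdaBoost+J48 setup: J48 is Weka's C4.5 decision tree, approximated here with scikit-learn's DecisionTreeClassifier as the AdaBoost base estimator. The NSL-KDD file path, preprocessing, and hyperparameters are placeholders, not the study's exact pipeline.

```python
# AdaBoost with a decision-tree base learner on NSL-KDD-style data.
# "KDDTrain+.txt" is a hypothetical local copy of the NSL-KDD training file.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("KDDTrain+.txt", header=None)
X = pd.get_dummies(df.iloc[:, :-2])              # one-hot encode categorical features
y = (df.iloc[:, -2] != "normal").astype(int)     # 1 = attack, 0 = normal

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=None),  # C4.5-like base learner
    n_estimators=50,                                   # ("base_estimator" in older scikit-learn)
    random_state=42,
).fit(X_tr, y_tr)

print(classification_report(y_te, model.predict(X_te), digits=4))
```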
This paper seeks to understand how zero-day vulnerabilities relate to traded markets. People in trade and development are reluctant to talk about zero-day vulnerabilities. Drawing on years of research as well as interviews, the majority of the public documentation about Mr. Cesar Cerrudo's 0-day vulnerabilities is examined by him, and he talks to experts in many computer security domains about them. In this research, we give a summary of current malware detection technologies and suggest a fresh zero-day malware detection and prevention model that is capable of efficiently separating malicious from benign zero-day samples. We also discuss various methods used to detect malicious files and present the results obtained from these methods.
Authored by Atharva Deshpande, Isha Patil, Jayesh Bhave, Aum Giri, Nilesh Sable, Gurunath Chavan
Android is the most popular smartphone operating system, with a market share of 68.6% as of April 2023, which makes it a tempting target for cybercriminals. This research aims to contribute to the ongoing efforts to enhance the security of Android applications and protect users from the ever-increasing sophistication of malware attacks. Zero-day attacks pose a significant challenge to traditional signature-based malware detection systems, as they exploit vulnerabilities that are unknown to all. In this context, static analysis can be an encouraging approach for detecting malware in Android applications, leveraging machine learning (ML) and deep learning (DL)-based models. In this research, we use single features and combinations of features extracted from the static properties of mobile apps as inputs to the ML and DL based models, enabling them to learn and differentiate between normal and malicious behavior. We evaluate the performance of those models on a diverse dataset (DREBIN) comprising real-world Android application features, including both benign and zero-day malware samples. We achieved an F1 score of 96% with the multi-view (DL) model in the zero-day malware scenario, so this research can help mitigate the risk of unknown malware.
Authored by Jabunnesa Sara, Shohrab Hossain
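An illustrative sketch of the single-feature-view idea: DREBIN-style static properties (e.g., requested permissions and API calls) encoded as binary vectors and fed to a classifier. The feature names, toy labels, and Random Forest model below are stand-ins, not the paper's multi-view deep learning architecture.

```python
# Static-feature malware classification sketch with invented toy samples.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

apps = [
    {"perm:SEND_SMS": 1, "perm:INTERNET": 1, "api:getDeviceId": 1},          # malware-like
    {"perm:INTERNET": 1, "api:openConnection": 1},                            # benign-like
    {"perm:SEND_SMS": 1, "perm:READ_CONTACTS": 1, "api:sendTextMessage": 1},  # malware-like
    {"perm:CAMERA": 1, "api:takePicture": 1},                                 # benign-like
]
labels = [1, 0, 1, 0]   # 1 = malicious, 0 = benign (toy labels)

vec = DictVectorizer(sparse=True)
X = vec.fit_transform(apps)                      # binary feature vectors
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

new_app = {"perm:SEND_SMS": 1, "perm:READ_CONTACTS": 1}
print("malicious probability:", clf.predict_proba(vec.transform([new_app]))[0, 1])
```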
The most serious risk to network security can arise from a zero-day attack. Zero-day attacks are challenging to identify as they exhibit unseen behavior. Intrusion detection systems (IDS) have gained considerable attention as an effective tool for detecting such attacks. IDSs are deployed in network systems to monitor the network and to detect any potential threats. Recently, many machine learning (ML) and deep learning (DL) techniques have been employed in intrusion detection systems, and it has been found that these techniques can detect zero-day attacks efficiently. This paper provides an overview of the background, importance, and different types of ML and DL techniques adopted for detecting zero-day attacks. It then conducts a comprehensive review of recent ML and DL techniques for detecting zero-day attacks and discusses the associated issues. Further, we analyze the results and highlight the research challenges and future scope for improving ML and DL approaches to zero-day attack detection.
Authored by Nowsheen Mearaj, Arif Wani
In this paper, we propose a novel approach for detecting zero-day attacks on networked autonomous systems (AS). The proposed approach combines CNN and LSTM algorithms to offer efficient and precise detection of zero-day attacks. We evaluated the proposed approach’s performance against various ML models using a real-world dataset. The experimental results demonstrate the effectiveness of the proposed approach in detecting zero-day attacks in networked AS, achieving better accuracy and detection probability than other ML models.
Authored by Hassan Alami, Danda Rawat
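One plausible way to combine a CNN feature extractor with an LSTM for flow-sequence classification, sketched in PyTorch for the approach described above. Layer sizes, sequence length, and feature count are assumptions; the paper does not fix them here.

```python
# CNN + LSTM hybrid: per-step 1-D convolutions feed an LSTM over the window.
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    def __init__(self, n_features=32, n_classes=2):
        super().__init__()
        # 1-D CNN extracts local patterns within each time step's feature vector
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=16 * (n_features // 2),
                            hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        b, t, f = x.shape
        z = self.cnn(x.reshape(b * t, 1, f))   # per-step convolutional features
        z = z.reshape(b, t, -1)
        out, _ = self.lstm(z)                  # temporal dependencies across steps
        return self.fc(out[:, -1, :])          # classify from the last hidden state

model = CNNLSTMDetector()
logits = model(torch.randn(4, 10, 32))         # 4 windows of 10 time steps each
print(logits.shape)                            # torch.Size([4, 2])
```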
This paper reports on work in progress on incorporating the possibility of zero-day attacks into security risk metrics. System security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. We propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some base likelihoods of zero-day exploits, robust risk metrics take the worst-case probabilistic or Bayesian AG scenario, allowing for a controlled deviation of the actual likelihoods of zero-day exploits from their base values. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. These robust risk metrics interpolate between the corresponding probabilistic or Bayesian AG model on the one hand and the purely antagonistic game-theoretic model on the other. The popular k-zero-day security metric is a particular case of the proposed metric.
Authored by Vladimir Marbukh
This paper reports on work in progress on security metrics combining the risks of known and zero-day attacks. We assume that system security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits and the impact of successful attacks is quantified by a system loss function. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. After averaging the system loss function over the likelihoods of known exploits, we propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some prior likelihoods of zero-day exploits, robust risk metrics are identified with the worst-case Bayesian AG scenario subject to a controlled deviation of the actual likelihoods of zero-day exploits from their priors. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. We argue that the proposed risk metric quantifies the potential benefits of system configuration diversification, such as Moving Target Defense, for mitigating the system/attacker information asymmetry.
Authored by Vladimir Marbukh
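One way to write down the worst-case construction described in the two abstracts above; the symbols q, q_0, D, and epsilon are illustrative notation introduced here, not taken from the papers.

```latex
% Hedged formalization of the robust risk metric (notation is illustrative).
\[
  R_{\mathrm{robust}}
  \;=\;
  \max_{q \,:\, D(q \,\|\, q_0) \le \epsilon}
  \; \mathbb{E}_{q}\!\left[\, \bar{L} \,\right],
\]
% Here $\bar{L}$ is the system loss already averaged over the likelihoods of the
% known exploits, the expectation $\mathbb{E}_q$ is over which zero-day exploits
% succeed (with likelihood vector $q$), $q_0$ is the prior (base) likelihood
% vector of the zero-day exploits, and $\epsilon$ bounds the allowed deviation
% of $q$ from $q_0$. As $\epsilon \to 0$ this recovers the probabilistic/Bayesian
% AG risk; as $\epsilon \to \infty$ it approaches the purely antagonistic
% (worst-case, game-theoretic) model.
```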
A Digital Twin can be developed to represent a soil carbon emissions ecosystem, taking into account parameters such as soil type, vegetation, climate, human interaction, and many more. With the help of sensors and satellite imagery, real-time data can be collected and fed into the digital model to simulate and predict soil carbon emissions. However, the lack of interpretable prediction results and transparent decision-making makes the Digital Twin unreliable, which could harm the management process. Therefore, we propose an explainable artificial intelligence (XAI)-empowered Digital Twin for better managing soil carbon emissions through AI-enabled proximal sensing. We validated our XAIoT-DT components by analyzing real-world soil carbon content datasets. The preliminary results demonstrate that our framework is a reliable tool for managing soil carbon emissions, with relatively high prediction accuracy at low cost.
Authored by Di An, YangQuan Chen
Authored by Ayshah Chan, Maja Schneider, Marco Körner
Alzheimer's disease (AD) is a disorder that impacts the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To overcome this delay, this work proposes an approach that uses Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on active Magnetic Resonance Imaging (MRI) scan reports of Alzheimer's patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique, Gradient-weighted Class Activation Mapping (Grad-CAM), to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
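A generic Grad-CAM sketch in PyTorch: gradients of the target class score with respect to the last convolutional feature maps weight those maps into a heatmap. The backbone (a torchvision ResNet-18), the hooked layer, and the random input are stand-ins; the paper's CNN/RNN architecture and MRI preprocessing are not reproduced here.

```python
# Grad-CAM via a forward hook that also captures the feature-map gradients.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

def fwd_hook(_module, _inputs, output):
    feats["v"] = output                                   # target-layer feature maps
    output.register_hook(lambda g: grads.update(v=g))     # their gradients on backward

model.layer4[-1].register_forward_hook(fwd_hook)          # last conv block (assumption)

x = torch.randn(1, 3, 224, 224)                           # stand-in for a preprocessed MRI slice
score = model(x)[0].max()                                 # score of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)       # global-average-pooled gradients
cam = F.relu((weights * feats["v"]).sum(dim=1))           # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
print(cam.shape)   # torch.Size([1, 1, 224, 224]) heatmap of class-relevant regions
```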
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Authored by Jan Stodt, Christoph Reich, Nathan Clarke
The rapid advancement of Deep Learning (DL) offers viable solutions to various real-world problems. However, deploying DL-based models in some applications is hindered by their black-box nature and the inability to explain them. This has pushed Explainable Artificial Intelligence (XAI) research toward DL-based models, aiming to increase trust by reducing their opacity. Although many XAI algorithms have been proposed, they lack the ability to explain certain tasks, such as image captioning (IC). This is caused by the nature of the IC task, e.g., the presence of multiple objects from the same category in the captioned image. In this paper we propose and investigate an XAI approach for this particular task. Additionally, we provide a method to evaluate the performance of XAI algorithms in this domain.
Authored by Modafar Al-Shouha, Gábor Szűcs
The results of Deep Learning (DL) are indisputable in many fields, in particular medical diagnosis. The black-box nature of this tool has left doctors very cautious about its estimates. eXplainable Artificial Intelligence (XAI) has recently addressed this challenge by providing explanations for DL estimates, and several works in the literature offer explanatory methods. In this survey, we present an overview of the application of XAI in Deep Learning-based Magnetic Resonance Imaging (MRI) image analysis for Brain Tumor (BT) diagnosis. We divide these XAI methods into four groups: intrinsic methods, and three groups of post-hoc methods, namely activation-based, gradient-based, and perturbation-based XAI methods. These XAI tools improve confidence in DL-based brain tumor diagnosis.
Authored by Hana Charaabi, Hiba Mzoughi, Ridha Hamdi, Mohamed Njah
This paper delves into the nascent paradigm of Explainable AI (XAI) and its pivotal role in enhancing the acceptability of the growing AI systems that are shaping the Digital Management 5.0 era. XAI holds significant promise, promoting compliance with legal and ethical standards and offering transparent decision-making tools. The imperative of interpretable AI systems to counter the black-box effect and adhere to data protection laws such as GDPR is highlighted. This paper pursues a dual objective. First, it provides an in-depth understanding of the emerging XAI paradigm, helping practitioners and academics project their future research trajectories. Second, it proposes a new taxonomy of XAI models with potential applications that could facilitate AI acceptability. The academic literature reflects a crucial lack of exploration of the full potential of XAI; existing models remain mainly theoretical and lack practical applications. By bridging the gap between abstract models and the pragmatic implementation of XAI in management, this paper breaks new ground by laying the scientific foundations of XAI in the upcoming era of Digital Management 5.0.
Authored by Samia Gamoura
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, assisting doctors across healthcare and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. This study discusses explainable artificial intelligence (XAI) in deep-learning-based medical image analysis. The primary goal of this paper is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is typically categorized based on the explanation and the technical method. In XAI approaches, the explanation method is further sub-categorized into three types: text-based, visual-based, and example-based. The interpretability technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural-network-based interpretability method for medical imaging analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
Explainable AI (XAI) techniques are used for understanding the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework, XAI4PublicPolicy, that provides customizable and reusable XAI dashboards ready to be used by both data scientists and general users, with no code required. Models and datasets are selected by dragging and dropping from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project for explaining the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
This work proposes an interpretable Deep Learning framework utilizing Vision Transformers (ViT) for the classification of remote sensing images into land use and land cover (LULC) classes. It uses Shapley Additive Explanations (SHAP) values to provide two-stage explanations: 1) band-wise feature importance per class, showing which bands assist the prediction of each class, and 2) spatial-wise feature understanding, explaining which embedded patches per band affected the network's performance. Experimental results on the EuroSAT dataset demonstrate the ViT's accurate classification, with an overall accuracy of 96.86%, offering improved results compared to popular CNN models. Heatmaps for each of the dataset's classes highlight the effectiveness of the proposed framework in band explanation and feature importance.
Authored by Anastasios Temenos, Nikos Temenos, Maria Kaselimi, Anastasios Doulamis, Nikolaos Doulamis
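A small numpy sketch of the two-stage aggregation described above, applied to SHAP values assumed to be precomputed for multispectral inputs of shape (samples, bands, H, W). How the SHAP values are obtained from the ViT is not shown; the array shapes, patch size, and names are illustrative assumptions.

```python
# Two-stage aggregation of precomputed SHAP values: band-wise and patch-wise.
import numpy as np

shap_vals = np.random.randn(16, 13, 64, 64)      # stand-in: 16 samples, 13 spectral bands
patch = 16                                        # ViT-style patch size (assumption)

# 1) Band-wise importance per class: mean |SHAP| over samples and pixels
band_importance = np.abs(shap_vals).mean(axis=(0, 2, 3))
print("most informative band:", band_importance.argmax())

# 2) Spatial-wise understanding: mean |SHAP| per embedded patch, per band
s, b, h, w = shap_vals.shape
patches = np.abs(shap_vals).reshape(s, b, h // patch, patch, w // patch, patch)
patch_importance = patches.mean(axis=(0, 3, 5))   # shape: (bands, H/patch, W/patch)
print("patch-importance grid shape:", patch_importance.shape)
```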
Understanding the temperature dependence of acoustic and photoacoustic (PA) properties is important for the characterization of materials and measurements in various applications. Ultrasound methods have been developed to estimate these properties, but they require careful consideration of multiple variables and steps to obtain reliable results. This study aimed to develop an automated system for simultaneous characterization of acoustic and PA properties of materials. The system was designed to minimize operator errors, ensuring robust temperature control and reproducibility for acoustic measurements. This was made possible through the integration of a commercially available PA imaging system with a custom-built platform specifically tailored for ultrasound-based acoustic characterization. This platform consisted of both hardware and software modules. The system was evaluated with NaCl solutions at different concentrations and a gelatin/agar cubic phantom prepared with uniformly distributed magnetic nanoparticles serving as optical absorbers. Results obtained from the NaCl solution samples exhibited a high Lin's concordance coefficient (above 0.9) with previously reported studies. In the ultrasound/PA experiment, temperature dependences of the speed of sound and PA intensity revealed a strong Pearson's correlation coefficient (0.99), with both measurements exhibiting a monotonic increase as anticipated for water-based materials. These findings demonstrate the accuracy and stability of the developed system for acoustic property measurements.
Authored by Ricardo Bordonal, João Uliana, Lara Pires, Ernesto Mazón, Antonio Carneiro, Theo Pavan
In this work, we investigated the design of low-loss and wideband shear horizontal surface acoustic wave (SH-SAW) acoustic delay lines (ADLs) on a sapphire-based thin-film lithium niobate on insulator (LNOI) platform. The SH-SAW propagates in a Y-cut LN/SiO2 double-layer thin film atop the sapphire substrate, where the significant acoustic impedance mismatch between the thin film and the substrate confines the acoustic energy at the surface, thus minimizing the propagation loss. The single-phase unidirectional transducers (SPUDTs) used in this work are implemented with gold (Au) to maximize the electromechanical coupling as well as the directionality. The proposed ADLs based on YX-LN/SiO2/Sapphire centered at 830 MHz showed a minimum insertion loss (IL) of 3 dB, a wide fractional bandwidth (FBW) of 4.19%, and a low propagation loss (PL) of 2.51 dB/mm, which yields an effective quality factor (QPL) exceeding 2,700. These results demonstrate the competitive performance of the proposed devices compared to state-of-the-art thin-film LN ADLs, offering extremely low propagation loss for RF signal processing.
Authored by Chia-Hsien Tsai, Tzu-Hsuan Hsu, Zhi-Qiang Lee, Cheng-Chien Lin, Ya-Ching Yu, Shao-Siang Tung, Ming-Huang Li
This paper presents the design of a MEMS resonator with capacitive transduction as an acoustic sensor, intended for cantilever-enhanced photoacoustic spectroscopy. The sensor employs area-variable capacitive detection by surrounding the silicon resonator with dense comb teeth. To reduce gas damping effects on the resonator motion, the anchor height is increased to 260 µm. This approach successfully resolves the trade-off between capacitance detection sensitivity and motion damping commonly seen in acoustic detection. Experimental results exhibit a maximum sensitivity of 3749 mV/Pa at the resonant frequency of 1870 Hz with a 15 V bias voltage. The equivalent noise has a peak value of 7.9 µPa/√Hz, and the noise sources are analyzed.
Authored by Yonggang Yin, Danyang Ren, Yuqi Wang, Da Gao, Junhui Shi
This work presents a modified AlN/Sapphire layered SAW structure with localized partial removal of the AlN thin film and the sapphire, respectively. The SAW propagation and resonance characteristics of the proposed structure with periodic grooves and voids are analyzed using the finite element method (FEM). Compared with a conventional AlN-based SAW, the proposed structure with an optimized configuration and parameters effectively improves the electromechanical coupling coefficient K² while maintaining a high velocity (V), and meanwhile eliminates spurious modes. It is demonstrated that the Sezawa mode on the proposed SAW resonator structure offers operating frequencies above 5 GHz, K² values above 6.5%, and an excellent impedance ratio of 98 dB, which makes it a potential candidate for advanced 5G applications.
Authored by Huiling Liu, Qiaozhen Zhang, Hao Sun, Yuandong Gu, Nan Wang