We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions that address the breadth of AI security risks sustainably and systematically in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum across layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also envision an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
6G networks are beginning to take shape, and it is envisaged that they will be made up of networks from different vendors and with different technologies, in what is known as the network-of-networks. The topology will be constantly changing, allowing it to adapt to the capacities available at any given moment. 6G networks will be managed automatically and natively by AI, while allowing technical teams to manage learning directly through Explainable AI. In this context, security becomes an unprecedented challenge. In this paper we present a flexible architecture that integrates the modules necessary to respond to the needs of 6G, focused on managing security, networks, and services through choreography intents that coordinate the capabilities of different stakeholders to offer advanced services.
Authored by Rodrigo Asensio-Garriga, Alejandro Zarca, Antonio Skarmeta
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), ensuring resilient and transparent security measures is of the utmost importance. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance UAV security, departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires rigorous, continuous authentication of all network entities and communications. Our methodology detects and identifies UAVs with 84.59% accuracy by analyzing Radio Frequency (RF) signals within a Deep Learning framework, a novel approach. Precise identification is crucial in ZTA because it determines network access. In addition, eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) improve the model's transparency and interpretability. Adherence to ZTA standards ensures that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
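As an editorial illustration of the RF-plus-XAI pipeline this abstract describes, here is a minimal Python sketch on synthetic stand-in data; the feature names, classifier, and labels are assumptions, not the authors' setup, and SHAP is used to make each access decision inspectable as a ZTA deployment would require.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical RF-derived features; real pipelines extract these from I/Q samples.
feature_names = ["rf_bandwidth", "center_freq_offset", "hop_rate", "rssi_var"]
X = rng.normal(size=(600, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy label: UAV vs. non-UAV

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))

# SHAP attributes each access decision to the input features, giving the
# verifiable classifications a Zero Trust audit would require.
explainer = shap.Explainer(lambda Z: model.predict_proba(Z)[:, 1], X_tr[:100])
sv = explainer(X_te[:5])
print(dict(zip(feature_names, sv.values[0].round(3))))
```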
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of its accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of a model from XAI methods, jeopardizing auditing and monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework showing how an attacker would adopt scaffolding to deceive the security auditors of Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset and then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful and that the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
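For readers unfamiliar with scaffolding, the sketch below illustrates the core mechanism in the style of earlier scaffolding work (Slack et al.): an out-of-distribution detector routes explainer-generated perturbations to an innocuous decoy model while real traffic reaches the hidden biased rule. All data, rules, and thresholds are invented for illustration; this is not the paper's framework.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
X_real = rng.normal(size=(1000, 4))  # stand-in for NIDS feature vectors

def biased(X):      # hidden discriminatory rule the attacker wants to conceal
    return (X[:, 0] > 0).astype(int)

def innocuous(X):   # benign-looking decoy shown to the explainer
    return (X[:, 1] > 0).astype(int)

# Explainers such as LIME/SHAP probe with synthetic perturbations that tend
# to fall off the data manifold; an OOD detector routes those to the decoy.
ood = IsolationForest(random_state=0).fit(X_real)

def scaffolded_model(X):
    on_manifold = ood.predict(X) == 1   # 1 = inlier, -1 = outlier
    return np.where(on_manifold, biased(X), innocuous(X))

print("real data matches biased rule:",
      np.mean(scaffolded_model(X_real) == biased(X_real)))
X_probe = rng.normal(scale=4.0, size=(1000, 4))  # explainer-style perturbations
print("probe data matches decoy rule:",
      np.mean(scaffolded_model(X_probe) == innocuous(X_probe)))
```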
This paper introduces a novel AI-driven, ontology-based framework for disease diagnosis and prediction, leveraging advances in machine learning and data mining. We have constructed a comprehensive ontology that maps the complex relationships between a multitude of diseases and their manifested symptoms. Utilizing the Semantic Web Rule Language (SWRL), we have engineered a set of robust rules that facilitate intelligent disease prediction, drawing on NLP for enhanced interpretability. The developed system operates in two fundamental stages. First, we define a class hierarchy within our ontology, detailing the object and data properties with precision; this stage also applies computer vision techniques to interpret and categorize medical imagery. The second stage applies the AI-powered rules, which systematically extract and present detailed disease information, including symptomatology, in line with established medical protocols. The efficacy of our ontology is validated through extensive evaluations, demonstrating its capability not only to diagnose accurately but also to predict diseases, with particular emphasis on the AI methodologies employed. Furthermore, the system calculates a final risk score for the user, derived from a careful analysis of the results. This score illustrates the integration of AI and ML in a user-centric diagnostic tool, promising a significant impact on future research in AI, ML, NLP, and robotics within the medical domain.
Authored by K. Suneetha, Ashendra Saxena
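For intuition, the SWRL-driven two-stage design can be approximated in plain Python as below; the symptom-to-disease rule table, weights, and risk formula are invented placeholders, whereas the actual system operates on an OWL ontology with SWRL rules.

```python
# Toy rule base: each disease "fires" on an overlapping symptom set.
DISEASE_RULES = {
    "influenza": {"fever", "cough", "fatigue"},
    "migraine": {"headache", "nausea", "light_sensitivity"},
}
SYMPTOM_WEIGHTS = {"fever": 0.4, "cough": 0.2, "fatigue": 0.1,
                   "headache": 0.4, "nausea": 0.3, "light_sensitivity": 0.3}

def diagnose(observed: set[str]) -> list[tuple[str, float]]:
    """Fire each rule whose symptoms overlap, scoring by matched weight."""
    scores = []
    for disease, required in DISEASE_RULES.items():
        matched = required & observed
        if matched:
            scores.append((disease, round(sum(SYMPTOM_WEIGHTS[s] for s in matched), 2)))
    return sorted(scores, key=lambda t: t[1], reverse=True)

def risk_score(ranked: list[tuple[str, float]]) -> float:
    """Final user risk score: here simply the top match's score."""
    return ranked[0][1] if ranked else 0.0

ranked = diagnose({"fever", "cough", "headache"})
print(ranked, "risk:", risk_score(ranked))
```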
Web 3.0 is known for its decentralized and secure architecture, made possible by blockchain technology. By offering a secure and trustworthy platform for transactions and data storage, this new paradigm shift in the digital world promises to transform the way we interact with the internet. Data is the new oil, so protecting it is equally crucial. Blockchain technology is the foundation of the Web 3.0 ecosystem, providing a secure and open method of managing user data. With the launch of Web 3.0, demand for seamless communication across numerous platforms and technologies has increased. Blockchain offers a common framework that makes it possible for various systems to communicate with one another. The decentralized nature of blockchain technology makes it very difficult for hackers to access the system, ushering in a highly secure Web 3.0. By preserving the integrity and validity of data and transactions, blockchain helps to build trust in online transactions. AI can be integrated with blockchain to enhance its capabilities and improve the overall user experience. By merging blockchain and AI, we can build a safe and intelligent web that empowers users and gives them greater privacy and control over their online data. In this article, we emphasize the value of blockchain and AI technologies in achieving Web 3.0's full potential for a secure internet and propose a blockchain- and AI-empowered framework. The future of technology is now driven by the power of blockchain, AI, and Web 3.0, providing a secure and efficient way to manage digital assets and data.
Authored by Akshay Suryavanshi, Apoorva G, Mohan N, Rishika M, Abdul N
With the rapid growth of information technology in what is called the Digital Era, it is evident that no one can function without the internet and ICT advancements; day-to-day operations and activities depend on these technologies. The latest technology trends in the market and industry include computing power, smart devices, artificial intelligence, robotic process automation, the metaverse, the Internet of Things (IoT), cloud computing, edge computing, and blockchain, with more to come in the years ahead. Common to all these advancements are cloud computing and data, which must be protected and safeguarded; this creates the need for cyber and cloud security. Cloud security challenges have therefore become an omnipresent concern for organizations and industries of any size, growing from isolated incidents into a full threat landscape. Safeguarding data raises many challenges, so everyone must be aware of the latest technological advancements, evolving cyber threats, data as a valuable asset, the human factor, regulatory compliance, and cyber resilience. To handle these challenges, this paper proposes a security and risk prediction framework. The framework, PRCSAM (Predictive Risk and Complexity Score Assessment Model), considers factors such as the impact and likelihood of the main risks, threats, and attacks foreseen in cloud security, and recommends a risk management framework with automatic risk assessment and scoring covering information security and privacy risks. This framework will help management and organizations make informed decisions on cyber security strategy, as it is a data-driven, dynamic, and proactive approach to cyber security and its complexity calculation. The paper also discusses prediction techniques based on Generative AI.
Authored by Kavitha Ayappan, J.M Mathana, J Thangakumar
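Although the abstract does not publish PRCSAM's exact formulas, its impact-and-likelihood scoring can be sketched as follows; the risk register, the 1-5 scales, and the complexity weighting below are all assumptions for illustration.

```python
# Toy risk register: risk name -> (impact 1-5, likelihood 1-5). Entries are invented.
RISKS = {
    "data_breach": (5, 3),
    "account_hijack": (4, 2),
    "misconfiguration": (3, 4),
}

def risk_scores(risks):
    # Classic risk scoring: score = impact x likelihood.
    return {name: impact * likelihood for name, (impact, likelihood) in risks.items()}

def complexity_score(scores, weight: float = 0.2):
    # Assumed "complexity" aggregate: a scaled sum of individual risk scores.
    return weight * sum(scores.values())

scores = risk_scores(RISKS)
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s}")
print("overall complexity score:", complexity_score(scores))
```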
Artificial Intelligence (AI) holds great potential for enhancing Risk Management (RM) through automated data integration and analysis. While the positive impact of AI in RM is acknowledged, concerns are rising about unintended consequences. This study explores factors like opacity, technology and security risks, revealing potential operational inefficiencies and inaccurate risk assessments. Through archival research and stakeholder interviews, including chief risk officers and credit managers, findings highlight the risks stemming from the absence of AI regulations, operational opacity, and information overload. These risks encompass cybersecurity threats, data manipulation uncertainties, monitoring challenges, and biases in algorithms. The study emphasizes the need for a responsible AI framework to address these emerging risks and enhance the effectiveness of RM processes. By advocating for such a framework, the authors provide practical insights for risk managers and identify avenues for future research in this evolving field.
Authored by Abdelmoneim Metwally, Salah Ali, Abdelnasser Mohamed
Penetration testing (pen-testing) detects potential vulnerabilities and exploits by imitating black-hat hackers in order to prevent cyber crimes. Despite recent attempts to automate pen-testing, the problem of automation remains unresolved. Existing attempts are highly case-specific and ignore the unique characteristics of pen-testing; the accuracy achieved is limited and very sensitive to variations, and non-automated algorithms exhibit redundancies when detecting exploits. This paper summarizes recent studies in the penetration testing field and illustrates the importance of a comprehensive hybrid AI automation framework for pen-testing.
Authored by Verina Saber, Dina ElSayad, Ayman Bahaa-Eldin, Zt Fayed
Software vulnerability detection (SVD) aims to identify potential security weaknesses in software. SVD systems have been rapidly evolving from those based on testing, static analysis, and dynamic analysis to those based on machine learning (ML). Many ML-based approaches have been proposed, but challenges remain: training and testing datasets contain duplicates, and building customized end-to-end pipelines for SVD is time-consuming. We present Tenet, a modular framework for building end-to-end, customizable, reusable, and automated pipelines through a plugin-based architecture that supports SVD with several deep learning (DL) and basic ML models. We demonstrate the applicability of Tenet by building practical pipelines that perform SVD on real-world vulnerabilities.
Authored by Eduard Pinconschi, Sofia Reis, Chi Zhang, Rui Abreu, Hakan Erdogmus, Corina Păsăreanu, Limin Jia
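The plugin-based pipeline idea can be sketched in a few lines of Python; this mimics only the architecture the abstract describes and is not Tenet's actual API, and the stage names and sample data are invented.

```python
from typing import Any, Callable

# Registry of pipeline stages; stages register themselves by name.
PLUGINS: dict[str, Callable[[Any], Any]] = {}

def plugin(name: str):
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("dedup")
def dedup(samples):
    # Drop duplicate code samples, a dataset issue the paper highlights.
    seen, out = set(), []
    for code, label in samples:
        if code not in seen:
            seen.add(code)
            out.append((code, label))
    return out

@plugin("tokenize")
def tokenize(samples):
    # Naive whitespace tokenization as a stand-in preprocessing stage.
    return [(code.split(), label) for code, label in samples]

def run_pipeline(stages, data):
    # Chain the selected plugins end to end.
    for name in stages:
        data = PLUGINS[name](data)
    return data

data = [("strcpy(buf, src);", 1), ("strcpy(buf, src);", 1), ("x = 1;", 0)]
print(run_pipeline(["dedup", "tokenize"], data))
```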
This paper presents a vulnerability detection scheme for small unmanned aerial vehicle (UAV) systems, aiming to enhance their security resilience. It begins with a comprehensive analysis of UAV system composition, operational principles, and the multifaceted security threats they face, ranging from software vulnerabilities in flight control systems to hardware weaknesses, communication link insecurities, and ground station management vulnerabilities. Subsequently, an automated vulnerability detection framework is designed, comprising three tiers (information gathering, interaction analysis, and report presentation) and integrating fuzz testing techniques for thorough examination of UAV control systems. Experimental outcomes validate the efficacy of the proposed scheme by revealing weak-password issues in the target UAV's services and its susceptibility to abnormal inputs. The study not only confirms the practical utility of the approach but also contributes valuable insights and methodologies to UAV security, paving the way for future advancements in AI-integrated smart gray-box fuzz testing technologies.
Authored by He Jun, Guo Zihan, Ni Lin, Zhang Shuai
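A minimal mutation-fuzzing loop of the kind such a framework integrates might look like the following sketch; the target parser, the crash condition, and the weak-password probe list are invented stand-ins, since the paper's framework drives real UAV services instead.

```python
import random

COMMON_PASSWORDS = ["admin", "12345", "root", ""]  # weak-credential probes

def target_parse(packet: bytes) -> None:
    """Stand-in for a UAV control-protocol parser with a latent bug."""
    if len(packet) > 2 and packet[0] == 0xFE and packet[1] > 64:
        raise ValueError("abnormal input crashed the parser")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Flip one random byte of the seed packet.
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    rng, crashes = random.Random(0), []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target_parse(case)
        except Exception:
            crashes.append(case)   # the report tier would log these findings
    return crashes

print("crashing inputs found:", len(fuzz(b"\xfe\x00\x10\x20")))
print("weak passwords to probe on discovered services:", COMMON_PASSWORDS)
```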
This study presents a novel approach for fortifying network security systems, crucial for ensuring network reliability and survivability against evolving cyber threats. Our approach integrates Explainable Artificial Intelligence (XAI) with an ensemble of autoencoders and Linear Discriminant Analysis (LDA) to create a robust framework for detecting both known and elusive zero-day attacks. We refer to this integrated method as AE-LDA. Our method stands out in its ability to effectively detect both known and previously unidentified network intrusions. By employing XAI for feature selection, we ensure improved interpretability and precision in identifying key patterns indicative of network anomalies. The autoencoder ensemble, trained on benign data, is adept at recognising a broad spectrum of network behaviours, thereby significantly enhancing the detection of zero-day attacks. Simultaneously, LDA aids in the identification of known threats, ensuring a comprehensive coverage of potential network vulnerabilities. This hybrid model demonstrates superior performance in anomaly detection accuracy and complexity management. Our results highlight a substantial advancement in network intrusion detection capabilities, showcasing an effective strategy for bolstering network reliability and resilience against a diverse range of cyber threats.
Authored by Fatemeh Stodt, Fabrice Theoleyre, Christoph Reich
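The hybrid AE-LDA detection logic can be sketched on synthetic data as below; scikit-learn's MLPRegressor stands in for the paper's autoencoder ensemble, and the data, threshold, and fusion rule are placeholder assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
benign = rng.normal(0, 1, size=(500, 8))
known_attack = rng.normal(3, 1, size=(200, 8))
zero_day = rng.normal(-4, 1, size=(50, 8))     # unseen attack distribution

# Autoencoder stand-in: learn to reconstruct benign traffic only.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(benign, benign)

def recon_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(recon_error(benign), 95)

# LDA: supervised separation of benign vs. known attacks.
lda = LinearDiscriminantAnalysis().fit(
    np.vstack([benign, known_attack]),
    np.r_[np.zeros(len(benign)), np.ones(len(known_attack))])

def detect(X):
    # Flag if LDA says "known attack" OR reconstruction error is anomalous.
    return (lda.predict(X) == 1) | (recon_error(X) > threshold)

print("zero-day detection rate:", detect(zero_day).mean())
print("benign false-positive rate:", detect(benign).mean())
```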
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. Maintaining data privacy with Federated Learning is also a challenge in IoT scenarios: Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup where one component is in charge of federated model generation and another of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves a 0.9724 average F1-Score for identification in a centralized setup, while the average F1-Score in the federated setup is 0.8320. In addition, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
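The final trustworthiness score suggests a weighted aggregation over the six pillars; the sketch below assumes equal weights and invented per-pillar scores, since the paper's exact metric composition is not reproduced here.

```python
# Six pillars from the paper; the individual scores below are invented
# placeholders chosen so the aggregate lands near the reported 0.6.
PILLAR_SCORES = {
    "privacy": 0.55, "robustness": 0.50, "fairness": 0.70,
    "explainability": 0.65, "accountability": 0.60, "federation": 0.60,
}

def trustworthiness(scores: dict, weights: dict | None = None) -> float:
    # Weighted mean over pillars; equal weights unless specified.
    weights = weights or {p: 1 / len(scores) for p in scores}
    return sum(scores[p] * weights[p] for p in scores)

print(f"trustworthiness: {trustworthiness(PILLAR_SCORES):.2f}")
```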
As vehicles increasingly embed digital systems, new security vulnerabilities are also being introduced. Computational constraints make it challenging to add security oversight layers on top of core vehicle systems, especially when the security layers rely on additional deep learning models for anomaly detection. To improve security-aware decision-making for autonomous vehicles (AV), this paper proposes a bi-level security framework. The first security level consists of a one-shot resource allocation game that enables a single vehicle to fend off an attacker by optimizing the configuration of its intrusion prevention system based on risk estimation. The second level relies on a reinforcement learning (RL) environment where an agent is responsible for forming and managing a platoon of vehicles on the fly while also dealing with a potential attacker. We solve the first problem using a minimax algorithm to identify optimal strategies for each player. Then, we train RL agents and analyze their performance in forming security-aware platoons. The trained agents demonstrate superior performance compared to our baseline strategies that do not consider security risk.
Authored by Dominic Phillips, Talal Halabi, Mohammad Zulkernine
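The first security level reduces to a minimax choice over a loss matrix; a toy sketch follows, with an assumed payoff model and pure strategies only (the paper's actual game formulation is richer).

```python
import numpy as np

# loss[i, j]: assumed defender loss when IPS configuration i faces attack type j.
loss = np.array([
    [0.2, 0.8, 0.5],   # config 0: tuned against attack type 0
    [0.7, 0.3, 0.6],   # config 1: tuned against attack type 1
    [0.5, 0.5, 0.4],   # config 2: balanced configuration
])

worst_case = loss.max(axis=1)            # attacker best-responds to each config
best_config = int(worst_case.argmin())   # defender minimizes worst-case loss
print("minimax IPS configuration:", best_config,
      "worst-case loss:", worst_case[best_config])
```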
As a recent breakthrough in generative artificial intelligence, ChatGPT is capable of creating new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. This paper starts from the risks and events that ChatGPT has recently faced, proposes a framework for analyzing cybersecurity in cyberspace, and envisions adversarial models and systems. It puts forward a new evolutionary relationship between attackers and defenders using ChatGPT to enhance their own capabilities in a changing environment and predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
The adoption of IoT in a multitude of critical infrastructures is revolutionizing several sectors, ranging from smart healthcare systems to financial organizations and thermal and nuclear power plants. Yet, the progressive growth of IoT devices in critical infrastructure without considering security risks can compromise the privacy, confidentiality, and integrity of both individuals and organizations. To overcome these security threats, we propose an AI- and onion-routing-based secure architecture for IoT-enabled critical infrastructure. We first employ AI classifiers that separate attack from non-attack IoT data, discarding attack data from further communication. The AI classifiers are protected from data poisoning attacks by an isolation forest algorithm that efficiently detects poisoned data and eradicates it from the dataset's feature space. Only non-attack data is forwarded to the onion routing network, which applies triple encryption to the IoT data. Because the onion routing network processes only non-attack data, it is less computationally expensive than other baseline works. Moreover, each onion router is associated with blockchain nodes that store the verification tokens of IoT data. The proposed architecture is evaluated using performance parameters such as accuracy, precision, recall, training time, and compromise rate. Among the evaluated classifiers, SVM performs best, achieving 97.7% accuracy.
Authored by Nilesh Jadav, Rajesh Gupta, Sudeep Tanwar
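The poisoning-sanitization step the abstract describes (isolation forest filtering ahead of SVM training) can be sketched directly with scikit-learn; the synthetic data below merely stands in for IoT traffic features, and the contamination rate is an assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_clean = rng.normal(0, 1, size=(500, 6))
y_clean = (X_clean[:, 0] > 0).astype(int)
X_poison = rng.normal(6, 0.5, size=(25, 6))       # injected poisoned points
X = np.vstack([X_clean, X_poison])
y = np.r_[y_clean, rng.integers(0, 2, size=25)]   # poisoned labels

# Isolation forest detects the outlying (poisoned) points in feature space.
iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
keep = iso.predict(X) == 1                         # 1 = inlier
print("points removed as poisoned:", int((~keep).sum()))

clf = SVC().fit(X[keep], y[keep])                  # train SVM on sanitized data
print("accuracy on clean data:", clf.score(X_clean, y_clean))
```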
The CSAI-4-CPS model is proposed to characterize the use of Artificial Intelligence in cybersecurity applied to the context of Cyber-Physical Systems (CPS). The model aims to establish a methodology that can self-adapt using shared machine learning models without incurring the loss of data privacy. The model will be implemented in a generic framework to assess accuracy across different datasets, taking advantage of federated learning and machine learning approaches. The proposed solution can facilitate the construction of new AI cybersecurity tools and systems for CPS, enabling better assessment and increasing the security and robustness of these systems more efficiently.
Authored by Hebert Silva
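The shared-model mechanism the abstract relies on is federated learning; a generic federated-averaging (FedAvg) sketch on a toy logistic-regression task follows. This is the standard algorithm, not the CSAI-4-CPS implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's logistic-regression gradient steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
clients = []
for _ in range(3):                      # three CPS sites with private data
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] - X[:, 2] > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(4)
for rnd in range(20):                   # federated rounds
    locals_ = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(locals_, axis=0) # server averages weights, never data

acc = np.mean([(((X @ w_global) > 0) == y).mean() for X, y in clients])
print("average client accuracy:", round(float(acc), 3))
```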
Existing defense strategies against adversarial attacks (AAs) on AI/ML primarily focus on examining input data streams using a wide variety of filtering techniques. For instance, input filters are used to remove noisy, misleading, and out-of-class inputs along with a variety of attacks on learning systems. However, a single filter may not be able to detect all types of AAs. To address this issue, we propose a robust, transferable, distribution-independent, cross-domain framework for selecting Adaptive Filter Ensembles (AFEs) to minimize the impact of data poisoning on learning systems. The optimal filter ensembles are determined through a Multi-Objective Bi-Level Programming Problem (MOBLPP) that provides a subset of diverse filter sequences, each exhibiting fair detection accuracy. The proposed AFE framework is trained to model the pristine data distribution in order to identify corrupted inputs, and it converges to the optimal AFE without vanishing gradients or mode collapse, irrespective of the input data distribution. We present preliminary experiments showing that the proposed defense outperforms existing defenses in terms of robustness and accuracy.
Authored by Arunava Roy, Dipankar Dasgupta
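The MOBLPP itself is beyond a short example, but the ensemble-of-filters idea can be approximated with greedy selection, as sketched below; the filters, data, and scoring are illustrative stand-ins, and greedy search only loosely approximates the paper's bi-level optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (400, 5)), rng.normal(4, 1, (100, 5))])
is_poison = np.r_[np.zeros(400, bool), np.ones(100, bool)]

# Candidate input filters; each flags suspicious samples.
FILTERS = {
    "norm":   lambda X: np.linalg.norm(X, axis=1) > 6.0,
    "max":    lambda X: X.max(axis=1) > 3.0,
    "zscore": lambda X: np.abs((X - X.mean(0)) / X.std(0)).max(axis=1) > 2.5,
}

def ensemble_flags(names):
    # An input is rejected if any filter in the ensemble flags it.
    return np.any([FILTERS[n](X) for n in names], axis=0)

def f1(flags):
    tp = (flags & is_poison).sum()
    return 2 * tp / (flags.sum() + is_poison.sum() + 1e-9)

# Greedy selection: add the filter that most improves ensemble F1, then stop.
chosen = []
for _ in range(len(FILTERS)):
    best = max((n for n in FILTERS if n not in chosen),
               key=lambda n: f1(ensemble_flags(chosen + [n])))
    if chosen and f1(ensemble_flags(chosen + [best])) <= f1(ensemble_flags(chosen)):
        break
    chosen.append(best)
print("selected filter ensemble:", chosen,
      "F1:", round(f1(ensemble_flags(chosen)), 3))
```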
Operating systems are essential software components for any computer, and the goal of computer system manufacturers is to provide a safe operating system that can resist a range of attacks. Advanced Persistent Threats (APTs) are just one kind of attack used by hackers to penetrate organisations. Here, we apply the MITRE ATT&CK approach to analyze the security of Windows and Linux. Using the results of a series of vulnerability tests conducted on Windows 7, 8, 10, and Windows Server 2012, as well as Linux 16.04, 18.04, and its most recent version, we establish which operating system offers the most protection against future attacks. In addition, we demonstrate adversarial reflection in response to threats. We used ATT&CK framework tools to launch attacks on both platforms.
Authored by Hira Sikandar, Usman Sikander, Adeel Anjum, Muazzam Khan
Cloud services use CAPTCHAs to protect themselves from malicious programs. With the explosive development of AI technology and the emergence of third-party recognition services, the factors that influence CAPTCHA security are becoming more complex. In such a situation, evaluating the security of mainstream CAPTCHAs in cloud services helps guide better CAPTCHA design choices for providers. In this paper, we evaluate and analyze the security of six mainstream CAPTCHA image designs in public cloud services. Based on the evaluation results, we offer cloud service providers suggestions on CAPTCHA image design. In addition, we specifically discuss the CAPTCHA images adopted by Facebook and Twitter. The evaluations are conducted in two stages: (i) using AI techniques alone; and (ii) using both AI techniques and third-party services. The former is based on open-source models; the latter is conducted under our proposed framework, CAPTCHAMix.
Authored by Xiaojiang Zuo, Xiao Wang, Rui Han
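The two-stage evaluation protocol can be expressed as a small harness; the solver success rates below are simulated placeholders, not measured results, and the independence assumption used to combine stage (ii) attacks is ours rather than CAPTCHAMix's.

```python
import random

rng = random.Random(0)
DESIGNS = ["text_warp", "grid_select", "rotate"]         # hypothetical designs
OPEN_MODEL_RATE = {"text_warp": 0.62, "grid_select": 0.35, "rotate": 0.18}
THIRD_PARTY_RATE = {"text_warp": 0.90, "grid_select": 0.70, "rotate": 0.55}

def success_rate(rate_by_design, design, trials=1000):
    # Monte Carlo estimate of solver success against one CAPTCHA design.
    return sum(rng.random() < rate_by_design[design] for _ in range(trials)) / trials

for design in DESIGNS:
    s1 = success_rate(OPEN_MODEL_RATE, design)            # stage (i): AI alone
    # Stage (ii): attack succeeds if either the open model or the third-party
    # service solves the CAPTCHA (independence assumed for simplicity).
    s2 = 1 - (1 - s1) * (1 - THIRD_PARTY_RATE[design])
    print(f"{design}: AI-only {s1:.2f}, AI+service {s2:.2f}")
```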