In the rapidly evolving domain of Unmanned Aerial Vehicles (UAVs), resilient and transparent security measures are paramount. This study argues for a Zero Trust Architecture (ZTA) to strengthen UAV security, departing from conventional perimeter defences that can leave vulnerabilities exposed. The ZTA paradigm requires rigorous, continuous authentication of all network entities and communications. Our methodology detects and identifies UAVs with 84.59% accuracy by analysing Radio Frequency (RF) signals within a deep learning framework, a novel approach. Precise identification is crucial in ZTA because it determines network access. In addition, eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) improve the model's transparency and interpretability. Adherence to ZTA standards ensures that UAV classifications are verifiable and comprehensible, strengthening security across the UAV domain.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
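The abstract above pairs an RF-based deep learning classifier with per-prediction explanations; the minimal sketch below shows that pattern with a small neural network explained by LIME. The synthetic data, feature names, class labels, and network size are illustrative assumptions, not the authors' dataset or architecture.

```python
# Minimal sketch: a neural-network classifier over RF-derived features,
# explained per prediction with LIME. Synthetic features and labels are
# stand-ins for the paper's RF dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["rf_power_db", "bandwidth_khz", "hop_rate", "burst_interval_ms"]
class_names = ["background", "uav_type_a", "uav_type_b"]

# Stand-in RF feature vectors: each class is a shifted Gaussian cluster.
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(200, 4)) for k in range(3)])
y = np.repeat(np.arange(3), 200)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)

# LIME fits a local surrogate around one sample and reports which features
# pushed the classifier toward its decision -- the interpretability step a
# zero-trust access decision could be audited against.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=class_names, mode="classification")
explanation = explainer.explain_instance(X[450], clf.predict_proba, num_features=4)
print(explanation.as_list())
```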
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessment, cyber security awareness, and compliance. AI also provides mechanisms for drafting cybersecurity training, plans, policies, and procedures. When it comes to cyber security risk assessment and cyber insurance, however, risk is difficult to manage and measure, and cybersecurity professionals need a thorough understanding of risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle and demonstrates how AI can aid cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for today's massively IoT-connected 5G networks as well as for systems that will operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks, and because of the complex interaction between IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT) that applies XGBoost together with SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by Navneet Kaur, Lav Gupta
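The detection-plus-explanation step this study describes can be sketched as an XGBoost classifier over flow features with SHAP attributions, as below. The feature names and synthetic traffic are placeholder assumptions; the study itself uses the CICIoT 2023 dataset.

```python
# Hedged sketch: XGBoost intrusion classifier with SHAP attributions.
# Synthetic flow features stand in for the CICIoT 2023 columns.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(42)
feature_names = ["flow_duration", "pkt_rate", "syn_count", "avg_pkt_len"]

# Stand-in flows: attack traffic (label 1) skews toward high packet and SYN rates.
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss")
model.fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles; the per-feature
# attributions are what an administrator would inspect before acting on an alert.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```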
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the potential applications of ChatGPT and other generative AIs, along with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation enabled by generative AI will greatly increase the risk of attacks by the AI itself; and (c) generative AI makes it possible to produce easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can fool many people, and thus to an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's Three Laws of Robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
The complex landscape of multi-cloud settings is the result of the rapid growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud environments for multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to prioritize vulnerabilities efficiently. The incorporation of AI-driven security tactics heralds a new age of real-time threat response. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. Future work will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses navigating the complex cybersecurity landscape of multi-cloud settings, thereby promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
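The abstract mentions a mathematical model for optimal resource allocation without specifying it; the sketch below is a toy linear program of the kind such models often reduce to. The cloud names, risk-reduction coefficients, budget, and caps are purely illustrative assumptions, not the paper's model.

```python
# Illustrative linear program: allocate a fixed security budget across clouds
# to maximize expected risk reduction. Coefficients and constraints are assumed.
from scipy.optimize import linprog

clouds = ["aws", "azure", "gcp"]
risk_reduction_per_unit = [0.8, 0.6, 0.9]   # assumed marginal benefit per budget unit
total_budget = 100.0
per_cloud_cap = 60.0                         # assumed ceiling on any single cloud

# linprog minimizes, so negate the benefit coefficients to maximize risk reduction.
c = [-r for r in risk_reduction_per_unit]
A_ub = [[1.0, 1.0, 1.0]]                     # total spend must stay within budget
b_ub = [total_budget]
bounds = [(0.0, per_cloud_cap)] * len(clouds)

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for name, spend in zip(clouds, result.x):
    print(f"{name}: allocate {spend:.1f} budget units")
print(f"expected risk reduction: {-result.fun:.1f}")
```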
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large Language Models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can benefit businesses in, for example, customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of other digital materials based on its training data, and is expected to be used in business. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. While AI can improve our lives, it also has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. By identifying the potential for unintended output, developers and users can take appropriate precautions. Such experiments are essential to efforts to ensure transparency and accountability in the use of AI and to minimize its potential negative social impacts. We also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
Generative Artificial Intelligence (AI) has increasingly been used to enhance threat intelligence and cyber security measures for organizations. Generative AI is a form of AI that creates new data without relying on existing data or expert knowledge. This technology gives decision support systems the ability to automatically and quickly identify threats posed by hackers or malicious actors by taking into account various sources and data points. In addition, generative AI can help identify vulnerabilities within an organization's infrastructure, further reducing the potential for a successful attack. This technology is especially well suited to security operations centers (SOCs), which require rapid identification of threats and defense measures. By incorporating interesting and valuable data points that would previously have been missed, generative AI can provide organizations with an additional layer of defense against increasingly sophisticated attacks.
Authored by Venkata Saddi, Santhosh Gopal, Abdul Mohammed, S. Dhanasekaran, Mahaveer Naruka
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions for dealing sustainably and systematically with the breadth of AI security risk under the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum across layers in the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
With the increasing deployment of machine learning models across various domains, ensuring AI security has become a critical concern. Model evasion, a specific area of concern, involves attackers manipulating a model's predictions by perturbing the input data. The Fast Gradient Sign Method (FGSM) is a well-known technique for model evasion, typically used in white-box settings where the attacker has direct access to the model's architecture. In this method, the attacker manipulates the inputs to cause mispredictions by exploiting the gradients of the loss with respect to the input. To address the limitations of FGSM in black-box settings, we propose an extension of this approach called FGSM on ZOO. This method leverages the Zeroth Order Optimization (ZOO) technique to intelligently manipulate the inputs. Unlike white-box attacks, black-box attacks rely solely on observing the model's input-output behavior, without access to its internal structure or parameters. We conducted experiments using the MNIST Digits and CIFAR datasets to establish a baseline for vulnerability assessment and to explore future prospects for securing models. By examining the effectiveness of FGSM on ZOO in these experiments, we gain insights into the potential vulnerabilities and the need for improved security measures in AI systems.
Authored by Aravindhan G, Yuvaraj Govindarajulu, Pavan Kulkarni, Manojkumar Parmar
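The abstract contrasts white-box FGSM with a black-box variant driven by zeroth-order estimates; the sketch below shows that core idea: approximate the loss gradient from input-output queries only, then take one signed step. The helper names and the dense coordinate-wise estimator are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an FGSM-style step on zeroth-order (ZOO) gradient estimates:
# the gradient is approximated from model queries alone, then the input is
# perturbed along its sign. Not the paper's code; names and the dense
# coordinate-wise estimator are illustrative assumptions.
import numpy as np

def zoo_fgsm_step(predict_proba, x, true_label, eps=0.1, delta=1e-3):
    """One black-box evasion step on a flat feature vector x in [0, 1]."""
    def loss(z):
        # Cross-entropy on the true class, computed purely from model queries.
        p = predict_proba(z[None, :])[0, true_label]
        return -np.log(p + 1e-12)

    grad_est = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        # Symmetric finite difference along coordinate i (the zeroth-order estimate).
        grad_est[i] = (loss(x + e) - loss(x - e)) / (2.0 * delta)

    # FGSM update: move each coordinate eps in the direction that increases the loss.
    return np.clip(x + eps * np.sign(grad_est), 0.0, 1.0)
```

Calling this with any predict_proba-style function (for example, a scikit-learn classifier's) yields a perturbed sample whose prediction can be compared against the original; practical ZOO attacks replace the dense loop with far cheaper stochastic coordinate sampling.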
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, or banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and are tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game-theoretic approach; it also logs and scores the actions of an AI model to detect adversarial inputs and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI, using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP, which is a parallel process, RFSP is user-driven and its actions are sequential. Data are collected for experimentation; the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
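RFSP's Bayesian hyper-parameter step is not detailed in the abstract; the sketch below shows the general pattern with Optuna's default TPE sampler, a Bayesian-style optimizer. The model, search space, dataset, and scoring are stand-in assumptions, not the paper's pipeline.

```python
# Hedged sketch of Bayesian-style hyper-parameter selection of the kind RFSP's
# description implies, using Optuna's TPE sampler. Model, search space, and
# dataset are stand-ins for the paper's actual pipeline and scoring.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 300),
        "max_depth": trial.suggest_int("max_depth", 2, 12),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    model = RandomForestClassifier(random_state=0, **params)
    # Cross-validated accuracy stands in for whatever assurance-aware score a
    # pipeline like RFSP would aggregate from user inputs and statistical measures.
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")   # TPE sampler by default
study.optimize(objective, n_trials=25)
print("best hyper-parameters:", study.best_params)
print("best cross-validated score:", round(study.best_value, 3))
```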
The 2021 T-Mobile breach conducted by John Erin Binns resulted in the theft of 54 million customers' personal data. The attacker gained entry into T-Mobile's systems through an unprotected router and used brute-force techniques to access the sensitive information stored on the internal servers. The data stolen included names, addresses, Social Security Numbers, birthdays, driver's license numbers, ID information, IMEIs, and IMSIs. We analyze the data breach and how it opens the door to identity theft and many other forms of hacking, such as SIM hijacking. SIM hijacking is a form of attack in which bad actors take control of a victim's phone number, giving them a means to bypass additional safety measures currently in place to prevent fraud. This paper thoroughly reviews the attack methodology and impact, and provides an understanding of important countermeasures and possible defense solutions against future attacks. We also detail other social engineering attacks that can result from the release of the leaked data.
Authored by Christopher Faircloth, Gavin Hartzell, Nathan Callahan, Suman Bhunia