Artificial Intelligence (AI) and Machine Learning (ML) models, while powerful, are not immune to security threats. These models, often seen as mere data files, are executable code, making them susceptible to attacks. Serialization formats like .pickle, .HDF5, .joblib, .ONNX etc. commonly used for model storage, can inadvertently allow arbitrary code execution, a vulnerability actively exploited by malicious actors. Furthermore, the execution environment for these models, such as PyTorch and TensorFlow, lacks robust sandboxing, enabling the creation of computational graphs that can perform I/O operations, interact with files, communicate over networks, and even spawn additional processes, underscoring the importance of ensuring the safety of the code executed within these frameworks. The emergence of Software Development Kits (SDKs) like ClearML, designed for tracking experiments and managing model versions, adds another layer of complexity and risk. Both open-source and enterprise versions of these SDKs have vulnerabilities that are just beginning to surface, posing additional challenges to the security of AI/ML systems. In this paper, we delve into these security challenges, exploring attacks, vulnerabilities, and potential mitigation strategies to safeguard AI and ML deployments.
Authored by Natalie Grigorieva
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with challenges of the breadth of the AI security risk sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also prospect an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with challenges of the breadth of the AI security risk sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also prospect an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
In this work, we provide an in-depth characterization study of the performance overhead for running Transformer models with secure multi-party computation (MPC). MPC is a cryptographic framework for protecting both the model and input data privacy in the presence of untrusted compute nodes. Our characterization study shows that Transformers introduce several performance challenges for MPC-based private machine learning inference. First, Transformers rely extensively on “softmax” functions. While softmax functions are relatively cheap in a non-private execution, softmax dominates the MPC inference runtime, consuming up to 50\% of the total inference runtime. Further investigation shows that computing the maximum, needed for providing numerical stability to softmax, is a key culprit for the increase in latency. Second, MPC relies on approximating non-linear functions that are part of the softmax computations, and the narrow dynamic ranges make optimizing softmax while maintaining accuracy quite difficult. Finally, unlike CNNs, Transformer-based NLP models use large embedding tables to convert input words into embedding vectors. Accesses to these embedding tables can disclose inputs; hence, additional obfuscation for embedding access patterns is required for guaranteeing the input privacy. One approach to hide address accesses is to convert an embedding table lookup into a matrix multiplication. However, this naive approach increases MPC inference runtime significantly. We then apply tensor-train (TT) decomposition, a lossy compression technique for representing embedding tables, and evaluate its performance on embedding lookups. We show the trade-off between performance improvements and the corresponding impact on model accuracy using detailed experiments.
Authored by Yongqin Wang, Edward Suh, Wenjie Xiong, Benjamin Lefaudeux, Brian Knott, Murali Annavaram, Hsien-Hsin Lee
Electronic media knowledge is unprecedently increasing in recent years. In almost all security control areas, traffic control, weather monitoring, video conferences, social media etc., videos and multimedia data analysis practices are used. As a consequence, it is necessary to retain and transmit these data, by considering the security and privacy issues. IN this research study, a new Div-Mod Stego algorithm is combined with the Multi-Secret Sharing method along with temporary frame reordering and Genetic algorithm to implement high-end security in the process of video sharing. The qualitative and quantitative analysis has also been carried out to compare the performance of the proposed model with the other existing models. A computer analysis shows that the proposed solution would satisfy the requirements of the real-time application.
Authored by R. Logeshwari, Rajasekar Velswamy, Subhashini R, Karunakaran V
At present, technological solutions based on artificial intelligence (AI) are being accelerated in various sectors of the economy and social relations in the world. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identification of vulnerabilities, threats and risks of AI technologies requires consideration of each technology separately or in some aggregate in cases of their joint use in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors and public services (including Marketplaces) have received the most attention. Due to the high importance and impact on most AI solutions, this paper will focus on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks to AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs and the associated risks have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives of AI and security remains unchanged in the era of generative AI, (b) The generalization of AI targets and automatic program generation with the birth of generative AI will greatly increase the risk of attacks by the AI itself, (c) The birth of generative AI will make it possible to generate easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people and an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov s three principles of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
This article proposes a security protection technology based on active dynamic defense technology. Solved unknown threats that traditional rule detection methods cannot detect, effectively resisting purposeless virus spread such as worms; Isolate new unknown viruses, Trojans, and other attack threats; Strengthen terminal protection, effectively solve east-west horizontal penetration attacks in the internal network, and enhance the adversarial capabilities of the internal network. Propose modeling user behavior habits based on machine learning algorithms. By using historical behavior models, abnormal user behavior can be detected in real-time, network danger can be perceived, and proactive changes in network defense strategies can be taken to increase the difficulty of attackers. To achieve comprehensive and effective defense, identification, and localization of network attack behaviors, including APT attacks.
Authored by Fu Yu
The growing deployment of IoT devices has led to unprecedented interconnection and information sharing. However, it has also presented novel difficulties with security. Using intrusion detection systems (IDS) that are based on artificial intelligence (AI) and machine learning (ML), this research study proposes a unique strategy for addressing security issues in Internet of Things (IoT) networks. This technique seeks to address the challenges that are associated with these IoT networks. The use of intrusion detection systems (IDS) makes this technique feasible. The purpose of this research is to simultaneously improve the present level of security in ecosystems that are connected to the Internet of Things (IoT) while simultaneously ensuring the effectiveness of identifying and mitigating possible threats. The frequency of cyber assaults is directly proportional to the increasing number of people who rely on and utilize the internet. Data sent via a network is vulnerable to interception by both internal and external parties. Either a human or an automated system may launch this attack. The intensity and effectiveness of these assaults are continuously rising. The difficulty of avoiding or foiling these types of hackers and attackers has increased. There will occasionally be individuals or businesses offering IDS solutions who have extensive domain expertise. These solutions will be adaptive, unique, and trustworthy. IDS and cryptography are the subjects of this research. There are a number of scholarly articles on IDS. An investigation of some machine learning and deep learning techniques was carried out in this research. To further strengthen security standards, some cryptographic techniques are used. Problems with accuracy and performance were not considered in prior research. Furthermore, further protection is necessary. This means that deep learning can be even more effective and accurate in the future.
Authored by Mohammed Mahdi
Using Intrusion Detection Systems (IDS) powered by artificial intelligence is presented in the proposed work as a novel method for enhancing residential security. The overarching goal of the study is to design, develop, and evaluate a system that employs artificial intelligence techniques for real-time detection and prevention of unauthorized access in response to the rising demand for such measures. Using anomaly detection, neural networks, and decision trees, which are all examples of machine learning algorithms that benefit from the incorporation of data from multiple sensors, the proposed system guarantees the accurate identification of suspicious activities. Proposed work examines large datasets and compares them to conventional security measures to demonstrate the system s superior performance and prospective impact on reducing home intrusions. Proposed work contributes to the field of residential security by proposing a dependable, adaptable, and intelligent method for protecting homes against the ever-changing types of infiltration threats that exist today.
Authored by Jeneetha J, B.Vishnu Prabha, B. Yasotha, Jaisudha J, C. Senthilkumar, V.Samuthira Pandi
Intrusion Detection Systems (IDS) are critical for detecting and mitigating cyber threats, yet the opaqueness of machine learning models used within these systems poses challenges for understanding their decisions. This paper proposes a novel approach to address this issue by integrating SHAP (SHapley Additive exPlanations) values with Large Language Models (LLMs). With the aim of enhancing transparency and trust in IDS, this approach demonstrates how the combination facilitates the generation of human-understandable explanations for detected anomalies, drawing upon the CICIDS2017 dataset. The LLM effectively articulates significant features identified by SHAP values, offering coherent responses regarding influential predictors of model outcomes.
Authored by Abderrazak Khediri, Hamda Slimi, Ayoub Yahiaoui, Makhlouf Derdour, Hakim Bendjenna, Charaf Ghenai
This paper proposes an AI-based intrusion detection method for the ITRI AI BOX information security application. The packets captured by AI BOX are analyzed to determine whether there are network attacks or abnormal traffic according to AI algorithms. Adjust or isolate some unnatural or harmful network data transmission behaviors if detected as abnormal. AI models are used to detect anomalies and allow or restrict data transmission to ensure the information security of devices. In future versions, it will also be able to intercept packets in the field of information technology (IT) and operational technology (OT). It can be applied to the free movement between heterogeneous networks to assist in data computation and transformation. This paper uses the experimental test to realize the intrusion detection method, hoping to add value to the AI BOX information security application. When IT and OT fields use AI BOX to detect intrusion accurately, it will protect the smart factory or hospital from abnormal traffic attacks and avoid causing system paralysis, extortion, and other dangers. We have built the machine learning model, packet sniffing functionality, and the operating system setting of the AI BOX environment. A public dataset has been used to test the model, and the accuracy has achieved 99\%, and the Yocto Project environment has been available in the AI Box and tested successfully.
Authored by Jiann-Liang Chen, Zheng-Zhun Chen, Youg-Sheng Chang, Ching-Iang Li, Tien-I Kao, Yu-Ting Lin, Yu-Yi Xiao, Jian-Fu Qiu
With the continuous development of Autonomous Vehicles (AVs), Intrusion Detection Systems (IDSs) became essential to ensure the security of in-vehicle (IV) networks. In the literature, classic machine learning (ML) metrics used to evaluate AI-based IV-IDSs present significant limitations and fail to assess their robustness fully. To address this, our study proposes a set of cyber resiliency metrics adapted from MITRE s Cyber Resiliency Metrics Catalog, tailored for AI-based IV-IDSs. We introduce specific calculation methods for each metric and validate their effectiveness through a simulated intrusion detection scenario. This approach aims to enhance the evaluation and resilience of IV-IDSs against advanced cyber threats and contribute to safer autonomous transportation.
Authored by Hamza Khemissa, Mohammed Bouchouia, Elies Gherbi
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which seeds constant scrutiny in accountability. Explainable AI (XAI) methods bridge this gap in identifying unaccounted biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of the model from XAI methods, jeopardizing any auditory or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time ever, we formalize and demonstrate a framework on how an attacker would adopt scaffoldings to deceive the security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD. We then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful, and the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
The emergence of large language models (LLMs) has brought forth remarkable capabilities in various domains, yet it also poses inherent risks to trustfulness, encompassing concerns such as toxicity, stereotype bias, adversarial robustness, ethics, privacy, and fairness. Particularly in sensitive applications like customer support chatbots, AI assistants, and digital information automation, which handle privacy-sensitive data, the adoption of generative pre-trained transformer (GPT) models is pervasive. However, ensuring robust security measures to mitigate potential security vulnerabilities is imperative. This paper advocates for a proactive approach termed "security shift-left," which emphasizes integrating security measures early in the development lifecycle to bolster the security posture of LLM-based applications. Our proposed method leverages basic machine learning (ML) techniques and retrieval-augmented generation (RAG) to effectively address security concerns. We present empirical evidence validating the efficacy of our approach with one LLM-based security application designed for the detection of malicious intent, utilizing both open-source datasets and synthesized datasets. By adopting this security shift-left methodology, developers can confidently develop LLM-based applications with robust security protection, safeguarding against potential threats and vulnerabilities.
Authored by Qianlong Lan, Anuj Kaul, Nishant Pattanaik, Piyush Pattanayak, Vinothini Pandurangan
This paper introduces a novel AI-driven ontology-based framework for disease diagnosis and prediction, leveraging the advancements in machine learning and data mining. We have constructed a comprehensive ontology that maps the complex relationships between a multitude of diseases and their manifested symptoms. Utilizing Semantic Web Rule Language (SWRL), we have engineered a set of robust rules that facilitate the intelligent prediction of diseases, embodying the principles of NLP for enhanced interpretability. The developed system operates in two fundamental stages. Initially, we define a sophisticated class hierarchy within our ontology, detailing the intricate object and data properties with precision—a process that showcases our application of computer vision techniques to interpret and categorize medical imagery. The second stage focuses on the application of AI-powered rules, which are executed to systematically extract and present detailed disease information, including symptomatology, adhering to established medical protocols. The efficacy of our ontology is validated through extensive evaluations, demonstrating its capability to not only accurately diagnose but also predict diseases, with a particular emphasis on the AI methodologies employed. Furthermore, the system calculates a final risk score for the user, derived from a meticulous analysis of the results. This score is a testament to the seamless integration of AI and ML in developing a user-centric diagnostic tool, promising a significant impact on future research in AI, ML, NLP, and robotics within the medical domain.
Authored by K. Suneetha, Ashendra Saxena
The rapid advancement of cloud technology has resulted in the emergence of many cloud service providers. Microsoft Azure is one among them to provide a flexible cloud computing platform that can scale business to exceptional heights. It offers extensive cloud services and is compatible with a wide range of developer tools, databases, and operating systems. In this paper, a detailed analysis of Microsoft Azure in the cloud computing era is performed. For this reason, the three significant Azure services, namely, the Azure AI (Artificial Intelligence) and Machine Learning (ML) Service, Azure Analytics Service and Internet of Things (IoT) are investigated. The paper briefs on the Azure Cognitive Search and Face Service under AI and ML service and explores this service s architecture and security measures. The proposed study also surveys the Data Lake and Data factory Services under Azure Analytics Service. Subsequently, an overview of Azure IoT service, mainly IoT Hub and IoT Central, is discussed. Along with Microsoft Azure, other providers in the market are Google Compute Engine and Amazon Web Service. The paper compares and contrasts each cloud service provider based on their computing capability.
Authored by Sreyes K, Anushka K, Dona Davis, N. Jayapandian
The advent of Generative AI has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in generating realistic images, texts, and data patterns. However, these advancements come with heightened concerns over data privacy and copyright infringement, primarily due to the reliance on vast datasets for model training. Traditional approaches like differential privacy, machine unlearning, and data poisoning only offer fragmented solutions to these complex issues. Our paper delves into the multifaceted challenges of privacy and copyright protection within the data lifecycle. We advocate for integrated approaches that combines technical innovation with ethical foresight, holistically addressing these concerns by investigating and devising solutions that are informed by the lifecycle perspective. This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI.CCS CONCEPTS• Software and its engineering Software architectures; • Information systems World Wide Web; • Security and privacy Privacy protections; • Social and professional topics Copyrights; • Computing methodologies Machine learning.
Authored by Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu
This work introduces an innovative security system prototype tailored explicitly for paying guest accommodations or hostels, blending Internet of Things (IoT), artificial intelligence (AI), machine learning algorithms, and web crawling technologies. The core emphasis revolves around facial recognition, precisely distinguishing between known and unknown individuals to manage entry effectively. The system, integrating camera technology, captures visitor images and employs advanced face recognition algorithms for precise face classification. In instances where faces remain unrecognized, the system leverages web crawling to retrieve potential intruder details. Immediate notifications, featuring captured images, are swiftly dispatched to users through email and smartphone alerts, enabling prompt responses. Operated within a wireless infrastructure governed by a Raspberry Pi, this system prioritizes cost-effectiveness and user-friendliness. Rigorously tested across diverse environments encompassing homes, paying guest accommodations, and office spaces, this research establishes a remarkable balance between cutting-edge technology and pragmatic security applications. This solution offers an affordable and efficient security option tailored explicitly for the unique needs of contemporary hostels and paying guest accommodations, ensuring heightened security without exorbitant expenses.
Authored by Pallavi Kumar, Janani. K, Sri N, Sai K, D. Reddy
Artificial Intelligence (AI) holds great potential for enhancing Risk Management (RM) through automated data integration and analysis. While the positive impact of AI in RM is acknowledged, concerns are rising about unintended consequences. This study explores factors like opacity, technology and security risks, revealing potential operational inefficiencies and inaccurate risk assessments. Through archival research and stakeholder interviews, including chief risk officers and credit managers, findings highlight the risks stemming from the absence of AI regulations, operational opacity, and information overload. These risks encompass cybersecurity threats, data manipulation uncertainties, monitoring challenges, and biases in algorithms. The study emphasizes the need for a responsible AI framework to address these emerging risks and enhance the effectiveness of RM processes. By advocating for such a framework, the authors provide practical insights for risk managers and identify avenues for future research in this evolving field.
Authored by Abdelmoneim Metwally, Salah Ali, Abdelnasser Mohamed
We propose a new security risk assessment approach for Machine Learning-based AI systems (ML systems). The assessment of security risks of ML systems requires expertise in ML security. So, ML system developers, who may not know much about ML security, cannot assess the security risks of their systems. By using our approach, a ML system developers can easily assess the security risks of the ML system. In performing the assessment, the ML system developer only has to answer the yes/no questions about the specification of the ML system. In our trial, we confirmed that our approach works correctly. CCS CONCEPTS • Security and privacy; • Computing methodologies → Artificial intelligence; Machine learning;
Authored by Jun Yajima, Maki Inui, Takanori Oikawa, Fumiyoshi Kasahara, Ikuya Morikawa, Nobukazu Yoshioka
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with challenges of the breadth of the AI security risk sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also prospect an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
A fast expanding topic of study on automated AI is focused on the prediction and prevention of cyber-attacks using machine learning algorithms. In this study, we examined the research on applying machine learning algorithms to the problems of strategic cyber defense and attack forecasting. We also provided a technique for assessing and choosing the best machine learning models for anticipating cyber-attacks. Our findings show that machine learning methods, especially random forest and neural network models, are very accurate in predicting cyber-attacks. Additionally, we discovered a number of crucial characteristics, such as source IP, packet size, and malicious traffic that are strongly associated with the likelihood of cyber-attacks. Our results imply that automated AI research on cyber-attack prediction and security planning has tremendous promise for enhancing cyber-security and averting cyber-attacks.
Authored by Ravikiran Madala, N. Vijayakumar, Nandini N, Shanti Verma, Samidha Chandvekar, Devesh Singh
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks to AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs and the associated risks have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives of AI and security remains unchanged in the era of generative AI, (b) The generalization of AI targets and automatic program generation with the birth of generative AI will greatly increase the risk of attacks by the AI itself, (c) The birth of generative AI will make it possible to generate easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people and an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov s three principles of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with challenges of the breadth of the AI security risk sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also prospect an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka