Procurement is a critical step in the setup of systems, as reverting decisions made at this point is typically time-consuming and costly. Artificial Intelligence (AI) based systems in particular face many challenges, ranging from side parameters that are unclear or unknown at design time, through changing ecosystems and regulations, to vendors overselling the capabilities of their systems. Furthermore, the AI Act imposes many additional requirements on operators of critical AI systems, such as risk management and transparency measures, making procurement even more complex. In addition, the number of providers of AI systems is increasing drastically. In this paper we provide guidelines for the procurement of AI based systems that support decision makers in identifying the key elements for procuring secure AI systems, depending on the respective technical and regulatory environment. Furthermore, we provide additional resources for applying these guidelines in practical procurement.
Authored by Peter Kieseberg, Christina Buttinger, Laura Kaltenbrunner, Marlies Temper, Simon Tjoa
Over the years, mobile applications have brought about transformative changes in user interactions with digital services. Many of these apps are free, offering convenience in exchange for personal data; this convenience, however, comes with inherent risks to user privacy and security. This paper introduces a comprehensive methodology for evaluating the risks associated with sharing sensitive data through mobile applications. Building upon the Hierarchical Weighted Risk Scoring Model (HWRSM), it proposes an evaluation methodology for HWRSM, keeping in mind the implications of such risk scoring for real-world security scenarios. The methodology employs innovative risk scoring that considers various factors to assess potential security vulnerabilities related to sensitive terms. Practical assessments involving a diverse set of Android applications, particularly in data-intensive categories, reveal insights into data privacy practices, vulnerabilities, and alignment with HWRSM scores. By offering insights into testing, validation, real-world findings, and model effectiveness, the paper aims to contribute practical considerations to mobile application security discussions, facilitating informed approaches to security and privacy concerns.
Authored by Trishla Shah, Raghav Sampangi, Angela Siegel
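The abstract above does not spell out HWRSM's internals, so the following is only a minimal sketch of the hierarchical weighted scoring idea it describes: hypothetical 0-10 scores for sensitive terms roll up into category scores, which roll up into an app-level risk score. The categories, weights, and scores are invented for illustration.

```python
# Minimal sketch of a hierarchical weighted risk score in the spirit of
# HWRSM. The abstract does not specify the model's internals, so the
# categories, weights, and 0-10 term scores below are illustrative only.

def weighted_score(items):
    """Aggregate (weight, score) pairs into a weight-normalized score."""
    total_weight = sum(w for w, _ in items)
    return sum(w * s for w, s in items) / total_weight

# Leaf level: risk scores (0 = benign, 10 = critical) for sensitive terms
# an app shares, grouped into hypothetical categories.
location_terms = [(0.7, 8.0), (0.3, 5.0)]   # e.g. GPS fix, coarse location
identity_terms = [(0.5, 9.0), (0.5, 6.0)]   # e.g. device ID, email address

# Mid level: category scores roll up from term scores.
categories = [
    (0.6, weighted_score(location_terms)),  # weight reflects category severity
    (0.4, weighted_score(identity_terms)),
]

# Top level: the app's overall risk score.
app_risk = weighted_score(categories)
print(f"App risk score: {app_risk:.2f} / 10")
```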
The use of artificial intelligence (AI) in cyber security [1] has proven very effective, as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines for implementing solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards are continuously updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2]. The use of AI in developing cybersecurity policies and procedures can be key to assuring the correctness and effectiveness of these policies, a need shared by private organizations and governmental agencies alike. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by demonstrating in depth how AI can aid in generating policies quickly and to the required level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Artificial intelligence (AI) has been used successfully in cyber security to enhance the comprehension, investigation, and evaluation of cyber threats, and it can anticipate cyber risks more efficiently. AI also helps in putting strategies in place to safeguard assets and data. Due to their complexity and constant development, cybersecurity controls have been difficult to comprehend, as has adopting the corresponding cyber training, security policies, and plans. Given that both cyber academics and cyber practitioners need a deep comprehension of cybersecurity rules, AI in cybersecurity can be a crucial tool for both education and awareness. By offering an in-depth demonstration of how AI may help in cybersecurity education and awareness and in creating policies quickly and to the required level, this study focuses on the efficiency of AI-driven mechanisms in strengthening the entire cyber security education life cycle.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Artificial Intelligence (AI) holds great potential for enhancing Risk Management (RM) through automated data integration and analysis. While the positive impact of AI in RM is acknowledged, concerns are rising about unintended consequences. This study explores factors like opacity, technology and security risks, revealing potential operational inefficiencies and inaccurate risk assessments. Through archival research and stakeholder interviews, including chief risk officers and credit managers, findings highlight the risks stemming from the absence of AI regulations, operational opacity, and information overload. These risks encompass cybersecurity threats, data manipulation uncertainties, monitoring challenges, and biases in algorithms. The study emphasizes the need for a responsible AI framework to address these emerging risks and enhance the effectiveness of RM processes. By advocating for such a framework, the authors provide practical insights for risk managers and identify avenues for future research in this evolving field.
Authored by Abdelmoneim Metwally, Salah Ali, Abdelnasser Mohamed
Artificial intelligence (AI) has emerged as one of the most formative technologies of the century and continues to gain importance, both for solving major societal challenges (e.g. achieving the sustainable development goals) and as a means to stay competitive in today's global markets. Its role as a key enabler in many areas of our daily life leads to a growing dependence, which has to be managed accordingly to mitigate negative economic, societal, or privacy impacts. Therefore, the European Union is working on an AI Act, which defines concrete governance, risk, and compliance (GRC) requirements. One of the key demands of this regulation is the operation of a risk management system for High-Risk AI systems. In this paper, we therefore present a detailed analysis of relevant literature in this domain and introduce our proposed approach for an AI Risk Management System (AIRMan).
Authored by Simon Tjoa, Peter Temper, Marlies Temper, Jakob Zanol, Markus Wagner, Andreas Holzinger
We propose a new security risk assessment approach for Machine Learning-based AI systems (ML systems). Assessing the security risks of ML systems requires expertise in ML security, so ML system developers who lack such expertise cannot assess the security risks of their own systems. Using our approach, an ML system developer can easily assess these risks: in performing the assessment, the developer only has to answer yes/no questions about the specification of the ML system. In our trial, we confirmed that our approach works correctly.
Authored by Jun Yajima, Maki Inui, Takanori Oikawa, Fumiyoshi Kasahara, Ikuya Morikawa, Nobukazu Yoshioka
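The authors' actual question list is not given in the abstract above; the sketch below only illustrates the mechanism of a yes/no, specification-driven risk assessment. The questions and the risks they map to are hypothetical.

```python
# Minimal sketch of the questionnaire-driven idea described above: the
# developer answers yes/no questions about the ML system's specification,
# and each "yes" toggles predefined risks. The questions and risk
# mappings here are hypothetical, not the authors' actual checklist.

QUESTIONS = {
    "external_training_data": ("Is training data collected from external sources?",
                               "data poisoning"),
    "exposed_prediction_api": ("Is the model's prediction API publicly reachable?",
                               "model extraction / evasion"),
    "returns_confidences":    ("Does the API return raw confidence scores?",
                               "membership inference"),
}

def assess(answers: dict) -> list:
    """Return the security risks implied by the yes/no answers."""
    return [risk for key, (_, risk) in QUESTIONS.items() if answers.get(key)]

answers = {"external_training_data": True,
           "exposed_prediction_api": True,
           "returns_confidences": False}
for risk in assess(answers):
    print("Potential risk:", risk)
```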
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessments, cyber security awareness, and compliance. AI also provides mechanisms for writing cybersecurity training, plans, policies, and procedures. However, when it comes to cyber security risk assessment and cyber insurance, risk is very complicated to manage and measure, and cybersecurity professionals need a thorough understanding of cybersecurity risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle by examining and implementing a demonstration of how AI can aid in cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions for dealing with the breadth of AI security risks sustainably and systematically in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
Cloud computing has become increasingly popular in the modern world. While it has brought many positives to today's innovative technological society, cloud computing has also shown some drawbacks, particularly in the security of the cloud and its many services. Security practices differ in cloud computing because the role of securing information systems is passed to a third party: while this reduces the managerial strain on those who adopt cloud computing, it also brings risk to their data and the services they provide. Cloud services have become a large target for malicious actors due to the high density of valuable data stored in one relative location. By deploying honeynets, cloud service providers can effectively improve their intrusion detection systems and gain the opportunity to study the attack vectors used by malicious actors, further improving security controls. Implementing honeynets in cloud-based networks is an investment in cloud security that will provide ever-increasing returns in hardening information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
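As a concrete illustration of the building blocks a honeynet is assembled from, the sketch below shows a hypothetical low-interaction honeypot: a decoy TCP service that logs connection attempts and the first bytes attackers send, data that can feed intrusion-detection tuning. The port and banner are invented, and this is a toy, not a hardened deployment.

```python
# Minimal sketch of a low-interaction honeypot of the kind honeynets are
# built from: a fake service that accepts connections and logs the source
# and the first bytes sent. Port and banner are illustrative choices;
# real honeynets run on isolated, closely monitored hosts.
import socket, datetime

HOST, PORT = "0.0.0.0", 2222  # pretend-SSH port, hypothetical choice

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # decoy banner
            conn.settimeout(5.0)
            try:
                data = conn.recv(1024)                  # capture attacker input
            except socket.timeout:
                data = b""
            print(f"{datetime.datetime.now().isoformat()} "
                  f"probe from {addr[0]}:{addr[1]} sent {data[:60]!r}")
```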
A fast-expanding area of research on automated AI focuses on the prediction and prevention of cyber-attacks using machine learning algorithms. In this study, we examined the research on applying machine learning algorithms to the problems of strategic cyber defense and attack forecasting, and we provide a technique for assessing and choosing the best machine learning models for anticipating cyber-attacks. Our findings show that machine learning methods, especially random forest and neural network models, are very accurate in predicting cyber-attacks. Additionally, we identified a number of crucial features, such as source IP, packet size, and malicious traffic, that are strongly associated with the likelihood of cyber-attacks. Our results imply that automated AI research on cyber-attack prediction and security planning has tremendous promise for enhancing cyber-security and averting cyber-attacks.
Authored by Ravikiran Madala, N. Vijayakumar, Nandini N, Shanti Verma, Samidha Chandvekar, Devesh Singh
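The study's dataset and exact models are not reproduced here; the sketch below only illustrates the general random-forest workflow the abstract describes, on synthetic flow features standing in for packet size, source-IP-derived flow counts, and a malicious-traffic indicator.

```python
# Minimal sketch of the kind of model described above: a random forest
# trained on flow features to predict attacks. The data is synthetic and
# the label rule is invented; real work would use labeled traffic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(500, 150, n),    # mean packet size (bytes)
    rng.poisson(20, n),         # flows per source IP in window
    rng.random(n),              # fraction of traffic flagged malicious
])
# Synthetic label: attacks correlate with flagged traffic and flow bursts.
y = ((X[:, 2] > 0.6) & (X[:, 1] > 18)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```

The feature-importance vector is the model's own estimate of which inputs drive predictions, the kind of evidence behind the abstract's claim about source IP, packet size, and malicious traffic.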
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), guaranteeing resilient and lucid security measures is of utmost importance. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance UAV security, departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. Using Radio Frequency (RF) signals within a Deep Learning framework, a unique method, our approach detects and identifies UAVs with an accuracy of 84.59\%. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) improves the model's transparency and interpretability. Adherence to ZTA standards guarantees that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
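The following sketch illustrates only the LIME portion of the pipeline described above. A small scikit-learn classifier on hypothetical RF-derived features stands in for the authors' deep learning model; the feature names and data are invented for illustration.

```python
# Minimal sketch of using LIME to explain a UAV-vs-background decision,
# as in the study above. A gradient-boosting model on made-up RF-derived
# tabular features is a stand-in for the authors' deep learning model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(1)
feature_names = ["rf_power_db", "bandwidth_mhz", "hop_rate"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # 1 = UAV, 0 = background

model = GradientBoostingClassifier().fit(X, y)
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["background", "UAV"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())   # per-feature contributions to this one prediction
```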
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for massively IoT-connected 5G systems as well as any new ones that will potentially operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks, and because of the complex interaction of IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT), utilizing XGBoost together with the SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods, applied to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by Navneet Kaur, Lav Gupta
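A minimal sketch of an XGBoost-plus-SHAP pipeline of the kind described above. The CICIoT 2023 dataset must be obtained separately, so synthetic flow features stand in here, and the column semantics are assumed.

```python
# Minimal sketch of the XGBoost + SHAP pipeline described above, on
# synthetic data standing in for CICIoT 2023 flow records.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))              # e.g. duration, bytes, rate, flags
y = (X[:, 1] + X[:, 3] > 0.5).astype(int)   # 1 = attack flow (synthetic rule)

model = xgb.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; each row
# tells an administrator which features drove that flow's attack verdict.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```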
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs, along with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation enabled by generative AI will greatly increase the risk of attacks by AI itself; and (c) because generative AI can produce easy-to-understand answers to various questions in natural language, it may lead to the spread of fake news and phishing e-mails that can easily fool many people, and to an increase in AI-based attacks. In addition, it became clear that attacks using AI and responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's three laws of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations, and strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for the purpose of multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to prioritize vulnerabilities efficiently, and the incorporation of AI-driven security tactics heralds a new age of real-time threat response. The technique has real-world implications that may help businesses stay ahead of constantly changing threats. Future work will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses navigating the complex cybersecurity landscape of multi-cloud settings, promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large Language Models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can benefit businesses in areas such as customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of digital materials, based on its training data, and is expected to see broad business use. However, the widespread use of AI also raises ethical concerns: the potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. While AI can improve our lives, it also has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. By identifying the potential for unintended output, developers and users can take appropriate precautions. Such experiments are essential to efforts to minimize the potential negative social impacts of AI through transparency, accountability, and responsible use. We also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
Generative Artificial Intelligence (AI) has increasingly been used to enhance threat intelligence and cyber security measures for organizations. Generative AI is a form of AI that creates new data without relying on existing data or expert knowledge. This technology provides decision support systems with the ability to automatically and quickly identify threats posed by hackers or malicious actors by taking into account various sources and data points. In addition, generative AI can help identify vulnerabilities within an organization's infrastructure, further reducing the potential for a successful attack. This technology is especially well-suited for security operations centers (SOCs), which require rapid identification of threats and defense measures. By incorporating interesting and valuable data points that previously would have been missed, generative AI can provide organizations with an additional layer of defense against increasingly sophisticated attacks.
Authored by Venkata Saddi, Santhosh Gopal, Abdul Mohammed, S. Dhanasekaran, Mahaveer Naruka
Penetration testing (pen-testing) detects potential vulnerabilities and exploits by imitating black hat hackers in order to stop cyber crimes. Despite recent attempts to automate pen-testing, the issue of automation remains unresolved: existing attempts are highly case-specific, ignore the unique characteristics of pen-testing, achieve limited accuracy, and are very sensitive to variations. There are also redundancies in detecting exploits using non-automated algorithms. This paper summarizes recent research in the penetration testing field and illustrates the importance of a comprehensive hybrid AI automation framework for pen-testing.
Authored by Verina Saber, Dina ElSayad, Ayman Bahaa-Eldin, Zt Fayed
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
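Of the two detector classes surveyed above, the statistical one is easy to illustrate. The sketch below shows a simple, generic statistical baseline, not necessarily a method from the paper: flag inputs whose maximum softmax confidence falls below a threshold calibrated on clean data.

```python
# Minimal sketch of a statistical adversarial-example detector of the
# first class surveyed above: adversarial inputs often sit near decision
# boundaries, so an unusually low maximum softmax probability is treated
# as a detection signal. This is a generic baseline, not a specific
# method from the paper; the probabilities below are hypothetical.
import numpy as np

def calibrate_threshold(clean_probs: np.ndarray, fpr: float = 0.05) -> float:
    """Pick a max-softmax threshold with a target false-positive rate on clean data."""
    max_conf = clean_probs.max(axis=1)
    return float(np.quantile(max_conf, fpr))

def is_adversarial(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Flag inputs whose top-class confidence falls below the threshold."""
    return probs.max(axis=1) < threshold

# Hypothetical softmax outputs: confident on clean inputs, diffuse on attacks.
clean = np.array([[0.95, 0.03, 0.02], [0.88, 0.07, 0.05]])
suspect = np.array([[0.40, 0.35, 0.25]])
t = calibrate_threshold(clean)
print(is_adversarial(suspect, t))   # -> [ True]
```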
With increased computational efficiency, Deep Neural Networks (DNNs) have gained importance in the area of medical diagnosis, and many researchers have noted the security concerns of the various deep neural network models used for clinical applications. An otherwise efficient model frequently misbehaves when confronted with intentionally modified data samples, called adversarial examples. These adversarial examples are generated with imperceptible perturbations but can fool DNNs into giving false predictions. Thus, adversarial attacks and defense methods have lately become a hot research topic in both the AI and security communities. Adversarial attacks can be expected in various applications of deep learning models, especially in the healthcare area for disease prediction or classification, and must be handled with effective defensive mechanisms, or they may pose a great threat to human life. This literature survey helps readers recognize various adversarial attacks and defensive mechanisms, presenting detailed research on adversarial approaches to deep neural networks in the field of clinical analysis. The paper starts with the theoretical foundations, various techniques, and applications of adversarial attack strategies, and also discusses the contributions of various researchers to defensive mechanisms against adversarial attacks. Finally, a few open issues and challenges are discussed, which may prompt further research efforts.
Authored by K Priya V, Peter Dinesh
In recent years, machine learning technology has been extensively utilized, leading to increased attention to the security of AI systems. In the field of image recognition, an attack technique called the clean-label backdoor attack has been widely studied; it is more difficult to detect than general backdoor attacks because data labels do not change when the poisoning data is tampered with during model training. However, there remains a lack of research on malware detection systems, and some of the current work operates under a white-box assumption that requires knowledge of the machine learning model, which is advantageous for attackers. In this study, we focus on clean-label backdoor attacks in malware detection systems and propose a new clean-label backdoor attack under the black-box assumption, which does not require knowledge of the machine learning model and is therefore riskier. The experimental evaluation of the proposed attack method shows that the attack success rate is up to 80.50\% when the poisoning rate is 14.00\%, demonstrating its effectiveness. In addition, we experimentally evaluated the effectiveness of dimensionality reduction techniques in preventing clean-label backdoor attacks and showed that they can reduce the attack success rate by 76.00\%.
Authored by Wanjia Zheng, Kazumasa Omote
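The abstract does not state which dimensionality reduction technique was evaluated, so the sketch below uses PCA reconstruction as one plausible instance of the idea: projecting poisoned feature vectors onto the top principal components and back can attenuate a trigger pattern that lives outside the data's dominant subspace. The data, trigger, and poisoning rate are synthetic stand-ins.

```python
# Minimal sketch of a dimensionality-reduction defense of the kind
# evaluated above, using PCA as one plausible instance (the paper's
# exact technique is not specified). Data and trigger are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 50))                 # benign malware-feature vectors
trigger = np.zeros(50); trigger[-1] = 0.8      # small backdoor pattern in one feature
X_poisoned = X.copy(); X_poisoned[:70] += trigger   # 14% poisoning rate

# Fit PCA on the (poisoned) training set and reconstruct from the top
# components; a low-variance trigger direction tends to lose energy.
pca = PCA(n_components=10).fit(X_poisoned)
X_defended = pca.inverse_transform(pca.transform(X_poisoned))

# Residual trigger strength among poisoned samples typically shrinks.
print("trigger feature mean before:", X_poisoned[:70, -1].mean())
print("trigger feature mean after: ", X_defended[:70, -1].mean())
```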
As artificial intelligence models continue to grow in capacity and sophistication, they are often trusted with very sensitive information. In the sub-field of adversarial machine learning, developments are geared solely toward finding reliable methods to systematically erode the ability of artificial intelligence systems to perform as intended. These techniques can cause serious breaches of security, interruptions to major systems, and irreversible damage to consumers. Our research evaluates the effects of various white-box adversarial machine learning attacks on popular computer vision deep learning models, leveraging a public X-ray dataset from the National Institutes of Health (NIH). We use several experiments to gauge the feasibility of developing deep learning models that are robust to adversarial machine learning attacks, taking into account different defense strategies, such as adversarial training, to observe how adversarial attacks evolve over time. Our research details how a variety of white-box attacks affect different components of InceptionNet, DenseNet, and ResNeXt, and suggests how the models can effectively defend against these attacks.
Authored by Ilyas Bankole-Hameed, Arav Parikh, Josh Harguess
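As one concrete white-box attack and defense pair of the kind studied above, the sketch below implements single-step FGSM and one adversarial-training update in PyTorch, on a tiny stand-in network rather than InceptionNet, DenseNet, or ResNeXt; the shapes, data, and epsilon are illustrative only.

```python
# Minimal sketch of the white-box FGSM attack and the adversarial-training
# defense discussed above, on a tiny stand-in network; batch shapes and
# epsilon are illustrative, and random tensors stand in for X-ray data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                      nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.03):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Adversarial training step: fit on adversarial examples instead of (or
# alongside) clean ones so the model learns to resist the attack.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 1, 28, 28)              # stand-in batch (e.g. X-ray crops)
y = torch.randint(0, 10, (32,))
x_adv = fgsm(x, y)
opt.zero_grad()
loss = loss_fn(model(x_adv), y)
loss.backward()
opt.step()
print("trained one step on adversarial batch, loss:", float(loss))
```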
With the coming 6G era, spiking neural networks (SNNs) can be powerful processing tools in various areas, such as biometric recognition, AI robotics, autonomous driving, and healthcare, due to their strong artificial intelligence (AI) processing capabilities. However, within Cyber Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks with adversarial samples can we defend against them. Most existing adversarial attacks cause a severe accuracy degradation to trained SNNs, but the critical issue is that they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters, or even by the human eye; their attack performance and speed can also be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed and generating uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to keep perturbations small while maintaining the attack success rate, and convergence is sped up by adjusting parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that the model's accuracy under white-box attack decreases by 9.2\% to 31.1\% more than under other attacks, and the average success rate is 74.87\% in the black-box setting. The experimental results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
Authored by Xuanwei Lin, Chen Dong, Ximeng Liu, Yuanyuan Zhang
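SPA itself is not reproduced here, but the Poisson rate coding it builds on is simple to sketch: each input intensity in [0, 1] becomes the per-timestep firing probability of a binary spike train. The image and time window below are synthetic stand-ins.

```python
# Minimal sketch of the Poisson rate coding step SPA builds on: pixel
# intensities in [0, 1] become spike trains whose per-timestep firing
# probability equals the intensity. The input and window length are toy
# stand-ins; the attack's perturbation search is not reproduced here.
import numpy as np

rng = np.random.default_rng(4)

def poisson_encode(image: np.ndarray, timesteps: int = 100) -> np.ndarray:
    """Return a (timesteps, *image.shape) binary spike train."""
    # A spike fires at each step with probability equal to the intensity.
    return (rng.random((timesteps, *image.shape)) < image).astype(np.uint8)

image = rng.random((8, 8))          # toy input in [0, 1]
spikes = poisson_encode(image)
# Empirical firing rates approximate the original intensities.
print(np.abs(spikes.mean(axis=0) - image).max())
```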