The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the potential applications of ChatGPT and other generative AIs, along with their associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and the automatic program generation enabled by generative AI greatly increase the risk of attacks by AI itself; and (c) generative AI can produce easy-to-understand answers to a wide range of questions in natural language, which may lead to the spread of fake news and phishing e-mails capable of fooling many people, and thus to an increase in AI-based attacks. It also became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, an attack-tree analysis of attacks by AI itself revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's Three Laws of Robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
The growing deployment of IoT devices has enabled unprecedented interconnection and information sharing, but it has also introduced novel security challenges. This research proposes a strategy for addressing security issues in Internet of Things (IoT) networks using intrusion detection systems (IDS) based on artificial intelligence (AI) and machine learning (ML). The goal is to improve the security of IoT ecosystems while ensuring that potential threats are identified and mitigated effectively. The frequency of cyber attacks rises with the growing number of people who rely on and use the internet. Data sent over a network is vulnerable to interception by both internal and external parties, whether the attack is launched by a human or an automated system, and the intensity and effectiveness of such attacks are continuously rising, making them ever harder to prevent or foil. Individuals or businesses with extensive domain expertise occasionally offer IDS solutions that are adaptive, unique, and trustworthy. This research focuses on IDS and cryptography. Building on the existing scholarly literature on IDS, it investigates several machine learning and deep learning techniques, and applies cryptographic techniques to further strengthen security standards. Prior research did not consider accuracy and performance issues, and further protection remains necessary, which suggests that deep learning can become even more effective and accurate in the future.
Authored by Mohammed Mahdi
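To ground the kind of ML-based IDS the abstract surveys, here is a minimal sketch of a flow-classification baseline. The `flows.csv` file, its columns, and the random-forest choice are illustrative assumptions, not the paper's actual setup; any labeled flow dataset (e.g., NSL-KDD or CICIDS2017) could stand in.

```python
# Minimal sketch of an ML-based intrusion detector of the kind surveyed
# above. The CSV path and feature columns are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical flow records: numeric features plus a benign/attack label.
df = pd.read_csv("flows.csv")
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Random forests are a common IDS baseline: robust to mixed feature
# scales and cheap to train compared with deep models.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```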
Systems with artificial intelligence components, so-called AI-based systems, have gained considerable attention recently. However, many organizations have issues with achieving production readiness with such systems. As a means to improve certain software quality attributes and to address frequently occurring problems, design patterns represent proven solution blueprints. While new patterns for AI-based systems are emerging, existing patterns have also been adapted to this new context. The goal of this study is to provide an overview of design patterns for AI-based systems, both new and adapted ones. We want to collect and categorize patterns, and make them accessible for researchers and practitioners. To this end, we first performed a multivocal literature review (MLR) to collect design patterns used with AI-based systems. We then integrated the created pattern collection into a web-based pattern repository to make the patterns browsable and easy to find. As a result, we selected 51 resources (35 white and 16 gray ones), from which we extracted 70 unique patterns used for AI-based systems. Among these are 34 new patterns and 36 traditional ones that have been adapted to this context. Popular pattern categories include architecture (25 patterns), deployment (16), implementation (9), and security & safety (9). While some patterns with four or more mentions already seem established, the majority of patterns have only been mentioned once or twice (51 patterns). Our results in this emerging field can be used by researchers as a foundation for follow-up studies and by practitioners to discover relevant patterns for informing the design of AI-based systems.
Authored by Lukas Heiland, Marius Hauser, Justus Bogner
The advent of Generative AI has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in generating realistic images, texts, and data patterns. However, these advancements come with heightened concerns over data privacy and copyright infringement, primarily due to the reliance on vast datasets for model training. Traditional approaches like differential privacy, machine unlearning, and data poisoning offer only fragmented solutions to these complex issues. Our paper delves into the multifaceted challenges of privacy and copyright protection within the data lifecycle. We advocate for integrated approaches that combine technical innovation with ethical foresight, holistically addressing these concerns by investigating and devising solutions informed by the lifecycle perspective. This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI. CCS Concepts: • Software and its engineering → Software architectures; • Information systems → World Wide Web; • Security and privacy → Privacy protections; • Social and professional topics → Copyrights; • Computing methodologies → Machine learning.
Authored by Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu
The HTTP protocol is the backbone of how traffic is communicated over the Internet and between web applications and users. Since the introduction of HTTP/1.0 in 1996 and HTTP/1.1 in 1997, HTTP has gone through several developmental changes. HTTP/1.1 suffers from several issues, notably allowing only a one-to-one connection, whereas HTTP/2 allows multiplexed connections. HTTP/2 also attempted to address the security issues faced by the prior version by allowing administrators to enable HTTPS and to deploy certificates that help ensure the encryption and protection of data between users and the web application. A major issue HTTP/2 still faces is that although connections are multiplexed, an error that requires data to be retransmitted causes head-of-line blocking. HTTP/3 is a new protocol that was standardized by the IETF in June 2022. One of its first major changes is that, unlike prior versions of HTTP that used TCP for data transmission, HTTP/3 transmits data over QUIC, which runs on UDP. Prior research has focused on the protocol itself or on investigating how certain types of attacks affect web architectures that use QUIC and HTTP/3. One area lacking research is how to secure web architecture in the cloud that uses this new protocol. To this end, we investigate how logging can be used to secure web architecture and this protocol in the cloud.
Authored by Jacob Koch, Emmanuel Gyamfi
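As a concrete illustration of the logging approach the abstract proposes, the sketch below scans access logs for failed HTTP/3 requests and flags noisy clients. The JSON-per-line log format and its field names are assumptions for illustration; a real deployment would configure its server (e.g., nginx with QUIC enabled) to emit comparable fields.

```python
# Minimal sketch of log-based monitoring for HTTP/3 traffic. The log
# format (one JSON object per line with "ip", "proto", and "status"
# fields) is a hypothetical assumption, not a standard server format.
import json
from collections import Counter

errors_per_ip = Counter()
with open("access.log") as f:
    for line in f:
        rec = json.loads(line)
        # Only inspect HTTP/3 (QUIC) requests that failed.
        if rec.get("proto") == "HTTP/3" and rec.get("status", 200) >= 400:
            errors_per_ip[rec["ip"]] += 1

# Flag clients with an unusually high error count: a crude signal of
# probing or malformed-request floods against the QUIC endpoint.
THRESHOLD = 50
for ip, n in errors_per_ip.most_common():
    if n >= THRESHOLD:
        print(f"suspicious HTTP/3 client {ip}: {n} error responses")
```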
With increased computational efficiency, deep neural networks have gained importance in the area of medical diagnosis. Many researchers have noticed the security concerns of the various deep neural network models used in clinical applications. An otherwise efficient model frequently misbehaves when confronted with intentionally modified data samples, called adversarial examples. These adversarial examples are generated with imperceptible perturbations, yet they can fool DNNs into giving false predictions. Adversarial attacks and defense methods have therefore become a hot research topic at the intersection of AI and security. Adversarial attacks can be expected in various applications of deep learning models, especially in healthcare, for disease prediction or classification; they must be handled with effective defensive mechanisms, or they may pose a great threat to human life. This literature review surveys various adversarial attacks and defensive mechanisms. The paper gives detailed coverage of adversarial approaches to deep neural networks in the field of clinical analysis, starting with the theoretical foundations, techniques, and applications of adversarial attack strategies. Contributions by researchers on defensive mechanisms against adversarial attacks are also discussed, along with open issues and challenges that may prompt further research efforts.
Authored by K Priya V, Peter Dinesh
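To make the threat concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the standard attack techniques such surveys cover. The toy linear model and random inputs are placeholders; in a clinical setting the model would be the diagnostic DNN and the inputs patient images.

```python
# FGSM sketch illustrating the imperceptible perturbations the survey
# discusses. Model and data below are toy placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Fast Gradient Sign Method: perturb x in the direction that
    increases the loss, bounded by epsilon per input dimension."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage sketch with a toy linear "model" on flattened 8x8 inputs.
model = torch.nn.Linear(64, 2)
x = torch.rand(4, 64)            # four fake grayscale images in [0, 1]
y = torch.tensor([0, 1, 0, 1])   # their true labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation magnitude <= epsilon
```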
In the ever-changing world of blockchain technology, the emergence of smart contracts has transformed the way agreements are executed, offering the potential for automation and trust in decentralized systems. Despite their built-in security features, smart contracts still face persistent vulnerabilities, resulting in significant financial losses. While existing studies often approach smart contract security from specific angles, such as development cycles or vulnerability detection tools, this paper adopts a comprehensive, multidimensional perspective. It examines the intricacies of smart contract security through vulnerability detection mechanisms and defense strategies. The exploration begins with a detailed analysis of the current security challenges and issues surrounding smart contracts, then turns to established frameworks for classifying vulnerabilities and common security flaws. The paper evaluates the effectiveness of existing methods for detecting and repairing contract vulnerabilities, and provides a comprehensive overview of the existing body of knowledge in smart contract security research. Through this systematic examination, the paper aims to serve as a valuable reference and to provide a comprehensive understanding of the multifaceted landscape of smart contract security.
Authored by Nayantara Kumar, Niranjan Honnungar V, Sharwari Prakash, J Lohith
The increase in blockchain applications has brought about many security issues, with smart contract vulnerabilities causing significant financial losses. Most current smart contract vulnerability detection methods rely predominantly on static analysis of the source code and predefined expert rules. However, these approaches exhibit certain limitations, namely restricted scalability and lower detection accuracy. In this paper we therefore use graph neural networks to perform smart contract vulnerability detection at the bytecode level, aiming to address the aforementioned issues. In particular, we propose a novel detection model. To acquire a comprehensive understanding of the dependencies among the individual functions within a smart contract, we first construct a Program Dependency Graph (PDG) of functions, extract function-level features using graph neural networks, then augment the function-level features with a self-attention mechanism that learns the dependencies between functions, and finally aggregate the function-level features to detect vulnerabilities. Our model can identify the subtle nuances in the interactions and interdependencies among different functions, consequently enhancing the precision of vulnerability detection. Experimental results demonstrate the method's performance against existing smart contract vulnerability detection methods across multiple evaluation metrics.
Authored by Yuyan Sun, Shiping Huang, Guozheng Li, Ruidong Chen, Yangyang Liu, Qiwen Jiang
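A minimal sketch of the pipeline shape described above: function-level features propagated over a program dependency graph, self-attention across functions, and an aggregated contract-level score. The dimensions, the single hand-rolled message-passing step, and the random inputs are illustrative assumptions, not the authors' actual model.

```python
# Sketch of PDG message passing + self-attention aggregation for
# contract-level vulnerability scoring. All sizes are illustrative.
import torch
import torch.nn as nn

class ContractDetector(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.msg = nn.Linear(dim, dim)                 # one GNN-style layer
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 1)                  # vulnerable / benign

    def forward(self, feats, adj):
        # feats: (num_funcs, dim) bytecode-derived function features
        # adj:   (num_funcs, num_funcs) PDG adjacency (row-normalized)
        h = torch.relu(self.msg(adj @ feats))          # aggregate neighbors
        h, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return torch.sigmoid(self.head(h.mean(dim=1)))  # contract-level score

# Toy usage: a contract with 5 functions and random dependencies.
n, dim = 5, 32
adj = torch.rand(n, n)
adj = adj / adj.sum(dim=1, keepdim=True)               # row-normalize
feats = torch.rand(n, dim)
print(ContractDetector(dim)(feats, adj))               # e.g. tensor([[0.47]])
```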
This paper presents a vulnerability detection scheme for small unmanned aerial vehicle (UAV) systems, aiming to enhance their security resilience. It begins with a comprehensive analysis of UAV system composition, operational principles, and the multifaceted security threats they face, ranging from software vulnerabilities in flight control systems to hardware weaknesses, communication link insecurities, and ground station management vulnerabilities. Subsequently, an automated vulnerability detection framework is designed, comprising three tiers: information gathering, interaction analysis, and report presentation, integrated with fuzz testing techniques for a thorough examination of UAV control systems. Experimental outcomes validate the efficacy of the proposed scheme by revealing weak password issues in the target UAV's services and its susceptibility to abnormal inputs. The study not only confirms the practical utility of the approach but also contributes valuable insights and methodologies to UAV security, paving the way for future advancements in AI-integrated smart gray-box fuzz testing technologies.
Authored by He Jun, Guo Zihan, Ni Lin, Zhang Shuai
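The sketch below illustrates, under stated assumptions, the two checks the experiments describe: probing a service for weak credentials and fuzzing it with abnormal inputs. The host, port, protocol framing, and credential list are hypothetical, and real ground-station services would need protocol-aware handshakes; run such probes only against systems you own.

```python
# Hypothetical weak-credential probe and byte-blob fuzzer of the kind
# the framework automates. Target and credentials are placeholders.
import socket
import random

TARGET = ("192.168.10.1", 23)          # hypothetical UAV telnet service
WEAK_CREDS = [("root", "root"), ("admin", "admin"), ("admin", "1234")]

def try_login(user, password):
    """Send a naive user/password pair and report whether the reply
    suggests success. Real services need protocol-aware parsing."""
    with socket.create_connection(TARGET, timeout=3) as s:
        s.recv(1024)
        s.sendall(user.encode() + b"\n")
        s.recv(1024)
        s.sendall(password.encode() + b"\n")
        reply = s.recv(1024)
        return b"incorrect" not in reply.lower()

def fuzz_once(max_len=2048):
    """Send a random byte blob and watch for crash signals (dropped
    connections, timeouts), as in gray-box fuzzing."""
    payload = bytes(random.randrange(256)
                    for _ in range(random.randrange(1, max_len)))
    try:
        with socket.create_connection(TARGET, timeout=3) as s:
            s.sendall(payload)
            s.recv(1024)
        return "survived"
    except (ConnectionError, socket.timeout):
        return "possible crash"

for user, pwd in WEAK_CREDS:
    try:
        if try_login(user, pwd):
            print(f"weak credential accepted: {user}/{pwd}")
    except OSError:
        break

for _ in range(10):
    try:
        print(fuzz_once())
    except OSError:
        break
```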
The Internet of Things (IoT) refers to the growing network of connected physical objects embedded with sensors, software, and connectivity. While IoT has potential benefits, it also introduces new cyber security risks. This paper provides an overview of IoT security issues, vulnerabilities, threats, and mitigation strategies. The key vulnerabilities arising from IoT's scale, ubiquity, and connectivity include inadequate authentication, lack of encryption, poor software security, and privacy concerns. Common attacks against IoT devices and networks include denial of service, ransomware, man-in-the-middle, and spoofing. An analysis of recent literature highlights emerging attack trends like swarm-based DDoS, IoT botnets, and automated large-scale exploits. Recommended techniques to secure IoT include building security into architecture and design, access control, cryptography, regular patching and upgrades, activity monitoring, incident response plans, and end-user education. Future technologies like blockchain, AI-enabled defense, and post-quantum cryptography can help strengthen IoT security. Additional focus areas include shared threat intelligence, security testing, certification programs, international standards, and collaboration between industry, government, and academia. A robust multilayered defense combining preventive and detective controls is required to combat rising IoT threats. This paper provides a comprehensive overview of the IoT security landscape and identifies areas for continued research and development.
Authored by Luis Cambosuela, Mandeep Kaur, Rani Astya
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on the performance differences caused by manufacturing imperfections. Maintaining data privacy with Federated Learning is also a challenge for IoT scenarios: Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, device heterogeneity, and data security. In this sense, Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup in which one component is in charge of federated model generation and another of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves an average F1-score of 0.9724 for identification in a centralized setup, while the average F1-score in the federated setup is 0.8320. The model also achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
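As a minimal sketch of the trustworthiness computation, the snippet below aggregates per-pillar metrics into a single score over the six pillars named above. The equal weights and example values are assumptions chosen for illustration; the framework's own metric definitions would replace them.

```python
# Aggregate per-pillar metrics into one trustworthiness score.
# Weights and example values are illustrative assumptions.
PILLARS = ["privacy", "robustness", "fairness",
           "explainability", "accountability", "federation"]

def trustworthiness(scores, weights=None):
    """Weighted mean of pillar scores, each normalized to [0, 1]."""
    weights = weights or {p: 1 / len(PILLARS) for p in PILLARS}
    return sum(scores[p] * weights[p] for p in PILLARS)

# Example values chosen so the aggregate lands at the 0.6 reported above.
example = {"privacy": 0.45, "robustness": 0.50, "fairness": 0.70,
           "explainability": 0.65, "accountability": 0.70, "federation": 0.60}
print(round(trustworthiness(example), 2))  # 0.6
```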
Distributed computation and AI processing at the edge have been identified as an efficient solution for delivering real-time IoT services and applications, compared to cloud-based paradigms. These solutions are expected to support delay-sensitive IoT applications, autonomic decision making, and smart service creation at the edge better than traditional IoT solutions. However, existing solutions have limitations concerning distributed and simultaneous resource management for AI computation and data processing at the edge; concurrent and real-time application execution; and platform-independent deployment. Hence, we first propose a novel three-layer architecture that facilitates the above service requirements. We then develop a novel platform and relevant modules that integrate AI processing with edge computing paradigms, considering issues related to the scalability, heterogeneity, security, and interoperability of IoT services. Each component is designed to handle the control signals, data flows, microservice orchestration, and resource composition needed to match IoT application requirements. Finally, the effectiveness of the proposed platform is tested and verified.
Authored by Sewwandi Nisansala, Gayal Chandrasiri, Sonali Prasadika, Upul Jayasinghe
Blockchain and artificial intelligence are two technologies that, when combined, can help each other realize their full potential. Blockchains can guarantee the availability of, and consistent access to, integrity-protected big datasets from numerous domains, allowing AI systems to learn more effectively and thoroughly. Similarly, artificial intelligence (AI) can be used to offer new consensus processes, and hence new methods of engaging with blockchains. When it comes to sensitive data, such as corporate, healthcare, and financial data, various security and privacy problems arise that must be properly evaluated. Interaction with blockchains is vulnerable to failed data credibility checks, transactional data leakage, data protection rule compliance issues, on-chain data privacy risks, and malicious smart contracts. To solve these issues, new security and privacy-preserving technologies are being developed. Blockchain data processing techniques, either based on AI or used to defend AI-based blockchain data processing, are emerging to simplify the integration of these two cutting-edge technologies.
Authored by Ramiz Salama, Fadi Al-Turjman
Nowadays, IoT networks and devices exist in our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising threats to cybersecurity, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data are often desired to remain private. Federated Learning (FL) provides a potential solution: it enables collaborative training of an attack detection model among a set of federated nodes while preserving privacy, as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model that can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and various configurations of the participating nodes. We show that the proposed model protects privacy without actually compromising performance: it incurs a limited performance impact of only ∼7% lower testing accuracy compared to the baseline, while simultaneously guaranteeing security and applicability.
Authored by Zacharias Anastasakis, Konstantinos Psychogyios, Terpsi Velivassaki, Stavroula Bourou, Artemis Voulkidis, Dimitrios Skias, Antonis Gonos, Theodore Zahariadis
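A minimal sketch of the privacy mechanism evaluated above: federated averaging where each node clips and noises its model update before sharing it, the standard Gaussian-mechanism recipe for differential privacy in FL. The model shapes, clip norm, and noise scale are illustrative assumptions.

```python
# FedAvg with clipped, Gaussian-noised updates (differential privacy).
# Shapes, clip norm, and sigma are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def private_update(local_weights, global_weights, clip=1.0, sigma=0.1):
    """Clip the update to bound each node's influence, then add
    Gaussian noise calibrated to the clip norm."""
    delta = local_weights - global_weights
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip / (norm + 1e-12))
    return delta + rng.normal(0.0, sigma * clip, size=delta.shape)

def fedavg_round(global_weights, node_weights):
    """One aggregation round: average the privatized updates."""
    updates = [private_update(w, global_weights) for w in node_weights]
    return global_weights + np.mean(updates, axis=0)

# Toy usage: 5 nodes, a 10-parameter model.
global_w = np.zeros(10)
nodes = [global_w + rng.normal(0, 0.5, 10) for _ in range(5)]
print(fedavg_round(global_w, nodes))
```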
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, or banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game-theoretic approach; it also logs and scores the actions of an AI model to detect adversarial inputs, and assures the datasets used for training. RFSP identifies the best hyperparameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI, using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP (which is a parallel process), RFSP is user-driven and its actions are sequential. Data are collected for experimentation; the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
The rapid development of new and emerging technologies has given rise to artificial intelligence (AI), which has altered the environment of business financial audits and caused problems in recent years. As the pioneers of enterprise financial monitoring, auditors must actively and proactively adapt to the new audit environment in the age of AI; however, their performance during this adaptation has so far been unfavorable. In this paper, methods such as data analysis and field research are used to conduct investigations and surveys. Applying AI to the financial auditing of a business reveals a number of issues, such as the under-appreciation of auditors, information security risks, and uncertainty about liability risk. On the basis of these problems, suggestions for improvement are provided, including the cultivation of interdisciplinary talent, an emphasis on the value of auditors, and the development of a mechanism for assigning responsibility.
Authored by Wenfeng Xiao