We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with the breadth of AI security risks sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also envision an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
Data in AI-empowered electric vehicles (EVs) is protected by using blockchain technology for immutable and verifiable transactions, in addition to high-strength encryption methods and digital signatures. This research paper compares and evaluates the security mechanisms for V2X communication in AI-enabled EVs. The purpose of the study is to ensure the reliability of security measures by evaluating performance metrics including false positive rate, false negative rate, detection accuracy, processing time, communication latency, computational resources, key generation time, and throughput. A comprehensive experimental approach is implemented using a diverse dataset gathered from actual V2X communication conditions. The evaluation reveals that the security mechanisms perform inconsistently. Message integrity verification obtains the highest detection accuracy, with a low false positive rate of 2% and a 0% false negative rate. Traffic encryption has a low processing time, requiring only 10 milliseconds for encryption and decryption, and adds only 5 bytes of overhead to V2X messages. The detection accuracy of intrusion detection systems is adequate at 95%, but they require more computational resources, consuming 80% of the CPU and 150 MB of memory. In particular attack scenarios, certificate-based authentication and secure key exchange show promise. Certificate-based authentication mitigates man-in-the-middle (MitM) attacks with a false positive rate of 3% and a false negative rate of 1%. Secure key exchange thwarts replication attacks with a false positive rate of 0% and a false negative rate of 2%. Nevertheless, their efficacy varies based on the attack scenario, highlighting the need for adaptive security mechanisms. The evaluated security mechanisms exhibit varying rates of throughput. Message integrity verification and traffic encryption achieve high throughput, enabling secure data transfer rates of 1 Mbps and 800 Kbps, respectively.
Overall, the results contribute to the comprehension of V2X communication security challenges in AI-enabled EVs. Message integrity verification and traffic encryption have emerged as effective mechanisms that provide robust security with high performance. The study provides insight for designing secure and dependable V2X communication systems. Future research should concentrate on enhancing V2X communication security mechanisms and exploring novel approaches to resolve emerging threats.
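The detection metrics reported in this abstract (detection accuracy, false positive rate, false negative rate) all derive from confusion-matrix counts. A minimal sketch of that derivation follows; the function name and example counts are illustrative, not taken from the paper:

```python
# Derive the IDS evaluation metrics named in the abstract from
# confusion-matrix counts (true/false positives and negatives).

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, FPR, and FNR from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        # Fraction of all messages classified correctly.
        "accuracy": (tp + tn) / total,
        # FPR: benign messages wrongly flagged as attacks.
        "false_positive_rate": fp / (fp + tn),
        # FNR: attack messages that slipped through undetected.
        "false_negative_rate": fn / (fn + tp),
    }

# Example counts chosen to reproduce a 2% FPR and 2% FNR.
m = detection_metrics(tp=98, fp=2, tn=98, fn=2)
```

Under this convention, a mechanism like message integrity verification with a 2% false positive rate and 0% false negative rate is one whose `fp / (fp + tn)` is 0.02 and whose `fn` is zero.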
Authored by Edward V, Dhivya. S, M.Joe Marshell, Arul Jeyaraj, Ebenezer. V, Jenefa. A
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The exploration of XAI's promise extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
The growing deployment of IoT devices has led to unprecedented interconnection and information sharing, but it has also presented novel security difficulties. This research study proposes a strategy for addressing security issues in Internet of Things (IoT) networks using intrusion detection systems (IDS) based on artificial intelligence (AI) and machine learning (ML). The purpose of this research is to improve the present level of security in IoT ecosystems while ensuring the effectiveness of identifying and mitigating possible threats. The frequency of cyber attacks rises in proportion to the increasing number of people who rely on and utilize the internet. Data sent via a network is vulnerable to interception by both internal and external parties, and an attack may be launched by either a human or an automated system. The intensity and effectiveness of these attacks are continuously rising, and avoiding or foiling such hackers and attackers has become increasingly difficult. Individuals and businesses with extensive domain expertise occasionally offer IDS solutions that are adaptive, unique, and trustworthy. This research concerns IDS and cryptography. Building on the scholarly literature on IDS, it investigates several machine learning and deep learning techniques and applies cryptographic techniques to further strengthen security standards. Prior research did not consider problems with accuracy and performance, and further protection is necessary, which suggests that deep learning can become even more effective and accurate in the future.
Authored by Mohammed Mahdi
A Survey on the Integration and Optimization of Large Language Models in Edge Computing Environments
In this survey, we delve into the integration and optimization of Large Language Models (LLMs) within edge computing environments, marking a significant shift in the artificial intelligence (AI) landscape. The paper investigates the development and application of LLMs in conjunction with edge computing, highlighting the advantages of localized data processing such as reduced latency, enhanced privacy, and improved efficiency. Key challenges discussed include the deployment of LLMs on resource-limited edge devices, focusing on computational demands, energy efficiency, and model scalability. This comprehensive analysis underscores the transformative potential and future implications of combining LLMs with edge computing, paving the way for advanced AI applications across various sectors.
Authored by Sarthak Bhardwaj, Pardeep Singh, Mohammad Pandit
The exponential growth of web documents has resulted in traditional search engines producing results with high recall but low precision when queried by users. In the contemporary internet landscape, resources are made available via hyperlinks which may or may not meet the expectations of the user. To mitigate this issue and enhance the level of pertinence, it is imperative to examine the challenges associated with querying the semantic web and progress towards the advancement of semantic search engines. These search engines generate outcomes by prioritizing the semantic significance of the context over the structural composition of the content. This paper outlines a proposed architecture for a semantic search engine that utilizes the concept of semantics to refine web search results. The resulting output would consist of ontologically based and contextually relevant outcomes pertaining to the user's query.
Authored by Ganesh D, Ajay Rastogi
The right to education is a basic need of every child and every society across the globe. Ever since the internet revolution and technological upgrades took place, the education system has been evolving from the traditional way to a smarter one. Covid-19 and the industrial revolution have made smart education a global business that is now penetrating even the rural footprints of remote locations. The use of smart devices, IoT-based communications, and AI techniques has increased the cyberattack surface of the smart education system. Moreover, a lack of cyber awareness and the absence of essential cyber sanity checks have exposed vulnerabilities in the smart education system. This paper discusses the technological evolution from education to smart education and its penetration across the globe, the details of the smart education ecosystem, and the roles of various stakeholders. It also covers the most trending cyber-attacks and the history of reported cyber-attacks in the smart education sector. Further, to make the smart educational cyber space more secure, proactive preventive measures and cyber sanity actions to mitigate such attacks are also discussed.
Authored by Sandeep Sarowa, Munish Kumar, Vijay Kumar, Bhisham Bhanot
The advent of Generative AI has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in generating realistic images, texts, and data patterns. However, these advancements come with heightened concerns over data privacy and copyright infringement, primarily due to the reliance on vast datasets for model training. Traditional approaches like differential privacy, machine unlearning, and data poisoning only offer fragmented solutions to these complex issues. Our paper delves into the multifaceted challenges of privacy and copyright protection within the data lifecycle. We advocate for integrated approaches that combine technical innovation with ethical foresight, holistically addressing these concerns by investigating and devising solutions that are informed by the lifecycle perspective. This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI.
Authored by Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu
With the rapid growth in information technology in what is called the Digital Era, it is evident that no one can survive without the internet or ICT advancements; day-to-day operations and activities depend on these technologies. The latest technology trends in the market and industry are computing power, smart devices, artificial intelligence, robotic process automation, the metaverse, IoT (Internet of Things), cloud computing, edge computing, blockchain, and more in the coming years. Common to all these advancements are cloud computing and data, which must be protected and safeguarded, bringing in the need for cyber and cloud security. Hence cloud security challenges have become an omnipresent concern for organizations and industries of any size, growing from small incidents into a threat landscape. Safeguarding data against cyber and cloud threats poses many challenges, so everyone must be aware of the latest technological advancements, evolving cyber threats, data as a valuable asset, the human factor, regulatory compliance, and cyber resilience. To handle these challenges, a security and risk prediction framework is proposed in this paper. This framework, PRCSAM (Predictive Risk and Complexity Score Assessment Model), will consider factors like the impact and likelihood of the main risks, threats, and attacks foreseen in cloud security, and will recommend a risk management framework with automatic risk assessment and scoring catering to information security and privacy risks. This framework will help management and organizations make informed decisions on cyber security strategy, as it is a data-driven, dynamic, and proactive approach to cyber security and its complexity calculation. This paper also discusses prediction techniques using Generative AI.
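The abstract says PRCSAM combines the impact and likelihood of foreseen risks into an automatic score. A minimal sketch of that kind of risk-matrix scoring follows; the 1-5 scales and level thresholds are illustrative assumptions, not the paper's actual model:

```python
# Minimal risk-matrix scoring sketch: impact times likelihood, then
# bucketed into coarse levels. Scales and thresholds are illustrative.

def risk_score(impact: float, likelihood: float) -> float:
    """Classic risk-matrix score: impact (1-5) times likelihood (1-5)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    return impact * likelihood

def risk_level(score: float) -> str:
    """Bucket a 1-25 score into coarse risk levels."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```

A data-driven framework like the one described would estimate `impact` and `likelihood` from telemetry or predictive models rather than fix them by hand; the scoring step itself stays this simple.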
Authored by Kavitha Ayappan, J.M Mathana, J Thangakumar
Procurement is a critical step in the setup of systems, as reverting decisions made at this point is typically time-consuming and costly. Artificial Intelligence (AI) based systems in particular face many challenges, starting with unclear and unknown side parameters at design time, changing ecosystems and regulations, as well as the problem of vendors overselling system capabilities. Furthermore, the AI Act puts forth a great deal of additional requirements for operators of critical AI systems, like risk management and transparency measures, thus making procurement even more complex. In addition, the number of providers of AI systems is drastically increasing. In this paper we provide guidelines for the procurement of AI-based systems that support the decision maker in identifying the key elements for the procurement of secure AI systems, depending on the respective technical and regulatory environment. Furthermore, we provide additional resources for utilizing these guidelines in practical procurement.
Authored by Peter Kieseberg, Christina Buttinger, Laura Kaltenbrunner, Marlies Temper, Simon Tjoa
Artificial Intelligence (AI) holds great potential for enhancing Risk Management (RM) through automated data integration and analysis. While the positive impact of AI in RM is acknowledged, concerns are rising about unintended consequences. This study explores factors like opacity, technology and security risks, revealing potential operational inefficiencies and inaccurate risk assessments. Through archival research and stakeholder interviews, including chief risk officers and credit managers, findings highlight the risks stemming from the absence of AI regulations, operational opacity, and information overload. These risks encompass cybersecurity threats, data manipulation uncertainties, monitoring challenges, and biases in algorithms. The study emphasizes the need for a responsible AI framework to address these emerging risks and enhance the effectiveness of RM processes. By advocating for such a framework, the authors provide practical insights for risk managers and identify avenues for future research in this evolving field.
Authored by Abdelmoneim Metwally, Salah Ali, Abdelnasser Mohamed
Artificial intelligence (AI) has emerged as one of the most formative technologies of the century and further gains importance to solve the big societal challenges (e.g. achievement of the sustainable development goals) or as a means to stay competitive in today’s global markets. The role as a key enabler in many areas of our daily life leads to a growing dependence, which has to be managed accordingly to mitigate negative economic, societal or privacy impacts. Therefore, the European Union is working on an AI Act, which defines concrete governance, risk and compliance (GRC) requirements. One of the key demands of this regulation is the operation of a risk management system for High-Risk AI systems. In this paper, we therefore present a detailed analysis of relevant literature in this domain and introduce our proposed approach for an AI Risk Management System (AIRMan).
Authored by Simon Tjoa, Peter Temper, Marlies Temper, Jakob Zanol, Markus Wagner, Andreas Holzinger
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for the massively IoT-connected systems of 5G networks, as well as any new ones that will potentially operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks. Because of the complex interaction of IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT), utilizing XGBoost with the SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods, applied to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to address specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
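The SHAP method mentioned here rests on Shapley values: a feature's attribution is its average marginal contribution to the model output over all feature orderings. The following toy sketch computes exact Shapley values for a tiny additive "detector score" model; the feature names and weights are invented for illustration and this is not the paper's XGBoost pipeline or the `shap` library itself:

```python
# Toy illustration of the Shapley-value idea underlying SHAP: average each
# feature's marginal effect on the model output over all orderings.
from itertools import permutations

def model(features: dict) -> float:
    # Stand-in "detector score": weighted sum of the features present.
    weights = {"pkt_rate": 2.0, "payload_len": 0.5, "flag_anomaly": 3.0}
    return sum(weights[f] * v for f, v in features.items())

def shapley_values(x: dict) -> dict:
    names = list(x)
    contrib = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        present = {}
        for name in order:
            before = model(present)       # score without this feature
            present[name] = x[name]
            contrib[name] += model(present) - before  # marginal effect
    # Average the marginal contributions over all orderings.
    return {n: c / len(orders) for n, c in contrib.items()}

phi = shapley_values({"pkt_rate": 1.0, "payload_len": 4.0, "flag_anomaly": 1.0})
```

For an additive model the attributions simply recover each weighted term, and they always sum to the full model output (the efficiency property); tree-based tools like SHAP's TreeExplainer compute the same quantity efficiently for models such as XGBoost.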
Authored by Navneet Kaur, Lav Gupta
In the ever-changing world of blockchain technology, the emergence of smart contracts has completely transformed the way agreements are executed, offering the potential for automation and trust in decentralized systems. Despite their built-in security features, smart contracts still face persistent vulnerabilities, resulting in significant financial losses. While existing studies often approach smart contract security from specific angles, such as development cycles or vulnerability detection tools, this paper adopts a comprehensive, multidimensional perspective. It delves into the intricacies of smart contract security by examining vulnerability detection mechanisms and defense strategies. The exploration begins by conducting a detailed analysis of the current security challenges and issues surrounding smart contracts. It then delves into established frameworks for classifying vulnerabilities and common security flaws. The paper examines existing methods for detecting and repairing contract vulnerabilities, evaluating their effectiveness. Additionally, it provides a comprehensive overview of the existing body of knowledge in smart contract security-related research. Through this systematic examination, the paper aims to serve as a valuable reference and provide a comprehensive understanding of the multifaceted landscape of smart contract security.
Authored by Nayantara Kumar, Niranjan Honnungar V, Sharwari Prakash, J Lohith
Software vulnerability detection (SVD) aims to identify potential security weaknesses in software. SVD systems have been rapidly evolving from those being based on testing, static analysis, and dynamic analysis to those based on machine learning (ML). Many ML-based approaches have been proposed, but challenges remain: training and testing datasets contain duplicates, and building customized end-to-end pipelines for SVD is time-consuming. We present Tenet, a modular framework for building end-to-end, customizable, reusable, and automated pipelines through a plugin-based architecture that supports SVD for several deep learning (DL) and basic ML models. We demonstrate the applicability of Tenet by building practical pipelines performing SVD on real-world vulnerabilities.
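A plugin-based pipeline of the kind Tenet is described as providing can be sketched in a few lines: stages register themselves under names, and a pipeline is just a named sequence of stages. The stage names (`dedup`, `tokenize`) and registration API below are illustrative assumptions, not Tenet's actual interface:

```python
# Minimal sketch of a plugin-based pipeline: stages register by name,
# a pipeline is a sequence of stage names run in order.
from typing import Any, Callable

PLUGINS: dict[str, Callable[[Any], Any]] = {}

def plugin(name: str):
    """Decorator registering a pipeline stage under a name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("dedup")
def dedup(samples):
    # Drop duplicate samples, a dataset problem the abstract highlights.
    seen, out = set(), []
    for s in samples:
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out

@plugin("tokenize")
def tokenize(samples):
    # Whitespace tokenization as a stand-in for real code preprocessing.
    return [s.split() for s in samples]

def run_pipeline(stages: list[str], data):
    for stage in stages:
        data = PLUGINS[stage](data)
    return data

result = run_pipeline(["dedup", "tokenize"], ["int x ;", "int x ;", "free ( p )"])
```

The appeal of this architecture is that swapping a model or preprocessing step only means registering a different plugin; the pipeline definition, a list of names, stays declarative and reusable.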
Authored by Eduard Pinconschi, Sofia Reis, Chi Zhang, Rui Abreu, Hakan Erdogmus, Corina Păsăreanu, Limin Jia
The increasing number of security vulnerabilities has become an important problem that needs to be solved urgently in the field of software security, which means that the current vulnerability mining technology still has great potential for development. However, most of the existing AI-based vulnerability detection methods focus on designing different AI models to improve the accuracy of vulnerability detection, ignoring the fundamental problems of data-driven AI-based algorithms: first, there is a lack of sufficient high-quality vulnerability data; second, there is no unified standardized construction method to meet the standardized evaluation of different vulnerability detection models. This all greatly limits security personnel’s in-depth research on vulnerabilities. In this survey, we review the current literature on building high-quality vulnerability datasets, aiming to investigate how state-of-the-art research has leveraged data mining and data processing techniques to generate vulnerability datasets to facilitate vulnerability discovery. We also identify the challenges of this new field and share our views on potential research directions.
Authored by Yuhao Lin, Ying Li, MianXue Gu, Hongyu Sun, Qiuling Yue, Jinglu Hu, Chunjie Cao, Yuqing Zhang
The Internet of Things (IoT) heralds an innovative generation in communication by enabling everyday devices to send, receive, and share data easily. IoT applications, which prioritize task automation, aim to give inanimate objects autonomy; they promise increased comfort, productivity, and automation. However, robust security, privacy, authentication, and recovery methods are required to realize this goal. In order to build end-to-end secure IoT environments, this article meticulously reviews the security issues and risks inherent to IoT applications. It emphasizes the critical necessity for architectural changes. The paper begins by conducting an examination of security concerns before exploring emerging and advanced technologies aimed at nurturing a sense of trust in Internet of Things (IoT) applications. The primary focus of the discussion revolves around how these technologies aid in overcoming security challenges and fostering an ecosystem for IoT.
Authored by Pranav A, Sathya S, HariHaran B
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. On the other hand, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remain useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
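The self-learning defense agents described above rest on reinforcement learning. A minimal sketch of the tabular Q-learning update at the core of such agents follows; the defender states, actions, and reward are illustrative placeholders, not ACPD's actual design:

```python
# Tabular Q-learning update: the basic building block of a self-learning
# defense agent. States/actions here are illustrative placeholders.
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9  # learning rate, discount factor
ACTIONS = ["monitor", "deploy_decoy", "isolate_host"]
Q = defaultdict(float)   # Q[(state, action)] -> estimated return

def q_update(state, action, reward, next_state):
    """One Bellman update: nudge Q toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# The agent is rewarded for containing a detected intrusion.
q_update("intrusion_detected", "isolate_host", reward=1.0, next_state="contained")
```

In the multi-agent, game-theoretic setting the paper envisions, each defense agent runs updates like this one against an adapting adversary instead of a fixed environment, which is precisely what makes convergence and trustworthiness hard.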
Authored by Talal Halabi, Mohammad Zulkernine
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) analyze network traffic at the packet level and match it against intrusion patterns to flag an attack. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks.
However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
In coalition military operations, secure and effective information sharing is vital to the success of the mission. Protected Core Networking (PCN) provides a way for allied nations to securely interconnect their networks to facilitate the sharing of data. PCN, and military networks in general, face unique security challenges. Heterogeneous links and devices are deployed in hostile environments, while motivated adversaries launch cyberattacks at ever-increasing pace, volume, and sophistication. Humans cannot defend these systems and networks, not only because the volume of cyber events is too great, but also because there are not enough cyber defenders situated at the tactical edge. Thus, autonomous, machine-speed cyber defense capabilities are needed to protect mission-critical information systems from cyberattacks and system failures. This paper discusses the motivation for adding autonomous cyber defense capabilities to PCN and outlines a path toward implementing these capabilities. We propose to leverage existing reference architectures, frameworks, and enabling technologies, in order to adapt autonomous cyber defense concepts to the PCN context. We highlight expected challenges of implementing autonomous cyber defense agents for PCN, including: defining the state space and action space that will be necessary for monitoring and for generating recovery plans; implementing a suite of models, sensors, actuators, and agents specific to the PCN context; and designing metrics and experiments to measure the efficacy of such a system.
Authored by Alexander Velazquez, Joseph Mathews, Roberto Lopes, Tracy Braun, Frederica Free-Nelson
Multi-agent systems offer the advantage of performing tasks in a distributed and decentralized manner, thereby increasing efficiency and effectiveness. However, building these systems also presents challenges in terms of communication, security, and data integrity. Blockchain technology has the potential to address these challenges and to revolutionize the way that data is stored and shared, by providing a tamper-evident log of events in event-driven distributed multi-agent systems. In this paper, we propose a blockchain-based approach for event-sourcing in such systems, which allows for the reliable and transparent recording of events and state changes. Our approach leverages the decentralized nature of blockchains to provide a tamper-resistant event log, enabling agents to verify the integrity of the data they rely on.
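The tamper-evident property described here comes from hash chaining: each appended event stores the hash of its predecessor, so altering any past event breaks every subsequent link. A minimal sketch follows; the event field names are illustrative, not the paper's actual schema, and a real blockchain adds consensus and replication on top of this:

```python
# Hash-chained event log: each event records its predecessor's hash,
# so any retroactive edit is detectable by re-walking the chain.
import hashlib
import json

def event_hash(event: dict) -> str:
    # Canonical JSON (sorted keys) so equal events hash identically.
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def append_event(log: list, payload: dict) -> None:
    prev = event_hash(log[-1]) if log else "0" * 64  # genesis marker
    log.append({"prev_hash": prev, "payload": payload})

def verify_chain(log: list) -> bool:
    """True iff every event's prev_hash matches its predecessor's hash."""
    return all(
        log[i]["prev_hash"] == event_hash(log[i - 1])
        for i in range(1, len(log))
    )

log = []
append_event(log, {"agent": "a1", "event": "task_claimed"})
append_event(log, {"agent": "a2", "event": "task_done"})
```

Any agent holding the latest hash can detect tampering anywhere earlier in the log, which is the integrity guarantee event-sourcing over a blockchain builds on.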
Authored by Ayman Cherif, Youssef Achir, Mohamed Youssfi, Mouhcine Elgarej, Omar Bouattane