Generative AI technology is being applied in a wide range of fields, but its advancement also raises cybersecurity concerns. Cyberattacks that exploit Generative AI have already occurred, and their number is increasing. This paper therefore analyzes the potential cybersecurity issues associated with Generative AI. We first survey the fields in which Generative AI is used, representatively text, image, video, audio, and code generation. Based on these five fields, we analyze the cybersecurity issues that may arise in each. Finally, we discuss the obligations necessary for the future development and use of Generative AI.
Authored by Subin Oh, Taeshik Shon
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large Language Models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can benefit businesses, for example in customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including text, images, and a wide range of other digital materials, based on its training data, and is expected to see broad business use. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. Therefore, while AI can improve our lives, it also has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. By identifying the potential for unintended output, developers and users can take appropriate precautions. Such experiments are essential to AI transparency and accountability and to efforts to minimize the potential negative social impacts of AI use. We also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
The 2023 CS curriculum by ACM, IEEE, and AAAI identifies security as an independent knowledge area that develops the “security mindset” so that students are ready for the “continual changes” in computing. Likewise, the curriculum emphasises coverage of the “uses” and “shortcomings/pitfalls” of practical AI tools like ChatGPT. This paper presents our endeavors to approach those goals with the design of an Information Security course. Our course design bears the following distinct features: Certificate-readiness, where we align the knowledge areas with major security/ethical hacking certificates; Coverage of ChatGPT, where the uses of ChatGPT for assisting security tasks and the security issues caused by ChatGPT usage are both addressed for the first time in teaching; “Learn defending from the attacker's perspective”, where labs of both offensive and defensive natures are developed to equally sharpen ethical hacking and hardening skills and to facilitate discussion of legal/ethical implications; and Current and Representative, where a just-enough set of representative and/or current security topics is selected and covered in respective modules in the most current form. In addition, we generalize our design principles and strategies, in the hope of shedding light on similar efforts at other institutions.
Authored by Yang Wang, Margaret McCoey, Qian Hu, Maryam Jalalitabar
This article presents two main objectives: (1) to synthesize the digital asset management process using AI TRiSM, and (2) to study the results of the digital asset management process using AI TRiSM. Managing digital assets with an AI-driven management system will increase the organization's overall efficiency. On the other hand, holding a vast volume of information within an organization may result in management issues and a lack of transparency. A multitude of organizations are preparing to put AI TRiSM ideas into practice. The analysis revealed a mean value of 4.91 with a standard deviation of 0.14. A digital asset management platform that can track usage inside an organization can be developed with the help of the AI TRiSM model. This will help establish trust, decrease risk, and guarantee workplace security.
Authored by Pinyaphat Tasatanattakool, Panita Wannapiroon, Prachyanun Nilsook
The network of smart physical objects has a significant impact on the growth of urban civilization. Evidence has been cited from digital sources such as scientific journals, conferences, and publications. Along with other security services, these kinds of structured, sophisticated data have been used to address a number of security-related challenges. Here, many forms of cutting-edge machine learning and AI techniques are used to research how merging two or more algorithms with AI and ML might make the Internet of Things more secure. The main objective of this paper is to explore how ML and AI can be applied to improve IoT security.
Authored by Brijesh Singh, Santosh Sharma, Ravindra Verma
Artificial Intelligence (AI) and Machine Learning (ML) models, while powerful, are not immune to security threats. These models, often seen as mere data files, are executable code, making them susceptible to attacks. Serialization formats like .pickle, .HDF5, .joblib, and .ONNX, commonly used for model storage, can inadvertently allow arbitrary code execution, a vulnerability actively exploited by malicious actors. Furthermore, the execution environments for these models, such as PyTorch and TensorFlow, lack robust sandboxing, enabling the creation of computational graphs that can perform I/O operations, interact with files, communicate over networks, and even spawn additional processes, underscoring the importance of ensuring the safety of the code executed within these frameworks. The emergence of Software Development Kits (SDKs) like ClearML, designed for tracking experiments and managing model versions, adds another layer of complexity and risk. Both the open-source and enterprise versions of these SDKs have vulnerabilities that are just beginning to surface, posing additional challenges to the security of AI/ML systems. In this paper, we delve into these security challenges, exploring attacks, vulnerabilities, and potential mitigation strategies to safeguard AI and ML deployments.
Authored by Natalie Grigorieva
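The deserialization risk described in the abstract above is easy to illustrate. Below is a minimal, hypothetical Python sketch (not from the paper) showing why loading an untrusted .pickle "model" is equivalent to running the attacker's code: pickle invokes __reduce__ during loading, so any callable embedded there executes immediately.

    import os
    import pickle

    class MaliciousModel:
        """A crafted object posing as a serialized ML model."""
        def __reduce__(self):
            # pickle calls this during loading; a real attacker would spawn a
            # shell or exfiltrate data instead of a harmless echo.
            return (os.system, ("echo arbitrary code ran during model load",))

    payload = pickle.dumps(MaliciousModel())

    # The victim believes this merely loads model weights; it runs the command.
    pickle.loads(payload)

Safer alternatives include weights-only loading and formats such as safetensors that store tensors without executable state.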
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of its accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide a model's internal biases from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework for how an attacker would adopt scaffoldings to deceive security auditors of Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset. We then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful and that the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
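To make the threat model concrete, here is a minimal, hypothetical Python sketch of the scaffolding idea in the spirit of the attack the abstract formalizes: the deployed classifier answers real traffic with the biased model but routes queries that look like XAI perturbations to an innocuous surrogate, so LIME/SHAP audits never see the bias. The class and stub names are illustrative, not the authors' implementation.

    import numpy as np

    class Stub:
        """Trivial stand-in classifier so the sketch runs end to end."""
        def __init__(self, label):
            self.label = label
        def predict(self, X):
            return np.full(len(X), self.label)

    class ScaffoldedModel:
        def __init__(self, biased, innocuous, probe_detector):
            self.biased = biased            # model actually used on real traffic
            self.innocuous = innocuous      # clean-looking model shown to XAI probes
            self.detector = probe_detector  # flags off-manifold perturbation queries

        def predict(self, X):
            is_probe = self.detector.predict(X).astype(bool)
            out = self.biased.predict(X)
            out[is_probe] = self.innocuous.predict(X[is_probe])
            return out

    X = np.random.rand(6, 4)
    model = ScaffoldedModel(Stub(1), Stub(0), Stub(0))  # detector says "no probes"
    print(model.predict(X))                             # biased answers: all 1s

A detection method of the kind the paper proposes would look for exactly this inconsistency between on-manifold and off-manifold behaviour.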
With UAVs on the rise, accurate detection and identification are crucial. Traditional unmanned aerial vehicle (UAV) identification systems involve opaque decision-making, restricting their usability. This research introduces an RF-based Deep Learning (DL) framework for drone recognition and identification. We use cutting-edge eXplainable Artificial Intelligence (XAI) tools: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). Our deep learning model uses these methods for accurate, transparent, and interpretable airspace security. With 84.59% accuracy, our deep-learning algorithms detect drone signals from RF noise. Most crucially, SHAP and LIME make UAV detection interpretable: detailed explanations reveal the model's identification decision-making process. This transparency and interpretability set our system apart. The accurate, transparent, and user-trustworthy model improves airspace security.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
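As a rough illustration of how such XAI tooling is applied (a sketch assuming tabular RF-derived features and a scikit-learn classifier, not the paper's deep-learning pipeline), SHAP can attribute each "drone vs. noise" prediction to individual input features; LIME is used analogously.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                  # synthetic stand-in RF features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = drone, 0 = RF noise

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # SHAP decomposes each prediction into per-feature contributions,
    # exposing which RF characteristics drove the "drone" decision.
    explainer = shap.TreeExplainer(clf)
    shap_values = explainer.shap_values(X[:5])
    print(np.shape(shap_values))                   # contributions per sample/feature/class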
6G networks are beginning to take shape, and it is envisaged that they will be made up of networks from different vendors and with different technologies, in what is known as the network-of-networks. The topology will be constantly changing, allowing it to adapt to the capacities available at any given moment. 6G networks will be managed automatically and natively by AI, while allowing technical teams to manage learning directly through Explainable AI. In this context, security becomes an unprecedented challenge. In this paper, we present a flexible architecture that integrates the necessary modules to respond to the needs of 6G, focused on managing security, networks, and services through choreography intents that coordinate the capabilities of different stakeholders to offer advanced services.
Authored by Rodrigo Asensio-Garriga, Alejandro Zarca, Antonio Skarmeta
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessments, cyber security awareness, and compliance. AI also provides mechanisms for writing cybersecurity training, plans, policies, and procedures. However, cyber security risk assessment and cyber insurance are very complicated to manage and measure, and cybersecurity professionals need a thorough understanding of cybersecurity risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle by implementing a demonstration of how AI can aid cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance UAV security, thereby departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying UAVs is 84.59%, achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a novel approach. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the model's transparency and interpretability. Adherence to ZTA standards guarantees that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective, as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines for implementing solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards are continuously updated, the need to generate new policies or update existing ones efficiently and easily has increased [1][2]. The use of AI in developing cybersecurity policies and procedures can be key to assuring the correctness and effectiveness of these policies, a need shared by both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by demonstrating in depth how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions for dealing with the breadth of AI security risks sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
Artificial intelligence (AI) has emerged as one of the most formative technologies of the century and is gaining further importance as a means to solve major societal challenges (e.g., achieving the sustainable development goals) and to stay competitive in today's global markets. Its role as a key enabler in many areas of our daily life leads to a growing dependence, which has to be managed accordingly to mitigate negative economic, societal, or privacy impacts. Therefore, the European Union is working on an AI Act, which defines concrete governance, risk, and compliance (GRC) requirements. One of the key demands of this regulation is the operation of a risk management system for high-risk AI systems. In this paper, we therefore present a detailed analysis of relevant literature in this domain and introduce our proposed approach for an AI Risk Management System (AIRMan).
Authored by Simon Tjoa, Peter Temper, Marlies Temper, Jakob Zanol, Markus Wagner, Andreas Holzinger
Despite the tremendous impact and potential of Artificial Intelligence (AI) for civilian and military applications, it has reached an impasse: learning and reasoning work well for certain applications, but AI generally suffers from a number of challenges such as hidden biases and causality. In contrast, “symbolic” AI (not as efficient as “sub-symbolic” AI) offers transparency, explainability, verifiability, and trustworthiness. To address these limitations, neuro-symbolic AI has emerged as a new AI field that combines the efficiency of “sub-symbolic” AI with the assurance and transparency of “symbolic” AI. Furthermore, AI (which suffers from the aforementioned challenges) will remain inadequate for operating independently in contested, unpredictable, and complex multi-domain battlefield (MDB) environments for the foreseeable future, and AI-enabled autonomous systems will require a human in the loop to complete missions in such contested environments. Moreover, in order to successfully integrate AI-enabled autonomous systems into military operations, military operators need assurance that these systems will perform as expected and in a safe manner. Most importantly, Human-Autonomy Teaming (HAT) for shared learning and understanding and joint reasoning is crucial to assist operations across military domains (space, air, land, maritime, and cyber) at combat speed with high assurance and trust. In this paper, we present a rough guide to key research challenges and perspectives of neuro-symbolic AI for assured and trustworthy HAT.
Authored by Danda Rawat
The unsupervised cross-domain NER task aims to solve the problem that arises when data in a new domain are fully unlabeled. It leverages labeled data from a source domain to predict entities in an unlabeled target domain. Since training models on a large domain corpus is time-consuming, in this paper we consider an alternative by introducing syntactic dependency structure. Such information is more accessible and can be shared between sentences from different domains. We propose a novel framework with a dependency-aware GNN (DGNN) to learn these common structures from the source domain and adapt them to the target domain, alleviating the data scarcity issue and bridging the domain gap. Experimental results show that our method outperforms state-of-the-art methods.
Authored by Luchen Liu, Xixun Lin, Peng Zhang, Lei Zhang, Bin Wang
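For intuition, the sketch below shows one message-passing step over dependency arcs in PyTorch: each token aggregates the projected representations of its syntactic heads. This is an assumed, simplified illustration of a dependency-aware GNN layer, not the authors' DGNN.

    import torch
    import torch.nn as nn

    class DependencyGNNLayer(nn.Module):
        """One round of message passing along dependency arcs."""
        def __init__(self, dim):
            super().__init__()
            self.proj = nn.Linear(dim, dim)

        def forward(self, h, arcs):
            # h: (num_tokens, dim) token embeddings; arcs: (head, dependent) pairs.
            msg = torch.zeros_like(h)
            deg = torch.zeros(h.size(0), 1)
            for head, dep in arcs:
                msg[dep] += self.proj(h[head])   # pass head info to its dependent
                deg[dep] += 1
            return torch.relu(h + msg / deg.clamp(min=1))

    tokens = torch.randn(5, 16)                  # 5 tokens, 16-dim embeddings
    arcs = [(1, 0), (1, 2), (1, 4), (4, 3)]      # toy dependency tree
    print(DependencyGNNLayer(16)(tokens, arcs).shape)  # torch.Size([5, 16])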
In the context of increasing digitalization and the growing reliance on intelligent systems, the importance of network information security has become paramount. This study delves into the exploration of network information security technologies within the framework of a digital intelligent security strategy. The aim is to comprehensively analyze the diverse methods and techniques employed to ensure the confidentiality, integrity, and availability of digital assets in the contemporary landscape of cybersecurity challenges. Key methodologies include the review and analysis of encryption algorithms, intrusion detection systems, authentication protocols, and anomaly detection mechanisms. The investigation also encompasses the examination of emerging technologies like blockchain and AI-driven security solutions. Through this research, we seek to provide a comprehensive understanding of the evolving landscape of network information security, equipping professionals and decision-makers with valuable insights to fortify digital infrastructure against ever-evolving threats.
Authored by Yingshi Feng
The objective of this study is to examine the key factors that contribute to the enhancement of financial network security through the utilization of blockchain technology and artificial intelligence (AI) tools. In this study, we utilize Google Trend Analytics and VOSviewer to examine the interrelationships among significant concepts in the domain of financial security driven by blockchain technology. The findings of the study provide significant insights and recommendations for various stakeholders, such as government entities, policymakers, regulators, and professionals in the field of information technology. Our research aims to enhance the comprehension of the intricate relationship between blockchain technology and AI tools in bolstering financial network security by revealing the network connections among crucial aspects. The aforementioned findings can be utilized as a valuable resource for facilitating future joint endeavors with the objective of enhancing financial inclusion and fostering community well-being. Through the utilization of blockchain technology and artificial intelligence (AI), it is possible to collaboratively strive towards the establishment of a financial ecosystem that is both more secure and inclusive. This endeavor aims to guarantee the well-being and stability of both individuals and enterprises.
Authored by Kuldeep Singh, Shivaprasad G.
Recent developments in generative artificial intelligence are raising great concerns about privacy, security, and misinformation. Our work focuses on the detection of fake images generated by text-to-image models. We propose a dual-domain CNN-based classifier that utilizes image features in both the spatial and frequency domains. Through an extensive set of experiments, we demonstrate that the frequency-domain features facilitate high accuracy, zero-shot transfer between different generative models, and faster convergence. To the best of our knowledge, this is the first effective detector against generative models that are fine-tuned for a specific subject.
Authored by Eric Ji, Boxiang Dong, Bharath Samanthula, Na Zhou
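The dual-domain idea can be sketched as follows: alongside the raw pixels, the classifier receives the log-magnitude of the image's 2D FFT, where upsampling artifacts of generative models tend to be visible. The architecture below is a hypothetical minimal example, not the paper's network.

    import torch
    import torch.nn as nn

    def frequency_channel(img):
        """Log-magnitude 2D spectrum of a (B, 1, H, W) grayscale batch, centered."""
        spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
        return torch.log1p(spec.abs())

    class DualDomainClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.cnn = nn.Sequential(            # 2-channel input:
                nn.Conv2d(2, 16, 3, padding=1),  # channel 0 = spatial pixels
                nn.ReLU(),                       # channel 1 = frequency spectrum
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(16, 2),                # real vs. generated
            )

        def forward(self, img):
            x = torch.cat([img, frequency_channel(img)], dim=1)
            return self.cnn(x)

    print(DualDomainClassifier()(torch.rand(4, 1, 64, 64)).shape)  # torch.Size([4, 2])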
With the continuous enrichment of intelligent applications, it is anticipated that 6G will evolve into a ubiquitous intelligent network. To achieve the vision of full-scenario intelligent services, how to coordinate AI capabilities across different domains is an urgent issue. After analyzing potential use cases and technological requirements, this paper proposes an end-to-end (E2E) cross-domain artificial intelligence (AI) collaboration framework for next-generation mobile communication systems. Two potential technical solutions, namely cross-domain AI management and orchestration and RAN-CN convergence, are presented to facilitate intelligent collaboration in both E2E scenarios and the edge network. Furthermore, we have validated the performance of a cross-domain federated learning algorithm in a simulated environment for the prediction of received signal power. While ensuring the security and privacy of terminal data, we analyze the communication overhead incurred by cross-domain training.
Authored by Zexu Li, Zhen Li, Xiong Xiong, Dongjie Liu
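A minimal sketch of the federated ingredient described above, assuming a FedAvg-style scheme and a toy received-power regression task (both assumptions, not the paper's exact algorithm): each domain trains locally, and only model weights cross domain boundaries, so raw terminal data stays private.

    import copy
    import torch
    import torch.nn as nn

    def local_step(global_model, X, y, lr=0.01):
        """One local training step in a single domain; returns updated weights."""
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss = nn.functional.mse_loss(model(X), y)   # received-power regression
        opt.zero_grad()
        loss.backward()
        opt.step()
        return model.state_dict()

    def fed_avg(states):
        """Average the domains' weights; only these tensors are communicated."""
        avg = copy.deepcopy(states[0])
        for key in avg:
            avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
        return avg

    global_model = nn.Linear(4, 1)                   # toy predictor
    domains = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(3)]
    for _ in range(5):                               # communication rounds
        states = [local_step(global_model, X, y) for X, y in domains]
        global_model.load_state_dict(fed_avg(states))

The communication overhead the authors analyze roughly corresponds to the state_dict tensors exchanged in each round.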
Integrated photonics based on the silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates the adoption of interfaces between the photonic and electronic subsystems, which are required for buffering data and for optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed by hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks profiting from such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on system performance (i.e., power and phase distributions, which impact accuracy) and possible solutions to counter such attacks.
Authored by Felipe De Magalhaes, Mahdi Nikdast, Gabriela Nicolescu
Currently, research on 5G communication is focusing increasingly on communication techniques, and previous studies have primarily focused on preventing communications disruption. To date, there has not been sufficient research on network anomaly detection as a security countermeasure. Since 5G network data will be more complex and dynamic, intelligent network anomaly detection is a necessary solution for protecting the network infrastructure. However, because AI-based network anomaly detection is data-dependent, it is difficult to collect actual labeled data in the industrial field. Also, performance degradation may occur when models are applied in the real field because of domain shift. Therefore, in this paper, we research an intelligent network anomaly detection technique based on domain adaptation (DA) in the 5G edge network in order to solve the problems caused by data-driven AI. It allows us to train models in data-rich domains and apply detection techniques in domains with insufficient data. Our method will contribute to AI-based network anomaly detection, improving security for the 5G edge network.
Authored by Hyun-Jin Kim, Jonghoon Lee, Cheolhee Park, Jong-Geun Park
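As one concrete example of a domain-adaptation ingredient (a hypothetical sketch using CORAL-style feature alignment, not necessarily the technique in the paper), the covariances of source- and target-domain features can be pulled together so a detector trained on a data-rich domain transfers to the data-poor 5G edge domain:

    import torch

    def coral_loss(source_feats, target_feats):
        """Squared distance between source and target feature covariances."""
        def cov(f):
            f = f - f.mean(dim=0, keepdim=True)
            return (f.T @ f) / (f.size(0) - 1)
        d = source_feats.size(1)
        return ((cov(source_feats) - cov(target_feats)) ** 2).sum() / (4 * d * d)

    src = torch.randn(128, 32)    # labeled source-domain features
    tgt = torch.randn(64, 32)     # unlabeled 5G-edge features
    print(coral_loss(src, tgt))   # added to the task loss during training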
The rising use of Artificial Intelligence (AI) for human detection on Edge camera systems has led to accurate but complex models that are challenging to interpret and debug. Our research presents a diagnostic method using XAI for model debugging, with expert-driven problem identification and solution creation. Validated on the Bytetrack model in a real-world office Edge network, we identified the training dataset as the main source of bias and suggested model augmentation as a solution. Our approach helps identify model biases, which is essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
The vision and key elements of the 6th generation (6G) ecosystem are being discussed very actively in academic and industrial circles. In this work, we provide a timely update to the 6G security vision presented in our previous publications to contribute to these efforts. We elaborate further on some key security challenges for the envisioned 6G wireless systems, explore recently emerging aspects, and identify potential solutions from an additive perspective. This speculative treatment aims explicitly to complement our previous work through the lens of developments of the last two years in 6G research and development.
Authored by Gürkan Gur, Pawani Porambage, Diana Osorio, Attila Yavuz, Madhusanka Liyanage