With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people s lives. Large Language Models(LLMs) such as ChatGPT are increasing the accuracy of their response and achieving a high level of communication with humans. These AIs can be used in business to benefit, for example, customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of digital materials based on the training data, and is expected to be used in business. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. Therefore, While AI can improve our lives, it has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. Developers and users can take appropriate precautions by identifying the potential for unintended output. Such experiments are essential to efforts to minimize the potential negative social impacts of AI transparency, accountability, and use. We will also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2].The use of (AI) in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies as this is one of the needs for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
In the context of increasing digitalization and the growing reliance on intelligent systems, the importance of network information security has become paramount. This study delves into the exploration of network information security technologies within the framework of a digital intelligent security strategy. The aim is to comprehensively analyze the diverse methods and techniques employed to ensure the confidentiality, integrity, and availability of digital assets in the contemporary landscape of cybersecurity challenges. Key methodologies include the review and analysis of encryption algorithms, intrusion detection systems, authentication protocols, and anomaly detection mechanisms. The investigation also encompasses the examination of emerging technologies like blockchain and AI-driven security solutions. Through this research, we seek to provide a comprehensive understanding of the evolving landscape of network information security, equipping professionals and decision-makers with valuable insights to fortify digital infrastructure against ever-evolving threats.
Authored by Yingshi Feng
With the continuous enrichment of intelligent applications, it is anticipated that 6G will evolve into a ubiquitous intelligent network. In order to achieve the vision of full-scenarios intelligent services, how to collaborate AI capabilities in different domains is an urgent issue. After analyzing potential use cases and technological requirements, this paper proposes an endto-end (E2E) cross-domain artificial intelligence (AI) collaboration framework for next-generation mobile communication systems. Two potential technical solutions, namely cross-domain AI management and orchestration and RAN-CN convergence, are presented to facilitate intelligent collaboration in both E2E scenarios and the edge network. Furthermore, we have validated the performance of a cross-domain federated learning algorithm in a simulated environment for the prediction of received signal power. While ensuring the security and privacy of terminal data, we have analyzed the communication overhead caused by cross-domain training.
Authored by Zexu Li, Zhen Li, Xiong Xiong, Dongjie Liu
Integrated photonics based on silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates the adoption of interfaces between the photonic and the electronic subsystems, which are required for buffering data and optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed using hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks profiting from such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on the system performance (i.e., power and phase distributions, which impact accuracy) and possible solutions to counter such attacks.
Authored by Felipe De Magalhaes, Mahdi Nikdast, Gabriela Nicolescu
The vision and key elements of the 6th generation (6G) ecosystem are being discussed very actively in academic and industrial circles. In this work, we provide a timely update to the 6G security vision presented in our previous publications to contribute to these efforts. We elaborate further on some key security challenges for the envisioned 6G wireless systems, explore recently emerging aspects, and identify potential solutions from an additive perspective. This speculative treatment aims explicitly to complement our previous work through the lens of developments of the last two years in 6G research and development.
Authored by Gürkan Gur, Pawani Porambage, Diana Osorio, Attila Yavuz, Madhusanka Liyanage
Edge computing enables the computation and analytics capabilities to be brought closer to data sources. The available literature on AI solutions for edge computing primarily addresses just two edge layers. The upper layer can directly communicate with the cloud and comprises one or more IoT edge devices that gather sensing data from IoT devices present in the lower layer. However, industries mainly adopt a multi-layered architecture, referred to as the ISA-95 standard, to isolate and safeguard their assets. In this architecture, only the upper layer is connected to the cloud, while the lower layers of the hierarchy get to interact only with the neighbouring layers. Due to these added intermediate layers (and IoT edge devices) between the top and lower layers, existing AI solutions for typical two-layer edge architectures may not be directly applicable in this scenario. Moreover, not all industries prefer to send and store their private data in the cloud. Implementing AI solutions tailored to a hierarchical edge architecture would increase response time and maintain the same degree of security by working within the ISA-95-compliant network architecture. This paper explores a possible strategy for deploying a centralized federated learning-based AI solution in a hierarchical edge architecture and demonstrates its efficacy through a real deployment scenario.
Authored by Narendra Bisht, Subhasri Duttagupta
As a result of globalization, the COVID-19 pandemic and the migration of data to the cloud, the traditional security measures where an organization relies on a security perimeter and firewalls do not work. There is a shift to a concept whereby resources are not being trusted, and a zero-trust architecture (ZTA) based on a zero-trust principle is needed. Adapting zero trust principles to networks ensures that a single insecure Application Protocol Interface (API) does not become the weakest link comprising of Critical Data, Assets, Application and Services (DAAS). The purpose of this paper is to review the use of zero trust in the security of a network architecture instead of a traditional perimeter. Different software solutions for implementing secure access to applications and services for remote users using zero trust network access (ZTNA) is also summarized. A summary of the author s research on the qualitative study of “Insecure Application Programming Interface in Zero Trust Networks” is also discussed. The study showed that there is an increased usage of zero trust in securing networks and protecting organizations from malicious cyber-attacks. The research also indicates that APIs are insecure in zero trust environments and most organization are not aware of their presence.
Authored by Farhan Qazi
At present, technological solutions based on artificial intelligence (AI) are being accelerated in various sectors of the economy and social relations in the world. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identification of vulnerabilities, threats and risks of AI technologies requires consideration of each technology separately or in some aggregate in cases of their joint use in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors and public services (including Marketplaces) have received the most attention. Due to the high importance and impact on most AI solutions, this paper will focus on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2].The use of (AI) in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies as this is one of the needs for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
The growing deployment of IoT devices has led to unprecedented interconnection and information sharing. However, it has also presented novel difficulties with security. Using intrusion detection systems (IDS) that are based on artificial intelligence (AI) and machine learning (ML), this research study proposes a unique strategy for addressing security issues in Internet of Things (IoT) networks. This technique seeks to address the challenges that are associated with these IoT networks. The use of intrusion detection systems (IDS) makes this technique feasible. The purpose of this research is to simultaneously improve the present level of security in ecosystems that are connected to the Internet of Things (IoT) while simultaneously ensuring the effectiveness of identifying and mitigating possible threats. The frequency of cyber assaults is directly proportional to the increasing number of people who rely on and utilize the internet. Data sent via a network is vulnerable to interception by both internal and external parties. Either a human or an automated system may launch this attack. The intensity and effectiveness of these assaults are continuously rising. The difficulty of avoiding or foiling these types of hackers and attackers has increased. There will occasionally be individuals or businesses offering IDS solutions who have extensive domain expertise. These solutions will be adaptive, unique, and trustworthy. IDS and cryptography are the subjects of this research. There are a number of scholarly articles on IDS. An investigation of some machine learning and deep learning techniques was carried out in this research. To further strengthen security standards, some cryptographic techniques are used. Problems with accuracy and performance were not considered in prior research. Furthermore, further protection is necessary. This means that deep learning can be even more effective and accurate in the future.
Authored by Mohammed Mahdi
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people s lives. Large Language Models(LLMs) such as ChatGPT are increasing the accuracy of their response and achieving a high level of communication with humans. These AIs can be used in business to benefit, for example, customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of digital materials based on the training data, and is expected to be used in business. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. Therefore, While AI can improve our lives, it has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. Developers and users can take appropriate precautions by identifying the potential for unintended output. Such experiments are essential to efforts to minimize the potential negative social impacts of AI transparency, accountability, and use. We will also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
Active cyber defense mechanisms are necessary to perform automated, and even autonomous operations using intelligent agents that defend against modern/sophisticated AI-inspired cyber threats (e.g., ransomware, cryptojacking, deep-fakes). These intelligent agents need to rely on deep learning using mature knowledge and should have the ability to apply this knowledge in a situational and timely manner for a given AI-inspired cyber threat. In this paper, we describe a ‘domain-agnostic knowledge graph-as-a-service’ infrastructure that can support the ability to create/store domain-specific knowledge graphs for intelligent agent Apps to deploy active cyber defense solutions defending real-world applications impacted by AI-inspired cyber threats. Specifically, we present a reference architecture, describe graph infrastructure tools, and intuitive user interfaces required to construct and maintain large-scale knowledge graphs for the use in knowledge curation, inference, and interaction, across multiple domains (e.g., healthcare, power grids, manufacturing). Moreover, we present a case study to demonstrate how to configure custom sets of knowledge curation pipelines using custom data importers and semantic extract, transform, and load scripts for active cyber defense in a power grid system. Additionally, we show fast querying methods to reach decisions regarding cyberattack detection to deploy pertinent defense to outsmart adversaries.
Authored by Prasad Calyam, Mayank Kejriwal, Praveen Rao, Jianlin Cheng, Weichao Wang, Linquan Bai, Sriram Nadendla, Sanjay Madria, Sajal Das, Rohit Chadha, Khaza Hoque, Kannappan Palaniappan, Kiran Neupane, Roshan Neupane, Sankeerth Gandhari, Mukesh Singhal, Lotfi Othmane, Meng Yu, Vijay Anand, Bharat Bhargava, Brett Robertson, Kerk Kee, Patrice Buzzanell, Natalie Bolton, Harsh Taneja
The advent of Generative AI has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in generating realistic images, texts, and data patterns. However, these advancements come with heightened concerns over data privacy and copyright infringement, primarily due to the reliance on vast datasets for model training. Traditional approaches like differential privacy, machine unlearning, and data poisoning only offer fragmented solutions to these complex issues. Our paper delves into the multifaceted challenges of privacy and copyright protection within the data lifecycle. We advocate for integrated approaches that combines technical innovation with ethical foresight, holistically addressing these concerns by investigating and devising solutions that are informed by the lifecycle perspective. This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI.CCS CONCEPTS• Software and its engineering Software architectures; • Information systems World Wide Web; • Security and privacy Privacy protections; • Social and professional topics Copyrights; • Computing methodologies Machine learning.
Authored by Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2].The use of (AI) in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies as this is one of the needs for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people s lives. Large Language Models(LLMs) such as ChatGPT are increasing the accuracy of their response and achieving a high level of communication with humans. These AIs can be used in business to benefit, for example, customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of digital materials based on the training data, and is expected to be used in business. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. Therefore, While AI can improve our lives, it has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. Developers and users can take appropriate precautions by identifying the potential for unintended output. Such experiments are essential to efforts to minimize the potential negative social impacts of AI transparency, accountability, and use. We will also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
Cybersecurity is the practice of preventing cyberattacks on vital infrastructure and private data. Government organisations, banks, hospitals, and every other industry sector are increasingly investing in cybersecurity infrastructure to safeguard their operations and the millions of consumers who entrust them with their personal information. Cyber threat activity is alarming in a world where businesses are more interconnected than ever before, raising concerns about how well organisations can protect themselves from widespread attacks. Threat intelligence solutions employ Natural Language Processing to read and interpret the meaning of words and technical data in various languages and find trends in them. It is becoming increasingly precise for machines to analyse various data sources in multiple languages using NLP. This paper aims to develop a system that targets software vulnerability detection as a Natural Language Processing (NLP) problem with source code treated as texts and addresses the automated software vulnerability detection with recent advanced deep learning NLP models. We have created and compared various deep learning models based on their accuracy and the best performer achieved 95\% accurate results. Furthermore we have also made an effort to predict which vulnerability class a particular source code belongs to and also developed a robust dashboard using FastAPI and ReactJS.
Authored by Kanchan Singh, Sakshi Grover, Ranjini Kumar
The growth of the Internet of Things (IoT) is leading to some restructuring and transformation of everyday lives. The number and diversity of IoT devices have increased rapidly, enabling the vision of a smarter environment and opening the door to further automation, accompanied by the generation and collection of enormous amounts of data. The automation and ongoing proliferation of personal and professional data in the IoT have resulted in countless cyber-attacks enabled by the growing security vulnerabilities of IoT devices. Therefore, it is crucial to detect and patch vulnerabilities before attacks happen in order to secure IoT environments. One of the most promising approaches for combating cybersecurity vulnerabilities and ensuring security is through the use of artificial intelligence (AI). In this paper, we provide a review in which we classify, map, and summarize the available literature on AI techniques used to recognize and reduce cybersecurity software vulnerabilities in the IoT. We present a thorough analysis of the majority of AI trends in cybersecurity, as well as cutting-edge solutions.
Authored by Heba Khater, Mohamad Khayat, Saed Alrabaee, Mohamed Serhani, Ezedin Barka, Farag Sallabi
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this position paper, we address the security of AI code generators by identifying a novel data poisoning attack that results in the generation of vulnerable code. Next, we devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation. Lastly, we discuss potential solutions to overcome this threat.
Authored by Cristina Improta
Visible Light Security 2022 - Visible Light Communication (VLC) is one of technology for the sixth generation (6G) wireless communication and also broadcast system. VLC systems are more resistant against Radio Frequency interference and unsusceptible to security like most RF wireless networks. Since VLC is one of suitable candidate for enforcing data security in future wireless networks. This paper considers improving the security of the next generation of wireless communications by using wireless device fingerprints in visible light communication, which could be used potentially for ATSC broadcasting applications. In particular, we aim to provide a detailed proposal for developing novel wireless security solutions using Visible light communication device fingerprinting techniques. The objectives are two-fold: (1) to provide a systematic review of AI-based wireless device fingerprint identification method and (2) to identify VLC transmitter, with respect to the ATSC physical layer modulation scheme, by analysing the differences in the modulated constellations signaled received by photo-diode, which will be proved by laboratory experimentation.
Authored by Ziqi Liu, Dayu Shi, Samia Oukemeni, Xun Zhang
Distributed computation and AI processing at the edge has been identified as an efficient solution to deliver real-time IoT services and applications compared to cloud-based paradigms. These solutions are expected to support the delay-sensitive IoT applications, autonomic decision making, and smart service creation at the edge in comparison to traditional IoT solutions. However, existing solutions have limitations concerning distributed and simultaneous resource management for AI computation and data processing at the edge; concurrent and real-time application execution; and platform-independent deployment. Hence, first, we propose a novel three-layer architecture that facilitates the above service requirements. Then we have developed a novel platform and relevant modules with integrated AI processing and edge computer paradigms considering issues related to scalability, heterogeneity, security, and interoperability of IoT services. Further, each component is designed to handle the control signals, data flows, microservice orchestration, and resource composition to match with the IoT application requirements. Finally, the effectiveness of the proposed platform is tested and have been verified.
Authored by Sewwandi Nisansala, Gayal Chandrasiri, Sonali Prasadika, Upul Jayasinghe
Advanced metamorphic malware and ransomware use techniques like obfuscation to alter their internal structure with every attack. Therefore, any signature extracted from such attack, and used to bolster endpoint defense, cannot avert subsequent attacks. Therefore, if even a single such malware intrudes even a single device of an IoT network, it will continue to infect the entire network. Scenarios where an entire network is targeted by a coordinated swarm of such malware is not beyond imagination. Therefore, the IoT era also requires Industry-4.0 grade AI-based solutions against such advanced attacks. But AI-based solutions need a large repository of data extracted from similar attacks to learn robust representations. Whereas, developing a metamorphic malware is a very complex task and requires extreme human ingenuity. Hence, there does not exist abundant metamorphic malware to train AI-based defensive solutions. Also, there is currently no system that could generate enough functionality preserving metamorphic variants of multiple malware to train AI-based defensive systems. Therefore, to this end, we design and develop a novel system, named X-Swarm. X-Swarm uses deep policy-based adversarial reinforcement learning to generate swarm of metamorphic instances of any malware by obfuscating them at the opcode level and ensuring that they could evade even capable, adversarial-attack immune endpoint defense systems.
Authored by Mohit Sewak, Sanjay Sahay, Hemant Rathore
Network Intrusion Detection Systems (IDSs) have been used to increase the level of network security for many years. The main purpose of such systems is to detect and block malicious activity in the network traffic. Researchers have been improving the performance of IDS technology for decades by applying various machine-learning techniques. From the perspective of academia, obtaining a quality dataset (i.e. a sufficient amount of captured network packets that contain both malicious and normal traffic) to support machine learning approaches has always been a challenge. There are many datasets publicly available for research purposes, including NSL-KDD, KDDCUP 99, CICIDS 2017 and UNSWNB15. However, these datasets are becoming obsolete over time and may no longer be adequate or valid to model and validate IDSs against state-of-the-art attack techniques. As attack techniques are continuously evolving, datasets used to develop and test IDSs also need to be kept up to date. Proven performance of an IDS tested on old attack patterns does not necessarily mean it will perform well against new patterns. Moreover, existing datasets may lack certain data fields or attributes necessary to analyse some of the new attack techniques. In this paper, we argue that academia needs up-to-date high-quality datasets. We compare publicly available datasets and suggest a way to provide up-to-date high-quality datasets for researchers and the security industry. The proposed solution is to utilize the network traffic captured from the Locked Shields exercise, one of the world’s largest live-fire international cyber defence exercises held annually by the NATO CCDCOE. During this three-day exercise, red team members consisting of dozens of white hackers selected by the governments of over 20 participating countries attempt to infiltrate the networks of over 20 blue teams, who are tasked to defend a fictional country called Berylia. After the exercise, network packets captured from each blue team’s network are handed over to each team. However, the countries are not willing to disclose the packet capture (PCAP) files to the public since these files contain specific information that could reveal how a particular nation might react to certain types of cyberattacks. To overcome this problem, we propose to create a dedicated virtual team, capture all the traffic from this team’s network, and disclose it to the public so that academia can use it for unclassified research and studies. In this way, the organizers of Locked Shields can effectively contribute to the advancement of future artificial intelligence (AI) enabled security solutions by providing annual datasets of up-to-date attack patterns.
Authored by Maj. Halisdemir, Hacer Karacan, Mauno Pihelgas, Toomas Lepik, Sungbaek Cho
The 2021 T-Mobile breach conducted by John Erin Binns resulted in the theft of 54 million customers' personal data. The attacker gained entry into T-Mobile's systems through an unprotected router and used brute force techniques to access the sensitive information stored on the internal servers. The data stolen included names, addresses, Social Security Numbers, birthdays, driver's license numbers, ID information, IMEIs, and IMSIs. We analyze the data breach and how it opens the door to identity theft and many other forms of hacking such as SIM Hijacking. SIM Hijacking is a form of hacking in which bad actors can take control of a victim's phone number allowing them means to bypass additional safety measures currently in place to prevent fraud. This paper thoroughly reviews the attack methodology, impact, and attempts to provide an understanding of important measures and possible defense solutions against future attacks. We also detail other social engineering attacks that can be incurred from releasing the leaked data.
Authored by Christopher Faircloth, Gavin Hartzell, Nathan Callahan, Suman Bhunia