The growing deployment of IoT devices has led to unprecedented interconnection and information sharing, but it has also introduced novel security challenges. This research proposes a strategy for addressing security issues in Internet of Things (IoT) networks using intrusion detection systems (IDS) based on artificial intelligence (AI) and machine learning (ML). The aim is to raise the level of security in IoT ecosystems while ensuring that potential threats are identified and mitigated effectively. The frequency of cyber attacks grows in proportion to the number of people who rely on and use the internet. Data sent over a network is vulnerable to interception by both internal and external parties, and an attack may be launched by a human or an automated system. The intensity and effectiveness of these attacks are continuously rising, making such hackers and attackers increasingly difficult to deter or foil. Vendors with extensive domain expertise occasionally offer IDS solutions that are adaptive, unique, and trustworthy. This research focuses on IDS and cryptography. Building on the existing scholarly literature on IDS, it investigates several machine learning and deep learning techniques and applies cryptographic techniques to further strengthen security standards. Prior research did not consider problems with accuracy and performance, and additional protection remains necessary; deep learning can therefore become even more effective and accurate in the future.
Authored by Mohammed Mahdi
In the ever-evolving landscape of cybersecurity threats, intrusion detection systems are critical in protecting network and server infrastructure. This research introduces a hybrid detection approach that uses deep learning techniques to improve intrusion detection accuracy and efficiency. The proposed prototype combines the strengths of the XGBoost and MaxPooling1D algorithms within an ensemble model, resulting in a stable and effective solution. Through the fusion of these methodologies, the hybrid detection system achieves superior performance in identifying and mitigating various types of intrusions. This paper provides an overview of the prototype's architecture, discusses the benefits of using deep learning in intrusion detection, and presents experimental results showcasing the system's efficacy.
Authored by Vishnu Kurnala, Swaraj Naik, Dhanush Surapaneni, Ch. Reddy
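The hybrid idea described above, pooling to compress a flow-feature vector and then fusing detector outputs in an ensemble, can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the pool size, weights, and threshold are assumptions, and a stdlib max-pool plus weighted score averaging stands in for the full MaxPooling1D/XGBoost pipeline.

```python
# Illustrative sketch: 1-D max pooling over a feature vector, then a
# simple score-level ensemble of two detectors. All values are invented.

def max_pool_1d(values, pool_size=2):
    """Reduce a 1-D feature vector by taking the max of each window."""
    return [max(values[i:i + pool_size])
            for i in range(0, len(values), pool_size)]

def ensemble_score(score_a, score_b, weight_a=0.5):
    """Weighted average of two detector scores, each in [0, 1]."""
    return weight_a * score_a + (1 - weight_a) * score_b

features = [0.1, 0.9, 0.3, 0.4, 0.8, 0.2]
pooled = max_pool_1d(features)             # [0.9, 0.4, 0.8]
is_intrusion = ensemble_score(0.7, 0.9) > 0.5
```

In a real pipeline the pooled features would feed a trained XGBoost model rather than a fixed score, but the fusion step has this shape.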
The proposed work presents Intrusion Detection Systems (IDS) powered by artificial intelligence as a novel method for enhancing residential security. The overarching goal of the study is to design, develop, and evaluate a system that employs artificial intelligence techniques for real-time detection and prevention of unauthorized access, in response to the rising demand for such measures. By applying machine learning algorithms such as anomaly detection, neural networks, and decision trees, and by incorporating data from multiple sensors, the proposed system ensures the accurate identification of suspicious activities. The proposed work examines large datasets and compares them to conventional security measures to demonstrate the system's superior performance and prospective impact on reducing home intrusions. It contributes to the field of residential security by proposing a dependable, adaptable, and intelligent method for protecting homes against the ever-changing types of infiltration threats that exist today.
Authored by Jeneetha J, B. Vishnu Prabha, B. Yasotha, Jaisudha J, C. Senthilkumar, V. Samuthira Pandi
Intrusion Detection Systems (IDS) are critical for detecting and mitigating cyber threats, yet the opaqueness of machine learning models used within these systems poses challenges for understanding their decisions. This paper proposes a novel approach to address this issue by integrating SHAP (SHapley Additive exPlanations) values with Large Language Models (LLMs). With the aim of enhancing transparency and trust in IDS, this approach demonstrates how the combination facilitates the generation of human-understandable explanations for detected anomalies, drawing upon the CICIDS2017 dataset. The LLM effectively articulates significant features identified by SHAP values, offering coherent responses regarding influential predictors of model outcomes.
Authored by Abderrazak Khediri, Hamda Slimi, Ayoub Yahiaoui, Makhlouf Derdour, Hakim Bendjenna, Charaf Ghenai
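The SHAP-to-LLM pipeline described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the prompt wording is an assumption, and the actual LLM call is left out. The core idea is to rank features by the magnitude of their SHAP contributions and render the top ones into a natural-language prompt.

```python
# Illustrative sketch: select the most influential SHAP features for a
# detected anomaly and build a prompt asking an LLM to explain them.

def top_shap_features(shap_values, feature_names, k=3):
    """Return the k features with the largest absolute SHAP contribution."""
    ranked = sorted(zip(feature_names, shap_values),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return ranked[:k]

def build_explanation_prompt(top_features, prediction):
    """Render ranked features into a prompt for a language model."""
    lines = [f"- {name}: SHAP contribution {value:+.3f}"
             for name, value in top_features]
    return (f"The IDS classified this network flow as '{prediction}'. "
            "Explain in plain language how these features influenced "
            "the decision:\n" + "\n".join(lines))
```

The returned string would then be sent to the LLM, which articulates the influential predictors in human-readable form.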
This paper proposes an AI-based intrusion detection method for the ITRI AI BOX information security application. The packets captured by AI BOX are analyzed with AI algorithms to determine whether they indicate network attacks or abnormal traffic. When data transmission behavior is detected as abnormal or harmful, it is adjusted or isolated. AI models are used to detect anomalies and to allow or restrict data transmission, ensuring the information security of devices. Future versions will also be able to intercept packets in both the information technology (IT) and operational technology (OT) fields, and the method can be applied to free movement between heterogeneous networks to assist in data computation and transformation. This paper realizes the intrusion detection method through experimental testing, hoping to add value to the AI BOX information security application. When the IT and OT fields use AI BOX to detect intrusions accurately, it will protect smart factories and hospitals from abnormal traffic attacks and avoid system paralysis, extortion, and other dangers. We have built the machine learning model, the packet sniffing functionality, and the operating system configuration of the AI BOX environment. A public dataset was used to test the model, achieving 99% accuracy, and the Yocto Project environment has been made available in the AI BOX and tested successfully.
Authored by Jiann-Liang Chen, Zheng-Zhun Chen, Youg-Sheng Chang, Ching-Iang Li, Tien-I Kao, Yu-Ting Lin, Yu-Yi Xiao, Jian-Fu Qiu
Cloud computing has become increasingly popular in the modern world. While it has brought many benefits to today's innovative technological era, cloud computing has also shown some drawbacks, which are present in the security aspect of the cloud and its many services. Security practices differ in the realm of cloud computing, as the role of securing information systems is passed to a third party. While this reduces managerial strain on those who enlist cloud computing, it also brings risk to their data and the services they may provide. Cloud services have become a large target for those with malicious intent due to the high density of valuable data stored in one relative location. By enlisting honeynets, cloud service providers can effectively improve their intrusion detection systems and gain the opportunity to study the attack vectors used by malicious actors, further improving security controls. Implementing honeynets in cloud-based networks is an investment in cloud security that will provide ever-increasing returns in hardening information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
With the continuous development of Autonomous Vehicles (AVs), Intrusion Detection Systems (IDSs) became essential to ensure the security of in-vehicle (IV) networks. In the literature, classic machine learning (ML) metrics used to evaluate AI-based IV-IDSs present significant limitations and fail to assess their robustness fully. To address this, our study proposes a set of cyber resiliency metrics adapted from MITRE's Cyber Resiliency Metrics Catalog, tailored for AI-based IV-IDSs. We introduce specific calculation methods for each metric and validate their effectiveness through a simulated intrusion detection scenario. This approach aims to enhance the evaluation and resilience of IV-IDSs against advanced cyber threats and contribute to safer autonomous transportation.
Authored by Hamza Khemissa, Mohammed Bouchouia, Elies Gherbi
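One resiliency-style metric in the spirit of the adapted MITRE catalog can be sketched as performance retention: how much of its baseline detection rate an IV-IDS keeps while under attack. The metric name, formula, and numbers below are illustrative assumptions, not the paper's actual calculation methods.

```python
# Hypothetical resiliency metric sketch: detection-rate retention
# under attack. 1.0 means no degradation; 0.0 means full collapse.

def detection_rate(true_positives, total_attacks):
    """Fraction of attacks the IDS correctly flagged."""
    return true_positives / total_attacks if total_attacks else 0.0

def performance_retention(baseline_rate, under_attack_rate):
    """Ratio of attacked-condition detection rate to baseline, capped at 1."""
    if baseline_rate == 0:
        return 0.0
    return min(under_attack_rate / baseline_rate, 1.0)

baseline = detection_rate(95, 100)   # 0.95
attacked = detection_rate(76, 100)   # 0.76
retention = performance_retention(baseline, attacked)  # 0.8
```

Unlike plain accuracy, a ratio of this kind captures how gracefully the detector degrades, which is the property classic ML metrics miss.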
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of its accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of a model from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework for how an attacker would adopt scaffolding to deceive security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset and then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful and that the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The exploration of XAI's promise extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
The recent 5G networks aim to provide higher speed, lower latency, and greater capacity; therefore, compared to previous mobile networks, more advanced and intelligent network security is essential for 5G networks. To detect unknown and evolving 5G network intrusions, this paper presents an artificial intelligence (AI)-based network threat detection system that performs data labeling, data filtering, data preprocessing, and data learning on 5G network flow and security event data. Performance evaluations are first conducted on two well-known datasets, NSL-KDD and CICIDS 2017; then, practical testing of the proposed system is performed in 5G industrial IoT environments. To demonstrate detection against network threats in real 5G environments, this study utilizes a 5G model factory, a downscaled replica of a real smart factory comprising a number of 5G industrial IoT-based devices.
Authored by Jonghoon Lee, Hyunjin Kim, Chulhee Park, Youngsoo Kim, Jong-Geun Park
Facing the urgent requirement for effective emergency management, our study introduces a groundbreaking approach leveraging the capabilities of open-source Large Language Models (LLMs), notably LLAMA2. This system is engineered to enhance public emergency assistance by swiftly processing and classifying emergencies communicated through social media and direct messaging. Our innovative model interprets user descriptions to analyze context and integrate it with existing Situation Reports, streamlining the alert process to government agencies with crucial information. Importantly, during peak emergency times when conventional systems are under stress, our LLM-based solution provides critical support by offering straightforward guidance to individuals and facilitating direct communication of their circumstances to emergency responders. This advancement significantly bolsters the efficiency and efficacy of crisis response mechanisms.
Authored by Hakan Otal, Abdullah Canbaz
While code review is central to the software development process, it can be tedious and expensive to carry out. In this paper, we investigate whether and how Large Language Models (LLMs) can aid with code reviews. Our investigation focuses on two tasks that we argue are fundamental to good reviews: (i) flagging code with security vulnerabilities and (ii) performing software functionality validation, i.e., ensuring that code meets its intended functionality. To test performance on both tasks, we use zero-shot and chain-of-thought prompting to obtain final "approve or reject" recommendations. As data, we employ seminal code generation datasets (HumanEval and MBPP) along with expert-written code snippets with security vulnerabilities from the Common Weakness Enumeration (CWE). Our experiments consider a mixture of three proprietary models from OpenAI and smaller open-source LLMs. We find that the former outperform the latter by a large margin. Motivated by promising results, we finally ask our models to provide detailed descriptions of security vulnerabilities. Results show that 36.7% of LLM-generated descriptions can be associated with true CWE vulnerabilities.
Authored by Rasmus Jensen, Vali Tawosi, Salwa Alamir
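The zero-shot "approve or reject" setup described above can be sketched as a prompt builder plus a verdict parser. This is a hypothetical illustration, not the paper's exact protocol: the prompt wording is an assumption, and the model call itself is omitted.

```python
# Illustrative sketch: build a zero-shot security-review prompt and
# parse the model's free-form reply into a binary decision.

def build_review_prompt(code_snippet, task="security review"):
    """Zero-shot prompt asking for a one-word review verdict."""
    return (
        f"You are performing a {task} of the following code.\n"
        "Answer with exactly one word, APPROVE or REJECT, based on "
        "whether the code is free of security vulnerabilities:\n\n"
        f"{code_snippet}"
    )

def parse_verdict(llm_response):
    """Map a model reply to 'approve', 'reject', or 'unclear'."""
    text = llm_response.strip().upper()
    if text.startswith("APPROVE"):
        return "approve"
    if text.startswith("REJECT"):
        return "reject"
    return "unclear"
```

A chain-of-thought variant would instead ask the model to reason step by step before emitting the final one-word verdict, with the same parsing applied to the last line.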
In this survey, we delve into the integration and optimization of Large Language Models (LLMs) within edge computing environments, marking a significant shift in the artificial intelligence (AI) landscape. The paper investigates the development and application of LLMs in conjunction with edge computing, highlighting the advantages of localized data processing such as reduced latency, enhanced privacy, and improved efficiency. Key challenges discussed include the deployment of LLMs on resource-limited edge devices, focusing on computational demands, energy efficiency, and model scalability. This comprehensive analysis underscores the transformative potential and future implications of combining LLMs with edge computing, paving the way for advanced AI applications across various sectors.
Authored by Sarthak Bhardwaj, Pardeep Singh, Mohammad Pandit
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large Language Models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can be used in business, for example in customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of digital materials based on its training data, and is expected to be used in business. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. Therefore, while AI can improve our lives, it also has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. By identifying the potential for unintended output, developers and users can take appropriate precautions. Such experiments are essential to efforts to minimize the potential negative social impacts of AI and to promote transparency and accountability in its use. We will also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
The emergence of large language models (LLMs) has brought forth remarkable capabilities in various domains, yet it also poses inherent risks to trustworthiness, encompassing concerns such as toxicity, stereotype bias, adversarial robustness, ethics, privacy, and fairness. Particularly in sensitive applications like customer support chatbots, AI assistants, and digital information automation, which handle privacy-sensitive data, the adoption of generative pre-trained transformer (GPT) models is pervasive. However, ensuring robust security measures to mitigate potential security vulnerabilities is imperative. This paper advocates for a proactive approach termed "security shift-left," which emphasizes integrating security measures early in the development lifecycle to bolster the security posture of LLM-based applications. Our proposed method leverages basic machine learning (ML) techniques and retrieval-augmented generation (RAG) to effectively address security concerns. We present empirical evidence validating the efficacy of our approach with one LLM-based security application designed for the detection of malicious intent, utilizing both open-source datasets and synthesized datasets. By adopting this security shift-left methodology, developers can confidently develop LLM-based applications with robust security protection, safeguarding against potential threats and vulnerabilities.
Authored by Qianlong Lan, Anuj Kaul, Nishant Pattanaik, Piyush Pattanayak, Vinothini Pandurangan
This study investigates the performance and security indicators of mainstream large language models in Chinese generation tasks. It explores potential security risks associated with these models and offers suggestions for improvement. The study utilizes publicly available datasets to assess Chinese language generation tasks, develops datasets and multidimensional security rating standards for security task evaluations, compares the performance of three models across 5 Chinese tasks and 6 security tasks, and conducts Pearson correlation analysis using GPT-4 and questionnaire surveys. Furthermore, the study implements automatic scoring based on GPT-3.5-Turbo. The experimental findings indicate that the models excel in Chinese language generation tasks. ERNIE Bot outperforms in the evaluation of ideology and ethics, ChatGPT excels in the rumor-and-falsehood and privacy security assessments, and Claude performs well in assessing factual fallacy and social prejudice. The fine-tuned model demonstrates high accuracy in security tasks, yet all models exhibit security vulnerabilities. Integration into prompt engineering proves to be effective in mitigating security risks. It is recommended that both domestic and foreign models adhere to the legal frameworks of each country, reduce AI hallucinations, continuously expand corpora, and update iterations accordingly.
Authored by Yu Zhang, Yongbing Gao, Weihao Li, Zirong Su, Lidong Yang
LLMs face content security risks such as prompt injection, insecure output handling, sensitive information leakage, and over-reliance. By constructing a firewall for LLMs with intelligent detection strategies and introducing multi-engine detection capabilities such as rule matching, semantic computing, and AI models, we can intelligently detect and dispose of the inputs and outputs of LLMs, realizing full-time online security protection for LLM applications. The system has been tested on open-source LLMs, showing a significant improvement in the detection rate of insecure content.
Authored by Tianrui Huang, Lina You, Nishui Cai, Ting Huang
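The rule-matching engine layer of such an LLM firewall can be sketched as a set of regex rules that screen inputs before they reach the model. This is a minimal illustration under stated assumptions: the rule set, category names, and disposal policy are invented, and a real system would layer semantic and model-based engines on top.

```python
# Minimal sketch of one detection engine: regex rules that screen
# LLM inputs and block anything that trips a rule category.

import re

RULES = [
    ("prompt_injection",
     re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)),
    ("secret_exfiltration",
     re.compile(r"(api[_ ]?key|password)\s*[:=]", re.IGNORECASE)),
]

def screen_input(text):
    """Return the list of rule categories the input trips, if any."""
    return [name for name, pattern in RULES if pattern.search(text)]

def dispose(text):
    """Block inputs matching any rule; pass clean ones through."""
    hits = screen_input(text)
    return ("blocked", hits) if hits else ("allowed", [])
```

The same screening would be applied symmetrically to model outputs, with the semantic and AI-model engines catching rephrasings that evade fixed rules.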
Deep Learning Large Language Models (LLMs) have the potential to automate and simplify code writing tasks. One of the emerging applications of LLMs is hardware design, where natural language interaction can be used to generate, annotate, and correct code in a Hardware Description Language (HDL), such as Verilog. This work provides an overview of the current state of using LLMs to generate Verilog code, highlighting their capabilities, accuracy, and techniques to improve the design quality. It also reviews the existing benchmarks to evaluate the correctness and quality of generated HDL code, enabling a fair comparison of different models and strategies.
Authored by Erik Hollander, Ewout Danneels, Karel-Brecht Decorte, Senne Loobuyck, Arne Vanheule, Ian Van Kets, Dirk Stroobandt
AI pair programmers, such as GitHub's Copilot, have shown great success in automatic code generation. However, such large language model-based code generation techniques face the risk of introducing security vulnerabilities to codebases. In this work, we explore the direction of fine-tuning large language models for generating more secure code. We use real-world vulnerability fixes as our fine-tuning dataset. We craft a code-generation scenario dataset (C/C++) for evaluating and comparing the pre-trained and fine-tuned models. Our experiments on GPT-J show that the fine-tuned GPT-J achieved 70.4% and 64.5% ratios of non-vulnerable code generation for C and C++, respectively, a 10% increase for C and a slight increase for C++ compared with the pre-trained large language model.
Authored by Junjie Li, Aseem Sangalay, Cheng Cheng, Yuan Tian, Jinqiu Yang
The development of AI computing has reached a critical inflection point. The parameter scale of large AI neural network models has grown rapidly to the “pre-trillion-scale” level, and the computing needs of training such models have reached the “exa-scale” level. Besides, the AI foundation model also affects the correctness of AI applications and is becoming a new information security issue. Future AI development will be pushed by progress in computing power (supercomputers), algorithms (neural network models and parameter scale), and applications (foundation models and downstream fine-tuning). In particular, the computational efficiency of AI will be a key factor in the commercialization and popularization of AI applications.
Authored by Bor-Sung Liang
Active cyber defense mechanisms are necessary to perform automated, and even autonomous, operations using intelligent agents that defend against modern, sophisticated AI-inspired cyber threats (e.g., ransomware, cryptojacking, deep-fakes). These intelligent agents need to rely on deep learning using mature knowledge and should have the ability to apply this knowledge in a situational and timely manner for a given AI-inspired cyber threat. In this paper, we describe a 'domain-agnostic knowledge graph-as-a-service' infrastructure that can support the ability to create and store domain-specific knowledge graphs for intelligent agent apps to deploy active cyber defense solutions defending real-world applications impacted by AI-inspired cyber threats. Specifically, we present a reference architecture and describe the graph infrastructure tools and intuitive user interfaces required to construct and maintain large-scale knowledge graphs for use in knowledge curation, inference, and interaction across multiple domains (e.g., healthcare, power grids, manufacturing). Moreover, we present a case study to demonstrate how to configure custom sets of knowledge curation pipelines using custom data importers and semantic extract, transform, and load scripts for active cyber defense in a power grid system. Additionally, we show fast querying methods to reach decisions regarding cyberattack detection and deploy pertinent defenses to outsmart adversaries.
Authored by Prasad Calyam, Mayank Kejriwal, Praveen Rao, Jianlin Cheng, Weichao Wang, Linquan Bai, Sriram Nadendla, Sanjay Madria, Sajal Das, Rohit Chadha, Khaza Hoque, Kannappan Palaniappan, Kiran Neupane, Roshan Neupane, Sankeerth Gandhari, Mukesh Singhal, Lotfi Othmane, Meng Yu, Vijay Anand, Bharat Bhargava, Brett Robertson, Kerk Kee, Patrice Buzzanell, Natalie Bolton, Harsh Taneja
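The knowledge-graph-backed defense lookup described above can be illustrated with a toy traversal: a domain-specific graph links threat indicators to threats and threats to defenses, and a short query answers "which defense applies?". The triples, predicate names, and lookup shape below are invented examples, not the project's actual schema or query engine.

```python
# Toy knowledge graph sketch: (subject, predicate) -> objects triples
# mapping indicators to threats and threats to mitigations.

GRAPH = {
    ("ransomware", "indicated_by"): ["mass_file_encryption"],
    ("cryptojacking", "indicated_by"): ["sustained_cpu_spike"],
    ("ransomware", "mitigated_by"): ["isolate_host", "restore_backup"],
    ("cryptojacking", "mitigated_by"): ["kill_process"],
}

def threats_for_indicator(indicator):
    """Find threats whose 'indicated_by' edge contains this indicator."""
    return [subject for (subject, predicate), objects in GRAPH.items()
            if predicate == "indicated_by" and indicator in objects]

def defenses_for_indicator(indicator):
    """Two-hop traversal: indicator -> threat(s) -> defense(s)."""
    defenses = []
    for threat in threats_for_indicator(indicator):
        defenses.extend(GRAPH.get((threat, "mitigated_by"), []))
    return defenses
```

A production system would run such traversals against a graph database with semantic ETL feeding it, but the decision query has this two-hop structure.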
The exponential growth of web documents has resulted in traditional search engines producing results with high recall but low precision when queried by users. In the contemporary internet landscape, resources are made available via hyperlinks which may or may not meet the expectations of the user. To mitigate this issue and enhance the level of pertinence, it is imperative to examine the challenges associated with querying the semantic web and progress towards the advancement of semantic search engines. These search engines generate outcomes by prioritizing the semantic significance of the context over the structural composition of the content. This paper outlines a proposed architecture for a semantic search engine that utilizes the concept of semantics to refine web search results. The resulting output would consist of ontologically based and contextually relevant outcomes pertaining to the user's query.
Authored by Ganesh D, Ajay Rastogi
This paper introduces a novel AI-driven ontology-based framework for disease diagnosis and prediction, leveraging the advancements in machine learning and data mining. We have constructed a comprehensive ontology that maps the complex relationships between a multitude of diseases and their manifested symptoms. Utilizing Semantic Web Rule Language (SWRL), we have engineered a set of robust rules that facilitate the intelligent prediction of diseases, embodying the principles of NLP for enhanced interpretability. The developed system operates in two fundamental stages. Initially, we define a sophisticated class hierarchy within our ontology, detailing the intricate object and data properties with precision, a process that showcases our application of computer vision techniques to interpret and categorize medical imagery. The second stage focuses on the application of AI-powered rules, which are executed to systematically extract and present detailed disease information, including symptomatology, adhering to established medical protocols. The efficacy of our ontology is validated through extensive evaluations, demonstrating its capability to not only accurately diagnose but also predict diseases, with a particular emphasis on the AI methodologies employed. Furthermore, the system calculates a final risk score for the user, derived from a meticulous analysis of the results. This score is a testament to the seamless integration of AI and ML in developing a user-centric diagnostic tool, promising a significant impact on future research in AI, ML, NLP, and robotics within the medical domain.
Authored by K. Suneetha, Ashendra Saxena
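The two-stage rule idea above, symptom-to-disease rules followed by a final risk score, can be sketched in plain Python. This is a hypothetical illustration: dictionary lookups stand in for SWRL rule execution, and the diseases, symptoms, and scoring formula are invented for the example.

```python
# Illustrative sketch: rule-based disease matching (standing in for
# SWRL rules) plus a final risk score from the best match ratio.

DISEASE_RULES = {
    "influenza": {"fever", "cough", "fatigue"},
    "migraine": {"headache", "nausea", "light_sensitivity"},
}

def predict_diseases(observed_symptoms):
    """Return (disease, match_ratio) pairs sorted by best match first."""
    scores = []
    for disease, required in DISEASE_RULES.items():
        ratio = len(required & observed_symptoms) / len(required)
        if ratio > 0:
            scores.append((disease, ratio))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

def risk_score(predictions):
    """Overall risk: the strongest single match ratio, in [0, 1]."""
    return max((ratio for _, ratio in predictions), default=0.0)
```

In the actual framework, the rules carry richer object and data properties from the ontology, but the match-then-score flow has this shape.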
In this paper, we design and develop a new multimedia distribution platform that mainly utilizes containerization and microservice architecture technologies. Using our approach, the multimedia service source code located in a repository such as Git can be built into a container image for distribution and management, and the process of delivering it to the target edge device can be performed through a pipeline. In addition, distributed edge devices can be built into clusters with various connection profiles and utilized for services. Real-time monitoring functions are provided to ensure stable service operation even after the service is deployed. To implement this complex service platform, we follow the microservice architecture method. Stable operation was confirmed even during an operational test period of over a year. This technology is expected to help deploy multimedia services conveniently and quickly and manage them stably and efficiently.
Authored by Jongbin Park
The right to education is a basic need of every child and every society across the globe. Ever since the internet revolution and technological upgrades took place, education systems have been evolving from traditional approaches to smarter ones. Covid-19 and the industrial revolution have made smart education a global business that is now penetrating even the rural footprints of remote locations. The use of smart devices, IoT-based communications, and AI techniques has increased the cyberattack surface of the smart education system. Moreover, a lack of cyber awareness and the absence of essential cyber sanity checks have exposed vulnerabilities in the smart education system. This paper presents a study of the technology evolution from education to smart education and its penetration across the globe, details of the smart education ecosystem, and the roles of various stakeholders. It also covers the most trending cyberattacks and the history of reported cyberattacks in the smart education sector. Further, to make the smart educational cyberspace more secure, proactive preventive measures and cyber sanity actions to mitigate such attacks are also discussed.
Authored by Sandeep Sarowa, Munish Kumar, Vijay Kumar, Bhisham Bhanot