Networks of smart physical objects have a significant impact on the growth of urban civilization. The evidence has been drawn from digital sources such as scientific journals, conference proceedings, and other publications. Along with other security services, these kinds of structured, sophisticated data have addressed a number of security-related challenges. Here, many forms of cutting-edge machine learning and AI techniques are used to examine how merging two or more algorithms with AI and ML might make the Internet of Things more secure. The main objective of this paper is to explore how ML and AI can be applied to improve IoT security.
Authored by Brijesh Singh, Santosh Sharma, Ravindra Verma
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of a model from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework showing how an attacker would adopt scaffoldings to deceive security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset and then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful, and the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
Data in AI-empowered electric vehicles is protected by using blockchain technology for immutable and verifiable transactions, in addition to high-strength encryption methods and digital signatures. This research paper compares and evaluates the security mechanisms for V2X communication in AI-enabled EVs. The purpose of the study is to ensure the reliability of security measures by evaluating performance metrics including false positive rate, false negative rate, detection accuracy, processing time, communication latency, computational resources, key generation time, and throughput. A comprehensive experimental approach is implemented using a diverse dataset gathered from actual V2X communication conditions. The evaluation reveals that the security mechanisms perform inconsistently. Message integrity verification obtains the highest detection accuracy, with a low false positive rate of 2% and a 0% false negative rate. Traffic encryption has a low processing time, requiring only 10 milliseconds for encryption and decryption, and adds only 5 bytes of communication overhead to V2X messages. The detection accuracy of intrusion detection systems is adequate at 95%, but they require more computational resources, consuming 80% of the CPU and 150 MB of memory. In particular attack scenarios, certificate-based authentication and secure key exchange show promise. Certificate-based authentication mitigates MitM attacks with a false positive rate of 3% and a false negative rate of 1%. Secure key exchange thwarts replication attacks with a false positive rate of 0% and a false negative rate of 2%. Nevertheless, their efficacy varies by attack scenario, highlighting the need for adaptive security mechanisms. The evaluated security mechanisms exhibit varying rates of throughput: message integrity verification and traffic encryption accomplish high throughput, enabling secure data transfer rates of 1 Mbps and 800 Kbps, respectively. Overall, the results contribute to the comprehension of V2X communication security challenges in AI-enabled EVs. Message integrity verification and traffic encryption have emerged as effective mechanisms that provide robust security with high performance. The study provides insight for designing secure and dependable V2X communication systems. Future research should concentrate on enhancing V2X communication's security mechanisms and exploring novel approaches to resolve emerging threats.
Authored by Edward V, Dhivya. S, M.Joe Marshell, Arul Jeyaraj, Ebenezer. V, Jenefa. A
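The metrics reported above follow directly from a confusion matrix over attack/benign message labels. Below is a minimal Python sketch of how detection accuracy, false positive rate, and false negative rate can be computed; the labels and predictions are illustrative placeholders, not the paper's dataset.

```python
# Derive (accuracy, FPR, FNR) from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

def detection_metrics(y_true, y_pred):
    """Return (accuracy, false positive rate, false negative rate)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # benign flagged as attack
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # attack missed
    return accuracy, fpr, fnr

# Illustrative use: 1 = attack message, 0 = benign V2X message.
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 1, 1, 0, 0, 1])
print(detection_metrics(y_true, y_pred))
```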
This article proposes a security protection technology based on active dynamic defense. It addresses unknown threats that traditional rule-based detection methods cannot detect, effectively resisting undirected virus propagation such as worms; isolates new unknown viruses, Trojans, and other attack threats; and strengthens terminal protection, effectively countering east-west lateral-movement attacks within the internal network and enhancing its adversarial capabilities. The article further proposes modeling user behavior habits with machine learning algorithms. By using historical behavior models, abnormal user behavior can be detected in real time, network danger can be perceived, and network defense strategies can be changed proactively to increase the difficulty for attackers, achieving comprehensive and effective defense, identification, and localization of network attack behaviors, including APT attacks.
Authored by Fu Yu
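As a rough illustration of the behavior-modeling idea, the sketch below fits an anomaly detector to historical per-user session features and flags deviations in live data. IsolationForest and the three features are stand-ins chosen for the example; the paper does not name its specific algorithm.

```python
# Learn a user-behavior baseline from history, flag live deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-session features: [login hour, bytes sent, hosts contacted]
history = rng.normal(loc=[9.0, 2e6, 5.0], scale=[1.0, 5e5, 1.5], size=(500, 3))

profile = IsolationForest(contamination=0.01, random_state=0).fit(history)

live = np.array([[3.0, 9e7, 60.0]])  # off-hours burst to many hosts
print(profile.predict(live))         # -1 -> anomalous, trigger a defense change
```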
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
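To make the "statistical methods" category concrete, here is a minimal sketch of one common detector of that family: fit a Gaussian to the feature representations of clean examples and flag inputs whose Mahalanobis distance exceeds a calibration threshold. The random features and the 99th-percentile threshold are assumptions for illustration.

```python
# Statistical adversarial-example detection via Mahalanobis distance.
import numpy as np

def fit_gaussian(feats):
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

clean_feats = np.random.default_rng(1).normal(size=(1000, 16))  # stand-in features
mu, cov_inv = fit_gaussian(clean_feats)
threshold = np.quantile([mahalanobis(f, mu, cov_inv) for f in clean_feats], 0.99)

candidate = clean_feats[0] + 2.0                        # crudely shifted input
print(mahalanobis(candidate, mu, cov_inv) > threshold)  # True -> flag as adversarial
```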
Unmanned aerial vehicles (UAVs) have been increasingly adopted in recent years to perform various military, civilian, and commercial tasks. To assure the reliability of UAVs during these tasks, anomaly detection plays an important role in today's UAV systems. With the rapid development of AI hardware and algorithms, leveraging AI techniques has become a prevalent trend for UAV anomaly detection. While existing AI-enabled UAV anomaly detection schemes have been demonstrated to be promising, they also raise additional security concerns about the schemes themselves. In this paper, we perform a study to explore and analyze the potential vulnerabilities in state-of-the-art AI-enabled UAV anomaly detection designs. We first validate the existence of a security vulnerability and then propose an iterative attack that can effectively exploit the vulnerability and bypass the anomaly detection. We demonstrate the effectiveness of our attack by evaluating it on a state-of-the-art UAV anomaly detection scheme, in which our attack is successfully launched without being detected. Based on the understanding obtained from our study, this paper also discusses potential defense directions to enhance the security of AI-enabled UAV anomaly detection.
Authored by Ashok Raja, Mengjie Jia, Jiawei Yuan
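The paper's concrete attack is not reproduced here, but the generic shape of an iterative evasion loop against a thresholded anomaly detector might look like the following sketch, where the detector score, threshold, and halving schedule are all placeholder assumptions.

```python
# Generic iterative evasion: shrink the perturbation until it evades the detector.
import numpy as np

def iterative_evasion(x, target_delta, score, threshold, steps=20):
    """Scale perturbation until score(x + alpha*delta) stays under threshold."""
    alpha = 1.0
    for _ in range(steps):
        candidate = x + alpha * target_delta
        if score(candidate) < threshold:
            return candidate, alpha   # evasive sensor trace found
        alpha *= 0.5                  # halve the perturbation and retry
    return None, 0.0

rng = np.random.default_rng(2)
x = rng.normal(size=8)                          # benign sensor window
delta = rng.normal(size=8)                      # attacker's desired tampering
score = lambda v: float(np.linalg.norm(v - x))  # toy anomaly score
print(iterative_evasion(x, delta, score, threshold=0.5))
```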
As a common network attack method, the false data injection attack (FDIA) can often cause serious consequences for the power system due to its strong concealment. Attackers utilize grid topology information to carefully construct covert attack vectors, thus bypassing the traditional bad data detection (BDD) mechanism to maliciously tamper with measurements, which is more destructive and threatening to the power system. To address the difficulty of detecting such attacks effectively, a detection method based on an adaptive interpolation-adaptive inhibition extended Kalman filter (AI-AIEKF) is proposed in this paper. Through an adaptive interpolation strategy and an exponential weight function, the AI-AIEKF algorithm improves the estimation accuracy and enhances the robustness of the EKF algorithm. Combined with weighted least squares (WLS), the two estimators respond to the system at different speeds, and a consistency test is introduced to detect FDIAs. Extensive simulations on the IEEE 14-bus system demonstrate that FDIAs can be accurately detected, thus confirming the validity of the method.
Authored by Guoqing Zhang, Wengen Gao
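For background, the sketch below works through the classical residual-based (chi-squared) BDD test that stealthy FDIAs are constructed to bypass: an attack vector of the form a = Hc shifts the state estimate without changing the residual, which motivates pairing estimators of different speeds with a consistency test as in the paper. The matrix sizes and noise levels are illustrative.

```python
# Classical residual (chi-squared) bad-data test and a stealthy FDIA that passes it.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n_state, n_meas, sigma = 4, 10, 0.01
H = rng.normal(size=(n_meas, n_state))        # linearized measurement model
x_true = rng.normal(size=n_state)
z = H @ x_true + sigma * rng.normal(size=n_meas)

# Weighted-least-squares state estimate and residual test
x_hat = np.linalg.lstsq(H, z, rcond=None)[0]
r = z - H @ x_hat
J = float(r @ r) / sigma**2
tau = chi2.ppf(0.99, df=n_meas - n_state)
print("alarm" if J > tau else "pass")

# A stealthy FDIA a = H @ c shifts the estimate by c but leaves the residual
# unchanged, so J stays below tau -- BDD is blind to it.
a = H @ rng.normal(size=n_state)
z_attacked = z + a
x_bad = np.linalg.lstsq(H, z_attacked, rcond=None)[0]
r2 = z_attacked - H @ x_bad
print(float(r2 @ r2) / sigma**2 > tau)        # False: attack passes BDD
```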
As deep-learning based image and video manipulation technology advances, the future of truth and information looks bleak. In particular, Deepfakes, wherein a person's face can be transferred onto the face of someone else, pose a serious threat of spreading convincing misinformation that is drastic and ubiquitous enough to have catastrophic real-world consequences. To prevent this, an effective detection tool for manipulated media is needed. However, the detector cannot just be good; it has to evolve with the technology to keep pace with, or even outpace, the enemy. At the same time, it must defend against the different attack types to which deep learning systems are vulnerable. To that end, in this paper we review various methods of both attack and defense on AI systems, as well as modes of evolution for such a system. Then, we put forward a potential system that combines the latest technologies in multiple areas, as well as several novel ideas, to create a detection algorithm that is robust against many attacks and can learn over time with unprecedented effectiveness and efficiency.
Authored by Ian Miller, Dan Lin
Big data and IoT technologies are developing rapidly. Accordingly, network security is increasingly emphasized, and efficient intrusion detection technology is required to detect increasingly sophisticated network attacks. In this study, we propose an efficient network anomaly detection method based on ensemble and unsupervised learning. The proposed model is built by training an autoencoder, a representative unsupervised deep learning model, using only normal network traffic data. The anomaly score of the detection target data is derived by ensembling the reconstruction loss and the Mahalanobis distances for each layer output of the trained autoencoder. By applying a threshold to this score, anomalous network traffic can be efficiently detected. To evaluate the proposed model, we applied our method to the UNSW-NB15 dataset. The results show that the overall performance of the proposed method is superior to those of the model using only the reconstruction loss of the autoencoder and the model applying the Mahalanobis distance to the raw data.
Authored by Donghun Yang, Myunggwon Hwang
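A minimal Keras sketch of the scoring idea follows: train an autoencoder on normal traffic only, then combine the reconstruction error with the Mahalanobis distance of a hidden-layer output. The layer sizes, single hidden layer, and simple additive ensemble are assumptions, not the paper's exact design.

```python
# Autoencoder trained on normal traffic; anomaly score ensembles
# reconstruction error with the code layer's Mahalanobis distance.
import numpy as np
import tensorflow as tf

x_norm = np.random.default_rng(4).normal(size=(2000, 32)).astype("float32")

inp = tf.keras.Input(shape=(32,))
h = tf.keras.layers.Dense(8, activation="relu", name="code")(inp)
out = tf.keras.layers.Dense(32)(h)
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(x_norm, x_norm, epochs=5, batch_size=64, verbose=0)

encoder = tf.keras.Model(inp, h)
codes = encoder.predict(x_norm, verbose=0)
mu = codes.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(codes, rowvar=False) + 1e-6 * np.eye(8))

def anomaly_score(x):
    recon = ae.predict(x, verbose=0)
    rec_loss = ((x - recon) ** 2).mean(axis=1)
    c = encoder.predict(x, verbose=0) - mu
    maha = np.sqrt(np.einsum("ij,jk,ik->i", c, cov_inv, c))
    return rec_loss + maha          # threshold this ensemble score

print(anomaly_score(x_norm[:3]))
```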
This paper suggests using an LSTM model to detect DDoS attacks, which typically involve patterns of malicious traffic occurring over time. The model is motivated by the fact that compromised IoT devices often leave traces in network traffic data that can be used to identify them; the LSTM model must be trained on such data before it can spot attacks in real time. Long short-term memory (LSTM) models are a type of AI that can find trends in data collected over time: the model learns the difference between normal patterns in network traffic data and patterns that indicate an attack. An IoT attack dataset was used to test the proposed method, and the tests showed that it works well for detecting attacks, achieving a precision of 99.4%. The method is simple to set up, can stop many types of break-ins, and can likely be used to detect attacks against the Internet of Things, though it will only work if the training data are correct.
Authored by Animesh Srivastava, Vikash Sawan, Kumari Jugnu, Shiv Dhondiyal
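A hedged Keras sketch of the approach: an LSTM classifies fixed-length windows of per-flow traffic features as normal or DDoS. The window length, feature count, and layer sizes are illustrative, since the abstract does not specify the configuration.

```python
# LSTM binary classifier over windows of traffic features.
import numpy as np
import tensorflow as tf

T, F = 20, 8                         # 20 time steps, 8 traffic features each
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(window is DDoS)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision()])

rng = np.random.default_rng(5)       # random stand-in for an IoT attack dataset
X = rng.normal(size=(1000, T, F)).astype("float32")
y = rng.integers(0, 2, size=1000)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(X[:2], verbose=0))
```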
The traditional port smart gate ground scale line pressure detection system employs a centralized data training method that carries the risk of privacy leakage. Federated Learning offers an effective solution to this issue by enabling each port gate to locally train data, sharing only model parameters, without the need to transmit raw data to a central server. This is particularly crucial for ground scale line pressure detection systems dealing with sensitive data. However, researchers have identified potential risks of backdoor attacks when applying Federated Learning. Currently, most existing backdoor attacks are directed towards image classification and centralized object detection. However, backdoor attacks for Federated Learning-based object detection tasks have not been explored. In this paper, we reveal that these threats may also manifest in this task. To analyze the impact of backdoor attacks on this task, we designed three backdoor attack triggers and proposed three trigger attack operations. To assess backdoor attacks on this task, we developed corresponding metrics and conducted experiments on local datasets from three port gates. The experimental results indicate that Federated Learning-based object detection tasks are susceptible to backdoor threats.
Authored by Chunming Tang, Jinghong Liu, Xinguang Dai, Yan Li
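For intuition, a spatial backdoor trigger of the kind designed in such studies can be as simple as stamping a small patch onto training images (paired with manipulated labels or boxes). The patch size, position, and value below are illustrative trigger-design choices, not the paper's triggers.

```python
# Stamp a simple spatial backdoor trigger onto an image.
import numpy as np

def apply_trigger(image, patch_size=6, value=1.0):
    """Stamp a bright square in the bottom-right corner (HxWxC float image)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

img = np.random.default_rng(10).random((128, 128, 3))
print(apply_trigger(img)[-1, -1, 0])   # 1.0 -> trigger present
```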
The Internet of Things (IoT) heralds an innovative generation in communication by enabling everyday devices to send, receive, and share data easily. IoT applications, which prioritise task automation, aim to give inanimate objects autonomy; they promise increased comfort, productivity, and automation. However, strong security, privacy, authentication, and recovery methods are required to realize this goal. In order to build end-to-end secure IoT environments, this article meticulously reviews the security issues and risks inherent to IoT applications and emphasises the vital necessity for architectural changes. The paper begins with an examination of security concerns before exploring emerging and advanced technologies aimed at nurturing a sense of trust in Internet of Things (IoT) applications. The primary focus of the discussion revolves around how these technologies aid in overcoming security challenges and fostering a trustworthy ecosystem for IoT.
Authored by Pranav A, Sathya S, HariHaran B
Nowadays, anomaly-based network intrusion detection system (NIDS) still have limited real-world applications; this is particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose to use explainable artificial intelligence (XAI) methods for tackling these issues. In our experimentation, we train a random forest (RF) model on the NSL-KDD dataset, and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. There, we observe severe imbalances in the number of records per attack type subsumed in the attack classes of the NSL-KDD dataset, which could lead to generalization and overfitting regarding classification. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance is considerably improved, and the new explanations are matching the expectations based on domain knowledge better. Thus, we conclude that the imbalances in the dataset bias classification and consequently also the results of XAI methods like SHAP. However, the XAI methods can also be employed to find and debug issues and biases in the data and the applied model. Furthermore, the debugging results in higher trustworthiness of anomaly-based NIDS.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
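A minimal sketch of the pipeline described above: train a random forest and derive global SHAP importances with TreeExplainer, labeling records by fine-grained attack type rather than coarse attack class. Synthetic features stand in for NSL-KDD records.

```python
# Random forest + SHAP global explanations on stand-in NSL-KDD-like data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 10))                 # stand-in NSL-KDD features
y = rng.integers(0, 4, size=500)               # fine-grained attack *types*

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(rf)
vals = np.abs(np.asarray(explainer.shap_values(X)))

# Global explanation: mean |SHAP| per feature, averaging over samples and
# classes (the feature axis is located by size, as its position varies
# across shap versions).
feat_axis = vals.shape.index(X.shape[1])
global_importance = vals.mean(axis=tuple(i for i in range(vals.ndim)
                                         if i != feat_axis))
print(global_importance)                       # compare with domain expertise
```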
With the increasing complexity of network attacks, traditional firewall technologies are facing challenges in effectively detecting and preventing these attacks. As a result, AI technology has emerged as a promising approach to enhance the capabilities of firewalls in detecting and mitigating network attacks. This paper aims to investigate the application of AI firewalls in network attack detection and proposes a testing method to evaluate their performance. An experiment was conducted to verify the feasibility of the proposed testing method. The results demonstrate that AI firewalls exhibit higher accuracy in detecting network attacks, thereby highlighting their effectiveness. Furthermore, the testing method can be utilized to compare different AI firewalls.
Authored by Zhijia Wang, Qi Deng
DDoS is considered the most dangerous attack on and threat to software defined networks (SDN). Existing mitigation technologies include the flow capacity method, the entropy method, and the flow analysis method; they rely on traffic sampling to achieve true real-time inline DDoS detection accuracy, but the cost of traffic sampling is very high. Early detection of DDoS attacks in the controller is very important, which requires highly adaptive and accurate methods. Therefore, this paper proposes an effective and accurate real-time DDoS attack detection technique based on the Hurst index. The main detection methods for DDoS attacks and the traffic characteristics when DDoS attacks occur are briefly analyzed. The Hurst exponent estimation method and its application in real-time detection (RTD) of DDoS attacks are discussed. Finally, simulation experiments are conducted to verify the effectiveness and feasibility of RTD of DDoS attacks based on the Hurst index.
Authored by Ying Ling, Chunyan Yang, Xin Li, Ming Xie, Shaofeng Ming, Jieke Lu, Fuchuan Tang
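One common way to estimate the Hurst exponent is rescaled-range (R/S) analysis, sketched below: during a DDoS, traffic self-similarity changes, so H drifting away from its baseline can raise an alarm. The estimator choice and window sizes are assumptions; the paper's exact estimation method is not reproduced here.

```python
# Hurst exponent estimation by rescaled-range (R/S) analysis.
import numpy as np

def rs(series):
    """Rescaled range R/S of one window."""
    dev = series - series.mean()
    z = np.cumsum(dev)
    r = z.max() - z.min()
    s = series.std()
    return r / s if s > 0 else 0.0

def hurst(series, window_sizes=(16, 32, 64, 128, 256)):
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        vals = [rs(c) for c in chunks if c.std() > 0]
        if vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(vals)))
    return np.polyfit(log_n, log_rs, 1)[0]    # slope of log-log fit = H

rng = np.random.default_rng(7)
traffic = rng.normal(size=4096)               # stand-in packet-rate series
print(hurst(traffic))                         # ~0.5 for uncorrelated traffic
```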
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. On the other hand, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack and exploit information technology (IT) networks or infrastructure. In the past few years, the number of these threats has been increasing at an alarming rate and have been costing organizations millions of dollars to remediate. The increasing expansion of network attack surfaces and the exponentially growing number of assets on these networks necessitate the need for a robust AI-based Zero Day Threat detection model that can quickly analyze petabyte-scale data for potentially malicious and novel activity. In this paper, the authors introduce a deep learning based approach to Zero Day Threat detection that can generalize, scale, and effectively identify threats in near real-time. The methodology utilizes network flow telemetry augmented with asset-level graph features, which are passed through a dual-autoencoder structure for anomaly and novelty detection respectively. The models have been trained and tested on four large scale datasets that are representative of real-world organizational networks and they produce strong results with high precision and recall values. The models provide a novel methodology to detect complex threats with low false positive rates that allow security operators to avoid alert fatigue while drastically reducing their mean time to response with near-real-time detection. Furthermore, the authors also provide a novel, labelled, cyber attack dataset generated from adversarial activity that can be used for validation or training of other models. With this paper, the authors’ overarching goal is to provide a novel architecture and training methodology for cyber anomaly detectors that can generalize to multiple IT networks with minimal to no retraining while still maintaining strong performance.
Authored by Christopher Redino, Dhruv Nandakumar, Robert Schiller, Kevin Choi, Abdul Rahman, Edward Bowen, Aaron Shaha, Joe Nehila, Matthew Weeks
In recent years, the security of AI systems has drawn increasing research attention, especially in the medical imaging realm. To develop a secure medical image analysis (MIA) system, it is a must to study possible backdoor attacks (BAs), which can embed hidden malicious behaviors into the system. However, designing a unified BA method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation). Most existing BA methods are designed to attack natural image classification models, which apply spatial triggers to training images and inevitably corrupt the semantics of poisoned pixels, leading to the failures of attacking dense prediction models. To address this issue, we propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various MIA tasks. Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images. Since it preserves the semantics of the poisoned image pixels, FIBA can perform attacks on both classification and dense prediction models. Experiments on three benchmarks in MIA (i.e., ISIC-2019 [4] for skin lesion classification, KiTS-19 [17] for kidney tumor segmentation, and EAD-2019 [1] for endoscopic artifact detection) validate the effectiveness of FIBA and its superiority over state-of-the-art methods in attacking MIA models and bypassing backdoor defense. Source code will be available.
Authored by Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, Dacheng Tao
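The core frequency-injection step lends itself to a short numpy sketch: blend the trigger image's low-frequency spectral amplitude into the benign image while keeping the benign phase, so pixel semantics are largely preserved. The blend ratio alpha and band size beta are assumed hyperparameters, not the paper's values.

```python
# Frequency-injection poisoning: mix low-frequency amplitudes, keep phase.
import numpy as np

def fiba_poison(benign, trigger, alpha=0.15, beta=0.1):
    """Inject trigger's low-frequency amplitude into benign (both HxW floats)."""
    fb, ft = np.fft.fft2(benign), np.fft.fft2(trigger)
    amp_b, phase_b = np.abs(fb), np.angle(fb)
    amp_t = np.abs(ft)

    # Low-frequency band: a centered window after fftshift
    h, w = benign.shape
    bh, bw = int(h * beta), int(w * beta)
    amp_b, amp_t = np.fft.fftshift(amp_b), np.fft.fftshift(amp_t)
    cy, cx = h // 2, w // 2
    band = (slice(cy - bh, cy + bh), slice(cx - bw, cx + bw))
    amp_b[band] = (1 - alpha) * amp_b[band] + alpha * amp_t[band]  # linear mix
    amp_b = np.fft.ifftshift(amp_b)

    poisoned = np.fft.ifft2(amp_b * np.exp(1j * phase_b)).real
    return np.clip(poisoned, 0.0, 1.0)

rng = np.random.default_rng(8)
img, trig = rng.random((64, 64)), rng.random((64, 64))
print(fiba_poison(img, trig).shape)
```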
Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants train a global model collaboratively by only sharing model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker's goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve >95% true positive rates while incurring only a <2% false positive rate. Furthermore, the clean models that are trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
Authored by Pinar Erbil, Emre Gursoy
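As a rough illustration of separating malicious from benign updates, the sketch below clusters clients' parameter updates and treats the small, tightly grouped cluster as suspect before aggregation. KMeans here is a stand-in; the paper's MAPPS and X-Means/VAT components are not reproduced.

```python
# Cluster client updates; exclude the minority cluster before aggregation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
benign = rng.normal(0.0, 0.1, size=(18, 100))   # 18 honest client updates
poison = rng.normal(1.5, 0.1, size=(2, 100))    # 2 targeted-poison updates
updates = np.vstack([benign, poison])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
minority = np.argmin(np.bincount(labels))
suspect = np.where(labels == minority)[0]
print("suspect clients:", suspect)              # candidates for exclusion

# "Clean" aggregation: average only the majority cluster's updates
clean_update = updates[labels != minority].mean(axis=0)
```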
Recently, research on AI-based network intrusion detection has been actively conducted. In previous studies, machine learning models such as SVM (Support Vector Machine) and RF (Random Forest) showed consistently high performance, whereas NB (Naïve Bayes) showed varied performance with large deviations. In this paper, after analyzing the causes of the NB models' varied performance reported in several studies, we measured the performance of the Gaussian NB model according to the smoothing factor that is closely related to these causes. Furthermore, we compared the performance of the Gaussian NB model with that of the other models as a zero-day attack detection system. In the experiments, the accuracy was 38.80% and 87.99% when the smoothing factor was 0 and at its default value, respectively, and the highest accuracy was 94.53% when the smoothing factor was 1e-01. We used only some types of the attack data in the NSL-KDD dataset. The experiments show the applicability of the Gaussian NB model as a future zero-day attack detection system. In addition, they clarify that the smoothing factor of the Gaussian NB model determines the shape of the Gaussian distribution used for the likelihood.
Authored by Kijung Bong, Jonghyun Kim
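The experiment translates directly into scikit-learn, where the smoothing factor corresponds to GaussianNB's var_smoothing parameter (default 1e-9). The sketch below sweeps several values on synthetic data standing in for the NSL-KDD subset.

```python
# Sweep GaussianNB's smoothing factor and observe accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for vs in (1e-12, 1e-9, 1e-3, 1e-1, 1.0):
    nb = GaussianNB(var_smoothing=vs).fit(X_tr, y_tr)
    # var_smoothing adds vs * (max feature variance) to every class variance,
    # widening each Gaussian likelihood and regularizing the model
    print(f"var_smoothing={vs:g}: accuracy={nb.score(X_te, y_te):.3f}")
```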
Phishing activity is undertaken by hackers to compromise computer networks and financial systems. A compromised computer system or network provides data and/or processing resources to the world of cybercrime. Cybercrime was projected to cost the world $6 trillion by 2021; in this context, phishing is expected to continue being a growing challenge. Statistics on phishing growth over the last decade support this theory, as phishing numbers have grown almost exponentially over the period. Recent reports on the complexity of phishing show that the fight against phishing URLs as a means of building a more resilient cyberspace is an evolving challenge. Compounding the problem is the lack of cyber security expertise to handle the expected rise in incidents. Previous research has proposed different methods, including neural networks, data mining techniques, heuristic-based phishing detection, and machine learning, to detect phishing websites. However, phishers have recently started to use more sophisticated techniques to attack internet users, such as VoIP phishing and spear phishing. For these modern methods, the traditional ways of phishing detection provide low accuracy. Hence, the requirement arises for the application and development of modern tools and techniques to use as a countermeasure against such phishing attacks. Keeping in view the nature of recent phishing attacks, it is imperative to develop a state-of-the-art anti-phishing tool that can predict phishing attacks before actual phishing incidents occur. We have designed such a tool that works efficiently to detect phishing websites, so that a user can easily understand the risk to their personal and financial data.
Authored by Rajeev Shah, Mohammad Hasan, Shayla Islam, Asif Khan, Taher Ghazal, Ahmad Khan
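One widely used approach from the surveyed literature, URL-feature-based prediction, can be sketched as follows; the handcrafted features and the RandomForest choice are illustrative, not the authors' tool, and the two-sample training set is a toy.

```python
# Toy URL-feature-based phishing prediction.
import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    return [
        len(url),                          # long URLs are suspicious
        url.count("."),                    # many subdomains
        url.count("-"),
        int("@" in url),                   # userinfo trick hides the real host
        int(bool(re.search(r"\d+\.\d+\.\d+\.\d+", url))),  # raw IP host
        int(not url.startswith("https")),  # no TLS
    ]

urls = ["https://example.com/login",
        "http://192.168.10.5/paypal-secure-update@verify.biz/account"]
labels = [0, 1]                            # 0 = legitimate, 1 = phishing

clf = RandomForestClassifier(random_state=0).fit(
    np.array([url_features(u) for u in urls]), labels)
print(clf.predict([url_features("http://secure-login-bank.example-pay.biz")]))
```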