This paper presents an application of machine learning algorithms: support vector machine and decision tree approaches are studied and applied to compare their performance in detecting, classifying, and locating faults in the transmission network. The IEEE 14-bus transmission network is considered in this work, and 13 types of faults are tested. In particular, the single-fault and multiple-fault cases are investigated and tested separately. Fault simulations are performed using the SimPowerSystems toolbox in Matlab. Based on the accuracy score, the proposed approaches are compared when testing simple faults, on the one hand, and when complicated faults are included, on the other. Simulation results show that the support vector machine technique achieves an accuracy of 87% in the complicated cases, compared to 53% for the decision tree.
Authored by Nouha Bouchiba, Azeddine Kaddouri
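A minimal sketch (not the authors' code) of the kind of comparison described in the abstract above: an SVM and a decision tree scored by accuracy with scikit-learn. The data, feature count, and hyperparameters below are placeholders standing in for the simulated fault measurements.

```python
# Hypothetical SVM-vs-decision-tree comparison on placeholder fault data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1300, 6))        # placeholder for measured bus voltages/currents
y = rng.integers(0, 13, size=1300)    # 13 fault types, as in the abstract

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

svm = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("SVM accuracy :", accuracy_score(y_test, svm.predict(scaler.transform(X_test))))
print("Tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```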
The rapid growth in the number of devices connected to Internet of Things (IoT) networks increases the severity of the security problems that must be solved to provide a safe environment for network data exchange. The discovery of new vulnerabilities is an everyday challenge for security experts, and many novel methods for the detection and prevention of intrusions are being developed to deal with this issue. To address these shortcomings, artificial intelligence (AI) can be used in the development of advanced intrusion detection systems (IDS). This allows such a system to adapt to emerging threats, react in real time, and adjust its behavior based on previous experience. On the other hand, the traffic classification task becomes more difficult because of the large amount of data generated by network systems and the high processing demands. For this reason, a feature selection (FS) process is applied to reduce data complexity by removing data that is less relevant to the active classification task, thereby improving the algorithm's accuracy. In this work, a hybrid version of the recently proposed sand cat swarm optimizer algorithm is proposed for feature selection, with the goal of increasing the performance of an extreme learning machine classifier. The performance improvements are demonstrated by validating the proposed method on two well-known datasets, UNSW-NB15 and CICIDS-2017, and comparing the results with those reported for other cutting-edge algorithms that address the same problem in a similar configuration.
Authored by Dijana Jovanovic, Marina Marjanovic, Milos Antonijevic, Miodrag Zivkovic, Nebojsa Budimirovic, Nebojsa Bacanin
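As an illustration of the wrapper setup described above, here is a hedged sketch of a fitness function that scores a binary feature mask with a tiny extreme learning machine (ELM). The hidden-layer size, penalty weight, and data split are assumptions; any metaheuristic, such as the paper's hybrid sand cat swarm optimizer, could drive this fitness.

```python
# Sketch of a feature-selection fitness: accuracy of a minimal ELM on the masked columns.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def elm_accuracy(X_tr, y_tr, X_te, y_te, hidden=64, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_tr.shape[1], hidden))
    b = rng.normal(size=hidden)
    H_tr, H_te = np.tanh(X_tr @ W + b), np.tanh(X_te @ W + b)
    Y = np.eye(int(y_tr.max()) + 1)[y_tr]         # one-hot targets
    beta = np.linalg.pinv(H_tr) @ Y               # closed-form output weights
    return accuracy_score(y_te, (H_te @ beta).argmax(axis=1))

def fitness(mask, X, y, alpha=0.99):
    cols = mask.astype(bool)
    if not cols.any():
        return 0.0
    X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, test_size=0.3, random_state=0)
    acc = elm_accuracy(X_tr, y_tr, X_te, y_te)
    # reward accuracy, lightly penalize the number of selected features
    return alpha * acc + (1 - alpha) * (1 - cols.sum() / cols.size)
```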
Swarm robotics is a network-based multi-device system designed to achieve shared objectives in a synchronized way. This system is widely used in industries such as farming, manufacturing, and defense. In recent implementations, swarm robotics has been integrated with blockchain-based networks to enhance communication, security, and decentralized decision-making capabilities. As most current blockchain applications are based on complex consensus algorithms, every individual robot in the swarm network requires high computing power to run them; achieving consensus among the robots in the network is therefore a challenging task. This paper discusses the details of designing an effective consensus algorithm that meets the requirements of a swarm robotics network.
Authored by Sathishkumar Ranganathan, Muralindran Mariappan, Karthigayan Muthukaruppan
Classification problems are part of numerous real-life applications in fields such as security, medicine, agriculture, and more. Due to this wide range of applications, there is a constant need for more accurate and efficient methods. Besides more efficient and better classification algorithms, the optimal feature set is a significant factor for better classification accuracy. In general, more features can describe instances better, but besides capturing differences between instances of different classes, they can also capture many similarities that lead to wrong classification. Determining the optimal feature set can be considered a hard optimization problem for which different metaheuristics, such as swarm intelligence algorithms, can be used. In this paper, we propose an adaptation of a hybridized swarm intelligence (SI) algorithm for the feature selection problem. To test the quality of the proposed method, classification was done by the k-means algorithm and tested on 17 benchmark datasets from the UCI repository. The results are compared to similar approaches from the literature where SI algorithms were used for feature selection, which demonstrates the quality of the proposed hybridized SI method. The proposed method achieved better classification accuracy for 16 datasets, while simultaneously reducing the number of used features.
Authored by Eva Tuba, Adis Alihodzic, Una Tuba, Romana Hrosik, Milan Tuba
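The abstract above uses k-means to score candidate feature subsets. A hedged sketch of how a clustering algorithm can be used as a classifier (details assumed, not taken from the paper): fit one cluster per class and map each cluster to the majority training label it contains.

```python
# Sketch: k-means as the classifier that evaluates a candidate feature subset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

def kmeans_classify(X_tr, y_tr, X_te, n_classes):
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(X_tr)
    # assign each cluster the majority training label among its members
    cluster_label = np.array([
        np.bincount(y_tr[km.labels_ == c], minlength=n_classes).argmax()
        for c in range(n_classes)])
    return cluster_label[km.predict(X_te)]

# example: accuracy of a feature subset `mask` (boolean array over the columns of X)
# y_pred = kmeans_classify(X_tr[:, mask], y_tr, X_te[:, mask], len(np.unique(y_tr)))
# acc = accuracy_score(y_te, y_pred)
```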
The Internet of Things (IoT) is advancing technology by creating smart surroundings that make it easier for humans to do their work. This technological advancement not only improves human life and expands economic opportunities, but also allows intruders or attackers to discover and exploit numerous methods to circumvent the security of IoT networks. Hence, security and privacy are key concerns for IoT networks. It is vital to protect computer and IoT networks from many sorts of anomalies and attacks. Traditional intrusion detection systems (IDS) collect and employ large amounts of data with irrelevant and inappropriate attributes to train machine learning models, resulting in long detection times and a high rate of misclassification. This research presents an advanced approach to the design of an IDS for IoT networks based on the Particle Swarm Optimization (PSO) algorithm for feature selection, with the Extreme Gradient Boosting (XGB) model as the PSO fitness function. The classifier used in the intrusion detection process is Random Forest (RF). The IoTID20 dataset is used to evaluate the efficacy and robustness of the suggested strategy. The proposed system attains the following accuracy on the IoTID20 dataset for different levels of classification: 98% for binary classification and 83% for multiclass classification. The results indicate that the proposed framework effectively detects cyber threats and improves the security of IoT networks.
Authored by Asima Sarwar, Salva Hasan, Waseem Khan, Salman Ahmed, Safdar Marwat
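An illustrative sketch, under stated assumptions, of the pipeline described above: a binary PSO searches feature masks, each particle's fitness is the validation accuracy of an XGBoost model on the selected columns, and the final subset feeds a Random Forest. Swarm size, inertia/acceleration coefficients, and model settings are placeholders, not the paper's values.

```python
# Sketch: binary PSO feature selection with an XGBoost fitness and an RF final classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def subset_accuracy(mask, X_tr, y_tr, X_va, y_va, model):
    cols = mask.astype(bool)
    if not cols.any():
        return 0.0
    model.fit(X_tr[:, cols], y_tr)
    return accuracy_score(y_va, model.predict(X_va[:, cols]))

def binary_pso_fs(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)   # binary positions
    vel = rng.normal(scale=0.1, size=(n_particles, d))
    fit = lambda m: subset_accuracy(m, X_tr, y_tr, X_va, y_va,
                                    XGBClassifier(n_estimators=50, verbosity=0))
    pbest, pbest_f = pos.copy(), np.array([fit(p) for p in pos])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([fit(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[pbest_f.argmax()].copy()
    return g.astype(bool)

# final detection model on the selected features, as in the abstract:
# best_mask = binary_pso_fs(X_train, y_train)
# rf = RandomForestClassifier().fit(X_train[:, best_mask], y_train)
```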
Any decentralized, distributed network is susceptible to the Sybil attack, in which a malicious node masquerades as numerous different nodes, collectively referred to as Sybil nodes, causing the network to become unresponsive. Cloud computing environments are characterized by their loosely coupled nature, meaning that no node has comprehensive knowledge of the entire system. To prevent Sybil attacks in cloud computing systems and preserve the network's ability to function properly, it is necessary to detect them as soon as they occur. A Sybil attacker can construct multiple identities on a single physical device in order to launch a coordinated attack on the network, or can switch between identities to make detection more difficult, thereby undermining accountability throughout the network. This study surveys the various types of Sybil attacks that have been documented, including those occurring in peer-to-peer reputation systems, self-organizing networks, and social network systems. In addition, the approaches that have been proposed over time to reduce or eliminate such attacks, along with their potential risks, are thoroughly investigated.
Authored by Ravula Kumar, Srikar Konda, Ramesh Karnati, Ravi Kumar E., Narender Ravula
As a result of the inherent weaknesses of the wireless medium, ad hoc networks are susceptible to a broad variety of threats and attacks. As a direct consequence, intrusion detection, as well as security, privacy, and authentication in ad-hoc networks, has become a primary focus of current research. This body of research aims to identify the dangers posed by a variety of attacks that are often seen in wireless ad-hoc networks and to provide strategies to counteract those dangers. The black hole attack, wormhole attack, selective forwarding attack, Sybil attack, and denial-of-service attack are the specific topics covered in this thesis. In this paper, we describe a trust-based secure routing protocol with the goal of mitigating the interference of black hole nodes in the course of routing in mobile ad-hoc networks. The overall performance of the network is negatively impacted when there are black hole nodes on the route that routing takes. As a result, we have developed a routing protocol that reduces the likelihood that packets will be lost because of black hole nodes. This routing scheme has been subjected to experimental testing to guarantee that the most secure path is selected for the delivery of packets between a source and a destination. The invasion of wormholes into a wireless network results in the segmentation of the network as well as disorder in the routing. We therefore provide an effective approach for locating wormholes by using ordinal multi-dimensional scaling and round-trip time in wireless ad hoc networks with either sparse or dense topologies. Wormholes that are linked by both short-path and long-path wormhole links can be found using the given approach. This method is subjected to experimental testing to guarantee that the ad hoc network does not include any wormholes that go unnoticed. To fight against selective forwarding attacks in wireless ad-hoc networks, we have developed three different techniques. The first method is an incentive-based algorithm that makes use of a reward-punishment system to drive cooperation among nodes for the purpose of forwarding messages in crowded ad-hoc networks. A unique adversarial model has been developed by our team, within which three distinct types of nodes and the activities they participate in are specified. We have shown that the suggested incentive-based strategy prohibits nodes from adopting an individualistic behaviour, which ensures collaboration in the process of packet forwarding. To guarantee that intermediate nodes in resource-constrained ad-hoc networks accurately convey packets, the second approach proposes a game-theoretic model based on non-cooperative game theory. This game reaches a desired state of equilibrium, which assures that cooperation in multi-hop communication is feasible. In the third algorithm, we present a detection approach that locates malicious nodes in multi-hop hierarchical ad-hoc networks by employing binary search and control packets. We have shown that the cluster head is capable of accurately identifying the malicious node by analysing the sequences of packets that are dropped along the path leading from a source node to the cluster head. A lightweight symmetric encryption technique that uses Binary Playfair is presented here as a means of safeguarding the transport of data.
We demonstrate through experimentation that the suggested encryption method is efficient with regard to energy consumption, encryption time, and memory overhead. This lightweight encryption technique is used in clustered wireless ad-hoc networks to reduce the likelihood of a Sybil attack occurring in such networks.
Authored by Chethana C, Piyush Pareek, Victor de Albuquerque, Ashish Khanna, Deepak Gupta
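The third technique above localizes a packet-dropping node with binary search over control packets. A hedged sketch of that idea (our illustration, not the thesis' protocol): `probe(k)` is assumed to return True when a control packet acknowledged by the k-th node on the path makes it back to the cluster head, i.e. nodes up to index k forwarded it correctly.

```python
# Sketch: binary-search localization of the first dropping node on a known path.
def locate_dropper(path, probe):
    lo, hi = 0, len(path) - 1          # invariant: the first dropper lies in path[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):                 # packets survive up to and including path[mid]
            lo = mid + 1               # dropper is further along the path
        else:
            hi = mid                   # dropper is at or before path[mid]
    return path[lo]

# toy usage: the node at index 4 silently drops everything beyond itself
path = ["n1", "n2", "n3", "n4", "n5", "n6", "n7"]
bad = 4
print(locate_dropper(path, probe=lambda k: k < bad))   # -> "n5"
```

Only O(log n) probes are needed, which matches the control-packet overhead motivation in the abstract.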
Obfuscation refers to changing the structure of code in a way that hides its original semantics. These techniques are often used by application developers for code hardening, but it has been found that obfuscation techniques are also widely used by malware developers to hide the workflow and semantics of malicious code. Class encryption, code re-ordering, junk code insertion, and control flow modification are code obfuscation techniques in which the code of the application is changed. These techniques change the signature of the application and also affect systems that use sequences of instructions to detect the maliciousness of an application. In this paper, an opcode-sequence-based detection system is designed and tested against obfuscated samples. It has been found that the system works efficiently for the detection of non-obfuscated samples, but its performance is affected significantly by obfuscated samples. The study tests different code obfuscation schemes and reports the effect of each on a sequential opcode-based analytic system.
Authored by Saneeha Khalid, Faisal Hussain
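An illustrative sketch (the feature design is an assumption, not the paper's exact system) of why opcode-sequence features are sensitive to obfuscation: n-gram counts over the opcode stream change when junk instructions are inserted or code is re-ordered.

```python
# Sketch: opcode n-gram features and their disruption by junk-code insertion.
from collections import Counter

def opcode_ngrams(opcodes, n=3):
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

clean = ["push", "mov", "call", "add", "ret"]
obfuscated = ["push", "nop", "mov", "nop", "call", "add", "nop", "ret"]  # junk insertion

print(opcode_ngrams(clean))
print(opcode_ngrams(obfuscated))   # the shared 3-grams largely disappear
```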
This paper proposes a novel approach for privacy-preserving face recognition aimed at formally defining a trade-off optimization criterion between data privacy and algorithm accuracy. In our methodology, real-world face images are anonymized with Gaussian blurring for privacy preservation. The anonymized images are processed for face detection, face alignment, face representation, and face verification. The proposed methodology has been validated with a set of experiments on a well-known dataset and three face recognition classifiers. The results demonstrate the effectiveness of our approach in correctly verifying face images with different levels of privacy and resulting accuracy, and in maximizing privacy with the least negative impact on face detection and face verification accuracy.
Authored by Wisam Abbasi, Paolo Mori, Andrea Saracino, Valerio Frascolla
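A minimal sketch of the anonymization step described above (not the paper's pipeline; the detector and kernel size are assumptions): detected face regions are blurred with a Gaussian kernel whose size acts as the privacy level that is traded off against verification accuracy.

```python
# Sketch: Gaussian-blur anonymization of detected faces with OpenCV.
import cv2

def anonymize_faces(image_bgr, ksize=51, sigma=0):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    out = image_bgr.copy()
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w],
                                                 (ksize, ksize), sigma)
    return out

# blurred = anonymize_faces(cv2.imread("face.jpg"))
```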
Modern consumer electronic devices often provide intelligence services with deep neural networks. We have started migrating the computing locations of intelligence services from cloud servers (traditional AI systems) to the corresponding devices (on-device AI systems). On-device AI systems generally have the advantages of preserving privacy, removing network latency, and saving cloud costs. With the emergence of on-device AI systems having relatively low computing power, the inconsistent and varying hardware resources and capabilities pose difficulties. The authors' affiliation has started applying a stream pipeline framework, NNStreamer, to on-device AI systems, saving development costs and hardware resources and improving performance. We want to expand on-device AI services to more types of devices and applications, covering products of both the affiliation and second/third parties. We also want to make each AI service atomic, re-deployable, and shareable among connected devices of arbitrary vendors; as always happens, yet another requirement has therefore been introduced. This new requirement of "among-device AI" includes connectivity between AI pipelines so that they may share computing resources and hardware capabilities across a wide range of devices regardless of vendors and manufacturers. We propose extensions of the stream pipeline framework, NNStreamer, for on-device AI so that NNStreamer may provide among-device AI capability. This work is a Linux Foundation (LF AI & Data) open source project accepting contributions from the general public.
Authored by MyungJoo Ham, Sangjung Woo, Jaeyun Jung, Wook Song, Gichan Jang, Yongjoo Ahn, Hyoungjoo Ahn
Throughout history, technological evolution has generated less desired side effects with an impact on society. In the field of IT&C, there are ongoing discussions about the role of robots within the economy, and also about their impact on the labour market. In the case of digital media systems, we talk about misinformation, manipulation, fake news, etc. Issues related to the protection of citizens' lives in the face of technology began to be addressed more than 25 years ago. In addition to the many messages, such as "the citizen is at the center of concern" or "privacy must be respected", transmitted through various channels by different entities or companies in the ICT field, the EU has promoted a number of legislative and normative documents to protect citizens' rights and freedoms.
Authored by Doina Banciu, Carmen Cîrnu
Blockchain and artificial intelligence are two technologies that, when combined, can help each other realize their full potential. Blockchains can guarantee the accessibility of, and consistent access to, integrity-safeguarded big data sets from numerous domains, allowing AI systems to learn more effectively and thoroughly. Similarly, artificial intelligence (AI) can be used to offer new consensus processes, and hence new ways of engaging with blockchains. When it comes to sensitive data, such as corporate, healthcare, and financial data, various security and privacy problems arise that must be properly evaluated. Interaction with blockchains is vulnerable to data credibility checks, transactional data leakage, data protection rule compliance, on-chain data privacy, and malicious smart contracts. To solve these issues, new security and privacy-preserving technologies, either based on AI or used to defend AI-based blockchain data processing, are emerging to simplify the integration of these two cutting-edge technologies.
Authored by Ramiz Salama, Fadi Al-Turjman
With the development of artificial intelligence, the need for data sharing is becoming more and more urgent. However, the existing data sharing methods can no longer fully meet data sharing needs: privacy breaches, lack of motivation, and mutual distrust have become obstacles to data sharing. We design a privacy-preserving, decentralized data sharing method based on blockchain smart contracts, named PPDS. To protect data privacy, we transform the data sharing problem into a model sharing problem. This means that the data owner does not need to share the raw data directly, but rather the AI model trained with such data. The data requester and the data owner interact on the blockchain through a smart contract. The data owner trains the model with local data according to the requester's requirements. To assess model quality fairly, we set up several model evaluators that assess the validity of the model through voting. After the model is verified, the data owner who trained it receives a reward in return through a smart contract. Sharing the model avoids direct exposure of the raw data, and the reasonable incentive motivates the data owner to share the data. We describe the design and workflow of PPDS and analyze its security using formal verification technology; that is, we use Coloured Petri Nets (CPN) to build a formal model of our approach, proving its security through simulation execution and model checking. Finally, we demonstrate the effectiveness of PPDS by developing a prototype with a corresponding case application.
Authored by Xuesong Hai, Jing Liu
Nowadays, IoT networks and devices exist in our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising threats to cybersecurity, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and machine learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data are often desired to remain private. Federated Learning (FL) provides a potential solution: it enables collaborative training of an attack detection model among a set of federated nodes, while preserving privacy, as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model which can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and under different configurations of the participating nodes. We prove that the proposed model protects privacy without actually compromising performance: our model incurs a limited performance impact of only ∼7% lower testing accuracy compared to the baseline, while simultaneously guaranteeing security and applicability.
Authored by Zacharias Anastasakis, Konstantinos Psychogyios, Terpsi Velivassaki, Stavroula Bourou, Artemis Voulkidis, Dimitrios Skias, Antonis Gonos, Theodore Zahariadis
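A hedged sketch of the privacy-preserving aggregation idea in the abstract above (our simplification, not the paper's exact mechanism): each node's model update is norm-clipped and Gaussian noise calibrated by a noise multiplier is added before averaging, the standard recipe for differentially private federated averaging.

```python
# Sketch: differentially private federated averaging of flattened model updates.
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))   # clip to clip_norm
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise scale sigma = z * C / n
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(scale=sigma, size=avg.shape)

# updates = [np.random.randn(1000) for _ in range(10)]   # one flat vector per node
# global_delta = dp_federated_average(updates, clip_norm=1.0, noise_multiplier=1.2)
```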
Recent trends in the convergence of edge computing and artificial intelligence (AI) have led to a new paradigm of "edge intelligence", which is more vulnerable to attacks such as data and model poisoning and evasion attacks. This paper proposes a white-box poisoning attack against an online regression model in an edge intelligence environment, with the aim of preparing protection methods for the future. First, the new method selects data points with maximum loss from the original stream using two selection strategies; second, it pollutes these points with a gradient ascent strategy; finally, it injects the polluted points into the original stream sent to the target model, completing the attack process. We extensively evaluate the proposed attack on an open dataset, and the results demonstrate the effectiveness of the novel attack method and the real implications of poisoning attacks in a case study of an electric energy prediction application.
Authored by Yanxu Zhu, Hong Wen, Peng Zhang, Wen Han, Fan Sun, Jia Jia
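An illustrative sketch of the two attack steps named above, under simplifying assumptions (a plain linear model with squared loss stands in for the online regression model): select the streaming points with maximum loss, then push their feature values along the loss gradient before re-injecting them.

```python
# Sketch: maximum-loss selection followed by gradient ascent on the selected inputs.
import numpy as np

def poison_batch(X, y, w, n_poison=5, step=0.5, iters=10):
    loss = (X @ w - y) ** 2                        # per-point squared loss
    idx = np.argsort(loss)[-n_poison:]             # selection: maximum-loss points
    Xp = X[idx].copy()
    for _ in range(iters):
        grad = 2 * (Xp @ w - y[idx])[:, None] * w  # d(loss)/d(x) for each point
        Xp += step * grad                          # gradient ascent on the inputs
    return idx, Xp

# rng = np.random.default_rng(0)
# X, w = rng.normal(size=(200, 4)), np.array([1.0, -2.0, 0.5, 3.0])
# y = X @ w + rng.normal(scale=0.1, size=200)
# idx, X_poisoned = poison_batch(X, y, w)   # re-inject X_poisoned into the stream
```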
Machine Learning (ML) and Artificial Intelligence (AI) techniques are widely adopted in the telecommunication industry, especially to automate beyond-5G networks. Federated Learning (FL) recently emerged as a distributed ML approach that enables localized model training to keep data decentralized and ensure data privacy. In this paper, we identify the applicability of FL for securing future networks and its limitations due to vulnerability to poisoning attacks. First, we investigate the shortcomings of state-of-the-art security algorithms for FL and perform an attack to circumvent the FoolsGold algorithm, which is known as one of the most promising defense techniques currently available. The attack is launched by adding intelligent noise to the poisonous model updates. Then we propose a more sophisticated defense strategy, a threshold-based clustering mechanism, to complement FoolsGold. Moreover, we provide a comprehensive analysis of the impact of the attack scenario and the performance of the defense mechanism.
Authored by Yushan Siriwardhana, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
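A hedged sketch of a threshold-based grouping of client updates (our own simplification, not the authors' exact mechanism): updates whose pairwise cosine similarity exceeds a threshold are treated as a suspicious group, the collusion signature that FoolsGold-style defenses also exploit, and are excluded from aggregation.

```python
# Sketch: threshold-based clustering of client updates by cosine similarity.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def filter_updates(updates, sim_threshold=0.9):
    n = len(updates)
    suspicious = set()
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(updates[i], updates[j]) > sim_threshold:
                suspicious.update((i, j))      # near-duplicate updates look colluding
    kept = [u for k, u in enumerate(updates) if k not in suspicious]
    aggregate = np.mean(kept, axis=0) if kept else np.mean(updates, axis=0)
    return aggregate, suspicious
```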
Cyber attacks keep states, companies, and individuals on alert, draining precious resources including time, money, and reputation. Attackers thereby seem to have a first-mover advantage, leading to a dynamic defender-attacker game. Automated approaches taking advantage of cyber threat intelligence on past attacks have the potential to empower security professionals and hence increase cyber security. Consequently, there has been a lot of research on automated approaches in cyber risk management, including work on predictive attack algorithms and threat hunting. Combining data on countermeasures from the "MITRE Detection, Denial, and Disruption Framework Empowering Network Defense" (D3FEND) and adversarial data from "MITRE Adversarial Tactics, Techniques and Common Knowledge" (ATT&CK), this work aims at developing methods that enable highly precise and efficient automatic incident response. We introduce the Attack Incident Responder, a methodology working with simple heuristics to find the most efficient sets of countermeasures for hypothesized attacks. By doing so, the work contributes to narrowing the attacker's first-mover advantage. Experimental results show promisingly high average precision in predicting effective defenses when using the methodology. In addition, we compare the proposed defense measures against a static set of defensive techniques offering robust security against observed attacks. Furthermore, we combine the approach of automated incident response with an approach for threat hunting, enabling full automation of security operation centers. By this means, we define a threshold in the precision of attack hypothesis generation that must be met for predictive defense algorithms to outperform the baseline. The calculated threshold can be used to evaluate attack hypothesis generation algorithms. The presented methodology for automated incident response may be a valuable support for information security professionals. Last, the work elaborates on the combination of static base defense with adaptive incident response for generating a bio-inspired artificial immune system for computer networks.
Authored by Florian Kaiser, Leon Andris, Tim Tennig, Jonas Iser, Marcus Wiens, Frank Schultmann
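One simple heuristic of the kind the abstract above mentions is a greedy cover: repeatedly pick the countermeasure that covers the most still-uncovered techniques in the hypothesized attack. The sketch below is our illustration with an assumed data model and illustrative placeholder identifiers, not the schema of the MITRE frameworks.

```python
# Sketch: greedy selection of an efficient countermeasure set for a hypothesized attack.
def select_countermeasures(hypothesized_techniques, countermeasure_coverage):
    uncovered = set(hypothesized_techniques)
    chosen = []
    while uncovered:
        best = max(countermeasure_coverage,
                   key=lambda c: len(uncovered & countermeasure_coverage[c]))
        gain = uncovered & countermeasure_coverage[best]
        if not gain:
            break                               # remaining techniques cannot be covered
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# toy example with illustrative identifiers
coverage = {"DEF-A": {"TEC-1", "TEC-2"}, "DEF-B": {"TEC-3"}, "DEF-C": {"TEC-3", "TEC-4"}}
print(select_countermeasures({"TEC-1", "TEC-3", "TEC-4"}, coverage))
```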
Effective information security risk management is essential for the survival of any business that depends on IT. In this paper we present an efficient and effective solution for finding the best parameters for managing cyber risks using artificial intelligence. A genetic algorithm (GA) is used, as it can provide the required optimization and intelligence. Results show that the GA is effective at finding the best parameters and minimizing the risk.
Authored by Osama Hosam
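A minimal sketch of a genetic algorithm searching for the parameter vector that minimizes a cyber-risk score; the risk function, parameter ranges, and GA settings below are placeholders, not the paper's configuration.

```python
# Sketch: selection, one-point crossover, and Gaussian mutation over risk parameters.
import numpy as np

def genetic_minimize(risk, dim, pop=30, gens=100, bounds=(0.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([risk(x) for x in P])
        order = np.argsort(fitness)                      # lower risk is better
        parents = P[order[: pop // 2]]                   # selection: keep the best half
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, dim) if dim > 1 else 0
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child += rng.normal(scale=0.05, size=dim)    # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        P = np.vstack([parents, kids])
    return P[np.argmin([risk(x) for x in P])]

# toy risk: distance from an (assumed) ideal configuration
# best = genetic_minimize(lambda x: np.sum((x - 0.3) ** 2), dim=5)
```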
Traditional power consumption management systems are not sufficiently reliable, and thus smart grid technology has been introduced to reduce excess power wastage. In the context of smart grid systems, network communication is the term used for developing the network between the users and the load profiles. Cloud computing and clustering are also employed for efficient power management. On this basis, this research sets out to identify wireless network communication systems to monitor and control smart grid power consumption. Primary survey-based research was carried out with 62 individuals who worked in smart grid systems and tracked, monitored, and controlled power consumption using WSN technology. The survey was conducted online, where the respondents provided their opinions via a Google survey form. The responses were collected and analyzed in Microsoft Excel. Results show that hybrid use of cloud and edge computing technology is more advantageous than either computing approach individually. Respondents agreed that deep learning techniques will be more beneficial for analyzing load profiles than machine learning techniques. Lastly, the study explains the advantages and challenges of using smart grid network communication systems. Apart from the findings from the primary research, secondary journal articles were also reviewed to reinforce the research findings.
Authored by Santosh Kumar, N Kumar, B.T. Geetha, M. Sangeetha, Kalyan Chakravarthi, Vikas Tripathi
This article describes an analysis of the key technologies currently applied to improve the quality, efficiency, safety, and sustainability of Smart Grid systems, and identifies the tools to optimize them and possible gaps in this area, considering the different energy sources, distributed generation, microgrids, and energy consumption and production capacity. The research was conducted with a qualitative methodological approach, in which the literature review covered studies published from 2019 to 2022 in five (5) databases, following the study selection process recommended by the PRISMA guide. Of the five hundred and four (504) publications identified, ten (10) studies provided insight into the technological trends that are shaping this scenario, namely: Internet of Things, Big Data, Edge Computing, Artificial Intelligence, and Blockchain. It is concluded that to obtain the best performance within Smart Grids, maximum synergy among these technologies is necessary, since this union will enable the application of advanced smart digital technology solutions to energy generation and distribution operations, thus making it possible to reach a new level of optimization.
Authored by Ivonne Núñez, Elia Cano, Carlos Rovetto, Karina Ojo-Gonzalez, Andrzej Smolarz, Juan Saldana-Barrios
Existing power grid risk assessment relies mainly on decision-level data fusion, which is highly subjective and draws on little effective information. To address this, this paper proposes a risk assessment method for microgrid systems based on random matrix theory. First, the time series data of multiple sensors are arranged into a high-dimensional matrix according to the different parameter types and nodes; then, based on random matrix theory and sliding-time-window processing, the average spectral radius sequence is calculated to characterize the state of the microgrid system. Finally, an example is given to verify the effectiveness of the method.
Authored by Xi Cheng, Yafeng Liang, Jianhong Qiu, XiaoLi Zhao, Lihong Ma
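A simplified stand-in for the statistic described above (the exact matrix construction is an assumption on our part): slide a square window over the sensor-by-time data matrix, standardize each row, scale by 1/sqrt(N), and track the mean magnitude of the eigenvalues as the average-spectral-radius sequence; abrupt changes in this sequence would flag abnormal microgrid states.

```python
# Sketch: average spectral radius over sliding windows of a multi-sensor data matrix.
import numpy as np

def average_spectral_radius_series(data, step=1):
    """data: (n_sensors, n_samples) matrix built from multiple sensors/parameters."""
    n = data.shape[0]                     # square windows of width n_sensors
    series = []
    for start in range(0, data.shape[1] - n + 1, step):
        win = data[:, start:start + n].astype(float)
        win = (win - win.mean(axis=1, keepdims=True)) / (win.std(axis=1, keepdims=True) + 1e-12)
        eigvals = np.linalg.eigvals(win / np.sqrt(n))
        series.append(np.abs(eigvals).mean())
    return np.array(series)

# rng = np.random.default_rng(0)
# msr = average_spectral_radius_series(rng.normal(size=(20, 200)))
```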
With the proliferation of data in Internet-related applications, cyber security incidents have increased manyfold. Energy management, one of the smart city layers, has also been experiencing cyberattacks. Furthermore, the Distributed Energy Resources (DER), which depend on different controllers to provide energy to the main physical smart grid of a smart city, are prone to cyberattacks. The increased cyber-attacks on DER systems are mainly due to their dependency on digital communication and controls, as the number of devices owned and controlled by consumers and third parties grows. This paper analyzes the major cyber security and privacy challenges that might damage or compromise the DER and related controllers in smart cities. These challenges highlight that security and privacy in the Internet of Things (IoT), big data, artificial intelligence, and the smart grid, which are the building blocks of a smart city, must be addressed in the DER sector. It is observed that the security and privacy challenges in smart cities can be solved through a distributed framework, by identifying and classifying stakeholders, by using appropriate models, and by incorporating fault-tolerance techniques.
Authored by Tarik Himdi, Mohammed Ishaque, Muhammed Ikram
The Automatic Identification System (AIS) plays a leading role in maritime navigation, traffic control, and local and global maritime situational awareness. Today, reliable and secure AIS operation is threatened by probable cyber attacks such as the imitation of ghost vessels, false distress or security messages, or fake virtual aids-to-navigation. We propose a method for ensuring the authentication and integrity of AIS messages based on the use of a Message Authentication Code scheme and digital watermarking (WM) technology to organize an additional tag transmission channel. The method provides full compatibility with the existing AIS functionality.
Authored by Oleksandr Shyshkin
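A minimal sketch of the Message Authentication Code part of the scheme above (key management and the watermark embedding itself are out of scope, and the truncation length is an assumption): a truncated HMAC tag is computed over the AIS payload and would then be carried in the additional watermark channel.

```python
# Sketch: truncated HMAC tag generation and verification for an AIS payload.
import hmac, hashlib

def ais_tag(key: bytes, ais_payload: bytes, tag_bits: int = 32) -> bytes:
    digest = hmac.new(key, ais_payload, hashlib.sha256).digest()
    return digest[: tag_bits // 8]                 # truncated tag to fit the WM channel

def verify(key: bytes, ais_payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(ais_tag(key, ais_payload, len(tag) * 8), tag)

# tag = ais_tag(b"shared-secret", b"example AIS payload")
# assert verify(b"shared-secret", b"example AIS payload", tag)
```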
To fulfill the different requirements of various services, the smart grid typically uses the 5G network slicing technique to split the physical network into multiple virtual logical networks. By doing so, end users in the smart grid can select an appropriate slice that is suitable for their services. Privacy is of vital significance in network slice selection, since both the end user and the network entities fear that their sensitive slicing features will be leaked to an adversary. At the same time, the smart grid contains many low-power users who are not suited to complex security schemes. Therefore, both security and efficiency are basic requirements for 5G slice selection schemes. Considering both, we propose a 5G slice selection security scheme based on matching degree estimation, called SS-MDE. In SS-MDE, a set of random numbers is used to hide the feature information of the end user and the Access and Mobility Management Function (AMF), which provides privacy protection for the exchanged slicing features. Moreover, the best matching slice is selected by calculating the Euclidean distance between two slices. Since the algorithms used in SS-MDE include only several simple mathematical operations, which are quite lightweight, SS-MDE can achieve high efficiency. At the same time, since third-party attackers cannot extract the slicing information, SS-MDE fulfills the security requirements. Experimental results show that the proposed scheme is feasible in real-world applications.
Authored by Wei Wang, Jiming Yao, Weiping Shao, Yangzhou Xu, Shaowu Peng
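A hedged sketch of the matching step only (the random-number masking that actually protects the exchanged features is omitted, and the feature encoding is our assumption): the slice whose feature vector lies at the smallest Euclidean distance from the requested features is selected.

```python
# Sketch: Euclidean-distance matching between requested and offered slice features.
import numpy as np

def select_slice(user_features, slice_features):
    """user_features: (d,) vector; slice_features: dict name -> (d,) vector."""
    dists = {name: np.linalg.norm(np.asarray(v) - np.asarray(user_features))
             for name, v in slice_features.items()}
    return min(dists, key=dists.get), dists

# e.g. features = (latency requirement, bandwidth, reliability), all normalized to [0, 1]
# best, _ = select_slice([0.1, 0.9, 0.99],
#                        {"URLLC": [0.05, 0.3, 0.999], "eMBB": [0.5, 0.95, 0.99]})
```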
5G has significantly facilitated the development of attractive applications such as autonomous driving and telemedicine due to its lower latency, higher data rates, and enormous connectivity. However, there are still some security and privacy issues in 5G, such as network slicing privacy and the flexibility and efficiency of network slice selection. For the smart grid scenario, this paper proposes a 5G slice selection security scheme based on the Pohlig-Hellman algorithm, which protects the slice selection privacy data exchanged between User i (Ui) and the Access and Mobility Management Function (AMF), so that the data are not exposed to third-party attackers. Compared with other schemes, the scheme proposed in this paper is simple to deploy, has low computational overhead, follows a simple process, and does not require the help of a PKI system. The security analysis also verifies that the scheme can accurately protect the slice selection privacy data between Ui and the AMF.
Authored by Jiming Yao, Peng Wu, Duanyun Chen, Wei Wang, Youxu Fang
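A toy sketch of the Pohlig-Hellman exponentiation cipher underlying the scheme above (the prime below is small and for illustration only; real deployments need large primes and proper message encoding): its commutativity is what allows two parties to blind and compare values without exposing them.

```python
# Sketch: Pohlig-Hellman exponentiation cipher, c = m^e mod p, m = c^d mod p.
def ph_keypair(p, e):
    # requires p prime and gcd(e, p - 1) == 1; d is the decryption exponent
    d = pow(e, -1, p - 1)
    return e, d

def ph_encrypt(m, e, p):
    return pow(m, e, p)

def ph_decrypt(c, d, p):
    return pow(c, d, p)

p = 2**61 - 1                      # a Mersenne prime, fine for a toy example
e, d = ph_keypair(p, 65537)
m = 123456789
assert ph_decrypt(ph_encrypt(m, e, p), d, p) == m
# commutativity: encrypting under two parties' keys in either order gives the same value
e2, d2 = ph_keypair(p, 257)
assert ph_encrypt(ph_encrypt(m, e, p), e2, p) == ph_encrypt(ph_encrypt(m, e2, p), e, p)
```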