Today, the smart grid is the carrier of the new energy technology revolution and a critical stage in the development of grid intelligence. Smart grid operation and maintenance generate large volumes of heterogeneous, polymorphic data, that is, big data. This paper analyzes power big data prediction technology for smart grid applications and proposes practical application strategies. It provides an in-depth analysis of the relationship between cloud computing, key big data technologies, and the smart grid, and gives an overview of the key technologies of electric power big data.
Authored by Guang-ye Li, Jia-xin Zhang, Xin Wen, Lang-Ming Xu, Ying Yuan
5G network slicing plays a key role in smart grid services. Existing authentication schemes for 5G slicing in smart grids incur high computational costs, making them time-consuming, and they do not fully address the security of authentication. For the 5G smart grid application scenario, this paper proposes an identity-based lightweight secondary authentication scheme. In contrast to other well-known methods, in the proposed protocol the grid user Ui and the grid server authenticate each other's identities, preventing illegal users from impersonating legitimate ones. They complete authentication without resorting to complex bilinear pairing computations, so the computational overhead is small, and without transmitting the original identity, so the scheme provides anonymous authentication. The authentication process requires no infrastructure such as a PKI, so deployment is simple. Experimental results show that the protocol is feasible in practical applications.
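As an illustrative sketch only (the paper's identity-based scheme differs in its cryptographic details), the idea of lightweight mutual authentication without bilinear pairings can be shown with a challenge-response over an identity-derived symmetric key; the key-derivation step, identity string, and function names here are all hypothetical:

```python
import hmac
import hashlib
import os

def derive_key(master_secret: bytes, identity: str) -> bytes:
    # Hypothetical identity-based key derivation: both the user and the
    # grid server can reconstruct the same key from the user's identity.
    return hmac.new(master_secret, identity.encode(), hashlib.sha256).digest()

def respond(key: bytes, challenge: bytes) -> bytes:
    # Prove knowledge of the key without ever transmitting the identity,
    # which is what gives the exchange its anonymity property.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(respond(key, challenge), response)

# One direction of the mutual authentication; the reverse direction
# (user challenging the server) would mirror these same steps.
master = os.urandom(32)
k_user = derive_key(master, "grid-user-Ui")   # held by the user
k_srv = derive_key(master, "grid-user-Ui")    # re-derived by the server
nonce = os.urandom(16)                        # server's fresh challenge
assert verify(k_srv, nonce, respond(k_user, nonce))
```

Only hashing and HMAC are used, which is the source of the low computational overhead compared with pairing-based protocols.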
Authored by Yue Yu, Jiming Yao, Wei Wang, Lanxin Qiu, Yangzhou Xu
With the continuous development of the Internet, artificial intelligence, 5G, and other technologies, various issues have begun to receive attention, among which network security is now one of the key research directions for scholars at home and abroad. Building on traditional Internet technology, this paper develops a security identification system at the physical layer of the network, which can effectively identify security problems in network infrastructure equipment and resolve them at the physical layer. Compared with conventional security identification systems developed at the network layer, a system developed at the physical layer provides protection at the physical origin and thus addresses part of the network security problem at its root. The experimental results show that the security identification system can identify basic network security problems very effectively; protection is carried out from the physical device, and the retransmission symbol error rates of the CQ-PNC algorithm and the ML algorithm in the experiment are 110 and 102, respectively, with the latter showing a lower error rate and better protection.
Authored by Yunge Huang
In recent years, blackout accidents have shown that the causes of power failures lie not only in the power network but also in the cyber network. Addressing cyber-network faults in cyber-physical power systems, and combining the structural and functional attributes of the cyber network, this paper defines the comprehensive criticality of information nodes. A vulnerability evaluation of the IEEE 39-bus system shows that failures of information nodes with high comprehensive criticality cause greater load loss to the system. The simulation results show that the comprehensive criticality index can effectively identify the key nodes of the cyber network.
Authored by Duanyun Chen, Zewen Chen, Jie Li, Jidong Liu
Software vulnerabilities threaten the security of computer systems, and more and more vulnerabilities have recently been discovered and disclosed. For each detected vulnerability, analysts study its characteristics and apply a vulnerability scoring system to determine its severity level, so as to decide which vulnerabilities should be dealt with first. In recent years, description-based methods have been used to predict the severity level of a vulnerability. However, traditional text processing methods grasp only the superficial meaning of the text and ignore important contextual information. This paper therefore proposes an innovative method, BERT-CNN, which combines BERT's task-specific layer with a CNN to capture important contextual information in the text. First, BERT processes the vulnerability description and other information, including Access Gained, Attack Origin, and Authentication Required, to generate feature vectors. These feature vectors and the corresponding severity levels are then fed into a CNN to learn its parameters. Finally, the fine-tuned BERT and the trained CNN are used to predict the severity level of a vulnerability. The results show that our method outperforms the state-of-the-art method with an F1-score of 91.31%.
Authored by Xuming Ni, Jianxin Zheng, Yu Guo, Xu Jin, Ling Li
Due to their simplicity of implementation and high threat level, SQL injection attacks are one of the oldest, most prevalent, and most destructive types of security attacks on Web-based information systems. With the continuous development and maturity of artificial intelligence technology, using AI to detect SQL injection has become a general trend. The selection of the sample set is the deciding factor in whether AI algorithms achieve good results, but datasets tagged with specific category labels are difficult to obtain. This paper focuses on data augmentation, learning similar feature representations from the original data to improve the accuracy of classification models. Deep convolutional generative adversarial networks combined with genetic algorithms are applied to the field of Web vulnerability attacks, aiming to solve the problem of an insufficient number of SQL injection samples. The method is also expected to apply to sample generation for other types of vulnerability attacks.
Authored by Dongzhe Lu, Jinlong Fei, Long Liu, Zecun Li
Today, the Internet is the most common and important medium for exchanging information. With the progress of the Web and information technology, digital media has become one of the most popular and well-known data transfer tools. This digital information, including text, images, audio, and video, is transferred over public networks. Most of this digital media takes the form of images, which are a significant component of applications such as chat, news, websites, e-commerce, email, and e-books. Such content still faces various challenges, including copyright protection, modification, and authentication. Cryptography, steganography, and embedding techniques are widely used to secure digital data. This paper presents a hybrid model of LSB steganography and Advanced Encryption Standard (AES) cryptography to enhance the security of digital images and text, making them very difficult for an unauthorized person to break. The security level of the secret information is measured in terms of MSE and PSNR; better hiding requires low MSE and high PSNR values.
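As a minimal sketch of the LSB half of such a hybrid scheme (the AES encryption step would be applied to the secret beforehand with a crypto library, and the function names here are hypothetical), each bit of the payload replaces the least-significant bit of one cover byte, which keeps the visual distortion, and hence the MSE, low:

```python
def embed_lsb(cover: bytes, secret: bytes) -> bytes:
    # Flatten the secret into bits, most-significant bit first.
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        # Clear the lowest bit of the cover byte, then set it to the
        # payload bit; each byte changes by at most 1 in value.
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    # Collect the low bits back and reassemble them into bytes.
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
```

In the hybrid model the extracted bytes would then be AES-decrypted, so an attacker who detects the hidden channel still cannot read the secret.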
Authored by Manish Kumar, Aman Soni, Ajay Shekhawat, Akash Rawat
Protection of private and sensitive information is the most pressing issue for security providers in surveillance video. Providing privacy and secrecy in surveillance video without reducing its effectiveness in detecting violent activities is therefore a challenging task. This paper proposes a steganography-based algorithm that hides private information inside surveillance video without affecting its accuracy in criminal activity detection. The surveillance video is preprocessed using the Tunable Q-factor Wavelet Transform (TQWT), secret data is hidden using the Discrete Wavelet Transform (DWT), and after the payload is added, criminal activity detection is conducted with the same accuracy as on the original video. The UCF-Crime dataset is used to validate the proposed framework. Features are extracted and, after feature selection, fed to a Temporal Convolutional Network (TCN) for detection. Performance is compared with state-of-the-art methods, showing that the application of steganography does not affect the detection rate while preserving the perceptual quality of the surveillance video.
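To illustrate where wavelet-domain hiding gets its coefficients (a simplification: the paper uses TQWT and a 2-D DWT on video frames, while this hypothetical sketch shows only a one-level 1-D Haar decomposition), the transform splits a signal into approximation and detail coefficients, and payload bits are typically embedded in the perceptually less important detail band:

```python
def haar_dwt(signal):
    # One-level Haar transform over pairs of samples:
    # approximation = pairwise average, detail = pairwise half-difference.
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    # Perfect reconstruction: each (a, d) pair restores two samples.
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

Because the inverse transform reconstructs the signal exactly, small perturbations made to the detail coefficients survive into the stego video with little perceptual impact.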
Authored by Sonali Rout, Ramesh Mohapatra
Network Intrusion Detection Systems (IDSs) have been used to increase the level of network security for many years. The main purpose of such systems is to detect and block malicious activity in the network traffic. Researchers have been improving the performance of IDS technology for decades by applying various machine-learning techniques. From the perspective of academia, obtaining a quality dataset (i.e., a sufficient amount of captured network packets containing both malicious and normal traffic) to support machine learning approaches has always been a challenge. Many datasets are publicly available for research purposes, including NSL-KDD, KDDCUP 99, CICIDS 2017 and UNSW-NB15. However, these datasets are becoming obsolete over time and may no longer be adequate or valid for modeling and validating IDSs against state-of-the-art attack techniques. As attack techniques are continuously evolving, datasets used to develop and test IDSs also need to be kept up to date. Proven performance of an IDS tested on old attack patterns does not necessarily mean it will perform well against new patterns. Moreover, existing datasets may lack certain data fields or attributes necessary to analyse some of the new attack techniques. In this paper, we argue that academia needs up-to-date, high-quality datasets. We compare publicly available datasets and suggest a way to provide up-to-date, high-quality datasets for researchers and the security industry. The proposed solution is to utilize the network traffic captured from the Locked Shields exercise, one of the world’s largest live-fire international cyber defence exercises, held annually by the NATO CCDCOE. During this three-day exercise, red team members consisting of dozens of white-hat hackers selected by the governments of over 20 participating countries attempt to infiltrate the networks of over 20 blue teams, who are tasked to defend a fictional country called Berylia.
After the exercise, network packets captured from each blue team’s network are handed over to each team. However, the countries are not willing to disclose the packet capture (PCAP) files to the public since these files contain specific information that could reveal how a particular nation might react to certain types of cyberattacks. To overcome this problem, we propose to create a dedicated virtual team, capture all the traffic from this team’s network, and disclose it to the public so that academia can use it for unclassified research and studies. In this way, the organizers of Locked Shields can effectively contribute to the advancement of future artificial intelligence (AI) enabled security solutions by providing annual datasets of up-to-date attack patterns.
Authored by Maj. Halisdemir, Hacer Karacan, Mauno Pihelgas, Toomas Lepik, Sungbaek Cho
Security is a key concern across the world and a common thread for all critical sectors; it is a backbone that is absolutely necessary for personal safety. The most important requirements of security systems for individuals are protection against theft and trespassing. CCTV cameras are often employed for security purposes, but their biggest disadvantages are their high cost and the need for a trustworthy individual to monitor them. As a result, a simple, cost-effective, and secure solution has been devised. The smart door lock is built on the Raspberry Pi: it captures a picture through the Pi Camera module, detects a visitor's face, and then allows entry. The local binary pattern approach is used for face recognition. Remote picture viewing and notifications on a mobile device are possible through an IoT-based application. The proposed system may be installed at front doors, lockers, offices, and other locations where security is required. It achieves an accuracy of 89%, with an average processing time of 20 seconds for the overall process.
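The local binary pattern (LBP) step mentioned above can be sketched as follows: each pixel is encoded by comparing its 8 neighbours to the centre value, producing an 8-bit texture code whose histogram over the face image forms the recognition feature. This is a generic LBP sketch, not the paper's exact implementation, and the neighbour ordering chosen here is one common convention:

```python
def lbp_code(patch):
    # patch: 3x3 list of grayscale values. Each neighbour that is
    # greater than or equal to the centre contributes a 1-bit,
    # walked clockwise starting from the top-left corner.
    centre = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (row, col) in enumerate(order):
        if patch[row][col] >= centre:
            code |= 1 << bit
    return code
```

A full recogniser would slide this over the image, histogram the codes per region, and match histograms against enrolled faces, which keeps the method cheap enough for a Raspberry Pi.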
Authored by Om Doshi, Hitesh Bendale, Aarti Chavan, Shraddha More
Phishing is a method of online fraud in which attackers seek access to computer systems for monetary benefit or personal gain. The attackers pose as legitimate entities to obtain users' sensitive information. Phishing has been a significant concern over the past few years: firms are recording an increase in phishing attacks aimed primarily at their intellectual property and employees' sensitive data. As a result, these attacks force firms to spend more on information security, in both technology-centric and human-centric approaches. With the advances in cyber-security over the last ten years, many techniques have evolved to detect phishing-related activity in websites and emails. This study focuses on the latest techniques used to detect phishing attacks, including the use of visual selection features, machine learning (ML), and artificial intelligence (AI). New strategies for identifying phishing attacks are evolving, but limited standardized knowledge on phishing identification and mitigation is available from user awareness training. This study therefore also examines the role of security-awareness campaigns in minimizing the impact of phishing attacks. Many approaches exist to train users against these attacks, such as persona-centred training, anti-phishing techniques, visual discrimination training, and the use of spam filters, robust firewalls and infrastructure, dynamic technical defense mechanisms, and third-party certified software. The purpose of this paper is therefore to carry out a systematic analysis of the literature in prominent scientific journals to assess the state of knowledge on the identification and prevention of phishing. Forty-three journal articles on phishing detection and prevention through awareness training, published from 2011 to 2020, were reviewed.
This timely systematic review also focuses on the gaps identified in the selected primary studies and future research directions in this area.
Authored by Kanchan Patil, Sai Arra
Phishing has become a prominent method of data theft among hackers, and it continues to develop. In recent years, many strategies have been developed to identify phishing website attempts, particularly using machine learning. However, the algorithms and classification criteria that have been used differ widely and need to be compared. This paper provides a detailed comparison and evaluation of the performance of several machine learning algorithms across multiple datasets. Two phishing website datasets were used for the experiments: the Phishing Websites Dataset from UCI (2016) and the Phishing Websites Dataset from Mendeley (2018). Because these datasets include different types of class labels, the compared algorithms can be applied in a variety of situations. The tests showed that Random Forest outperformed the other classification methods, with an accuracy of 88.92% on the UCI dataset and 97.50% on the Mendeley dataset.
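Classifiers like those compared above are typically fed hand-crafted lexical features extracted from each URL. As a hedged sketch (the actual feature sets of the UCI and Mendeley datasets differ and are richer than this), a few commonly used features can be computed with the standard library; the function name and feature choices here are illustrative assumptions:

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    # Lexical cues often associated with phishing URLs: unusual length,
    # many subdomains, an '@' that hides the real host, a raw IP host,
    # and absence of HTTPS.
    host = urlparse(url).netloc
    return {
        "length": len(url),
        "num_dots": url.count("."),
        "has_at": "@" in url,
        "has_ip_host": bool(
            re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}(:\d+)?", host)
        ),
        "uses_https": url.startswith("https://"),
    }
```

Rows of such features, paired with spam/legitimate labels, are exactly what a Random Forest or similar classifier is trained on.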
Authored by Wendy Sarasjati, Supriadi Rustad, Purwanto, Heru Santoso, Muljono, Abdul Syukur, Fauzi Rafrastara, De Setiadi
Virtual Private Networks (VPNs) have become a communication medium for accessing information and exchanging data. Many organizations require an intranet or VPN for data access, access to servers, and sharing different types of data among their offices and users. A secure VPN environment is essential for organizations to protect their information, IT infrastructure, and assets, and every organization needs to protect its network environment from malicious cyber threats. This paper presents a comprehensive approach to network security management, including significant strategies and protective measures for managing a VPN in an organization. It presents procedures and countermeasures for preserving the security of a VPN environment and discusses several identified security strategies and measures for VPNs. It also outlines network security and policy management for implementation, covering security measures in firewalls, visualized security profiles, and the role of sandboxing in securing the network. In addition, several identified security controls that strengthen organizational security and are useful for designing a secure, efficient, and scalable VPN environment are discussed.
Authored by Srinivasa Pedapudi, Nagalakshmi Vadlamani
In this study, the nature of human trust in communication robots was experimentally investigated in comparison with trust in other people and in artificial intelligence (AI) systems. The results showed that trust in robots is basically similar to trust in AI systems in a calculation task where a single solution can be obtained, and partly similar to trust in other people in an emotion recognition task where multiple interpretations are acceptable. This study will contribute to designing smooth interaction between people and communication robots.
Authored by Akihiro Maehigashi
Global traffic data are proliferating, including in Indonesia. The number of internet users in Indonesia reached 205 million in January 2022, meaning that 73.7% of Indonesia’s population has used the internet. The median internet speed for mobile phones in Indonesia is 15.82 Mbps, while the median internet connection speed for Wi-Fi is 20.13 Mbps. As widely predicted, real-time traffic such as multimedia streaming accounts for more than 79% of internet traffic. This condition is a severe challenge for the internet network, which is required to improve the Quality of Experience (QoE) for mobile users by reducing delay, data loss, and network costs. However, IP-based networks are no longer efficient at managing traffic. Named Data Networking (NDN) is a promising technology for building an agile communication model that reduces delays through a distributed and adaptive name-based data delivery approach. NDN replaces the ‘where’ paradigm with the concept of ‘what’: user requests are no longer directed to a specific IP address but to specific content. As a result, content requests can be served not only by a specific server but also by the device closest to the requested data. The NDN router has a Content Store (CS) to cache data, significantly reducing delays and improving the network’s Quality of Service (QoS). Motivated by this, in 2019 we began intensive research toward a national flagship product, an NDN router with functions different from those of ordinary IP routers. NDN routers have cache, forwarding, and routing functions that affect data security on name-based networks. Designing scalable NDN routers is a new challenge, as NDN requires fast hierarchical name-based lookups, per-packet data field state updates, and large-scale forwarding tables.
Our research team has conducted NDN research through simulation, emulation, and testbed approaches using virtual machines to obtain the best NDN router design before building a prototype. Results from 2019 show that the performance of NDN-based networks is better than that of existing IP-based networks. The tests were carried out under various scenarios on the Indonesian network topology using an NDN simulator, MATLAB, Mininet-NDN, and a testbed based on virtual machines. Various network performance parameters, such as delay, throughput, packet loss, resource utilization, header overhead, packet transmission, round-trip time, and cache hit ratio, showed the best results compared with IP-based networks. In addition, an open-source NDN testbed is free, and flexible topology creation has also been demonstrated. This testbed includes all the functions needed to run an NDN network, and the server resources used are sufficient to run a reasonably complex topology. However, bugs are still found on the testbed, and some features still need improvement. Further exploration of the NDN testbed will run more new strategy algorithms and add artificial intelligence (AI) to NDN functions. Using AI in caching and forwarding strategies can make the system more intelligent and precise in making decisions according to network conditions. This will be a step toward the development of NDN router products by the Bandung Institute of Technology (ITB), Indonesia.
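The Content Store behaviour described above can be sketched as a name-keyed cache. This is a deliberately simplified illustration (a real NDN CS also handles Interest matching, freshness, and prefix lookups); the class and method names are hypothetical, and LRU is just one of the replacement strategies such research compares:

```python
from collections import OrderedDict

class ContentStore:
    """Minimal name-keyed LRU cache, simplifying an NDN router's CS."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def insert(self, name: str, data: bytes):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            # Evict the least recently used content object.
            self.store.popitem(last=False)

    def lookup(self, name: str):
        if name in self.store:
            self.store.move_to_end(name)  # cache hit refreshes recency
            return self.store[name]
        return None  # cache miss: the Interest would be forwarded upstream
```

The cache hit ratio measured in the experiments is simply the fraction of `lookup` calls that return data instead of `None`; an AI-driven strategy would replace the LRU eviction rule with a learned policy.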
Authored by Nana Syambas, Tutun Juhana, Hendrawan, Eueung Mulyana, Ian Edward, Hamonangan Situmorang, Ratna Mayasari, Ridha Negara, Leanna Yovita, Tody Wibowo, Syaiful Ahdan, Galih Nurkahfi, Ade Nurhayati, Hafiz Mulya, Mochamad Budiana
Nowadays, online cloud storage networks can be accessed by third parties. Businesses that host large data centers buy or rent storage space for individuals who need to store their data. According to customer needs, data hub operators virtualize the data and expose cloud storage for storing it; physically, the resources may be spread across numerous servers. Data resilience is a prerequisite for all storage methods, and for routine operations in a distributed data center, distributed removable codes are appropriate. A secure cloud cache solution, AES-UCODR, is proposed to decrease I/O overheads for multi-block updates in proxy re-encryption systems. Its competence is evaluated on a real-world finance-sector use case.
Authored by Devaki K, Leena L
AbuSaif is a human-like social robot designed and built at the UAE University's Artificial Intelligence and Robotics Lab. Like most existing social robots, AbuSaif was initially operated by a classical personal computer (PC), so most of its functionalities were limited by the capacity of that mounted PC. To overcome this, we propose a web-based platform that takes advantage of clustering in cloud computing. The proposed platform will increase the operational capability and functionality of AbuSaif, especially for running artificial intelligence algorithms. We believe the robot will become more intelligent and autonomous using the proposed web platform.
Authored by Mohammed Abduljabbar, Fady Alnajjar
The dynamic state of networks presents a challenge for the deployment of distributed applications and protocols: ad-hoc schedules in the updating phase can lead to considerable ambiguity and problems. By separating the control and data planes and centralizing control, Software-Defined Networking (SDN) offers novel opportunities and remedies for these issues. However, a software-based centralized architecture for distributed environments introduces significant challenges, and security is a crucial issue in SDN. This paper presents a deep study of the state of the art in security challenges and solutions for the SDN paradigm. The study led us to propose a dynamic approach to efficiently detect security violations and incidents caused by network updates, including forwarding loops, forwarding black holes, link congestion, and network policy violations. Our solution relies on an intelligent approach based on machine learning and artificial intelligence algorithms.
Authored by Amina SAHBI, Faouzi JAIDI, Adel BOUHOULA
Nowadays, IoT makes many aspects of life easier, but because of weak protection and the growing number of connections, managing IoT has become more difficult. Software-Defined Networking (SDN) was introduced to manage network flows and has great capability for automatic and dynamic distribution. However, a centralized SDN architecture opens the door to harmful attacks on the controller. Therefore, to reduce such attacks in real time, securing an SDN-enabled IoT infrastructure with fog networks is preferred. Network-enforcement decisions are authorized at the virtual switches and executed through the SDN network. Moreover, SDN switches are generally powerful machines that can simultaneously serve as fog nodes, so SDN is a good fit for IoT fog networks. In addition, a centralized software-based channel protection management solution allows the necessary cryptographic keys to be distributed dynamically, in order to establish Datagram Transport Layer Security (DTLS) tunnels between IoT devices when demanded by the cyber-security framework. In an extensive deployment of this combination, CPU usage between devices is observed to be 30% and latencies are in the millisecond range, demonstrating the feasibility of the system with low delay. Compared with traditional SDN, energy consumption is observed to be reduced by more than 90%.
Authored by Venkata Mohan, Sarangam Kodati, V. Krishna
In this paper, we establish a unified deep learning-based spam filtering method. The proposed method uses the message byte histogram as a unified representation for all message types (text, images, or any other format). A deep convolutional neural network (CNN) extracts high-level features from this representation, and a fully connected neural network performs the classification using the extracted CNN features. We validate our method on several open-source text-based and image-based spam datasets and obtain an accuracy higher than 94% on all of them.
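The unified representation itself is straightforward to compute: every message, whatever its format, is treated as a byte stream and summarized as a 256-bin histogram. A minimal sketch (the normalization choice here is an assumption; the paper's exact preprocessing may differ):

```python
def byte_histogram(message: bytes):
    # Count occurrences of each of the 256 possible byte values, then
    # normalise by message length so messages of different sizes are
    # comparable. This works identically for text, images, or any format.
    counts = [0] * 256
    for b in message:
        counts[b] += 1
    total = len(message) or 1  # avoid division by zero on empty input
    return [c / total for c in counts]
```

The resulting fixed-length vector is what the CNN consumes, which is why a single network can handle text-based and image-based spam alike.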
Authored by Yassine Belkhouche
Aim: To perform spam detection in social media using the Support Vector Machine (SVM) algorithm and compare its accuracy with the Artificial Neural Network (ANN) algorithm. The dataset contains 5,489 messages, including spam and ham; 80% of the messages are used for training and 20% for testing. Materials and Methods: Classification for spam detection in social media was performed by the ANN algorithm (N=10) and its accuracy compared with the SVM algorithm (N=10), with G power 80% and an alpha value of 0.05. Results: The accuracy obtained was 98.2% for the ANN algorithm and 96.2% for the SVM algorithm, with a significance value of 0.749. Conclusion: The accuracy of detecting spam using the ANN algorithm appears to be slightly better than that of the SVM algorithm.
Authored by Grandhi Svadasu, M. Adimoolam
Evolving, new-age cybersecurity threats have put the information security industry on high alert. These modern cyberattacks involve malware, phishing, artificial intelligence, machine learning, and cryptocurrency. Our research highlights the importance and role of software quality assurance in raising security standards, so that systems are not only protected but also handle cyber-attacks better. From this series of cyber-attacks, we conclude that implementing code review and penetration testing will protect the integrity, availability, and confidentiality of data. We gathered the user requirements of an application and developed a proper understanding of both its functional and non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software was still vulnerable to potential issues. We therefore propose two additional stages in the software quality assurance process to address this problem. After implementing this framework, we observed that the maximum number of potential threats were fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems against predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, or banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game-theoretic approach; it also logs and scores the actions of an AI model to detect adversarial inputs and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP, which is a parallel process, RFSP is user-driven and its actions are sequential. Data are collected for experimentation, and the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
Based on a campus wireless IPv6 network, and using Wi-Fi contactless sensing and positioning technology together with action recognition, this paper designs a new campus security early-warning system. Its distinguishing feature is that no new monitoring equipment needs to be added: wherever the wireless IPv6 network provides coverage, personnel counting and body-action status display can be realized. The system effectively supplements monitoring in places that video surveillance could not previously cover and can help prevent campus violence and other emergencies.
Authored by Feng Sha, Ying Wei
Kerberos is a network authentication protocol that communicates tickets from one network to another in a secured manner. It encrypts messages and provides mutual authentication. Kerberos uses symmetric cryptography, with keys issued by the Key Distribution Center (KDC), which forms the center for securing messages, to strengthen data confidentiality. A drawback is that the keys are exposed at both ends. In the proposed approach, Kerberos is secured using HMAC (Hash-based Message Authentication Code), which authenticates messages for integrity and origin: it verifies the data, the e-mail address, and message integrity. The computer network is secured by authenticating the user or client, and transmitted and delivered messages are protected by authenticating them. Kerberos authentication verifies a host or user based on tickets, i.e., credentials, in a secure way. Kerberos provides fast authentication through its unique ticketing system and supports authentication delegation efficiently. Tickets are encrypted to pass information securely.
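The HMAC hardening described above can be sketched with the standard library. This is an illustrative message-integrity layer, not the paper's full Kerberos integration; the function names are hypothetical, and in practice the key would come from the Kerberos session key:

```python
import hmac
import hashlib

TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag

def tag_message(key: bytes, message: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so the receiver can verify both the
    # integrity of the message and that it came from a key holder.
    return message + hmac.new(key, message, hashlib.sha256).digest()

def check_message(key: bytes, tagged: bytes) -> bytes:
    message, tag = tagged[:-TAG_LEN], tagged[-TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # compare_digest is constant-time, preventing timing side channels.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message altered or forged")
    return message
```

Any tampering with a ticket or message in transit changes the expected tag, so the receiver rejects it before acting on the contents.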
Authored by R. Krishnamoorthy, S. Arun, N. Sujitha, K.M Vijayalakshmi, S. Karthiga, R. Thiagarajan