The enhancement of big data security in cloud computing has become inevitable due to factors such as the volume, velocity, variety, veracity, and value of big data. The growth of big data and cloud technologies has exposed a wide range of vulnerabilities in organizational business applications, leading to attacks such as denial-of-service, injection, and phishing, among others. Deploying big data in cloud computing environments is a rapidly growing practice that significantly benefits organizations, providing demand-driven access to computational services, the illusion of infinite computing capacity, and support for scaling up, down, and out on demand. To secure cloud computing for big data processing, a variety of encryption techniques such as RSA and AES can be applied; however, several vulnerabilities remain during processing. This paper explores the enhancement of big data security in cloud computing using the RSA algorithm to improve the deployment and processing of the variety, volume, veracity, velocity, and value of data. The contribution of the paper is threefold. First, we explore the current challenges and vulnerabilities in securing big data in cloud computing and how the RSA algorithm can address them. Second, we implement the RSA algorithm in a cloud computing environment on the AWS platform to secure big data and to improve the performance and scalability of RSA for big data security, and we compare RSA to other cryptographic algorithms in terms of its ability to enhance big data security in cloud computing. Finally, we recommend control mechanisms to improve security on the cloud platform. The results show that the RSA algorithm can be used to improve cloud security in a network environment.
Authored by Abel Yeboah-Ofori, Iman Darvishi, Azeez Opeyemi
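The RSA workflow the abstract builds on can be sketched in a few lines. This is a textbook toy with deliberately tiny primes, purely for illustration; the parameter choices are not from the paper, and production systems use vetted libraries, 2048-bit or larger keys, and padding such as OAEP.

```python
# Textbook RSA with small demo primes -- illustrative only; real
# deployments use vetted libraries, large keys, and proper padding.
from math import gcd

def generate_keys(p: int, q: int):
    """Derive a toy public/private key pair from two primes."""
    n = p * q
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    e = 65537                        # common public exponent
    while gcd(e, phi) != 1:          # ensure e is invertible mod phi
        e += 2
    d = pow(e, -1, phi)              # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def encrypt(m: int, pub) -> int:
    e, n = pub
    return pow(m, e, n)              # c = m^e mod n

def decrypt(c: int, priv) -> int:
    d, n = priv
    return pow(c, d, n)              # m = c^d mod n

pub, priv = generate_keys(61, 53)    # toy primes, n = 3233
assert decrypt(encrypt(42, pub), priv) == 42
```

In a cloud setting, the public key would be distributed to data producers while the private key stays with the consumer, so data is protected in transit and at rest.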
The security factor impacting the Internet-of-Things (IoT) conceptual framework has recently received significant attention from the research community. To this end, a number of surveys covering a variety of IoT-centric topics, such as intrusion detection systems, threat modeling, and emerging technologies, have been proposed. Security is not a problem that can be handled in isolation: each layer of an IoT solution must be designed and built with security in mind. IoT security goes beyond safeguarding the network and data to include attacks that could be directed at human health or even life. We discuss the IoT's security challenges in this study. We start by reviewing some fundamental security concepts and IoT security requirements. We then examine IoT market statistics and IoT security statistics to see where the field is headed and how one's situation can be improved by implementing appropriate security measures.
Authored by Swati Rajput, R. Umamageswari, Rajesh Singh, Lalit Thakur, C.P Sanjay, Kalyan Chakravarthi
Cloud computing has become an important technology of our time. It has drawn attention due to its availability, dynamicity, elasticity, and pay-per-use pricing mechanism, which have led many organizations to shift onto the cloud platform, leveraging the cloud to reduce administrative and backup overhead. Cloud computing offers a great deal of versatility. Quantum technology, on the other hand, is advancing at a breakneck pace: experts anticipate that within the next decade, powerful quantum computers will be available, which has had, and will continue to have, a substantial impact on fields such as cryptography and medical research. Outsourcing business applications and informational data to the cloud raises privacy and security concerns, which have become crucial to cloud installation and services adoption. To address the current security weaknesses, researchers and affected organizations have offered several security techniques in the literature. The literature also gives a thorough examination of cloud computing security and privacy concerns.
Authored by Rajvir Shah
IoT shares data with other things, such as applications, networked devices, and industrial equipment. With a large-scale, complex architecture design composed of numerous 'things', the scalability and reliability of the various models stand out. When these advantages are undermined by weak security, problems occur continuously. Since IoT devices provide services in close proximity to users, they face many users, a variety of hacking methods, and environments vulnerable to hacking.
Authored by Daesoo Choi
Internet of Things (IoT) is encroaching on every aspect of our lives. The exponential increase in connected devices has massively increased the attack surface in IoT. Unprotected IoT devices are not only targets for attackers but are also used as attack-generating elements. Distributed Denial of Service (DDoS) attacks generated using geographically distributed, unprotected IoT devices as a botnet pose a serious threat to IoT. Large-scale DDoS attacks may arise through multiple low-rate DDoS attacks from geographically distributed, compromised IoT devices. This kind of DDoS attack is difficult to detect with existing security mechanisms because of the large-scale distributed nature of IoT. The proposed method provides a solution to this problem using fog computing, whose fog nodes are close to edge IoT devices. The distributed fog nodes detect the low-rate DDoS attacks from IoT devices before they lead to a large-scale DDoS attack. The effectiveness analysis of the proposed method proves that real-time detection is practical. The experimental results show that low-rate DDoS attacks are detected at a faster rate in fog nodes; hence, large-scale DDoS attacks are detected at an early stage, protecting against massive attacks.
Authored by S Prabavathy, I.Ravi Reddy
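The abstract does not specify the detection logic at the fog nodes, so the following is only a generic sketch of how a fog node might flag low-rate attack sources using a per-source sliding window of packet timestamps; the class name, window size, and threshold are all assumptions invented for the example.

```python
# Hypothetical sliding-window rate monitor a fog node might run to flag
# low-rate DDoS sources before they aggregate into a large-scale attack.
from collections import defaultdict, deque

class FogRateMonitor:
    def __init__(self, window_s: float = 10.0, max_pkts: int = 50):
        self.window_s = window_s            # observation window in seconds
        self.max_pkts = max_pkts            # packets tolerated per window
        self.history = defaultdict(deque)   # source id -> packet timestamps
        self.flagged = set()                # sources flagged as attackers

    def observe(self, src: str, ts: float) -> bool:
        """Record one packet; return True if src is now flagged."""
        q = self.history[src]
        q.append(ts)
        while q and ts - q[0] > self.window_s:   # drop stale timestamps
            q.popleft()
        if len(q) > self.max_pkts:
            self.flagged.add(src)
        return src in self.flagged
```

A fleet of such monitors, one per fog node, could report flagged sources upstream so the aggregate low-rate streams are cut off before they combine into a volumetric attack.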
Many contemporary organisations are using cloud computing applications in their business operations to gain competitive advantages over competitors; cloud computing also helps promote the flexibility of business operations. It involves the delivery of different computing resources, including data storage, servers, databases, analytics, software, networking, and other data applications, from data centres over the internet. In this era of data breaches, cloud computing provides security protocols to protect sensitive transaction data and confidential information and helps ensure that third parties do not tamper with the data. Cloud computing also provides contemporary organisations with various competitive advantages, efficiency, and a platform for innovation. Theoretical frameworks are used in the literature review section to determine the important roles of cloud computing in effective data and security management in organisations. The research work also justifies that a qualitative methodology is suitable for meeting the developed research objectives. A secondary data analysis approach has been adopted in this study to carry out the investigation and meet the developed objectives. From the findings, a few challenges associated with cloud computing systems have been identified. Recommendations are offered at the end of the study to help future researchers overcome the identified challenges.
Authored by Lusaka Bhattacharyya, Supriya Purohit, Endang Fatmawati, D Sunil, Zhanar Toktakynovna, G.V. Sriramakrishnan
With billions of devices already connected to the network's edge, the Internet of Things (IoT) is shaping the future of pervasive computing. Nonetheless, IoT applications still cannot escape the need for the computing resources available at the fog layer. This becomes challenging since fog nodes are not necessarily secure or reliable, which widens the IoT threat surface even further. Moreover, the security risk appetite of heterogeneous IoT applications in different domains or deployment contexts should not be assessed in the same way. To respond to this challenge, this paper proposes a new approach to optimize the allocation of secure and reliable fog computing resources among IoT applications with varying security risk levels. First, the security and reliability levels of fog nodes are quantitatively evaluated, and a security risk assessment methodology is defined for IoT services. Then, an online, incentive-compatible mechanism is designed to allocate secure fog resources to high-risk IoT offloading requests. Compared to the offline Vickrey auction, the proposed mechanism is computationally efficient and yields an acceptable approximation of the social welfare of IoT devices, making it possible to attenuate security risk within the edge network.
Authored by Talal Halabi, Adel Abusitta, Glaucio Carvalho, Benjamin Fung
As a result of this new computing design, edge computing can process data rapidly and effectively close to the source, avoiding network resource and latency constraints. By shifting computing power to the network edge, edge computing decreases the load on cloud service centers while also reducing the time required for users to input data. The advantages of edge computing for data-intensive services, in particular, could be obscured if access latency becomes a bottleneck. Edge computing also raises a number of challenges, such as security concerns, data incompleteness, and hefty up-front and ongoing expenses. The worldwide mobile communications sector is now shifting toward 5G technology, and this unprecedented attention to edge computing has come about because 5G is one of the primary enabling technologies for its large-scale deployment. Edge computing privacy has been a major concern since the technology's inception, limiting its adoption and advancement. As the capabilities of edge computing have evolved, so have the security issues arising from these developments, as well as the public demand for privacy protection. The lack of trust amongst IoT devices is exacerbated by the inherent security concerns and attacks that plague IoT edge devices. A cognitive trust management system is proposed to reduce this malicious activity by maintaining the trustworthiness of each appliance and managing the service-level belief and Quality of Service (QoS). Improved packet delivery ratio and jitter in cognitive trust management systems based on QoS parameters show promise for spotting potentially harmful nodes in edge computing networks.
Authored by D. Ganesh, K. Suresh, Sunil Kumar, K. Balaji, Sreedhar Burada
The big data platform based on cloud computing realizes the storage, analysis, and processing of massive data and provides users with more efficient, accurate, and intelligent Internet services. Combined with the characteristics of a college teaching-resource sharing platform based on the cloud computing model, a multi-faceted security defense strategy for the platform is studied in terms of security management, security inspection, and technical means. In the detection module, the optimization of the support vector machine is realized, the detection period is determined, DDoS traffic characteristics are extracted, and a source-ID blacklist is established. In the defense module, the triggering of the defense mechanism, the construction of the forwarder's forwarding queue, and the reallocation of forwarder forwarding capability are realized.
Authored by Zhiyi Xing
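As a rough illustration of the source-ID blacklist step described above, a detector could aggregate per-source traffic over a detection period and blacklist heavy senders; the function name and threshold below are invented for the example and are not taken from the paper.

```python
# Sketch of a source-ID blacklist builder: sources whose aggregate packet
# count in a detection period exceeds a threshold are blacklisted so the
# forwarder can drop their traffic. The threshold is an assumed value.
from collections import Counter
from typing import Iterable, Tuple, Set

def build_blacklist(flow_records: Iterable[Tuple[str, int]],
                    threshold: int = 1000) -> Set[str]:
    """flow_records yields (source_id, packet_count) samples per period."""
    totals = Counter()
    for src, pkts in flow_records:
        totals[src] += pkts          # accumulate traffic per source ID
    return {src for src, total in totals.items() if total > threshold}
```

In the paper's pipeline, the SVM-based detector would supply the suspicious flow records, and the blacklist would feed the defense module's forwarding-queue construction.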
The innovation introduced by connectivity brings about significant changes in the industrial environment, leading to the fourth industrial revolution, known as Industry 4.0. However, the integration of and connectivity between industrial systems have significantly increased the risks and cyberattack surfaces. Nowadays, virtualization is being added to the security field to provide maximum protection against harmful attacks at minimum cost. Combining paradigms such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) can improve virtualization performance through openness (unified control of heterogeneous hardware and software resources), flexibility (remote management and rapid response to changing demands), and scalability (a faster cycle of innovative service deployment). The present paper proposes a Virtualized Security for Industry 4.0 (ViSI4.0) framework, based on both SDN and Network Security Function Virtualisation (NSFV), to prevent attacks on Cyber-Physical Systems (CPS). Since industrial devices are limited in memory and processing, vNSFs are deployed as Docker containers. We conducted experiments to evaluate the performance of IIoT applications when using virtualized security services. Results showed that many real-time IIoT applications remain within their latency tolerance range; however, the additional delays introduced by virtualization have an impact on IIoT applications with very strict delay requirements.
Authored by Intissar Jamai, Lamia Ben Azzouz, Leila Saidane
This paper addresses the issues of fault tolerance (FT) and intrusion detection (ID) in the Software-Defined Networking (SDN) environment. We design an integrated model that combines the FT-Manager as an FT mechanism and an ID-Manager as an ID technique to collaboratively detect and mitigate threats in the SDN. The ID-Manager employs a machine learning (ML) technique to identify anomalous traffic accurately and effectively. Both techniques in the integrated model leverage the controller-switches communication for real-time network statistics collection. While the full implementation of the framework is yet to be realized, experimental evaluations have been conducted to identify the most suitable ML algorithm for the ID-Manager to classify network traffic, using a benchmarking dataset and various performance metrics. The principal component analysis method was utilized for feature engineering optimization, and the results indicate that the Random Forest (RF) classifier outperforms other algorithms with 99.9% accuracy, precision, and recall. Based on these findings, the paper recommends RF as the ideal choice for the ID design in the integrated model. We also stress the significance and potential benefits of the integrated model in sustaining SDN network security and dependability.
Authored by Bassey Isong, Thupae Ratanang, Naison Gasela, Adnan Abu-Mahfouz
With the proliferation of data in Internet-related applications, cyber security incidents have increased manyfold. Energy management, which is one of the smart city layers, has also been experiencing cyberattacks. Furthermore, the Distributed Energy Resources (DER) that depend on different controllers to provide energy to the main physical smart grid of a smart city are prone to cyberattacks. The increase in cyberattacks on DER systems is mainly due to their dependency on digital communication and controls, as the number of devices owned and controlled by consumers and third parties grows. This paper analyzes the major cyber security and privacy challenges that might damage or compromise the DER and related controllers in smart cities. These challenges highlight that security and privacy in the Internet of Things (IoT), big data, artificial intelligence, and the smart grid, which are the building blocks of a smart city, must be addressed in the DER sector. It is observed that the security and privacy challenges in smart cities can be addressed through a distributed framework, by identifying and classifying stakeholders, by using appropriate models, and by incorporating fault-tolerance techniques.
Authored by Tarik Himdi, Mohammed Ishaque, Muhammed Ikram
Aiming at security issues such as the data leakage and tampering faced by experimental data sharing, this work studies secure data sharing under multiple security mechanisms, such as hybrid encryption and leak-resistant secure storage on the blockchain, as well as strategies for identifying and recovering from experimental data tampering based on an improved Practical Byzantine Fault Tolerance (PBFT) consensus algorithm. An integrated scheme for the secure storage, sharing, and tamper-resistant recovery of test data is proposed to address the contradiction between the security and the sharing of sensitive data, providing support for the secure application of blockchain in experimental data management.
Authored by Lin Shaofeng, Zhang Yang, Zhou Yao, Ni Lin
Envisioned to be the next-generation Internet, the metaverse faces far more security challenges due to its large-scale, distributed, and decentralized nature. While traditional third-party security solutions retain certain limitations, such as scalability and a Single Point of Failure (SPoF), numerous wearable Augmented/Virtual Reality (AR/VR) devices with increasing computational capacity can contribute underused resources to protect the metaverse. Realizing the potential of a Collaborative Intrusion Detection System (CIDS) in the metaverse context, we propose MetaCIDS, a blockchain-based Federated Learning (FL) framework that allows metaverse users to: (i) collaboratively train an adaptable CIDS model based on their collected local data with privacy protection; (ii) utilize the FL model to detect metaverse intrusions using the locally observed network traffic; and (iii) submit verifiable intrusion alerts through blockchain transactions to obtain token-based rewards. Security analysis shows that MetaCIDS can tolerate up to 33% malicious trainers during the training of FL models, while the verifiability of alerts offers resistance to Distributed Denial of Service (DDoS) attacks. Moreover, experimental results illustrate the efficiency and feasibility of MetaCIDS.
Authored by Vu Truong, Vu Nguyen, Long Le
Delay Tolerant Network (DTN) is a network model designed for special environments. It is intended for challenging network conditions with high latency, bandwidth constraints, and unstable data transmission, and it plays an important role in extremely demanding settings such as disaster rescue, maritime communication, and remote areas. Currently, research on DTN mainly focuses on innovative routing protocols, with limited research on security issues and solutions. In response to the above problems, this paper analyzes and compares the security problems faced by delay-tolerant networks and the corresponding solutions and security schemes.
Authored by Jingwen Su, Xiangyu Bai, Kexin Zhou
Cloud computing (CC) is vulnerable to existing information technology attacks, since it extends and utilizes information technology infrastructure, applications, and typical operating systems. In this manuscript, an Enhanced Capsule Generative Adversarial Network (ECGAN) with a blockchain-based Proof of Authority consensus procedure fostering an intrusion detection (ID) system is proposed for enhancing cyber security in CC. The data are collected via the NSL-KDD benchmark dataset. The input data are fed to the proposed Z-score normalization process to eliminate redundancy, including missing values. The preprocessing output is fed to feature selection, during which the optimal features are extracted on the basis of univariate ensemble feature selection (UEFS). Based on the optimal features, the data are classified as normal or anomalous utilizing the enhanced capsule generative adversarial network. Subsequently, a blockchain-based Proof of Authority (PoA) consensus process is proposed for improving the cyber security of the data in the cloud computing environment. The proposed ECGAN-BC-POA-IDS method is implemented in Python and the performance metrics are calculated. The proposed approach attains 33.7%, 25.7%, and 21.4% improved accuracy, 24.6%, 35.6%, and 38.9% lower attack detection time, and 23.8%, 18.9%, and 15.78% lower delay than the existing methods, namely Artificial Neural Network (ANN) with a blockchain framework, Integrated Architecture with Byzantine Fault Tolerance consensus, and Blockchain Random Neural Network (RNN-BC), respectively.
Authored by Ravi Kanth, Prem Jacob
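The Z-score normalization step mentioned in the abstract can be sketched as follows; the handling of missing values (dropping incomplete rows) is an assumption for illustration, not necessarily the paper's exact procedure.

```python
# Minimal Z-score normalization pass: drop rows with missing values,
# then rescale each feature column to zero mean and unit variance.
from statistics import mean, stdev

def zscore_normalize(rows):
    clean = [r for r in rows if None not in r]       # drop missing values
    cols = list(zip(*clean))                         # column-wise view
    mus = [mean(c) for c in cols]
    sds = [stdev(c) or 1.0 for c in cols]            # guard constant columns
    return [[(x - mu) / sd for x, mu, sd in zip(r, mus, sds)]
            for r in clean]
```

Applied to raw NSL-KDD-style feature rows, this puts all features on a common scale before feature selection and classification.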
Network intrusion detection technology has been developing for more than ten years, but because network intrusions are complex and variable, it is impossible to pin down the function of network intrusion behaviour in advance. Combined with research on intrusion detection technology for cluster systems, network security intrusion detection and mass alarming are realized. Method: This article starts with the intrusion detection system, introducing its classification and workflow. The structure and working principle of an intrusion detection system based on protocol analysis technology are analysed in detail. Results: With the help of the existing network intrusion detection system in the network laboratory, a SYN flood attack was successfully detected, which verified the flexibility, accuracy, and high reliability of the protocol analysis technology. Conclusion: The high-performance cluster-computing platform designed in this paper is already operational. Future work will focus on strengthening the functions of the cluster-computing platform, enhancing stability, and improving and optimizing the fault tolerance mechanism.
Authored by Feng Li, Fei Shu, Mingxuan Li, Bin Wang
Container-based virtualization has gained momentum over the past few years thanks to its lightweight nature and support for agility. However, its appealing features come at the price of a reduced isolation level compared to the traditional host-based virtualization techniques, exposing workloads to various faults, such as co-residency attacks like container escape. In this work, we propose to leverage the automated management capabilities of containerized environments to derive a Fault and Intrusion Tolerance (FIT) framework based on error detection-recovery and fault treatment. Namely, we aim at deriving a specification-based error detection mechanism at the host level to systematically and formally capture security state errors indicating breaches potentially caused by malicious containers. Although the paper focuses on security side use cases, results are logically extendable to accidental faults. Our aim is to immunize the target environments against accidental and malicious faults and preserve their core dependability and security properties.
Authored by Taous Madi, Paulo Esteves-Verissimo
The open and shared environment makes it unavoidable to face data attacks in the context of the energy internet. Tolerance to data intrusion is of utmost importance for the security and stability of the energy internet. Existing methods for data intrusion tolerance suffer from insufficient dynamic adaptability and challenges in determining tolerance levels. To address these issues, this paper introduces a data intrusion tolerance model based on game theory. A Nash equilibrium is established by analyzing the gains and losses of both attackers and defenders through game theory. Finally, the simulation results conducted on the IEEE 14-bus node system illustrate that the model we propose offers guidance for decision-making within the energy internet, enabling the utilization of game theory to determine optimal intrusion tolerance strategies.
Authored by Zhanwang Zhu, Yiming Yuan, Song Deng
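The pure-strategy side of the attacker/defender game described above can be illustrated with a small payoff-matrix search; the payoff values and function name below are invented for the example, and the paper's actual model (with its Nash computation on the IEEE 14-bus system) is considerably richer.

```python
# Toy attacker/defender game: enumerate pure-strategy Nash equilibria by
# checking that neither player can gain by deviating unilaterally.
def pure_nash(att_payoff, def_payoff):
    n, m = len(att_payoff), len(att_payoff[0])
    eqs = []
    for i in range(n):          # attacker strategy (row)
        for j in range(m):      # defender strategy (column)
            best_att = all(att_payoff[i][j] >= att_payoff[k][j]
                           for k in range(n))
            best_def = all(def_payoff[i][j] >= def_payoff[i][l]
                           for l in range(m))
            if best_att and best_def:
                eqs.append((i, j))
    return eqs

# Hypothetical payoffs: rows = attacker {attack, abstain},
# columns = defender {low tolerance, high tolerance}.
att = [[2, 0], [1, 1]]
dfn = [[0, 2], [1, 1]]
print(pure_nash(att, dfn))      # the equilibrium cell(s)
```

In the intrusion tolerance setting, the equilibrium cell tells the defender which tolerance level is a stable best response to a rational attacker.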
Malware, or software designed with harmful intent, is an ever-evolving threat that can have drastic effects on both individuals and institutions. Neural network malware classification systems are key tools for combating these threats but are vulnerable to adversarial machine learning attacks. These attacks perturb input data to cause misclassification, bypassing protective systems. Existing defenses often rely on enhancing the training process, thereby increasing the model's robustness to these perturbations, which is quantified using verification. While training improvements are necessary, we propose focusing on the verification process used to evaluate improvements to training. As such, we present a case study that evaluates a novel verification domain that will help to ensure tangible safeguards against adversaries and provide a more reliable means of evaluating the robustness and effectiveness of anti-malware systems. To do so, we describe malware classification and two types of common malware datasets (feature and image datasets), demonstrate the certified robustness accuracy of malware classifiers using the Neural Network Verification (NNV) and Neural Network Enumeration (nnenum) tools, and outline the challenges and future considerations necessary for the improvement and refinement of the verification of malware classification. By evaluating this novel domain as a case study, we hope to increase its visibility, encourage further research and scrutiny, and ultimately enhance the resilience of digital systems against malicious attacks.
Authored by Preston Robinette, Diego Lopez, Serena Serbinowska, Kevin Leach, Taylor Johnson
Mobile malware is malicious code specifically designed to target mobile devices to perform multiple types of fraud. The number of attacks reported each day is constantly increasing and is causing an impact not only at the end-user level but also at the network operator level. Malware like FluBot contributes to identity theft and data loss but also enables remote Command & Control (C2) operations, which can instrument infected devices to conduct Distributed Denial of Service (DDoS) attacks. Current solutions installed on mobile devices are not effective, as the end user can ignore security warnings or install malicious software. This article designs and evaluates MONDEO-Tactics5G, a multistage botnet detection mechanism that does not require software installation on end-user devices, together with tactics for 5G network operators to manage infected devices. Our evaluation demonstrates high accuracy in detecting FluBot malware and shows that the different adaptation strategies reduce the risk of DDoS while minimising the impact on clients' satisfaction by avoiding the disruption of established sessions.
Authored by Bruno Sousa, Duarte Dias, Nuno Antunes, Javier Cámara, Ryan Wagner, Bradley Schmerl, David Garlan, Pedro Fidalgo
This work focuses on the problem of hyper-parameter tuning (HPT) for robust (i.e., adversarially trained) models, shedding light on the new challenges and opportunities arising during the HPT process for robust models. To this end, we conduct an extensive experimental study based on three popular deep models and explore exhaustively nine (discretized) hyper-parameters (HPs), two fidelity dimensions, and two attack bounds, for a total of 19208 configurations (corresponding to 50 thousand GPU hours). Through this study, we show that the complexity of the HPT problem is further exacerbated in adversarial settings due to the need to independently tune the HPs used during standard and adversarial training: succeeding in doing so (i.e., adopting different HP settings in both phases) can lead to a reduction of up to 80% and 43% of the error for clean and adversarial inputs, respectively. We also identify new opportunities to reduce the cost of HPT for robust models. Specifically, we propose to leverage cheap adversarial training methods to obtain inexpensive, yet highly correlated, estimations of the quality achievable using more robust/expensive state-of-the-art methods. We show that, by exploiting this novel idea in conjunction with a recent multi-fidelity optimizer (taKG), the efficiency of the HPT process can be enhanced by up to 2.1x.
Authored by Pedro Mendes, Paolo Romano, David Garlan
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and to have low uncertainty when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that not only have higher quality when evaluated via statistical metrics (e.g., correlation with residuals) but also when employed to build binary classifiers that decide whether the model's output can be trusted or not, including under distributional data shifts.
Authored by Pedro Mendes, Paolo Romano, David Garlan
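The error-driven loss selection at the heart of EUAT can be caricatured in a few lines: the loss term applied to each example depends on whether the model predicted it correctly. The concrete loss forms below are illustrative stand-ins, not the paper's formulation.

```python
# Schematic of error-driven loss selection: correct predictions are
# penalized for being uncertain, mispredictions are rewarded for it.
import math

def euat_loss(prob_correct: float, uncertainty: float,
              is_correct: bool) -> float:
    nll = -math.log(max(prob_correct, 1e-12))   # base cross-entropy term
    if is_correct:
        return nll + uncertainty    # correct input: push uncertainty down
    return nll - uncertainty        # mispredicted input: encourage high
                                    # uncertainty so the error is flagged
```

Summed over a mini-batch, a selective objective of this shape steers the model toward being confident exactly when it is right, which is the property the trust-or-reject binary classifiers in the evaluation rely on.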
This paper focuses on the problem of optimizing the system utility of Machine Learning (ML) based systems in the presence of ML mispredictions. This is achieved via the use of self-adaptive systems and through the execution of adaptation tactics, such as model retraining, which operate at the level of individual ML components. To address this problem, we propose a probabilistic modeling framework that reasons about the cost/benefit trade-offs associated with adapting ML components. The key idea of the proposed approach is to decouple the problems of estimating (i) the expected performance improvement after adaptation and (ii) the impact of ML adaptation on overall system utility. We apply the proposed framework to engineer a self-adaptive ML-based fraud-detection system, which we evaluate using a publicly available, real fraud detection dataset. We initially consider a scenario in which information on the model's quality is immediately available, and then relax this assumption by integrating (and extending) state-of-the-art techniques for estimating the model's quality into the proposed framework. We show that by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal, compared against baselines such as periodic or reactive retraining.
Authored by Maria Casimiro, Diogo Soares, David Garlan, Luís Rodrigues, Paolo Romano
In 2017, the United States Department of Homeland Security designated elections equipment as critical infrastructure. Poll workers play a crucial role in safeguarding election security and integrity and are responsible for administering an election at the more than 100,000 polling places needed during an election cycle, oftentimes interacting with, and having unsupervised access to, elections equipment. This paper examines the utility of training poll workers to mitigate potential cyber, physical, and insider threats that may emerge during U.S. elections through an analysis of the relationship between poll worker training performance and their individual cybersecurity practices. Specifically, we measure a poll worker's personal cybersecurity behavior using the Security Behaviors and Intentions Scale (SeBIS) and statistically examine this measure against their performance on three poll worker election security training modules, along with quizzes to assess poll workers' knowledge. The results indicate that a poll worker's personal security behaviors related to Device Securement, Password Generation, and Proactive Awareness have a positive relationship with poll workers' knowledge of the threats related to election equipment and processes. k-means analysis shows that educated poll workers and those who have strong personal device security behaviors tend to score better on the poll worker training quizzes; Device Securement was also the greatest driver of the relationship between individual security behaviors and poll worker threat knowledge. These findings have implications for election security policies, emphasizing the need for election officials and managers to prioritize Device Securement and Proactive Awareness in poll worker training initiatives to enhance election security.
Authored by Abigail Kassel, Isabella Bloomquist, Natalie Scala, Josh Dehlinger
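The k-means analysis the study performs can be illustrated with a minimal one-dimensional implementation that splits quiz scores into clusters; the score values in the example are invented, and the study's actual analysis runs over the full SeBIS and quiz feature set.

```python
# Tiny 1-D k-means: partition scores around k centers, then move each
# center to the mean of its assigned points, repeating until stable.
def kmeans_1d(xs, k=2, iters=20):
    xs_sorted = sorted(xs)
    step = max(1, len(xs) // k)
    cents = xs_sorted[::step][:k]            # spread the initial centers
    for _ in range(iters):
        groups = [[] for _ in cents]
        for x in xs:                          # assign to nearest center
            nearest = min(range(len(cents)), key=lambda i: abs(x - cents[i]))
            groups[nearest].append(x)
        cents = [sum(g) / len(g) if g else c  # recompute center positions
                 for g, c in zip(groups, cents)]
    return sorted(cents)

# Hypothetical quiz scores: a weaker and a stronger cluster emerge.
print(kmeans_1d([55, 60, 58, 90, 95, 92], k=2))
```

The resulting cluster centers separate low- and high-scoring poll workers, mirroring how the study groups respondents by training performance and security behavior.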