Supervisory control and data acquisition (SCADA) systems play a pivotal role in the operation of modern critical infrastructures (CIs). Technological advancements, innovations, economic trends, etc. have continued to improve the effectiveness of SCADA systems and the overall throughput of CIs. However, these trends have also continued to expose SCADA systems to security menaces. Intrusions and attacks on SCADA systems can cause service disruptions, equipment damage, and even fatalities. Conventional intrusion detection models have shown trends of ineffectiveness due to the complexity and sophistication of modern-day SCADA attacks and intrusions. Also, SCADA characteristics and requirements necessitate exceptional security considerations with regard to mitigating intrusive events. This paper explores the viability of supervised learning algorithms in detecting intrusions specific to SCADA systems and their communication protocols. Specifically, we examine four supervised learning algorithms: Random Forest, Naïve Bayes, J48 Decision Tree and Sequential Minimal Optimization-Support Vector Machines (SMO-SVM) for evaluating SCADA datasets. Two SCADA datasets were used for evaluating the performance of our approach. To improve classification performance, the datasets were preprocessed with feature selection using principal component analysis. Using prominent classification metrics, the SMO-SVM presented the best overall results on the two datasets. In summary, the results showed that supervised learning algorithms were able to classify intrusions targeted against SCADA systems with satisfactory performance.
Authored by Oyeniyi Alimi, Khmaies Ouahada, Adnan Abu-Mahfouz, Suvendi Rimer, Kuburat Alimi
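As a rough illustration of the pipeline described in this abstract, the sketch below chains principal component analysis with an SVM classifier using scikit-learn; it is not the authors' implementation, and the dataset path and label column are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): PCA feature reduction followed by an SVM,
# approximating the PCA + SMO-SVM pipeline described above.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("scada_dataset.csv")           # hypothetical SCADA traffic dataset
X, y = df.drop(columns=["label"]), df["label"]  # "label": normal vs. intrusion (assumed column)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

# Standardize, project onto principal components, then classify.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice the number of retained components and the SVM kernel would be tuned against the classification metrics the paper reports.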
Supervisory Control and Data Acquisition (SCADA) systems are utilized extensively in critical power grid infrastructures. Modern SCADA systems have been proven to be susceptible to cyber-security attacks and require improved security primitives in order to prevent unwanted influence from an adversarial party. One area of weakness in the SCADA system is the integrity of field-level sensors providing essential data for control decisions at a master station. In this paper we propose a lightweight hardware scheme providing inferred authentication for SCADA sensors by combining an analog-to-digital converter and a permutation generator in a single integrated circuit. Through this method we encode critical sensor data at the time of sensing, so that unencoded data is never stored in memory, increasing the difficulty of software attacks. We show through experimentation how our design stops both software and hardware false data injection attacks occurring at the field level of SCADA systems.
Authored by Kevin Hutto, Santiago Grijalva, Vincent Mooney
With the advent of cloud storage services, many users tend to store their data in the cloud to save storage cost. However, this has led to many security concerns, and one of the most important ones is ensuring data integrity. Public verification schemes are able to employ a third-party auditor to perform data auditing on behalf of the user. But most public verification schemes are vulnerable to procrastinating auditors who may not perform auditing on time. These schemes also lack fair arbitration, i.e., a way to punish a malicious Cloud Service Provider (CSP) and compensate users whose data has been corrupted. On the other hand, the CSP might be storing redundant data that increases the storage cost for the CSP and the computational cost of data auditing for the user. In this paper, we propose a Blockchain-based public auditing and deduplication scheme with a fair arbitration system against procrastinating auditors. The key idea requires auditors to record each verification using a smart contract and store the result in a Blockchain as a transaction. Our scheme can detect and punish procrastinating auditors and compensate users in the case of any data loss. Additionally, our scheme can detect and delete duplicate data, which improves storage utilization and reduces the computational cost of data verification. Experimental evaluation demonstrates that our scheme is provably secure and does not incur overhead compared to existing public auditing techniques while offering the additional feature of verifying the auditor's performance.
Authored by Tariqul Islam, Kamrul Hasan, Saheb Singh, Joon Park
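For intuition only, the toy sketch below records each audit verification as a hash-chained entry, approximating the idea of logging verifications as blockchain transactions so that a procrastinating auditor's missing or late records become detectable. It is not the paper's smart-contract implementation, and all names and fields are illustrative.

```python
# Toy hash-chained audit log (illustration only): each verification becomes a
# tamper-evident "transaction", so a verifier can later check whether the auditor
# produced records on time.
import hashlib, json, time

class AuditChain:
    def __init__(self):
        self.chain = []

    def record_verification(self, auditor, file_id, result):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        tx = {"auditor": auditor, "file_id": file_id, "result": result,
              "timestamp": time.time(), "prev_hash": prev_hash}
        # Hash the transaction contents and chain it to the previous record.
        tx["hash"] = hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()
        self.chain.append(tx)
        return tx["hash"]

    def audits_between(self, start, end):
        # Anyone can query which verifications were recorded in a given period.
        return [t for t in self.chain if start <= t["timestamp"] <= end]

chain = AuditChain()
chain.record_verification("auditor-1", "file-42", "intact")
```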
With the development of Internet of Things (IoT) technology, transaction activity among IoT devices has gradually increased, which also raises the problems of transaction data security and transaction processing efficiency. As one of the research hotspots in the field of data security, blockchain technology has been widely applied in the maintenance of transaction records and the construction of financial payment systems. However, the high proportion of microtransactions in the Internet of Things poses challenges to the coupling of blockchain and IoT devices. This paper proposes a three-party scalable architecture based on "IoT device-edge server-blockchain". In view of the characteristics of micropayments, a verification mechanism for the execution results of off-chain transactions is designed, and a bridge node is introduced in the off-chain architecture, which ensures the finality of transactions on the blockchain. According to the system evaluation, this scalable architecture improves the processing efficiency of micropayments on the blockchain while keeping its degree of decentralization equal to that of the blockchain. Compared with other blockchain-based IoT device payment schemes, our architecture performs better in terms of liveness.
Authored by Jingcong Yang, Qi Xia, Jianbin Gao, Isaac Obiri, Yushan Sun, Wenwu Yang
Robustness verification of neural networks (NNs) is a challenging and significant problem that has drawn great attention in recent years. Existing research has shown that bound propagation is a scalable and effective method for robustness verification, and it can be parallelized on GPUs and TPUs. However, bound propagation methods naturally produce weak bounds due to linear relaxations of the neurons, which may cause verification to fail. Although tightening techniques for simple ReLU networks have been explored, they are not applicable to NNs with general activation functions such as Sigmoid and Tanh. Improving robustness verification on these NNs is still challenging. In this paper, we propose a Branch-and-Bound (BaB) style method to address this problem. The proposed BaB procedure improves the weak bound by splitting the input domain of neurons into sub-domains and solving the corresponding sub-problems. We propose a generic heuristic function to determine the priority of neuron splitting by scoring the relaxation and impact of neurons. Moreover, we combine bound optimization with the BaB procedure to further improve the weak bound. Experimental results demonstrate that the proposed method gains up to 35% improvement compared to the state-of-the-art CROWN method on Sigmoid and Tanh networks.
Authored by Zhengwu Luo, Lina Wang, Run Wang, Kang Yang, Aoshuang Ye
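A schematic sketch of the branch-and-bound idea is given below: the sub-domain with the weakest bound is split first, and the certified bound over the whole domain is the minimum over all pieces. The bounding routine here is a simple Lipschitz-style placeholder for a single neuron-like function, not the paper's bound-propagation method.

```python
# Schematic branch-and-bound sketch (not the paper's algorithm): refine a lower bound
# on f(x) = x * sigmoid(x) over an input interval by repeatedly splitting the piece
# with the weakest bound. Splitting shrinks the relaxation slack, tightening the bound.
import heapq, math

def f(x):
    return x / (1.0 + math.exp(-x))          # f(x) = x * sigmoid(x)

def lower_bound(lo, hi):
    # Lipschitz-style bound: |f'| <= 1.1, so the minimum over [lo, hi] is at least
    # the smaller endpoint value minus 1.1 * (hi - lo) / 2.
    return min(f(lo), f(hi)) - 1.1 * (hi - lo) / 2

def branch_and_bound(lo, hi, iters=50):
    heap = [(lower_bound(lo, hi), lo, hi)]    # weakest (smallest) bound popped first
    for _ in range(iters):
        _, l, h = heapq.heappop(heap)         # split the sub-domain with the worst bound
        mid = (l + h) / 2
        for a, b in ((l, mid), (mid, h)):
            heapq.heappush(heap, (lower_bound(a, b), a, b))
    return min(b for b, _, _ in heap)         # certified lower bound over the whole domain

print(branch_and_bound(-2.0, 2.0))            # tightens toward the true minimum (~ -0.28)
```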
As COVID-19 continues to spread globally, more and more companies are transforming into remote online offices, leading to the expanded use of electronic signatures. However, existing electronic signature platforms suffer from data-centered management: the system is subject to data loss, tampering, and leakage when an attack from outside or inside occurs. In response to the above problems, this paper designs an electronic signature solution and implements a prototype system based on a consortium blockchain. The solution divides the contract signing process into four states: contract upload, signing initiation, signing verification, and signing confirmation. The signing process is mapped onto blockchain-linked data. Users initiate the signature transaction by signing the hash of the uploaded contract. The signing state transition is triggered when the transaction is uploaded to the blockchain under the consensus mechanism and the control of the smart contract, which effectively ensures the integrity of the electronic contract and the non-repudiation of the electronic signature. Finally, a blockchain performance test shows that the system can be applied to the business scenario of contract signing.
Authored by Kaicheng Yang, Yongtang Wu, Yuling Chen
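The signing step described above can be pictured with the following minimal sketch, which assumes the third-party `cryptography` package; the contract bytes, curve, and key handling are placeholders, not the prototype system's actual code.

```python
# Minimal sketch (assumes the `cryptography` package): the user signs the SHA-256 hash
# of the uploaded contract, and anyone holding the public key can verify the signature
# before the signing-state transition is accepted.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

contract = b"...contract file bytes..."               # placeholder contents
contract_hash = hashlib.sha256(contract).digest()     # the value referenced on-chain

private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(contract_hash,
                             ec.ECDSA(utils.Prehashed(hashes.SHA256())))

# Verification, e.g. before the consortium nodes accept the state transition:
private_key.public_key().verify(signature, contract_hash,
                                ec.ECDSA(utils.Prehashed(hashes.SHA256())))
print("signature over contract hash verified")
```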
Embedded memories are important components in a system-on-chip, and they may be crippled by aging and wear faults or Hardware Trojan attacks that compromise run-time security. Current built-in self-test and pre-silicon verification lack the efficiency and flexibility to solve this problem. To this end, we address such vulnerabilities by proposing a run-time memory security detection framework in this paper. The solution builds mainly upon a centralized security detection controller for partially reconfigurable inspection content, and a static memory wrapper to handle access conflicts and buffer testing cells. We show that a field programmable gate array prototype of the proposed framework can detect 16 memory faults and 3 types of Hardware Trojans with one reconfigurable partition, while saving 12.7% area and 2.9% power overhead compared to a static implementation. This architecture is more scalable and has little impact on the memory access throughput of the original chip system during run-time detection.
Authored by Ying Li, Lan Chen, Jian Wang, Guanfei Gong
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. While hybrid approaches aim for the "best of both worlds," the challenges in applying them in the real world are largely unknown. We conduct a data-driven analysis of the challenges, and resultant bugs, involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation (the opposite of its intention), and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
Authored by Tatiana Vélez, Raffi Khatchadourian, Mehdi Bagherzadeh, Anita Raja
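One concrete flavor of the hybridization pitfalls discussed above (not an example from the study itself) is TensorFlow's `tf.function`: wrapping eager code in the decorator yields graph-level speed, but Python side effects execute only while tracing, a frequent source of API misuse.

```python
# Illustrative sketch: hybridizing imperative DL code with tf.function. The decorator
# traces the eager code into a graph, so print() runs at trace time, not on every call.
import tensorflow as tf

@tf.function
def train_step(x, y, w):
    print("tracing...")               # executes once per trace, not per call
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((tf.matmul(x, w) - y) ** 2)
    grad = tape.gradient(loss, w)
    return w - 0.1 * grad             # returns updated weights as a new tensor

x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))
w = tf.Variable(tf.zeros((3, 1)))
w2 = train_step(x, y, w)              # first call traces the graph and prints
w3 = train_step(x, y, w)              # second call reuses the graph; no print
```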
The secure Internet of Things (IoT) increasingly relies on digital cryptographic signatures, which require a private signing key and a public verification key. By their intrinsic nature, public keys are meant to be accessible to any interested party willing to verify a given signature. Thus, the storing of such keys is of great concern, since an adversary shall not be able to tamper with the public keys, e.g., on a local filesystem. Commonly used public-key infrastructures (PKIs), which handle key distribution and storage, are not feasible in most use cases due to their resource intensity and high complexity. Thus, the general storing of public verification keys is of notable interest for low-resource IoT networks. Using Distributed Ledger Technology (DLT), this paper proposes a decentralized concept for storing public signature verification keys in a tamper-resistant, secure, and resilient manner. By combining lightweight public-key exchange protocols with the proposed approach, the storing of verification keys becomes scalable and especially suitable for low-resource IoT devices. This paper provides a Proof-of-Concept implementation of the DLT public-key store by extending our previously proposed NFC Key Exchange (NFC-KE) protocol with a decentralized Hyperledger Fabric public-key store. The provided performance analysis shows that by using the decentralized keystore, the NFC-KE protocol gains increased tamper resistance and overall system resilience while showing expected performance degradations with a low real-world impact.
Authored by Julian Dreyer, Ralf Tönjes, Nils Aschenbruck
Web-based Application Programming Interfaces (APIs) are often described using SOAP, OpenAPI, and GraphQL specifications. These specifications provide a consistent way to define web services and enable automated fuzz testing. As such, many fuzzers take advantage of these specifications. However, in an enterprise setting, the tools are usually installed and scaled by individual teams, leading to duplication of effort. There is a need for an enterprise-wide fuzz testing solution to provide shared, cost-efficient, off-nominal testing at scale where fuzzers can be plugged in as needed. Internet cloud-based fuzz-testing-as-a-service solutions mitigate scalability concerns but are not always feasible, as they require artifacts to be uploaded to external infrastructure. Typically, corporate policies prevent sharing artifacts with third parties due to cost, intellectual property, and security concerns. We utilize API specifications and combine them with cluster computing elasticity to build an automated, scalable framework that can fuzz multiple apps at once while retaining the trust boundary of the enterprise.
Authored by Riyadh Mahmood, Jay Pennington, Danny Tsang, Tan Tran, Andrea Bogle
This paper establishes a vector autoregressive model based on the current development status of the digital economy and studies the correlation between the digital economy and economic growth MIS from a dynamic perspective, finding that the digital economy strongly supports growth in total economic volume. A coordination degree model calculates the scientific and technological innovation capabilities of China's 30 provinces (excluding Tibet) from 2018 to 2022, together with the coordinated, green, open, and shared level of high-quality economic development. The basic principles of security measurement composition are expounded, and the measurement information model can be used as a logic model. The analysis of security measure composition summarizes the selection principle and selection process of security measurement, and analyzes and compares the measure composition methods used in several typical security measurement approaches.
Authored by Ma Ying, Zhou Tingting
A huge number of cloud users and cloud providers are threatened by security issues arising from cloud computing adoption. Cloud computing is a hub of virtualization that provides virtualization-based infrastructure over physically connected systems. With the rapid advancement of cloud computing technology, data protection is becoming increasingly necessary. It is important to weigh the advantages and disadvantages of moving to cloud computing when deciding whether to do so. As a result of security and other problems in the cloud, cloud clients need more time to consider transitioning to cloud environments. Cloud computing, like any other technology, faces numerous challenges, especially in terms of cloud security, and many prospective customers are wary of cloud adoption because of this. Virtualization technologies facilitate the sharing of resources among multiple users. Cloud services are protected using various models such as type-I and type-II hypervisors, OS-level, and unikernel virtualization, but these models also introduce a variety of security issues. Unfortunately, several attacks have been built in recent years to compromise the hypervisor and take control of all virtual machines running above it. It is extremely difficult to reduce the size of a hypervisor due to the functions it offers, and it is not acceptable for a secure system design to include a large hypervisor in the Trusted Computing Base (TCB). Cloud computing service providers use virtualization to provide services; however, using these methods entails handing over complete ownership of data to a third party. This paper covers a variety of topics related to virtualization protection, including a summary of various solutions and risk mitigation in the VMM (virtual machine monitor). We discuss possible issues with a malicious virtual machine as well as the security precautions required to handle malicious behaviors. We examine the issues of investigating malicious behaviors in cloud computing, give a scientific categorization, and indicate future directions. We have identified: i) security specifications for virtualization in cloud computing, which can be used as a starting point for securing cloud virtual infrastructure, ii) attacks that can be conducted against cloud virtual infrastructure, and iii) security solutions to protect the virtualization environment from DDoS attacks.
Authored by Tahir Alyas, Karamath Ateeq, Mohammed Alqahtani, Saigeeta Kukunuru, Nadia Tabassum, Rukshanda Kamran
With the continuous development of computer technology, informatization solutions now cover all walks of life and all fields of society. For colleges and universities, teaching and scientific research are the basic tasks of the school, and the school's scientific research ability affects the level of its teachers and the training of its students. Establishing a good scientific research environment has become an important link in the development of universities. Scientific research (SR) data is a prerequisite for SR activities. High-quality SR data management services are conducive to ensuring the quality and safety of SR data and further assisting the smooth development of SR projects. Therefore, this article conducts research and practice on cloud computing-based scientific research data management services in colleges and universities. First, it analyzes the current situation of SR data management in colleges and universities; the results show that the uptake of SR data management in domestic universities is much lower than in universities in Europe and the United States, and the data storage awareness of domestic researchers is relatively weak. Only 46% of schools have developed SR data management services, which is much lower than European and American schools. Second, it analyzes the effect of cloud computing (CC) on the management of SR data in colleges and universities. The results show that 47% of researchers believe that CC benefits the management of SR data in colleges and universities by reducing scientific research costs and improving efficiency, while the rest believe that CC can speed up data storage and improve security when applied to SR data management in colleges and universities.
Authored by Di Chen
With the development of cloud computing technology, more and more scientific researchers choose to deliver scientific workflow tasks to public cloud platforms for execution. This mode effectively reduces scientific research costs while also bringing serious security risks. In response to this problem, this article summarizes the current security issues facing cloud scientific workflows and analyzes the importance of studying cloud scientific workflow security. It then analyzes, summarizes, and compares current cloud scientific workflow security methods from three perspectives: system architecture, security model, and security strategy. Finally, it offers an outlook on future development directions.
Authored by Xuanyu Liu, Guozhen Cheng, Yawen Wang, Shuai Zhang
Cooperative secure computing based on the relationship between a numerical value and a numerical interval is not only a basic problem of secure multiparty computation but also a core problem of cooperative secure computing. Continuously investigating and constructing solutions to such problems is of substantial theoretical and practical significance for information security in relation to scientific computing. Based on the Goldwasser-Micali homomorphic encryption scheme, this paper proposes a solution based on the Morton rule: according to the characteristics of the interval, a double-length vector is constructed to participate in the exclusive-or operation, and an efficient cooperative decision-making solution for the security of integers and integer intervals is designed. This solution can solve more basic problems in cooperative secure computation after suitable transformations. A theoretical analysis shows that this solution is safe and efficient. Finally, applications that are based on these protocols are presented.
Authored by Shaofeng Lu, Chengzhe Lv, Wei Wang, Changqing Xu, Huadan Fan, Yuefeng Lu, Yulong Hu, Wenxi Li
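The XOR-homomorphic property of Goldwasser-Micali that the protocol builds on can be demonstrated with the toy sketch below; the parameters are deliberately tiny and insecure, and the code illustrates only the property, not the paper's protocol.

```python
# Toy Goldwasser-Micali sketch (tiny, insecure parameters; illustration only):
# multiplying ciphertexts of bits a and b yields a ciphertext of a XOR b.
import math, random

p, q = 499, 547                      # toy primes; a real deployment uses large primes
N = p * q

def is_qr(c, prime):                 # Euler's criterion: quadratic-residue test mod an odd prime
    return pow(c % prime, (prime - 1) // 2, prime) == 1

# Public non-residue x: a non-residue modulo both p and q (Jacobi symbol +1 mod N).
x = next(v for v in range(2, N) if not is_qr(v, p) and not is_qr(v, q))

def encrypt(bit):
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (r * r % N) * pow(x, bit, N) % N

def decrypt(c):
    return 0 if is_qr(c, p) else 1   # the private key is the factorization (p, q)

a, b = 1, 1
assert decrypt(encrypt(a) * encrypt(b) % N) == a ^ b   # XOR homomorphism
```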
A considerable portion of computing power is always required to perform symbolic calculations. The reliability and effectiveness of algorithms are two of the most significant challenges observed in the field of scientific computing. The terms "feasible calculations" and "feasible computations" refer to the same idea: algorithms that are reliable and effective despite practical constraints. This research study investigates different types of computing and modelling challenges, as well as the development of efficient integration methods that consider those challenges before generating accurate results. Further, this study investigates various forms of errors that occur in the process of data integration. The proposed framework is based on automata, which provides the ability to investigate a wide variety of distinct distance-bounding protocols. The proposed framework not only makes it possible to produce computational (in)security proofs, but also includes an extensive investigation of issues such as optimal space-complexity trade-offs. The proposed framework is embedded in the already established symbolic framework in order to gain a deeper understanding of distance-bounding security. It is now possible to guarantee a certain level of physical proximity without having to continually mimic either time or distance.
Authored by Vinod Kumar, Sanjay Pachauri
Cloud computing is a model of service provisioning in heterogeneous distributed systems that encourages many researchers to explore its benefits and drawbacks in executing workflow applications. Recently, high-quality security protection has become a new challenge in workflow allocation. Different tasks may or may not have varied security demands, and the security overhead may vary across the virtual machines (VMs) to which a task is assigned. This paper proposes a Security Oriented Deadline-Aware workflow allocation (SODA) strategy in an IaaS cloud environment to minimize the risk probability of the workflow tasks while meeting the deadline in a deterministic environment. SODA picks out the task with the highest security upward rank and assigns the selected task to a trustworthy VM. SODA tries to simultaneously satisfy each task's security demand and deadline at the maximum possible level. The simulation studies show that SODA outperforms the HEFT strategy in terms of the risk probability of the cloud system on a scientific workflow, namely CyberShake.
Authored by Mahfooz Alam, Mohammad Shahid, Suhel Mustajab
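A simplified sketch of the allocation idea (not the authors' SODA implementation) is shown below: tasks are taken in order of security upward rank and placed on a trustworthy VM that satisfies both the security demand and the deadline, with a best-effort fallback; all field names are assumptions.

```python
# Simplified security- and deadline-aware allocation sketch, loosely mirroring the
# strategy described above. Field names and the fallback rule are assumptions.
def allocate(tasks, vms):
    """tasks: dicts with 'name', 'rank', 'security_demand', 'runtime', 'deadline';
       vms:   dicts with 'trust_level', 'available_at'."""
    schedule = {}
    # Highest security upward rank first.
    for task in sorted(tasks, key=lambda t: t["rank"], reverse=True):
        candidates = [vm for vm in vms
                      if vm["trust_level"] >= task["security_demand"]
                      and vm["available_at"] + task["runtime"] <= task["deadline"]]
        if not candidates:
            # Fallback: best-effort placement on the most trusted VM.
            candidates = [max(vms, key=lambda v: v["trust_level"])]
        vm = min(candidates, key=lambda v: v["available_at"])  # earliest finish time
        vm["available_at"] += task["runtime"]
        schedule[task["name"]] = vm
    return schedule
```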
Network security is a problem of great concern to all countries at this stage. How to ensure that the network provides effective services to people without being exposed to potential security threats has become a major concern for network security researchers. In order to better understand the network security situation, researchers have studied a variety of quantitative assessment methods, and the most scientific and effective one is the hierarchical quantitative assessment of the network security threat situation. This method allows staff to have a very clear understanding of the security of the network system and to make correct judgments. This article analyzes the hierarchical quantitative assessment of the network security threat situation in terms of the current state of practice and available methods, and is intended for reference only.
Authored by Zuntao Sun
In this paper we present techniques for enhancing the security of southbound infrastructure in SDN, which includes OpenFlow switches and end hosts. In particular, the proposed security techniques have three main goals: (i) validation and secure configuration of flow rules in the OpenFlow switches by a trusted SDN controller in the domain; (ii) securing the flows from the end hosts; and (iii) detecting attacks on the switches by malicious entities in the SDN domain. We have implemented the proposed security techniques as an application for the ONOS SDN controller. We have also validated our application by detecting various OpenFlow-switch-specific attacks, such as malicious flow rule insertions and modifications in the switches, over a Mininet-emulated network.
Authored by Uday Tupakula, Kallol Karmakar, Vijay Varadharajan, Ben Collins
SDN represents a significant advance for the telecom world, since the decoupling of the control and data planes offers numerous advantages in terms of management dynamism and programmability, mainly due to its software-based centralized control. Unfortunately, these features can be exploited by malicious entities, who take advantage of the centralized control to extend the scope and consequences of their attacks. When this happens, both the legal and network-technical fields are concerned with gathering information that will lead them to the root cause of the problem. Although forensics and incident response processes share an interest in the event information, both operate in isolation due to the conceptual and pragmatic challenges of integrating them into SDN environments, which impacts the resources and time required for information analysis. Given these limitations, the current work proposes a framework for SDNs that combines the above approaches to optimize the resources needed to deliver evidence, incorporates incident response activation mechanisms, and generates assumptions about the possible origin of the security problem.
Authored by Maria Jimenez, David Fernandez
When deploying additional network security equipment in a new location, network service providers face difficulties such as precisely managing a large amount of network security equipment and high network operation costs. Accordingly, there is a need for a method of security-aware network service provisioning using the existing network security equipment. An existing reinforcement learning-based routing decision method, fixed for each node, addresses this problem by repeating until a routing decision satisfying end-to-end security constraints is achieved, which has the disadvantage of longer network service provisioning times. In this paper, we propose a security constraints reinforcement learning based routing (SCRR) algorithm that generates routing decisions satisfying end-to-end security constraints by giving conditional reward values according to the agent's state-action pairs when performing reinforcement learning.
Authored by Hyeonjun Jo, Kyungbaek Kim
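The conditional-reward idea can be illustrated with a toy Q-learning routing example on a hypothetical four-node topology; this is not the SCRR algorithm itself, and the topology, penalty values, and hyperparameters are assumptions.

```python
# Toy Q-learning routing with conditional rewards: an action traversing a link that
# violates the security constraint is penalized, steering the agent toward routes
# that satisfy the end-to-end constraint without repeated re-provisioning runs.
import random

# Hypothetical topology: node -> {next_node: (latency_cost, link_security_level)}
links = {"A": {"B": (1, 2), "C": (1, 1)},
         "B": {"D": (1, 3)},
         "C": {"D": (1, 1)},
         "D": {}}
MIN_SECURITY, DST = 2, "D"
Q = {(n, m): 0.0 for n in links for m in links[n]}

def episode(alpha=0.5, gamma=0.9, eps=0.2):
    node = "A"
    while node != DST and links[node]:
        nxt = (random.choice(list(links[node])) if random.random() < eps
               else max(links[node], key=lambda m: Q[(node, m)]))
        cost, sec = links[node][nxt]
        reward = -cost - (10 if sec < MIN_SECURITY else 0)   # conditional security penalty
        reward += 5 if nxt == DST else 0                     # bonus for reaching the destination
        best_next = max((Q[(nxt, m)] for m in links[nxt]), default=0.0)
        Q[(node, nxt)] += alpha * (reward + gamma * best_next - Q[(node, nxt)])
        node = nxt

for _ in range(500):
    episode()
print(max(links["A"], key=lambda m: Q[("A", m)]))   # expected: "B" (the secure route)
```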
Software-Defined Networking (SDN) can be a good option to support Industry 4.0 (4IR) and 5G wireless networks. SDN can also be a secure networking solution that improves the security, capability, and programmability of networks. In this paper, we present and analyze an SDN-based security architecture for 4IR with 5G. SDN is used to increase the level of security and reliability of the network by suitably dividing the whole network into data, control, and application planes. The SDN control layer plays a beneficial role in 4IR with 5G scenarios by properly managing the data flow. We also evaluate the performance of the proposed architecture in terms of key parameters such as data transmission rate and response time.
Authored by Anichur Rahman, Kamrul Hasan, Seong-Ho Jeong
Middleboxes are primarily used in Software-Defined Networks (SDNs) to enhance operational performance, policy compliance, and security operations. The security of the middlebox itself is therefore essential, because incorrect use of the middlebox can cause severe cybersecurity problems for the SDN. Existing attacks against middleboxes in SDN (for instance, the middlebox-bypass attack) use methods such as cloned tags from previous packets to claim that the middlebox has processed the injected packet. Flowcloak, the latest solution to defeat such an attack, creates a defence tag by computing the hash of certain parts of the packet header. However, the security mechanisms proposed to mitigate these attacks can be compromised, since all parts of the packet header can be imitated, leaving the middleboxes insecure. To demonstrate our claim, we introduce a novel attack against SDN middleboxes that hijacks TCP/IP headers. The attack uses crafted TCP/IP headers to receive the tags and signatures and successfully bypasses the middleboxes.
Authored by Ali Mohammadi, Rasheed Hussain, Alma Oracevic, Syed Kazmi, Fatima Hussain, Moayad Aloqaily, Junggab Son
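Why header-derived tags can be imitated is easy to see with the scapy packet library (an assumption, not the paper's tooling): every TCP/IP header field of a crafted packet can be set to match an observed flow, so any tag computed only over those fields is reproducible.

```python
# Illustrative sketch: a tag derived solely from TCP/IP header fields cannot distinguish
# a legitimate packet from a crafted one with identical headers but injected payload.
import hashlib
from scapy.all import IP, TCP

def header_tag(pkt):
    # Hypothetical middlebox tag: hash over selected header fields only.
    fields = f"{pkt[IP].src}|{pkt[IP].dst}|{pkt[TCP].sport}|{pkt[TCP].dport}|{pkt[TCP].seq}"
    return hashlib.sha256(fields.encode()).hexdigest()

legit  = IP(src="10.0.0.2", dst="10.0.0.9") / TCP(sport=5050, dport=80, seq=1000)
forged = IP(src="10.0.0.2", dst="10.0.0.9") / TCP(sport=5050, dport=80, seq=1000) / b"injected"

assert header_tag(legit) == header_tag(forged)   # identical headers -> identical tag
```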
Since the advent of Software Defined Networking (SDN) in 2011 and the formation of the Open Networking Foundation (ONF), SDN-inspired projects have emerged in various fields of computer networks. Almost all networking organizations are working to have their products support SDN concepts such as OpenFlow. SDN provides great flexibility and agility in networks through application-specific control functions with a centralized controller, but it does not provide security guarantees against vulnerabilities inside applications, the data plane, and the controller platform. As SDN can also use third-party applications, an infected application can be distributed in the network and SDN-based systems may easily collapse. In this paper, a security threat assessment model is presented which highlights the critical areas with security requirements in SDN. Based on this threat assessment model, a Security Threats Assessment and Diagnostic System (STADS) is proposed for establishing a reliable SDN framework. The proposed STADS detects and diagnoses various threats based on a specified policy mechanism when different components of the SDN communicate with the controller to fulfil network requirements. The Mininet network emulator with the Ryu controller has been used for implementation and analysis.
Authored by Pradeep Sharma, Brijesh Kumar, S.S Tyagi
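A minimal Mininet setup of the kind used for such an analysis might look like the sketch below; it is not the authors' testbed, the Ryu controller is assumed to run separately (e.g., via ryu-manager), and the controller address, port, and topology are placeholders.

```python
# Minimal Mininet topology attached to an external (e.g., Ryu) OpenFlow controller.
# The controller itself would be started separately, for example:
#   ryu-manager ryu.app.simple_switch_13
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.log import setLogLevel

def run():
    net = Mininet(controller=RemoteController)
    net.addController("c0", ip="127.0.0.1", port=6633)   # assumed controller address/port
    s1 = net.addSwitch("s1")
    h1, h2 = net.addHost("h1"), net.addHost("h2")
    net.addLink(h1, s1)
    net.addLink(h2, s1)
    net.start()
    net.pingAll()        # traffic that the controller (and its policies) would observe
    net.stop()

if __name__ == "__main__":
    setLogLevel("info")
    run()
```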
The dynamic state of networks presents a challenge for the deployment of distributed applications and protocols. Ad-hoc schedules in the updating phase might lead to considerable ambiguity and many issues. By separating the control and data planes and centralizing control, Software Defined Networking (SDN) offers novel opportunities and remedies for these issues. However, a software-based centralized architecture for distributed environments introduces significant challenges, and security is a crucial issue in SDN. This paper presents an in-depth study of the state of the art of security challenges and solutions for the SDN paradigm. The conducted study led us to propose a dynamic approach to efficiently detect different security violations and incidents caused by network updates, including forwarding loops, forwarding black holes, link congestion, and network policy violations. Our solution relies on an intelligent approach based on Machine Learning and Artificial Intelligence algorithms.
Authored by Amina SAHBI, Faouzi JAIDI, Adel BOUHOULA