Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper quality of the software. It encompasses the entire software development life-cycle, including requirements engineering, software design, coding, source code reviews, software configuration management, testing, release management, software deployment and software integration. It is organized into goals, commitments, abilities, activities, measurements, verification and validation. In this talk, we will mainly focus on the testing activity of the software development life-cycle. Its main objective is to check that the software satisfies a set of quality properties identified by the "ISO/IEC 25010:2011 System and Software Quality Model" standard [1].
Authored by Wissam Mallouli
Evolving and new-age cybersecurity threats have set the information security industry on high alert. These modern cyberattacks involve malware, phishing, artificial intelligence, machine learning and cryptocurrency. Our research highlights the importance and role of Software Quality Assurance in raising security standards that will not just protect the system but will handle cyber-attacks better. In light of this series of cyber-attacks, we have concluded through our research that implementing code review and penetration testing will protect our data's integrity, availability, and confidentiality. We gathered the user requirements of an application and gained a proper understanding of its functional as well as non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software was still vulnerable to potential issues. We proposed two additional stages in the software quality assurance process to address this problem. After implementing this framework, we saw that most potential threats were already fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
The increase of autonomy in autonomous surface vehicle development brings along modified and new risks and potential hazards; this, in turn, introduces the need for processes and methods for ensuring that systems are acceptable for their intended use with respect to dependability and safety concerns. One approach for evaluating software requirements for claims of safety is to employ an assurance case. Much like a legal case, the assurance case lays out an argument and supporting evidence to provide assurance on the software requirements. This paper analyses safety and security requirements relating to autonomous vessels, as well as regulations in the automotive and marine industries, before proposing a generic cybersecurity and safety assurance case that takes the general graphical approach of Goal Structuring Notation (GSN).
Authored by Luis-Pedro Cobos, Tianlei Miao, Kacper Sowka, Garikayi Madzudzo, Alastair Ruddle, Ehab Amam
The use of software to support the information infrastructure that governments, critical infrastructure providers and businesses worldwide rely on for their daily operations and business processes is gradually becoming unavoidable. Commercial off-the-shelf software is widely and increasingly used by these organizations to automate processes with information technology. That notwithstanding, cyber-attacks are becoming stealthier and more sophisticated, which has led to a complex and dynamic risk environment for IT-based operations that users are working to better understand and manage. This has made users increasingly concerned about the integrity, security and reliability of commercial software. To address these concerns and meet customer requirements, vendors have undertaken significant efforts to reduce vulnerabilities, improve resistance to attack and protect the integrity of the products they sell. These efforts are often referred to as “software assurance.” Software assurance is becoming very important for organizations critical to public safety and economic and national security. These users require a high level of confidence that commercial software is as secure as possible, something only achieved when software is created using best practices for secure software development. Therefore, in this paper, we explore the need for information assurance and its importance for both organizations and end users, along with methodologies and best practices for software security and information assurance, and we also conducted a survey to understand end users’ opinions on the methodologies researched in this paper and their impact.
Authored by Muhammad Khan, Enow Ehabe, Akalanka Mailewa
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among the SoS grows. These interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for completely autonomous road vehicles. The goal-based, technology-neutral features of this standard make it adaptable to other industries and applications. This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs). SPIs provide predictive safety information, thus offering an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in a subordinate assurance case by their developer. These variants of the assurance case inherently reflect the evolution of the vehicle-level derivatives and options in the context of their specific target ecosystem. These subordinate assurance cases are nested under the argument put forward by the OEM of components and aircraft, for certification credit. It has become common practice in aviation to address design hazards through operational mitigations. It is also common for hazards noted in an aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across their customers' supply chains to ensure their validity. The OEM is expected to exercise the sphere-of-control over their product even if it includes outsourced components. Any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations.
For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPI that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres-of-control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome. These considerations are much more than interface requirements and include explicit hazard mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes. Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs whose existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain, and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPI. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations. The resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
Authored by Uma Ferrell, Alfred Anderegg
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is conducting Information Assurance (IA). In addition, there is a desire to network ATE to allow for information sharing and deployment of software. This is complicated by the fact that ATE are typically “unmanaged” systems, in that most are configured, deployed, and then mostly left alone. This results in systems that are not patched with the latest Operating System updates and may in fact be running on legacy Operating Systems that are no longer supported (such as Windows XP or Windows 7). A lot of this has to do with the cost of keeping a system updated on a continuous basis and regression testing the Test Program Sets (TPS) that run on them. Given that an Automated Test System can have thousands of Test Programs running on it, the cost and time involved in doing complete regression testing on all the Test Programs can be extremely high. In addition to the Test Programs themselves, some Test Programs rely on third-party and/or custom-developed software that is required for the Test Programs to run. Add to this the requirement to perform software steering through all the Test Program paths, and the length of time required to validate a Test Program could be measured in months in some cases. If system updates are performed once a month, as some Operating System updates are, this could consume all the available time of the Test Station or require a fleet of Test Stations dedicated just to the required regression testing. On the other side of the coin, a Test System running an old unpatched Operating System is a prime target for any manner of virus or other IA issues. This paper discusses some of the pros and cons of a managed Test System and how it might be accomplished.
Authored by William Headrick
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, or banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game theory approach; it also logs and scores the actions of an AI model to detect adversarial inputs and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP (which is a parallel process), RFSP is user-driven and its actions are sequential. Data are collected for experimentation; the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
The daily use of software has become inevitable nowadays. Almost all everyday tools and the most diverse areas (e.g., medicine or telecommunications) are dependent on software. The C programming language is one of the most used languages for software development, such as operating systems, drivers, embedded systems, and industrial products. Even with the appearance of new languages, it remains one of the most used [1]. At the same time, C lacks verification mechanisms, such as array bounds checking, leaving the entire responsibility for the correct management of memory and resources to the developer. These weaknesses are at the root of buffer overflow (BO) vulnerabilities, which rank first in the CWE top 25 of the most dangerous weaknesses [2]. The exploitation of BO vulnerabilities in safety-critical systems, such as railways and autonomous cars, can have catastrophic effects for manufacturers or endanger human lives.
Authored by João Inácio, Ibéria Medeiros
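To make the weakness discussed above concrete, the following minimal C sketch (illustrative only, not taken from the paper) contrasts an unchecked strcpy into a fixed-size stack buffer with a bounded alternative; the buffer size and input strings are arbitrary example values.

    /* Minimal sketch of the bounds-checking weakness described above.
     * The buffer size and inputs are arbitrary example values. */
    #include <stdio.h>
    #include <string.h>

    static void vulnerable(const char *input)
    {
        char buf[8];            /* fixed-size stack buffer */
        strcpy(buf, input);     /* no length check: writes past buf when
                                   strlen(input) >= sizeof(buf) */
        printf("copied: %s\n", buf);
    }

    static void safer(const char *input)
    {
        char buf[8];
        /* snprintf truncates instead of writing past the end of buf */
        snprintf(buf, sizeof(buf), "%s", input);
        printf("copied: %s\n", buf);
    }

    int main(void)
    {
        vulnerable("short");   /* still within bounds, so no overflow here */
        safer("this string is longer than eight bytes");
        /* Calling vulnerable() with the longer string would corrupt the
         * stack: exactly the class of defect ranked first in the CWE top
         * 25 cited above. */
        return 0;
    }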
The FAA proposes a Safety Continuum that recognizes that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments. The purpose is to align the rigor of oversight with public expectations. An aircraft, its variants or derivatives may be used in operations with different expectations. The differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of dynamic ecosystems, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under part 91 or 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives. This paper proposes a model for the holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving the safety outcomes. This paper describes the application of goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600 to the Safety Continuum. This paper specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine the design approval more seamlessly with the oversight functions by focusing all aspects of the argument and practice together on managing the safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation’s place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit-for-use". The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for growing variety in a product line by accounting for ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability for delivering the expected safety outcomes.
Authored by Alfred Anderegg, Uma Ferrell
Military operations in scenarios with limited communications infrastructure employ flexible solutions to optimize the data processing cycle using situational awareness systems, guaranteeing interoperability and assisting in all processes of decision-making. This paper presents an architecture for the integration of Command, Control, Computing, Communication, Intelligence, Surveillance and Reconnaissance (C4ISR) systems, developed within the scope of the Brazilian Ministry of Defense, in the context of operations with Unmanned Aerial Vehicles (UAV) - swarm drones - and the Internet-to-the-battlefield (IoBT) concept. This solution comprises the following intelligent subsystems embedded in the UAV: STFANET, an SDN-based topology management for Flying Ad Hoc Networks focusing on drone swarm operations, developed by the University of Rio Grande do Sul; Interoperability of Command and Control (INTERC2), an intelligent communication middleware developed by the Brazilian Navy; a Mission-Oriented Sensors Array (MOSA), which provides the automation of data acquisition, data fusion, and data sharing, developed by the Brazilian Army; the In-Flight Awareness Augmentation System (IFA2S), developed by the Brazilian Air Force to increase the navigation safety of Unmanned Aerial Vehicles (UAV); data mining techniques to optimize the MOSA with data patterns; and an adaptive-collaborative system, composed of a Software Defined Radio (SDR) to solve the identification of electromagnetic signals and a Geographical Information System (GIS) to organize the processed information. As a main contribution in this conceptual phase, this research proposes an application that describes the premises for increasing the capacity of sensing threats in low-structured zones, such as the Amazon rainforest, using existing communications solutions of Brazilian defense monitoring systems.
Authored by Nina Figueira, Pablo Pochmann, Abel Oliveira, Edison de Freitas
Military networks consist of heterogeneous devices that provide soldiers with real-time terrain and mission intelligence. The development of next-generation Software Defined Network (SDN)-enabled devices is enabling the modernization of traditional military networks. Commonly, traditional military networks take the trustworthiness of devices for granted. However, the recent modernization of military networks introduces cyber attacks such as data and identity spoofing attacks. Hence, it is crucial to ensure the trustworthiness of network traffic to ensure the mission's outcome. This work proposes a Continuous Behavior-based Authentication (CBA) protocol that integrates network traffic analysis techniques to provide a robust and efficient network management flow by separating data and control planes in SDN-enabled military networks. The evaluation of the CBA protocol aimed to measure its efficiency in realistic military networks. Furthermore, we analyze the overall network overhead of the CBA protocol and its accuracy in detecting rogue network traffic data from field devices.
Authored by Abel Rivera, Evan White, Jaime Acosta, Deepak Tosh
The latest generation of IoT systems incorporate machine learning (ML) technologies on edge devices. This introduces new engineering challenges to bring ML onto resource-constrained hardware, and complications for ensuring system security and privacy. Existing research prescribes iterative processes for machine learning enabled IoT products to ease development and increase product success. However, these processes mostly focus on existing practices used in other generic software development areas and are not specialized for the purpose of machine learning or IoT devices. This research seeks to characterize engineering processes and security practices for ML-enabled IoT systems through the lens of the engineering lifecycle. We collected data from practitioners through a survey (N=25) and interviews (N=4). We found that security processes and engineering methods vary by company. Respondents emphasized the engineering cost of security analysis and threat modeling, and trade-offs with business needs. Engineers reduce their security investment if it is not an explicit requirement. The threats of IP theft and reverse engineering were a consistent concern among practitioners when deploying ML for IoT devices. Based on our findings, we recommend further research into understanding engineering cost, compliance, and security trade-offs.
Authored by Nikhil Gopalakrishna, Dharun Anandayuvaraj, Annan Detti, Forrest Bland, Sazzadur Rahaman, James Davis
"Security first" is the most concerned issue of Linux administrators. Security refers to the integrity of data. The authentication security and integrity of data are higher than the privacy security of data. Firewall is used to realize the function of access control under Linux. It is divided into hardware or software firewall. No matter in which network, the firewall must work at the edge of the network. Our task is to define how the firewall works. This is the firewall's policies and rules, so that it can detect the IP and data in and out of the network. At present, there are three or four layers of firewalls on the market, which are called network layer firewalls, and seven layers of firewalls, which are actually the gateway of the agent layer. But for the seven layer firewall, no matter what your source port or target port, source address or target address is, it will check all your things. Therefore, the seven layer firewall is more secure, but it brings lower efficiency. Therefore, the usual firewall schemes on the market are a combination of the two. And because we all need to access from the port controlled by the firewall, the work efficiency of the firewall has become the most important control of how much data users can access. This paper introduces two types of firewalls iptables and TCP\_Wrappers. What are the differences between the use policies, rules and structures of the two firewalls? This is the problem to be discussed in this paper.
Authored by Limei Ma, Dongmei Zhao
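For illustration only (not from the paper), the TCP_Wrappers side of the comparison is an application-level mechanism: a network service consults the /etc/hosts.allow and /etc/hosts.deny rules discussed above through libwrap, for example via hosts_ctl(). The daemon name and client address below are placeholders, and the program must be linked with -lwrap.

    /* Sketch of a service checking TCP_Wrappers policy through libwrap. */
    #include <stdio.h>
    #include <syslog.h>
    #include <tcpd.h>

    /* libwrap expects these two symbols to be defined by the program */
    int allow_severity = LOG_INFO;
    int deny_severity  = LOG_WARNING;

    int main(void)
    {
        const char *daemon_name = "sshd";          /* placeholder service   */
        const char *client_addr = "192.168.1.50";  /* placeholder client IP */

        if (hosts_ctl((char *)daemon_name, STRING_UNKNOWN,
                      (char *)client_addr, STRING_UNKNOWN))
            printf("hosts.allow/hosts.deny policy permits this client\n");
        else
            printf("policy denies this client\n");
        return 0;
    }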
Random numbers are essential for communications security, as they are widely employed as secret keys and other critical parameters of cryptographic algorithms. The Linux random number generator (LRNG) is the most popular open-source software-based random number generator (RNG). The security of the LRNG is influenced by the overall design, especially the quality of its entropy sources. Therefore, it is necessary to assess and quantify the quality of the entropy sources that contribute the main randomness to RNGs. In this paper, we perform an empirical study on the quality of the entropy sources in the LRNG with Linux kernel 5.6, and provide the following two findings. We first analyze two important entropy sources, jiffies and cycles, and propose a method to predict jiffies from cycles with high accuracy. The results indicate that jiffies can be correctly predicted and thus contain almost no entropy when cycles are known. The other important finding is the failure of interrupt cycles during system boot: the lower bits of cycles caused by interrupts contain little entropy, which is contrary to the traditional assumption that lower bits have more entropy. We believe these findings are of great significance for improving the efficiency and security of RNG designs on software platforms.
Authored by Mingshu Du, Yuan Ma, Na Lv, Tianyu Chen, Shijie Jia, Fangyu Zheng
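As a rough, hedged illustration of the jiffies-from-cycles prediction described above (not the authors' code), the arithmetic relation can be sketched in user space: jiffies advance at HZ ticks per second while the cycle counter advances at the TSC frequency, so elapsed jiffies can be estimated from elapsed cycles. The HZ and TSC frequency values below are assumed examples, not measured ones.

    /* User-space sketch of predicting elapsed jiffies from elapsed cycles:
     *     delta_jiffies ~= delta_cycles * HZ / tsc_hz. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>          /* __rdtsc(), x86 only */

    int main(void)
    {
        const uint64_t HZ     = 250ULL;            /* assumed kernel tick rate */
        const uint64_t tsc_hz = 3000000000ULL;     /* assumed 3 GHz TSC        */

        uint64_t c0 = __rdtsc();
        /* ... work or delay whose duration we want to estimate ... */
        uint64_t c1 = __rdtsc();

        uint64_t predicted_jiffies = (c1 - c0) * HZ / tsc_hz;
        printf("cycles elapsed: %llu, predicted jiffies elapsed: %llu\n",
               (unsigned long long)(c1 - c0),
               (unsigned long long)predicted_jiffies);
        return 0;
    }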
Operating systems are essential software components for any computer. The goal of computer system manufacturers is to provide a safe operating system that can resist a range of assaults. Advanced Persistent Threats (APTs) are merely one kind of attack used by hackers to penetrate organisations. Here, we apply the MITRE ATT&CK approach to analyze the security of Windows and Linux. Using the results of a series of vulnerability tests conducted on Windows 7, 8, 10, and Windows Server 2012, as well as Linux 16.04, 18.04, and its most current version, we can establish which operating system offers the most protection against future assaults. In addition, we have shown adversarial reflection in response to threats. We used ATT&CK framework tools to launch attacks on both platforms.
Authored by Hira Sikandar, Usman Sikander, Adeel Anjum, Muazzam Khan
Exploring the efficient vulnerability scanning and detection technology of various tools is one fundamental aim of network security. This network security technique ameliorates the tremendous number of IoT security challenges and the threats IoT devices face daily. Among the various tools, the Shodan Eye scanning technology has proven very helpful for network administrators and security personnel to scan, detect and analyze vulnerable ports and traffic in organizations' networks. This work presents a simulated network scanning activity and manual vulnerability analysis of internet-connected industrial equipment in two chosen industrial networks (Industry A and B) by running Shodan on a virtually hosted (Oracle VirtualBox) Linux-based operating system (Kali Linux). The results show that Shodan Eye is a promising tool for network security and efficient vulnerability research.
Authored by Ebuka Nkoro, Cosmas Nwakanma, Jae-Min Lee, Dong-Seong Kim
As information and communication technologies evolve every day, so does the use of technology in our daily lives. Along with our increasing dependence on digital information assets, security vulnerabilities are becoming more and more apparent. Passwords are a critical component of secure access to digital systems and applications. They not only prevent unauthorized access to these systems, but also distinguish the users of such systems. Research on password predictability often relies on surveys or leaked data. Therefore, there is a gap in the literature for studies that consider real data in this regard. This study investigates the password security awareness of 161 computer engineering students enrolled in a Linux-based undergraduate course at Ataturk University. The study is conducted in two phases; in the first phase, 12 dictionaries that also contain real student data are formed. In the second phase of the study, a dictionary-based brute-force attack is carried out by means of serial and parallel versions of a Bash script to crack the students’ passwords. In this respect, the /etc/shadow file of the Linux system is used as a basis to compare the hashed versions of the guessed passwords. As a result, the passwords of 23 students, accounting for 14% of the entire student group, were cracked. We believe this is an unacceptably high prediction rate for a group with such high digital literacy. Therefore, due to this important finding, we took immediate action and shared the results of the study with the instructor responsible for administering the information security course that is included in our curriculum and offered in one of the following semesters.
Authored by Deniz Dal, Esra Çelik
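The core check behind the dictionary attack described above can be sketched in C (a minimal illustration, not the study's Bash script), assuming glibc's crypt(3) and linking with -lcrypt: each candidate word is hashed with the salt and algorithm identifier taken from the stored shadow entry and compared against the stored hash. The shadow entry and word list below are fabricated placeholders.

    /* Per-word check of a dictionary attack against a shadow-style hash. */
    #include <crypt.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Placeholder "$id$salt$hash" field; a real attack reads it from
         * /etc/shadow, which requires root privileges. */
        const char *stored = "$6$examplesalt$...";
        const char *wordlist[] = { "123456", "qwerty", "letmein", NULL };

        for (int i = 0; wordlist[i] != NULL; i++) {
            /* crypt() reads the "$id$salt$" prefix of the stored entry as
             * its setting, so the same algorithm and salt are applied. */
            char *guess = crypt(wordlist[i], stored);
            if (guess != NULL && strcmp(guess, stored) == 0) {
                printf("password found: %s\n", wordlist[i]);
                return 0;
            }
        }
        printf("no dictionary word matched\n");
        return 0;
    }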
Fruit-80, an ultra-lightweight stream cipher with 80-bit secret key, is oriented toward resource constrained devices in the Internet of Things. In this paper, we propose area and speed optimization architectures of Fruit-80 on FPGAs. The area optimization architecture reuses NFSR&LFSR feedback functions and achieves the most suitable ratio of look-up-tables and flip-flops. The speed optimization architecture adopts a hybrid approach for parallelization and reduces the latency of long data paths by pre-generating primary feedback and inserting flip-flops. In conclusion, the optimal throughput-to-area ratio of the speed optimization architecture is better than that of Grain v1. The area optimization architecture occupies only 35 slices on Xilinx Spartan-3 FPGA, smaller than that of Grain and other common stream ciphers. To the best of our knowledge, this result sets a new record of the minimum area in lightweight cipher implementations on FPGA.
Authored by Gangqiang Yang, Zhengyuan Shi, Cheng Chen, Hailiang Xiong, Honggang Hu, Zhiguo Wan, Keke Gai, Meikang Qiu
We use mobile apps on a daily basis and there is an app for everything. We trust these applications with our most personal data. It is therefore important that these apps are as secure and usable as possible. So far, most studies on the maintenance and security of mobile applications have been done on Android applications. We do, however, not know how well these results translate to iOS. This research project aims to close this gap by analysing iOS applications with regard to maintainability and security. Regarding maintainability, we analyse code smells in iOS applications, the evolution of code smells in iOS applications, and compare code smell distributions in iOS and Android applications. Regarding security, we analyse the evolution of the third-party library dependency network for the iOS ecosystem. Additionally, we analyse how publicly reported vulnerabilities spread in the library dependency network. Regarding maintainability, we found that the distributions of code smells in iOS and Android applications differ. Code smells in iOS applications tend to correspond to smaller classes, such as Lazy Class. Regarding security, we found that the library dependency network of the iOS ecosystem is not growing as fast as in some other ecosystems. There are fewer dependencies on average than, for example, in the npm ecosystem, and therefore vulnerabilities do not spread as far.
Authored by Kristiina Rahkema, Dietmar Pfahl
User authentication based on muscle tension manifested during password typing seems to be an interesting additional layer of security. It represents another way of verifying a person’s identity, for example in the context of continuous verification. In order to explore the possibilities of such an authentication method, it was necessary to create capturing software that records and stores data from EMG (electromyography) sensors, enabling a subsequent analysis of the recorded data to verify the relevance of the method. The work presented here is devoted to the design, implementation and evaluation of such a solution. The solution consists of a protocol and a software application for collecting multimodal data when typing on a keyboard. Myo armbands on both forearms are used to capture EMG and inertial data, while additional modalities are collected from a keyboard and a camera. The user experience evaluation of the solution is also presented.
Authored by Stefan Korecko, Matus Haluska, Matus Pleva, Markus Skudal, Patrick Bours
Pauses in typing are generally considered to indicate cognitive processing and so are of interest in educational contexts. While much prior work has looked at typing behavior of Computer Science students, this paper presents results of a study specifically on the pausing behavior of students in Introductory Computer Programming. We investigate the frequency of pauses of different lengths, what last actions students take before pausing, and whether there is a correlation between pause length and performance in the course. We find evidence that frequency of pauses of all lengths is negatively correlated with performance, and that, while some keystrokes initiate pauses consistently across pause lengths, other keystrokes more commonly initiate short or long pauses. Clustering analysis discovers two groups of students, one that takes relatively fewer mid-to-long pauses and performs better on exams than the other.
Authored by Raj Shrestha, Juho Leinonen, Albina Zavgorodniaia, Arto Hellas, John Edwards
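As a small illustrative sketch (not the study's analysis code), the negative correlation reported above can be checked with Pearson's r computed over per-student pause frequencies and exam scores; the data values below are fabricated.

    /* Pearson correlation between pause frequency and exam score. */
    #include <math.h>
    #include <stdio.h>

    static double pearson_r(const double *x, const double *y, int n)
    {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx  += x[i];
            sy  += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;   /* n times the covariance   */
        double vx  = sxx - sx * sx / n;   /* n times the variance of x */
        double vy  = syy - sy * sy / n;   /* n times the variance of y */
        return cov / sqrt(vx * vy);
    }

    int main(void)
    {
        double pauses[] = { 12, 30, 25,  8, 40, 18 };   /* pauses per hour */
        double scores[] = { 88, 61, 70, 92, 55, 78 };   /* exam score (%)  */
        printf("Pearson r = %.3f\n", pearson_r(pauses, scores, 6));
        return 0;   /* compile with -lm */
    }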
Resiliency of cyber-physical systems (CPSs) against malicious attacks has been a topic of active research in the past decade due to its widely recognized importance. A resilient CPS is capable of tolerating some attacks, operating at a reduced capacity with core functions maintained, and failing gracefully to avoid any catastrophic consequences. Existing work includes an architecture for hierarchical control systems, a subset of CPS with wide applicability, that is tailored for resiliency. Namely, the architecture consists of local, network and supervision layers and features such as a simplex structure, resource isolation by hypervisors, redundant sensors/actuators, and software-defined network capabilities. Existing work also includes methods of ensuring a level of resiliency at each of these layers. However, for holistic system-level resiliency, the individual methods at each layer must be coordinated in their deployment, because all three layers interact in the operation of the CPS. For this purpose, a resiliency coordinator (RC) for CPS is proposed in this work. The resiliency coordinator is the interconnection of a central resiliency coordinator in the supervision layer, a network resiliency coordinator in the network layer, and, finally, local resiliency coordinators in the multiple physical systems that compose the physical layer. We show, by examples, the operation of the resiliency coordinator and illustrate that the RC accomplishes a level of attack resiliency greater than the sum of the resiliency at each of the layers separately.
Authored by Yongsoon Eun, Jaegeun Park, Yechan Jeong, Daehoon Kim, Kyung-Joon Park
Power system robustness against high-impact, low-probability events is becoming a major concern. To depict the distinct phases of a system's response during these disturbances, an irregular polygon model is derived from the conventional trapezoid model and analytically investigated for transmission system performance, based on which resiliency metrics are developed. Furthermore, the system's resiliency to windstorms is evaluated on the IEEE reliability test system (RTS) by performing steady-state and dynamic security assessment incorporating protection modelling and corrective action schemes using the Power System Simulator for Engineering (PSS®E) software. Based on the results of the steady-state and dynamic analysis, modified resiliency metrics are quantified. Finally, this paper quantifies the interdependency of operational and infrastructure resiliency, as they cannot be considered discrete characteristics of the system.
Authored by Giritharan Iswaran, Ramin Vakili, Mojdeh Khorsand
Cloud Computing has recently become one of today’s great innovations for provisioning Information Technology (IT) resources. Moreover, a new model named Fog Computing has been introduced, which addresses issues of the Cloud Computing paradigm regarding time delay and high cost. However, security challenges remain a big concern, given the vulnerabilities of both Cloud and Fog Computing systems. Man-in-the-Middle (MITM) is considered one of the most destructive attacks in a Fog Computing context. Moreover, it is very complex to detect MITM attacks, as they are performed passively at the Software-Defined Networking (SDN) level; the Fog Computing paradigm is also ideally suitable for MITM attacks. In this paper, a MITM mitigation scheme is proposed, consisting of an SDN network (Fog Leaders) that controls a layer of Fog Nodes. Furthermore, Multi-Path TCP (MPTCP) is used between all edge devices and Fog Nodes to improve resource utilization and security. The performance evaluation of the proposed solution was carried out in a simulation environment using Mininet, the Ryu SDN controller and the Multipath TCP (MPTCP) Linux kernel. The experimental results showed that the proposed solution improves security, network resiliency and resource utilization without any significant overhead compared to the traditional TCP implementation.
Authored by Hossam ELMansy, Khaled Metwally, Khaled Badran
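For illustration only (not part of the paper's implementation), a Linux kernel with upstream MPTCP support (v5.6 or later) lets an edge-device client request Multi-Path TCP simply by asking for IPPROTO_MPTCP when creating the socket; the Fog Node address and port below are placeholders.

    /* Client-side sketch: the only change from ordinary TCP is the
     * IPPROTO_MPTCP protocol argument at socket creation. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262       /* protocol number used by Linux MPTCP */
    #endif

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
        if (fd < 0) {
            perror("socket(IPPROTO_MPTCP)");   /* kernel without MPTCP support */
            return 1;
        }

        struct sockaddr_in fog = { 0 };
        fog.sin_family = AF_INET;
        fog.sin_port   = htons(8080);                   /* placeholder port     */
        inet_pton(AF_INET, "10.0.0.1", &fog.sin_addr);  /* placeholder Fog Node */

        if (connect(fd, (struct sockaddr *)&fog, sizeof(fog)) < 0)
            perror("connect");
        else
            printf("MPTCP connection to Fog Node established\n");

        close(fd);
        return 0;
    }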
This paper presents a machine learning algorithm to detect whether an executable binary is benign or ransomware. Ransomware cybercriminals have targeted our infrastructure, businesses, and beyond, which has directly affected our national security and daily life. Tackling ransomware threats more effectively is a big challenge. We applied a machine learning model to classify a given suspected malware sample and identify its security level for ransomware detection and prevention. We use feature selection as a data preprocessing step to improve the prediction accuracy of the model.
Authored by Chulan Gao, Hossain Shahriar, Dan Lo, Yong Shi, Kai Qian