Web-based Application Programming Interfaces (APIs) are often described using SOAP, OpenAPI, and GraphQL specifications. These specifications provide a consistent way to define web services and enable automated fuzz testing, and many fuzzers take advantage of them. However, in an enterprise setting, such tools are usually installed and scaled by individual teams, leading to duplicated effort. There is a need for an enterprise-wide fuzz testing solution that provides shared, cost-efficient, off-nominal testing at scale, where fuzzers can be plugged in as needed. Internet cloud-based fuzz-testing-as-a-service solutions mitigate scalability concerns but are not always feasible, as they require artifacts to be uploaded to external infrastructure. Corporate policies typically prevent sharing artifacts with third parties due to cost, intellectual property, and security concerns. We utilize API specifications and combine them with cluster computing elasticity to build an automated, scalable framework that can fuzz multiple applications at once while remaining within the trust boundary of the enterprise.
Authored by Riyadh Mahmood, Jay Pennington, Danny Tsang, Tan Tran, Andrea Bogle
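A minimal sketch of the specification-driven fuzzing idea described in the abstract above, assuming a hypothetical service that publishes its OpenAPI document at /openapi.json (the base URL, payload strategy, and crash heuristic are illustrative, not the authors' framework):

```python
import random
import string

import requests

BASE_URL = "http://localhost:8080"  # hypothetical service under test
HTTP_METHODS = {"get", "post", "put", "delete", "patch"}

def random_string(n=16):
    """Random printable value used to fill parameters and bodies."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=n))

def fuzz_from_openapi(spec_url=BASE_URL + "/openapi.json", iterations=100):
    """Walk the OpenAPI document and send randomized requests to each path."""
    spec = requests.get(spec_url, timeout=5).json()
    for _ in range(iterations):
        for path, item in spec.get("paths", {}).items():
            for method, op in item.items():
                if method not in HTTP_METHODS:
                    continue  # skip non-operation keys such as "parameters"
                url = BASE_URL + path
                # Substitute random values for templated path parameters.
                for param in op.get("parameters", []):
                    if param.get("in") == "path":
                        url = url.replace("{%s}" % param["name"], random_string(8))
                resp = requests.request(method.upper(), url,
                                        json={"fuzz": random_string()}, timeout=5)
                if resp.status_code >= 500:  # candidate crash worth triaging
                    print(f"[!] {method.upper()} {url} -> {resp.status_code}")

if __name__ == "__main__":
    fuzz_from_openapi()
```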
In this paper we present techniques for enhancing the security of southbound infrastructure in SDN, which includes OpenFlow switches and end hosts. In particular, the proposed security techniques have three main goals: (i) validation and secure configuration of flow rules in the OpenFlow switches by a trusted SDN controller in the domain; (ii) securing the flows from the end hosts; and (iii) detecting attacks on the switches by malicious entities in the SDN domain. We have implemented the proposed security techniques as an application for the ONOS SDN controller. We have also validated our application by detecting various OpenFlow-switch-specific attacks, such as malicious flow rule insertions and modifications in the switches, over a Mininet-emulated network.
Authored by Uday Tupakula, Kallol Karmakar, Vijay Varadharajan, Ben Collins
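The flow-rule validation goal above can be illustrated with a small audit script against the ONOS REST API; the credentials and the application whitelist below are assumptions for illustration, not the authors' application logic:

```python
import requests
from requests.auth import HTTPBasicAuth

ONOS = "http://127.0.0.1:8181/onos/v1"
AUTH = HTTPBasicAuth("onos", "rocks")  # assumed default credentials

# Illustrative policy: only these applications may install flow rules.
TRUSTED_APPS = {"org.onosproject.core", "org.onosproject.fwd"}

def audit_flows():
    """Fetch all installed flow rules and flag ones from untrusted apps."""
    flows = requests.get(f"{ONOS}/flows", auth=AUTH, timeout=10).json()["flows"]
    for flow in flows:
        app = flow.get("appId", "")
        if app not in TRUSTED_APPS:
            print(f"[!] suspicious rule {flow['id']} on {flow['deviceId']} "
                  f"installed by {app}")

if __name__ == "__main__":
    audit_flows()
```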
SDN represents a significant advance for the telecom world, since the decoupling of the control and data planes offers numerous advantages in terms of management dynamism and programmability, mainly due to its software-based centralized control. Unfortunately, these features can be exploited by malicious entities, who take advantage of the centralized control to extend the scope and consequences of their attacks. When this happens, both the legal and network technical fields are concerned with gathering information that will lead them to the root cause of the problem. Although forensics and incident response processes share an interest in event information, they operate in isolation due to the conceptual and pragmatic challenges of integrating them into SDN environments, which impacts the resources and time required for information analysis. Given these limitations, the current work proposes a framework for SDNs that combines the two approaches to optimize the resources needed to deliver evidence, incorporate incident response activation mechanisms, and generate hypotheses about the possible origin of the security problem.
Authored by Maria Jimenez, David Fernandez
Software-Defined Networking (SDN) can be a good option for supporting Industry 4.0 (4IR) and 5G wireless networks. SDN can also be a secure networking solution that improves the security, capability, and programmability of networks. In this paper, we present and analyze an SDN-based security architecture for 4IR with 5G. SDN is used to increase the level of security and reliability of the network by suitably dividing the whole network into data, control, and application planes. The SDN control layer plays a beneficial role in 4IR-with-5G scenarios by properly managing the data flow. We also evaluate the performance of the proposed architecture in terms of key parameters such as data transmission rate and response time.
Authored by Anichur Rahman, Kamrul Hasan, Seong–Ho Jeong
Middleboxes are primarily used in Software-Defined Networking (SDN) to enhance operational performance, policy compliance, and security operations. The security of the middlebox itself is therefore essential, because incorrect use of a middlebox can cause severe cybersecurity problems for SDN. Existing attacks against middleboxes in SDN (for instance, the middlebox-bypass attack) use methods such as cloned tags from previous packets to make it appear that the middlebox has processed the injected packet. FlowCloak, the latest solution for defeating such attacks, builds its defence on a tag computed as a hash of certain parts of the packet header. However, the security mechanisms proposed to mitigate these attacks can be compromised, since all parts of the packet header can be imitated, leaving the middleboxes insecure. To demonstrate our claim, we introduce a novel attack against SDN middleboxes that hijacks TCP/IP headers. The attack uses crafted TCP/IP headers to receive the tags and signatures and successfully bypasses the middleboxes.
Authored by Ali Mohammadi, Rasheed Hussain, Alma Oracevic, Syed Kazmi, Fatima Hussain, Moayad Aloqaily, Junggab Son
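The crafted-header attack described above can be sketched with Scapy; all field values and the payload are placeholders rather than the authors' actual exploit:

```python
from scapy.all import IP, TCP, Raw, send

# Craft a packet whose header fields mimic one already "approved" by the
# middlebox; every value here is a placeholder for illustration.
spoofed = (
    IP(src="10.0.0.5", dst="10.0.0.9", id=0x1234)        # cloned IP fields
    / TCP(sport=44321, dport=80, seq=1000, flags="PA")   # cloned TCP fields
    / Raw(load=b"payload carrying a replayed tag")
)

# Sending requires root privileges; the middlebox sees header fields it has
# previously tagged and forwards the packet without reprocessing it.
send(spoofed, verbose=False)
```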
Since the advent of Software-Defined Networking (SDN) and the formation of the Open Networking Foundation (ONF) in 2011, SDN-inspired projects have emerged in various fields of computer networking. Almost all networking organizations are working to support SDN concepts such as OpenFlow in their products. SDN provides great flexibility and agility in networks through application-specific control functions with a centralized controller, but it does not provide security guarantees against vulnerabilities inside applications, the data plane, or the controller platform. Because SDN can also use third-party applications, an infected application can be distributed in the network and SDN-based systems may easily collapse. In this paper, a security threat assessment model is presented that highlights the critical areas with security requirements in SDN. Based on the threat assessment model, a Security Threats Assessment and Diagnostic System (STADS) is proposed for establishing a reliable SDN framework. The proposed STADS detects and diagnoses various threats, based on a specified policy mechanism, as the different components of SDN communicate with the controller to fulfil network requirements. The Mininet network emulator with the Ryu controller has been used for implementation and analysis.
Authored by Pradeep Sharma, Brijesh Kumar, S.S Tyagi
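Since STADS is implemented on the Ryu controller, a minimal Ryu application skeleton shows where such policy-driven threat checks would hook in (the policy check itself is an illustrative stub):

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ethernet

class ThreatMonitor(app_manager.RyuApp):
    """Skeleton app that inspects packet-in events against a policy."""

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        pkt = packet.Packet(msg.data)
        eth = pkt.get_protocol(ethernet.ethernet)
        # Placeholder check: real logic would consult the threat-assessment
        # model before allowing or diagnosing the flow.
        if eth and self.violates_policy(eth.src, eth.dst):
            self.logger.warning("policy violation: %s -> %s", eth.src, eth.dst)

    def violates_policy(self, src, dst):
        return False  # illustrative stub
```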
The dynamic state of networks presents a challenge for the deployment of distributed applications and protocols. Ad-hoc schedules in the updating phase can lead to considerable ambiguity and many issues. By separating the control and data planes and centralizing control, Software-Defined Networking (SDN) offers novel opportunities and remedies for these issues. However, a software-based centralized architecture for distributed environments introduces its own significant challenges, and security is a crucial one in SDN. This paper presents an in-depth study of the state of the art of security challenges and solutions for the SDN paradigm. The conducted study helped us propose a dynamic approach to efficiently detect different security violations and incidents caused by network updates, including forwarding loops, forwarding black holes, link congestion, and network policy violations. Our solution relies on an intelligent approach based on Machine Learning and Artificial Intelligence algorithms.
Authored by Amina SAHBI, Faouzi JAIDI, Adel BOUHOULA
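The abstract does not specify the learning algorithm; as a generic illustration of detecting anomalous network updates, a sketch using scikit-learn's IsolationForest over invented per-update features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one network update with illustrative features:
# [rules_changed, avg_path_length, max_link_utilization, ttl_expired_rate]
baseline_updates = np.array([
    [3, 4.0, 0.41, 0.001],
    [5, 4.2, 0.38, 0.002],
    [2, 3.9, 0.45, 0.001],
    [4, 4.1, 0.40, 0.003],
])

# Fit on updates known to be healthy, then score new ones.
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline_updates)

new_update = np.array([[40, 9.7, 0.99, 0.250]])  # loop/congestion symptoms
if model.predict(new_update)[0] == -1:
    print("suspicious network update: possible loop, black hole, or congestion")
```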
Nowadays, life is much easier with the help of IoT. However, due to a lack of protection and a growing number of connections, managing IoT becomes more difficult. To manage network flows, Software-Defined Networking (SDN) has been introduced. SDN has great capability for automatic and dynamic distribution, but a centralized SDN architecture opens the door to harmful attacks on the controller. Therefore, to reduce these attacks in real time, securing an SDN-enabled IoT infrastructure with Fog networks is preferred. Network enforcement decisions are authorized at the virtual switches and executed through the SDN network. Moreover, SDN switches are generally powerful machines that can simultaneously be used as fog nodes, so SDN looks like a good choice for IoT Fog networks. In addition, the centralized, software-based channel-protection management solution allows the necessary cryptographic keys to be distributed dynamically in order to establish Datagram Transport Layer Security (DTLS) tunnels between the IoT devices when demanded by the cybersecurity framework. In extensive deployments of this combination, CPU usage between devices is observed to be 30% and latencies are in the millisecond range, demonstrating the feasibility of the system with low delay. Compared with traditional SDN, energy consumption is observed to be reduced by more than 90%.
Authored by Venkata Mohan, Sarangam Kodati, V. Krishna
Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper quality of the software. It encompasses the entire software development life cycle, including requirements engineering, software design, coding, source code reviews, software configuration management, testing, release management, software deployment, and software integration. It is organized into goals, commitments, abilities, activities, measurements, and verification and validation. In this talk, we focus mainly on the testing activity of the software development life cycle. Its main objective is to check that software satisfies the set of quality properties identified by the "ISO/IEC 25010:2011 System and Software Quality Model" standard [1].
Authored by Wissam Mallouli
Evolving, new-age cybersecurity threats have put the information security industry on high alert. Modern cyberattacks involve malware, phishing, artificial intelligence, machine learning, and cryptocurrency. Our research highlights the importance and role of Software Quality Assurance in raising security standards that will not just protect the system but also handle cyber-attacks better. From the series of cyber-attacks studied, we conclude that implementing code review and penetration testing will protect our data's integrity, availability, and confidentiality. We gathered the user requirements of an application and gained a proper understanding of its functional and non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software was still vulnerable to potential issues. We therefore propose two additional stages in the software quality assurance process to address this problem. After implementing this framework, we saw that most potential threats were fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
The increase of autonomy in the development of autonomous surface vehicles brings with it new and modified risks and potential hazards; this, in turn, introduces the need for processes and methods for ensuring that systems are acceptable for their intended use with respect to dependability and safety concerns. One approach to evaluating software requirements for claims of safety is to employ an assurance case. Much like a legal case, the assurance case lays out an argument and supporting evidence to provide assurance on the software requirements. This paper analyses safety and security requirements relating to autonomous vessels, along with regulations in the automotive and marine industries, before proposing a generic cybersecurity and safety assurance case that takes the graphical approach of Goal Structuring Notation (GSN).
Authored by Luis-Pedro Cobos, Tianlei Miao, Kacper Sowka, Garikayi Madzudzo, Alastair Ruddle, Ehab Amam
The use of software to support the information infrastructure that governments, critical infrastructure providers, and businesses worldwide rely on for their daily operations and business processes is gradually becoming unavoidable. Commercial off-the-shelf software is widely and increasingly used by these organizations to automate processes with information technology. At the same time, cyber-attacks are becoming stealthier and more sophisticated, creating a complex and dynamic risk environment for IT-based operations that users are working to better understand and manage. Users have therefore become increasingly concerned about the integrity, security, and reliability of commercial software. To address these concerns and meet customer requirements, vendors have undertaken significant efforts to reduce vulnerabilities, improve resistance to attack, and protect the integrity of the products they sell. These efforts are often referred to as "software assurance." Software assurance is becoming very important for organizations critical to public safety and economic and national security. These users require a high level of confidence that commercial software is as secure as possible, something achieved only when software is created using best practices for secure software development. In this paper, we therefore explore the need for information assurance and its importance for both organizations and end users, along with methodologies and best practices for software security and information assurance; we also conducted a survey to understand end users' opinions on the methodologies researched in this paper and their impact.
Authored by Muhammad Khan, Enow Ehabe, Akalanka Mailewa
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among the SoS grows. These interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for fully autonomous road vehicles. The goal-based, technology-neutral features of this standard make it adaptable to other industries and applications. This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs). SPIs provide predictive safety information, thus offering an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in a subordinate assurance case by their developer. These variants of the assurance case inherently reflect the evolution of the vehicle-level derivatives and options in the context of their specific target ecosystem. These subordinate assurance cases are nested under the argument put forward by the OEM of components and aircraft, for certification credit. It has become common practice in aviation to address design hazards through operational mitigations. It is also common for hazards noted in an aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across their customers' supply chains to ensure their validity. The OEM is expected to exercise its sphere of control over its product even if it includes outsourced components. Any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations.
For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPIs that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres of control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome. These considerations are much more than interface requirements and include explicit hazard mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes. Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs for which existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPIs. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations. The resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
Authored by Uma Ferrell, Alfred Anderegg
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is conducting Information Assurance (IA). In addition, there is a desire to network ATE to allow for information sharing and deployment of software. This is complicated by the fact that ATE are typically "unmanaged" systems: most are configured, deployed, and then mostly left alone. This results in systems that are not patched with the latest Operating System updates and may in fact be running on legacy Operating Systems that are no longer supported (such as Windows XP or Windows 7). Much of this has to do with the cost of keeping a system updated on a continuous basis and regression testing the Test Program Sets (TPS) that run on it. Given that an Automated Test System can have thousands of Test Programs running on it, the cost and time involved in complete regression testing of all the Test Programs can be extreme. In addition to the Test Programs themselves, some Test Programs rely on third-party and/or custom-developed software that is required for them to run. Add to this the requirement to perform software steering through all the Test Program paths, and the time required to validate a Test Program could in some cases be measured in months. If system updates are performed once a month, like some Operating System updates, this could consume all the available time of the Test Station or require a fleet of Test Stations dedicated just to the required regression testing. On the other side of the coin, a Test System running an old, unpatched Operating System is a prime target for any manner of virus or other IA issues. This paper discusses some of the pros and cons of a managed Test System and how it might be accomplished.
Authored by William Headrick
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (e.g., healthcare, energy, banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game-theoretic approach; it also logs and scores the actions of an AI model to detect adversarial inputs, and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI using user inputs and statistical assurance measures. Each pipeline has three algorithms that generate the final assurance scores and other outcomes. Unlike ALSP (which is a parallel process), RFSP is user-driven and its actions are sequential. Data are collected for experimentation; the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
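The internals of RFSP are not reproduced here; as a generic illustration of Bayesian-style hyperparameter selection feeding a composite assurance score, a sketch using Optuna (the score weights and per-trial metrics are invented):

```python
import optuna

def assurance_score(accuracy, explainability, safety):
    """Invented composite score; a real pipeline would weight user inputs."""
    return 0.5 * accuracy + 0.3 * explainability + 0.2 * safety

def objective(trial):
    # Hypothetical hyperparameters for some underlying model.
    depth = trial.suggest_int("max_depth", 2, 16)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    # Stand-ins for metrics a real pipeline would measure per trial.
    accuracy = 1.0 - abs(depth - 8) / 16 - lr
    explainability = 1.0 / depth  # shallower models explain better
    safety = 0.9                  # e.g., from adversarial testing
    return assurance_score(accuracy, explainability, safety)

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)
print("best hyperparameters:", study.best_params)
```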
The daily use of software has become inevitable nowadays. Almost all everyday tools and the most diverse areas (e.g., medicine or telecommunications) are dependent on software. The C programming language is one of the most used languages for software development, including operating systems, drivers, embedded systems, and industrial products. Even with the appearance of new languages, it remains one of the most used [1]. At the same time, C lacks verification mechanisms, such as checks on array boundaries, leaving the developer entirely responsible for the correct management of memory and resources. These weaknesses are at the root of buffer overflow (BO) vulnerabilities, which rank first in CWE's Top 25 of the most dangerous weaknesses [2]. The exploitation of BO vulnerabilities in safety-critical systems, such as railways and autonomous cars, can have catastrophic effects for manufacturers or endanger human lives.
Authored by João Inácio, Ibéria Medeiros
The FAA proposes a Safety Continuum recognizing that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments. The purpose is to align the rigor of oversight with public expectations. An aircraft, its variants, or derivatives may be used in operations with different expectations. Differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of dynamic ecosystems, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under part 91 or 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives. This paper proposes a model for a holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving the safety outcomes. It describes the application of the goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600 to the Safety Continuum, and specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine design approval more seamlessly with the oversight functions by focusing all aspects of the argument and practice together to manage the safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation's place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit for use." The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for growing variety in a product line by accounting for ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability for delivering the expected safety outcomes.
Authored by Alfred Anderegg, Uma Ferrell
Military operations in scenarios with limited communications infrastructure employ flexible solutions to optimize the data-processing cycle using situational awareness systems, guaranteeing interoperability and assisting in all decision-making processes. This paper presents an architecture for the integration of Command, Control, Computing, Communication, Intelligence, Surveillance and Reconnaissance (C4ISR) systems, developed within the scope of the Brazilian Ministry of Defense, in the context of operations with Unmanned Aerial Vehicles (UAVs) - swarm drones - and the Internet of Battlefield Things (IoBT) concept. The solution comprises the following intelligent subsystems embedded in the UAVs: STFANET, an SDN-based topology management system for Flying Ad Hoc Networks focusing on drone swarm operations, developed by the University of Rio Grande do Sul; Interoperability of Command and Control (INTERC2), an intelligent communication middleware developed by the Brazilian Navy; a Mission-Oriented Sensors Array (MOSA), which automates data acquisition, data fusion, and data sharing, developed by the Brazilian Army; the In-Flight Awareness Augmentation System (IFA2S), developed by the Brazilian Air Force to increase the navigation safety of UAVs; data mining techniques to optimize the MOSA with data patterns; and an adaptive-collaborative system, composed of a Software-Defined Radio (SDR) to identify electromagnetic signals and a Geographical Information System (GIS) to organize the processed information. As its main contribution in this conceptual phase, this research proposes an application that describes the premises for increasing the capacity to sense threats in poorly structured zones, such as the Amazon rainforest, using existing communications solutions of Brazilian defense monitoring systems.
Authored by Nina Figueira, Pablo Pochmann, Abel Oliveira, Edison de Freitas
Military networks consist of heterogeneous devices that provide soldiers with real-time terrain and mission intelligence. The development of next-generation Software-Defined Networking (SDN)-enabled devices is enabling the modernization of traditional military networks. Commonly, traditional military networks take the trustworthiness of devices for granted. However, the recent modernization of military networks introduces cyber attacks such as data and identity spoofing. Hence, it is crucial to ensure the trustworthiness of network traffic to ensure the mission's outcome. This work proposes a Continuous Behavior-based Authentication (CBA) protocol that integrates network traffic analysis techniques to provide robust and efficient network management flow by separating the data and control planes in SDN-enabled military networks. Our evaluation measures the efficiency of the proposed protocol in realistic military networks. Furthermore, we analyze the overall network overhead of the CBA protocol and its accuracy in detecting rogue network traffic from field devices.
Authored by Abel Rivera, Evan White, Jaime Acosta, Deepak Tosh
The latest generation of IoT systems incorporates machine learning (ML) technologies on edge devices. This introduces new engineering challenges in bringing ML onto resource-constrained hardware, and complications for ensuring system security and privacy. Existing research prescribes iterative processes for machine-learning-enabled IoT products to ease development and increase product success. However, these processes mostly focus on existing practices used in other generic software development areas and are not specialized for ML or IoT devices. This research seeks to characterize engineering processes and security practices for ML-enabled IoT systems through the lens of the engineering lifecycle. We collected data from practitioners through a survey (N=25) and interviews (N=4). We found that security processes and engineering methods vary by company. Respondents emphasized the engineering cost of security analysis and threat modeling, and trade-offs with business needs. Engineers reduce their security investment if it is not an explicit requirement. The threats of IP theft and reverse engineering were a consistent concern among practitioners when deploying ML on IoT devices. Based on our findings, we recommend further research into understanding engineering cost, compliance, and security trade-offs.
Authored by Nikhil Gopalakrishna, Dharun Anandayuvaraj, Annan Detti, Forrest Bland, Sazzadur Rahaman, James Davis
"Security first" is the most concerned issue of Linux administrators. Security refers to the integrity of data. The authentication security and integrity of data are higher than the privacy security of data. Firewall is used to realize the function of access control under Linux. It is divided into hardware or software firewall. No matter in which network, the firewall must work at the edge of the network. Our task is to define how the firewall works. This is the firewall's policies and rules, so that it can detect the IP and data in and out of the network. At present, there are three or four layers of firewalls on the market, which are called network layer firewalls, and seven layers of firewalls, which are actually the gateway of the agent layer. But for the seven layer firewall, no matter what your source port or target port, source address or target address is, it will check all your things. Therefore, the seven layer firewall is more secure, but it brings lower efficiency. Therefore, the usual firewall schemes on the market are a combination of the two. And because we all need to access from the port controlled by the firewall, the work efficiency of the firewall has become the most important control of how much data users can access. This paper introduces two types of firewalls iptables and TCP\_Wrappers. What are the differences between the use policies, rules and structures of the two firewalls? This is the problem to be discussed in this paper.
Authored by Limei Ma, Dongmei Zhao
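To make the comparison above concrete, a short sketch (requires root; the addresses and service are examples) that installs packet-filter rules via iptables from Python, with the TCP_Wrappers equivalent shown in comments:

```python
import subprocess

# iptables works at the packet-filter (netfilter) level: drop all inbound
# TCP traffic to port 23 except from one trusted subnet.
rules = [
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "23",
     "-s", "192.168.1.0/24", "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "23", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(rule, check=True)  # requires root privileges

# TCP_Wrappers instead filters per service for wrapper-aware daemons, via
# /etc/hosts.allow and /etc/hosts.deny, e.g.:
#   /etc/hosts.allow:  in.telnetd: 192.168.1.
#   /etc/hosts.deny:   in.telnetd: ALL
```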
Random numbers are essential for communications security, as they are widely employed as secret keys and other critical parameters of cryptographic algorithms. The Linux random number generator (LRNG) is the most popular open-source software-based random number generator (RNG). The security of the LRNG is influenced by the overall design, especially the quality of its entropy sources. It is therefore necessary to assess and quantify the quality of the entropy sources that contribute the main randomness to RNGs. In this paper, we perform an empirical study on the quality of entropy sources in the LRNG with Linux kernel 5.6, and report two findings. First, we analyze two important entropy sources, jiffies and cycles, and propose a method to predict jiffies from cycles with high accuracy. The results indicate that jiffies can be correctly predicted and thus contain almost no entropy when cycles are known. The second finding is the failure of interrupt cycles during system boot: the lower bits of cycles caused by interrupts contain little entropy, contrary to the traditional view that lower bits have more entropy. We believe these findings are of great significance for improving the efficiency and security of RNG designs on software platforms.
Authored by Mingshu Du, Yuan Ma, Na Lv, Tianyu Chen, Shijie Jia, Fangyu Zheng
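The paper's exact predictor is not reproduced here, but the underlying relationship can be sketched: jiffies advance at the kernel tick rate (CONFIG_HZ) while the cycle counter advances at the CPU frequency, so a single calibration pair suffices (all constants below are illustrative):

```python
HZ = 250                  # kernel tick rate (CONFIG_HZ), system-dependent
CPU_FREQ = 2_400_000_000  # cycle-counter frequency in Hz, assumed known

CYCLES_PER_JIFFY = CPU_FREQ // HZ

def predict_jiffies(cycles, cycles0, jiffies0):
    """Predict the current jiffies value from a cycles reading, given one
    earlier (cycles, jiffies) calibration pair."""
    return jiffies0 + (cycles - cycles0) // CYCLES_PER_JIFFY

# With an accurate predictor, an observer who knows cycles learns (almost)
# nothing new from jiffies: the jiffies input contributes essentially zero
# additional entropy, matching the paper's finding.
print(predict_jiffies(cycles=4_810_000_000, cycles0=4_800_000_000,
                      jiffies0=1_000))  # -> 1001 (one jiffy later)
```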
Operating systems are essential software components for any computer. The goal of computer system manufacturers is to provide a safe operating system that can resist a range of attacks. Advanced Persistent Threats (APTs) are just one kind of attack used by hackers to penetrate organisations. Here, we apply the MITRE ATT&CK approach to analyze the security of Windows and Linux. Using the results of a series of vulnerability tests conducted on Windows 7, 8, 10, and Windows Server 2012, as well as Linux 16.04, 18.04, and its most current version, we establish which operating system offers the most protection against future attacks. In addition, we show adversarial reflection in response to threats. We used ATT&CK framework tools to launch attacks on both platforms.
Authored by Hira Sikandar, Usman Sikander, Adeel Anjum, Muazzam Khan
Exploring the efficient vulnerability scanning and detection technology of various tools is one fundamental aim of network security. Such techniques help address the tremendous number of IoT security challenges and the threats IoT devices face daily. Among the various tools, Shodan Eye scanning technology has proven very helpful for network administrators and security personnel in scanning, detecting, and analyzing vulnerable ports and traffic in organizations' networks. This work presents a simulated network scanning activity and manual vulnerability analysis of internet-connected industrial equipment in two chosen industrial networks (Industry A and B), performed by running Shodan Eye on a virtually hosted (Oracle VirtualBox) Linux-based operating system (Kali Linux). The results show that Shodan Eye is a promising tool for network security and efficient vulnerability research.
Authored by Ebuka Nkoro, Cosmas Nwakanma, Jae-Min Lee, Dong-Seong Kim
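The study above uses the Shodan Eye tool; an analogous query can be sketched with the official shodan Python library (the API key and search filter are placeholders, and such scans should only target networks you are authorized to audit):

```python
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# Example query for internet-exposed industrial equipment speaking Modbus.
results = api.search("port:502 country:KR")

print(f"{results['total']} exposed hosts found")
for match in results["matches"][:10]:
    print(match["ip_str"], match.get("org", "n/a"), match["port"])
```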
As information and communication technologies evolve every day, so does the use of technology in our daily lives. Along with our increasing dependence on digital information assets, security vulnerabilities are becoming more and more apparent. Passwords are a critical component of secure access to digital systems and applications. They not only prevent unauthorized access to these systems, but also distinguish the users of such systems. Research on password predictability often relies on surveys or leaked data, so there is a gap in the literature for studies that consider real data in this regard. This study investigates the password security awareness of 161 computer engineering students enrolled in a Linux-based undergraduate course at Ataturk University. The study is conducted in two phases: in the first phase, 12 dictionaries that also contain real student data are formed. In the second phase, a dictionary-based brute-force attack is carried out by means of serial and parallel versions of a Bash script to crack the students' passwords. In this respect, the /etc/shadow file of the Linux system is used as the basis for comparing the hashed versions of the guessed passwords. As a result, the passwords of 23 students, accounting for 14% of the entire group, were cracked. We believe this is an unacceptably high rate for a group with such high digital literacy. Because of this important finding, we took immediate action and shared the results with the instructor responsible for the information security course that is included in our curriculum and offered in a following semester.
Authored by Deniz Dal, Esra Çelik
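The paper's Bash scripts are not reproduced; the core hash-comparison step can be sketched in Python with the standard crypt module (Unix-only; deprecated since Python 3.11 and removed in 3.13), using file paths analogous to the study's:

```python
import crypt  # Unix-only; removed from the stdlib in Python 3.13

def crack_shadow(shadow_path="/etc/shadow", wordlist_path="wordlist.txt"):
    """Try every dictionary word against every hashed entry in the shadow file."""
    with open(wordlist_path) as f:
        words = [line.strip() for line in f if line.strip()]

    with open(shadow_path) as f:  # reading /etc/shadow requires root
        for line in f:
            user, hashed = line.split(":")[:2]
            if hashed in ("*", "!", "!!", ""):
                continue  # locked or passwordless accounts
            for word in words:
                # crypt() reuses the algorithm/salt prefix embedded in the
                # hash, so equal output means the guess matches the password.
                if crypt.crypt(word, hashed) == hashed:
                    print(f"[+] cracked {user}: {word}")
                    break

if __name__ == "__main__":
    crack_shadow()
```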