Nowadays, life is much easier with the help of IoT. However, weak protection and a growing number of connections make the management of IoT more difficult. Software Defined Networking (SDN) has been introduced to manage network flows, and it has strong capabilities for automatic and dynamic distribution. At the same time, a centralized SDN architecture opens the door to harmful attacks on the controller. Therefore, to mitigate such attacks in real time, securing an SDN-enabled IoT infrastructure over Fog networks is preferred. Network-enforcement decisions are authorized at the virtual switches and executed through the SDN network. Moreover, SDN switches are generally powerful machines and can simultaneously serve as fog nodes, which makes SDN a good fit for Fog networks in IoT. In addition, a centralized software channel-protection management solution allows the necessary crypto keys to be distributed dynamically in order to establish Datagram Transport Layer Security (DTLS) tunnels between IoT devices on demand from the cybersecurity framework. In an extensive deployment of this combination, CPU usage between devices is observed to be 30% and latencies are in the millisecond range, demonstrating the feasibility of the system with low delay. Compared with traditional SDN, energy consumption is reduced by more than 90%.
Authored by Venkata Mohan, Sarangam Kodati, V. Krishna
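A minimal Python sketch of the centralized key-distribution idea described above: a controller-side manager issues a fresh per-pair key on demand, as the channel-protection solution would before a DTLS tunnel is set up. All names are hypothetical, and the actual DTLS handshake and SDN control channel are omitted.

```python
import secrets

class KeyDistributionController:
    """Toy stand-in for the centralized SDN key manager (hypothetical)."""
    def __init__(self):
        self._pair_keys = {}  # (device_a, device_b) -> shared key bytes

    def request_tunnel_key(self, device_a: str, device_b: str) -> bytes:
        pair = tuple(sorted((device_a, device_b)))
        if pair not in self._pair_keys:
            # 256-bit key; a real deployment would derive, rotate, and push
            # keys over a protected control channel to both endpoints.
            self._pair_keys[pair] = secrets.token_bytes(32)
        return self._pair_keys[pair]

controller = KeyDistributionController()
key = controller.request_tunnel_key("sensor-17", "fog-node-3")
print("PSK for tunnel:", key.hex()[:16] + "...")
```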
Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper quality of the software. It encompasses the entire software development life cycle, including requirements engineering, software design, coding, source code reviews, software configuration management, testing, release management, software deployment, and software integration. It is organized into goals, commitments, abilities, activities, measurements, and verification and validation. In this talk, we will mainly focus on the testing activity of the software development life cycle. Its main objective is to check that software satisfies a set of quality properties identified by the "ISO/IEC 25010:2011 System and Software Quality Model" standard [1].
Authored by Wissam Mallouli
Evolving, new-age cybersecurity threats have set the information security industry on high alert. These modern cyberattacks involve malware, phishing, artificial intelligence, machine learning, and cryptocurrency. Our research highlights the importance and role of Software Quality Assurance in raising security standards that will not just protect the system but also handle cyber-attacks better. In light of the recent series of cyber-attacks, we conclude from our research that implementing code review and penetration testing will protect the integrity, availability, and confidentiality of our data. We gathered the user requirements of an application and gained a proper understanding of its functional as well as non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software was still vulnerable to potential issues. We propose two additional stages in the software quality assurance process to address this problem. After implementing this framework, we found that most potential threats were fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
The increase of autonomy in autonomous surface vehicle development brings along modified and new risks and potential hazards; this, in turn, introduces the need for processes and methods to ensure that systems are acceptable for their intended use with respect to dependability and safety concerns. One approach to evaluating software requirements for claims of safety is to employ an assurance case. Much like a legal case, the assurance case lays out an argument and supporting evidence to provide assurance on the software requirements. This paper analyses safety and security requirements relating to autonomous vessels, as well as regulations in the automotive and marine industries, before proposing a generic cybersecurity and safety assurance case that takes the general graphical approach of Goal Structuring Notation (GSN).
Authored by Luis-Pedro Cobos, Tianlei Miao, Kacper Sowka, Garikayi Madzudzo, Alastair Ruddle, Ehab Amam
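As an illustration of the Goal Structuring Notation approach the paper adopts, here is a small Python sketch that models a GSN fragment as a tree of goals, strategies, and solutions (evidence). The node contents are invented, not taken from the paper's assurance case.

```python
from dataclasses import dataclass, field

@dataclass
class GSNNode:
    node_id: str
    kind: str          # "Goal", "Strategy", or "Solution" (evidence)
    statement: str
    children: list = field(default_factory=list)

    def add(self, child: "GSNNode") -> "GSNNode":
        self.children.append(child)
        return child

root = GSNNode("G1", "Goal", "The autonomous vessel is acceptably safe and secure")
s1 = root.add(GSNNode("S1", "Strategy", "Argue over identified cyber and safety hazards"))
g2 = s1.add(GSNNode("G2", "Goal", "Navigation tolerates GPS spoofing"))
g2.add(GSNNode("Sn1", "Solution", "Penetration test report (illustrative)"))

def render(node: GSNNode, depth: int = 0) -> None:
    print("  " * depth + f"[{node.kind}] {node.node_id}: {node.statement}")
    for child in node.children:
        render(child, depth + 1)

render(root)
```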
The use of software to support the information infrastructure that governments, critical infrastructure providers, and businesses worldwide rely on for their daily operations and business processes is gradually becoming unavoidable. Commercial off-the-shelf software is widely and increasingly used by these organizations to automate processes with information technology. Notwithstanding, cyber-attacks are becoming stealthier and more sophisticated, which has led to a complex and dynamic risk environment for IT-based operations that users are working to better understand and manage. This has made users increasingly concerned about the integrity, security, and reliability of commercial software. To address these concerns and meet customer requirements, vendors have undertaken significant efforts to reduce vulnerabilities, improve resistance to attack, and protect the integrity of the products they sell. These efforts are often referred to as "software assurance." Software assurance is becoming very important for organizations critical to public safety and economic and national security. These users require a high level of confidence that commercial software is as secure as possible, something only achieved when software is created using best practices for secure software development. Therefore, in this paper, we explore the need for information assurance and its importance for both organizations and end users, along with methodologies and best practices for software security and information assurance, and we also conducted a survey to understand end users' opinions on the methodologies researched in this paper and their impact.
Authored by Muhammad Khan, Enow Ehabe, Akalanka Mailewa
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among the SoS grows. These interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for completely autonomous road vehicles. The goal-based, technology-neutral features of this standard make it adaptable to other industries and applications. This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs). SPIs provide predictive safety information, thus offering an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in a subordinate assurance case by their developer. These variants of the assurance case inherently reflect the evolution of the vehicle-level derivatives and options in the context of their specific target ecosystem. These subordinate assurance cases are nested under the argument put forward by the OEM of components and aircraft, for certification credit. It has become common practice in aviation to address design hazards through operational mitigations. It is also common for hazards noted in an aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across their customers' supply chains to ensure their validity. The OEM is expected to exercise its sphere-of-control over its product even if it includes outsourced components. Any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations.
For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPIs that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres-of-control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome. These considerations are much more than interface requirements and include explicit hazard mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes. Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs whose existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain, and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPIs. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations. The resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
Authored by Uma Ferrell, Alfred Anderegg
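The continuous-validation loop described above, with SPIs monitored against targets and violations triggering an assurance-case update, can be sketched in a few lines of Python. The SPI names and thresholds below are hypothetical.

```python
# Hypothetical SPI targets; a violation invalidates the supported claim.
spi_targets = {
    "undetected_object_rate": 1e-4,     # maximum acceptable rate
    "disengagements_per_1k_hours": 2.0,
}

def validate_assurance_case(observed: dict) -> list:
    """Return the SPIs whose observed values exceed their targets."""
    return [name for name, limit in spi_targets.items()
            if observed.get(name, float("inf")) > limit]

violations = validate_assurance_case({
    "undetected_object_rate": 3e-4,     # drifted past its target
    "disengagements_per_1k_hours": 1.1,
})
print("SPIs requiring an assurance-case update:", violations)
```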
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is conducting Information Assurance (IA). In addition, there is a desire to network ATE to allow for information sharing and deployment of software. This is complicated by the fact that ATE are typically "unmanaged" systems, in that most are configured, deployed, and then mostly left alone. This results in systems that are not patched with the latest Operating System updates and may in fact be running on legacy Operating Systems which are no longer supported (like Windows XP or Windows 7, for instance). A lot of this has to do with the cost of keeping a system updated on a continuous basis and regression testing the Test Program Sets (TPS) that run on them. Given that an Automated Test System can have thousands of Test Programs running on it, the cost and time involved in doing complete regression testing on all the Test Programs can be extremely high. In addition, some Test Programs rely on third-party software and/or custom-developed software that is required for the Test Programs to run. Add to this the requirement to perform software steering through all the Test Program paths, and the length of time required to validate a Test Program could be measured in months in some cases. If system updates are performed once a month, like some Operating System updates, this could consume all the available time of the Test Station or require a fleet of Test Stations dedicated just to the required regression testing. On the other side of the coin, a Test System running an old unpatched Operating System is a prime target for any manner of virus or other IA issues. This paper will discuss some of the pros and cons of a managed Test System and how it might be accomplished.
Authored by William Headrick
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, or banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game-theoretic approach; it also logs and scores the actions of an AI model to detect adversarial inputs, and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP (which is a parallel process), RFSP is user-driven and its actions are sequential. Data are collected for experimentation; the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
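To suggest the flavor of RFSP's hyper-parameter selection step, here is a dependency-free Python sketch. The paper uses a Bayesian approach; this stand-in uses plain random search, and the scoring function and parameter ranges are invented.

```python
import random

def assurance_score(params: dict) -> float:
    # Placeholder objective: a real pipeline would train a model and
    # combine statistical assurance measures with user-weighted goals.
    return -(params["lr"] - 0.01) ** 2 - 0.1 * abs(params["depth"] - 6)

random.seed(0)
best, best_score = None, float("-inf")
for _ in range(200):
    candidate = {"lr": random.uniform(1e-4, 0.1),
                 "depth": random.randint(2, 12)}
    score = assurance_score(candidate)
    if score > best_score:
        best, best_score = candidate, score
print("selected hyper-parameters:", best)
```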
The use of software in daily life has become inevitable nowadays. Almost all everyday tools and the most diverse areas (e.g., medicine or telecommunications) depend on software. The C programming language is one of the most used languages for software development, including operating systems, drivers, embedded systems, and industrial products. Even with the appearance of new languages, it remains one of the most used [1]. At the same time, C lacks verification mechanisms, such as checking of array boundaries, leaving the entire responsibility for the correct management of memory and resources to the developer. These weaknesses are at the root of buffer overflow (BO) vulnerabilities, which rank first in CWE's Top 25 of the most dangerous weaknesses [2]. The exploitation of BO vulnerabilities in safety-critical systems, such as railways and autonomous cars, can have catastrophic effects for manufacturers or endanger human lives.
Authored by João Inácio, Ibéria Medeiros
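A toy illustration of the kind of weakness the paper targets: the Python sketch below flags C library calls commonly implicated in buffer overflows. It is a naive, regex-based reviewer's aid, not the proper static analysis such work requires, and the sample source is invented.

```python
import re

RISKY_CALLS = ("strcpy", "strcat", "sprintf", "gets")

def flag_risky_lines(c_source: str):
    """Yield (line number, line) for unbounded C calls in the source."""
    pattern = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")
    for lineno, line in enumerate(c_source.splitlines(), 1):
        if pattern.search(line):
            yield lineno, line.strip()

sample = """\
char buf[8];
strcpy(buf, user_input);                      /* no bounds check */
snprintf(buf, sizeof buf, "%s", user_input);  /* bounded, not flagged */
"""
for lineno, line in flag_risky_lines(sample):
    print(f"line {lineno}: {line}")
```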
The FAA proposes a Safety Continuum that recognizes that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments. The purpose is to align the rigor of oversight with public expectations. An aircraft, its variants, or derivatives may be used in operations with different expectations. Differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of dynamic ecosystems, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under Part 91 or Part 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives. This paper proposes a model for a holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving the safety outcomes. It describes the application of the goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600, to the Safety Continuum. The paper specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine design approval more seamlessly with oversight functions by focusing all aspects of the argument and practice together on managing the safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation's place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit for use". The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for growing variety in a product line by accounting for ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability in delivering the expected safety outcomes.
Authored by Alfred Anderegg, Uma Ferrell
IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with the increasing temporal and spatial scale of modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning, and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review recent progress in addressing these gaps.
Authored by Tarek Abdelzaher, Nathaniel Bastian, Susmit Jha, Lance Kaplan, Mani Srivastava, Venugopal Veeravalli
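A toy Python sketch of the neuro-symbolic pattern the paper describes: neural detectors emit atomic concepts with confidences, and a symbolic rule combines them into a higher-level inference. The concept names, scores, and rule are invented for illustration.

```python
def neural_detections(frame) -> dict:
    # Stand-in for deep-learning outputs on one multi-modal sensor frame.
    return {"vehicle": 0.93, "person_near_vehicle": 0.81, "weapon": 0.12}

def symbolic_rule(concepts: dict, threshold: float = 0.7) -> str:
    # Rule: vehicle AND nearby person AND no weapon -> routine checkpoint stop.
    if (concepts["vehicle"] > threshold
            and concepts["person_near_vehicle"] > threshold
            and concepts["weapon"] < threshold):
        return "routine checkpoint stop"
    return "no symbolic inference"

print(symbolic_rule(neural_detections(frame=None)))
```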
Recent concepts in defense herald an increasing degree of automation of future military systems, with an emphasis on accelerating sensing-to-decision loops at the tactical edge, reducing their network communication footprint, and improving the inference quality of intelligent components in the loop. These requirements pose resource management challenges, calling for operating-system-like constructs that optimize the use of limited computational resources at the tactical edge. This paper describes these challenges and presents IoBT-OS, an operating system for the Internet of Battlefield Things that aims to optimize decision latency, improve decision accuracy, and reduce corresponding resource demands on computational and network components. A simple case-study with initial evaluation results is shown from a target tracking application scenario.
Authored by Dongxin Liu, Tarek Abdelzaher, Tianshi Wang, Yigong Hu, Jinyang Li, Shengzhong Liu, Matthew Caesar, Deepti Kalasapura, Joydeep Bhattacharyya, Nassy Srour, Jae Kim, Guijun Wang, Greg Kimberly, Shouchao Yao
An intelligent service network under the Internet of Things (IoT) paradigm uses sensor and network communication technology to realize the interconnection of everything and real-time communication between devices. In a combat setting, all kinds of sensor devices and equipment units need to be highly networked to realize interconnection and information sharing, which makes Internet of Things technology promising for battlefield application, interconnecting these entities to form the Internet of Battlefield Things (IoBT). This paper analyzes the concepts related to IoBT and constructs an IoBT multilayer dependency network model according to the typical characteristics and topology of IoBT. It then constructs the weighted super-adjacency matrix from the coupling weights within and between the different layers, and analyzes and derives the stability model of IoBT. Finally, an example IoBT network is given to provide a reference for analyzing the stability factors of IoBT networks.
Authored by Haihao Ding, Qingsong Zhao
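The super-adjacency construction is easy to make concrete. In the NumPy sketch below (with invented weights), two 3-node layers occupy the diagonal blocks, the inter-layer coupling occupies the off-diagonal blocks, and the spectral radius of the resulting matrix serves as one common stability indicator; the paper's actual stability model may differ.

```python
import numpy as np

# Intra-layer adjacency for two layers (e.g., sensing and communication).
A1 = np.array([[0.0, 0.6, 0.2],
               [0.6, 0.0, 0.4],
               [0.2, 0.4, 0.0]])
A2 = np.array([[0.0, 0.5, 0.0],
               [0.5, 0.0, 0.7],
               [0.0, 0.7, 0.0]])
C = 0.3 * np.eye(3)   # node i in layer 1 coupled to node i in layer 2

supra = np.block([[A1, C],
                  [C.T, A2]])      # weighted super-adjacency matrix
rho = max(abs(np.linalg.eigvals(supra)))
print(f"spectral radius: {rho:.3f}")
```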
Military operations in scenarios with low communications infrastructure employ flexible solutions to optimize the data processing cycle using situational awareness systems, guaranteeing interoperability and assisting in all processes of decision-making. This paper presents an architecture for the integration of Command, Control, Computing, Communication, Intelligence, Surveillance and Reconnaissance (C4ISR) systems, developed within the scope of the Brazilian Ministry of Defense, in the context of operations with Unmanned Aerial Vehicles (UAVs), namely drone swarms, and the Internet of Battlefield Things (IoBT) concept. The solution comprises the following intelligent subsystems embedded in UAVs: STFANET, an SDN-based topology management system for Flying Ad Hoc Networks focusing on drone swarm operations, developed by the University of Rio Grande do Sul; Interoperability of Command and Control (INTERC2), an intelligent communication middleware developed by the Brazilian Navy; a Mission-Oriented Sensors Array (MOSA), which provides automated data acquisition, data fusion, and data sharing, developed by the Brazilian Army; the In-Flight Awareness Augmentation System (IFA2S), developed by the Brazilian Air Force to increase the navigation safety of UAVs; data mining techniques to optimize the MOSA with data patterns; and an adaptive-collaborative system, composed of a Software Defined Radio (SDR) to identify electromagnetic signals and a Geographical Information System (GIS) to organize the processed information. As its main contribution in this conceptual phase, this research proposes an application that describes the premises for increasing the capacity to sense threats in poorly structured zones, such as the Amazon rainforest, using existing communications solutions of Brazilian defense monitoring systems.
Authored by Nina Figueira, Pablo Pochmann, Abel Oliveira, Edison de Freitas
Military networks consist of heterogeneous devices that provide soldiers with real-time terrain and mission intelligence. The development of next-generation Software Defined Networking (SDN)-enabled devices is enabling the modernization of traditional military networks. Commonly, traditional military networks take the trustworthiness of devices for granted. However, the recent modernization of military networks introduces cyber attacks such as data and identity spoofing attacks. Hence, it is crucial to ensure the trustworthiness of network traffic to ensure the mission's outcome. This work proposes a Continuous Behavior-based Authentication (CBA) protocol that integrates network traffic analysis techniques to provide robust and efficient network management flow by separating data and control planes in SDN-enabled military networks. The evaluation of the CBA protocol aimed to measure its efficiency in realistic military networks. Furthermore, we analyze the overall network overhead of the CBA protocol and its accuracy in detecting rogue network traffic from field devices.
Authored by Abel Rivera, Evan White, Jaime Acosta, Deepak Tosh
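A minimal sketch of the continuous, behavior-based idea: maintain a running profile of one traffic feature per device and flag large departures from it. The real CBA protocol integrates richer traffic-analysis techniques; the z-score test and the numbers here are illustrative.

```python
import statistics

class BehaviorProfile:
    def __init__(self, window: int = 50, z_limit: float = 3.0):
        self.samples, self.window, self.z_limit = [], window, z_limit

    def observe(self, pkts_per_sec: float) -> bool:
        """Return True if the observation looks like rogue traffic."""
        if len(self.samples) >= 10:
            mu = statistics.mean(self.samples)
            sigma = statistics.pstdev(self.samples) or 1e-9
            if abs(pkts_per_sec - mu) / sigma > self.z_limit:
                return True   # anomalous: trigger re-authentication
        self.samples.append(pkts_per_sec)
        self.samples = self.samples[-self.window:]
        return False

profile = BehaviorProfile()
for rate in [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 950]:
    if profile.observe(rate):
        print(f"anomalous rate {rate} pkts/s -> re-authenticate device")
```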
On the Internet of Battlefield Things (IoBT), unmanned aerial vehicles (UAVs) provide significant operational advantages. However, the exploitation of a UAV by an untrustworthy entity might lead to security violations or possibly the destruction of crucial IoBT network functionality. The IoBT system faces substantial issues related to data tampering and fabrication through illegal access. This paper proposes the use of an intelligent architecture called IoBT-Net, which is built on a convolutional neural network (CNN) and connected with blockchain technology, to identify and trace illicit UAVs in the IoBT system. Data storage on the blockchain ledger is protected from unauthorized access, data tampering, and invasions. In addition, this paper presents a low-complexity and robustly performing CNN, called LRCANet, to estimate the angle of arrival (AOA) for object localization. The proposed LRCANet is efficiently designed with two core modules, called GFPU and stacks, which are cleverly organized with regular and point convolution layers, a max-pool layer, and a ReLU layer associated with residual connectivity. Furthermore, the effectiveness of LRCANet is evaluated across various network and array configurations and by RMSE, and compared with the accuracy and complexity of the existing state of the art. Additionally, the implementation of the tailored drone-based consensus is evaluated in terms of three major classes and compared with other existing consensus mechanisms.
Authored by Mohtasin Golam, Rubina Akter, Revin Naufal, Van-Sang Doan, Jae-Min Lee, Dong-Seong Kim
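A highly simplified PyTorch sketch, inspired by (not reproducing) LRCANet's ingredients as listed above: a regular convolution, a pointwise (1x1) convolution with a residual connection, max pooling, and ReLU, regressing an AOA value from a 1-D array snapshot. All layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinyAoANet(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.conv = nn.Conv1d(2, channels, kernel_size=3, padding=1)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.pool = nn.MaxPool1d(2)
        self.head = nn.Linear(channels, 1)   # predicted AOA (degrees)

    def forward(self, x):
        h = torch.relu(self.conv(x))
        h = torch.relu(self.pointwise(h) + h)   # residual connection
        h = self.pool(h).mean(dim=-1)           # pool, then global average
        return self.head(h)

snapshot = torch.randn(4, 2, 64)      # batch of 4: I/Q channels x 64 samples
print(TinyAoANet()(snapshot).shape)   # torch.Size([4, 1])
```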
Existing solutions for scheduling arbitrarily complex distributed applications on networks of computational nodes are insufficient for scenarios where the network topology is changing rapidly. New Internet of Things (IoT) domains like the Internet of Robotic Things (IoRT) and the Internet of Battlefield Things (IoBT) demand solutions that are robust and efficient in environments that experience constant and/or rapid change. In this paper, we demonstrate how recent advancements in machine learning (in particular, in graph convolutional neural networks) can be leveraged to solve the task scheduling problem with decent performance and in much less time than traditional algorithms.
Authored by Jared Coleman, Mehrdad Kiamari, Lillian Clark, Daniel D'Souza, Bhaskar Krishnamachari
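One graph-convolution propagation step (in the common Kipf-Welling form) is enough to suggest how node embeddings for a scheduler could be computed. The graph, features, and weights below are toy data; the paper's actual architecture is not reproduced here.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],        # adjacency of 4 compute nodes
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

X = np.array([[0.9, 0.1],    # per-node features, e.g. [cpu_free, link_load]
              [0.4, 0.7],
              [0.6, 0.3],
              [0.2, 0.9]])
W = np.random.default_rng(0).normal(size=(2, 3))   # learnable layer weights

H = np.maximum(A_norm @ X @ W, 0)            # ReLU(A_norm X W)
print(H.round(2))   # embeddings a scheduler could rank placements with
```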
Internet Protocol Version 6 (IPv6) is expected to see widespread deployment worldwide. Such rapid development of IPv6 may introduce security problems. The main threats in IPv6 networks are denial-of-service (DoS) attacks and distributed DoS (DDoS) attacks. In addition to threats similar to those in Internet Protocol Version 4 (IPv4), IPv6 has introduced new potential vulnerabilities: DoS and DDoS attacks based on Internet Control Message Protocol version 6 (ICMPv6). We divide such new attacks into two categories: pure flooding attacks and source address spoofing attacks. We propose P4-NSAF, a scheme to defend against these two kinds of IPv6 DoS and DDoS attacks in the programmable data plane. P4-NSAF uses a Count-Min Sketch to defend against flooding attacks and records information about IPv6 agents in match tables to prevent source address spoofing attacks. We implement a prototype of P4-NSAF in P4 and evaluate it in the programmable data plane. The results suggest that P4-NSAF can effectively protect IPv6 networks from DoS and DDoS attacks based on ICMPv6.
Authored by Yubing Li, Wei Yang, Zhou Zhou, Qingyun Liu, Zhao Li, Shu Li
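Count-Min Sketch, the frequency-estimation structure P4-NSAF relies on, is compact enough to show in full. The Python version below is a self-contained illustration; the width, depth, and flood threshold are arbitrary, and the paper's version runs in the P4 data plane rather than in software.

```python
import hashlib

class CountMinSketch:
    def __init__(self, width: int = 1024, depth: int = 4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item: str, row: int) -> int:
        digest = hashlib.sha256(f"{row}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, item: str) -> None:
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += 1

    def estimate(self, item: str) -> int:
        # Minimum over rows bounds the overestimate from hash collisions.
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms, FLOOD_THRESHOLD = CountMinSketch(), 1000
for _ in range(1500):
    cms.add("2001:db8::bad")               # suspected ICMPv6 flood source
if cms.estimate("2001:db8::bad") > FLOOD_THRESHOLD:
    print("flooding suspected from 2001:db8::bad")
```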
With the global transition to IPv6 (Internet Protocol version 6), IP (Internet Protocol) validation efficiency and IPv6 support are gaining importance from the network programming perspective. As global computer networks grow in the era of IoT (Internet of Things), IP address validation is an inevitable process for assuring strong network privacy and security. The complexity of IP validation has increased due to the rather drastic change in the memory architecture needed for storing IPv6 addresses. Low-level programming languages like C/C++ are a great choice for handling memory spaces and working with simple devices connected in an IoT network. This paper analyzes some user-defined and open-source implementations of IP validation code in the Boost.Asio and POCO C++ networking libraries, as well as the IP security support provided for general networking purposes and IoT. Considering a couple of code samples, the paper concludes on whether these C++ implementations answer the flexibility and security needs of the upcoming era of IPv6-addressed computers.
Authored by Esad Kadusic, Natasa Zivic, Narcisa Hadzajlic, Christoph Ruland
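The paper evaluates C++ validators; as an analogous illustration in Python, the standard-library ipaddress module performs the same syntactic IPv4/IPv6 validation the abstract discusses.

```python
import ipaddress

def classify(candidate: str) -> str:
    try:
        ip = ipaddress.ip_address(candidate)
    except ValueError:
        return "invalid"
    return f"valid IPv{ip.version}"

for addr in ["192.168.1.10", "2001:db8::1", "2001:db8::g", "::ffff:10.0.0.1"]:
    print(f"{addr:>18} -> {classify(addr)}")
```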
For the smart campus of Guangdong Ocean University, we analyze the current situation of the university's network construction, as well as the problems in infrastructure, equipment, operation management, and network security. We focus on the construction objectives and design scheme of the smart campus, including the design of the network structure and basic network services. The following are considered in this study: optimization and simplification of the network structure, business integration, a multi-operator access environment, an operation and maintenance guarantee system, and the organic integration of production, teaching, and research after the network leveling transformation.
Authored by Guangya Zhang, Xiang Xu
This paper uses the test tool provided by the Internet Protocol Version 6 (IPv6) Forum to test the protocol conformance of IPv6 devices. The installation and testing process of the IPv6 Ready Logo protocol conformance test suite developed by the TAHI Project team is described in detail. The paper describes the test content and evaluation criteria of the suite, analyzes the problems encountered during its installation and use, explains how to analyze the suite's test results, and describes the test content added in the latest version of the test suite. The test suite supports automated testing, and its test cases accurately reflect the requirements of the IPv6 protocol specifications, so it can be used to judge whether IPv6-based Internet of Things (IoT) devices meet the relevant protocol standards.
Authored by Ke Lu, Wenjuan Yan, Shuyi Wang
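In the spirit of a single conformance check, the hedged scapy sketch below sends an ICMPv6 Echo Request and verifies that a matching Echo Reply returns; the real IPv6 Ready Logo suite automates hundreds of far more detailed cases. The target address is a placeholder, and the script needs scapy plus raw-socket privileges.

```python
from scapy.all import IPv6, ICMPv6EchoRequest, ICMPv6EchoReply, sr1

def echo_conformance(target: str, ident: int = 0x1234) -> bool:
    """One hand-rolled probe: the Echo Reply must echo our identifier."""
    reply = sr1(IPv6(dst=target) / ICMPv6EchoRequest(id=ident),
                timeout=2, verbose=False)
    return (reply is not None
            and reply.haslayer(ICMPv6EchoReply)
            and reply[ICMPv6EchoReply].id == ident)

print("PASS" if echo_conformance("2001:db8::1") else "FAIL")
```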
Based on the campus wireless IPv6 network and using WiFi contactless sensing and positioning together with action recognition technology, this paper designs a new campus security early-warning system. Its distinguishing characteristic is that no new monitoring equipment needs to be added: wherever the wireless IPv6 network provides coverage, personnel counting and body-action status display can be realized. The system effectively supplements monitoring in places that video surveillance could not previously cover, and can help prevent campus violence and other emergencies.
Authored by Feng Sha, Ying Wei
Protecting the identity of IPv6 packets against Denial-of-Service (DoS) attacks depends on the proposed methods of cryptography and steganography. Reliable, secure communication is the most visible issue, particularly in IPv6 network applications; problems such as DoS attacks, IP spoofing, and other kinds of passive attacks are common. This paper suggests an approach based on generating a random, unique identity for every node. The generated identity is encrypted and hidden in the packets transmitted by the sender. On the receiver side, each received packet is verified to identify its source before being processed. The paper also reports nine experiments used to test the proposed scheme. The scheme creates the IPv6 address, passes it through a logistic map, encrypts it with RSA, and authenticates it with SHA-2. In addition, network performance is computed with the OPNET Modeler. The results showed better computational power consumption in terms of lost packets, average events, memory, and time, with a total memory of 35,523 KB, an average of 250.52 events/sec, traffic sent of 30,324 packets/sec, traffic received of 27,227 packets/sec, and 3,097 lost packets/sec.
Authored by Maytham Ali, Saif Al-Alak
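The pipeline above (address, then logistic map, then RSA, then SHA-2) can be stepped through in Python. The sketch below uses textbook RSA with tiny primes purely for illustration (it is not secure), and all parameters are invented; only the overall flow mirrors the abstract.

```python
import hashlib
import ipaddress

def logistic_map_bytes(seed: float, n: int, r: float = 3.99) -> bytes:
    """Derive n pseudo-random bytes from a chaotic logistic map."""
    x, out = seed, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

addr = ipaddress.ip_address("2001:db8::17")
seed = ((int(addr) % 10**6) / 10**6) or 0.5    # map the address into (0, 1)
identity = logistic_map_bytes(seed, 16)

# Textbook RSA with toy primes (NOT secure; illustration only).
p, q, e = 61, 53, 17
n_mod = p * q
d = pow(e, -1, (p - 1) * (q - 1))
cipher = [pow(b, e, n_mod) for b in identity]          # "encrypt" identity
tag = hashlib.sha256(identity).hexdigest()             # SHA-2 authentication

recovered = bytes(pow(c, d, n_mod) for c in cipher)    # receiver side
assert hashlib.sha256(recovered).hexdigest() == tag
print("identity verified at the receiver")
```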
The spread of the Internet of Things (IoT) and cloud services has led to a demand for secure communication between devices, known as zero-trust security. The authors have been developing CYber PHysical Overlay Network over Internet Communication (CYPHONIC) to realize secure end-to-end communication among devices. A device must install a client program to realize secure communication over our overlay network. However, some devices cannot accept the installation of additional external programs due to limited system and hardware resources or the effect on system reliability. We proposed a new technology, the CYPHONIC adapter, to support these devices. Currently, the CYPHONIC adapter supports only IPv4 virtual addresses and needs to be made compatible with general devices that use IPv6. This paper proposes a dual-stack CYPHONIC adapter supporting IPv4/IPv6 virtual addresses for general devices. The prototype implementation shows that general devices can communicate over our overlay network using both IP versions through the proposed CYPHONIC adapter.
Authored by Ren Goto, Kazushige Matama, Chihiro Nishiwaki, Katsuhiro Naito
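The dual-stack requirement can be illustrated with a few lines of standard-library Python: a single IPv6 socket that, with IPV6_V6ONLY cleared, also accepts IPv4 peers as v4-mapped addresses. This is only a stand-in for the adapter's IPv4/IPv6 virtual-address handling, which the paper implements quite differently.

```python
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Clearing IPV6_V6ONLY lets one socket serve both IP versions
# (IPv4 peers appear as ::ffff:a.b.c.d). Platform support varies.
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 8080))
srv.listen(1)
print("listening on [::]:8080 for IPv4 and IPv6 clients")
conn, peer = srv.accept()
print("connection from", peer[0])   # e.g. '::ffff:192.0.2.7' for IPv4
conn.close()
srv.close()
```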
The Domain Name System (DNS) is critical to Internet communications. EDNS Client Subnet (ECS), a DNS extension, allows recursive resolvers to include client subnet information in DNS queries to improve CDN end-user mapping, extending the visibility of client information to a broader range. Major content delivery network (CDN) vendors, content providers (CPs), and public DNS service providers (PDNSes) are accelerating their IPv6 infrastructure development. With the increasing deployment of IPv6-enabled services, and DNS being the most foundational system of the Internet, it becomes important to analyze the behavior and privacy status of IPv6 resolvers. However, there is a lack of research on ECS in IPv6 DNS resolvers. In this paper, we study the ECS deployment and compliance status of IPv6 resolvers. Our measurement shows that 11.12% of IPv6 open resolvers implement ECS. We discuss abnormal noncompliant scenarios that exist in both IPv6 and IPv4 and raise privacy and performance issues. Additionally, we measured whether sacrificing clients' privacy can enhance IPv6 CDN performance. We find that in some cases ECS helps end-user mapping, but at an unnecessary privacy cost. Even worse, the exposure of client address information can sometimes backfire, which deserves attention from both Internet users and PDNSes.
Authored by Leyao Nie, Lin He, Guanglei Song, Hao Gao, Chenglong Li, Zhiliang Wang, Jiahai Yang
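The ECS mechanism the paper measures can be exercised with dnspython, as in the sketch below; the resolver address, query name, and client subnet are placeholders, and the echoed scope length shows how much of the client prefix the upstream actually used.

```python
import dns.edns
import dns.message
import dns.query

ecs = dns.edns.ECSOption("203.0.113.0", srclen=24)   # client-subnet hint
query = dns.message.make_query("www.example.com", "AAAA",
                               use_edns=0, options=[ecs])
response = dns.query.udp(query, "2001:db8::53", timeout=3)  # IPv6 resolver

for option in response.options:
    if isinstance(option, dns.edns.ECSOption):
        print(f"server echoed ECS with scope /{option.scopelen}")
```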