Information Centric Networks - Traffic in a backbone network has high forwarding-rate requirements, and as the network grows, traffic increases while forwarding rates decrease. In a Software Defined Network (SDN), the controller maintains a global view of the network and controls the forwarding of network traffic. A deterministic network imposes different forwarding requirements on traffic of different priority levels. Static traffic load balancing is not flexible enough to meet users' needs and may lead to the overloading of individual links and even network collapse. In this paper, we propose a new backbone-network load-balancing architecture, EDQN (Edge Deep Q-learning Network), which implements queue-based gate-shaping algorithms at the edge devices and balances traffic load across the backbone links. By exploiting the advantages of SDN, the architecture can improve the link utilization of the backbone network, reduce traffic transmission delay, and increase traffic throughput.
Authored by Xue Zhang, Liang Wei, Shan Jing, Chuan Zhao, Zhenxiang Chen
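To make the Q-learning idea in the abstract above concrete, here is a minimal, hypothetical sketch of an agent choosing among candidate backbone paths. The paper uses a deep Q-network driven by SDN controller state; this tabular version with an invented reward function only illustrates the action-selection and update loop.

```python
# Simplified, hypothetical sketch of the load-balancing idea behind EDQN:
# a Q-learning agent picks the backbone path for each flow and is rewarded
# for low delay and balanced link utilization. The real system uses a deep
# Q-network; this tabular version only illustrates the learning loop.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = defaultdict(float)  # (state, path) -> estimated value

def choose_path(state, candidate_paths):
    """Epsilon-greedy selection among candidate backbone paths."""
    if random.random() < EPSILON:
        return random.choice(candidate_paths)
    return max(candidate_paths, key=lambda p: q_table[(state, p)])

def update(state, path, reward, next_state, candidate_paths):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_state, p)] for p in candidate_paths)
    q_table[(state, path)] += ALPHA * (reward + GAMMA * best_next
                                       - q_table[(state, path)])

def reward(delay_ms, max_link_util):
    """Invented reward: penalize delay and utilization imbalance."""
    return -delay_ms - 10.0 * max_link_util
```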
Information Centric Networks - Named in-network computing is an emerging technology of Named Data Networking (NDN). By deploying named computing services/functions on NDN routers, a router can use its free resources to provide nearby computation for users while relieving the pressure on the cloud and the network edge. Benefiting from named addressing, named computing services/functions can be easily discovered and migrated in the network. To implement named in-network computing, integrating the computing services as Virtual Machines (VMs) into the software router is a feasible approach, but how to deploy the service VMs effectively to optimize local processing capability remains a challenge. Focusing on this problem, we first present the design of an NDN-enabled software router, then propose a service-earning-based named service deployment scheme (SE-NSD). For the available service VMs, SE-NSD considers not only their popularity but also their service earnings (the amount of data processed per CPU cycle). By modelling the deployment problem as a knapsack problem, SE-NSD determines the optimal service-VM deployment. Simulation results show that, compared with a popularity-based deployment scheme, SE-NSD improves in-network computing capability by about 30% while slightly reducing the user's service-invoking RTT.
Authored by Bowen Liang, Jianye Tian, Yi Zhu
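As an illustration of the knapsack formulation described above, the sketch below selects service VMs under a resource budget, scoring each VM by popularity times service earning. The value function and all inputs are assumptions made for illustration; the paper's exact model may differ.

```python
# Hypothetical sketch of the SE-NSD deployment decision as a 0/1 knapsack:
# an item's value combines popularity with service earning (data processed
# per CPU cycle), and its weight is the VM's resource demand.
def deploy_vms(vms, capacity):
    """vms: list of (name, resource_demand, popularity, earning)."""
    n = len(vms)
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i, (_, demand, pop, earn) in enumerate(vms, 1):
        value = pop * earn  # assumed combination of the two criteria
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if demand <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - demand] + value)
    # Backtrack to recover the chosen set of VMs.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            name, demand, _, _ = vms[i - 1]
            chosen.append(name)
            c -= demand
    return chosen

print(deploy_vms([("transcode", 4, 0.6, 2.0), ("detect", 3, 0.3, 5.0),
                  ("compress", 2, 0.1, 1.0)], capacity=5))
```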
This paper assesses the performance impact that information-theoretic physical layer security (IT-PLS) introduces when integrated into a 5G New Radio (NR) system. For this, we implement a wiretap code for IT-PLS based on a modular coding scheme that uses a universal hash function in its security layer. The main advantage of this approach lies in its flexible integration into the lower layers of the 5G NR protocol stack without affecting the communication's reliability. Specifically, we use IT-PLS to secure the transmission of downlink control information by integrating an extra pre-coding security layer into the physical downlink control channel (PDCCH) procedures, thus requiring no change to the 3GPP 38-series standards. We conduct experiments using a real-time open-source 5G NR standalone implementation and use software-defined radios for over-the-air transmissions in a controlled laboratory environment. The overhead added by IT-PLS is determined in terms of the latency it introduces into the system, measured at the physical layer for an end-to-end (E2E) connection between the gNB and the user equipment.
Authored by Luis Torres-Figueroa, Markus Hörmann, Moritz Wiese, Ullrich Mönich, Holger Boche, Oliver Holschke, Marc Geitz
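As background to the security layer mentioned above, the snippet below shows a toy 2-universal hash family (a seeded random matrix over GF(2)), the kind of primitive such modular wiretap schemes build on. How the hash is embedded into an invertible pre-coding layer of the PDCCH chain is considerably more involved; everything here is purely illustrative.

```python
# Toy illustration of a 2-universal hash family: a seeded random binary
# matrix applied over GF(2). The seed must be shared (it can be public);
# the actual wiretap construction in the paper is more involved.
import numpy as np

def universal_hash(bits, out_len, seed):
    """Apply a random GF(2) matrix (a 2-universal family) to a bit vector."""
    rng = np.random.default_rng(seed)
    H = rng.integers(0, 2, size=(out_len, len(bits)), dtype=np.uint8)
    return H.dot(bits) % 2  # matrix-vector product mod 2

msg = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
print(universal_hash(msg, out_len=4, seed=42))
# The resulting bits would then enter the ordinary channel-coding chain,
# which handles reliability independently of the security layer.
```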
Industrial Control Systems - With the wide application of Internet technology in the industrial control field, industrial control networks keep growing, the volume of industrial data generated by industrial control systems is increasing dramatically, and the performance requirements on acquisition and storage systems keep rising. Collecting and analyzing the work logs of industrial equipment and industrial time-series data enables comprehensive management and continuous monitoring of the working state of industrial control systems, as well as intrusion detection and energy-efficiency analysis based on traffic and data. Faced with ever-larger volumes of real-time industrial data, existing log collection systems and time-series data gateways suffer from packet loss and similar problems [1] and cannot fully preserve the thermal data of industrial control networks. The emergence of software-defined networking provides a new way to collect massive thermal data in industrial control networks. This paper proposes a 10-gigabit industrial thermal-data acquisition and storage scheme based on software-defined networking, which uses SDN technology to overcome the insufficient performance of existing gateways.
Authored by Ge Zhang, Zheyu Zhang, Jun Sun, Zun Wang, Rui Wang, Shirui Wang, Chengyun Xie
The rapid improvement of computer and network technology not only boosts productivity and facilitates people's lives, but also brings new threats to production and life, and cyberspace security has attracted more and more attention. Unlike traditional cyberattacks, APTs attack key networks or infrastructure with the main goal of stealing intellectual property or confidential information, or of sabotage, seriously threatening the interests and security of governments, enterprises, and scientific research institutions; timely detection and blocking are therefore particularly important. The purpose of this paper is to study the security of the software supply chain in the power industry based on BAS technology. The experimental data show that Type 1 projects account for the smallest proportion and Type 2 projects for the largest. Type 1 projects have high-unit-price contracts and high profits, but they are few in number and take a long time to sign.
Authored by Bo Jin, Zheng Zhou, Fei Long, Huan Xu, Shi Chen, Fan Xia, Xiaoyan Wei, Qingyao Zhao
This paper provides an in-depth analysis of Android malware that bypassed the strictest defenses of the Google Play application store and penetrated the official Android market between January 2016 and July 2021. We systematically identified 1,238 such malicious applications, grouped them into 134 families, and manually analyzed one application from each of 105 distinct families. During our manual analysis, we identified the malicious payloads the applications execute, the conditions guarding execution of the payloads, the hiding techniques applications employ to evade detection by the user, and other implementation-level properties relevant for automated malware detection. As most applications in our dataset contain multiple payloads, each triggered via its own complex activation logic, we also contribute a graph-based representation showing activation paths for all application payloads in the form of a control- and data-flow graph. Furthermore, we discuss the capabilities of existing malware detection tools, put them in the context of the properties observed in the analyzed malware, and identify gaps and future research directions. We believe that our detailed analysis of this recent, evasive malware will be of interest to researchers and practitioners and will help further improve malware detection tools.
Authored by Michael Cao, Khaled Ahmed, Julia Rubin
With the dramatic increase in malicious software, the sophistication and innovation of malware have grown over the years. In particular, dynamic analysis based on deep neural networks has shown high accuracy in malware detection. However, most existing methods employ only the raw API sequence feature, which cannot accurately reflect the actual behavior of malicious programs in detail; the relationship between API calls is critical for detecting suspicious behavior. Therefore, this paper proposes a malware detection method based on a graph neural network. We first connect the API sequences executed by different processes to build a directed process graph. Then, we apply BERT to encode the API sequences of each process into node embeddings, which capture the semantic execution information inside the processes. Finally, we employ a GCN to mine deep semantic information from the directed process graph and node embeddings. In addition to presenting the design, we have implemented and evaluated our method on a dataset of 10,000 malware and 10,000 benign software samples. The results show that the precision and recall of our detection model reach 97.84% and 97.83%, verifying the effectiveness of the proposed method.
Authored by Zhenquan Ding, Hui Xu, Yonghe Guo, Longchuan Yan, Lei Cui, Zhiyu Hao
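A rough, hypothetical sketch of the pipeline described above: processes become nodes of a directed graph, each node carries an embedding of its API sequence (BERT in the paper; random vectors stand in here), and one GCN layer propagates information along the edges. All sizes, weights, and the pooling step are placeholders.

```python
# Minimal stand-in for the process-graph + GCN pipeline.
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: normalized adjacency x features x weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU

n_proc, emb_dim, hid_dim = 4, 16, 8
A = np.zeros((n_proc, n_proc))
A[0, 1] = A[1, 2] = A[0, 3] = 1               # parent process spawns children
X = np.random.randn(n_proc, emb_dim)          # stand-in for BERT embeddings
W = np.random.randn(emb_dim, hid_dim) * 0.1   # untrained placeholder weights
H = gcn_layer(A, X, W)                        # per-process hidden states
graph_repr = H.mean(axis=0)                   # pooled for a malware/benign head
print(graph_repr.shape)
```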
Security mechanisms that protect the Android operating system from malware attacks must be continuously updated and improved, since effective malware detection raises the level of data security and protection for users. Malicious software, or malware, typically finds a means to circumvent security procedures, even when the user is unaware that an application can act as malware. We evaluate the effectiveness of obfuscated Android malware detection by collecting static-analysis data from a dataset. The experiment assesses the risk level of each sample in the malware dataset using its hash value and records the malware's behavior. A set of malware samples, identified by their SHA256 hashes, was obtained from an internet dataset and analyzed statically to record malware behavior and evaluate each sample's risk level. According to the results, most of the algorithms give the same total score because of the multiple malicious behaviors inside each malware application.
Authored by Teddy Mantoro, Muhammad Fahriza, Muhammad Bhakti
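The hash-based bookkeeping described above can be sketched as follows: compute the SHA256 of each sample and attach a risk level derived from static indicators. The indicator list and scoring rules below are invented placeholders, not the paper's criteria.

```python
# Sketch: key static-analysis findings by SHA256 and assign a toy risk level.
import hashlib
from pathlib import Path

SUSPICIOUS = [b"SEND_SMS", b"DexClassLoader", b"su"]  # example indicators only

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def risk_score(path):
    """Toy risk level: count suspicious byte patterns in the sample."""
    data = Path(path).read_bytes()
    hits = sum(1 for pat in SUSPICIOUS if pat in data)
    level = "high" if hits >= 2 else "medium" if hits == 1 else "low"
    return {"sha256": sha256_of(path), "risk": level}

# for sample in Path("samples/").glob("*.apk"):  # hypothetical sample folder
#     print(risk_score(sample))
```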
Any software that runs malicious payloads on victims’ computers is referred to as malware. It is a growing threat that costs people, businesses, and organizations a great deal of money, and security attacks have developed significantly in recent years. Malware may infiltrate both offline and online media, such as chat, SMS, and spam (via email or social media), because it has built-in defensive mechanisms and may conceal itself from antivirus software or even corrupt it. As a result, there is an urgent need to detect and prevent malware before it damages critical assets around the world, and many different techniques and tools are used to combat malware. In this paper, malware samples were analyzed in a VirtualBox environment through in-depth reverse engineering using advanced static malware analysis techniques. The analysis yields a set of valuable information that anti-malware and antivirus companies need in order to update their products.
Authored by Maher Ismael, Karam Thanoon
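One common primitive of such static analysis, sketched below, is extracting printable strings and (for Windows PE samples) imported API names, from which analysts infer capabilities before deeper reverse engineering. This uses the third-party pefile library; the sample path, and the assumption that the samples are PE executables, are illustrative.

```python
# Sketch of two basic static-analysis steps: string and import extraction.
import re
import pefile  # pip install pefile

def extract_strings(path, min_len=5):
    """Return runs of printable ASCII bytes of at least min_len characters."""
    data = open(path, "rb").read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

def extract_imports(path):
    """Map each imported DLL to the API names the sample imports from it."""
    pe = pefile.PE(path)
    return {entry.dll.decode(errors="replace"):
            [imp.name.decode(errors="replace")
             for imp in entry.imports if imp.name]
            for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])}

# Strings hinting at network or persistence behaviour are of special interest:
# print(extract_strings("sample.exe")[:20])   # hypothetical sample path
# print(extract_imports("sample.exe"))
```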
A good ecological environment is crucial to attracting, cultivating, and retaining talent and to letting talent take full effect. This study provides a solution to the current mainstream problem of how to deal with the turnover of excellent employees in advance, so as to promote a sustainable and harmonious human-resources ecology in enterprises facing talent shortages. The study obtains open datasets and carries out data preprocessing, model construction, and model optimization, describing a set of enterprise employee turnover prediction models based on RapidMiner workflows. Data preprocessing is completed with the statistical analysis software IBM SPSS Statistics and RapidMiner, and statistical charts, scatter plots, and boxplots are generated for visual data analysis. Machine learning, model application, performance vectors, and cross-validation are realized through RapidMiner's operators and workflows. The modelled algorithms include support vector machines, naive Bayes, decision trees, and neural networks, and their performance is compared on four measures: accuracy, precision, recall, and F1-score. The decision tree model performs best. The performance evaluation results confirm the effectiveness of this model for sustainably predicting enterprise employee turnover in human resource management.
Authored by Yong Shi
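A compact sketch of the model comparison the study performs in RapidMiner, transplanted here to scikit-learn: the same four classifier families evaluated by cross-validated accuracy, precision, recall, and F1. The hyperparameters are arbitrary, and X and y stand for the preprocessed HR features and the turnover labels.

```python
# Compare four classifier families on cross-validated turnover prediction.
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_validate

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "naive_bayes": GaussianNB(),
    "svm": SVC(),
    "neural_net": MLPClassifier(max_iter=500),
}
scoring = ["accuracy", "precision", "recall", "f1"]

def compare(X, y):
    """X: preprocessed feature matrix; y: binary left/stayed labels."""
    for name, model in models.items():
        cv = cross_validate(model, X, y, cv=5, scoring=scoring)
        print(name, {s: cv[f"test_{s}"].mean().round(3) for s in scoring})
```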
Bus factor is a metric that identifies how resilient a project is to sudden engineer turnover: it states the minimal number of engineers that would have to be hit by a bus for the project to be stalled. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, existing tools for bus factor estimation focus solely on data from version control systems, even though other channels for knowledge generation and distribution exist. With a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results with those of the state-of-the-art tool by Avelino et al. against ground truth collected in a survey of the engineers working on these projects. Our algorithm is slightly better at predicting both the bus factor and the key developers. Finally, we use the interviews and the surveys to derive a set of best practices for addressing the bus factor issue and proposals for a possible bus factor assessment tool.
Authored by Elgun Jabrayilzade, Mikhail Evtikhiev, Eray Tüzün, Vladimir Kovalenko
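For intuition, here is a simplified, VCS-only sketch in the spirit of the estimator of Avelino et al.: greedily remove the engineer who covers the most files until more than half of the files have no remaining knowledgeable engineer. The paper's multimodal algorithm additionally folds in code-review and meeting data; the knowledge map below is invented.

```python
# Greedy bus-factor estimation over a per-engineer file-knowledge map.
def bus_factor(knowledge):
    """knowledge: dict engineer -> set of files they can maintain."""
    all_files = set().union(*knowledge.values())
    remaining, removed = dict(knowledge), 0
    while True:
        covered = set().union(*remaining.values()) if remaining else set()
        # Project counts as stalled when most files lose all their experts.
        if len(all_files - covered) > len(all_files) / 2 or not remaining:
            return removed
        top = max(remaining, key=lambda d: len(remaining[d]))
        del remaining[top]
        removed += 1

print(bus_factor({"alice": {"a.py", "b.py", "c.py"},
                  "bob": {"c.py"}}))  # -> 1 (losing alice abandons most files)
```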
Objective measures are ubiquitous in the formulation, design and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper Mission system, a set of operations metrics have been defined along with the necessary design information and software tooling to calculate them. We have applied these methods and metrics to help assess the impact to the flight team on the six options for the Clipper Tour that are currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was generated to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., sequence generation process begins on a Monday, at least three weeks before each Europa closest approach). Over a 5-month period, and for each of six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5 day / 40 hour work week, the work effort during 2nd and 3rd shift, as well as 1st shift on weekends. The resultant schedules and shift tables were used to generate objective measures that can be related to both human factors and to operational risk and showed that Clipper tours which utilize 6:1 resonant (21.25 day) orbits instead of 4:1 resonant (14.17 day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
Authored by Duane Bindschadler, Nari Hwangpo, Marc Sarrel
The security of energy data collection is the basis for achieving a reliable and secure smart grid. The newest approach to securing data-collection communication is Zero Trust, whose strategy is to trust no device, whether outside or inside the network. Only when a device authenticates successfully and its software and hardware are verified as secure does the intelligent energy power system allow the device to enroll in the network; otherwise the device is denied. Even while a device is communicating with the energy system, Zero Trust continues to check its security and vulnerabilities; if any security or vulnerability issue is found, the device is removed from the network. This ensures the security of the energy power system and lays a foundation for the security analysis of intelligent power units.
Authored by Yan Chen, Xingchen Zhou, Jian Zhu, Hongbin Ji
How can high-level directives concerning risk, cybersecurity and compliance be operationalized in the central nervous system of any organization above a certain complexity? How can the effectiveness of technological solutions for security be proven and measured, and how can this technology be aligned with the governance and financial goals at the board level? These are the essential questions for any CEO, CIO or CISO concerned with the wellbeing of the firm. The concept of Zero Trust (ZT) approaches information and cybersecurity from the perspective of the asset to be protected and of the value that asset represents. Zero Trust has been around for quite some time, and most professionals associate it with a particular architectural approach to cybersecurity, involving concepts such as segments, resources that are accessed in a secure manner, and the maxim “always verify, never trust”. This paper describes the current state of the art in Zero Trust usage. We investigate the limitations of current approaches and how these are addressed in the form of Critical Success Factors in the Zero Trust Framework developed by ON2IT ‘Zero Trust Innovators’ (1). Furthermore, this paper describes the design and engineering of a Zero Trust artefact that addresses the problems at hand (2), according to Design Science Research (DSR). The last part of this paper outlines the setup of an empirical validation through practitioner-oriented research, in order to gain broader acceptance and implementation of Zero Trust strategies (3). The final result is a proposed framework and associated technology which, via Zero Trust principles, address multiple layers of the organization to grasp and align cybersecurity risks and understand the readiness and fitness of the organization and its measures to counter cybersecurity risks.
Authored by Yuri Bobbert, Jeroen Scheerder
Internet of Things (IoT) evolution calls for stringent communication demands, including low delay and high reliability. At the same time, wireless mesh technology is used to extend the communication range of IoT deployments in a multi-hop manner. However, Wireless Mesh Networks (WMNs) face link failures due to unstable topologies, leaving IoT requirements unsatisfied. Named-Data Networking (NDN) can enhance WMNs to meet such IoT requirements, thanks to its content naming scheme and in-network caching, but it must adapt to the challenging conditions of WMNs. In this work, we argue that Software-Defined Networking (SDN) is an ideal solution to fill this gap and introduce an integrated SDN-NDN deployment over WMNs involving: (i) a global view of the network in real time; (ii) centralized decision making; and (iii) dynamic NDN adaptation to network changes. The proposed system is deployed and evaluated over the wiLab.1 Fed4FIRE+ testbed. The proof-of-concept results validate that the centralized control of SDN effectively supports NDN operation in unstable topologies with frequent dynamic changes, such as WMNs.
Authored by Sarantis Kalafatidis, Vassilis Demiroglou, Lefteris Mamatas, Vassilis Tsaoussidis
This paper proposes an improved version of the newly developed Honey Badger Algorithm (HBA), called Generalized Opposition-Based Learning HBA (GOBL-HBA), for solving the mesh router placement problem. GOBL-HBA integrates the generalized opposition-based learning strategy into the original HBA. It is validated in terms of three performance metrics: user coverage, network connectivity, and fitness value. The evaluation uses various scenarios with different numbers of mesh clients, numbers of mesh routers, and coverage radius values. The simulation results show the efficiency of GOBL-HBA compared with the classical HBA, the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO).
Authored by Sylia Taleb, Yassine Meraihi, Seyedali Mirjalili, Dalila Acheli, Amar Ramdane-Cherif, Asma Gabis
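To illustrate the strategy named above, the sketch below applies a common form of generalized opposition-based learning to a population of candidate placements: for each coordinate x in [a, b], it also evaluates the randomized opposite point k(a+b) - x and keeps the fitter of the two. The paper's exact GOBL variant and its coupling to HBA may differ.

```python
# Common generalized opposition-based learning (GOBL) step, as often used
# to accelerate population-based metaheuristics such as HBA.
import random

def gobl_opposite(x, a, b):
    """Generalized opposite of x in [a, b]: k*(a+b) - x with random k."""
    k = random.random()
    return min(max(k * (a + b) - x, a), b)  # clip back into the domain

def gobl_step(population, bounds, fitness):
    """Keep, per individual, the fitter of the point and its opposite."""
    improved = []
    for x in population:
        opp = [gobl_opposite(xi, lo, hi) for xi, (lo, hi) in zip(x, bounds)]
        improved.append(min(x, opp, key=fitness))  # assumes minimization
    return improved
```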
Mesh networks based on wireless local area network (WLAN) technology, as specified by the standards amendment IEEE 802.11s, provide a flexible and low-cost interconnection of devices and embedded systems for various use cases. Assessing the real-world performance of WLAN mesh networks and potential optimization strategies requires suitable testbeds and measurement tools. Designed for highly automated transport-layer throughput and latency measurements, the software FLExible Network Tester (Flent) is a promising candidate. However, so far Flent does not integrate information specific to IEEE 802.11s networks, such as peer-link status data or mesh routing metrics. We therefore propose Flent extensions that additionally capture IEEE 802.11s information as part of the automated performance tests. For the functional validation of our extensions, we conduct Flent measurements in a mesh mobility scenario using the network emulation framework Mininet-WiFi.
Authored by Michael Rethfeldt, Tim Brockmann, Richard Eckhardt, Benjamin Beichler, Lukas Steffen, Christian Haubelt, Dirk Timmermann
Intelligent Environments (IEs) enrich the physical world by connecting it to software applications in order to increase user comfort, safety and efficiency. IEs are often supported by wireless networks of smart sensors and actuators, which offer multi-year battery life within small packages. However, existing radio mesh networks suffer from high latency, which precludes their use in many user interface systems such as real-time speech, touch or positioning. While recent advances in optical networks promise low end-to-end latency through symbol-synchronous transmission, current approaches are power hungry and therefore cannot be battery powered. We tackle this problem by introducing BoboLink, a mesh network that delivers low-power and low-latency optical networking through a combination of symbol-synchronous transmission and a novel wake-up technology. BoboLink delivers mesh-wide wake-up in 1.13ms, with a quiescent power consumption of 237µW. This enables building-wide human computer interfaces to be seamlessly delivered using wireless mesh networks for the first time.
Authored by Mengyao Liu, Jonathan Oostvogels, Sam Michiels, Wouter Joosen, Danny Hughes
The “Internet of Things (IoT)” is a term that describes the use of physical sensors, processing software, power supplies and other technologies to connect systems and devices and exchange information among them through the Internet and other forms of communication. The RPL protocol can efficiently establish network routes, communicate routing information, and adjust the topology. The 6LoWPAN concept was born out of the belief that IP should be applied even to the tiniest devices, and that low-power devices with minimal computational capabilities should be permitted to join the IoT. This paper discusses DIS flooding against RPL-based IoT along with its mitigation techniques.
Authored by Nisha, Akshaya Dhingra, Vikas Sindhu
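One mitigation commonly proposed for DIS flooding, sketched below, is per-neighbor rate limiting, so that a malicious node cannot keep forcing trickle-timer resets and DIO responses. The thresholds and handler interface are hypothetical, not taken from the paper.

```python
# Sketch of per-neighbor rate limiting of incoming RPL DIS messages.
import time
from collections import defaultdict, deque

WINDOW_S, MAX_DIS = 10.0, 3      # at most 3 DIS per neighbor per 10 s (assumed)
recent = defaultdict(deque)      # neighbor address -> recent DIS timestamps

def accept_dis(neighbor):
    now = time.monotonic()
    q = recent[neighbor]
    while q and now - q[0] > WINDOW_S:
        q.popleft()              # drop timestamps outside the window
    if len(q) >= MAX_DIS:
        return False             # ignore: likely DIS flooding
    q.append(now)
    return True                  # safe to reset the trickle timer / send DIO
```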
The “Internet of Things” (IoT) is the internetworking of physical devices known as 'things', along with the algorithms, equipment and techniques that allow them to communicate with other devices, equipment and software over the network. With the advancement in data communication, every device must be connected via the Internet. For this purpose, we use resource-constrained sensor nodes to collect data from homes, offices, hospitals, industries and data centers, but various vulnerabilities may ruin the functioning of these sensor nodes. The Routing Protocol for Low-Power and Lossy Networks (RPL) is a standardized, secure routing protocol designed for the 6LoWPAN IoT network. It is a proactive routing protocol that builds a destination-oriented topology to perform safe routing. A sinkhole is a network attack that destroys the topology of the RPL protocol, as the attacker node changes the route of all the traffic in the IoT network. In this paper, we give a survey of sinkhole attacks in IoT and propose different methods for preventing and detecting these attacks in a low-power IoT network.
Authored by Jyoti Rani, Akshaya Dhingra, Vikas Sindhu
In the IoT (Internet of Things) domain, it remains a challenge to modify the routing behavior of IoT traffic on a decentralized backbone network. In this paper, centralized and flexible software-defined networking (SDN) is utilized to route IoT traffic. Managing IoT data transmission through the SDN core network offers the chance to choose the path with the lowest delay, minimum packet loss, or fewest hops. Therefore, fault-tolerant delay-aware routing is proposed for an emulated SDN-based backbone network handling delay-sensitive IoT traffic. In addition, a hybrid emulation combining GNS3 and Mininet-WiFi is introduced, coupling the SDN-based backbone network in GNS3 with the 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Network) sensor network in Mininet-WiFi.
Authored by May Han, Soe Htet, Lunchakorn Wuttisttikulkij
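A minimal sketch of the delay-aware path selection idea described above: the controller holds a weighted graph of the backbone and picks the lowest-delay path, rerouting when a link fails. The topology and delay values are invented for illustration; the paper's routing logic is richer.

```python
# Lowest-delay path selection with fault-tolerant rerouting, via networkx.
import networkx as nx

G = nx.Graph()
G.add_edge("s1", "s2", delay=5.0)
G.add_edge("s2", "s4", delay=7.0)
G.add_edge("s1", "s3", delay=4.0)
G.add_edge("s3", "s4", delay=12.0)

def best_path(g, src, dst, metric="delay"):
    """Return the path minimizing the chosen link metric, or None."""
    try:
        return nx.shortest_path(g, src, dst, weight=metric)
    except nx.NetworkXNoPath:
        return None

print(best_path(G, "s1", "s4"))  # ['s1', 's2', 's4'] (total delay 12 vs 16)
G.remove_edge("s2", "s4")        # simulate a link failure
print(best_path(G, "s1", "s4"))  # fault-tolerant reroute via s3
```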
Artificial intelligence is a subfield of computer science that refers to the intelligence displayed by machines or software. Research in this area has driven the rapid development of smart devices that have a significant impact on our daily lives. Science, engineering, business, and medicine have all improved their predictive power, making our daily tasks easier. As this study shows, the quality and efficiency of domains that use artificial intelligence have improved: AI successfully handles data organisation and environment difficulties, allowing more solid and rigorous models to be developed. The pace of life is quickening in the digital age, and the conventional PC-based Internet falls well short of meeting people’s needs: users want convenient network information services at any time and from any location.
Authored by K. Thiagarajan, Chandra Dixit, M. Panneerselvam, C.Arunkumar Madhuvappan, Samata Gadde, Jyoti Shrote
The development of industrial robots, as carriers of artificial intelligence, has played an important role in promoting the spread of AI-enabled automation technology. Against this background, the paper introduces the system structure, hardware structure, and software system of a mobile climbing robot based on computer big-data technology, focusing on the robot's compound mechanism design and obstacle-avoidance control algorithm. Smart home computing centers on the “home” and brings together related peripheral industries to promote smart home services such as smart appliances, home entertainment, home health care, and security monitoring, in order to create a safe, secure, energy-efficient, sustainable, and comfortable residential living environment. Yet after twenty years there is still no clear definition of “intelligence at home”; Philips, a leading consumer electronics manufacturer, once stated that intelligence should comprise sensing, connectedness, learning, adaptation, and ease of interaction. Smart applications and services are still in the early stages of development, and not all of them can yet exhibit these five intelligent traits.
Authored by Karrar Hussain, D. Vanathi, Bibin Jose, S Kavitha, Bhuvaneshwari Rane, Harpreet Kaur, C. Sandhya
Intelligent transportation systems, such as connected vehicles, are able to establish real-time, optimized and collision-free communication with the surrounding ecosystem. Introducing the internet of things (IoT) in connected vehicles relies on deployment of massive scale sensors, actuators, electronic control units (ECUs) and antennas with embedded software and communication technologies. Combined with the lack of designed-in security for sensors and ECUs, this creates challenges for security engineers and architects to identify, understand and analyze threats so that actions can be taken to protect the system assets. This paper proposes a novel STRIDE-based threat model for IoT sensors in connected vehicle networks aimed at addressing these challenges. Using a reference architecture of a connected vehicle, we identify system assets in connected vehicle sub-systems such as devices and peripherals that mostly involve sensors. Moreover, we provide a prioritized set of security recommendations, with consideration to the feasibility and deployment challenges, which enables practical applicability of the developed threat model to help specify security requirements to protect critical assets within the sensor network.
Authored by Sajib Kuri, Tarim Islam, Jason Jaskolka, Mohamed Ibnkahla
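As a toy illustration of encoding such a STRIDE-based model, the snippet below maps connected-vehicle assets to the STRIDE categories that apply to them and prints a threat matrix from which recommendations could be prioritized. The assets and mappings are illustrative, not the paper's actual model.

```python
# Toy STRIDE threat matrix for hypothetical connected-vehicle assets.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

assets = {
    "gps_sensor": {"Spoofing", "Denial of service"},
    "camera_ecu": {"Tampering", "Information disclosure",
                   "Elevation of privilege"},
    "telematics": {"Spoofing", "Information disclosure", "Denial of service"},
}

def threat_matrix(assets):
    """Print which STRIDE categories apply to each asset."""
    for name, threats in assets.items():
        row = ["X" if t in threats else "." for t in STRIDE]
        print(f"{name:12s} " + " ".join(row))

threat_matrix(assets)  # quick view of where mitigations should be prioritized
```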
In the world of information technology and the Internet, which has become a part of human life today and is constantly expanding, attention to users' requirements such as information security, fast processing, dynamic and instant access, and cost savings has become essential. The solution proposed for such problems today is cloud computing, which is now considered one of the most essential distributed tools for processing and storing data on the Internet. With the increasing use of this tool, the need to schedule tasks so as to make the best use of resources and respond appropriately to requests has received much attention, and many efforts have been and are being made in this regard. To this end, various resource-allocation algorithms have been proposed, each trying to achieve equitable distribution while maximizing resource utilization. One of these methods is the Dominant Resource Fairness (DRF) algorithm. Although it offers a better approach than previous algorithms, it faces challenges, especially the time-consuming computation of resource allocations. These challenges complicate the use of DRF both when a small number of requests demand high resource capacity and when a large number of requests arrive simultaneously. This study seeks to reduce the computational cost of DRF-based resource allocation by introducing a new approach that automates the calculations with machine-learning and artificial intelligence algorithms (Autonomic Dominant Resource Fairness, or A-DRF).
Authored by Amin Fakhartousi, Sofia Meacham, Keith Phalp
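For reference, here is a sketch of the classic DRF allocation loop (Ghodsi et al.) that A-DRF aims to accelerate: repeatedly grant the next task to the user with the smallest dominant share until a resource runs out. The demands and capacities follow the illustrative example from the original DRF paper.

```python
# Classic Dominant Resource Fairness (DRF) progressive-filling loop.
def drf(capacities, demands):
    """capacities: {resource: total}; demands: {user: {resource: per-task}}."""
    used = {r: 0.0 for r in capacities}
    tasks = {u: 0 for u in demands}
    while True:
        def dom_share(u):
            # Dominant share = the user's largest per-resource usage fraction.
            return max(tasks[u] * d / capacities[r]
                       for r, d in demands[u].items())
        user = min(demands, key=dom_share)   # neediest user gets the next task
        need = demands[user]
        if any(used[r] + need[r] > capacities[r] for r in need):
            return tasks  # cannot place another task for the neediest user
        for r in need:
            used[r] += need[r]
        tasks[user] += 1

# 9 CPUs, 18 GB; user A needs <1 CPU, 4 GB> per task, user B <3 CPUs, 1 GB>.
print(drf({"cpu": 9, "mem": 18},
          {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}))
# -> {'A': 3, 'B': 2}: both users end with a dominant share of 2/3.
```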