Control room video surveillance is an important source of information for ensuring public safety. To support this task, a Decision-Support System (DSS) designed for the security task force is vital for making decisions rapidly from a sea of information. In mission-critical operations, Situational Awareness (SA), which consists of knowing what is going on around you at any given time, plays a crucial role across a variety of industries and should be placed at the center of our DSS. In our approach, the SA system takes advantage of the human factor through a reinforcement signal, whereas previous work in this field focuses first on improving the knowledge level of the DSS and uses the human factor only for decision-making. In this paper, we propose a situational awareness-centric decision-support system framework for mission-critical operations driven by Quality of Experience (QoE). Our idea is inspired by the reinforcement learning feedback process, which updates the environment understanding of our DSS. The feedback is injected by a QoE measure built on user perception. This approach allows our DSS to evolve with the context and maintain an up-to-date SA.
Authored by Abhishek Djeachandrane, Said Hoceini, Serge Delmas, Jean-Michel Duquerrois, Abdelhamid Mellouk
The features of socio-cyber-physical systems are presented, which dictate the need to revise traditional management methods and to transform the management system so that it accounts for the presence of a person both in the control object and in the control loop. The use of situational control mechanisms is proposed. The features of this approach are presented and compared with existing methods of situational awareness. The comparison demonstrates wider possibilities and scope for managing socio-cyber-physical systems. It is recommended to consider a wider class of relation types that exist in socio-cyber-physical systems; such consideration can be based on the pseudo-physical logics studied in situational control. Finally, it is pointed out that a classifier of situations (primarily in cyberspace) should be designed in place of traditional classifiers of threats and intruders.
Authored by Oleksandr Milov, Vladyslav Khvostenko, Voropay Natalia, Olha Korol, Nataliia Zviertseva
With the electric power distribution grid facing ever-increasing complexity and new threats from cyber-attacks, situational awareness for system operators is quickly becoming indispensable. Identifying de-energized lines on the distribution system during a SCADA communication failure is a prime example where operators need to act quickly to deal with an emergent loss of service. Loss of cellular towers, poor signal strength, and even cyber-attacks can impact SCADA visibility of line devices on the distribution system. Neural Networks (NNs) provide a unique approach to learn the characteristics of normal system behavior, identify when abnormal conditions occur, and flag these conditions for system operators. This study applies a 24-hour load forecast for distribution line devices given the weather forecast and day of the week, then determines the current state of distribution devices based on changes in SCADA analogs from communicating line devices. A neural network-based algorithm is applied to historical events on Alabama Power's distribution system to identify de-energized sections of line when a significant amount of SCADA information is hidden.
Authored by Matthew Leak, Ganesh Venayagamoorthy
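To make the forecast-and-flag idea concrete, here is a minimal sketch under stated assumptions: a small neural network (scikit-learn's MLPRegressor, standing in for whatever architecture the study uses) learns device load from weather and day-of-week features, and a device is flagged when its observed load falls far below its forecast. The feature set, synthetic data, and deviation threshold are all illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic history: [forecast temperature (F), day of week, hour of day] -> load
X_hist = np.column_stack([
    rng.uniform(40, 100, 5000),
    rng.integers(0, 7, 5000),
    rng.integers(0, 24, 5000),
])
y_hist = 50 + 0.8 * X_hist[:, 0] + 5 * np.sin(X_hist[:, 2] / 24 * 2 * np.pi)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_hist, y_hist)

def flag_deenergized(features, observed_load, threshold=0.5):
    """Flag a device whose observed load drops below a fraction of its forecast."""
    forecast = model.predict(np.atleast_2d(features))[0]
    return observed_load < threshold * forecast

print(flag_deenergized([85.0, 2, 14], observed_load=5.0))  # likely True
```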
Increasing connectivity and automation in vehicles leads to a greater potential attack surface. Such vulnerabilities within vehicles can also be used for auto-theft, increasing the potential for attackers to disable anti-theft mechanisms implemented by vehicle manufacturers. We utilize patterns derived from Controller Area Network (CAN) bus traffic to verify driver "behavior" as a basis to prevent vehicle theft. Our proposed model uses semi-supervised learning that continuously profiles a driver, using features extracted from CAN bus traffic. We selected 15 key features and obtained an accuracy of 99% using a dataset comprising a total of 51 features across 10 different drivers. We apply several data analysis algorithms, such as J48, Random Forest, JRip, and clustering, on 94K records. Our results show that J48 is the best performing algorithm in terms of training and testing time (1.95 seconds and 0.44 seconds, respectively). We also analyze the effect of using a sliding window on algorithm performance, altering the size of the window to identify the impact on prediction accuracy.
Authored by Rashid Khan, Neetesh Saxena, Omer Rana, Prosanta Gope
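As a rough open-source sketch of this pipeline, the snippet below extracts per-window statistical features from synthetic CAN signals with a sliding window and trains a tree-based classifier; scikit-learn's RandomForestClassifier stands in for the Weka algorithms (J48, JRip) named above, and the window size and signals are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(signals, labels, window=100):
    """Aggregate raw CAN signals into per-window statistical features."""
    X, y = [], []
    for start in range(0, len(signals) - window, window):
        seg = signals[start:start + window]
        X.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        y.append(labels[start])  # driver ID associated with this window
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
raw = rng.normal(size=(94_000, 15))          # stand-in for speed, RPM, pedal, ...
drivers = rng.integers(0, 10, size=94_000)   # 10 drivers (synthetic labels)

X, y = window_features(raw, drivers)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")  # meaningless on random data
```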
Accurate and synchronized timing information is required by power system operators for controlling the grid infrastructure (relays, Phasor Measurement Units (PMUs), etc.) and determining asset positions. The satellite-based global positioning system (GPS) is the primary source of timing information. However, GPS disruptions today (both intentional and unintentional) can significantly compromise the reliability and security of our electric grids. A robust alternate source for accurate timing is critical to serve both as a deterrent against malicious attacks and as a redundant system in enhancing the resilience against extreme events that could disrupt the GPS network. To achieve this, we rely on a highly accurate, terrestrial atomic clock-based network for alternative timing and synchronization. In this paper, we discuss an experimental setup for an alternative timing approach. The data obtained from this experimental setup is continuously monitored and analyzed using various time deviation metrics. We also use these metrics to compute deviations of our clock with respect to the National Institute of Standards and Technology's (NIST) GPS data. The results obtained from these metric computations are discussed in detail. Finally, we discuss the integration of the procedures involved, such as real-time data ingestion, metric computation, and result visualization, in a novel microservices-based architecture for situational awareness.
Authored by Supriya Chinthavali, S.M.Shamimul Hasan, Srikanth Yoginath, Haowen Xu, Phil Nugent, Terry Jones, Cozmo Engebretsen, Joseph Olatt, Varisara Tansakul, Carter Christopher, Yarom Polsky
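As an illustration of the kind of time deviation metric involved, the sketch below computes TDEV(τ) from phase (time-difference) samples between two clocks, using the standard modified-Allan-variance definition; the phase record here is synthetic rather than data from the setup described.

```python
import numpy as np

def tdev(x, tau0, m):
    """Time deviation at averaging time tau = m * tau0, from phase data x (seconds)."""
    tau, N = m * tau0, len(x)
    sums = [
        sum(x[i + 2*m] - 2*x[i + m] + x[i] for i in range(j, j + m))
        for j in range(N - 3*m + 1)
    ]
    mvar = np.mean(np.square(sums)) / (2.0 * m**2 * tau**2)  # modified Allan variance
    return tau / np.sqrt(3.0) * np.sqrt(mvar)                # TDEV = tau/sqrt(3) * MDEV

rng = np.random.default_rng(0)
phase = np.cumsum(rng.normal(0, 1e-9, 4000))  # synthetic phase record, 1 s samples
for m in (1, 10, 100):
    print(f"TDEV({m} s) = {tdev(phase, tau0=1.0, m=m):.3e} s")
```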
The Automatic Identification System (AIS) plays a leading role in maritime navigation, traffic control, and local and global maritime situational awareness. Today, reliable and secure AIS operation is threatened by potential cyber attacks such as the imitation of ghost vessels, false distress or security messages, and fake virtual aids-to-navigation. We propose a method for ensuring the authentication and integrity of AIS messages based on the use of a Message Authentication Code scheme and digital watermarking (WM) technology to organize an additional tag transmission channel. The method provides full compatibility with existing AIS functionality.
Authored by Oleksandr Shyshkin
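The message-authentication half of such a scheme can be illustrated with a toy HMAC tag, truncated so it could ride in a narrow side channel like the watermarking channel described above; key management, the AIS encoding details, and the watermark embedding itself are out of scope here, and the tag length is an assumption.

```python
import hashlib
import hmac

SHARED_KEY = b"pre-shared-vessel-key"   # placeholder; real keys need a PKI/KMS
TAG_BYTES = 4                           # truncated tag to fit a narrow channel

def make_tag(ais_payload: bytes) -> bytes:
    """Compute a truncated HMAC-SHA256 tag over an AIS payload."""
    return hmac.new(SHARED_KEY, ais_payload, hashlib.sha256).digest()[:TAG_BYTES]

def verify(ais_payload: bytes, tag: bytes) -> bool:
    """Constant-time check that the received tag matches the payload."""
    return hmac.compare_digest(make_tag(ais_payload), tag)

msg = b"!AIVDM,1,1,,A,15MvlfPOh0G?nwbEdVDsnSTR00S?,0*41"
tag = make_tag(msg)
print(verify(msg, tag))                 # True
print(verify(msg + b"tampered", tag))   # False
```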
Intrusion detection systems (IDSs) are widely deployed in industrial control systems to protect network security. IDSs typically generate a huge number of alerts, which are time-consuming for system operators to process. Most of the alerts are individually insignificant false alarms, yet discarding them is not the best solution, as they can still provide useful information about the network situation. Based on a study of the characteristics of alerts in industrial control systems, we adopt an enhanced method of exponentially weighted moving average (EWMA) control charts to help operators process alerts. We classify all detection signatures as regular or irregular according to their frequencies, set multiple control limits to detect anomalies, and monitor regular signatures for network security situational awareness. Extensive experiments have been performed using real-world alert data. Simulation results demonstrate that the proposed enhanced EWMA method can greatly reduce the volume of alerts to be processed while preserving significant abnormal information.
Authored by Baoxiang Jiang, Yang Liu, Huixiang Liu, Zehua Ren, Yun Wang, Yuanyi Bao, Wenqing Wang
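A compact sketch of an EWMA control chart over per-signature alert counts, in the spirit of the enhanced method described above, might look as follows; the smoothing factor and control-limit width are common textbook defaults, not the paper's tuned values.

```python
import numpy as np

def ewma_anomalies(counts, lam=0.2, L=3.0):
    """Return indices where the EWMA of alert counts exits its control limits."""
    mu, sigma = np.mean(counts), np.std(counts)
    z, flagged = mu, []
    for t, c in enumerate(counts):
        z = lam * c + (1 - lam) * z
        # variance of the EWMA statistic after t+1 observations
        var = sigma**2 * (lam / (2 - lam)) * (1 - (1 - lam)**(2 * (t + 1)))
        if abs(z - mu) > L * np.sqrt(var):
            flagged.append(t)
    return flagged

rng = np.random.default_rng(0)
counts = rng.poisson(5, 200)   # per-interval alert counts for one signature
counts[150:160] += 20          # injected burst of alerts
print(ewma_anomalies(counts))  # flags the burst starting around index 150
```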
The integration of distributed energy resources (DERs) and the expansion of complex networks in the distribution grid require an advanced two-level state estimator to monitor grid health at the micro-level. The distribution state estimator will improve the situational awareness and resiliency of the distributed power system. This paper implements a synchrophasor-based master state awareness (MSA) estimator to enhance cybersecurity in the distribution grid by providing control center operators with a real-time estimation of system operating states. The implemented MSA estimator utilizes only phasor measurements, bus voltage magnitudes and angles, from phasor measurement units (PMUs) deployed in local substations to estimate the system states, and it also detects data integrity attacks, such as a load-tripping attack that disconnects load. To validate the proof of concept, we implement this methodology in the cyber-physical testbed environment at the Idaho National Laboratory (INL) Electric Grid Security Testbed. Further, to address the "valley of death" and support technology commercialization, a field demonstration is also performed at the Critical Infrastructure Test Range Complex (CITRC) at the INL. Our experimental results reveal promising performance in detecting load-tripping attacks and in providing accurate situational awareness through a real-time alert visualization dashboard.
Authored by Mataz Alanzi, Hari Challa, Hussain Beleed, Brian Johnson, Yacine Chakhchoukh, Dylan Reen, Vivek Singh, John Bell, Craig Rieger, Jake Gentle
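Because PMU measurements are linear in the bus-voltage states, the estimation step can be illustrated with a single weighted least-squares solve; the 3-bus measurement matrix below is a toy stand-in for a real feeder model, and the residual check merely hints at how bad or attacked data would surface.

```python
import numpy as np

# Measurement model z = H x + e, where x holds complex bus voltages (p.u.)
H = np.array([[1, 0, 0],        # PMU voltage phasor at bus 1
              [0, 1, 0],        # PMU voltage phasor at bus 2
              [2, -2, 0],       # line current 1-2 (admittance-scaled, toy values)
              [0, 3, -3]],      # line current 2-3
             dtype=complex)
x_true = np.array([1.00, 0.98 * np.exp(-1j * 0.05), 0.97 * np.exp(-1j * 0.08)])

rng = np.random.default_rng(0)
z = H @ x_true + 0.001 * (rng.normal(size=4) + 1j * rng.normal(size=4))

W = np.eye(4)  # equal measurement weights for simplicity
x_hat = np.linalg.solve(H.conj().T @ W @ H, H.conj().T @ W @ z)

# Large residuals would hint at bad or attacked data (e.g., load tripping)
residual = z - H @ x_hat
print(np.abs(x_hat), np.abs(residual))
```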
While the digitization of distribution grids through information and communications technology brings numerous benefits, it also increases the grid's vulnerability to serious cyber attacks. Unlike in conventional systems, attacks on many industrial control systems such as power grids often occur in multiple stages, with the attacker taking several consecutive steps to achieve its goal. Detection mechanisms with situational awareness are needed to detect orchestrated attack steps as part of a coherent attack campaign. To provide a foundation for detecting and preventing such attacks, this paper addresses the detection of multi-stage cyber attacks with the aid of a graph-based cyber intelligence database and an alert correlation approach. Specifically, we propose an approach that detects multi-stage attacks by leveraging heterogeneous data to form a knowledge base and employs a model-based correlation approach on the generated alerts to identify multi-stage cyber attack sequences taking place in the network. We investigate the detection quality of the proposed approach using a case study of a multi-stage cyber attack campaign in a future-oriented power grid pilot.
Authored by Ömer Sen, Chijioke Eze, Andreas Ulbig, Antonello Monti
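A highly simplified sketch of model-based alert correlation: known campaigns are encoded as paths in a directed graph, and a stream of alerts is matched against them in order. The stage names and toy campaign below are illustrative inventions, not the paper's knowledge base.

```python
import networkx as nx

# Known multi-stage campaign encoded as a directed graph of attack stages
campaign = nx.DiGraph()
campaign.add_edges_from([
    ("recon_scan", "credential_theft"),
    ("credential_theft", "lateral_movement"),
    ("lateral_movement", "breaker_command"),
])

def correlate(alerts, graph):
    """Return the longest in-order chain of alerts that follows graph edges."""
    chain = []
    for alert in alerts:
        if alert in graph and (not chain or graph.has_edge(chain[-1], alert)):
            chain.append(alert)
    return chain

alerts = ["recon_scan", "benign_noise", "credential_theft",
          "lateral_movement", "breaker_command"]
print(correlate(alerts, campaign))
# ['recon_scan', 'credential_theft', 'lateral_movement', 'breaker_command']
```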
Security is undoubtedly the most serious problem for Web applications, and SQL injection (SQLi) attacks are among the most damaging. Detecting blind SQL injection vulnerabilities is very important but, unfortunately, slow. This is because time-based blind SQL injection lacks web-page feedback, so a delay function must be set artificially and success judged by observing the page's response time. Moreover, the brute-force and binary-search methods used during injection require many web requests, so obtaining database information through blind SQL injection takes a long time. In this paper, a gated recurrent neural network-based technique is proposed to predict characters during blind SQL injection. Using a deep-learning neural language model for character-sequence prediction, the proposed method learns the regularities of common database information, predicts the next possible character from the database information obtained so far, and ranks candidates by probability. The trained model is evaluated, and experiments on a cyber range compare our method with sqlmap (currently the most advanced SQLi test automation tool). The experimental results show that our method is significantly more effective than sqlmap for time-based blind SQL injection: it obtains the target site's database information with fewer requests and runs faster.
Authored by Jiahui Zheng, Junjian Li, Chao Li, Ran Li
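In that spirit, a minimal character-level GRU language model can rank candidate next characters so higher-probability guesses are tried first, reducing the number of web requests; the tiny training corpus and hyperparameters below are placeholders, not the paper's setup.

```python
import numpy as np
import tensorflow as tf

corpus = "information_schema tables columns users admin password email id "
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}
seq_len = 8

# Build (prefix, next-char) training pairs from the corpus
X = np.array([[idx[c] for c in corpus[i:i + seq_len]]
              for i in range(len(corpus) - seq_len)])
y = np.array([idx[corpus[i + seq_len]] for i in range(len(corpus) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, verbose=0)

# Rank candidate next characters for a partially recovered identifier,
# so higher-probability guesses are tried first (fewer web requests).
prefix = "password"[:seq_len]
probs = model.predict(np.array([[idx[c] for c in prefix]]), verbose=0)[0]
print([chars[i] for i in np.argsort(probs)[::-1][:5]])
```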
Forming a secure autonomous vehicle group is extremely challenging, since we have to consider the threats to and vulnerabilities of autonomous vehicles. Existing studies focus on communications among risk-free autonomous vehicles and lack metrics to measure passenger security and cargo value. This work proposes a novel autonomous vehicle group formation method. We introduce a risk assessment score to assess passenger security and cargo value, and propose an autonomous vehicle group formation method based on it. Our vehicle group is composed of a master node and a number of core and border nodes. Finally, extensive simulation results show that our method outperforms a Connectivity Prediction-based Dynamic Clustering model and a Low-InDependently clustering architecture in terms of node survival time, average change count of master nodes, and average risk assessment score.
Authored by Jiujun Cheng, Mengnan Hou, MengChu Zhou, Guiyuan Yuan, Qichao Mao
Soft real-time applications, including multimedia, gaming, and smart appliances, rely on specific architectural characteristics to deliver output in a time-constrained fashion. Any violation of application deadlines can lower the Quality-of-Service (QoS). The data sets associated with these applications are distributed over cores that communicate via a Network-on-Chip (NoC) in multi-core systems. Accordingly, the response time of such applications depends on the worst-case latency of request/reply packets. A malicious implant such as a Hardware Trojan (HT) that initiates a delay-of-service attack can tamper with system performance. We model an HT that mounts a time-delay attack in the system by violating the path selection strategy used by the adaptive NoC router. Our analysis shows that once activated, the proposed HT increases packet latency by 17% and degrades system performance (IPC) by 18% over the baseline. Furthermore, we propose an HT detection framework that uses packet traffic analysis and path monitoring to localise the HT. Experimental results show that the proposed detection framework incurs 4.8% less power consumption and 6.4% less area than the existing technique.
Authored by Manju Rajan, Mayank Choksey, John Jose
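The path-monitoring idea can be sketched in a few lines: in a mesh NoC with minimal adaptive routing, every hop should reduce the Manhattan distance to the destination, so hops that fail to make progress are policy violations, and the router where they concentrate is the HT suspect. The coordinates and toy trace are illustrative, not the paper's actual framework.

```python
from collections import Counter

def violating_routers(traces):
    """traces: list of (dest, [hop coordinates]) observed per packet."""
    suspects = Counter()
    for dest, hops in traces:
        for cur, nxt in zip(hops, hops[1:]):
            d_cur = abs(dest[0] - cur[0]) + abs(dest[1] - cur[1])
            d_nxt = abs(dest[0] - nxt[0]) + abs(dest[1] - nxt[1])
            if d_nxt >= d_cur:          # hop failed to reduce distance: violation
                suspects[cur] += 1
    return suspects.most_common()

# One packet from (0,0) to (3,3); the router at (1,1) sends it backwards
trace = [((3, 3), [(0, 0), (1, 0), (1, 1), (1, 0), (2, 0),
                   (3, 0), (3, 1), (3, 2), (3, 3)])]
print(violating_routers(trace))   # router (1, 1) misroutes -> top suspect
```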
The security of manycore systems has become increasingly critical. In systems-on-chip (SoCs), Hardware Trojans (HTs) manipulate the functionalities of the routing components to saturate the on-chip network, degrade performance, and leak sensitive data. Existing HT detection techniques, including runtime monitoring and state-of-the-art learning-based methods, are unable to identify the implanted HTs in a timely and accurate manner, due to the increasingly dynamic and complex nature of on-chip communication behaviors. We propose AGAPE, a novel Generative Adversarial Network (GAN)-based anomaly detection and mitigation method against HTs for secured on-chip communication. AGAPE learns the distribution of the multivariate time series of a number of NoC attributes captured by on-chip sensors under both HT-free and HT-infected working conditions. The proposed GAN can learn the potential latent interactions among different runtime attributes concurrently, accurately distinguish abnormal attacked situations from normal SoC behaviors, and identify the type and location of the implanted HTs. Using the detection results, we apply the most suitable protection technique to each type of detected HT instead of simply isolating the entire HT-infected router, with the aim of mitigating security threats while reducing performance loss. Simulation results show that AGAPE enhances HT detection accuracy by 19% and reduces network latency and power consumption by 39% and 30%, respectively, compared to state-of-the-art security designs.
Authored by Ke Wang, Hao Zheng, Yuan Li, Jiajun Li, Ahmed Louri
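A condensed sketch of the GAN half of such a design is given below: a generator/discriminator pair is trained on windows of normal attribute vectors, and the discriminator's output then serves as a runtime anomaly score. The layer sizes, synthetic data, and training loop are assumptions, and AGAPE's HT type/location classification and mitigation logic are omitted.

```python
import numpy as np
import tensorflow as tf

FEATURES, LATENT = 6, 8   # e.g., 6 per-router attributes per time window

G = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(LATENT,)),
    tf.keras.layers.Dense(FEATURES),
])
D = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(FEATURES,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
D.compile(optimizer="adam", loss="binary_crossentropy")
gan = tf.keras.Sequential([G, D])
D.trainable = False                 # freeze D inside the combined model
gan.compile(optimizer="adam", loss="binary_crossentropy")

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (2000, FEATURES))        # HT-free traffic windows

for _ in range(200):
    noise = rng.normal(size=(64, LATENT)).astype("float32")
    fake = G.predict(noise, verbose=0)
    real = normal[rng.integers(0, len(normal), 64)]
    D.train_on_batch(np.vstack([real, fake]),
                     np.concatenate([np.ones(64), np.zeros(64)]))
    gan.train_on_batch(noise, np.ones(64))         # update G to fool D

# Low discriminator score => window looks unlike normal traffic => raise alarm
attacked = rng.normal(4, 1, (5, FEATURES))         # shifted, HT-like windows
print(D.predict(attacked, verbose=0).ravel())
```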
The Network-on-Chip (NoC) is the communication heart of a Multiprocessor System-on-Chip (MPSoC). It offers an efficient and scalable interconnection platform, which makes it a focal point of potential security threats. Due to design outsourcing, the NoC can be infected with a malicious circuit, known as a Hardware Trojan (HT), that leaks sensitive information or degrades the system's performance and function. An HT can pose a security threat by deliberately dropping packets from the NoC, forming a Black Hole Router (BHR) attack. This paper presents an end-to-end secure interconnection network that counters the BHR attack. The proposed scheme is energy-efficient, detecting the BHR at runtime with average throughput and energy-consumption overheads of 1% and 2%, respectively.
Authored by Luka Daoud, Nader Rafla
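The detection side of a BHR defense can be illustrated with simple end-to-end packet accounting: sources count what they inject per path, destinations acknowledge what arrives, and sustained loss implicates a dropping router. The threshold and software abstraction below are assumptions; the actual scheme operates in hardware with the small overheads quoted above.

```python
class PathMonitor:
    """End-to-end packet accounting to flag paths with abnormal loss."""

    def __init__(self, loss_threshold=0.05):
        self.sent = {}       # per-path packets injected by the source
        self.received = {}   # per-path packets acknowledged by the destination
        self.loss_threshold = loss_threshold

    def on_send(self, path):
        self.sent[path] = self.sent.get(path, 0) + 1

    def on_deliver(self, path):
        self.received[path] = self.received.get(path, 0) + 1

    def suspects(self):
        out = []
        for path, s in self.sent.items():
            loss = 1 - self.received.get(path, 0) / s
            if loss > self.loss_threshold:
                out.append((path, round(loss, 3)))
        return out

m = PathMonitor()
for _ in range(1000):
    m.on_send(("R1", "R4")); m.on_deliver(("R1", "R4"))   # healthy path
for i in range(1000):
    m.on_send(("R2", "R4"))
    if i % 3:                      # a BHR on this path silently drops ~1/3
        m.on_deliver(("R2", "R4"))
print(m.suspects())                # [(('R2', 'R4'), 0.334)]
```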
Smart security solutions are in high demand with the ever-increasing vulnerabilities within the IT domain. Adjusting to a Work-From-Home (WFH) culture has become mandatory while maintaining the required core security principles, so implementing and maintaining a secure smart home system has become even more challenging. ARGUS provides overall network security coverage for both incoming and outgoing traffic, a firewall, an adaptive bandwidth management system, and a sophisticated CCTV surveillance capability. It is implemented on an existing router, incorporating cloud and Machine Learning (ML) technology to ensure seamless connectivity across multiple devices, including IoT devices, at a low migration cost for the customer. The aggregation of these features makes ARGUS an ideal solution for existing smart home service providers and users, including deployments where hardware and infrastructure are already allocated. ARGUS was tested in a small-scale smart home environment with a Raspberry Pi 4 Model B controller. Its intrusion detection system identified intrusions with 96% accuracy, while the physical surveillance system identifies the user with 81% accuracy.
Authored by R.M. Ratnayake, G.D.N.D.K. Abeysiriwardhena, G.A.J. Perera, Amila Senarathne, R. Ponnamperuma, B.A. Ganegoda
The phenomenon known as "Internet ossification" describes the process through which certain components of the Internet's original design have become resistant to change. This presents considerable challenges to the adoption of IPv6 and makes it hard to deploy IP multicast services. For new applications such as data centers, cloud computing, and virtualized networks, the benefits sought from Internet growth include improved network availability, improved intra- and inter-domain routing, and seamless user connectivity throughout the network. Software Defined Networking (SDN) has emerged to meet these needs for the Future Internet. Compared to current networks, this new paradigm separates the control plane from the network-forwarding components. In other words, this decoupling enables control-plane software (such as an OpenFlow controller) to run on computing platforms that are substantially more powerful than traditional network equipment (such as switches and routers). This research describes routing techniques in Mininet for a virtualized software-defined network. There are two obstacles to overcome when integrating SDN in an LTE/WiFi network. The first is that external network-load monitoring tools must be used to measure QoS settings. The second is that, because of the increased demand for real-time load-balancing methods, service providers cannot readily adopt QoS-based routing. To overcome these issues, this research proposes a router configuration method. Experiments have shown that the network coefficient matrix routing arrangement works and may therefore answer the above-mentioned concerns. The Java-based SDN controller achieves, on average, a nine-fold higher peak signal-to-noise ratio than traditional routing systems. The study concludes with an outlook on the field's future. We must have a thorough understanding of this emerging paradigm to solve numerous difficulties, such as creating the Future Internet and dealing with its ossification problem. To address these issues, we first examine current technologies and a wide range of current and future SDN projects before discussing the most important issues in this field in depth.
Authored by Kumar Gopal, M Sambath, Angelina Geetha, Himanshu Shekhar
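For readers unfamiliar with the experimental platform, a minimal Mininet script of the kind such studies use is sketched below: two hosts behind two OpenFlow switches, driven by an external (e.g., Java-based) controller on the default OpenFlow port. It must run as root with Mininet installed, and the controller address and link parameters are placeholders.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.link import TCLink

net = Mininet(controller=RemoteController, link=TCLink)
net.addController("c0", ip="127.0.0.1", port=6653)   # external SDN controller

s1, s2 = net.addSwitch("s1"), net.addSwitch("s2")
h1, h2 = net.addHost("h1"), net.addHost("h2")
net.addLink(h1, s1)
net.addLink(s1, s2, bw=10, delay="5ms")              # constrained core link
net.addLink(s2, h2)

net.start()
print(net.ping([h1, h2]))   # packet loss %; QoS-aware routing is evaluated
net.stop()                  # against metrics like this under varying load
```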
Global traffic data are proliferating, including in Indonesia. The number of internet users in Indonesia reached 205 million in January 2022, meaning that 73.7% of Indonesia's population has used the internet. The median internet speed for mobile phones in Indonesia is 15.82 Mbps, while the median speed for Wi-Fi is 20.13 Mbps. As widely predicted, real-time traffic such as multimedia streaming dominates more than 79% of internet traffic. This condition poses a severe challenge for the internet network, which is required to improve the Quality of Experience (QoE) for user mobility, such as by reducing delay, data loss, and network costs. However, IP-based networks are no longer efficient at managing traffic. Named Data Networking (NDN) is a promising technology for building an agile communication model that reduces delays through a distributed and adaptive name-based data delivery approach. NDN replaces the 'where' paradigm with the concept of 'what': user requests are no longer directed to a specific IP address but to specific content. As a result, responses to content requests can be served not only by a specific server but also by the device closest to the requested data. The NDN router has a Content Store (CS) to cache data, significantly reducing delays and improving the network's Quality of Service (QoS). Motivated by this, in 2019 we began intensive research toward a national flagship product, an NDN router with functions different from those of ordinary IP routers. NDN routers have cache, forwarding, and routing functions that affect data security on name-based networks. Designing scalable NDN routers is a new challenge, as NDN requires fast hierarchical name-based lookups, per-packet state updates in the data plane, and large-scale forwarding tables. Our research team has conducted NDN research through simulation, emulation, and testbed approaches using virtual machines to find the best NDN router design before building a prototype. Research results since 2019 show that the performance of NDN-based networks is better than that of existing IP-based networks. Tests were carried out under various scenarios on the Indonesian network topology using an NDN simulator, MATLAB, Mininet-NDN, and a testbed based on virtual machines. Various network performance parameters, such as delay, throughput, packet loss, resource utilization, header overhead, packet transmission, round-trip time, and cache hit ratio, showed the best results compared to IP-based networks. In addition, the open-source NDN testbed is free, and flexible topology creation has also been demonstrated. This testbed includes all the functions needed to run an NDN network, and the resource capacity of the server used is sufficient to run a reasonably complex topology. However, bugs are still found in the testbed, and some features still need improvement. Further exploration of the NDN testbed will run more new strategy algorithms and add Artificial Intelligence (AI) to NDN functions. Using AI in caching and forwarding strategies can make the system more intelligent and more precise in making decisions according to network conditions. This will be a step toward the development of NDN router products by the Bandung Institute of Technology (ITB), Indonesia.
Authored by Nana Syambas, Tutun Juhana, Hendrawan, Eueung Mulyana, Ian Edward, Hamonangan Situmorang, Ratna Mayasari, Ridha Negara, Leanna Yovita, Tody Wibowo, Syaiful Ahdan, Galih Nurkahfi, Ade Nurhayati, Hafiz Mulya, Mochamad Budiana
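The Content Store (CS) at the heart of the router can be illustrated with a toy LRU cache keyed by content name, so Interests for recently seen Data are answered locally instead of being forwarded upstream; the capacity and eviction policy are illustrative choices, and the PIT and FIB structures of a real router are omitted.

```python
from collections import OrderedDict

class ContentStore:
    """Toy NDN Content Store: an LRU cache keyed by hierarchical content name."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.store = OrderedDict()          # name -> Data packet

    def insert(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def lookup(self, name):
        """A cache hit satisfies the Interest without forwarding."""
        if name in self.store:
            self.store.move_to_end(name)
            return self.store[name]
        return None                         # miss -> forward via FIB

cs = ContentStore(capacity=2)
cs.insert("/itb/videos/lecture1/seg0", b"...")
cs.insert("/itb/videos/lecture1/seg1", b"...")
print(cs.lookup("/itb/videos/lecture1/seg0") is not None)  # hit
cs.insert("/itb/videos/lecture1/seg2", b"...")              # evicts seg1
print(cs.lookup("/itb/videos/lecture1/seg1"))               # None (miss)
```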
Volumetric Distributed Denial-of-Service attacks forcefully disrupt the availability of online services by congesting network links with arbitrary high-volume traffic. This brute-force approach has collateral impact on the upstream network infrastructure, making early attack traffic removal a key objective. To reduce infrastructure load and maintain service availability, we introduce ReCEIF, a topology-independent mitigation strategy for early, rule-based ingress filtering leveraging deep reinforcement learning. ReCEIF utilizes hierarchical heavy hitters to monitor traffic distribution and detect subnets that are sending high-volume traffic. Deep reinforcement learning subsequently serves to refine hierarchical heavy hitters into effective filter rules that can be propagated upstream to discard traffic originating from attacking systems. Evaluating all filter rules requires only a single clock cycle when utilizing fast ternary content-addressable memory, which is commonly available in software-defined networks. To demonstrate the effectiveness of our approach, we conduct a comparative evaluation against reinforcement learning-based router throttling.
Authored by Hauke Heseding, Martina Zitterbart
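The hierarchical-heavy-hitter step can be sketched as follows: per-source byte counts are aggregated up an IP prefix hierarchy, and subnets exceeding a volume threshold become candidates that the reinforcement learner would refine into filter rules. The /8-/16-/24 hierarchy and threshold are illustrative assumptions.

```python
import ipaddress
from collections import Counter

def heavy_hitter_prefixes(flows, threshold, prefix_lens=(8, 16, 24)):
    """flows: iterable of (src_ip, bytes). Returns prefixes exceeding threshold."""
    counts = Counter()
    for src, nbytes in flows:
        ip = ipaddress.ip_address(src)
        for plen in prefix_lens:
            net = ipaddress.ip_network(f"{ip}/{plen}", strict=False)
            counts[str(net)] += nbytes   # aggregate up the prefix hierarchy
    return [(p, v) for p, v in counts.items() if v > threshold]

flows = [("203.0.113.7", 9_000_000), ("203.0.113.9", 8_000_000),
         ("198.51.100.2", 40_000)]
# 203.0.113.0/24 aggregates to 17 MB -> candidate for an upstream filter rule
print(heavy_hitter_prefixes(flows, threshold=10_000_000))
```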
DDoS attacks are usually accompanied by IP spoofing, but the availability of existing DDoS defense systems for high-speed networks decreases when facing DDoS attacks with IP spoofing. Although IP traceback technologies have been proposed to address IP spoofing in DDoS attacks, they face practical problems such as the need to change existing protocols and extensive infrastructure support. To defend against DDoS attacks under IP spoofing in high-speed networks, we propose a novel DDoS defense system, IM-Shield. IM-Shield uses the address pair consisting of the upstream router interface MAC address and the destination IP address for DDoS attack detection, and it implements fine-grained defense against DDoS attacks under IP spoofing by filtering the address pairs of attack traffic, without requiring protocol or infrastructure extensions on the Internet. Detection experiments using a public dataset show that in a 10 Gbps high-speed network the detection precision of IM-Shield for DDoS attacks under IP spoofing is higher than 99.9%, and defense experiments simulating real-time processing in a 10 Gbps high-speed network show that IM-Shield can effectively defend against DDoS attacks under IP spoofing.
Authored by Hua Wu, Xuange Zhang, Tingzheng Chen, Guang Cheng, Xiaoyan Hu
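A toy software version of the address-pair idea is shown below: traffic is counted per (upstream interface MAC, destination IP) pair, and since the last-hop MAC cannot be forged from outside the link, spoofed-source floods still collapse onto few pairs that can then be blocked. The threshold and per-interval reset are assumptions; IM-Shield itself works at 10 Gbps line rate.

```python
from collections import Counter

class PairFilter:
    """Count and block traffic per (upstream MAC, destination IP) address pair."""

    def __init__(self, pps_threshold=100_000):
        self.counts = Counter()
        self.blocked = set()
        self.pps_threshold = pps_threshold

    def observe(self, upstream_mac, dst_ip):
        pair = (upstream_mac, dst_ip)
        if pair in self.blocked:
            return False                      # drop attack traffic
        self.counts[pair] += 1
        if self.counts[pair] > self.pps_threshold:
            self.blocked.add(pair)            # install fine-grained filter rule
        return True

    def reset_interval(self):
        self.counts.clear()                   # e.g., called once per second

f = PairFilter(pps_threshold=3)
for _ in range(5):
    f.observe("aa:bb:cc:dd:ee:01", "192.0.2.10")
print(f.blocked)  # {('aa:bb:cc:dd:ee:01', '192.0.2.10')}
```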
The advance and convergence of cyber-physical systems, smart sensors, short-range wireless communications, cloud computing, and smartphone apps have driven the proliferation of Internet of Things (IoT) devices in smart homes and smart industry. In light of the high heterogeneity of IoT systems, the prevalence of system vulnerabilities in IoT devices and applications, and the broad attack surface across the entire IoT protocol stack, a fundamental and urgent research problem in IoT security is how to effectively collect, analyze, extract, model, and visualize the massive network traffic of IoT devices to understand what is happening to them. Toward this end, this paper develops and demonstrates an end-to-end system with three key components, i.e., an IoT network traffic monitoring system built on programmable home routers, a backend IoT traffic behavior analysis system in the cloud, and a frontend IoT visualization system delivered via smartphone apps, for monitoring, analyzing, and visualizing the network traffic behavior of heterogeneous IoT devices in smart homes. The main contribution of this demonstration paper is to present a novel system with an end-to-end process of collecting, analyzing, and visualizing IoT network traffic in smart homes.
Authored by Keith Erkert, Andrew Lamontagne, Jereming Chen, John Cummings, Mitchell Hoikka, Kuai Xu, Feng Wang
This paper presents a machine learning approach to detect whether an executable binary is benign or ransomware. Ransomware cybercriminals have targeted our infrastructure and businesses everywhere, directly affecting national security and daily life. Tackling ransomware threats more effectively is a big challenge. We apply a machine-learning model to classify suspected malware and identify its security level for ransomware detection and prevention, and we use feature selection during data preprocessing to improve the prediction accuracy of the model.
Authored by Chulan Gao, Hossain Shahriar, Dan Lo, Yong Shi, Kai Qian
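A small sketch of such a pipeline, with synthetic features standing in for real static-analysis attributes of executables: univariate feature selection feeds a classifier that labels a binary benign or ransomware. The number of selected features and the choice of Random Forest are assumptions, as the abstract does not name the model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))       # stand-in for entropy, imports, section stats
y = rng.integers(0, 2, 1000)          # 0 = benign, 1 = ransomware
X[y == 1, :5] += 2.0                  # make a few features informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=10)),  # feature selection step
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print(f"test accuracy: {pipe.score(X_te, y_te):.2f}")
```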
Terrorism and radicalization are major economic, political, and social issues faced by the world today. The challenges that governments and citizens face in combating terrorism are growing by the day. Artificial intelligence, including machine learning and deep learning, has shown promising results in predicting terrorist attacks. In this paper, we build a machine learning model to predict terror activities using a global terrorism database in both relational and graph form. The Neo4j Sandbox allows a graph database to be created from a relational database. We used the node2vec algorithm from the Neo4j Sandbox's graph data science library to convert the high-dimensional graph into low-dimensional vectors. Seven machine learning models were used to predict terror activities, with accuracy, precision, recall, and F1 score as the performance measures. According to our findings, Logistic Regression was the best performing model, classifying the dataset with an accuracy of 0.90, recall of 0.94, precision of 0.93, and an F1 score of 0.93.
Authored by Ankit Raj, Sunil Somani
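A rough open-source analogue of this pipeline (the paper itself uses the Neo4j Sandbox's graph data science library) embeds a graph with the `node2vec` package and classifies nodes with logistic regression; the karate-club graph and train/test split below are toy placeholders for the global terrorism database.

```python
import networkx as nx
import numpy as np
from node2vec import Node2Vec
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()                      # stand-in for the incident graph
labels = np.array([0 if G.nodes[n]["club"] == "Mr. Hi" else 1 for n in G])

# Embed nodes into low-dimensional vectors via biased random walks
n2v = Node2Vec(G, dimensions=16, walk_length=10, num_walks=50, quiet=True)
model = n2v.fit(window=5)
X = np.array([model.wv[str(n)] for n in G])

clf = LogisticRegression(max_iter=1000).fit(X[:25], labels[:25])
print(f"accuracy on held-out nodes: {clf.score(X[25:], labels[25:]):.2f}")
```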
Intelligent, smart, Cloud-based, reconfigurable manufacturing and remote monitoring all intersect in modern industry and mark the path toward more efficient, effective, and sustainable factories. Many obstacles are found along the path, including legacy machinery and technologies, security issues, and software that is often hard, slow, and expensive to adapt to unforeseen challenges and needs in this fast-changing ecosystem. Lightweight, portable, loosely coupled, easily monitored, variegated software components, supporting Edge, Fog, and Cloud computing, that can be (re)created, (re)configured, and operated remotely through Web requests in a matter of milliseconds, and that rely on libraries of ready-to-use tasks also extendable remotely through sub-second Web requests, constitute a fertile technological ground on top of which fourth-generation industries can be built. This demo shows how, starting from a completely virgin Docker Engine, it is possible to build, configure, destroy, rebuild, and operate, exclusively from remote and exclusively via API calls, computation networks that are capable of (i) raising alerts based on configured thresholds or trained ML models, (ii) transforming Big Data streams, (iii) producing and persisting Big Datasets on the Cloud, (iv) training and persisting ML models on the Cloud, (v) using trained models for one-shot or stream predictions, and (vi) producing tabular visualizations, line plots, pie charts, and histograms in real time from Big Data streams. It is also shown how easily such computation networks can be upgraded with new functionalities in real time, remotely, via API calls.
Authored by Mirco Soderi, Vignesh Kamath, John Breslin
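The remote, API-only workflow can be illustrated with the Docker Engine API via the `docker` Python SDK: a containerized task node is created, run, inspected, and destroyed purely through API calls. The image, command, and labels are placeholders for the demo's ready-to-use task components.

```python
import docker

# Local socket here; a remote engine would use, e.g., base_url="tcp://host:2375"
client = docker.from_env()

node = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", "print('stream transform node up')"],
    detach=True,
    labels={"role": "transform", "network": "demo"},
)
node.wait()                          # block until the task finishes
print(node.logs().decode())          # -> stream transform node up
node.remove()                        # tear the node down, again via API
```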
The Internet of Vehicles (IoV) is driving a rapid expansion of connected devices. This massive number of devices constantly generates massive, near-real-time data streams for numerous applications, which is known as big data. Analyzing such big data to inform, predict, and control decisions is a critical way for the IoV to enhance service quality and experience. Thus, the main goal of this paper is to study the impact of big data analytics on traffic prediction in the IoV. We use big data analytics steps to predict traffic flow based on different deep neural models, namely LSTM, CNN-LSTM, and GRU. The models are validated using the evaluation metrics MAE, MSE, RMSE, and R². A case study based on a real-world road is used to implement and test the efficiency of the traffic prediction models.
Authored by Hakima Khelifi, Amani Belouahri
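One of the compared models can be sketched compactly: an LSTM forecasts the next traffic-flow value from a sliding window of past readings and is scored with MAE, MSE, RMSE, and R², as in the paper; the sinusoidal series, window size, and hyperparameters are synthetic placeholders for the real road data.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

t = np.arange(2000, dtype="float32")
series = np.sin(t / 50) + 0.1 * np.random.default_rng(0).normal(size=2000)

W = 24  # look-back window
X = np.array([series[i:i + W] for i in range(len(series) - W)])[..., None]
y = series[W:]
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(W, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
mse = mean_squared_error(y[split:], pred)
print(f"MAE={mean_absolute_error(y[split:], pred):.3f}  MSE={mse:.3f}  "
      f"RMSE={np.sqrt(mse):.3f}  R2={r2_score(y[split:], pred):.3f}")
```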
The ubiquitous nature of Internet of Things (IoT) devices and their wide-scale deployment have remarkably attracted hackers to exploit weakly-configured and vulnerable devices, allowing them to form large IoT botnets and launch unprecedented attacks. Modeling the behavior of IoT botnets leads to a better understanding of their spreading mechanisms and of the state of the network at different stages of the attack. In this paper, we propose a generic model to capture the behavior of IoT botnets. The proposed model uses Markov chains to study botnet behavior, and the Discrete Event System Specification (DEVS) environment is used to simulate the proposed model.
Authored by Ghena Barakat, Basheer Al-Duwairi, Moath Jarrah, Manar Jaradat
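The Markov-chain idea can be illustrated by letting each device move among a few infection states and iterating the transition matrix to get the expected network state at each stage of the attack; the state set and probabilities below are illustrative, not the paper's model.

```python
import numpy as np

states = ["Susceptible", "Infected", "Patched"]
P = np.array([                  # row = current state, column = next state
    [0.90, 0.09, 0.01],         # scanning/exploitation infects susceptibles
    [0.00, 0.95, 0.05],         # some infected devices get cleaned/patched
    [0.00, 0.00, 1.00],         # patched is absorbing in this toy model
])

dist = np.array([1.0, 0.0, 0.0])    # all devices start susceptible
for step in range(1, 51):
    dist = dist @ P                 # expected state distribution after one step
    if step % 10 == 0:
        print(step, dict(zip(states, dist.round(3))))
```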