In the Smart Grid paradigm, this critical infrastructure operation is increasingly exposed to cyber-threats because of its growing dependency on communication networks. An adversary can attack power grid operation through False Data Injection (FDI) into system measurements and/or through attacks on the communication network, such as flooding the communication channels with unnecessary data or intercepting messages. A cross-layered strategy that combines power grid data, communication network monitoring, and Machine Learning-based processing is a promising approach for detecting such cyber-threats. In this paper, an implementation of an integrated cross-layer framework is presented. The advantage of such a framework is the augmentation of valuable data, which enhances the detection of anomalies in power grid operation. The IEEE 118-bus system is built in Simulink to provide a power grid testing environment, and communication network data is emulated using SimComponents. The performance of the framework is investigated under various FDI and communication attacks.
Authored by Nader Aljohani, Dennis Agnew, Keerthiraj Nagaraj, Sharon Boamah, Reynold Mathieu, Arturo Bretas, Janise McNair, Alina Zare
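The abstract above does not specify the detection model; as a rough illustration of the cross-layer idea (not the authors' implementation), the sketch below concatenates hypothetical power-grid measurement features with communication-network statistics and trains a generic supervised classifier to flag anomalous samples. The feature names, synthetic data, and model choice are all assumptions.

# Illustrative sketch only: a generic cross-layer anomaly classifier.
# Feature names, data, and the model choice are assumptions, not the
# framework described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_samples = 2000

# Hypothetical power-layer features (e.g., bus voltages, power injections)
# and communication-layer features (e.g., packet rate, retransmissions, latency).
power_feats = rng.normal(size=(n_samples, 6))
comm_feats = rng.normal(size=(n_samples, 3))
X = np.hstack([power_feats, comm_feats])

# Placeholder labels: 0 = normal operation, 1 = FDI / network attack.
y = rng.integers(0, 2, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))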
In recent years, research has focused on exploiting the inherent physical (PHY) characteristics of wireless channels to discriminate between different spatially separated network terminals, mitigating the significant costs of signature-based techniques. In this paper, the legitimacy of the corresponding terminal is first verified at the upper layers of the protocol stack, and the re-authentication process is then performed at the PHY layer. In the latter, a unique PHY-layer signature is created for each transmission based on the spatially and temporally correlated channel attributes within the coherence time interval. As part of the verification process, the PHY-layer signature can be used as a message authentication code to prove the packet's authenticity. Extensive simulation has shown the capability of the proposed scheme to support a high detection probability at low signal-to-noise ratios. In addition, a security evaluation is conducted against passive and active attacks. Computation and communication comparisons demonstrate that the proposed scheme provides superior performance compared to conventional cryptographic approaches.
Authored by Mahmoud Shawky, Qammer Abbasi, Muhammad Imran, Shuja Ansari, Ahmad Taha
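As a loose, classical analogy to the verification step described above (and not the authors' PHY-layer scheme), the toy sketch below quantizes a shared channel estimate into key material and uses it to tag and verify a packet with an HMAC. The channel model, quantizer, and noise level are placeholder assumptions.

# Toy illustration: using a (hypothetical) shared channel estimate as key
# material for a per-packet message authentication code. This is an analogy
# to the idea in the abstract, not the proposed PHY-layer signature scheme.
import hmac
import hashlib
import numpy as np

def quantize_channel(gains, bits_per_sample=2):
    """Quantize channel gain samples into a byte string (toy quantizer)."""
    levels = np.clip((gains * (2 ** bits_per_sample)).astype(int),
                     0, 2 ** bits_per_sample - 1)
    return bytes(levels.tolist())

rng = np.random.default_rng(1)
channel_tx = rng.random(16)                          # transmitter's channel estimate
channel_rx = channel_tx + rng.normal(0, 1e-6, 16)    # receiver's reciprocal estimate

key_tx = quantize_channel(channel_tx)
key_rx = quantize_channel(channel_rx)

packet = b"sensor reading 42"
tag = hmac.new(key_tx, packet, hashlib.sha256).digest()

# When the two estimates quantize to the same key (within the coherence time),
# verification succeeds.
print("authentic:", hmac.compare_digest(
    tag, hmac.new(key_rx, packet, hashlib.sha256).digest()))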
Researchers have investigated the dark web for various purposes and with various approaches. Most dark web data investigations have focused on analysing text collected from HTML pages of websites hosted on the dark web. In addition, researchers have documented work on dark web image data analysis for specific domains, such as identifying and analyzing Child Sexual Abuse Material (CSAM) on the dark web. However, image data from dark web marketplace postings and forums could also be helpful in the forensic analysis of dark web investigations. The presented work attempts to conduct image classification on classes other than CSAM. Nevertheless, manually scanning thousands of websites on the dark web for visual evidence of criminal activity is time- and resource-intensive. Therefore, the proposed work uses quantum computing to classify the images with a Quantum Convolutional Neural Network (QCNN). The authors classified dark web images into four categories: alcohol, drugs, devices, and cards. The dataset used for the work discussed in the paper consists of around 1,242 images and combines an open-source dataset with data collected by the authors. The paper discusses the implementation of the QCNN and reports related performance measures.
Authored by Ashwini Dalvi, Soham Bhoir, Irfan Siddavatam, S Bhirud
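The paper above reports a QCNN; purely to give a flavor of variational quantum classification (this is not the authors' QCNN architecture, dataset, or toolchain), the sketch below defines a small PennyLane circuit that embeds a feature vector and returns an expectation value usable as a class score. The qubit count, layer count, and feature encoding are assumptions.

# Minimal variational quantum classifier sketch (PennyLane), shown only to
# illustrate the general idea of quantum image classification. It is NOT the
# QCNN architecture from the paper; sizes and encoding are assumptions.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features, weights):
    # Encode a 4-dimensional feature vector (e.g., pooled image features).
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers act as the variational part of the model.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # The expectation value of Pauli-Z on wire 0 serves as a class score.
    return qml.expval(qml.PauliZ(0))

weights = np.random.default_rng(0).normal(size=(2, n_qubits, 3))
features = np.array([0.1, 0.5, 0.9, 0.3])
print("class score:", circuit(features, weights))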
Cyber threat intelligence (CTI) is vital for enabling effective cybersecurity decisions by providing timely, relevant, and actionable information about emerging threats. Monitoring the dark web to generate CTI is one of the upcoming trends in cybersecurity. As a result, developing CTI capabilities through dark web investigation is a significant focus for cybersecurity companies like Deepwatch, DarkOwl, SixGill, ThreatConnect, CyLance, ZeroFox, and many others. In addition, dark web marketplace (DWM) monitoring tools are of great interest to law enforcement agencies (LEAs). The fact that darknet market participants operate anonymously and online transactions are pseudo-anonymous makes it challenging to identify and investigate them. Therefore, keeping up with the DWMs poses significant challenges for LEAs today. Nevertheless, the offerings on the DWMs give LEAs insights into the dark web economy. The present work is one such attempt to describe and analyze dark web market data collected for CTI using a dark web crawler. After processing and labeling, the authors obtained data from 53 DWMs with their product listings and pricing.
Authored by Ashwini Dalvi, Gunjan Patil, S Bhirud
The value and size of information exchanged through dark-web pages are remarkable. Recently, many studies have shown the value of and interest in using machine-learning methods to extract security-related knowledge from those dark-web pages. In this scope, our goal in this research is to evaluate the best prediction models while analyzing traffic-level data coming from the dark web. Results and analysis showed that feature selection played an important role in identifying the best models; sometimes the right combination of features would increase a model's accuracy. For some feature-set and classifier combinations, Src Port and Dst Port both proved to be important features: when available, they were always selected over most other features, and when absent, many other features were selected to compensate for the information they provided. The Protocol feature was never selected, regardless of whether Src Port and Dst Port were available.
Authored by Ahmad Al-Omari, Andrew Allhusen, Abdullah Wahbeh, Mohammad Al-Ramahi, Izzat Alsmadi
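The abstract above does not specify the toolchain; a minimal sketch of the kind of feature-selection experiment described, using scikit-learn with synthetic data and column names mirroring those mentioned (Src Port, Dst Port, Protocol), might look like the following. The selector and classifier choices are assumptions.

# Sketch of a feature-selection experiment on flow-level traffic features.
# Column names mirror those in the abstract; the data is synthetic and the
# selector/classifier choices are assumptions, not the paper's setup.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "Src Port": rng.integers(1024, 65535, n),
    "Dst Port": rng.choice([80, 443, 9001, 9030], n),
    "Protocol": rng.choice([6, 17], n),
    "Flow Duration": rng.exponential(1.0, n),
    "Total Fwd Packets": rng.poisson(20, n),
})
y = rng.integers(0, 2, n)  # placeholder labels (e.g., dark-web vs. other traffic)

selector = SelectKBest(mutual_info_classif, k=3).fit(df, y)
selected = df.columns[selector.get_support()].tolist()
print("selected features:", selected)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cv accuracy:", cross_val_score(clf, df[selected], y, cv=5).mean())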
Web evolution and Web 2.0 social media tools facilitate communication and support the online economy. On the other hand, these tools are actively used by extremist, terrorist, and criminal groups. These malicious groups use new communication channels, such as forums, blogs, and social networks, to spread their ideologies, recruit new members, market their malicious goods, and raise funds. They rely on the anonymous communication methods provided by the new Web. This malicious part of the web is called the “dark web”. Dark web analysis has become an active research area in the last few decades, and multiple studies have been conducted to understand our adversary and plan countermeasures. We have conducted a systematic literature review to identify the state of the art and open research areas in dark web analysis. We filtered the available research papers to obtain the most relevant work; this filtration yielded 28 studies out of 370. Our systematic review is based on four main factors: the research trends used to analyze the dark web, the employed analysis techniques, the analyzed artifacts, and the accuracy and confidence of the available work. Our review results show that most dark web research relies on content analysis and that forum threads are the most analyzed artifacts. The most significant observation is the lack of accuracy metrics or validation techniques in most of the relevant studies. Researchers are therefore advised to consider using acceptance metrics and validation techniques in their future work in order to guarantee confidence in their study results. In addition, our review has identified some open research areas in dark web analysis that can be considered for future research work.
Authored by Tamer Abdellatif, Raed Said, Taher Ghazal
Currently, the Dark Web is one key platform for the online trading of illegal products and services. Analysing the .onion sites hosting marketplaces is of interest for law enforcement and security researchers. This paper presents a study of 123k listings obtained from 6 different Dark Web markets. While most current works leverage existing datasets, these are outdated and might not contain new products, e.g., those related to the 2020 COVID pandemic. Thus, we build a custom focused crawler to collect the data. Being able to conduct analyses on current data is of considerable importance, as these marketplaces continue to change and grow, both in terms of products offered and users. Also, several anti-crawling mechanisms are being improved, making this task more difficult and, consequently, reducing the amount of data obtained on these marketplaces in recent years. We conduct a data analysis evaluating multiple characteristics regarding the products, sellers, and markets. These characteristics include, among others, the number of sales, the existing categories in the markets, and the origin of the products and the sellers. Our study sheds light on the products and services being offered in these markets nowadays. Moreover, we have conducted a case study on one particularly productive and dynamic drug market, i.e., Cannazon. Our initial goal was to understand its evolution over time, analyzing the variation of products in stock and their prices longitudinally. We realized, though, that during the period of study the market suffered a DDoS attack that damaged its reputation and affected users' trust in it, which was a potential reason for the subsequent closure of the market by its operators. Consequently, our study provides insights regarding the last days of operation of such a productive market, and showcases the effectiveness of a potential intervention approach by means of disrupting the service and fostering mistrust.
Authored by Víctor Labrador, Sergio Pastrana
Internet technology has made surveillance widespread and access to resources easier than ever before. This boon has countless advantages, but it also makes protecting privacy more challenging for the masses while supplying anonymity to the few hacktivists. The ever-increasing frequency and scale of cyber-attacks has not only crippled private organizations but has also left Law Enforcement Agencies (LEAs) in a fix: data depict a surge in cases relating to cyber-bullying and ransomware attacks, and the force does not have adequate manpower to tackle such cases on a more microscopic level. The need is for a tool, an automated assistant, that helps security officers cut down the precious time needed in the very first phase of information gathering: reconnaissance. Confronting the surface web along with the deep and dark web is a tedious job that requires documenting the digital footprint of the perpetrator and identifying any Indicators of Compromise (IOCs). TORSION, which automates web reconnaissance using the Open Source Intelligence paradigm, extracts metadata from popular indexed social sites and un-indexed dark web onion sites, provided it has some related intel on the target. TORSION's workflow allows account matching across various top indexed sites, generating a dossier on the target, and exporting the collected metadata to a PDF file which can later be referenced.
Authored by Hritesh Sonawane, Sanika Deshmukh, Vinay Joy, Dhanashree Hadsul
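As a simplified, hypothetical illustration of the account-matching step in an OSINT workflow like the one described above (not TORSION's actual implementation), one could probe candidate profile URLs for a target username. The site list and URL templates below are illustrative assumptions.

# Hypothetical username-presence check across a few indexed sites, as a toy
# stand-in for the account-matching phase of an OSINT reconnaissance tool.
# Site list and URL templates are illustrative assumptions only.
import requests

SITE_TEMPLATES = {
    "github": "https://github.com/{u}",
    "reddit": "https://www.reddit.com/user/{u}",
    "x": "https://x.com/{u}",
}

def probe_accounts(username, timeout=5):
    """Return a dict of site -> HTTP status for candidate profile URLs."""
    results = {}
    for site, template in SITE_TEMPLATES.items():
        url = template.format(u=username)
        try:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
            results[site] = resp.status_code
        except requests.RequestException as exc:
            results[site] = f"error: {exc}"
    return results

if __name__ == "__main__":
    print(probe_accounts("exampleuser"))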
Onion Routing is an encrypted communication system developed by the U.S. Naval Research Laboratory that uses existing Internet equipment to communicate anonymously. Miscreants use this means to conduct illegal transactions on the dark web, posing a security risk to citizens and the country. Against this means of anonymous communication, website fingerprinting methods have been used in existing studies. These methods often have high overhead and need to run on high-performance devices, which makes them inflexible. In this paper, we propose a lightweight method to address the high overhead that deep-learning website fingerprinting methods generally have, so that the method can be applied on common devices while still ensuring a certain level of accuracy. The proposed method follows the structure of Inception net, divides the original larger convolutional kernels into smaller ones, and uses group convolution to reduce the model size and computation without causing too much negative impact on accuracy. The method was evaluated on the dataset collected by Rimmer et al. to verify its effectiveness.
Authored by Dingyang Liang, Jianing Sun, Yizhi Zhang, Jun Yan
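To make the architectural idea above concrete, here is a minimal PyTorch sketch of an Inception-style 1D block that replaces a large kernel with stacked small ones and uses grouped convolutions to cut parameters and computation. The channel counts, kernel sizes, trace length, and class count are assumptions, not the authors' exact network.

# Sketch of a lightweight Inception-style 1D block for traffic traces, using
# small stacked kernels and grouped convolutions to reduce parameters/FLOPs.
# Channel counts, kernel sizes, and trace length are assumptions.
import torch
import torch.nn as nn

class LightInceptionBlock(nn.Module):
    def __init__(self, in_ch=32, branch_ch=16, groups=4):
        super().__init__()
        # Branch A: two stacked 3-kernels approximate a single larger kernel.
        self.branch_a = nn.Sequential(
            nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=1, groups=groups),
            nn.ReLU(),
            nn.Conv1d(branch_ch, branch_ch, kernel_size=3, padding=1, groups=groups),
            nn.ReLU(),
        )
        # Branch B: a cheap pointwise (1x1) projection.
        self.branch_b = nn.Sequential(
            nn.Conv1d(in_ch, branch_ch, kernel_size=1),
            nn.ReLU(),
        )

    def forward(self, x):
        return torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)

model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    LightInceptionBlock(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 100),  # e.g., 100 monitored websites
)

trace = torch.randn(8, 1, 5000)  # batch of direction/timing traces
print(model(trace).shape)        # -> torch.Size([8, 100])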
The deep web refers to sites that cannot be found by search engines and makes up 96% of the digital world. The dark web is the part of the deep web that can only be accessed through specialised tools and anonymity networks. To avoid monitoring and control, communities that seek anonymity are moving to the dark web. In this work, we scrape five dark web forums and construct five graphs to model user connections. These networks are then studied and compared using data mining techniques and social network analysis tools; for each community we identify the key actors, study the social connections and interactions, observe the small-world effect, and highlight the type of discussions among the users. Our results indicate that only a small subset of users are influential, while the rapid dissemination of information and resources between users may affect behaviours and shape ideas for future members.
Authored by Sotirios Nikoletos, Paraskevi Raftopoulou
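A minimal sketch of the kind of analysis described above, building a user-interaction graph and computing key-actor and small-world indicators with NetworkX, might look like the following; the edge data is a placeholder and the paper's actual scraping and modelling choices are not reproduced.

# Sketch of building a user-interaction graph from scraped forum threads and
# computing key-actor and small-world indicators. Data is a placeholder only.
import networkx as nx

# Placeholder edges: (user_a, user_b) means the two users replied to each other.
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
         ("dave", "bob"), ("erin", "carol"), ("frank", "erin")]
G = nx.Graph(edges)

# Key actors: rank users by degree and betweenness centrality.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
print("most connected users:", sorted(degree, key=degree.get, reverse=True)[:3])
print("highest betweenness :", sorted(betweenness, key=betweenness.get, reverse=True)[:3])

# Small-world indicators: high clustering together with short average path length.
print("average clustering   :", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))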
The amount of information that is shared regularly has increased as a direct result of the rapid growth of network administration, Internet of Things (IoT) devices, and online users. Cybercriminals constantly work to gain access to the data that is stored and transferred online in order to accomplish their objectives, whether those objectives are to sell the data on the dark web or to commit another type of crime. A thorough literature analysis of the causes of and problems with wireless network security and privacy shows that a number of factors can make these networks unpredictable, particularly the evolving skills of cybercriminals and the lack of significant efforts by relevant bodies to combat them. Wireless networks have an inherent security weakness that renders them more vulnerable to attack than their wired counterparts. Additional problems arise in networks with node mobility and dynamic network topology, and intermittent availability, whether caused by mobility or by periodic node sleep, poses unanticipated problems. In addition, the limited resources of individual nodes make it difficult, if not impossible, to implement recently developed security measures. This research on wireless communication network security and privacy examines the large-scale problems that arise in wireless networks and flexible computing. Aspects of security that are taken into consideration include authentication, access control and authorization, non-repudiation, privacy and confidentiality, integrity, and auditing. Any product or service should be able to protect a client's personal information. The study follows a quality-focused, strategy-driven approach and uses a questionnaire as a research tool for IT and public-sector employees; this strategy reflects a higher level of precision among IT staff.
Authored by Hoshiyar Singh, K Balamurgan
A new privacy-preserving Laplace consensus algorithm is designed in this paper to protect users' private data. The algorithm perturbs the state-transition and information-generation functions with exponentially decaying Laplace noise to resist attacks. The mean-square consistency and privacy-protection performance are further studied. Finally, the theoretical results are verified through numerical simulations.
Authored by Yue Zhang, Xiaoya Nan, Jialing Zhou, Shuai Wang
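A minimal numerical sketch of the general idea above (not the authors' exact algorithm or analysis) is to run average consensus while adding Laplace noise whose scale decays exponentially with the iteration index, so that the noise masks initial states while the consensus error still settles. The weight matrix, noise scale, and decay factor below are assumptions.

# Toy simulation of privacy-preserving average consensus with exponentially
# decaying Laplace noise. The weight matrix W, noise scale c, and decay
# factor q are illustrative assumptions, not the parameters from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Doubly-stochastic weight matrix for a 4-node ring network.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = rng.uniform(0, 10, size=4)   # private initial states
target = x.mean()
c, q = 1.0, 0.6                  # initial noise scale and decay factor

for k in range(50):
    noise = rng.laplace(0.0, c * q ** k, size=4)  # decaying Laplace perturbation
    x = W @ (x + noise)                           # perturbed consensus update

print("final states  :", x)
print("true average  :", target)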
The healthcare industry is confronted with a slew of significant challenges, including stringent regulations, privacy concerns, and rapidly rising costs. Many leaders and healthcare professionals are looking to new technology and informatics to enable more intelligent forms of healthcare delivery. Numerous technologies have advanced during the last few decades, and over that period pharmacy has changed and grown, concentrating less on drugs and more on patients. Pharmaceutical services improve healthcare's affordability and security. The primary innovation is a cyber-infrastructure made up of smart devices that are connected to and communicate with one another. These cyber-infrastructures have a number of problems, including privacy, trust, and security. The devices form cyber-physical systems for pharmaceutical care services in p-health. At present, cyber-physical systems for pharmaceutical care services face a variety of important concerns and demanding conditions, i.e., problems and obstacles that must be overcome to create a trustworthy and effective medical system. This paper offers a thorough examination of the architectural difficulties and emerging trends of these CPS.
Authored by Swati Devliyal, Sachin Sharma, Himanshu Goyal
Considering the evolution of technology, the need to secure data is growing fast. When we turn our attention to the healthcare field, securing data and assuring privacy are critical conditions that must be met. The information is sensitive and confidential, and it is exchanged at a very fast rate. Over the years, the healthcare domain has gradually seen growing interest in the interconnection of different processes to optimize and improve the services that are provided. Therefore, we need intelligent complex systems that can collect and transport sensitive data in a secure way. These systems are called cyber-physical systems; in the healthcare domain, they are named medical cyber-physical systems. The paper presents a brief description of the above-mentioned intelligent systems. Then, we focus on wireless sensor networks, the issues and challenges that occur in securing sensitive data, and the improvements we propose on this subject. In this paper we provide a detailed overview of cyber-physical systems, medical cyber-physical systems, wireless sensor networks, and the security issues that can appear.
Authored by Balaban Béatrix-May, Sacală Ştefan, Petrescu-Niţă Alina-Claudia, Simen Radu
Cyber-physical systems (CPS) can be defined as complex networked control systems, normally developed by combining several physical components with the cyber space. CPS are already a part of our daily life, and as such they also face great potential security threats and can be vulnerable to various cyber-attacks without showing any direct sign of component failure. Protecting user security and privacy is a fundamental concern of any kind of system, whether it is a simple web application or a sophisticated professional system. Digital multi-factor authentication is one of the best ways to provide secure authentication. It covers many different areas of a cyber-connected world, including online payments, communications, and access-rights management. Most of the time, multi-factor authentication is a little complex, as it requires an extra step from users. This paper discusses the evolution from single authentication to Multi-Factor Authentication (MFA), starting from Single-Factor Authentication (SFA) and passing through Two-Factor Authentication (2FA). It seeks to analyze and evaluate the most prominent authentication techniques based on accuracy, cost, and feasibility of implementation. We also suggest several authentication schemes that incorporate multi-factor authentication for CPS.
Authored by Mangal Sain, Oloviddin Normurodov, Chen Hong, Kueh Hui
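To make the 2FA step above concrete, here is a minimal sketch of a time-based one-time password (TOTP), a common second factor, using the pyotp library. This illustrates TOTP in general and is not a scheme proposed in the paper.

# Minimal TOTP (time-based one-time password) example with pyotp, illustrating
# a common second factor in 2FA/MFA. Not a scheme from the paper.
import pyotp

# The shared secret is provisioned once (e.g., via a QR code in an authenticator app).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                      # code shown on the user's device
print("one-time code:", code)
print("verified     :", totp.verify(code))  # server-side check within the time window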
This work-in-progress paper proposes a design methodology that addresses the complexity and heterogeneity of cyber-physical systems (CPS) while simultaneously proving resilient control logic and security properties. The design methodology involves a formal methods-based approach that translates the complex control logic and security properties of a water flow CPS into timed automata. Timed automata are a formal model that describes system behaviors and properties precisely using mathematics-based logic languages. Due to the semantics used in developing the formal models, verification techniques such as theorem proving and model checking can mathematically prove the specifications and security properties of the CPS. This work-in-progress paper aims to highlight the need for formalizing plant models by creating a timed automaton of the physical portions of the water flow CPS. Extending the timed automata with control logic, network security, and privacy control processes is investigated. The final model will be formally verified to prove the design specifications of the water flow CPS and to ensure efficacy and security.
Authored by Robert Lois, Daniel Cole
Embedded devices are becoming increasingly pervasive in safety-critical systems of the emerging cyber-physical world. While trusted execution environments (TEEs), such as ARM TrustZone, have been widely deployed in mobile platforms, little attention has been given to deployment on real-time cyber-physical systems, which present a different set of challenges compared to mobile applications. For safety-critical cyber-physical systems, such as autonomous drones or automobiles, the current TEE deployment paradigm, which focuses only on confidentiality and integrity, is insufficient. Computation in these systems also needs to be completed in a timely manner (e.g., before the car hits a pedestrian), putting a much stronger emphasis on availability. To bridge this gap, we present RT-TEE, a real-time trusted execution environment. There are three key research challenges. First, RT-TEE bootstraps the ability to ensure availability using a minimal set of hardware primitives on commodity embedded platforms. Second, to balance real-time performance and scheduler complexity, we designed a policy-based event-driven hierarchical scheduler. Third, to mitigate the risks of having device drivers in the secure environment, we designed an I/O reference monitor that leverages software sandboxing and driver debloating to provide fine-grained access control on peripherals while minimizing the trusted computing base (TCB). We implemented prototypes on both ARMv8-A and ARMv8-M platforms. The system was tested on both synthetic tasks and real-life CPS applications. We evaluated a rover and a plane in simulation, and a quadcopter both in simulation and on a real drone.
Authored by Jinwen Wang, Ao Li, Haoran Li, Chenyang Lu, Ning Zhang
Networked cyber-physical systems must balance the utility of communication for monitoring and control with the risks of revealing private information. Many of these networks, such as wireless communication, are vulnerable to eavesdropping by illegitimate recipients. Obfuscation can hide information from eavesdroppers by ensuring their observations are ambiguous or misleading. At the same time, coordination with recipients can enable them to interpret obfuscated data. In this way, we propose an obfuscation framework for dynamic systems that ensures privacy against eavesdroppers while maintaining utility for legitimate recipients. We consider eavesdroppers unaware of obfuscation by requiring that their observations are consistent with the original system, as well as eavesdroppers aware of the goals of obfuscation by assuming they learn of the specific obfuscation implementation used. We present a method for bounded synthesis of solutions based upon distributed reactive synthesis and the synthesis of publicly-known obfuscators.
Authored by Andrew Wintenberg, Stéphane Lafortune, Necmiye Ozay
Modern-day cyber-infrastructures are critically dependent on each other to provide essential services. Current frameworks typically focus on the risk analysis of an isolated infrastructure. Evaluating potential disruptions across heterogeneous cyber-infrastructures is vital to identify cascading disruption vectors and determine the appropriate interventions to limit the damaging impact. This paper presents a cyber-security risk assessment framework for interconnected cyber-infrastructures. Our methodology is designed to be comprehensive, accommodating both accidental incidents and malicious cyber threats. Technically, we model the functional dependencies between the different architectures using reliability block diagrams (RBDs). RBDs are convenient yet powerful graphical diagrams which succinctly describe the functional dependence between system components. The analysis begins by selecting, from the many services produced by the synchronized operation of the architectures, a service whose disruption is deemed critical. For this service, we design an attack-fault tree (AFT). The AFT is a recent graphical formalism that combines the two popular formalisms of attack trees and fault trees. We quantify the attack-fault tree and compute the risk metrics: the probability of a disruption and the damaging impact. For this purpose, we utilize the open-source ADTool. We show the efficacy of our framework with an example outage incident.
Authored by Rajesh Kumar
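As a simplified, hypothetical illustration of quantifying such a tree (not the ADTool model from the paper), AND gates multiply the probabilities of independent child events, OR gates combine them as one minus the product of the complements, and risk is the resulting disruption probability times the impact. The tree shape, probabilities, and impact figure below are assumptions.

# Toy quantification of a small attack-fault tree. Event probabilities, tree
# shape, and the impact figure are illustrative assumptions only.
def and_gate(*probs):
    """All child events must occur (assumed independent)."""
    p = 1.0
    for pr in probs:
        p *= pr
    return p

def or_gate(*probs):
    """At least one child event occurs (assumed independent)."""
    p = 1.0
    for pr in probs:
        p *= (1.0 - pr)
    return 1.0 - p

# Leaves: an accidental component fault and a two-step malicious intrusion.
p_component_fault = 0.02
p_phishing_success = 0.10
p_malware_deploy = 0.30
p_cyber_attack = and_gate(p_phishing_success, p_malware_deploy)

# Top event: service disruption from either cause.
p_disruption = or_gate(p_component_fault, p_cyber_attack)

impact = 500_000  # assumed monetary impact of the outage
print(f"P(disruption) = {p_disruption:.4f}, risk = {p_disruption * impact:,.0f}")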
As cyber threats become highly damaging and complex, a new cybersecurity compliance certification model has been developed by the Department of Defense (DoD) to secure its Defense Industrial Base (DIB) and communication with its private partners. These partners, or contractors, are obligated by the Defense Federal Acquisition Regulation Supplement (DFARS) to be compliant with the latest standards in computer and data security. The result is the Cybersecurity Maturity Model Certification (CMMC), which is built upon the existing DFARS 252.204-7012 clause and the NIST SP 800-171 controls. As of 2020, the DoD has incorporated DFARS and the National Institute of Standards and Technology (NIST) recommended security practices into what is now the CMMC. This paper presents the most commonly identified Security Control Deficiencies (SCD) faced, the attacks mitigated by addressing these SCD, and the remediations applied to 127 DoD contractors in order to bring them into compliance with the CMMC guidelines. An analysis is done on which vulnerabilities are most prominent in the companies and which remediations were applied to ensure these vulnerabilities are better avoided and the DoD supply chain is more secure from attacks.
Authored by Vijay Sundararajan, Arman Ghodousi, Eric Dietz
In the prevailing situation, economic, industrial, cultural, social, and governmental activities are carried out in the online world. Today's world is particularly dependent on wireless technology, and protecting this information from cyber-attacks is a hard problem. The purpose of cyber-attacks is to cause damage or steal credentials; in some other cases, cyber-attacks may have military or political purposes. The resulting damage includes computer viruses, data breaches, DDoS, and other attack vectors. To this end, various organizations use diverse solutions to prevent harm caused by cyber-attacks. Cybersecurity relies on real-time information about the current IT environment. So far, numerous techniques have been proposed by researchers around the world to prevent cyber-attacks or lessen the harm caused by them. The purpose of this study is to survey and comprehensively evaluate the main advances in cybersecurity and to analyse the challenges, weaknesses, and strengths of the proposed techniques. Different types of attacks are considered in detail. In addition, various cyber-attacks were evaluated using the Kali Linux platform. It is expected that the comprehensive review provided in this study will be beneficial for students, teachers, and IT and cybersecurity researchers.
Authored by Gururaj L, Soundarya C, Janhavi V, Lakshmi H, Prassan MJ
Cyber threats can cause severe damage to computing infrastructure and systems, as well as data breaches that make sensitive data vulnerable to attackers and adversaries. It is therefore imperative to discover those threats and stop them before bad actors penetrate the information systems. Threat-hunting algorithms based on machine learning have shown great advantages over classical methods. Reinforcement learning models are becoming more accurate at identifying not only signature-based but also behavior-based threats. Quantum mechanics brings a new dimension, improving classification speed with an exponential advantage. The accuracy of AI/ML algorithms can be affected by many factors, ranging from the algorithm and the data to prejudicial or even intentional bias. As a result, AI/ML applications need to be unbiased and trustworthy. In this research, we developed a machine learning-based cyber threat detection and assessment tool. It uses a two-stage (both unsupervised and supervised learning) analysis method on 822,226 log records collected from a web server on the AWS cloud. The results show that the algorithm is able to identify threats with high confidence.
Authored by Shuangbao Wang, Md Arafin, Onyema Osuagwu, Ketchiozo Wandji
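A minimal sketch of a two-stage pipeline of this general shape, with unsupervised clustering to surface candidate anomalies followed by a supervised classifier on labeled data, is shown below. The features, labels, and model choices are assumptions and do not reproduce the authors' tool or dataset.

# Sketch of a two-stage (unsupervised + supervised) threat-detection pipeline
# on web-server log features. Feature set and model choices are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder numeric features per request: bytes sent, status class,
# requests-per-minute from the source IP, URL length.
X = rng.normal(size=(5000, 4))

# Stage 1: unsupervised clustering; unusual clusters are reviewed and labelled
# by an analyst (here, labels are simulated purely for illustration).
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
y = (clusters == clusters.min()).astype(int)  # stand-in "threat" labels

# Stage 2: supervised model trained on the analyst-labelled data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))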
Supply chain cyberattacks that exploit insecure third-party software are a growing concern for the security of the electric power grid. These attacks seek to deploy malicious software in grid control devices during the fabrication, shipment, installation, and maintenance stages, or as part of routine software updates. Malicious software on grid control devices may inject bad data or execute bad commands, which can cause blackouts and damage power equipment. This paper describes an experimental setup to simulate the software update process of a commercial power relay as part of a hardware-in-the-loop simulation for grid supply chain cyber-security assessment. The laboratory setup was successfully utilized to study three supply chain cyber-security use cases.
Authored by Joseph Keller, Shuva Paul, Santiago Grijalva, Vincent Mooney
The damage or destruction of Critical Infrastructures (CIs) affect societies’ sustainable functioning. Therefore, it is crucial to have effective methods to assess the risk and resilience of CIs. Failure Mode and Effects Analysis (FMEA) and Failure Mode Effects and Criticality Analysis (FMECA) are two approaches to risk assessment and criticality analysis. However, these approaches are complex to apply to intricate CIs and associated Cyber-Physical Systems (CPS). We provide a top-down strategy, starting from a high abstraction level of the system and progressing to cover the functional elements of the infrastructures. This approach develops from FMECA but estimates risks and focuses on assessing resilience. We applied the proposed technique to a real-world CI, predicting how possible improvement scenarios may influence the overall system resilience. The results show the effectiveness of our approach in benchmarking the CI resilience, providing a cost-effective way to evaluate plausible alternatives concerning the improvement of preventive measures.
Authored by Gonçalo Carvalho, Nadia Medeiros, Henrique Madeira, Bruno Cabral
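As a simplified, hypothetical illustration of FMECA-style scoring at a high abstraction level (not the authors' method or case-study values), each failure mode can be ranked by a risk priority number and improvement scenarios compared by how they change the aggregate score.

# Toy FMECA-style scoring: risk priority number (RPN) = severity x occurrence
# x detection, aggregated per scenario. Failure modes and scores are
# illustrative assumptions, not values from the case study.
failure_modes = [
    # (name, severity, occurrence, detection) on 1-10 scales
    ("SCADA link loss", 8, 4, 6),
    ("Sensor drift", 5, 6, 3),
    ("Backup power failure", 9, 2, 7),
]

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

baseline = sum(rpn(s, o, d) for _, s, o, d in failure_modes)

# Hypothetical improvement scenario: better monitoring lowers detection scores by 2.
improved = sum(rpn(s, o, max(1, d - 2)) for _, s, o, d in failure_modes)

print("baseline aggregate RPN:", baseline)
print("improved aggregate RPN:", improved)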
In this era of extensive automation and connectivity, modern production processes are highly prone to cyber-attacks. The sensor-controller chain becomes an obvious target for attacks because sensors are commonly used to regulate production facilities. In this research, we introduce a new control configuration for systems that are sensitive to time-delay attacks (TDA), in which data transfer from the sensor to the controller is intentionally delayed. The attackers aim to disrupt and damage the system by forcing controllers to use obsolete data about the system status. In order to improve the accuracy of delay identification and prediction, as well as error bounding and estimation for control, a new control structure is developed using an Internal Model Control (IMC)-based Proportional-Integral-Derivative (PID) scheme with a fractional filter. A Smith predictor is additionally included to mitigate the effect of the time-delay attack. Simulation studies of the proposed control framework are carried out on two numerical examples. The performance of the proposed method is assessed based on the integral square error (ISE), integral absolute error (IAE), and total variation (TV).
Authored by Vivek Kumar, Yogesh Hote
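To make the performance measures above concrete, the small sketch below simulates a first-order process under PI control with an attacker-injected measurement delay and computes ISE, IAE, and TV of the control signal. The plant model, controller gains, and delay length are illustrative assumptions, not the paper's numerical examples or its IMC-PID design.

# Toy simulation: first-order plant under PI control with a delayed measurement
# (a crude stand-in for a time-delay attack), evaluated with ISE, IAE, and TV.
# Plant model, gains, and delay length are illustrative assumptions.
import numpy as np

dt, T = 0.01, 20.0
steps = int(T / dt)
delay_steps = 50          # attacker-induced measurement delay (0.5 s)

Kp, Ki = 2.0, 1.0
y_hist = np.zeros(steps + delay_steps)
u = np.zeros(steps)
e = np.zeros(steps)
setpoint, y, integ = 1.0, 0.0, 0.0

for k in range(steps):
    y_meas = y_hist[k]                 # controller sees a delayed output
    e[k] = setpoint - y_meas
    integ += e[k] * dt
    u[k] = Kp * e[k] + Ki * integ
    y += dt * (-y + u[k])              # first-order plant: dy/dt = -y + u
    y_hist[k + delay_steps] = y

ise = np.sum(e ** 2) * dt              # integral of squared error
iae = np.sum(np.abs(e)) * dt           # integral of absolute error
tv = np.sum(np.abs(np.diff(u)))        # total variation of the control signal
print(f"ISE={ise:.3f}  IAE={iae:.3f}  TV={tv:.3f}")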