With the increasing commercialization of deep learning (DL) models, there is a growing need to protect them from illicit use. For reasons of cost and ease of deployment, it is becoming increasingly common to run DL models on third-party hardware. Although hardware mechanisms such as Trusted Execution Environments (TEEs) exist to protect sensitive data, their availability is still limited, and they are not well suited to resource-demanding tasks, like DL models, that benefit from hardware accelerators. In this work, we make model stealing more difficult by presenting a novel way to divide a DL model, placing the main part on normal infrastructure and a small part in a remote TEE, and training it using adversarial techniques. In initial experiments on image classification models for the Fashion MNIST and CIFAR 10 datasets, we observed that this obfuscation protection makes it significantly more difficult for an adversary to leverage the exposed model components.
Authored by Jakob Sternby, Bjorn Johansson, Michael Liljenstam
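The splitting idea in the abstract above can be sketched in a few lines. This is a hedged toy (class names, layer sizes, and weights are invented for illustration; the paper's actual architecture and adversarial training are not reproduced): most layers run on untrusted infrastructure, while one small layer's weights are held by a separate component standing in for the remote TEE.

```python
import random

def dense(weights, bias, x):
    """Plain dense layer: y = W.x + b, list-based, no framework needed."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

class PublicPart:
    """Layers deployed on normal (untrusted) infrastructure."""
    def __init__(self, w, b):
        self.w, self.b = w, b
    def forward(self, x):
        return [max(0.0, v) for v in dense(self.w, self.b, x)]  # ReLU

class TeePart:
    """Small final layer whose weights never leave the (simulated) TEE."""
    def __init__(self, w, b):
        self._w, self._b = w, b          # secret: not exposed to the host
    def forward(self, activations):
        return dense(self._w, self._b, activations)

random.seed(0)
public = PublicPart([[random.uniform(-1, 1) for _ in range(4)]
                     for _ in range(3)], [0.0, 0.0, 0.0])
tee = TeePart([[0.5, -0.2, 0.1], [0.3, 0.9, -0.4]], [0.1, -0.1])

x = [1.0, 0.5, -0.5, 2.0]
logits = tee.forward(public.forward(x))  # inference crosses the trust boundary once
print(len(logits))  # 2 output classes
```

An adversary who steals only `PublicPart` obtains an incomplete model; the point of the paper's adversarial training is to make that exposed part as useless as possible on its own.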
Proactive approaches to security, such as adversary emulation, leverage information about threat actors and their techniques (Cyber Threat Intelligence, CTI). However, most CTI still comes in unstructured forms (i.e., natural language), such as incident reports and leaked documents. To support proactive security efforts, we present an experimental study on the automatic classification of unstructured CTI into attack techniques using machine learning (ML). We contribute two new datasets for CTI analysis, and we evaluate several ML models, including both traditional and deep learning-based ones. We present several lessons learned about how ML can perform at this task, which classifiers perform best and under which conditions, what the main causes of classification errors are, and the challenges ahead for CTI analysis.
Authored by Vittorio Orbinato, Mariarosaria Barbaraci, Roberto Natella, Domenico Cotroneo
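The classification task described above can be illustrated with a minimal text classifier. This sketch is purely illustrative (the snippets, labels, and the nearest-centroid scheme are invented here; the paper evaluates far richer traditional and deep models): CTI sentences are reduced to bags of words and matched against per-technique centroids.

```python
from collections import Counter

# toy training set: CTI snippets with invented attack-technique labels
train = [
    ("attacker sent spearphishing email with malicious attachment", "phishing"),
    ("phishing link delivered via email lure", "phishing"),
    ("credentials dumped from lsass process memory", "credential-dumping"),
    ("tool dumped password hashes from memory", "credential-dumping"),
]

def bow(text):
    """Bag-of-words representation of a snippet."""
    return Counter(text.lower().split())

def centroid(docs):
    """Sum the word counts of all snippets with the same label."""
    total = Counter()
    for d in docs:
        total += bow(d)
    return total

centroids = {label: centroid([x for x, y in train if y == label])
             for label in {y for _, y in train}}

def classify(text):
    v = bow(text)
    def overlap(c):  # unnormalized dot-product score against a centroid
        return sum(v[w] * c[w] for w in v)
    return max(centroids, key=lambda lbl: overlap(centroids[lbl]))

print(classify("victim opened a phishing email attachment"))  # phishing
```

Real systems would normalize term weights (e.g. TF-IDF) and handle the heavy class imbalance the paper discusses, but the pipeline shape (text, vectorize, score per technique) is the same.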
Security and privacy are becoming a challenge due to the rapid development of technology. In 2021, Khan et al. proposed an authentication and key agreement framework for smart grid networks and claimed that the proposed protocol provides security against all well-known attacks. However, in this paper, we present an analysis showing that the protocol proposed by Khan et al. fails to protect the secrecy of the session key shared between the user and the service provider. An adversary can derive the session key online by intercepting the communicated messages under the Dolev-Yao threat model. We simulated Khan et al.'s protocol for formal security verification using the Tamarin prover and found a trace for deriving the temporary key, which is used to encrypt the login request that includes the user's secret credentials. Hence, the protocol also fails to preserve the privacy of the user's credentials, and any adversary can therefore impersonate the user. As a result, the protocol proposed by Khan et al. is not suitable for practical applications.
Authored by Singam Ram, Vanga Odelu
Security evaluation can be performed using a variety of analysis methods, such as attack trees, attack graphs, threat propagation models, and stochastic Petri nets. These methods analyze the effect of attacks on the system and estimate security attributes from different perspectives. However, they require information from experts in the application domain to properly capture the key elements of an attack scenario: i) the attack paths a system could be subject to, and ii) the different characteristics of the possible adversaries. For this reason, some recent works have focused on the generation of low-level security models from a high-level description of the system, hiding the technical details from the modeler.
Authored by Francesco Mariotti, Matteo Tavanti, Leonardo Montecchi, Paolo Lollini
Traditional threat modeling methodologies work well on a small scale, when evaluating targets such as a data field, a software application, or a system component, but they do not allow for comprehensive evaluation of an entire enterprise architecture. Nor do they enumerate and consider a comprehensive set of actual threat actions observed in the wild. Because of the lack of adequate threat modeling methodologies for determining cybersecurity protection needs at enterprise scale, cybersecurity executives and decision makers have traditionally relied on marketing pressure as the main input into decisions about investments in cybersecurity capabilities (tools). A new methodology, originally developed by the Department of Defense and further expanded by the Department of Homeland Security, for the first time allows a threat-based, end-to-end evaluation of cybersecurity architectures and the determination of gaps or areas in need of future investment. Although in the public domain, this methodology has not been used outside the federal government. This paper examines the new threat modeling approach, which allows organizations to look at their cybersecurity protections from the standpoint of an adversary. The methodology enumerates threat actions that have been observed in the wild using a cyber threat framework and scores cybersecurity architectural capabilities on their ability to protect against, detect, and recover from each threat action. The results of the analysis form a matrix, called a capability coverage map, that visually represents the coverage, gaps, and overlaps against threat actions. The threat actions can be further prioritized using a threat heat map, a visual representation of the prevalence and maneuverability of threat actions that can be overlaid on top of a coverage map.
Authored by Branko Bokan, Joost Santos
Network Intrusion Detection Systems (NIDS) monitor networking environments for suspicious events that could compromise the availability, integrity, or confidentiality of the network's resources. To ensure NIDSs play their vital roles, it is necessary to identify how they can be attacked by adopting a viewpoint similar to the adversary's, in order to identify vulnerabilities and gaps in defenses. Accordingly, effective countermeasures can be designed to thwart any potential attacks. Machine learning (ML) approaches have been widely adopted for network anomaly detection. However, ML models have been found to be vulnerable to adversarial attacks. In such attacks, subtle perturbations are inserted into the original inputs at inference time to evade the classifier's detection, or at training time to degrade its performance. Yet, modeling adversarial attacks and the associated threats of employing machine learning approaches for NIDSs has not been addressed. One of the growing challenges is to manage the diversity of ML-based systems and ensure their security and trust. In this paper, we conduct threat modeling for ML-based NIDSs using the STRIDE and attack tree approaches to identify potential threats at different levels. We model the threats that can potentially be realized by exploiting vulnerabilities in ML algorithms through a simplified structural attack tree. To provide holistic threat modeling, we apply the STRIDE method to the system's data flow to uncover further technical threats. Our models revealed 46 possible threats to consider. These models can help in understanding the different ways an ML-based NIDS can be attacked, so that hardening measures can be developed to prevent these potential attacks from achieving their goals.
Authored by Huda Alatwi, Charles Morisset
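The attack tree idea used above can be sketched with a small data structure. The node names below are hypothetical examples (the paper's actual tree of 46 threats is not reproduced): leaves are atomic attack steps, AND gates require all children, OR gates require any.

```python
class Node:
    """One node of an attack tree: a leaf step or an AND/OR gate."""
    def __init__(self, name, gate=None, children=(), achievable=False):
        self.name, self.gate = name, gate
        self.children, self.achievable = list(children), achievable

    def evaluate(self):
        """True if this (sub)goal is achievable given the leaf settings."""
        if not self.children:            # leaf: atomic attack step
            return self.achievable
        results = [c.evaluate() for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

# hypothetical tree: evading an ML-based NIDS
root = Node("evade ML-NIDS", "OR", [
    Node("poison training data", "AND", [
        Node("access training pipeline", achievable=False),
        Node("inject crafted samples", achievable=True),
    ]),
    Node("evasion at inference", "AND", [
        Node("query model for feedback", achievable=True),
        Node("craft adversarial perturbation", achievable=True),
    ]),
])

print(root.evaluate())  # True: the inference-time branch is fully achievable
```

Toggling leaf `achievable` flags models which defenses are in place, which is how such trees support the "hardening measures" analysis the abstract mentions.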
The number of Internet of Things (IoT) devices being deployed into networks is growing at a phenomenal pace, which makes IoT networks more vulnerable in the wireless medium. Advanced Persistent Threats (APTs) are malicious to most network facilities, and the attack data available for training machine learning-based Intrusion Detection Systems (IDS) is limited compared to normal traffic. It is therefore quite challenging to enhance detection performance in order to mitigate the influence of APTs. To this end, Prior Knowledge Input (PKI) models are proposed and tested using the SCVIC-APT2021 dataset. To obtain prior knowledge, the proposed PKI model pre-classifies the original dataset with an unsupervised clustering method. The obtained prior knowledge is then incorporated into the supervised model to decrease training complexity and assist the supervised model in determining the optimal mapping between the raw data and the true labels. The experimental findings indicate that the PKI model outperforms the supervised baseline, with a best macro-average F1-score of 81.37%, which is 10.47% higher than the baseline.
Authored by Yu Shen, Murat Simsek, Burak Kantarci, Hussein Mouftah, Mehran Bagheri, Petar Djukic
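The prior-knowledge pipeline described above can be sketched as follows. All values are toy assumptions (feature names, cluster count, and the hand-rolled k-means are illustrative; the paper's actual clustering method and features are not reproduced): an unsupervised pass assigns each sample a cluster id, which is appended as an extra feature for the downstream supervised model.

```python
import math
import random

def kmeans(points, k, iters=20, seed=1):
    """Tiny k-means: returns k cluster centers (stdlib only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, centers[i]))
            groups[i].append(p)
        centers = [
            [sum(c) / len(g) for c in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

def assign(p, centers):
    """Cluster id of the nearest center."""
    return min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))

# toy traffic features [packet_rate, mean_size] with two obvious regimes
normal = [[1.0 + 0.1 * i, 5.0] for i in range(5)]
attack = [[9.0 + 0.1 * i, 1.0] for i in range(5)]
centers = kmeans(normal + attack, k=2)

# prior knowledge input: raw features plus the cluster id, fed to the
# supervised stage in place of the raw features alone
sample = [9.2, 1.1]
enriched = sample + [assign(sample, centers)]
print(len(enriched))  # 3: two raw features plus the cluster-id feature
```

The supervised classifier then trains on `enriched` vectors, which is the sense in which clustering supplies "prior knowledge" to the supervised model.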
The last decade witnessed a gradual shift from cloud-based computing towards ubiquitous computing, which has put every element of the computing ecosystem, including devices, data, networks, and decision making, at greater security risk. Indeed, emerging pervasive computing paradigms have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness, the underlying mechanics that enable intelligent functions, and the deeply integrated physical and cyber spaces. Furthermore, interconnected computing environments now exhibit many unconventional characteristics that mandate a radical change in security engineering tools. This need is further exacerbated by the rapid emergence of new Advanced Persistent Threats (APTs) that target critical infrastructures and aim to stealthily undermine their operations in innovative and intelligent ways. To enable system and network designers to prepare for this new wave of dangerous threats, this paper overviews recent APTs in emerging computing systems and proposes a new approach to APTs that is more tailored towards such systems compared to traditional IT infrastructures. The proposed APT lifecycle will inform security decisions and implementation choices in future pervasive networked systems.
Authored by Talal Halabi, Aawista Chaudhry, Sarra Alqahtani, Mohammad Zulkernine
Currently, there are no mission-capable systems that can successfully detect advanced persistent threats (APTs). These types of threats are especially hazardous in critical infrastructures (CIs). Due to the integration of operational technology (OT) and information and communication technology (ICT), CI systems are particularly vulnerable to cyberattacks. In addition, power systems in particular are an attractive target for attackers, as they are responsible for the operation of modern infrastructures and are thus of great importance for modern warfare or even for the strategic purposes of other criminal activities. Virtual power plants (VPPs) are a new implementation of power plants for energy management. The protection of virtual power plants against APTs is not yet sufficiently researched. This circumstance raises the research question: what might an APT detection system architecture for VPPs look like? Our methodology is based on intensive literature research to bundle knowledge from different sub-areas in order to solve a superordinate problem. After the literature review and domain analysis, a synthesis of new knowledge is provided in the presentation of a possible architecture. The in-depth proposal for a potential system architecture relies on the study of VPPs, APTs, and previous prevention mechanisms. The architecture is then evaluated for its effectiveness against the identified challenges.
Authored by Robin Buchta, Felix Heine, Carsten Kleiner
Traditional defense methods can only evaluate a single security element and cannot determine the threat of an Advanced Persistent Threat (APT) from multi-source data. This paper proposes a network security situation awareness (NSSA) model, based on a knowledge graph, to capture the network situation under APT attacks. First, a vulnerability knowledge graph and an APT attack knowledge graph are constructed using public security databases and ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge), and the targeted knowledge graph APT-NSKG is obtained by combining the two using Bidirectional Encoder Representations from Transformers (BERT). Then, according to the Endsley model and the characteristics of APTs, the NSSA model for APTs is proposed. The model uses APT-NSKG to obtain situation elements and then comprehensively assesses and predicts the network situation from the network asset, vulnerability, security, and threat dimensions. Finally, the effectiveness of the model is verified using data from the U.S. Cybersecurity and Infrastructure Security Agency.
Authored by Kai Chen, Jingxian Zhu, Lansheng Han, Shenghui Li, Pengyi Gao
The paper focuses on the application of System Dynamics Modelling (SDM) for simulating socio-technical vulnerabilities of Advanced Persistent Threats (APTs), unraveling Human-Computer Interaction (HCI) for strategic visibility of threat actors. SDM has been widely applied to analyze nonlinear, complex, and dynamic systems in the social sciences and technology. However, its application in the cyber security domain, especially to APTs that involve complex and dynamic human-computer interaction, is a promising but scant research area. While HCI deals with the interaction between one or more humans and one or more computers for greater usability, this same interactive process is exploited by the APT actor. In this respect, using a data breach case study, we applied the socio-technical vulnerabilities classification as a theoretical lens to model social and technical vulnerabilities with system dynamics using the Vensim software. The variables leading to the breach were identified, entered into Vensim, and simulated. The results demonstrated an optimal interactive mix of one or more of the six social variables and three technical variables leading to the data breach. The SDM approach thus provides insights into the dynamics of the threat and sheds light on strategies for minimizing APT risks. This can assist in reducing the attack surface and reinforcing mitigation efforts (prior to exfiltration) should an APT attack occur. In this paper, we thus propose and validate the application of the system dynamics approach for designing a dynamic threat assessment framework for socio-technical vulnerabilities of APTs.
Authored by Mathew Nicho, Shini Girija
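A system-dynamics model of the kind described above boils down to stocks and flows stepped over time. The stock, flow names, and rates below are invented (the paper's Vensim model is not reproduced): a single "unmitigated vulnerabilities" stock rises with an introduction inflow and falls with a remediation outflow, integrated with Euler steps as SDM tools do internally.

```python
def simulate(steps, dt, intro_rate, mitigation_fraction, stock=0.0):
    """Euler-step a one-stock system-dynamics model; returns the trajectory."""
    history = [stock]
    for _ in range(steps):
        inflow = intro_rate                     # new socio-technical vulns per step
        outflow = mitigation_fraction * stock   # proportional remediation effort
        stock += dt * (inflow - outflow)
        history.append(stock)
    return history

h = simulate(steps=50, dt=1.0, intro_rate=2.0, mitigation_fraction=0.25)
# the stock approaches the equilibrium intro_rate / mitigation_fraction = 8
print(round(h[-1], 2))  # 8.0
```

Varying `intro_rate` and `mitigation_fraction` is the toy analogue of varying the social and technical variables in the Vensim simulation to see which mixes drive the breach stock up or down.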
Advanced persistent threat (APT) attacks have caused severe damage to many core information infrastructures. To tackle this issue, graph-based methods have been proposed due to their ability to learn complex interaction patterns of network entities from discrete graph snapshots. However, such methods are challenged by the computer networking model, which is naturally a continuous-time dynamic heterogeneous graph. In this paper, we propose a heterogeneous graph neural network-based APT detection method for smart grid clouds. Our model has an encoder-decoder structure. The encoder uses heterogeneous temporal memory and attention embedding modules to capture contextual information about the interactions of network entities from the time and spatial dimensions, respectively. We implement a prototype and conduct extensive experiments on real-world cyber-security datasets with more than 10 million records. Experimental results show that our method achieves superior detection performance compared to state-of-the-art methods.
Authored by Weiyong Yang, Peng Gao, Hao Huang, Xingshen Wei, Haotian Zhang, Zhihao Qu
With the proliferation of Low Earth Orbit (LEO) spacecraft constellations comes the rise of space-based wireless cognitive communications systems (CCS) and the need to safeguard and protect data against potential hostiles to maintain widespread communications enabling science, military, and commercial services. For example, known adversaries are using advanced persistent threats (APTs), or highly progressive intrusion mechanisms, to target high-priority wireless space communication systems. Specialized threats continue to evolve with the advent of machine learning and artificial intelligence, where computer systems can inherently identify system vulnerabilities more expeditiously than naive human threat actors due to increased processing resources and unbiased pattern recognition. This paper presents a disruptive abuse case for an APT attack on such a CCS and describes a trade-off analysis that was performed to evaluate a variety of machine learning techniques that could aid in the rapid detection and mitigation of an APT attack. The trade results indicate that with the employment of neural networks, the CCS's resiliency and operational functionality would increase, and therefore the reliability of on-demand communication services would improve. Further, modeling, simulation, and analysis (MS&A) was performed using the Knowledge Discovery and Data Mining (KDD) Cup 1999 dataset as a means to validate a subset of the trade study results against the Training Time and Number of Parameters selection criteria. Training and cross-validation learning curves were computed to model learning performance over time and yield a reasonable conclusion about the application of neural networks.
Authored by Suzanna LaMar, Jordan Gosselin, Lisa Happel, Anura Jayasumana
Counteracting the most dangerous attacks, advanced persistent threats, is a pressing problem for modern enterprises. These threats are usually aimed not only at information resources but also at the software and hardware resources of automated systems in industrial plants. As a rule, attackers use a number of methods, including social engineering. The article is devoted to developing methods for the timely prevention of advanced persistent threats based on an analysis of attackers' tactics. Special attention is paid to methods for detecting provocations of the modernization of protection systems, as well as methods for monitoring the state of the resources of the main automated system. A technique for identifying suspicious changes in these resources is also considered. Applying this set of methods will help increase the protection level of automated systems' resources.
Authored by Nataliya Kuznetsova, Tatiana Karlova, Alexander Bekmeshov
Data management systems in smart grids have to address advanced persistent threats (APTs), where malware injection methods are performed by the attacker to launch stealthy attacks and thus steal more data for illegal advantages. In this paper, we present a hierarchical deep reinforcement learning based APT detection scheme for smart grids, which enables the control center of the data management system to choose the APT detection policy to reduce the detection delay and improve the data protection level without knowing the attack model. Based on the state that consists of the size of the gathered power usage data, the priority level of the data, and the detection history, this scheme develops a two-level hierarchical structure to compress the high-dimensional action space and designs four deep dueling networks to accelerate the optimization speed with less over-estimation. Detection performance bound is provided and simulation results show that the proposed scheme improves both the data protection level and the utility of the control center with less detection delay.
Authored by Shi Yu
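The dueling networks mentioned in the abstract above rest on a simple decomposition that can be shown directly. The numbers and the action interpretation are invented (the paper's networks, state, and two-level action space are not reproduced): each Q-value is the sum of a state value V and a per-action advantage A, with the advantage mean subtracted for identifiability, which is part of how dueling architectures reduce over-estimation.

```python
from statistics import mean

def dueling_q(state_value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')."""
    avg = mean(advantages)
    return [state_value + a - avg for a in advantages]

# hypothetical detection actions, e.g. which data priority level to scan next
q = dueling_q(state_value=2.0, advantages=[0.5, -0.5, 0.0])
print(q)  # [2.5, 1.5, 2.0]

best_action = max(range(len(q)), key=q.__getitem__)
print(best_action)  # 0: the highest-advantage action
```

In a full implementation, V and A come from two heads of one deep network; the scheme's two-level hierarchy would then narrow which advantages are even computed for a given state.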
To meet the high safety and reliability requirements of today's power transformers, advanced online diagnosis systems using seamless communications and information technologies have been developed, which raises growing cybersecurity concerns. This paper provides practical attack models for breaching a power transformer diagnosis system (PTDS) in a digital substation via advanced persistent threats (APTs) and proposes a security testbed for developing future security-built-in PTDSs that withstand APTs. The proposed security testbed includes: 1) a real-time substation power system simulator, 2) a real-time cyber system, and 3) penetration testing tools. Several real cyber-attacks are generated, and their impact on a digital substation is presented to validate the feasibility of the proposed security testbed. The proposed PTDS-focused security testbed will be used to develop self-safe defense strategies against malicious cyber-attacks in a digital substation environment.
Authored by Seerin Ahmad, BoHyun Ahn, Syed. Alvee, Daniela Trevino, Taesic Kim, Young-Woo Youn, Myung-Hyo Ryu
Social networks are good platforms for like-minded people to exchange their views and thoughts. With the rapid growth of web applications, social networks have become huge networks with millions of users. On the other hand, the number of malicious activities by untrustworthy users has also increased. Users must estimate people's trustworthiness before sharing their personal information with them. Since social networks are huge and complex, estimating a user's trust value is not a trivial task and has gained researchers' attention. Some mathematical methods have been proposed to estimate user trust values, but they still lack efficient ways to analyze user activities. In this paper, “An Efficient Trust Computation Methods Using Machine Learning in Online Social Networks- TCML” is proposed. Twitter user activities are considered to estimate a user's direct trust value. The trust values of unknown users are computed through the recommendations of common friends. The available Twitter dataset is unlabeled, hence unsupervised methods are used to categorize (cluster) users and to compute their trust values. In the experimental results, the silhouette score is used to assess cluster quality. The proposed method's performance is compared with existing methods such as mole and tidal, which it outperforms.
Authored by Anitha Yarava, Shoba Bindu
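The cluster-quality check mentioned above is easy to make concrete. The user features below are invented (the paper's Twitter activity features and clustering method are not reproduced); the silhouette computation itself is the standard definition, implemented with the stdlib only.

```python
import math
from statistics import mean

def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) averaged over points,
    where a is mean intra-cluster distance and b is the nearest other
    cluster's mean distance."""
    scores = []
    for i, p in enumerate(points):
        same = [math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = mean(same) if same else 0.0
        other = [mean(math.dist(p, q) for j, q in enumerate(points)
                      if labels[j] == lbl)
                 for lbl in set(labels) - {labels[i]}]
        b = min(other)
        scores.append((b - a) / max(a, b))
    return mean(scores)

# toy users as [tweets_per_day, replies_per_day]: two well-separated groups
users = [[1.0, 0.5], [1.2, 0.4], [0.9, 0.6],
         [8.0, 4.0], [8.5, 3.8], [7.9, 4.2]]
labels = [0, 0, 0, 1, 1, 1]
print(silhouette(users, labels) > 0.8)  # well-separated clusters score near 1
```

A score near 1 indicates tight, well-separated clusters; scores near 0 or below would signal that the trust categorization of users is unreliable.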
We analyze a dataset from Twitter of misinformation related to the COVID-19 pandemic. We consider this dataset from the intersection of two important but, heretofore, largely separate perspectives: misinformation and trust. We apply existing direct trust measures to the dataset to understand its topology, and to better understand if and how trust relates to the spread of misinformation online. We find evidence of small-worldness in the misinformation trust network; outsized influence from broker nodes; a digital fingerprint that may indicate when a misinformation trust network is forming; and a positive relationship between greater trust and the spread of misinformation.
Authored by Bryan Boots, Steven Simske
The new Web 3.0, or Web3, is a distributed web technology mainly operated by decentralized blockchain and artificial intelligence. Web 3.0 technologies bring changes to Industry 4.0, especially the business sector. The contribution of this paper is to discuss the new Web 3.0 (not the Semantic Web) and to explore the essential factors of the new Web 3.0 technologies in business and industry based on the 7 layers of the decentralized web. These layers have users, interface, application, execution, settlement, data, and social as their main components; the concept of the 7 layers of the decentralized web was introduced by Polynya. This research was carried out using the SLR (Systematic Literature Review) methodology to identify the factors by analyzing high-quality papers in the Scopus database. We found 21 essential factors: Distributed, Real-time, Community, Culture, Productivity, Efficiency, Decentralized, Trust, Security, Performance, Reliability, Scalability, Transparency, Authenticity, Cost Effective, Communication, Telecommunication, Social Network, Use Case, and Business Simulation. We also present the opportunities and challenges of these 21 factors in business and industry.
Authored by Calvin Vernando, Hendry Hitojo, Randy Steven, Meyliana, Surjandy
The large amount of information generated on the web is useful for extracting patterns about customers and their purchases. Recommender systems provide a framework to utilize this information to make suggestions to users according to their previous preferences. They are intelligent systems with decision-making capabilities, which in turn enhances business profit. Recommender systems suffer from problems such as cold start, fake profile generation, and data sparsity. Incorporating trust into recommender systems helps to alleviate these problems to a great extent. The phenomenon of trust is derived from daily life experiences, like believing the views and reviews suggested by friends and relatives when buying new things. The aim of this paper is to survey how trust can be incorporated into recommender systems and the advantages trust-aware recommender systems have over traditional ones. It highlights the techniques that have been used to develop trust-aware recommenders, along with the pros and cons of these techniques.
Authored by Megha Raizada
Nowadays, Recommender Systems (RSs) have become the indispensable solution to the problem of information overload in many different fields (e-commerce, e-tourism, ...) because they offer customers more adapted and increasingly personalized services. In this context, collaborative filtering (CF) techniques are used by many RSs since they make it easier to provide recommendations of acceptable quality by leveraging the preferences of similar user communities. However, these techniques suffer from the sparsity of user evaluations, especially during the cold start phase. Indeed, the process of searching for similar neighbors may not succeed due to insufficient data in the user-item rating matrix (the case of a new user or new item). To solve this kind of problem, the literature offers several solutions that overcome the insufficiency of the data through the social relations between users. These solutions can provide good-quality recommendations even when data is sparse because they permit an estimation of the level of trust between users. This type of metric is often used in the tourism domain to support the computation of similarity measures between users, producing valuable POI (point of interest) recommendations through a better trust-based neighborhood. However, the difficulty of obtaining explicit trust data from the social relationships between tourists leads researchers to infer this data implicitly from user-item relationships (implicit trust). In this paper, we first survey the state of the art of CF techniques that can be used to reduce the data sparsity problem during the RS cold start phase. Second, we propose a method that essentially relies on user trustworthiness inferred from scores computed from users' ratings of items. Finally, we explain how these relationships, deduced from existing social links between tourists, might be employed as additional sources of information to mitigate the cold start problem.
Authored by Sarah Medjroud, Nassim Dennouni, Mhamed Henni, Djelloul Bettache
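The implicit-trust idea described above can be sketched with a simple agreement score. The formula, scale, and ratings below are illustrative assumptions (the paper's exact trust score is not reproduced): trust between two tourists is inferred from how closely their ratings agree on commonly rated POIs.

```python
RATING_SCALE = 4  # maximum possible rating difference on a 1..5 scale

def implicit_trust(ratings_u, ratings_v):
    """Agreement-based implicit trust in [0, 1]; 0 if no co-rated items."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    agreement = sum(1 - abs(ratings_u[i] - ratings_v[i]) / RATING_SCALE
                    for i in common)
    return agreement / len(common)

# hypothetical POI ratings for three tourists
alice = {"louvre": 5, "eiffel": 4, "orsay": 2}
bob = {"louvre": 5, "eiffel": 3, "notredame": 4}
carol = {"louvre": 1, "orsay": 5}

print(round(implicit_trust(alice, bob), 3))    # 0.875: similar tastes
print(round(implicit_trust(alice, carol), 3))  # 0.125: opposite ratings
```

A trust-based neighborhood would then prefer Bob's ratings over Carol's when recommending POIs to Alice, which is exactly how implicit trust substitutes for missing explicit social links during cold start.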
The internet has made everything convenient. Through the world wide web, it has almost single-handedly transformed the way we live our lives. In doing so, we have become so fuelled by cravings for fast and cheap web connections that we find it difficult to take in the bigger picture. It is widely documented that we need a safer and more trusting internet, but few know or agree on what this actually means. This paper introduces a new body of research that explores whether there needs to be a fundamental shift in how we design and deliver these online spaces. In detail, the authors suggest the need for an internet security aesthetic that opens up the internet (from end to end) to fully support the people using it. Going forward, this research highlights that social trust needs to be a key concern in defining the future value of the internet.
Authored by Fiona Carroll, Rhyd Lewis
Web technologies have created a worldwide web of problems and cyber risks for individuals and organizations. In this paper, we evaluate web technologies, presenting the different technologies and their positive impacts on individuals and business sectors. We also present a cyber-criminal metrics engine for determining attacks on the weaknesses of web technology platforms. Finally, this paper offers a cautionary note to protect Small and Medium Businesses (SMBs) and makes recommendations to help minimize cyber risks and save individuals and organizations from cyberattack distress.
Authored by Olumide Malomo, Shanzhen Gao, Adeyemi Adekoya, Ephrem Eyob, Weizheng Gao
To improve the security and reliability of remote terminals on a trusted cloud platform, an identity authentication model based on DAA optimization is proposed. By introducing a trusted third-party CA, the scheme issues a cross-domain DAA certificate to trusted platforms that need cross-domain authentication. Then, privacy CA isolation measures are taken to improve the security of the platform, so that the authentication scheme can be used for identity authentication when ordinary users log in to a host equipped with a TPM chip. Finally, a trusted computing platform environment is established, and the performance load distribution and total performance load of each entity in the DAA protocol, measured in machine cycles, can be acquired through experimental analysis. The results show that the scheme meets the requirements of anonymity, time cost, and cross-domain authentication on a trusted cloud computing platform, and it is a useful supplement and extension to existing theories of web service security.
Authored by Yi Liang, Youyong Chen, Xiaoqi Dong, Changchao Dong, Qingyuan Cai
Federated Data-as-a-Service systems are helpful in applications that require dynamic coordination of multiple organizations, such as maritime search and rescue, disaster relief, or contact tracing of an infectious disease. In such systems it is often the case that users cannot be wholly trusted, and access control conditions need to take the level of trust into account. Most existing work on trust-based access control in web services focuses on a single aspect of trust, like user credentials, but trust often has multiple aspects such as users’ behavior and their organization. In addition, most existing solutions use a fixed threshold to determine whether a user’s trust is sufficient, ignoring the dynamic situation where the trade-off between benefits and risks of granting access should be considered. We have developed a Multi-aspect and Adaptive Trust-based Situation-aware Access Control Framework we call “MATS” for federated data sharing systems. Our framework is built using Semantic Web technologies and uses game theory to adjust a system’s access decisions based on dynamic situations. We use query rewriting to implement this framework and optimize the system’s performance by carefully balancing efficiency and simplicity. In this paper we present this framework in detail, including experimental results that validate the feasibility of our approach.
Authored by Dae-Young Kim, Nujood Alodadi, Zhiyuan Chen, Karuna Joshi, Adina Crainiceanu, Don Needham
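The adaptive, multi-aspect access decision described above can be sketched in miniature. The aspect names, weights, and the threshold formula are invented stand-ins (MATS uses Semantic Web technologies, query rewriting, and game theory, none of which this stdlib toy reproduces): a weighted trust score over several aspects is compared to a threshold that rises as the risk of granting access grows relative to the benefit.

```python
def combined_trust(aspects, weights):
    """Weighted mix of trust aspects (credentials, behavior, organization)."""
    return sum(aspects[k] * weights[k] for k in aspects) / sum(weights.values())

def adaptive_threshold(benefit, risk):
    """Higher risk relative to benefit raises the bar for access."""
    return risk / (benefit + risk)

def decide(aspects, weights, benefit, risk):
    """Grant access when multi-aspect trust clears the situational threshold."""
    return combined_trust(aspects, weights) >= adaptive_threshold(benefit, risk)

user = {"credentials": 0.9, "behavior": 0.6, "organization": 0.8}
weights = {"credentials": 0.5, "behavior": 0.3, "organization": 0.2}

# routine query: benefit outweighs risk, so the threshold is low
print(decide(user, weights, benefit=8.0, risk=2.0))   # True
# sensitive dataset during an incident: risk dominates, access denied
print(decide(user, weights, benefit=2.0, risk=8.0))   # False
```

The same user with the same trust score gets different answers in the two situations, which is the "situation-aware" behavior a fixed-threshold scheme cannot provide.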