Operating systems comprise various components that produce artifacts. These artifacts are the outcome of a user's interaction with an application or program and of the operating system's logging capabilities, and they are therefore of great importance in digital forensics investigations. For example, they can be used in a court of law to prove the existence of compromising computer system behaviors. One such component of the Microsoft Windows operating system is the Shellbag, an enticing source of digital evidence of high forensic interest. The presence of a Shellbag entry means that a specific user has visited a particular folder and made customizations to it, such as accessing it, sorting its contents, or resizing its window. In this work, we forensically analyze Shellbags, discussing their purpose, types, and specifics under the latest version of the Windows 11 operating system, and we uncover the registry hives that contain Shellbag customization information. We also conduct in-depth forensic examinations of Shellbag entries using three tools of different types: open-source, freeware, and proprietary. Lastly, we compare the capabilities of the tools utilized in Shellbag forensic investigations.
Authored by Ashar Neyaz, Narasimha Shashidhar, Cihan Varol, Amar Rasheed
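As context for the abstract above, the following is a minimal sketch (not the authors' tooling) of how Shellbag keys can be enumerated from an exported UsrClass.dat hive with the python-registry library; the hive path and output format are illustrative assumptions.

```python
# Sketch: walk the BagMRU tree in a copied UsrClass.dat hive.
# Requires the python-registry package (pip install python-registry).
from Registry import Registry

HIVE = "UsrClass.dat"  # hypothetical exported copy of the user's hive
BAGMRU = "Local Settings\\Software\\Microsoft\\Windows\\Shell\\BagMRU"

def walk(key, depth=0):
    # Each BagMRU subkey corresponds to a folder the user interacted with;
    # the key's last-write timestamp hints at when the customization changed.
    print("  " * depth + f"{key.name()}  (last write: {key.timestamp()})")
    for sub in key.subkeys():
        walk(sub, depth + 1)

reg = Registry.Registry(HIVE)
walk(reg.open(BAGMRU))
```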
Today, billions of people around the world access the internet, and new technology is needed that can take preventive and defensive actions against constantly evolving malicious activities. A new generation of technology that monitors such activities and responds to them intelligently is the intrusion detection system (IDS) employing machine learning. Traditional techniques struggle to analyze network-generated data because of the nature, volume, and speed with which it is produced, and the evolution of advanced cyber threats makes it difficult for existing IDSs to perform up to the mark. In addition, managing such large volumes of fast-moving data strains the capabilities of computer hardware and software. The system architecture suggested in this study uses an SVM to train the model and feature selection based on the information gain ratio ranking approach to boost the overall system's efficiency and increase the attack detection rate. This work also addresses the issue of false alarms and attempts to reduce them. The proposed framework uses the UNSW-NB15 dataset; for analysis, both the UNSW-NB15 and NSL-KDD datasets are used. Along with the SVM, we also train models using Naive Bayes, ANN, RF, and other classifiers, and we compare their results. These trained models can further be combined into an ensemble approach to improve the performance of the IDS.
Authored by Manish Khodaskar, Darshan Medhane, Rajesh Ingle, Amar Buchade, Anuja Khodaskar
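A simplified sketch of the pipeline described above: rank features by an information-gain-style score, keep the top-k, then train an SVM. The CSV path, label column, and k are assumptions, and scikit-learn's mutual_info_classif stands in for the paper's gain-ratio ranking.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("UNSW_NB15.csv")            # hypothetical preprocessed dataset
X, y = df.drop(columns=["label"]), df["label"]

scores = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(scores)[::-1][:20]        # keep the 20 highest-ranked features
X_sel = X.iloc[:, top_k]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
print(classification_report(y_te, clf.predict(scaler.transform(X_te))))
```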
Modern software development frequently uses third-party packages, raising the concern of supply chain security attacks. Many attackers target popular package managers, such as npm, and their users with supply chain attacks; in 2021 there was a 650% year-over-year growth in attacks exploiting the open source software supply chain. Proactive approaches are needed to predict a package's vulnerability to high-risk supply chain attacks. The goal of this work is to help software developers and security specialists measure npm supply chain weak link signals to prevent future supply chain attacks, by empirically studying npm package metadata.
In this paper, we analyzed the metadata of 1.63 million JavaScript npm packages. We propose six signals of security weakness in a software supply chain, such as the presence of install scripts, maintainer accounts associated with an expired email domain, and inactive packages with inactive maintainers. One of our case studies identified 11 malicious packages from the install scripts signal. We also found 2,818 maintainer email addresses associated with expired domains, which would allow an attacker to hijack 8,494 packages by taking over the corresponding npm accounts. We obtained feedback on our weak link signals through a survey of 470 npm package developers. The majority of the developers supported three of our six proposed signals and indicated that they would want to be notified about weak link signals before using third-party packages. Additionally, we discuss eight new signals suggested by package developers.
Authored by Nusrat Zahan, Thomas Zimmermann, Patrice Godefroid, Brendan Murphy, Chandra Maddila, Laurie Williams
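A rough sketch of checking one of the proposed weak link signals, the presence of install scripts, via the public npm registry API. The package name is an example and error handling is omitted for brevity.

```python
import requests

INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def has_install_scripts(package: str) -> bool:
    # The npm registry serves package metadata as JSON at this endpoint.
    doc = requests.get(f"https://registry.npmjs.org/{package}", timeout=10).json()
    latest = doc["dist-tags"]["latest"]
    scripts = doc["versions"][latest].get("scripts", {})
    return bool(INSTALL_HOOKS & scripts.keys())

print(has_install_scripts("left-pad"))
```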
Blockchain has emerged as a leading technological innovation because of its indisputable safety and services in a distributed setup. Applications of blockchain are growing across varied fields such as financial transactions, supply chains, and the maintenance of land records. Supply chain management is a potential area that can benefit immensely from blockchain technology (BCT) combined with smart contracts, making supply chain operations more reliable, safer, and more trustworthy for all stakeholders. However, numerous challenges, such as scalability, coordination, and safety-related issues, are yet to be resolved. Multi-agent systems (MAS) offer a completely new dimension of scalability, cooperation, and coordination in distributed settings. A MAS consists of a collection of autonomous agents that perform specific tasks intelligently in a distributed environment. In this work, we develop a framework for implementing a multi-agent system for a large-scale product-manufacturing supply chain with blockchain technology, in which the agents communicate with each other to monitor and organize supply chain operations. This framework eliminates many of the weaknesses of existing supply chain management systems. The overall goal is to enhance the performance of SCM in terms of transparency, traceability, trustworthiness, and resilience by using MAS and BCT.
Authored by Satyananda Swain, Manas Patra
Today's Supply Chains (SC) are engulfed in risks arising mainly from uncertain, contradictory, and incomplete information. A decision-making process is required to detect threats, assess risks, and implement mitigation methods that address these issues. The Neutrosophic Data Analytic Hierarchy Process (NDAHP) allows for a more realistic reflection of real-world problems while taking into account all factors that lead to effective risk assessment in Multi-Criteria Decision-Making (MCDM). This paper presents an implementation of the NDAHP for MCDM aimed at identifying, ranking, prioritizing, and analyzing risks without relying on SC experts' opinions. To that end, we first select and analyze the 23 most relevant risk indicators that have a significant impact on the SC, considering three criteria: severity, occurrence, and detection. We then implement and showcase the NDAHP method on the selected risk indicators through an illustrative example. Finally, we discuss the usability and effectiveness of the suggested method for SCRM purposes.
Authored by Ahlem Meziani, Abdelhabib Bourouis, Mohamed Chebout
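For readers unfamiliar with the method's core, the following sketch shows the crisp AHP step underlying NDAHP: deriving criterion weights (severity, occurrence, detection) from a pairwise comparison matrix via the principal eigenvector. The judgment values are illustrative assumptions; the neutrosophic variant would replace crisp judgments with truth/indeterminacy/falsity triples before this step.

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (hypothetical values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
lam_max = np.max(np.real(eigvals))
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # normalized priority weights
CI = (lam_max - 3) / (3 - 1)         # consistency index for n = 3
print(dict(zip(["severity", "occurrence", "detection"], w.round(3))))
print("consistency index:", round(CI, 3))
```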
Under the new conditions of China's new infrastructure and digital transformation, large IT companies from countries such as the United States dominate the market for key information infrastructure components in important fields such as power and energy in China, making the risks to the key information infrastructure of China's power enterprises increasingly prominent. In the power Internet of Things environment, where everything is connected, backdoors and loopholes in basic software and hardware introduced through supply chain risks undermine the foundation of power cyber-security and information security defenses, making security risk management for power key information infrastructure urgent. This paper therefore studies the construction of a cyber-security management framework for key information infrastructure suitable for electric power enterprises and defines security risk assessment norms for each stage of connecting equipment to the network. The framework implements national cyber-security requirements, promotes controllable cyber-security risk assessment services for key information infrastructure, improves the security protection level of power grid information systems at the source, and advances the construction and improvement of the network and information security system of the power industry.
Authored by Guoying Zhang, Yongchao Xu, Yushuo Hou, Lu Cui, Qian Wang
Security, controls, and data privacy in Internet of Things (IoT) devices are not just present and future concerns for a technology projected to connect a multitude of devices; they are critical survival factors for IoT to thrive. As the quantity of communications increases, massive amounts of data are expected to be generated, posing a threat to both physical devices and data security. Small, low-powered devices are widespread in Internet of Things architectures. Because of their complexity, traditional encryption methods and algorithms are computationally expensive on such hardware, requiring numerous rounds to encrypt and decrypt and squandering the limited energy available on devices. A simpler cryptographic method, on the other hand, may compromise the intended confidentiality and integrity. This study examines two lightweight encryption algorithms for Android devices: AES and RSA. The traditional AES approach, however, uses preset encryption keys shared by the sender and receiver, so the key may be obtained quickly. In this paper, we present an improved AES approach that generates dynamic keys.
Authored by RV Chandrashekhar, J Visumathi, PeterSoosai Anandaraj
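A minimal sketch of the "dynamic key" idea, assuming a PBKDF2-based derivation (the paper's exact scheme is not specified): instead of a fixed shared AES key, a fresh key is derived per message from a shared secret and a random salt, using the pycryptodome library.

```python
from Crypto.Cipher import AES
from Crypto.Protocol.KDF import PBKDF2
from Crypto.Random import get_random_bytes

shared_secret = b"pre-shared secret"       # illustrative only
salt = get_random_bytes(16)                # sent alongside the ciphertext
key = PBKDF2(shared_secret, salt, dkLen=32, count=100_000)

cipher = AES.new(key, AES.MODE_GCM)        # GCM gives confidentiality + integrity
ciphertext, tag = cipher.encrypt_and_digest(b"sensor reading: 23.5C")
packet = salt + cipher.nonce + tag + ciphertext   # 16 + 16 + 16 bytes + ct

# Receiver re-derives the same key from the shared secret and received salt.
k2 = PBKDF2(shared_secret, packet[:16], dkLen=32, count=100_000)
d = AES.new(k2, AES.MODE_GCM, nonce=packet[16:32])
print(d.decrypt_and_verify(packet[48:], packet[32:48]))
```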
In the context of cybersecurity systems, trust is the firm belief that a system will behave as expected. Trustworthiness is the proven property of a system that is worthy of trust. Trust is therefore ephemeral, i.e., it can be broken; trustworthiness is perpetual, i.e., it is verified and cannot be broken. The gap between these two concepts is, alarmingly, often overlooked. In fact, the pressure to keep pace with operations in mission-critical cross domain solution (CDS) development has resulted in a status quo of high-risk, ad hoc solutions. Trustworthiness, proven through formal verification, should be an essential property of any hardware and/or software security system. We have shown, in "vCDS: A Virtualized Cross Domain Solution Architecture", that developing a formally verified CDS is possible. The virtual CDS (vCDS) additionally comes with security guarantees, i.e., confidentiality, integrity, and availability, through the use of a formally verified trusted computing base (TCB). For a system defined by an architecture description language (ADL) to be considered trustworthy, the implemented security configuration, i.e., the access control and data protection models, must be verified correct. In this paper we present the first and only security auditing tool that seeks to verify the security configuration of a CDS architecture defined through an ADL description. This tool helps mitigate the risk of existing solutions by ensuring proper security enforcement, and, when coupled with the agile nature of vCDS, it significantly increases the pace of system delivery.
Authored by Nathan Daughety, Marcus Pendleton, Rebeca Perez, Shouhuai Xu, John Franco
Modern consumer electronic devices often provide intelligence services with deep neural networks, and the computing locations of these services have begun migrating from cloud servers (traditional AI systems) to the devices themselves (on-device AI systems). On-device AI systems generally have the advantages of preserving privacy, removing network latency, and saving cloud costs. With the emergence of on-device AI systems having relatively low computing power, however, inconsistent and varying hardware resources and capabilities pose difficulties. The authors' affiliation has been applying a stream pipeline framework, NNStreamer, to on-device AI systems, saving development costs and hardware resources and improving performance. We want to expand the range of devices and applications offering on-device AI services, covering products of both the affiliation and second/third parties, and to make each AI service atomic, re-deployable, and shareable among connected devices of arbitrary vendors. This introduces a new requirement, "among-device AI": connectivity between AI pipelines so that they may share computing resources and hardware capabilities across a wide range of devices regardless of vendor or manufacturer. We propose extensions of the stream pipeline framework NNStreamer so that it may provide this among-device AI capability. This work is a Linux Foundation (LF AI & Data) open source project accepting contributions from the general public.
Authored by MyungJoo Ham, Sangjung Woo, Jaeyun Jung, Wook Song, Gichan Jang, Yongjoo Ahn, Hyoungjoo Ahn
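To make the pipeline concept concrete, here is a sketch of a single on-device NNStreamer pipeline launched through PyGObject; the element names follow NNStreamer's documentation (tensor_converter, tensor_filter, tensor_sink), while the model file and caps are hypothetical and NNStreamer must be installed for the elements to resolve.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Video frames are scaled, converted to tensors, and run through a TFLite model.
pipeline = Gst.parse_launch(
    "videotestsrc ! videoconvert ! videoscale ! "
    "video/x-raw,format=RGB,width=224,height=224 ! "
    "tensor_converter ! "
    "tensor_filter framework=tensorflow-lite model=mobilenet_v1.tflite ! "
    "tensor_sink"
)
pipeline.set_state(Gst.State.PLAYING)
```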
With the development of artificial intelligence, the need for data sharing is becoming more and more urgent, yet existing data sharing methods can no longer fully meet it: privacy breaches, lack of motivation, and mutual distrust have become obstacles to data sharing. We design a privacy-preserving, decentralized data sharing method based on blockchain smart contracts, named PPDS. To protect data privacy, we transform the data sharing problem into a model sharing problem: the data owner does not share the raw data directly, but rather an AI model trained on that data. The data requester and the data owner interact on the blockchain through a smart contract, and the data owner trains the model on local data according to the requester's requirements. To assess model quality fairly, we set up several model evaluators who judge the validity of the model through voting. After the model is verified, the data owner who trained it receives a reward through the smart contract. Sharing the model avoids direct exposure of the raw data, and the incentive motivates the data owner to share. We describe the design and workflow of PPDS and analyze its security using formal verification: we build a formal model of our approach with Coloured Petri Nets (CPN) and prove its security through simulation execution and model checking. Finally, we demonstrate the effectiveness of PPDS by developing a prototype together with a corresponding case application.
Authored by Xuesong Hai, Jing Liu
Graph-based Semi-Supervised Learning (GSSL) is a practical solution for learning from a limited amount of labelled data together with a vast amount of unlabelled data. However, because these algorithms rely on the known labels to infer the unknown ones, they are sensitive to data quality. It is therefore essential to study the potential threats related to the labelled data, more specifically label poisoning. In this paper, we propose a novel data poisoning method that efficiently approximates the result of label inference to identify the inputs which, if poisoned, would produce the highest number of incorrectly inferred labels. We extensively evaluate our approach on three classification problems under 24 different experimental settings each. Compared to the state of the art, our influence-driven attack produces error rates 50% higher on average, while being faster by multiple orders of magnitude. Moreover, our method can point engineers to inputs that deserve investigation (relabelling) before training the learning model. We show that relabelling one-third of the poisoned inputs (selected based on their influence) reduces the poisoning effect by 50%.
Authored by Adriano Franci, Maxime Cordy, Martin Gubri, Mike Papadakis, Yves Le Traon
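A toy sketch of label poisoning in graph-based semi-supervised learning: flip a few known labels and measure how many inferred labels change. The paper selects victims by estimated influence; here the selection is random purely for illustration.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X, y_true = make_moons(n_samples=300, noise=0.1, random_state=0)

labels = np.full(300, -1)                  # -1 marks unlabelled points
labelled = rng.choice(300, size=30, replace=False)
labels[labelled] = y_true[labelled]

clean = LabelSpreading().fit(X, labels).transduction_

poisoned = labels.copy()
victims = rng.choice(labelled, size=5, replace=False)
poisoned[victims] = 1 - poisoned[victims]  # flip binary labels
dirty = LabelSpreading().fit(X, poisoned).transduction_

print("inferred labels changed by poisoning:", int((clean != dirty).sum()))
```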
The data centers of cloud computing-based aerospace ground systems, and the businesses running on them, are extremely vulnerable to man-made disasters, emergencies, and other catastrophic events, posing a serious security threat. Cloud centers therefore need to provide effective disaster recovery services for software and data. However, the disaster recovery methods of current cloud centers for aerospace ground systems have long lagged behind, and their disaster tolerance and anti-destruction capabilities are weak. To address these problems, in this paper we design a disaster recovery service for aerospace ground systems based on cloud computing. Built around a software warehouse, this service adopts a primary-standby mode to provide backup, local disaster recovery, and remote disaster recovery for software and data. As a result, the service can respond to disasters in a timely manner, ensure the continuous running of businesses, and improve the disaster tolerance and anti-destruction capability of aerospace ground systems. Extensive simulation experiments validate the effectiveness of the proposed disaster recovery service.
Authored by Xiao Yu, Dong Wang, Xiaojuan Sun, Bingbing Zheng, Yankai Du
With the considerable success achieved by modern fuzzing infrastructures, more crashes are produced than ever before. Rapid and faithful crash triage for large numbers of crashes, to dig out their root causes, has therefore always been attractive; however, hindered by the practical difficulty of reducing analysis imprecision without compromising efficiency, this goal has not been accomplished. In this paper, we present an end-to-end crash triage solution, Default, for accurately and quickly pinpointing unique root causes among large numbers of crashes. In particular, we quantify the "crash relevance" of program entities based on mutual information, which serves as the criterion for unique crash bucketing and allows us to bucket massive numbers of crashes without pre-analyzing their root causes. The quantification of crash relevance is also used to shorten long crashing traces. On this basis, we use the interpretability of neural networks to precisely pinpoint the root cause in the shortened traces by evaluating each basic block's impact on the crash label. Evaluated on 20 programs with 22,216 crashes in total, Default demonstrates remarkable accuracy and performance, well beyond what state-of-the-art techniques can achieve: crash de-duplication is achieved at a processing speed of 0.017 seconds per crashing trace, without missing any unique bugs. Default then identifies the root causes of 43 unique crashes with no false negatives and an average false positive rate of 9.2%.
Authored by Xing Zhang, Jiongyi Chen, Chao Feng, Ruilin Li, Wenrui Diao, Kehuan Zhang, Jing Lei, Chaojing Tang
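A sketch of the "crash relevance" idea described above: score each basic block by the mutual information between its hit/miss vector across executions and the crash label. The coverage matrix here is synthetic; the real system derives it from execution traces.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
n_runs, n_blocks = 200, 10
coverage = rng.integers(0, 2, size=(n_runs, n_blocks))   # 1 = block executed
crashed = coverage[:, 3] & coverage[:, 7]                # crash requires blocks 3 and 7

# Mutual information between each block's coverage and the crash label.
relevance = [mutual_info_score(coverage[:, b], crashed) for b in range(n_blocks)]
ranking = np.argsort(relevance)[::-1]
print("blocks ranked by crash relevance:", ranking[:5])
```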
Port knocking provides an added layer of security on top of a network's existing security systems. A predefined knocking sequence is used to open ports that the firewall keeps closed by default; if the knocking sequence is correct, the server deems the request valid and opens the desired port. However, a static sequence poses a security threat. This paper presents port knock sequence-based communication protocols for the Software-Defined Network (SDN). SDN provides better management by separating the control plane from the data plane, but this separation introduces communication overhead between the switches and the controller. To avoid this overhead, we apply the port knocking concept in the data plane without any involvement of the SDN controller. This study proposes three port knock sequence-based protocols (static, partially dynamic, and dynamic) in the data plane. To test the protocols in an SDN environment, a P4 implementation of the underlying model is deployed on the BMV2 (behavioral model version 2) virtual switch. To check the security of the protocols, an informal security analysis is performed, which indicates that the proposed protocols are secure enough to be implemented in the SDN data plane.
Authored by Isha Pali, Ruhul Amin
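For orientation, a bare-bones user-space sketch of the static variant of port knocking; the paper's protocols live in the P4 data plane, and the ports and sequence here are arbitrary examples.

```python
import socket

KNOCK_SEQUENCE = [7000, 8000, 9000]

def await_knocks():
    progress = {}                              # client address -> next expected index
    socks = []
    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        s.settimeout(0.1)
        socks.append((port, s))
    while True:
        for port, s in socks:
            try:
                _, (addr, _) = s.recvfrom(1)   # any datagram counts as a knock
            except socket.timeout:
                continue
            i = progress.get(addr, 0)
            if KNOCK_SEQUENCE[i] == port:
                progress[addr] = i + 1
                if progress[addr] == len(KNOCK_SEQUENCE):
                    print(f"valid sequence from {addr}: open service port")
                    progress[addr] = 0
            else:
                progress[addr] = 0             # wrong order resets progress
```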
An Intrusion Detection System (IDS) is an application that detects intrusions in a network. An IDS aims to detect malicious activities and protect computer networks from unknown persons or users, called attackers. Network security is a significant task in providing secure data transfer, and network virtualization becomes more complex with IoT technology. Deep Learning (DL) is widely used to detect complex patterns in network traffic and is a well-suited approach for detecting malicious nodes and attacks. Software-Defined Networking (SDN) is the de facto approach to network virtualization. Because attackers continually develop new techniques to attack networks, new protocols are required to prevent these attacks. In this paper, a unique deep intrusion detection approach (UDIDA) is developed to detect attacks in SDN. The evaluation shows that the proposed approach achieves higher accuracy than existing approaches.
Authored by Vamsi Krishna, Venkata Matta
Python continues to be one of the most popular programming languages and is used in many safety-critical fields such as medical treatment, autonomous driving systems, and data science. These fields place high security requirements on Python ecosystems. However, existing studies of machine learning systems in Python concentrate on data security, model security, and model privacy, and simply assume that the underlying Python virtual machines (PVMs) are secure and trustworthy; whether that assumption really holds has remained unknown. This paper presents, to the best of our knowledge, the first and most comprehensive empirical study of the security of CPython, the official and most widely deployed Python virtual machine. To this end, we designed and implemented a software prototype dubbed PVMSCAN and used it to scan the source code of the latest CPython (version 3.10) and 10 earlier versions (3.0 to 3.9), comprising 3,838,606 lines of source code in total. The empirical results yield findings and insights into the security of Python virtual machines: 1) CPython virtual machines are still vulnerable; for example, PVMSCAN detected 239 vulnerabilities in version 3.10, including 55 null dereferences, 86 uninitialized variables, and 98 dead stores; 2) Python/C API-related vulnerabilities are very common and have become one of the most severe threats to the security of PVMs, with 70 such vulnerabilities identified in CPython 3.10; 3) the overall quality of the code remained stable during the evolution of Python VMs, with vulnerabilities per thousand lines (VPTL) at 0.50; and 4) automatic vulnerability rectification is effective: 166 out of 239 (69.46%) vulnerabilities can be rectified by a simple yet effective syntax-directed heuristic. We have reported our results to the developers of CPython, who have acknowledged them and already confirmed and fixed 2 bugs (as of this writing), with others still under analysis. This study not only demonstrates the effectiveness of our approach but also highlights the need to improve the reliability of infrastructure like Python virtual machines by leveraging state-of-the-art security techniques and tools.
Authored by Xinrong Lin, Baojian Hua, Qiliang Fan
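The paper's code-quality metric, vulnerabilities per thousand lines (VPTL), and its rectification rate reduce to simple arithmetic; the per-version line count below is hypothetical and chosen only to reproduce the reported 0.50.

```python
def vptl(vulns: int, loc: int) -> float:
    # Vulnerabilities per thousand lines of code.
    return vulns / (loc / 1000)

# Illustrative only: 239 vulnerabilities over a hypothetical 478,000-line
# version would give the reported VPTL of 0.50.
print(round(vptl(239, 478_000), 2))   # 0.5
print(f"rectified: {166 / 239:.2%}")  # 69.46%
```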
The purpose of this article is to consider one option for automating the collection of information from open sources when conducting penetration testing as part of an organization's information security audit, using the capabilities of the Python programming language. Possible primary vectors for collecting information about the organization, its personnel, software, and hardware are shown. The basic principles of operation of the software product are presented visually, enabling automated analysis of open-source information about the object under study.
Authored by Anton Bryushinin, Alexandr Dushkin, Maxim Melshiyan
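A small sketch in the spirit of the automation described above: gather basic open-source data points (IP address, HTTP headers) for a target domain. The domain is a placeholder, and real engagements require authorization.

```python
import socket
import requests

def basic_recon(domain: str) -> dict:
    info = {"domain": domain}
    info["ip"] = socket.gethostbyname(domain)          # DNS A record lookup
    resp = requests.get(f"https://{domain}", timeout=10)
    info["server"] = resp.headers.get("Server")        # advertised web server
    info["status"] = resp.status_code
    return info

print(basic_recon("example.com"))
```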
Successful information and communication technology (ICT) can propel administrative procedures forward quickly. To achieve efficient use of ICT in their businesses, organizations should examine their ICT strategies and plans to ensure they align with the organization's visions and missions. Efficient software and hardware work together to provide relevant data that helps improve how we do business, learn, communicate, entertain, and work, but this also exposes organizations to a risky environment prone to both internal and external threats. The term "security" refers to a level of protection or resistance to damage; security can also be thought of as a barrier between assets and threats. Key terms must be understood in order to gain a comprehensive understanding of security. This research paper discusses key terms, concerns, and challenges related to information systems and security auditing. Exploratory research is utilised in this study to find explanations for observed occurrences, problems, and behaviours. The study's findings include a list of security risks that must be seriously addressed in any information system and security audit.
Authored by Saloni, Dilpreet Arora
This article discusses a threat and vulnerability analysis model that allows one to fully analyze an organization's information security requirements and document the results of the analysis. This method makes it possible to avoid unnecessary costs for security measures arising from subjective risk assessment, to plan and implement protection at all stages of the information systems lifecycle, to minimize the time an information security specialist spends on information system risk assessment procedures by automating the process, and to reduce both the error rate and the level of expertise required of information security specialists. In the initial sections, common risk analysis methods and risk assessment software are analyzed, conclusions are drawn from a comparative analysis, and calculations are carried out in accordance with the proposed model.
Authored by Zhanna Alimzhanova, Akzer Tleubergen, Salamat Zhunusbayeva, Dauren Nazarbayev
Source code security auditing is an effective technique for dealing with security vulnerabilities and software bugs. As a white-box testing approach, it can effectively help developers eliminate defects in the code; however, it suffers from performance issues. In this paper, we propose an incremental checking mechanism that enables fast source code security audits, and we conduct comprehensive experiments to verify the effectiveness of our approach.
Authored by Xiuli Li, Guoshi Wang, Chuping Wang, Yanyan Qin, Ning Wang
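A sketch of the incremental-checking idea: hash each source file and re-audit only files whose digest changed since the last run. The cache file name, file glob, and run_checker callback are assumptions standing in for the actual analysis.

```python
import hashlib
import json
import pathlib

CACHE = pathlib.Path(".audit_cache.json")

def digest(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_audit(root: str, run_checker) -> None:
    # Load the digests recorded by the previous audit run, if any.
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    for f in pathlib.Path(root).rglob("*.c"):
        h = digest(f)
        if cache.get(str(f)) != h:     # new or modified since the last audit
            run_checker(f)
            cache[str(f)] = h
    CACHE.write_text(json.dumps(cache))
```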
We present a system for the interactive examination of learned security policies. It allows a user to traverse episodes of Markov decision processes in a controlled manner and to track the actions triggered by security policies. As with a software debugger, a user can continue or halt an episode at any time step and inspect parameters and probability distributions of interest. The system provides insight into the structure of a given policy and into its behavior in edge cases. We demonstrate the system with a network intrusion use case, examining the evolution of an IT infrastructure's state and the actions prescribed by security policies while an attack occurs. The policies for the demonstration were obtained through a reinforcement learning approach that includes a simulation system, where policies are incrementally learned, and an emulation system, which produces statistics that drive the simulation runs.
Authored by Kim Hammar, Rolf Stadler
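A stripped-down sketch of the debugger-style interaction: step through an episode of a toy MDP, printing the policy's action distribution at each step and pausing for user input. The MDP, policy, and transition rule are placeholders, not the authors' simulation or emulation system.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2
policy = rng.dirichlet(np.ones(N_ACTIONS), size=N_STATES)  # stochastic policy

state = 0
for t in range(20):
    dist = policy[state]
    print(f"t={t} state={state} action distribution={dist.round(3)}")
    if input("[Enter]=step, q=quit > ").strip() == "q":     # debugger-style pause
        break
    action = rng.choice(N_ACTIONS, p=dist)
    state = (state + action + 1) % N_STATES                 # toy transition dynamics
```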
We design a new generation of smart electric energy meter components, build a smart power network, implement power meter safety protection, and complete network security protection for smart power meters. The new generation of smart electric energy meters mainly performs legal metering, secure fee control, communication, control, computation, and monitoring. The smart power utilization network consists of the master station server, front-end processor, cryptographic machine, and master station, which together form a master station management system. Through data collection and analysis, intelligent energy dispatching is established, effective energy-saving policy algorithms and strategies are provided, and smart electricity use management is realized. The safety protection architecture of the electric energy meter is designed from the aspects of device security, full-scenario application security, and security management. Device security protection consists of hardware security protection and software security protection. The full-scenario application security protection system includes four parts: boundary security, data security, password security, and security monitoring. Security management mainly provides application security management strategies and strategies for dividing security responsibilities. The construction of the smart electric energy meter network system lays the foundation for network security protection.
Authored by Baofeng Li, Feng Zhai, Yilun Fu, Bin Xu
Open Source Software plays an important role in many software ecosystems. Whether in operating systems, network stacks, or as low-level system drivers, software we encounter daily is permeated with code contributions from open source projects. Decentralized development and open collaboration in open source projects introduce unique challenges: code submissions from unknown entities, limited personpower for commit or dependency reviews, and bringing new contributors up-to-date in projects' best practices & processes. In 27 in-depth, semi-structured interviews with owners, maintainers, and contributors from a diverse set of open source projects, we investigate their security and trust practices. For this, we explore projects' behind-the-scene processes, provided guidance & policies, as well as incident handling & encountered challenges. We find that our participants' projects are highly diverse both in deployed security measures and trust processes, as well as their underlying motivations. Based on our findings, we discuss implications for the open source software ecosystem and how the research community can better support open source projects in trust and security considerations. Overall, we argue for supporting open source projects in ways that consider their individual strengths and limitations, especially in the case of smaller projects with low contributor numbers and limited access to resources.
Authored by Dominik Wermke, Noah Wöhler, Jan Klemmer, Marcel Fourné, Yasemin Acar, Sascha Fahl
The Personnel Management Information System (PMIS) is managed by the Personnel and Human Resources Development Agency at a local government office to provide personnel services. Systems and information technology can support ongoing business processes, but they also carry risk if proper mitigation is not carried out. Known problems include damage to databases, servers, and computer equipment due to bad weather, network connections lost due to power outages, data loss due to missing backups, and human error. These problems made PMIS inaccessible for some time, hampering ongoing business processes and causing financial losses. This study aims to identify risks, conduct a risk assessment using the failure mode and effects analysis (FMEA) method, and provide mitigation recommendations based on the ISO/IEC 27002:2013 standard. The analysis yielded 50 failure modes categorized into five asset categories, six of which have a high risk level. Mitigation recommendations based on ISO/IEC 27002:2013 were then adapted to the needs of the Human Resources Development Agency. The results of this study are expected to assist the local government office in making improvements and applying security controls to avoid emerging threats to its information assets.
Authored by Gunawan Nur, Rahmi Lusi, Fitroh Fitroh
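A sketch of the FMEA scoring step used in the study above: the risk priority number (RPN) is the product of severity, occurrence, and detection ratings, and failure modes above a threshold are flagged for mitigation. The ratings and threshold below are illustrative, not the study's values.

```python
failure_modes = [
    {"asset": "database", "mode": "loss from missing backups", "S": 9, "O": 4, "D": 6},
    {"asset": "server",   "mode": "outage from power failure", "S": 7, "O": 5, "D": 3},
    {"asset": "staff",    "mode": "human error in data entry", "S": 4, "O": 6, "D": 4},
]

THRESHOLD = 120   # hypothetical cut-off for "high" risk
for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]   # severity x occurrence x detection
    level = "HIGH" if fm["RPN"] >= THRESHOLD else "moderate"
    print(f'{fm["asset"]}: {fm["mode"]} -> RPN={fm["RPN"]} ({level})')
```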
This paper presents a Ph.D. research plan that focuses on solving the existing problems in risk management of critical infrastructures, by means of a novel DevSecOps-enabled framework. Critical infrastructures are complex physical and cyber-based systems that form the lifeline of a modern society, and their reliable and secure operation is of paramount importance to national security and economic vitality. Therefore, this paper proposes DevSecOps technology for managing risk throughout the entire development life cycle of such systems.
Authored by Xhesika Ramaj