AI systems face potential hardware security threats. Existing AI systems generally use a heterogeneous CPU + intelligent accelerator architecture, with a PCIe bus for communication between them. Security mechanisms are implemented on CPUs based on a hardware security isolation architecture, but the conventional hardware security isolation architecture does not cover intelligent accelerators on the PCIe bus. From the perspective of hardware security, data offloaded to the intelligent accelerator therefore faces great security risks. In order to effectively integrate the intelligent accelerator into the CPU's security mechanism, a novel hardware security isolation architecture is presented in this paper. The PCIe protocol is extended to be security-aware by adding security information packaging and unpacking logic in the PCIe controller. The hardware resources on the intelligent accelerator are isolated at a fine granularity, and resources classified into the secure world can only be controlled and used by software in the CPU's trusted execution environment. Based on this hardware security isolation architecture, a security-isolated spiking convolutional neural network accelerator is designed and implemented. The experimental results demonstrate that the proposed security isolation architecture imposes no overhead on the bandwidth and latency of the PCIe controller and does not affect the performance of the entire hardware computing process, from CPU data offloading through intelligent accelerator computing to data returning to the CPU. With low hardware overhead, this security isolation architecture achieves effective isolation and protection of input data, models, and output data, and effectively integrates the hardware resources of the intelligent accelerator into the CPU's security isolation mechanism.
Authored by Rui Gong, Lei Wang, Wei Shi, Wei Liu, JianFeng Zhang
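The security-aware PCIe extension described above can be illustrated with a toy sketch. The bit position, header layout, and field names below are illustrative assumptions for exposition only; they are not the authors' actual encoding, nor anything defined by the PCIe specification.

```python
# Hypothetical sketch: tagging a 32-bit PCIe-style header word with a
# one-bit "security world" flag, in the spirit of the paper's security-aware
# protocol extension. The bit position chosen here is an assumption.

SECURE_BIT = 1 << 31  # assume one reserved header bit carries the world tag

def pack_security_info(header: int, secure_world: bool) -> int:
    """Set or clear the (assumed) secure-world bit in a 32-bit header word."""
    return (header | SECURE_BIT) if secure_world else (header & ~SECURE_BIT)

def unpack_security_info(header: int) -> tuple:
    """Recover the original header and the secure-world flag."""
    return header & ~SECURE_BIT, bool(header & SECURE_BIT)

hdr = pack_security_info(0x004000FF, secure_world=True)
base, secure = unpack_security_info(hdr)
print(hex(base), secure)  # 0x4000ff True
```

In the paper's design, logic of this kind sits in the PCIe controller itself, so tagging and checking add no protocol round trips, which is consistent with the reported zero bandwidth and latency overhead.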
As artificial intelligence models continue to grow in capacity and sophistication, they are often trusted with very sensitive information. In the sub-field of adversarial machine learning, developments are geared towards finding reliable methods to systematically erode the ability of artificial intelligence systems to perform as intended. These techniques can cause serious breaches of security, interruptions to major systems, and irreversible damage to consumers. Our research evaluates the effects of various white-box adversarial machine learning attacks on popular computer vision deep learning models, leveraging a public X-ray dataset from the National Institutes of Health (NIH). We use several experiments to gauge the feasibility of developing deep learning models that are robust to adversarial machine learning attacks by taking into account different defense strategies, such as adversarial training, and observing how adversarial attacks evolve over time. Our research details how a variety of white-box attacks affect different components of InceptionNet, DenseNet, and ResNeXt and suggests how the models can effectively defend against these attacks.
Authored by Ilyas Bankole-Hameed, Arav Parikh, Josh Harguess
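A minimal sketch of one canonical white-box attack of the kind this study evaluates, the Fast Gradient Sign Method (FGSM). For clarity it perturbs a tiny logistic-regression "model" rather than InceptionNet, DenseNet, or ResNeXt; the weights and inputs are illustrative values, not data from the NIH X-ray dataset.

```python
import math

# FGSM sketch: x_adv = x + eps * sign(dL/dx). For logistic regression with
# binary cross-entropy loss, the input gradient is dL/dx = (p - y) * w.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb x in the sign of the loss gradient to raise the loss."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.5], 0.1
x, y = [0.8, 0.3], 1                 # clean input, true label 1
x_adv = fgsm(w, b, x, y, eps=0.5)
print(predict(w, b, x) > 0.5, predict(w, b, x_adv) > 0.5)  # True False
```

Adversarial training, one of the defenses the study considers, amounts to generating such perturbed inputs during training and including them, with their correct labels, in the training set.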
The world has seen a quick transition from hard devices for local storage to massive virtual data centers, all made possible by cloud storage technology. Businesses have grown to be scalable, meeting consumer demands at every turn. Cloud computing has transformed the way we do business, making IT more efficient and cost-effective, but it also leads to new types of cybercrimes. Securing data in the cloud is a challenging task. Cloud security is a mixture of art and science: art, in creating techniques and technologies that ensure the user is properly authenticated, and science, in devising ways of securing the application. Data security refers to a broad set of policies, technologies, and controls deployed to protect data, applications, and the associated infrastructure of cloud computing; it ensures that data is not accessed by any unauthorized person. Cloud storage systems can be considered a network of distributed data centers that typically use cloud computing technologies such as virtualization and offer some kind of interface for storing data. Virtualization is the process of grouping physical storage from multiple network storage devices so that it appears as a single storage device. Storing important data in the cloud has become an essential topic in computing. The cloud enables the user to store data efficiently and access it securely, avoiding the basic expenditure on hardware, software, and maintenance. Protecting cloud data has become one of the most burdensome tasks in today's environment. Our proposed scheme, "Certificateless Compressed Data Sharing in Cloud through Partial Decryption" (CCDSPD), makes use of a Shared Secret Session (3S) key for encryption and a double decryption process to secure information in the cloud. It does not use the pairing concept to solve the key escrow problem.
Our scheme provides an efficient, secure way of sharing data in the cloud and reduces time consumption by nearly 50 percent in the encryption and decryption process compared to the existing mCL-PKE scheme. A Distributed Cloud Environment (DCE) has the ability to store data and share it with others. One of the main issues that arises is how safe the data is in the cloud while being stored and shared; the communication medium should therefore be safe from any intruders residing between the two entities. What if the key generator colludes with intruders and shares the keys used for both communication and data? The proposed system therefore makes use of the Station-to-Station (STS) protocol to make the channel safer, and the concept of encrypting the secret key confuses intruders. A Duplicate File Detector (DFD) checks for the existence of the same file before uploading. The scheduler assigns the work of generating keys to the key manager that has the fewest tasks to complete or is free of any task. By these techniques, the proposed system is time-efficient, cost-efficient, and resource-efficient compared to the existing system; performance is analysed in terms of time, cost, and resources. It is necessary to safeguard the communication channel between the entities before sharing the data. In this sharing process, what if a key manager colludes with intruders and reveals the user's encryption key? Securing the key using the user's phrase is the key concept of the proposed system, "Secure Storing and Sharing of Data in Cloud Environment using User Phrase" (S3DCE). It does not rely on any key manager to generate the key; instead, the user generates the key. To provide double security, the encryption key is also encrypted with a public key derived from the user's phrase. S3DCE guarantees the privacy, confidentiality, and integrity of user data during storing and sharing.
The proposed method S3DCE is more efficient in terms of time, cost, and resource utilization than the existing algorithms DaSCE (Data Security for Cloud Environment with Semi-Trusted Third Party) and DACESM (Data Security for Cloud Environment with Scheduled Key Managers). For a cloud to be secure, all of the participating entities must be secure. The security of the assets does not depend solely on an individual's security measures; neighbouring entities may provide an opportunity for an attacker to bypass the user's defences, and data may be compromised by attacks from other users and nodes within the cloud. High security measures are therefore required to protect data within the cloud. CloudSim allows the creation of a network containing a set of Intelligent Sense Points (ISPs) spread across an area, each with its own unique position, distinct from the other ISPs. The cloud is a cost-efficient solution for the distribution of data but faces the challenge of data breaches, since data can be compromised by attacks on ISPs. Therefore, in OSNQSC (Optimized Selection of Nodes for Enhanced in Cloud Environment), an optimized method is proposed to find the best ISPs in which to place data fragments, considering channel quality, distance, and the remaining energy of the ISPs. The fragments are encrypted before storing. OSNQSC is more efficient in terms of total upload time, total download time, throughput, storage, and memory consumption of the node than the existing betweenness-centrality, eccentricity, and closeness-centrality methods of DROPS (Division and Replication of Data in the Cloud for Optimal Performance and Security).
Authored by Jeevitha K, Thriveni J
Today, Distribution System Operators (DSOs) face numerous challenges, such as the growth of decentralized power generation, increasing unconventional demands, and active network management for peak-load and congestion management. Moreover, DSOs face accelerated asset ageing while confronted with tight budgets and strong ROI business-case justification. The Digital Transformer Twin is the digital representation of real physical assets and enables operators to evaluate transformer asset condition by leveraging software capabilities, AI insights from large datasets, and academic research results in order to turn data into reality. Trusted and consistent results over the entire transformer life span thus also require a faithful Digital Transformer Twin over the entire physical transformer life cycle, from inception to retirement.
Authored by B. Fischer, K. Viereck, C. Hofmeister
Sustainability within the built environment is increasingly important to our global community, as it minimizes environmental impact whilst providing economic and social benefits. Governments recognize the importance of sustainability by providing economic incentives, and tenants, particularly large enterprises, seek facilities that align with their corporate social responsibility objectives. Claiming sustainability outcomes clearly has benefits for facility owners and occupants that have sustainability as part of their business objectives, but there are also incentives to overstate the value delivered or to measure only the parts of the facility lifecycle that provide attractive results. Whilst there is a plethora of research on Building Information Management (BIM) systems within the construction industry, there has been limited research on BIM in the facilities management and sustainability fields. The significant contribution of this research is the integration of blockchain for the purposes of transaction assurance, with the development of a working model spanning BIM and blockchain underpinning phase one of this research. From an industry perspective, the contribution of this paper is to articulate a path to integrating a wide range of mature and emerging technologies into solutions that deliver trusted results for government, facility owners, tenants, and other directly impacted stakeholders to assess sustainability impact.
Authored by Luke Desmomd, Mohamed Salama
Happiness Is Homemade is a safe and trusted platform that addresses the lack of recreational opportunities faced by older adults. Our website helps not only elders but also volunteers of younger age groups connect with people of similar likes and interests, enlarging their social circle and offering means of recreation other than mobile phones and television. The platform aims to resolve the lack of leisure-time activities, which may lead to problems in the physical and mental health, social life, and living environment of older adults. Registered volunteers organize specific activities for senior citizens. Elders who are interested in embarking on new experiences or continuing to pursue their hobbies and interests can register for a specific curated event; the event details, time, place, and the details of the volunteer(s) organizing the event are listed. Activities include excursions to specific locations, temple visits, retro nights, and yoga and meditation events, among others. The platform also lets seniors organize courses (classes) in their areas of expertise, accompanied by interested volunteers. Classes can be conducted online or offline at senior citizens' homes and can cover any subject, including cooking, finance, gardening, and home economics. With the help of this platform, not only is the problem of leisure-time activities resolved, but elder citizens can also earn some income.
Authored by Vaishnavi Kothari, Anupama Menon, Itisha Mathane, Shivangi Kumar, Ashhvini Gaikwad
In Industry 4.0, the digital twin has been widely used in industrial activities. However, the data-driven industry is placing higher demands on digital twins, especially for the secure sharing and management of data throughout the lifecycle. As a distributed ledger technology, blockchain is well suited to address these challenges. Unfortunately, current blockchain-based digital twin lifecycle management does not focus on data processing after the retirement stage. In this paper, we propose BDTwins, a blockchain-based digital twin lifecycle management framework built on our proposed 7D model. In this framework, we make innovative use of Non-Fungible Tokens (NFTs) to process the data in the recovery stage of the digital twin. This method resolves digital intellectual-property disputes and preserves digital twin knowledge completely and stably after the destruction of the physical entity. In addition, BDTwins provides a fine-grained hierarchical access control policy to enable secure data sharing among stakeholders. It also resolves the performance bottleneck of traditional single-chain blockchain architectures by utilizing a directed acyclic graph (DAG) blockchain and off-chain distributed storage. Finally, we implement a general blockchain-based digital twin case using smart contract technology to demonstrate our proposed digital twin lifecycle management framework.
Authored by Xianxian Cao, Xiaoling Li, Yinhao Xiao, Yumin Yao, Shuang Tan, Ping Wang
This study aims to examine the effect of Islamic financial literacy on Islamic financial inclusion through the mediation of digital finance and social capital. Proportionate stratified random sampling was used to select 385 samples from each of Banda Aceh City's 9 sub-districts. Afterward, the questionnaire data were analyzed using Structural Equation Modeling (SEM) in accordance with scientific standards. This study found two important things. First, Islamic financial literacy, digital finance, and social capital boost Banda Aceh's Islamic financial inclusion. Second, digital finance and social capital can mediate the effects of Islamic financial literacy on Banda Aceh's Islamic financial inclusion. This study emphasizes the need for a holistic approach, combining education, technology, and community trust to promote Islamic financial inclusion. Policymakers, educators, institutions, and community leaders can leverage these insights to contribute to a more inclusive Islamic finance ecosystem.
Authored by Putri Marla, Shabri Majid, Said Musnadi, Maulidar Agustina, Faisal Faisal, Ridwan Nurdin
The backend of the processor executes the μops decoded from the frontend out of order, while the retirement is responsible for retiring completed μops in the Reorder Buffer in order. Consequently, the retirement may stall differently depending on the execution time of the first instruction in the Reorder Buffer. Moreover, since retirement is shared between two logical cores on the same physical core, an attacker can deduce the instructions executed on the other logical core by observing the availability of its own retirement. Based on this finding, we introduce two novel covert channels: the Different Instructions covert channel and the Same Instructions covert channel, which can transmit information across logical cores and possess the ability to bypass the existing protection strategies. Furthermore, this paper explores additional applications of retirement. On the one hand, we propose a new variant of Spectre v1 by applying the retirement to the Spectre attack using the principle that the fallback penalty of misprediction is related to the instructions speculated to be executed. On the other hand, based on the principle that different programs result in varied usage patterns of retirement, we propose an attack method that leverages the retirement to infer the program run by the victim. Finally, we discuss possible mitigations against new covert channels.
Authored by Ke Xu, Ming Tang, Quancheng Wang, Han Wang
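The core observation behind the covert channels above can be illustrated with a deterministic toy model: because retirement is shared between sibling logical cores, the time the sender's head μop occupies the Reorder Buffer is visible to the receiver. The latencies, thresholds, and cycle model below are illustrative assumptions for exposition, not real microarchitectural parameters or the authors' actual channel.

```python
# Toy, cycle-level model: the receiver's instructions can only retire in
# cycles when the sender's unfinished head instruction is not stalling the
# shared retirement resource, so the sender's latency leaks to the receiver.

def retire_cycles(sender_latency, receiver_insts=8):
    """Cycles the receiver needs to retire its instructions while the
    sender's head instruction blocks retirement for sender_latency cycles."""
    cycles, remaining, stall = 0, receiver_insts, sender_latency
    while remaining > 0:
        cycles += 1
        if stall > 0:
            stall -= 1       # sender's head instruction holds the shared port
        else:
            remaining -= 1   # receiver retires one instruction this cycle
    return cycles

# Sender encodes bit 1 with a long-latency head instruction, bit 0 with a short one.
LONG, SHORT, THRESHOLD = 12, 1, 12

def decode(observed_cycles):
    return 1 if observed_cycles > THRESHOLD else 0

bits = [1, 0, 1, 1, 0]
observed = [retire_cycles(LONG if b else SHORT) for b in bits]
print([decode(c) for c in observed])  # recovers [1, 0, 1, 1, 0]
```

In the real attack the receiver measures the availability of its own retirement with high-resolution timers rather than in a simulator, but the encoding and thresholding logic is the same in spirit.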
This study explores how AI-driven personal finance advisors can significantly improve individual financial well-being. It addresses the complexity of modern finance, emphasizing the integration of AI for informed decision-making. The research covers challenges like budgeting, investment planning, debt management, and retirement preparation. It highlights AI's capabilities in data-driven analysis, predictive modeling, and personalized recommendations, particularly in risk assessment, portfolio optimization, and real-time market monitoring. The paper also addresses ethical and privacy concerns, proposing a transparent deployment framework. User acceptance and trust-building are crucial for widespread adoption. A case study demonstrates enhanced financial literacy, returns, and overall well-being with AI-powered advisors, underscoring their potential to revolutionize financial wellness. The study emphasizes responsible implementation and trust-building for ethical and effective AI deployment in personal finance.
Authored by Parth Pangavhane, Shivam Kolse, Parimal Avhad, Tushar Gadekar, N. Darwante, S. Chaudhari
Digitization expansion enables business transactions operating in distributed systems encompassing Internet- and Machine-to-Everything (M2X) economies. The growth of distributed collaboration systems comes at the cost of rapidly rising numbers of machines, infrastructure, and machine-infrastructure traffic, and consequently a significant increase in associated carbon emissions. In order to investigate M2X's carbon footprint, we design an impact-index application layer using blockchain smart contracts to empower sustainable management of distributed collaboration systems. The impact measurement methodology, based on transparent liquid data, secures trusted inter-organizational collaborations and supports traceable standardization of sustainability regulation.
Authored by Olena Chornovol, Alex Norta
Processor design and manufacturing is often done globally, involving multiple companies, some of which can be untrustworthy. This lack of trust leads to the threat of malicious modifications like Hardware Trojans. Hardware Trojans can cause drastic consequences and even endanger human lives. Hence, effective countermeasures against Hardware Trojans are urgently needed. To develop countermeasures, Hardware Trojans and their properties have to be understood well. For this reason, we describe and characterize Hardware Trojans in detail in this paper. We perform a theoretical analysis of Hardware Trojans for processors. Afterwards, we present a new classification of processor constituents, which can be used to derive several triggers and payloads, and compare them with previously published Hardware Trojans. This shows in detail possible attack vectors for processors and gaps in the existing processor Hardware Trojan landscape. No previous work presents such a detailed investigation of Hardware Trojans for processors. With this work, we intend to improve the understanding of Hardware Trojans in processors, supporting the development of new countermeasures and prevention techniques.
Authored by Czea Chuah, Alexander Hepp, Christian Appold, Tim Leinmueller
Human-Centered Artificial Intelligence (AI) focuses on AI systems prioritizing user empowerment and ethical considerations. We explore the importance of user-centric design principles and ethical guidelines in creating AI technologies that enhance user experiences and align with human values. It emphasizes user empowerment through personalized experiences and explainable AI, fostering trust and user agency. Ethical considerations, including fairness, transparency, accountability, and privacy protection, are addressed to ensure AI systems respect human rights and avoid biases. Effective human-AI collaboration is emphasized, promoting shared decision-making and user control. By involving interdisciplinary collaboration, this research contributes to advancing human-centered AI, providing practical recommendations for designing AI systems that enhance user experiences, promote user empowerment, and adhere to ethical standards. It emphasizes the harmonious coexistence between humans and AI, enhancing well-being and autonomy and creating a future where AI technologies benefit humanity. Overall, this research highlights the significance of human-centered AI in creating a positive impact. By centering on users' needs and values, AI systems can be designed to empower individuals and enhance their experiences. Ethical considerations are crucial to ensure fairness and transparency. With effective collaboration between humans and AI, we can harness the potential of AI to create a future that aligns with human aspirations and promotes societal well-being.
Authored by Usman Usmani, Ari Happonen, Junzo Watada
Boolean network is a popular and well-established modelling framework for gene regulatory networks. The steady-state behaviour of Boolean networks can be described as attractors, which are hypothesised to characterise cellular phenotypes. In this work, we study the target control problem of Boolean networks, which has important applications for cellular reprogramming. More specifically, we want to reduce the total number of attractors of a Boolean network to a single target attractor. Different from existing approaches to solving control problems of Boolean networks with node perturbations, we aim to develop an approach utilising edgetic perturbations. Namely, our objective is to modify the update functions of a Boolean network such that there remains only one attractor. The design of our approach is inspired by Thomas’ first rule, and we primarily focus on the removal of cycles in the interaction graph of a Boolean network. We further use results in the literature to only remove positive cycles which are responsible for the appearance of multiple attractors. We apply our solution to a number of real-life biological networks modelled as Boolean networks, and the experimental results demonstrate its efficacy and efficiency.
Authored by Olivier Zeyen, Jun Pang
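The attractor-reduction idea above can be sketched concretely. The code enumerates the attractors of a small Boolean network under the synchronous update scheme and shows how an edgetic perturbation, removing one interaction (here by rewriting an update function), can leave a single attractor. The two-node network is an illustrative toy, not one of the paper's biological case studies, and the paper's actual method targets positive cycles rather than arbitrary edges.

```python
from itertools import product

# Enumerate attractors of a synchronous Boolean network by following each
# state until it revisits a previously seen state (the limit cycle).

def attractors(fs, n):
    step = lambda s: tuple(f(s) for f in fs)
    found = set()
    for state in product((0, 1), repeat=n):
        seen = []
        while state not in seen:
            seen.append(state)
            state = step(state)
        cycle = seen[seen.index(state):]   # the limit cycle this state reaches
        found.add(frozenset(cycle))
    return found

# Original network: x1' = x2, x2' = x1 (a positive cycle -> multiple attractors:
# the fixed points {00} and {11}, plus the cyclic attractor {01, 10}).
orig = [lambda s: s[1], lambda s: s[0]]

# Edgetic perturbation: remove the x1 -> x2 interaction (x2' fixed to 0),
# breaking the positive cycle so only the fixed point {00} remains.
perturbed = [lambda s: s[1], lambda s: 0]

print(len(attractors(orig, 2)), len(attractors(perturbed, 2)))  # 3 1
```

Exhaustive enumeration like this only scales to small networks; for the real-life networks in the paper, symbolic or decomposition-based methods are needed.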
Operational technology (OT) systems use hardware and software to monitor and control physical processes, devices, and infrastructure - often critical infrastructures. The convergence of information technology (IT) and OT has significantly heightened the cyber threats in OT systems. Although OT systems share many of the hardware and software components in IT systems, these components often operate under different expectations. In this work, several hardware root-of-trust architectures are surveyed and the attacks each one mitigates are compared. Attacks spanning the design, manufacturing, and deployment life cycle of safety-critical operational technology are considered. The survey examines architectures that provide a hardware root-of-trust as a peripheral component in a larger system, SoC architectures with an integrated hardware root-of-trust, and FPGA-based hardware root-of-trust systems. Each architecture is compared based on the attacks mitigated. The comparison demonstrates that protecting operational technology across its complete life cycle requires multiple solutions working in tandem.
Authored by Alan Ehret, Peter Moore, Milan Stojkov, Michel Kinsy
In the development of power engineering infrastructure and in production and management activities such as line equipment operation and inspection, power grid enterprises often suffer losses to their legitimate rights and interests because evidence is not collected in a timely manner or effective evidence is lacking, leaving them unable to prove their case and weakly positioned to defend their rights. In this context, this paper carries out technical research on a whole-life-cycle management scheme for electronic evidence in the safe production of power grid enterprises, designs the architecture of a trusted electronic-evidence storage and traceability application service system, and realizes trusted whole-life-cycle management of electronic evidence from collection, curing, and transmission through sealing to checking and identification. This enhances the credibility of electronic evidence, shifts evidence collection from the traditional "after the fact" mode to a "before the fact" mode, and raises the company's safety production management level.
Authored by Peng Chen, Hejian Wang, Lihua Zhao, Qinglei Guo, Bo Gao, Yongliang Li
Original Equipment Manufacturers (OEMs) need to collaborate within and outside their organizations to improve product quality and time to market. However, legacy systems built over decades using different technology stacks make information sharing and maintaining consistency challenging. Distributed ledger technologies (DLTs) can improve efficiency and provide trust, thus helping to achieve a more streamlined and unified collaboration infrastructure. However, most of the work done is theoretical or conceptual and lacks implementation. This paper elaborates on the architecture and implementation of a proof of concept (POC) of a blockchain-based interoperability and data-sharing system that allows OEMs to collaborate seamlessly and share information in real time.
Authored by Niranjan Marathe, Lawrence Chung, Tom Hill
With the popularization of AIoT applications, every endpoint device faces information security risks; thus, how to ensure the security of the device becomes essential. Chip security is divided into software security and hardware security, both of which are indispensable and complement each other. Hardware security underpins the entire cybersecurity ecosystem by providing essential primitives, including key provisioning, hardware cryptographic engines, a hardware unique key (HUK), and a unique identification (UID). This establishes a Hardware Root of Trust (HRoT) with secure storage, secure operation, and a secure environment to provide a trustworthy foundation for chip security. Today's talk starts with how to use a Physical Unclonable Function (PUF) to generate a unique "fingerprint" (static random number) for the chip. Next, we address using a static random number and dynamic entropy to design a high-performance true random number generator, achieving a genuinely anti-tampering HRoT by leveraging both static and dynamic entropy. By integrating NIST-standard cryptographic engines, we have created an authentic PUF-based Hardware Root of Trust. The all-in-one integrated solution can handle all the necessary security functions throughout the product life cycle as well as maintaining a secure boundary to preserve the integrity of sensitive information and assets. Finally, as hardware-level protection extends to operating systems and applications, products and services become secure.
Authored by Meng-Yi Wu
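The derivation of device-unique secrets from a PUF fingerprint, as described in the talk, can be sketched conceptually. The raw response bytes and derivation labels below are illustrative assumptions; real silicon designs use fuzzy extractors or error-correcting helper data to stabilize the noisy PUF response, and NIST-approved key derivation functions rather than a bare hash.

```python
import hashlib

# Conceptual sketch: condition a raw (assumed already error-corrected) PUF
# response into a Hardware Unique Key (HUK) and a public UID by hashing the
# response under distinct domain-separation labels.

raw_puf_response = bytes([0b10110010, 0b01101100, 0b11110000, 0b00011011])

def derive(label: bytes, response: bytes) -> bytes:
    """Hash-based conditioning of the device-unique response."""
    return hashlib.sha256(label + response).digest()

huk = derive(b"HUK", raw_puf_response)        # device-unique secret key material
uid = derive(b"UID", raw_puf_response)[:8]    # short public identifier

print(len(huk), len(uid), huk[:8] != uid)  # 32 8 True
```

The domain-separation labels ensure the secret HUK and the public UID are computationally unrelated even though both derive from the same fingerprint, which is the property that lets the UID be published without weakening the HUK.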
Industrial control systems (ICSs) and supervisory control and data acquisition (SCADA) systems are frequently used and are essential to the operation of vital infrastructure such as oil and gas pipelines, power plants, distribution grids, and airport control towers. However, these systems confront a number of obstacles and risks that can jeopardize their safety and reliability, including communication failures, cyber-attacks, environmental hazards, and human errors. How can we ensure that SCADA systems are both effective and secure? The oil and gas industry literature lacks an analysis of the underpinning design process, and available research fails to offer appropriate direction for a methodical technique or modeling language that enables trust-based study of ICS and SCADA systems. The most pressing challenges include attaining trust by design in ICS and SCADA, as well as methodically implementing trust design into the development process from the beginning of the system's life cycle. This paper presents the design of a modern ICS and SCADA system for the oil and gas industries utilizing model-based systems engineering (MBSE) approaches. ICS and SCADA concepts and definitions are presented, and ICS and SCADA are examined using comprehensive architectural artifacts. By extending the SysML and UML diagrams to trust in ICS and SCADA, we showcase the usefulness of the MBSE method.
Authored by Zina Oudina, Makhlouf Derdour, Ahmed Dib, Amal Tachouche
Summary & Conclusions: Resilience, a system property merging the consideration of stochastic and malicious events with a focus on mission success, motivates researchers and practitioners to develop methodologies to support holistic assessments. While established risk assessment methods exist for early and advanced analysis of complex systems, the dynamic nature of security is much more challenging for resilience analysis. The scientific contribution of this paper is a methodology called Trust Loss Effects Analysis (TLEA) for the systematic assessment of risks to the mission emerging from compromised trust of humans who are part of, or interact with, the system. To make this work more understandable and applicable, the TLEA method follows the steps of Failure Mode, Effects & Criticality Analysis (FMECA), differing in the steps related to the identification of security events; there, the TLEA method uses steps from the Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service (DoS), Elevation of privilege (STRIDE) methodology. The TLEA is introduced using a generic example and is then demonstrated on a more realistic use case of a drone-based system on a reconnaissance mission. After the application of the TLEA method, it is possible to identify different risks related to the loss of trust and evaluate their impact on mission success.
Authored by Douglas Van Bossuyt, Nikolaos Papakonstantinou, Britta Hale, Ryan Arlitt
Anomaly detection is a challenge well-suited to machine learning, and in the context of information security, the benefits of unsupervised solutions show significant promise. Recent attention to Graph Neural Networks (GNNs) has provided an innovative approach to learning from attributed graphs. Using a GNN encoder-decoder architecture, anomalous edges between nodes can be detected during the reconstruction phase. The aim of this research is to determine whether an unsupervised GNN model can detect anomalous network connections in a static, attributed network. Network logs were collected from four corporate networks and one artificial network using endpoint monitoring tools. A GNN-based anomaly detection system was designed and employed to score and rank anomalous connections between hosts. The model was validated on four realistic experimental scenarios across the four large corporate networks and the smaller artificial network environment. Although quantitative metrics were affected by factors including the scale of the network, qualitative assessments indicated that anomalies from all scenarios were detected. The false positives across each scenario indicate that this model in its current form is useful for initial triage, though it would require further improvement to become a performant detector. This research serves as a promising step toward advancing this methodology for detecting anomalous network connections. Future work to improve results includes narrowing the scope of detection to specific threat types and a further focus on feature engineering and selection.
Authored by Charlie Grimshaw, Brian Lachine, Taylor Perkins, Emilie Coote
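The reconstruction idea behind such encoder-decoder edge scoring can be sketched in a few lines of NumPy. This is a toy illustration only, not the paper's architecture: one GCN-style propagation step produces node embeddings, an inner-product decoder reconstructs edge probabilities, and existing edges that the decoder assigns low probability receive high anomaly scores. The graph, features, and (untrained) weights are all invented for the example.

```python
import numpy as np

# Toy reconstruction-based edge scoring (illustrative, not the paper's model).
rng = np.random.default_rng(0)

A = np.array([            # adjacency: hosts 0-3 form a clique,
    [0, 1, 1, 1, 0],      # host 4 connects only to host 0
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
], dtype=float)
A[0, 4] = A[4, 0] = 1.0

X = np.eye(5)                 # one-hot node attributes for the toy graph
W = rng.normal(size=(5, 8))   # untrained weights; a real model learns these

# Symmetric normalisation: A_hat = D^-1/2 (A + I) D^-1/2
A_tilde = A + np.eye(5)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

Z = np.tanh(A_hat @ X @ W)               # encoder: node embeddings
recon = 1.0 / (1.0 + np.exp(-Z @ Z.T))   # decoder: edge probabilities

# Anomaly score of an observed edge = 1 - its reconstructed probability.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5) if A[i, j]]
scores = {(i, j): 1.0 - recon[i, j] for (i, j) in edges}
for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"edge {i}-{j}: anomaly score {s:.3f}")
```

With random untrained weights the ranking here is meaningless; after training the encoder-decoder to reconstruct normal connectivity, poorly reconstructed edges are the candidates for triage.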
The escalating visibility of insecure direct object reference (IDOR) vulnerabilities in API security, as indicated in the compilation of OWASP Top 10 API Security Risks, highlights a noteworthy peril to sensitive data. This study explores IDOR vulnerabilities found within Android APIs, intending to clarify their inception while evaluating their implications for application security. The study combined qualitative and quantitative approaches. Insights into the primary causes of IDOR vulnerabilities were obtained from an actual penetration test on an Android app, underscoring insufficient input validation and weak authorization methods. We stress the frequent occurrence of IDOR vulnerabilities in the OWASP Top 10 API vulnerability list, highlighting the necessity to prioritize them in security evaluations. Mitigation recommendations are provided for developers. The study acknowledges its limitations, including a possibly small and homogeneous selection of tested Android applications, a testing environment that could introduce inaccuracies, and the impact of time constraints. Additionally, the study noted insufficient threat modeling and root cause analysis, affecting its generalizability and real-world relevance. Nevertheless, understanding and controlling IDOR dangers can enhance Android API security, protect user data, and bolster application resilience.
Authored by Semi Yulianto, Roni Abdullah, Benfano Soewito
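The weak-authorization root cause the study identifies has a standard server-side mitigation: never return an object based on its ID alone, but verify that the authenticated caller is entitled to it. The sketch below is a minimal, hypothetical example of such an ownership check; the names (`get_invoice`, `AuthorizationError`) and data are invented and not taken from the study.

```python
# Illustrative server-side ownership check that blocks a basic IDOR:
# the handler never trusts the object ID alone; it verifies that the
# authenticated user owns the record before returning it.
# All names and data here are hypothetical.

class AuthorizationError(Exception):
    pass

INVOICES = {  # toy datastore: invoice_id -> record
    101: {"owner": "alice", "amount": 40},
    102: {"owner": "bob", "amount": 75},
}

def get_invoice(invoice_id, authenticated_user):
    record = INVOICES.get(invoice_id)
    if record is None or record["owner"] != authenticated_user:
        # Identical response for "missing" and "not yours" avoids
        # leaking which object IDs exist (an enumeration aid).
        raise AuthorizationError("invoice not found")
    return record

print(get_invoice(101, "alice")["amount"])  # owner may read: prints 40
try:
    get_invoice(101, "bob")                 # IDOR attempt is rejected
except AuthorizationError as exc:
    print(exc)
```

Combining such checks with non-guessable object identifiers and strict input validation addresses both root causes the penetration test surfaced.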
Vendor cybersecurity risk assessment is of critical importance to smart city infrastructure and the sustainability of the autonomous mobility ecosystem. Lack of engagement in cybersecurity policies and process implementation by the tier companies providing hardware or services to OEMs within this ecosystem poses a significant risk not only to the individual companies but to the ecosystem overall. The proposed quantitative method of estimating cybersecurity risk gives vendors visibility into the financial risk associated with potential threats and consequently lets them allocate adequate resources to cybersecurity. It facilitates faster implementation of defense measures and provides a useful tool in the vendor selection process. The paper focuses on cybersecurity risk assessment as a critical part of the overall company mission to create a sustainable structure for maintaining cybersecurity health. Compound cybersecurity risk and impact on company operations, as outputs of this quantitative analysis, present a unique opportunity to strategically plan and make informed decisions towards acquiring a reputable position in a sustainable ecosystem. This method provides attack trees and assigns a risk factor to each vendor, thus offering a competitive advantage and an insight into the supply chain risk map. This is an innovative way to look at vendor cybersecurity posture. Through a selection of unique industry-specific parameters and a modular approach, this risk assessment model can be employed as a tool to navigate the supply base and prevent significant financial cost. It generates synergies within the connected vehicle ecosystem, leading to a safe and sustainable economy.
Authored by Albena Tzoneva, Galina Momcheva, Borislav Stoyanov
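The attack-tree mechanics underlying such a per-vendor risk factor can be sketched briefly. In the standard formulation, leaves carry attack success probabilities, an OR gate succeeds if any child succeeds (assuming independence), and an AND gate succeeds only if all children succeed; multiplying the root probability by a financial impact yields an expected loss. The tree structure and all numbers below are hypothetical, not the paper's model.

```python
# Illustrative attack-tree evaluation (hypothetical structure and figures).
# Leaves carry success probabilities; OR = 1 - product of child failure
# probabilities, AND = product of child success probabilities (assuming
# independent attack steps).

def success_prob(node):
    if "p" in node:                      # leaf node
        return node["p"]
    child_ps = [success_prob(c) for c in node["children"]]
    if node["gate"] == "AND":
        prob = 1.0
        for p in child_ps:
            prob *= p
        return prob
    # OR gate
    fail = 1.0
    for p in child_ps:
        fail *= 1.0 - p
    return 1.0 - fail

tree = {
    "gate": "OR",
    "children": [
        {"p": 0.05},                                            # phishing of vendor staff
        {"gate": "AND", "children": [{"p": 0.3}, {"p": 0.2}]},  # exposed API + weak creds
    ],
}

impact_usd = 2_000_000  # assumed financial impact of a successful compromise
p_root = success_prob(tree)
print(f"root success probability: {p_root:.4f}")
print(f"expected loss: ${p_root * impact_usd:,.0f}")
```

Evaluating such a tree per vendor, with industry-specific leaf probabilities, yields the comparable risk factors the paper uses for supply-base navigation.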
An end-to-end cyber risk assessment process is presented that is based on the combination of guidelines from the National Institute of Standards \& Technology (NIST), the standard 5\times 5 risk matrix, and quantitative methods for generating loss exceedance curves. The NIST guidelines provide a framework for cyber risk assessment, and the standard 5\times 5 matrix is widely used across the industry for the representation of risk across multiple disciplines. Loss exceedance curves are a means of quantitatively assessing the loss that occurs due to a given risk profile. Combining these different techniques enables us to follow the guidelines, adhere to standard 5\times 5 risk management practices, and develop quantitative metrics simultaneously. Our quantification process is based on the consideration of the NASA and JPL Cost Risk assessment modeling techniques, as we define the cost associated with the cybersecurity risk profile of a mission as a function of the mission cost.
Authored by Leila Meshkat, Robert Miller
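A loss exceedance curve plots the probability that annual loss meets or exceeds each threshold, and is commonly estimated by Monte Carlo simulation. The sketch below is a generic illustration of that construction, not the paper's model: incident counts are assumed Poisson and per-incident severities lognormal, with all parameters invented for the example.

```python
import numpy as np

# Monte Carlo sketch of a loss exceedance curve (illustrative parameters).
# Annual loss = sum of per-incident losses; incident count ~ Poisson,
# per-incident severity ~ lognormal. The LEC is P(annual loss >= x) vs x.

rng = np.random.default_rng(42)
n_years = 50_000
lam = 2.0                # assumed mean incidents per year
mu, sigma = 11.0, 1.2    # assumed lognormal severity parameters

counts = rng.poisson(lam, size=n_years)
losses = np.array([
    rng.lognormal(mu, sigma, size=k).sum() if k else 0.0
    for k in counts
])

def exceedance(threshold):
    """P(annual loss >= threshold), estimated from the simulated years."""
    return float((losses >= threshold).mean())

for x in (0, 100_000, 500_000, 1_000_000):
    print(f"P(loss >= ${x:>9,}) = {exceedance(x):.3f}")
```

Mapping thresholds on this curve back to likelihood and consequence bands is what lets the quantitative output coexist with the 5\times 5 matrix representation.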
This research looks into the measures taken by financial institutions to secure their systems and reduce the likelihood of attacks. The study results indicate that societies everywhere are currently undergoing a digital transformation. The dawn of the Internet ushered in an era of increased sophistication in many fields, and over the past few years there has been a gradual but steady shift in the business world toward digital and networked computing. Because of this ease of use and its positive effects, financial organizations are increasingly vulnerable to external cyberattacks; they are also susceptible to attacks from within their own organization. In this paper, we develop a machine learning based quantitative risk assessment model that effectively assesses and minimizes this risk. Quantitative risk calculation is used since it is the best way to calculate network risk. According to the study, a network's vulnerability is proportional to the number of times its threats have been exploited and the amount of damage they have caused. A simulation is used to test the model's efficacy, and the results show that the model detects threats more effectively than the other methods.
Authored by Lavanya M, Mangayarkarasi S
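The study's premise that risk scales with exploitation frequency and damage caused matches the classic annualized-loss formulation, which can be sketched briefly. The threat names and figures below are hypothetical, invented purely to illustrate the calculation.

```python
# Illustrative annualized-loss scoring: risk grows with how often a
# threat is exploited and how much damage each exploitation causes.
# Threat names and figures are hypothetical.

threats = [
    # (name, exploits_per_year, damage_per_exploit_usd)
    ("phishing", 12, 8_000),
    ("ransomware", 1, 250_000),
    ("insider data theft", 2, 60_000),
]

def annualized_loss(frequency, damage):
    """Expected yearly loss for a threat: frequency x damage per event."""
    return frequency * damage

ranked = sorted(
    ((name, annualized_loss(f, d)) for name, f, d in threats),
    key=lambda item: item[1],
    reverse=True,
)
for name, ale in ranked:
    print(f"{name:20s} ${ale:,.0f}/year")
```

In an ML-based model like the one the paper describes, the frequency and damage inputs would be predicted from network telemetry rather than entered by hand, but the resulting ranking serves the same prioritization purpose.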