This research presents its main contribution in the context of applying black-box models in knowledge-based systems. It elaborates on the fundamental limitation of these models in providing internal explanations, which leads to non-compliance with prevailing regulations such as the GDPR and PDP, as well as with user needs, especially in high-risk areas like credit evaluation. The use of Explainable Artificial Intelligence (XAI) in such systems therefore becomes highly significant. However, its implementation in the credit-granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit-granting system in Indonesia with local explanations. The development is carried out by utilizing Indonesian credit data, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, candidate solutions for the final system are compared using Decision Tree and XGBoost models with the LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The results indicate that the Decision Tree explanation method outperforms the other tested methods. However, this study also faces several challenges, including limited data size due to time constraints on expert data provision and the simplicity of the important features, stemming from limits on experts' authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
This tertiary systematic literature review examines 29 systematic literature reviews and surveys in Explainable Artificial Intelligence (XAI) to uncover trends, limitations, and future directions. The study explores current explanation techniques, providing insights for researchers, practitioners, and policymakers interested in enhancing AI transparency. Notably, the increasing number of systematic literature reviews (SLRs) in XAI publications indicates a growing interest in the field. The review offers an annotated catalogue for human-computer interaction-focused XAI stakeholders, emphasising practitioner guidelines. Automated searches across ACM, IEEE, and Science Direct databases identified SLRs published between 2019 and May 2023, covering diverse application domains and topics. While adhering to methodological guidelines, the SLRs often lack primary study quality assessments. The review highlights ongoing challenges and future opportunities related to XAI evaluation standardisation, its impact on users, and interdisciplinary research on ethics and GDPR aspects. The 29 SLRs, analysing over two thousand papers, include five directly relevant to practitioners. Additionally, references from the SLRs were analysed to compile a list of frequently cited papers, serving as recommended foundational readings for newcomers to the field.
Authored by Saša Brdnik, Boštjan Šumak
This work proposed a unified approach, LISA, to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and SHapley Additive exPlanations (SHAP), a Shapley-value-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in the black-box model's decisions so that it can be employed in crucial applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
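The unified method above builds on LIME's local-surrogate idea. As an illustration only (the toy black_box function and all parameter values below are hypothetical, not taken from the paper), the core mechanism — perturb around an instance, weight samples by proximity, fit a weighted linear model — can be sketched in plain Python:

```python
import math, random

# Hypothetical black-box classifier score (a stand-in for a CNN).
def black_box(x):
    return 1.0 if x[0] + 2 * x[1] > 1.0 else 0.0

def gauss_solve(A, b):
    """Tiny Gauss-Jordan solver with partial pivoting for the normal equations."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * bv for a, bv in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lime_like_explain(model, x0, n_samples=500, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME's core idea)."""
    rng = random.Random(seed)
    d = len(x0)
    XtWX = [[0.0] * (d + 1) for _ in range(d + 1)]
    XtWy = [0.0] * (d + 1)
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 0.3) for xi in x0]          # perturb the instance
        dist2 = sum((a - b) ** 2 for a, b in zip(z, x0))
        w = math.exp(-dist2 / width ** 2)                  # proximity kernel
        row = [1.0] + z
        y = model(z)
        for i in range(d + 1):
            XtWy[i] += w * row[i] * y
            for j in range(d + 1):
                XtWX[i][j] += w * row[i] * row[j]
    return gauss_solve(XtWX, XtWy)  # [intercept, coef_1, ..., coef_d]

coefs = lime_like_explain(black_box, [0.4, 0.4])
```

Since the toy model depends on x[1] twice as strongly as on x[0], the surrogate's second coefficient should dominate, mirroring how LIME attributes importance locally.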
Anomaly detection and its explanation are important in many research areas, such as intrusion detection, fraud detection, and unknown attack detection in network traffic and logs. It is challenging to identify the cause or explanation of why one instance is an anomaly and another is not, due to the unbounded nature of the problem and the lack of supervision. Answering this question is possible with the emerging techniques of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and inner workings of complex models such as Deep Learning (DL) models. This paper aims to detect and explain network anomalies with XAI, specifically the KernelSHAP method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision, and F-score. The experiment is conducted with the recent CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model) and compared. The overall accuracy and F-score of OPT_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Authored by Khushnaseeb Roshan, Aasim Zafar
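KernelSHAP approximates Shapley values for many features; for a handful of features the same quantity can be computed exactly by enumerating coalitions. The sketch below is an illustration only (the additive score function is hypothetical, not the paper's detector): value_fn(S) is the model's output when only the features in S are "present".

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.
    KernelSHAP estimates exactly this quantity when n is too large to enumerate."""
    phi = [0.0] * n_features
    players = range(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (value_fn(frozenset(S) | {i}) - value_fn(frozenset(S)))
    return phi

# Hypothetical additive "anomaly score": feature 0 contributes 3, feature 1 contributes 1.
def score(S):
    return 3.0 * (0 in S) + 1.0 * (1 in S)

phi = shapley_values(score, 2)
# For an additive function, Shapley values recover each feature's contribution: [3.0, 1.0]
```

The efficiency property holds by construction: the attributions sum to score(full) - score(empty), which is what makes SHAP explanations additive.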
Nowadays, anomaly-based network intrusion detection systems (NIDS) still have limited real-world application; this is particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose to use explainable artificial intelligence (XAI) methods to tackle these issues. In our experiments, we train a random forest (RF) model on the NSL-KDD dataset and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. There, we observe severe imbalances in the number of records per attack type subsumed in the attack classes of the NSL-KDD dataset, which could lead to poor generalization and overfitting in classification. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance is considerably improved, and the new explanations match expectations based on domain knowledge better. Thus, we conclude that the imbalances in the dataset bias classification and consequently also the results of XAI methods like SHAP. However, XAI methods can also be employed to find and debug issues and biases in the data and the applied model. Furthermore, this debugging results in higher trustworthiness of anomaly-based NIDS.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
With the increasing complexity of network attacks, traditional firewall technologies are facing challenges in effectively detecting and preventing these attacks. As a result, AI technology has emerged as a promising approach to enhance the capabilities of firewalls in detecting and mitigating network attacks. This paper aims to investigate the application of AI firewalls in network attack detection and proposes a testing method to evaluate their performance. An experiment was conducted to verify the feasibility of the proposed testing method. The results demonstrate that AI firewalls exhibit higher accuracy in detecting network attacks, thereby highlighting their effectiveness. Furthermore, the testing method can be utilized to compare different AI firewalls.
Authored by Zhijia Wang, Qi Deng
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of a model from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework for how an attacker would adopt scaffoldings to deceive security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset. We then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful and that the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
Automated Internet of Things (IoT) devices generate a considerable amount of data continuously. However, an IoT network can be vulnerable to botnet attacks, in which a group of IoT devices is infected by malware and forms a botnet. Recently, Artificial Intelligence (AI) algorithms have been introduced to detect and resist such botnet attacks in IoT networks. However, most of the existing Deep Learning-based algorithms are designed and implemented in a centralized manner. Therefore, these approaches can be sub-optimal in detecting zero-day botnet attacks against a group of IoT devices. Besides, a centralized AI approach requires sharing data traces from the IoT devices for training purposes, which jeopardizes user privacy. To tackle these issues, in this paper we propose a federated learning-based framework for zero-day botnet attack detection, in which a new aggregation algorithm for the IoT devices is developed so that better model aggregation can be achieved without compromising user privacy. Evaluations are conducted on an open dataset, the N-BaIoT dataset. The evaluation results demonstrate that the proposed learning framework with the new aggregation algorithm outperforms the existing baseline aggregation algorithms in federated learning for zero-day botnet attack detection in IoT networks.
Authored by Jielun Zhang, Shicong Liang, Feng Ye, Rose Hu, Yi Qian
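The abstract does not specify the new aggregation rule, but the baseline it is compared against is standard federated averaging (FedAvg): average client model parameters weighted by each client's local sample count. A minimal sketch (flattened parameter vectors and client sizes are illustrative values, not from the paper):

```python
def fed_avg(client_weights, client_sizes):
    """Standard FedAvg baseline: average client model parameters,
    weighted by each client's number of local training samples.
    (The paper's own aggregation rule differs; this is the common baseline.)"""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    agg = [0.0] * n_params
    for w, n in zip(client_weights, client_sizes):
        for i in range(n_params):
            agg[i] += (n / total) * w[i]
    return agg

# Two IoT clients with unequal data volumes: the larger client dominates.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# → [2.5, 3.5]
```

Because only parameter vectors leave the devices, raw traffic traces stay local, which is the privacy argument the abstract makes against centralized training.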
Significant progress has been made towards developing Deep Learning (DL) models in Artificial Intelligence (AI) that can make independent decisions. However, this progress has also highlighted the emergence of malicious entities that aim to manipulate the outcomes generated by these models. Due to increasing complexity, this is a concerning issue in various fields, such as medical image classification, autonomous vehicle systems, malware detection, and criminal justice. Recent research has highlighted the vulnerability of these classifiers to both conventional and adversarial attacks, which may skew their results in both the training and testing stages. This Systematic Literature Review (SLR) aims to analyse traditional and adversarial attacks comprehensively. It evaluates 45 published works from 2017 to 2023 to better understand adversarial attacks, including their impact, causes, and standard mitigation approaches.
Authored by Tarek Ali, Amna Eleyan, Tarek Bejaoui
This study presents a novel approach for fortifying network security systems, crucial for ensuring network reliability and survivability against evolving cyber threats. Our approach integrates Explainable Artificial Intelligence (XAI) with an ensemble of autoencoders and Linear Discriminant Analysis (LDA) to create a robust framework for detecting both known and elusive zero-day attacks. We refer to this integrated method as AE-LDA. Our method stands out in its ability to effectively detect both known and previously unidentified network intrusions. By employing XAI for feature selection, we ensure improved interpretability and precision in identifying key patterns indicative of network anomalies. The autoencoder ensemble, trained on benign data, is adept at recognising a broad spectrum of network behaviours, thereby significantly enhancing the detection of zero-day attacks. Simultaneously, LDA aids in the identification of known threats, ensuring comprehensive coverage of potential network vulnerabilities. This hybrid model demonstrates superior performance in anomaly detection accuracy and complexity management. Our results highlight a substantial advancement in network intrusion detection capabilities, showcasing an effective strategy for bolstering network reliability and resilience against a diverse range of cyber threats.
Authored by Fatemeh Stodt, Fabrice Theoleyre, Christoph Reich
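The abstract does not give AE-LDA's decision rule, but the generic pattern behind autoencoder ensembles trained on benign data is: calibrate a reconstruction-error threshold on benign traffic, then flag a sample as a potential zero-day when enough ensemble members exceed it. A sketch under those assumptions (all numbers and the voting rule are illustrative, not from the paper):

```python
def calibrate_threshold(benign_errors, quantile=0.99):
    """Pick a reconstruction-error threshold from benign traffic only,
    e.g. a high percentile, so almost all benign samples fall below it."""
    s = sorted(benign_errors)
    idx = min(int(quantile * len(s)), len(s) - 1)
    return s[idx]

def ensemble_flag(errors_per_ae, thresholds, min_votes=2):
    """Flag a sample as a potential zero-day anomaly when at least
    min_votes autoencoders report reconstruction error above their threshold."""
    votes = sum(e > t for e, t in zip(errors_per_ae, thresholds))
    return votes >= min_votes

benign = [0.1, 0.12, 0.09, 0.11, 0.1, 0.13, 0.08, 0.1, 0.12, 0.11]
t = calibrate_threshold(benign)
# A sample reconstructed badly by 2 of 3 autoencoders is flagged.
flagged = ensemble_flag([0.5, 0.4, 0.05], [t, t, t])
```

Because the threshold is derived from benign data alone, no attack labels are needed, which is what lets such detectors generalize to previously unseen attack types.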
With the proliferation of data in Internet-related applications, incidents of cyber security breaches have increased manyfold. Energy management, which is one of the smart city layers, has also been experiencing cyberattacks. Furthermore, Distributed Energy Resources (DER), which depend on different controllers to provide energy to the main physical smart grid of a smart city, are prone to cyberattacks. The increase in cyberattacks on DER systems is mainly because of their dependency on digital communication and controls, as there is an increase in the number of devices owned and controlled by consumers and third parties. This paper analyzes the major cyber security and privacy challenges that might damage or compromise the DER and related controllers in smart cities. These challenges highlight that security and privacy in the Internet of Things (IoT), big data, artificial intelligence, and the smart grid, which are the building blocks of a smart city, must be addressed in the DER sector. It is observed that the security and privacy challenges in smart cities can be addressed through a distributed framework, by identifying and classifying stakeholders, by using appropriate models, and by incorporating fault-tolerance techniques.
Authored by Tarik Himdi, Mohammed Ishaque, Muhammed Ikram
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and to have low uncertainty when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that have higher quality not only when evaluated via statistical metrics (e.g., correlation with residuals) but also when employed to build binary classifiers that decide whether the model's output can be trusted, and under distributional data shifts.
Authored by Pedro Mendes, Paolo Romano, David Garlan
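EUAT's core mechanism is the per-example switch between two loss terms depending on prediction correctness. The sketch below illustrates only that switch; the abstract does not give the paper's exact loss functions, so cross-entropy plus a signed predictive-entropy term is an assumed stand-in:

```python
import math

def predictive_entropy(probs):
    """Entropy of the predicted distribution, a common uncertainty proxy."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def euat_style_loss(probs, label):
    """Illustrative EUAT-style per-example loss switch: cross-entropy plus an
    entropy term that is minimized (positive sign) on correct predictions and
    maximized (negative sign) on mispredictions. The paper's exact terms and
    weighting may differ; this only shows the selection mechanism."""
    ce = -math.log(max(probs[label], 1e-12))
    h = predictive_entropy(probs)
    correct = max(range(len(probs)), key=probs.__getitem__) == label
    return ce + h if correct else ce - h

# Correct, confident prediction: low loss. Same output, wrong label: entropy is rewarded.
loss_ok = euat_style_loss([0.9, 0.1], label=0)
loss_bad = euat_style_loss([0.9, 0.1], label=1)
```

The sign flip on the entropy term is what pushes the model to keep its mispredictions uncertain while staying confident where it is right, which is the twofold goal the abstract describes.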
Problems such as the increase in the number of private vehicles with population growth, the rise in environmental pollution, the emergence of unmet infrastructure and resource needs, and the decrease in time efficiency in cities have put local governments, cities, and countries in search of solutions. These problems are addressed within the concepts of smart cities and intelligent transportation by using information and communication technologies in line with needs. While designing intelligent transportation systems (ITS), beyond traditional methods, big data should be handled in a state-of-the-art and appropriate way with the help of methods such as artificial intelligence, machine learning, and deep learning. In this study, a data-driven decision support system model was established to help businesses make strategic decisions with the help of intelligent transportation data and to contribute to the elimination of public transportation problems in the city. Our model has been established using big data and business intelligence technologies: a decision support system comprising a data sources layer, a data ingestion/collection layer, a data storage and processing layer, a data analytics layer, an application/presentation layer, a developer layer, and a data management/data security layer. In our study, the decision support system was modeled using ITS data supported by big data technologies, where the traditional approach could not provide a solution. This paper aims to create a basis for future studies looking for solutions to the problems of integration, storage, processing, and analysis of big data, and to add value to the literature within the framework of the model.
This study both addresses a gap in the literature and provides a model for mapping existing data sets onto the business intelligence architecture before the application process to be carried out by the authors.
Authored by Kutlu Sengul, Cigdem Tarhan, Vahap Tecim
In response to the advent of the software-defined world, this Fast Abstract introduces a new notion, information gravitation, in an attempt to unify and expand two related ones: information mass (related to the supposed fifth force) and data gravitation. This is motivated by the following question: is there a new kind of (gravitational) force between any two distinct pieces of information conveying messages? A possibly affirmative answer to this question of information gravitation, which is supposed to explore the theoretically and/or experimentally justified interplay between information and gravitation, might make significant sense for the software-defined world being augmented with artificial intelligence and virtual reality in the age of information. Information induces gravitation. Information gravitation should be related to Newton's law of universal gravitation and Einstein's general theory of relativity, and even to gravitational waves and the unified theory of everything.
Authored by Kai-Yuan Cai
In today's society, with the continuous development of artificial intelligence, AI technology plays an increasingly important role in social and economic development and has become one of the fastest growing, most widely used, and most influential high technologies in the world today. At the same time, however, information technology has also brought network security threats to the entire networked world, confronting information systems with huge and severe challenges that affect the stability and development of society to a certain extent. Therefore, comprehensive analysis and research on information system security is a very necessary and urgent task. Through security assessment of an information system, we can discover the key hidden dangers and loopholes that lurk in the information source or potentially threaten user data and confidential files, so as to effectively prevent these risks from occurring and provide effective solutions, and, to a certain extent, prevent virus invasion, malicious program attacks, and network hackers' intrusive behavior. This article adopts an experimental analysis method to explore how to apply the most practical, advanced, and efficient artificial intelligence theory to information system security assessment management, so as to further realize the optimal design of an information system security assessment management system, which has very important meaning and practical value for protecting national information security. According to the research results, the functionality of the experimental test system is complete and available, and its security is good, meeting the requirements of multi-user operation for security evaluation of the information system.
Authored by Song He, Xiaohong Shi, Yan Huang, Gong Chen, Huihui Tang
Sustainability within the built environment is increasingly important to our global community, as it minimizes environmental impact whilst providing economic and social benefits. Governments recognize the importance of sustainability by providing economic incentives, and tenants, particularly large enterprises, seek facilities that align with their corporate social responsibility objectives. Claiming sustainability outcomes clearly has benefits for facility owners and facility occupants that have sustainability as part of their business objectives, but there are also incentives to overstate the value delivered or to measure only the parts of the facility lifecycle that provide attractive results. Whilst there is a plethora of research on Building Information Management (BIM) systems within the construction industry, there has been limited research on BIM in the facilities management & sustainability fields. The significant contribution of this research is the integration of blockchain for the purposes of transaction assurance, with the development of a working model spanning BIM and blockchain underpinning phase one of this research. From an industry perspective, the contribution of this paper is to articulate a path for integrating a wide range of mature and emerging technologies into solutions that deliver trusted results for government, facility owners, tenants, and other directly impacted stakeholders to assess sustainability impact.
Authored by Luke Desmomd, Mohamed Salama
This study explores how AI-driven personal finance advisors can significantly improve individual financial well-being. It addresses the complexity of modern finance, emphasizing the integration of AI for informed decision-making. The research covers challenges such as budgeting, investment planning, debt management, and retirement preparation. It highlights AI's capabilities in data-driven analysis, predictive modeling, and personalized recommendations, particularly in risk assessment, portfolio optimization, and real-time market monitoring. The paper also addresses ethical and privacy concerns, proposing a transparent deployment framework. User acceptance and trust-building are crucial for widespread adoption. A case study demonstrates enhanced financial literacy, returns, and overall well-being with AI-powered advisors, underscoring their potential to revolutionize financial wellness. The study emphasizes responsible implementation and trust-building for ethical and effective AI deployment in personal finance.
Authored by Parth Pangavhane, Shivam Kolse, Parimal Avhad, Tushar Gadekar, N. Darwante, S. Chaudhari
Human-Centered Artificial Intelligence (AI) focuses on AI systems that prioritize user empowerment and ethical considerations. We explore the importance of user-centric design principles and ethical guidelines in creating AI technologies that enhance user experiences and align with human values. The work emphasizes user empowerment through personalized experiences and explainable AI, fostering trust and user agency. Ethical considerations, including fairness, transparency, accountability, and privacy protection, are addressed to ensure AI systems respect human rights and avoid biases. Effective human-AI collaboration is emphasized, promoting shared decision-making and user control. By involving interdisciplinary collaboration, this research contributes to advancing human-centered AI, providing practical recommendations for designing AI systems that enhance user experiences, promote user empowerment, and adhere to ethical standards. It emphasizes the harmonious coexistence of humans and AI, enhancing well-being and autonomy and creating a future where AI technologies benefit humanity. Overall, this research highlights the significance of human-centered AI in creating a positive impact. By centering on users' needs and values, AI systems can be designed to empower individuals and enhance their experiences. Ethical considerations are crucial to ensure fairness and transparency. With effective collaboration between humans and AI, we can harness the potential of AI to create a future that aligns with human aspirations and promotes societal well-being.
Authored by Usman Usmani, Ari Happonen, Junzo Watada
Nowadays, companies, critical infrastructure, and governments face cyber attacks every day, ranging from simple denial-of-service and password-guessing attacks to complex nation-state attack campaigns, so-called advanced persistent threats (APTs). Defenders employ intrusion detection systems (IDSs), among other tools, to detect malicious activity and protect network assets. With the evolution of threats, detection techniques have followed, with modern systems usually relying on some form of artificial intelligence (AI) or anomaly detection as part of their defense portfolio. While these systems are able to achieve higher accuracy in detecting APT activity, they cannot provide much context about the attack, as the underlying models are often too complex to interpret. This paper presents an approach to explain single predictions (i.e., detected attacks) of any graph-based anomaly detection system. By systematically modifying the input graph of an anomaly and observing the output, we leverage a variation of permutation importance to identify parts of the graph that are likely responsible for the detected anomaly. Our approach treats the anomaly detection function as a black box and is thus applicable to any whole-graph explanation problem. Our results on two established datasets for APT detection (StreamSpot & DARPA TC Engagement Three) indicate that our approach can identify nodes that are likely part of the anomaly. We quantify this through our area under baseline (AuB) metric and show that the AuB is higher for anomalous graphs. Further analysis via the Wilcoxon rank-sum test confirms that these results are statistically significant, with a p-value of 0.0041%.
Authored by Felix Welter, Florian Wilkens, Mathias Fischer
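The black-box attribution idea in the paper above can be sketched generically: drop one node (and its incident edges) at a time, re-query the anomaly scorer, and rank nodes by how much the score falls. The toy score_fn below is hypothetical and not the paper's detector; it only stands in for an opaque whole-graph scoring function:

```python
def node_importance(score_fn, graph_nodes, graph_edges):
    """Black-box node attribution in the spirit of permutation importance:
    remove one node (and its incident edges) at a time, re-query the
    anomaly detector, and record how much the anomaly score drops."""
    base = score_fn(graph_nodes, graph_edges)
    importance = {}
    for n in graph_nodes:
        kept_nodes = [v for v in graph_nodes if v != n]
        kept_edges = [(u, v) for (u, v) in graph_edges if n not in (u, v)]
        importance[n] = base - score_fn(kept_nodes, kept_edges)
    return importance

# Hypothetical opaque detector: anomaly score = number of edges touching node "c".
def score_fn(nodes, edges):
    return sum(1 for (u, v) in edges if "c" in (u, v))

imp = node_importance(score_fn, ["a", "b", "c"], [("a", "c"), ("b", "c"), ("a", "b")])
# Node "c" drives the anomaly: removing it drops the score the most.
```

Because only inputs and outputs of score_fn are touched, the same loop works for any whole-graph detector, which is the model-agnosticism the paper claims.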
Cybercrime continues to pose a significant threat to modern society, requiring a solid emphasis on cyber-attack prevention, detection and response by civilian and military organisations aimed at brand protection. This study applies a novel framework to identify, detect and mitigate phishing attacks, leveraging the power of computer vision technology and artificial intelligence. The primary objective is to automate the classification process, reducing the dwell time between detection and executing courses of action to respond to phishing attacks. When applied to a real-world curated dataset, the proposed classifier achieved relevant results with an F1-Score of 95.76% and an MCC value of 91.57%. These metrics highlight the classifier's effectiveness in identifying phishing domains with minimal false classifications, affirming its suitability for the intended purpose. Future enhancements include considering a fuzzy logic model that accounts for the classification probability in conjunction with the domain creation date and the uniqueness of downloaded resources when accessing the website or domain.
Authored by Carlos Pires, José Borges
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) use packet-level network traffic analysis and learned attack patterns to signal an assault. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also serve to hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks.
However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
This research looks into the measures taken by financial institutions to secure their systems and reduce the likelihood of attacks. The study results indicate that all societies are undergoing a digital transformation at the present time. The dawn of the Internet ushered in an era of increased sophistication in many fields. There has been a gradual but steady shift in attitude toward digital and networked computers in the business world over the past few years. Financial organizations are increasingly vulnerable to external cyberattacks due to the ease of use and positive effects of networked systems; they are also susceptible to attacks from within their own organisation. In this paper, we develop a machine learning-based quantitative risk assessment model that effectively assesses and minimises this risk. Quantitative risk calculation is used since it is the best approach for calculating network risk. According to the study, a network's vulnerability is proportional to the number of times its threats have been exploited and the amount of damage they have caused. Simulation is used to test the model's efficacy, and the results show that the model detects threats more effectively than the other methods.
Authored by Lavanya M, Mangayarkarasi S
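The abstract's premise — risk proportional to exploitation frequency and damage — matches the textbook quantitative risk formula ALE = ARO × SLE (annualized rate of occurrence times single loss expectancy). The sketch below uses that standard formula as an assumed stand-in, not the paper's exact model, and all threat figures are hypothetical:

```python
def annualized_loss_expectancy(aro, asset_value, exposure_factor):
    """Classic quantitative risk formula (not necessarily the paper's exact
    model): ALE = ARO x SLE, where SLE = asset value x exposure factor."""
    sle = asset_value * exposure_factor      # single loss expectancy
    return aro * sle                         # annualized loss expectancy

def rank_threats(threats):
    """Rank threats by ALE, descending, so mitigation targets the worst first."""
    return sorted(threats, key=lambda t: annualized_loss_expectancy(*t[1:]), reverse=True)

threats = [
    ("phishing",   12,   50_000, 0.05),   # (name, ARO, asset value, exposure factor)
    ("ransomware",  0.5, 200_000, 0.8),
]
ranked = rank_threats(threats)
# ransomware ALE: 0.5 * 160,000 = 80,000 > phishing ALE: 12 * 2,500 = 30,000
```

A rare but devastating threat can outrank a frequent but cheap one, which is why quantitative ranking beats simple incident counting for prioritizing defenses.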
Cyber security is a critical problem that causes data breaches, identity theft, and harm to millions of people and businesses. As technology evolves, new security threats emerge, compounded by a dearth of cyber security specialists equipped with up-to-date knowledge. It is hard for security firms to prevent cyber-attacks without the cooperation of senior professionals. However, by relying on artificial intelligence to combat cyber-attacks, the strain on specialists can be lessened. The use of Artificial Intelligence (AI) can improve Machine Learning (ML) approaches that mine data to detect the sources of cyberattacks or perhaps prevent them. AI enables and facilitates malware detection by utilizing data from prior cyber-attacks in a variety of ways, including behavior analysis, risk assessment, bot blocking, endpoint protection, and security task automation. However, deploying AI may present new threats, so cyber security experts must establish a balance between risk and benefit. While AI models can aid cybersecurity experts in making decisions and forming conclusions, they will never be able to make all cybersecurity decisions and judgments.
Authored by Safiya Alawadhi, Areej Zowayed, Hamad Abdulla, Moaiad Khder, Basel Ali
In the last decade, the rapid development of communications and IoT systems has raised many challenges regarding the security of devices that are handled wirelessly. Therefore, in this paper, we test the possibility of spoofing the connection parameters of Bluetooth Low Energy (BLE) devices, make several recommendations for increasing the security of the use of those devices, and propose basic countermeasures against the possibility of hacking them.
Authored by Cristian Capotă, Mădălin Popescu, Simona Halunga, Octavian Fratu
The digitalization and smartization of modern digital systems include the implementation and integration of emerging innovative technologies, such as Artificial Intelligence. By incorporating new technologies, the attack surface of the system also expands, and specialized cybersecurity mechanisms and tools are required to counter the potential new threats. This paper introduces a holistic security risk assessment methodology that aims to assist Artificial Intelligence system stakeholders in guaranteeing the correct design and implementation of technical robustness in Artificial Intelligence systems. The methodology is designed to facilitate the automation of the security risk assessment of Artificial Intelligence components together with the rest of the system components. Supporting the methodology, a solution for the automation of Artificial Intelligence risk assessment is also proposed. Both the methodology and the tool will be validated when assessing and treating risks in Artificial Intelligence-based cybersecurity solutions integrated in modern digital industrial systems that leverage emerging technologies such as the cloud continuum, including Software-Defined Networking (SDN).
Authored by Eider Iturbe, Erkuden Rios, Nerea Toledo