Around the world there has been an advancement of IoT edge devices that, in turn, has enabled the collection of rich datasets as part of the Mobile Crowd Sensing (MCS) paradigm, which in practice is implemented in a variety of safety-critical applications. In spite of the advantages of such datasets, there exists an inherent data trustworthiness challenge due to the interference of malevolent actors. In this context, there is a great body of proposed solutions which capitalize on conventional machine learning algorithms for sifting through faulty data without any assumptions on the trustworthiness of the source. However, a number of open issues remain, such as how to cope with strong colluding adversaries while, in parallel, managing efficiently the sizable influx of user data. In this work we suggest that the usage of explainable artificial intelligence (XAI) can lead to even more efficient performance, as we tackle the limitation of conventional black-box models by enabling the understanding and interpretation of a model's operation. Our approach enables reasoning about the model's accuracy in the presence of adversaries and has the ability to filter out faulty or malicious data, thus enhancing the model's adaptation process. To this end, we provide a prototype implementation coupled with a detailed performance evaluation under different attack scenarios, employing both real and synthetic datasets. Our results suggest that the use of XAI leads to improved performance compared to other existing schemes.
Authored by Sam Afzal-Houshmand, Dimitrios Papamartzivanos, Sajad Homayoun, Entso Veliou, Christian Jensen, Athanasios Voulodimos, Thanassis Giannetsos
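As a rough illustration of the kind of XAI-assisted filtering the abstract above describes (not the authors' actual implementation), the Python sketch below trains a simple anomaly detector on synthetic crowd-sensed readings and uses SHAP attributions to show which features drive each "suspect" verdict. The Isolation Forest detector, the feature names, and the synthetic data are illustrative assumptions.

```python
# Hedged sketch: flag suspect Mobile Crowd Sensing (MCS) reports and explain why.
# The detector choice (IsolationForest) and feature set are assumptions for illustration.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic sensed features per report: [temperature, humidity, noise_level]
honest = rng.normal(loc=[22.0, 55.0, 40.0], scale=[1.5, 5.0, 3.0], size=(500, 3))
faulty = rng.normal(loc=[35.0, 20.0, 80.0], scale=[2.0, 5.0, 5.0], size=(25, 3))
reports = np.vstack([honest, faulty])

detector = IsolationForest(contamination=0.05, random_state=0).fit(reports)
flags = detector.predict(reports)  # -1 = suspect, +1 = normal

# SHAP attributions over the anomaly score: which features made a report suspect?
explainer = shap.TreeExplainer(detector)
attributions = explainer.shap_values(reports, check_additivity=False)  # (n_reports, n_features)

feature_names = ["temperature", "humidity", "noise_level"]
for idx in np.where(flags == -1)[0][:3]:
    top = np.argmax(np.abs(attributions[idx]))
    print(f"report {idx}: flagged, dominant feature = {feature_names[top]}")
```

In such a pipeline, reports flagged with implausible dominant features could be excluded before the model adapts to new data.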
The aim of the study is to review XAI studies in terms of their solutions, applications, and challenges in renewable energy and resources. The results show that XAI really helps to explain how decisions are made by AI models, to increase confidence and trust in the models, to make the decision models reliable, and to show the transparency of the decision-making mechanism. Even though a number of XAI methods are available, such as SHAP, LIME, ELI5, DeepLIFT, and rule-based approaches, a number of problems in metrics, evaluation, performance, and explanation remain domain-specific and require domain experts to develop new models or to apply available techniques. It is hoped that this article might help researchers to develop XAI solutions in their energy applications and improve their AI approaches for further studies.
Authored by Betül Ersöz, Şeref Sağıroğlu, Halil Bülbül
As we know, change is the only constant present in healthcare services. In this rapidly developing world, the need to drastically improve healthcare performance is essential. Secure real-time health data monitoring, analysis, and storage present us with a highly efficient healthcare system to diagnose, predict, and prevent deadly diseases. Integrating IoT data with blockchain storage technology gives safety and security to the data. The current bottlenecks we face while integrating blockchain and IoT are primarily interoperability, scalability, and the lack of regulatory frameworks. By integrating Explainable AI into the system, it is possible to overcome some of these bottlenecks between IoT devices and blockchain. XAI acts as a middleware solution, helping to interpret the predictions and enforce the standard data communication protocol.
Authored by CH Murthy V, Lawanya Shri
Sixth generation (6G)-enabled massive network MANO orchestration, alongside distributed supervision and fully reconfigurable control logic that manages the dynamic arrangement of network components, such as cell-free, OpenAirInterface (OAI) and RIS, is a potent enabler for the upcoming pervasive digitalization of the vertical use cases. In such a disruptive domain, artificial intelligence (AI)-driven zero-touch “Network of Networks” intent-based automation shall be able to guarantee a high degree of security, efficiency, scalability, and sustainability, especially in cross-domain and interoperable deployment environments (i.e., where points of presence (PoPs) are non-independent and identically distributed (non-IID)). To this extent, this paper presents a novel, breakthrough, open, and fully reconfigurable networking architecture for 6G cellular paradigms, named 6G-BRICKS. To this end, 6G-BRICKS will deliver the first open and programmable O-RAN Radio Unit (RU) for 6G networks, termed the OpenRU, based on an NI USRP-based platform. Moreover, 6G-BRICKS will integrate the RIS concept into the OAI alongside Testing as a Service (TaaS) capabilities, multi-tenancy, disaggregated Operations Support Systems (OSS), and Deep Edge adaptation at the forefront. The overall ambition of 6G-BRICKS is to offer evolvability and granularity while, at the same time, tackling big challenges such as interdisciplinary efforts and big investments in 6G integration.
Authored by Kostas Ramantas, Anastasios Bikos, Walter Nitzold, Sofie Pollin, Adlen Ksentini, Sylvie Mayrargue, Vasileios Theodorou, Loizos Christofi, Georgios Gardikis, Md Rahman, Ashima Chawla, Francisco Ibañez, Ioannis Chochliouros, Didier Nicholson, Mario Montagud, Arman Shojaeifard, Alexios Pagkotzidis, Christos Verikoukis
This study addresses the critical need to secure VR network communication from non-immersive attacks, employing an intrusion detection system (IDS). While deep learning (DL) models offer advanced solutions, their opacity as "black box" models raises concerns. Recognizing this gap, the research underscores the urgency for DL-based explainability, enabling data analysts and cybersecurity experts to grasp model intricacies. Leveraging sensed data from IoT devices, our work trains a DL-based model for attack detection and mitigation in the VR network. Importantly, we extend our contribution by providing comprehensive global and local interpretations of the model’s decisions post-evaluation using SHAP-based explanations.
Authored by Urslla Izuazu, Dong-Seong Kim, Jae Lee
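To make the global and local SHAP interpretation mentioned above concrete, here is a minimal, hedged Python sketch: it trains a small neural classifier on synthetic "attack vs. benign" traffic features as a stand-in for the paper's DL model and VR/IoT data (which are not available here), then derives one global importance ranking and one per-alert explanation.

```python
# Hedged sketch of global and local SHAP explanations for an intrusion detector.
# The MLP classifier and synthetic features stand in for the paper's DL model and data.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))                      # e.g. packet rate, size, jitter, retries
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)    # synthetic "attack" labelling rule

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)

feature_names = ["pkt_rate", "pkt_size", "jitter", "retries"]
attack_prob = lambda data: model.predict_proba(data)[:, 1]
background = X[:100]                               # reference sample for the explainer

explainer = shap.Explainer(attack_prob, background)
explanation = explainer(X[:50])

# Global view: mean absolute SHAP value per feature across many alerts.
global_importance = np.abs(explanation.values).mean(axis=0)
print(dict(zip(feature_names, np.round(global_importance, 3))))

# Local view: why was this specific alert raised?
print(dict(zip(feature_names, np.round(explanation.values[0], 3))))
```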
Impact of Equivalence Assessment in the Education Sector using the XAI Model of Blockchain with ECTS
The procedure for obtaining an equivalency certificate for international educational recognition is typically complicated and opaque, and differs depending on the nation and system. To overcome these issues and empower students, this study suggests a revolutionary assessment tool that makes use of blockchain technology, chatbots, the European Credit Transfer and Accumulation System (ECTS), and Explainable Artificial Intelligence (XAI). Educational equivalency assessments frequently face difficulties and lack openness in a variety of settings. The suggested solution uses blockchain for tamper-proof record keeping and secure data storage, based on the capabilities of each component. This improves the blockchain’s ability to securely store application data and evaluation results, fostering immutability and trust. Using the distributed ledger feature of blockchain promotes fairness in evaluations by preventing tampering and guaranteeing data integrity. The blockchain ensures data security and privacy by encrypting and storing data. By reviewing pertinent material in each domain, we discuss how XAI might explain AI-driven equivalence decisions, promoting fairness and trust. Chatbots can improve accessibility by streamlining data collection and assisting students along the way. Transparency and efficiency are provided via ECTS computations that integrate XAI and chatbots. Emphasizing the availability of multilingual support for international students, we also address issues such as data privacy and system adaptation. The study recommends further research to assess the multifaceted method in practical contexts and improve the technology for moral and efficient application. In the end, both students and institutions will benefit from this, as it can empower individuals and promote international mobility through degree equivalence.
Authored by Sumathy Krishnan, R Surendran
Many studies of the adoption of machine learning (ML) in Security Operation Centres (SOCs) have pointed to a lack of transparency and explanation – and thus trust – as a barrier to ML adoption, and have suggested eXplainable Artificial Intelligence (XAI) as a possible solution. However, there is a lack of studies addressing the degree to which XAI indeed helps SOC analysts. Focusing on two XAI techniques, SHAP and LIME, we have interviewed several SOC analysts to understand how XAI can be used and adapted to explain ML-generated alerts. The results show that XAI can provide valuable insights for the analyst by highlighting features and information deemed important for a given alert. As far as we are aware, we are the first to conduct such a user study of XAI usage in a SOC, and this short paper provides our initial findings.
Authored by Håkon Eriksson, Gudmund Grov
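As a rough illustration of the LIME side of the techniques covered in the interviews above, the sketch below explains a single alert from a toy classifier. The alert features, the model, and the synthetic data are stand-ins for illustration, not the SOC setup studied in the paper.

```python
# Hedged sketch: LIME explanation of one ML-generated alert (synthetic stand-in data).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
feature_names = ["failed_logins", "bytes_out", "new_country", "off_hours"]
X = rng.normal(size=(500, 4))
y = ((X[:, 0] > 0.5) & (X[:, 3] > 0.0)).astype(int)   # synthetic "malicious" rule

clf = RandomForestClassifier(n_estimators=100, random_state=2).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    class_names=["benign", "malicious"],
    mode="classification",
)

alert = X[7]                                           # one alert an analyst is triaging
exp = explainer.explain_instance(alert, clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")                 # push toward / away from 'malicious'
```

The analyst-facing value in the study comes from exactly this kind of output: a short, per-alert list of the conditions that pushed the model toward raising the alert.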
Explainable AI is an emerging field that aims to address how black-box decisions of AI systems are made, by attempting to understand the steps and models involved in this decision-making. Explainable AI in manufacturing is supposed to deliver predictability, agility, and resiliency across targeted manufacturing apps. In this context, large amounts of data, which can be of high sensitivity and various formats, need to be securely and efficiently handled. This paper proposes an Asset Management and Secure Sharing solution tailored to the Explainable AI and Manufacturing context in order to tackle this challenge. The proposed asset management architecture enables an extensive data management and secure sharing solution for industrial data assets. Industrial data can be pulled, imported, managed, shared, and tracked with a high level of security using this design. This paper describes the solution's overall architectural design and gives an overview of the functionalities and incorporated technologies of the involved components, which are responsible for data collection, management, provenance, and sharing as well as for overall security.
Authored by Sangeetha Reji, Jonas Hetterich, Stamatis Pitsios, Vasilis Gkolemi, Sergi Perez-Castanos, Minas Pertselakis
In the progressive development towards 6G, the ROBUST-6G initiative aims to provide fundamental contributions to developing data-driven, AI/ML-based security solutions to meet the new concerns posed by the dynamic nature of forthcoming 6G services and networks in the future cyber-physical continuum. This aim has to be accompanied by the transversal objective of protecting AI/ML systems from security attacks and ensuring the privacy of individuals whose data are used in AI-empowered systems. ROBUST-6G will essentially investigate the security and robustness of distributed intelligence, enhancing privacy and providing transparency by leveraging explainable AI/ML (XAI). Another objective of ROBUST-6G is to promote green and sustainable AI/ML methodologies to achieve energy efficiency in 6G network design. The vision of ROBUST-6G is to optimize the computation requirements and minimize the consumed energy while providing the necessary performance for AI/ML-driven security functionalities; this will enable sustainable solutions across society while suppressing any adverse effects. This paper aims to initiate the discussion and to highlight the key goals and milestones of ROBUST-6G, which are important for investigation towards a trustworthy and secure vision for future 6G networks.
Authored by Bartlomiej Siniarski, Chamara Sandeepa, Shen Wang, Madhusanka Liyanage, Cem Ayyildiz, Veli Yildirim, Hakan Alakoca, Fatma Kesik, Betül Paltun, Giovanni Perin, Michele Rossi, Stefano Tomasin, Arsenia Chorti, Pietro Giardina, Alberto Pérez, José Valero, Tommy Svensson, Nikolaos Pappas, Marios Kountouris
The fixed security solutions and related security configurations may no longer meet the diverse requirements of 6G networks. The Open Radio Access Network (O-RAN) architecture is going to be one key entry point to 6G where direct user access is granted. O-RAN promotes the design, deployment, and operation of the RAN with open interfaces, optimized by intelligent controllers. O-RAN networks are to be implemented as multi-vendor systems with interoperable components and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. However, since O-RAN contains many new open interfaces and data flows, new security issues may emerge. Providing recommendations for dynamic security policy adjustments that consider the energy availability and the risk or security level of the network is something lacking in the current state of the art. When the security process is managed and executed in an autonomous way, it must also assure the transparency of the security policy adjustments and provide the reasoning behind the adjustment decisions to the interested parties whenever needed. Moreover, the energy consumption of such security solutions constantly brings overhead to the networking devices. Therefore, in this paper we discuss an XAI-based green security architecture for resilient open radio access networks in 6G, known as XcARet, for providing cognitive and transparent security solutions for O-RAN in a more energy-efficient manner.
Authored by Pawani Porambage, Jarno Pinola, Yasintha Rumesh, Chen Tao, Jyrki Huusko
The pervasive proliferation of digital technologies and interconnected systems has heightened the necessity for comprehensive cybersecurity measures in computer science. While deep learning (DL) has become a powerful tool for bolstering security, its effectiveness is being tested by malicious hacking. Cybersecurity has become an issue of essential importance in the modern digital world. By making it possible to identify and respond to threats in real time, deep learning is a critical component of improved security. Adversarial attacks, interpretability of models, and a lack of labeled data are all obstacles that need to be studied further in order to support DL-based security solutions. The protection and reliability of DL in cyberspace depend on being able to overcome these obstacles. The present study presents a novel method for strengthening DL-based cybersecurity, called dynamic adversarial resilience for deep learning-based cybersecurity (DARDL-C). DARDL-C offers a dynamic and adaptable framework to counter adversarial attacks by combining adaptive neural network architectures with ensemble learning, real-time threat tracking, threat intelligence integration, explainable AI (XAI) for model interpretability, and reinforcement learning for adaptive defense techniques. The purpose of this approach is to make DL models more secure and resistant to the constantly shifting nature of online threats. Simulation evaluation is important for determining DARDL-C's effectiveness in practical settings without compromising real-world safety. Professionals and researchers can evaluate the efficacy and versatility of DARDL-C by simulating realistic threats in controlled contexts. This gives valuable insights into the system's strengths and areas for improvement.
Authored by D. Poornima, A. Sheela, Shamreen Ahamed, P. Kathambari
At present, technological solutions based on artificial intelligence (AI) are being accelerated in various sectors of the economy and social relations in the world. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identification of vulnerabilities, threats and risks of AI technologies requires consideration of each technology separately or in some aggregate in cases of their joint use in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors and public services (including Marketplaces) have received the most attention. Due to the high importance and impact on most AI solutions, this paper will focus on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The vision and key elements of the 6th generation (6G) ecosystem are being discussed very actively in academic and industrial circles. In this work, we provide a timely update to the 6G security vision presented in our previous publications to contribute to these efforts. We elaborate further on some key security challenges for the envisioned 6G wireless systems, explore recently emerging aspects, and identify potential solutions from an additive perspective. This speculative treatment aims explicitly to complement our previous work through the lens of developments of the last two years in 6G research and development.
Authored by Gürkan Gur, Pawani Porambage, Diana Osorio, Attila Yavuz, Madhusanka Liyanage
The rising use of Artificial Intelligence (AI) in human detection on Edge camera systems has led to accurate but complex models that are challenging to interpret and debug. Our research presents a diagnostic method using XAI for model debugging, with expert-driven problem identification and solution creation. Validated on the Bytetrack model in a real-world office Edge network, our approach identified the training dataset as the main bias source and suggested model augmentation as a solution. Our approach helps identify model biases, which is essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
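Purely as a hedged illustration of how a training-data bias might be surfaced during such model debugging (this is not the paper's Bytetrack workflow), the sketch below slices per-frame detection results by a hypothetical capture-condition tag and compares miss rates across slices.

```python
# Hedged sketch: checking a detection model's error rate across data slices to
# surface a dataset-level bias (illustrative; not the paper's actual method).
import pandas as pd

# Assume per-frame evaluation results were already collected, with a metadata tag
# describing the capture condition (a hypothetical column for illustration).
results = pd.DataFrame({
    "condition": ["bright"] * 400 + ["dim"] * 100,
    "detected":  [1] * 384 + [0] * 16 + [1] * 61 + [0] * 39,
})

by_slice = results.groupby("condition")["detected"].agg(["mean", "count"])
by_slice["miss_rate"] = 1.0 - by_slice["mean"]
print(by_slice)

# A much higher miss rate on the under-represented slice (here "dim") points to the
# training data, rather than the architecture, as the likely bias source.
```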
Procurement is a critical step in the setup of systems, as reverting decisions made at this point is typically time-consuming and costly. Artificial Intelligence (AI)-based systems especially face many challenges, starting with unclear and unknown side parameters at design time of the systems, changing ecosystems and regulations, as well as the problem of vendors overselling system capabilities. Furthermore, the AI Act puts forth a great number of additional requirements for operators of critical AI systems, like risk management and transparency measures, thus making procurement even more complex. In addition, the number of providers of AI systems is drastically increasing. In this paper we provide guidelines for the procurement of AI-based systems that support the decision maker in identifying the key elements for the procurement of secure AI systems, depending on the respective technical and regulatory environment. Furthermore, we provide additional resources for utilizing these guidelines in practical procurement.
Authored by Peter Kieseberg, Christina Buttinger, Laura Kaltenbrunner, Marlies Temper, Simon Tjoa
In recent years, with the accelerated development of social informatization, the digital economy has gradually become the core force of economic growth in various countries. As the carrier of the digital economy, Internet Data Centers (IDCs) are also increasing in number day by day, and their construction volume and scale are expanding. Energy consumption and carbon emissions are growing rapidly, as IDCs require large amounts of electricity to run servers, storage, backup, cooling systems, and other infrastructure. IDCs are facing serious challenges of energy saving and greenhouse gas emission reduction. How to achieve green, low-carbon, and high-quality development is of particular concern. This paper summarizes and classifies the current green energy-saving technologies in IDCs, introduces AI-based energy-saving solutions for IDC cooling systems in detail, compares and analyzes the energy-saving effects of AI energy-saving technologies and traditional energy-saving technologies, and points out the advantages of AI energy-saving solutions applied in green IDCs.
Authored by Hongdan Ren, Xinlan Xu, Yu Zeng
Integrated photonics based on the silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates the adoption of interfaces between the photonic and the electronic subsystems, which are required for buffering data and for optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed using hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks profiting from such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on the system performance (i.e., power and phase distributions, which impact accuracy) and possible solutions to counter such attacks.
Authored by Felipe De Magalhaes, Mahdi Nikdast, Gabriela Nicolescu
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for the purpose of multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to efficiently prioritize vulnerabilities. The advent of a new age of real-time threat response is heralded by the incorporation of AI-driven security tactics. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. In the future, scientists will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models that have been validated in the real world. This study provides a detailed road map for businesses to follow as they navigate the complex cybersecurity landscape of multi-cloud settings, therefore promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
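The abstract above does not spell out its "mathematical model for optimal resource allocation"; purely as a hedged illustration of the general idea, the following sketch allocates a fixed security budget across clouds so that risk-weighted coverage is maximized, using a simple linear program. The risk scores, unit costs, caps, and budget are invented for the example and are not the paper's model.

```python
# Hedged sketch: allocate a security budget across clouds to maximize risk-weighted
# coverage. All numbers are invented; this is not the paper's actual model.
import numpy as np
from scipy.optimize import linprog

risk = np.array([0.9, 0.6, 0.3])        # assessed risk score per cloud (hypothetical)
cost = np.array([4.0, 3.0, 2.0])        # cost per unit of security coverage (hypothetical)
budget = 20.0                           # total budget available
max_units = np.array([4.0, 4.0, 4.0])   # cap on coverage units per cloud

# linprog minimizes, so negate the risk-weighted coverage to maximize it.
res = linprog(
    c=-risk,
    A_ub=[cost],                        # total spend must stay within budget
    b_ub=[budget],
    bounds=list(zip([0.0] * 3, max_units)),
    method="highs",
)

print("coverage units per cloud:", np.round(res.x, 2))
print("risk-weighted coverage:", round(-res.fun, 2))
```

The same structure extends to more clouds or additional constraints (per-team capacity, compliance minimums) by adding rows to the constraint matrix.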
Generative Artificial Intelligence (AI) has increasingly been used to enhance threat intelligence and cyber security measures for organizations. Generative AI is a form of AI that creates new data without relying on existing data or expert knowledge. This technology provides decision support systems with the ability to automatically and quickly identify threats posed by hackers or malicious actors by taking into account various sources and data points. In addition, generative AI can help identify vulnerabilities within an organization's infrastructure, further reducing the potential for a successful attack. This technology is especially well-suited for security operations centers (SOCs), which require rapid identification of threats and defense measures. By incorporating interesting and valuable data points that previously would have been missed, generative AI can provide organizations with an additional layer of defense against increasingly sophisticated attacks.
Authored by Venkata Saddi, Santhosh Gopal, Abdul Mohammed, S. Dhanasekaran, Mahaveer Naruka
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large Language Models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can be used in business to benefit, for example, customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including texts, images, and a wide range of digital materials based on the training data, and is expected to be used in business. However, the widespread use of AI also raises ethical concerns. The potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. Therefore, while AI can improve our lives, it also has the potential to exacerbate social inequalities and injustices. This paper aims to explore the unintended outputs of AI and assess their impact on society. Developers and users can take appropriate precautions by identifying the potential for unintended outputs. Such experiments are essential to efforts to minimize the potential negative social impacts of AI and to promote transparency, accountability, and responsible use. We will also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective, as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2]. The use of AI in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies, as this is one of the needs of both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar