The pervasive proliferation of digital technologies and interconnected systems has heightened the necessity for comprehensive cybersecurity measures in computer technological know-how. While deep gaining knowledge of (DL) has turn out to be a effective tool for bolstering security, its effectiveness is being examined via malicious hacking. Cybersecurity has end up an trouble of essential importance inside the cutting-edge virtual world. By making it feasible to become aware of and respond to threats in actual time, Deep Learning is a important issue of progressed security. Adversarial assaults, interpretability of models, and a lack of categorized statistics are all obstacles that want to be studied further with the intention to support DL-based totally security solutions. The protection and reliability of DL in our on-line world relies upon on being able to triumph over those boundaries. The present studies presents a unique method for strengthening DL-based totally cybersecurity, known as name dynamic adverse resilience for deep learning-based totally cybersecurity (DARDL-C). DARDL-C gives a dynamic and adaptable framework to counter antagonistic assaults by using combining adaptive neural community architectures with ensemble learning, real-time threat tracking, risk intelligence integration, explainable AI (XAI) for version interpretability, and reinforcement getting to know for adaptive defense techniques. The cause of this generation is to make DL fashions more secure and proof against the constantly transferring nature of online threats. The importance of simulation evaluation in determining DARDL-C s effectiveness in practical settings with out compromising genuine safety is important. Professionals and researchers can compare the efficacy and versatility of DARDL-C with the aid of simulating realistic threats in managed contexts. This gives precious insights into the machine s strengths and regions for improvement.
Authored by D. Poornima, A. Sheela, Shamreen Ahamed, P. Kathambari
This paper presents a reputation-based threat mitigation framework that defends potential security threats in electroencephalogram (EEG) signal classification during model aggregation of Federated Learning. While EEG signal analysis has attracted attention because of the emergence of brain-computer interface (BCI) technology, it is difficult to create efficient learning models for EEG analysis because of the distributed nature of EEG data and related privacy and security concerns. To address these challenges, the proposed defending framework leverages the Federated Learning paradigm to preserve privacy by collaborative model training with localized data from dispersed sources and introduces a reputation-based mechanism to mitigate the influence of data poisoning attacks and identify compromised participants. To assess the efficiency of the proposed reputation-based federated learning defense framework, data poisoning attacks based on the risk level of training data derived by Explainable Artificial Intelligence (XAI) techniques are conducted on both publicly available EEG signal datasets and the self-established EEG signal dataset. Experimental results on the poisoned datasets show that the proposed defense methodology performs well in EEG signal classification while reducing the risks associated with security threats.
Authored by Zhibo Zhang, Pengfei Li, Ahmed Hammadi, Fusen Guo, Ernesto Damiani, Chan Yeun
Internet of Things (IoT) and Artificial Intelligence (AI) systems have become prevalent across various industries, steering to diverse and far-reaching outcomes, and their convergence has garnered significant attention in the tech world. Studies and reviews are instrumental in supplying industries with the nuanced understanding of the multifaceted developments of this joint domain. This paper undertakes a critical examination of existing perspectives and governance policies, adopting a contextual approach, and addressing not only the potential but also the limitations of these governance policies. In the complex landscape of AI-infused IoT systems, transparency and interpretability are pivotal qualities for informed decision-making and effective governance. In AI governance, transparency allows for scrutiny and accountability, while interpretability facilitates trust and confidence in AI-driven decisions. Therefore, we also evaluate and advocate for the use of two very popular eXplainable AI (XAI) techniques-SHAP and LIME-in explaining the predictive results of AI models. Subsequently, this paper underscores the imperative of not only maximizing the advantages and services derived from the incorporation of IoT and AI but also diligently minimizing possible risks and challenges.
Authored by Nadine Fares, Denis Nedeljkovic, Manar Jammal
The stock market is a topic that is of interest to all sorts of people. It is a place where the prices change very drastically. So, something needs to be done to help the people risking their money on the stock market. The public s opinions are crucial for the stock market. Sentiment is a very powerful force that is constantly changing and having a significant impact. It is reflected on social media platforms, where almost the entire country is active, as well as in the daily news. Many projects have been done in the stock prediction genre, but since sentiments play a big part in the stock market, making predictions of prices without them would lead to inefficient predictions, and hence Sentiment analysis is very important for stock market price prediction. To predict stock market prices, we will combine sentiment analysis from various sources, including News and Twitter. Results are evaluated for two different cryptocurrencies: Ethereum and Solana. Random Forest achieved the best RMSE of 13.434 and MAE of 11.919 for Ethereum. Support Vector Machine achieved the best RMSE of 2.48 and MAE of 1.78 for Solana.
Authored by Arayan Gupta, Durgesh Vyas, Pranav Nale, Harsh Jain, Sashikala Mishra, Ranjeet Bidwe, Bhushan Zope, Amar Buchade
Recently, the increased use of artificial intelligence in healthcare has significantly changed the developments in the field of medicine. Medical centres have adopted AI applications and used it in many applications to predict disease diagnosis and reduce health risks in a predetermined way. In addition to Artificial Intelligence (AI) techniques for processing data and understanding the results of this data, Explainable Artificial Intelligence (XAI) techniques have also gained an important place in the healthcare sector. In this study, reliable and explainable artificial intelligence studies in the field of healthcare were investigated and the blockchain framework, one of the latest technologies in the field of reliability, was examined. Many researchers have used blockchain technology in the healthcare industry to exchange information between laboratories, hospitals, pharmacies, and doctors and to protect patient data. In our study, firstly, the studies whose keywords were XAI and Trustworthy Artificial Intelligence were examined, and then, among these studies, priority was given to current articles using Blockchain technology. Combining the existing methods and results of previous studies and organizing these studies, our study presented a general framework obtained from the reviewed articles. Obtaining this framework from current studies will be beneficial for future studies of both academics and scientists.
Authored by Kübra Arslanoğlu, Mehmet Karaköse
In this work, a novel framework for detecting mali-cious networks in the IoT-enabled Metaverse networks to ensure that malicious network traffic is identified and integrated to suit optimal Metaverse cybersecurity is presented. First, the study raises a core security issue related to the cyberthreats in Metaverse networks and its privacy breaching risks. Second, to address the shortcomings of efficient and effective network intrusion detection (NIDS) of dark web traffic, this study employs a quantization-aware trained (QAT) 1D CNN followed by fully con-nected networks (ID CNNs-GRU-FCN) model, which addresses the issues of and memory contingencies in Metaverse NIDS models. The QAT model is made interpretable using eXplainable artificial intelligence (XAI) methods namely, SHapley additive exPlanations (SHAP) and local interpretable model-agnostic ex-planations (LIME), to provide trustworthy model transparency and interpretability. Overall, the proposed method contributes to storage benefits four times higher than the original model without quantization while attaining a high accuracy of 99.82 \%.
Authored by Ebuka Nkoro, Cosmas Nwakanma, Jae-Min Lee, Dong-Seong Kim
IoT and AI created a Transportation Management System, resulting in the Internet of Vehicles. Intelligent vehicles are combined with contemporary communication technologies (5G) to achieve automated driving and adequate mobility. IoV faces security issues in the next five (5) areas: data safety, V2X communication safety, platform safety, Intermediate Commercial Vehicles (ICV) safety, and intelligent device safety. Numerous types of AI models have been created to reduce the outcome infiltration risks on ICVs. The need to integrate confidence, transparency, and repeatability into the creation of Artificial Intelligence (AI) for the safety of ICV and to deliver harmless transport systems, on the other hand, has led to an increase in explainable AI (XAI). Therefore, the space of this analysis protected the XAI models employed in ICV intrusion detection systems (IDSs), their taxonomies, and available research concerns. The study s findings demonstrate that, despite its relatively recent submission to ICV, XAI is a potential explore area for those looking to increase the net effect of ICVs. The paper also demonstrates that XAI s greater transparency will help it gain acceptance in the vehicle industry.
Authored by Ravula Vishnukumar, Adla Padma, Mangayarkarasi Ramaiah
Peer-to-peer (P2P) lenders face regulatory, compliance, application, and data security risks. A complete methodology that includes more than statistical and economic methods is needed to conduct credit assessments effectively. This study uses systematic literature network analysis and artificial intelligence to comprehend risk management in P2P lending financial technology. This study suggests that explainable AI (XAI) is better at identifying, analyzing, and evaluating financial industry risks, including financial technology. This is done through human agency, monitoring, transparency, and accountability. The LIME Framework and SHAP Value are widely used machine learning frameworks for data integration to speed up and improve credit score analysis using bank-like criteria. Thus, machine learning is expected to be used to develop a precise and rational individual credit evaluation system in peer-to-peer lending to improve credit risk supervision and forecasting while reducing default risk.
Authored by Ika Arifah, Ina Nihaya
The fixed security solutions and related security configurations may no longer meet the diverse requirements of 6G networks. Open Radio Access Network (O-RAN) architecture is going to be one key entry point to 6G where the direct user access is granted. O-RAN promotes the design, deployment and operation of the RAN with open interfaces and optimized by intelligent controllers. O-RAN networks are to be implemented as multi-vendor systems with interoperable components and can be programmatically optimized through centralized abstraction layer and data driven closed-loop control. However, since O-RAN contains many new open interfaces and data flows, new security issues may emerge. Providing the recommendations for dynamic security policy adjustments by considering the energy availability and risk or security level of the network is something lacking in the current state-of-the-art. When the security process is managed and executed in an autonomous way, it must also assure the transparency of the security policy adjustments and provide the reasoning behind the adjustment decisions to the interested parties whenever needed. Moreover, the energy consumption for such security solutions are constantly bringing overhead to the networking devices. Therefore, in this paper we discuss XAI based green security architecture for resilient open radio access networks in 6G known as XcARet for providing cognitive and transparent security solutions for O-RAN in a more energy efficient manner.
Authored by Pawani Porambage, Jarno Pinola, Yasintha Rumesh, Chen Tao, Jyrki Huusko
Security applications use machine learning (ML) models and artificial intelligence (AI) to autonomously protect systems. However, security decisions are more impactful if they are coupled with their rationale. The explanation behind an ML model s result provides the rationale necessary for a security decision. Explainable AI (XAI) techniques provide insights into the state of a model s attributes and their contribution to the model s results to gain the end user s confidence. It requires human intervention to investigate and interpret the explanation. The interpretation must align system s security profile(s). A security profile is an abstraction of the system s security requirements and related functionalities to comply with them. Relying on human intervention for interpretation is infeasible for an autonomous system (AS) since it must self-adapt its functionalities in response to uncertainty at runtime. Thus, an AS requires an automated approach to extract security profile information from ML model XAI outcomes. The challenge is unifying the XAI outcomes with the security profile to represent the interpretation in a structured form. This paper presents a component to facilitate AS information extraction from ML model XAI outcomes related to predictions and generating an interpretation considering the security profile.
Authored by Sharmin Jahan, Sarra Alqahtani, Rose Gamble, Masrufa Bayesh
At present, technological solutions based on artificial intelligence (AI) are being accelerated in various sectors of the economy and social relations in the world. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identification of vulnerabilities, threats and risks of AI technologies requires consideration of each technology separately or in some aggregate in cases of their joint use in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors and public services (including Marketplaces) have received the most attention. Due to the high importance and impact on most AI solutions, this paper will focus on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2].The use of (AI) in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies as this is one of the needs for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
The boundaries between the real world and the virtual world are going to be blurred by Metaverse. It is transforming every aspect of humans to seamlessly transition from one virtual world to another. It is connecting the real world with the digital world by integrating emerging tech like 5G, 3d reconstruction, IoT, Artificial intelligence, digital twin, augmented reality (AR), and virtual reality (VR). Metaverse platforms inherit many security \& privacy issues from underlying technologies, and this might impede their wider adoption. Emerging tech is easy to target for cybercriminals as security posture is in its infancy. This work elaborates on current and potential security, and privacy risks in the metaverse and put forth proposals and recommendations to build a trusted ecosystem in a holistic manner.
Authored by Sailaja Vadlamudi
As the world becomes increasingly interconnected, new AI technologies bring new opportunities while also giving rise to new network security risks, while network security is critical to national and social security, enterprise as well as personal information security. This article mainly introduces the empowerment of AI in network security, the development trend of its application in the field of network security, the challenges faced, and suggestions, providing beneficial exploration for effectively applying artificial intelligence technology to computer network security protection.
Authored by Jia Li, Sijia Zhang, Jinting Wang, Han Xiao
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks to AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs and the associated risks have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives of AI and security remains unchanged in the era of generative AI, (b) The generalization of AI targets and automatic program generation with the birth of generative AI will greatly increase the risk of attacks by the AI itself, (c) The birth of generative AI will make it possible to generate easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people and an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov s three principles of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2].The use of (AI) in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies as this is one of the needs for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
AI systems face potential hardware security threats. Existing AI systems generally use the heterogeneous architecture of CPU + Intelligent Accelerator, with PCIe bus for communication between them. Security mechanisms are implemented on CPUs based on the hardware security isolation architecture. But the conventional hardware security isolation architecture does not include the intelligent accelerator on the PCIe bus. Therefore, from the perspective of hardware security, data offloaded to the intelligent accelerator face great security risks. In order to effectively integrate intelligent accelerator into the CPU’s security mechanism, a novel hardware security isolation architecture is presented in this paper. The PCIe protocol is extended to be security-aware by adding security information packaging and unpacking logic in the PCIe controller. The hardware resources on the intelligent accelerator are isolated in fine-grained. The resources classified into the secure world can only be controlled and used by the software of CPU’s trusted execution environment. Based on the above hardware security isolation architecture, a security isolation spiking convolutional neural network accelerator is designed and implemented in this paper. The experimental results demonstrate that the proposed security isolation architecture has no overhead on the bandwidth and latency of the PCIe controller. The architecture does not affect the performance of the entire hardware computing process from CPU data offloading, intelligent accelerator computing, to data returning to CPU. With low hardware overhead, this security isolation architecture achieves effective isolation and protection of input data, model, and output data. And this architecture can effectively integrate hardware resources of intelligent accelerator into CPU’s security isolation mechanism.
Authored by Rui Gong, Lei Wang, Wei Shi, Wei Liu, JianFeng Zhang
At present, technological solutions based on artificial intelligence (AI) are being accelerated in various sectors of the economy and social relations in the world. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identification of vulnerabilities, threats and risks of AI technologies requires consideration of each technology separately or in some aggregate in cases of their joint use in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors and public services (including Marketplaces) have received the most attention. Due to the high importance and impact on most AI solutions, this paper will focus on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks to AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs and the associated risks have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives of AI and security remains unchanged in the era of generative AI, (b) The generalization of AI targets and automatic program generation with the birth of generative AI will greatly increase the risk of attacks by the AI itself, (c) The birth of generative AI will make it possible to generate easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people and an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov s three principles of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines to implement solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards continuously get updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2].The use of (AI) in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies as this is one of the needs for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Artificial intelligence (AI) has been successfully used in cyber security for enhancing comprehending, investigating, and evaluating cyber threats. It can effectively anticipate cyber risks in a more efficient way. AI also helps in putting in place strategies to safeguard assets and data. Due to their complexity and constant development, it has been difficult to comprehend cybersecurity controls and adopt the corresponding cyber training and security policies and plans.Given that both cyber academics and cyber practitioners need to have a deep comprehension of cybersecurity rules, artificial intelligence (AI) in cybersecurity can be a crucial tool in both education and awareness. By offering an in-depth demonstration of how AI may help in cybersecurity education and awareness and in creating policies fast and to the needed level, this study focuses on the efficiency of AI-driven mechanisms in strengthening the entire cyber security education life cycle.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
The emergence of large language models (LLMs) has brought forth remarkable capabilities in various domains, yet it also poses inherent risks to trustfulness, encompassing concerns such as toxicity, stereotype bias, adversarial robustness, ethics, privacy, and fairness. Particularly in sensitive applications like customer support chatbots, AI assistants, and digital information automation, which handle privacy-sensitive data, the adoption of generative pre-trained transformer (GPT) models is pervasive. However, ensuring robust security measures to mitigate potential security vulnerabilities is imperative. This paper advocates for a proactive approach termed "security shift-left," which emphasizes integrating security measures early in the development lifecycle to bolster the security posture of LLM-based applications. Our proposed method leverages basic machine learning (ML) techniques and retrieval-augmented generation (RAG) to effectively address security concerns. We present empirical evidence validating the efficacy of our approach with one LLM-based security application designed for the detection of malicious intent, utilizing both open-source datasets and synthesized datasets. By adopting this security shift-left methodology, developers can confidently develop LLM-based applications with robust security protection, safeguarding against potential threats and vulnerabilities.
Authored by Qianlong Lan, Anuj Kaul, Nishant Pattanaik, Piyush Pattanayak, Vinothini Pandurangan
This study investigates the performance and security indicators of mainstream large language models in Chinese generation tasks. It explores potential security risks associated with these models and offers suggestions for improvement. The study utilizes publicly available datasets to assess Chinese language generation tasks, develops datasets and multidimensional security rating standards for security task evaluations, compares the performance of three models across 5 Chinese tasks and 6 security tasks, and conducts Pearson correlation analysis using GPT-4 and questionnaire surveys. Furthermore, the study implements automatic scoring based on GPT-3.5-Turbe. The experimental findings indicate that the models excel in Chinese language generation tasks. ERNIE Bot outperforms in the evaluation of ideology and ethics, ChatGPT excels in rumor and falsehood and privacy security assessments, and Claude performs well in assessing factual fallacy and social prejudice. The fine-tuned model demonstrates high accuracy in security tasks, yet all models exhibit security vulnerabilities. Integration into the prompt project proves to be effective in mitigating security risks. It is recommended that both domestic and foreign models adhere to the legal frameworks of each country, reduce AI hallucinations, continuously expand corpora, and update iterations accordingly.
Authored by Yu Zhang, Yongbing Gao, Weihao Li, Zirong Su, Lidong Yang
LLMs face content security risks such as prompt information injection, insecure output processing, sensitive information leakage, and over-dependence, etc. By constructing a firewall for LLMs with intelligent detection strategies and introducing multi-engine detection capabilities such as rule matching, semantic computing, and AI models, we can intelligently detect and dispose of inputs and outputs of the LLMs, and realize the full-time on-line security protection of LLM applications. The system is tested on open-source LLMs, and there is a significant improvement in terms of the detection rate of insecure content.
Authored by Tianrui Huang, Lina You, Nishui Cai, Ting Huang
With the rapid growth in information technology and being called the Digital Era, it is very evident that no one can survive without internet or ICT advancements. The day-to-day life operations and activities are dependent on these technologies. The latest technology trends in the market and industry are computing power, Smart devices, artificial intelligence, Robotic process automation, metaverse, IOT (Internet of things), cloud computing, Edge computing, Block chain and much more in the coming years. When looking at all these aspect and advancements, one common thing is cloud computing and data which must be protected and safeguarded which brings in the need for cyber/cloud security. Hence cloud security challenges have become an omnipresent concern for organizations or industries of any size where it has gone from a small incident to threat landscape. When it comes to data and cyber/ cloud security there are lots of challenges seen to safeguard these data. Towards that it is necessary that everyone must be aware of the latest technological advancements, evolving cyber threats, data as a valuable asset, Human Factor, Regulatory compliance, Cyber resilience. To handle all these challenges, security and risk prediction framework is proposed in this paper. This framework PRCSAM (Predictive Risk and Complexity Score Assessment Model) will consider factors like impact and likelihood of the main risks, threats and attacks that is foreseen in cloud security and the recommendation of the Risk management framework with automatic risk assessment and scoring option catering to Information security and privacy risks. This framework will help management and organizations in making informed decisions on the cyber security strategy as this is a data driven, dynamic \& proactive approach to cyber security and its complexity calculation. This paper also discusses on the prediction techniques using Generative AI techniques.
Authored by Kavitha Ayappan, J.M Mathana, J Thangakumar