With the proliferation of data in Internet-related applications, cyber security incidents have increased manyfold. Energy management, one of the smart city layers, has also been experiencing cyberattacks. Furthermore, Distributed Energy Resources (DER), which depend on different controllers to supply energy to the main physical smart grid of a smart city, are prone to cyberattacks. The increased cyber-attacks on DER systems stem mainly from their dependency on digital communication and controls, as the number of devices owned and controlled by consumers and third parties keeps growing. This paper analyzes the major cyber security and privacy challenges that might damage or compromise the DER and related controllers in smart cities. These challenges highlight that the security and privacy of the Internet of Things (IoT), big data, artificial intelligence, and the smart grid, which are the building blocks of a smart city, must be addressed in the DER sector. It is observed that the security and privacy challenges in smart cities can be addressed through a distributed framework, by identifying and classifying stakeholders, by using appropriate models, and by incorporating fault-tolerance techniques.
Authored by Tarik Himdi, Mohammed Ishaque, Muhammed Ikram
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and to exhibit low uncertainty when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that not only have higher quality when evaluated via statistical metrics (e.g., correlation with residuals) but also when employed to build binary classifiers that decide whether the model's output can be trusted or not, and under distributional data shifts.
Authored by Pedro Mendes, Paolo Romano, David Garlan
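The selective dual-loss idea behind EUAT can be illustrated with a minimal sketch (assumptions: the per-sample uncertainty score in [0, 1] and the `euat_loss` helper are illustrative inventions for this digest, not the authors' code):

```python
import numpy as np

def euat_loss(uncertainty, correct):
    """Toy EUAT-style objective: drive uncertainty down on correctly
    predicted samples and up on mispredicted ones.

    uncertainty : per-sample uncertainty scores in [0, 1]
    correct     : boolean mask, True where the prediction was right
    """
    u = np.asarray(uncertainty, dtype=float)
    c = np.asarray(correct, dtype=bool)
    # minimize uncertainty on correct samples ...
    loss_correct = u[c].mean() if c.any() else 0.0
    # ... and maximize it (i.e., minimize 1 - u) on mispredicted samples
    loss_wrong = (1.0 - u[~c]).mean() if (~c).any() else 0.0
    return loss_correct + loss_wrong

# A well-calibrated batch (low u when right, high u when wrong)
# yields a lower loss than an overconfident one.
good = euat_loss([0.1, 0.1, 0.9], [True, True, False])
bad = euat_loss([0.1, 0.1, 0.1], [True, True, False])
```

Switching the loss term per example, rather than averaging one loss over the whole batch, is what lets the objective reward high uncertainty only where the model is wrong.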
Problems such as the increase in the number of private vehicles with the population, the rise in environmental pollution, the emergence of unmet infrastructure and resource needs, and the decrease in time efficiency in cities have put local governments, cities, and countries in search of solutions. These problems are being addressed within the concepts of smart cities and intelligent transportation, using information and communication technologies in line with the needs. While designing intelligent transportation systems (ITS), beyond traditional methods, big data should be handled in a state-of-the-art and appropriate way with the help of methods such as artificial intelligence, machine learning, and deep learning. In this study, a data-driven decision support system model was established to help businesses make strategic decisions with the help of intelligent transportation data and to contribute to the elimination of public transportation problems in the city. Our model has been established using big data and business intelligence technologies: a decision support system comprising a data sources layer, a data ingestion/collection layer, a data storage and processing layer, a data analytics layer, an application/presentation layer, a developer layer, and a data management/data security layer. In our study, the decision support system was modeled using ITS data supported by big data technologies, where the traditional structure could not provide a solution. This paper aims to create a basis for future studies looking for solutions to the problems of integration, storage, processing, and analysis of big data and to fill the gap in the literature within the framework of the model.
We both address this gap in the literature and present a model study prior to applying existing data sets to the business intelligence architecture, ahead of the implementation to be carried out by the authors.
Authored by Kutlu Sengul, Cigdem Tarhan, Vahap Tecim
In response to the advent of the software defined world, this Fast Abstract introduces a new notion, information gravitation, in an attempt to unify and expand two related ones: information mass (related to the supposed fifth force) and data gravitation. This is motivated by the following question: is there a new kind of (gravitational) force between any two distinct pieces of information conveying messages? A possibly affirmative answer to this question of information gravitation, which is supposed to explore the theoretically and/or experimentally justified interplay between information and gravitation, might make significant sense for the software defined world being augmented with artificial intelligence and virtual reality in the age of information. Information induces gravitation. Information gravitation should be related to Newton's law of universal gravitation and Einstein's general theory of relativity, and even to gravitational waves and the unified theory of everything.
Authored by Kai-Yuan Cai
In today's society, with the continuous development of artificial intelligence, artificial intelligence technology plays an increasingly important role in social and economic development and has become one of the fastest growing, most widely used, and most influential high technologies in the world today. At the same time, however, information technology has also brought network security threats to the entire network world, so information systems face huge and severe challenges, which will affect the stability and development of society to a certain extent. Therefore, comprehensive analysis and research on information system security is a very necessary and urgent task. Through the security assessment of an information system, we can discover the key hidden dangers and loopholes that lurk in the information source or potentially threaten user data and confidential files, so as to effectively prevent these risks from occurring and provide effective solutions, and, to a certain extent, prevent virus invasion, malicious program attacks, and network hackers' intrusive behavior. This article adopts an experimental analysis method to explore how to apply the most practical, advanced, and efficient artificial intelligence theory to information system security assessment management, so as to further realize the optimal design of an information system security assessment management system, which is of great significance and practical value for protecting our country's information security. According to the research results, the functions of the experimental test system are complete and available, and its security is good, so it can meet the requirements of multi-user operation for security evaluation of the information system.
Authored by Song He, Xiaohong Shi, Yan Huang, Gong Chen, Huihui Tang
Sustainability within the built environment is increasingly important to our global community, as it minimizes environmental impact whilst providing economic and social benefits. Governments recognize the importance of sustainability by providing economic incentives, and tenants, particularly large enterprises, seek facilities that align with their corporate social responsibility objectives. Claiming sustainability outcomes clearly has benefits for facility owners and facility occupants that have sustainability as part of their business objectives, but there are also incentives to overstate the value delivered or to measure only the parts of the facility lifecycle that provide attractive results. Whilst there is a plethora of research on Building Information Management (BIM) systems within the construction industry, there has been limited research on BIM in the facilities management and sustainability fields. The significant contribution of this research is the integration of blockchain for the purposes of transaction assurance, with the development of a working model spanning BIM and blockchain underpinning phase one of this research. From an industry perspective, the contribution of this paper is to articulate a path to integrating a wide range of mature and emerging technologies into solutions that deliver trusted results for government, facility owners, tenants, and other directly impacted stakeholders to assess the sustainability impact.
Authored by Luke Desmomd, Mohamed Salama
This study explores how AI-driven personal finance advisors can significantly improve individual financial well-being. It addresses the complexity of modern finance, emphasizing the integration of AI for informed decision-making. The research covers challenges like budgeting, investment planning, debt management, and retirement preparation. It highlights AI s capabilities in data-driven analysis, predictive modeling, and personalized recommendations, particularly in risk assessment, portfolio optimization, and real-time market monitoring. The paper also addresses ethical and privacy concerns, proposing a transparent deployment framework. User acceptance and trust-building are crucial for widespread adoption. A case study demonstrates enhanced financial literacy, returns, and overall well-being with AI-powered advisors, underscoring their potential to revolutionize financial wellness. The study emphasizes responsible implementation and trust-building for ethical and effective AI deployment in personal finance.
Authored by Parth Pangavhane, Shivam Kolse, Parimal Avhad, Tushar Gadekar, N. Darwante, S. Chaudhari
Human-Centered Artificial Intelligence (AI) focuses on AI systems that prioritize user empowerment and ethical considerations. We explore the importance of user-centric design principles and ethical guidelines in creating AI technologies that enhance user experiences and align with human values. The work emphasizes user empowerment through personalized experiences and explainable AI, fostering trust and user agency. Ethical considerations, including fairness, transparency, accountability, and privacy protection, are addressed to ensure AI systems respect human rights and avoid biases. Effective human-AI collaboration is emphasized, promoting shared decision-making and user control. Through interdisciplinary collaboration, this research contributes to advancing human-centered AI, providing practical recommendations for designing AI systems that enhance user experiences, promote user empowerment, and adhere to ethical standards. It emphasizes the harmonious coexistence between humans and AI, enhancing well-being and autonomy and creating a future where AI technologies benefit humanity. Overall, this research highlights the significance of human-centered AI in creating a positive impact. By centering on users' needs and values, AI systems can be designed to empower individuals and enhance their experiences. Ethical considerations are crucial to ensure fairness and transparency. With effective collaboration between humans and AI, we can harness the potential of AI to create a future that aligns with human aspirations and promotes societal well-being.
Authored by Usman Usmani, Ari Happonen, Junzo Watada
Nowadays, companies, critical infrastructure and governments face cyber attacks every day, ranging from simple denial-of-service and password guessing attacks to complex nation-state attack campaigns, so-called advanced persistent threats (APTs). Defenders employ intrusion detection systems (IDSs), among other tools, to detect malicious activity and protect network assets. With the evolution of threats, detection techniques have followed, with modern systems usually relying on some form of artificial intelligence (AI) or anomaly detection as part of their defense portfolio. While these systems are able to achieve higher accuracy in detecting APT activity, they cannot provide much context about the attack, as the underlying models are often too complex to interpret. This paper presents an approach to explain single predictions (i.e., detected attacks) of any graph-based anomaly detection system. By systematically modifying the input graph of an anomaly and observing the output, we leverage a variation of permutation importance to identify parts of the graph that are likely responsible for the detected anomaly. Our approach treats the anomaly detection function as a black box and is thus applicable to any whole-graph explanation problem. Our results on two established datasets for APT detection (StreamSpot & DARPA TC Engagement Three) indicate that our approach can identify nodes that are likely part of the anomaly. We quantify this through our area under baseline (AuB) metric and show how the AuB is higher for anomalous graphs. Further analysis via the Wilcoxon rank-sum test confirms that these results are statistically significant with a p-value of 0.0041%.
Authored by Felix Welter, Florian Wilkens, Mathias Fischer
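The black-box, perturbation-style explanation described above can be sketched as a simple node-ablation loop (a hedged approximation: the paper uses a variation of permutation importance, while the edge-zeroing scheme and toy edge-count scorer below are assumed stand-ins):

```python
import numpy as np

def node_importance(adj, score_fn):
    """Estimate each node's contribution to a black-box anomaly score
    by zeroing its edges and measuring how much the whole-graph
    score drops (perturb input, observe output).

    adj      : (n, n) adjacency matrix of the graph
    score_fn : black-box function mapping an adjacency matrix to a score
    """
    base = score_fn(adj)
    importances = []
    for v in range(adj.shape[0]):
        ablated = adj.copy()
        ablated[v, :] = 0   # drop all edges incident to node v
        ablated[:, v] = 0
        importances.append(base - score_fn(ablated))
    return np.array(importances)

# Toy black-box scorer: graphs with more edges look more anomalous.
score = lambda a: a.sum()
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
imp = node_importance(adj, score)  # node 0 touches every edge
```

Because only inputs and outputs of `score_fn` are used, the same loop works for any whole-graph anomaly detector, regardless of its internals.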
Cybercrime continues to pose a significant threat to modern society, requiring a solid emphasis on cyber-attack prevention, detection and response by civilian and military organisations aimed at brand protection. This study applies a novel framework to identify, detect and mitigate phishing attacks, leveraging the power of computer vision technology and artificial intelligence. The primary objective is to automate the classification process, reducing the dwell time between detection and executing courses of action to respond to phishing attacks. When applied to a real-world curated dataset, the proposed classifier achieved relevant results with an F1-Score of 95.76% and an MCC value of 91.57%. These metrics highlight the classifier's effectiveness in identifying phishing domains with minimal false classifications, affirming its suitability for the intended purpose. Future enhancements include considering a fuzzy logic model that accounts for the classification probability in conjunction with the domain creation date and the uniqueness of downloaded resources when accessing the website or domain.
Authored by Carlos Pires, José Borges
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) analyze network traffic at the packet level and match known attack patterns to flag an assault. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also serve to hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks.
However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
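The idea mentioned above of modeling legitimate behavior and flagging deviations can be sketched with a simple statistical profile (the z-score detector and the traffic features below are illustrative assumptions, not a production IDS or the paper's method):

```python
import numpy as np

def fit_profile(baseline):
    """Learn a per-feature profile (mean, std) of legitimate traffic."""
    x = np.asarray(baseline, dtype=float)
    return x.mean(axis=0), x.std(axis=0) + 1e-9  # avoid divide-by-zero

def is_anomalous(sample, profile, threshold=3.0):
    """Flag a traffic sample whose maximum z-score exceeds the threshold."""
    mu, sigma = profile
    z = np.abs((np.asarray(sample, dtype=float) - mu) / sigma)
    return bool(z.max() > threshold)

# Hypothetical baseline features: [packets/s, mean packet size] of
# normal users, drawn synthetically for the sketch.
rng = np.random.default_rng(0)
baseline = rng.normal([100, 500], [10, 50], size=(1000, 2))
profile = fit_profile(baseline)

normal = [105, 480]
flood = [900, 60]   # packet flood: far outside the learnt profile
```

A neural model replaces the mean/std profile with a learned density or reconstruction error, but the decision logic (score deviation, then threshold) stays the same.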
In recent times, this research looks into the measures taken by financial institutions to secure their systems and reduce the likelihood of attacks. The study results indicate that all cultures are undergoing a digital transformation at the present time. The dawn of the Internet ushered in an era of increased sophistication in many fields. There has been a gradual but steady shift in attitude toward digital and networked computers in the business world over the past few years. Financial organizations are increasingly vulnerable to external cyberattacks due to the ease of usage and positive effects. They are also susceptible to attacks from within their own organisation. In this paper, we develop a machine learning based quantitative risk assessment model that effectively assesses and minimises this risk. Quantitative risk calculation is used since it is the best way of calculating network risk. According to the study, a network's vulnerability is proportional to the number of times its threats have been exploited and the amount of damage they have caused. A simulation is used to test the model's efficacy, and the results show that the model detects threats more effectively than the other methods.
Authored by Lavanya M, Mangayarkarasi S
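The quantitative notion above, risk growing with both exploit frequency and damage, can be sketched as a simple expected-loss calculation (the `quantitative_risk` helper and its numbers are illustrative assumptions, not the paper's model):

```python
def quantitative_risk(threats):
    """Expected-loss-style risk score: sum over threats of
    (observed exploit frequency) x (damage per incident)."""
    return sum(freq * damage for freq, damage in threats)

# Hypothetical network: phishing exploited twice at $1000/incident,
# one ransomware incident at $500 of damage.
risk = quantitative_risk([(2, 1000), (1, 500)])
```

A machine-learning model would estimate the frequency and damage terms from historical data instead of taking them as given.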
Cyber security is a critical problem that causes data breaches, identity theft, and harm to millions of people and businesses. As technology evolves, new security threats emerge as a result of a dearth of cyber security specialists equipped with up-to-date information. It is hard for security firms to prevent cyber-attacks without the cooperation of senior professionals. However, by depending on artificial intelligence to combat cyber-attacks, the strain on specialists can be lessened. Since the use of Artificial Intelligence (AI) can improve Machine Learning (ML) approaches that mine data to detect the sources of cyberattacks, or perhaps prevent them, it enables and facilitates malware detection by utilizing data from prior cyber-attacks in a variety of methods, including behavior analysis, risk assessment, bot blocking, endpoint protection, and security task automation. However, deploying AI may present new threats; therefore, cyber security experts must establish a balance between risk and benefit. While AI models can aid cybersecurity experts in making decisions and forming conclusions, they will never be able to make all cybersecurity decisions and judgments.
Authored by Safiya Alawadhi, Areej Zowayed, Hamad Abdulla, Moaiad Khder, Basel Ali
In the last decade, the rapid development of communications and IoT systems has raised many challenges regarding the security of devices that are handled wirelessly. Therefore, in this paper, we test the possibility of spoofing the connection parameters of Bluetooth Low Energy (BLE) devices, make several recommendations for increasing the security of those devices, and propose basic countermeasures against the possibility of hacking them.
Authored by Cristian Capotă, Mădălin Popescu, Simona Halunga, Octavian Fratu
The digitalization and smartization of modern digital systems include the implementation and integration of emerging innovative technologies, such as Artificial Intelligence. By incorporating new technologies, the attack surface of the system also expands, and specialized cybersecurity mechanisms and tools are required to counter the potential new threats. This paper introduces a holistic security risk assessment methodology that aims to assist Artificial Intelligence system stakeholders in guaranteeing the correct design and implementation of technical robustness in Artificial Intelligence systems. The methodology is designed to facilitate the automation of the security risk assessment of Artificial Intelligence components together with the rest of the system components. Supporting the methodology, a solution for automating Artificial Intelligence risk assessment is also proposed. Both the methodology and the tool will be validated when assessing and treating risks on Artificial Intelligence-based cybersecurity solutions integrated in modern digital industrial systems that leverage emerging technologies such as the cloud continuum, including Software-defined networking (SDN).
Authored by Eider Iturbe, Erkuden Rios, Nerea Toledo
In an environment where terrorist group actions heavily predominate, the study introduces novel modeling tools that are adept at controlling, coordinating, manipulating, detecting, and tracing drones. Modern humans now need to simulate their surroundings in order to boost their comfort and productivity at work. The ability to imitate a person's everyday work has undergone tremendous advancement. A simulation is a representation of how a system or process would work in the actual world.
Authored by Soumya V, S. Sujitha, Mohan R, Sharmi Kanaujia, Sanskriti Agarwalla, Shaik Sameer, Tabasum Manzoor
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we need to also consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seek to corrupt data on the IoT devices during local updates or corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in digital twin 6G-enabled IoT environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. The poisoning attacks, on an attack classification problem, can lead to a decrease in accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
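A label-flipping variant of the data poisoning studied above can be sketched as follows (a toy illustration; the `poison_labels` helper and its parameters are assumptions for this digest, not the authors' attack code):

```python
import numpy as np

def poison_labels(y, fraction, num_classes, seed=0):
    """Label-flipping poisoning: move a fraction of training labels
    to a different, randomly chosen class."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    n_poison = int(fraction * len(y))
    idx = rng.choice(len(y), n_poison, replace=False)
    # adding a nonzero offset mod num_classes guarantees a wrong label
    y[idx] = (y[idx] + rng.integers(1, num_classes, n_poison)) % num_classes
    return y

y = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
y_poisoned = poison_labels(y, fraction=0.4, num_classes=3)
```

In a federated setting the same corruption would be applied to a device's local dataset before its model update, which is how a small number of compromised clients can degrade the global model.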
As a recent breakthrough in generative artificial intelligence, ChatGPT is capable of creating new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. Starting from the risks and events that ChatGPT has recently faced, this paper proposes a framework for analyzing cybersecurity in cyberspace and envisions adversarial models and systems. It puts forward a new evolutionary relationship in which attackers and defenders use ChatGPT to enhance their own capabilities in a changing environment, and predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
As the use of machine learning continues to grow in prominence, so does the need for increased knowledge of the threats posed by artificial intelligence. Now more than ever, people are worried about poison attacks, one of the many AI-related dangers that have already been made public. To fool a classifier during testing, an attacker may "poison" it by altering a portion of the dataset it utilised for training. The poison-resistance strategy presented in this article is novel. The approach uses a recently developed primitive called the keyed nonlinear probability test to determine whether or not the training input is consistent with a previously learnt distribution D, even when the odds are stacked against the model. We use an adversary-unknown secret key in our operation. Since the key is kept hidden, an adversary cannot use it to fool a keyed nonparametric normality test into concluding that a (substantially) modified dataset really originates from the designated distribution D.
Authored by Ramesh Saini
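The keyed-test idea above can be sketched as follows (a loose illustration under stated assumptions: the real primitive is a keyed nonparametric normality test, whereas this toy merely compares simple moments of a key-selected subsample, and all names below are inventions):

```python
import numpy as np

def keyed_consistency_test(data, reference, key, n=256, threshold=0.3):
    """Keyed sketch of a distribution-consistency check: a secret key
    seeds which samples are inspected, so an adversary who cannot see
    the key cannot tailor a poisoned set to pass the test.  Compares
    the keyed subsample's mean/std against the reference data."""
    rng = np.random.default_rng(key)            # secret key -> sampling
    idx = rng.choice(len(data), size=min(n, len(data)), replace=False)
    sub = np.asarray(data, dtype=float)[idx]
    ref = np.asarray(reference, dtype=float)
    drift = abs(sub.mean() - ref.mean()) + abs(sub.std() - ref.std())
    return bool(drift <= threshold)             # True: looks consistent

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 10_000)            # samples from D
# Poisoned set: 30% of the data replaced by a constant outlier value.
poisoned = np.concatenate([clean[:7000], np.full(3000, 5.0)])
key = 0xC0FFEE                                  # adversary-unknown key
```

The security argument hinges on the key: without knowing which subsample will be tested, the adversary cannot craft modifications that evade the check while still shifting the model.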
This survey paper provides an overview of the current state of AI attacks and risks for AI security and privacy as artificial intelligence becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and cause substantial financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey paper comprehensively analyzes the current state of AI attacks and risks for AI security and privacy, along with detection techniques.
Authored by Md Rahman, Aiasha Arshi, Md Hasan, Sumayia Mishu, Hossain Shahriar, Fan Wu
Recently, the manufacturing industry has been changing into a smart manufacturing era with the development of 5G, artificial intelligence, and cloud computing technologies. As a result, Operational Technology (OT), which controls and operates factories, has been digitized and is used together with Information Technology (IT). Security is indispensable in the smart manufacturing industry, as a problem with the equipment, facilities, and operations in charge of manufacturing can cause factory shutdown or damage. In particular, security is required in smart factories because they implement automation in the manufacturing industry by monitoring the surrounding environment and collecting meaningful information through the Industrial IoT (IIoT). Therefore, in this paper, IIoT security approaches proposed in 2022 and recent technology trends are analyzed and explained in order to understand the current status of IIoT security technology in a smart factory environment.
Authored by Jihye Kim, Jaehyoung Park, Jong-Hyouk Lee
A Digital Twin can be developed to represent a certain soil carbon emissions ecosystem that takes into account various parameters such as the type of soil, vegetation, climate, human interaction, and many more. With the help of sensors and satellite imagery, real-time data can be collected and fed into the digital model to simulate and predict soil carbon emissions. However, the lack of interpretable prediction results and transparent decision-making makes a Digital Twin unreliable, which could damage the management process. Therefore, we propose an explainable artificial intelligence (XAI) empowered Digital Twin for better managing soil carbon emissions through AI-enabled proximal sensing. We validated our XAIoT-DT components by analyzing real-world soil carbon content datasets. The preliminary results demonstrate that our framework is a reliable tool for managing soil carbon emissions, delivering relatively high prediction quality at a low cost.
Authored by Di An, YangQuan Chen
Alzheimer's disease (AD) is a disorder that impacts the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment. There is a possibility of delayed diagnosis of the disease. To overcome this delay, this work proposes an approach using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on active Magnetic Resonance Imaging (MRI) scan reports of Alzheimer's patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique known as Gradient Class Activation Map (Grad-CAM) to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
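Grad-CAM's core computation, used above to highlight disease regions, can be sketched in a few framework-free lines (a toy: real Grad-CAM takes activations and gradients from a trained CNN, which are synthetic arrays here):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Toy Grad-CAM: weight each convolutional feature map by the
    global-average-pooled gradient of the class score, sum the weighted
    maps, and keep only positive evidence (ReLU), yielding a coarse
    heatmap of the regions that drove the prediction.

    feature_maps : (channels, H, W) activations of the last conv layer
    gradients    : (channels, H, W) gradients of the class score
                   w.r.t. those activations
    """
    weights = gradients.mean(axis=(1, 2))                # alpha_k
    cam = (weights[:, None, None] * feature_maps).sum(axis=0)
    cam = np.maximum(cam, 0)                             # ReLU
    return cam / cam.max() if cam.max() > 0 else cam

# Two 4x4 feature maps; only channel 0 carries gradient signal,
# and it activates in the top-left corner.
fmap = np.zeros((2, 4, 4)); fmap[0, 0, 0] = 1.0
grad = np.zeros((2, 4, 4)); grad[0] = 1.0
heat = grad_cam(fmap, grad)   # heatmap concentrated at the top-left
```

Upsampled to the input resolution and overlaid on the MRI slice, such a map is what lets a clinician see which brain regions the network relied on.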
The rapid advancement in Deep Learning (DL) proposes viable solutions to various real-world problems. However, deploying DL-based models in some applications is hindered by their black-box nature and the inability to explain them. This has pushed Explainable Artificial Intelligence (XAI) research toward DL-based models, aiming to increase trust by reducing their opacity. Although many XAI algorithms were proposed earlier, they lack the ability to explain certain tasks, i.e., image captioning (IC). This is caused by the nature of the IC task, e.g., the presence of multiple objects from the same category in the captioned image. In this paper, we propose and investigate an XAI approach for this particular task. Additionally, we provide a method to evaluate the performance of XAI algorithms in this domain.
Authored by Modafar Al-Shouha, Gábor Szűcs
The results of Deep Learning (DL) are indisputable in different fields, in particular that of medical diagnosis. The black-box nature of this tool has left doctors very cautious with regard to its estimates. eXplainable Artificial Intelligence (XAI) has recently seemed to lift this challenge by providing explanations for the DL estimates. Several works published in the literature offer explanatory methods. In this survey, we present an overview of the application of XAI in Deep Learning-based Magnetic Resonance Imaging (MRI) image analysis for Brain Tumor (BT) diagnosis. We divide these XAI methods into four groups: the group of intrinsic methods and three groups of post-hoc methods, which are the activation-based, the gradient-based, and the perturbation-based XAI methods. These XAI tools improve confidence in DL-based brain tumor diagnosis.
Authored by Hana Charaabi, Hiba Mzoughi, Ridha Hamdi, Mohamed Njah