In an environment where terrorist group activity predominates, this study introduces novel modeling tools for controlling, coordinating, manipulating, detecting, and tracing drones. Simulating one's surroundings has become a practical need for improving comfort and productivity at work, and the ability to imitate a person's everyday work has advanced tremendously. A simulation is a representation of how a system or process would behave in the real world.
Authored by Soumya V, S. Sujitha, Mohan R, Sharmi Kanaujia, Sanskriti Agarwalla, Shaik Sameer, Tabasum Manzoor
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-Enabled IoTs: An Anticipatory Study
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we must also consider the potential for attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates or to corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in these environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. On an attack classification problem, the poisoning attacks can reduce accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
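For readers new to the attack model, the following minimal sketch (not taken from the paper, which uses deep networks on an IoT security dataset) shows how a single label-flipping client degrades a federated-averaging model; the data, model, and hyperparameters are all toy assumptions:

```python
# Toy FedAvg with one poisoned (label-flipped) client.
import numpy as np

rng = np.random.default_rng(0)
W_TRUE = rng.normal(size=10)                      # shared ground-truth separator

def make_client(n=200):
    X = rng.normal(size=(n, 10))
    y = (X @ W_TRUE > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.5, epochs=5):
    # plain logistic-regression gradient steps on one client's data
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

clients = [make_client() for _ in range(10)]
X0, y0 = clients[0]
clients[0] = (X0, 1.0 - y0)                       # attacker flips one client's labels

w = np.zeros(10)
for _ in range(30):                               # FedAvg rounds: average local models
    w = np.mean([local_update(w, X, y) for X, y in clients], axis=0)

X_test, y_test = make_client(5000)
print("accuracy with one poisoned client:", np.mean((X_test @ w > 0) == y_test))
```

With more poisoned clients, or the Non-IID partitions studied in the paper, the degradation becomes far more severe.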
As a recent breakthrough in generative artificial intelligence, ChatGPT is capable of creating new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. Starting from the risks and incidents that ChatGPT has recently faced, this paper proposes a framework for analyzing its cybersecurity in cyberspace and envisions adversarial models and systems. It puts forward a new evolutionary relationship in which both attackers and defenders use ChatGPT to enhance their own capabilities in a changing environment, and it predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
As the use of machine learning continues to grow in prominence, so does the need for increased awareness of the threats posed by artificial intelligence. Now more than ever, people are worried about poisoning attacks, one of the many AI-related dangers that have already been made public. To fool a classifier at test time, an attacker may "poison" it by altering a portion of the dataset used for training. The poison-resistance strategy presented in this article is novel. The approach uses a recently developed primitive called the keyed nonlinear probability test to determine whether the training input is consistent with a previously learnt distribution D, even when the odds are stacked against the model. We use a secret key unknown to the adversary in our construction. Since the key is kept hidden, an adversary cannot use it to fool the keyed nonparametric normality test into concluding that a (substantially) modified dataset really originates from the designated distribution D.
Authored by Ramesh Saini
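The paper's keyed test primitive is not reproduced here; as a loose, hypothetical analogue of the idea of keying a distribution check, the sketch below derives a secret random projection from a key and applies an ordinary two-sample Kolmogorov-Smirnov test:

```python
# Keyed distribution check: the secret key selects a projection the adversary
# cannot anticipate; a KS test then compares projected samples against D.
import numpy as np
from scipy.stats import ks_2samp

def keyed_shift_test(batch, reference, key, alpha=0.01):
    rng = np.random.default_rng(key)          # key -> secret projection direction
    v = rng.normal(size=batch.shape[1])
    v /= np.linalg.norm(v)
    stat, p = ks_2samp(batch @ v, reference @ v)
    return p < alpha                          # True => reject: batch likely tampered

rng = np.random.default_rng(1)
reference = rng.normal(size=(5000, 20))       # clean samples from D
clean = rng.normal(size=(500, 20))
poisoned = rng.normal(scale=2.0, size=(500, 20))   # adversarial batch, wrong spread

print(keyed_shift_test(clean, reference, key=42))      # expected: False
print(keyed_shift_test(poisoned, reference, key=42))   # expected: True
```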
This survey paper provides an overview of the current state of AI attacks and risks for AI security and privacy as artificial intelligence becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and are causing significant financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey comprehensively analyzes the current state of AI attacks, the risks they pose to AI security and privacy, and the corresponding detection techniques.
Authored by Md Rahman, Aiasha Arshi, Md Hasan, Sumayia Mishu, Hossain Shahriar, Fan Wu
The manufacturing industry is transitioning into a smart manufacturing era with the development of 5G, artificial intelligence, and cloud computing technologies. As a result, Operational Technology (OT), which controls and operates factories, has been digitized and is used together with Information Technology (IT). Security is indispensable in the smart manufacturing industry, as a problem with the equipment, facilities, and operations in charge of manufacturing can cause factory shutdown or damage. Security is particularly required in smart factories because they implement automation in the manufacturing industry by monitoring the surrounding environment and collecting meaningful information through the Industrial IoT (IIoT). Therefore, this paper analyzes and explains IIoT security work proposed in 2022 and recent technology trends in order to convey the current status of IIoT security technology in a smart factory environment.
Authored by Jihye Kim, Jaehyoung Park, Jong-Hyouk Lee
A Digital Twin can be developed to represent a soil carbon emissions ecosystem, taking into account parameters such as soil type, vegetation, climate, human interaction, and many more. With the help of sensors and satellite imagery, real-time data can be collected and fed into the digital model to simulate and predict soil carbon emissions. However, the lack of interpretable predictions and transparent decision-making makes such a Digital Twin unreliable, which could damage the management process. Therefore, we propose an explainable artificial intelligence (XAI)-empowered Digital Twin for better managing soil carbon emissions through AI-enabled proximal sensing. We validated our XAIoT-DT components by analyzing real-world soil carbon content datasets. The preliminary results demonstrate that our framework is a reliable tool for managing soil carbon emissions, with relatively high prediction accuracy at a low cost.
Authored by Di An, YangQuan Chen
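As an illustration only of coupling a soil-carbon predictor with an XAI step, here is a hedged sketch using permutation importance on synthetic stand-ins for proximal-sensing features; the paper's actual dataset, model, and XAI method are not reproduced:

```python
# Fit a regressor on toy sensing features, then rank feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                   # stand-ins: moisture, pH, temperature, NDVI
carbon = 3 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * rng.normal(size=n)   # toy ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, carbon, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["moisture", "pH", "temperature", "NDVI"], imp.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")   # moisture and pH should dominate
```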
Alzheimer's disease (AD) is a disorder that affects the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To overcome this delay, this work proposes an approach using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer's patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique known as Gradient Class Activation Map (Grad-CAM) to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
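Grad-CAM itself is well documented; the following minimal PyTorch sketch shows the mechanism on a toy CNN with a random tensor standing in for an MRI slice (the paper's CNN/RNN architecture and preprocessing are not reproduced):

```python
# Grad-CAM: weight a conv layer's activation maps by the spatial average of
# the gradients of the predicted class score, then ReLU and normalize.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                              # e.g., four AD stages
)

acts, grads = {}, {}
target = model[2]                                  # last conv layer
target.register_forward_hook(lambda m, i, o: acts.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 1, 64, 64)                      # stand-in for an MRI slice
logits = model(x)
cls = logits.argmax(1).item()
logits[0, cls].backward()                          # gradients w.r.t. predicted class

weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # GAP over gradient maps
cam = F.relu((weights * acts["a"]).sum(dim=1))        # class-discriminative heatmap
cam = cam / (cam.max() + 1e-8)                        # normalize to [0, 1]
print(cam.shape)                                      # torch.Size([1, 64, 64])
```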
The rapid advancement in Deep Learning (DL) offers viable solutions to various real-world problems. However, deploying DL-based models in some applications is hindered by their black-box nature and the inability to explain them. This has pushed Explainable Artificial Intelligence (XAI) research toward DL-based models, aiming to increase trust by reducing their opacity. Although many XAI algorithms have been proposed, they lack the ability to explain certain tasks, such as image captioning (IC). This is caused by the nature of the IC task, e.g., the presence of multiple objects from the same category in the captioned image. In this paper we propose and investigate an XAI approach for this particular task. Additionally, we provide a method to evaluate XAI algorithms' performance in this domain.
Authored by Modafar Al-Shouha, Gábor Szűcs
The results of Deep Learning (DL) are indisputable in various fields, in particular medical diagnosis. The black-box nature of this tool has left doctors very cautious about its estimates. eXplainable Artificial Intelligence (XAI) has recently begun to lift this barrier by providing explanations for DL estimates, and several explanatory methods have been published in the literature. In this survey, we present an overview of the application of XAI in Deep Learning-based Magnetic Resonance Imaging (MRI) analysis for Brain Tumor (BT) diagnosis. We divide these XAI methods into four groups: intrinsic methods, and three groups of post-hoc methods, namely activation-based, gradient-based, and perturbation-based XAI methods. These XAI tools improve confidence in DL-based brain tumor diagnosis.
Authored by Hana Charaabi, Hiba Mzoughi, Ridha Hamdi, Mohamed Njah
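To make the taxonomy concrete, here is a toy sketch of one perturbation-based method, occlusion sensitivity, on a stand-in model; it is illustrative only and not drawn from any surveyed paper:

```python
# Occlusion sensitivity: slide a zeroed patch over the input and record how
# much the target-class score drops; larger drops mark more important regions.
import torch
import torch.nn as nn

def occlusion_map(model, x, cls, patch=8):
    base = model(x)[0, cls].item()
    H, W = x.shape[-2:]
    heat = torch.zeros(H // patch, W // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            x_occ = x.clone()
            x_occ[..., i:i + patch, j:j + patch] = 0.0   # occlude one patch
            heat[i // patch, j // patch] = base - model(x_occ)[0, cls].item()
    return heat

model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 3))
model.eval()
with torch.no_grad():
    print(occlusion_map(model, torch.randn(1, 1, 32, 32), cls=0))
```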
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare systems and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. This study discusses explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Its primary goal is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is typically categorized by explanation method and technical method. The explanation methods are sub-categorized into three types: text-based, visual-based, and example-based. The technical interpretability methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural network-based interpretability method for medical image analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
Cyber Threat Intelligence has been demonstrated to be an effective element of defensive security and cyber protection, with examples dating back to the founding of the Financial Sector Information Sharing and Analysis Center (FS-ISAC) in 1998. Automated methods are needed today in order to stay current with the magnitude of attacks across the globe. Threat information must be actionable, current, and credibly validated if it is to be ingested into computer-operated defense systems. False positives degrade the value of the system. This paper outlines some of the progress made in applying artificial intelligence techniques, as well as the challenges associated with utilizing machine learning to refine the flow of threat intelligence. A variety of methods have been developed to create learning models that can be integrated with firewalls, rules, and heuristics. In addition, more work is needed to effectively support the limited number of expert human hours available to evaluate the prioritized threat landscape flagged as malicious in a Security Operations Center (SOC) environment.
Authored by Jon Haass
With the proliferation of Low Earth Orbit (LEO) spacecraft constellations comes the rise of space-based wireless cognitive communications systems (CCS) and the need to safeguard and protect data against potential hostiles to maintain widespread communications for enabling science, military, and commercial services. For example, known adversaries are using advanced persistent threats (APT), or highly progressive intrusion mechanisms, to target high-priority wireless space communication systems. Specialized threats continue to evolve with the advent of machine learning and artificial intelligence, where computer systems can inherently identify system vulnerabilities more expeditiously than naive human threat actors due to increased processing resources and unbiased pattern recognition. This paper presents a disruptive abuse case for an APT attack on such a CCS and describes a trade-off analysis performed to evaluate a variety of machine learning techniques that could aid in the rapid detection and mitigation of an APT attack. The trade results indicate that with the employment of neural networks, the CCS's resiliency would increase, improving operational functionality and, therefore, the reliability of on-demand communication services. Further, modelling, simulation, and analysis (MS&A) was performed using the Knowledge Discovery and Data Mining (KDD) Cup 1999 dataset to validate a subset of the trade study results against the Training Time and Number of Parameters selection criteria. Training and cross-validation learning curves were computed to model the learning performance over time and yield a reasonable conclusion about the application of neural networks.
Authored by Suzanna LaMar, Jordan Gosselin, Lisa Happel, Anura Jayasumana
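As a rough, hypothetical analogue of the trade-study experiments described, the sketch below trains a small neural network on the KDD Cup 1999 data via scikit-learn's bundled fetcher and inspects its training-loss curve; the paper's exact models and selection criteria differ:

```python
# Small MLP intrusion detector on a subsample of KDD Cup 1999 (SA subset).
import numpy as np
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import LabelEncoder, StandardScaler

kdd = fetch_kddcup99(subset="SA", percent10=True, random_state=0)
X = np.delete(kdd.data, [1, 2, 3], axis=1).astype(float)   # drop categorical columns
y = LabelEncoder().fit_transform(kdd.target)

# subsample for a quick, illustrative run
X_tr, X_te, y_tr, y_te = train_test_split(X[:20000], y[:20000], random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
print("first training losses:", [round(l, 3) for l in clf.loss_curve_[:5]])
```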
The new Web 3.0, or Web3, is a distributed web technology operated mainly by decentralized blockchain and artificial intelligence. Web 3.0 technologies are bringing changes to Industry 4.0, especially in the business sector. The contribution of this paper is to discuss the new Web 3.0 (not the Semantic Web) and to explore the essential factors of the new Web 3.0 technologies in business and industry, based on the 7 layers of the decentralized web. The layers comprise users, interface, application, execution, settlement, data, and social components. The concept of the 7 layers of the decentralized web was introduced by Polynya. This research was carried out using the SLR (Systematic Literature Review) methodology to identify these factors by analyzing high-quality papers in the Scopus database. We found 21 essential factors: Distributed, Real-time, Community, Culture, Productivity, Efficiency, Decentralized, Trust, Security, Performance, Reliability, Scalability, Transparency, Authenticity, Cost Effective, Communication, Telecommunication, Social Network, Use Case, and Business Simulation. We also present opportunities and challenges for the 21 factors in business and industry.
Authored by Calvin Vernando, Hendry Hitojo, Randy Steven, Meyliana, Surjandy
COVID-19 has taught us the need to practice social distancing. Because of the sudden lockdown across the globe in 2020, e-commerce websites and e-shopping were the only way to fulfill basic needs, and with the advancement of technology, putting businesses online has become a necessity. Be it food, groceries, or a favorite outfit, all these things are now available online. During the lockdown period, businesses with no online presence suffered heavy losses, while those that had established an internet presence saw a sudden boom in overall sales. This project discusses how recent advancements in Machine Learning and Artificial Intelligence have increased the sales of various businesses. The machine learning model analyzes patterns of customer behavior that affect sales, builds a dataset from many observations, and finally helps generate an efficient recommendation algorithm. The project also discusses how cyber security enables secure and authenticated transactions, which has aided e-commerce growth by building customers' trust.
Authored by Tanya Pahadi, Abhishek Verma, Raju Ranjan
This paper presents a case study of the initial phases of the interface design for an artificial intelligence-based decision-support system for clinical diagnosis. The study presents challenges and opportunities in implementing a human-centered design (HCD) approach during the early stages of the software development of a complex system. These methods are commonly adopted to ensure that systems are designed based on users' needs. For this project, they are also used to investigate users' potential trust issues and ensure the creation of a trustworthy platform. However, the project stage and the heterogeneity of the teams can pose obstacles to their implementation. The implementation of HCD methods proved effective and informed the creation of low-fidelity prototypes. The outcomes of this process can assist other designers, developers, and researchers in creating trustworthy AI solutions.
Authored by Gabriela Beltrao, Iuliia Paramonova, Sonia Sousa
The Assessment List for Trustworthy AI (ALTAI) was developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG), set up by the European Commission, to help assess whether an AI system that is being developed, deployed, procured, or used complies with the seven requirements of Trustworthy AI, as specified in the AI HLEG's Ethics Guidelines for Trustworthy AI. This paper describes the self-evaluation process of the SHAPES pilot campaign and presents some individual case results from applying a prototype of an interactive version of the Assessment List for Trustworthy AI. Finally, the available results of two individual cases are combined. The best results are obtained in the evaluation category ‘transparency’ and the worst in ‘technical robustness and safety’. Future work will combine the missing self-assessment results and develop AI-based risk-mitigation recommendations for new SHAPES services.
Authored by Jyri Rajamaki, Pedro Rocha, Mira Perenius, Fotios Gioulekas
Recent advances in artificial intelligence, specifically machine learning, have contributed positively to the autonomous systems industry while also introducing social, technical, legal, and ethical challenges to making these systems trustworthy. Although Trustworthy Autonomous Systems (TAS) is an established and growing research direction discussed in multiple disciplines, e.g., Artificial Intelligence, Human-Computer Interaction, Law, and Psychology, the impact of TAS on education curricula and the skills required of future TAS engineers has rarely been discussed in the literature. This study brings together the collective insights of a number of leading TAS experts to highlight significant challenges for curriculum design, and the TAS skills that may be required, posed by the rapid emergence of TAS. Our analysis is of interest not only to the TAS education community but also to other researchers, as it offers ways to guide future research toward operationalising TAS education.
Authored by Mohammad Naiseh, Caitlin Bentley, Sarvapali Ramchurn
Artificial intelligence (AI) technology is becoming common in daily life as it finds applications in various fields. Consequently, studies have strongly focused on the reliability of AI technology to ensure that it will be used ethically and in a nonmalicious manner. In particular, the fairness of AI technology should be ensured to avoid problems such as discrimination against a certain group (e.g., racial discrimination). This paper defines seven requirements for eliminating factors that reduce the fairness of AI systems in the implementation process. It also proposes a measure to reduce the bias and discrimination that can occur during AI system implementation to ensure the fairness of AI systems. The proposed requirements and measures are expected to enhance the fairness and ensure the reliability of AI systems and to ultimately increase the acceptability of AI technology in human society.
Authored by Yejin Shin, KyoungWoo Cho, Joon Kwak, JaeYoung Hwang
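As a minimal quantitative complement (not taken from the paper, whose seven requirements are process-oriented), the sketch below measures the demographic parity gap, one common fairness criterion, on synthetic predictions:

```python
# Demographic parity: positive-prediction rates should be similar across groups.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)            # protected attribute: two groups
# toy classifier that favors group 0 (positive rate 0.6 vs 0.4)
pred = rng.random(1000) < np.where(group == 0, 0.6, 0.4)

rates = [pred[group == g].mean() for g in (0, 1)]
print("positive rate per group:", [round(r, 3) for r in rates])
print(f"demographic parity gap: {abs(rates[0] - rates[1]):.3f}")   # near 0 is fairer
```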
Artificial intelligence (AI) technology is rapidly being introduced and used in all industries as the core technology. Further, concerns about unexpected social issues are also emerging. Therefore, each country, and standard and international organizations, are developing and distributing guidelines to maximize the benefits of AI while minimizing risks and side effects. However, there are several hurdles for developers to use them in actual industrial fields such as ambiguity in terminologies, lack of concreteness according to domain, and non-specific requirements. Therefore, in this paper, approaches to address these problems are presented. If the recommendations or guidelines to be developed in the future refer to the proposed approaches, it would be a guideline for assuring AI trustworthiness that is more developer-friendly.
Authored by Jae Hwang
We have seen the tremendous expansion of machine learning (ML) technology in Artificial Intelligence (AI) applications, including computer vision, voice recognition, and many others. The availability of vast amounts of data has spurred the rise of ML technologies, especially Deep Learning (DL). Traditional ML systems consolidate all data into a central location, usually a data center, which may breach privacy and confidentiality rules. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating problems of data privacy, legality, scalability, and unwanted bandwidth loss. This paper outlines a vision for leveraging FL for better traffic steering predictions. Specifically, we propose a hierarchical FL framework that dynamically updates service function chains in a network by predicting future user demand and network state using the FL method.
Authored by Abdullah Bittar, Changcheng Huang
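To illustrate just the aggregation pattern, here is a toy two-level FedAvg sketch with linear-regression clients; the traffic-steering predictor, service-function-chain logic, and all sizes are assumptions, not the paper's design:

```python
# Hierarchical FedAvg: clients average at their edge server, edges average at the cloud.
import numpy as np

rng = np.random.default_rng(0)

def make_client(n=50, d=5):
    X = rng.normal(size=(n, d))
    return X, X @ np.ones(d) + 0.1 * rng.normal(size=n)   # true weights are all ones

def local_update(w, client, lr=0.1):
    X, y = client
    return w - lr * X.T @ (X @ w - y) / len(y)             # one gradient step

edges = [[make_client() for _ in range(4)] for _ in range(3)]  # 3 edge servers x 4 clients

w_cloud = np.zeros(5)
for _ in range(50):
    # level 1: each edge server averages its clients' updates of the cloud model
    edge_models = [np.mean([local_update(w_cloud, c) for c in clients], axis=0)
                   for clients in edges]
    # level 2: the cloud averages the edge models
    w_cloud = np.mean(edge_models, axis=0)

print("learned weights (true value is all ones):", np.round(w_cloud, 2))
```

In a real deployment, each edge would run several local rounds before reporting upward, reducing backhaul traffic; the one-step version above keeps the structure visible.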
Wearables Security 2022 - One of the biggest new trends in artificial intelligence is the ability to recognise people's movements and take their actions into account. It can be used in a variety of ways, including surveillance, security, human-computer interaction, and content-based video retrieval. A number of researchers have presented vision-based techniques for human activity recognition. Several challenges must be addressed in the creation of a vision-based human activity recognition system, including illumination variation, interclass similarity between scenes, the environment and recording setting, and temporal variation. To overcome these problems, this work captures human actions with the help of wearable sensors, wearable devices, or IoT devices. Sensor data, particularly one-dimensional time-series data, are used for human activity recognition. Using 1D-Convolutional Neural Network (CNN) models, this work proposes a new approach for identifying human activities. The Wireless Sensor Data Mining (WISDM) dataset is utilised to train and test the 1D-CNN model. The proposed HAR-CNN model achieves 95.2% accuracy, which is far higher than that of conventional methods.
Authored by P. Deepan, Santhosh Kumar, B. Rajalingam, Santosh Patra, S. Ponnuthurai
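A shape-level sketch of a 1D-CNN in the spirit of the HAR-CNN above follows; random tensors stand in for WISDM windows, and the layer sizes and window length (3 axes x 128 samples, 6 activity classes) are illustrative assumptions:

```python
# 1D-CNN over accelerometer windows: Conv1d treats the 3 axes as channels.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(64 * 32, 64), nn.ReLU(),      # 128 steps pooled twice -> 32
    nn.Linear(64, 6),                       # six activity classes
)

x = torch.randn(8, 3, 128)                  # batch of 8 windows, 3 axes, 128 steps
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 6, (8,)))
loss.backward()                             # an optimizer step would follow in training
print(model(x).shape)                       # torch.Size([8, 6])
```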
Design of High-Confidence Embedded Operating System based on Artificial Intelligence and Smart Chips
Operating Systems Security - The design of a high-confidence embedded operating system based on artificial intelligence and smart chips is studied in this paper. The cooperative physical-layer security system is regarded as a state machine. Relay nodes with untrusted behavior will affect the physical-layer security of the system, and the system tries to prevent such untrusted behavior. While implementing public verification, it protects data privacy: a third party can directly verify that the data stored in the cloud is held, without verification by the user, and in the process of system expansion and growth the software retains vigorous vitality. For verification, smart chips are combined into the systematic implementation. The experimental results are satisfactory.
Authored by Qinmin Ma
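The paper's verification scheme is not specified in detail; as a toy analogue of the challenge-response pattern for checking data holding, the sketch below uses an HMAC over a random nonce (real publicly verifiable schemes use homomorphic authenticators so the verifier need not hold the data itself):

```python
# Challenge-response data-holding check: the verifier sends a fresh nonce and
# the storage server must return HMAC(key, nonce || data). For simplicity the
# verifier recomputes from a retained copy; a real scheme would keep only tags.
import hmac, hashlib, os

KEY = os.urandom(32)
DATA = b"block-0:sensor-readings"            # data the cloud claims to store

def server_respond(stored, nonce):
    return hmac.new(KEY, nonce + stored, hashlib.sha256).digest()

def verifier_check(response, nonce):
    expected = hmac.new(KEY, nonce + DATA, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

nonce = os.urandom(16)                       # fresh challenge prevents replay
print(verifier_check(server_respond(DATA, nonce), nonce))          # True
print(verifier_check(server_respond(b"corrupted", nonce), nonce))  # False
```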
Neural Style Transfer - Style transfer is an optimization technique that aims to blend the style of an input image into a content image. Deep neural networks have previously surpassed humans in tasks such as object identification and detection; until recently, however, they lagged behind in generating high-quality creative products. This article introduces deep-learning techniques that are vital for achieving human-like characteristics and open up a new world of prospects. The system employs a pre-trained CNN so that the style of the provided image is transferred to the content image to generate a high-quality stylized image. The designed system's effectiveness is evaluated with Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index Metric (SSIM); the designed method is observed to effectively maintain the structural and textural information of the content image.
Authored by Kishor Bhangale, Pranoti Desai, Saloni Banne, Utkarsh Rajput
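The three metrics named above are standard; this short sketch computes PSNR directly from MSE and cross-checks against scikit-image, with random images standing in for the stylized output and content image:

```python
# PSNR = 10 * log10(MAX^2 / MSE); SSIM via scikit-image. Toy float images in [0, 1].
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
content = rng.random((64, 64))
stylized = np.clip(content + 0.05 * rng.normal(size=content.shape), 0, 1)

mse = np.mean((content - stylized) ** 2)
psnr = 10 * np.log10(1.0 / mse)                    # MAX = 1.0 for float images
print(f"MSE={mse:.4f}  PSNR={psnr:.2f} dB")
print("PSNR (skimage):", peak_signal_noise_ratio(content, stylized, data_range=1.0))
print("SSIM:", structural_similarity(content, stylized, data_range=1.0))
```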