In the past two years, technology has undergone significant changes with a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change: it assists doctors across healthcare and intelligent health systems, and it is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main barriers to adopting AI in healthcare is its lack of transparency, since doctors must interpret the AI's output. This is where explainable AI (XAI), which is extremely important for the healthcare sector, comes into play. With XAI's interpretable explanations, doctors, patients, and other stakeholders can more easily judge a decision's reliability by understanding its reasoning. This study discusses XAI for deep learning (DL)-based medical image analysis. The primary goal of the paper is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. Interpretability/XAI approaches for medical image analysis are typically categorized by explanation method and by technical method: explanation methods fall into three types (text-based, visual-based, and example-based), while technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural-network-based interpretability method for medical image analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
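To make the visual-based explanation category above concrete, here is a minimal vanilla-gradient saliency sketch in PyTorch; the ResNet model and random input are hypothetical placeholders, not the paper's pipeline:

```python
import torch
import torchvision.models as models

# Hypothetical stand-in for a DL medical-image classifier.
model = models.resnet18(weights=None)
model.eval()

scan = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image

logits = model(scan)
top = logits.argmax(dim=1).item()
logits[0, top].backward()                      # gradient of the top-class score

# Per-pixel saliency: which regions most influenced the decision.
saliency = scan.grad.abs().max(dim=1).values   # (1, 224, 224) heatmap
print(saliency.shape)
```

Gradient saliency is only the simplest member of the visual-based family; the taxonomy the paper proposes covers many richer variants.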
Cyber Threat Intelligence has been demonstrated to be an effective element of defensive security and cyber protection, with examples dating back to the founding of the Financial Services Information Sharing and Analysis Center (FS-ISAC) in 1998. Automated methods are needed today in order to stay current with the magnitude of attacks across the globe. Threat information must be actionable, current, and credibly validated if it is to be ingested into computer-operated defense systems. False positives degrade the value of the system. This paper outlines some of the progress made in applying artificial intelligence techniques, as well as the challenges associated with using machine learning to refine the flow of threat intelligence. A variety of methods have been developed to create learning models that can be integrated with firewalls, rules, and heuristics. In addition, more work is needed to effectively support the limited number of expert human hours available to evaluate the prioritized threat landscape flagged as malicious in a Security Operations Center (SOC) environment.
Authored by Jon Haass
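As an illustration of how a learning model might prioritize indicators while controlling false positives, a hedged scikit-learn sketch (the four features and the 0.8 threshold are invented for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Illustrative per-indicator features: age, source reputation, sighting
# count, overlap with known campaigns (all hypothetical), plus synthetic
# labels standing in for analyst-validated ground truth.
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
y = (X[:, 1] * 0.6 + X[:, 3] * 0.4 + rng.normal(0, 0.1, 5000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Raise the decision threshold to favor precision: false positives are
# what degrade automated defenses, so only high-confidence indicators pass.
proba = clf.predict_proba(X_te)[:, 1]
flagged = proba > 0.8
print("precision at 0.8 threshold:", precision_score(y_te, flagged))
```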
With the proliferation of Low Earth Orbit (LEO) spacecraft constellations comes the rise of space-based wireless cognitive communications systems (CCS) and the need to safeguard and protect data against potential hostiles to maintain widespread communications for enabling science, military, and commercial services. For example, known adversaries are using advanced persistent threats (APT), or highly progressive intrusion mechanisms, to target high-priority wireless space communication systems. Specialized threats continue to evolve with the advent of machine learning and artificial intelligence, where computer systems can identify system vulnerabilities more expeditiously than naive human threat actors due to increased processing resources and unbiased pattern recognition. This paper presents a disruptive abuse case for an APT attack on such a CCS and describes a trade-off analysis performed to evaluate a variety of machine learning techniques that could aid in the rapid detection and mitigation of an APT attack. The trade results indicate that employing neural networks would increase the CCS's resiliency and operational functionality, and therefore the reliability of on-demand communication services. Further, modelling, simulation, and analysis (MS&A) was performed using the Knowledge Discovery and Data Mining (KDD) Cup 1999 data set to validate a subset of the trade study results against the Training Time and Number of Parameters selection criteria. Training and cross-validation learning curves were computed to model the learning performance over time and yield a reasonable conclusion about the application of neural networks.
Authored by Suzanna LaMar, Jordan Gosselin, Lisa Happel, Anura Jayasumana
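Since the abstract names the KDD Cup 1999 data set and training/cross-validation learning curves, here is a minimal sketch of that style of experiment with scikit-learn; the network size and preprocessing choices are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# 10% KDD'99 subset; drop the three categorical columns for simplicity.
kdd = fetch_kddcup99(subset="SA", percent10=True, random_state=0)
X = np.delete(kdd.data, [1, 2, 3], axis=1).astype(float)
y = (kdd.target != b"normal.").astype(int)   # normal vs. attack
X = StandardScaler().fit_transform(X)

# Small MLP; parameter count is one of the trade-study criteria.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
sizes, train_scores, val_scores = learning_curve(
    mlp, X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=3)

for n, tr, va in zip(sizes, train_scores.mean(1), val_scores.mean(1)):
    print(f"n={n:6d}  train={tr:.3f}  cv={va:.3f}")
```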
The new Web 3.0, or Web3, is a distributed web technology operated mainly by decentralized blockchain and artificial intelligence. Web 3.0 technologies bring changes to Industry 4.0, especially in the business sector. The contribution of this paper is to discuss the new Web 3.0 (not the Semantic Web) and to explore the essential factors of the new Web 3.0 technologies in business and industry based on the seven layers of the decentralized web: users, interface, application, execution, settlement, data, and social. The concept of the seven layers of the decentralized web was introduced by Polynya. This research was carried out using the Systematic Literature Review (SLR) methodology to identify these factors by analyzing high-quality papers in the Scopus database. We found 21 essential factors: Distributed, Real-time, Community, Culture, Productivity, Efficiency, Decentralized, Trust, Security, Performance, Reliability, Scalability, Transparency, Authenticity, Cost Effective, Communication, Telecommunication, Social Network, Use Case, and Business Simulation. We also present the opportunities and challenges of these 21 factors in business and industry.
Authored by Calvin Vernando, Hendry Hitojo, Randy Steven, Meyliana, Surjandy
COVID-19 has taught us the need to practice social distancing. Because of the sudden lockdown across the globe in 2020, e-commerce websites and e-shopping were the only way to fulfill our basic needs, and with the advancement of technology, putting a business online has become a necessity. Be it food, groceries, or our favorite outfit, all these things are now available online. It was noticed during the lockdown period that businesses with no online presence suffered heavy losses, while those that had established a presence on the internet saw a sudden boom in overall sales. This project discusses how recent advancements in machine learning and artificial intelligence have increased the sales of various businesses. The machine learning model analyzes the patterns of customer behavior that affect sales, builds a dataset from many observations, and finally helps generate an efficient recommendation algorithm. The project also discusses how cyber security provides secure and authenticated transactions, which has aided e-commerce growth by building customers' trust.
Authored by Tanya Pahadi, Abhishek Verma, Raju Ranjan
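A minimal sketch of the kind of recommendation system described, using item-based collaborative filtering over a toy purchase matrix (the data and the cosine-similarity choice are illustrative; the project's actual model is not specified):

```python
import numpy as np

# Toy user-item purchase matrix (rows: users, cols: products); in a real
# shop this would come from observed browsing/purchase behavior.
R = np.array([[1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 1, 1]], dtype=float)

# Item-item cosine similarity.
norms = np.linalg.norm(R, axis=0, keepdims=True)
sim = (R.T @ R) / (norms.T @ norms + 1e-9)

def recommend(user, k=2):
    scores = R[user] @ sim          # items similar to what the user bought
    scores[R[user] > 0] = -np.inf   # hide items already purchased
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # top product indices suggested for user 0
```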
This paper presents a case study on the initial phases of the interface design for an artificial intelligence-based decision-support system for clinical diagnosis. The study presents challenges and opportunities in implementing a human-centered design (HCD) approach during the early stages of the software development of a complex system. These methods are commonly adopted to ensure that systems are designed around users' needs; for this project, they are also used to investigate users' potential trust issues and to ensure the creation of a trustworthy platform. However, the project stage and the heterogeneity of the teams can pose obstacles to their implementation. The HCD methods proved effective and informed the creation of low-fidelity prototypes. The outcomes of this process can assist other designers, developers, and researchers in creating trustworthy AI solutions.
Authored by Gabriela Beltrao, Iuliia Paramonova, Sonia Sousa
The Assessment List for Trustworthy AI (ALTAI) was developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG) set up by the European Commission to help assess whether an AI system that is being developed, deployed, procured, or used complies with the seven requirements of Trustworthy AI, as specified in the AI HLEG's Ethics Guidelines for Trustworthy AI. This paper describes the self-evaluation process of the SHAPES pilot campaign and presents some individual case results from applying a prototype of an interactive version of the Assessment List for Trustworthy AI. Finally, the available results of two individual cases are combined. The best results are obtained in the evaluation category 'transparency' and the worst in 'technical robustness and safety'. Future work will combine the missing self-assessment results and develop mitigation recommendations for AI-based risk reduction in new SHAPES services.
Authored by Jyri Rajamaki, Pedro Rocha, Mira Perenius, Fotios Gioulekas
Recent advances in artificial intelligence, specifically machine learning, have contributed positively to the autonomous systems industry while also introducing social, technical, legal, and ethical challenges to making such systems trustworthy. Although Trustworthy Autonomous Systems (TAS) is an established and growing research direction discussed in multiple disciplines (e.g., artificial intelligence, human-computer interaction, law, and psychology), the impact of TAS on education curricula and the skills required of future TAS engineers has rarely been discussed in the literature. This study brings together collective insights from a number of leading TAS experts to highlight the significant challenges for curriculum design, and the engineering skills, posed by the rapid emergence of TAS. Our analysis is of interest not only to the TAS education community but also to other researchers, as it offers ways to guide future research toward operationalising TAS education.
Authored by Mohammad Naiseh, Caitlin Bentley, Sarvapali Ramchurn
Artificial intelligence (AI) technology is becoming common in daily life as it finds applications in various fields. Consequently, studies have strongly focused on the reliability of AI technology to ensure that it will be used ethically and in a nonmalicious manner. In particular, the fairness of AI technology should be ensured to avoid problems such as discrimination against a certain group (e.g., racial discrimination). This paper defines seven requirements for eliminating factors that reduce the fairness of AI systems in the implementation process. It also proposes a measure to reduce the bias and discrimination that can occur during AI system implementation to ensure the fairness of AI systems. The proposed requirements and measures are expected to enhance the fairness and ensure the reliability of AI systems and to ultimately increase the acceptability of AI technology in human society.
Authored by Yejin Shin, KyoungWoo Cho, Joon Kwak, JaeYoung Hwang
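The paper's seven requirements are qualitative; as one hypothetical quantitative complement, here is a demographic-parity check of the kind often used to surface group-level bias (the data below is synthetic):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-outcome rates between two groups.

    y_pred: binary model decisions; group: 0/1 protected attribute.
    A gap near 0 is one (partial) indicator of group fairness.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
# Synthetic decisions that favor group 0, to make the gap visible.
y_pred = (rng.random(1000) < np.where(group == 1, 0.35, 0.50)).astype(int)
print(demographic_parity_gap(y_pred, group))  # ~0.15 -> potential bias
```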
Artificial intelligence (AI) technology is rapidly being introduced and used across industries as a core technology, and concerns about unexpected social issues are emerging alongside it. Therefore, individual countries, standards bodies, and international organizations are developing and distributing guidelines to maximize the benefits of AI while minimizing its risks and side effects. However, several hurdles prevent developers from using these guidelines in actual industrial settings, such as ambiguous terminology, a lack of concreteness for specific domains, and non-specific requirements. This paper presents approaches to address these problems. If future recommendations or guidelines refer to the proposed approaches, they will be more developer-friendly guidelines for assuring AI trustworthiness.
Authored by Jae Hwang
We have seen the tremendous expansion of machine learning (ML) technology in Artificial Intelligence (AI) applications, including computer vision, voice recognition, and many others. The availability of a vast amount of data has spurred the rise of ML technologies, especially Deep Learning (DL). Traditional ML systems consolidate all data into a central location, usually a data center, which may breach privacy and confidentiality rules. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating data privacy, legality, scalability, and unwanted bandwidth loss problems. This paper outlines a vision for leveraging FL for better traffic steering predictions. Specifically, we propose a hierarchical FL framework that will dynamically update service function chains in a network by predicting future user demand and network state using the FL method.
Authored by Abdullah Bittar, Changcheng Huang
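A minimal single-level federated-averaging sketch of the FL idea the paper builds on (the paper's framework is hierarchical and predicts user demand and network state; the linear model and client data here are stand-ins):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps; raw data never leaves the client."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(global_w, clients):
    """Server aggregates client weights, weighted by local sample count."""
    sizes = np.array([len(y) for _, y in clients])
    locals_ = [local_update(global_w.copy(), X, y) for X, y in clients]
    return np.average(locals_, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # e.g., four edge sites measuring local traffic demand
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

w = np.zeros(2)
for _ in range(20):  # communication rounds: only weights leave clients
    w = fed_avg(w, clients)
print(w)  # ≈ [2, -1], learned without pooling raw data centrally
```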
Wearables Security 2022 - One of the biggest new trends in artificial intelligence is the ability to recognize people's movements and take their actions into account. This can be used in a variety of ways, including surveillance, security, human-computer interaction, and content-based video retrieval. A number of researchers have presented vision-based techniques for human activity recognition, but several challenges must be addressed in creating such a system, including illumination variation, interclass similarity between scenes, the environment and recording setting, and temporal variation. These problems can be overcome by capturing or sensing human actions with wearable sensors, wearable devices, or IoT devices. Human activity recognition then works on sensor data, particularly one-dimensional time-series data. This work proposes a new approach for identifying human activities using 1D Convolutional Neural Network (CNN) models, with the Wireless Sensor Data Mining (WISDM) dataset used to train and test the model. The proposed HAR-CNN model achieves 95.2% accuracy, far higher than conventional methods.
Authored by P. Deepan, Santhosh Kumar, B. Rajalingam, Santosh Patra, S. Ponnuthurai
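A minimal PyTorch sketch of a 1D-CNN of the kind described, sized for WISDM-style windows; the exact layer configuration of the HAR-CNN model is an assumption:

```python
import torch
import torch.nn as nn

# Input: a window of tri-axial accelerometer samples (3 channels x 128
# time steps); output: one of 6 activity classes. Sizes are illustrative.
class HARCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * 32, n_classes)

    def forward(self, x):            # x: (batch, 3, 128)
        z = self.features(x)         # -> (batch, 64, 32)
        return self.classifier(z.flatten(1))

model = HARCNN()
windows = torch.randn(8, 3, 128)     # stand-in for WISDM windows
print(model(windows).shape)          # torch.Size([8, 6])
```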
Operating Systems Security - Design of the high-confidence embedded operating system based on artificial intelligence and smart chips is studied in this paper. The cooperative physical layer security system is regarded as a state machine. Relay nodes with untrusted behavior will affect the physical layer security of the system, and the system tries to prevent the untrusted behavior of relay nodes. While implementing public verification, it realizes the protection of data privacy. The third party can directly verify the data holding of the data stored in the cloud without verification by the user, and in the process of system expansion and growth, software can ensure vigorous vitality. For the verification, the smart chips are combined for the systematic implementations. The experimental results have shown the satisfactory results.
Authored by Qinmin Ma
Neural Style Transfer - Style transfer is an optimization technique that aims to blend the style of one input image into a content image. Deep neural networks have previously surpassed humans in tasks such as object identification and detection; until recently, however, they lagged behind in generating high-quality creative output. This article introduces the deep-learning techniques that are vital to achieving such human-like capability and open up a new world of prospects. The system employs a pre-trained CNN to transfer the style of the provided image onto the content image and generate a high-quality stylized image. The designed system's effectiveness is evaluated using Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index Measure (SSIM); the method effectively maintains the structural and textural information of the content image.
Authored by Kishor Bhangale, Pranoti Desai, Saloni Banne, Utkarsh Rajput
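The three evaluation metrics named above can be sketched directly: MSE and PSNR are implemented below, and SSIM is taken from scikit-image (the images here are synthetic placeholders, not style-transfer outputs):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, data_range=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher = closer to reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(data_range**2 / m)

rng = np.random.default_rng(0)
content = rng.integers(0, 256, (256, 256), dtype=np.uint8)
stylized = np.clip(content + rng.normal(0, 10, content.shape), 0, 255)
stylized = stylized.astype(np.uint8)

print("MSE :", mse(content, stylized))
print("PSNR:", psnr(content, stylized))
print("SSIM:", structural_similarity(content, stylized, data_range=255))
```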
Network Security Resiliency - Distributed cyber-infrastructures and Artificial Intelligence (AI) are transformative technologies that will play a pivotal role in the future of society and the scientific community. Internet of Things (IoT) applications harbor vast quantities of connected devices that collect massive amounts of sensitive information (e.g., medical, financial), which is usually analyzed either at the edge or in federated cloud systems via AI/Machine Learning (ML) algorithms to make critical decisions (e.g., diagnosis). It is of paramount importance to ensure the security, privacy, and trustworthiness of data collection, analysis, and decision-making processes. However, system complexity and increased attack surfaces make these applications vulnerable to system breaches, single points of failure, and various cyber-attacks. Moreover, advances in quantum computing exacerbate the security and privacy challenges: emerging quantum computers can break the conventional cryptographic systems that provide cyber-security services, public key infrastructures, and privacy-enhancing technologies. Therefore, there is a vital need for new cyber-security paradigms that can address the resiliency, long-term security, and efficiency requirements of distributed cyber-infrastructures.
Authored by Attila Yavuz, Saif Nouma, Thang Hoang, Duncan Earl, Scott Packard
Network Security Resiliency - A reliable synchrophasor network of phasor measurement units (PMUs) is essential for modern power system operations and management with rapidly increasing levels of renewable energy sources. Cyber-physical system vulnerabilities such as side-channel-based denial-of-service (DoS) attacks can compromise PMU communications even when an encrypted virtual private network is used. To overcome these vulnerabilities, countermeasures to DoS attacks need to be developed. One such countermeasure is the development and deployment of a virtual synchrophasor network (VSN) to improve the resiliency of a synchrophasor network against DoS attacks. A cellular computational network (CCN) is a distributed artificial intelligence framework suitable for complex system modeling and estimation, and CCNs have been shown to successfully mitigate the effects of DoS attacks on single PMUs. In this study, the robustness of a VSN is further investigated and shown to exhibit resiliency under concurrent DoS attacks. Typical results for VSN applications in multi-area power systems with utility-scale photovoltaic solar plants are presented.
Authored by Xingsi Zhong, Ganesh Venayagamoorthy, Richard Brooks
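The CCN itself is beyond a short sketch, but the core substitution idea, synthesizing a virtual PMU stream from correlated neighbors when the real PMU is under DoS, can be illustrated as follows (synthetic signals and a single hypothetical estimator, not the paper's distributed framework):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for PMU phase-angle streams at 4 buses: electrically
# coupled buses produce correlated measurements, which a learned estimator
# can exploit to synthesize a "virtual" PMU when the real one is jammed.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 3000)
base = np.sin(2 * np.pi * 0.2 * t)
pmus = np.stack([base + 0.1 * rng.normal(size=t.size) + 0.2 * k
                 for k in range(4)], axis=1)

X, y = pmus[:, :3], pmus[:, 3]        # predict PMU 3 from PMUs 0-2
est = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                   random_state=0).fit(X[:2000], y[:2000])

# Under a DoS on PMU 3, its feed is replaced by the virtual estimate.
virtual = est.predict(X[2000:])
print("RMSE of virtual PMU:", np.sqrt(np.mean((virtual - y[2000:]) ** 2)))
```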
Network on Chip Security - This paper designs a network security protection system based on artificial intelligence technology, covering both hardware and software. The system can simultaneously collect public Internet data and secret-related data inside the organization, encrypting the latter with a TCM chip embedded in the hardware to ensure that only designated machines can read secret-related materials. The chip-encryption-based edge-cloud collaborative data acquisition architecture can realize cross-network transmission of confidential data. This paper also proposes an edge-cloud collaborative information security protection method for industrial control systems by combining end-address hopping with load-balancing algorithms. Finally, using development environments such as WinCC, Unity3D, and MySQL, the feasibility and effectiveness of the system are verified by experiments.
Authored by Xiuyun Lu, Wenxing Zhao, Yuquan Zhu
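As a sketch of the end-address hopping ingredient mentioned above (the paper's actual scheme and parameters are not reproduced; this is a generic HMAC-based, time-slotted hop):

```python
import hmac, hashlib, time

def hop_port(secret: bytes, slot: int, lo=20000, hi=60000) -> int:
    """Pseudorandom service port for a 10-second time slot.

    Both endpoints share `secret`, so they derive the same port for each
    slot, while an observer cannot predict the next hop.
    """
    digest = hmac.new(secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return lo + int.from_bytes(digest[:4], "big") % (hi - lo)

secret = b"shared-provisioned-key"     # hypothetical pre-shared key
slot = int(time.time() // 10)
print("current port:", hop_port(secret, slot))
print("next port   :", hop_port(secret, slot + 1))
```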
Network Intrusion Detection - Network intrusion detection has become a popular application of network security technology, but existing approaches suffer from poor detection performance, such as low detection efficiency and low detection accuracy. To solve these problems, a new treatment combining artificial intelligence with network intrusion detection is proposed. AI-based network intrusion detection applies artificial intelligence techniques, such as neural networks and neural algorithms, to network intrusion detection, making automated detection by network intrusion detection models possible.
Authored by Chaofan Lu
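One common neural approach in this space is autoencoder-based anomaly detection: train on normal traffic only and flag flows that reconstruct poorly. A hedged PyTorch sketch with synthetic features, not the paper's specific model:

```python
import torch
import torch.nn as nn

# Train an autoencoder on normal traffic features only; flows it cannot
# reconstruct well are flagged as intrusions. Feature dim is illustrative.
torch.manual_seed(0)
dim = 20
ae = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, dim))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal = torch.randn(2000, dim)                   # stand-in normal flows
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(ae(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    errs = ((ae(normal) - normal) ** 2).mean(1)
    threshold = errs.mean() + 3 * errs.std()      # simple 3-sigma cutoff
    attack = torch.randn(10, dim) * 3 + 5         # shifted = anomalous
    scores = ((ae(attack) - attack) ** 2).mean(1)
    print("flagged:", (scores > threshold).sum().item(), "of 10")
```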
Network Coding - With the continuous development of the Internet, artificial intelligence, 5G, and other technologies, many issues have begun to receive attention; network security is now one of the key research directions for scholars at home and abroad. Building on traditional Internet technology, this paper establishes a security identification system at the network physical layer, which can effectively identify security problems in network infrastructure equipment and resolve them at the physical layer. Compared with traditional security identification systems developed at the network layer, a system developed at the physical layer can base its protection on the physical origin of traffic and thereby solve part of the network security problem at its root, effectively identifying and resolving security issues from the physical device upward. The experimental results show that the security identification system identifies basic network security problems very effectively; in the experiment, the retransmission symbol error rates of the CQ-PNC algorithm and the ML algorithm are 110 and 102, respectively, with the latter showing a lower error rate and better protection.
Authored by Yunge Huang
Named Data Network Security - With the continuous development of network technology and of science and technology more broadly, artificial intelligence technology and its related applications were born. Artificial intelligence has been widely used in information detection and data processing, and it remains one of the current hot research topics; recent work has focused on network security processing of data, fault diagnosis, and anomaly detection. Aiming at the network security detection of students' real-name data, this paper analyzes the relevant artificial intelligence technology and builds a model. It first introduces and analyzes some shortcomings of clustering and mean-based algorithms, and then proposes a cloning algorithm to obtain the global optimal solution. On this basis, it constructs a network security model for processing students' real-name data based on the trust principle and a trust model.
Authored by Wenyan Ye
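The paper's cloning algorithm is not specified in detail; under a clonal-selection reading, here is a sketch of how cloning and mutating the fittest candidate centroid sets can escape the local optima that plague mean-based clustering (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.3, (50, 2))
                  for c in [(0, 0), (3, 3), (0, 3)]])

def fitness(centroids):
    """Negative within-cluster sum of squares (higher is better)."""
    d = ((data[:, None, :] - centroids[None]) ** 2).sum(-1)
    return -d.min(axis=1).sum()

# Clonal selection: clone the best candidate centroid sets, mutate the
# clones more strongly for lower-ranked parents, keep the generation's best.
pop = [data[rng.choice(len(data), 3, replace=False)] for _ in range(10)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    clones = [pop[i] + rng.normal(0, 0.05 * (i + 1), pop[i].shape)
              for i in range(5) for _ in range(3)]
    pop = sorted(pop[:5] + clones, key=fitness, reverse=True)[:10]

print("best centroids:\n", pop[0])
```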
Microelectronics Security - The boundaries between the real world and the virtual world are going to be blurred by the Metaverse. It is transforming every aspect of human life, letting people seamlessly transition from one virtual world to another, and it connects the real world with the digital world by integrating emerging technologies such as 5G, 3D reconstruction, IoT, artificial intelligence, digital twins, augmented reality (AR), and virtual reality (VR). Metaverse platforms inherit many security and privacy issues from these underlying technologies, which might impede their wider adoption. Emerging technology is an easy target for cybercriminals because its security posture is in its infancy. This work elaborates on current and potential security and privacy risks in the metaverse and puts forth proposals and recommendations to build a trusted ecosystem in a holistic manner.
Authored by Sailaja Vadlamudi
MANET Attack Detection - Mobile ad-hoc networks (MANETs) have become essential components of our daily lives. Due to their compatibility with multimedia data interchange in a mobile context, MANETs are employed in a variety of applications today, including crisis management and the battlefield. The popularity of infrastructure-less networks has grown alongside that of ad-hoc networks in recent years, driven by the rise in wireless devices and technological developments; MANETs represent a class of technologies that operate without a fixed infrastructure. The dynamic nature of a MANET makes it susceptible to numerous attacks. One of these is the wormhole attack, which tunnels data from one site to another and can damage the network; if the source node chooses this fictitious route, the attacker can then deliver or drop packets at will. In this paper, we propose a technique that modifies the Ad-hoc On-demand Distance Vector (AODV) protocol in the RREQ and RREP stages using sequence-number transactions and a detection timer (DT). With 100 nodes, the proposed method achieved a throughput of 95.5 kbps, energy consumption of 55.9 J, an end-to-end delay of 0.973 s, and a Packet Delivery Ratio (PDR) of 96.5%.
Authored by Hussein Jawdat, Muhammad Ilyas
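The exact DT rule is not reproduced here, but the intuition, that a wormhole advertises few hops while its tunnel still costs real round-trip time, can be sketched as a per-hop delay plausibility check (the delay budget is an assumed parameter):

```python
# Illustrative wormhole check in the spirit of the paper's detection
# timer (DT): a wormhole claims a short hop count, but the tunneled
# RREQ/RREP round trip still takes real time, so the RTT per claimed
# hop looks implausibly high.
MAX_PER_HOP_DELAY = 0.004  # seconds; assumed radio/forwarding budget

def route_suspicious(rreq_sent: float, rrep_received: float,
                     hop_count: int) -> bool:
    rtt = rrep_received - rreq_sent
    per_hop = rtt / (2 * max(hop_count, 1))   # out and back
    return per_hop > MAX_PER_HOP_DELAY

# Honest 5-hop route vs. a wormhole claiming 2 hops over a long tunnel.
print(route_suspicious(0.0, 0.030, 5))   # False: ~3 ms per hop
print(route_suspicious(0.0, 0.030, 2))   # True: ~7.5 ms per claimed hop
```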
Intrusion Intolerance - The cascaded multi-level inverter (CMI) is becoming increasingly popular for a wide range of applications in the power electronics dominated grid (PEDG). The increased number of semiconductor devices in this class of power converters leads to an increased need for fault detection, isolation, and self-healing. In addition, the PEDG's cyber and physical layers are exposed to malicious attacks, and these malicious actions, if not detected and classified in a timely manner, can cause catastrophic events in the power grid. The inverters' internal failures make anomaly detection and classification in the PEDG a challenging task. The main objective of this paper is to address this challenge by implementing a recurrent neural network (RNN), specifically long short-term memory (LSTM), for detecting and classifying internal failures in the CMI and distinguishing them from malicious activities in the PEDG. The proposed anomaly classification framework is a module in the primary control layer of the inverters that can provide information to intrusion detection systems in a secondary control layer of the PEDG for further analysis.
Authored by Matthew Baker, Hassan Althuwaini, Mohammad Shadmand
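A minimal PyTorch sketch of an LSTM classifier of the kind described, mapping a window of inverter measurements to an anomaly class; the feature count, window length, and class set are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Input: a sequence of sensed electrical features per time step; output:
# a class such as {normal, internal switch fault, cyber-attack}.
class AnomalyLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last time step

model = AnomalyLSTM()
window = torch.randn(16, 200, 4)           # 16 windows of 200 samples each
logits = model(window)
print(logits.shape)                        # torch.Size([16, 3])
```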
Intelligent Data and Security - Artificial intelligence technology has developed rapidly in recent years. AI is an intelligent system that can perform tasks without human intervention and can be used for various purposes, such as speech recognition and face recognition. AI can be used for good or bad purposes, depending on how it is implemented. We discuss the application of AI in data security technology and its advantages over traditional security methods, focusing on the good uses of AI by analyzing its impact on the development of big data security technology. AI can enhance security technology through machine learning algorithms, which can analyze large amounts of data and identify patterns that humans cannot detect on their own. The computer big data security technology platform based on artificial intelligence described in this paper is a system that can identify and prevent malicious programs. The system must be able to detect all types of threats, including viruses, worms, Trojans, and spyware, and it should also monitor network activity and respond quickly in the event of an attack.
Authored by Yu Miao