With increased computational efficiency, deep neural networks (DNNs) have gained importance in the area of medical diagnosis, and many researchers have noted the security concerns of the deep neural network models used for clinical applications. Even an otherwise efficient model frequently misbehaves when confronted with intentionally modified data samples, called adversarial examples. These adversarial examples are generated with imperceptible perturbations, yet they can fool DNNs into giving false predictions. Adversarial attacks and defense methods have therefore drawn attention from both the AI and security communities and have become an active research topic in recent years. Adversarial attacks can be expected across applications of deep learning models, especially in healthcare, where models perform disease prediction or classification; unless they are handled with effective defensive mechanisms, such attacks can pose a great threat to human life. This literature survey helps readers recognize the various adversarial attacks and defensive mechanisms. It presents detailed research on adversarial approaches to deep neural networks in the field of clinical analysis, beginning with the theoretical foundations, techniques, and applications of adversarial attack strategies. The contributions of various researchers to defensive mechanisms against adversarial attacks are also discussed. Finally, several open issues and challenges are discussed, which may prompt further research efforts.
Authored by K Priya V, Peter Dinesh
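As a concrete illustration of how imperceptible perturbations are crafted, the sketch below implements the well-known Fast Gradient Sign Method (FGSM) in PyTorch. It is a generic example, not the survey's own code; `model`, `x`, and `y` are hypothetical placeholders for a classifier, an input batch, and true labels.

```python
import torch

def fgsm_attack(model, x, y, epsilon=0.01):
    """One-step FGSM perturbation (generic sketch, not the survey's code).

    `model` is any differentiable classifier, `x` a batch of inputs in
    [0, 1], and `y` the true labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon
    # so the perturbation stays visually imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```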
In the ever-changing world of blockchain technology, the emergence of smart contracts has completely transformed the way agreements are executed, offering the potential for automation and trust in decentralized systems. Despite their built-in security features, smart contracts still face persistent vulnerabilities, resulting in significant financial losses. While existing studies often approach smart contract security from specific angles, such as development cycles or vulnerability detection tools, this paper adopts a comprehensive, multidimensional perspective. It delves into the intricacies of smart contract security by examining vulnerability detection mechanisms and defense strategies. The exploration begins with a detailed analysis of the current security challenges and issues surrounding smart contracts. It then reviews established frameworks for classifying vulnerabilities and common security flaws. The paper examines existing methods for detecting and repairing contract vulnerabilities, evaluating their effectiveness. Additionally, it provides a comprehensive overview of the existing body of knowledge in smart contract security research. Through this systematic examination, the paper aims to serve as a valuable reference and provide a comprehensive understanding of the multifaceted landscape of smart contract security.
Authored by Nayantara Kumar, Niranjan Honnungar V, Sharwari Prakash, J Lohith
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. Such wrong predictions can be induced by so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectifications: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, but at the cost of reduced accuracy on ordinary samples, while passive defense methods aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setting of autonomous driving: when an input has to be classified, we can neither discard it nor afford the time for computationally expensive corrections. This motivates our method, based on explainable artificial intelligence (XAI), for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples and experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
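The abstract does not detail the correction procedure, so the following is only a hypothetical sketch of one interpretation-guided idea: use a saliency map to find the input pixels that dominate a suspect prediction, damp them, and re-classify. The function name and parameters are illustrative assumptions, not the paper's method.

```python
import torch

def saliency_correct(model, x, damping=0.5, top_frac=0.02):
    """Hypothetical XAI-guided correction of a suspect input batch.

    Attenuates the most salient pixels and re-classifies; the paper's
    own approach uses two interpretation methods not reproduced here.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Saliency of the currently predicted class w.r.t. the input.
    logits.max(dim=1).values.sum().backward()
    sal = x.grad.abs().sum(dim=1, keepdim=True)  # aggregate over channels
    k = max(1, int(top_frac * sal[0].numel()))
    thresh = sal.flatten(1).topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    mask = (sal >= thresh).float()
    # Damp only the most influential pixels, leaving the rest untouched.
    x_corr = x.detach() * (1 - mask * damping)
    return model(x_corr).argmax(dim=1)
```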
Zero-day attacks, which are defined by their abrupt appearance without any previous detection mechanisms, present a substantial obstacle in the field of network security. To address this difficulty, a wide variety of machine learning and deep learning models have been used to identify and mitigate zero-day attacks, and the models have been assessed in both binary and multi-class classification settings. The objective of this work is to carry out a thorough comparison and analysis of these models, including the impact of class imbalance, utilizing SHAP (SHapley Additive exPlanations) explainability approaches. Class imbalance is a prevalent problem in cybersecurity datasets, characterized by a considerable disparity between the number of attack instances and non-attack instances. By equalizing the dataset, we guarantee equitable representation of both categories, thus preventing bias toward the dominant category during the training and assessment of the model. Moreover, the application of SHAP XAI facilitates a deeper comprehension of model predictions, empowering analysts to examine the fundamental features that contribute to the detection of zero-day attacks.
Authored by C.K. Sruthi, Aswathy Ravikumar, Harini Sriraman
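A minimal sketch of the workflow this abstract describes, using synthetic data in place of a real traffic capture: oversample the minority class with SMOTE, train a classifier, and attribute its predictions with the SHAP library. All dataset parameters below are assumptions.

```python
import shap
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a network-traffic dataset: ~90% benign flows,
# ~10% attacks, mimicking the class imbalance the paper discusses.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=0)

# Equalize the classes so training is not biased toward benign traffic.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# SHAP attributes each prediction to individual features, showing which
# flow attributes drove a detection.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
# shap.summary_plot(shap_values, X_te) would visualize global importance.
```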
The Internet of Things (IoT) refers to the growing network of connected physical objects embedded with sensors, software and connectivity. While IoT has potential benefits, it also introduces new cyber security risks. This paper provides an overview of IoT security issues, vulnerabilities, threats, and mitigation strategies. The key vulnerabilities arising from IoT's scale, ubiquity and connectivity include inadequate authentication, lack of encryption, poor software security, and privacy concerns. Common attacks against IoT devices and networks include denial of service, ransomware, man-in-the-middle, and spoofing. An analysis of recent literature highlights emerging attack trends like swarm-based DDoS, IoT botnets, and automated large-scale exploits. Recommended techniques to secure IoT include building security into architecture and design, access control, cryptography, regular patching and upgrades, activity monitoring, incident response plans, and end-user education. Future technologies like blockchain, AI-enabled defense, and post-quantum cryptography can help strengthen IoT security. Additional focus areas include shared threat intelligence, security testing, certification programs, international standards and collaboration between industry, government and academia. A robust multilayered defense combining preventive and detective controls is required to combat rising IoT threats. This paper provides a comprehensive overview of the IoT security landscape and identifies areas for continued research and development.
Authored by Luis Cambosuela, Mandeep Kaur, Rani Astya
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) use machine learning for packet-level network traffic analysis, matching patterns that signify an attack. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks. However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
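To make the idea of modeling legitimate behavior concrete, here is a hypothetical PyTorch sketch of an autoencoder-based anomaly detector for an IDS: trained only on benign traffic features, it flags inputs it reconstructs poorly. The feature count and threshold are placeholder assumptions.

```python
import torch
import torch.nn as nn

class TrafficAutoencoder(nn.Module):
    """Hypothetical sketch: learns to reconstruct benign traffic
    feature vectors; attacks reconstruct poorly and stand out."""

    def __init__(self, n_features=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU())
        self.decoder = nn.Linear(8, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flag_anomalies(model, x, threshold):
    """Reconstruction error above `threshold` (tuned on benign
    validation data) marks a flow as a potential intrusion."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1)
    return err > threshold
```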
In an environment where terrorist group actions heavily predominate, the study introduces novel modeling tools adept at controlling, coordinating, manipulating, detecting, and tracing drones. Modern humans now need to simulate their surroundings in order to boost their comfort and productivity at work, and the ability to imitate a person's everyday work has undergone tremendous advancement. A simulation is a representation of how a system or process would work in the real world.
Authored by Soumya V, S. Sujitha, Mohan R, Sharmi Kanaujia, Sanskriti Agarwalla, Shaik Sameer, Tabasum Manzoor
As vehicles increasingly embed digital systems, new security vulnerabilities are also being introduced. Computational constraints make it challenging to add security oversight layers on top of core vehicle systems, especially when the security layers rely on additional deep learning models for anomaly detection. To improve security-aware decision-making for autonomous vehicles (AV), this paper proposes a bi-level security framework. The first security level consists of a one-shot resource allocation game that enables a single vehicle to fend off an attacker by optimizing the configuration of its intrusion prevention system based on risk estimation. The second level relies on a reinforcement learning (RL) environment where an agent is responsible for forming and managing a platoon of vehicles on the fly while also dealing with a potential attacker. We solve the first problem using a minimax algorithm to identify optimal strategies for each player. Then, we train RL agents and analyze their performance in forming security-aware platoons. The trained agents demonstrate superior performance compared to our baseline strategies that do not consider security risk.
Authored by Dominic Phillips, Talal Halabi, Mohammad Zulkernine
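As a toy illustration of the first security level, the sketch below solves a pure-strategy minimax over a hypothetical risk matrix; the paper's actual game formulation, payoffs, and solution method are not reproduced here, and all values are illustrative.

```python
import numpy as np

# Hypothetical payoff matrix: rows are defender IPS configurations,
# columns are attacker actions; entries are the defender's estimated
# risk (lower is better for the defender). Values are illustrative.
risk = np.array([
    [0.2, 0.7, 0.5],
    [0.4, 0.3, 0.6],
    [0.6, 0.4, 0.2],
])

# Pure-strategy minimax: the defender picks the configuration whose
# worst-case risk (the attacker's best response) is smallest.
worst_case = risk.max(axis=1)
best_config = int(worst_case.argmin())
print(f"config {best_config}, worst-case risk {worst_case[best_config]:.1f}")
```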
In coalition military operations, secure and effective information sharing is vital to the success of the mission. Protected Core Networking (PCN) provides a way for allied nations to securely interconnect their networks to facilitate the sharing of data. PCN, and military networks in general, face unique security challenges. Heterogeneous links and devices are deployed in hostile environments, while motivated adversaries launch cyberattacks at ever-increasing pace, volume, and sophistication. Humans cannot defend these systems and networks, not only because the volume of cyber events is too great, but also because there are not enough cyber defenders situated at the tactical edge. Thus, autonomous, machine-speed cyber defense capabilities are needed to protect mission-critical information systems from cyberattacks and system failures. This paper discusses the motivation for adding autonomous cyber defense capabilities to PCN and outlines a path toward implementing these capabilities. We propose to leverage existing reference architectures, frameworks, and enabling technologies, in order to adapt autonomous cyber defense concepts to the PCN context. We highlight expected challenges of implementing autonomous cyber defense agents for PCN, including: defining the state space and action space that will be necessary for monitoring and for generating recovery plans; implementing a suite of models, sensors, actuators, and agents specific to the PCN context; and designing metrics and experiments to measure the efficacy of such a system.
Authored by Alexander Velazquez, Joseph Mathews, Roberto Lopes, Tracy Braun, Frederica Free-Nelson
This paper highlights the progress toward securing teleoperating devices over the past ten years of active technology development. The relevance of this issue lies in the widespread development of teleoperating systems, of which only a small number are approved for operations. Anomalous behavior of the operating device, caused by a disruption in the normal functioning of the system modules, can be associated with remote attacks and exploitation of vulnerabilities, which can lead to fatal consequences. There are regulations and mandates from licensing agencies such as the US Food and Drug Administration (FDA) that place restrictions on the architecture and components of teleoperating systems. These requirements are also evolving to meet new cybersecurity threats. In particular, the attention of consumers and safety regulatory agencies is drawn to the threat of compromised hardware modules along with software insecurity. Recently, detailed security frameworks and protocols for teleoperating devices have appeared. However, intelligent autonomous controllers for analyzing anomalous and suspicious actions in the system remain unaddressed, as do emergency protocols from a cybersecurity point of view. This work provides a new approach to the intraoperative cybersecurity of intelligent teleoperative surgical systems, taking into account modern requirements, for implementation in the Surgical Remote Intelligent Robotic System LevshAI. The proposed principal security model allows a surgeon or autonomous agent to manage the operation process during various attacks.
Authored by Alexandra Bernadotte
Nowadays, the Internet has been greatly popularized and has penetrated all aspects of people's lives, and on campuses the level of network construction has also been continuously improved. However, campus network security has become an important issue of concern to the whole society. The research in this paper focuses on this topic: based on the actual situation and characteristics of the campus network, it applies relevant network security technology to develop an optimized model for an intelligent campus network security monitoring system. Module tests and performance tests of the optimized system verify that it can effectively prevent network attacks and monitor campus network security, ensuring the practicability and scientific soundness of the system.
Authored by Yuanyuan Liu, Jingtao Lan
Cooperative autonomous systems are a priority objective for military research & development of unmanned vehicles. Drone teams are one of the most prominent applications of cooperative unmanned systems and represent an ideal solution for providing both autonomous detection and recognition within security surveillance and monitoring. Here, a drone team may be arranged as a mobile and cooperative sensor network, whose coordination mechanism shall ensure real-time reconfiguration and sensing task balancing within the team. This work proposes a dynamic and decentralized mission planner of a drone team to attain a cooperative behaviour concerning detection and recognition for security surveillance. The design of the planner exploits multi-agent task allocation and game theory, and is based on the theory of learning in games to implement a scalable and resilient system. Model-in-the-loop simulation results are reported to validate the effectiveness of the proposed approach.
Authored by Vittorio Castrillo, Ivan Iudice, Domenico Pascarella, Gianpaolo Pigliasco, Angela Vozella
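The planner itself is based on learning in games; purely as an illustration of decentralized sensing-task balancing, here is a hypothetical greedy-auction sketch in which each drone bids its task cost plus its current load. The cost matrix is random and not taken from the paper.

```python
import numpy as np

# Illustrative only: 4 drones x 6 sensing tasks with random costs.
rng = np.random.default_rng(0)
cost = rng.random((4, 6))

assignment = {}
load = np.zeros(4)
for task in np.argsort(cost.min(axis=0)):  # cheapest tasks first
    # Each drone's bid is its cost plus current load, so tasks
    # spread across the team rather than piling onto one drone.
    bids = cost[:, task] + load
    winner = int(bids.argmin())
    assignment[int(task)] = winner
    load[winner] += cost[winner, task]
print(assignment)
```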
Multi-agent systems offer the advantage of performing tasks in a distributed and decentralized manner, thereby increasing efficiency and effectiveness. However, building these systems also presents challenges in terms of communication, security, and data integrity. Blockchain technology has the potential to address these challenges and to revolutionize the way that data is stored and shared, by providing a tamper-evident log of events in event-driven distributed multi-agent systems. In this paper, we propose a blockchain-based approach for event-sourcing in such systems, which allows for the reliable and transparent recording of events and state changes. Our approach leverages the decentralized nature of blockchains to provide a tamper-resistant event log, enabling agents to verify the integrity of the data they rely on.
Authored by Ayman Cherif, Youssef Achir, Mohamed Youssfi, Mouhcine Elgarej, Omar Bouattane
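A minimal sketch of the tamper-evident structure underlying such a log, assuming JSON-serializable event payloads: each entry embeds the hash of its predecessor, so altering any past event breaks the chain that agents verify. A real blockchain adds decentralized consensus on top of this idea.

```python
import hashlib
import json
import time

def append_event(log, payload):
    """Append an event whose hash covers its payload and the previous
    entry's hash, forming a verifiable chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    event = {"ts": time.time(), "payload": payload, "prev": prev_hash}
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    log.append(event)
    return log

def verify(log):
    """Agents re-derive every hash to check the log's integrity."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "payload", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False  # some past event was altered
        prev = e["hash"]
    return True
```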
Phishing is a method of online fraud in which attackers aim to gain access to computer systems for monetary benefits or personal gain. In this case, the attackers pose as legitimate entities to obtain users' sensitive information. Phishing has been a significant concern over the past few years. Firms are recording an increase in phishing attacks primarily aimed at the firm's intellectual property and the employees' sensitive data. As a result, these attacks force firms to spend more on information security, in both technology-centric and human-centric approaches. With the advancements in cyber-security over the last ten years, many techniques have evolved to detect phishing-related activities in websites and emails. This study focuses on the latest techniques used for detecting phishing attacks, including the usage of visual selection features, Machine Learning (ML), and Artificial Intelligence (AI) to identify phishing attacks. New strategies for identifying phishing attacks are evolving, but limited standardized knowledge on phishing identification and mitigation is available from user awareness training. So, this study also focuses on the role of security-awareness campaigns in minimizing the impact of phishing attacks. There are many approaches to training users against these attacks, such as persona-centred training, anti-phishing techniques, visual discrimination training, and the usage of spam filters, robust firewalls and infrastructure, dynamic technical defense mechanisms, and third-party certified software to prevent phishing attacks from happening. Therefore, the purpose of this paper is to carry out a systematic analysis of the literature to assess the state of knowledge in prominent scientific journals on the identification and prevention of phishing. Forty-three journal articles with the perspective of phishing detection and prevention through awareness training were reviewed from 2011 to 2020. This timely systematic review also focuses on the gaps identified in the selected primary studies and future research directions in this area.
Authored by Kanchan Patil, Sai Arra
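As one small example of the ML techniques such reviews cover, the sketch below trains a classifier on a few lexical URL features commonly cited in phishing detection. The features, URLs, and labels are illustrative toys, not drawn from the reviewed systems.

```python
from urllib.parse import urlparse

from sklearn.linear_model import LogisticRegression

def url_features(url):
    """A few lexical cues often used in phishing detectors (toy set)."""
    host = urlparse(url).netloc
    return [
        len(url),                              # phishing URLs tend to be long
        url.count("."),                        # many nested subdomains
        int("@" in url),                       # '@' can hide the real host
        int(host.replace(".", "").isdigit()),  # raw IP address as host
        int("-" in host),                      # hyphenated look-alike domains
    ]

# Toy training data: 1 = phishing, 0 = legitimate.
urls = ["http://paypal.com-login.example.net/verify", "https://example.org"]
labels = [1, 0]
clf = LogisticRegression().fit([url_features(u) for u in urls], labels)
print(clf.predict([url_features("http://192.168.0.1/@secure-login")]))
```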