While cloud-based deep learning offers high-accuracy inference, it poses privacy risks by exposing sensitive data to untrusted servers. In this paper, we explore the feasibility of using steganography to preserve inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions that employ steganography to make sensitive images invisible during the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is to turn this vulnerability into a weapon for data privacy, enabling the DNN to misclassify a stego image into the class of the sensitive image hidden within it. The main difference between the two solutions is that GHOST retrains the DNN into a poisoned network that learns the hidden features of sensitive images, whereas GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt an edge-cloud collaborative framework. Compared with previous solutions, this is the first work that successfully integrates steganography with the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography is highly effective for accuracy-aware privacy protection in deep learning.
Authored by Qin Liu, Jiamin Yang, Hongbo Jiang, Jie Wu, Tao Peng, Tian Wang, Guojun Wang
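The abstract above hinges on hiding a sensitive image inside an innocuous cover. As a loose illustration of the hiding step only, here is a minimal least-significant-bit (LSB) sketch; GHOST's actual embedding is paired with a DNN and is not specified here, so treat the functions below as hypothetical.

```python
# Minimal LSB steganography sketch (illustrative only; not GHOST's scheme).

def embed(cover, secret_bits):
    """Hide a list of bits in the LSBs of cover pixel values (0-255)."""
    assert len(secret_bits) <= len(cover)
    stego = list(cover)
    for i, bit in enumerate(secret_bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite the lowest bit only
    return stego

def extract(stego, n_bits):
    """Recover the first n_bits hidden bits."""
    return [p & 1 for p in stego[:n_bits]]

cover = [200, 13, 77, 128, 255, 0]
secret = [1, 0, 1, 1]
stego = embed(cover, secret)
assert extract(stego, 4) == secret
# Each pixel changes by at most 1, so the stego image looks like the cover:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

The point of the sketch is the asymmetry: the cover image is what an observer (or the cloud) sees, while the hidden bits carry the sensitive content.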
Osmotic Computing is emerging as one of the paradigms used to realize the Cloud Continuum, and its popularity is closely tied to its capacity to embrace hot topics such as containers, microservices, orchestration, and Function as a Service (FaaS). The Osmotic principle is quite simple: it aims to create a federated heterogeneous infrastructure where an application's components can move smoothly following a concentration rule. In this work, we aim to solve two major constraints of Osmotic Computing: the inability to manage dynamic access rules for accessing the applications inside the Osmotic infrastructure, and the inability to keep access to these applications alive and secure even in the presence of network disconnections. To overcome these limits, we designed and implemented a new Osmotic component that acts as an eventually consistent, distributed, peer-to-peer access management system. This component maintains a local Identity and Access Manager (IAM) that permits access at any time to the resources available on an Osmotic node and updates the access rules that allow or deny access to hosted applications. The component has already been integrated into a Kubernetes-based Osmotic infrastructure, and we present two typical use cases where it can be exploited.
Authored by Christian Sicari, Alessio Catalfamo, Antonino Galletta, Massimo Villari
With their variety of application verticals, smart cities represent a killer scenario for Cloud-IoT computing, e.g., fog computing. Such applications require management capable of satisfying all their requirements through suitable service placements, and of balancing QoS assurance, operational costs, deployment security and, last but not least, energy consumption and carbon emissions. This keynote discusses these aspects over a motivating use case and points to some open challenges.
Authored by Stefano Forti
With the development of 5G networking technology on the Internet of Vehicles (IoV), there are new opportunities for numerous cyber-attacks, such as in-vehicle attacks like hijacking and data theft. While numerous attempts have been made to protect against these potential attacks, many problems remain unsolved, such as developing a fine-grained access control system. This is reflected in the granularity of security as well as the related data hosted on these platforms. Among the most notable trends is the increased use of smart devices, the IoV, cloud services, and emerging technologies for accessing, storing, and processing data. Most popular authentication protocols rely on knowledge factors, which are notoriously vulnerable to subversion. Recently, the zero-trust framework has drawn huge attention, and there is an urgent need to further develop existing Continuous Authentication (CA) techniques to realize it. In this paper, we first develop a static authentication process and propose a secure protocol to generate the smart key a user employs to unlock the vehicle. We then propose a novel and secure continuous authentication system for IoVs. We present a proof of concept of our CA scheme by building a prototype that leverages commodity fingerprint sensors, NFC, and a smartphone. Our evaluations in real-world settings demonstrate the appropriateness of the CA scheme, and security analysis of our proposed digital-key protocol suggests enhanced security against known attack vectors.
Authored by Yangxu Song, Frank Jiang, Syed Shah, Robin Doss
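A digital-key protocol of the kind described above typically rests on a challenge-response exchange so that the key cannot be replayed. The sketch below shows that core idea only, under the assumption of a pre-shared key provisioned during the static authentication phase; the paper's protocol additionally involves NFC and fingerprint sensors, and all names here are illustrative.

```python
import hmac, hashlib, secrets

# Hypothetical challenge-response sketch for a vehicle digital key.

def vehicle_challenge():
    return secrets.token_bytes(16)  # fresh nonce defeats replay attacks

def phone_response(shared_key, challenge):
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def vehicle_verify(shared_key, challenge, response):
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

key = secrets.token_bytes(32)  # provisioned during static authentication
c = vehicle_challenge()
r = phone_response(key, c)
assert vehicle_verify(key, c, r)
assert not vehicle_verify(key, vehicle_challenge(), r)  # stale response rejected
```

Continuous authentication would repeat an exchange like this periodically, combined with biometric signals, rather than trusting a single unlock event.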
The increasing data generation rate and the proliferation of deep learning applications have led to the development of machine learning-as-a-service (MLaaS) platforms by major cloud providers. The existing MLaaS platforms, however, fall short in protecting clients' private data. Recent distributed MLaaS architectures such as federated learning have also been shown to be vulnerable to a range of privacy attacks. Such vulnerabilities motivated the development of privacy-preserving MLaaS techniques, which often use complex cryptographic primitives. Such approaches, however, demand abundant computing resources, which undermines the low-latency nature of evolving applications such as autonomous driving. To address these challenges, we propose SCLERA, an efficient MLaaS framework that utilizes a trusted execution environment for secure execution of clients' workloads. SCLERA features a set of optimization techniques to reduce the computational complexity of the offloaded services and achieve low-latency inference. We assessed SCLERA's efficacy using image/video analytic use cases such as scene detection. Our results show that SCLERA achieves up to 23× speed-up compared to the baseline secure model execution.
Authored by Abhinav Kumar, Reza Tourani, Mona Vij, Srikathyayani Srikanteswara
The growing amount of data and advances in data science have created a need for a new kind of cloud platform that provides users with flexibility, strong security, and the ability to couple with supercomputers and edge devices through high-performance networks. We have built such a nation-wide cloud platform, called "mdx" to meet this need. The mdx platform's virtualization service, jointly operated by 9 national universities and 2 national research institutes in Japan, launched in 2021, and more features are in development. Currently mdx is used by researchers in a wide variety of domains, including materials informatics, geo-spatial information science, life science, astronomical science, economics, social science, and computer science. This paper provides an overview of the mdx platform, details the motivation for its development, reports its current status, and outlines its future plans.
Authored by Toyotaro Suzumura, Akiyoshi Sugiki, Hiroyuki Takizawa, Akira Imakura, Hiroshi Nakamura, Kenjiro Taura, Tomohiro Kudoh, Toshihiro Hanawa, Yuji Sekiya, Hiroki Kobayashi, Yohei Kuga, Ryo Nakamura, Renhe Jiang, Junya Kawase, Masatoshi Hanai, Hiroshi Miyazaki, Tsutomu Ishizaki, Daisuke Shimotoku, Daisuke Miyamoto, Kento Aida, Atsuko Takefusa, Takashi Kurimoto, Koji Sasayama, Naoya Kitagawa, Ikki Fujiwara, Yusuke Tanimura, Takayuki Aoki, Toshio Endo, Satoshi Ohshima, Keiichiro Fukazawa, Susumu Date, Toshihiro Uchibayashi
The Internet of Things (IoT) aims to introduce pervasive computation into the human environment. Processing on a cloud platform is suggested due to IoT devices' resource limitations, but high latency while transmitting IoT data from the edge network to the cloud is the primary limitation. Modern IoT applications therefore frequently use fog computing, a newer architecture, as a replacement for the cloud, since it promises faster response times. In this work, a fog layer is introduced into a smart vital sign monitor design in order to serve requests faster. Context-aware computing makes use of environmental or situational data around the object to proactively invoke services based on its usable content. Here, the fog layer is intended to provide local data storage, data preprocessing, context awareness, and timely analysis.
Authored by K. Revathi, T. Tamilselvi, K. Tamilselvi, P. Shanthakumar, A. Samydurai
Forming a secure autonomous vehicle group is extremely challenging, since we have to consider the threats to and vulnerabilities of autonomous vehicles. Existing studies focus on communications among risk-free autonomous vehicles and lack metrics to measure passenger security and cargo values. This work proposes a novel autonomous vehicle group formation method. We introduce risk assessment scoring to assess passenger security and cargo values, and propose an autonomous vehicle group formation method based on it. Our vehicle group is composed of a master node and a number of core and border nodes. Extensive simulation results show that our method outperforms a Connectivity Prediction-based Dynamic Clustering model and a Low-InDependently clustering architecture in terms of node survival time, average change count of master nodes, and average risk assessment scoring.
Authored by Jiujun Cheng, Mengnan Hou, MengChu Zhou, Guiyuan Yuan, Qichao Mao
A mail spoofing attack is a harmful activity that modifies the source of a mail message and tricks users into believing that the message originated from a trusted sender, whereas the actual sender is the attacker. Building on previous work, this paper analyzes the transmission process of an email. Our work identifies new attacks capable of bypassing SPF, DMARC, and Mail User Agent protection mechanisms. By conducting these attacks, we can forge much more realistic emails that penetrate well-known mail service providers such as Tencent. Through a large-scale experiment on these well-known mail service providers, we find that some of them are affected by the related vulnerabilities, and some of the bypass methods differ from previous work. We found that this potential security problem can only be effectively addressed when all email service providers share a standard view of security and can configure appropriate security policies for each email delivery node. In addition, we propose a mitigation method to defend against these attacks. We hope our work draws the attention of email service providers and users and effectively reduces the risk of phishing email attacks.
Authored by Beiyuan Yu, Pan Li, Jianwei Liu, Ziyu Zhou, Yiran Han, Zongxiao Li
Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy their computation- and data-intensive applications without having to invest in infrastructure. Many firms use cloud computing as a means of relocating and maintaining resources outside their enterprise, regardless of the cloud server's location. However, preserving data in the cloud raises a number of issues related to data loss, accountability, security, and so on. Such fears become a great barrier to the adoption of cloud services by users. Cloud computing offers large-scale storage for internet users, with cost based on usage of the facilities provided. Privacy protection of a user's data is a challenge, because users cannot inspect the internal operations of the service providers. Hence, it becomes necessary to monitor the usage of the client's data in the cloud. In this research, we suggest an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. In the suggested system, multifactor authentication for user login to the cloud and homomorphic encryption for data storage with integrity verification have been implemented effectively. To illustrate the efficacy of the proposed strategy, an experimental investigation was conducted.
Authored by M. Rupasri, Anupam Lakhanpal, Soumalya Ghosh, Atharav Hedage, Manoj Bangare, K. Ketaraju
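The property that makes homomorphic encryption attractive for the record-storage setting above is that the cloud can combine ciphertexts without seeing plaintexts. A toy Paillier cryptosystem illustrates the additively homomorphic case; the tiny parameters and code below are for illustration only and are not the paper's actual construction.

```python
import math, random

# Toy Paillier cryptosystem (insecure, tiny primes) showing that multiplying
# ciphertexts adds the underlying plaintexts modulo n.

p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(42), encrypt(100)
# The server adds two encrypted values without decrypting either:
assert decrypt((c1 * c2) % n2) == 142
```

In practice a library with production-strength parameters would be used; the sketch only shows why an untrusted store can still compute aggregates over encrypted records.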
Smart security solutions are in high demand given the ever-increasing vulnerabilities within the IT domain. Adjusting to a work-from-home (WFH) culture has become mandatory while maintaining the required core security principles, so implementing and maintaining a secure smart home system has become even more challenging. ARGUS provides overall network security coverage for both incoming and outgoing traffic, a firewall, an adaptive bandwidth management system, and a sophisticated CCTV surveillance capability. ARGUS is implemented on an existing router, incorporating cloud and machine learning (ML) technology to ensure seamless connectivity across multiple devices, including IoT devices, at a low migration cost for the customer. The aggregation of the above features makes ARGUS an ideal solution for existing smart home system service providers and users where hardware and infrastructure are also allocated. ARGUS was tested on a small-scale smart home environment with a Raspberry Pi 4 Model B controller. Its intrusion detection system identified intrusions with 96% accuracy, while the physical surveillance system predicts the user with 81% accuracy.
Authored by R.M. Ratnayake, G.D.N.D.K. Abeysiriwardhena, G.A.J. Perera, Amila Senarathne, R. Ponnamperuma, B.A. Ganegoda
The phenomenon known as "Internet ossification" describes the process through which certain components of the Internet's original design have become effectively immovable. This presents considerable challenges to the adoption of IPv6 and makes it hard to implement IP multicast services. For new applications such as data centers, cloud computing, and virtualized networks, the advantages of Internet growth include improved network availability, improved intra- and inter-domain routing, and seamless user connectivity throughout the network. To meet these needs, Software Defined Networking (SDN) has been developed for the Future Internet. Compared with current networks, this new paradigm emphasizes separating the control plane from the network-forwarding components. In other words, this decoupling enables the installation of control plane software (such as an OpenFlow controller) on computing platforms that are substantially more powerful than traditional network equipment (such as switches and routers). This research describes Mininet's routing techniques for a virtualized software-defined network. There are two obstacles to overcome when attempting to integrate SDN into an LTE/WiFi network. The first is that external network load monitoring tools must be used to measure QoS settings; because of the increased demand for real-time load balancing methods, service providers cannot adopt QoS-based routing. To overcome these issues, this research suggests a router configuration method. Experiments have shown that the network coefficient matrix routing arrangement works, so it may provide an answer to the above-mentioned concerns. The Java-based SDN controller outperforms traditional routing systems by nine times on average in peak signal-to-noise ratio. The study's final finding suggests that the field's future can be forecast.
We must have a thorough understanding of this emerging paradigm to solve numerous difficulties, such as creating the Future Internet and dealing with its ossification problem. To address these issues, we first examine current technologies and a wide range of current and future SDN projects before delving into the most important issues in this field in depth.
Authored by Kumar Gopal, M Sambath, Angelina Geetha, Himanshu Shekhar
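The "network coefficient matrix routing arrangement" mentioned above is not defined in the abstract; one plausible reading is shortest-path selection over a matrix whose entries combine link delay with current load. The sketch below implements that reading with Dijkstra's algorithm; the matrix construction is our assumption, not the paper's.

```python
import heapq

# Load-aware path selection over a cost matrix (assumed interpretation).
# cost[i][j] combines link delay with current load; float('inf') = no link.

def best_path(cost, src, dst):
    n = len(cost)
    dist = [float('inf')] * n
    prev = [None] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in range(n):
            nd = d + cost[u][v]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node is not None:        # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    return path[::-1], dist[dst]

INF = float('inf')
cost = [[INF, 1, 4, INF],
        [1, INF, 2, 6],
        [4, 2, INF, 3],
        [INF, 6, 3, INF]]
path, d = best_path(cost, 0, 3)
assert path == [0, 1, 2, 3] and d == 6
```

An SDN controller would recompute such paths as the load entries change and push the resulting flow rules to switches.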
The technological advances and convergence of cyber-physical systems, smart sensors, short-range wireless communications, cloud computing, and smartphone apps have driven the proliferation of Internet of Things (IoT) devices in smart homes and smart industry. In light of the high heterogeneity of IoT systems, the prevalence of system vulnerabilities in IoT devices and applications, and the broad attack surface across the entire IoT protocol stack, a fundamental and urgent research problem in IoT security is how to effectively collect, analyze, extract, model, and visualize the massive network traffic of IoT devices in order to understand what is happening to them. Towards this end, this paper develops and demonstrates an end-to-end system with three key components, i.e., an IoT network traffic monitoring system via programmable home routers, a backend IoT traffic behavior analysis system in the cloud, and a frontend IoT visualization system via smartphone apps, for monitoring, analyzing, and visualizing the network traffic behavior of heterogeneous IoT devices in smart homes. The main contribution of this demonstration paper is a novel system with an end-to-end process for collecting, analyzing, and visualizing IoT network traffic in smart homes.
Authored by Keith Erkert, Andrew Lamontagne, Jereming Chen, John Cummings, Mitchell Hoikka, Kuai Xu, Feng Wang
Nowadays, online cloud storage networks can be accessed by third parties. Businesses that host large data centers buy or rent storage space from individuals who need to store their data. According to customer needs, data hub operators virtualize the data and expose the cloud storage for storing data. In practice, the resources may be spread across numerous servers. Data resilience is a primary requirement for all storage methods, and for routines in a distributed data center, a distributed erasure code is appropriate. A safe cloud cache solution, AES-UCODR, is proposed to decrease I/O overheads for multi-block updates in proxy re-encryption systems. Its performance is evaluated using a real-world finance-sector workload.
Authored by Devaki K, Leena L
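The data-resilience idea behind distributed coding can be shown with the simplest possible case, a single XOR parity block that lets any one lost block be rebuilt. This is a deliberately minimal sketch; AES-UCODR's actual code and its re-encryption layer are more elaborate and are not reproduced here.

```python
# Single-parity erasure-coding sketch: tolerate the loss of any one block.

def add_parity(blocks):
    """Append the XOR of all equal-length blocks as a parity block."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return blocks + [parity]

def recover(stored, lost_index):
    """Rebuild one missing block by XOR-ing all surviving blocks."""
    out = bytes(len(next(b for b in stored if b is not None)))
    for i, b in enumerate(stored):
        if i != lost_index:
            out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"serv", b"er01", b"data"]
striped = add_parity(data)        # 3 data blocks + 1 parity block
striped[1] = None                 # one server fails
assert recover(striped, 1) == b"er01"
```

Production systems use Reed-Solomon-style codes that tolerate multiple simultaneous failures, but the recovery principle is the same.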
AbuSaif is a human-like social robot designed and built at the UAE University's Artificial Intelligence and Robotics Lab. Like most existing social robots, AbuSaif was initially operated by a classical personal computer (PC); thus, most of the robot's functionalities are limited by the capacity of that mounted PC. To overcome this, we propose a web-based platform that takes advantage of clustering in cloud computing. Our proposed platform will increase the operational capability and functionality of AbuSaif, especially for running artificial intelligence algorithms. We believe that the robot will become more intelligent and autonomous using our proposed web platform.
Authored by Mohammed Abduljabbar, Fady Alnajjar
Network security isolation technology is an important means of protecting the internal information security of enterprises. Generally, isolation is achieved through traditional network devices such as firewalls and gatekeepers, but their security rules are relatively rigid and cannot adequately meet flexible and changing business needs. Through a double sandbox structure created for each user, users' virtual machines are isolated from each other and security is ensured: a virtual disk created inside the virtual machine serves as the user's storage sandbox, and reads and writes to the disk are encrypted. This paper discusses the shortcomings of traditional network isolation methods and expounds the application of cloud desktop network isolation technology based on VMware technology in universities.
Authored by Kai Ye
Virtual machine (VM) based application sandboxes leverage the strong isolation guarantees of virtualization techniques to address several security issues through effective containment of malware. Specifically, on end-user physical hosts, potentially vulnerable applications can be isolated from each other (and from the host) using VM-based sandboxes. However, sharing data across applications executing within different sandboxes is a non-trivial requirement for end-user systems because, at the end of the day, all applications are used by the end user who owns the device. Existing file sharing techniques compromise security or efficiency, especially given the lack of technical expertise of many end users today. In this paper, we propose MicroBlind, a security-hardened file sharing framework for virtualized sandboxes that supports efficient data sharing across different application sandboxes. MicroBlind provides a simple file sharing management API through which the end user can orchestrate file sharing across different VM sandboxes in a secure manner. To demonstrate the efficacy of MicroBlind, we perform a comprehensive empirical analysis against existing data sharing techniques (augmented for the sandboxing setup) and show that MicroBlind provides improved security and efficiency.
Authored by Saketh Maddamsetty, Ayush Tharwani, Debadatta Mishra
With the advent of cloud storage services, many users store their data in the cloud to save storage costs. However, this has led to many security concerns, one of the most important of which is ensuring data integrity. Public verification schemes can employ a third-party auditor to perform data auditing on behalf of the user, but most are vulnerable to procrastinating auditors who may not perform audits on time. These schemes also lack fair arbitration, i.e., a way to punish a malicious Cloud Service Provider (CSP) and compensate users whose data have been corrupted. On the other hand, a CSP might store redundant data, which increases its storage cost and the user's computational cost for data auditing. In this paper, we propose a Blockchain-based public auditing and deduplication scheme with a fair arbitration system that counters procrastinating auditors. The key idea is to require auditors to record each verification using a smart contract and store the result in a Blockchain as a transaction. Our scheme can detect and punish procrastinating auditors and compensate users in the case of data loss. Additionally, it can detect and delete duplicate data, improving storage utilization and reducing the computational cost of data verification. Experimental evaluation demonstrates that our scheme is provably secure and does not incur overhead compared to existing public auditing techniques, while offering the additional feature of verifying the auditor's performance.
Authored by Tariqul Islam, Kamrul Hasan, Saheb Singh, Joon Park
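The anti-procrastination mechanism above amounts to an append-only log of audit records plus a deadline check. The paper realizes this with smart contracts on a Blockchain; the class and field names below are illustrative stand-ins for that on-chain logic, kept in plain Python to show just the accountability rule.

```python
# Simplified audit-ledger sketch: each verification is recorded as an
# immutable transaction, and an auditor that exceeds its verification
# interval is flagged as procrastinating.

class AuditLedger:
    def __init__(self, interval):
        self.interval = interval   # max allowed seconds between audits
        self.records = []          # append-only, like chain transactions

    def record_audit(self, auditor, result, ts):
        self.records.append({"auditor": auditor, "result": result, "ts": ts})

    def procrastinated(self, auditor, now):
        stamps = [r["ts"] for r in self.records if r["auditor"] == auditor]
        last = max(stamps, default=0)  # never audited counts as time 0
        return now - last > self.interval

ledger = AuditLedger(interval=3600)
ledger.record_audit("tpa-1", "pass", ts=1000)
assert not ledger.procrastinated("tpa-1", now=2000)   # audited recently
assert ledger.procrastinated("tpa-1", now=10000)      # deadline missed
```

On a real chain the timestamps and records would be tamper-evident by construction, which is what lets the arbitration step penalize the auditor fairly.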
Web-based Application Programming Interfaces (APIs) are often described using SOAP, OpenAPI, and GraphQL specifications. These specifications provide a consistent way to define web services and enable automated fuzz testing, and many fuzzers take advantage of them. However, in an enterprise setting, the tools are usually installed and scaled by individual teams, leading to duplicated effort. There is a need for an enterprise-wide fuzz testing solution that provides shared, cost-efficient, off-nominal testing at scale, where fuzzers can be plugged in as needed. Internet cloud-based fuzz-testing-as-a-service solutions mitigate scalability concerns but are not always feasible, as they require artifacts to be uploaded to external infrastructure; corporate policies typically prevent sharing artifacts with third parties due to cost, intellectual property, and security concerns. We utilize API specifications and combine them with cluster computing elasticity to build an automated, scalable framework that can fuzz multiple apps at once while retaining the trust boundary of the enterprise.
Authored by Riyadh Mahmood, Jay Pennington, Danny Tsang, Tan Tran, Andrea Bogle
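Spec-driven fuzzing as described above works because the specification tells the fuzzer each parameter's name and type, from which it can generate boundary and random probes. The sketch below uses a hypothetical, heavily simplified OpenAPI-style parameter list; real fuzzers parse full specifications and drive actual HTTP requests.

```python
import random, string

# Generate fuzz cases from a simplified OpenAPI-style parameter description.

def fuzz_value(schema, rng):
    t = schema.get("type")
    if t == "integer":
        # boundary values plus a random probe
        return rng.choice([0, -1, 2**31 - 1, rng.randint(-10**6, 10**6)])
    if t == "string":
        n = rng.randint(0, 64)
        return "".join(rng.choice(string.printable) for _ in range(n))
    if t == "boolean":
        return rng.choice([True, False])
    return None  # unknown types left to a fuller generator

def fuzz_request(spec, rng):
    return {p["name"]: fuzz_value(p["schema"], rng) for p in spec["parameters"]}

spec = {"parameters": [{"name": "id", "schema": {"type": "integer"}},
                       {"name": "q", "schema": {"type": "string"}}]}
rng = random.Random(7)  # seeded for reproducible fuzz runs
case = fuzz_request(spec, rng)
assert set(case) == {"id", "q"} and isinstance(case["id"], int)
```

In the enterprise framework, generators like this would run as pluggable workers scaled out across the cluster, one spec per application under test.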
A huge number of cloud users and cloud providers are threatened by security issues arising from the adoption of cloud computing, which is a hub of virtualization that provides virtualization-based infrastructure over physically connected systems. With the rapid advancement of cloud computing technology, data protection is becoming increasingly necessary, and it is important to weigh the advantages and disadvantages when deciding whether to move to cloud computing. As a result of security and other problems in the cloud, cloud clients need more time to consider transitioning to cloud environments, and many future customers are wary of cloud adoption. Cloud computing, like any other technology, faces numerous challenges, especially in terms of cloud security. Virtualization technologies facilitate the sharing of resources among multiple users. Cloud services are protected using various models such as type-I and type-II hypervisors, OS-level virtualization, and unikernel virtualization, but these also present a variety of security issues. Unfortunately, several attacks have been developed in recent years to compromise the hypervisor and take control of all virtual machines running above it. Due to the functions it offers, it is extremely difficult to reduce the size of a hypervisor, yet it is not acceptable for a secure system design to include a large hypervisor in the Trusted Computing Base (TCB). Cloud computing service providers use virtualization to provide services; however, these methods entail handing over complete ownership of data to a third party. This paper covers a variety of topics related to virtualization protection, including a summary of various solutions and risk mitigations in the VMM (virtual machine monitor). We discuss the issues possible with a malicious virtual machine, as well as the security precautions required to handle malicious behaviors.
We examine the issues of investigating malicious behaviors in cloud computing, give a scientific categorization, and outline future directions. We have identified: i) security specifications for virtualization in cloud computing, which can be used as a starting point for securing cloud virtual infrastructure; ii) attacks that can be conducted against cloud virtual infrastructure; and iii) security solutions to protect the virtualization environment from DDoS attacks.
Authored by Tahir Alyas, Karamath Ateeq, Mohammed Alqahtani, Saigeeta Kukunuru, Nadia Tabassum, Rukshanda Kamran
With the continuous development of computer technology, informatization solutions cover all walks of life and all fields of society. For colleges and universities, teaching and scientific research are the school's basic tasks, and a school's scientific research ability affects the level of its teachers and the training of its students. Establishing a good scientific research environment has therefore become an important link in the development of universities. SR (scientific research) data is a prerequisite for SR activities, and high-quality SR data management services help ensure the quality and safety of SR data and further assist the smooth development of SR projects. Therefore, this article conducts research and practice on cloud computing-based scientific research management data services in colleges and universities. First, we analyze the current situation of SR data management in colleges and universities. The results show that the popularity of SR data management in domestic universities is much lower than in universities in Europe and the United States, and the data storage awareness of domestic researchers is relatively weak: only 46% of schools have developed SR data management services, far fewer than European and American schools. Second, we analyze the effect of CC (cloud computing) on the management of SR data in colleges and universities. The results show that 47% of researchers believe that CC benefits the management of SR data in colleges and universities by reducing research costs and improving efficiency, while the rest believe that CC can speed up data storage and improve security when applied to SR data management in colleges and universities.
Authored by Di Chen
With the development of cloud computing technology, more and more scientific researchers choose to submit scientific workflow tasks to public cloud platforms for execution. This mode effectively reduces research costs but also brings serious security risks. In response, this article summarizes the security issues currently facing cloud scientific workflows and analyzes the importance of studying them. It then analyzes, summarizes, and compares current cloud scientific workflow security methods from three perspectives: system architecture, security model, and security strategy. Finally, it offers an outlook on future development directions.
Authored by Xuanyu Liu, Guozhen Cheng, Yawen Wang, Shuai Zhang
A considerable amount of computing power is always required to perform symbolic calculations, and the reliability and effectiveness of algorithms are two of the most significant challenges observed in the field of scientific computing. The terms "feasible calculations" and "feasible computations" refer to the same idea: algorithms that are reliable and effective despite practical constraints. This study investigates different types of computing and modelling challenges, as well as the development of efficient integration methods that consider these challenges before generating accurate results. Further, this study investigates various forms of errors that occur in the process of data integration. The proposed framework is based on automata, which provides the ability to investigate a wide variety of distinct distance-bounding protocols. The framework not only makes it possible to produce computational (in)security proofs but also includes an extensive investigation of issues such as optimal space-complexity trade-offs. It is embedded within the already established symbolic framework in order to obtain a deeper understanding of distance-bounding security. It is now possible to guarantee a certain level of physical proximity without having to continually model either time or distance.
Authored by Vinod Kumar, Sanjay Pachauri
Security Oriented Deadline Aware Workflow Allocation Strategy for Infrastructure as a Service Clouds
Cloud computing is a model of service provisioning in heterogeneous distributed systems that encourages many researchers to explore its benefits and drawbacks in executing workflow applications. Recently, high-quality security protection has become a new challenge in workflow allocation. Different tasks may or may not have varied security demands, and security overhead may vary across the virtual machines (VMs) to which a task is assigned. This paper proposes a Security Oriented Deadline-Aware workflow allocation (SODA) strategy in an IaaS cloud environment to minimize the risk probability of the workflow tasks while ensuring the deadline is met in a deterministic environment. SODA picks out the task with the highest security upward rank and assigns the selected task to a trustworthy VM, trying to simultaneously satisfy each task's security demand and deadline at the maximum possible level. Simulation studies show that SODA outperforms the HEFT strategy in terms of the risk probability of the cloud system on a scientific workflow, namely CyberShake.
Authored by Mahfooz Alam, Mohammad Shahid, Suhel Mustajab
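The "security upward rank" used for task selection above is a HEFT-style recursive rank over the workflow DAG. The abstract does not give the formula, so the sketch below assumes a simple variant, rank(t) = sec(t) + cost(t) + max over successors of rank(succ); SODA's exact definition may differ.

```python
# Security-upward-rank ordering on a task DAG (assumed rank formula).

def upward_rank(tasks, succ, cost, sec):
    rank = {}
    def rec(t):
        if t not in rank:
            # exit tasks have no successors, so the max defaults to 0
            rank[t] = sec[t] + cost[t] + max(
                (rec(s) for s in succ.get(t, [])), default=0)
        return rank[t]
    for t in tasks:
        rec(t)
    return rank

tasks = ["t1", "t2", "t3", "t4"]
succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"]}   # diamond DAG
cost = {"t1": 2, "t2": 3, "t3": 1, "t4": 2}               # execution costs
sec  = {"t1": 1, "t2": 2, "t3": 3, "t4": 1}               # security demands
rank = upward_rank(tasks, succ, cost, sec)
order = sorted(tasks, key=lambda t: -rank[t])  # schedule highest rank first
assert order[0] == "t1" and order[-1] == "t4"  # entry task first, exit last
```

In SODA the task popped in this order is then matched against the trustworthy VM that can still meet its sub-deadline, which the sketch leaves out.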
Since the advent of Software Defined Networking (SDN) in 2011 and the formation of the Open Networking Foundation (ONF), SDN-inspired projects have emerged in various fields of computer networks. Almost all networking organizations are working to support the SDN concept, e.g., OpenFlow, in their products. SDN has provided great flexibility and agility in networks through application-specific control functions with a centralized controller, but it does not provide security guarantees against vulnerabilities inside applications, the data plane, or the controller platform. Because SDN can also use third-party applications, an infected application can be distributed in the network, and SDN-based systems may easily collapse. In this paper, a security threat assessment model is presented that highlights the critical areas with security requirements in SDN. Based on this model, a Security Threats Assessment and Diagnostic System (STADS) is proposed for establishing a reliable SDN framework. The proposed STADS detects and diagnoses various threats based on a specified policy mechanism when different components of the SDN communicate with the controller to fulfil network requirements. The Mininet network emulator with the Ryu controller has been used for implementation and analysis.
Authored by Pradeep Sharma, Brijesh Kumar, S.S Tyagi