Nowadays, online cloud storage networks can be accessed by third parties. Individuals and organizations that need to store their data buy or rent storage space from businesses that host large data centers. According to customer needs, data hub operators virtualize the resources and expose them as cloud storage. Physically, the resources may be spread across numerous servers. Data resilience is a primary requirement for any storage method, and for routine operations in a distributed data center, distributed erasure coding is appropriate. A secure cloud cache solution, AES-UCODR, is proposed to decrease I/O overheads for multi-block updates in proxy re-encryption systems. Its efficiency is evaluated using real-world finance-sector data.
Authored by Devaki K, Leena L
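As a hedged illustration of why block-level encryption reduces update I/O (a generic sketch, not the AES-UCODR construction or its proxy re-encryption machinery), the following fragment encrypts each block independently with AES-GCM so that updating one block re-encrypts only that block; the block size and key handling are assumptions.

```python
# Illustrative sketch only: per-block AES-GCM encryption in which a single
# block can be re-encrypted in place, so an update touches one ciphertext
# block instead of the whole object. Not the AES-UCODR scheme itself.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCK_SIZE = 4096  # hypothetical block size

def encrypt_blocks(key: bytes, data: bytes):
    """Split data into fixed-size blocks and encrypt each independently."""
    aead = AESGCM(key)
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    out = []
    for idx, block in enumerate(blocks):
        nonce = os.urandom(12)                              # fresh nonce per block
        ct = aead.encrypt(nonce, block, str(idx).encode())  # block index as AAD
        out.append((nonce, ct))
    return out

def update_block(key: bytes, encrypted, idx: int, new_plaintext: bytes):
    """Re-encrypt only the updated block; all other ciphertexts stay untouched."""
    aead = AESGCM(key)
    nonce = os.urandom(12)
    encrypted[idx] = (nonce, aead.encrypt(nonce, new_plaintext, str(idx).encode()))

key = AESGCM.generate_key(bit_length=256)
enc = encrypt_blocks(key, os.urandom(3 * BLOCK_SIZE))
update_block(key, enc, 1, b"updated block contents".ljust(BLOCK_SIZE, b"\0"))
```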
AbuSaif is a human-like social robot designed and built at the UAE University's Artificial Intelligence and Robotics Lab. Like most existing social robots, AbuSaif was initially operated by a classical personal computer (PC), so most of the robot's functionalities are limited by the capacity of that mounted PC. To overcome this, in this study we propose a web-based platform that takes advantage of clustering in cloud computing. Our proposed platform will increase the operational capability and functionality of AbuSaif, especially for running artificial intelligence algorithms. We believe the robot will become more intelligent and autonomous using our proposed web platform.
Authored by Mohammed Abduljabbar, Fady Alnajjar
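A minimal sketch, assuming Flask and a hypothetical /infer endpoint, of how a cloud-hosted web service could take AI workloads off the robot's onboard PC; it is illustrative only and not the AbuSaif platform itself.

```python
# Hypothetical offloading endpoint: the robot posts a task, a cluster-hosted
# service returns the result. Endpoint name and payload shape are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/infer", methods=["POST"])
def infer():
    payload = request.get_json(force=True)
    # In a real deployment, a cluster-hosted model would process the payload here.
    result = {"task": payload.get("task"), "label": "placeholder_response"}
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The robot-side client would then post its sensor payload to this endpoint, for example with `requests.post("http://<cluster-host>:8080/infer", json={"task": "face_recognition"})`.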
The dynamic state of networks presents a challenge for the deployment of distributed applications and protocols: ad-hoc schedules in the updating phase can lead to considerable ambiguity and many issues. By separating the control and data planes and centralizing control, Software Defined Networking (SDN) offers novel opportunities and remedies for these issues. However, a software-based centralized architecture for distributed environments introduces significant challenges, and security is a crucial one in SDN. This paper presents an in-depth study of the state of the art in security challenges and solutions for the SDN paradigm. The study led us to propose a dynamic approach to efficiently detect different security violations and incidents caused by network updates, including forwarding loops, forwarding black holes, link congestion, and network policy violations. Our solution relies on an intelligent approach based on machine learning and artificial intelligence algorithms.
Authored by Amina SAHBI, Faouzi JAIDI, Adel BOUHOULA
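The detection approach is ML-based; the sketch below is a hedged stand-in showing the general shape of such a detector: a supervised classifier over hypothetical per-update features labelled with the violation they caused. The features, labels, and model choice are assumptions, not the paper's.

```python
# Hedged sketch: classify network updates into violation types from
# illustrative features (path change, rule overlap, link load, policy score).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4))        # [path_delta, rule_overlap, link_load, policy_score]
y = rng.integers(0, 4, 500)     # 0=none, 1=loop, 2=black hole, 3=congestion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```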
Nowadays, life is much easier with the help of IoT, but due to the lack of protection and the growing number of connections, managing IoT becomes more difficult. To manage the network flow, Software Defined Networking (SDN) has been introduced. SDN has a great capability for automatic and dynamic distribution, but a centralized SDN architecture opens the door to harmful attacks on the controller. Therefore, to reduce these attacks in real time, securing an SDN-enabled IoT infrastructure over fog networks is preferred. Network enforcement decisions are authorized at the virtual switches and executed through the SDN network. Moreover, SDN switches are generally powerful machines that can simultaneously serve as fog nodes, so SDN is a good fit for IoT fog networks. In addition, the centralized, software-based channel protection management solution allows the necessary crypto keys to be distributed dynamically in order to establish Datagram Transport Layer Security (DTLS) tunnels between the IoT devices when demanded by the cybersecurity framework. In an extensive deployment of this combination, CPU usage across devices is observed to be around 30% and latencies are in the millisecond range, demonstrating the feasibility of the system with low delay. Compared with traditional SDN, energy consumption is observed to be reduced by more than 90%.
Authored by Venkata Mohan, Sarangam Kodati, V. Krishna
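A small illustrative sketch (all names and the protocol flow are assumptions) of the centralized key-management step: the controller issues a fresh pre-shared key for a device/fog-node pair, which a DTLS-PSK handshake could then use.

```python
# Hypothetical key manager on the controller side; distribution of the issued
# key over a secure control channel is outside this snippet.
import secrets

class KeyManager:
    def __init__(self):
        self._keys = {}  # (device_id, fog_node_id) -> pre-shared key

    def issue_key(self, device_id: str, fog_node_id: str) -> bytes:
        key = secrets.token_bytes(32)
        self._keys[(device_id, fog_node_id)] = key
        return key

    def revoke_key(self, device_id: str, fog_node_id: str) -> None:
        self._keys.pop((device_id, fog_node_id), None)

km = KeyManager()
psk = km.issue_key("sensor-17", "fog-node-3")  # later used for a DTLS-PSK tunnel
```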
In this paper, we establish a unified deep learning-based spam filtering method. The proposed method uses message byte-histograms as a unified representation for all message types (text, images, or any other format). A deep convolutional neural network (CNN) is used to extract high-level features from this representation, and a fully connected neural network performs the classification using the extracted CNN features. We validate our method using several open-source text-based and image-based spam datasets. We obtained an accuracy higher than 94% on all datasets.
Authored by Yassine Belkhouche
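A minimal sketch of the described pipeline, with layer sizes and hyper-parameters assumed rather than taken from the paper: a 256-bin byte histogram as the unified message representation, a small 1D CNN as the feature extractor, and a fully connected head for classification.

```python
# Sketch of byte-histogram representation + CNN feature extractor + FC classifier.
import torch
import torch.nn as nn

def byte_histogram(message: bytes) -> torch.Tensor:
    """Map any message type to a fixed-length, length-normalised 256-bin histogram."""
    hist = torch.zeros(256)
    for b in message:
        hist[b] += 1
    return hist / max(len(message), 1)

class SpamCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 64, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, x):                 # x: (batch, 256)
        x = x.unsqueeze(1)                # -> (batch, 1, 256)
        return self.classifier(self.features(x))

logits = SpamCNN()(byte_histogram(b"free $$$ click now").unsqueeze(0))
```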
Aim: To perform spam detection in social media using the Support Vector Machine (SVM) algorithm and compare its accuracy with the Artificial Neural Network (ANN) algorithm. The dataset has a sample size of 5489 messages, including both spam and ham; 80% of the messages are taken for training and 20% for testing. Materials and Methods: Classification was performed by the KNN algorithm (N=10) for spam detection in social media, and the accuracy was compared with the SVM algorithm (N=10) with G power 80% and an alpha value of 0.05. Results: The accuracy obtained was 98.2% for the ANN algorithm and 96.2% for the SVM algorithm, with a significance value of 0.749. Conclusion: The accuracy of detecting spam using the ANN algorithm appears to be slightly better than the SVM algorithm.
Authored by Grandhi Svadasu, M. Adimoolam
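A hedged sketch of this kind of comparison on a toy corpus (the real dataset, feature extraction, and network architecture are assumptions): TF-IDF features, an 80/20 split, and accuracy reported for an SVM and a small neural network.

```python
# Toy comparison of SVM vs. a small MLP ("ANN") on an 80/20 split.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

messages = [
    "win a free prize now", "lowest price meds click here",
    "urgent claim your reward", "congratulations you are selected",
    "are we still meeting today", "please review the attached report",
    "lunch at noon tomorrow", "thanks for the update",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = spam, 0 = ham (toy data)

X = TfidfVectorizer().fit_transform(messages)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=1)

for name, model in [("SVM", SVC(kernel="linear")),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```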
Evolving, new-age cybersecurity threats have set the information security industry on high alert. These modern cyberattacks include malware and phishing as well as attacks that exploit artificial intelligence, machine learning, and cryptocurrency. Our research highlights the importance and role of Software Quality Assurance in raising security standards that will not just protect the system but also handle cyberattacks better. In light of the recent series of cyberattacks, we conclude from our research that implementing code review and penetration testing will protect our data's integrity, availability, and confidentiality. We gathered the user requirements of an application and gained a proper understanding of the functional as well as non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software was still vulnerable to potential issues. We therefore propose two additional stages in the software quality assurance process to address this problem. After implementing this framework, we saw that the maximum number of potential threats were fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness. Other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, and banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game theory approach and it also logs and scores the actions of an AI model to detect adversarial inputs, and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP (which is a parallel process), RFSP is user-driven and its actions are sequential. Data are collected for experimentation; the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
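As a rough illustration of the Bayesian hyper-parameter search step attributed to RFSP (the assurance-scoring logic itself is not reproduced, and the search space is an assumption), scikit-optimize's gp_minimize can tune a toy classifier:

```python
# Bayesian hyper-parameter search sketch with gp_minimize on synthetic data.
from skopt import gp_minimize
from skopt.space import Integer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def objective(params):
    n_estimators, max_depth = params
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth,
                                 random_state=0)
    # Minimize the negative cross-validated accuracy.
    return -cross_val_score(clf, X, y, cv=3).mean()

result = gp_minimize(objective, [Integer(10, 200), Integer(2, 10)],
                     n_calls=15, random_state=0)
print("best params:", result.x, "best score:", -result.fun)
```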
Based on a campus wireless IPv6 network and using WiFi contactless sensing, positioning, and action recognition technology, this paper designs a new campus security early-warning system. Its distinguishing characteristic is that no new monitoring equipment needs to be added: wherever the wireless IPv6 network provides coverage, personnel counting and body-action status display can be realized. The system effectively supplements monitoring in places that video surveillance previously could not cover and can help prevent campus violence and other emergencies.
Authored by Feng Sha, Ying Wei
Kerberos is a network-based authentication protocol that communicates tickets from one network to another in a secure manner. Kerberos encrypts the messages and provides mutual authentication, using symmetric cryptography with shared secret keys to strengthen data confidentiality. The Key Distribution System (KDS) serves as the center for securing the messages. Kerberos has certain disadvantages because the same key material is held at both ends. In the proposed approach, Kerberos is secured using a Hash-based Message Authentication Code (HMAC), which provides message authentication and integrity: it verifies the data, the e-mail address, and message integrity, and the network is secured by authenticating the user or client. Messages that are transmitted and delivered have their integrity protected through this authentication. Kerberos authentication is used to verify a host or user, with authentication based on tickets as credentials in a secure way. Kerberos gives faster authentication, uses a unique ticketing system, and supports authentication delegation with greater efficiency; the tickets themselves are encrypted to protect the information they carry.
Authored by R. Krishnamoorthy, S. Arun, N. Sujitha, K.M Vijayalakshmi, S. Karthiga, R. Thiagarajan
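A minimal sketch of the HMAC step itself, using Python's standard library: compute HMAC-SHA256 over a ticket-like message and verify it with a constant-time comparison. The Kerberos ticket exchange that would establish the shared key is outside this snippet, and the message fields shown are illustrative.

```python
# HMAC-SHA256 over a ticket-like message, with constant-time verification.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)    # shared secret between client and service
message = b"client=alice;service=fileserver;ts=1700000000"

tag = hmac.new(key, message, hashlib.sha256).digest()   # sender side

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)            # constant-time comparison

assert verify(key, message, tag)
```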
The impact of digital gadgets is enormous in the current Internet world because of their easy accessibility, flexibility, and time-saving benefits for consumers. The number of computer users increases every year, and the time spent on computers has increased along with it. Computer users browse the internet to gather information and stay online for long periods without control, and people working from home now also spend long hours on smart devices, computers, and laptops to complete professional and personal work. The proposed study focuses on deriving the impact factors of smartphones by analyzing keystroke dynamics: based on keystroke usage patterns, the system performs stress-level detection using machine learning techniques. In the proposed study, keyboard users serve as the test subjects; 200 volunteers collectively generate the test dataset. They use a laptop for a set time frame while the keystrokes of the mouse and keyboard are recorded. The system reads the dataset and trains the model using the Dynamic Cat-Boost algorithm (DCB), which acts as the classification model. The evaluation metrics are framed by calculating Euclidean distance (ED), Manhattan distance (MahD), Mahalanobis distance (MD), etc., and quantitative measures of DCB are reported through accuracy, precision, and F1 score.
Authored by Bakkialakshmi S, T. Sudalaimuthu
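For reference, the distance measures named in the abstract can be computed as follows; the keystroke features and data here are synthetic placeholders, not the study's dataset.

```python
# Euclidean, Manhattan, and Mahalanobis distances between a test keystroke
# feature vector and the mean profile of the training data.
import numpy as np
from scipy.spatial.distance import cityblock, euclidean, mahalanobis

train = np.random.default_rng(0).normal(size=(200, 4))  # e.g. hold time, flight time, ...
test = np.array([0.1, -0.2, 0.05, 0.3])

mean = train.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(train, rowvar=False))

print("Euclidean  :", euclidean(test, mean))
print("Manhattan  :", cityblock(test, mean))
print("Mahalanobis:", mahalanobis(test, mean, inv_cov))
```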
Cyberattacks using the advanced persistent threat (APT) method have recently progressed in the fields of the Internet of Things and artificial intelligence technologies. Among APT attacks, the damage caused by ransomware is spreading rapidly, and the range of damage to individuals, corporations, public institutions, and even governments is increasing. The seriousness of the problem has grown because ransomware has been evolving into intelligent ransomware attacks that spread over the network to infect multiple users simultaneously. This study used open-source endpoint detection and response (EDR) tools to build and test a framework environment that enables systematic ransomware detection at the network and system level. Experimental results demonstrate that EDR tools can quickly extract ransomware attack features and respond to attacks.
Authored by Sun-Jin Lee, Hye-Yeon Shim, Yu-Rim Lee, Tae-Rim Park, So-Hyun Park, Il-Gu Lee
To address existing methodological issues in the educational process, it is proposed to develop intellectual technologies and knowledge representation systems that improve the efficiency of higher education institutions. For this purpose, a relational database structure is proposed that stores information about defended dissertations as a set of attributes (heuristics) representing the mandatory qualification attributes of theses. An inference algorithm is proposed to process this information. The algorithm acts as an artificial intelligence whose work is aimed at generating queries based on the applicant's preferences; its result is a set of choices presented in ranked order. These technologies will allow applicants to quickly become familiar with known scientific results and serve as a starting point for new research. The demand for co-researcher practice in solving the problem of updating the projective-thinking methodology and managing the scientific research process has been justified. This article pays attention to the existing parallels between the concepts of the technical and human sciences within the framework of their convergence. The concepts of being (economic good and economic utility) and the concepts of consciousness (humanitarian economic good and humanitarian economic utility) are used to form projective thinking; they form direct and inverse correspondences between technology and humanitarian practice in the techno-humanitarian mathematical space. It is proposed to place information from dissertation abstracts, processed into the language of a context-free formal grammar, in this space. The principle of data manipulation based on formal languages with context-free grammar allows new subject-area structures to be created in terms of applicants' preferences. It is believed that the success of applicants' work depends directly on their cognitive training, which needs to be practiced psychologically. This practice is based on deepening the objectivity and adequacy of the information obtained on the basis of heuristic methods, and it requires increased attention and developed intelligence. The paper shows that the use of heuristic methods by applicants to find new research directions leads to several promising results, which can be perceived as potential options for future research. This contributes to an increase in the retention of higher education professionals.
Authored by Valerij Kharitonov, Darya Krivogina, Anna Salamatina, Elina Guselnikova, Varvara Spirina, Vladlena Markvirer
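A hedged sketch of the proposed idea (schema, attributes, and scoring rule are assumptions): a relational table of dissertation attributes and a query that ranks entries by how many of an applicant's preferred attributes they match.

```python
# Toy relational store of dissertation attributes plus a preference-ranked query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE dissertation (
    id INTEGER PRIMARY KEY, title TEXT, research_object TEXT,
    method TEXT, novelty TEXT)""")
con.executemany(
    "INSERT INTO dissertation (title, research_object, method, novelty) VALUES (?,?,?,?)",
    [("Thesis A", "supply chains", "fuzzy logic", "new model"),
     ("Thesis B", "education systems", "heuristic search", "new algorithm"),
     ("Thesis C", "supply chains", "heuristic search", "new criterion")])

preferences = {"research_object": "supply chains", "method": "heuristic search"}

# Score each row by the number of matching preferred attributes, then rank.
score_expr = " + ".join(f"(CASE WHEN {col} = ? THEN 1 ELSE 0 END)" for col in preferences)
query = f"SELECT title, {score_expr} AS score FROM dissertation ORDER BY score DESC"
for title, score in con.execute(query, list(preferences.values())):
    print(title, score)
```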
Terrorism and radicalization are major economic, political, and social issues faced by the world today, and the challenges that governments and citizens face in combating terrorism are growing by the day. Artificial intelligence, including machine learning and deep learning, has shown promising results in predicting terrorist attacks. In this paper, we attempt to build a machine learning model to predict terror activities using a global terrorism database in both relational and graph forms. Using the Neo4j Sandbox, a graph database was created from the relational database, and the node2vec algorithm from Neo4j Sandbox's graph data science library was used to convert the high-dimensional graph into a low-dimensional vector form. To predict terror activities, seven machine learning models were used, and the performance parameters calculated were accuracy, precision, recall, and F1 score. According to our findings, Logistic Regression was the best-performing model and was able to classify the dataset with an accuracy of 0.90, recall of 0.94, precision of 0.93, and an F1 score of 0.93.
Authored by Ankit Raj, Sunil Somani
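A hedged sketch of the downstream classification step: logistic regression trained on node2vec-style embedding vectors, reporting the four metrics used in the paper. The embeddings below are synthetic stand-ins for vectors exported from the graph.

```python
# Logistic regression over (synthetic) node embeddings with the four metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))   # 64-dim node2vec vectors (placeholder)
labels = rng.integers(0, 2, 1000)          # 1 = terror-related activity

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, test_size=0.2,
                                          random_state=0)
pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1       :", f1_score(y_te, pred))
```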
The widespread adoption of eCommerce, iBanking, and eGovernment institutions has resulted in an exponential rise in the use of web applications. Due to their large number of users, web applications have become a prime target of cybercriminals who want to steal Personally Identifiable Information (PII) and disrupt business activities. Hence, there is a dire need to audit websites and ensure information security. In this regard, several web vulnerability scanners are employed for vulnerability assessment of web applications, yet attacks are still increasing day by day. A considerable amount of research has therefore been carried out to measure the effectiveness and limitations of publicly available web scanners, and it has been identified that most of them possess weaknesses and do not generate the desired results. In this paper, publicly available web vulnerability scanners are evaluated against the top ten OWASP vulnerabilities (OWASP, the Open Web Application Security Project, is an online community that produces comprehensive articles, documentation, methodologies, and tools in the arena of web and mobile security), and their performance is measured on the precision of their results. Based on these results, we propose an Integrated Multi-Agent Blackbox Security Assessment Tool (SAT) for the security assessment of web applications. The research shows that the vulnerability assessment results of the SAT are more extensive and accurate.
Authored by Jahanzeb Shahid, Zia Muhammad, Zafar Iqbal, Muhammad Khan, Yousef Amer, Weisheng Si
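One way to picture the evaluation and aggregation ideas in the abstract (finding identifiers and scanner names are hypothetical): compute per-scanner precision against a set of known, seeded vulnerabilities, along with the coverage of a simple union of all findings.

```python
# Per-scanner precision against known vulnerabilities, plus union coverage.
known = {"sqli:/login", "xss:/search", "csrf:/transfer"}

reports = {
    "scanner_a": {"sqli:/login", "xss:/search", "xss:/home"},   # one false positive
    "scanner_b": {"sqli:/login", "csrf:/transfer"},
    "scanner_c": {"xss:/search", "openredirect:/out"},
}

for name, findings in reports.items():
    tp = len(findings & known)
    precision = tp / len(findings) if findings else 0.0
    print(f"{name}: precision={precision:.2f}")

aggregated = set().union(*reports.values())
print("aggregated coverage:", len(aggregated & known) / len(known))
```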
This article analyzes the joint data security architecture that integrates artificial intelligence and cloud computing in the era of big data, and discusses the integrated applications of big data, artificial intelligence, and cloud computing. As an important part of big data security protection, the joint data security protection technical architecture is not only related to the security of joint data in the big data era but also has an important impact on the overall development of the data era. Based on this, the thesis takes big data security and the joint data security protection technical architecture as its research content; after a brief explanation of big data security, it conducts detailed research on the architecture from five aspects.
Authored by Jikui Du
This paper designs a network security protection system based on artificial intelligence technology, covering both hardware and software. The system can simultaneously collect public Internet data and secret-related data inside the unit, and encrypts it through a TCM chip embedded in the hardware to ensure that only designated machines can read secret-related materials. The data edge-cloud collaborative acquisition architecture based on chip encryption enables cross-network transmission of confidential data. At the same time, this paper proposes an edge-cloud collaborative information security protection method for industrial control systems by combining end-address hopping and load balancing algorithms. Finally, using WinCC, Unity3D, MySQL, and other development environments, the feasibility and effectiveness of the system are verified by experiments.
Authored by Xiuyun Lu, Wenxing Zhao, Yuquan Zhu
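A hedged sketch of the end-address-hopping idea (parameters and derivation are illustrative, not the paper's): both endpoints derive the active port for the current time slot from a shared secret, so the exposed endpoint changes periodically without explicit coordination.

```python
# Time-slotted port hopping derived from an HMAC of a shared secret.
import hashlib
import hmac
import time

SECRET = b"shared-hopping-secret"
SLOT_SECONDS = 30
PORT_RANGE = (20000, 60000)

def current_port(secret=SECRET, now=None):
    """Return the port both endpoints agree on for the current time slot."""
    slot = int((now if now is not None else time.time()) // SLOT_SECONDS)
    digest = hmac.new(secret, str(slot).encode(), hashlib.sha256).digest()
    span = PORT_RANGE[1] - PORT_RANGE[0]
    return PORT_RANGE[0] + int.from_bytes(digest[:4], "big") % span

print("active port this slot:", current_port())
```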
This paper analyzes the problems in China's existing emergency management technology system from various perspectives and designs an intelligent emergency system that draws on the new generation of Internet of Things, big data, cloud computing, and artificial intelligence technology. The overall design uses scientific and technological innovation to lead the reform of the emergency management mechanism and process reengineering, building an intelligent emergency technology system characterized by "holographic monitoring, early warning, intelligent research and accurate disposal". The goal is an intelligent emergency management system that integrates intelligent monitoring and early warning, intelligent emergency disposal, efficient rehabilitation, improved emergency standards, and safe operation and maintenance.
Authored by Huan Shi, Bo Hui, Biao Hu, RongJie Gu
Explainable Artificial Intelligence (XAI) research focuses on effective explanation techniques to understand and build AI models with trust, reliability, safety, and fairness. Feature importance explanation summarizes feature contributions for end-users to make model decisions. However, XAI methods may produce varied summaries, which leads to further analysis to evaluate the consistency across multiple XAI methods on the same model and data set. This paper defines metrics to measure the consistency of feature contribution explanation summaries under feature importance order and saliency map. Driven by these consistency metrics, we develop an XAI process oriented around the XAI criterion of feature importance, which performs a systematic selection of XAI techniques and evaluation of explanation consistency. We demonstrate the process development involving twelve XAI methods on three topics: a search ranking system, code vulnerability detection, and image classification. Our contribution is a practical and systematic process with defined consistency metrics to produce rigorous feature contribution explanations.
Authored by Jun Huang, Zerui Wang, Ding Li, Yan Liu
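As one possible instance of an order-consistency metric (the paper's exact definitions may differ), Kendall's tau rank correlation between the feature-importance scores produced by two XAI methods can be computed as follows; the scores shown are invented for illustration.

```python
# Rank-order consistency between two feature-importance summaries.
import numpy as np
from scipy.stats import kendalltau

features = ["f0", "f1", "f2", "f3", "f4"]
importance_method_a = np.array([0.40, 0.25, 0.15, 0.12, 0.08])   # e.g. SHAP-style scores
importance_method_b = np.array([0.35, 0.30, 0.10, 0.15, 0.10])   # e.g. LIME-style scores

# Kendall's tau depends only on the ranking induced by the scores,
# so it can be applied to the score vectors directly.
tau, _ = kendalltau(importance_method_a, importance_method_b)
print("feature-importance order consistency (Kendall tau):", tau)
```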
Cloud services use CAPTCHAs to protect themselves from malicious programs. With the explosive development of AI technology and the emergence of third-party recognition services, the factors that influence CAPTCHA security are becoming more complex. In such a situation, evaluating the security of mainstream CAPTCHAs in cloud services helps guide better CAPTCHA design choices for providers. In this paper, we evaluate and analyze the security of six mainstream CAPTCHA image designs in public cloud services. Based on the evaluation results, we make suggestions on CAPTCHA image design choices to cloud service providers; in addition, we discuss in particular the CAPTCHA images adopted by Facebook and Twitter. The evaluations are separated into two stages: (i) using AI techniques alone; (ii) using both AI techniques and third-party services. The former is based on open-source models; the latter is conducted under our proposed framework, CAPTCHAMix.
Authored by Xiaojiang Zuo, Xiao Wang, Rui Han
In this decade, digital transactions have risen exponentially, demanding more reliable and secure authentication systems. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) systems play a major role in these systems. CAPTCHAs are available in character-sequence, picture-based, and audio-based formats, and it is essential that they differentiate a computer program from a human precisely. This work tests the strength of text-based CAPTCHAs by breaking them using an algorithm built on a CNN (Convolutional Neural Network) and an RNN (Recurrent Neural Network). The algorithm is designed to break the security features that designers include in CAPTCHAs to make them hard for machines to crack. It is tested against a synthetic dataset generated in accordance with the schemes used by popular websites. The experimental results show that the model performs considerably well against both synthetic and real-world CAPTCHAs.
Authored by A Priya, Abishek Ganesh, Akil Prasath, Jeya Pradeepa
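A minimal sketch of a CNN + RNN solver of the kind described, with architecture details assumed rather than taken from the paper: convolutional layers extract visual features, the feature map is read column by column as a sequence, and a bidirectional GRU emits per-step character logits.

```python
# CNN feature extractor + GRU sequence decoder for text CAPTCHAs (assumed sizes).
import torch
import torch.nn as nn

NUM_CLASSES = 36       # a-z + 0-9 (assumed alphabet)

class CaptchaSolver(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 60x160 -> 30x80
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 30x80 -> 15x40
        )
        self.rnn = nn.GRU(input_size=64 * 15, hidden_size=128,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, 1, 60, 160) grayscale CAPTCHA
        f = self.cnn(x)                        # (batch, 64, 15, 40)
        f = f.permute(0, 3, 1, 2).flatten(2)   # (batch, 40, 64*15): one step per image column
        out, _ = self.rnn(f)
        return self.head(out)                  # (batch, 40, NUM_CLASSES) per-step logits

logits = CaptchaSolver()(torch.randn(2, 1, 60, 160))
```

In practice such a model is commonly trained with a CTC-style loss so that predictions need not be aligned to fixed character positions.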
Since data security is an important branch of the wide concept of security, simple and interpretable data security methods are necessary. A considerable volume of the data transferred through the internet is in the form of images, so several methods have focused on encrypting and decrypting images, but some of the conventional algorithms are complex and time-consuming. On the other hand, hiding-based methods, i.e., steganography, have attracted researchers' attention and provide more security for transferring images, because attackers are not aware that the images carry encrypted content and therefore do not try to decrypt them. Here, one of the most effective and simplest operators, XOR, is employed: the received shares at the destination can recover the original images with the XOR operation alone. Users do not need to be familiar with computer programming or data coding, and the execution time is lower compared with chaos-based methods or coding tables. Nevertheless, chaotic functions are used for designing the key when the images are messy. In addition to using the XOR operation, eliminating pixel expansion and achieving meaningfulness of the shared images are of interest. This method is simple and efficient and uses both encryption and steganography; therefore, it can guarantee the security of transferred images.
Authored by Maryam Tahmasbi, Reza Boostani, Mohammad Aljaidi, Hani Attar
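The core XOR share scheme described in the abstract can be sketched directly with NumPy: one share is random, the second is the image XORed with it, and XORing the two shares restores the image exactly with no pixel expansion. (A chaotic keystream, as the abstract mentions for key design, could stand in for the uniform random generator; that variant is not shown here.)

```python
# XOR-based two-share scheme with exact, expansion-free recovery.
import numpy as np

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # stand-in grayscale image

share1 = rng.integers(0, 256, size=image.shape, dtype=np.uint8)  # random share
share2 = np.bitwise_xor(image, share1)                           # image-dependent share

recovered = np.bitwise_xor(share1, share2)
assert np.array_equal(recovered, image)                          # exact, lossless recovery
```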
Artificial intelligence (AI), engendered by the rapid development of new and high technologies, has altered the environment of business financial audits and caused problems in recent years. As the pioneers of enterprise financial monitoring, auditors must actively and proactively adapt to the new audit environment in the age of AI; however, their performance during the adaptation process has not been favorable. In this paper, methods such as data analysis and field research are used to conduct investigations and surveys. In the process of applying AI to the financial auditing of a business, a number of issues are discovered, such as the under-appreciation of auditors, information security risks, and liability risk uncertainty. On the basis of these problems, suggestions for improvement are provided, including the cultivation of interdisciplinary talent, an emphasis on the value of auditors, and the development of a mechanism for accepting responsibility.
Authored by Wenfeng Xiao
Network intrusion detection is a popular application technology for current network security, but existing network intrusion detection technology suffers in practice from problems such as low detection efficiency and low detection accuracy. To solve these problems, a new approach combining artificial intelligence with network intrusion detection is proposed. Artificial intelligence-based network intrusion detection refers to the application of artificial intelligence techniques, such as neural networks and neural algorithms, to network intrusion detection; applying these techniques makes automatic detection by network intrusion detection models possible.
Authored by Chaofan Lu
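A hedged sketch of the general idea (features, data, and network size are synthetic placeholders): a small neural network trained on labelled flow features to separate normal from intrusive traffic.

```python
# Neural-network intrusion detector trained on synthetic flow features.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for flow features such as duration, bytes sent, packet rate, flags.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

nids = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
nids.fit(X_tr, y_tr)
print(classification_report(y_te, nids.predict(X_te),
                            target_names=["normal", "intrusion"]))
```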
Artificial intelligence (AI) and machine learning (ML) have been used to transform our environment and the way people think, behave, and make decisions during the last few decades [1]. In the last two decades, everyone connected to the Internet, whether an enterprise or an individual, has become concerned about the security of their computational resources. Cybersecurity is responsible for protecting hardware and software resources from cyberattacks, e.g., viruses, malware, intrusion, and eavesdropping. Cyberattacks come either from black-hat hackers or from cyber-warfare units. Artificial intelligence and machine learning have played an important role in developing efficient cybersecurity tools. This paper presents the latest cybersecurity tools based on machine learning, namely: Windows Defender ATP, Darktrace, Cisco Network Analytics, IBM QRadar, StringSifter, Sophos Intercept X, SIME, NPL, and Symantec Targeted Attack Analytics.
Authored by Taher Ghazal, Mohammad Hasan, Raed Zitar, Nidal Al-Dmour, Waleed Al-Sit, Shayla Islam