The impact of digital devices is enormous in the current Internet era because of their easy accessibility, flexibility, and time-saving benefits for consumers. The number of computer users increases every year, and so does the time spent on computers. Users browse the Internet to gather various information and often stay online for long periods without control. Nowadays, people working from home also spend long hours on smart devices, computers, and laptops to complete professional and personal work. The proposed study focuses on deriving the impact factors of smartphones by analyzing keystroke dynamics. Based on keystroke usage patterns, the system performs stress-level detection using machine learning techniques. In the proposed study, keyboard users were recruited for testing purposes: 200 volunteers collectively generated the test dataset. They were asked to use a laptop for a set period of time while their mouse and keyboard keystrokes were recorded. The system reads the dataset and trains the model using the Dynamic CatBoost (DCB) algorithm, which acts as the classification model. The evaluation metrics are framed by calculating the Euclidean distance (ED), Manhattan distance (MahD), and Mahalanobis distance (MD). Quantitative measures of DCB are reported through accuracy, precision, and F1-score.
Authored by Bakkialakshmi S, T. Sudalaimuthu
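The three distance metrics named in the abstract above (ED, MahD, MD) can be sketched as follows. This is a minimal illustration on made-up keystroke-timing vectors, not the authors' implementation; the feature values and covariance matrix are hypothetical.

```python
import math

def euclidean(a, b):
    # ED: straight-line distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # MahD: sum of absolute coordinate differences
    return sum(abs(x - y) for x, y in zip(a, b))

def mahalanobis(a, b, inv_cov):
    # MD: distance scaled by the inverse covariance matrix of the data,
    # i.e. sqrt(d^T * inv_cov * d) for the difference vector d
    d = [x - y for x, y in zip(a, b)]
    tmp = [sum(inv_cov[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * ti for di, ti in zip(d, tmp)))

# hypothetical keystroke hold-time features (seconds) for two sessions
u = [0.12, 0.30]
v = [0.15, 0.26]
identity = [[1.0, 0.0], [0.0, 1.0]]

ed = euclidean(u, v)            # ≈ 0.05
mahd = manhattan(u, v)          # ≈ 0.07
md = mahalanobis(u, v, identity)  # equals ED when covariance is the identity
```

With an identity covariance the Mahalanobis distance reduces to the Euclidean one, which is a quick sanity check for any implementation.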
Cyberattacks using the advanced persistent threat (APT) method have recently progressed into the fields of the Internet of Things and artificial intelligence technologies. Among APT attacks, the damage caused by ransomware is spreading rapidly, and the range of damage to individuals, corporations, public institutions, and even governments is increasing. The seriousness of the problem has grown because ransomware has been evolving into intelligent ransomware attacks that spread over the network to infect multiple users simultaneously. This study used open-source endpoint detection and response (EDR) tools to build and test a framework environment that enables systematic ransomware detection at the network and system levels. Experimental results demonstrate that EDR tools can quickly extract ransomware attack features and respond to attacks.
Authored by Sun-Jin Lee, Hye-Yeon Shim, Yu-Rim Lee, Tae-Rim Park, So-Hyun Park, Il-Gu Lee
To address existing methodological issues in the educational process, it is proposed to develop intellectual technologies and knowledge representation systems that improve the efficiency of higher education institutions. For this purpose, a relational database structure is proposed that stores information about defended dissertations as a set of attributes (heuristics) representing the mandatory qualification attributes of theses. An inference algorithm is proposed to process this information. The algorithm represents an artificial intelligence whose work is aimed at generating queries based on the applicant's preferences; its result is a set of choices presented in ranked order. These technologies will allow applicants to quickly become familiar with known scientific results and will serve as a starting point for new research. The demand for co-researcher practice in solving the problem of updating the projective-thinking methodology and managing the scientific research process is justified. The article draws attention to the existing parallels between the concepts of the technical and human sciences in the framework of their convergence. The concepts of being (economic good and economic utility) and the concepts of consciousness (humanitarian economic good and humanitarian economic utility) are used to form projective thinking. They form direct and inverse correspondences between technology and humanitarian practice in the techno-humanitarian mathematical space. It is proposed to place in this space information processed from dissertation abstracts in the language of a context-free formal grammar.
The principle of data manipulation based on formal languages with context-free grammar makes it possible to create new subject-area structures in terms of applicants' preferences. It is believed that the success of applicants' work depends directly on their cognitive training, which needs to be practiced psychologically. This practice is based on deepening the objectivity and adequacy of obtained information through heuristic methods, and it requires increased attention and the development of intelligence. The paper shows that applicants' use of heuristic methods to find new research directions leads to several promising results, which can be perceived as potential options for future research. This contributes to an increase in the retention of higher education professionals.
Authored by Valerij Kharitonov, Darya Krivogina, Anna Salamatina, Elina Guselnikova, Varvara Spirina, Vladlena Markvirer
Terrorism and radicalization are major economic, political, and social issues faced by the world today. The challenges that governments and citizens face in combating terrorism are growing by the day. Artificial intelligence, including machine learning and deep learning, has shown promising results in predicting terrorist attacks. In this paper, we attempted to build a machine learning model to predict terror activities using a global terrorism database in both relational and graph forms. Using the Neo4j Sandbox, a graph database can be created from a relational database. We used the node2vec algorithm from the Neo4j Sandbox's graph data science library to convert the high-dimensional graph to a low-dimensional vector form. Seven machine learning models were used to predict terror activities, and the performance parameters calculated were accuracy, precision, recall, and F1-score. According to our findings, Logistic Regression was the best-performing model; it classified the dataset with an accuracy of 0.90, recall of 0.94, precision of 0.93, and an F1-score of 0.93.
Authored by Ankit Raj, Sunil Somani
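The performance parameters reported above (accuracy, precision, recall, F1-score) can be computed from binary predictions as in this minimal sketch. The labels here are made up for illustration and do not come from the paper's terrorism dataset.

```python
def binary_metrics(y_true, y_pred):
    # count confusion-matrix cells for the positive class (label 1)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# hypothetical labels: 1 = attack occurred, 0 = no attack
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

In practice one would obtain `y_pred` from a trained classifier (e.g. logistic regression over node2vec embeddings); the metric arithmetic is the same regardless of model.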
The widespread adoption of eCommerce, iBanking, and eGovernment institutions has resulted in an exponential rise in the use of web applications. Due to their large number of users, web applications have become a prime target of cybercriminals who want to steal Personally Identifiable Information (PII) and disrupt business activities. Hence, there is a dire need to audit websites and ensure information security. In this regard, several web vulnerability scanners are employed for vulnerability assessment of web applications, yet attacks are still increasing day by day. Therefore, a considerable amount of research has been carried out to measure the effectiveness and limitations of publicly available web scanners. It is identified that most publicly available scanners possess weaknesses and do not generate the desired results. In this paper, publicly available web vulnerability scanners are evaluated against the top ten vulnerabilities listed by OWASP (the Open Web Application Security Project, an online community that produces articles, documentation, methodologies, and tools for web and mobile security), and their performance is measured by the precision of their results. Based on these results, we propose an Integrated Multi-Agent Blackbox Security Assessment Tool (SAT) for the security assessment of web applications. Our research shows that the vulnerability assessment results of the SAT are more extensive and accurate.
Authored by Jahanzeb Shahid, Zia Muhammad, Zafar Iqbal, Muhammad Khan, Yousef Amer, Weisheng Si
This article analyzes the joint data security architecture that integrates artificial intelligence and cloud computing in the era of big data, and discusses the integrated applications of big data, artificial intelligence, and cloud computing. As an important part of big data security protection, the joint data security protection technical architecture is not only related to the security of joint data in the big data era but also has an important impact on the overall development of the data era. On this basis, the thesis takes big data security and the joint data security protection technical architecture as its research content; after a brief explanation of big data security, it conducts detailed research on this architecture from five aspects.
Authored by Jikui Du
This paper designs a network security protection system based on artificial intelligence technology, covering both hardware and software. The system can simultaneously collect public Internet data and secret-related data inside the organization and encrypt it through the TCM chip solidified in the hardware, ensuring that only designated machines can read secret-related materials. The data edge-cloud collaborative acquisition architecture based on chip encryption can realize cross-network transmission of confidential data. At the same time, this paper proposes an edge-cloud collaborative information security protection method for industrial control systems that combines end-address hopping and load-balancing algorithms. Finally, using development environments including WinCC, Unity3D, and MySQL, the feasibility and effectiveness of the system are verified experimentally.
Authored by Xiuyun Lu, Wenxing Zhao, Yuquan Zhu
This paper analyzes, from various perspectives, the problems existing in China's current emergency management technology system and designs the construction of an intelligent emergency system in combination with the development of the new generation of Internet of Things, big data, cloud computing, and artificial intelligence technologies. The overall design relies on scientific and technological innovation to lead the reform of emergency management mechanisms and process reengineering, building an intelligent emergency technology system characterized by "holographic monitoring, early warning, intelligent research and accurate disposal". The goal is an intelligent emergency management system that integrates intelligent monitoring and early warning, intelligent emergency disposal, efficient rehabilitation, improvement of emergency standards, and safe operation and maintenance.
Authored by Huan Shi, Bo Hui, Biao Hu, RongJie Gu
Explainable Artificial Intelligence (XAI) research focuses on effective explanation techniques to understand and build AI models with trust, reliability, safety, and fairness. Feature-importance explanations summarize feature contributions so that end-users can make sense of model decisions. However, XAI methods may produce varied summaries, prompting further analysis of the consistency across multiple XAI methods applied to the same model and data set. This paper defines metrics to measure the consistency of feature-contribution explanation summaries under feature-importance order and saliency maps. Driven by these consistency metrics, we develop an XAI process oriented on the criterion of feature importance, which performs a systematic selection of XAI techniques and evaluation of explanation consistency. We demonstrate the process on twelve XAI methods across three topics: a search ranking system, code vulnerability detection, and image classification. Our contribution is a practical and systematic process, with defined consistency metrics, for producing rigorous feature-contribution explanations.
Authored by Jun Huang, Zerui Wang, Ding Li, Yan Liu
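One simple way to quantify the consistency of feature-importance orderings across XAI methods, in the spirit of the metrics described above, is top-k overlap between rankings. The paper's own metric definitions may differ; the feature names and orderings below are hypothetical.

```python
def topk_consistency(rank_a, rank_b, k):
    # fraction of features shared by the top-k of two importance orderings;
    # 1.0 means the methods agree on which k features matter most
    return len(set(rank_a[:k]) & set(rank_b[:k])) / k

# hypothetical orderings produced by two XAI methods on the same
# code-vulnerability model (most important feature first)
method_a_order = ["loc", "complexity", "calls", "depth", "age"]
method_b_order = ["complexity", "loc", "depth", "calls", "age"]

score = topk_consistency(method_a_order, method_b_order, k=3)
```

Here the two methods share 2 of their top 3 features (`loc` and `complexity`), giving a consistency score of 2/3; rank-correlation measures such as Kendall's tau are a common alternative when the full ordering matters.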
Cloud services use CAPTCHAs to protect themselves from malicious programs. With the explosive development of AI technology and the emergence of third-party recognition services, the factors that influence a CAPTCHA's security are becoming more complex. In such a situation, evaluating the security of mainstream CAPTCHAs in cloud services helps guide better CAPTCHA design choices for providers. In this paper, we evaluate and analyze the security of six mainstream CAPTCHA image designs in public cloud services. Based on the evaluation results, we make suggestions on CAPTCHA image design choices for cloud service providers. In addition, we specifically discuss the CAPTCHA images adopted by Facebook and Twitter. The evaluations proceed in two stages: (i) using AI techniques alone; and (ii) using both AI techniques and third-party services. The former is based on open-source models; the latter is conducted under our proposed framework, CAPTCHAMix.
Authored by Xiaojiang Zuo, Xiao Wang, Rui Han
In this decade, digital transactions have risen exponentially, demanding more reliable and secure authentication systems. The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) plays a major role in these systems. CAPTCHAs are available in character-sequence, picture-based, and audio-based formats. It is essential that a CAPTCHA be able to precisely differentiate a computer program from a human. This work tests the strength of text-based CAPTCHAs by breaking them with an algorithm built on a CNN (Convolutional Neural Network) and an RNN (Recurrent Neural Network). The algorithm is designed to defeat the security features that designers include in CAPTCHAs to make them hard for machines to crack. It is tested against a synthetic dataset generated according to the schemes used on popular websites. The experimental results show that the model performs considerably well against both synthetic and real-world CAPTCHAs.
Authored by A Priya, Abishek Ganesh, Akil Prasath, Jeya Pradeepa
Since data security is an important branch of the wide concept of security, simple and interpretable data security methods are deemed necessary. A considerable volume of the data transferred through the Internet is in the form of images. Therefore, several methods have focused on encrypting and decrypting images, but some conventional algorithms are complex and time-consuming. On the other hand, denial methods, i.e., steganography, have attracted researchers' attention, providing more security for transferring images: attackers are not aware that the images are encrypted and therefore do not try to decrypt them. Here, one of the most effective and simplest operators (XOR) is employed. At the destination, the received shares can recover the original images using only the XOR operation. Users need not be familiar with computer programming or data coding, and the execution time is lower than that of chaos-based methods or coding tables. Nevertheless, for designing the key when we have messy images, we use chaotic functions. In addition to using the XOR operation, eliminating pixel expansion and making the shared images meaningful are of interest. This method is simple and efficient, and it uses both encryption and steganography; therefore, it can guarantee the security of transferred images.
Authored by Maryam Tahmasbi, Reza Boostani, Mohammad Aljaidi, Hani Attar
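The XOR-based share scheme described above can be sketched for a grayscale image flattened to a list of byte values. This is a minimal sketch of the general XOR secret-sharing idea, not the authors' full method (which also uses chaotic functions for key design and makes the shares meaningful images): the random share acts as a one-time key, and XOR-ing the two shares recovers the original exactly, with no pixel expansion.

```python
import random

def make_shares(pixels):
    # share1 is uniform random noise; share2 = pixel XOR share1,
    # so each share alone reveals nothing about the image
    share1 = [random.randrange(256) for _ in pixels]
    share2 = [p ^ s for p, s in zip(pixels, share1)]
    return share1, share2

def recover(share1, share2):
    # XOR the shares pixel-wise to reconstruct the original image losslessly
    return [a ^ b for a, b in zip(share1, share2)]

image = [0, 17, 255, 128, 64]   # hypothetical 5-pixel grayscale image
s1, s2 = make_shares(image)
restored = recover(s1, s2)
```

Because XOR is its own inverse, recovery is exact and each share has the same length as the original, which is the "no pixel expansion" property the abstract highlights.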
Artificial intelligence (AI) was engendered by the rapid development of high and new technologies, which has altered the environment of business financial audits and caused problems in recent years. As the pioneers of enterprise financial monitoring, auditors must actively and proactively adapt to the new audit environment in the age of AI. However, auditors' performance during this adaptation has not been favorable. In this paper, methods such as data analysis and field research are used to conduct investigations and surveys. In the process of applying AI to the financial auditing of a business, a number of issues are discovered, such as the underappreciation of auditors, information security risks, and uncertainty over liability risk. On the basis of these problems, suggestions for improvement are provided, including the cultivation of compound talents, emphasis on the value of auditors, and the development of a mechanism for accepting responsibility.
Authored by Wenfeng Xiao
Network intrusion detection has become a popular application technology for network security, but existing network intrusion detection technology suffers in practice from poor detection performance, such as low detection efficiency and low detection accuracy. To solve these problems, a new approach combining artificial intelligence with network intrusion detection is proposed. AI-based network intrusion detection applies artificial intelligence techniques, such as neural networks and related algorithms, to intrusion detection; these techniques make automatic detection by network intrusion detection models possible.
Authored by Chaofan Lu
Artificial intelligence (AI) and machine learning (ML) have transformed our environment and the way people think, behave, and make decisions over the last few decades [1]. In the last two decades, everyone connected to the Internet, whether an enterprise or an individual, has become concerned about the security of their computational resources. Cybersecurity is responsible for protecting hardware and software resources from cyberattacks such as viruses, malware, intrusion, and eavesdropping. Cyberattacks come either from black-hat hackers or from cyber warfare units. AI and ML have played an important role in developing efficient cybersecurity tools. This paper presents the latest cybersecurity tools based on machine learning, namely: Windows Defender ATP, Darktrace, Cisco Network Analytics, IBM QRadar, StringSifter, Sophos Intercept X, SIME, NPL, and Symantec Targeted Attack Analytics.
Authored by Taher Ghazal, Mohammad Hasan, Raed Zitar, Nidal Al-Dmour, Waleed Al-Sit, Shayla Islam
The latest, modern security camera systems record a great deal of data at once. With the use of artificial intelligence, these systems can even compose an online attendance register of the students present during lectures. Data is primarily recorded on the hard disk of the NVR (Network Video Recorder), and in the long term it is recommended to save the data in a blockchain. The purpose of the research is to demonstrate how university security cameras can be securely connected to a blockchain. This is important for universities because sensitive student data must be protected from unauthorized access. In my research, as part of the practical implementation, I therefore also use encryption methods and data fragmentation, with the fragments saved at the nodes of the blockchain. Thus even a DDoS (Distributed Denial of Service) attack may be easily repelled, as data is not concentrated on a single central server. To further increase security, it is useful to build a blockchain capable of its own data storage at the faculty itself, rather than renting data storage space, so that we ourselves may regulate the conditions of operation and the data protection policy. As the practical part of my research, I therefore created a blockchain called UEDSC (Universities Data Storage Chain) in which I saved the students' data.
Authored by Krisztián Bálint
Vulnerability assessment is an important process for network security. However, most commonly used vulnerability assessment methods still rely on expert experience or rule-based automated scripts, which can hardly meet the security requirements of increasingly complex network environments. In recent years, although scientists and engineers have made great progress on artificial intelligence in both theory and practice, it is challenging to build mature, high-quality intelligent products in the field of network security, especially for penetration-testing-based vulnerability assessment in enterprises. Therefore, to realize intelligent penetration testing, Vul.AI, drawing on its many years of experience in cyber attack and defense, has designed and developed an intelligent penetration and attack simulation system, Ai.Scan, based on attack chains, knowledge graphs, and related evaluation algorithms. In this paper, the realization principle, main functions, and application scenarios of Ai.Scan are introduced in detail.
Authored by Wei Hao, Chuanbao Shen, Xing Yang, Chao Wang
With the development of computer technology and information security technology, computer networks will increasingly become an important means of information exchange, permeating all areas of social life. Therefore, recognizing the vulnerabilities and potential threats of computer networks, as well as the various security problems that exist in practice, and designing and researching computer quality architecture to ensure the security of network information are issues that urgently need to be resolved. The purpose of this article is to study the design and realization of information security technology and the computer quality system structure. The article first summarizes the basic theory of information security technology and then elaborates on its core technologies. Combining the current status of the computer quality system structure and analyzing its existing problems and deficiencies, information security technology is used to design and research the computer quality system structure on this basis. The article systematically expounds the function-module data, interconnection structure, and routing selection of the computer quality system structure, and uses research methods such as comparison and observation to design and study the information security technology and computer quality system structure. Experimental research shows that when the load of the studied computer quality system structure is 0 or 100, the data loss rate for data of different lengths is 0 and the correctness rate is 100%, which shows extremely high feasibility.
Authored by Yuanyuan Hu, Xiaolong Cao, Guoqing Li
With the recent advancements in automated communication technology, many traditional businesses that rely on face-to-face communication have shifted to online portals. However, these online platforms often lack the personal touch essential for customer service. Research has shown that face-to-face communication is essential for building trust and empathy with customers. A multimodal embodied conversational agent (ECA) can fill this void in commercial applications. Such a platform provides tools to understand the user's mental state by analyzing their verbal and non-verbal behaviour and allows a human-like avatar to take necessary action based on the context of the conversation and social norms. However, the literature on the impact of ECAs in commercial applications is limited because of issues related to platforms and scalability. In our work, we discuss existing work that tries to solve these scalability and infrastructure issues. We also provide an overview of the components required for developing ECAs and their deployment in various applications.
Authored by Kumar Shubham, Laxmi Venkatesan, Dinesh Jayagopi, Raj Tumuluri
With the rapid development of artificial intelligence (AI), many companies are moving towards automating their services using automated conversational agents. Dialogue-based conversational recommender agents, in particular, have gained much attention recently. The successful development of such systems with natural language input is conditioned on the ability to understand the users' utterances. Predicting the users' intents allows the system to adjust its dialogue strategy and gradually refine its preference profile. Nevertheless, little work has investigated this problem so far. This paper proposes an LSTM-based neural network model and compares its performance to seven baseline machine learning (ML) classifiers. Experiments on a new publicly available dataset revealed the superiority of the LSTM model, which achieved 95% accuracy and a 94% F1-score on the full dataset despite the relatively small dataset size (9,300 messages and 17 intents) and label imbalance.
Authored by Mourad Jbene, Smail Tigani, Rachid Saadane, Abdellah Chehri
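For readers unfamiliar with the LSTM units underlying the model above, a single LSTM cell step can be sketched in plain Python. This is a toy one-dimensional cell with made-up weights, not the paper's trained network; real intent classifiers stack many such units over embedded token sequences.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # one timestep of a 1-dimensional LSTM cell: the gates are computed
    # from the current input x and the previous hidden state h_prev
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g       # new cell state mixes memory and candidate
    h = o * math.tanh(c)         # new hidden state, bounded in (-1, 1)
    return h, c

# hypothetical uniform weights, scanning a short "message" of scalar token scores
w = {k: 0.5 for k in ["wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.7]:
    h, c = lstm_step(x, h, c, w)
```

After scanning the sequence, the final hidden state `h` would be fed to a softmax layer over the intent labels; because `h = o * tanh(c)`, it always stays strictly between -1 and 1.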
Populations move across regions in search of better living possibilities and better life outcomes, or to get away from problems that affected their lives in the region where they previously lived. In the United States of America, this has been happening for decades. Intelligent conversational text-based agents, also called chatbots, and artificial intelligence are increasingly present in our lives; over recent years their presence has grown considerably, owing to their usability and the familiarity they are constantly winning. Using NLP algorithms for law on accessible platforms allows users at scale to access a certain level of legal expertise that can assist users in need. This paper describes the motivation and circumstances of this problem, as well as the development of an intelligent conversational agent system that was used by immigrants in the USA so they could get answers to questions and receive suggestions about better legal options available to them. This system has helped thousands of people, especially in California.
Authored by Jovan Rebolledo-Mendez, Felix Briones, Leslie Cardona
In recent years, business environments have been undergoing disruptive changes across sectors [1]. Globalization and technological advances, such as artificial intelligence and the Internet of Things, have completely redesigned business activities, bringing to light an ever-increasing interest in and attention towards the customer [2], especially in the healthcare sector. In this context, researchers are paying more and more attention to the introduction of new technologies capable of meeting patients' needs [3, 4], and the Covid-19 pandemic has contributed, and still contributes, to accelerating this phenomenon [5]. Therefore, emerging technologies (i.e., AI-enabled solutions, service robots, conversational agents) are proving to be effective partners in improving medical care and quality of life [6]. Conversational agents, often otherwise identified as "chatbots", are AI-enabled service robots based on the use of text [7] and capable of interpreting natural language and automating responses by emulating human behavior [8, 9, 10]. Their introduction is intended to help institutions and doctors manage their patients [11, 12], while keeping incremental costs negligible thanks to their virtual nature [13, 14]. However, while the utilization of these tools has significantly increased during the pandemic [15, 16, 17], it is unclear what benefits they bring to service delivery. To identify their contributions, we need to find out which activities can be supported by conversational agents. This paper takes a grounded approach [18] to achieve contextual understanding and to effectively interpret the context and meanings related to conversational agents in healthcare interactions. The study context concerns six chatbots adopted in the healthcare sector, examined through semi-structured interviews conducted in the health ecosystem.
Secondary data relating to the tools under consideration are also used to complete the picture. Observation, interviewing, and archival documents [19] can be used in qualitative research to make comparisons and obtain enriched results, since the weaknesses of one source can be compensated by the strengths of others. Conversational agents automate customer interactions with smart, meaningful interactions powered by artificial intelligence, making support, information provision, and contextual understanding scalable. They help doctors conduct the conversations that matter with their patients. In this context, conversational agents play a critical role in making relevant healthcare information accessible to the right stakeholders at the right time, providing an ever-present, accessible solution for patients' needs. In summary, conversational agents cannot replace the role of doctors but help them manage patients. By conveying constant presence and fast information, they help doctors build close relationships and trust with patients.
Authored by Angelo Ranieri, Andrea Ruggiero
Over the past two decades, several forms of non-intrusive technology have been deployed in cooperation with medical specialists in order to aid patients diagnosed with some form of mental, cognitive, or psychological condition. Along with the availability and accessibility of applications offered by mobile devices, as well as advancements in the fields of artificial intelligence and natural language processing, conversational agents have been developed with the objective of aiding medical specialists in detecting such conditions in their early stages, monitoring their symptoms and effects on the cognitive state of the patient, and supporting the patient in the effort to mitigate those symptoms. In light of the recent advances in the scientific field of machine and deep learning, we aim to explore the degree of applicability of such technologies to cognitive health support conversational agents, and their impact on the acceptability of such applications by their end users. Therefore, we conduct a systematic literature review, following a transparent and thorough process, in order to search and analyze the bibliography of the past five years, focused on the implementation of conversational agents supported by artificial intelligence technologies in the service of patients diagnosed with Mild Cognitive Impairment and its variants.
Authored by Ioannis Kostis, Konstantinos Karamitsios, Konstantinos Kotrotsios, Magda Tsolaki, Anthoula Tsolaki
National cultural security has existed since ancient times, but it has become a focal proposition in the context of the times and of real needs. From the perspective of national security, national cultural security is an important part of national security and has become a strategic task that cannot be ignored in defending it. Cultural diversity and imbalance are the fundamental prerequisites for the existence of national cultural security. Finally, an artificial intelligence algorithm is used as the theoretical basis of this article to examine the connotation and characteristics of China's national cultural security theory, Xi Jinping's "network view", and the view of network ideological security. The fourth part analyzes the current cultural security problems in our country, their hazards, and their root causes.
Authored by Weiqiang Wang
As a mature and open mobile operating system, Android runs on many IoT devices, which has led Android-based IoT devices to become a hotbed of malware. Existing static detection methods for malware that use artificial intelligence algorithms focus only on the Java code layer when extracting API features; however, a lot of malicious behavior involves native-layer code. Thus, to make up for this neglect of the native code layer, we propose a heterogeneous-information-network-based Android malware detection method with cross-layer features. We first translate the semantic information of apps and API calls into the form of meta-paths and construct the adjacency of apps based on API calls, then combine information from different meta-paths using multiple kernel learning. We implemented our method on datasets from VirusShare and AndroZoo, and the experimental results show that the accuracy of our method is 93.4%, at least 2% higher than that of other related methods using heterogeneous information networks for malware detection.
Authored by Ren Xixuan, Zhao Lirui, Wang Kai, Xue Zhixing, Hou Anran, Shao Qiao
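The app adjacency built from API calls, as described above, can be sketched as follows. The app and API names are hypothetical, and this toy version follows only one meta-path (app - API - app), ignoring the richer meta-path semantics and the kernel-learning machinery of the full method.

```python
from itertools import combinations

# hypothetical app -> API-call sets spanning Java and native layers
app_apis = {
    "app_a": {"sendTextMessage", "getDeviceId", "dlopen"},
    "app_b": {"getDeviceId", "dlopen", "mmap"},
    "app_c": {"onCreate"},
}

def build_adjacency(app_apis):
    # two apps are adjacent when they share at least one API call,
    # i.e. they are connected by an app-API-app meta-path instance
    adj = {app: set() for app in app_apis}
    for a, b in combinations(app_apis, 2):
        if app_apis[a] & app_apis[b]:
            adj[a].add(b)
            adj[b].add(a)
    return adj

adj = build_adjacency(app_apis)
```

Here `app_a` and `app_b` become neighbours through the shared `getDeviceId` and `dlopen` calls (the latter a native-layer API), while `app_c` stays isolated; in the full method such adjacencies over several meta-paths feed the downstream classifier.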