Nowadays there are many online social networks (OSNs) that are popular among Internet users, who use these platforms to find new connections and share their activities and thoughts. Twitter is one such platform. Surveys indicate it has more than 310 million monthly active users who post around 500 million tweets per day, which attracts spammers and cybercriminals seeking to misuse the platform for malicious benefit. Common spammer activities include product advertisement, phishing genuine users, propagating pornography, hijacking trending news, and sharing malicious links to lure victims for financial gain. In August 2014, Twitter disclosed that 8.5% of its monthly active users, approximately 23 million, automatically contacted its servers for regular updates. A spam-free Twitter environment therefore requires detecting and filtering these spammers from legitimate users. In this paper, the effectiveness and features of Twitter spam detection are summarized, and various methods are presented along with their benefits and limitations. [1]
Authored by Lipsa Das, Laxmi Ahuja, Adesh Pandey
All of us are familiar with the importance of social media in facilitating communication, and email is one of the safest social media platforms for online communication and information transfer over the Internet. Many people now rely on email, including for messages from strangers. Because anyone can send an email or a message, spammers have ample opportunity to compose spam targeting our many hobbies, passions, interests, and concerns. Spam severely slows our Internet connections and can also harvest personal information, such as phone numbers from our contact lists, and identifying these fraudsters and their spam content takes considerable effort. Email spam refers to the practice of sending large numbers of unsolicited messages via email; since the recipient bears the bulk of the cost, it amounts to practically free advertising, and the low cost of sending email makes spam financially viable for attackers. Anti-spam filters have become increasingly important as the volume of unsolicited bulk email (spam) grows. The proposed model can determine whether a message is spam or not. Machine learning algorithms are discussed in detail and tested on our datasets, with the goal of identifying the one that identifies email spam most accurately and precisely. This constitutes a survey of machine learning techniques for detecting unsolicited bulk email and spam.
Authored by V. Sasikala, K. Mounika, Sravya Tulasi, D. Gayathri, M. Anjani
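The abstract above describes testing several machine learning algorithms on email datasets to find the most accurate spam classifier. A minimal sketch of such a comparison, assuming a TF-IDF bag-of-words pipeline and stock scikit-learn models on toy data (the paper's actual datasets and models are not specified here), might look like this:

```python
# Hedged illustration, not the authors' code: compare off-the-shelf
# classifiers on TF-IDF features of a toy email corpus by test accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

texts = ["win a free prize now", "meeting at 3pm today", "claim your reward",
         "notes from yesterday's class", "urgent: verify your account",
         "lunch tomorrow?"]               # toy corpus; real data is far larger
labels = [1, 0, 1, 0, 1, 0]               # 1 = spam, 0 = ham

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, stratify=labels, random_state=0)
vec = TfidfVectorizer()
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

for clf in (MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC()):
    print(type(clf).__name__, clf.fit(Xtr, y_train).score(Xte, y_test))
```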
Aim: To perform spam detection in social media using the Support Vector Machine (SVM) algorithm and compare its accuracy with the Artificial Neural Network (ANN) algorithm. The dataset contains 5,489 messages, including spam and ham; 80% of the messages are used for training and 20% for testing. Materials and Methods: Classification was performed with the ANN algorithm (N=10) for spam detection in social media, and its accuracy was compared with the SVM algorithm (N=10), with G power 80% and an alpha value of 0.05. Results: The accuracy obtained was 98.2% for the ANN algorithm and 96.2% for the SVM algorithm, with a significance value of 0.749. Conclusion: The accuracy of detecting spam using the ANN algorithm appears to be slightly better than with the SVM algorithm.
Authored by Grandhi Svadasu, M. Adimoolam
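For the study above, the reported protocol (80/20 split, SVM vs. ANN) can be sketched as follows; this is an assumed reconstruction on synthetic data, with scikit-learn's MLPClassifier standing in for the ANN:

```python
# Hedged sketch of the comparison protocol, not the authors' implementation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 5,489-message spam/ham dataset.
X, y = make_classification(n_samples=5489, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.20, random_state=0)

svm = SVC().fit(Xtr, ytr)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
print("SVM accuracy:", svm.score(Xte, yte))   # paper reports 96.2%
print("ANN accuracy:", ann.score(Xte, yte))   # paper reports 98.2%
```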
Evolving, new-age cybersecurity threats have set the information security industry on high alert. Modern cyberattacks involve malware, phishing, artificial intelligence, machine learning, and cryptocurrency. Our research highlights the importance and role of Software Quality Assurance in raising security standards so that systems are not only protected but also better able to withstand cyberattacks. In light of the recent series of cyberattacks, we conclude from our research that implementing code review and penetration testing protects the integrity, availability, and confidentiality of data. We gathered the user requirements of an application and developed a proper understanding of both its functional and non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software remained vulnerable to potential issues. To address this problem, we propose two additional stages in the software quality assurance process. After implementing this framework, we found that most potential threats had already been fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among the SoS grows, and these interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for completely autonomous road vehicles, but the goal-based, technology-neutral features of this standard make it adaptable to other industries and applications. This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs). SPIs provide predictive safety information, thus offering an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in a subordinate assurance case by their developer. These variants of the assurance case inherently reflect the evolution of vehicle-level derivatives and options in the context of their specific target ecosystem. These subordinate assurance cases are nested under the argument put forward by the OEM of components and aircraft, for certification credit. It has become common practice in aviation to address design hazards through operational mitigations, and it is also common for hazards noted in one aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across their customers' supply chains to ensure their validity. The OEM is expected to exercise a sphere-of-control over their product even if it includes outsourced components. Any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations.
For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPIs that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres-of-control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome. These considerations are much more than interface requirements and include explicit hazard-mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes. Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs for which existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPIs. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations. The resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
Authored by Uma Ferrell, Alfred Anderegg
The FAA proposes a Safety Continuum that recognizes that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments; its purpose is to align the rigor of oversight with public expectations. An aircraft, its variants, or its derivatives may be used in operations with different expectations, and differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of a dynamic ecosystem, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under part 91 or 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives. This paper proposes a model for a holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving safety outcomes. It describes the application of the goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600, to the Safety Continuum, and specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine design approval more seamlessly with oversight functions by focusing all aspects of the argument and practice together to manage safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation's place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit-for-use". The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for growing variety in a product line by accounting for ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability in delivering the expected safety outcomes.
Authored by Alfred Anderegg, Uma Ferrell
IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with the increasing temporal and spatial scale of the modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review recent progress in addressing these gaps.
Authored by Tarek Abdelzaher, Nathaniel Bastian, Susmit Jha, Lance Kaplan, Mani Srivastava, Venugopal Veeravalli
Existing solutions for scheduling arbitrarily complex distributed applications on networks of computational nodes are insufficient for scenarios where the network topology is changing rapidly. New Internet of Things (IoT) domains like the Internet of Robotic Things (IoRT) and the Internet of Battlefield Things (IoBT) demand solutions that are robust and efficient in environments that experience constant and/or rapid change. In this paper, we demonstrate how recent advancements in machine learning (in particular, in graph convolutional neural networks) can be leveraged to solve the task scheduling problem with decent performance and in much less time than traditional algorithms.
Authored by Jared Coleman, Mehrdad Kiamari, Lillian Clark, Daniel D'Souza, Bhaskar Krishnamachari
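The building block the paper leverages, a graph convolution over the task graph, can be illustrated in a few lines. The toy sketch below (random features, a hypothetical two-node placement head) only shows the operation itself, not the authors' scheduler:

```python
# One graph-convolution layer over a toy task graph: normalize the adjacency
# matrix, propagate task features, then greedily score task-to-node placement.
import torch

A = torch.tensor([[0., 1., 1.],
                  [1., 0., 0.],
                  [1., 0., 0.]])              # 3-task dependency graph
A_hat = A + torch.eye(3)                      # add self-loops
D_inv = torch.diag(A_hat.sum(1).rsqrt())
A_norm = D_inv @ A_hat @ D_inv                # symmetric normalization
X = torch.randn(3, 4)                         # per-task features
H = torch.relu(A_norm @ X @ torch.randn(4, 8))   # task embeddings
scores = H @ torch.randn(8, 2)                # affinity to 2 compute nodes
assignment = scores.argmax(dim=1)             # greedy task-to-node placement
```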
The latest generation of IoT systems incorporate machine learning (ML) technologies on edge devices. This introduces new engineering challenges to bring ML onto resource-constrained hardware, and complications for ensuring system security and privacy. Existing research prescribes iterative processes for machine learning enabled IoT products to ease development and increase product success. However, these processes mostly focus on existing practices used in other generic software development areas and are not specialized for the purpose of machine learning or IoT devices. This research seeks to characterize engineering processes and security practices for ML-enabled IoT systems through the lens of the engineering lifecycle. We collected data from practitioners through a survey (N=25) and interviews (N=4). We found that security processes and engineering methods vary by company. Respondents emphasized the engineering cost of security analysis and threat modeling, and trade-offs with business needs. Engineers reduce their security investment if it is not an explicit requirement. The threats of IP theft and reverse engineering were a consistent concern among practitioners when deploying ML for IoT devices. Based on our findings, we recommend further research into understanding engineering cost, compliance, and security trade-offs.
Authored by Nikhil Gopalakrishna, Dharun Anandayuvaraj, Annan Detti, Forrest Bland, Sazzadur Rahaman, James Davis
The prevalence of mobile devices (smartphones), along with the availability of high-speed Internet access worldwide, has resulted in a wide variety of mobile applications that carry large amounts of confidential information. Although popular mobile operating systems such as iOS and Android constantly strengthen their defense methods, the data show that the number of intrusions and attacks using mobile applications is rising continuously. Experts use techniques to detect malware before the malicious application is installed, during runtime, or through network traffic analysis. In this paper, we first present information about the different categories of mobile malware and threats; then, we classify recent research methods on mobile malware traffic detection.
Authored by Mina Kambar, Armin Esmaeilzadeh, Yoohwan Kim, Kazem Taghva
The impact of digital gadgets in today's Internet world is enormous because of their easy accessibility, flexibility, and time-saving benefits for consumers. The number of computer users increases every year, and so does the time spent on computers. Users browse the Internet to gather information and often stay online for long, uncontrolled stretches; nowadays people working from home also spend long hours on smart devices, computers, and laptops for professional and personal work. The proposed study focuses on deriving the impact factors of smartphone and computer use by analyzing keystroke dynamics: based on keystroke usage patterns, the system detects stress levels using machine learning techniques. Keyboard users served as the test subjects, with 200 volunteers collectively generating the test dataset. They were seated for a set time frame to use a laptop while their keyboard and mouse keystrokes were recorded. The system reads the dataset and trains the model using the Dynamic Cat-Boost (DCB) algorithm, which acts as the classification model. The evaluation metrics are framed by calculating Euclidean distance (ED), Manhattan distance (MahD), Mahalanobis distance (MD), etc. Quantitative measures of DCB are framed in terms of accuracy, precision, and F1 score.
Authored by Bakkialakshmi S, T. Sudalaimuthu
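The distance measures the study reports are standard and straightforward to reproduce; the sketch below computes them with scipy on toy keystroke-timing vectors (the paper's Dynamic Cat-Boost classifier itself is not public, so a stock gradient-boosting model would have to stand in for it):

```python
# Toy illustration of the evaluation distances named in the abstract.
import numpy as np
from scipy.spatial import distance

a = np.array([0.12, 0.30, 0.22])          # e.g., key-hold times, user A
b = np.array([0.10, 0.41, 0.19])          # same features, user B
cov = np.cov(np.random.rand(50, 3), rowvar=False)   # toy covariance estimate

ed = distance.euclidean(a, b)                        # Euclidean distance
mahd = distance.cityblock(a, b)                      # Manhattan distance
md = distance.mahalanobis(a, b, np.linalg.inv(cov))  # Mahalanobis distance
print(ed, mahd, md)
```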
Smartphones are a revolution of the modern era, considered both a boon and a curse, and it is well known that most children of the current generation are addicted to them. The National Institutes of Health (NIH) has carried out studies on children under 12 years old covering smartphone exposure, associated health risks, social implications, and more. One such study reveals that children who spend more than two hours a day on smartphones perform poorly on language and cognitive skills; in addition, children who spend more than seven hours per day were found to have a thinner brain cortex. Hence, it is very important to control both children's exposure to smartphones and their access to unregulated content. Significant research has addressed this with a plethora of input features, feature extraction techniques, and machine learning models. This paper is a survey of state-of-the-art techniques for detecting the age of a user using machine learning models on touch, keystroke-dynamics, and sensor data.
Authored by Faheem H, Saad Sait
Malicious attacks, malware, and ransomware families pose critical security issues for cybersecurity and can cause catastrophic damage to computer systems, data centers, and web and mobile applications across various industries and businesses. Traditional anti-ransomware systems struggle against newly created, sophisticated attacks, so state-of-the-art techniques, both traditional and neural network-based architectures, can be immensely useful in developing innovative ransomware solutions. In this paper, we present a feature selection-based framework that adopts different machine learning algorithms, including neural network-based architectures, to classify the security level for ransomware detection and prevention. We applied multiple machine learning algorithms: Decision Tree (DT), Random Forest (RF), Naïve Bayes (NB), and Logistic Regression (LR), as well as Neural Network (NN)-based classifiers, on a selected number of features for ransomware classification. We performed all experiments on one ransomware dataset to evaluate our proposed framework. The experimental results demonstrate that RF classifiers outperform the other methods in terms of accuracy, F-beta, and precision scores.
Authored by Mohammad Masum, Md Faruk, Hossain Shahriar, Kai Qian, Dan Lo, Muhaiminul Adnan
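The shape of the framework, feature selection followed by a bank of classifiers compared on accuracy, precision, and F-beta, can be sketched as below. This is a hedged reconstruction on synthetic data, not the authors' code or dataset:

```python
# Hedged sketch: univariate feature selection, then the listed classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, fbeta_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=50, n_informative=10,
                           random_state=0)   # stand-in ransomware features
X = SelectKBest(f_classif, k=15).fit_transform(X, y)   # feature selection
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (DecisionTreeClassifier(), RandomForestClassifier(),
            GaussianNB(), LogisticRegression(max_iter=1000)):
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(type(clf).__name__, accuracy_score(yte, pred),
          precision_score(yte, pred), fbeta_score(yte, pred, beta=1.0))
```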
This paper presents a machine learning approach to detect whether an executable binary is benign or ransomware. Ransomware criminals have targeted our infrastructure and businesses everywhere, directly affecting national security and daily life, and tackling ransomware threats effectively remains a major challenge. We applied a machine learning model to classify and identify the security level of a given suspected binary for ransomware detection and prevention, using feature selection during data preprocessing to improve the model's prediction accuracy.
Authored by Chulan Gao, Hossain Shahriar, Dan Lo, Yong Shi, Kai Qian
Ransomware is an emerging threat that imposed a $5 billion loss in 2017, rose to $20 billion in 2021, and is predicted to hit $256 billion in 2031. While initially targeting PC (client) platforms, ransomware recently leaped over to server-side databases, starting in January 2017 with the MongoDB Apocalypse attack and continuing in 2020 with 85,000 MySQL instances ransomed. Previous research developed countermeasures against client-side ransomware. However, the problem of server-side database ransomware has received little attention so far. In our work, we aim to bridge this gap and present DIMAQS (Dynamic Identification of Malicious Query Sequences), a novel anti-ransomware solution for databases. DIMAQS performs runtime monitoring of incoming queries and pattern matching using two classification approaches (Colored Petri Nets (CPNs) and Deep Neural Networks (DNNs)) for attack detection. Our system design exhibits several novel techniques like dynamic color generation to efficiently detect malicious query sequences globally (i.e., without limiting detection to distinct user connections). Our proof-of-concept and ready-to-use implementation targets MySQL servers. The evaluation shows high efficiency without false negatives for both approaches and a false positive rate of nearly 0%. Both classifiers show very moderate performance overheads below 6%. We will publish our data sets and implementation, allowing the community to reproduce our tests and results.
Authored by Christoph Sendner, Lukas Iffländer, Sebastian Schindler, Michael Jobst, Alexandra Dmitrienko, Samuel Kounev
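DIMAQS itself uses Colored Petri Nets and Deep Neural Networks, but the underlying idea, flagging a characteristic malicious query sequence regardless of which connection issued each query, can be shown with a plain cross-connection state machine. The three-stage pattern below (dump, drop, ransom note) is a hypothetical example, not the system's actual rule set:

```python
# Toy global (cross-connection) matcher for a ransomware-like query sequence.
import re

STAGES = [re.compile(p, re.I) for p in
          (r"^SELECT \* FROM", r"^DROP TABLE", r"^INSERT INTO .*ransom")]
state = 0                                 # shared across all connections

def observe(query: str) -> bool:
    """Advance the matcher; return True once the full sequence is seen."""
    global state
    if state < len(STAGES) and STAGES[state].match(query):
        state += 1
    return state == len(STAGES)           # True => raise a ransomware alert

for q in ("SELECT * FROM users", "DROP TABLE users",
          "INSERT INTO warning VALUES ('ransom note')"):
    print(q, "->", observe(q))
```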
Recent years have witnessed a surge in ransomware attacks. In particular, many new ransomware variants continue to emerge, employing more advanced techniques to distribute their payloads while avoiding detection, which renders traditional static ransomware detection mechanisms ineffective. In this paper, we present our Hardware Anomaly Realtime Detection - Lightweight (HARD-Lite) framework, which employs semi-supervised machine learning to detect ransomware using low-level hardware information. By using an LSTM network with a weighted majority voting ensemble and an exponential moving average, we take into consideration the temporal aspect of hardware-level information, formed as time series, in order to detect deviations in system behavior, thereby increasing detection accuracy while reducing the number of false positives. Tested against various ransomware samples across multiple families, HARD-Lite has demonstrated remarkable effectiveness, successfully detecting every case tested. Moreover, with a hierarchical design that separates the classifier from the user machine under monitoring and places it on a server, HARD-Lite also enables good scalability.
Authored by Chutitep Woralert, Chen Liu, Zander Blasingame
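The temporal-smoothing recipe described above (per-window LSTM scores, a weighted majority vote across an ensemble, then an exponential moving average) can be sketched in PyTorch. This is an assumed minimal reconstruction with made-up dimensions, not the HARD-Lite code:

```python
# Hedged sketch: LSTM anomaly scorers over hardware-counter time series,
# combined by a weighted vote and smoothed with an EMA before thresholding.
import torch
import torch.nn as nn

class LstmScorer(nn.Module):
    def __init__(self, n_counters=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_counters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_counters)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))   # score in [0, 1]

def decide(models, window, weights, ema, alpha=0.3, thresh=0.5):
    with torch.no_grad():
        scores = torch.stack([m(window).squeeze() for m in models])
    vote = (weights * scores).sum() / weights.sum()    # weighted vote
    ema = alpha * vote + (1 - alpha) * ema             # temporal smoothing
    return ema.item() > thresh, ema

models = [LstmScorer() for _ in range(3)]
flag, ema = decide(models, torch.randn(1, 50, 8),     # 50 steps x 8 counters
                   torch.tensor([1.0, 0.8, 0.6]), torch.tensor(0.0))
```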
Ransomware uses encryption to make data inaccessible to legitimate users. To date, a wide range of ransomware families have been developed and deployed, causing immense damage to governments, corporations, and private users. As these cyberthreats multiply, researchers have proposed a range of ransomware detection and classification schemes. Most of these methods use advanced machine learning techniques to process and analyze real-world ransomware binaries and action sequences. This paper therefore presents a survey of this critical space and classifies existing solutions into several categories, including network-based, host-based, forensic characterization, and authorship attribution. Key facilities and tools for ransomware analysis are also presented, along with open challenges.
Authored by Aldin Vehabovic, Nasir Ghani, Elias Bou-Harb, Jorge Crichigno, Aysegül Yayimli
A recommender system aims to suggest the most relevant items to users based on their personal data. However, data privacy is a growing concern for everyone, and secure recommender systems aim to preserve user privacy while maintaining performance as high as possible. The most recent strategy is Federated Learning, a machine learning technique for privacy-preserving distributed training: a subset of users is selected to train models on data held at their local systems, a server securely aggregates the results from the local models into a global model, and that model then provides recommendations to users. In this paper, we present a novel algorithm for training a Collaborative Filtering recommender system specialized for the ranking task in the Federated Learning setting, where the goal is to protect user interaction information (i.e., implicit feedback). Specifically, with the help of the algorithm, the recommender system is trained with Neural Collaborative Filtering, one of the state-of-the-art matrix factorization methods, and Bayesian Personalized Ranking, the most common pairwise approach. In contrast to existing approaches, which protect user privacy by requiring users to download/upload the information associated with all items they could possibly interact with in order to perform training, our algorithm protects user privacy at low communication cost: users only need to obtain/transfer the information related to a small number of interactions per training iteration. Above all, extensive experiments demonstrate that the algorithm utilizes user data more efficiently than the most recent related work, FedeRank, while ensuring that user privacy is still preserved.
Authored by Hong Pham, Khanh Nguyen, Vy Phun, Tran Dang
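One local client step of the pairwise ranking objective can be sketched as follows, assuming plain matrix factorization in place of the paper's full Neural Collaborative Filtering model; only the gradients of the items actually touched (one positive, one sampled negative) would need to travel back to the server, which is the low-communication property the abstract emphasizes:

```python
# Hedged sketch of a client-side BPR update in a federated round.
import torch
import torch.nn.functional as F

n_items, dim = 1000, 32
item_emb = torch.randn(n_items, dim, requires_grad=True)  # global, from server
user_emb = torch.randn(dim, requires_grad=True)           # stays on the device

def local_bpr_step(pos, neg, lr=0.05):
    x_pos = (user_emb * item_emb[pos]).sum()   # score of interacted item
    x_neg = (user_emb * item_emb[neg]).sum()   # score of sampled negative
    loss = -F.logsigmoid(x_pos - x_neg)        # BPR pairwise loss
    loss.backward()                            # item grads touch only pos/neg
    with torch.no_grad():
        for p in (user_emb, item_emb):
            p -= lr * p.grad
            p.grad = None
    return loss.item()

local_bpr_step(pos=3, neg=7)   # one (user, positive, negative) triple
```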
Terrorism and radicalization are major economic, political, and social issues facing the world today, and the challenges that governments and citizens face in combating terrorism grow by the day. Artificial intelligence, including machine learning and deep learning, has shown promising results in predicting terrorist attacks. In this paper, we build machine learning models to predict terror activities using a global terrorism database in both relational and graph form. Using the Neo4j Sandbox, a graph database can be created from a relational database. We used the node2vec algorithm from the Neo4j Sandbox graph data science library to convert the high-dimensional graph into low-dimensional vectors. Seven machine learning models were then used to predict terror activities, evaluated on accuracy, precision, recall, and F1 score. According to our findings, the Logistic Regression model performed best, classifying the dataset with an accuracy of 0.90, recall of 0.94, precision of 0.93, and an F1 score of 0.93.
Authored by Ankit Raj, Sunil Somani
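The pipeline (graph embedding, then a classifier) can be approximated offline; the paper runs node2vec inside Neo4j's Graph Data Science library, whereas the sketch below substitutes uniform (DeepWalk-style) random walks over a toy networkx graph plus gensim Word2Vec, then trains logistic regression on the resulting node vectors:

```python
# Hedged offline approximation of the node2vec + Logistic Regression pipeline.
import random
import networkx as nx
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()                  # stand-in for the terrorism graph
y = [int(G.nodes[n]["club"] == "Officer") for n in G]   # toy node labels

def walks(G, n_walks=10, length=20):        # uniform walks, not biased p/q
    out = []
    for _ in range(n_walks):
        for start in G:
            walk, cur = [str(start)], start
            for _ in range(length):
                cur = random.choice(list(G.neighbors(cur)))
                walk.append(str(cur))
            out.append(walk)
    return out

w2v = Word2Vec(walks(G), vector_size=16, window=5, min_count=0, sg=1)
X = [w2v.wv[str(n)] for n in G]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```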
The vacuum circuit breaker (VCB) is an important component for ensuring the safe and smooth operation of the power system. The operating mechanism, an important driving part of the VCB, is prone to mechanical failures that can lead to power grid accidents. This paper offers an in-depth analysis of the mechanical faults of the VCB operating mechanism and their causes, extracts the current signal of the opening and closing coil, which is strongly correlated with these mechanical faults, as the characteristic information to build a Deep Belief Network (DBN) model, trains on each data set via Restricted Boltzmann Machines (RBMs), and updates the model parameters. The number of hidden-layer nodes, the network layer structure, and the learning rate are determined, and a DBN-based mechanical fault diagnosis system for vacuum circuit breakers is established. The results show that with a network structure of 8-110-110-6 and a learning rate of 0.01, the DBN model achieves its highest recognition accuracy, 0.990871. Compared with a BP neural network, the DBN has a smaller cross-entropy error and higher accuracy. This method can accurately diagnose mechanical faults of the vacuum circuit breaker, laying a foundation for the smooth operation of the power system.
Authored by Yan Tong, Zhaoyu Ku, Nanxin Chen, Hu Sheng
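A DBN of this shape (layer-wise RBM pretraining followed by a supervised readout) can be sketched with scikit-learn's BernoulliRBM; the data below is random filler standing in for the 8 coil-current features and 6 fault classes, so this only illustrates the structure, not the paper's results:

```python
# Hedged sketch: stacked RBMs (110-110 hidden units, learning rate 0.01, as in
# the paper's 8-110-110-6 structure) with a logistic-regression readout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X = np.random.rand(200, 8)         # stand-in for 8 coil-current features
y = np.random.randint(0, 6, 200)   # 6 mechanical-fault classes

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=110, learning_rate=0.01, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=110, learning_rate=0.01, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X, y)
print("train accuracy:", dbn.score(X, y))
```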
Aim: Object detection is one of the most active topics in today's world; this study detects real-time objects using Deep Belief Networks. Methods & Materials: Real-time object detection is performed using Deep Belief Networks (N=24) versus Convolutional Neural Networks (N=24), with the training and testing dataset split 70% and 30%, respectively. Results: Deep Belief Networks achieved significantly better accuracy (81.2%) than Convolutional Neural Networks (47.7%), with a significance value of p = 0.083. Conclusion: Deep Belief Networks achieved significantly better object detection than Convolutional Neural Networks for identifying real-time objects in traffic surveillance.
Authored by G. Vinod, Dr. G. Padmapriya
Classic black-box adversarial attacks can take advantage of transferable adversarial examples generated by a similar substitute model to successfully fool the target model. However, these substitute models need to be trained on the target model's training data, which is hard to acquire for privacy or transmission reasons. Recognizing the limited availability of real data for adversarial queries, recent works proposed to train substitute models in a data-free black-box scenario. However, their generative adversarial network (GAN)-based frameworks suffer from convergence failure and model collapse, resulting in low efficiency. In this paper, by rethinking the collaborative relationship between the generator and the substitute model, we design a novel black-box attack framework. The proposed method can efficiently imitate the target model through a small number of queries and achieve a high attack success rate. Comprehensive experiments over six datasets demonstrate the effectiveness of our method against state-of-the-art attacks. In particular, we conduct both label-only and probability-only attacks on the Microsoft Azure online model, and achieve a 100% attack success rate with only 0.46% of the query budget of the SOTA method [49].
Authored by Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, Chao Wu
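The data-free substitute-training loop at the heart of such attacks can be compressed into a toy sketch: a generator synthesizes queries, the black box labels them, and the substitute is distilled on those labels. Everything below is illustrative (tiny linear models, label-only queries); the paper's actual contribution, the collaborative objective that also updates the generator, is deliberately omitted:

```python
# Toy data-free distillation loop (substitute-only updates for simplicity).
import torch
import torch.nn as nn

black = nn.Linear(8, 3).eval()         # stands in for the black-box target
gen = nn.Sequential(nn.Linear(4, 8), nn.Tanh())   # synthesizes queries
sub = nn.Linear(8, 3)                  # substitute to be distilled
opt = torch.optim.Adam(sub.parameters(), lr=1e-3)

for _ in range(100):                   # query loop
    x = gen(torch.randn(16, 4)).detach()
    with torch.no_grad():
        y = black(x).argmax(1)         # label-only black-box queries
    loss = nn.functional.cross_entropy(sub(x), y)   # match target's labels
    opt.zero_grad(); loss.backward(); opt.step()
# Adversarial examples crafted on `sub` would then transfer to `black`.
```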
Black-box adversarial attacks have attracted much research attention because nearly no information about the attacked model is available and the query budget is additionally constrained. A common way to improve attack efficiency is to transfer the gradient information of a white-box substitute model trained on an extra dataset. In this paper, we deal with a more practical setting in which a pre-trained white-box model with network parameters is provided without extra training data. To solve the model mismatch problem between the white-box and black-box models, we propose a novel algorithm, EigenBA, that systematically integrates a gradient-based white-box method with the zeroth-order optimization used in black-box methods. We show theoretically that the optimal perturbation directions for each step are closely related to the right singular vectors of the Jacobian matrix of the pre-trained white-box model. Extensive experiments on ImageNet, CIFAR-10, and WebVision show that EigenBA consistently and significantly outperforms state-of-the-art baselines in terms of success rate and attack efficiency.
Authored by Linjun Zhou, Peng Cui, Xingxuan Zhang, Yinan Jiang, Shiqiang Yang
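The core intuition, perturbing along the top right-singular vector of the white-box substitute's Jacobian and probing the black box with zeroth-order finite differences, can be shown on toy models. This is an illustrative fragment, not the full EigenBA algorithm:

```python
# Toy sketch: Jacobian SVD direction + a two-query zeroth-order estimate.
import torch

white = torch.nn.Sequential(torch.nn.Linear(10, 5), torch.nn.Tanh())  # substitute
W = torch.randn(10, 3)
def black(x):                                    # query-access-only target
    return torch.softmax(x @ W, dim=-1)

x = torch.randn(10)                              # input to attack
J = torch.autograd.functional.jacobian(white, x) # (5, 10) Jacobian at x
v = torch.linalg.svd(J).Vh[0]                    # top right-singular vector

eps = 1e-3
delta = (black(x + eps * v)[0] - black(x)[0]) / eps   # zeroth-order slope
x_adv = x + 0.1 * torch.sign(delta) * v          # one attack step along v
```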
The widespread adoption of eCommerce, iBanking, and eGovernment institutions has resulted in an exponential rise in the use of web applications. Due to the large number of users, web applications have become a prime target of cybercriminals who want to steal Personally Identifiable Information (PII) and disrupt business activities. Hence, there is a dire need to audit websites and ensure information security. In this regard, several web vulnerability scanners are employed for vulnerability assessment of web applications, yet attacks are still increasing day by day. Therefore, a considerable amount of research has been carried out to measure the effectiveness and limitations of publicly available web scanners, and most of them have been found to possess weaknesses and not to generate the desired results. In this paper, publicly available web vulnerability scanners are evaluated against the top ten OWASP vulnerabilities (the Open Web Application Security Project, an online community that produces comprehensive articles, documentation, methodologies, and tools for web and mobile security), and their performance is measured on the precision of their results. Based on these results, we propose an Integrated Multi-Agent Blackbox Security Assessment Tool (SAT) for the security assessment of web applications. Research has shown that the vulnerability assessment results of the SAT are more extensive and accurate.
Authored by Jahanzeb Shahid, Zia Muhammad, Zafar Iqbal, Muhammad Khan, Yousef Amer, Weisheng Si
Speech recognition technology has been applied to many aspects of our daily life, but it faces many security issues. One of the major threats is adversarial audio examples, which may tamper with the recognition results of an automatic speech recognition (ASR) system. In this paper, we propose an adversarial detection framework to detect adversarial audio examples, based on the transformer self-attention mechanism. Spectrogram features are extracted from the audio and divided into patches; positional information is embedded and the patches are then fed into a transformer encoder. Experimental results show that the method performs well, with detection accuracy above 96.5% under white-box attacks, black-box attacks, and noisy circumstances. Even when detecting adversarial examples generated by unknown attacks, it achieves satisfactory results.
Authored by Yunchen Li, Da Luo
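The described detector maps naturally onto a small transformer classifier: slice the spectrogram into patches, add positional embeddings, encode, and classify benign vs. adversarial. The sketch below uses illustrative dimensions and is an assumed reconstruction, not the authors' model:

```python
# Hedged sketch of a patch-based transformer detector for adversarial audio.
import torch
import torch.nn as nn

class AdvAudioDetector(nn.Module):
    def __init__(self, n_patches=16, patch_dim=128, d_model=64):
        super().__init__()
        self.proj = nn.Linear(patch_dim, d_model)         # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)                 # benign / adversarial

    def forward(self, patches):            # (batch, n_patches, patch_dim)
        z = self.proj(patches) + self.pos  # embed patches + position info
        return self.head(self.encoder(z).mean(dim=1))     # pool and classify

spec_patches = torch.randn(4, 16, 128)     # 4 spectrograms, 16 patches each
logits = AdvAudioDetector()(spec_patches)
```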