Modern web applications are becoming more sophisticated by using frameworks that make development easier but pose challenges for security analysis tools, and new analysis techniques are needed to handle such frameworks as they grow in number and popularity. In this paper, we describe Gelato, which addresses the most crucial challenges for security-aware client-side analysis of highly dynamic web applications. In particular, we use a feedback-driven, state-aware crawler that can analyze complex framework-based applications automatically and is guided to maximize coverage of the security-sensitive parts of the program. Moreover, we propose a new lightweight client-side taint analysis that outperforms state-of-the-art tools, requires no modification to browsers, and reports non-trivial taint flows on modern JavaScript applications. Gelato reports vulnerabilities with higher accuracy than existing tools and achieves significantly better coverage on 12 applications, three of which are used in production.
Authored by Behnaz Hassanshahi, Hyunjun Lee, Paddy Krishnan
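The kind of client-side taint tracking the abstract describes can be illustrated with a toy propagation sketch — in Python rather than JavaScript, and entirely hypothetical (it shows the taint-flow idea, not Gelato's actual instrumentation): attacker-controlled strings carry a taint label that survives concatenation, and a flow is flagged when a labeled value reaches a security-sensitive sink.

```python
class Tainted(str):
    """A string subclass marking attacker-controlled data (toy taint label)."""

    def __add__(self, other):
        # Concatenating tainted data with anything yields tainted data.
        return Tainted(str.__add__(self, other))

    def __radd__(self, other):
        # Plain-str + Tainted also propagates the label.
        return Tainted(str.__add__(other, self))


def sink(value):
    """Stand-in for a security-sensitive sink such as eval or innerHTML."""
    if isinstance(value, Tainted):
        raise ValueError("taint flow: untrusted data reached a sink")
    return value


user_input = Tainted("payload")           # source: attacker-controlled
page = "<div>" + user_input + "</div>"    # taint survives concatenation
```

Real dynamic taint analyses track labels through far more operations (substring, property access, DOM APIs); this sketch only shows why label propagation through string operations is the core mechanism.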
Today, many internet-based applications, especially e-commerce and banking applications, require the transfer of personal and sensitive data such as credit card information, and all of these operations are carried out over the Internet. Users frequently perform these transactions, which require high security, on websites they access via web browsers, making the browser one of the most fundamental pieces of software on the Internet. The security of the communication between the user and the website is provided by SSL certificates, which are used for server authentication. Certificates are issued by Certificate Authorities (CAs) that have passed international audits and must meet certain conditions. The criteria for the issuance of certificates are defined in the Baseline Requirements (BR) document published by the Certificate Authority/Browser (CA/B) Forum, which is accepted as the authority in the Web Public Key Infrastructure (Web PKI) ecosystem. Issuing certificates in accordance with the defined criteria is not sufficient on its own to establish a secure SSL connection. To ensure a secure connection and confirm the identity of the website, the certificate validation task falls to the web browsers with which users interact the most. In this study, a comprehensive SSL certificate public key infrastructure (SSL Test Suite) was established to test the behavior of web browsers against certificates that do not comply with the BR requirements. The designed test suite aims to analyze the certificate validation behavior of web browsers effectively.
Authored by Merve Şimşek, Tamer Ergun, Hüseyin Temuçin
Nowadays, the number of new websites in Thailand has been increasing every year. However, some of those websites lack security, which causes negative effects and damage and has also resulted in numerous violations; these violations in turn cause delays in situation analysis. Additionally, effective and well-established digital forensics tools are still expensive. Therefore, this paper presents the idea of using freeware digital forensics tools, testing their performance and comparing them against the standards of the digital forensics process. The results suggest that the tested tools differ significantly in functions and process coverage. The WEFA Web Forensics tool is the most effective, supporting 3 standards and up to 8 out of 10 processes, followed by Browser History View, which supports 7 processes, and Browser History Spy and Browser Forensic Web Tool, which support 5 processes each. The Internet History Browser supports 4 of the basic processes of the forensics-related standards.
Authored by Kiattisak Janloy, Pongsarun Boonyopakorn
A rendering regression is a bug introduced by a web browser where a web page no longer functions as users expect. Such rendering bugs critically harm the usability of web browsers as well as web applications. The unique aspect of rendering bugs is that they affect the presented visual appearance of web pages, but those web pages have no pre-defined correct appearance; it is therefore challenging to automatically detect errors in their appearance. In practice, web browser vendors rely on non-trivial and time-prohibitive manual analysis to detect and handle rendering regressions. This paper proposes R2Z2, an automated tool to find rendering regressions. R2Z2 uses a differential fuzz testing approach, which repeatedly compares the rendering results of two different versions of a browser while providing the same HTML as input. If the rendering results differ, R2Z2 further performs cross-browser compatibility testing to check whether the rendering difference is indeed a rendering regression. After identifying a rendering regression, R2Z2 performs an in-depth analysis to aid in fixing it: a delta-debugging-like analysis pinpoints the exact browser source code commit causing the regression, and an inspection of the rendering pipeline stages pinpoints which pipeline stage is responsible. We implemented a prototype of R2Z2 targeting the Chrome browser. So far, R2Z2 has found 11 previously undiscovered rendering regressions in Chrome, all of which were confirmed by the Chrome developers. Importantly, in each case, R2Z2 correctly reported the culprit commit. Moreover, R2Z2 correctly pinpointed the culprit rendering pipeline stage in all but one case.
Authored by Suhwan Song, Jaewon Hur, Sunwoo Kim, Philip Rogers, Byoungyoung Lee
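The culprit-commit search described above can be sketched as a `git bisect`-style binary search (a simplified illustration, not R2Z2's actual implementation): given an oracle that reports whether a build at a given commit reproduces the rendering regression, and assuming a single good-to-bad transition in the commit history, the first bad commit is found in O(log n) oracle calls.

```python
def find_culprit(commits, is_regressed):
    """Binary-search for the first commit at which is_regressed(commit)
    becomes True, assuming one good->bad transition (as in `git bisect`)."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_regressed(commits[mid]):
            hi = mid       # regression already present: culprit is at or before mid
        else:
            lo = mid + 1   # still good: culprit is strictly after mid
    return commits[lo]
```

Each `is_regressed` call stands in for building the browser at that commit and re-running the differential rendering test, which is why minimizing the number of calls matters.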
Due to the rise of the internet, a business model known as online advertising has seen unprecedented success. However, it has also become a prime method through which criminals can scam people. Often, even legitimate websites contain advertisements that link to scam websites, since the advertisements are not verified by the website's owners. Scammers have become quite creative with their attacks, using various unorthodox and inconspicuous methods such as iframes, favicons, proxy servers, domains, etc. Many modern antivirus products are paid services and hence not a feasible option for most users in developing countries; often, people do not possess devices with enough RAM to run such software efficiently, leaving them without any options. This project aims to create a browser extension that can distinguish between safe and unsafe websites by utilizing machine learning algorithms. The system is lightweight and free, fulfilling the needs of most people looking for a cheap and reliable security solution and allowing people to surf the internet easily and safely. The system also scans all intermediate URL clicks, not just the main website, providing an even greater degree of security.
Authored by Rehan Fargose, Samarth Gaonkar, Paras Jadhav, Harshit Jadiya, Minal Lopes
Security incident handling and response are essential parts of every organization's information and cyber security. Security incident handling consists of several phases, among which digital forensic analysis has an irreplaceable place. Because each piece of digital evidence is recorded at a specific time, timelines play an essential role in analyzing digital evidence. One of the vital tasks of the digital forensic investigator is finding the records in this timeline that are relevant to the case, an operation that is performed manually in most cases. This paper focuses on the possibilities of automatically identifying digital evidence pertinent to the case and proposes a model that identifies this digital evidence. For this purpose, we focus on the Windows operating system and the NTFS file system and use outlier detection (the Local Outlier Factor method). Collected digital evidence is preprocessed, transformed to binary values, and aggregated by file system inodes and names. Subsequently, we identify the digital records (file inodes, file names) relevant to the case. This paper analyzes combinations of attributes, aggregation functions, and Local Outlier Factor parameters, and their impact on the resulting selection of relevant file inodes and file names.
Authored by Eva Marková, Pavol Sokol, Kristína Kováćová
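The Local Outlier Factor method used above can be written down in a brute-force, pure-Python form (a didactic sketch, not the authors' implementation; production work would typically use an optimized library such as scikit-learn's `LocalOutlierFactor`): each record's local density is compared with the density around its k nearest neighbours, and scores well above 1.0 mark outliers.

```python
from math import dist

def lof_scores(points, k=2):
    """Brute-force Local Outlier Factor; scores well above 1.0 flag outliers.
    Assumes at least k+1 distinct points (so densities are finite)."""
    n = len(points)
    d = [[dist(p, q) for q in points] for p in points]
    knn, kdist = [], []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i), key=lambda j: d[i][j])
        knn.append(order[:k])           # indices of the k nearest neighbours
        kdist.append(d[i][order[k - 1]])  # k-distance of point i
    # Local reachability density: inverse of the mean reachability distance.
    lrd = [k / sum(max(kdist[j], d[i][j]) for j in knn[i]) for i in range(n)]
    # LOF: mean ratio of neighbours' density to the point's own density.
    return [sum(lrd[j] for j in knn[i]) / (k * lrd[i]) for i in range(n)]
```

Applied to aggregated timeline records (here: points in feature space), the records with the largest LOF scores are the candidates for case-relevant evidence.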
The study focused on assessing and testing Windows 10 to identify possible vulnerabilities and its ability to withstand cyber-attacks. CVE data, alongside other vulnerability reports, were instrumental in measuring the operating system's performance. Metasploit and Nmap were essential in penetration and intrusion experiments in a simulated environment. The study applied the following testing procedure: information gathering, scanning and results analysis, vulnerability selection, launching attacks, and gaining access to the operating system. Penetration testing involved eight attacks, two of which were effective against the different Windows 10 versions. Installing the latest version of Windows 10 did not guarantee complete protection against attacks. Further research is essential for assessing the system's vulnerabilities and recommending better solutions.
Authored by Jasmin Softić, Zanin Vejzović
Security of operating system using the Metasploit framework by creating a backdoor from remote setup
The era of technology has seen many rising inventions, and with that rise comes the need to secure our systems. In this paper we discuss how the older generation is falling behind at staying up to date with technology and losing track of the knowledge required to use it, a factor that leads to leakage of critical personal information. This paper throws light upon the steps taken to exploit a pre-existing operating system, Windows 7 Ultimate, using a ubiquitous framework, Metasploit. It involves installation of a backdoor on the victim machine from a remote setup, typically a Kali Linux machine. This backdoor allows attackers to create executable files and deploy them on the Windows system to gain access to the machine remotely. After gaining access, manipulation of sensitive data becomes easy. Access to the admin rights of any system is a red alert, because it means that an outsider has deep access to a person's private information, and data about someone reveals a great deal about them; such exposure deprives one of their personal identity. Therefore security is not something that should be taken lightly; it must be dealt with utmost care.
Authored by Ria Thapa, Bhavya Sehl, Suryaansh Gupta, Ankur Goyal
Data leakage by employees is a matter of concern for companies and organizations today. Previous studies have shown that for the existing Data Leakage Protection (DLP) systems on the market, the more secure they are, the more intrusive and tedious they are to work with. This paper proposes and assesses the implementation of four technologies that enable the development of secure file systems for insider-threat-focused, low-intrusion and user-transparent DLP tools. Two of these technologies are configurable features of the Windows operating system (Minifilters and Server Message Block); the other two are virtual file systems (VFS), Dokan and WinFsp, which mirror the real file system (RFS), allowing it to incorporate security techniques. In the assessment of the technologies, it was found that the implementation of the VFSs was very efficient and simple. WinFsp and Dokan achieved 51% and 20%, respectively, of the performance of the operations on the RFS. This result may seem relatively low, but it should be taken into account that the calculation includes the read/write encryption and decryption operations appropriate for each prototype. Server Message Block (SMB) showed low performance (3%), so it is not considered viable for a solution like this, while Minifilters show the best performance but require advanced programming knowledge for their development. The prototype presented in this paper and its strategy provide an acceptable level of comfort for the user and a high level of security.
Authored by Isabel Montano, Isabel Díez, Jose Aranda, Juan Diaz, Sergio Cardín, Juan López
Operating systems have various components that produce artifacts. These artifacts are the outcome of a user's interaction with an application or program and of the operating system's logging capabilities, and thus have great importance in digital forensics investigations. For example, these artifacts can be utilized in a court of law to prove the existence of compromising computer system behaviors. One such component of the Microsoft Windows operating system is the Shellbag, an enticing source of digital evidence of high forensics interest. The presence of a Shellbag entry means a specific user has visited a particular folder and performed some customizations, such as accessing, sorting, or resizing the window. In this work, we forensically analyze Shellbags, discussing their purpose, types, and specifics in the latest version of the Windows 11 operating system, and uncover the registry hives that contain Shellbag customization information. We also conduct in-depth forensic examinations of Shellbag entries using three tools of three different types, i.e., open-source, freeware, and proprietary. Lastly, we compare the capabilities of the tools utilized in Shellbag forensics investigations.
Authored by Ashar Neyaz, Narasimha Shashidhar, Cihan Varol, Amar Rasheed
In the computer field, cybersecurity has always been the focus of attention, and how to detect malware effectively is one of the focuses and difficulties of network security research. Traditional malware detection schemes can be divided into two main categories: database (signature) matching and machine learning methods. With the rise of deep learning, more and more deep learning methods are being applied to malware detection, since deeper semantic features can be extracted via deep neural networks. The main tasks of this paper are as follows: (1) use machine learning methods and one-dimensional convolutional neural networks to detect malware; (2) propose a detection method that combines machine learning and deep learning. Machine learning with LGBM obtains an accuracy of 67.16%, and the one-dimensional CNN obtains an accuracy of 72.47%. In (2), LGBM is used to screen features by importance before applying the one-dimensional convolutional neural network, which further improves the detection result to an accuracy of 78.64%.
Authored by Da Huo, Xiaoyong Li, Linghui Li, Yali Gao, Ximing Li, Jie Yuan
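The one-dimensional convolution at the heart of such a 1D-CNN detector can be sketched in plain Python. This is a "valid"-mode cross-correlation, which is what deep-learning Conv1D layers actually compute; real frameworks add learned kernels, multiple channels, bias terms, and non-linearities on top.

```python
def conv1d(signal, kernel):
    """'Valid' 1-D cross-correlation: slide the kernel over the signal
    and emit one dot product per position (no padding, stride 1)."""
    width = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(width))
        for i in range(len(signal) - width + 1)
    ]

# A simple difference kernel over a toy byte sequence highlights transitions,
# the kind of local pattern a learned kernel might pick up in a malware binary.
feature_map = conv1d([0, 0, 9, 9, 0], [1, -1])
```

Stacking many such filters, pooling their feature maps, and feeding the result to a classifier is the essence of the 1D-CNN approach the abstract describes.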
Malware detection and analysis can be a burdensome task for incident responders. As such, research has turned to machine learning to automate malware detection and malware family classification. Existing work extracts and engineers static and dynamic features from the malware sample to train classifiers. Despite promising results, such techniques assume that the analyst has access to the malware executable file. Self-deleting malware invalidates this assumption and requires analysts to find forensic evidence of malware execution for further analysis. In this paper, we present and evaluate an approach to detecting malware that executed on a Windows target and further classify the malware into its associated family to provide semantic insight. Specifically, we engineer features from the Windows prefetch file, a file system forensic artifact that archives process information. Results show that it is possible to detect the malicious artifact with 99% accuracy; furthermore, classifying the malware into a fine-grained family has comparable performance to techniques that require access to the original executable. We also provide a thorough security discussion of the proposed approach against adversarial diversity.
Authored by Adam Duby, Teryl Taylor, Gedare Bloom, Yanyan Zhuang
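Feature engineering over a prefetch artifact might look like the following hashing-trick sketch. This is entirely hypothetical — the paper's actual feature set is not reproduced here — but it shows the general recipe: the module paths a prefetch file records are folded into a fixed-length count vector suitable for a classifier.

```python
import hashlib

def module_features(module_paths, dim=16):
    """Fold referenced module names into a fixed-length count vector
    via the hashing trick (hypothetical feature scheme, not the paper's)."""
    vec = [0] * dim
    for path in module_paths:
        digest = hashlib.md5(path.lower().encode()).digest()
        vec[int.from_bytes(digest[:4], "little") % dim] += 1
    return vec

# Example module list as a prefetch file might record it (illustrative paths).
modules = [r"\WINDOWS\SYSTEM32\NTDLL.DLL", r"\WINDOWS\SYSTEM32\KERNEL32.DLL"]
vec = module_features(modules)
```

The hashing trick keeps the vector length fixed regardless of how many distinct modules appear across samples, at the cost of occasional collisions.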
Cyber-attacks against Industrial Control Systems (ICS) can lead to catastrophic events, which can be prevented by the use of security measures such as Intrusion Prevention Systems (IPS). In this work we experimentally demonstrate how to exploit the configuration vulnerabilities of SNORT, one of the most widely adopted IPSs, to significantly degrade its effectiveness and consequently allow successful cyber-attacks. We illustrate how to design a batch script able to retrieve and modify the configuration files of SNORT in order to disable its ability to detect and block Denial of Service (DoS) and ARP-poisoning-based Man-in-the-Middle (MITM) attacks against a Programmable Logic Controller (PLC) in an ICS network. Experimental tests performed on a water distribution testbed show that, despite the presence of the IPS, the DoS and ARP-spoofed packets reach the destination, causing, respectively, the disconnection of the PLC from the ICS network and the modification of packet payloads.
Authored by Luca Faramondi, Marta Grassi, Simone Guarino, Roberto Setola, Cristina Alcaraz
Consumer IoT devices may suffer malware attacks and be recruited into botnets or worse. There is evidence that generic advice to device owners can successfully address IoT malware, but this does not account for emerging forms of persistent IoT malware, which resides on persistent storage and requires targeted manual effort to remove. This paper presents a field study on the removal of persistent IoT malware by consumers. We partnered with an ISP to contrast remediation times of 760 customers across three malware categories: Windows malware, non-persistent IoT malware, and persistent IoT malware. We also contacted ISP customers identified as having persistent IoT malware on their network-attached storage devices, specifically QSnatch. We found that persistent IoT malware exhibits a mean infection duration many times higher than Windows or Mirai malware; QSnatch has a survival probability of 30% after 180 days, by which point most if not all other observed malware types have been removed. For interviewed device users, QSnatch infections lasted longer and so are apparently more difficult to get rid of, yet participants did not report experiencing difficulty in following notification instructions. We see two factors driving this paradoxical finding: first, most users reported having high technical competency; second, we found evidence of planning behavior for these tasks and the need for multiple notifications. Our findings demonstrate the critical need for outside intervention for persistent malware, since the automatic scan of an AV tool or a power cycle, as we are used to for Windows malware and Mirai infections, will not resolve persistent IoT malware infections.
Authored by Elsa Rodríguez, Max Fukkink, Simon Parkin, Michel van Eeten, Carlos Gañán
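Survival probabilities of the kind quoted above (30% after 180 days) are typically computed with a Kaplan-Meier estimator. A minimal sketch, assuming complete, uncensored observations (a real ISP dataset would also have to handle customers still infected at the end of the study, i.e. right-censoring):

```python
def kaplan_meier(durations):
    """Kaplan-Meier survival curve for uncensored data:
    S(t) = product over event times t_i <= t of (1 - d_i / n_i),
    where d_i infections end at t_i and n_i are still 'at risk' just before."""
    at_risk = len(durations)
    curve, s = {}, 1.0
    for t in sorted(set(durations)):
        d = durations.count(t)   # infections remediated exactly at time t
        s *= 1 - d / at_risk
        at_risk -= d
        curve[t] = s             # survival probability just after time t
    return curve

# Toy remediation times in days for four infections.
curve = kaplan_meier([30, 30, 90, 180])
```

Reading the curve at day 180 gives the fraction of infections still alive at that point, which is how a "survival probability of 30% after 180 days" is obtained from remediation data.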
To address the current shortage of information security talent, this paper proposes a practical network information security attack-and-defense training platform based on the ThinkPHP framework. It provides help for areas with limited resources and also offers a communication platform for information security enthusiasts and students. The platform is deployed using ThinkPHP, and, to meet the personalized needs of users, support vector machine algorithms are added to the platform to provide a more convenient service.
Authored by Shiming Ma
This work aims to improve the accuracy of intrusion detection by comparing machine learning classifiers, namely Random Forest (RF) and Support Vector Machine (SVM). Two groups of supervised machine learning algorithms are evaluated, comparing the Random Forest calculation (N=20) with the Support Vector Machine calculation (N=20), with a G power value of 0.8. Random Forest (99.3198%) has higher accuracy than the SVM (98.5615%). An independent t-test was carried out (p=0.507), showing that the difference is statistically insignificant (p > 0.05) at a 95% confidence level when comparing RF and SVM. Conclusion: the comparative examination shows that Random Forest is more productive than the Support Vector Machine for identifying intruders in the significance tests.
Authored by Marri Kumar, K. Malathi
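The independent t-test used in comparisons like the one above can be sketched as Welch's t statistic over two samples of per-run accuracies. This computes the statistic only; obtaining the p-value additionally requires the t distribution with Welch-Satterthwaite degrees of freedom, which a stats library would supply.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic: difference of means over the combined
    standard error, without assuming equal variances."""
    se2 = variance(sample_a) / len(sample_a) + variance(sample_b) / len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / se2 ** 0.5

# Illustrative per-run accuracies for two classifiers (made-up numbers).
t = welch_t([0.993, 0.991, 0.994], [0.985, 0.986, 0.984])
```

A large |t| relative to the reference distribution corresponds to a small p-value; a p-value above 0.05, as reported in the abstract, means the observed accuracy gap could plausibly arise by chance.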
GIS equipment is an important component of power systems, and mechanical failure often occurs during equipment operation. In order to realize intelligent detection of GIS equipment mechanical faults, this paper presents a mechanical fault diagnosis model for GIS equipment based on a cross-validation parameter-optimized support vector machine (CV-SVM). First, a vibration experiment on an isolating switch was carried out on a real 110 kV GIS vibration simulation experiment platform. Vibration signals were sampled under three conditions: normal, plum-finger angle change fault, and plum-finger abrasion fault. Then, the C and g parameters of the SVM were optimized by the cross-validation method and the grid search method, and a CV-SVM model for mechanical fault diagnosis was established. Finally, training and verification were carried out using the training set and test set under the different states. The results show that cross-validation parameter optimization can effectively improve the accuracy of the SVM classification model and realize accurate identification of GIS equipment mechanical faults. This method has higher diagnostic efficiency and performance stability than traditional machine learning. This study can provide a reference for on-line monitoring and intelligent fault diagnosis of GIS equipment mechanical vibration.
Authored by Xiping Jiang, Qian Wang, Mingming Du, Yilin Ding, Jian Hao, Ying Li, Qingsong Liu
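The cross-validation grid search used to tune the SVM's C and g parameters follows a generic recipe, sketched here with a plug-in training function instead of an actual SVM (a toy threshold model stands in for it in the example):

```python
from itertools import product
from statistics import mean

def cv_accuracy(train, X, y, k=3):
    """Mean accuracy over k round-robin folds; train(X, y) returns a predict fn."""
    folds = [set(range(i, len(X), k)) for i in range(k)]
    scores = []
    for fold in folds:
        tr = [i for i in range(len(X)) if i not in fold]
        model = train([X[i] for i in tr], [y[i] for i in tr])
        scores.append(mean(1.0 if model(X[i]) == y[i] else 0.0 for i in fold))
    return mean(scores)

def grid_search(param_grid, make_train, X, y, k=3):
    """Exhaustively score every parameter combination by k-fold CV
    and return the best combination with its CV accuracy."""
    best, best_score = None, -1.0
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid, combo))
        score = cv_accuracy(lambda Xt, yt: make_train(params, Xt, yt), X, y, k)
        if score > best_score:
            best, best_score = params, score
    return best, best_score
```

For a CV-SVM, `param_grid` would be `{"C": [...], "g": [...]}` and `make_train` would fit an SVM with those parameters; the structure of the search is identical.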
This work classifies and predicts the accuracy of intrusion detection on cybercrime by comparing machine learning methods, namely a Decision Tree (DT) and a Support Vector Machine (SVM). By comparing the Decision Tree (N=20) and the Support Vector Machine algorithm (N=20), two classes of machine learning classifiers were used to determine the accuracy. The Decision Tree (99.19%) has higher accuracy than the SVM (98.5615%). An independent t-test was carried out (p=0.507), showing that the difference is statistically insignificant (p > 0.05) at a 95% confidence level when comparing the Decision Tree and the Support Vector Machine. According to the significance analysis, the Decision Tree is more productive than the Support Vector Machine for recognizing intruders.
Authored by Marri Kumar, Prof. K. Malathi
Large-scale distributed denial of service (DDoS) attacks are a persistent and serious danger to the Internet. Because they originate at the lower layers, new attacks that use genuine hypertext transfer protocol requests to overload target resources are harder to trace than application-layer-based cyberattacks. This research presents a method for detecting DDoS attacks that constructs an access matrix from network flow traces. Independent component analysis decreases the number of attributes utilized in detection: it can translate features into high dimensions and then locate feature subsets. Furthermore, during the training and testing phases of the modified source support vector machine used for classification, the detection rate and false alarms can be tracked. The modified source support vector machine is popular for pattern classification because it produces good results compared to other approaches and outperforms other methods in testing even when given less information about the dataset. To increase the classification rate, the modified source support vector machine is optimized using the BAT and modified Cuckoo Search methods. When compared to standard classifiers, the acquired findings indicate better performance.
Authored by S. Umarani, R. Aruna, V. Kavitha
Speech emotion recognition is one of the most promising and exciting problems in the area of human-computer interaction, and it has been studied and analyzed over several decades. It is the technique of classifying or identifying emotions embedded in the speech signal. Current challenges in speech emotion recognition when a single estimator is used include the difficulty of building and training models using HMMs and neural networks, low detection accuracy, and high computational power and time. In this work we performed emotion classification on two corpora: the Berlin EmoDB and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). A mixture of spectral features was extracted from them and further processed and reduced to the required feature set. Compared to single estimators, ensemble learning has been shown to provide superior overall performance. We propose a bagged ensemble consisting of support vector machines with a Gaussian kernel as a feasible algorithm for the problem at hand. In this paper, ensemble learning algorithms constitute a dominant, state-of-the-art approach for achieving maximum performance.
Authored by Bini Omman, Shallet Eldho
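Bagging as described above can be sketched generically: train each base model on a bootstrap resample of the data and combine their predictions by majority vote. A plug-in `train` function stands in for the Gaussian-kernel SVMs here, so the sketch shows the ensemble mechanics rather than any particular base learner.

```python
import random

def bagged_predict(train, X, y, x, n_models=5, seed=0):
    """Majority vote over n_models base models, each trained on a
    bootstrap resample (sampling with replacement) of the data."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        model = train([X[i] for i in idx], [y[i] for i in idx])
        votes.append(model(x))
    return max(set(votes), key=votes.count)
```

Because each base model sees a slightly different resample, their errors decorrelate, which is why the vote tends to beat any single estimator.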
The major aim of the study is to predict the type of crime that is going to happen based on the crime hotspots detected for the given crime data with engineered spatial features. The crime dataset is filtered to two crime categories: crime against society and crime against person. Crime hotspots are detected using the Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) algorithm, with the number of clusters optimized using the silhouette score. The sample data consists of 501 crime incidents. Future types of crime for a given location are predicted using the Support Vector Machine (SVM) and Convolutional Neural Network (CNN) algorithms (N=5). The accuracy of crime prediction using the Support Vector Machine classification algorithm is 94.01% and that of the Convolutional Neural Network algorithm is 79.98%, with a significance p-value of 0.033. The Support Vector Machine algorithm is significantly more accurate for predicting the type of crime than the Convolutional Neural Network (CNN).
Authored by T. Sravani, M.Raja Suguna
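The silhouette score used above to choose the clustering can be computed directly from its definition (brute force; libraries such as scikit-learn provide an optimized equivalent): for each point, a is its mean distance to its own cluster and b is the smallest mean distance to any other cluster, and the score averages (b - a) / max(a, b).

```python
from math import dist
from statistics import mean

def silhouette(points, labels):
    """Mean silhouette coefficient; assumes every cluster has >= 2 points.
    Values near 1 mean tight, well-separated clusters; negative values
    mean many points sit closer to a foreign cluster than their own."""
    scores = []
    for i, p in enumerate(points):
        a = mean(dist(p, q) for j, q in enumerate(points)
                 if labels[j] == labels[i] and j != i)
        b = min(mean(dist(p, q) for j, q in enumerate(points)
                     if labels[j] == lab)
                for lab in set(labels) if lab != labels[i])
        scores.append((b - a) / max(a, b))
    return mean(scores)
```

Optimizing the number of clusters then amounts to evaluating this score for each candidate clustering and keeping the one with the highest mean silhouette.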
As part of today's technical world, we are connected through a vast network, and the more we depend on these modern technologies, the more we need security. A network security system must be reliable and capable of monitoring the whole network of an organization, so that unauthorized users or intruders are not able to cause security breaches. Firewalls secure the internal network from unauthorized outsiders, but attacks are still possible: according to a survey, 60% of attacks were internal to the network. So the internal system needs the same high level of security as the external one. Understanding the value of security measures in terms of accuracy, efficiency, and speed, we focus on implementing and comparing an improved intrusion detection system. A comprehensive literature review found that some feature selection techniques, with standard scaling, combined with machine learning techniques can give better results than the existing ML techniques alone. In this survey paper, with the help of the univariate feature selection method, 14 essential features out of 41 are selected and used in the comparative analysis. We implemented and compared both binary-class and multi-class classification-based Intrusion Detection Systems (IDS) for two supervised machine learning techniques: Support Vector Machine and Classification and Regression Techniques.
Authored by Pushpa Singh, Parul Tomar, Madhumita Kathuria
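Univariate feature selection of the kind used above scores each feature independently against the label and keeps the top k. The sketch below uses absolute Pearson correlation as a stand-in scoring function (the paper's exact score is not specified here), and assumes non-constant feature columns:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def select_top_k(X_columns, y, k):
    """Rank feature columns by |correlation with the label|; return the
    indices of the top k, in ascending order."""
    ranked = sorted(range(len(X_columns)),
                    key=lambda i: abs(pearson(X_columns[i], y)),
                    reverse=True)
    return sorted(ranked[:k])
```

Selecting 14 of 41 features, as in the abstract, is exactly this procedure with `k=14` over 41 scored columns.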
This paper introduces an application of machine learning algorithms: support vector machine and decision tree approaches are studied and applied to compare their performance in detecting, classifying, and locating faults in the transmission network. The IEEE 14-bus transmission network is considered in this work, and 13 types of fault are tested. In particular, the single-fault and multiple-fault cases are investigated and tested separately. Fault simulations are performed using the SimPowerSystems toolbox in Matlab. Based on the accuracy score, a comparison is made between the proposed approaches when testing simple faults, on the one hand, and when complicated faults are included, on the other. Simulation results show that in the complicated cases the support vector machine technique can achieve an accuracy of 87%, compared to an accuracy of 53% for the decision tree.
Authored by Nouha Bouchiba, Azeddine Kaddouri
Today, billions of people around the world access the internet. New technology is needed to provide security against malicious activities and to take preventive and defensive actions against constantly evolving attacks. A new generation of technology that keeps an eye on such activities and responds intelligently to them is the intrusion detection system employing machine learning. It is difficult for traditional techniques to analyze network-generated data due to the nature, volume, and speed with which the data is generated, and the evolution of advanced cyber threats makes it difficult for existing IDSs to perform up to the mark. In addition, managing such large volumes of fast-moving data is beyond the capabilities of typical computer hardware and software. The system architecture suggested in this study uses an SVM to train the model, together with feature selection based on the information gain ratio ranking approach, to boost the overall system's efficiency and increase the attack detection rate. This work also addresses the issue of false alarms and attempts to reduce them. In the proposed framework, the UNSW-NB15 dataset is used; for analysis, the UNSW-NB15 and NSL-KDD datasets are used. Along with the SVM, we also trained various models using Naive Bayes, ANN, RF, etc., and compared their results. These trained models can further be combined into an ensemble approach to improve the performance of the IDS.
Authored by Manish Khodaskar, Darshan Medhane, Rajesh Ingle, Amar Buchade, Anuja Khodaskar
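The information-gain-ratio ranking used above for feature selection can be sketched directly from its definition, for discrete-valued features (continuous features would first be binned):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature, labels):
    """Information gain of `feature` about `labels`, normalized by the
    feature's own entropy (split information), as in C4.5-style ranking."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - conditional
    split_info = entropy(feature)
    return gain / split_info if split_info else 0.0
```

Ranking all candidate features by this score and keeping the highest-scoring ones is the selection step the abstract pairs with the SVM classifier.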
Critical infrastructure is a key area in cybersecurity. In the U.S., it was front and center in 1997 with the report from the President’s Commission on Critical Infrastructure Protection (PCCIP), and now affects countries worldwide. Critical Infrastructure Protection must address all types of cybersecurity threats - insider threat, ransomware, supply chain risk management issues, and so on. Unsurprisingly, in the past 25 years, the risks and incidents have increased rather than decreased and appear in the news daily. As an important component of critical infrastructure protection, secure supply chain risk management must be integrated into development projects. Both areas have important implications for security requirements engineering.
Authored by Nancy Mead