Neural Style Transfer - Style transfer is an optimization technique that aims to blend the style of one input image with the content of another. Deep neural networks have previously surpassed humans in tasks such as object identification and detection but, until recently, had lagged behind in generating high-quality creative products. This article introduces deep-learning techniques that are vital in accomplishing such human characteristics and open up a new world of prospects. The system employs a pre-trained CNN so that the style of the provided image is transferred to the content image to generate a high-quality stylized image. The designed system's effectiveness is evaluated using Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and the Structural Similarity Index Metric (SSIM); it is observed that the designed method effectively maintains the structural and textural information of the content image.
Authored by Kishor Bhangale, Pranoti Desai, Saloni Banne, Utkarsh Rajput
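The evaluation above relies on MSE, PSNR, and SSIM. As a minimal illustrative sketch (not the authors' code), these three metrics can be computed with scikit-image; the file names below are hypothetical placeholders:

```python
# Minimal sketch: MSE, PSNR, and SSIM between a content image and a
# stylized output, via scikit-image. File names are hypothetical.
from skimage import io, img_as_float
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

content = img_as_float(io.imread("content.jpg"))    # H x W x 3 in [0, 1]
stylized = img_as_float(io.imread("stylized.jpg"))  # same shape as content

mse = mean_squared_error(content, stylized)
psnr = peak_signal_noise_ratio(content, stylized, data_range=1.0)
# channel_axis=-1 tells SSIM the images are RGB (multichannel).
ssim = structural_similarity(content, stylized, channel_axis=-1, data_range=1.0)

print(f"MSE: {mse:.4f}  PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```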
Neural Style Transfer - Image style transfer is an important research topic related to image processing in computer vision. Compared with traditional manual computing methods, deep learning-based convolutional neural networks have powerful advantages: high computational efficiency and a good style transfer effect. To further improve the quality and efficiency of image style transfer, the pre-trained VGG-16 and VGG-19 neural network models are used to achieve image style transfer, and the transferred images generated by the two networks are compared. The results show that the VGG-16 convolutional neural network achieves better and more efficient image style transfer.
Authored by Yilin Tao
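Both abstracts above build on Gatys-style transfer over pre-trained VGG features. The following is a minimal sketch, not the papers' implementation: a Gram-matrix style loss plus a content loss over torchvision's VGG-16, with illustrative layer choices and loss weights (ImageNet input normalization is omitted for brevity):

```python
# Minimal sketch of Gatys-style losses over pre-trained VGG-16 features.
# Layer indices and the style weight are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

vgg = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {3, 8, 15, 22}   # relu1_2, relu2_2, relu3_3, relu4_3
CONTENT_LAYER = 15              # relu3_3

def extract(x):
    """Run x through VGG, collecting style and content activations."""
    styles, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            styles.append(x)
        if i == CONTENT_LAYER:
            content = x
    return styles, content

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer_loss(generated, content_img, style_img, style_weight=1e5):
    g_styles, g_content = extract(generated)
    _, c_content = extract(content_img)
    s_styles, _ = extract(style_img)
    content_loss = F.mse_loss(g_content, c_content)
    style_loss = sum(F.mse_loss(gram(g), gram(s))
                     for g, s in zip(g_styles, s_styles))
    return content_loss + style_weight * style_loss

content_img = torch.rand(1, 3, 256, 256)
style_img = torch.rand(1, 3, 256, 256)
generated = content_img.clone().requires_grad_(True)
loss = transfer_loss(generated, content_img, style_img)
loss.backward()   # gradients w.r.t. the generated image drive the transfer
```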
Neural Network Resiliency - With the proliferation of Low Earth Orbit (LEO) spacecraft constellations comes the rise of space-based wireless cognitive communications systems (CCS) and the need to safeguard data against potential hostiles in order to maintain the widespread communications that enable science, military, and commercial services. For example, known adversaries are using advanced persistent threats (APTs), highly progressive intrusion mechanisms, to target high-priority wireless space communication systems. Specialized threats continue to evolve with the advent of machine learning and artificial intelligence, where computer systems can identify system vulnerabilities faster than naive human threat actors thanks to greater processing resources and unbiased pattern recognition. This paper presents a disruptive abuse case for an APT attack on such a CCS and describes a trade-off analysis performed to evaluate a variety of machine learning techniques that could aid in the rapid detection and mitigation of an APT attack. The trade results indicate that with the employment of neural networks, the CCS's resiliency would increase, improving operational functionality and, in turn, the reliability of on-demand communication services. Further, modelling, simulation, and analysis (MS&A) was performed using the Knowledge Discovery and Data Mining (KDD) Cup 1999 data set to validate a subset of the trade study results against the Training Time and Number of Parameters selection criteria. Training and cross-validation learning curves were computed to model learning performance over time and to yield a reasonable conclusion about the application of neural networks.
Authored by Suzanna LaMar, Jordan Gosselin, Lisa Happel, Anura Jayasumana
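The MS&A described above computes training and cross-validation learning curves for a neural network. A minimal sketch of that procedure with scikit-learn; synthetic data stands in for the numerically encoded KDD Cup 1999 features, and the network size is an assumption:

```python
# Minimal sketch: training vs. cross-validation learning curves for an MLP,
# in the spirit of the abstract's KDD Cup 1999 study. Synthetic data stands
# in for numerically encoded KDD features (41 attributes per connection).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5000, n_features=41,
                           n_informative=10, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64,), max_iter=300))

sizes, train_scores, cv_scores = learning_curve(
    model, X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5, scoring="accuracy", n_jobs=-1)

for n, tr, cv in zip(sizes, train_scores.mean(axis=1), cv_scores.mean(axis=1)):
    print(f"n={n:>5}  train acc={tr:.3f}  cv acc={cv:.3f}")
```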
Network Intrusion Detection - Network intrusion detection technology is a popular application technology in current network security, but the existing technology suffers in practice from poor detection performance, such as low detection efficiency and low detection accuracy. To solve these problems, a new approach combining artificial intelligence with network intrusion detection is proposed. Artificial intelligence-based network intrusion detection applies artificial intelligence techniques, such as neural networks and related algorithms, to network intrusion detection, making the automatic detection of network intrusions possible.
Authored by Chaofan Lu
Intrusion Intolerance - The cascaded multi-level inverter (CMI) is becoming increasingly popular for a wide range of applications in the power electronics dominated grid (PEDG). The increased number of semiconductor devices in this class of power converters leads to an increased need for fault detection, isolation, and self-healing. In addition, the PEDG's cyber and physical layers are exposed to malicious attacks; these malicious actions, if not detected and classified in a timely manner, can cause catastrophic events in the power grid. The inverters' internal failures make anomaly detection and classification in the PEDG a challenging task. The main objective of this paper is to address this challenge by implementing a recurrent neural network (RNN), specifically long short-term memory (LSTM), to detect and classify internal failures in the CMI and distinguish them from malicious activities in the PEDG. The proposed anomaly classification framework is a module in the primary control layer of the inverters that can provide information to intrusion detection systems in a secondary control layer of the PEDG for further analysis.
Authored by Matthew Baker, Hassan Althuwaini, Mohammad Shadmand
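A minimal PyTorch sketch of the kind of LSTM classifier the abstract describes, mapping windows of inverter waveform measurements to fault or attack classes; the channel count, window length, and class set are illustrative assumptions:

```python
# Minimal sketch: LSTM classifier over windows of inverter measurements.
# Channel count, window length, and class labels are illustrative.
import torch
import torch.nn as nn

class AnomalyLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        # e.g. normal / open-switch fault / sensor fault / cyber attack
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

model = AnomalyLSTM()
window = torch.randn(8, 200, 6)       # 8 windows, 200 samples, 6 channels
logits = model(window)                # (8, 4) class scores
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()
```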
Machine Learning - Estimating obesity levels is an important topic in the medical field, since it can provide useful guidance for people who would like to lose weight or keep fit. This article tries to find a model that can predict obesity and provide people with information on how to avoid becoming overweight. More specifically, the article applies dimension reduction to simplify the data set and tries to identify the most decisive feature of obesity through Principal Component Analysis (PCA). The article also uses machine learning methods such as Support Vector Machine (SVM) and Decision Tree to predict obesity and to find its major causes, and additionally applies an Artificial Neural Network (ANN), which has more powerful feature extraction ability. The article finds that family history of obesity is the most decisive feature, possibly because obesity is greatly affected by genes or because the family diet has great influence. Both the ANN's and the Decision Tree's prediction accuracy exceed 90%.
Authored by Zhenghao He
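A minimal scikit-learn sketch of the workflow the abstract describes: PCA to inspect the dominant components, then SVM, Decision Tree, and ANN classifiers. Synthetic data stands in for the obesity dataset:

```python
# Minimal sketch: PCA for dimension reduction plus three classifiers,
# mirroring the abstract's workflow. Synthetic data stands in for the
# obesity dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=16, random_state=0)

# Inspect how much variance the leading components carry.
pca = PCA(n_components=5).fit(StandardScaler().fit_transform(X))
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))

for name, clf in [("SVM", SVC()),
                  ("Decision Tree", DecisionTreeClassifier(max_depth=8)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```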
Electrical load forecasting is an essential part of the smart grid, maintaining a stable and reliable grid and supporting decisions for economic planning. With the integration of more renewable energy resources, especially solar photovoltaic (PV), and the transition to a prosumer-based grid, electrical load forecasting is deemed to play a crucial role at both the regional and household levels. However, most existing forecasting methods, built on deep digitalization enablers such as Deep Neural Networks (DNNs), can be considered black-box models whose behavior admits little human interpretation, which limits insights and applicability. To mitigate this shortcoming, eXplainable Artificial Intelligence (XAI) is introduced as a measure to bring transparency to the model's behavior and enable human interpretation. By utilizing XAI, experienced power market and system professionals can be integrated into the development of the data-driven approach, even without knowledge of the data science domain. In this study, an electrical load forecasting model utilizing an XAI tool was developed and presented for a Norwegian residential building.
Authored by Eilert Henriksen, Ugur Halden, Murat Kuzlu, Umit Cali
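The abstract does not name the specific XAI tool; SHAP is one common choice for tabular load-forecasting models. A minimal sketch under that assumption, with synthetic features standing in for the Norwegian building data:

```python
# Minimal sketch: explaining a tabular load-forecasting model with SHAP.
# SHAP is an assumed tool choice; features and data are synthetic stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # [temperature, hour, lagged load]
y = 2.0 * X[:, 2] - 0.5 * X[:, 0] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)

explainer = shap.Explainer(model, X)            # tree explainer chosen automatically
shap_values = explainer(X[:50])
# Mean absolute SHAP value per feature serves as a global importance score.
print(np.abs(shap_values.values).mean(axis=0))
```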
This work proposes a unified approach to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available eXplainable Artificial Intelligence (XAI) techniques. The method, called LISA, incorporates multiple techniques, namely Local Interpretable Model-agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions, allowing it to be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) in explaining model predictions. An ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
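A minimal sketch of one building block of such a unified pipeline, LIME's image explainer applied to a CXR classifier; the `predict_fn` and image below are placeholders for the paper's Inception-based model:

```python
# Minimal sketch: LIME explanation for an image classifier, one component
# of the unified approach. predict_fn and the image are placeholders for
# the paper's Inception V2 CXR model.
import numpy as np
from lime import lime_image

def predict_fn(images):
    """Placeholder for model.predict: (N, H, W, 3) -> (N, n_classes)."""
    logits = images.mean(axis=(1, 2, 3))
    probs = 1 / (1 + np.exp(-logits))
    return np.stack([1 - probs, probs], axis=1)   # [non-Covid, Covid]

image = np.random.rand(224, 224, 3)               # stand-in CXR image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=1000)

label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
# mask marks the superpixels that most support the predicted label.
```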
Transformers are key equipment in the power system, and their stable operation is very important to power system security. As technology progresses, transformer performance becomes more and more important, yet faults still occur from time to time in practice, and traditional manual fault diagnosis consumes a great deal of time and energy. The rapid development of artificial intelligence technology provides a new research direction for the timely and accurate detection and treatment of transformer faults. In this paper, a method of transformer fault diagnosis using an artificial neural network is proposed. The neural network algorithm performs offline learning and training on operating-state data from normal and fault states. By adjusting the relationships between neuron nodes, the mapping between fault characteristics and fault location is established through network-layer learning. Finally, the reasoning process from fault feature to fault location is realized, achieving intelligent fault diagnosis.
Authored by Li Feng, Ye Bo
With the coming 6G era, spiking neural networks (SNNs) can be powerful processing tools in various areas, such as biometric recognition, AI robotics, autonomous driving, and healthcare, due to their strong artificial intelligence (AI) processing capabilities. However, within Cyber Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks can we defend against them. Most existing adversarial attacks cause severe accuracy degradation to trained SNNs, but they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters, or even by the human eye; their attack performance and speed can also be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed, and generates uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to keep perturbations small while maintaining the attack success rate, which speeds up convergence by adjusting parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that the model's accuracy under white-box attack decreases by 9.25%-31.15% more than under other attacks, and the average success rate is 74.87% in the black-box setting. The results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
Authored by Xuanwei Lin, Chen Dong, Ximeng Liu, Yuanyuan Zhang
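The core of SPA's input handling is Poisson rate coding, which converts input intensities directly into spike trains. A minimal numpy sketch of that encoding step only (the attack's perturbation and objective are not reproduced here):

```python
# Minimal sketch: Poisson rate coding, the encoding step SPA builds on.
# The attack itself is not reproduced.
import numpy as np

def poisson_encode(intensities, n_steps, rng=None):
    """intensities in [0, 1] -> binary spike trains of shape (n_steps, *shape).

    At each time step a neuron fires with probability equal to its
    normalized input intensity, so spike counts approximate a Poisson
    process with rate proportional to the input.
    """
    rng = rng or np.random.default_rng()
    p = np.clip(intensities, 0.0, 1.0)
    return (rng.random((n_steps,) + p.shape) < p).astype(np.uint8)

pixels = np.array([[0.1, 0.9], [0.5, 0.0]])   # toy 2x2 "image"
spikes = poisson_encode(pixels, n_steps=100)
print(spikes.mean(axis=0))                    # firing rates ~ input intensities
```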
Modern consumer electronic devices often provide intelligence services with deep neural networks. We have started migrating the computing locations of intelligence services from cloud servers (traditional AI systems) to the corresponding devices (on-device AI systems). On-device AI systems generally have the advantages of preserving privacy, removing network latency, and saving cloud costs. With the emergence of on-device AI systems having relatively low computing power, the inconsistent and varying hardware resources and capabilities pose difficulties. The authors' affiliation has started applying a stream pipeline framework, NNStreamer, to on-device AI systems, saving development costs and hardware resources and improving performance. We want to expand the types of devices and applications with on-device AI services to products of both the affiliation and second/third parties. We also want to make each AI service atomic, re-deployable, and shared among connected devices of arbitrary vendors; as always, this introduces yet another requirement. The new requirement of "among-device AI" includes connectivity between AI pipelines so that they may share computing resources and hardware capabilities across a wide range of devices regardless of vendors and manufacturers. We propose extensions of the stream pipeline framework, NNStreamer, for on-device AI so that NNStreamer may provide among-device AI capability. This work is a Linux Foundation (LF AI & Data) open source project accepting contributions from the general public.
Authored by MyungJoo Ham, Sangjung Woo, Jaeyun Jung, Wook Song, Gichan Jang, Yongjoo Ahn, Hyoungjoo Ahn
Phishing has become a prominent method of data theft among hackers, and it continues to develop. In recent years, many strategies have been developed to identify phishing attempts, particularly using machine learning. However, the algorithms and classification criteria that have been used vary widely and need to be compared. This paper provides a detailed comparison and evaluation of the performance of several machine learning algorithms across multiple datasets. Two phishing website datasets were used for the experiments: the Phishing Websites Dataset from UCI (2016) and the Phishing Websites Dataset from Mendeley (2018). Because these datasets include different types of class labels, the compared algorithms can be applied in a variety of situations. The tests showed that Random Forest performed better than the other classification methods, with an accuracy of 88.92% on the UCI dataset and 97.50% on the Mendeley dataset.
Authored by Wendy Sarasjati, Supriadi Rustad, Purwanto, Heru Santoso, Muljono, Abdul Syukur, Fauzi Rafrastara, De Setiadi
Robustness verification of neural networks (NNs) is a challenging and significant problem that has drawn great attention in recent years. Existing research has shown that bound propagation is a scalable and effective method for robustness verification that can be parallelized on GPUs and TPUs. However, bound propagation methods naturally produce weak bounds due to linear relaxations on the neurons, which may cause verification to fail. Although tightening techniques for simple ReLU networks have been explored, they are not applicable to NNs with general activation functions such as Sigmoid and Tanh, for which improving robustness verification remains challenging. In this paper, we propose a Branch-and-Bound (BaB) style method to address this problem. The proposed BaB procedure improves the weak bounds by splitting the input domains of neurons into sub-domains and solving the corresponding sub-problems. We propose a generic heuristic function that determines the priority of neuron splitting by scoring the relaxation and impact of each neuron. Moreover, we combine bound optimization with the BaB procedure to improve the weak bounds further. Experimental results demonstrate that the proposed method gains up to 35% improvement compared to the state-of-the-art CROWN method on Sigmoid and Tanh networks.
Authored by Zhengwu Luo, Lina Wang, Run Wang, Kang Yang, Aoshuang Ye
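For context, the weak bounds the paper tightens come from propagating intervals through affine layers and monotonic activations. A minimal numpy sketch of plain interval bound propagation through a tiny Sigmoid network; the paper's BaB splitting and bound optimization are beyond this sketch, and the weights and input box are toy values:

```python
# Minimal sketch: interval bound propagation through a tiny Sigmoid network.
# This produces the kind of loose bounds that BaB-style splitting tightens.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def affine_bounds(lo, hi, W, b):
    """Exact output interval of x -> W @ x + b over the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return (W_pos @ lo + W_neg @ hi + b,
            W_pos @ hi + W_neg @ lo + b)

# Toy 2-2-1 network.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input box (eps = 0.1)

lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = sigmoid(lo), sigmoid(hi)        # sigmoid is monotonic
lo, hi = affine_bounds(lo, hi, W2, b2)
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")
```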
In this paper, we establish a unified deep learning-based spam filtering method. The proposed method uses message byte-histograms as a unified representation for all message types (text, images, or any other format). A deep convolutional neural network (CNN) is used to extract high-level features from this representation, and a fully connected neural network performs the classification using the extracted CNN features. We validate our method using several open-source text-based and image-based spam datasets, obtaining an accuracy higher than 94% on all datasets.
Authored by Yassine Belkhouche
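A minimal sketch of the unified representation the abstract describes: a 256-bin byte histogram per message, fed to a small 1-D CNN. The network shape is an illustrative assumption:

```python
# Minimal sketch: unified byte-histogram representation feeding a small
# 1-D CNN spam classifier. Network shape is an illustrative assumption.
import numpy as np
import torch
import torch.nn as nn

def byte_histogram(message: bytes) -> np.ndarray:
    """256-bin normalized byte histogram, same shape for any message type."""
    hist = np.bincount(np.frombuffer(message, dtype=np.uint8), minlength=256)
    return hist.astype(np.float32) / max(len(message), 1)

model = nn.Sequential(                      # (batch, 1, 256) -> (batch, 2)
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                       # spam / ham logits
)

msg = b"Subject: you have WON a free prize!!!"
x = torch.from_numpy(byte_histogram(msg)).reshape(1, 1, 256)
print(model(x))   # untrained logits; train with CrossEntropyLoss in practice
```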
Aim: To perform spam detection in social media using the Support Vector Machine (SVM) algorithm and compare its accuracy with the Artificial Neural Network (ANN) algorithm. The dataset has a sample size of 5489 and contains spam and ham messages, of which 80% are used for training and 20% for testing. Materials and Methods: Classification was performed by the ANN algorithm (N=10) for spam detection in social media, and the accuracy was compared with the SVM algorithm (N=10) with G power 80% and alpha value 0.05. Results: The accuracy obtained was 98.2% for the ANN algorithm and 96.2% for the SVM algorithm, with significance value 0.749. Conclusion: The accuracy of detecting spam using the ANN algorithm appears to be slightly better than with the SVM algorithm.
Authored by Grandhi Svadasu, M. Adimoolam
Malicious attacks, malware, and ransomware families pose critical security issues for cybersecurity and can cause catastrophic damage to computer systems, data centers, and web and mobile applications across various industries and businesses. Traditional anti-ransomware systems struggle to fight newly created sophisticated attacks. Therefore, state-of-the-art techniques, including both traditional and neural network-based architectures, can be immensely useful in the development of innovative ransomware solutions. In this paper, we present a feature selection-based framework that adopts different machine learning algorithms, including neural network-based architectures, to classify the security level for ransomware detection and prevention. We applied multiple machine learning algorithms, Decision Tree (DT), Random Forest (RF), Naïve Bayes (NB), and Logistic Regression (LR), as well as Neural Network (NN)-based classifiers, to a selected number of features for ransomware classification. We performed all experiments on one ransomware dataset to evaluate our proposed framework. The experimental results demonstrate that the RF classifiers outperform the other methods in terms of accuracy, F-beta, and precision scores.
Authored by Mohammad Masum, Md Faruk, Hossain Shahriar, Kai Qian, Dan Lo, Muhaiminul Adnan
The selection of distribution network fault phases is of great significance for accurately identifying the fault location, quickly restoring power, and improving the reliability of the power supply. This paper studies a fault phase selection method for distribution networks based on wavelet singular entropy and a deep belief network (DBN). The basic principles of wavelet singular entropy and the DBN are analyzed, and on this basis a DBN model for distribution network fault phase selection is proposed. First, the transient fault current data of the distribution network are processed to obtain the wavelet singular entropy of the three phases, which serves as the input of the fault phase selection model. Then the DBN is improved, and an artificial neural network (ANN) is introduced to turn it into a fault phase selection classifier with specified output labels. Finally, Simulink is used to build a simulation model of the IEEE 33-node distribution network system and obtain a large amount of data for various fault types, generating training and test sample libraries. Through adjustment of the network structure and training of the parameters, the construction of the DBN model for distribution network fault phase selection is completed.
Authored by Jinliang You, Di Zhang, Qingwu Gong, Jiran Zhu, Haiguo Tang, Wei Deng, Tong Kang
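A minimal sketch of the wavelet-singular-entropy feature the abstract feeds to the DBN: a stationary wavelet decomposition of one phase current, an SVD of the coefficient matrix, and a Shannon entropy over the normalized singular values. The wavelet family, level, and toy signal are illustrative assumptions:

```python
# Minimal sketch: wavelet singular entropy of a fault-current signal.
# Wavelet family and decomposition level are illustrative assumptions.
import numpy as np
import pywt

def wavelet_singular_entropy(signal, wavelet="db4", level=4):
    # The stationary wavelet transform keeps every band at full length,
    # so the detail coefficients stack into a rectangular matrix.
    coeffs = pywt.swt(signal, wavelet, level=level)
    detail_matrix = np.array([cD for _, cD in coeffs])
    s = np.linalg.svd(detail_matrix, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

t = np.linspace(0, 0.1, 512, endpoint=False)
current = np.sin(2 * np.pi * 50 * t)
current[256:] *= 3.0                       # toy fault transient at t = 0.05 s
print(wavelet_singular_entropy(current))   # one entropy value per phase
```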
In this decade, digital transactions have risen exponentially, demanding more reliable and secure authentication systems. The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) system plays a major role in these systems. CAPTCHAs are available in character-sequence, picture-based, and audio-based formats, and it is essential that they differentiate a computer program from a human precisely. This work tests the strength of text-based CAPTCHAs by breaking them using an algorithm built on a CNN (Convolutional Neural Network) and an RNN (Recurrent Neural Network). The algorithm is designed to break the security features that designers include in CAPTCHAs to make them hard for machines to crack. It is tested against a synthetic dataset generated in accordance with the schemes used on popular websites. The experimental results show that the model performs considerably well against both synthetic and real-world CAPTCHAs.
Authored by A Priya, Abishek Ganesh, Akil Prasath, Jeya Pradeepa
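A minimal PyTorch sketch of the CNN + RNN architecture such CAPTCHA breakers typically use: a convolutional feature extractor, the feature map read as a left-to-right sequence, and an LSTM producing per-position character logits for CTC-style decoding. Input size, charset, and layer widths are illustrative assumptions:

```python
# Minimal sketch of a CNN + RNN CAPTCHA recognizer. Input size, charset,
# and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

N_CHARS = 36 + 1   # a-z and 0-9, plus a CTC blank symbol

class CaptchaCRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(            # (B, 1, 32, 128) grayscale input
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )                                    # -> (B, 64, 8, 32)
        self.rnn = nn.LSTM(64 * 8, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * 128, N_CHARS)

    def forward(self, x):
        f = self.cnn(x)                       # (B, C, H, W)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (B, W, C*H): width as time axis
        out, _ = self.rnn(f)
        return self.head(out)                 # (B, W, N_CHARS) per-step logits

model = CaptchaCRNN()
logits = model(torch.randn(4, 1, 32, 128))
print(logits.shape)    # torch.Size([4, 32, 37]); decode with CTC at train time
```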
Following the development of the online cryptocurrency mining platform Coinhive, numerous websites have employed "cryptojacking": the unauthorized use of visitors' CPU resources to mine cryptocurrency and replace advertising income. Web cryptojacking is among the most recent attacks in information security. Security teams have suggested blocking cryptojacking scripts by using a blacklist as a strategy, but the updating procedure of a static blacklist cannot promptly safeguard consumers because of the sharp rise in "cryptojacking kidnapping." Therefore, we propose a cryptojacking identification technique based on analyzing the user's computer resources to combat this attack technology. Machine learning techniques are used to monitor changes in computer resources, such as CPU usage. The experimental results indicate that this method is more accurate than the blacklist system and, unlike the blacklist system, does not require the blacklist to be manually updated on a regular basis. The misuse of online cryptojacking programs and the unlawful hijacking of users' machines for cryptojacking are growing worse, and information security will undoubtedly have to address how to prevent cryptojacking and such hijacking in the future. The results of this study help save individuals from unintentionally becoming miners.
Authored by Min-Hao Wu, Jian-Hung Huang, Jian-Xin Chen, Hao-Jyun Wang, Chen-Yu Chiu
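A minimal sketch of the resource-monitoring side of the proposed approach: sampling CPU utilization with psutil and flagging sustained high-load windows. The simple threshold rule stands in for the paper's trained machine-learning model, and the window size and threshold are assumptions:

```python
# Minimal sketch: sampling CPU utilization to flag cryptojacking-like
# sustained load. The threshold stands in for the paper's trained
# machine-learning classifier; window size and threshold are assumptions.
import psutil

WINDOW = 10          # number of 1-second samples per decision window
THRESHOLD = 85.0     # mean CPU % above which a window looks like mining

def sample_window():
    """Collect WINDOW one-second CPU utilization samples."""
    return [psutil.cpu_percent(interval=1) for _ in range(WINDOW)]

def looks_like_mining(samples):
    mean = sum(samples) / len(samples)
    # Miners tend to pin the CPU with low variance; a trained model would
    # use such features (mean, variance, per-core spread) instead of a rule.
    return mean > THRESHOLD

samples = sample_window()
print(samples, "-> suspicious" if looks_like_mining(samples) else "-> normal")
```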
Cognitive radio (CR) networks are an emerging and promising technology for improving the utilization of vacant bands. In CR networks, security is a very noteworthy domain, and two threatening attacks are primary user emulation (PUE) and spectrum sensing data falsification (SSDF). A PUE attacker mimics primary user signals to deceive legitimate secondary users, while an SSDF attacker falsifies its observations to misguide the fusion center into making a wrong decision about the status of the primary user. In this paper, we propose a scheme based on clustering the secondary users to counter SSDF attacks. Our focus is on detecting and classifying each cluster as reliable or unreliable. We introduce two different methods to achieve this goal, using an artificial neural network (ANN) in both, and, in the second, five more classifiers: support vector machine (SVM), random forest (RF), K-nearest neighbors (KNN), logistic regression (LR), and decision tree (DT). Moreover, we consider deterministic and stochastic attack strategies with white Gaussian noise (WGN). Results demonstrate that our method outperforms a recently suggested scheme.
Authored by Nazanin Parhizgar, Ali Jamshidi, Peyman Setoodeh
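A minimal sketch of the clustering step such a scheme builds on: grouping secondary users by their reported sensing statistics with k-means, then labeling clusters whose reports deviate from the majority as unreliable. The SSDF falsification model and the deviation rule are illustrative assumptions:

```python
# Minimal sketch: cluster secondary users by reported sensing statistics,
# then flag outlying clusters as unreliable. The falsification model and
# deviation rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
honest = rng.normal(loc=1.0, scale=0.1, size=(40, 2))     # mean/var of reports
attackers = rng.normal(loc=2.0, scale=0.1, size=(10, 2))  # inflated reports
reports = np.vstack([honest, attackers])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(reports)
global_center = reports.mean(axis=0)

for k, center in enumerate(km.cluster_centers_):
    deviation = np.linalg.norm(center - global_center)
    verdict = "unreliable" if deviation > 0.5 else "reliable"
    print(f"cluster {k}: center={center.round(2)} -> {verdict}")
```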
With the rapid development of artificial intelligence (AI), many companies are moving towards automating their services using automated conversational agents. Dialogue-based conversational recommender agents, in particular, have gained much attention recently. The successful development of such systems with natural language input is conditioned on the ability to understand the users' utterances. Predicting the users' intents allows the system to adjust its dialogue strategy and gradually refine its preference profile. Nevertheless, little work has investigated this problem so far. This paper proposes an LSTM-based neural network model and compares its performance to seven baseline machine learning (ML) classifiers. Experiments on a new publicly available dataset revealed the superiority of the LSTM model, with 95% accuracy and a 94% F1-score on the full dataset, despite the relatively small dataset size (9300 messages and 17 intents) and label imbalance.
Authored by Mourad Jbene, Smail Tigani, Rachid Saadane, Abdellah Chehri
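A minimal PyTorch sketch of the kind of LSTM intent classifier the abstract evaluates: token embeddings, an LSTM encoder, and a softmax over the 17 intents. Vocabulary size and layer dimensions are illustrative assumptions:

```python
# Minimal sketch: LSTM-based intent classifier for user utterances.
# Vocabulary size and layer dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class IntentLSTM(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden=128, n_intents=17):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_intents)

    def forward(self, token_ids):        # (batch, seq_len) integer tokens
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)     # final hidden state summarizes text
        return self.head(h_n[-1])        # (batch, n_intents) logits

model = IntentLSTM()
batch = torch.randint(1, 5000, (8, 20))  # 8 tokenized utterances, length 20
logits = model(batch)                    # train with CrossEntropyLoss; class
print(logits.shape)                      # weighting can offset label imbalance
```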
Intrusion detection for the Controller Area Network (CAN) protocol requires modern methods in order to compete with other electrical architectures. Fingerprint Intrusion Detection Systems (IDS) provide a promising new approach to this problem: by characterizing network traffic from known ECUs, hazardous messages can be discriminated. In this article, a modified version of a fingerprint IDS is employed, utilizing both step-response and spectral characterization of network traffic via neural network training. With the addition of feature set reduction and hyperparameter tuning, this method accomplishes a 99.4% detection rate of trusted ECU traffic.
Authored by Kunaal Verma, Mansi Girdhar, Azeem Hafeez, Selim Awad
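A minimal sketch of the spectral-characterization half of such a fingerprint IDS: FFT magnitude features extracted from sampled signal edges, with a small neural network identifying the transmitting ECU. Synthetic waveforms stand in for sampled CAN transceiver edges, and the classifier shape is an illustrative assumption:

```python
# Minimal sketch: spectral fingerprinting of ECU signals. Synthetic
# waveforms stand in for sampled CAN transceiver edges.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def synth_ecu_waveform(ringing_hz, n=256, fs=10_000.0):
    """Toy step response: each ECU rings at a slightly different frequency."""
    t = np.arange(n) / fs
    return (1 - np.exp(-t * 2000) * np.cos(2 * np.pi * ringing_hz * t)
            + rng.normal(scale=0.02, size=n))

def spectral_features(wave):
    return np.abs(np.fft.rfft(wave))[:64]   # low-band FFT magnitudes

ecus = {0: 800.0, 1: 950.0, 2: 1100.0}      # ECU id -> ringing frequency
X = np.array([spectral_features(synth_ecu_waveform(f))
              for ecu, f in ecus.items() for _ in range(200)])
y = np.repeat(list(ecus), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
print(f"trusted-ECU identification accuracy: {clf.score(X_te, y_te):.3f}")
```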