A rapidly expanding area of automated-AI research focuses on the prediction and prevention of cyber-attacks using machine learning algorithms. In this study, we examined the research on applying machine learning algorithms to the problems of strategic cyber defense and attack forecasting. We also provided a technique for assessing and choosing the best machine learning models for anticipating cyber-attacks. Our findings show that machine learning methods, especially random forest and neural network models, are very accurate in predicting cyber-attacks. Additionally, we discovered a number of crucial characteristics, such as source IP, packet size, and malicious traffic, that are strongly associated with the likelihood of cyber-attacks. Our results imply that automated AI research on cyber-attack prediction and security planning has tremendous promise for enhancing cyber-security and averting cyber-attacks.
Authored by Ravikiran Madala, N. Vijayakumar, Nandini N, Shanti Verma, Samidha Chandvekar, Devesh Singh
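A minimal sketch of the kind of model comparison described above, assuming a hypothetical flow-level CSV (network_flows.csv) with placeholder feature columns (src_ip_entropy, packet_size, malicious_traffic_ratio) and a binary attack label; none of these names come from the paper itself.

```python
# Sketch: comparing candidate models for cyber-attack prediction.
# File name and feature columns are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("network_flows.csv")          # hypothetical flow-level dataset
X = df[["src_ip_entropy", "packet_size", "malicious_traffic_ratio"]]
y = df["attack"]                               # 1 = attack, 0 = benign

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```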
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
With increased computational efficiency, deep neural networks have gained importance in the area of medical diagnosis. Many researchers have noticed the security concerns of the various deep neural network models used in clinical applications. However, an otherwise efficient model frequently misbehaves when confronted with intentionally modified data samples, called adversarial examples. These adversarial examples are generated with imperceptible perturbations but can fool DNNs into giving false predictions. Thus, adversarial attacks and defense methods have attracted attention from both the AI and security communities and have become a hot research topic lately. Adversarial attacks can be expected in various applications of deep learning models, especially in healthcare, for disease prediction or classification. They must be properly handled with effective defensive mechanisms; otherwise they may pose a great threat to human life. This literature survey helps the reader understand various adversarial attacks and defensive mechanisms. In the field of clinical analysis, this paper gives a detailed review of adversarial approaches to deep neural networks. It starts with the theoretical foundations, various techniques, and uses of adversarial attack strategies. The contributions of various researchers toward defensive mechanisms against adversarial attacks are also discussed. A few open issues and difficulties are then discussed, which may prompt further research efforts.
Authored by K Priya V, Peter Dinesh
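The imperceptible perturbations discussed above are commonly illustrated with the fast gradient sign method (FGSM); the sketch below is a generic PyTorch implementation of that attack, not code from any of the surveyed papers.

```python
# Minimal FGSM sketch (PyTorch): perturb an input in the direction of the
# sign of the loss gradient so a classifier is fooled while the change
# stays visually small.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step along the gradient sign and keep pixel values in a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```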
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. Such wrong predictions can be provoked by so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectifications: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, at the cost of reduced accuracy on ordinary samples, while passive defense methods aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setup of autonomous driving: when an input has to be classified, we can neither discard the input nor afford the time for computationally expensive corrections. This motivates our method based on explainable artificial intelligence (XAI) for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples. We experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
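The paper's exact correction procedure is not reproduced here; the sketch below only illustrates the general idea of using a gradient-based interpretation (saliency) to locate and smooth the input regions that drive a suspicious prediction before re-classifying, under the assumption of a single-image batch.

```python
# Illustrative sketch: use input-gradient saliency to find the pixels that
# most influence the predicted class, smooth them, and re-classify.
# This is a generic XAI-style correction, not the authors' method.
import torch
import torch.nn.functional as F

def saliency_correct(model, x, quantile=0.9):
    # x: [1, C, H, W] single image
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    score = logits[0, logits[0].argmax()]
    score.backward()
    saliency = x.grad.abs().mean(dim=1, keepdim=True)       # per-pixel importance
    mask = saliency > torch.quantile(saliency, quantile)     # most salient region
    blurred = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    corrected = torch.where(mask, blurred, x).detach()       # smooth salient pixels
    return model(corrected).argmax(dim=1)                    # corrected prediction
```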
The Internet of Things (IoT) heralds an innovative generation in communication by enabling everyday devices to send, receive, and share data easily. IoT applications, which prioritise task automation, aim to give inanimate objects autonomy; they promise increased comfort, productivity, and automation. However, robust security, privacy, authentication, and recovery methods are required to realise this goal. In order to build end-to-end secure IoT environments, this article meticulously reviews the security issues and risks inherent to IoT applications. It emphasises the critical necessity for architectural changes. The paper begins by examining security concerns before exploring emerging and advanced technologies aimed at nurturing a sense of trust in Internet of Things (IoT) applications. The primary focus of the discussion is how these technologies help overcome security challenges and foster a trustworthy ecosystem for IoT.
Authored by Pranav A, Sathya S, HariHaran B
Developing network intrusion detection systems (IDS) presents significant challenges due to the evolving nature of threats and the diverse range of network applications. Existing IDSs often struggle to detect dynamic attack patterns and covert attacks, leading to misidentified network vulnerabilities and degraded system performance. These requirements must be met via dependable, scalable, effective, and adaptable IDS designs. Our IDS can recognise and classify complex network threats by combining the Deep Q-Network (DQN) algorithm with distributed agents and attention techniques. Our proposed distributed multi-agent IDS architecture has many advantages for guiding an all-encompassing security approach, including scalability, fault tolerance, and multi-view analysis. We conducted experiments using industry-standard datasets, including NSL-KDD and CICIDS2017, to determine how well our model performed. The results show that our IDS outperforms others in terms of accuracy, precision, recall, F1-score, and false-positive rate. Additionally, we evaluated our model's resistance to black-box adversarial attacks, which are commonly used to take advantage of flaws in machine learning. Under these difficult circumstances, our model performed quite well. We used a denoising autoencoder (DAE) for further model strengthening to improve the IDS's robustness. Lastly, we evaluated the effectiveness of our zero-day defenses, which are designed to mitigate attacks exploiting unknown vulnerabilities. Through our research, we have developed an advanced IDS solution that addresses the limitations of traditional approaches. Our model demonstrates superior performance, robustness against adversarial attacks, and effective zero-day defenses. By combining deep reinforcement learning, distributed agents, attention techniques, and other enhancements, we provide a reliable and comprehensive solution for network security.
Authored by Malika Malik, Kamaljit Saini
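A minimal denoising autoencoder (DAE) sketch in PyTorch, illustrating the hardening step mentioned above; the layer sizes, noise level, and training loop are assumptions rather than the paper's configuration.

```python
# Denoising autoencoder sketch: train on clean flow features corrupted with
# Gaussian noise so the network learns to project noisy/adversarial inputs
# back toward the clean manifold before they reach the IDS classifier.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(dae, clean_batches, noise_std=0.1, epochs=10):
    opt, loss_fn = torch.optim.Adam(dae.parameters(), lr=1e-3), nn.MSELoss()
    for _ in range(epochs):
        for x in clean_batches:                        # x: [batch, n_features]
            noisy = x + noise_std * torch.randn_like(x)
            loss = loss_fn(dae(noisy), x)              # reconstruct the clean input
            opt.zero_grad(); loss.backward(); opt.step()
```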
Cloud computing (CC) is vulnerable to existing information technology attacks, since it extends and utilizes information technology infrastructure, applications and typical operating systems. In this manuscript, an Enhanced Capsule Generative Adversarial Network (ECGAN) with a blockchain-based Proof of Authority consensus procedure fostered intrusion detection (ID) system is proposed for enhancing cyber security in CC. The data are collected via the NSL-KDD benchmark dataset. The input data is fed to the proposed Z-score normalization process to eliminate redundancy, including missing values. The pre-processing output is fed to feature selection, during which the optimum features are extracted on the basis of univariate ensemble feature selection (UEFS). Based on the optimum features, the data are classified as normal and anomalous utilizing enhanced capsule generative adversarial networks. Subsequently, a blockchain-based Proof of Authority (POA) consensus process is proposed for improving the cyber security of the data in the cloud computing environment. The proposed ECGAN-BC-POA-IDS method is executed in Python and the performance metrics are calculated. The proposed approach has attained 33.7%, 25.7%, 21.4% improved accuracy, 24.6%, 35.6%, 38.9% lower attack detection time, and 23.8%, 18.9%, 15.78% lower delay than the existing methods, like Artificial Neural Network (ANN) with blockchain framework, Integrated Architecture with Byzantine Fault Tolerance consensus, and Blockchain Random Neural Network (RNN-BC), respectively.
Authored by Ravi Kanth, Prem Jacob
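A sketch of the two pre-processing steps named above (Z-score normalization followed by univariate feature selection) using scikit-learn stand-ins; SelectKBest with ANOVA F-scores is an assumed substitute for the paper's univariate ensemble feature selection (UEFS), and k is an arbitrary choice.

```python
# Sketch: z-score normalization followed by univariate feature selection on
# NSL-KDD-style tabular data. SelectKBest is only a stand-in for UEFS.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif

def preprocess(X, y, k=20):
    X = StandardScaler().fit_transform(X)            # z-score each feature
    selector = SelectKBest(score_func=f_classif, k=k)
    X_sel = selector.fit_transform(X, y)             # keep k highest-scoring features
    return X_sel, selector.get_support(indices=True)
```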
Information security construction is a social issue, and the most urgent task is to do an excellent job in information risk assessment. The Bayesian neural network currently plays a vital role in enterprise information security risk assessment; it overcomes the subjective defects of traditional assessment results and operates efficiently. The risk quantification method based on fuzzy theory and a Bayesian-regularized BP neural network mainly uses fuzzy theory to process the original data and uses the processed data as the input to the neural network, which can effectively reduce the ambiguity of linguistic descriptions. At the same time, special neural network training is carried out to address the problem that the neural network easily falls into local optima. Finally, the risk is verified and quantified through experimental simulation. This paper mainly discusses the problem of enterprise information security risk assessment based on a Bayesian neural network, hoping to provide strong technical support for enterprises and organizations to carry out risk rectification plans. Therefore, the above method provides a new idea for information security risk assessment.
Authored by Zijie Deng, Guocong Feng, Qingshui Huang, Hong Zou, Jiafa Zhang
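A small sketch of the fuzzy pre-processing idea described above: linguistic risk ratings are mapped to triangular fuzzy numbers and defuzzified into crisp values that can feed the neural network. The membership values and the centroid defuzzification rule are illustrative assumptions, not the paper's specification.

```python
# Sketch: map linguistic risk ratings to triangular fuzzy numbers and
# defuzzify them (centroid of a triangle = mean of its three vertices) so
# crisp values can be fed to a BP neural network. Values are illustrative.
TRIANGULAR = {                     # (low, mode, high) on a 0-1 risk scale
    "low":    (0.0, 0.1, 0.3),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.7, 0.9, 1.0),
}

def defuzzify(rating: str) -> float:
    low, mode, high = TRIANGULAR[rating]
    return (low + mode + high) / 3.0      # centroid of the triangular number

inputs = [defuzzify(r) for r in ["low", "high", "medium"]]  # NN input vector
```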
Cyber threats have been a major issue in the cyber security domain. Every hacker follows a series of cyber-attack stages known as cyber kill chain stages. Each stage has its norms and limitations to be deployed. For a decade, researchers have focused on detecting these attacks. Mere watcher tools are no longer optimal solutions. Everything is becoming autonomous in the computer science field. This leads to the idea of an Autonomous Cyber Resilience Defense algorithm design in this work. Resilience has two aspects: response and recovery. Response requires some actions to be performed to mitigate attacks. Recovery is patching the flawed code or back-door vulnerability. Both aspects have traditionally been performed with human assistance in the cybersecurity defense field. This work aims to develop an algorithm based on Reinforcement Learning (RL) with a Convolutional Neural Network (CNN), far nearer to the human learning process, for malware images. RL learns through a reward mechanism against every performed attack. Every action has some kind of output that can be classified into positive or negative rewards. To enhance its decision process, a Markov Decision Process (MDP) is integrated with this RL approach. RL impact and induction measures for malware images were measured and performed to get optimal results. Based on the Malimg malware image dataset, successful automated actions are achieved. The proposed work has shown 98% accuracy in classification, detection, and autonomous resilience action deployment.
Authored by Kainat Rizwan, Mudassar Ahmad, Muhammad Habib
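A toy Q-learning update loop illustrating the reward mechanism described above: each defensive action earns a positive or negative reward and the value table is updated accordingly. The states, actions, and reward values are placeholders, not the paper's design.

```python
# Toy Q-learning sketch of the reward mechanism: each defensive action taken
# against a detected malware class earns a positive or negative reward and
# updates the Q-table. All quantities here are placeholders.
import numpy as np

n_states, n_actions = 25, 4          # e.g. malware classes x defensive actions
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9              # learning rate, discount factor

def q_update(state, action, reward, next_state):
    best_next = Q[next_state].max()
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])

# Example: action 2 in state 3 mitigated the attack -> positive reward.
q_update(state=3, action=2, reward=+1.0, next_state=7)
```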
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) use AI-powered packet-level network traffic analysis and known intrusion patterns to signal an attack. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also serve to hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks. However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
Anomaly detection is a challenge well-suited to machine learning and in the context of information security, the benefits of unsupervised solutions show significant promise. Recent attention to Graph Neural Networks (GNNs) has provided an innovative approach to learn from attributed graphs. Using a GNN encoder-decoder architecture, anomalous edges between nodes can be detected during the reconstruction phase. The aim of this research is to determine whether an unsupervised GNN model can detect anomalous network connections in a static, attributed network. Network logs were collected from four corporate networks and one artificial network using endpoint monitoring tools. A GNN-based anomaly detection system was designed and employed to score and rank anomalous connections between hosts. The model was validated against four realistic experimental scenarios against the four large corporate networks and the smaller artificial network environment. Although quantitative metrics were affected by factors including the scale of the network, qualitative assessments indicated that anomalies from all scenarios were detected. The false positives across each scenario indicate that this model in its current form is useful as an initial triage, though would require further improvement to become a performant detector. This research serves as a promising step for advancing this methodology in detecting anomalous network connections. Future work to improve results includes narrowing the scope of detection to specific threat types and a further focus on feature engineering and selection.
Authored by Charlie Grimshaw, Brian Lachine, Taylor Perkins, Emilie Coote
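A condensed sketch of the reconstruction-based scoring step described above: node embeddings from an encoder are decoded with an inner product, and edges whose reconstructed probability is low are ranked as anomalous. The encoder here is an unspecified stand-in, not the authors' GNN architecture.

```python
# Sketch: rank edges as anomalous by how poorly an inner-product decoder
# reconstructs them from node embeddings. `embed` stands in for the output
# of any GNN encoder; lower reconstructed probability => more anomalous.
import torch

def score_edges(embed: torch.Tensor, edge_index: torch.Tensor):
    # embed: [num_nodes, dim] node embeddings, edge_index: [2, num_edges]
    src, dst = edge_index
    logits = (embed[src] * embed[dst]).sum(dim=1)     # inner-product decoder
    recon_prob = torch.sigmoid(logits)
    anomaly_score = 1.0 - recon_prob                  # poorly reconstructed edges
    return anomaly_score.argsort(descending=True)     # most anomalous first
```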
Device recognition is the primary step toward a secure IoT system. However, existing device recognition technology often faces the problems of indistinct data characteristics and insufficient training samples, resulting in a low recognition rate. To address this problem, a convolutional neural network-based IoT device recognition method is proposed. We first extract the background icons of various IoT devices from the Internet, then use the ResNet50 neural network to extract icon feature vectors to build an IoT icon library, and realize accurate identification of device types through image retrieval. The experimental results show that the accuracy rate of sampling retrieval within the icon library can reach 98.5%, and the recognition accuracy rate outside the library can reach 83.3%, which can effectively identify the type of IoT devices.
Authored by Minghao Lu, Linghui Li, Yali Gao, Xiaoyong Li
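A sketch of the retrieval step described above: ResNet50 (from torchvision) is used as a fixed feature extractor, and device types are identified by cosine similarity against a pre-built icon feature library. The library construction and preprocessing are assumed, not reproduced from the paper.

```python
# Sketch: extract a ResNet50 feature vector for a device icon and retrieve
# the closest entry in a pre-built icon feature library by cosine similarity.
import torch
import torch.nn.functional as F
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()          # drop the classifier, keep 2048-d features
resnet.eval()

@torch.no_grad()
def identify(icon_batch, library_feats, library_labels):
    # icon_batch: [N, 3, 224, 224] preprocessed icons; library_feats: [M, 2048]
    query = F.normalize(resnet(icon_batch), dim=1)
    library = F.normalize(library_feats, dim=1)
    sims = query @ library.T                          # cosine similarity matrix
    return [library_labels[int(i)] for i in sims.argmax(dim=1)]
```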
IBMD (Intelligent Behavior-Based Malware Detection) aims to detect and mitigate malicious activities in cloud computing environments by analyzing the behavior of cloud resources, such as virtual machines, containers, and applications. The system uses different machine learning methods, such as deep learning and artificial neural networks, to analyze the behavior of cloud resources and detect anomalies that may indicate malicious activity. The IBMD system can also monitor and accumulate data from various sources, such as network traffic and system logs, to provide a comprehensive view of the behavior of cloud resources. IBMD is designed to operate in a cloud computing environment, taking advantage of the scalability and flexibility of the cloud to detect malware and respond to security incidents. The system can also be integrated with existing security tools and services, such as firewalls and intrusion detection systems, to provide a comprehensive security solution for cloud computing environments.
Authored by Jibu Samuel, Mahima Jacob, Melvin Roy, Sayoojya M, Anu Joy
With the rapid development of science and technology, information security issues have been attracting more attention. According to statistics, tens of millions of computers around the world are infected by malicious software (malware) every year, causing losses of up to several billion USD. Malware uses various methods to invade computer systems, including viruses, worms, and Trojan horses, and exploits network vulnerabilities for intrusion. Most intrusion detection approaches employ behavioral analysis techniques to analyze malware threats with packet collection and filtering, feature engineering, and attribute comparison. These approaches have difficulty differentiating malicious traffic from legitimate traffic. Malware detection and classification are therefore conducted with deep learning and graph neural networks (GNNs) to learn the characteristics of malware. In this study, a GNN-based model is proposed for malware detection and classification on a renewable energy management platform. It uses a GNN to analyze Cuckoo Sandbox malware records for malware detection and classification. To evaluate the effectiveness of the GNN-based model, the CIC-AndMal2017 dataset is used to examine its accuracy, precision, recall, and ROC curve. Experimental results show that the GNN-based model achieves better results.
Authored by Hsiao-Chung Lin, Ping Wang, Wen-Hui Lin, Yu-Hsiang Lin, Jia-Hong Chen
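A single graph-convolution layer in plain PyTorch, sketching how node features from a behaviour graph (for example, one built from sandbox API-call transitions) could be aggregated; this is a generic GCN-style layer, not the paper's model, and the adjacency construction is assumed.

```python
# Sketch of one GCN-style propagation step over a malware behaviour graph:
# H' = ReLU(D^-1 (A + I) H W), with A built from (hypothetical) API-call
# transitions recorded by a sandbox. Generic illustration only.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, feats):
        # adj: [N, N] adjacency matrix, feats: [N, in_dim] node features
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (a_hat / deg) @ feats                     # mean-aggregate neighbours
        return torch.relu(self.linear(agg))
```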
Low probability of detection (LPD) has recently emerged as a means to enhance the privacy and security of wireless networks. Unlike existing wireless security techniques, LPD measures aim to conceal the entire existence of wireless communication instead of safeguarding the information transmitted from users. Motivated by LPD communication, in this paper, we study a privacy-preserving and distributed framework based on graph neural networks to minimise the detectability of a wireless ad-hoc network as a whole and predict an optimal communication region for each node in the wireless network, allowing them to communicate while remaining undetected from external actors. We also demonstrate the effectiveness of the proposed method in terms of two performance measures, i.e., mean absolute error and median absolute error.
Authored by Sivaram Krishnan, Jihong Park, Subhash Sagar, Gregory Sherman, Benjamin Campbell, Jinho Choi
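For reference, the two performance measures reported above, computed on hypothetical prediction values.

```python
# The two reported performance measures, computed on hypothetical values.
import numpy as np

y_true = np.array([0.8, 1.2, 0.5, 0.9])   # e.g. target communication-region values
y_pred = np.array([0.7, 1.1, 0.6, 1.3])

mae = np.mean(np.abs(y_true - y_pred))           # mean absolute error
medae = np.median(np.abs(y_true - y_pred))       # median absolute error
print(mae, medae)
```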
Vehicular Ad Hoc Networks (VANETs) enable every vehicle node to exchange information while driving and traveling along the roadside. A VANET-connected vehicle can send and receive data such as requests for emergency assistance, current traffic conditions, etc. VANET support for vehicle communication is therefore urgently needed. The routing method provides safe routing by repairing the trust-based features of a specific node. When malicious activity is uncovered, intrusion detection systems (IDS) are crucial tools for mitigating the damage. Collaboration between vehicles in a VANET enhances detection precision by spreading information about interactions across their nodes. This makes a distributed machine learning system feasible, scalable, and usable for creating VANET-based cooperative detection techniques. Privacy considerations are a major impediment to collaborative learning due to the data flow between nodes. A malicious node can obtain private details about other nodes by observing them. This study proposes a cooperative IDS for VANETs that safeguards the data generated by machine learning. In the intrusion detection phase, the selected optimal characteristics are used to detect network intrusion via a hybrid Deep Neural Network and Bidirectional Long Short-Term Memory approach. The trust-based routing protocol then performs the intrusion prevention process, stopping the hostile node by selecting the most efficient routing path possible.
Authored by Raghunath Kawale, Ritesh Patil, Lalit Patil
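A compact PyTorch sketch of a hybrid deep neural network and bidirectional LSTM classifier of the kind named above; the layer sizes and feature layout are assumptions, not the authors' configuration.

```python
# Sketch of a hybrid DNN + bidirectional LSTM intrusion classifier: dense
# layers embed each time step's features, a BiLSTM models the sequence, and
# the final states are classified. Dimensions are placeholders.
import torch
import torch.nn as nn

class HybridDnnBiLstm(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.dnn = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: [batch, seq_len, n_features]
        h = self.dnn(x)
        out, _ = self.bilstm(h)
        return self.head(out[:, -1, :])   # classify from the last time step
```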
This paper investigates the output feedback security control problem of switched nonlinear systems (SNSs) against denial-of-service (DoS) attacks. A novel switched observer-based neural network (NN) adaptive control algorithm is established, which guarantees that all the signals in the closed-loop system remain bounded. Note that when a DoS attacker is active in the Sensor-Controller channel, the controller cannot acquire accurate information, which leads to the standard backstepping technique not being workable. A set of NN adaptive switching-like observers is designed to tackle the obstacle for each subsystem. Further, by combining the proposed observer with the backstepping technique, an NN adaptive controller is constructed and the dynamic surface control method is borrowed to surmount the complexity explosion phenomenon. Finally, an illustrative example is provided to demonstrate the effectiveness of the proposed control algorithm.
Authored by Hongzhen Xie, Guangdeng Zong, Dong Yang, Yudi Wang
Frequency hopping (FH) technology is one of the most effective technologies in the field of radio countermeasures, and the recognition of FH signals has become a research hotspot. An FH signal is a typical non-stationary signal whose frequency varies nonlinearly with time, and time-frequency analysis provides a very effective method for processing this kind of signal. With the renaissance of deep learning, methods based on time-frequency analysis and deep learning are widely studied. Although these methods have achieved good results, recognition accuracy still needs to be improved. Through observation of the datasets, we found that there are still samples that are difficult to identify. Through further analysis, we propose a horizontal spatial attention (HSA) block, which can generate a spatial weight vector according to the signal distribution and then readjust the feature map. The HSA block is a plug-and-play module that can be integrated into common convolutional neural networks (CNNs) to further improve their performance, and these networks with the HSA block are collectively called HANets. The HSA block also has the advantages of high recognition accuracy (especially under low SNRs), easy implantation, and almost no influence on the number of parameters. We verified our method on two datasets, and a series of comparative experiments show that the proposed method achieves good results on FH datasets.
Authored by Pengcheng Liu, Zhen Han, Zhixin Shi, Meimei Li, Meichen Liu
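The exact HSA design is not reproduced here; the sketch below shows one plausible reading of a horizontal spatial attention block: pooling the feature map along the channel and vertical axes to obtain a per-column weight vector that rescales the map. The pooling choice and kernel size are assumptions.

```python
# Illustrative horizontal-spatial-attention block (a plausible reading, not
# the authors' exact design): pool over channels and height to obtain one
# weight per horizontal position, then rescale the feature map column-wise.
import torch
import torch.nn as nn

class HorizontalSpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                          # x: [B, C, H, W]
        col = x.mean(dim=(1, 2))                   # [B, W]: per-column statistic
        weights = torch.sigmoid(self.conv(col.unsqueeze(1)))   # [B, 1, W]
        return x * weights.unsqueeze(2)            # broadcast over C and H
```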
Advanced persistent threat (APT) attacks have caused severe damage to many core information infrastructures. To tackle this issue, graph-based methods have been proposed due to their ability to learn complex interaction patterns of network entities with discrete graph snapshots. However, such methods are challenged by the computer networking model characterized by a natural continuous-time dynamic heterogeneous graph. In this paper, we propose a heterogeneous graph neural network based APT detection method in smart grid clouds. Our model has an encoder-decoder structure. The encoder uses heterogeneous temporal memory and attention embedding modules to capture contextual information of interactions of network entities from the time and spatial dimensions respectively. We implement a prototype and conduct extensive experiments on real-world cyber-security datasets with more than 10 million records. Experimental results show that our method achieves superior detection performance compared with state-of-the-art methods.
Authored by Weiyong Yang, Peng Gao, Hao Huang, Xingshen Wei, Haotian Zhang, Zhihao Qu
Understanding dynamic human behavior based on online video has many applications in security control, crime surveillance, sports, and industrial IoT systems. This paper solves the problem of classifying video data recorded on surveillance cameras in order to identify fragments with instances of shoplifting. It is proposed to use a classifier that is a symbiosis of two neural networks: convolutional and recurrent. The convolutional neural network is used for extraction of features from each frame of the video fragment, and the recurrent network for processing the temporal sequence of processed frames and subsequent classification.
Authored by Lyudmyla Kirichenko, Bohdan Sydorenko, Tamara Radivilova, Petro Zinchenko
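A compact sketch of the convolutional-recurrent symbiosis described above: a CNN encodes each frame and an LSTM classifies the frame sequence. The ResNet18 backbone and all dimensions are stand-ins, not the authors' exact networks.

```python
# Sketch of the CNN + RNN symbiosis: a frame-level CNN encoder feeds an LSTM
# that classifies the whole clip (e.g. shoplifting vs. normal behaviour).
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                       # 512-d frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                 # clips: [B, T, 3, H, W]
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1, :])       # classify from the final state
```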
Wearables Security 2022 - Mobile devices such as smartphones are increasingly being used to record personal, delicate, and security information such as images, emails, and payment information due to the growth of wearable computing. It is becoming more vital to employ smartphone sensor-based identification to safeguard this kind of information from unwanted parties. In this study, we propose a sensor-based user identification approach based on individual walking patterns and use the sensors that are pervasively embedded into smartphones to accomplish this. Individuals were identified using a convolutional neural network (CNN). Four data augmentation methods were utilized to produce synthetically more data; these approaches included jittering, scaling, and time-warping. We evaluate the proposed identification model's accuracy, precision, recall, F1-score, FAR, and FRR utilizing a publicly accessible dataset named the UIWADS dataset. As shown by the experiment findings, the CNN with the time-warping approach operates with very high accuracy in user identification, with the lowest false positive rate of 8.80% and the highest accuracy of 92.7%.
Authored by Sakorn Mekruksavanich, Ponnipa Jantawong, Anuchit Jitpattanakul
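Minimal NumPy versions of the augmentations named above (jittering, scaling, and time-warping); the sigma values and the warping scheme are illustrative choices, not the paper's settings.

```python
# Minimal sketches of the sensor-data augmentations named above.
import numpy as np

def jitter(x, sigma=0.05):                 # x: [timesteps, channels]
    return x + np.random.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    factors = np.random.normal(1.0, sigma, (1, x.shape[1]))
    return x * factors                     # per-channel amplitude scaling

def time_warp(x, sigma=0.2):
    t = np.arange(x.shape[0], dtype=float)
    steps = np.clip(np.random.normal(1.0, sigma, x.shape[0]), 0.5, 1.5)
    warped_t = np.cumsum(steps)            # strictly increasing warped timeline
    warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0]) * t[-1]
    return np.stack([np.interp(t, warped_t, x[:, c])
                     for c in range(x.shape[1])], axis=1)
```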
Wearables Security 2022 - One of the biggest new trends in artificial intelligence is the ability to recognise people's movements and take their actions into account. It can be used in a variety of ways, including surveillance, security, human-computer interaction, and content-based video retrieval. A number of researchers have presented vision-based techniques for human activity recognition. Several challenges need to be addressed in the creation of a vision-based human activity recognition system, including illumination variations, interclass similarity between scenes, the environment and recording setting, and temporal variation. To overcome the above-mentioned problems, human actions can instead be captured or sensed with the help of wearable sensors, wearable devices, or IoT devices. Sensor data, particularly one-dimensional time series data, are used in human activity recognition. Using 1D Convolutional Neural Network (CNN) models, this work aims to propose a new approach for identifying human activities. The Wireless Sensor Data Mining (WISDM) dataset is utilised to train and test the 1D-CNN model in this dissertation. The proposed HAR-CNN model achieves an accuracy of 95.2%, which is far higher than that of conventional methods.
Authored by P. Deepan, Santhosh Kumar, B. Rajalingam, Santosh Patra, S. Ponnuthurai
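A minimal 1D-CNN of the kind proposed above, in PyTorch; the channel counts, kernel sizes, and number of classes are assumptions rather than the model described in the work.

```python
# Minimal 1D-CNN sketch for human activity recognition on accelerometer
# windows (WISDM-style [batch, 3, timesteps] input). Sizes are assumptions.
import torch.nn as nn

class Har1dCnn(nn.Module):
    def __init__(self, n_channels=3, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: [batch, 3, timesteps]
        return self.classifier(self.features(x).squeeze(-1))
```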
Privacy Policies - Companies and organizations inform users of how they handle personal data through privacy policies on their websites. Particular information, such as the purposes of collecting personal data and what data are provided to third parties, is required to be disclosed by laws and regulations. An example of such a law is the Act on the Protection of Personal Information in Japan. In addition to privacy policies, an increasing number of companies are publishing security policies to express compliance and transparency of corporate behavior. However, it is challenging to keep these policies up to date against legal requirements due to periodic law revisions and rapid business changes. In this study, we developed a method for analyzing privacy policies to check whether companies comply with legal requirements. In particular, the proposed method classifies policy contents using a convolutional neural network and evaluates privacy compliance by comparing the classification results with legal requirements. In addition, we analyzed security policies using the proposed method to confirm whether the combination of privacy and security policies contributes to privacy compliance. In this study, we collected and evaluated 1,304 privacy policies and 140 security policies of Japanese companies. The results revealed that over 90% of privacy policies sufficiently describe the handling of personal information by first parties, user rights, and security measures, while over 90% insufficiently describe data retention and the specific audience. These differences in the number of descriptions depend on industry guidelines and business characteristics. Moreover, security policies were found to improve the compliance rates of 46 out of 140 companies by describing security practices not included in privacy policies.
Authored by Keika Mori, Tatsuya Nagai, Yuta Takata, Masaki Kamizono
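A sketch of the compliance-evaluation step described above: the categories that the classifier finds in a policy are compared against a set of legally required categories. The category names below are illustrative placeholders, not the paper's taxonomy.

```python
# Sketch of the compliance check: compare the categories the classifier found
# in a policy against the legally required categories (placeholder names).
REQUIRED = {"first_party_collection", "third_party_sharing",
            "data_retention", "user_rights", "security_measures"}

def compliance_report(predicted_categories: set) -> dict:
    missing = REQUIRED - predicted_categories
    return {
        "compliant": not missing,
        "coverage": 1 - len(missing) / len(REQUIRED),
        "missing": sorted(missing),
    }

print(compliance_report({"first_party_collection", "user_rights",
                         "security_measures"}))
```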
Neural Style Transfer - With the development of the economy and society, the problem of product piracy is becoming more and more serious. In order to protect the copyright of brands, this paper proposes an automatic generation algorithm, based on image neural style transfer, for anti-counterfeiting logos with security shading, which increases the difficulty of illegal copying and packaging production. The VGG19 deep neural network is used to extract image features and calculate content response loss and style response loss. Based on the original neural style transfer algorithm, a content loss is added, and the generated security shading is fused with the original binary logo image to generate an anti-counterfeiting logo image with a higher recognition rate. In this paper, the global loss function is composed of the content loss, the content response loss, and the style response loss. The L-BFGS optimization algorithm is used to iteratively reduce the global loss function, and the relationship among the weights of the three losses, the number of iterations, and the generated anti-counterfeiting logo is studied. Keeping the shading style image secret increases the anti-attack ability of the algorithm. The experimental results show that, compared with the original logo, this method can generate distinguishable logo content and complex security shading, converges, and withstands attacks.
Authored by Zhenjie Bao, Chaoyang Liu, Jinqi Chen, Jinwei Su, Yujiao Cao
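A condensed sketch of the overall loss structure used in neural style transfer (a content term plus Gram-matrix style terms) optimized with L-BFGS; the layer choices and weights are assumptions, the feature extractor is an unspecified stand-in for VGG19, and the paper's additional content response term is not reproduced.

```python
# Condensed sketch of the global loss: a content term plus Gram-matrix style
# terms, minimized with L-BFGS over the generated image. `feats_*` stand in
# for lists of VGG19 activations; weights are arbitrary.
import torch

def gram(f):                                   # f: [C, H, W] feature map
    c, h, w = f.shape
    flat = f.view(c, h * w)
    return flat @ flat.T / (c * h * w)

def total_loss(feats_gen, feats_content, feats_style, alpha=1.0, beta=1e3):
    content = torch.nn.functional.mse_loss(feats_gen[-1], feats_content[-1])
    style = sum(torch.nn.functional.mse_loss(gram(g), gram(s))
                for g, s in zip(feats_gen, feats_style))
    return alpha * content + beta * style

# Typical optimization loop: L-BFGS repeatedly re-evaluates a loss closure,
# e.g. optimizer = torch.optim.LBFGS([generated_image.requires_grad_()]).
```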