Current intrusion detection techniques cannot keep up with the increasing volume and complexity of cyber attacks. Moreover, most traffic is now encrypted, which precludes deep packet inspection. In recent years, Machine Learning techniques have been proposed for post-mortem detection of network attacks, and many datasets have been shared by research groups and organizations for training and validation. Unlike most of the related literature, in this paper we propose an early classification approach, conducted on the CSE-CIC-IDS2018 dataset (which contains both benign and malicious traffic), for detecting malicious attacks before they can damage an organization. To this aim, we investigate different sets of features and the sensitivity of the performance of five classification algorithms to the number of observed packets. Results show that ML approaches relying on only ten packets provide satisfactory results.
Authored by Idio Guarino, Giampaolo Bovenzi, Davide Di Monda, Giuseppe Aceto, Domenico Ciuonzo, Antonio Pescapè
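A minimal sketch of the early-classification idea above: build a fixed-size feature vector from only the first ten packets of a flow and feed it to a standard classifier. The features (packet sizes, inter-arrival times, and simple statistics), the random-forest classifier, and the synthetic flows are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch: early traffic classification from the first N packets.
# Features, classifier, and synthetic flows are assumptions, not the authors'
# exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

N_PACKETS = 10  # classify after observing only the first 10 packets of a flow

def flow_features(pkt_sizes, pkt_iats):
    """Fixed-size feature vector from the first N packets of one flow."""
    sizes = np.asarray(pkt_sizes[:N_PACKETS], dtype=float)
    iats = np.asarray(pkt_iats[:N_PACKETS], dtype=float)
    return np.concatenate(
        [sizes, iats, [sizes.mean(), sizes.std(), iats.mean(), iats.std()]])

# Synthetic stand-in for labeled flows (0 = benign, 1 = malicious); malicious
# flows here simply have shorter inter-arrival times.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
X = np.array([
    flow_features(rng.integers(40, 1500, N_PACKETS),
                  rng.exponential(0.01 if y else 0.1, N_PACKETS))
    for y in labels
])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```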
The Manufacturer Usage Description (MUD) standard aims to reduce the attack surface of IoT devices by locking down their behavior to a formally specified set of network flows (access control entries). Formal network behaviors can also be systematically and rigorously verified in any operating environment. Enforcing MUD flows and monitoring their activity in real time can be relatively effective in securing IoT devices; however, its scope is limited to endpoints (domain names and IP addresses) and transport-layer protocols and services. Therefore, misconfigured or compromised IoT devices may conform to their MUD-specified behavior yet exchange unintended (or even malicious) content across those flows. This paper develops PicP-MUD with the aim of profiling the information content of packet payloads (whether unencrypted, encoded, or encrypted) in each MUD flow of an IoT device. That way, tasks like cyber-risk analysis, change detection, and selective deep packet inspection can be performed more systematically. Our contributions are twofold: (1) we analyze over 123K network flows of 6 transparent (e.g., HTTP), 11 encrypted (e.g., TLS), and 7 encoded (e.g., RTP) protocols, collected in our lab and obtained from public datasets, to identify 17 statistical features of their application payload that help us distinguish different content types; and (2) we develop and evaluate PicP-MUD using a machine learning model and show that it achieves an average accuracy of 99% in predicting the content type of a flow.
Authored by Arman Pashamokhtari, Arunan Sivanathan, Ayyoob Hamza, Hassan Gharakheili
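To make the payload-profiling idea concrete, here is a hedged sketch of two statistical payload features of the kind PicP-MUD relies on (byte entropy and printable-byte ratio); the paper's actual 17 features are not reproduced here.

```python
# Illustrative payload statistics in the spirit of PicP-MUD's feature set;
# entropy and printable-byte ratio are examples, not the paper's 17 features.
import math
import os
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of the byte distribution (8.0 bits = random-looking)."""
    if not payload:
        return 0.0
    n = len(payload)
    return -sum(c / n * math.log2(c / n) for c in Counter(payload).values())

def printable_ratio(payload: bytes) -> float:
    """Fraction of printable ASCII bytes (high for transparent protocols)."""
    return sum(32 <= b < 127 for b in payload) / len(payload) if payload else 0.0

http = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
tls_like = os.urandom(2048)  # stands in for an encrypted TLS record

print(f"HTTP:      entropy={byte_entropy(http):.2f} printable={printable_ratio(http):.2f}")
print(f"encrypted: entropy={byte_entropy(tls_like):.2f} printable={printable_ratio(tls_like):.2f}")
```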
Efforts are currently under way in El Salvador to implement digital signatures. This technology requires a Public Key Infrastructure (PKI) with validated Certificate Authorities (CAs). A CA must run software that allows it to manage digital certificates and carry out security procedures for cryptographic operations such as encryption, digital signatures, and non-repudiation of electronic transactions. This work proposes a digital certificate management system that complies with the Digital Signature Law of El Salvador and with secure cryptography standards, and additionally provides a discussion of its security.
Authored by Álvaro Zavala, Leonel Maye
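As a hedged illustration of one core operation such a management system must perform, the sketch below uses the Python cryptography library to generate a self-signed CA certificate; the name and validity period are invented placeholders.

```python
# Minimal sketch of a CA issuing itself a certificate with the `cryptography`
# library; a stand-in for one operation of the proposed management system.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)          # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())  # the CA signs the certificate with its own key
)
print(cert.subject)
```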
A digital signature is an application of asymmetric cryptography used to assure the recipient that a message actually originated with the intended sender. A problem that often arises with conventional paperwork is that a letter requiring approval from an authorized official is important and urgent, yet the signing process is delayed because the official is away; a letter that should be distributed immediately is held up by the signing step. The purpose of this study is to counter eavesdropping and tampering when sending data, using a digital signature to authenticate the data and to minimize forged signatures on letters that were not produced and authorized by the relevant officials, based on digital signatures stored in a database. This research implements the Rivest-Shamir-Adleman (RSA) algorithm in an application that provides authorization, i.e., online signing, with digital signatures. The results show that the RSA algorithm can run in an application with the digital signature method: in ISO 9126 testing by expert examiners, together with questionnaires distributed to users and application operators, the system obtained good results with an average score of 79.81 on the ISO 9126 conversion scale. A further recommendation is to use bcrypt instead of MD5 to make the database more secure.
Authored by Wahyu Widiyanto, Dwi Iskandar, Sri Wulandari, Edy Susena, Edy Susanto
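A minimal sketch of the RSA sign/verify flow applied to a letter, using the Python cryptography library; the PKCS#1 v1.5 padding and 2048-bit key size are assumptions, not the application's actual parameters.

```python
# Hedged sketch of RSA signing and verification of a letter; padding and key
# size are assumptions, not the application's actual configuration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
letter = b"Official letter body"

# The official signs the letter with their private key.
signature = key.sign(letter, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can check authenticity and integrity.
try:
    key.public_key().verify(signature, letter, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: letter is authentic")
except InvalidSignature:
    print("signature invalid: letter was altered or forged")
```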
As the demand for effective information protection grows, security has become the primary concern in protecting data from attackers. Cryptography is one method for safeguarding such information: it stores and distributes data in a specific format that can only be read and processed by the intended recipient, and it offers a variety of security services such as integrity, authentication, confidentiality, and non-repudiation. The confidentiality service is required to prevent disclosure of information to unauthorized parties. In this paper, it is shown that no ideal hash function exists among those used in digital signature schemes.
Authored by Nagaeswari Bodapati, N. Pooja, Amrutha Varshini, Naga Jyothi
This is the age of the internet, and we communicate confidential data over it daily, so it is necessary to verify the authenticity of communication and ensure the non-repudiation of the sender. Digital signatures are used to provide non-repudiation, and many digital signature schemes are available. In every such scheme, however, the original message is sent to the receiver together with its digest, so no protection is applied to the original message itself. In this paper, we propose an algorithm that protects both the original message and its integrity, using the RSA algorithm for encryption and decryption and the SHA-256 algorithm for computing the hash.
Authored by Surendra Chauhan, Nitin Jain, Satish Pandey
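Below is a hedged sketch of the protect-both idea under one plausible construction: the SHA-256 digest is signed with the sender's RSA key, while the message itself is encrypted under the receiver's RSA public key. Padding choices are assumptions, not the paper's specification.

```python
# Sketch of one way to protect both message and digest: sign the SHA-256
# digest with the sender's key, encrypt the message for the receiver (OAEP).
# Padding schemes and key sizes are assumptions.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"confidential data"
digest = hashlib.sha256(message).digest()

# Sender: sign the digest, then encrypt the message for the receiver.
signature = sender.sign(digest, padding.PKCS1v15(), hashes.SHA256())
ciphertext = receiver.public_key().encrypt(message, oaep)

# Receiver: decrypt, recompute the digest, and verify the signature.
plaintext = receiver.decrypt(ciphertext, oaep)
sender.public_key().verify(signature, hashlib.sha256(plaintext).digest(),
                           padding.PKCS1v15(), hashes.SHA256())
print("message recovered and signature verified")
```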
This research investigates efficient architectures for implementing the CRYSTALS-Dilithium post-quantum digital signature scheme on reconfigurable hardware, in terms of speed, memory usage, power consumption, and resource utilisation. Post-quantum digital signature schemes involve significant computational effort, making efficient hardware accelerators an important contributor to their future adoption. This is work in progress, comprising the establishment of a comprehensive test environment for operational profiling and the investigation of novel architectures to achieve optimal performance.
Authored by Donal Campbell, Ciara Rafferty, Ayesha Khalid, Maire O'Neill
The future growth of the MANET architecture will make extensive use of encryption and authentication to keep network participants safe. Using a digital-signature node ID, we illustrate how the secure growth of subjective clusters can be stimulated while simultaneously addressing security and energy-efficiency concerns. The dynamic topology of a MANET allows nodes to join and leave at any time, which was exploited here through a black hole attack: in a MATLAB R2012a simulation, an attacker posed as the node offering the shortest path with the least energy consumption in order to intercept messages, and the proposed scheme (DSEP) used digital-signature IDs to authenticate nodes and counter this. Keywords associated with this work include Digital Signature, MANET, AODV, Black Hole Attack, Single Black Hole Attack, and DSEP.
Authored by Sunil Gupta, Mohammad Shahid, Ankur Goyal, Rakesh Saxena, Kamal Saluja
Being web-based, EJBCA digital signature servers may have vulnerabilities; a list of web vulnerabilities can be found in the OWASP Top 10 2021. Anticipating attacks with an effective and efficient forensics application is therefore necessary. The concept of digital forensic readiness can be applied as a pre-incident plan, with a digital forensic lifecycle pipeline establishing an efficient forensic process. Managing digital evidence in the pre-incident plan includes data collection, examination, analysis, and a findings report. Based on this concept, we designed an information system that carries out the entire flow: it collects attack evidence, visualizes attack statistics in an executive summary, recommends mitigations, and generates a forensic report in physical form when needed. This research offers an information system that supports the digital forensic process and maintains the integrity of the web-based EJBCA digital signature server.
Authored by Ihsan Rasyid, Luqman Zagi, Suhardi
This paper presents the concept of combining digital signature technology with the currently trending blockchain technology to provide a mechanism that detects dubious data and stores it securely for the long term. The features of blockchain technology complement the requirements of today's educational field: the growing use of digital certificates makes it easier for a dubious certificate to exist among genuine ones, hampering the integrity of professional life. Associating a hash and a timestamp with a digital document ensures that a third party cannot corrupt the certificate, and the blockchain ensures that, after verification, nobody can misuse the uploaded data, keeping it safe for a long time. The information on the blockchain can be retrieved at any moment by the user via the unique ID associated with every user.
Authored by Adeeba Habeeb, Vinod Shukla, Suchi Dubey, Shaista Anwar
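A toy sketch of the hash-plus-timestamp mechanism described above: each record stores the SHA-256 of a certificate, a timestamp, and the previous record's hash, so tampering with any entry breaks every later link. This is a conceptual stand-in, not the paper's blockchain.

```python
# Toy hash chain: certificate records linked by SHA-256, so tampering with
# any entry invalidates the chain. Not a full blockchain implementation.
import hashlib
import json
import time

def record(document: bytes, prev_hash: str) -> dict:
    entry = {
        "doc_hash": hashlib.sha256(document).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

chain = [record(b"certificate for Alice", "0" * 64)]
chain.append(record(b"certificate for Bob", chain[-1]["entry_hash"]))

def verify(chain) -> bool:
    """Check that every record points at the previous record's hash."""
    return all(cur["prev_hash"] == prev["entry_hash"]
               for prev, cur in zip(chain, chain[1:]))

print("chain intact:", verify(chain))
```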
The rapid development of technology makes it easier for everyone to exchange information and knowledge, but exchanging information via the internet comes with security threats. Security issues, especially the confidentiality of information and its authenticity, are vital and must be protected, particularly for agencies that often hold events issuing certificates in digital form to participants. Digital certificates are files conventionally used as proof of participation or a token of appreciation. Certificates therefore need a protection technology known as cryptography. This study aims to validate and authenticate digital certificates with digital signatures using SHA-256, DSA, and 3DES. The SHA-256 hash function is used in line with the DSA method, and the 3DES implementation uses two private keys, so the security of digital certificate files can be increased. In the MSE measurements, pixel changes range from a lowest value of 7.4510 to a highest value of 165.0561 when the file is manipulated, showing that the security of the proposed method is maintained because only the original file is valid.
Authored by Bagas Yulianto, Budi Handoko, Eko Rachmawanto, Pujiono, Arief Soeleman
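The sketch below illustrates the signature half of the proposed pipeline (DSA with SHA-256) using the Python cryptography library; the 3DES layer with two private keys is omitted, and the certificate bytes are placeholders.

```python
# Sketch of DSA-with-SHA-256 signing and verification of a certificate file;
# the paper's 3DES layer is omitted, and the file contents are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

key = dsa.generate_private_key(key_size=2048)
certificate_file = b"digital certificate bytes"

signature = key.sign(certificate_file, hashes.SHA256())

# Only the original file verifies; any manipulation is rejected.
tampered = certificate_file + b"!"
for data in (certificate_file, tampered):
    try:
        key.public_key().verify(signature, data, hashes.SHA256())
        print("valid: original file")
    except InvalidSignature:
        print("invalid: file was manipulated")
```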
Parking vehicles has recently become a problem, and smart parking systems have been proposed to solve it. Most smart parking systems are centralized, and such systems are at risk of a single point of failure that can affect the whole system. To overcome this weakness, the mechanism most often proposed by researchers is blockchain. However, if the blockchain has no mechanism to verify the authenticity of every transaction, the system is not secure against impersonation attacks. This study combines a blockchain mechanism with a Ring Learning With Errors (RLWE) based digital signature to secure the scheme against impersonation and double-spending attacks. RLWE, first proposed by Lyubashevsky et al., is a development of the earlier Learning With Errors (LWE) problem.
Authored by Jihan Atiqoh, Ari Barmawi, Farah Afianti
The rise of social media has brought the rise of fake news, and this fake news comes with negative consequences. With fake news being such a huge issue, efforts should be made to identify all forms of it; however, this is not simple. Manually identifying fake news is highly subjective, as determining the accuracy of the information in a story is complex and difficult even for experts. On the other hand, an automated solution requires a good command of NLP, which is also complex and may have difficulty producing accurate output. Therefore, the main problem addressed by this project is the viability of developing a system that can effectively and accurately detect and identify fake news. A solution would significantly benefit the media industry, particularly social media, where a large proportion of fake news is published and spread. To this end, this project developed a fake news identification system using deep learning and natural language processing. The system combines a Word2vec model with a Long Short-Term Memory (LSTM) model to showcase the compatibility of the two models in a single system. It was trained and tested on two dataset collections, each consisting of one real-news dataset and one fake-news dataset. Furthermore, three independent variables were chosen (the number of training cycles, data diversity, and vector size) to analyze their relationship with the accuracy of the system, and all three were found to have a significant effect. The system was then trained and tested with the optimal settings and achieved the minimum expected accuracy level of 90%. Reaching this accuracy confirms the compatibility of the LSTM and Word2vec models and their capability to be combined into a single system that identifies fake news with a high level of accuracy.
Authored by Anand Matheven, Burra Kumar
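A minimal sketch of the Word2vec + LSTM pairing on a two-document toy corpus, using gensim and Keras; the vector size, sequence length, corpus, and training settings are stand-ins, not the project's configuration.

```python
# Toy sketch of Word2vec embeddings feeding an LSTM classifier; all settings
# are stand-ins, not the project's configuration.
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import Sequential, layers
from tensorflow.keras.initializers import Constant

corpus = [["officials", "confirm", "report"], ["aliens", "endorse", "candidate"]]
labels = np.array([0, 1])  # 0 = real, 1 = fake

w2v = Word2Vec(corpus, vector_size=32, min_count=1, epochs=50)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = padding

# Embedding matrix initialized from the trained Word2vec vectors.
emb = np.zeros((len(vocab) + 1, 32))
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

X = np.array([[vocab[w] for w in doc] for doc in corpus])

model = Sequential([
    layers.Embedding(len(vocab) + 1, 32,
                     embeddings_initializer=Constant(emb), trainable=False),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)
print(model.predict(X, verbose=0).round(2))
```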
People interact with a plethora of information from many online portals owing to the availability and ease of access of the internet and electronic communication devices. However, news portals sometimes abuse press freedom by manipulating facts, and most of the time people are unable to discriminate between true and false news. It is difficult to prevent the detrimental impact of Bangla fake news, which spreads quickly through online channels and influences people's judgment. In this work, we investigated many real and fake news pieces in Bangla to discover a common pattern for determining whether an article is disseminating incorrect information. We developed a deep learning model that was trained and validated on our selected dataset, which contains 48,678 legitimate news items and 1,299 fraudulent ones. To deal with the imbalanced data, we used random undersampling and then an ensemble to obtain the combined output. In Bangla text processing, our proposed model achieved an accuracy of 98.29% and a recall of 99%.
Authored by Md. Rahman, Faisal Bin Ashraf, Md. Kabir
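A hedged sketch of the imbalance-handling strategy described above: draw several balanced subsets by randomly undersampling the majority class, train one model per subset, and combine them by vote. The logistic-regression members and synthetic data stand in for the paper's deep model.

```python
# Random undersampling + ensemble sketch for heavily imbalanced data; the
# classifier and data are stand-ins for the paper's deep learning model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (rng.random(1000) < 0.03).astype(int)  # ~3% minority "fake" class

minority = np.where(y == 1)[0]
majority = np.where(y == 0)[0]

models = []
for seed in range(5):  # five balanced subsets -> five ensemble members
    sub = np.random.default_rng(seed).choice(
        majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, sub])
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

# Majority vote over the ensemble members.
votes = np.mean([m.predict(X) for m in models], axis=0)
y_pred = (votes >= 0.5).astype(int)
print("recall on minority class:", (y_pred[minority] == 1).mean())
```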
Recently, social networks have become more popular owing to their capability of connecting people globally and sharing videos, images, and various other types of data. A major security issue in social media is the existence of fake accounts, which are frequently utilized by mischievous users and entities to falsify, distribute, and duplicate fake news and publicity. As fake news has serious consequences, numerous research works have focused on the design of automated fake-account and fake-news detection models. In this vein, this study designs a hyperparameter-tuned deep learning based automated fake news detection (HDL-FND) technique. The presented HDL-FND technique accomplishes effective detection and classification of fake news through a three-stage process of preprocessing, feature extraction, and Bi-Directional Long Short-Term Memory (BiLSTM) based classification. To demonstrate the promising performance of the HDL-FND technique, a series of experiments was performed on the publicly available Kaggle dataset. The experimental outcomes show that the HDL-FND technique outperforms recent approaches in terms of diverse measures.
Authored by N. Kanagavalli, Baghavathi Priya, Jeyakumar D
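The sketch below shows a BiLSTM classification stage with a toy hyperparameter sweep; the layer sizes, data, and one-dimensional search grid are illustrative, not the tuned HDL-FND values.

```python
# BiLSTM classifier with a toy hyperparameter sweep; sizes and grid are
# illustrative, not the tuned HDL-FND configuration.
import numpy as np
from tensorflow.keras import Sequential, layers

X = np.random.randint(1, 500, size=(64, 20))  # toy token-id sequences
y = np.random.randint(0, 2, size=64)          # toy fake/real labels

def build(units):
    model = Sequential([
        layers.Embedding(500, 32),
        layers.Bidirectional(layers.LSTM(units)),  # reads sequences both ways
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy sweep over one hyperparameter; a real tuner would also vary dropout,
# learning rate, embedding size, etc.
for units in (16, 32):
    hist = build(units).fit(X, y, validation_split=0.25, epochs=2, verbose=0)
    print(units, "val_acc:", hist.history["val_accuracy"][-1])
```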
Deep learning has a variety of applications in different fields such as computer vision, automated self-driving cars, and natural language processing. One adversarial deep learning architecture changed the fundamentals of data manipulation: the inception of the Generative Adversarial Network (GAN) in the computer vision domain drastically changed the way we view and manipulate data. This manipulation of data using GANs has found application in various malicious activities, such as creating fake images, swapped videos, and forged documents. These generative models have become so efficient at manipulating data, especially image data, that they are creating real-life problems: the manipulation of images and videos by GAN architectures is done in such a way that humans cannot differentiate real from fake images and videos. Numerous studies have been conducted in the field of deepfake detection. In this paper, we present a structured survey explaining the advantages of, and gaps in, the existing work in the domain of deepfake detection.
Authored by Pramod Bide, Varun, Gaurav Patil, Samveg Shah, Sakshi Patil
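For readers unfamiliar with the adversarial setup the survey discusses, here is a minimal GAN training loop on toy one-dimensional data; it is a conceptual sketch, not an image-deepfake model.

```python
# Minimal GAN sketch (generator vs. discriminator) on toy 1-D data, to make
# the adversarial setup concrete; conceptual only, not a deepfake model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, layers

G = Sequential([layers.Input((4,)), layers.Dense(16, activation="relu"),
                layers.Dense(1)])                        # noise -> fake sample
D = Sequential([layers.Input((1,)), layers.Dense(16, activation="relu"),
                layers.Dense(1, activation="sigmoid")])  # sample -> P(real)

bce = tf.keras.losses.BinaryCrossentropy()
g_opt, d_opt = tf.keras.optimizers.Adam(1e-3), tf.keras.optimizers.Adam(1e-3)

for step in range(200):
    real = np.random.normal(3.0, 0.5, size=(32, 1)).astype("float32")
    noise = np.random.normal(size=(32, 4)).astype("float32")
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = G(noise)
        d_loss = bce(tf.ones((32, 1)), D(real)) + bce(tf.zeros((32, 1)), D(fake))
        g_loss = bce(tf.ones((32, 1)), D(fake))  # generator tries to fool D
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))

print("mean of generated samples:", float(tf.reduce_mean(G(noise))))  # ~3.0
```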
False news has become widespread in the last decade in the political, economic, and social dimensions, aided by the deep entrenchment of social media networking in these dimensions. Facebook and Twitter are known to influence people's behavior significantly: people rely on news and information posted on their favorite social media sites to make purchase decisions, and news posted on mainstream and social media platforms significantly affects a country's economic stability and social tranquility. There is therefore a need for a deception-detection system that evaluates news, to avoid the repercussions of the rapid dispersion of fake news on social media and other online platforms. To achieve this, the proposed system uses the results of the preprocessing stage to assign a vector to each word, where each vector represents an intrinsic characteristic of the word. The resulting word vectors are then fed to RNN models and on to an LSTM model, whose output determines whether a news article is fake or not.
Authored by Qamber Abbas, Muhammad Zeshan, Muhammad Asif
Fake news is a recent phenomenon that spreads misleading information and fraud via internet social media or traditional news sources. Fake news is readily manufactured and transmitted across numerous social media platforms nowadays and has a significant influence on the real world, so it is vital to create effective algorithms and tools for detecting it. Most modern research approaches for identifying fraudulent information are based on machine learning, deep learning, feature engineering, graph mining, image and video analysis, and newly built datasets and online services; there is a pressing need for a viable approach to readily detecting misleading information. In this work, deep learning LSTM and Bi-LSTM models are proposed for detecting fake news. First, the NLTK toolkit is used to remove stop words, punctuation, and special characters from the text; the same toolkit is used to tokenize and preprocess the text. GloVe word embeddings are then applied to the preprocessed text, and higher-level characteristics of the input are extracted from the long-term relationships between word sequences captured by the RNN-LSTM and Bi-LSTM models. The proposed model additionally employs dropout together with Dense layers to improve efficiency. The proposed RNN Bi-LSTM based technique obtains its best accuracies of 94% and 93% using the Adam optimizer and the binary cross-entropy loss function with dropout rates of 0.1 and 0.2; increasing the dropout rate to 0.3 decreases the accuracy to 92%.
Authored by Govind Mahara, Sharad Gangele
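A small sketch of the NLTK preprocessing step described above (tokenize, lowercase, drop stop words and punctuation); the example sentence is invented, and the exact cleaning rules are assumptions.

```python
# NLTK preprocessing sketch: tokenize, lowercase, and drop stop words,
# punctuation, and special characters. Cleaning rules are assumptions.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)   # needed by newer NLTK releases
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

stop = set(stopwords.words("english"))

def preprocess(text: str) -> list[str]:
    tokens = word_tokenize(text.lower())
    return [t for t in tokens if t.isalpha() and t not in stop]

print(preprocess("Breaking!!! Scientists say the moon is made of cheese..."))
# -> ['breaking', 'scientists', 'say', 'moon', 'made', 'cheese']
```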
This paper deals with the problem of image forgery detection and the problems forgery causes: fake images can lead to social problems, for example misleading public opinion about political or religious figures, defaming celebrities, or, when presented in a court of law as evidence, misleading the court. This work proposes a deep learning approach based on a deep CNN (Convolutional Neural Network) architecture to detect fake images. The network is based on a modified Xception net, a CNN built on depthwise separable convolution layers. After the feature maps are extracted, pooling layers with dense connections to the Xception output are used to increase the number of feature maps, inspired by the DenseNet architecture. In addition, the work uses the YCbCr color system for images, which gave a better accuracy of 99.93% than RGB, HSV, Lab, or other color systems.
Authored by Ihsan Sahib, Tawfiq AlAsady
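The sketch below shows the two ingredients named above, converting an image to YCbCr and extracting features with an Xception backbone; the paper's modified head (dense connections and extra pooling) is not reproduced, and the input image is a placeholder.

```python
# YCbCr conversion + Xception feature extraction sketch; the paper's modified
# classification head is not reproduced, and the image is a placeholder.
import numpy as np
from PIL import Image
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input

img = Image.new("RGB", (299, 299), (200, 30, 30))  # stand-in for a test image
ycbcr = np.asarray(img.convert("YCbCr"), dtype=np.float32)  # RGB -> YCbCr

# Random-weight backbone so the sketch runs offline; real use would train it.
backbone = Xception(include_top=False, pooling="avg", weights=None)
features = backbone.predict(preprocess_input(ycbcr[None, ...]), verbose=0)
print(features.shape)  # (1, 2048) feature vector for a classification head
```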
Social media has beneficial and detrimental impacts on social life, and the vast distribution of false information on social media has become a worldwide threat. As a result, fake news detection in social networks has risen in popularity and is now considered an emerging research area. A centralized training technique makes it difficult to build a generalized model that adapts to numerous data sources. In this study, we develop a decentralized deep learning model using Federated Learning (FL) for fake news detection. We utilize the ISOT fake news dataset gathered from "Reuters.com" (N = 44,898) to train the deep learning model. The performance of the decentralized and centralized models is then assessed using accuracy, precision, recall, and F1-score, and performance is also measured while varying the number of FL clients. Our proposed decentralized FL technique achieves high accuracy (99.6%) using fewer communication rounds than previous studies, even without employing pre-trained word embeddings, and obtains the strongest results when compared against three earlier studies. The FL technique can thus be used more efficiently than a centralized method for fake news detection, and the use of blockchain-like technologies can further improve the integrity and validity of news sources.
Authored by Nirosh Jayakody, Azeem Mohammad, Malka Halgamuge
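A toy sketch of one FedAvg communication round of the kind such a decentralized setup relies on: clients train locally and the server averages their weights. Bare numpy vectors stand in for the paper's deep network, and the "local training" is simulated.

```python
# Toy FedAvg rounds: clients update locally, the server averages weights.
# Numpy vectors stand in for a deep model; local training is simulated.
import numpy as np

global_w = np.zeros(10)

def local_update(w, client_seed):
    """Pretend local training: nudge weights toward a client-specific optimum."""
    target = np.random.default_rng(client_seed).normal(size=w.shape)
    return w + 0.5 * (target - w)

for round_ in range(3):                            # three communication rounds
    client_weights = [local_update(global_w, seed) for seed in range(5)]
    global_w = np.mean(client_weights, axis=0)     # FedAvg aggregation
    print(f"round {round_}: |w| = {np.linalg.norm(global_w):.3f}")
```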
Nowadays, although it is much more convenient to obtain news through social media and various news platforms, the emergence of all kinds of fake news has become a headache and an urgent problem to solve. Current fake news recognition algorithms mainly use GCNs, along with some other niche algorithms such as GRUs and CNNs. Although all fake news verification algorithms can reach quite high accuracy given sufficient datasets, there is still room for improvement in unsupervised and semi-supervised learning. Through comparison with other neural network models, this article finds that the accuracy of the GCN method for fake news detection is roughly 85%, which is satisfactory, and it argues that the field currently lacks a unified training dataset and that future fake news detection models should focus more on semi-supervised and unsupervised learning.
Authored by Zhichao Wang
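For concreteness, here is the propagation rule behind the GCN detectors the article compares, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), applied to a toy four-node graph; the graph, features, and weights are invented.

```python
# One GCN layer, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), on a toy 4-node graph
# (e.g., a news post and its reposts). All values are invented.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # adjacency of the toy graph
A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)   # inverse-sqrt degree matrix
S = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalization

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                  # node features (e.g., embeddings)
W = rng.normal(size=(8, 4))                  # learnable layer weights

H_next = np.maximum(S @ H @ W, 0)            # one GCN layer with ReLU
print(H_next.shape)                          # (4, 4) updated node representations
```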
A “tripartite and bilateral” dynamic game model was constructed to study the impact of space deterrence on the challenger's military strategy in a military conflict. Based on signaling game theory, the payoff matrices and optimal strategies of the sheltering and challenging sides were analyzed. Within this theoretical framework, indicators of the effectiveness of the challenger's response to space deterrence and the factors influencing the sheltering side's space deterrence were examined, and feasible, effective means for the challenger to respond to space deterrence in a “tripartite and bilateral” military conflict were derived.
Authored by Zhiyong Wu, Yanhua Cao
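A toy illustration of the payoff-matrix style of analysis described above: a 2x2 game between a sheltering side and a challenger, solved for the challenger's pure-strategy best responses. All payoff values are invented and do not come from the paper.

```python
# Toy 2x2 payoff-matrix analysis; all payoff values are invented.
import numpy as np

# payoffs[i, j] = (sheltering side's payoff, challenger's payoff)
payoffs = np.array([[(2, -1), (3, 0)],    # row 0: sheltering side deters
                    [(-2, 2), (1, 1)]])   # row 1: sheltering side does not
challenger_moves = ["escalate", "back off"]

for i, row in enumerate(["deter", "not deter"]):
    j = int(np.argmax(payoffs[i, :, 1]))  # challenger maximizes own payoff
    print(f"if the sheltering side plays '{row}', "
          f"the challenger's best response is '{challenger_moves[j]}'")
```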
In this paper, we propose a novel watermarking-based copy deterrence scheme for identifying data leaks through authorized query users in secure image outsourcing systems. The scheme generates watermarks unique to each query user, which are embedded in the retrieved encrypted images. During unauthorized distribution, the watermark embedded in the image is extracted to determine the untrustworthy query user. Experimental results show that the proposed scheme achieves minimal information loss, faster embedding and better resistance to JPEG compression attacks compared with the state-of-the-art schemes.
Authored by J. Anju, R. Shreelekshmi
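A generic least-significant-bit watermarking sketch to illustrate the embed-and-trace idea; the paper's scheme, which embeds in encrypted images and resists JPEG compression, differs substantially from this plain-image example.

```python
# Generic LSB watermarking sketch: each query user gets a unique bit string
# embedded in the image's least significant bits, so a leaked copy can be
# traced. The paper's encrypted-domain, JPEG-robust scheme differs.
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the LSBs of the first len(bits) pixels."""
    flat = image.flatten().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the LSBs."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
user_id = rng.integers(0, 2, size=128, dtype=np.uint8)  # unique per query user

leaked = embed(img, user_id)
assert np.array_equal(extract(leaked, 128), user_id)
print("recovered watermark identifies the leaking query user")
```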
Axie Infinity is a complicated card game with a huge action space, which makes it difficult to solve with generic Reinforcement Learning (RL) algorithms. We propose a hybrid RL framework to learn action representations and game strategies. To avoid evaluating every action in the large feasible action set, our method evaluates actions in a fixed-size set determined using the action representations. We compare our method with two baseline methods in terms of sample efficiency and the winning rates of the trained models, and empirically show that it achieves the best overall winning rate and the best sample efficiency of the three methods.
Authored by Zhiyuan Yao, Tianyu Shi, Site Li, Yiting Xie, Yuanyuan Qin, Xiongjie Xie, Huan Lu, Yan Zhang
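A minimal sketch of the core trick: rather than scoring every action in a huge discrete space, score only a fixed-size candidate set found by nearest-neighbor lookup in a learned action-embedding space. The embeddings and the critic here are random stand-ins, not the paper's learned models.

```python
# Fixed-size candidate selection in a learned action-embedding space; the
# embeddings and Q-function are random stand-ins for the paper's models.
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim, k = 10_000, 16, 32           # huge action set, small candidate set

action_emb = rng.normal(size=(n_actions, dim))  # learned action representations
proto = rng.normal(size=dim)                    # policy's proposed "proto-action"

# Fixed-size candidate set: the k actions nearest to the proto-action.
dists = np.linalg.norm(action_emb - proto, axis=1)
candidates = np.argpartition(dists, k)[:k]

def q_value(state, action_id):                  # stand-in for a learned critic
    return float(action_emb[action_id] @ proto)

state = None  # placeholder game state
best = max(candidates, key=lambda a: q_value(state, a))
print(f"evaluated {k} of {n_actions} actions; chose action {best}")
```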
The reigning U.S. paradigm for deterring malicious cyberspace activity carried out or condoned by other countries is to levy penalties on them. The results have been disappointing: there is little evidence of a permanent reduction in such activity, and the narrative behind the paradigm presupposes a U.S./allied posture that assumes the morally superior role of judge, upon whom also falls the burden of proof, a posture not accepted but nevertheless exploited by other countries. In this paper, we explore an alternative paradigm, obnoxious deterrence, in which the United States itself carries out malicious cyberspace activity that is used as a bargaining chip to persuade others to abandon objectionable cyberspace activity. We then analyze the necessary characteristics of this malicious cyberspace activity, which is generated only to be traded off. It turns out that two fundamental criteria (that the activity be sufficiently obnoxious to induce bargaining, yet of little enough value that it can be traded away) may greatly reduce the feasibility of such a ploy. Even if symmetric agreements are easier to enforce than pseudo-symmetric agreements (e.g., the Xi-Obama agreement of 2015) or asymmetric red lines (e.g., the Biden demand that Russia not condone its citizens hacking U.S. critical infrastructure), when violations occur, many of today's problems recur. We then evaluate the practical consequences of this superficially attractive approach.
Authored by Martin Libicki