Machine Learning (ML) and Artificial Intelligence (AI) techniques are widely adopted in the telecommunication industry, especially to automate beyond-5G networks. Federated Learning (FL) recently emerged as a distributed ML approach that enables localized model training and keeps data decentralized to ensure data privacy. In this paper, we examine the applicability of FL for securing future networks and its limitations stemming from its vulnerability to poisoning attacks. First, we investigate the shortcomings of state-of-the-art security algorithms for FL and launch an attack that circumvents the FoolsGold algorithm, which is regarded as one of the most promising defense techniques currently available. The attack is carried out by adding intelligent noise to the poisoned model updates. We then propose a more sophisticated defense strategy, a threshold-based clustering mechanism, to complement FoolsGold. Moreover, we provide a comprehensive analysis of the impact of the attack scenario and the performance of the defense mechanism.
Authored by Yushan Siriwardhana, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
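The abstract above does not spell out the clustering step; the following is a minimal sketch, assuming model updates arrive as flat NumPy vectors, of how a cosine-similarity, threshold-based clustering pass (in the spirit of FoolsGold's similarity reasoning) could flag suspiciously similar sybil updates. The function name, threshold value, and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flag_sybil_clusters(updates, sim_threshold=0.9):
    """Group clients whose pairwise cosine similarity exceeds a threshold.

    updates: dict mapping client_id -> 1-D np.ndarray (flattened model update).
    Returns a list of clusters (sets of client ids) suspected of colluding.
    Hypothetical helper; not the authors' exact algorithm.
    """
    ids = list(updates)
    vecs = np.stack([updates[i] / (np.linalg.norm(updates[i]) + 1e-12) for i in ids])
    sim = vecs @ vecs.T                      # pairwise cosine similarities

    clusters, assigned = [], set()
    for a in range(len(ids)):
        if ids[a] in assigned:
            continue
        members = {ids[a]} | {ids[b] for b in range(len(ids))
                              if b != a and sim[a, b] >= sim_threshold}
        if len(members) > 1:                 # benign clients rarely look identical
            clusters.append(members)
            assigned |= members
    return clusters

# Example: two colluding clients pushing near-identical poisoned updates
rng = np.random.default_rng(0)
honest = {f"c{i}": rng.normal(size=100) for i in range(3)}
poison = rng.normal(size=100)
attackers = {"a1": poison + 0.01 * rng.normal(size=100),
             "a2": poison + 0.01 * rng.normal(size=100)}
print(flag_sybil_clusters({**honest, **attackers}))
```

Honest clients trained on different local data rarely produce near-identical updates, which is why unusually high pairwise similarity is treated here as a sybil signal.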
Cyber attacks continue to threaten states, companies, and individuals, draining precious resources including time, money, and reputation. Attackers appear to hold a first-mover advantage, leading to a dynamic attacker-defender game. Automated approaches that take advantage of Cyber Threat Intelligence on past attacks have the potential to empower security professionals and thus increase cyber security. Accordingly, there has been considerable research on automated approaches in cyber risk management, including work on predictive attack algorithms and threat hunting. Combining data on countermeasures from the “MITRE Detection, Denial, and Disruption Framework Empowering Network Defense” with adversarial data from “MITRE Adversarial Tactics, Techniques and Common Knowledge”, this work aims at developing methods that enable highly precise and efficient automatic incident response. We introduce the Attack Incident Responder, a methodology that uses simple heuristics to find the most efficient sets of countermeasures for hypothesized attacks. In doing so, the work contributes to narrowing the attacker's first-mover advantage. Experimental results show promisingly high average precision in predicting effective defenses with the methodology. In addition, we compare the proposed defense measures against a static set of defensive techniques that offers robust security against observed attacks. Furthermore, we combine the approach of automated incident response with an approach for threat hunting, enabling full automation of security operation centers. By this means, we define a threshold in the precision of attack hypothesis generation that must be met for predictive defense algorithms to outperform the baseline. The calculated threshold can be used to evaluate attack hypothesis generation algorithms. The presented methodology for automated incident response may be a valuable support for information security professionals. Finally, the work elaborates on combining static base defense with adaptive incident response to generate a bio-inspired artificial immune system for computer networks.
Authored by Florian Kaiser, Leon Andris, Tim Tennig, Jonas Iser, Marcus Wiens, Frank Schultmann
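The exact heuristics of the Attack Incident Responder are not reproduced in the abstract; the sketch below shows one plausible greedy heuristic, assuming a hypothesized set of ATT&CK technique IDs and a hypothetical mapping from defensive measures to the techniques they counter (all names and costs below are placeholders).

```python
def select_countermeasures(hypothesized_techniques, coverage, cost):
    """Greedy heuristic: repeatedly pick the defense covering the most
    still-uncovered techniques per unit cost. Illustrative only."""
    uncovered = set(hypothesized_techniques)
    chosen = []
    while uncovered:
        best, best_score = None, 0.0
        for defense, techs in coverage.items():
            gain = len(uncovered & techs)
            score = gain / cost.get(defense, 1.0)
            if gain and score > best_score:
                best, best_score = defense, score
        if best is None:          # nothing left covers the remaining techniques
            break
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, uncovered

# Hypothetical data (defense names, costs, and technique sets are placeholders)
coverage = {"network_segmentation": {"T1021", "T1570"},
            "mfa": {"T1078"},
            "egress_filtering": {"T1041", "T1570"}}
cost = {"network_segmentation": 3.0, "mfa": 1.0, "egress_filtering": 2.0}
print(select_countermeasures({"T1078", "T1041", "T1570"}, coverage, cost))
```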
Effective information security risk management is essential for the survival of any business that depends on IT. In this paper we present an efficient and effective solution for finding the best parameters for managing cyber risks using artificial intelligence. A genetic algorithm (GA) is used because it provides the required optimization and intelligence. Results show that the GA is effective in finding the best parameters and minimizing the risk.
Authored by Osama Hosam
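The abstract does not specify the risk model or GA configuration; the sketch below is a generic genetic algorithm with tournament selection, uniform crossover, and Gaussian mutation, minimizing a placeholder risk function over a parameter vector. Population size, mutation rate, and bounds are assumptions.

```python
import random

def genetic_minimize(risk, n_params, pop_size=40, generations=100,
                     mutation_rate=0.1, bounds=(0.0, 1.0)):
    """Generic GA: tournament selection, uniform crossover, Gaussian mutation.
    `risk` is any function mapping a parameter list to a scalar to minimize."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=risk)
        elite = scored[: pop_size // 5]                 # keep the best 20%
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = (min(random.sample(scored, 3), key=risk) for _ in range(2))
            child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(hi, max(lo, g + random.gauss(0, 0.05)))
                     if random.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=risk)

# Placeholder risk function: pretend the optimum is at (0.2, 0.7, 0.5)
target = [0.2, 0.7, 0.5]
risk = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
print(genetic_minimize(risk, n_params=3))
```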
Traditional power consumption management systems do not provide sufficient reliability, so smart grid technology has been introduced to reduce excess power wastage. In the context of smart grid systems, network communication refers to the network developed between users and load profiles. Cloud computing and clustering are also employed for efficient power management. On this basis, this research examines wireless network communication systems for monitoring and controlling smart grid power consumption. Primary survey-based research was carried out with 62 individuals who worked with smart grid systems and tracked, monitored, and controlled power consumption using WSN technology. The survey was conducted online, with respondents providing their opinions via a Google survey form. The responses were collected and analyzed in Microsoft Excel. Results show that hybrid computing that combines cloud and edge computing is more advantageous than either technology alone. Respondents agreed that deep learning techniques will be more beneficial than machine learning techniques for analyzing load profiles. Lastly, the study explains the advantages and challenges of using smart grid network communication systems. In addition to the findings from the primary research, secondary journal articles were reviewed to support the research findings.
Authored by Santosh Kumar, N Kumar, B.T. Geetha, M. Sangeetha, Kalyan Chakravarthi, Vikas Tripathi
This article describes an analysis of the key technologies currently applied to improve the quality, efficiency, safety, and sustainability of Smart Grid systems and identifies the tools to optimize them and possible gaps in this area, considering the different energy sources, distributed generation, microgrids, and energy consumption and production capacity. The research was conducted with a qualitative methodological approach, where the literature review covered studies published from 2019 to 2022 in five (5) databases, following the study selection process recommended by the PRISMA guide. Of the five hundred and four (504) publications identified, ten (10) studies provided insight into the technological trends that are impacting this scenario, namely: Internet of Things, Big Data, Edge Computing, Artificial Intelligence, and Blockchain. It is concluded that to obtain the best performance within Smart Grids, it is necessary to have maximum synergy between these technologies, since this union will enable the application of advanced smart digital technology solutions to energy generation and distribution operations, thus reaching a new level of optimization.
Authored by Ivonne Núñez, Elia Cano, Carlos Rovetto, Karina Ojo-Gonzalez, Andrzej Smolarz, Juan Saldana-Barrios
Existing power grid risk assessment depends mainly on decision-level data fusion, which is highly subjective and provides little effective information. To address this problem, this paper proposes a risk assessment method for microgrid systems based on random matrix theory. First, the time series data of multiple sensors are assembled into a high-dimensional matrix according to the different parameter types and nodes. Then, based on random matrix theory and sliding time window processing, the average spectral radius sequence is calculated to characterize the state of the microgrid system. Finally, an example is given to verify the effectiveness of the method.
Authored by Xi Cheng, Yafeng Liang, Jianhong Qiu, XiaoLi Zhao, Lihong Ma
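Computing the exact mean-spectral-radius statistic of random matrix theory involves a singular-value-equivalent transform that the abstract does not detail; the sketch below uses a simplified stand-in: within each sliding window, sensor rows are standardized and the mean singular value of the normalized window matrix is tracked as the state statistic. The window length and the synthetic data are assumptions.

```python
import numpy as np

def spectral_state_sequence(data, window):
    """data: (n_sensors, n_timesteps) matrix of measurements.
    Returns one statistic per window position: the mean singular value of the
    standardized window matrix. This is a simplified stand-in for the average
    spectral radius used in random-matrix-theory state characterization."""
    n, T = data.shape
    stats = []
    for t in range(window, T + 1):
        win = data[:, t - window:t].astype(float)
        mu = win.mean(axis=1, keepdims=True)
        sigma = win.std(axis=1, keepdims=True) + 1e-12
        tilde = (win - mu) / sigma / np.sqrt(window)   # standardize rows and scale
        stats.append(np.linalg.svd(tilde, compute_uv=False).mean())
    return np.array(stats)

# Synthetic example: 20 sensors, an injected disturbance after t = 300
rng = np.random.default_rng(1)
data = rng.normal(size=(20, 600))
data[:, 300:] += 2.0                       # shift simulating an abnormal state
seq = spectral_state_sequence(data, window=100)
print(seq[:3], seq[-3:])
```

A disturbance that correlates the sensors concentrates energy in a few singular values, which pulls the mean singular value down and makes the abnormal windows visible in the sequence.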
With the proliferation of data in Internet-related applications, cyber security incidents have increased manyfold. Energy management, one of the smart city layers, has also been experiencing cyberattacks. Furthermore, Distributed Energy Resources (DER), which depend on different controllers to provide energy to the main physical smart grid of a smart city, are prone to cyberattacks. The increase in cyber attacks on DER systems is mainly due to their dependence on digital communication and controls, together with the growing number of devices owned and controlled by consumers and third parties. This paper analyzes the major cyber security and privacy challenges that might damage or compromise the DER and related controllers in smart cities. These challenges highlight that security and privacy in the Internet of Things (IoT), big data, artificial intelligence, and the smart grid, which are the building blocks of a smart city, must be addressed in the DER sector. It is observed that the security and privacy challenges in smart cities can be solved through a distributed framework, by identifying and classifying stakeholders, by using appropriate models, and by incorporating fault-tolerance techniques.
Authored by Tarik Himdi, Mohammed Ishaque, Muhammed Ikram
The Automatic Identification System (AIS) plays a leading role in maritime navigation, traffic control, and local and global maritime situational awareness. Today, reliable and secure AIS operation is threatened by cyber attacks such as the imitation of ghost vessels, false distress or security messages, and fake virtual aids-to-navigation. We propose a method for ensuring the authentication and integrity of AIS messages based on the Message Authentication Code scheme and digital watermarking (WM) technology, which is used to organize an additional tag transmission channel. The method provides full compatibility with existing AIS functionality.
Authored by Oleksandr Shyshkin
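The abstract names the Message Authentication Code scheme but not its parameters or the watermark encoding; the sketch below shows one plausible piece, a truncated HMAC-SHA256 tag over an AIS payload, which the watermarking layer (not shown) would carry in the additional channel. Key management, tag length, and the sample sentence are assumptions.

```python
import hmac, hashlib

def ais_tag(shared_key: bytes, ais_payload: bytes, tag_bits: int = 32) -> bytes:
    """Truncated HMAC-SHA256 over the AIS message payload.
    The tag would be embedded via the watermark channel described in the paper;
    the watermark encoding itself is out of scope for this sketch."""
    full = hmac.new(shared_key, ais_payload, hashlib.sha256).digest()
    return full[: tag_bits // 8]

def verify(shared_key: bytes, ais_payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(ais_tag(shared_key, ais_payload, len(tag) * 8), tag)

key = b"example-shared-key"              # key distribution is out of scope here
msg = b"!AIVDM,1,1,,A,13aG?N0P00PD;88MD5MTDww@2D7k,0*46"  # sample AIS sentence
tag = ais_tag(key, msg)
print(tag.hex(), verify(key, msg, tag), verify(key, msg + b"x", tag))
```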
To fulfill the different requirements of various services, the smart grid typically uses the 5G network slicing technique to split the physical network into multiple virtual logical networks. In this way, end users in the smart grid can select the slice that is appropriate for their services. Privacy is vitally important in network slice selection, since both the end user and the network entities worry that their sensitive slicing features may be leaked to an adversary. At the same time, the smart grid contains many low-power users for whom complex security schemes are unsuitable. Therefore, both security and efficiency are basic requirements for 5G slice selection schemes. Considering both, we propose a 5G slice selection security scheme based on matching degree estimation, called SS-MDE. In SS-MDE, a set of random numbers is used to hide the feature information of the end user and the Access and Mobility Management Function (AMF), which provides privacy protection for the exchanged slicing features. Moreover, the best-matching slice is selected by calculating the Euclidean distance between two slices. Since the algorithms used in SS-MDE involve only a few simple mathematical operations, which are quite lightweight, SS-MDE achieves high efficiency. At the same time, since third-party attackers cannot extract the slicing information, SS-MDE fulfills the security requirements. Experimental results show that the proposed scheme is feasible in real-world applications.
Authored by Wei Wang, Jiming Yao, Weiping Shao, Yangzhou Xu, Shaowu Peng
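The exact SS-MDE construction is not given in the abstract; the sketch below illustrates the underlying idea under an assumed design: both parties derive the same pseudorandom masking vector from a shared seed, so Euclidean distances between masked feature vectors equal distances between the originals (adding a common offset preserves distance), and the best-matching slice can be selected without exchanging raw features. Feature names and values are placeholders.

```python
import numpy as np

def mask(features, seed):
    """Add a pseudorandom offset derived from a shared seed (illustrative only)."""
    rng = np.random.default_rng(seed)
    return np.asarray(features, dtype=float) + rng.uniform(-100, 100, len(features))

def select_slice(user_features, slice_features, seed):
    """Pick the slice whose masked feature vector is closest (Euclidean) to the
    user's masked requirements; adding the same offset to both sides leaves
    all pairwise distances unchanged."""
    u = mask(user_features, seed)
    best, best_d = None, float("inf")
    for slice_id, feats in slice_features.items():
        d = np.linalg.norm(u - mask(feats, seed))
        if d < best_d:
            best, best_d = slice_id, d
    return best, best_d

# Hypothetical slicing features: [latency_ms, bandwidth_Mbps, reliability_%]
slices = {"uRLLC": [5, 50, 99.999], "eMBB": [30, 500, 99.9], "mMTC": [100, 1, 99.0]}
print(select_slice([8, 40, 99.99], slices, seed=2024))
```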
5G has significantly facilitated the development of attractive applications such as autonomous driving and telemedicine due to its lower latency, higher data rates, and enormous connectivity. However, there are still security and privacy issues in 5G, such as network slicing privacy and the flexibility and efficiency of slice selection. For the smart grid scenario, this paper proposes a 5G slice selection security scheme based on the Pohlig-Hellman algorithm, which protects the slice selection privacy data exchanged between User i (Ui) and the Access and Mobility Management Function (AMF) so that the data are not exposed to third-party attackers. Compared with other schemes, the scheme proposed in this paper is simple to deploy, has low computational overhead, follows a simple process, and does not require the help of a PKI system. The security analysis also verifies that the scheme can accurately protect the slice selection privacy data between Ui and the AMF.
Authored by Jiming Yao, Peng Wu, Duanyun Chen, Wei Wang, Youxu Fang
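The abstract names the Pohlig-Hellman algorithm but not the message flow; the sketch below shows the core primitive such a scheme presumably relies on: exponentiation ciphers modulo a prime commute, so Ui and the AMF can each blind a slice identifier with a private exponent and compare doubly blinded values without revealing the identifier. The prime, key generation, and identifier encoding are placeholders.

```python
import math, secrets

P = 2**127 - 1                      # a Mersenne prime, used here only for illustration

def keygen():
    """Pick a private exponent coprime to P-1 so it is invertible mod P-1."""
    while True:
        k = secrets.randbelow(P - 3) + 2
        if math.gcd(k, P - 1) == 1:
            return k

def blind(m: int, k: int) -> int:
    return pow(m, k, P)             # Pohlig-Hellman exponentiation cipher

def unblind(c: int, k: int) -> int:
    return pow(c, pow(k, -1, P - 1), P)

# Ui and the AMF each blind the same slice identifier with their own key;
# the order of blinding does not matter, so equality can be checked on
# doubly blinded values without exposing the identifier itself.
slice_id = int.from_bytes(b"eMBB-slice-07", "big")   # placeholder encoding
k_ui, k_amf = keygen(), keygen()
commutes = blind(blind(slice_id, k_ui), k_amf) == blind(blind(slice_id, k_amf), k_ui)
print(commutes, unblind(blind(slice_id, k_ui), k_ui) == slice_id)
```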
In today's era, the smart grid is the carrier of the new energy technology revolution and a very critical development stage for grid intelligence. Smart grid operation and maintenance generate large volumes of heterogeneous, polymorphic data, that is, big data. This paper analyzes power big data prediction technology for smart grid applications and proposes practical application strategies. The paper provides an in-depth analysis of the relationship between cloud computing, key big data technologies, and the smart grid, and gives an overview of the key technologies of electric power big data.
Authored by Guang-ye Li, Jia-xin Zhang, Xin Wen, Lang-Ming Xu, Ying Yuan
5G network slicing plays a key role in smart grid business. Existing authentication schemes for 5G slicing in smart grids require high computational cost, so they are time-consuming and do not fully consider the security of authentication. For the 5G smart grid application scenario, this paper proposes an identity-based lightweight secondary authentication scheme. Unlike other well-known methods, in the protocol interaction of this paper both the grid user Ui and the grid server authenticate each other's identities, thereby preventing illegal users from impersonating legitimate identities. The grid user Ui and the grid server can complete the authentication process without resorting to complex bilinear mapping calculations, so the computational overhead is small. They can also complete the authentication process without transmitting the original identity, so the scheme provides anonymous authentication. Moreover, the authentication process does not require infrastructure such as a PKI, so deployment is simple. Experimental results show that the protocol is feasible in practical applications.
Authored by Yue Yu, Jiming Yao, Wei Wang, Lanxin Qiu, Yangzhou Xu
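The abstract describes the protocol's properties rather than its messages; as a rough stand-in only (not the authors' identity-based scheme), the sketch below shows a pairing-free, pseudonym-based mutual challenge-response over a pre-established secret, illustrating how mutual authentication and identity anonymity can be obtained with only hash-class operations. All names, message formats, and the key setup are assumptions.

```python
import hashlib, hmac, secrets

def pseudonym(identity: bytes, secret: bytes, nonce: bytes) -> bytes:
    """Hide the real identity: only a keyed hash of it is ever transmitted."""
    return hmac.new(secret, identity + nonce, hashlib.sha256).digest()

def respond(secret: bytes, challenge: bytes, role: bytes) -> bytes:
    """Prove knowledge of the shared secret for a given challenge and role."""
    return hmac.new(secret, role + challenge, hashlib.sha256).digest()

# Pre-established long-term secret between grid user Ui and the grid server
# (how it is established is outside this illustrative sketch).
k = secrets.token_bytes(32)

# 1. Ui -> server: pseudonym plus a fresh nonce (the real identity is never sent).
n_u = secrets.token_bytes(16)
pid = pseudonym(b"Ui-real-identity", k, n_u)

# 2. Server -> Ui: its own nonce plus a response proving knowledge of k.
n_s = secrets.token_bytes(16)
server_proof = respond(k, n_u + n_s, b"server")

# 3. Ui verifies the server by recomputing, then proves itself the same way.
assert hmac.compare_digest(server_proof, respond(k, n_u + n_s, b"server"))
user_proof = respond(k, n_u + n_s, b"user")
assert hmac.compare_digest(user_proof, respond(k, n_u + n_s, b"user"))
print("mutual authentication completed for pseudonym", pid.hex()[:16])
```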
With the continuous development of the Internet, artificial intelligence, 5G, and other technologies, various issues have started to receive attention, among which network security is now one of the key research directions for scholars at home and abroad. Building on traditional Internet technology, this paper establishes a security identification system at the physical layer of the network, which can effectively identify security problems in network infrastructure equipment and resolve the identified problems at the physical layer. The experiment develops this security identification system at the physical layer of the Internet; compared with the traditional approach of developing such systems at the network layer, development at the physical layer can base protection on the physical origin of the problem, resolving part of the network security problems at the root and effectively identifying and solving them. The experimental results show that the security identification system can identify basic network security problems very effectively. The system is developed at the physical layer of the Internet and protection is carried out from the physical device; the retransmission symbol error rates of the CQ-PNC algorithm and the ML algorithm in the experiment are 110 and 102, respectively. The latter has a lower error rate and better protection.
Authored by Yunge Huang
In recent years, blackout accidents have shown that the causes of power failure lie not only in the power network but also in the cyber network. To address cyber network faults in cyber-physical power systems, the comprehensive criticality of an information node is defined by combining the structural and functional attributes of the cyber network. Vulnerability evaluation of the IEEE 39-node system shows that the failure of an information node with high comprehensive criticality causes greater load loss to the system. The simulation results show that the comprehensive criticality index can effectively identify the key nodes of the cyber network.
Authored by Duanyun Chen, Zewen Chen, Jie Li, Jidong Liu
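The structural and functional attributes combined in the comprehensive criticality index are not enumerated in the abstract; the sketch below assumes a simple weighted combination of degree centrality, betweenness centrality, and a user-supplied functional weight, which is enough to rank information nodes on a toy graph. The weights and the graph are placeholders.

```python
import networkx as nx

def comprehensive_criticality(G, functional_weight, alpha=0.4, beta=0.4, gamma=0.2):
    """Assumed composite index: alpha*degree + beta*betweenness + gamma*functional.
    `functional_weight` maps node -> importance of the function it hosts (0..1)."""
    deg = nx.degree_centrality(G)
    btw = nx.betweenness_centrality(G, normalized=True)
    return {n: alpha * deg[n] + beta * btw[n] + gamma * functional_weight.get(n, 0.0)
            for n in G.nodes}

# Toy cyber-network layer: node "ctrl" hosts a control function
G = nx.Graph([("ctrl", "s1"), ("ctrl", "s2"), ("s1", "s2"), ("s2", "s3"), ("s3", "s4")])
scores = comprehensive_criticality(G, {"ctrl": 1.0, "s3": 0.5})
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```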
Software vulnerabilities threaten the security of computer systems, and more and more vulnerabilities have recently been discovered and disclosed. For each detected vulnerability, the relevant personnel analyze its characteristics and use a vulnerability scoring system to determine its severity level, so as to decide which vulnerabilities need to be dealt with first. In recent years, some description-based methods have been used to predict the severity level of vulnerabilities. However, traditional text processing methods only grasp the superficial meaning of the text and ignore important contextual information. This paper therefore proposes an innovative method, called BERT-CNN, which combines the task-specific layer of BERT with a CNN to capture important contextual information in the text. First, we use BERT to process the vulnerability description and other information, including Access Gained, Attack Origin, and Authentication Required, to generate feature vectors. These feature vectors and the corresponding severity levels are then fed into a CNN, and the CNN parameters are learned. Next, the fine-tuned BERT and the trained CNN are used to predict the severity level of a vulnerability. The results show that our method outperforms the state-of-the-art method with an F1-score of 91.31%.
Authored by Xuming Ni, Jianxin Zheng, Yu Guo, Xu Jin, Ling Li
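Hyperparameters, the BERT task layer, and the CNN shape are not given in the abstract; the sketch below wires a Hugging Face BERT encoder to a small 1-D convolutional classification head over the token embeddings. The number of severity classes, the field-concatenation format, and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCnnSeverity(nn.Module):
    """BERT token embeddings -> 1-D convolution -> severity logits (sketch)."""
    def __init__(self, n_classes=4, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.conv = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h = torch.relu(self.conv(h.transpose(1, 2)))     # (batch, 128, seq_len)
        pooled = h.max(dim=2).values                      # max-pool over tokens
        return self.classifier(pooled)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# Description concatenated with Access Gained / Attack Origin / Authentication
# Required fields, as in the paper; the separator format here is an assumption.
text = ("Buffer overflow in example service. [SEP] Access: User "
        "[SEP] Origin: Remote [SEP] Auth: Not required")
batch = tok(text, return_tensors="pt", truncation=True, padding=True)
model = BertCnnSeverity()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 4])
```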
Due to their simplicity of implementation and high threat level, SQL injection attacks are one of the oldest, most prevalent, and most destructive types of security attacks on Web-based information systems. With the continuous development and maturity of artificial intelligence technology, using AI to detect SQL injection has become a general trend. The choice of sample set is the deciding factor in whether AI algorithms can achieve good results, but datasets tagged with specific category labels are difficult to obtain. This paper focuses on data augmentation that learns feature representations similar to the original data in order to improve the accuracy of classification models. Deep convolutional generative adversarial networks combined with genetic algorithms are applied to the field of Web vulnerability attacks, aiming to solve the problem of an insufficient number of SQL injection samples. The method is also expected to be applicable to sample generation for other types of vulnerability attacks.
Authored by Dongzhe Lu, Jinlong Fei, Long Liu, Zecun Li
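Network shapes, the character encoding, and the way the genetic algorithm steers training are not described in the abstract; the sketch below shows only the data-augmentation core such a method builds on: a minimal 1-D DCGAN generator/discriminator pair over fixed-length, character-one-hot SQL payload tensors. Vocabulary size, sequence length, and layer sizes are assumptions, and the training loop and GA component are omitted.

```python
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, Z_DIM = 100, 64, 32   # assumed charset size, payload length, noise dim

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                       # noise -> (VOCAB, SEQ_LEN)
            nn.ConvTranspose1d(Z_DIM, 128, kernel_size=16, stride=1), nn.ReLU(),
            nn.ConvTranspose1d(128, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(128, VOCAB, kernel_size=4, stride=2, padding=1))
    def forward(self, z):
        return torch.softmax(self.net(z), dim=1)        # per-position char distribution

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(VOCAB, 128, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv1d(128, 128, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1))             # real/fake logit
    def forward(self, x):
        return self.net(x)

g, d = Generator(), Discriminator()
z = torch.randn(8, Z_DIM, 1)                            # batch of 8 noise vectors
fake = g(z)                                             # (8, VOCAB, SEQ_LEN)
print(fake.shape, d(fake).shape)
```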
Today, the internet is the most common and significant medium for exchanging information. With the progress of the web and information technology, digital media has become one of the most popular data transfer tools. This digital data includes text, images, audio, video, and so on, transferred over public networks. Most of this digital media takes the form of images and is a significant component of applications such as chat, news, websites, e-commerce, email, and e-books. Such content still faces various challenges, including copyright protection, modification, and authentication. Cryptography, steganography, and embedding techniques are widely used to secure digital data. This paper presents a hybrid model of LSB steganography and Advanced Encryption Standard (AES) cryptography to enhance the security of digital images and text, making it very challenging for an unauthorized person to break. The security level of the secret information is estimated in terms of MSE and PSNR; better hiding requires low MSE and high PSNR values.
Authored by Manish Kumar, Aman Soni, Ajay Shekhawat, Akash Rawat
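The abstract does not fix the AES mode or the embedding order; the sketch below assumes one straightforward pipeline: encrypt the secret with AES (EAX mode chosen here for simplicity, via pycryptodome), then hide the length-prefixed ciphertext in the least significant bits of a cover image. The key handling and the synthetic cover image are placeholders.

```python
import numpy as np
from Crypto.Cipher import AES          # pycryptodome
from Crypto.Random import get_random_bytes

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Write payload bits (length-prefixed) into the least significant bits
    of a uint8 cover image, scanning pixels in row-major order."""
    data = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = cover.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray) -> bytes:
    flat = stego.reshape(-1) & 1
    n = int.from_bytes(np.packbits(flat[:32]).tobytes(), "big")
    return np.packbits(flat[32: 32 + 8 * n]).tobytes()

# Encrypt the secret text with AES, then hide nonce || tag || ciphertext.
key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(b"meet at the usual place")
secret = cipher.nonce + tag + ciphertext

cover = np.random.default_rng(3).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
stego = embed_lsb(cover, secret)

recovered = extract_lsb(stego)
nonce, tag, ct = recovered[:16], recovered[16:32], recovered[32:]
print(AES.new(key, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(ct, tag))
```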
Protection of private and sensitive information is the most pressing issue for security providers of surveillance video. Providing privacy and secrecy in surveillance video without affecting its efficiency in detecting violent activities is therefore a challenging task. Here, a steganography-based algorithm is proposed that hides private information inside surveillance video without affecting its accuracy in criminal activity detection. Preprocessing of the surveillance video is performed using the Tunable Q-factor Wavelet Transform (TQWT), secret data is hidden using the Discrete Wavelet Transform (DWT), and after the payload is added to the surveillance video, detection of criminal activities is conducted while maintaining the same accuracy as on the original surveillance video. The UCF-Crime dataset is used to validate the proposed framework. Features are extracted and selected and then used to train a Temporal Convolutional Network (TCN) for detection. Performance is compared to state-of-the-art methods, showing that the application of steganography does not affect the detection rate while preserving the perceptual quality of the surveillance video.
Authored by Sonali Rout, Ramesh Mohapatra
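The DWT band, embedding rule, and payload format are not specified in the abstract; the sketch below assumes a quantization-index-modulation style embedding in the diagonal-detail band of a single grayscale frame, using PyWavelets. The quantization step, frame size, and bit payload are illustrative.

```python
import numpy as np
import pywt

def embed_dwt(frame: np.ndarray, bits: np.ndarray, step: float = 8.0) -> np.ndarray:
    """Hide bits in the diagonal-detail DWT band of one grayscale frame:
    even multiples of `step` encode 0, odd multiples encode 1."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    flat = cD.reshape(-1)
    q = np.round(flat[: bits.size] / step)
    q += (q.astype(int) % 2 != bits)          # nudge parity to match each bit
    flat[: bits.size] = q * step
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def extract_dwt(stego: np.ndarray, n_bits: int, step: float = 8.0) -> np.ndarray:
    _, (_, _, cD) = pywt.dwt2(stego.astype(float), "haar")
    return (np.round(cD.reshape(-1)[:n_bits] / step).astype(int) % 2).astype(np.uint8)

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)   # stand-in video frame
secret_bits = rng.integers(0, 2, size=200).astype(np.uint8)
stego = embed_dwt(frame, secret_bits)
print(np.array_equal(extract_dwt(stego, 200), secret_bits))
```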
Network Intrusion Detection Systems (IDSs) have been used to increase the level of network security for many years. The main purpose of such systems is to detect and block malicious activity in the network traffic. Researchers have been improving the performance of IDS technology for decades by applying various machine-learning techniques. From the perspective of academia, obtaining a quality dataset (i.e., a sufficient amount of captured network packets that contain both malicious and normal traffic) to support machine learning approaches has always been a challenge. There are many datasets publicly available for research purposes, including NSL-KDD, KDDCUP 99, CICIDS 2017, and UNSW-NB15. However, these datasets are becoming obsolete over time and may no longer be adequate or valid to model and validate IDSs against state-of-the-art attack techniques. As attack techniques are continuously evolving, datasets used to develop and test IDSs also need to be kept up to date. Proven performance of an IDS tested on old attack patterns does not necessarily mean it will perform well against new patterns. Moreover, existing datasets may lack certain data fields or attributes necessary to analyse some of the new attack techniques. In this paper, we argue that academia needs up-to-date high-quality datasets. We compare publicly available datasets and suggest a way to provide up-to-date high-quality datasets for researchers and the security industry. The proposed solution is to utilize the network traffic captured from the Locked Shields exercise, one of the world’s largest live-fire international cyber defence exercises held annually by the NATO CCDCOE. During this three-day exercise, red team members consisting of dozens of white-hat hackers selected by the governments of over 20 participating countries attempt to infiltrate the networks of over 20 blue teams, who are tasked to defend a fictional country called Berylia. After the exercise, network packets captured from each blue team’s network are handed over to that team. However, the countries are not willing to disclose the packet capture (PCAP) files to the public since these files contain specific information that could reveal how a particular nation might react to certain types of cyberattacks. To overcome this problem, we propose to create a dedicated virtual team, capture all the traffic from this team’s network, and disclose it to the public so that academia can use it for unclassified research and studies. In this way, the organizers of Locked Shields can effectively contribute to the advancement of future artificial intelligence (AI)-enabled security solutions by providing annual datasets of up-to-date attack patterns.
Authored by Maj. Halisdemir, Hacer Karacan, Mauno Pihelgas, Toomas Lepik, Sungbaek Cho
Security is a key concern across the world, and it has been a common thread for all critical sectors. Nowadays, security can be regarded as a backbone that is absolutely necessary for personal safety. The most important requirements of security systems for individuals are protection against theft and trespassing. CCTV cameras are often employed for security purposes. Their biggest disadvantages are high cost and the need for a trustworthy individual to monitor them. As a result, a solution that is simple, cost-effective, and secure has been devised. The smart door lock is built on Raspberry Pi technology: it captures a picture through the Pi Camera module, detects a visitor's face, and then decides whether to allow entry. The Local Binary Pattern (LBP) approach is used for face recognition. Remote picture viewing and notifications on a mobile device are provided by an IoT-based application. The proposed system may be installed at front doors, lockers, offices, and other locations where security is required. The proposed system has an accuracy of 89%, with an average processing time of 20 seconds for the overall process.
Authored by Om Doshi, Hitesh Bendale, Aarti Chavan, Shraddha More
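Training data, thresholds, and the camera pipeline are not given in the abstract; the sketch below shows the recognition core under stated assumptions: Haar-cascade face detection followed by an OpenCV LBPH recognizer (requires opencv-contrib-python), with stand-in enrollment images and an assumed distance threshold.

```python
import cv2
import numpy as np

# Requires opencv-contrib-python for the cv2.face module.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def detect_face(gray_image):
    """Return the largest detected face crop, resized to a fixed size, or None."""
    faces = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray_image[y:y + h, x:x + w], (200, 200))

# Enrollment: train the LBPH model on face crops of known residents.
# The arrays below are stand-in data only; in the paper's setup they would be
# grayscale face crops captured through the Pi Camera.
enroll_images = [np.random.default_rng(i).integers(0, 256, (200, 200), dtype=np.uint8)
                 for i in range(4)]
enroll_labels = np.array([0, 0, 1, 1], dtype=np.int32)
recognizer.train(enroll_images, enroll_labels)

def allow_entry(frame_bgr, threshold=70.0):
    """Unlock only when the visitor matches an enrolled face closely enough
    (a lower LBPH distance means a better match; the threshold is assumed)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    face = detect_face(gray)
    if face is None:
        return False, None
    label, distance = recognizer.predict(face)
    return distance < threshold, label
```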
Phishing is a method of online fraud in which attackers seek to gain access to computer systems for monetary benefit or personal gain. The attackers pose as legitimate entities in order to obtain users' sensitive information. Phishing has been a significant concern over the past few years. Firms are recording an increase in phishing attacks, primarily aimed at the firm's intellectual property and employees' sensitive data. As a result, these attacks force firms to spend more on information security, in both technology-centric and human-centric approaches. With the advancements in cyber security in the last ten years, many techniques have evolved to detect phishing-related activities in websites and emails. This study focuses on the latest techniques used for detecting phishing attacks, including the use of visual selection features, Machine Learning (ML), and Artificial Intelligence (AI). New strategies for identifying phishing attacks are evolving, but limited standardized knowledge on phishing identification and mitigation is available through user awareness training. Therefore, this study also focuses on the role of security-awareness campaigns in minimizing the impact of phishing attacks. There are many approaches to training users about these attacks, such as persona-centred training, anti-phishing techniques, visual discrimination training, the use of spam filters, robust firewalls and infrastructure, dynamic technical defense mechanisms, and the use of third-party certified software to prevent phishing attacks. The purpose of this paper is therefore to carry out a systematic analysis of the literature to assess the state of knowledge in prominent scientific journals on the identification and prevention of phishing. Forty-three journal articles on phishing detection and prevention through awareness training, published from 2011 to 2020, were reviewed. This timely systematic review also highlights the gaps identified in the selected primary studies and future research directions in this area.
Authored by Kanchan Patil, Sai Arra
Phishing has become a prominent method of data theft among hackers, and it continues to develop. In recent years, many strategies have been developed to identify phishing website attempts, particularly using machine learning. However, the algorithms and classification criteria that have been used vary widely and need to be compared. This paper provides a detailed comparison and evaluation of the performance of several machine learning algorithms across multiple datasets. Two phishing website datasets were used for the experiments: the Phishing Websites Dataset from UCI (2016) and the Phishing Websites Dataset from Mendeley (2018). Because these datasets include different types of class labels, the compared algorithms can be evaluated in a variety of situations. The tests showed that Random Forest outperformed the other classification methods, with an accuracy of 88.92% for the UCI dataset and 97.50% for the Mendeley dataset.
Authored by Wendy Sarasjati, Supriadi Rustad, Purwanto, Heru Santoso, Muljono, Abdul Syukur, Fauzi Rafrastara, De Setiadi
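The feature sets and Random Forest settings are not reproduced in the abstract; the sketch below shows the evaluation pattern on synthetic stand-in data shaped like the usual phishing-website indicator features. The feature encoding, split, and hyperparameters are assumptions, and the printed accuracy is meaningless for the synthetic labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the UCI / Mendeley phishing website features,
# which are typically encoded as {-1, 0, 1} indicator columns.
rng = np.random.default_rng(5)
X = rng.choice([-1, 0, 1], size=(1000, 30))
y = rng.integers(0, 2, size=1000)        # 1 = phishing, 0 = legitimate (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```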
Virtual Private Networks (VPNs) have become a communication medium for accessing information, exchanging data, and managing the flow of information. Many organizations require an intranet or VPN for data access, access to servers, and sharing different types of data among their offices and users. A secure VPN environment is essential for organizations to protect their information, IT infrastructure, and assets. Every organization needs to protect its computer network environment from various malicious cyber threats. This paper presents a comprehensive approach to network security management, including significant strategies and protective measures for managing a VPN in an organization. The paper also presents the procedures and countermeasures necessary to preserve the security of the VPN environment and discusses several identified security strategies and measures for VPNs. It further outlines network security and policy management for implementation, covering security measures in firewalls, visualized security profiles, and the role of sandboxes in securing the network. In addition, a few identified security controls that strengthen organizational security and are useful in designing a secure, efficient, and scalable VPN environment are discussed.
Authored by Srinivasa Pedapudi, Nagalakshmi Vadlamani
In this study, the nature of human trust in communication robots was experimentally investigated in comparison with trust in other people and in artificial intelligence (AI) systems. The results of the experiment showed that trust in robots is basically similar to trust in AI systems in a calculation task where a single solution can be obtained, and is partly similar to trust in other people in an emotion recognition task where multiple interpretations can be acceptable. This study will contribute to designing smooth interaction between people and communication robots.
Authored by Akihiro Maehigashi
Global traffic data are proliferating, including in Indonesia. The number of internet users in Indonesia reached 205 million in January 2022, which means that 73.7% of Indonesia’s population has used the internet. The median internet speed for mobile phones in Indonesia is 15.82 Mbps, while the median internet connection speed for Wi-Fi in Indonesia is 20.13 Mbps. As predicted by many, real-time traffic such as multimedia streaming accounts for more than 79% of traffic on the internet. This condition is a severe challenge for the internet, which is required to improve the Quality of Experience (QoE) for mobile users, for example by reducing delay, data loss, and network costs. However, IP-based networks are no longer efficient at managing traffic. Named Data Networking (NDN) is a promising technology for building an agile communication model that reduces delays through a distributed and adaptive name-based data delivery approach. NDN replaces the ‘where’ paradigm with the concept of ‘what’: user requests are no longer directed to a specific IP address but to specific content. As a result, responses to content requests can be served not only by a specific server but also by the device closest to the requested data. The NDN router has a Content Store (CS) to cache data, significantly reducing delays and improving the network’s Quality of Service (QoS). Motivated by this, in 2019 we began intensive research toward a national flagship product, an NDN router with functions different from those of ordinary IP routers. NDN routers have cache, forwarding, and routing functions that affect data security in name-based networks. Designing scalable NDN routers is a new challenge, as NDN requires fast hierarchical name-based lookups, per-packet data field state updates, and large-scale forwarding tables. Our research team has conducted NDN research through simulation, emulation, and testbed approaches using virtual machines to determine the best NDN router design before building a prototype. Research results from 2019 show that the performance of NDN-based networks is better than that of existing IP-based networks. The tests were carried out under various scenarios on the Indonesian network topology using an NDN simulator, MATLAB, Mininet-NDN, and a virtual-machine testbed. Various network performance parameters, such as delay, throughput, packet loss, resource utilization, header overhead, packet transmission, round-trip time, and cache hit ratio, showed the best results compared to IP-based networks. In addition, the open-source NDN testbed is free, and flexible topology creation has also been carried out successfully. This testbed includes all the functions needed to run an NDN network, and the resource capacity of the server used for the testbed is sufficient to run a reasonably complex topology. However, bugs are still found in the testbed, and some features still need improvement. Further exploration of the NDN testbed will involve more new strategy algorithms and the addition of Artificial Intelligence (AI) to NDN functions. Using AI in cache and forwarding strategies can make the system more intelligent and precise in making decisions according to network conditions. This will be a step toward the development of NDN router products by the Bandung Institute of Technology (ITB), Indonesia.
Authored by Nana Syambas, Tutun Juhana, Hendrawan, Eueung Mulyana, Ian Edward, Hamonangan Situmorang, Ratna Mayasari, Ridha Negara, Leanna Yovita, Tody Wibowo, Syaiful Ahdan, Galih Nurkahfi, Ade Nurhayati, Hafiz Mulya, Mochamad Budiana