This paper studies a power conversion system supplying a High-Speed Permanent Magnet Motor (HSPMM). In contrast to the classical approach, this study observes a dynamic trajectory, modelling an electric drive chain with a constant acceleration of the machine to its nominal speed. This global approach allows different phenomena specific to the trajectory (resonance, subharmonics, and total harmonic distortion, THD) to be observed at the same time. The method reconciles these electrical phenomena with a powerful analysis mechanism based on the Short-Time Fourier Transform (STFT) and the visual representation of the frequency spectrum (the spectrogram tool). The Predictive Time-Frequency analysis applied on Electric Drive Systems (PreTiFEDS) offers a powerful tool for engineers and electric conversion system architects when designing the drive system chain.
Authored by Andre De Andrade, Lakdar Sadi-Haddad, Ramdane Lateb, Joaquim Da Silva
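The time-frequency view described above can be sketched with a plain windowed-FFT spectrogram; the sampling rate, frequency ramp, and frame sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy sketch (not the authors' PreTiFEDS tool): a machine accelerating at a
# constant rate produces a current whose fundamental frequency ramps linearly.
# We compute a magnitude spectrogram via a windowed FFT (the STFT) and track
# the dominant frequency in each frame.
fs = 8000                          # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)      # 2 s run-up
f0, f1 = 50.0, 400.0               # assumed start/end fundamental (Hz)
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
signal = np.sin(phase)             # linear chirp standing in for the current

frame, hop = 512, 256
window = np.hanning(frame)
peaks = []
for start in range(0, len(signal) - frame, hop):
    spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
    peaks.append(np.argmax(spectrum) * fs / frame)   # dominant frequency (Hz)

# Over the run-up trajectory the tracked fundamental rises with the speed.
assert peaks[-1] > peaks[0]
```

In the paper's setting, resonances and subharmonics would appear as additional ridges in the `spectrum` columns alongside this rising fundamental.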
Threat hunting has become very popular in today's dynamic cybersecurity environment. As the attack landscape keeps expanding, the traditional way of monitoring threats is no longer scalable. Consequently, a threat hunting modeling technique is implemented as an emergent activity using machine learning (ML) paradigms. ML predictive analytics was carried out on the OSTO-CID dataset using four algorithms to develop the model. A cross-validation ratio of 80:20 was used to train and test the model. The decision tree classifier (DTC) gives the best metric results among the four ML algorithms, with 99.30% accuracy. Therefore, DTC can be used for developing a threat hunting model to mitigate cyber-attacks using a data mining approach.
Authored by Akinsola T., Olajubu A., Aderounmu A.
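The evaluation protocol described above (80:20 split, decision tree classifier) can be sketched with scikit-learn; since the OSTO-CID dataset is not reproduced here, a synthetic stand-in is used, so the resulting accuracy is not the paper's 99.30%.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic, separable stand-in for the threat-hunting dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic "threat" label

# 80:20 train/test split as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

The same protocol applies unchanged once `X` and `y` are replaced with the real feature matrix and labels.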
Predictive Security Metrics - Most IoT systems involve IoT devices, communication protocols, remote cloud, IoT applications, mobile apps, and the physical environment. However, existing IoT security analyses only focus on a subset of all the essential components, such as device firmware or communication protocols, and ignore IoT systems' interactive nature, resulting in limited attack detection capabilities. In this work, we propose IOTA, a logic programming-based framework to perform system-level security analysis for IoT systems. IOTA generates attack graphs for IoT systems, showing all of the system resources that can be compromised and enumerating potential attack traces. In building IOTA, we design novel techniques to scan IoT systems for individual vulnerabilities and further create generic exploit models for IoT vulnerabilities. We also identify and model physical dependencies between different devices, as they are unique to IoT systems and are employed by adversaries to launch complicated attacks. In addition, we utilize NLP techniques to extract IoT app semantics based on app descriptions. IOTA automatically translates vulnerabilities, exploits, and device dependencies to Prolog clauses and invokes MulVAL to construct attack graphs. To evaluate vulnerabilities' system-wide impact, we propose two metrics based on the attack graph, which provide guidance on fortifying IoT systems. Evaluation on 127 IoT CVEs (Common Vulnerabilities and Exposures) shows that IOTA's exploit modeling module achieves over 80% accuracy in predicting vulnerabilities' preconditions and effects. We apply IOTA to 37 synthetic smart home IoT systems based on real-world IoT apps and devices. Experimental results show that our framework is effective and highly efficient. Among 27 shortest attack traces revealed by the attack graphs, 62.8% are not anticipated by the system administrator.
It only takes 1.2 seconds to generate and analyze the attack graph for an IoT system consisting of 50 devices.
Authored by Zheng Fang, Hao Fu, Tianbo Gu, Pengfei Hu, Jinyue Song, Trent Jaeger, Prasant Mohapatra
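The translation step that IOTA automates can be sketched as follows; the predicate names follow MulVAL's common `vulExists`/`vulProperty` convention, but the record fields and the CVE identifier are placeholder assumptions, not IOTA's actual schema.

```python
# Sketch of turning a scanned vulnerability record into MulVAL-style
# Datalog/Prolog clauses, the input format an attack-graph engine consumes.
def to_prolog(vuln):
    clauses = [
        f"vulExists({vuln['device']}, '{vuln['cve']}', {vuln['service']}).",
        f"vulProperty('{vuln['cve']}', {vuln['range']}, {vuln['effect']}).",
    ]
    return "\n".join(clauses)

record = {
    "device": "smart_lock",
    "cve": "CVE-2019-00000",      # placeholder identifier
    "service": "ble_service",
    "range": "remoteExploit",
    "effect": "privEscalation",
}
clauses = to_prolog(record)
```

In the full pipeline, clauses like these for every vulnerability, exploit model, and device dependency are concatenated and handed to MulVAL to derive the attack graph.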
Predictive Security Metrics - With the emergence of Zero Trust (ZT) architecture, industry leaders have been drawn to the technology because of its potential to handle a high level of security threats. The Zero Trust Architecture (ZTA) is paving the path for a security industrial revolution by eliminating location-based implicit access and focusing on asset, user, and resource security. Software Defined Perimeter (SDP) is a secure overlay network technology that can be used to implement a Zero Trust framework. SDP is a next-generation network technology that allows the network architecture to be hidden from the outside world. It also hides the overlay communication from the underlay network by employing encrypted communications. With encrypted information, detecting abnormal behavior of entities on an overlay network becomes exceedingly difficult; therefore, an automated system is required. In this paper, we propose a method for understanding the normal behavior of deployed policies by mapping network usage behavior to the policy. To do this mapping, Apache Spark collects and processes the streaming overlay monitoring data generated by the built-in fabric API. It sends extracted metrics to Prometheus for storage, and the data is then used for machine learning training and prediction. The model predicts the cluster-id of the link it belongs to, and the cluster-ids are mapped onto the policies. To validate a policy's legitimacy, the labeled policy hash is compared to the actual policy hash obtained from the blockchain. Unverified policies are reported to the SDP controller for additional action, such as defining new policy behavior or marking uncertain policies.
Authored by Waleed Akbar, Javier Rivera, Khan Ahmed, Afaq Muhammad, Wang-Cheol Song
Predictive Security Metrics - Network security personnel are expected to provide uninterrupted services by handling attacks irrespective of the modus operandi. Multiple defensive approaches to prevent, curtail, or mitigate an attack are the primary responsibilities of security personnel. Since predicting security attacks is an additional technique currently used by most organizations to accurately measure the security risks related to overall system performance, several approaches have been used to predict network security attacks. However, achieving high prediction accuracy, analyzing very large datasets, and obtaining a reliable dataset remain the major constraints. Uncertain behavior is subjected to verification and validation by the network administrator. The KDD CUP 99 dataset and the NSL-KDD dataset were both used in the research. NSL-KDD provides 0.997 average micro and macro accuracy, with an average log-loss of 0.16 and an average log-loss reduction of 0.976. Log-loss reduction ranges from negative infinity to 1, where 1 and 0 represent perfect prediction and mean prediction, respectively; it should be as close to 1 as possible for a good model. Log-loss is an evaluation metric that characterizes the accuracy of a classifier whose prediction input is a probability value between 0.00 and 1.00; it should be as close to zero as possible. This paper proposes a FastTree model for predicting network security incidents. The ML.NET framework and the FastTree regression technique provide high prediction accuracy and the ability to analyze large datasets of normal, abnormal, and uncertain behaviors.
Authored by Marcus Magaji, Abayomi Jegede, Nentawe Gurumdimma, Monday Onoja, Gilbert Aimufua, Ayodele Oloyede
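The two headline metrics above, log-loss and log-loss reduction, can be computed directly; the label and probability vectors in this sketch are tiny hand-made illustrations, not the paper's data.

```python
import numpy as np

# Log-loss scores predicted probabilities (lower is better, 0 is perfect).
# Log-loss reduction compares a model against the trivial "always predict
# the base rate" model: 1 is perfect, 0 matches the mean prediction, and
# negative values are worse than the mean prediction.
def log_loss(y_true, p, eps=1e-15):
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def log_loss_reduction(y_true, p):
    baseline = log_loss(y_true, np.full(len(y_true), np.mean(y_true)))
    return 1.0 - log_loss(y_true, p) / baseline

y = [0, 0, 1, 1]
good = [0.1, 0.2, 0.8, 0.9]
near_perfect = [0.001, 0.001, 0.999, 0.999]
```

With these vectors, `log_loss_reduction(y, near_perfect)` approaches 1 while `log_loss_reduction(y, good)` lands between 0 and 1, mirroring the 0.976 average reduction reported in the abstract.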
Predictive Security Metrics - Predicting vulnerabilities through source code analysis and using the predictions to guide software maintenance can effectively improve software security. One effective way to predict vulnerabilities is to analyze the library references and function calls used in the code. In this paper, we extract library references and function calls from project files through source code analysis, generate sample sets for statistical learning based on these data, and design and train an integrated learning model that can be used for prediction. The designed model has a high accuracy rate and accomplishes the prediction task well. It also demonstrates the correlation between vulnerabilities and library references and function calls.
Authored by Yiyi Liu, Minjie Zhu, Yilian Zhang, Yan Chen
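The extraction step can be sketched with Python's own parser; the paper does not state its target language, so this stand-in assumes Python sources and a hard-coded snippet in place of real project files.

```python
import ast

# Pull imported libraries and called function names out of a source file,
# the raw material for the paper's statistical-learning sample sets.
SOURCE = """
import os
from subprocess import call

def run(cmd):
    call(cmd, shell=True)
    return os.getcwd()
"""

tree = ast.parse(SOURCE)
libraries, calls = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        libraries.update(alias.name for alias in node.names)
    elif isinstance(node, ast.ImportFrom):
        libraries.add(node.module)
    elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        calls.add(node.func.id)   # direct calls only; attribute calls skipped
```

Each project file's `libraries` and `calls` sets would then be vectorized (e.g. one-hot) to form the feature rows the model is trained on.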
Predictive Security Metrics - Across the globe, renewable generation integration has been increasing in the last decades to meet ever-increasing power demand and emission targets. Wind power has dominated among various renewable sources due to its widespread availability and advanced low-cost technologies. However, the stochastic nature of wind power results in power system reliability and security issues, because the uncertain variability of wind power creates challenges for various system operations such as unit commitment and economic dispatch. Such problems can be addressed to a great extent by accurate wind power forecasts. This has attracted wide investigation into obtaining accurate power forecasts using various forecasting models such as time series, machine learning, probabilistic, and hybrid models. These models use different types of inputs, produce forecasts over different time horizons, and have different applications. Different investigations also report forecasting performance using different performance metrics. Few classification reviews are available for these areas, and a detailed classification will help researchers and system operators develop new, accurate forecasting models. Therefore, this paper proposes a detailed review of those areas. It concludes that, even though numerous investigations are available, wind power forecasting accuracy improvement remains an ever-present research problem. Forecasting performance expressed in financial terms, such as deviation charges, can also be used to represent the economic impact of forecasting accuracy improvement.
Authored by Sandhya Kumari, Arjun Rathi, Ayush Chauhan, Nigarish Khan, Sreenu Sreekumar, Sonika Singh
Predictive Security Metrics - A computer operating system vulnerability is a weakness that a threat source might exploit, or use to create a hole, in an information system, system security procedures, internal controls, or implementation. Since information security is a problem for everyone, predicting vulnerabilities is crucial. The typical method of vulnerability prediction involves manually identifying traits that might be related to unsafe code. The Common Vulnerability Scoring System (CVSS) is an open framework for defining the characteristics and seriousness of software problems; Base, Temporal, and Environmental are its three metric categories. In this research, neural networks are utilized to build a predictive model of Windows 10 vulnerabilities using the published vulnerability data in the National Vulnerability Database. Different variants of neural networks are used, which implement backpropagation for training on operating system vulnerability scores ranging from 0 to 10. Additionally, the research identifies the influential factors using Loess variable importance in neural networks, which shows that access complexity and polarity are only marginally important for predicting operating system vulnerabilities, while confidentiality impact, integrity impact, and availability impact are highly important.
Authored by Freeh Alenezi, Tahir Mehmood
Identifying Evolution of Software Metrics by Analyzing Vulnerability History in Open Source Projects
Predictive Security Metrics - Software developers mostly focus on functioning code while developing their software, paying little attention to software security issues. Nowadays, security is getting priority not only during the development phase but also during the other phases of the software development life cycle (from requirement specification through the maintenance phase). To that end, research has expanded towards dealing with security issues in various phases. Current research mostly focuses on developing prediction models, most of them based on software metrics. The metrics-based models showed higher precision but poor recall in prediction. Moreover, they did not separately analyze the role of each software metric in the occurrence of vulnerabilities. In this paper, we track the evolution of metrics within the life cycle of a vulnerability, from the version where it was introduced, through the last affected version, to the fixed version. In particular, we studied a total of 250 files from three major releases of Apache Tomcat (8, 9, and 10). We found that four metrics (AvgCyclomatic, AvgCyclomaticStrict, CountDeclMethod, and CountLineCodeExe) show significant changes over the vulnerability history of Tomcat. In addition, we discovered that the Tomcat team prioritizes fixing threatening vulnerabilities, such as Denial of Service, over less severe ones. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on appropriate software metrics. They will also help assess developers' mindsets about fixing different types of vulnerabilities in open source projects.
Authored by Erik Maza, Kazi Sultana
Predictive Security Metrics - Security metrics for software products give a quantifiable assessment of a software system's trustworthiness. Metrics can also help detect vulnerabilities in systems, prioritize corrective actions, and raise the level of information security within the business. There is a lack of studies that identify the measurements, metrics, and internal design properties used to assess software security. Therefore, this paper surveys security measurements used to assess and predict security vulnerabilities. We identified the internal design properties that were used to measure software security based on the internal structure of the software, as well as the security metrics used in the studies we examined. We discussed how software refactoring has been used to improve software security. We observed that a software system with low coupling, low complexity, and high cohesion is more secure, and vice versa. Current research directions have been identified and discussed.
Authored by Abdullah Almogahed, Mazni Omar, Nur Zakaria, Abdulwadood Alawadhi
Insider Threat - Insider threat is among the greatest obstacles in cybersecurity and a well-known, massive issue. This anomaly calls for specialized detection techniques and resources that can help with the accurate and quick detection of a harmful insider. Numerous studies on identifying insider threats and related topics have been conducted to tackle this problem, and various researchers have sought to improve the conceptual perception of insider risks. Nevertheless, numerous drawbacks remain, including a dearth of actual cases, unfairness in drawing decisions, a lack of self-optimization in learning (a huge concern that is still vague), and the absence of an investigation that addresses the conceptual, technological, and numerical facets of insider threats and their identification from a wide range of perspectives. The intention of the paper is to afford a thorough exploration of the categories, levels, and methodologies of modern insider detection based on machine learning techniques. Further, the approach and evaluation metrics for predictive models based on machine learning are discussed. The paper concludes by outlining the difficulties encountered and offering some suggestions for efficient threat identification using machine learning.
Authored by Nagabhushana Babu, M Gunasekaran
Information Theoretic Security - The intrusion detection process can be examined from an information-theoretic standpoint. Given the IDS output (alarm data), we should have less uncertainty regarding the input (event data). We propose the Capability of Intrusion Detection (CID) measure, which is simply the mutual information between IDS input and output divided by the entropy of the input. CID has the desirable properties of (1) naturally accounting for all important aspects of detection capability, such as true positive rate, false positive rate, positive predictive value, negative predictive value, and base rate, (2) objectively providing an intrinsic measure of intrusion detection capability, and (3) being sensitive to IDS operation parameters. When fine-tuning an IDS, we believe that CID is the best performance metric to use. The operating point so obtained is the best the IDS can achieve in terms of its inherent ability to classify input data.
Authored by Noor Hashim, Sattar Sadkhan
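The CID measure can be computed from a joint distribution over IDS input and output; the base rate, true positive rate, and false positive rate below are illustrative values, not figures from the paper.

```python
import numpy as np

# CID = I(X; Y) / H(X), where X is the true state (intrusion / no intrusion)
# and Y is the IDS output (alarm / no alarm). The joint distribution encodes
# exactly the quantities the abstract lists: base rate, TPR, and FPR.
def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cid(base_rate, tpr, fpr):
    # Joint P(X, Y): rows = intrusion yes/no, cols = alarm yes/no.
    joint = np.array([
        [base_rate * tpr, base_rate * (1 - tpr)],
        [(1 - base_rate) * fpr, (1 - base_rate) * (1 - fpr)],
    ])
    h_x = entropy(joint.sum(axis=1))
    h_y = entropy(joint.sum(axis=0))
    h_xy = entropy(joint.flatten())
    mutual_info = h_x + h_y - h_xy
    return mutual_info / h_x

perfect = cid(0.1, tpr=1.0, fpr=0.0)   # ideal detector: CID = 1
noisy = cid(0.1, tpr=0.8, fpr=0.1)     # imperfect detector: 0 < CID < 1
```

Normalizing by `h_x` keeps CID in [0, 1], so detectors tuned at different base rates remain comparable.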
Cyberspace is the fifth major activity space after land, sea, air, and space. Safeguarding cyberspace security is a major issue related to national security, national sovereignty, and the legitimate rights and interests of the people. With the rapid development of artificial intelligence technology and its application in various fields, cyberspace security faces new challenges: how to help network security personnel grasp security trends at any time, help monitoring personnel respond quickly to alarm information, and facilitate their tracking and handling of incidents. This paper introduces a method that uses a situational awareness micro-application, a practical attack-and-defense robot, to quickly feed network attack information back to monitoring personnel, report the attack information to the information reporting platform in a timely manner, and automatically block malicious IPs.
Authored by Lei Yan, Xinrui Liu, Chunhui Du, Junjie Pei
Cyber Physical Systems (CPS), which contain devices to aid with physical infrastructure activities, comprise sensors, actuators, control units, and physical objects. CPS sends messages to physical devices to carry out computational operations. CPS mainly deals with the interplay between cyber and physical environments. The real-time network data acquired and collected in physical space is stored there, and the connection becomes sophisticated. CPS incorporates cyber and physical technologies at all phases. Cyber Physical Systems are a crucial component of Internet of Things (IoT) technology. CPS is a traditional concept that brings together the physical and digital worlds we inhabit. Nevertheless, CPS has several difficulties that are likely to jeopardise our lives immediately, and its numerous levels are all tied to immediate threats, necessitating a look at CPS security. Due to the inclusion of IoT devices in a wide variety of applications, the security and privacy of users are key considerations. The rising level of cyber threats has left current security and privacy procedures insufficient; as a result, hackers can treat every person on the Internet as a product. Deep Learning (DL) methods are therefore utilised to provide accurate outputs from big, complex databases, where the generated outputs can be used to forecast and discover vulnerabilities in IoT systems that handle medical data. Cyber-physical systems need anomaly detection to be secure. However, the rising sophistication of CPSs and more complex attacks mean that typical anomaly detection approaches are unsuitable for addressing these difficulties, since they are simply overwhelmed by the volume of data and the necessity for domain-specific knowledge. Attacks such as DoS and DDoS that impact network performance also need to be avoided.
In this paper, an effective Network Cluster Reliability Model with enhanced security and privacy levels for data in IoT Anomaly Detection (NSRM-AD) using a deep learning model is proposed. The security levels of the proposed model are contrasted with those of existing models, and the results show that the proposed model performs accurately.
Authored by Maloth Sagar, Vanmathi C
Classifying and predicting the accuracy of intrusion detection on cybercrime by comparing machine learning methods such as an Innovative Decision Tree (DT) with a Support Vector Machine (SVM). By comparing the Decision Tree (N=20) and the Support Vector Machine algorithm (N=20), two classes of machine learning classifiers were used to determine the accuracy. The Decision Tree (99.19%) has higher accuracy than the SVM (98.5615%); an independent t-test was carried out (p = .507), showing that the difference is statistically insignificant (p > 0.05) at a 95% confidence level. According to the significance analysis, the Innovative Decision Tree is more productive than the Support Vector Machine for recognizing intruders.
Authored by Marri Kumar, Prof. K.Malathi
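The comparison protocol above (two samples of N=20 accuracy scores, independent t-test) can be sketched as follows; the score samples are synthetic stand-ins drawn around the reported means, so the resulting p-value will not match the reported .507.

```python
import numpy as np
from scipy.stats import ttest_ind

# Two samples of N=20 accuracy scores, one per classifier, generated around
# the abstract's reported means (assumed spread of 0.01 is illustrative).
rng = np.random.default_rng(7)
dt_acc = np.clip(rng.normal(0.9919, 0.01, size=20), 0, 1)
svm_acc = np.clip(rng.normal(0.9856, 0.01, size=20), 0, 1)

# Independent two-sample t-test on the accuracy samples.
stat, p_value = ttest_ind(dt_acc, svm_acc)
```

A `p_value` above 0.05 would reproduce the abstract's conclusion that the accuracy gap between the two classifiers is not statistically significant.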
Being a part of today's technical world, we are connected through a vast network. The more we rely on these modern techniques, the more security we need. A network security system must be reliable enough to monitor an organization's entire network so that unauthorized users or intruders cannot cause security breaches. Firewalls secure our internal network from unauthorized outsiders, but attacks remain possible: according to a survey, 60% of attacks were internal to the network. So the internal system needs the same high level of security as the external one. Understanding the value of security measures with accuracy, efficiency, and speed, we focus on implementing and comparing an improved intrusion detection system. A comprehensive literature review found that some feature selection techniques, with standard scaling, combined with machine learning techniques can give better results than existing ML techniques alone. In this survey paper, with the help of the univariate feature selection method, 14 essential features out of 41 are selected and used in a comparative analysis. We implemented and compared both binary-class and multi-class classification-based Intrusion Detection Systems (IDS) for two supervised machine learning techniques: Support Vector Machine and Classification and Regression Techniques.
Authored by Pushpa Singh, Parul Tomar, Madhumita Kathuria
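The 14-of-41 univariate selection step can be sketched with scikit-learn's `SelectKBest`; the synthetic dataset below stands in for the IDS data, which the abstract does not name.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in with 41 features, of which 14 are informative,
# mirroring the 14-of-41 setup described in the paper.
X, y = make_classification(n_samples=500, n_features=41, n_informative=14,
                           n_redundant=0, random_state=0)

# Univariate selection: score every feature independently with an ANOVA
# F-test against the class label and keep the 14 best.
selector = SelectKBest(score_func=f_classif, k=14)
X_reduced = selector.fit_transform(X, y)
```

The reduced matrix (and a standard scaler, per the abstract) would then feed both the binary-class and multi-class SVM/tree classifiers being compared.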
The Network Security and Risk (NSR) management team in an enterprise is responsible for maintaining the network, which includes switches, routers, firewalls, controllers, etc. Due to the ever-increasing threat of capitalizing on vulnerabilities to create cyber-attacks across the globe, a major objective of the NSR team is to keep the network infrastructure safe and secure. The NSR team ensures this by taking the proactive measure of periodic audits of network devices. Furthermore, external auditors are engaged in the audit process. Audit information is primarily stored in an internal database of the enterprise. This generic approach could result in a trust deficit during external audits. This paper proposes a method to improve the security and integrity of the audit information by using blockchain technology, which can greatly enhance the trust factor between the auditors and enterprises.
Authored by Santosh Upadhyaya, B. Thangaraju
The article deals with the issues of improving the quality of corporate information systems functioning and ensuring the information security of financial organizations that have a complex structure and serve a significant number of customers. The formation of the company's informational system and its integrated information security system is studied based on the process approach, methods of risk management and quality management. The risks and threats to the security of the informational system functioning and the quality of information support for customer service of a financial organization are analyzed. The methods and tools for improving the quality of information services and ensuring information security are considered on the example of an organization for social insurance. Recommendations are being developed to improve the quality of the informational system functioning in a large financial company.
Authored by Marina Tokareva, Anton Kublitskii, Natalia Telyatnikova, Anatoly Rogov, Ilya Shkolnik
This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong PUFs. These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial PUF complexity analyses, and are much easier and more efficient to apply: They do not require detailed knowledge of various ML methods, substantial computation times, or the availability of an internal parametric model of the studied PUF. Our metrics also can be standardized particularly easily. This avoids the sometimes inconclusive or contradictory findings of existing ML-based security tests, which may result from the usage of different or non-optimized ML algorithms and hyperparameters, differing hardware resources, or varying numbers of challenge-response pairs in the training phase. This first manuscript within the above-mentioned sequence treats one of the conceptually most straightforward security metrics on that path: It investigates the effects that small perturbations in the PUF-challenges have on the resulting PUF-responses. We first develop and implement several sub-metrics that realize this approach in practice. We then empirically show that these metrics have surprising predictive power, and compare our obtained test scores with the known real-world security of several popular Strong PUF designs. The latter include (XOR) Arbiter PUFs, Feed-Forward Arbiter PUFs, and (XOR) Bistable Ring PUFs. Along the way, our manuscript also suggests techniques for representing the results of our metrics graphically, and for interpreting them in a meaningful manner.
Authored by Fynn Kappelhoff, Rasmus Rasche, Debdeep Mukhopadhyay, Ulrich Rührmair
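The bit-flip perturbation idea can be illustrated on a textbook additive-delay Arbiter PUF model; this toy model and its parameters are assumptions for illustration, not the manuscript's implementation or its actual sub-metrics.

```python
import numpy as np

# Flip one bit of a Strong PUF challenge and measure how often the response
# flips. A secure PUF should show avalanche-like behavior; strongly
# position-dependent flip rates hint at learnable structure.
rng = np.random.default_rng(1)
n_bits, n_challenges = 64, 2000
weights = rng.normal(size=n_bits + 1)          # simulated delay parameters

def response(challenges):
    # Standard Arbiter PUF linear model: sign of a weighted sum of the
    # parity feature vector phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias.
    phi = np.cumprod(1 - 2 * challenges[:, ::-1], axis=1)[:, ::-1]
    features = np.hstack([phi, np.ones((len(challenges), 1))])
    return (features @ weights > 0).astype(int)

c = rng.integers(0, 2, size=(n_challenges, n_bits))
base = response(c)
flip_rates = []
for bit in range(n_bits):
    c_flipped = c.copy()
    c_flipped[:, bit] ^= 1                      # perturb a single bit
    flip_rates.append(float(np.mean(base != response(c_flipped))))

avg_flip_rate = float(np.mean(flip_rates))
```

For this linear model, the flip rate depends strongly on which bit is perturbed, which is exactly the kind of structural signature such a metric is meant to surface.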
According to the characteristics of security threats and massive user bases in power mobile applications, a mobile application security situational awareness method based on a big data architecture is proposed. The method uses open-source big data technology frameworks such as Kafka, Flink, and Elasticsearch to complete the collection, analysis, storage, and visual display of massive power mobile application data, and to improve data processing throughput. The security situational awareness method for power mobile applications takes the mobile terminal threat index as its core, divides mobile terminals into risk levels, and predicts the terminal threat index through a support vector regression (SVR) algorithm, so as to construct a security profile of the terminals on which the mobile application runs. Finally, through visualization services, data such as power mobile application and terminal assets, security operation statistics, security strategies, and alarm analysis are displayed to guide security operation and maintenance personnel in carrying out decision-making work such as power mobile application security monitoring and early warning, banning and disposal, and traceability analysis. The experimental analysis results show that the method meets the requirements of security situational awareness for threat assessment accuracy and response speed, and the related results have been successfully applied in a power company.
Authored by Li Yong, Chen Mu, Dai ZaoJian, Chen Lu
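The SVR-based threat-index prediction can be sketched as follows; the features, their weighting, and the 0-10 index scale are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic terminal-behavior features mapped to a threat index in [0, 10].
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(300, 3))   # e.g. alert rate, anomaly score, patch age
threat_index = 10 * (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2])

# Fit support vector regression and score a hypothetical risky terminal.
model = SVR(kernel="rbf", C=10.0).fit(X, threat_index)
r2 = model.score(X, threat_index)
pred = float(model.predict([[0.9, 0.8, 0.7]])[0])
```

The predicted index would then be bucketed into the risk levels that drive the terminal's security profile and the operator dashboards.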
Real-time situational awareness (SA) plays an essential role in accurate and timely incident response. Maintaining SA is, however, extremely costly due to excessive false alerts generated by intrusion detection systems, which require prioritization and manual investigation by security analysts. In this paper, we propose a novel approach to prioritizing alerts so as to maximize SA, by formulating the problem as that of active learning in a hidden Markov model (HMM). We propose to use the entropy of the belief of the security state as a proxy for the mean squared error (MSE) of the belief, and we develop two computationally tractable policies for choosing alerts to investigate that minimize the entropy, taking into account the potential uncertainty of the investigations' results. We use simulations to compare our policies to a variety of baseline policies. We find that our policies reduce the MSE of the belief of the security state by up to 50% compared to static baseline policies, and they are robust to high false alert rates and to the investigation errors.
Authored by Yeongwoo Kim, György Dán
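The central quantity of the proposed policies, the entropy of the belief over the hidden security state, can be sketched with a two-state Bayes update; the state space and the probabilities below are illustrative, not the paper's HMM.

```python
import math

# Entropy of the belief is used as a proxy for the MSE of the belief: the
# policies prioritize investigating alerts whose outcome is expected to
# reduce this entropy the most.
def entropy(belief):
    return -sum(p * math.log2(p) for p in belief if p > 0)

def update(belief, likelihoods):
    # Bayes update: posterior proportional to likelihood times prior.
    post = [l * p for l, p in zip(likelihoods, belief)]
    total = sum(post)
    return [p / total for p in post]

prior = [0.5, 0.5]               # (compromised, safe): maximal uncertainty
alert_likelihood = [0.9, 0.2]    # P(alert observed | state)

posterior = update(prior, alert_likelihood)
h_before, h_after = entropy(prior), entropy(posterior)
```

An informative investigation result sharpens the belief, so `h_after` drops below `h_before`; a full policy would also weight in the transition dynamics and the chance of an erroneous investigation.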
Control room video surveillance is an important source of information for ensuring public safety. To facilitate the process, a Decision-Support System (DSS) designed for the security task force is vital and necessary for taking decisions rapidly amid a sea of information. In mission-critical operations, Situational Awareness (SA), which consists of knowing what is going on around you at any given time, plays a crucial role across a variety of industries and should be placed at the center of our DSS. In our approach, the SA system takes advantage of the human factor through a reinforcement signal, whereas previous work in this field focuses on improving the knowledge level of the DSS first and then uses the human factor only for decision-making. In this paper, we propose a situational awareness-centric decision-support system framework for mission-critical operations driven by Quality of Experience (QoE). Our idea is inspired by the reinforcement learning feedback process, which updates our DSS's understanding of the environment. The feedback is injected by a QoE built on user perception. Our approach allows our DSS to evolve according to the context with up-to-date SA.
Authored by Abhishek Djeachandrane, Said Hoceini, Serge Delmas, Jean-Michel Duquerrois, Abdelhamid Mellouk
The features of socio-cyber-physical systems are presented, which dictate the need to revise traditional management methods and transform the management system in such a way that it takes into account the presence of a person both in the control object and in the control loop. The use of situational control mechanisms is proposed. The features of this approach and its comparison with existing methods of situational awareness are presented. The comparison has demonstrated wider possibilities and scope for managing socio-cyber-physical systems. It is recommended to consider a wider class of types of relations that exist in socio-cyber-physical systems. It is indicated that such consideration can be based on the use of pseudo-physical logics considered in situational control. It is pointed out that it is necessary to design a classifier of situations (primarily in cyberspace), instead of traditional classifiers of threats and intruders.
Authored by Oleksandr Milov, Vladyslav Khvostenko, Voropay Natalia, Olha Korol, Nataliia Zviertseva
With the electric power distribution grid facing ever increasing complexity and new threats from cyber-attacks, situational awareness for system operators is quickly becoming indispensable. Identifying de-energized lines on the distribution system during a SCADA communication failure is a prime example where operators need to act quickly to deal with an emergent loss of service. Loss of cellular towers, poor signal strength, and even cyber-attacks can impact SCADA visibility of line devices on the distribution system. Neural Networks (NNs) provide a unique approach to learn the characteristics of normal system behavior, identify when abnormal conditions occur, and flag these conditions for system operators. This study applies a 24-hour load forecast for distribution line devices given the weather forecast and day of the week, then determines the current state of distribution devices based on changes in SCADA analogs from communicating line devices. A neural network-based algorithm is applied to historical events on Alabama Power's distribution system to identify de-energized sections of line when a significant amount of SCADA information is hidden.
Authored by Matthew Leak, Ganesh Venayagamoorthy