In recent years, machine learning technology has been extensively utilized, leading to increased attention to the security of AI systems. In the field of image recognition, an attack technique called the clean-label backdoor attack has been widely studied; it is more difficult to detect than general backdoor attacks because the labels of the poisoned training data are not changed. However, there remains a lack of research on malware detection systems. Some of the existing work operates under the white-box assumption, which requires knowledge of the machine learning model, knowledge that is advantageous to attackers. In this study, we focus on clean-label backdoor attacks in malware detection systems and propose a new clean-label backdoor attack under the black-box assumption, which does not require knowledge of the machine learning model and therefore poses a greater risk. The experimental evaluation of the proposed attack method shows that the attack success rate is up to 80.50\% when the poisoning rate is 14.00\%, demonstrating the effectiveness of the proposed attack method. In addition, we experimentally evaluated the effectiveness of dimensionality reduction techniques in preventing clean-label backdoor attacks and showed that they can reduce the attack success rate by 76.00\%.
Authored by Wanjia Zheng, Kazumasa Omote
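As a rough illustration of the defense evaluated in the abstract above, the sketch below applies PCA-based dimensionality reduction to malware feature vectors before training a detector. This is a hedged, minimal example with entirely synthetic features standing in for real static malware features; it is not the authors' pipeline.

```python
# Minimal sketch: PCA dimensionality reduction ahead of a malware detector.
# The feature matrix X and labels y are placeholders for real static-analysis
# features; all names and sizes here are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 256))          # placeholder static-analysis features
y = rng.integers(0, 2, size=2000)    # 0 = benign, 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PCA compresses the feature space; trigger patterns hidden in low-variance
# directions of poisoned samples may be attenuated by this projection.
detector = Pipeline([
    ("pca", PCA(n_components=32)),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
detector.fit(X_tr, y_tr)
print("clean test accuracy:", detector.score(X_te, y_te))
```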
As artificial intelligence models continue to grow in capacity and sophistication, they are often trusted with very sensitive information. In the sub-field of adversarial machine learning, developments are geared solely towards finding reliable methods to systematically erode the ability of artificial intelligence systems to perform as intended. These techniques can cause serious breaches of security, interruptions to major systems, and irreversible damage to consumers. Our research evaluates the effects of various white-box adversarial machine learning attacks on popular computer vision deep learning models, leveraging a public X-ray dataset from the National Institutes of Health (NIH). We use several experiments to gauge the feasibility of developing deep learning models that are robust to adversarial machine learning attacks by taking into account different defense strategies, such as adversarial training, and by observing how adversarial attacks evolve over time. Our research details how a variety of white-box attacks affect different components of InceptionNet, DenseNet, and ResNeXt and suggests how the models can effectively defend against these attacks.
Authored by Ilyas Bankole-Hameed, Arav Parikh, Josh Harguess
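A minimal PyTorch sketch of the kind of experiment described above: crafting white-box FGSM adversarial examples against a torchvision classifier and folding them into a training step (basic adversarial training). The DenseNet backbone, the random tensors standing in for NIH chest X-rays, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: FGSM adversarial examples plus a simple adversarial-training
# step. Data and hyperparameters are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.densenet121(num_classes=2).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def fgsm(model, x, y, eps=0.01):
    """White-box FGSM: perturb x along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(x, y):
    model.train()
    x_adv = fgsm(model, x, y)                        # craft attacks on the fly
    loss = 0.5 * (F.cross_entropy(model(x), y)       # clean loss
                  + F.cross_entropy(model(x_adv), y))  # adversarial loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example batch standing in for NIH chest X-rays resized to 224x224 RGB.
x = torch.rand(8, 3, 224, 224, device=device)
y = torch.randint(0, 2, (8,), device=device)
print("loss:", adversarial_training_step(x, y))
```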
AI is one of the most popular fields of technology nowadays. Developers implement these technologies everywhere, sometimes forgetting about their robustness to unexpected types of traffic. This omission can be exploited by attackers, who are always seeking to develop new attacks, so the growth of AI is highly correlated with the rise of adversarial attacks. Adversarial attacks, or adversarial machine learning, are techniques in which attackers attempt to fool ML systems with deceptive data. They can use inconspicuous, natural-looking perturbations in input data to mislead neural networks without interfering with a model directly and often without the risk of being detected. Adversarial attacks are usually divided along three primary axes - the security violation, poisoning, and evasion attacks - which can further be categorized into “targeted”, “untargeted”, “white-box” and “black-box” types. This research examines most of the adversarial attacks known as of 2023 across all these categories and some others.
Authored by Natalie Grigorieva, Sergei Petrenko
Conventional approaches to analyzing industrial control systems have relied on either white-box analysis or black-box fuzzing. However, white-box methods rely on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, resulting in a significant reduction of the state space without relying on domain expertise. Experiment results show that our method accurately explains the ICS model and significantly speeds up fuzzing, by 64x, when compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
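The XAI step above can be pictured with the following hedged sketch: a surrogate machine-learning model of the ICS maps actuator states to a safety-relevant reading, and SHAP attributions rank which actuators matter most so that fuzzing can concentrate on that small subset. The process model, actuator count, and data are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch: rank safety-critical actuators with SHAP on a
# surrogate ICS model. All data and names are synthetic assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n, n_act = 5000, 20
X = rng.integers(0, 2, size=(n, n_act)).astype(float)   # actuator on/off logs
# Pretend only actuators 3 and 7 really drive the critical tank level.
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.standard_normal(n)

surrogate = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)

explainer = shap.TreeExplainer(surrogate)
shap_values = explainer.shap_values(X[:500])             # (samples, actuators)
importance = np.abs(shap_values).mean(axis=0)

ranking = np.argsort(importance)[::-1]
print("actuators to prioritise in fuzzing:", ranking[:5])
```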
ChatGPT, a conversational Artificial Intelligence, has the capacity to produce grammatically accurate and persuasively human responses to numerous inquiry types from various fields. Both its user base and its applications are growing at an unbelievable rate. Sadly, usage and abuse often go hand in hand. Since the words produced by AI are nearly indistinguishable from those produced by humans, the AI model can be used to influence people or organizations in a variety of ways. In this paper, we test the accuracy of various online tools widely used for the detection of AI-generated and human-generated texts or responses.
Authored by Prerana Singh, Aditya Singh, Sameer Rathi, Sonika Vasesi
With the increasing deployment of machine learning models across various domains, ensuring AI security has become a critical concern. Model evasion, a specific area of concern, involves attackers manipulating a model's predictions by perturbing the input data. The Fast Gradient Sign Method (FGSM) is a well-known technique for model evasion, typically used in white-box settings where the attacker has direct access to the model's architecture. In this method, the attacker intelligently manipulates the inputs to cause mispredictions by accessing the gradients with respect to the input. To address the limitations of FGSM in black-box settings, we propose an extension of this approach called FGSM on ZOO. This method leverages the Zeroth Order Optimization (ZOO) technique to intelligently manipulate the inputs. Unlike white-box attacks, black-box attacks rely solely on observing the model's input-output behavior without access to its internal structure or parameters. We conducted experiments using the MNIST Digits and CIFAR datasets to establish a baseline for vulnerability assessment and to explore future prospects for securing models. By examining the effectiveness of FGSM on ZOO in these experiments, we gain insights into the potential vulnerabilities and the need for improved security measures in AI systems.
Authored by Aravindhan G, Yuvaraj Govindarajulu, Pavan Kulkarni, Manojkumar Parmar
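The core idea of FGSM on ZOO can be sketched as follows: the input gradient is estimated purely from black-box queries via symmetric finite differences (ZOO-style), and an FGSM sign step is taken along the estimate. The query_loss oracle below is a toy stand-in for the victim model, and all constants are illustrative rather than taken from the paper.

```python
# Hedged sketch: zeroth-order gradient estimation plus an FGSM sign step.
import numpy as np

def query_loss(x, y):
    """Black-box oracle: returns a loss given only input/output access.
    Here a toy logistic model plays the victim."""
    w = np.linspace(-1, 1, x.size)
    logit = float(x.ravel() @ w)
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def zoo_gradient_estimate(x, y, h=1e-3, n_dirs=64):
    """Estimate the input gradient from queries only, using symmetric
    finite differences along random coordinates (ZOO-style)."""
    grad = np.zeros_like(x)
    idx = np.random.choice(x.size, size=min(n_dirs, x.size), replace=False)
    for i in idx:
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (query_loss(x + e, y) - query_loss(x - e, y)) / (2 * h)
    return grad

def fgsm_on_zoo(x, y, eps=0.1):
    """FGSM step using the zeroth-order estimate instead of true gradients."""
    g = zoo_gradient_estimate(x, y)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

x = np.random.rand(28 * 28)       # a flattened MNIST-like input
y = 1
print("loss before:", query_loss(x, y))
x_adv = fgsm_on_zoo(x, y)
print("loss after :", query_loss(x_adv, y))
```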
Software vulnerability detection (SVD) aims to identify potential security weaknesses in software. SVD systems have been rapidly evolving from those being based on testing, static analysis, and dynamic analysis to those based on machine learning (ML). Many ML-based approaches have been proposed, but challenges remain: training and testing datasets contain duplicates, and building customized end-to-end pipelines for SVD is time-consuming. We present Tenet, a modular framework for building end-to-end, customizable, reusable, and automated pipelines through a plugin-based architecture that supports SVD for several deep learning (DL) and basic ML models. We demonstrate the applicability of Tenet by building practical pipelines performing SVD on real-world vulnerabilities.
Authored by Eduard Pinconschi, Sofia Reis, Chi Zhang, Rui Abreu, Hakan Erdogmus, Corina Păsăreanu, Limin Jia
In various fields, such as medical engineering or aerospace engineering, it is difficult to apply the decisions of a machine learning (ML) or deep learning (DL) model that does not account for the vast range of human limitations, which can lead to errors and incidents. Explainable Artificial Intelligence (XAI) aims to explain the results of artificial intelligence software (ML or DL), still considered black boxes, in order to understand their decisions and adopt them. In this paper, we are interested in the deployment of a deep neural network (DNN) model able to predict the Remaining Useful Life (RUL) of an aircraft turbofan engine. Shapley's method was then applied to explain the DL results. This made it possible to determine the participation rate of each parameter in the RUL and to identify the most decisive parameters for extending or shortening the RUL of the turbofan engine.
Authored by Anouar BOUROKBA, Ridha HAMDI, Mohamed Njah
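A minimal, assumption-laden sketch of the explanation step described above: a surrogate regressor predicts Remaining Useful Life (RUL) from engine sensor readings, and Shapley values (here via the shap package's KernelExplainer) attribute each prediction to the input parameters. The sensors and data are placeholders rather than the paper's turbofan dataset.

```python
# Hedged sketch: Shapley-value attribution of RUL predictions to sensors.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_sensors = 14
X = rng.standard_normal((1000, n_sensors))
rul = 100 - 30 * X[:, 2] + 10 * X[:, 5] + rng.standard_normal(1000)  # toy RUL

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=2).fit(X, rul)

background = shap.sample(X, 50)                       # reference distribution
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:20])           # (20, n_sensors)

# Mean absolute Shapley value = participation rate of each sensor in the RUL.
contribution = np.abs(shap_values).mean(axis=0)
for i in np.argsort(contribution)[::-1][:5]:
    print(f"sensor_{i:02d}: mean |SHAP| = {contribution[i]:.2f}")
```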
This research emphasizes its main contribution in the context of applying Black Box Models in Knowledge-Based Systems. It elaborates on the fundamental limitations of these models in providing internal explanations, leading to non-compliance with prevailing regulations such as GDPR and PDP, as well as with user needs, especially in high-risk areas like credit evaluation. Therefore, the use of Explainable Artificial Intelligence (XAI) in such systems becomes highly significant. However, its implementation in the credit granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit granting system in Indonesia with local explanations. The development is carried out by utilizing credit data in Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solution is compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The research results indicate that the Decision Tree explanation method outperforms the other tested methods. However, this study also faces several challenges, including limited data size due to time constraints on expert data provision and the simplicity of important features, stemming from limitations on expert authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
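To make the compared setup concrete, the hedged sketch below pairs a Decision Tree credit model with LIME local explanations, one of the combinations evaluated above. The credit features are invented stand-ins for the Indonesian credit data, and this is not the study's actual system.

```python
# Hedged sketch: Decision Tree credit model with a LIME local explanation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(3)
feature_names = ["income", "loan_amount", "tenure_months", "late_payments"]
X = rng.random((1500, 4)) * [20_000, 50_000, 60, 12]
y = ((X[:, 0] / X[:, 1] > 0.35) & (X[:, 3] < 4)).astype(int)  # 1 = approve

model = DecisionTreeClassifier(max_depth=5, random_state=3).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["reject", "approve"],
    mode="classification")

# Local explanation for one applicant: which features pushed the decision.
applicant = X[0]
exp = explainer.explain_instance(applicant, model.predict_proba, num_features=4)
for rule, weight in exp.as_list():
    print(f"{rule:35s} weight={weight:+.3f}")
```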
Despite intensive research, the survival rate for pancreatic cancer, a fatal and incurable illness, has not dramatically improved in recent years. Deep learning systems have shown superhuman ability in a considerable number of activities, and recent developments in Artificial Intelligence (AI) have led to its widespread use in predictive analytics of pancreatic cancer. However, the improvement in performance is the result of increased model complexity, which turns these systems into “black box” methods and creates uncertainty about how they function and, ultimately, how they make judgements. This ambiguity has made it difficult for deep learning algorithms to be accepted in important fields like healthcare, where their benefit may be enormous. As a result, there has been a significant resurgence of scholarly interest in recent years in the topic of Explainable Artificial Intelligence (XAI), which is concerned with the creation of novel techniques for interpreting and explaining deep learning models. In this study, we utilize Computed Tomography (CT) images and clinical data to predict pancreatic cancer and analyze survival rate, respectively. Since pancreatic tumors are small and difficult to identify, region marking through XAI will assist medical professionals in identifying the appropriate region and determining the presence of cancer. Various features are taken into consideration for survival prediction. The most prominent features can be identified with the help of XAI, which in turn aids medical professionals in making better decisions. This study mainly focuses on the XAI strategy for deep and machine learning models rather than on prediction and survival methodology.
Authored by Srinidhi B, M Bhargavi
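Region marking of the kind mentioned above is commonly done with Grad-CAM-style attribution; the following is a minimal Grad-CAM sketch in PyTorch. The paper does not commit to this specific method, so treat it purely as an illustration: it highlights the image region that drives a CNN's "tumor present" logit for a placeholder CT slice, with an untrained ResNet standing in for the real model.

```python
# Illustrative Grad-CAM sketch (one possible region-marking XAI technique).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(num_classes=2).eval()
feature_maps, gradients = {}, {}

def fwd_hook(_, __, output):            # capture last conv block activations
    feature_maps["a"] = output

def bwd_hook(_, __, grad_output):       # capture gradients flowing back
    gradients["g"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

ct_slice = torch.rand(1, 3, 224, 224)   # placeholder preprocessed CT slice
logits = model(ct_slice)
logits[0, 1].backward()                 # gradient of the "cancer" logit

# Grad-CAM: weight each activation map by its average gradient, then ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feature_maps["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=ct_slice.shape[-2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print("heatmap shape:", cam.shape)      # overlay this on the CT slice
```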
Explainable AI (XAI) techniques are used for understanding the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework, XAI4PublicPolicy, that provides customizable and reusable XAI dashboards ready to be used by both data scientists and general users with no coding. Models and datasets are selected by dragging and dropping from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project for explaining the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
Machine learning models have become increasingly complex, making it difficult to understand how they make predictions. Explainable AI (XAI) techniques have been developed to enhance model interpretability, thereby improving model transparency, trust, and accountability. In this paper, we present a comparative analysis of several XAI techniques to enhance the interpretability of machine learning models. We evaluate the performance of these techniques on a dataset commonly used for regression or classification tasks. The XAI techniques include SHAP, LIME, PDP, and GAM. We compare the effectiveness of these techniques in terms of their ability to explain model predictions and identify the most important features in the dataset. Our results indicate that XAI techniques significantly improve model interpretability, with SHAP and LIME being the most effective in identifying important features in the dataset. Our study provides insights into the strengths and limitations of different XAI techniques and their implications for the development and deployment of machine learning models. We conclude that XAI techniques have the potential to significantly enhance model interpretability and promote trust and accountability in the use of machine learning models. The paper emphasizes the importance of interpretability in medical applications of machine learning and highlights the significance of XAI techniques in developing accurate and reliable models for medical applications.
Authored by Swathi Y, Manoj Challa
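Of the four compared techniques, the Partial Dependence Plot (PDP) is the easiest to compute by hand, which the hedged sketch below does to make the mechanics concrete: a chosen feature is clamped to each grid value across the whole dataset and the model's average prediction is recorded. The dataset and model are illustrative, not those of the paper.

```python
# Hedged sketch: a Partial Dependence Plot (PDP) computed manually.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = rng.standard_normal((2000, 6))
y = (X[:, 0] + 0.5 * X[:, 3] ** 2 + 0.2 * rng.standard_normal(2000) > 0.5)
model = GradientBoostingClassifier(random_state=4).fit(X, y)

def partial_dependence(model, X, feature, grid_size=20):
    """Average predicted probability as `feature` sweeps over its range."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    pdp = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v                    # clamp the feature everywhere
        pdp.append(model.predict_proba(X_mod)[:, 1].mean())
    return grid, np.array(pdp)

grid, pdp = partial_dependence(model, X, feature=0)
for g, p in zip(grid[::5], pdp[::5]):
    print(f"x0 = {g:+.2f}  ->  mean P(class 1) = {p:.3f}")
```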
The Internet of Things (IoT) heralds an innovative era in communication by enabling everyday devices to send, receive, and share data easily. IoT applications, which prioritise task automation, aim to give inanimate objects autonomy; they promise increased comfort, productivity, and automation. However, strong security, privacy, authentication, and recovery methods are required to realise this goal. In order to build end-to-end secure IoT environments, this article meticulously reviews the security issues and risks inherent to IoT applications. It emphasises the vital necessity for architectural changes. The paper starts by conducting an examination of security concerns before exploring emerging and advanced technologies aimed at nurturing a sense of trust in Internet of Things (IoT) applications. The primary focus of the discussion revolves around how these technologies aid in overcoming security challenges and fostering a secure ecosystem for IoT.
Authored by Pranav A, Sathya S, HariHaran B
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of the model from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework for how an attacker would adopt scaffoldings to deceive security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset. We then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful, and the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
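The general scaffolding idea referenced above (not the authors' framework) can be sketched as a wrapper that answers in-distribution traffic with a biased NIDS model but switches to an innocuous model whenever a query looks like an explainer's synthetic perturbation, thereby hiding the bias from post-hoc XAI audits. All models and data below are toys.

```python
# Hedged sketch of a scaffolded classifier hiding its bias from explainers.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(5)
X_real = rng.standard_normal((3000, 10))           # "real" NIDS feature vectors
y = (X_real[:, 0] > 0).astype(int)                 # ground-truth intrusion label

biased_model = RandomForestClassifier(random_state=5).fit(
    X_real, (X_real[:, 9] > 0).astype(int))        # secretly keys on feature 9
innocuous_model = RandomForestClassifier(random_state=5).fit(X_real, y)

# OOD detector: explainer perturbations tend to fall off the data manifold.
ood_detector = IsolationForest(random_state=5).fit(X_real)

def scaffolded_predict(X):
    """Route off-manifold (likely explainer-generated) queries to the
    innocuous model, everything else to the biased one."""
    is_real = ood_detector.predict(X) == 1
    out = np.empty(len(X), dtype=int)
    if is_real.any():
        out[is_real] = biased_model.predict(X[is_real])
    if (~is_real).any():
        out[~is_real] = innocuous_model.predict(X[~is_real])
    return out

print("on real traffic:    ", scaffolded_predict(X_real[:5]))
print("on LIME-style noise:", scaffolded_predict(rng.uniform(-6, 6, (5, 10))))
```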
Healthcare systems have recently utilized the Internet of Medical Things (IoMT) to assist intelligent data collection and decision-making. However, the volume of malicious threats, particularly new variants of malware attacks on the connected medical devices and their connected system, has risen significantly in recent years, which poses a critical threat to patients’ confidential data and the safety of the healthcare systems. To address the high complexity of conventional software-based detection techniques, Hardware-supported Malware Detection (HMD) has proved to be efficient for detecting malware at the processors’ micro-architecture level with the aid of Machine Learning (ML) techniques applied to Hardware Performance Counter (HPC) data. In this work, we examine the suitability of various standard ML classifiers for zero-day malware detection on new data streams in the real-world operation of IoMT devices and demonstrate that such methods are not capable of detecting unknown malware signatures with a high detection rate. In response, we propose a hybrid and adaptive image-based framework based on Deep Learning and Deep Reinforcement Learning (DRL) for online hardware-assisted zero-day malware detection in IoMT devices. Our proposed method dynamically selects the best DNN-based malware detector at run-time customized for each device from a pool of highly efficient models continuously trained on all stream data. It first converts tabular hardware-based data (HPC events) into small-size images and then leverages a transfer learning technique to retrain and enhance the Deep Neural Network (DNN) based model’s performance for unknown malware detection. Multiple DNN models are trained on various stream data continuously to form an inclusive model pool. Next, a DRL-based agent constructed with two Multi-Layer Perceptrons (MLPs), one acting as an Actor and the other as a Critic, is trained to select the optimal DNN model for highly accurate zero-day malware detection at run-time using a limited number of hardware events. The experimental results demonstrate that our proposed AI-enabled method achieves 99\% detection rate in both F1-score and AUC, with only 0.01\% false positive rate and 1\% false negative rate.
Authored by Zhangying He, Hossein Sayadi
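The tabular-to-image step described above can be pictured with the small sketch below: a vector of Hardware Performance Counter (HPC) readings is min-max normalised and reshaped into a tiny grayscale image so that image-based DNNs and transfer learning can be applied. The event names and sizes are invented for illustration.

```python
# Hedged sketch: converting a vector of HPC event counts into a small image.
import numpy as np

HPC_EVENTS = ["branch_misses", "cache_misses", "instructions", "llc_loads",
              "dtlb_misses", "itlb_misses", "bus_cycles", "ref_cycles",
              "branch_loads", "cache_refs", "cpu_cycles", "node_loads",
              "node_stores", "llc_stores", "dtlb_stores", "itlb_loads"]

def hpc_to_image(counts, side=4):
    """Turn a 1-D array of HPC event counts into a (side x side) image in
    [0, 1]; pad with zeros if there are fewer counts than pixels."""
    counts = np.asarray(counts, dtype=float)
    lo, hi = counts.min(), counts.max()
    norm = (counts - lo) / (hi - lo + 1e-9)
    img = np.zeros(side * side)
    img[:norm.size] = norm[:side * side]
    return img.reshape(side, side)

sample = np.random.default_rng(6).integers(1_000, 10_000_000,
                                           size=len(HPC_EVENTS))
image = hpc_to_image(sample)
print(image.round(2))   # this 4x4 "image" is what the DNN pool consumes
```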
Increasing automation in vehicles enabled by increased connectivity to the outside world has exposed vulnerabilities in previously siloed automotive networks like controller area networks (CAN). Attributes of CAN such as broadcast-based communication among electronic control units (ECUs) that lowered deployment costs are now being exploited to carry out active injection attacks like denial of service (DoS), fuzzing, and spoofing attacks. Research literature has proposed multiple supervised machine learning models deployed as Intrusion detection systems (IDSs) to detect such malicious activity; however, these are largely limited to identifying previously known attack vectors. With the ever-increasing complexity of active injection attacks, detecting zero-day (novel) attacks in these networks in real-time (to prevent propagation) becomes a problem of particular interest. This paper presents an unsupervised-learning-based convolutional autoencoder architecture for detecting zero-day attacks, which is trained only on benign (attack-free) CAN messages. We quantise the model using Vitis-AI tools from AMD/Xilinx targeting a resource-constrained Zynq Ultrascale platform as our IDS-ECU system for integration. The proposed model successfully achieves equal or higher classification accuracy (>99.5\%) on unseen DoS, fuzzing, and spoofing attacks from a publicly available attack dataset when compared to the state-of-the-art unsupervised learning-based IDSs. Additionally, by cleverly overlapping IDS operation on a window of CAN messages with the reception, the model is able to meet line-rate detection (0.43 ms per window) of high-speed CAN, which when coupled with the low energy consumption per inference, makes this architecture ideally suited for detecting zero-day attacks on critical CAN networks.
Authored by Shashwat Khandelwal, Shanker Shreejith
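A hedged sketch of the unsupervised detection scheme described above (not the quantised Vitis-AI model itself): a small convolutional autoencoder is trained only on benign CAN message windows, and at inference a window whose reconstruction error exceeds a benign-derived threshold is flagged as a potential zero-day attack. The window encoding and sizes are assumptions.

```python
# Hedged sketch: convolutional autoencoder anomaly detection on CAN windows.
import torch
import torch.nn as nn

class CANAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 1 x 32 x 8 window (32 CAN frames, 8 normalised byte columns).
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

model = CANAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

benign = torch.rand(256, 1, 32, 8)           # placeholder benign windows
for _ in range(5):                           # short training loop for the sketch
    recon = model(benign)
    loss = loss_fn(recon, benign)
    opt.zero_grad(); loss.backward(); opt.step()

# Threshold = high percentile of the benign reconstruction error.
with torch.no_grad():
    err = ((model(benign) - benign) ** 2).mean(dim=(1, 2, 3))
    threshold = err.quantile(0.995)
    window = torch.rand(1, 1, 32, 8)         # an unseen window to score
    score = ((model(window) - window) ** 2).mean()
    print("attack!" if score > threshold else "benign", float(score))
```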
In the evolving landscape of Internet of Things (IoT) security, the need for continuous adaptation of defenses is critical. Class Incremental Learning (CIL) can provide a viable solution by enabling Machine Learning (ML) and Deep Learning (DL) models to (i) learn and adapt to new attack types (0-day attacks), (ii) retain their ability to detect known threats, (iii) safeguard computational efficiency (i.e. no full re-training). In IoT security, where novel attacks frequently emerge, CIL offers an effective tool to enhance Intrusion Detection Systems (IDS) and secure network environments. In this study, we explore how CIL approaches empower DL-based IDS in IoT networks, using the publicly-available IoT-23 dataset. Our evaluation focuses on two essential aspects of an IDS: (a) attack classification and (b) misuse detection. A thorough comparison against a fully-retrained IDS, namely starting from scratch, is carried out. Finally, we place emphasis on interpreting the predictions made by incremental IDS models through eXplainable AI (XAI) tools, offering insights into potential avenues for improvement.
Authored by Francesco Cerasuolo, Giampaolo Bovenzi, Christian Marescalco, Francesco Cirillo, Domenico Ciuonzo, Antonio Pescapè
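One common class-incremental recipe consistent with the goals above (though not necessarily the paper's exact method) is sketched below: when a new attack class appears, the classifier's output layer is expanded and fine-tuning mixes the new data with a small replay buffer of stored exemplars, so known threats are retained without full retraining. Feature sizes and data are placeholders.

```python
# Hedged sketch: replay-based class-incremental update for a DL intrusion model.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEATS = 20

class IncrementalIDS(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(FEATS, 64), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def add_class(self):
        """Grow the output layer by one unit, keeping learned weights."""
        old = self.head
        self.head = nn.Linear(64, old.out_features + 1)
        with torch.no_grad():
            self.head.weight[:old.out_features] = old.weight
            self.head.bias[:old.out_features] = old.bias

    def forward(self, x):
        return self.head(self.body(x))

def train_step(model, opt, x, y, replay_x, replay_y):
    # Mix new-class samples with replayed exemplars of known classes.
    x = torch.cat([x, replay_x]); y = torch.cat([y, replay_y])
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = IncrementalIDS(n_classes=3)                    # benign + 2 known attacks
replay_x, replay_y = torch.randn(64, FEATS), torch.randint(0, 3, (64,))

# A new (previously 0-day) attack class arrives: expand and fine-tune.
model.add_class()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)    # rebuilt after expansion
new_x = torch.randn(32, FEATS)
new_y = torch.full((32,), 3, dtype=torch.long)
print("loss:", train_step(model, opt, new_x, new_y, replay_x, replay_y))
```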
Significant progress has been made towards developing Deep Learning (DL) in Artificial Intelligence (AI) models that can make independent decisions. However, this progress has also highlighted the emergence of malicious entities that aim to manipulate the outcomes generated by these models. Due to increasing complexity, this is a concerning issue in various fields, such as medical image classification, autonomous vehicle systems, malware detection, and criminal justice. Recent research advancements have highlighted the vulnerability of these classifiers to both conventional and adversarial assaults, which may skew their results in both the training and testing stages. The Systematic Literature Review (SLR) aims to analyse traditional and adversarial attacks comprehensively. It evaluates 45 published works from 2017 to 2023 to better understand adversarial attacks, including their impact, causes, and standard mitigation approaches.
Authored by Tarek Ali, Amna Eleyan, Tarek Bejaoui
Zero-day attacks, which are defined by their abrupt appearance without any previous detection mechanisms, present a substantial obstacle in the field of network security. To address this difficulty, a wide variety of machine learning and deep learning models have been used to identify and mitigate zero-day attacks. The models have been assessed for both binary and multi-class classification scenarios. The objective of this work is to perform a thorough comparison and analysis of these models, including the impact of class imbalance and the use of SHAP (SHapley Additive exPlanations) explainability approaches. Class imbalance is a prevalent problem in cybersecurity datasets, characterized by a considerable disparity between the number of attack instances and non-attack instances. By balancing the dataset, we guarantee equitable representation of both categories, thus preventing bias towards the dominant category during the training and assessment of the model. Moreover, the application of SHAP XAI facilitates a deeper understanding of model predictions, empowering analysts to examine the fundamental features that contribute to the detection of zero-day attacks.
Authored by C.K. Sruthi, Aswathy Ravikumar, Harini Sriraman
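The class-balancing step discussed above can be illustrated as follows: class weights are computed from an imbalanced intrusion dataset so that errors on the minority (attack) class cost proportionally more during training. The dataset proportions and model below are invented; the paper does not prescribe this exact mechanism.

```python
# Hedged sketch: class-weight balancing for an imbalanced intrusion dataset.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_benign, n_attack = 9500, 500                  # heavy imbalance, as is typical
X = np.vstack([rng.normal(0, 1, (n_benign, 8)),
               rng.normal(1.5, 1, (n_attack, 8))])
y = np.array([0] * n_benign + [1] * n_attack)

weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print("class weights:", dict(zip([0, 1], weights.round(2))))

# Pass the weights to the model so errors on attacks cost proportionally more.
clf = LogisticRegression(class_weight={0: weights[0], 1: weights[1]},
                         max_iter=1000).fit(X, y)
print(f"recall on attacks: {clf.score(X[y == 1], y[y == 1]):.3f}")
```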
The advent of the Internet of Things (IoT) has ushered in the concept of smart cities – urban environments where everything from traffic lights to waste management is interconnected and digitally managed. While this transformation offers unparalleled efficiency and innovation, it opens the door to myriad cyber-attacks. Threats range from data breaches to infrastructure disruptions, with one subtle yet potent risk emerging: fake clients. These seemingly benign entities have the potential to carry out a multitude of cyber attacks, leveraging their deceptive appearance to infiltrate and compromise systems. This research presents a novel simulation model for an IoT-based smart city using the NetSim program. The city consists of several sectors, each of which contains several clients that connect to produce the best performance, comfort, and energy savings for the city. Fake clients are added to this simulation; they disguise themselves as benign clients while, in reality, exploiting this trust to carry out cyber attacks on the city. After preparing the simulation, the data flow of the system is captured, stored in a CSV file, and labeled as fake or normal. This dataset is then subjected to several machine learning experiments using MATLAB, each of which shows good detection results. The highest detection accuracy was achieved in the third experiment using the k-nearest neighbors classifier, at 98.77\%. In conclusion, the research unveils a robust prevention model.
Authored by Mahmoud Aljamal, Ala Mughaid, Rabee Alquran, Muder Almiani, Shadi bi
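A hedged Python analogue of the third experiment described above (the paper itself uses MATLAB): a k-nearest-neighbours classifier separates fake clients from normal ones using features captured from the simulated traffic. The column names and synthetic data stand in for the NetSim capture.

```python
# Hedged sketch: KNN classification of fake vs. normal clients from traffic
# features. All columns and values are invented placeholders.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n = 4000
df = pd.DataFrame({
    "pkt_rate":  np.r_[rng.normal(50, 5, n // 2), rng.normal(80, 15, n // 2)],
    "avg_size":  np.r_[rng.normal(512, 40, n // 2), rng.normal(300, 90, n // 2)],
    "conn_fail": np.r_[rng.poisson(1, n // 2), rng.poisson(6, n // 2)],
    "label":     [0] * (n // 2) + [1] * (n // 2),     # 0 = normal, 1 = fake
})

X = StandardScaler().fit_transform(df.drop(columns="label"))
X_tr, X_te, y_tr, y_te = train_test_split(X, df["label"], test_size=0.3,
                                          random_state=8, stratify=df["label"])

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"test accuracy: {knn.score(X_te, y_te):.4f}")
```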
Developing network intrusion detection systems (IDS) presents significant challenges due to the evolving nature of threats and the diverse range of network applications. Existing IDSs often struggle to detect dynamic attack patterns and covert attacks, leading to misidentified network vulnerabilities and degraded system performance. These requirements must be met via dependable, scalable, effective, and adaptable IDS designs. Our IDS can recognise and classify complex network threats by combining the Deep Q-Network (DQN) algorithm with distributed agents and attention techniques. Our proposed distributed multi-agent IDS architecture has many advantages for guiding an all-encompassing security approach, including scalability, fault tolerance, and multi-view analysis. We conducted experiments using industry-standard datasets, including NSL-KDD and CICIDS2017, to determine how well our model performed. The results show that our IDS outperforms others in terms of accuracy, precision, recall, F1-score, and false-positive rate. Additionally, we evaluated our model's resistance to black-box adversarial attacks, which are commonly used to take advantage of flaws in machine learning. Under these difficult circumstances, our model performed quite well. We used a denoising autoencoder (DAE) for further model strengthening to improve the IDS's robustness. Lastly, we evaluated the effectiveness of our zero-day defenses, which are designed to mitigate attacks exploiting unknown vulnerabilities. Through our research, we have developed an advanced IDS solution that addresses the limitations of traditional approaches. Our model demonstrates superior performance, robustness against adversarial attacks, and effective zero-day defenses. By combining deep reinforcement learning, distributed agents, attention techniques, and other enhancements, we provide a reliable and comprehensive solution for network security.
Authored by Malika Malik, Kamaljit Saini
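A minimal sketch of the DQN ingredient named above, not the distributed multi-agent system itself: a Q-network scores the actions {allow, alert} for a traffic-feature state and is updated with the standard temporal-difference target against a frozen target network. Rewards, feature sizes, and the transition batch are invented.

```python
# Hedged sketch: one DQN temporal-difference update for an IDS-style agent.
import copy
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 20, 2, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
target_net = copy.deepcopy(q_net)          # periodically synced frozen copy
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(state, action, reward, next_state, done):
    """One TD update: pull Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    q_sa = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + GAMMA * target_net(next_state).max(dim=1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# A fake transition batch: e.g. +1 reward for correctly alerting on an intrusion.
batch = 32
state = torch.randn(batch, STATE_DIM)
action = torch.randint(0, N_ACTIONS, (batch,))
reward = torch.rand(batch)
next_state = torch.randn(batch, STATE_DIM)
done = torch.zeros(batch)
print("loss:", dqn_update(state, action, reward, next_state, done))
```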
Computer networks are increasingly vulnerable to security disruptions such as congestion, malicious access, and attacks. Intrusion Detection Systems (IDS) play a crucial role in identifying and mitigating these threats. However, many IDSs have limitations, including reduced performance in terms of scalability, configurability, and fault tolerance. In this context, we aim to enhance intrusion detection through a cooperative approach. To achieve this, we propose the implementation of ICIDS-BB (Intelligent Cooperative Intrusion Detection System based on Blockchain). This system leverages Blockchain technology to secure data exchange among collaborative components. Internally, we employ two machine learning algorithms, the decision tree and random forest, to improve attack detection.
Authored by Ferdaws Bessaad, Farah Ktata, Khalil Ben Kalboussi
Container-based virtualization has gained momentum over the past few years thanks to its lightweight nature and support for agility. However, its appealing features come at the price of a reduced isolation level compared to the traditional host-based virtualization techniques, exposing workloads to various faults, such as co-residency attacks like container escape. In this work, we propose to leverage the automated management capabilities of containerized environments to derive a Fault and Intrusion Tolerance (FIT) framework based on error detection-recovery and fault treatment. Namely, we aim at deriving a specification-based error detection mechanism at the host level to systematically and formally capture security state errors indicating breaches potentially caused by malicious containers. Although the paper focuses on security side use cases, results are logically extendable to accidental faults. Our aim is to immunize the target environments against accidental and malicious faults and preserve their core dependability and security properties.
Authored by Taous Madi, Paulo Esteves-Verissimo
In the face of a large number of network attacks, an intrusion detection system can issue early warnings indicating the emergence of network attacks. In order to improve the ability of traditional machine learning network intrusion detection models to identify network attack behavior and to improve detection accuracy, a convolutional neural network is used to construct the intrusion detection model, which has a better ability to solve complex problems and better algorithmic adaptability. To solve problems such as the dimension explosion caused by the input data, the whitened PCA algorithm is used to extract data features and reduce the data dimensions. To address overfitting, a problem common to convolutional neural networks in intrusion detection, Dropout layers are added before and after the fully connected layer of the CNN, and Sigmoid is selected as the intrusion classification prediction function. This reduces overfitting, improves the robustness of the intrusion detection model, and enhances the fault tolerance and generalization ability of the model, thereby improving the accuracy of the intrusion detection model. The effectiveness of the proposed method in intrusion detection is verified through comparison and analysis of numerical examples.
Authored by Peiqing Zhang, Guangke Tian, Haiying Dong
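A hedged sketch of the pipeline described above: PCA with whitening reduces the flow features, the reduced vector is reshaped into a small "image" for a CNN whose fully connected part is wrapped in Dropout layers, and a Sigmoid output gives the intrusion probability. Shapes and data are placeholders for a KDD-style dataset, and this is not the authors' code.

```python
# Hedged sketch: whitened PCA -> small "image" -> CNN with Dropout + Sigmoid.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
X_raw = rng.standard_normal((1024, 120))            # raw flow features
y = torch.from_numpy(rng.integers(0, 2, 1024)).float()

# Whitened PCA: decorrelate and rescale features before the CNN.
X_white = PCA(n_components=49, whiten=True, random_state=9).fit_transform(X_raw)
X_img = torch.from_numpy(X_white.reshape(-1, 1, 7, 7)).float()

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Dropout(0.5),                                 # dropout before the FC layer
    nn.Linear(32 * 7 * 7, 64), nn.ReLU(),
    nn.Dropout(0.5),                                 # dropout after the FC layer
    nn.Linear(64, 1), nn.Sigmoid())                  # intrusion probability

opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for _ in range(3):                                   # short loop for the sketch
    pred = cnn(X_img).squeeze(1)
    loss = loss_fn(pred, y)
    opt.zero_grad(); loss.backward(); opt.step()
print("training loss:", round(loss.item(), 4))
```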