According to Shakeri, “the computational complexity of solving the optimal multiple-fault isolation problem is super exponential.” Most processes and procedures nonetheless assume that only one fault is present at any given time, and many algorithms are designed for sequential diagnostics. With the growth of cloud computing and multicore processors and the ubiquity of sensors, the multiple-fault diagnosis problem has grown even larger. The research cited here, from the first half of 2014, examines detection methods in a variety of media.
- M. El-Koujok, M. Benammar, N. Meskin, M. Al-Naemi, R. Langari, “Multiple Sensor Fault Diagnosis By Evolving Data-Driven Approach,” Information Sciences: an International Journal, Volume 259, February, 2014, Pages 346-358. doi>10.1016/j.ins.2013.04.012 Sensors are indispensable components of modern plants and processes, and their reliability is vital to ensure reliable and safe operation of complex systems. In this paper, the problem of design and development of a data-driven Multiple Sensor Fault Detection and Isolation (MSFDI) algorithm for nonlinear processes is investigated. The proposed scheme is based on an evolving multi-Takagi Sugeno framework in which each sensor output is estimated using a model derived from the available input/output measurement data. Our proposed MSFDI algorithm is applied to a Continuous-Flow Stirred-Tank Reactor (CFSTR). Simulation results demonstrate and validate the performance capabilities of our proposed MSFDI algorithm.
Keywords: Data-driven approach, Nonlinear system, Sensor fault diagnosis (ID#:14-2206)
URL: http://dl.acm.org/citation.cfm?id=2564929.2565018&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.ins.2013.04.012
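The core of any data-driven MSFDI scheme is estimating each sensor from the others and flagging large residuals. The minimal sketch below illustrates that idea with a plain least-squares estimator standing in for the paper's evolving multi-Takagi-Sugeno models; the synthetic signals, the 0.5 bias fault, and the 6-sigma threshold are all illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plant with three correlated sensors; sensor 2 will develop a bias fault.
t = np.linspace(0, 10, 500)
s0 = np.sin(t) + 0.01 * rng.standard_normal(t.size)
s1 = np.cos(t) + 0.01 * rng.standard_normal(t.size)
s2 = 0.5 * s0 - 0.3 * s1 + 0.01 * rng.standard_normal(t.size)

# Fit an estimator for sensor 2 from sensors 0 and 1 on fault-free data.
X = np.column_stack([s0, s1])
w, *_ = np.linalg.lstsq(X[:250], s2[:250], rcond=None)

# Inject a bias fault into sensor 2 during the second half of the run.
s2_meas = s2.copy()
s2_meas[250:] += 0.5

# Residual = measured - estimated; flag samples beyond a 6-sigma threshold.
residual = s2_meas - X @ w
sigma = residual[:250].std()
fault_flags = np.abs(residual) > 6 * sigma

print("fault flagged in 2nd half:", fault_flags[250:].all())
print("false alarms in 1st half:", fault_flags[:250].any())
```

Isolation in the full MSFDI problem repeats this per sensor, so a faulty channel can be identified even when several residuals react.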
- Yu-Lin He, Ran Wang, Sam Kwong, Xi-Zhao Wang, “Bayesian Classifiers Based On Probability Density Estimation And Their Applications To Simultaneous Fault Diagnosis,” Information Sciences: an International Journal, Volume 259, February, 2014, Pages 252-268. doi>10.1016/j.ins.2013.09.003 A key characteristic of simultaneous fault diagnosis is that the features extracted from the original patterns are strongly dependent. This paper proposes a new model of Bayesian classifier, which removes the fundamental assumption of naive Bayesian, i.e., the independence among features. In our model, the optimal bandwidth selection is applied to estimate the class-conditional probability density function (p.d.f.), which is the essential part of joint p.d.f. estimation. Three well-known indices, i.e., classification accuracy, area under ROC curve, and probability mean square error, are used to measure the performance of our model in simultaneous fault diagnosis. Simulations show that our model is significantly superior to the traditional ones when the dependence exists among features.
Keywords: Bayesian classification, Dependent feature, Joint probability density estimation, Optimal bandwidth, Simultaneous fault diagnosis, Single fault (ID#:14-2207)
URL: http://dl.acm.org/citation.cfm?id=2564929.2564984&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.ins.2013.09.003
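A minimal sketch of the paper's central idea: estimate class-conditional joint densities instead of assuming feature independence. Here SciPy's `gaussian_kde` with its default bandwidth stands in for the paper's optimal bandwidth selection, and the two fault classes are synthetic data with strongly correlated features:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Two fault classes with strongly dependent features (violating naive Bayes).
n = 300
cov = [[1.0, 0.8], [0.8, 1.0]]
X0 = rng.multivariate_normal([0, 0], cov, n)   # e.g. single fault
X1 = rng.multivariate_normal([2, 2], cov, n)   # e.g. simultaneous faults

# Class-conditional joint p.d.f.s via kernel density estimation.
# (gaussian_kde uses Scott's rule for bandwidth; the paper optimizes it.)
kde0 = gaussian_kde(X0.T)
kde1 = gaussian_kde(X1.T)

def classify(x, prior0=0.5, prior1=0.5):
    """Bayes rule with KDE class-conditional densities, no independence assumption."""
    p0 = prior0 * kde0(x)[0]
    p1 = prior1 * kde1(x)[0]
    return 0 if p0 >= p1 else 1

print(classify([0.0, 0.0]))   # a point near the class-0 mean
print(classify([2.0, 2.0]))   # a point near the class-1 mean
```

A naive Bayes classifier would estimate each feature's density separately and multiply them, which misrepresents the correlated classes above; the joint KDE keeps the dependence.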
- Marcin Perzyk, Andrzej Kochanski, Jacek Kozlowski, Artur Soroczynski, Robert Biernacki, “Comparison of Data Mining Tools For Significance Analysis Of Process Parameters In Applications To Process Fault Diagnosis,” Information Sciences: an International Journal, Volume 259, February, 2014, Pages 380-392. doi>10.1016/j.ins.2013.10.019 This paper presents an evaluation of various methodologies used to determine relative significances of input variables in data-driven models. Significance analysis applied to manufacturing process parameters can be a useful tool in fault diagnosis for various types of manufacturing processes. It can also be applied to building models that are used in process control. The relative significances of input variables can be determined by various data mining methods, including relatively simple statistical procedures as well as more advanced machine learning systems. Several methodologies suitable for carrying out classification tasks which are characteristic of fault diagnosis were evaluated and compared from the viewpoint of their accuracy, robustness of results and applicability. Two types of testing data were used: synthetic data with assumed dependencies and real data obtained from the foundry industry. The simple statistical method based on contingency tables revealed the best overall performance, whereas advanced machine learning models, such as ANNs and SVMs, appeared to be of less value.
Keywords: Data mining, Fault diagnosis, Input variable significance, Manufacturing industries (ID#:14-2208)
URL: http://dl.acm.org/citation.cfm?id=2564929.2564988&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.ins.2013.10.019
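The contingency-table approach the authors found most effective can be sketched as follows; the discretized process parameters and the defect model are illustrative assumptions, with the chi-square statistic of each parameter/outcome table serving as the significance measure:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 2000

# Synthetic process data: 'temperature' drives the defect, 'humidity' does not.
temperature = rng.integers(0, 3, n)        # discretized into 3 levels
humidity = rng.integers(0, 3, n)
defect = (temperature == 2) & (rng.random(n) < 0.6)
defect |= rng.random(n) < 0.05             # background defect rate

def significance(param, outcome):
    """Chi-square statistic of the parameter/outcome contingency table."""
    table = np.zeros((3, 2))
    for level, out in zip(param, outcome):
        table[level, int(out)] += 1
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

chi2_T, p_T = significance(temperature, defect)
chi2_H, p_H = significance(humidity, defect)
print(f"temperature: chi2={chi2_T:.1f} (p={p_T:.2g})")
print(f"humidity:    chi2={chi2_H:.1f} (p={p_H:.2g})")
```

Ranking parameters by the statistic (or by p-value) gives the relative significances used for fault diagnosis; the truly influential parameter dominates.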
- Suming Chen, Arthur Choi, Adnan Darwiche, “Algorithms and Applications For The Same-Decision Probability,” Journal of Artificial Intelligence Research, Volume 49 Issue 1, January 2014, Pages 601-633. (doi not provided) When making decisions under uncertainty, the optimal choices are often difficult to discern, especially if not enough information has been gathered. Two key questions in this regard relate to whether one should stop the information gathering process and commit to a decision (stopping criterion), and if not, what information to gather next (selection criterion). In this paper, we show that the recently introduced notion, Same-Decision Probability (SDP), can be useful as both a stopping and a selection criterion, as it can provide additional insight and allow for robust decision making in a variety of scenarios. This query has been shown to be highly intractable, being PP^PP-complete, and is exemplary of a class of queries which correspond to the computation of certain expectations. We propose the first exact algorithm for computing the SDP, and demonstrate its effectiveness on several real and synthetic networks. Finally, we present new complexity results, such as the complexity of computing the SDP on models with a Naive Bayes structure. Additionally, we prove that computing the non-myopic value of information is complete for the same complexity class as computing the SDP.
Keywords: (not provided) (ID#:14-2209)
URL: http://dl.acm.org/citation.cfm?id=2655713.2655730
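A brute-force illustration of the Same-Decision Probability on a tiny naive Bayes model (the structure for which the paper reports complexity results): given current evidence and a threshold-based decision, the SDP is the probability that observing the remaining variables would leave the decision unchanged. The prior, sensor likelihoods, and 0.5 threshold below are illustrative assumptions:

```python
from itertools import product

# Tiny naive Bayes fault model: hypothesis H (fault present) and three
# binary sensors F1..F3, conditionally independent given H.
p_fault = 0.3
p_obs = {1: 0.8, 0: 0.2}          # P(Fi = 1 | H = h)

def posterior(evidence):
    """P(H = 1 | evidence), where evidence maps sensor index -> reading."""
    like1, like0 = p_fault, 1 - p_fault
    for v in evidence.values():
        like1 *= p_obs[1] if v == 1 else 1 - p_obs[1]
        like0 *= p_obs[0] if v == 1 else 1 - p_obs[0]
    return like1 / (like1 + like0)

threshold = 0.5
evidence = {1: 1}                  # sensor F1 has fired
decision = posterior(evidence) >= threshold

# SDP: probability that also reading F2 and F3 leaves the decision unchanged.
sdp = 0.0
q = posterior(evidence)
for f2, f3 in product((0, 1), repeat=2):
    # P(f2, f3 | evidence), marginalizing over H (sensors independent given H)
    p_h1 = (p_obs[1] if f2 else 1 - p_obs[1]) * (p_obs[1] if f3 else 1 - p_obs[1])
    p_h0 = (p_obs[0] if f2 else 1 - p_obs[0]) * (p_obs[0] if f3 else 1 - p_obs[0])
    weight = q * p_h1 + (1 - q) * p_h0
    if (posterior({**evidence, 2: f2, 3: f3}) >= threshold) == decision:
        sdp += weight

print(f"decision: fault={decision}, SDP={sdp:.3f}")
```

Here only the doubly negative reading (F2 = F3 = 0) would flip the decision, so the SDP is roughly 0.74; a low SDP would argue for gathering more evidence before committing. Enumeration scales exponentially, which is exactly why the paper's exact algorithm matters.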
- Nithiyanantham Janakiraman, Palanisamy Nirmal Kumar, “Multi-objective Module Partitioning Design For Dynamic And Partial Reconfigurable System-On-Chip Using Genetic Algorithm,” Journal of Systems Architecture: the EUROMICRO Journal, Volume 60 Issue 1, January, 2014, Pages 119-139. doi>10.1016/j.sysarc.2013.10.001 This paper proposes a novel architecture for module partitioning problems in the process of dynamic and partial reconfigurable computing in VLSI design automation. The partitioning problem is modeled as a hypergraph; because it is NP-complete, it can be treated by a probabilistic algorithm such as a Markov chain through transition probability matrices. This proposed technique has two levels of implementation methodology. In the first level, the combination of parallel processing of design elements and efficient pipelining techniques are used. The second level is based on the genetic algorithm optimization system architecture. This proposed methodology uses the hardware/software co-design and co-verification techniques. This architecture was verified by implementation within the MOLEN reconfigurable processor and tested on a Xilinx Virtex-5 based development board. This proposed multi-objective module partitioning design was experimentally evaluated using an ISPD'98 circuit partitioning benchmark suite. The efficiency and throughput were compared with that of the hMETIS recursive bisection partitioning approach. The results indicate that the proposed method can improve throughput and efficiency up to 39 times with only a small amount of increased design space. The proposed architecture style is sketched out and concisely discussed in this manuscript, and the existing results are compared and analyzed. (ID#:14-2210)
URL: http://dl.acm.org/citation.cfm?id=2566270.2566391&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.sysarc.2013.10.001
- Cook, A; Wunderlich, H.-J., "Diagnosis of Multiple Faults With Highly Compacted Test Responses," Test Symposium (ETS), 2014 19th IEEE European , vol., no., pp.1,6, 26-30 May 2014. doi: 10.1109/ETS.2014.6847796 Defects cluster, and the probability of a multiple fault is significantly higher than just the product of the single fault probabilities. While this observation is beneficial for high yield, it complicates fault diagnosis. Multiple faults occur especially often during process learning, yield ramp-up and field return analysis. In this paper, a logic diagnosis algorithm is presented which is robust against multiple faults and which is able to diagnose multiple faults with high accuracy even on compressed test responses as they are produced in embedded test and built-in self-test. The developed solution takes advantage of the linear properties of a MISR compactor to identify a set of faults likely to produce the observed faulty signatures. Experimental results show an improvement in accuracy of up to 22% over traditional logic diagnosis solutions suitable for comparable compaction ratios.
Keywords: built-in self test; fault diagnosis; integrated circuit testing; integrated circuit yield; probability; MISR compactor; built-in self-test; compacted test responses; compressed test responses; defects cluster; embedded test; faulty signatures; field return analysis; linear properties; logic diagnosis; multiple fault diagnosis; multiple fault probability; process learning; yield ramp-up; Accuracy; Built-in self-test; Circuit faults; Compaction; Equations; Fault diagnosis; Mathematical model; Diagnosis; Multiple Faults; Response Compaction (ID#:14-2211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847796&isnumber=6847779
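The linearity the diagnosis algorithm exploits is easy to demonstrate: a MISR is built entirely from shifts and XORs, so with a zeroed initial state the signature of a faulty response differs from the fault-free signature by exactly the signature of the error pattern alone. A sketch with an assumed 8-bit register and feedback polynomial (the response and error words are made up for illustration):

```python
def misr_signature(words, width=8, poly=0x1D):
    """Multiple-input signature register: fold a sequence of response words
    into one signature. Every operation is a shift or XOR, hence the
    signature is a linear function (over GF(2)) of the input sequence."""
    mask = (1 << width) - 1
    s = 0
    for w in words:
        msb = (s >> (width - 1)) & 1
        s = ((s << 1) & mask) ^ (poly if msb else 0) ^ (w & mask)
    return s

good = [0x3A, 0x7F, 0x01, 0xC4, 0x55]     # fault-free test responses
error = [0x00, 0x08, 0x00, 0x00, 0x20]    # error pattern from a multiple fault
faulty = [g ^ e for g, e in zip(good, error)]

sig_good = misr_signature(good)
sig_faulty = misr_signature(faulty)
sig_error = misr_signature(error)

# Linearity: the observed signature difference equals the signature of the
# error pattern, so candidate (multiple) faults can be matched by comparing
# their simulated error signatures against sig_good ^ sig_faulty.
print(hex(sig_good ^ sig_faulty), "==", hex(sig_error))
```

This is what lets the paper reason about fault candidates directly from compacted signatures instead of full response data.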
- Kundu, S.; Jha, A; Chattopadhyay, S.; Sengupta, I; Kapur, R., "Framework for Multiple-Fault Diagnosis Based on Multiple Fault Simulation Using Particle Swarm Optimization," Very Large Scale Integration (VLSI) Systems, IEEE Transactions on , vol.22, no.3, pp.696,700, March 2014. doi: 10.1109/TVLSI.2013.2249542 This brief proposes a framework to analyze multiple faults based on multiple fault simulation in a particle swarm optimization environment. Experimentation shows that up to ten faults can be diagnosed in a reasonable time. However, the scheme does not put any restriction on the number of simultaneous faults.
Keywords: fault simulation; integrated circuit testing; particle swarm optimisation; multiple fault diagnosis; multiple fault simulation; particle swarm optimization; Automatic test pattern generation (ATPG); effect-cause analysis; fault diagnosis; multiple fault injection; particle swarm optimization (PSO) (ID#:14-2212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488883&isnumber=6746074
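A toy illustration of the brief's approach, diagnosis by multiple fault simulation inside a binary particle swarm optimization loop: particles are bit-vectors over candidate stuck-at faults, and fitness is the mismatch between the simulated and observed responses. The three-gate circuit, candidate list, and PSO constants below are illustrative assumptions, not the authors' setup:

```python
import math, random

random.seed(7)

# Toy combinational circuit: n1 = a AND b, n2 = b OR c, out = n1 XOR n2.
def simulate(inputs, faults=()):
    """Evaluate the circuit, forcing any (net, value) stuck-at faults."""
    a, b, c = inputs
    net = {"n1": a & b, "n2": b | c}
    for name, val in faults:          # multiple simultaneous faults allowed
        net[name] = val
    return net["n1"] ^ net["n2"]

patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
candidates = [("n1", 0), ("n1", 1), ("n2", 0), ("n2", 1)]

# Observed outputs of a defective chip carrying two simultaneous faults.
true_faults = [("n1", 0), ("n2", 1)]
observed = [simulate(p, true_faults) for p in patterns]

def fitness(bits):
    """Count of test patterns whose simulated output mismatches the observation."""
    faults = [c for c, bit in zip(candidates, bits) if bit]
    return sum(simulate(p, faults) != o for p, o in zip(patterns, observed))

# Standard binary PSO: a sigmoid of the velocity gives each bit's
# probability of being set when the position is resampled.
n_particles, n_iters, dim = 20, 50, len(candidates)
pos = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest, pbest_fit = [p[:] for p in pos], [fitness(p) for p in pos]
g = min(range(n_particles), key=lambda i: pbest_fit[i])
gbest, gbest_fit = pbest[g][:], pbest_fit[g]

for _ in range(n_iters):
    for i in range(n_particles):
        for d in range(dim):
            vel[i][d] += 2 * random.random() * (pbest[i][d] - pos[i][d]) \
                       + 2 * random.random() * (gbest[d] - pos[i][d])
            vel[i][d] = max(-4.0, min(4.0, vel[i][d]))
            pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
        f = fitness(pos[i])
        if f < pbest_fit[i]:
            pbest[i], pbest_fit[i] = pos[i][:], f
            if f < gbest_fit:
                gbest, gbest_fit = pos[i][:], f

diagnosed = [c for c, bit in zip(candidates, gbest) if bit]
print("diagnosed fault set:", diagnosed, "mismatches:", gbest_fit)
```

Note the swarm may converge to an equivalent fault set rather than the injected one; as the brief observes, nothing restricts the number of simultaneous faults a particle may hypothesize.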
- Cheng-Hung Wu; Kuen-Jong Lee; Wei-Cheng Lien, "An Efficient Diagnosis Method To Deal With Multiple Fault-Pairs Simultaneously Using A Single Circuit Model," VLSI Test Symposium (VTS), 2014 IEEE 32nd , vol., no., pp.1,6, 13-17 April 2014. doi: 10.1109/VTS.2014.6818790 This paper proposes an efficient diagnosis-aware ATPG method that can quickly identify equivalent-fault pairs and generate diagnosis patterns for nonequivalent-fault pairs, where an (non)equivalent-fault pair contains two stuck-at faults that are (not) equivalent. A novel fault injection method is developed which allows one to embed all fault pairs undistinguished by the conventional test patterns into a circuit model with only one copy of the original circuit. Each pair of faults to be processed is transformed to a stuck-at fault and all fault pairs can be dealt with by invoking an ordinary ATPG tool for stuck-at faults just once. High efficiency of diagnosis pattern generation can be achieved due to 1) the circuit to be processed is read only once, 2) the data structure for ATPG process is constructed only once, 3) multiple fault pairs can be processed at a time, and 4) only one copy of the original circuit is needed. Experimental results show that this is the first reported work that can achieve 100% diagnosis resolutions for all ISCAS'89 and IWLS'05 benchmark circuits using an ordinary ATPG tool. Furthermore, we also find that the total number of patterns required to deal with all fault pairs in our method is smaller than that of the current state-of-the-art work.
Keywords: automatic test pattern generation; fault diagnosis; ISCAS'89 benchmark circuit; IWLS'05 benchmark circuit; automatic test pattern generation; diagnosis pattern generation; diagnosis-aware ATPG method; fault injection; fault pairs diagnosis; nonequivalent-fault pairs; single circuit model; stuck-at faults; Automatic test pattern generation; Central Processing Unit; Circuit faults; Fault diagnosis; Integrated circuit modeling; Logic gates; Multiplexing; Fault diagnosis; diagnosis pattern generation; multi-pair diagnosis (ID#:14-2213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818790&isnumber=6818727
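The underlying notion of (non)equivalent-fault pairs can be illustrated by brute force on a toy 2:1 multiplexer netlist: two stuck-at faults are equivalent exactly when no input pattern distinguishes their outputs. This sketch enumerates patterns directly, a stand-in for the paper's far more scalable single-circuit ATPG transformation; the netlist and fault sites are assumptions:

```python
from itertools import product

# Toy 2:1 mux: out = (a AND b) OR (NOT a AND c).
def simulate(a, b, c, fault=None):
    """Evaluate the mux with at most one (net, value) stuck-at fault forced."""
    nets = {"a": a, "b": b, "c": c}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]
    nets["n1"] = nets["a"] & nets["b"]
    nets["n2"] = (1 - nets["a"]) & nets["c"]
    if fault and fault[0] in ("n1", "n2"):
        nets[fault[0]] = fault[1]
    out = nets["n1"] | nets["n2"]
    if fault and fault[0] == "out":
        out = fault[1]
    return out

def distinguishing_pattern(f1, f2):
    """First input pattern on which the two single stuck-at faults produce
    different outputs, or None if the pair is equivalent."""
    for p in product((0, 1), repeat=3):
        if simulate(*p, fault=f1) != simulate(*p, fault=f2):
            return p
    return None

print(distinguishing_pattern(("n1", 0), ("out", 0)))  # nonequivalent pair
print(distinguishing_pattern(("c", 0), ("n2", 0)))    # equivalent pair -> None
```

In this netlist, c stuck-at-0 and n2 stuck-at-0 are equivalent (n2 masks c entirely), so no diagnosis pattern can separate them; that is precisely the distinction the diagnosis-aware ATPG must identify.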
- Zhao, Chunhui, "Fault subspace selection and analysis of relative changes based reconstruction modeling for multi-fault diagnosis," Control and Decision Conference (2014 CCDC), The 26th Chinese , vol., no., pp.235,240, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852151 Online fault diagnosis has been a crucial task for industrial processes. Reconstruction-based fault diagnosis has been drawing special attention as a good alternative to the traditional contribution plot. It identifies the fault cause by finding the specific fault subspace that can well eliminate alarming signals from a set of alternatives prepared from historical fault data. However, in practice, the abnormality may result from the joint effects of multiple faults, which thus cannot be well corrected by a single fault subspace archived in the historical fault library. In the present work, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple fault causes jointly contribute to the abnormal process behaviors. First, fault subspaces are extracted based on historical fault data in two different monitoring subspaces, where analysis of relative changes is used to enclose the major fault effects responsible for different alarming monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature, sorting and selecting the informative fault subspaces that are most likely to be responsible for the concerned abnormalities. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces; it represents the joint effects from multiple faults and works as the final reconstruction model for online fault diagnosis. Theoretical support is framed and the related statistical characteristics are analyzed. Its feasibility and performance are illustrated with simulated multiple faults using data from the Tennessee Eastman (TE) benchmark process.
Keywords: Analytical models; Data models; Fault diagnosis; Joints; Libraries; Monitoring; Principal component analysis; analysis of relative changes; fault subspace selection; joint fault effects; multi-fault diagnosis; reconstruction modeling (ID#:14-2214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852151&isnumber=6852105
- Liu, K.; Ma, Q.; Gong, W.; Miao, X.; Liu, Y., "Self-Diagnosis for Large Scale Wireless Sensor Networks," Wireless Communications, IEEE Transactions on, vol. PP, no.99, pp.1, 1, July 2014. doi: 10.1109/TWC.2014.2336653 Existing approaches to diagnosing sensor networks are generally sink-based, relying on actively pulling state information from sensor nodes so as to conduct centralized analysis. First, sink-based tools incur huge communication overhead on traffic-sensitive sensor networks. Second, due to unreliable wireless communications, the sink often obtains incomplete and suspicious information, leading to inaccurate judgments. Even worse, it is always more difficult to obtain state information from problematic or critical regions. To address the above issues, we present a novel self-diagnosis approach, which encourages each single sensor to join the fault decision process. We design a series of fault detectors through which multiple nodes can cooperate with each other in a diagnosis task. Fault detectors encode the diagnosis process as state transitions. Each sensor can participate in the diagnosis by transiting the detector’s current state to a new one based on local evidence and then passing the detector to other nodes. Having sufficient evidence, the fault detector reaches the Accept state and outputs the final diagnosis report. We examine the performance of our self-diagnosis tool called TinyD2 on a 100-node indoor testbed and conduct field studies in the GreenOrbs system, an operational sensor network with 330 outdoor nodes.
Keywords: Debugging; Detectors; Fault detection; Fault diagnosis; Measurement; Wireless communication; Wireless sensor networks (ID#:14-2215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850017&isnumber=4656680
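A minimal sketch of the detector-passing idea: the fault detector is a small state object that any node can advance using purely local evidence, reaching Accept once enough corroborating reports accumulate. The evidence threshold, node IDs, and link observations below are illustrative assumptions, not TinyD2's actual detector definitions:

```python
# Distributed fault detector sketch: the detector travels from node to node;
# each node contributes only its local observation of the suspect.
REQUIRED_EVIDENCE = 3

def make_detector(suspect):
    return {"suspect": suspect, "state": "Init", "evidence": 0, "visited": []}

def node_step(detector, node_id, local_link_ok):
    """One hop of the diagnosis: a neighbor of the suspect node advances the
    detector's state based on whether its link to the suspect works."""
    detector["visited"].append(node_id)
    if not local_link_ok:
        detector["evidence"] += 1
        detector["state"] = "Suspected"
    if detector["evidence"] >= REQUIRED_EVIDENCE:
        detector["state"] = "Accept"      # enough evidence: report the fault
    return detector

# Node 17 has crashed; neighbors 3, 5, 8, 9 each observe their link to it.
det = make_detector(suspect=17)
for neighbor, link_ok in [(3, False), (5, True), (8, False), (9, False)]:
    det = node_step(det, neighbor, link_ok)

print(det["state"], "after", len(det["visited"]), "hops")
```

Because each hop uses only local evidence, no node ever ships raw state to the sink, which is the communication saving the paper targets.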
- Kannan, S.; Karimi, N.; Karri, R.; Sinanoglu, O., "Detection, Diagnosis, And Repair Of Faults In Memristor-Based Memories," VLSI Test Symposium (VTS), 2014 IEEE 32nd , vol., no., pp.1,6, 13-17 April 2014. doi: 10.1109/VTS.2014.6818762 Memristors are an attractive option for use in future memory architectures due to their non-volatility, high density and low power operation. Notwithstanding these advantages, memristors and memristor-based memories are prone to high defect densities due to the non-deterministic nature of nanoscale fabrication. The typical approach to fault detection and diagnosis in memories entails testing one memory cell at a time. This is time consuming and does not scale for the dense, memristor-based memories. In this paper, we integrate solutions for detecting and locating faults in memristors, and ensure post-silicon recovery from memristor failures. We propose a hybrid diagnosis scheme that exploits sneak-paths inherent in crossbar memories, and uses March testing to test and diagnose multiple memory cells simultaneously, thereby reducing test time. We also provide a repair mechanism that prevents faults in the memory from being activated. The proposed schemes enable and leverage sneak paths during fault detection and diagnosis modes, while still maintaining a sneak-path free crossbar during normal operation. The proposed hybrid scheme reduces fault detection and diagnosis time by ~44%, compared to traditional March tests, and repairs the faulty cell with minimal overhead.
Keywords: fault diagnosis; memristors; random-access storage; March testing; crossbar memories; fault detection; fault diagnosis; faulty cell repairs; future memory architectures; high defect densities; hybrid diagnosis scheme; memristor failures; memristor-based memories; multiple memory cells testing; nanoscale fabrication; post-silicon recovery; sneak-path free crossbar; test time; Circuit faults; Fault detection; Integrated circuits; Maintenance engineering; Memristors; Resistance; Testing; Memory; Memristor; Sneak-paths; Testing (ID#:14-2216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818762&isnumber=6818727
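The March testing the hybrid scheme builds on is easy to sketch in software. The word-oriented memory model and the single stuck-at-0 cell below are illustrative assumptions; the paper's actual contribution, sneak-path-based parallel testing of crossbar cells, is not modeled here:

```python
# March C- sketch over a simple memory model with one stuck-at-0 cell.
# March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}
N = 16
FAULTY_ADDR, STUCK_VALUE = 9, 0

class Memory:
    def __init__(self):
        self.cells = [0] * N
    def write(self, addr, v):
        self.cells[addr] = STUCK_VALUE if addr == FAULTY_ADDR else v
    def read(self, addr):
        return self.cells[addr]

def march_c_minus(mem):
    """Return addresses whose read-back violates the expected March C- value."""
    failures = set()
    up, down = range(N), range(N - 1, -1, -1)
    elements = [                     # (address order, expected read, write value)
        (up, None, 0), (up, 0, 1), (up, 1, 0),
        (down, 0, 1), (down, 1, 0), (down, 0, None),
    ]
    for order, expect, write in elements:
        for addr in order:
            if expect is not None and mem.read(addr) != expect:
                failures.add(addr)
            if write is not None:
                mem.write(addr, write)
    return failures

print("faulty addresses:", march_c_minus(Memory()))
```

The stuck-at-0 cell passes the r0 elements but fails every r1, so it is caught on the first read-1 pass; the paper's scheme reaches the same coverage faster by exercising many crossbar cells per operation through sneak paths.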
- Xin Xia; Yang Feng; Lo, D.; Zhenyu Chen; Xinyu Wang, "Towards More Accurate Multi-Label Software Behavior Learning," Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on , vol., no., pp.134,143, 3-6 Feb. 2014. doi: 10.1109/CSMR-WCRE.2014.6747163 In a modern software system, when a program fails, a crash report which contains an execution trace would be sent to the software vendor for diagnosis. A crash report which corresponds to a failure could be caused by multiple types of faults simultaneously. Many large companies such as Baidu organize a team to analyze these failures, and classify them into multiple labels (i.e., multiple types of faults). However, it would be time-consuming and difficult for developers to manually analyze these failures and come up with appropriate fault labels. In this paper, we automatically classify a failure into multiple types of faults, using a composite algorithm named MLL-GA, which combines various multi-label learning algorithms by leveraging a genetic algorithm (GA). To evaluate the effectiveness of MLL-GA, we perform experiments on 6 open source programs and show that MLL-GA could achieve average F-measures of 0.6078 to 0.8665. We also compare our algorithm with ML.KNN and show that on average across the 6 datasets, MLL-GA improves the average F-measure of ML.KNN by 14.43%.
Keywords: genetic algorithms; learning (artificial intelligence); public domain software; software fault tolerance; software maintenance; Baidu; F-measures; MLL-GA; ML.KNN; crash report; execution trace; fault labels; genetic algorithm; modern software system; multilabel software behavior learning; open source programs; software vendor; Biological cells; Computer crashes; Genetic algorithms; Prediction algorithms; Software; Software algorithms; Training; Genetic Algorithm; Multi-label Learning; Software Behavior Learning (ID#:14-2217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747163&isnumber=6747152
- Sun, J.; Liao, H.; Upadhyaya, B.R., "A Robust Functional-Data-Analysis Method for Data Recovery in Multichannel Sensor Systems," Cybernetics, IEEE Transactions on , vol.44, no.8, pp.1420,1431, Aug. 2014. doi: 10.1109/TCYB.2013.2285876 Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. 
The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
Keywords: Bandwidth; Data models; Eigenvalues and eigenfunctions; Predictive models; Robustness; Sensor systems; Asynchronous data; condition monitoring; data recovery; robust functional principal component analysis (ID#:14-2218)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670785&isnumber=6856256
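The "grand median function" idea is simple to demonstrate: a pointwise median across repeated signal trajectories shrugs off outliers that badly distort a pointwise mean. A sketch on synthetic trajectories (the outlier magnitudes and noise level are assumptions; the paper's full method adds FPCA and multivariate functional regression on top of this robust smoothing):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)

# 20 repeated sensor trajectories of the same process variable, a few of
# them contaminated by large outliers (e.g. communication glitches).
signals = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((20, t.size))
signals[3, 50:60] += 8.0
signals[11, 120:130] -= 8.0

grand_mean = signals.mean(axis=0)
grand_median = np.median(signals, axis=0)   # robust "grand median function"

truth = np.sin(2 * np.pi * t)
err_mean = np.abs(grand_mean - truth).max()
err_median = np.abs(grand_median - truth).max()
print(f"max error, grand mean: {err_mean:.3f}, grand median: {err_median:.3f}")
```

The mean inherits a visible bump wherever an outlier burst occurs, while the median tracks the underlying trajectory; this is the robustness property the recovery method relies on.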
- Simon, S.; Liu, S., "An Automated Design Method for Fault Detection and Isolation of Multidomain Systems Based on Object-Oriented Models," Mechatronics, IEEE/ASME Transactions on, vol. PP, no.99, pp. 1, 13, July 2014. doi: 10.1109/TMECH.2014.2330904 In this paper, it is shown that the high automation level of the object-oriented modeling paradigm for physical systems can significantly rationalize the design procedure of fault detection and isolation (FDI) systems. Consequently, an object-oriented FDI method for complex engineering systems consisting of subsystems from different physical domains like mechatronic systems, commercial vehicles, and chemical process plants is developed. The mathematical composition of the objects corresponding to the subsystems results in a differential algebraic equation (DAE) that describes the overall system. This DAE is automatically analyzed and transferred into a set of residual generators that enable a two-stage FDI procedure for multiple fault modes.
Keywords: Automated design of fault detection and isolation (FDI) systems; model-based diagnosis; object-oriented modeling of multidomain systems (ID#:14-2219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6857410&isnumber=4785241
- Mehdi Namdari, Hooshang Jazayeri-Rad, “Incipient Fault Diagnosis Using Support Vector Machines Based On Monitoring Continuous Decision Functions,” Engineering Applications of Artificial Intelligence, Volume 28, February, 2014, Pages 22-35. doi>10.1016/j.engappai.2013.11.013 The Support Vector Machine (SVM), an innovative machine learning tool based on statistical learning theory, has recently been used in process fault diagnosis tasks. In the application of SVM to a fault diagnosis problem, typically a discrete decision function with discrete output values is utilized in order to solely define the label of the fault. However, for incipient faults, in which the fault steadily progresses over time and there is a changeover from normal operation to faulty operation, a discrete decision function does not reveal any evidence about the progress and depth of the fault. Numerous process faults, such as reactor fouling and degradation of catalyst, progress slowly and can be categorized as incipient faults. In this work, a continuous decision function is proposed. The decision function values not only define the fault label, but also give qualitative evidence about the depth of the fault. The suggested method is applied to incipient fault diagnosis of a continuous binary mixture distillation column and the results demonstrate the practicability of the proposed approach. In incipient fault diagnosis tasks, the proposed approach outperformed some of the conventional techniques. Moreover, the performance of the proposed approach is better than typical discrete-output classification techniques on monitoring indexes such as the false alarm rate, detection time and diagnosis time.
Keywords: Binary mixture distillation column, Continuous decision function, Incipient fault diagnosis, Pattern recognition, Support vector machines (ID#:14-2220)
URL: http://dl.acm.org/citation.cfm?id=2574578.2574707&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.engappai.2013.11.013
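The contrast between a discrete label and a continuous decision value can be sketched as follows. A least-squares linear classifier stands in for the paper's SVM (its raw value plays the role of the SVM decision function), and the drifting operating point models an incipient fault; all data and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data: normal operation vs fully developed fault (two process variables).
normal = rng.standard_normal((200, 2)) * 0.3
faulty = rng.standard_normal((200, 2)) * 0.3 + np.array([2.0, 1.5])
X = np.vstack([normal, faulty])
y = np.hstack([-np.ones(200), np.ones(200)])

# Least-squares stand-in for an SVM: f(x) = w.x + b trained to +/-1 targets.
A = np.column_stack([X, np.ones(len(X))])
w = np.linalg.lstsq(A, y, rcond=None)[0]

def decision(x):
    """Continuous decision value: the sign gives the discrete fault label,
    while the magnitude tracks how deep into the fault region we are."""
    return x @ w[:2] + w[2]

# An incipient fault: the operating point drifts slowly toward the fault region.
drift = np.linspace(0, 1, 6)[:, None] * np.array([2.0, 1.5])
values = [decision(p) for p in drift]
print(np.round(values, 2))
```

A discrete classifier would report "normal" until the sign flips and then "fault", giving no warning; the continuous value climbs monotonically through the changeover, which is exactly the evidence of fault progress the paper monitors.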
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.