System Recovery 2015

 

 

System recovery following an attack is a core cybersecurity issue. Current research into methods for undoing data manipulation and recovering lost or exfiltrated data in distributed, cloud-based, or other large-scale complex systems is producing new approaches and techniques. For the Science of Security community, system recovery is an essential element of resiliency. The articles cited here are from 2015.

Di Martino, C.; Kramer, W.; Kalbarczyk, Z.; Iyer, R., "Measuring and Understanding Extreme-Scale Application Resilience: A Field Study of 5,000,000 HPC Application Runs," in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, pp. 25-36, 22-25 June 2015. doi: 10.1109/DSN.2015.50

Abstract: This paper presents an in-depth characterization of the resiliency of more than 5 million HPC application runs completed during the first 518 production days of Blue Waters, a 13.1 petaflop Cray hybrid supercomputer. Unlike past work, we measure the impact of system errors and failures on user applications, i.e., the compiled programs launched by user jobs that can execute across one or more XE (CPU) or XK (CPU+GPU) nodes. The characterization is performed by means of a joint analysis of several data sources, which include workload and error/failure logs. In order to relate system errors and failures to the executed applications, we developed LogDiver, a tool to automate the data pre-processing and metric computation. Some of the lessons learned in this study include: i) while about 1.53% of applications fail due to system problems, the failed applications contribute to about 9% of the production node hours executed in the measured period, i.e., the system consumes computing resources, and system-related issues represent a potentially significant energy cost for the work lost, ii) there is a dramatic increase in the application failure probability when executing full-scale applications: 20x (from 0.008 to 0.162) when scaling XE applications from 10,000 to 22,000 nodes, and 6x (from 0.02 to 0.129) when scaling GPU/hybrid applications from 2000 to 4224 nodes, and iii) the resiliency of hybrid applications is impaired by the lack of adequate error detection capabilities in hybrid nodes.

Keywords: Cray computers; failure analysis; parallel machines; parallel processing; system monitoring; system recovery; Blue Waters; Cray hybrid supercomputer; HPC application runs; LogDiver; application failure probability; error-failure logs; extreme-scale application resilience; system errors; system failures; workload logs; Blades; Graphics processing units; Hardware; Random access memory; Servers; Torque; Xenon; application resilience; data analysis; data-driven resilience; extreme-scale; hybrid machines; resilience; supercomputer (ID#: 15-8038)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266835&isnumber=7266818

 

Padma, V.; Yogesh, P., "Proactive failure recovery in OpenFlow based Software Defined Networks," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-6, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219846

Abstract: Software Defined Networking (SDN) is a network architecture that decouples the control and data planes. SDN enables network control to become directly programmable and the underlying infrastructure to be abstracted from the network services. The foundation for open-standards-based software defined networking is the OpenFlow protocol. The OpenFlow architecture, originally designed for Local Area Networks (LANs), does not include effective mechanisms for fast resiliency, but metro and carrier-grade Ethernet networks and industrial area networks must guarantee fast resiliency upon network failure. This paper experiments with a link protection scheme that enhances the OpenFlow architecture by adding fast recovery mechanisms to the switch and the controller. This is achieved by enabling the controller to add backup paths proactively along with the working paths and enabling the switches to perform the recovery actions locally. Because this avoids controller intervention during recovery, the recovery time depends solely on the failure detection time of the switch; since this is less than the switch-controller round-trip time, the scheme gives better results. The performance of the system is evaluated by measuring the packet loss and switchover time and comparing them with current OpenFlow implementations. The system performs reasonably better than existing systems in terms of switchover time; however, the number of backup path entries increases accordingly.

Keywords: computer network reliability; local area networks; protocols; signal detection; software defined networking; LAN; OpenFlow protocol architecture; SDN architecture; carrier grade Ethernet network; controller intervention avoidance; failure detection; industrial area network; link protection scheme; local area network; metro grade Ethernet network; network control; proactive failure recovery; software defined network; Computer architecture; Ports (Computers); Protocols; Signal processing; Software defined networking; Switches; Failure recovery; Fast resiliency; Link protection; OpenFlow; Software Defined Networking (ID#: 15-8039)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219846&isnumber=7219823
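
The proactive link protection described above can be approximated in standard OpenFlow 1.3 deployments with fast-failover groups, whose buckets let a switch swap to a pre-installed backup port locally. The sketch below uses the Ryu controller; Ryu, the group ID, and the port numbers are illustrative assumptions, not details taken from the paper.

```python
# Sketch: pre-installing a local backup path with an OpenFlow 1.3
# fast-failover group, in the spirit of the proactive recovery the paper
# describes. Ryu, the group id, and the port numbers are assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ProactiveFailover(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        WORKING_PORT, BACKUP_PORT, GROUP_ID = 1, 2, 50   # assumed topology

        # Each bucket is "live" only while its watch_port is up, so the
        # switch falls back to the backup port locally, without waiting
        # for the controller to intervene.
        buckets = [
            parser.OFPBucket(watch_port=WORKING_PORT,
                             actions=[parser.OFPActionOutput(WORKING_PORT)]),
            parser.OFPBucket(watch_port=BACKUP_PORT,
                             actions=[parser.OFPActionOutput(BACKUP_PORT)]),
        ]
        dp.send_msg(parser.OFPGroupMod(dp, ofp.OFPGC_ADD,
                                       ofp.OFPGT_FF, GROUP_ID, buckets))

        # Steer IPv4 traffic into the fast-failover group.
        match = parser.OFPMatch(eth_type=0x0800)
        actions = [parser.OFPActionGroup(GROUP_ID)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```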

 

Hukerikar, S.; Diniz, P.C.; Lucas, R.F., "Enabling Application Resilience Through Programming Model Based Fault Amelioration," in High Performance Extreme Computing Conference (HPEC), 2015 IEEE, pp. 1-6, 15-17 Sept. 2015. doi: 10.1109/HPEC.2015.7322460

Abstract: High-performance computing applications that will run on future exascale-class supercomputing systems are projected to encounter accelerated rates of faults and errors. For these large-scale systems, maintaining fault resilient operation is a key challenge. The most widely used resiliency approach today, which is based on checkpoint and rollback (C/R) recovery, is not expected to remain viable in the presence of frequent errors and failures. In this paper, we present a framework for enabling application-level recovery from error states through fault amelioration. Our approach is based on programming model extensions that enable algorithm-based fault amelioration knowledge to be expressed as an intrinsic feature of the programming environment. This is accomplished through a set of language extensions that are supported by a compiler infrastructure and a runtime system. We experimentally demonstrate that the framework enables recovery from errors in the program state with low overhead to the application performance.

Keywords: checkpointing; parallel processing; program compilers; software fault tolerance; software maintenance; C/R recovery; algorithm-based fault amelioration knowledge; application resiliency; application-level recovery; checkpoint and rollback recovery; compiler infrastructure; exascale-class supercomputing systems; fault resilient operation maintenance; high-performance computing applications; large-scale systems; programming model extensions; runtime system; Data structures; Program processors; Programming; Resilience; Runtime; Semantics; Syntactics (ID#: 15-8040)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322460&isnumber=7322434

 

Crowcroft, J.; Levin, L.; Segal, M., "Using Data Mules for Sensor Network Resiliency," in Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2015 13th International Symposium on, pp. 427-434, 25-29 May 2015. doi: 10.1109/WIOPT.2015.7151102

Abstract: In this paper, we study the problem of efficient data recovery using the data mules approach, where a set of mobile sensors with advanced mobility capabilities re-acquire lost data by visiting the neighbors of failed sensors, thereby improving network resiliency. Our approach involves defining the optimal communication graph and mules' placements such that the overall traveling time and distance is minimized regardless to which sensors crashed. We explore this problem under different practical network topologies such as general graphs, grids and random linear networks and provide approximation algorithms based on multiple combinatorial techniques. Simulation experiments demonstrate that our algorithms outperform various competitive solutions for different network models, and that they are applicable for practical scenarios.

Keywords: approximation theory; graph theory; minimisation; mobility management (mobile radio); telecommunication network topology; wireless sensor networks; advanced mobility capabilities; approximation algorithms; data mules; data recovery; general graphs; mobile sensors; multiple combinatorial techniques; network topologies; optimal communication graph; overall traveling distance minimization; overall traveling time minimization; random linear networks; sensor network resiliency improvement; Ad hoc networks; Approximation algorithms; Mobile communication; Mobile computing; Optimized production technology; Robot sensing systems; Topology (ID#: 15-8041)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7151102&isnumber=7151020
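
As a toy illustration of mule placement (not the paper's approximation algorithms), the sketch below places a single mule at the graph 1-center, the node that minimizes the worst-case shortest-path distance to any sensor it might have to visit; the grid topology and unit edge weights are assumptions.

```python
# Toy sketch: place one data mule at the graph 1-center, i.e., the node
# minimizing the maximum shortest-path distance to every sensor it may
# have to visit after a failure.
import networkx as nx


def one_center_placement(graph: nx.Graph) -> tuple:
    """Return (best_node, worst_case_distance) for a single mule."""
    lengths = dict(nx.all_pairs_dijkstra_path_length(graph, weight="weight"))
    best_node, best_radius = None, float("inf")
    for candidate, dists in lengths.items():
        radius = max(dists.values())        # farthest sensor from candidate
        if radius < best_radius:
            best_node, best_radius = candidate, radius
    return best_node, best_radius


if __name__ == "__main__":
    g = nx.grid_2d_graph(4, 4)              # small grid topology (assumed)
    nx.set_edge_attributes(g, 1.0, "weight")
    print(one_center_placement(g))          # a node near the grid centre
```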

 

Pham Phuoc Hung; Xuan-Qui Pham; Ga-Won Lee; Tuan-Anh Bui; Eui-Nam Huh, "A Procedure to Achieve Cost and Performance Optimization for Recovery in Cloud Computing," in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, pp. 596-599, 19-21 Aug. 2015. doi: 10.1109/APNOMS.2015.7275402

Abstract: This research discusses a system architecture that offers potentially better resiliency and faster recovery from failures based on the well-known genetic algorithm. Additionally, we aim to achieve globally optimized performance as well as a service solution that remains financially and operationally balanced according to customer preferences. The proposed methodology has undergone extensive evaluation to establish its effectiveness and efficiency, including close comparison with other existing work.

Keywords: cloud computing; genetic algorithms; software architecture; software performance evaluation; system recovery; cloud computing; customer preferences; genetic algorithm; performance optimization; recovery time; service solution; system architecture; Cloud computing; Genetic algorithms; Processor scheduling; Program processors; Schedules; Sociology; Statistics; Task scheduling; big data; cloud computing; parallel computing; recovery time (ID#: 15-8042)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275402&isnumber=7275336

 

Soowoong Eo; Wooyeon Jo; Seokjun Lee; Shon, T., "A Phase of Deleted File Recovery for Digital Forensics Research in Tizen," in IT Convergence and Security (ICITCS), 2015 5th International Conference on, pp. 1-3, 24-27 Aug. 2015. doi: 10.1109/ICITCS.2015.7292924

Abstract: Digital forensics must collect digital evidence not only from suspects' computers but also from many different kinds of mobile devices and operating systems. When acquiring digital evidence, recovering deleted files is especially meaningful because it can uncover evidence concealed by the suspect. In this paper, a phase of deleted file recovery for the Tizen operating system is proposed and verified experimentally.

Keywords: back-up procedures; digital forensics; operating systems (computers); system recovery; Tizen operating system; concealed evidence; deleted file recovery; digital evidences; digital forensics; mobile devices; operating systems; Digital forensics; File systems; Mobile communication; Operating systems; Smart phones (ID#: 15-8043)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292924&isnumber=7292885

 

Fairbanks, K.D., "A Technique for Measuring Data Persistence Using the Ext4 File System Journal," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 18-23, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.164

Abstract: In this paper, we propose a method of measuring data persistence using the Ext4 journal. Digital Forensic tools and techniques are commonly used to extract data from media. A great deal of research has been dedicated to the recovery of deleted data, however, there is a lack of information on quantifying the chance that an investigator will be successful in this endeavor. To that end, we suggest the file system journal be used as a source to gather empirical evidence of data persistence, which can later be used to formulate the probability of recovering deleted data under various conditions. Knowing this probability can help investigators decide where to best invest their resources. We have implemented a proof of concept system that interrogates the Ext4 file system journal and logs relevant data. We then detail how this information can be used to track the reuse of data blocks from the examination of file system metadata structures. This preliminary design contributes a novel method of tracking deleted data persistence that can be used to generate the information necessary to formulate probability models regarding the full and/or partial recovery of deleted data.

Keywords: digital forensics; file organisation; probability; Ext4 file system journal; data extraction; data persistence; digital forensic tools; probability; proof of concept system; Data mining; Data structures; Digital forensics; File systems; Media; Metadata; Operating systems; Data Persistence; Data Recovery; Digital Forensics; Ext4; File System Forensics; Journal; Persistence Measurement (ID#: 15-8044)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273317&isnumber=7273299

 

Leom, Ming Di; D'Orazio, Christian Javier; Deegan, Gaye; Choo, Kim-Kwang Raymond, "Forensic Collection and Analysis of Thumbnails in Android," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 1059-1066, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.483

Abstract: JPEG thumbnail images are of interest in forensic investigations as images from the thumbnail cache could be intact even when the original pictures have been deleted. In addition, a deleted thumbnail is less likely to be fragmented due to its small size. The focus of existing literature is generally on the desktop environment. Considering the increasing capability of smart mobile devices, particularly Android devices, to take pictures and videos on the go, it is important to understand how thumbnails can be collected from these devices. In this paper, we examine and describe the various thumbnail sources in Android devices and propose a methodology for thumbnail collection and analysis from Android devices. We also demonstrate the utility of our proposed methodology using a case study (e.g. thumbnails could be recovered even when the file system is heavily fragmented). Our findings also indicate that collective information obtained from the recovered fragmented JPEG image (e.g. metadata) and the thumbnail could be akin to recovering the full image for forensic purposes.

Keywords: Androids; Australia; File systems; Forensics; Humanoid robots; Media; Mobile handsets; Android forensics; forensic recovery; mobile forensics; thumbcache; thumbnail recovery (ID#: 15-8045)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345391&isnumber=7345233
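
A common low-level step in this kind of thumbnail recovery is carving embedded JPEGs out of a raw cache dump by scanning for the JPEG start-of-image and end-of-image markers. The sketch below illustrates only that step; the input file name and size limit are assumptions, and this is not the authors' collection methodology.

```python
# Sketch: carve candidate JPEG thumbnails out of a raw cache dump (for
# example an Android thumbnail cache file or a file-system image) by
# scanning for the JPEG start-of-image (FF D8 FF) and end-of-image (FF D9)
# markers. The input path is an assumption for illustration only.
from pathlib import Path

SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker plus next marker byte
EOI = b"\xff\xd9"       # JPEG end-of-image marker


def carve_jpegs(raw: bytes, max_size: int = 512 * 1024):
    """Yield (offset, blob) pairs that look like complete embedded JPEGs."""
    start = raw.find(SOI)
    while start != -1:
        end = raw.find(EOI, start + len(SOI))
        if end == -1:
            break
        candidate = raw[start:end + len(EOI)]
        if len(candidate) <= max_size:      # thumbnails are small
            yield start, candidate
        start = raw.find(SOI, start + 1)


if __name__ == "__main__":
    data = Path("thumbcache.dump").read_bytes()          # assumed input file
    for i, (offset, blob) in enumerate(carve_jpegs(data)):
        Path(f"carved_{i:04d}.jpg").write_bytes(blob)
        print(f"carved {len(blob)} bytes at offset {offset:#x}")
```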

 

Mohite, M.P.; Ardhapurkar, S.B., "Design and Implementation of a Cloud Based Computer Forensic Tool," in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, pp. 1005-1009, 4-6 April 2015. doi: 10.1109/CSNT.2015.180

Abstract: Cloud computing is receiving more and more attention from the information and communication technology industry. Driven by the demands of cloud users, digital forensics in cloud computing is an emerging area of study linked to the increasing use of information processing, the Internet, and digital storage devices in numerous criminal actions, both traditional and high-tech. Digital forensics includes the handling, examination, and documentation of digital evidence for a court of law. A digital forensic tool for the cloud computing environment is in strong demand from forensic investigators. In the digital forensic process, an image of the original digital data must be created without damage, and it must be shown that the computer evidence existed at the specific time. The evidence is then analyzed by the forensic investigator; after the examination, a report must be produced so that the findings can be accepted as legally valid evidence in court. To provide an advanced crime scene investigation service in a cloud environment, a cloud-based computer forensic tool is proposed in this paper. The tool provides multiple features for probing evidence, including data recovery, sorting, indexing, a hex viewer, and data bookmarking.

Keywords: cloud computing; image forensics; law; Internet; advanced crime scene investigation; cloud computing; cloud-based computer forensic tool; computer forensic tool; court of law; digital computer storage device; digital forensic tool; document digital evidence; information processing governance; Cloud computing; Digital forensics; Media; Portable computers; Cloud Computing; Computer Forensic; Digital Evidence; Forensic Investigation (ID#: 15-8046)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280070&isnumber=7279856

 

Ramisch, F.; Rieger, M., "Recovery of SQLite Data Using Expired Indexes," in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, pp. 19-25, 18-20 May 2015. doi: 10.1109/IMF.2015.11

Abstract: SQLite databases have tremendous forensic potential. In addition to active data, expired data remain in the database file, if the option secure delete is not applied. Tests of available forensic tools show, that the indexes were not considered, although they may complete the recovery of the table structures. Algorithms for their recovery and combination with each other or with table data are worked out. A new tool, SQLite Index Recovery, was developed for this study. The use with test data and data of Apple Mail shows, that the recovery of indexes is possible and enriches the recovery of ordinary table data.

Keywords: database indexing; digital forensics; relational databases; Apple Mail data; SQLite data recovery; SQLite databases; SQLite index recovery; active data; database file; expired data; forensic tools; table data; table structure recovery; test data; File systems; Forensics; Indexes; Metadata; Oxygen; Postal services; Apple Mail; SQLite; database; expired data; forensic tool; free block; index; recovery (ID#: 15-8047)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195803&isnumber=7195793
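
A minimal illustration of where such expired data lives (not the authors' SQLite Index Recovery tool) is to walk a database file's b-tree pages and dump each page's freeblock chain, since deleted index and table cells can linger there when secure_delete is off. The page-header and freeblock layouts below follow the documented SQLite file format; the database file name is an assumption.

```python
# Minimal sketch: walk the b-tree pages of a SQLite file and dump each
# page's freeblock chain, where expired index/table cells can survive when
# secure_delete is disabled. The database path is an assumption.
import struct

BTREE_PAGE_TYPES = {2, 5, 10, 13}   # interior/leaf index and table pages


def dump_freeblocks(db_path: str):
    with open(db_path, "rb") as f:
        header = f.read(100)                         # 100-byte file header
        page_size = struct.unpack(">H", header[16:18])[0]
        if page_size == 1:                           # 1 encodes 65536
            page_size = 65536
        f.seek(0)
        data = f.read()

    for page_no in range(1, len(data) // page_size + 1):
        base = (page_no - 1) * page_size
        hdr = base + (100 if page_no == 1 else 0)    # page 1 shares the header
        if data[hdr] not in BTREE_PAGE_TYPES:
            continue
        free_off = struct.unpack(">H", data[hdr + 1:hdr + 3])[0]
        while free_off:                              # follow freeblock chain
            next_off, size = struct.unpack(
                ">HH", data[base + free_off:base + free_off + 4])
            payload = data[base + free_off + 4:base + free_off + size]
            yield page_no, free_off, payload
            free_off = next_off


if __name__ == "__main__":
    for page, off, blob in dump_freeblocks("mail.db"):   # assumed file name
        print(f"page {page} offset {off}: {len(blob)} expired bytes")
```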

 

Bao, Jianrong; Gao, Xiqi; Liu, Chao; Jiang, Bin, "Iterative Carrier Recovery in an LDPC Coded QPSK System at Low SNRs," in Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, pp. 1-5, 15-17 Oct. 2015. doi: 10.1109/WCSP.2015.7340999

Abstract: This paper presents an iterative carrier recovery (ICR) via soft decision metrics (SDMs) of low-density parity-check (LDPC) decoding in an LDPC coded quadrature phase shift keying (QPSK) system. It is crucial for wireless communication systems to work effectively, especially at low signal-to-noise ratios (SNRs). By maximizing the sum of the square of the SDMs of LDPC decoding with gradient oriented optimization of the objective function, it adaptively updates the carrier phase and frequency parameter accurately. The structure of the proposed scheme is also given, along with the phase ambiguity solution. Meanwhile, it is combined with the Costas loop tracking and the LDPC decoding feedback to eliminate residual carrier offsets. Simulation results indicate that the proposed ICR algorithm achieves good performance in an LDPC coded QPSK system under rather large carrier phase offsets, which is just within 0.1 dB of the ideal code performance at the cost of some moderate complexity. By the proposed scheme, a rate-1/2 LDPC coded QPSK system can even work at low bit SNR (Eb/N0) about 1–2 dB, which is useful in energy-limited wireless communications.

Keywords: Approximation methods; Complexity theory; Iterative decoding; Linear programming; Maximum likelihood decoding; Phase shift keying; LDPC codes; carrier synchronization; iterative carrier recovery; soft decision metrics (ID#: 15-8048)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340999&isnumber=7340966
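
The abstract's objective, maximizing the summed squared soft decision metrics by gradient-oriented optimization, has the generic form below; the symbols and step sizes are illustrative, not the paper's exact notation.

```latex
% Generic gradient-ascent form of the carrier recovery described in the
% abstract; \Lambda_k is the soft decision metric of the k-th coded bit and
% the step sizes \mu_\theta, \mu_f are illustrative assumptions.
J(\hat\theta,\hat f) = \sum_{k}\bigl|\Lambda_k(\hat\theta,\hat f)\bigr|^{2},
\qquad
\hat\theta^{(i+1)} = \hat\theta^{(i)} + \mu_\theta\,
    \frac{\partial J}{\partial\hat\theta}\Big|_{(\hat\theta^{(i)},\,\hat f^{(i)})},
\qquad
\hat f^{(i+1)} = \hat f^{(i)} + \mu_f\,
    \frac{\partial J}{\partial\hat f}\Big|_{(\hat\theta^{(i)},\,\hat f^{(i)})}.
```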

 

Silveira, J.; Marcon, C.; Cortez, P.; Barroso, G.; Ferreira, J.M.; Mota, R., "Preprocessing of Scenarios for Fast and Efficient Routing Reconfiguration in Fault-Tolerant NoCs," in Parallel, Distributed and Network-Based Processing (PDP), 2015 23rd Euromicro International Conference on, pp. 404-411, 4-6 March 2015. doi: 10.1109/PDP.2015.22

Abstract: Newest processes of CMOS manufacturing allow integrating billions of transistors in a single chip. This huge integration enables to perform complex circuits, which require an energy efficient communication architecture with high scalability and parallelism degree, such as a Network-on-Chip (NoC). However, these technologies are very close to physical limitations implying the susceptibility increase of faults on manufacture and at runtime. Therefore, it is essential to provide a method for efficient fault recovery, enabling the NoC operation even in the presence of faults on routers or links, and still ensure deadlock-free routing even for irregular topologies. A preprocessing approach of the most probable fault scenarios enables to anticipate the computation of deadlock-free routings, reducing the time necessary to interrupt the system operation in a fault event. This work describes a preprocessing technique of fault scenarios based on forecasting fault tendency, which employs a fault threshold circuit and a high-level software that identifies the most relevant fault scenarios. We propose methods for dissimilarity analysis of scenarios based on measurements of cross-correlation of link fault matrices. At runtime, the preprocessing technique employs analytic metrics of average distance routing and links load for fast search of sound fault scenarios. Finally, we use RTL simulation with synthetic traffic to prove the quality of our approach.

Keywords: fault tolerance; network-on-chip; topology; CMOS manufacturing; deadlock-free routing reconfiguration; fault threshold circuit; fault-tolerant NoC operation; forecasting fault tendency; high-level software; irregular topology; link fault matrices; network-on-chip; Circuit faults; Computer architecture; Fault tolerance; Fault tolerant systems; Ports (Computers); Routing; System recovery; NoC; fault-tolerance; irregular topology; routing (ID#: 15-8049)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092752&isnumber=7092002
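
The dissimilarity analysis mentioned in the abstract compares link fault matrices via cross-correlation. One plausible form (the exact normalization used in the paper is not specified here) is one minus the normalized cross-correlation of two binary link-fault matrices:

```python
# Illustrative dissimilarity between two fault scenarios: one minus the
# normalized cross-correlation of their binary link-fault matrices
# (1 = faulty link). The normalization choice is an assumption.
import numpy as np


def fault_scenario_dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:                 # degenerate all-equal scenarios
        return 0.0
    return 1.0 - float(np.dot(a, b) / denom)


if __name__ == "__main__":
    s1 = np.zeros((4, 4), dtype=int); s1[1, 2] = 1               # one faulty link
    s2 = np.zeros((4, 4), dtype=int); s2[1, 2] = 1; s2[2, 3] = 1  # overlapping scenario
    print(fault_scenario_dissimilarity(s1, s2))
```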

 

Mondal, S.K.; Xiaoyan Yin; Muppala, J.K.; Alonso Lopez, J.; Trivedi, K.S., "Defects per Million Computation in Service-Oriented Environments," in Services Computing, IEEE Transactions on, vol. 8, no. 1, pp. 32-46, Jan.-Feb. 2015. doi: 10.1109/TSC.2013.52

Abstract: Traditional system-oriented dependability metrics like reliability and availability do not fully reflect the impact of system failure-repair behavior in service-oriented environments. The telecommunication systems community prefers to use Defects Per Million (DPM), defined as the number of calls dropped out of a million calls due to failures, as a user-perceived dependability metric. In this paper, we provide new formulation for the computation of the DPM metric for a system supporting Voice over IP functionality using the Session Initiation Protocol (SIP). We evaluate different replication schemes that can be used at the SIP application server. They include the effects of software failure, failure detection, recovery mechanisms, and imperfect coverage for recovery mechanisms. We derive closed-form expressions for the DPM taking into account the transient behavior of recovery after a failure. Our approach and underlying models can be readily extended to other types of service-oriented environments.

Keywords: Internet telephony; signalling protocols; software fault tolerance; software metrics; system recovery; DPM metric; SIP application server; Session Initiation Protocol; defects per million computation; failure detection; imperfect coverage; recovery mechanisms; reliability; replication schemes; service-oriented environments; software failure; system failure-repair behavior; system-oriented dependability metrics; telecommunication systems community; transient behavior; user-perceived dependability metric; voice over IP functionality; Availability; Computational modeling; Equations; Mathematical model; Measurement; Servers; Session initiation protocol; defects per million; fault tolerance; replication; user-perceived service reliability (ID#: 15-8050)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671595&isnumber=7029726
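
The DPM metric itself, as defined in the abstract, is simply the number of calls dropped due to failures normalized to one million call attempts:

```python
# The DPM metric as defined in the abstract: calls dropped due to failures,
# normalized to one million attempted calls.
def defects_per_million(dropped_calls: int, total_calls: int) -> float:
    return 1e6 * dropped_calls / total_calls


# Example: 37 dropped calls out of 250,000 attempts -> 148.0 DPM.
print(defects_per_million(37, 250_000))
```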

 

Camara, M.S.; Fall, I.; Mendy, G.; Diaw, S., "Activity Failure Prediction Based on Process Mining," in System Theory, Control and Computing (ICSTCC), 2015 19th International Conference on, pp. 854-859, 14-16 Oct. 2015. doi: 10.1109/ICSTCC.2015.7321401

Abstract: Based on the state of the art of process mining, we can conclude that quality characteristics (failure rate metrics or loops) are poorly represented or absent in most predictive models found in the literature. The main goal of this research work is to analyze how to learn a prediction model that defines failure as the response variable. A model of this type can be used for active real-time control (e.g., through the reassignment of workflow activities based on prediction results) or for the automated support of redesign (i.e., prediction results are transformed into software requirements used to implement process improvements). The proposed methodology is based on the application of a data mining process, because the objective of this work can be considered a data mining goal.

Keywords: business data processing; data mining; system recovery; BPM; active real-time-controlling; activity failure prediction; automated support; business process management; data mining goal; failure rate metrics; predictive models; process improvements; process mining; quality characteristics; response variable; software requirements; workflow activities; Analytical models; Business; Data mining; Data models; Measurement; Predictive models; Process control; Business Process Management; Data mining; Process mining; Supervised learning; Workflow management software (ID#: 15-8051)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321401&isnumber=7321255

 

Jiyeon Kim; Kim, H.S., "PBAD: Perception-Based Anomaly Detection System for Cloud Datacenters," in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, pp. 678-685, June 27 2015-July 2 2015. doi: 10.1109/CLOUD.2015.95

Abstract: Detection of anomalies in large Cloud infrastructure is challenging. Understanding operational behavior of Cloud is extremely difficult due to the heterogeneity of different technologies, virtualized platforms and complex interactions among the systems. Many of existing system models for Cloud are based on utilization metrics such as CPU, memory, network and I/O. Such system models are quite complex and their anomaly detection mechanisms are mostly based on threshold scheme. Utilization metrics exceeding a certain threshold would trigger an alarm. In fact, it is impossible to determine proper threshold for all anomalies. These system models fail to assess the state of the system accurately. We propose a novel anomaly detection system based on user perception rather than complex system models. In our Perception-Based Anomaly Detection system (PBAD), each component within multi-tier applications monitors response time and determines whether overall service response time is adequate. PBAD also locates the anomaly by analyzing component behaviors. PBAD masks the complexity of Cloud and addresses what matters, how user perceives the service provided by the Cloud applications. The key advantages of the proposed algorithm are simplicity and scalability. We implement and deploy PBAD in our production data center environment. The experimental results show that PBAD detects numerous types of anomalies as well as the combination of anomalies where existing systems fail.

Keywords: cloud computing; computer centres; security of data; system monitoring; system recovery; virtual machines; virtualisation; CPU utilization; I/O utilization; PBAD; anomaly detection mechanism; cloud application; cloud complexity; cloud datacenters; cloud operational behavior; complex system interactions; component behavior analysis; large cloud infrastructure; memory utilization; multitier application; network utilization; perception-based anomaly detection system; production data center environment; response time monitoring; service response time; system failure; system model; system state assessment; technology heterogeneity; threshold scheme; user perception; utilization metrics; virtual machine; virtualized platform; Cloud computing; Computational modeling; Delays; Servers; Support vector machines; Time factors; anomaly detection; cloud computing; cloud datacenter; response time; virtual machine (ID#: 15-8052)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214105&isnumber=7212169

 

Dasgupta, S.; Paramasivam, M.; Vaidya, U.; Ajjarapu, V., "Entropy-Based Metric for Characterization of Delayed Voltage Recovery," in Power Systems, IEEE Transactions on, vol. 30, no. 5, pp. 2460-2468, Sept. 2015. doi: 10.1109/TPWRS.2014.2361649

Abstract: In this paper, we introduce a novel entropy-based metric to characterize the fault-induced delayed voltage recovery (FIDVR) phenomenon. In particular, we make use of Kullback-Leibler (KL) divergence to determine both the rate and the level of voltage recovery following a fault or disturbance. The computation of the entropy-based measure relies on voltage time-series data and is independent of the underlying system model used to generate the voltage time-series. The proposed measure provides quantitative information about the degree of WECC voltage performance violation for FIDVR phenomenon. The quantitative measure for violation allows one to compare the voltage responses of different buses to various contingencies and to rank order them, based on the degree of violation.

Keywords: entropy; power system faults; power system measurement; signal processing; time series; Kullback-Leibler divergence; WECC voltage performance violation; delayed voltage recovery characterization; entropy-based metrics; fault-induced delayed voltage recovery; voltage recovery rate; voltage time-series data; Approximation methods; Density functional theory; Entropy; Probability density function; Probability distribution; Steady-state; Voltage measurement; Contingency analysis; delayed voltage recovery; entropy (ID#: 15-8053)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6942243&isnumber=7161453
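
The discrete Kullback-Leibler divergence underlying the proposed measure can be computed directly from voltage time-series histograms. In the sketch below, comparing a post-fault window against a nominal reference window is an illustrative choice, not necessarily the authors' exact construction:

```python
# Discrete KL divergence applied to histograms of voltage samples, as a
# sketch of the kind of entropy-based measure the abstract describes.
import numpy as np


def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))


def voltage_recovery_divergence(v_post_fault, v_reference, bins=50):
    lo = min(np.min(v_post_fault), np.min(v_reference))
    hi = max(np.max(v_post_fault), np.max(v_reference))
    p, _ = np.histogram(v_post_fault, bins=bins, range=(lo, hi))
    q, _ = np.histogram(v_reference, bins=bins, range=(lo, hi))
    return kl_divergence(p.astype(float), q.astype(float))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nominal = 1.0 + 0.005 * rng.standard_normal(2000)    # ~1.0 p.u. reference
    sagging = 0.85 + 0.02 * rng.standard_normal(2000)    # slow post-fault recovery
    print(voltage_recovery_divergence(sagging, nominal))  # large divergence
```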

 

Perova, I.; Mulesa, P., "Fuzzy Spatial Extrapolation Method Using Manhattan Metrics for Tasks Of Medical Data Mining," in Scientific and Technical Conference "Computer Sciences and Information Technologies" (CSIT), 2015 Xth International, pp. 104-106, 14-17 Sept. 2015. doi: 10.1109/STC-CSIT.2015.7325443

Abstract: In this paper, an approach for fuzzy clustering-classification of short medical data samples using the method of fuzzy spatial extrapolation is considered. The proposed procedure belongs to the field of medical data mining and is a hybrid system that can diagnose various diseases given a limited sample, complete or partial overlapping of classes, differing class densities, and differing numbers of samples per class, while requiring only small volumes of a priori information for training. The procedure can also fill gaps in a feature vector by recovering hidden dependencies contained in the data set.

Keywords: data mining; fuzzy set theory; medical administrative data processing; fuzzy clustering-classification; fuzzy spatial extrapolation method; fuzzy spatial extrapolation; hybrid system; manhattan metrics; medical data mining; Computational intelligence; Data mining; Extrapolation; Filling; Measurement; Medical diagnostic imaging; Neural networks; classification; deficit of information; feature vector; fuzzy clustering; fuzzy spatial extrapolation; gap (ID#: 15-8054)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325443&isnumber=7325415

 

Chowdhury, M.; Goldsmith, A., "Reliable Uncoded Communication in the SIMO MAC," in Information Theory, IEEE Transactions on, vol. 61, no. 1, pp. 388-403, Jan. 2015. doi: 10.1109/TIT.2014.2371040

Abstract: A single-input multiple-output multiple access channel, with a large number of uncoded noncooperating single-antenna transmitters and joint processing at a multiantenna receiver is considered. The minimum number of receiver antennas per transmitter that is needed for perfect recovery of the transmitted signals with overwhelming probability is investigated. It is shown that in the limit of a large number of transmitters, and in a rich scattering environment, the per-transmitter number of receiver antennas can be arbitrarily small, not only with the optimal maximum likelihood decoding rule, but also with much lower complexity decoders. Comparison with the ergodic capacity of the channel in the limit of a large number of transmitters suggests that uncoded transmissions achieve the Shannon-theoretic scaling behavior of the minimum per-transmitter number of receiver antennas. Thus, the diversity of a large system not only makes the performance metrics for some coded systems similar to that of uncoded systems, but also allows efficient decoders to realize close to the optimal performance of maximum likelihood decoding.

Keywords: MIMO communication; antenna arrays; channel capacity; electromagnetic wave scattering; maximum likelihood decoding; multiuser channels; probability; radio receivers; radio transmitters; telecommunication network reliability; wireless channels; Shannon-theoretic scaling; channel capacity; lower complexity decoder; maximum likelihood decoding rule; multiantenna receiver; probability; rich scattering environment; single antenna transmitter; single input multiple output multiple access channel; uncoded communication reliability; Maximum likelihood decoding; Receiving antennas; Reliability; Transmitting antennas; Convex programming; Maximum likelihood detection; Multiuser detection; Spatial diversity; convex programming; maximum likelihood detection; multiuser detection (ID#: 15-8055)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957541&isnumber=6994912
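
The maximum likelihood decision rule discussed in the abstract, choosing the symbol vector that minimizes ||y - Hx||^2, can be illustrated by brute force for a handful of transmitters. The QPSK constellation, Rayleigh channel, and antenna counts below are standard modeling assumptions, not parameters from the paper:

```python
# Brute-force maximum-likelihood detection for a small uncoded SIMO MAC:
# choose the transmit-symbol vector x minimizing ||y - Hx||^2.
import itertools
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)


def ml_detect(y: np.ndarray, H: np.ndarray) -> np.ndarray:
    n_tx = H.shape[1]
    best_x, best_metric = None, np.inf
    for combo in itertools.product(QPSK, repeat=n_tx):   # exhaustive search
        x = np.array(combo)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best_x, best_metric = x, metric
    return best_x


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_tx, n_rx = 3, 2                  # fewer receive antennas than transmitters
    H = (rng.standard_normal((n_rx, n_tx)) +
         1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    x_true = QPSK[rng.integers(0, 4, n_tx)]
    y = H @ x_true + 0.05 * (rng.standard_normal(n_rx) +
                             1j * rng.standard_normal(n_rx))
    print("true:", x_true)
    print("ml  :", ml_detect(y, H))    # at low noise this usually matches
```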

 

Jyothirmai, P.; Raj, J.S., "Secure Interoperable Architecture Construction for Overlay Networks," in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, pp. 1-6, 19-20 March 2015. doi: 10.1109/ICIIECS.2015.7193261

Abstract: Delay-tolerant networking (DTN) is an approach to computer network architecture that seeks to address the technical issues in heterogeneous networks that may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space. In disruption-tolerant networks, packets are stored whenever a link between nodes breaks, so delay during data transmission is tolerable in this type of network. Such delay is not tolerable, however, for voice packet transmission in wireless networks, which motivates their use. When different wireless networks interoperate with each other, the communication across them forms an overlay network. This network is vulnerable to attacks due to the mobile behaviour of its nodes; one such attack is the wormhole attack, a critical threat to normal operation in wireless networks that degrades network performance. It can be identified by using a technique called forbidden topology. The proposed recovery algorithm increases the performance of the network. Performance metrics such as throughput, packet delivery ratio, and delay are evaluated.

Keywords: computer network security; data communication; delay tolerant networks; open systems; overlay networks; radio links; telecommunication network topology; DTN; computer network architecture; data transmission; delay tolerant networking; disruption tolerant network; forbidden topology; overlay network; secure interoperable architecture construction; voice packet transmission; wireless network; wormhole attack; Delays; Network topology; Overlay networks; Security; Throughput; Topology; Wireless networks; Interoperable; Overlay Networks; Security; Wireless Networks; Wormhole Attack (ID#: 15-8056)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193261&isnumber=7192777

 

Shaohan Hu; Shen Li; Shuochao Yao; Lu Su; Govindan, R.; Hobbs, R.; Abdelzaher, T.F., "On Exploiting Logical Dependencies for Minimizing Additive Cost Metrics in Resource-Limited Crowdsensing," in Distributed Computing in Sensor Systems (DCOSS), 2015 International Conference on, pp. 189-198, 10-12 June 2015. doi: 10.1109/DCOSS.2015.26

Abstract: We develop data retrieval algorithms for crowd-sensing applications that reduce the underlying network bandwidth consumption or any additive cost metric by exploiting logical dependencies among data items, while maintaining the level of service to the client applications. Crowd sensing applications refer to those where local measurements are performed by humans or devices in their possession for subsequent aggregation and sharing purposes. In this paper, we focus on resource-limited crowd sensing, such as disaster response and recovery scenarios. The key challenge in those scenarios is to cope with resource constraints. Unlike the traditional application design, where measurements are sent to a central aggregator, in resource limited scenarios, data will typically reside at the source until requested to prevent needless transmission. Many applications exhibit dependencies among data items. For example, parts of a city might tend to get flooded together because of a correlated low elevation, and some roads might become useless for evacuation if a bridge they lead to fails. Such dependencies can be encoded as logic expressions that obviate retrieval of some data items based on values of others. Our algorithm takes logical data dependencies into consideration such that application queries are answered at the central aggregation node, while network bandwidth usage is minimized. The algorithms consider multiple concurrent queries and accommodate retrieval latency constraints. Simulation results show that our algorithm outperforms several baselines by significant margins, maintaining the level of service perceived by applications in the presence of resource-constraints.

Keywords: data handling; query processing; additive cost metric minimization; central aggregation node; data items; data retrieval algorithms; logic expressions; logical data dependency; multiple concurrent query; network bandwidth consumption; network bandwidth usage; resource constraints; resource-limited crowdsensing; retrieval latency constraints; Algorithm design and analysis; Bandwidth; Decision trees; Engines; Optimization; Sensors; System analysis and design; cost optimization; crowd sensing; logical dependency; resource limitation (ID#: 15-8057)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165037&isnumber=7164869
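
The idea of encoding logical dependencies so that some retrievals become unnecessary can be illustrated with a toy short-circuit evaluator that fetches the cheapest unresolved item first; the expression format, cost model, and item names below are assumptions, not the paper's query engine:

```python
# Toy sketch of exploiting logical dependencies to avoid retrievals: evaluate
# an AND/OR query over remote boolean data items, fetching the cheapest
# unresolved item first and short-circuiting once the result is fixed.
from typing import Callable, Dict, Tuple

Expr = Tuple  # ("item", name) | ("and", e1, e2) | ("or", e1, e2)


def _min_cost(expr: Expr, cost: Dict[str, float], cache: Dict[str, bool]) -> float:
    if expr[0] == "item":
        return 0.0 if expr[1] in cache else cost[expr[1]]
    return min(_min_cost(e, cost, cache) for e in expr[1:])


def evaluate(expr: Expr, fetch: Callable[[str], bool],
             cost: Dict[str, float], cache: Dict[str, bool],
             spent: list) -> bool:
    kind = expr[0]
    if kind == "item":
        name = expr[1]
        if name not in cache:                 # pay the retrieval cost once
            spent[0] += cost[name]
            cache[name] = fetch(name)
        return cache[name]
    # Try the cheaper operand first so a short-circuit is likely to save
    # the more expensive retrievals.
    left, right = sorted(expr[1:], key=lambda e: _min_cost(e, cost, cache))
    a = evaluate(left, fetch, cost, cache, spent)
    if (kind == "and" and not a) or (kind == "or" and a):
        return a                              # result fixed, skip the rest
    return evaluate(right, fetch, cost, cache, spent)


if __name__ == "__main__":
    values = {"bridge_up": False, "road_flooded": True, "shelter_open": True}
    cost = {"bridge_up": 1.0, "road_flooded": 5.0, "shelter_open": 9.0}
    query = ("and", ("item", "bridge_up"),
                    ("or", ("item", "road_flooded"), ("item", "shelter_open")))
    spent = [0.0]
    result = evaluate(query, values.__getitem__, cost, {}, spent)
    print(result, spent[0])   # False, 1.0 -- the two costlier items never fetched
```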

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.