Scientific Computing 2015

Scientific computing is concerned with constructing mathematical models and quantitative analysis techniques, and with using computers to analyze and solve scientific problems. As a practical matter, scientific computing is the use of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve specific problems in areas such as cybersecurity. For the Science of Security community, it relates to predictive metrics, compositionality, and resilience. The works cited here were presented in 2015.

Donghoon Kim; Vouk, M.A., "Securing Scientific Workflows," in Software Quality, Reliability and Security - Companion (QRS-C), 2015 IEEE International Conference on, pp. 95-104, 3-5 Aug. 2015. doi: 10.1109/QRS-C.2015.25

Abstract: This paper investigates security of Kepler scientific workflow engine. We are especially interested in Kepler-based scientific workflows that may operate in cloud environments. We find that (1) three security properties (i.e., input validation, remote access validation, and data integrity) are essential for making Kepler-based workflows more secure, and (2) that use of the Kepler provenance module may help secure Kepler based workflows. We implemented a prototype security enhanced Kepler engine to demonstrate viability of use of the Kepler provenance module in provision and management of the desired security properties.
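
The three properties listed in the abstract are concrete enough to illustrate. A minimal Python sketch follows, assuming a provenance-style digest store; all names and the URL pattern are invented here and are not drawn from the Kepler codebase.

```python
# Minimal sketch of two of the three security properties the paper names:
# remote access validation and data integrity. Illustrative only.
import hashlib
import re

ALLOWED_URL = re.compile(r"^https://[\w.-]+(/[\w./-]*)?$")  # hypothetical whitelist pattern

def validate_remote_source(url: str) -> None:
    """Reject workflow inputs that do not match the expected URL shape."""
    if not ALLOWED_URL.match(url):
        raise ValueError(f"remote source rejected by validation: {url!r}")

def record_digest(data: bytes) -> str:
    """Store a SHA-256 digest when a workflow stage produces data (provenance-style)."""
    return hashlib.sha256(data).hexdigest()

def check_integrity(data: bytes, recorded: str) -> None:
    """Verify data was not altered between workflow stages."""
    if hashlib.sha256(data).hexdigest() != recorded:
        raise RuntimeError("data integrity violation between workflow stages")

# Usage: validate an input, run a stage, record its output, re-check before the next stage.
validate_remote_source("https://example.org/dataset.csv")
digest = record_digest(b"stage-1 output")
check_integrity(b"stage-1 output", digest)
```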

Keywords: authorisation; cloud computing; data integrity; scientific information systems; workflow management software; Kepler provenance module; Kepler scientific workflow engine security; cloud environment; data integrity; input validation; remote access validation; Cloud computing; Conferences; Databases; Engines; Security; Software quality; Uniform resource locators; Cloud; Kepler; Provenance; Scientific workflow; Vulnerability (ID#: 15-7975)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7322130&isnumber=7322103

 

Ionita, M.-G.; Patriciu, V.-V., "Cyber Incident Response Aided by Neural Networks and Visual Analytics," in Control Systems and Computer Science (CSCS), 2015 20th International Conference on, pp. 229-233, 27-29 May 2015. doi: 10.1109/CSCS.2015.41

Abstract: The world security context is changing more than ever. Military interest has shifted from conventional means of warfare to cyber warfare. The most potent nations have entire armies watching international cyberspace for anomalies, ready to intervene to keep the peace at home or in an enemy nation. International interest in exploit development has risen significantly, going from an underground activity of a group of hackers to a semi-covert operation of a governmental agency [1]. In this context, where over 70 exabytes of data are moved over the internet per month [2] and significant cyber-attacks number almost 43 million per year [3], the sheer number of security events a SIEM operator has to triage can be overwhelming. This is why a human operator has to be helped by technology. Neural networks can bring a huge plus for detecting previously unknown attacks and zero-day exploits, and visual analytics can help a human operator process the huge volume of incoming information by presenting it in a cognitive fashion that supports understanding and classifying it in the correct context. Both of these concepts are presented in this paper: the detection algorithm based on neural networks and the scientific representation scheme based on visual analytics.
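
The abstract does not give the network's architecture, so the following Python sketch only illustrates the generic pattern described: train a small feed-forward classifier on event feature vectors so that a SIEM operator triages only flagged events. The features and data are invented.

```python
# Hedged sketch: a small feed-forward network separating "benign" from
# "suspicious" security-event feature vectors. Toy data, not the paper's model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Invented features, e.g. (packets/s, distinct ports, failed logins).
benign = rng.normal(loc=[10, 3, 0], scale=1.0, size=(200, 3))
attack = rng.normal(loc=[80, 40, 12], scale=5.0, size=(200, 3))
X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)
# The operator would triage only the events the model flags:
print(clf.predict([[75, 35, 10], [11, 2, 0]]))  # expected [1 0]
```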

Keywords: computer crime; government; military computing; neural nets; cyber incident response; cyber warfare; governmental agency; hackers; military interest; neural networks; visual analytics; world security context; Computer crime; Control systems; Google; Neural networks; Protocols; Visual analytics; Cyber security; Incident response; Neural networks; Visual analytics (ID#: 15-7976)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168435&isnumber=7168393

 

Yuan, Shijin; Yan, Jinghao; Mu, Bin; Li, HongYu, "Parallel Dynamic Step Size Sphere-Gap Transferring Algorithm for Solving Conditional Nonlinear Optimal Perturbation," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 559-565, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.261

Abstract: Intelligent algorithms have been extensively applied in scientific computing. Recently, some researchers have applied intelligent algorithms to solve conditional nonlinear optimal perturbation (CNOP), which is proposed to study the predictability of numerical weather and climate prediction. The difficulty of solving CNOP with intelligent algorithms is the high dimensionality of complex numerical models. Previous studies have therefore either been tested only in idealized models or have had low time efficiency in complex numerical models, which has limited the application of CNOP. In this paper, we propose a parallel dynamic step size sphere-gap transferring algorithm (DSGT) to solve CNOP in complex numerical models. A dynamic step size factor is designed to speed up convergence of the sphere-gap transferring algorithm. Through singular value decomposition, the original problem is reduced to a low-dimensional space in which the DSGT algorithm searches for the optimal CNOP. Moreover, to accelerate the computation, we parallelize the DSGT method with MPI. To demonstrate its validity, the proposed method has been studied in the Zebiak-Cane model. Experimental results show that the proposed method can efficiently and stably obtain a satisfactory CNOP, and that the parallel version reaches a speedup of 7.18 times with 10 cores.
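
One ingredient the abstract names, SVD-based reduction of the perturbation search space, can be sketched briefly. The sketch below substitutes plain random sampling on the constraint sphere for the actual DSGT search, whose details the abstract does not give; the data and objective are toys.

```python
# Hedged sketch: reduce a high-dimensional perturbation search to a
# low-dimensional one via SVD, then search only in the reduced coordinates.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(500, 2000))        # toy historical perturbation samples
U, s, Vt = np.linalg.svd(samples, full_matrices=False)
basis = Vt[:10]                                # leading 10 right singular vectors

def objective(x):
    """Stand-in for the numerical model's cost (CNOP maximizes growth,
    so we minimize the negative of a growth-like measure here)."""
    return -np.linalg.norm(np.tanh(x))

delta = 0.5                                    # CNOP-style norm constraint
best_val = np.inf
for _ in range(1000):
    c = rng.normal(size=10)
    c *= delta / np.linalg.norm(c)             # stay on the constraint sphere
    x = c @ basis                              # lift back to the full space
    best_val = min(best_val, objective(x))
print(best_val)
```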

Keywords: Algorithm design and analysis; Atmospheric modeling; Computational modeling; Heuristic algorithms; Numerical models; Optimization; Prediction algorithms; CNOP; Zebiak-Cane model; parallel; sphere-gap transferring algorithm (ID#: 15-7977)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336217&isnumber=7336120

 

Liu, Yueming; Zhang, Peng; Qiu, Meikang, "Fast Numerical Evaluation for Symbolic Expressions in Java," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 599-604, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.19

Abstract: Symbolic-numeric computation has been extensively developed in scientific computing for experimenting with mathematics in numerical programs, as in optimization problems and finite element methods. Many software packages and libraries have been developed to support symbolic-numeric computation, especially in recent years. However, most implementations are cumbersome and inefficient at numerically evaluating symbolic expressions. A popular implementation strategy generates C/C++/FORTRAN source code for symbolic expressions and compiles the source files with external compilers; the compiled machine code is then linked back into the symbolic manipulation language environment. This process suffers from slow compilation and significant overhead from external function calls. To address this problem, this paper presents a handy approach that provides fast numerical evaluation of symbolic expressions in Java. In our approach, Java bytecode is generated in memory for symbolic expressions and then Just-In-Time (JIT) compiled to machine code on the Java Virtual Machine (JVM) at runtime. We have developed SymJava (https://github.com/yuemingl/SymJava) to implement our approach and tested a range of benchmark problems. The results show that SymJava is 1~3 orders of magnitude faster than existing implementations including Matlab, Mathematica, Sage, Theano and SymPy. Additionally, SymJava offers a human-friendly programming style for symbolic expressions by overloading operators in Java. Our approach opens up a new avenue for the development of next-generation symbolic-numeric software.
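
The compile-once, evaluate-fast idea behind SymJava is language-agnostic. As a point of reference, here is the analogous pattern in Python using SymPy's lambdify (one of the systems benchmarked against); this illustrates the general technique, not SymJava's bytecode generator.

```python
# Compile a symbolic expression once to a numeric callable, then evaluate it
# over large arrays without walking the expression tree on every call.
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
expr = sp.sin(x) * sp.exp(-y) + x**2 * y

f = sp.lambdify((x, y), expr, modules="numpy")  # compiled once
xs = np.linspace(0.0, 1.0, 1_000_000)
ys = np.linspace(1.0, 2.0, 1_000_000)
vals = f(xs, ys)                                # vectorized numeric evaluation
print(vals[:3])
```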

Keywords: Benchmark testing; Java; Libraries; MATLAB; Mathematics; Runtime; JIT; bytecode; compile; java; numeric; symbolic (ID#: 15-7978)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336223&isnumber=7336120

 

Zhou, Wenhao; Chen, Juan; Wang, Zhiyuan; Xu, Xinhai; Xu, Liyang; Tang, Yuhua, "Time-Dimension Communication Characterization of Representative Scientific Applications on Tianhe-2," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 423-429, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.15

Abstract: Exascale computing is one of the major challenges of this decade, and several studies have shown that communication is becoming one of the bottlenecks for scaling parallel applications. Characteristic analysis of communication is an important means of improving the performance of scientific applications. In this paper, we focus on the statistical regularity of time-dimension communication characteristics of representative scientific applications, and find that the distribution of intervals between communication events has a power-law decay, a pattern widely found in scientific interests and human activities. For a quantitative study of the characteristics of this power-law distribution, we compute two groups of typical measures: burstiness vs. memory and periodicity vs. dispersion. Our analysis shows that the communication events exhibit a "strong-bursty and weak-memory" characteristic, and we also capture the periodicity and dispersion in the interval distribution. All of the quantitative results are verified with eight representative scientific applications on the Tianhe-2 supercomputer, which has a fat-tree-like interconnection network. Finally, our study provides insight into the relationship between communication optimization and time-dimension communication characteristics.
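
The abstract does not define its burstiness and memory measures; assuming the standard formulation (Goh and Barabási's burstiness and memory coefficients), they can be computed from an inter-event interval sequence as follows, shown here on toy heavy-tailed data.

```python
# Burstiness B = (sigma - mu) / (sigma + mu): -1 periodic, 0 Poisson, -> 1 bursty.
# Memory M: Pearson correlation between consecutive intervals.
import numpy as np

rng = np.random.default_rng(2)
intervals = rng.pareto(a=1.5, size=10_000) + 1.0   # heavy-tailed toy intervals

mu, sigma = intervals.mean(), intervals.std()
burstiness = (sigma - mu) / (sigma + mu)

memory = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]

# "Strong-bursty and weak-memory" would show up as B near 1 and M near 0.
print(f"B = {burstiness:.3f}, M = {memory:.3f}")
```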

Keywords: Benchmark testing; Dispersion; High performance computing; Histograms; Libraries; Supercomputers; Power-law distributions; Supercomputing; Tianhe-2; Time-dimension Communication Characterization (ID#: 15-7979)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336198&isnumber=7336120

 

Suresh, N.; Mbale, J.; Terzoli, A.; Mufeti, T.K., "Enhancing Cloud Connectivity Among NRENs in the SADC Region Through a Novel Institution Cloud Infrastructure Framework," in Emerging Trends in Networks and Computer Communications (ETNCC), 2015 International Conference on, pp. 179-184, 17-20 May 2015. doi: 10.1109/ETNCC.2015.7184830

Abstract: It is increasingly recognized that faster socioeconomic development in Africa depends upon the development of Information and Communication Technology (ICT) infrastructure for the dissemination of data and educational services. The scalability and flexibility provided by Cloud services in terms of resource management, service provisioning and virtualization make them attractive for educational and ICT services. The flexibility of pay-as-you-go models, combined with the ability to scale computing, storage and/or networking resources, makes Cloud computing an ideal candidate for education, research and scientific infrastructures. Notwithstanding its benefits, transitioning from a traditional IT infrastructure to a Cloud computing paradigm raises security concerns with respect to data storage, data transmission and user privacy. This paper presents on-going research on the development of Science, Technology and Innovation (STI) infrastructure for the distribution of Information and Communication Technology (ICT) services in the African context. The proposed Inter-Cloud Infrastructure Framework (ICIF) is conceived as a Cloud computing framework suitable for use with National Research and Education Networks (NRENs) in the SADC region. The ICIF system is used to create an Inter-Cloud infrastructure and helps NRENs transition from traditional IT infrastructure systems to the Cloud computing paradigm. It also provides new functional/operational components and Cloud services to support interconnection and/or interoperability among SADC NRENs through the ICIF infrastructure.

Keywords: cloud computing; data privacy; innovation management; virtualisation; Africa; ICIF; ICT infrastructure; NRENs; National Research and Education Networks; SADC region; STI infrastructure; cloud computing; cloud connectivity; data dissemination; data storage; data transmission; educational services; information and communication technology infrastructure; innovation infrastructure; institution cloud infrastructure framework; intercloud infrastructure framework; pay-as-you-go models; resource management; science infrastructure; service provisioning; socioeconomic development; user privacy; virtualization; Collaboration; Computational modeling; Computer architecture; Organizations; Platform as a service; Security; Cloud Computing; Cloud Services; Inter-Cloud Infrastructure (ID#: 15-7980)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7184830&isnumber=7184793

 

Memon, S.; Riedel, M.; Koeritz, C.; Grimshaw, A., "Interoperable Job Execution and Data Access Through UNICORE and the Global Federated File System," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 269-274, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160278

Abstract: Computing middlewares play a vital role in abstracting the complexities of backend resources by providing seamless access to heterogeneous execution management services. Scientific communities take advantage of such technologies to focus on science rather than dealing with the technical intricacies of accessing resources. Multi-disciplinary communities often bring dynamic requirements that are not trivial to realize; a case in point is massively parallel data processing on supercomputing resources, which requires access to large data sets from widely distributed and dynamic sources located across organizational boundaries. To support this scenario, we present a combination that integrates the UNICORE middleware and the Global Federated File System. Furthermore, the paper gives an architectural and implementation perspective on the UNICORE extension and its interaction with the Global Federated File System space through computing, data and security standards.

Keywords: file organisation; information retrieval; middleware; parallel processing; UNICORE middleware; backend resource complexity abstracting; data access; global federated file system; heterogeneous execution management services; interoperable job execution; multidisciplinary community; organizational boundary; parallel data processing; security standards; supercomputing resources; Communities; File systems; Security; Servers; Standards; Web services (ID#: 15-7981)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160278&isnumber=7160221

 

Skalicky, Sam; Lopez, Sonia; Lukowiak, Marcin; Schmidt, Andrew G., "A Parallelizing Matlab Compiler Framework and Run Time for Heterogeneous Systems," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 232-237, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.51

Abstract: Compute-intensive applications place ever-increasing data processing demands on hardware systems. Many of these applications have only recently become feasible thanks to the increasing computing power of modern processors. The Matlab language is uniquely situated to support the description of these compute-intensive scientific applications, and consequently has been continuously improved to provide increasing computational support in the form of multithreading for CPUs and of utilizing accelerators such as GPUs and FPGAs. However, taking advantage of the computational support in these heterogeneous systems requires a wide breadth of knowledge and understanding, from the problem domain down to the computer architecture. In this work, we present a framework for the development of compute-intensive scientific applications in Matlab using heterogeneous processor systems. We investigate systems containing CPUs, GPUs, and FPGAs. We leverage the capabilities of Matlab and supplement them by automating the mapping, scheduling, and parallel code generation. Our experimental results on a set of benchmarks achieved 20x to 60x speedups compared to the standard Matlab CPU environment, with minimal effort required on the part of the user.

Keywords: Data transfer; Field programmable gate arrays; Kernel; MATLAB; Message systems; Processor scheduling; Scheduling; Heterogeneous computing; Matlab; compiler (ID#: 15-7982)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336169&isnumber=7336120

 

Gomez-Folgar, F.; Indalecio, G.; Garcia-Loureiro, A.J.; Pena, T.F., "A Flexible Cluster System for the Management of Virtual Clusters in the Cloud," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1693-1698, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.120

Abstract: Cluster computing is a fundamental tool for supporting enterprise services. It also provides the computing capacity for modelling and simulation research. There have been several initiatives to improve the access of the scientific community to the cluster resources it needs, but some focus on a specific research field or are enterprise-grade solutions. To overcome this situation, and to give system administrators and users the ability to deploy specific Virtual Clusters on demand in the Cloud, we have developed a new tool called Flexible Cluster Manager (FCM). It allows user-selectable cluster configuration packages, and additional software is easy to include by defining its deployment workflow. FCM allows changing the software configuration of the deployed cluster on-line, including support for fixing damaged virtual clusters, i.e., clusters that have damaged or missing nodes. The performance of our tool on commodity hardware is also presented, using serial and parallel deployment of the virtual cluster.

Keywords: Cloud computing; Computer architecture; Databases; Resource management; Software packages; Virtualization; Apache CloudStack; KVM; Virtual clusters; performance (ID#: 15-7983)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336414&isnumber=7336120

 

Zhimin Gao; Desalvo, N.; Pham Dang Khoa; Seung Hun Kim; Lei Xu; Won Woo Ro; Verma, R.M.; Weidong Shi, "Integrity Protection for Big Data Processing with Dynamic Redundancy Computation," in Autonomic Computing (ICAC), 2015 IEEE International Conference on, pp. 159-160, 7-10 July 2015. doi: 10.1109/ICAC.2015.34

Abstract: Big data is a hot topic and has found various applications in areas such as scientific research, financial analysis, and market studies. The development of cloud computing technology provides an adequate platform for big data applications. Whether the cloud is public or private, the outsourcing and sharing characteristics of the computation model make security a big concern for big data processing in the cloud. Most existing works focus on protecting data privacy, while integrity protection of the processing procedure receives little attention; this may lead big data application users to wrong conclusions and cause serious consequences. To address this challenge, we design an integrity protection solution for big data processing in cloud environments using reputation-based redundancy computation. The implementation and experimental results show that the solution adds only limited cost to achieve integrity protection and is practical for real world applications.
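
A minimal sketch of the reputation-based redundancy idea follows, assuming the simplest possible scheme: duplicate each task on two workers, compare result digests, and adjust reputations on agreement or disagreement. The actual system's MapReduce scheduling details are omitted and all values are invented.

```python
# Toy redundancy computation with reputation tracking. Illustrative only.
import hashlib
import random

reputation = {"w1": 0.9, "w2": 0.9, "w3": 0.5}

def run_on(worker: str, payload: str) -> str:
    out = payload.upper()                     # stand-in for the real computation
    if worker == "w3" and random.random() < 0.5:
        out += "!"                            # a faulty or cheating worker
    return hashlib.sha256(out.encode()).hexdigest()

def verified_run(payload: str) -> None:
    a, b = random.sample(list(reputation), 2)  # duplicate the task on two workers
    if run_on(a, payload) == run_on(b, payload):
        for w in (a, b):                       # agreement: mild reward
            reputation[w] = min(1.0, reputation[w] + 0.01)
    else:                                      # disagreement: integrity suspect
        for w in (a, b):
            reputation[w] = max(0.0, reputation[w] - 0.1)

for _ in range(100):
    verified_run("record-42")
print(reputation)  # the misbehaving worker's reputation drifts down over time
```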

Keywords: Big Data; cloud computing; data integrity; data privacy; Big Data processing; cloud computing technology; dynamic redundancy computation; integrity protection solution; reputation based redundancy computation; Conferences; MapReduce; cloud computing; integrity protection (ID#: 15-7984)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266957&isnumber=7266915

 

Yount, Charles, "Vector Folding: Improving Stencil Performance via Multi-dimensional SIMD-vector Representation," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 865-870, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.27

Abstract: Stencil computation is an important class of algorithms used in a large variety of scientific-simulation applications. Modern CPUs are employing increasingly longer SIMD vector registers and operations to improve computational throughput. However, the traditional use of vectors to contain sequential data elements along one dimension is not always the most efficient representation, especially in the multicore and hyper-threaded context where caches are shared among many simultaneous compute streams. This paper presents a general technique for representing data in vectors for 2D and 3D stencils. This method reduces the number of memory accesses required by storing a small multi-dimensional block of data in each vector compared to the single dimension in the traditional approach. Experiments on an Intel Xeon Phi Coprocessor show performance speedups over traditional vectors ranging from 1.2x to 2.7x, depending on the problem size and stencil type. This technique is independent of and complementary to a variety of existing stencil-computation tuning algorithms such as cache blocking, loop tiling, and wavefront parallelization.
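
The memory-access argument can be made concrete by counting how many distinct aligned vector blocks a stencil footprint touches per output vector under different foldings of an 8-element vector. The Python sketch below does that layout arithmetic; the counts are illustrative, not measurements from the paper.

```python
# Count distinct aligned (vy x vx) blocks read to compute one output block of
# a dense (2*radius+1)^2 2D stencil. Fewer touched blocks ~ fewer vector loads.
from itertools import product

def blocks_touched(vy: int, vx: int, radius: int) -> int:
    touched = set()
    for y, x in product(range(vy), range(vx)):              # each output point
        for dy, dx in product(range(-radius, radius + 1), repeat=2):
            touched.add(((y + dy) // vy, (x + dx) // vx))   # aligned block hit
    return len(touched)

for shape in [(1, 8), (2, 4), (4, 2)]:   # 8-wide vectors, different foldings
    print(shape, blocks_touched(*shape, radius=2))   # e.g. (1,8) -> 15, (2,4) -> 9
```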

Keywords: Jacobian matrices; Layout; Memory management; Registers; Shape; Three-dimensional displays; Intel; SIMD; Xeon Phi; high-performance computing; stencil; vector folding; vectorization (ID#: 15-7985)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336272&isnumber=7336120

 

Sasidharan, Aparna; Dennis, John M.; Snir, Marc, "A General Space-filling Curve Algorithm for Partitioning 2D Meshes," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 875-879, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.192

Abstract: This paper describes a recursive algorithm for constructing a general Space-Filling Curve (SFC) for an arbitrary distribution of points in 2D. We use the SFC to partition 2D meshes, both structured and unstructured, and compare the quality of partitions with traditional SFCs and the multilevel partitioning schemes of Metis and Scotch. The algorithm is independent of the geometry of the mesh and can be easily adapted to irregular meshes. We discuss the advantages of SFCs over multilevel partitioners for meshes in scientific simulations. We define three performance metrics for a reasonable comparison of partitions: volume, or load per partition; degree, or the number of distinct edges of a partition in the communication graph; and communication volume, or the sum of the weights of outgoing edges for each partition in the communication graph. We propose a performance model for modern architectures using these metrics. We find our partitions comparable to and in some cases better than the best multilevel partitions, while being computed much faster. Unlike Metis, our hierarchical approach yields good hierarchical partitions (e.g., for partitioning to node and core level), and is appropriate for adaptive mesh refinement kernels.
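
The general SFC-partitioning idea (not the paper's recursive algorithm, which is not reproduced here) can be sketched as: order the points along a Z-order (Morton) curve, then cut the ordered list into equal-sized contiguous segments.

```python
# Partition arbitrary 2D points by sorting along a Morton (Z-order) curve.
import numpy as np

def morton_key(ix: int, iy: int, bits: int = 16) -> int:
    """Interleave the bits of integer grid coordinates."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

rng = np.random.default_rng(3)
pts = rng.random((1000, 2))                      # arbitrary 2D point distribution
grid = (pts * (2**16 - 1)).astype(int)           # quantize to a 16-bit grid
order = sorted(range(len(pts)), key=lambda i: morton_key(*grid[i]))

nparts = 8
parts = np.array_split(np.array(order), nparts)  # contiguous curve segments = partitions
print([len(p) for p in parts])                   # balanced loads by construction
```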

Keywords: Adaptation models; Computer science; Electronic mail; Load modeling; Measurement; Partitioning algorithms; Shape; Geometric Partitioning; Mesh Partitioning; Metis; Scotch; Space-filling Curve (ID#: 15-7986)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336274&isnumber=7336120

 

Gulhane, S.; Bodkhe, S., "DDAS Using Kerberos with Adaptive Huffman Coding to Enhance Data Retrieval Speed and Security," in Pervasive Computing (ICPC), 2015 International Conference on, pp. 1-6, 8-10 Jan. 2015. doi: 10.1109/PERVASIVE.2015.7086987

Abstract: Deploying applications over the web, and storing and retrieving their databases to and from particular servers, is an increasing trend. Because the data is stored in a distributed manner, scalability, flexibility, reliability and security are important aspects to consider when establishing a data management system. There are several systems for database management. A review of the Distributed Data Aggregation Service (DDAS), which relies on BlobSeer, found that it provides high performance in aspects such as storing data as BLOBs (binary large objects) and data aggregation; BlobSeer can also serve as a repository backend for complicated analysis and instinctive mining of scientific data. WS-Aggregation is another framework; it is presented as a web service but actually carries out aggregation of data, providing clients with a single-site interface for executing multi-site queries. Simple Storage Service (S3) is another type of storage utility, providing an always-available and low-cost service. Kerberos is a method that provides secure authentication, as only authorized clients are able to access the distributed database. Kerberos consists of four steps: authentication key exchange, ticket-granting-service key exchange, client/server service exchange, and building secure communication. Adaptive Huffman coding (also referred to as dynamic Huffman coding) is an adaptive coding technique based on Huffman coding. It permits compression and decompression of data and builds the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which enables one-pass coding and adaptation to changing conditions in the data.
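
For grounding, here is a minimal sketch of the underlying static (two-pass) Huffman code construction; the adaptive FGK variant the paper uses updates this tree symbol by symbol, which is what removes the need for a first counting pass over the data. The sketch is illustrative and not from the paper.

```python
# Classic Huffman code construction via a min-heap of subtrees.
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a prefix code; frequent symbols get shorter codewords."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least-frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:        # left branch: prepend a 0
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:        # right branch: prepend a 1
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

codes = huffman_codes("abracadabra")
print(codes)                                      # e.g. 'a' gets the shortest code
print("".join(codes[c] for c in "abracadabra"))   # the static scheme needs this
# first frequency pass; the adaptive variant avoids it by updating the tree online.
```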

Keywords: Huffman codes; Web services; cryptography; data mining; distributed databases; query processing; Blob; Blobseer; DDAS; Kerberos; WS-Aggregation; Web services; adaptive Huffman coding; authentication key exchange; binary large objects; client-server service exchange; data aggregation; data management system; data retrieval security; data retrieval speed; data storage; distributed data aggregation service system; distributed database; dynamic Huffman method; instinctive scientific data mining; multisite queries; one-pass cryptography; secure communication; Authentication; Catalogs; Distributed databases; Memory; Servers; XML; adaptive huffman method; blobseer; distributed database; kerberos; simple storage service; ws aggregation (ID#: 15-7987)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086987&isnumber=7086957

 

Elmore, R.A.; Charlton, W.S., "Nonproliferation Informatics: Employing Bayesian Analysis, Agent Based Modeling, And Information Theory For Dynamic Proliferation Pathway Studies," in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, pp. 43-48, 27-29 May 2015. doi: 10.1109/ISI.2015.7165937

Abstract: Decision making on weapons of mass effect (WME) proliferation and counter-proliferation is information driven. However, the large data requirements, along with the associated knowledge gaps and intelligence uncertainties, impede optimal strategy selection. Combining Bayesian analysis, agent based modeling (ABM), and information theory within a security informatics context can aid understanding of dynamic WME proliferation and counter-proliferation pathways and possibilities. The Bayesian ABM Nonproliferation Enterprise (BANE) was developed to incorporate large databases and information sets. There are three broad BANE agent classes: 1) proliferator, 2) defensive, and 3) neutral. Within each agent class there is significant flexibility for pursuing different objectives. Bayesian analysis covers the technical linkages realistically tying proliferation pathway process steps together. In BANE, Bayesian networks built with the Netica software program provide a wide array of scientific and engineering pathway options. Information theory, especially entropy reduction and mutual information, helps identify, in a Bayesian security informatics arrangement, the optimal technical areas to master or disrupt. Concurrently, interlocking factors such as available resources, technical sophistication, time horizons, detection risks, and agent affinities impact agents' ability to achieve their goals. Actions taken by one BANE agent on the proliferation or counter-proliferation front affect its future opportunities and those of potential partner or adversarial agents. An explanation of the BANE framework and several key security informatics aspects crucial to WME proliferation and counter-proliferation analysis are provided.
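
The entropy-reduction idea can be illustrated with a worked toy: compute the mutual information between a hidden proliferation step and an observable from their joint distribution, and prefer observables with higher values. The 2x2 joint distribution below is invented for illustration.

```python
# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y) for a toy joint distribution.
import numpy as np

# P(step pursued, signature observed) - invented numbers
joint = np.array([[0.45, 0.05],    # step not pursued
                  [0.10, 0.40]])   # step pursued

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

px = joint.sum(axis=1)             # hidden-step marginal
py = joint.sum(axis=0)             # observable marginal
mi = entropy(px) + entropy(py) - entropy(joint.ravel())
print(f"I(step; observable) = {mi:.3f} bits")  # higher = more informative observable
```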

Keywords: belief networks; decision making; military computing; multi-agent systems; security of data; weapons; BANE; Bayesian ABM nonproliferation enterprise; Bayesian analysis; Bayesian network; Bayesian security informatics; Netica software program; WME proliferation; agent based modelling; decision making; dynamic proliferation pathway; information theory; nonproliferation informatics; weapons of mass effect; Bayes methods; Databases; Decision making; Informatics; Information theory; Security; Uncertainty; Agent Based Modeling; Bayesian Analysis; Information Theory; Intelligence Informatics; Nonproliferation; Nuclear (ID#: 15-7988)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165937&isnumber=7165923

 

Zhicong Huang; Ayday, E.; Fellay, J.; Hubaux, J.-P.; Juels, A., "GenoGuard: Protecting Genomic Data against Brute-Force Attacks," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 447-462, 17-21 May 2015. doi: 10.1109/SP.2015.34

Abstract: Secure storage of genomic data is of great and increasing importance. The scientific community's improving ability to interpret individuals' genetic materials and the growing size of genetic database populations have been aggravating the potential consequences of data breaches. The prevalent use of passwords to generate encryption keys thus poses an especially serious problem when applied to genetic data. Weak passwords can jeopardize genetic data in the short term, but given the multi-decade lifespan of genetic data, even the use of strong passwords with conventional encryption can lead to compromise. We present a tool, called GenoGuard, for providing strong protection for genomic data both today and in the long term. GenoGuard incorporates a new theoretical framework for encryption called honey encryption (HE), which can provide information-theoretic confidentiality guarantees for encrypted data. Previously proposed HE schemes, however, can be applied only to messages from a very restricted set of probability distributions. GenoGuard therefore addresses the open problem of applying HE techniques to the highly non-uniform probability distributions that characterize sequences of genetic data. In GenoGuard, a potential adversary can attempt exhaustively to guess keys or passwords and decrypt via a brute-force attack. We prove that decryption under any key will yield a plausible genome sequence, and that GenoGuard offers an information-theoretic security guarantee against message-recovery attacks. We also explore attacks that use side information. Finally, we present an efficient and parallelized software implementation of GenoGuard.
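
A toy of the honey-encryption idea (not GenoGuard's actual distribution-transforming encoder): the ciphertext decrypts under any password to a seed, and the seed is decoded through a plausible sequence model, so wrong guesses still yield plausible-looking genomes.

```python
# Toy honey encryption: every password yields *some* plausible base sequence.
import hashlib
import random

def kdf(password: str) -> int:
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")

def decode_seed(seed: int, length: int = 20) -> str:
    """Distribution-transforming decode: seed -> plausible base sequence
    (weights are invented, standing in for a real genome model)."""
    rng = random.Random(seed)
    return "".join(rng.choices("ACGT", weights=[3, 2, 2, 3], k=length))

real_seed = random.getrandbits(256)
ct = real_seed ^ kdf("correct horse")           # "encrypt" the seed

print(decode_seed(real_seed))                    # the stored (toy) genome
print(decode_seed(ct ^ kdf("correct horse")))    # correct password recovers it
print(decode_seed(ct ^ kdf("wrong guess")))      # wrong guess: plausible decoy
```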

Keywords: biology computing; cryptography; data privacy; genetics; statistical distributions; storage management; GenoGuard; HE; brute-force attacks; data breaches; encryption keys; genetic database populations; genetic materials; genomic data protection; honey encryption; information-theoretic confidentiality; parallelized software implementation; passwords; probability distributions; storage security; Bioinformatics; Encoding; Encryption; Genomics; brute-force attack; distribution-transforming encoder; genomic privacy; honey encryption (ID#: 15-7989)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163041&isnumber=7163005

 

Djellalbia, Amina; Benmeziane, Souad; Badache, Nadjib; Bensimessaoud, Sihem, "An Adaptive Anonymous Authentication for Cloud Environment," in Cloud Technologies and Applications (CloudTech), 2015 International Conference on, pp. 1-8, 2-4 June 2015. doi: 10.1109/CloudTech.2015.7337010

Abstract: Preserving identity privacy is a significant challenge for security in cloud services; indeed, an important barrier to the adoption of cloud services is user fear of privacy loss in the cloud. One interesting issue from a privacy perspective is hiding a user's usage behavior or meta-information, which includes access patterns and frequencies when accessing services. Users may not want the cloud provider to learn which resources they access and how often they use a service; making users anonymous prevents this. In this paper, we propose an adaptive and flexible approach to protecting identity privacy through an anonymous authentication scheme.

Keywords: Authentication; Biological system modeling; Cloud computing; Computational modeling; Data privacy; Privacy; Anonymity; Authentication; Blind signature; Cloud environment; Onion Routing; Privacy; Security (ID#: 15-7990)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7337010&isnumber=7336956

 

Hauger, Werner K.; Olivier, Martin S., "The State of Database Forensic Research," in Information Security for South Africa (ISSA), 2015, pp. 1-8, 12-13 Aug. 2015. doi: 10.1109/ISSA.2015.7335071

Abstract: A sentiment that is quite often encountered in database forensic research material is the scarcity of scientific research in this vital area of digital forensics. Databases have been around for many years in the digital space and have moved from being exclusively used in specialised applications of big corporations to becoming a means to an end in even the simplest end-user applications. Newer disciplines such as cloud forensics seem to be producing a far greater volume of new research material than database forensics. This paper firstly investigates the validity of the expressed sentiment. It also attempts to establish possible reasons for the apparent lack of research in this area. A survey was conducted of scientific research material that was published after an initial assessment was performed in 2009. The gathered database forensic material was compared to scientific material published in the same period in the cloud forensic discipline. The survey indicated that the pace of research into database forensics has increased since the 2009 paper. However, the area of cloud forensics has produced twice the amount of new research in the same time period. The factors that made cloud forensics an attractive research area are either not applicable to database forensics or no longer play a significant role. This would explain the lesser interest in performing research in database forensics.

Keywords: Cloud computing; Computers; Database systems; Digital forensics; Google; database forensics; scientific research; survey (ID#: 15-7991)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335071&isnumber=7335039

 

Malyuk, A.; Miloslavskaya, N., "Information Security Theory for the Future Internet," in Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, pp. 150-157, 24-26 Aug. 2015. doi: 10.1109/FiCloud.2015.12

Abstract: The Future Internet, with the Internet of Things (IoT) and clouds as its integral parts, needs a specialized theory for information protection from different threats and intruders. The history and main results of research aimed at creating a scientific and methodological foundation for the Information Security Theory in Russia are examined. The discussion considers the formulation of the informal systems theory and approaches for creating simulation models of information security (IS) maintenance (ISM) processes under conditions of incomplete and insufficiently reliable input data. The structure of a unified IS concept is proposed. Theoretical problems of designing an integrated information protection system are described, including IS assessment methodology, methodology for defining ISM requirements, and methodology for creating information protection systems (IPSs). Finally, the results of the IS theory development are summarized and areas of further research are outlined.

Keywords: Internet of Things; security of data; IPSs; IS assessment methodology; IS maintenance; ISM; Internet of Things; IoT; future Internet; informal systems theory; information protection systems; information security maintenance; information security theory; integrated information protection system; simulation models; Analytical models; Cloud computing; Data models; IP networks; Information security; Future Internet security; Internet of Things security; cloud security; information protection systems; information security concept; information security theory (ID#: 15-7992)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300812&isnumber=7300539

 

Zhenyu Wen; Cala, J.; Watson, P.; Romanovsky, A., "Cost Effective, Reliable, and Secure Workflow Deployment over Federated Clouds," in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, pp. 604-612, June 27 2015-July 2 2015. doi: 10.1109/CLOUD.2015.86

Abstract: The federation of clouds can provide benefits for cloud-based applications. Different clouds have different advantages - one might be more reliable whilst another might be more secure or less expensive. However, being able to select the best combination of clouds to meet the application requirements is not trivial. This paper presents a novel algorithm to deploy workflow applications on federated clouds. Firstly, we introduce an entropy-based method to quantify the most reliable workflow deployments. Secondly, we apply an extension of the Bell-LaPadula Multi-Level security model to meet application security requirements. Finally, we optimise deployment in terms of its entropy and also its monetary cost, taking into account the price of computing power, data storage and inter-cloud communication. To evaluate the new algorithm we compared it against two existing scheduling algorithms: Dynamic Constraint Algorithm (DCA) and Biobjective dynamic level scheduling (BDLS). We show that our algorithm can find deployments that are of equivalent reliability, but are less expensive and also meet security requirements. We have validated our solution using workflows implemented in the e-Science Central cloud-based data analysis system.
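
The selection problem the abstract describes can be sketched as a small search: assign tasks to clouds subject to a Bell-LaPadula-style clearance constraint, then take the cheapest assignment whose reliability clears a floor. This is a hedged illustration, not the paper's entropy-based algorithm; all numbers are invented.

```python
# Enumerate task-to-cloud assignments, enforce security levels, pick min cost
# among sufficiently reliable deployments. Toy values throughout.
from itertools import product

clouds = {                      # clearance level, per-task cost, reliability
    "A": (2, 1.0, 0.99),
    "B": (1, 0.4, 0.95),
    "C": (0, 0.1, 0.90),
}
tasks = [("ingest", 0), ("analyse", 1), ("report", 2)]   # (name, security level)

best = None
for assign in product(clouds, repeat=len(tasks)):
    if any(clouds[c][0] < lvl for c, (_, lvl) in zip(assign, tasks)):
        continue                              # cloud clearance must cover the task
    cost = sum(clouds[c][1] for c in assign)
    rel = 1.0
    for c in assign:                          # reliability of independent nodes
        rel *= clouds[c][2]
    if rel >= 0.90 and (best is None or cost < best[0]):
        best = (cost, rel, assign)
print(best)                                   # cheapest secure, reliable deployment
```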

Keywords: business data processing; cloud computing; costing; data analysis; scheduling; scientific information systems; security of data; BDLS; Bell-LaPadula multilevel security model; DCA; application requirements; biobjective dynamic level scheduling; cloud-based applications; computing power; cost effective workflow deployment; data storage; dynamic constraint algorithm; e-Science central cloud-based data analysis system; federated clouds; intercloud communication; monetary cost; reliable workflow deployment; scheduling algorithm; secure workflow deployment; security requirements; workflow applications; Algorithm design and analysis; Cloud computing; Computational modeling; Entropy; Optimization; Reliability; Security; Cloud Computing; Cost; Reliability; Scheduling; Security; Workflow (ID#: 15-7993)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7214096&isnumber=7212169

Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.