Attribution (2014 Year in Review) Part 2

 

 


 

Attribution of the source of an attack or the author of malware is a continuing problem in computer forensics. The research presented here, all published in 2014, addresses a number of issues in both contexts.

 

Yun Shen; Thonnard, O., "MR-TRIAGE: Scalable Multi-Criteria Clustering For Big Data Security Intelligence Applications," Big Data (Big Data), 2014 IEEE International Conference on, pp. 627-635, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004285

Abstract: Security companies have recently realised that mining massive amounts of security data can help generate actionable intelligence and improve their understanding of Internet attacks. In particular, attack attribution and situational understanding are considered critical aspects to effectively deal with emerging, increasingly sophisticated Internet attacks. This requires highly scalable analysis tools to help analysts classify, correlate and prioritise security events, depending on their likely impact and threat level. However, this security data mining process typically involves a considerable amount of features interacting in a non-obvious way, which makes it inherently complex. To deal with this challenge, we introduce MR-TRIAGE, a set of distributed algorithms built on MapReduce that can perform scalable multi-criteria data clustering on large security data sets and identify complex relationships hidden in massive datasets. The MR-TRIAGE workflow is made of a scalable data summarisation, followed by scalable graph clustering algorithms in which we integrate multi-criteria evaluation techniques. Theoretical computational complexity of the proposed parallel algorithms are discussed and analysed. The experimental results demonstrate that the algorithms can scale well and efficiently process large security datasets on commodity hardware. Our approach can effectively cluster any type of security events (e.g., spam emails, spear-phishing attacks, etc) that are sharing at least some commonalities among a number of predefined features.

Keywords: Big Data; computer crime; data mining; graph theory; parallel algorithms; pattern clustering; Big Data security intelligence applications; Internet attacks; MR-TRIAGE workflow; MapReduce; attack attribution; commodity hardware; computational complexity; distributed algorithms; large security data sets; large security datasets; multicriteria evaluation techniques; parallel algorithms; scalable data summarisation; scalable graph clustering algorithms; scalable multicriteria data clustering; security companies; security data mining; security events; situational understanding; threat level; Algorithm design and analysis; Clustering algorithms; Data mining; Electronic mail; Open wireless architecture; Prototypes; Security   (ID#:15-3994)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004285&isnumber=7004197
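
The multi-criteria clustering idea behind MR-TRIAGE can be illustrated with a small, single-machine sketch: per-feature similarities between security events are combined with weights, and events linked above a threshold are grouped into clusters. This is not the paper's MapReduce implementation; the event fields, weights and threshold below are illustrative assumptions.

```python
# Minimal sketch of multi-criteria clustering of security events, assuming a
# simple weighted aggregation of per-feature similarities rather than the
# paper's distributed MapReduce workflow. Event fields and weights are illustrative.

from itertools import combinations

def feature_similarity(a, b):
    """Jaccard similarity between two sets of feature values."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def aggregate_similarity(e1, e2, weights):
    """Weighted combination of per-criterion similarities."""
    return sum(w * feature_similarity(e1[f], e2[f]) for f, w in weights.items())

def cluster_events(events, weights, threshold=0.5):
    """Link events whose aggregated similarity exceeds the threshold and
    return connected components (a stand-in for the graph clustering step)."""
    parent = list(range(len(events)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in combinations(range(len(events)), 2):
        if aggregate_similarity(events[i], events[j], weights) >= threshold:
            union(i, j)

    clusters = {}
    for i in range(len(events)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Illustrative spam-like events: sender infrastructure, subject tokens, URLs.
events = [
    {"ips": {"1.2.3.4"}, "subjects": {"invoice", "urgent"}, "urls": {"evil.example"}},
    {"ips": {"1.2.3.4"}, "subjects": {"invoice"}, "urls": {"evil.example"}},
    {"ips": {"9.9.9.9"}, "subjects": {"lottery"}, "urls": {"win.example"}},
]
weights = {"ips": 0.4, "subjects": 0.3, "urls": 0.3}
print(cluster_events(events, weights))  # events 0 and 1 group together
```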

 

Jing Li; Ming Chen, "On-Road Multiple Obstacles Detection in Dynamical Background," Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2014 Sixth International Conference on, vol. 1, pp. 102-105, 26-27 Aug. 2014. doi: 10.1109/IHMSC.2014.33

Abstract: In this paper, we focus on both road vehicle and pedestrian detection, namely obstacle detection. At the same time, a new obstacle detection and classification technique in a dynamical background is proposed. Obstacle detection is based on inverse perspective mapping and homography. Obstacle classification is based on a fuzzy neural network. The estimation of the vanishing point relies on a feature extraction strategy, which segments the lane markings of the images by combining a histogram-based segmentation with temporal filtering. Then, the vanishing point of each image is stabilized by means of a temporal filtering along the estimates of previous images. The IPM image is computed based on the stabilized vanishing point. The method exploits the geometrical relations between the elements in the scene so that obstacles can be detected. The estimated homography of the road plane between successive images is used for image alignment. A new fuzzy decision fusion method with fuzzy attribution for obstacle detection and classification is described. The fuzzy decision function modifies parameters with an auto-adapted algorithm to get better classification probability. It is shown that the method can achieve better classification results.

Keywords: fuzzy neural nets; image classification; object detection; pedestrians; IPM image; auto-adapted algorithm; dynamical background; feature extraction strategy; fuzzy attribution; fuzzy decision function; fuzzy decision fusion method; fuzzy neural network; histogram-based segmentation; homography; image alignment; inverse perspective mapping; lane markings; obstacle classification probability; on-road multiple obstacle detection; pedestrians detection; road plane; road vehicle; stabilized vanishing point; temporal filtering; Cameras; Computer vision; Feature extraction; Fuzzy neural networks; Radar; Roads; Vehicles; Inverse perspective mapping; fuzzy neural network; homography; image alignment   (ID#:15-3995)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6917316&isnumber=6917283
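
The inverse-perspective-mapping step the abstract describes can be sketched with OpenCV's homography utilities. The four road-plane correspondences below are placeholders; in the paper they would follow from the stabilized vanishing point rather than from hand-picked points.

```python
# Minimal sketch of inverse perspective mapping (IPM) with a homography,
# assuming the four road-plane correspondences are already known. Point
# values here are placeholders, not calibrated data.

import cv2
import numpy as np

def inverse_perspective_map(frame, src_pts, dst_size=(400, 600)):
    """Warp a camera frame to a bird's-eye view of the road plane."""
    w, h = dst_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, dst_size)

# Placeholder trapezoid roughly bounding the lane ahead in the image
# (bottom-left, bottom-right, top-right, top-left).
src_pts = [(100, 470), (540, 470), (380, 300), (260, 300)]
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
birds_eye = inverse_perspective_map(frame, src_pts)
print(birds_eye.shape)  # (600, 400, 3)
```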

 

André, N.S.; Louchet, H.; Habel, K.; Richter, A., "Analytical Formulation for SNR Prediction in DMDD OFDM-Based Access Systems," Photonics Technology Letters, IEEE, vol. 26, no. 12, pp. 1255-1258, June 15, 2014. doi: 10.1109/LPT.2014.2320825

Abstract: In multicarrier direct modulation direct detection systems, interaction between laser chirp and fiber group velocity dispersion induces subcarrier-to-subcarrier intermixing interferences (SSII) after detection. Such SSII become a major impairment in orthogonal frequency division multiplexing-based access systems, where a high modulation index, leading to large chirp, is required to maximize the system power budget. In this letter, we present and experimentally verify an analytical formulation to predict the level of signal and SSII and estimate the signal to noise ratio of each subcarrier, enabling improved bit-and-power loading and subcarrier attribution. The reported model is compact, and only requires the knowledge of basic link characteristics and laser parameters that can easily be measured.

Keywords: OFDM modulation; chirp modulation; optical fibre communication; optical fibre dispersion; DMDD OFDM-based access system; SNR prediction; SSII; fiber group velocity dispersion; high modulation index; improved bit-and-power loading; laser chirp; multicarrier direct modulation direct detection system; orthogonal frequency division multiplexing-based access system; subcarrier attribution; subcarrier-to-subcarrier intermixing interference; Chirp; Frequency modulation; Laser modes; OFDM; Optical fibers; Signal to noise ratio; Chirp; OFDM; chromatic dispersion; intensity modulation; optical fiber communication (ID#:15-3996)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807719&isnumber=6814330
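
The letter's analytical model itself is not reproduced in the abstract, but the way a per-subcarrier SNR estimate feeds bit-and-power loading can be sketched as follows, assuming the signal, SSII and noise powers per subcarrier are already predicted; all numbers below are illustrative placeholders.

```python
# Minimal sketch of driving bit loading from a per-subcarrier SNR estimate,
# assuming SNR_k = P_signal,k / (P_SSII,k + P_noise,k). The analytical model
# that would supply these powers is not reproduced here.

import numpy as np

def snr_per_subcarrier(p_signal, p_ssii, p_noise):
    """SNR_k = P_signal,k / (P_SSII,k + P_noise,k), returned in dB."""
    snr_lin = p_signal / (p_ssii + p_noise)
    return 10 * np.log10(snr_lin)

def bits_per_subcarrier(snr_db, gap_db=9.8):
    """Rough bit loading: floor(log2(1 + SNR/Gamma)) with an SNR gap."""
    snr_lin = 10 ** ((snr_db - gap_db) / 10)
    return np.floor(np.log2(1 + snr_lin)).astype(int)

p_signal = np.array([1e-3, 8e-4, 5e-4, 2e-4])   # placeholder per-subcarrier powers
p_ssii   = np.array([1e-6, 5e-6, 2e-5, 6e-5])
p_noise  = np.full(4, 1e-6)

snr_db = snr_per_subcarrier(p_signal, p_ssii, p_noise)
print(snr_db, bits_per_subcarrier(snr_db))
```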

 

Khosmood, F.; Nico, P.L.; Woolery, J., "User Identification Through Command History Analysis," Computational Intelligence in Cyber Security (CICS), 2014 IEEE Symposium on, pp. 1-7, 9-12 Dec. 2014. doi: 10.1109/CICYBS.2014.7013363

Abstract: As any veteran of the editor wars can attest, Unix users can be fiercely and irrationally attached to the commands they use and the manner in which they use them. In this work, we investigate the problem of identifying users out of a large set of candidates (25-97) through their command-line histories. Using standard algorithms and feature sets inspired by natural language authorship attribution literature, we demonstrate conclusively that individual users can be identified with a high degree of accuracy through their command-line behavior. Further, we report on the best performing feature combinations, from the many thousands that are possible, both in terms of accuracy and generality. We validate our work by experimenting on three user corpora comprising data gathered over three decades at three distinct locations. These are the Greenberg user profile corpus (168 users), Schonlau masquerading corpus (50 users) and Cal Poly command history corpus (97 users). The first two are well known corpora published in 1991 and 2001 respectively. The last is developed by the authors in a year-long study in 2014 and represents the most recent corpus of its kind. For a 50 user configuration, we find feature sets that can successfully identify users with over 90% accuracy on the Cal Poly, Greenberg and one variant of the Schonlau corpus, and over 87% on the other Schonlau variant.

Keywords: Unix; information analysis; learning (artificial intelligence); natural language processing; Cal Poly command history corpus; Schonlau corpus; Schonlau masquerading corpus; Schonlau variant; Unix user; command history analysis; command-line behavior; command-line history; editor war; feature set; natural language authorship attribution literature; standard algorithm; user configuration; user corpora; user identification; user profile corpus; Accuracy; Computer science; Decision trees; Entropy; Feature extraction; History; Semantics (ID#:15-3997)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013363&isnumber=7013356
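
A minimal sketch of the authorship-attribution setup the paper describes, assuming simple command n-gram counts and a stock classifier; the paper itself evaluates many richer feature combinations and far larger corpora.

```python
# Minimal sketch of command-history user identification, assuming command
# uni/bi-gram counts as features; data below is a toy stand-in for the corpora
# used in the paper.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy command histories (one string of whitespace-separated commands per session).
sessions = [
    "ls cd vim make gcc gdb ls make",
    "vim make make gdb ls cd vim",
    "emacs latex bibtex latex dvips lpr",
    "latex emacs bibtex emacs latex",
]
users = ["alice", "alice", "bob", "bob"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),  # command n-grams
    MultinomialNB(),
)
model.fit(sessions, users)
print(model.predict(["cd ls vim make gdb"]))  # should attribute to 'alice'
```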

 

Skoberne, N.; Maennel, O.; Phillips, I.; Bush, R.; Zorz, J.; Ciglaric, M., "IPv4 Address Sharing Mechanism Classification and Tradeoff Analysis," Networking, IEEE/ACM Transactions on, vol. 22, no. 2, pp. 391-404, April 2014. doi: 10.1109/TNET.2013.2256147

Abstract: The growth of the Internet has made IPv4 addresses a scarce resource. Due to slow IPv6 deployment, IANA-level IPv4 address exhaustion was reached before the world could transition to an IPv6-only Internet. The continuing need for IPv4 reachability will only be supported by IPv4 address sharing. This paper reviews ISP-level address sharing mechanisms, which allow Internet service providers to connect multiple customers who share a single IPv4 address. Some mechanisms come with severe and unpredicted consequences, and all of them come with tradeoffs. We propose a novel classification, which we apply to existing mechanisms such as NAT444 and DS-Lite and proposals such as 4rd, MAP, etc. Our tradeoff analysis reveals insights into many problems including: abuse attribution, performance degradation, address and port usage efficiency, direct intercustomer communication, and availability.

Keywords: IP networks; Internet; DS-Lite; IANA-level IPv4 address exhaustion; IPv4 address sharing mechanism classification; IPv4 reachability; IPv6 deployment; IPv6-only Internet; ISP-level address sharing mechanisms; Internet service providers; NAT444; abuse attribution; address efficiency; direct intercustomer communication; performance degradation; port usage efficiency; Address family translation; IPv4 address sharing; IPv6 transition; address plus port (A+P); carrier grade NAT (CGN); network address translation (NAT) (ID#:15-3998)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6504560&isnumber=6799946

 

Pandey, A.K.; Agrawal, C.P., "Analytical Network Process based Model To Estimate The Quality Of Software Components," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp. 678-682, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781361

Abstract: Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends upon the selection of appropriate and easy-to-fit components according to the needs of the customer. It is a primary requirement to evaluate the quality of components before using them in the final software application system. All quality characteristics may not be of the same significance for a particular software application of a specific domain. Therefore, it is necessary to identify only those characteristics/sub-characteristics which may have higher importance than the others. The Analytical Network Process (ANP) is used to solve decision problems where the attributes of decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics/sub-characteristics of quality and to estimate the numeric value of software quality.

Keywords: analytic hierarchy process; decision theory; object-oriented programming; software quality; ANP based model; analytical network process based model; decision parameter attribution; decision problem;  dependency networks; final software application system; software component quality estimation; software quality numeric value estimation; software units; Interoperability; Measurement; Software reliability; Stability analysis; Usability; ANP; Software component; prioritization and software application; quality   (ID#:15-3998)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781361&isnumber=6781240
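
The ANP step that turns a network of interdependencies into global priorities can be sketched with a limit supermatrix computed by repeated squaring. The small column-stochastic matrix below is an illustrative stand-in for pairwise-comparison results, not data from the paper.

```python
# Minimal sketch of the ANP limit-supermatrix step that yields priorities for
# quality characteristics, assuming a small, already column-stochastic
# weighted supermatrix; the values are illustrative only.

import numpy as np

def limit_priorities(supermatrix, tol=1e-9, max_iter=1000):
    """Raise the column-stochastic supermatrix to successive powers until it
    converges; any column of the limit matrix gives the global priorities."""
    m = np.array(supermatrix, dtype=float)
    for _ in range(max_iter):
        nxt = m @ m
        if np.max(np.abs(nxt - m)) < tol:
            break
        m = nxt
    return m[:, 0]

# Illustrative interdependencies among three sub-characteristics
# (e.g. reliability, usability, interoperability); columns sum to 1.
W = [
    [0.0, 0.6, 0.3],
    [0.5, 0.0, 0.7],
    [0.5, 0.4, 0.0],
]
print(limit_priorities(W))  # global priority vector of the three sub-characteristics
```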

 

Biswas, A.R.; Giaffreda, R., "IoT and Cloud Convergence: Opportunities And Challenges," Internet of Things (WF-IoT), 2014 IEEE World Forum on, pp. 375-376, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803194

Abstract: The success of the IoT world requires service provision attributed with ubiquity, reliability, high-performance, efficiency, and scalability. In order to accomplish this attribution, future business and research vision is to merge the Cloud Computing and IoT concepts, i.e., enable an “Everything as a Service” model: specifically, a Cloud ecosystem, encompassing novel functionality and cognitive-IoT capabilities, will be provided. Hence the paper will describe an innovative IoT centric Cloud smart infrastructure addressing individual IoT and Cloud Computing challenges.

Keywords: Internet of Things; cloud computing; Internet of Things; IoT centric cloud smart infrastructure; cloud computing; cloud convergence; cloud ecosystem; cognitive-IoT capabilities; everything as a service model; Cloud computing; Convergence; Data handling; Data storage systems; Information management; Reliability; Cloud Computing; Convergence; Internet of Things (ID#:15-3999)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803194&isnumber=6803102

 

Watney, M., "Challenges Pertaining To Cyber War Under International Law," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 1-5, 29 April-1 May 2014. doi: 10.1109/CyberSec.2014.6913962

Abstract: State-level intrusion in the cyberspace of another country seriously threatens a state's peace and security. Consequently, many types of cyberspace intrusion are being referred to as cyber war with scant regard to the legal position under international law. This is but one of the challenges facing state-level cyber intrusion. The current rules of international law prohibit certain types of intrusion. However, international law does not define which intrusions fall within the prohibited category nor when the threshold of intrusion is surpassed. International lawyers have to determine the type of intrusion and threshold on a case-by-case basis. The Tallinn Manual may serve as a guideline in this assessment, but the type of intrusion and its attribution to a specific state are not easily established. The current rules of international law do not prohibit all intrusions, which at state level may be highly invasive and destructive. Unrestrained cyber intrusion may result in cyberspace becoming a battle space in which states with strong cyber abilities dominate, causing resentment and fear among other states. The latter may be prevented at an international level by involving all states in an equal and transparent manner in cyberspace governance.

Keywords: law; security of data; Tallinn Manual; cyber war; cyberspace governance; cyberspace intrusion; international law; legal position; state-level cyber intrusion; Computer crime; Cyberspace; Force; Law; Manuals; Cyber war; Estonia; Stuxnet; challenges; cyberspace governance; cyberspace state-level intrusion; international law   (ID#:15-4000)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913962&isnumber=6913961

 

Honghui Dong; Xiaoqing Ding; Mingchao Wu; Yan Shi; Limin Jia; Yong Qin; Lianyu Chu, "Urban Traffic Commuting Analysis Based On Mobile Phone Data," Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on, pp. 611-616, 8-11 Oct. 2014. doi: 10.1109/ITSC.2014.6957757

Abstract: With the development of urban traffic planning and management, analyzing and estimating origin-destination (OD) data in the city is a highly important issue. The traditional method of acquiring OD information is the household survey, which is inefficient and expensive. In this paper, a new methodology is proposed that uses mobile phone data to analyze the mechanisms of trip generation, trip attraction and the OD information. The mobile phone data acquisition is introduced, and a pilot study is implemented on Beijing using the new method; much important traffic information can be extracted from the mobile phone data. We use the K-means clustering algorithm to divide the traffic zones. The attribution of each traffic zone is identified using the mobile phone data. Then the OD distribution and the commuting travel are analyzed. Finally, an experiment is done to verify the usefulness of the mobile phone data by analyzing the "traffic tide phenomenon" in Beijing. The results of the experiments show a great correspondence to the actual situation. The validated results reveal that mobile phone data has tremendous potential for OD analysis.

Keywords: data acquisition; feature extraction; mobile computing; pattern clustering; traffic information systems; OD information; k-means clustering algorithm; mobile phone data acquisition; traffic information extraction; trip attraction; trip generation mechanism; urban traffic commuting analysis; Base stations; Cities and towns; Mobile communication; Mobile handsets; Real-time systems; Sociology; Statistics (ID#:15-4001)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957757&isnumber=6957655
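
The traffic-zone division step can be sketched with a standard K-means run over base-station coordinates; the coordinates and the choice of k below are synthetic placeholders, not the Beijing data used in the paper.

```python
# Minimal sketch of dividing a study area into traffic zones with K-means,
# assuming base-station coordinates are available; data and k are synthetic.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for base-station longitude/latitude points around three hotspots.
towers = np.vstack([
    rng.normal(loc=(116.30, 39.90), scale=0.01, size=(50, 2)),
    rng.normal(loc=(116.40, 39.95), scale=0.01, size=(50, 2)),
    rng.normal(loc=(116.45, 39.88), scale=0.01, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(towers)
zones = kmeans.labels_             # traffic-zone id per base station
centroids = kmeans.cluster_centers_
print(centroids)
```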

 

Wenqun Xiu; Xiaoming Li, "The Design of Cybercrime Spatial Analysis System," Information Science and Technology (ICIST), 2014 4th IEEE International Conference on, pp. 132-135, 26-28 April 2014. doi: 10.1109/ICIST.2014.6920348

Abstract: Artificial monitoring is no longer able to match the rapid growth of cybercrime; there is a great need to develop a new spatial analysis technology that allows emergency events to be rapidly and accurately located in the real environment and, furthermore, to establish a correlative analysis model for cybercrime prevention strategy. On the other hand, geographic information systems have changed substantially in data structure, coordinate system and analysis model due to the "uncertainty and hyper-dimension" characteristics of network objects and behavior. In this paper, the spatial rules of typical cybercrime are explored on the basis of GIS with Internet searching and IP tracking technology: (1) set up a spatial database through IP searching based on criminal evidence; (2) extend the GIS data structure and spatial models, adding a network dimension and virtual attribution to realize a dynamic connection between cyber and real space; (3) design a cybercrime monitoring and prevention system to discover cyberspace logics based on spatial analysis.

Keywords: Internet; geographic information systems; monitoring; security of data; GIS data-structure; IP tracking technology; Internet searching; correlative analysis model; cybercrime monitoring design; cybercrime prevention strategy; geographic information systems; spatial analysis system; Analytical models; Computer crime; Data models; Geographic information systems; IP networks; Internet; Spatial databases; Cybercrime; GIS; Spatial analysis   (ID#:15-4002)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6920348&isnumber=6920317
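
The "lock an event into real space" step can be sketched as resolving evidence IPs to coordinates and storing them as spatial records. The geolocation table below is a hypothetical stand-in for whatever IP-location database or service the deployed system would query.

```python
# Minimal sketch of attaching spatial coordinates to cybercrime evidence,
# assuming a hypothetical IP -> (lat, lon) lookup table; a real system would
# query a geolocation database instead.

from dataclasses import dataclass

GEO_TABLE = {
    "203.0.113.7": (22.54, 114.06),
    "198.51.100.9": (39.90, 116.40),
}

@dataclass
class CrimeEvent:
    case_id: str
    ip: str
    lat: float
    lon: float

def locate_events(evidence):
    """Attach coordinates to each (case_id, ip) pair that can be resolved."""
    located = []
    for case_id, ip in evidence:
        if ip in GEO_TABLE:
            lat, lon = GEO_TABLE[ip]
            located.append(CrimeEvent(case_id, ip, lat, lon))
    return located

print(locate_events([("case-42", "203.0.113.7"), ("case-43", "192.0.2.1")]))
```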

 

Bou-Harb, E.; Debbabi, M.; Assi, C., "Cyber Scanning: A Comprehensive Survey," Communications Surveys & Tutorials, IEEE, vol. 16, no. 3, pp. 1496-1519, Third Quarter 2014. doi: 10.1109/SURV.2013.102913.00020

Abstract: Cyber scanning refers to the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other, the task of containing cyber scanning poses serious issues and challenges. Furthermore, there has recently been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.

Keywords: Internet; security of data; Internet wide services; cyber scanning technique; distributed cyber scanning detection method; enterprise networks; targeted cyber attack; Cyberspace; Internet; Monitoring; Ports (Computers); Probes; Protocols; Servers; Cyber scanning; Network reconnaissance; Probing; Probing campaigns; Scanning events   (ID#:15-4003)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657498&isnumber=6880447
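
One classical single-source detection heuristic within the survey's scope can be sketched as counting distinct destination hosts and ports per source over a window; the thresholds and flow records below are illustrative, and campaign-level detection is considerably harder, as the survey argues.

```python
# Minimal sketch of a simple scanning-detection heuristic: flag sources that
# touch unusually many distinct destination hosts or ports. Thresholds and
# flow records are illustrative.

from collections import defaultdict

def flag_scanners(flows, host_threshold=10, port_threshold=10):
    """flows: iterable of (src_ip, dst_ip, dst_port)."""
    dst_hosts = defaultdict(set)
    dst_ports = defaultdict(set)
    for src, dst, port in flows:
        dst_hosts[src].add(dst)
        dst_ports[src].add(port)
    return {
        src for src in dst_hosts
        if len(dst_hosts[src]) >= host_threshold or len(dst_ports[src]) >= port_threshold
    }

flows = [("10.0.0.5", f"192.0.2.{i}", 22) for i in range(20)]             # horizontal scan
flows += [("10.0.0.9", "192.0.2.1", 80), ("10.0.0.9", "192.0.2.1", 443)]  # benign traffic
print(flag_scanners(flows))  # {'10.0.0.5'}
```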

 

Caso, J.S., "The Rules Of Engagement for Cyber-Warfare and the Tallinn Manual: A Case Study," Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on, pp. 252-257, 4-7 June 2014. doi: 10.1109/CYBER.2014.6917470

Abstract: Documents such as the Geneva (1949) and Hague Conventions (1899 and 1907) that have clearly outlined the rules of engagement for warfare find themselves challenged by the presence of a new arena: cyber. Considering the potential nature of these offenses, operations taking place in the realm of cyber cannot simply be generalized as "cyber-warfare," as they may also be acts of cyber-espionage, cyber-terrorism, cyber-sabotage, etc. Cyber-attacks, such as those on Estonia in 2007, have begun to test the limits of NATO's Article 5 and the UN Charter's Article 2(4) against the use of force. What defines "force" as it relates to cyber, and what kind of response is merited in the case of uncertainty regarding attribution? In 2009, NATO's Cooperative Cyber Defence Centre of Excellence commissioned a group of experts to publish a study on the application of international law to cyber-warfare. This document, the Tallinn Manual, was published in 2013 as a non-binding exercise to stimulate discussion on the codification of international law on the subject. After analysis, this paper concludes that the Tallinn Manual classifies the 2010 Stuxnet attack on Iran's nuclear program as an illegal act of force. The purpose of this paper is the following: (1) to analyze the historical and technical background of cyber-warfare, (2) to evaluate the Tallinn Manual as it relates to the justification of cyber-warfare, and (3) to examine the applicability of the Tallinn Manual in a case study of a historical example of a cyber-attack.

Keywords: law; security of data; Cooperative Cyber Defence Centre of Excellence; Tallinn manual; cyber-attacks; cyber-espionage; cyber-sabotage; cyber-terrorism; cyber-warfare; international law; Computer crime; Computers; Force; Manuals; Organizations; Protocols; Standards (ID#:15-4004)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6917470&isnumber=6917419

 

Barbosa de Carvalho, M.; Pereira Esteves, R.; da Cunha Rodrigues, G.; Cassales Marquezan, C.; Zambenedetti Granville, L.; Rockenbach Tarouco, L.M., "Efficient Configuration Of Monitoring Slices For Cloud Platform Administrators," Computers and Communication (ISCC), 2014 IEEE Symposium on, pp. 1-7, 23-26 June 2014. doi: 10.1109/ISCC.2014.6912568

Abstract: Monitoring is an important issue in cloud environments because it assures that acquired cloud slices meet the user's expectations. However, these environments are multitenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% in comparison to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.

Keywords: cloud computing; system monitoring; FlexACMS response time; IaaS; automation techniques; cloud computing; cloud environments; cloud slices; infrastructure-as-a-service; load balancing; monitoring server load; Indium phosphide; Measurement; Monitoring; Scalability; Servers; Time factors; Web services; Cloud computing; monitoring configuration (ID#:15-4005)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912568&isnumber=6912451
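
The load-aware attribution of configuration tasks to monitoring servers can be sketched as a greedy least-loaded assignment; this is a simplified stand-in for FlexACMS's mechanism, with server names and task costs invented for illustration.

```python
# Minimal sketch of assigning monitoring-configuration tasks to the currently
# least-loaded monitoring server; server names and costs are illustrative.

import heapq

def assign_tasks(slices, servers):
    """Greedy least-loaded assignment; returns {slice_id: server_name}."""
    heap = [(0, name) for name in servers]   # (current load, server)
    heapq.heapify(heap)
    assignment = {}
    for slice_id, cost in slices:
        load, server = heapq.heappop(heap)
        assignment[slice_id] = server
        heapq.heappush(heap, (load + cost, server))
    return assignment

slices = [("slice-1", 3), ("slice-2", 1), ("slice-3", 2), ("slice-4", 1)]
servers = ["mon-a", "mon-b"]
print(assign_tasks(slices, servers))
```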

 

Boukhtouta, A.; Lakhdari, N.-E.; Debbabi, M., "Inferring Malware Family through Application Protocol Sequences Signature," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1-5, 30 March-2 April 2014. doi: 10.1109/NTMS.2014.6814026

Abstract: The dazzling emergence of cyber-threats strains today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort, namely, towards fingerprinting malicious traffic by putting an emphasis on the attribution of maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g. packet headers). Besides the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for the generation of malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware family identification rate with low false positives.

Keywords: computer network security; invasive software; learning (artificial intelligence); application protocol sequences signature; cyber-threats; machine learning algorithm; malicious packets detection; malware automatic dynamic analysis; malware traffic detection; network traffic; Cryptography; Databases; Engines; Feeds; Malware; Protocols   (ID#:15-4006)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814026&isnumber=6813963
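
The idea of using an application-protocol sequence as a family signature can be sketched as n-gram overlap between an observed flow's protocol sequence and per-family signatures; the families, signatures and scoring rule below are illustrative assumptions, not the paper's learned models.

```python
# Minimal sketch of attributing an observed application-protocol sequence to a
# malware family via n-gram signature matching; signatures are illustrative.

def ngrams(seq, n=3):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def best_family(observed, signatures, n=3):
    """Return the family whose signature n-grams best overlap the observation."""
    obs = ngrams(observed, n)
    scores = {
        family: len(obs & ngrams(sig, n)) / max(len(ngrams(sig, n)), 1)
        for family, sig in signatures.items()
    }
    return max(scores, key=scores.get), scores

signatures = {
    "familyA": ["dns", "http", "http", "irc", "irc"],
    "familyB": ["dns", "https", "smtp", "smtp"],
}
observed = ["dns", "http", "http", "irc", "irc", "http"]
print(best_family(observed, signatures))  # familyA scores highest
```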

 

Gimenez, A.; Gamblin, T.; Rountree, B.; Bhatele, A.; Jusufi, I.; Bremer, P.-T.; Hamann, B., "Dissecting On-Node Memory Access Performance: A Semantic Approach," High Performance Computing, Networking, Storage and Analysis, SC14: International Conference for, pp. 166-176, 16-21 Nov. 2014. doi: 10.1109/SC.2014.19

Abstract: Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.

Keywords: distributed memory systems; multi-threading; storage management; CPU manufacturers; PMU; attribute semantic information; code regions; data motion; data objects; design decisions; distributed memory systems; domain decomposition; fine-grained memory access performance data; memory access optimization; memory behaviour; multithreading; on-node memory access performance; performance ramifications; power efficiency; sampled memory accesses; sampling-based performance measurement units; semantic approach; Context; Hardware; Kernel; Libraries; Program processors; Semantics; Topology (ID#:15-4007)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013001&isnumber=7012182
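
The core attribution step, mapping sampled access addresses back to named data objects, can be sketched as an interval lookup over registered allocation ranges; the addresses and object names below are made up, and a real tool would take them from PMU samples and allocation tracking.

```python
# Minimal sketch of attributing sampled memory-access addresses to registered
# data objects (address ranges); addresses and object ranges are illustrative.

import bisect

class DataObjectMap:
    """Map address-range registrations (e.g. from allocation tracking) to names."""

    def __init__(self):
        self.starts, self.ends, self.names = [], [], []

    def register(self, name, start, size):
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.ends.insert(i, start + size)
        self.names.insert(i, name)

    def attribute(self, address):
        i = bisect.bisect_right(self.starts, address) - 1
        if i >= 0 and address < self.ends[i]:
            return self.names[i]
        return "unknown"

objects = DataObjectMap()
objects.register("grid_A", 0x10000, 0x4000)
objects.register("halo_buffer", 0x20000, 0x800)

for sample_addr in (0x10010, 0x20404, 0x30000):   # stand-ins for PMU samples
    print(hex(sample_addr), "->", objects.attribute(sample_addr))
```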

 

Hessami, A., "A Framework For Characterisation Of Complex Systems And System of Systems," World Automation Congress (WAC), 2014, pp. 346-354, 3-7 Aug. 2014. doi: 10.1109/WAC.2014.6935936

Abstract: The objective of this paper is to explore the current notions of systems and “System of Systems” and establish the case for quantitative characterization of their structural, behavioural and contextual facets that will pave the way for further formal development (mathematical formulation). This is partly driven by stakeholder needs and perspectives and also in response to the necessity to attribute and communicate the properties of a system more succinctly, meaningfully and efficiently. The systematic quantitative characterization framework proposed will endeavor to extend the notion of emergence that allows the definition of appropriate metrics in the context of a number of systems ontologies. The general characteristic and information content of the ontologies relevant to system and system of system will be specified but not developed at this stage. The current supra-system, system and sub-system hierarchy is also explored for the formalisation of a standard notation in order to depict a relative scale and order and avoid the seemingly arbitrary attributions.

Keywords: Unified Modeling Language; ontologies (artificial intelligence); programming; complex systems characterisation; emergence notion; formal development; ontologies; quantitative characterization; system-of-systems characterisation; Aggregates; Collaboration; Complexity theory; Indexes; Measurement; Rail transportation; Systems engineering and theory; Complexity; Metrics; Ontology; System of Systems; Systems (ID#:15-4008)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6935936&isnumber=6935633


Note:



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.