C3E 2013 Weekly Planning Meeting

Submitted by Lisa Coote

You have been invited to attend the C3E Weekly Planning Meeting on Friday August 2, 2013 from 12:30PM - 1:30PM. 

Conference Call-In Line:  1-605-475-4350

Access Code:  783 6212

To ensure that the CPS-VO's e-vite capability will work to our standards for C3E 2013, I have created a "test invitation" for this week's C3E planning meeting. Please follow the instructions to either Accept or Decline the invitation for this week's meeting. Feedback is greatly appreciated!

Thank you!

CMU SoS 2013 Quarterly Lablet PI Meeting

Submitted by Anonymous

The 2013 third-quarter Science of Security Lablet meeting will be held at Carnegie Mellon University on Thursday, September 26th and Friday, September 27th. On both days the meeting will take place on the CMU campus in the Gates-Hillman Center, room 4405.

The first day will feature open workshop sessions focused on two topics: (1) addressing usability and security challenges through design and empirical methods, and (2) addressing challenges of scale through composable modeling and analysis.

 

Geo-Temporal Characterization of Security Threats
Lead PI:
Kathleen Carley
Abstract

Cyber security is a global phenomenon.  For example, recent socially-engineered attacks that target CEOs of global corporations appear to be instigated by the Chinese group dubbed the “comment crew.”  In their 2011 survey, Symantec found that the number one cyber risk business concern was external cyber-attacks, followed by concerns about unintentional insider error (2nd risk) and intentional insider error (3rd risk). Analysis by Verizon’s cyber forensics team indicates that the massive increase in external threats overshadows insider attacks.  Despite the increase in external threats, little is known about the source of such threats or the global implications of this evolving threat environment.

At the global level, cyber security requires not only attribution and forensics, but also harmonized laws and effective information sharing.  In spite of this growing consensus, there is still little empirical understanding of the global cyber threat environment, an understanding that is critical for forensics. Currently, many cyber theories are based on anecdotal evidence and case studies.  However, the science of security needs a strong empirical base for strong theory.  It is now possible to create such a base, as companies like Symantec have been amassing large quantities of data on attacks.  In contrast to much of the work in cyber security, we take a socio-technical approach that looks at the human element.  As such, we postulate that the potential severity of the threat is a function of the political environment rather than just the technology.

The objective of this project is to empirically characterize the global cyber threat environment and to test this hypothesis using Symantec data. A virtual machine will be constructed, and global data on the threat network (which IP attacks which), attributed by location, type of attack, severity, and potential impact, will be collected by time period. The resulting geo-temporal network will then be analyzed at the global level, controlling for factors such as machines per country, internet access, and interstate hostility and alliances. The proposed research will create a global mapping of the threat environment, changes in that environment, and its relation to geographical and political factors.  This will provide an empirical baseline for reasoning about the threat environment.  An empirical basis is critical for the growth of science.
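As a rough illustration of the kind of aggregation involved, the sketch below builds a per-month, country-to-country threat network from hypothetical attack records. The field names and values are placeholders, not the actual Symantec schema.

```python
# Minimal sketch: aggregate attack records into a geo-temporal threat network.
# Field names (src_country, dst_country, timestamp, severity) are illustrative.
from collections import defaultdict
from datetime import datetime

records = [
    {"src_country": "CN", "dst_country": "US", "timestamp": "2013-07-01", "severity": 3},
    {"src_country": "RU", "dst_country": "DE", "timestamp": "2013-07-15", "severity": 2},
    {"src_country": "CN", "dst_country": "US", "timestamp": "2013-08-03", "severity": 4},
]

# One weighted, directed country-to-country network per month.
networks = defaultdict(lambda: defaultdict(int))
for r in records:
    period = datetime.strptime(r["timestamp"], "%Y-%m-%d").strftime("%Y-%m")
    networks[period][(r["src_country"], r["dst_country"])] += r["severity"]

for period, edges in sorted(networks.items()):
    print(period, dict(edges))
```

The resulting per-period edge lists could then be joined with country-level covariates (machines per country, internet access, hostility and alliance data) for the kind of controlled analysis described above.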

 

Figure: Internet Access by country (red = high, blue = low). Image removed.

Kathleen Carley

Dr. Kathleen M. Carley is a Professor of Computation, Organizations and Society in the Institute for Software Research, School of Computer Science, at Carnegie Mellon University, and CEO of Carley Technologies Inc. Dr. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), which has over 25 members, both students and research staff.  Dr. Carley received her Ph.D. in Mathematical Sociology from Harvard University and her undergraduate degrees in Economics and Political Science from MIT.  Her research combines cognitive science, organization science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis, computational social and organization theory, adaptation and evolution, text mining, and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion, and response within and among groups, particularly in disaster or crisis situations.

She and the members of the CASOS center have developed infrastructure tools for analyzing large-scale dynamic networks and various multi-agent simulation systems.  The infrastructure tools include ORA, AutoMap, and SmartCard.  ORA is a statistical toolkit for analyzing and visualizing multi-dimensional networks; its results are organized into reports that meet various needs, such as the management report, the mental model report, and the intelligence report.  AutoMap is a text-mining system for extracting semantic networks from texts and then cross-classifying them, using an organizational ontology, into the underlying social, knowledge, resource, and task networks. SmartCard is a network and behavioral estimation system for cities in the U.S.  Carley’s simulation models meld multi-agent technology with network dynamics and empirical data, resulting in reusable large-scale models: BioWar, a city-scale model for understanding the spread of disease and illness due to natural epidemics, chemical spills, and weaponized biological attacks; and Construct, an information and belief diffusion model that enables assessment of interventions.  She is the current and a founding editor of the journal Computational Organization Theory and has published over 200 papers and co-edited several books using computational and dynamic network models.

Race Vulnerability Study and Hybrid Race Detection
Lead PI:
Jonathan Aldrich
Abstract

The prevalence of multi-core systems has resulted in increasingly common concurrency faults, challenging computer systems' reliability and security.  Races, including low-level data races and high-level atomicity violations, are one of the most common concurrency faults.  Races impair not only the correctness of programs, but may also threaten system security in a variety of ways.  It is therefore critical to efficiently and precisely detect races in order to defend against attacks.

Existing race detectors fall into two categories: static and dynamic approaches.  However, neither category alone has produced satisfactory results so far.  Static approaches are generally complete, that is, they rarely miss races, but they suffer from false positives.  In contrast, dynamic race detectors can ensure soundness but their runtime overhead is prohibitively high.  The purpose of this research is to gain a better scientific understanding of vulnerabilities due to races, and to evaluate the hypothesis that a hybrid race-detection mechanism can combine the benefits of static and dynamic approaches, providing a more effective means of addressing race-related vulnerabilities.
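To make the target concrete, the hypothetical sketch below shows a high-level atomicity violation of the check-then-act variety, one of the race classes such detectors aim to catch. It is an illustration only, not an example drawn from the project, and the bad interleaving is deliberately made easy to observe.

```python
# Hypothetical check-then-act atomicity violation: two threads may both see the
# balance as sufficient and both withdraw, overdrawing the account.
import threading
import time

balance = 100

def withdraw(amount):
    global balance
    if balance >= amount:        # check
        time.sleep(0.01)         # widen the race window so the interleaving is visible
        balance -= amount        # act

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final balance:", balance)   # can be -60: both threads passed the check
```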

Our Team

Jonathan Aldrich, PI

Du Li, Post-Doctoral Associate

Matthew Dwyer, Collaborator

Witawas Srisa-an, Collaborator

Scientific Questions.  We plan to pursue the goal described above by answering the following scientific questions:

  • How do races introduce security vulnerabilities in real world systems?
  • Can existing security tools effectively identify and eliminate the vulnerabilities caused by races?
  • Can static analysis help dynamic race detectors to reduce runtime overhead?
  • Can dynamic analysis help static race detectors to rule out false warnings?
  • Can we build a hybrid approach efficient enough for deployed systems while maintaining high coverage for races?
  • Can such an approach help to identify and mitigate race-related vulnerabilities in practice?

 

Activities.  This project incorporates the following thrusts:

  1. Conduct an empirical study of security vulnerabilities in real-world systems based on public data such as reports in the National Vulnerability Database (NVD).  Evaluate how well existing tools deal with these vulnerabilities.
  2. Build a dynamic race detector that uses static analysis to filter out monitoring of operations that cannot contribute to race coverage (see the sketch after this list).
  3. Employ a smart sampling mechanism, guided by the potential-race distribution produced by static analysis, to control runtime overhead without losing too much race coverage.
  4. Compare the performance, scalability, soundness (relevant to usability), and completeness of our race detector with state-of-the-art race detectors on widely used benchmark suites, and on challenge problems identified in the security vulnerability study. 
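The sketch below illustrates, under simplifying assumptions, how thrusts 2 and 3 could fit together: a stubbed static sharing analysis decides which accesses need monitoring at all, and sampling bounds the remaining overhead. The trace format, the filter, and the race check are placeholders; a real detector would use locksets or vector clocks rather than this last-access heuristic.

```python
# Illustrative hybrid scheme: a (stubbed) static pass marks which fields may be
# shared across threads; the dynamic pass monitors only those, and samples them.
import random

def static_may_be_shared(field):
    # Stand-in for a static sharing/escape analysis: accesses to fields that can
    # never be reached from more than one thread need no runtime monitoring.
    return field in {"balance", "cache"}

def monitor(trace, sample_rate=1.0):
    last_access = {}   # field -> (thread, op) of the last sampled access
    reports = []
    for thread, op, field in trace:
        if not static_may_be_shared(field):
            continue                       # filtered statically: no runtime cost
        if random.random() > sample_rate:
            continue                       # sampled out to bound overhead
        prev = last_access.get(field)
        if prev and prev[0] != thread and "write" in (op, prev[1]):
            reports.append((field, prev, (thread, op)))
        last_access[field] = (thread, op)
    return reports

trace = [("T1", "read", "balance"), ("T2", "write", "balance"),
         ("T1", "write", "local_tmp"), ("T2", "read", "cache")]
print(monitor(trace))   # reports the conflicting cross-thread accesses to "balance"
```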
Jonathan Aldrich

Jonathan Aldrich is an Associate Professor in the School of Computer Science. His research in programming languages and software engineering focuses on developing better ways of expressing and enforcing software design within source code, typically through language design and type systems. Working at the intersection of programming languages and software engineering, he explores how the way we express software affects our ability to engineer software at scale. A particular theme of much of his work is improving software quality and programmer productivity through better ways to express structural and behavioral aspects of software design within source code. Aldrich has contributed to object-oriented typestate verification, modular reasoning techniques for aspects and stateful programs, and new object-oriented language models. For his work specifying and verifying architecture, he received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize. Currently, Aldrich is excited to be working on the design of Wyvern, a new modularly extensible programming language.

Composability of Big Data and Algorithms for Social Networks Analysis Metrics
Lead PI:
Juergen Pfeffer
Abstract

Applying social network analysis to Social Media data supports better assessment of cyber-security threats by analyzing underground Social Media activities, dynamics between cyber-criminals, and topologies of dark networks. However, Social Media data are big, and state-of-the-art algorithms for social network analysis metrics require at least O(n + m) space and run in at least O(nm) time, some in O(n^2) or O(n^3), where n is the number of nodes and m the number of edges. Therefore, real-time analysis of Social Media activities to mitigate cyber-security threats with sophisticated social network metrics is not possible. To tackle this problem, we apply ideas of composability to big data and algorithms for social network analysis metrics. A network of humans, organizations, etc. is modeled with a graph G = (N, E) by aggregation of observed interactions E between targeted entities N. Because of the algorithmic complexity, composing network analysis metrics by analyzing sub-networks G1, G2, etc. can result in a tremendous gain in calculation time.
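A minimal sketch of the composability idea is shown below, using degree as the metric because it composes exactly when the edge list is partitioned across sub-networks; richer metrics (e.g., betweenness) would instead need approximation or correction terms. The graph and the partition are illustrative.

```python
# Split the edge list into sub-networks, compute a metric per sub-network,
# then combine the partial results. Degree composes exactly under an edge
# partition; most other metrics do not compose this simply.
from collections import Counter

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c"), ("d", "e")]

# Partition the edges into two sub-networks G1 and G2.
g1, g2 = edges[:3], edges[3:]

def degree(sub_edges):
    counts = Counter()
    for u, v in sub_edges:
        counts[u] += 1
        counts[v] += 1
    return counts

# Compose: sum the per-sub-network degrees and check against the full network.
composed = degree(g1) + degree(g2)
assert composed == degree(edges)
print(dict(composed))
```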

Juergen Pfeffer