2013 Computational CyberSecurity In Compromised Environments (C3E) Workshop
Drawing upon past C3E themes of predictive analytics, visualization, decision-making and others, we’ll take C3E into its fifth year by focusing on two key areas:
Welcome to C3E 2013!
Now in our fifth year, we'll gather again at West Point later this October to continue our exploration of work begun during the April 2013 mid-year event by focusing on two areas:
You have been invited to attend the C3E Weekly Planning Meeting on Friday August 2, 2013 from 12:30PM - 1:30PM.
Conference Call-In Line: 1-605-475-4350
Access Code: 783 6212
To ensure that the CPS-VO's e-vite capability will work to our standards for C3E 2013, I have created a "test invitation" for this week's C3E planning meeting. Please follow the instructions to either Accept or Decline the invitation for this week's meeting. Feedback is greatly appreciated!
Thank you!
Test event
The third 2013 quarterly Science of Security Lablet meeting will be held at Carnegie Mellon University on Thursday, September 26th and Friday, September 27th. On both days, the meeting will take place on the CMU campus in Room 4405 of the Gates-Hillman Center.
The first day will feature open workshop sessions, focused on two topics: (1) Addressing usability and security challenges through design and empirical methods, (2) Addressing challenges of scale through composable modeling and analysis.
The second 2013 quarterly Science of Security Lablet meeting will be hosted by David Nicol and Bill Sanders at the University of Illinois at Urbana-Champaign.
Cyber security is a global phenomenon. For example, recent socially engineered attacks that target CEOs of global corporations appear to have been instigated by the Chinese group dubbed the “Comment Crew.” In its 2011 survey, Symantec found that the number one cyber-risk business concern was external cyber-attacks, followed by concerns about unintentional insider error (the second-ranked risk) and intentional insider error (the third-ranked risk). Analysis by Verizon’s cyber forensics team indicates that the massive increase in external threats overshadows insider attacks. Despite the increase in external threats, little is known about the source of such threats or about the global implications of this evolving threat environment.
At the global level, cyber security requires not only attribution and forensics but also harmonized laws and effective information sharing. In spite of this growing consensus, there is still little empirical understanding of the global cyber threat environment, an understanding that is critical for forensics. Currently, many cyber theories are based on anecdotal evidence and case studies. However, the science of security needs a strong empirical base to support strong theory. It is now possible to create such an empirical base, as companies like Symantec have been amassing large quantities of data on attacks. In contrast to much of the work in cyber security, we take a socio-technical approach that looks at the human element. As such, we postulate that the potential severity of the threat is a function of the political environment rather than just the technology.
The objective of this project is to empirically characterize the global cyber threat environment and to test this hypothesis using Symantec data. A virtual machine will be constructed, and global data on the threat network (which IP attacks which), attributed by location, type of attack, severity, and potential impact, will be collected by time period. The resultant geo-temporal network will then be analyzed at the global level, controlling for factors such as the number of machines per country, Internet access, and interstate hostilities and alliances. The proposed research will create a global mapping of the threat environment, of changes in that environment, and of its relation to geographical and political factors. This will provide an empirical baseline for reasoning about the threat environment. An empirical basis is critical for the growth of science.
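As a rough illustration of the kind of aggregation involved, the sketch below builds a per-period, country-to-country threat network from individual attack records. The record format, field names, and severity values are assumptions made purely for illustration; they are not the project's actual Symantec data schema or pipeline.

```python
# Illustrative sketch only: record format and fields are assumed, not the
# project's actual data schema.
from collections import defaultdict

# Hypothetical attack records: (period, attacker_country, victim_country, severity)
records = [
    ("2013-Q1", "CN", "US", 3),
    ("2013-Q1", "RU", "DE", 2),
    ("2013-Q1", "CN", "US", 5),
    ("2013-Q2", "US", "CN", 1),
]

# Aggregate into a geo-temporal network: for each time period, a weighted
# directed edge list (attacker country -> victim country, total severity).
geo_temporal_network = defaultdict(lambda: defaultdict(int))
for period, src, dst, severity in records:
    geo_temporal_network[period][(src, dst)] += severity

for period, edges in sorted(geo_temporal_network.items()):
    print(period)
    for (src, dst), weight in sorted(edges.items()):
        print(f"  {src} -> {dst}: total severity {weight}")
```

Country-level covariates such as machines per country or Internet access would then be attached to the nodes of each period's network before the statistical analysis described above.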
[Figure: global map of Internet access – red indicates high access, blue indicates low access]
Dr. Kathleen M. Carley is a Professor of Computation, Organizations and Society in the Institute for Software Research, School of Computer Science, at Carnegie Mellon University, and CEO of Carley Technologies Inc. Dr. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), which has over 25 members, both students and research staff. Dr. Carley received her Ph.D. in Mathematical Sociology from Harvard University and her undergraduate degrees in Economics and Political Science from MIT. Her research combines cognitive science, organization science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis; computational social and organization theory; adaptation and evolution; text mining; and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion, and response within and among groups, particularly in disaster or crisis situations.

She and the members of the CASOS center have developed infrastructure tools for analyzing large-scale dynamic networks and various multi-agent simulation systems. The infrastructure tools include ORA, AutoMap, and SmartCard. ORA is a statistical toolkit for analyzing and visualizing multi-dimensional networks; its results are organized into reports that meet various needs, such as the management report, the mental model report, and the intelligence report. AutoMap is a text-mining system for extracting semantic networks from texts and then cross-classifying them, using an organizational ontology, into the underlying social, knowledge, resource, and task networks. SmartCard is a network and behavioral estimation system for cities in the U.S. Carley's simulation models meld multi-agent technology with network dynamics and empirical data, resulting in reusable large-scale models: BioWar, a city-scale model for understanding the spread of disease and illness due to natural epidemics, chemical spills, and weaponized biological attacks; and Construct, an information and belief diffusion model that enables assessment of interventions. She is the current and a founding editor of the journal Computational Organization Theory and has published over 200 papers and co-edited several books using computational and dynamic network models.
The prevalence of multi-core systems has resulted in increasingly common concurrency faults, challenging computer systems' reliability and security. Races, including low-level data races and high-level atomicity violations, are one of the most common concurrency faults. Races impair not only the correctness of programs, but may also threaten system security in a variety of ways. It is therefore critical to efficiently and precisely detect races in order to defend against attacks.
Existing race detectors fall into two categories: static and dynamic approaches. However, neither category alone has produced satisfactory results so far. Static approaches are generally complete, that is, they rarely miss races, but they suffer from false positives. In contrast, dynamic race detectors can ensure soundness but their runtime overhead is prohibitively high. The purpose of this research is to gain a better scientific understanding of vulnerabilities due to races, and to evaluate the hypothesis that a hybrid race-detection mechanism can combine the benefits of static and dynamic approaches, providing a more effective means of addressing race-related vulnerabilities.
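To make the dynamic side of this picture concrete, the sketch below runs a lockset-style check (in the spirit of the classic Eraser detector) over a synthetic access trace: for each shared variable it intersects the sets of locks held at every access, and an empty intersection flags a potential race. The trace format, thread names, and variable names are invented for illustration and are not part of this project's actual tooling or the proposed hybrid mechanism.

```python
# Minimal lockset-style dynamic race check over a synthetic trace.
# Each event: (thread, operation, target), where operation is
# "acquire"/"release" on a lock, or "read"/"write" on a shared variable.
trace = [
    ("T1", "acquire", "L"),
    ("T1", "write", "x"),      # x accessed while holding lock L
    ("T1", "release", "L"),
    ("T2", "write", "x"),      # x accessed with no lock held -> candidate race
    ("T2", "acquire", "L"),
    ("T2", "write", "y"),
    ("T2", "release", "L"),
]

held = {}        # thread -> set of locks currently held
candidate = {}   # variable -> intersection of locksets seen so far

for thread, op, target in trace:
    locks = held.setdefault(thread, set())
    if op == "acquire":
        locks.add(target)
    elif op == "release":
        locks.discard(target)
    else:  # read or write to a shared variable
        if target not in candidate:
            candidate[target] = set(locks)
        else:
            candidate[target] &= locks

for var, lockset in candidate.items():
    if not lockset:
        print(f"Potential race on {var}: no common lock protects all accesses")
```

A hybrid scheme, as proposed here, would use static analysis to prune accesses that can be proven race-free, so that only the remaining candidates need to be monitored dynamically, reducing runtime overhead.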
Our Team
Jonathan Aldrich, PI
Du Li, Post-Doctoral Associate
Matthew Dwyer, Collaborator
Witawas Srisa-an, Collaborator
Scientific Questions. We plan to pursue the purpose described above by answering the following scientific questions:
Activities. This project incorporates the following thrusts:
Jonathan Aldrich is an Associate Professor in the School of Computer Science at Carnegie Mellon University. He conducts programming languages and software engineering research focused on developing better ways of expressing and enforcing software design within source code, typically through language design and type systems. Jonathan works at the intersection of programming languages and software engineering. His research explores how the way we express software affects our ability to engineer software at scale. A particular theme of much of his work is improving software quality and programmer productivity through better ways to express structural and behavioral aspects of software design within source code. Aldrich has contributed to object-oriented typestate verification, modular reasoning techniques for aspects and stateful programs, and new object-oriented language models. For his work on specifying and verifying architecture, he received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize. Currently, Aldrich is excited to be working on the design of Wyvern, a new modularly extensible programming language.
Applying social network analysis to Social Media data supports better assessment of cyber-security threats by analyzing underground Social Media activities, dynamics between cyber-criminals, and topologies of dark networks. However, Social Media data are big, and state-of-the-art algorithms for social network analysis metrics require at least O(n + m) space and run in at least O(nm) time (some in O(n^2) or O(n^3)), where n is the number of nodes and m is the number of edges. Therefore, real-time analysis of Social Media activities to mitigate cyber-security threats with sophisticated social network metrics is not possible. To tackle this problem, we apply ideas of composability to big data and to algorithms for social network analysis metrics. A network of humans, organizations, etc. is modeled as a graph G = (N, E) by aggregating observed interactions E between targeted entities N. Because of the algorithmic complexity, composing network analysis metrics by analyzing sub-networks G1, G2, etc. can yield tremendous gains in calculation time.
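As a simple illustration of the composition idea, the sketch below computes degree counts on two sub-networks obtained by splitting an edge list and then combines the partial results; degree is one of the few metrics that composes exactly across an edge partition, which is why it is used here. The edge lists are invented for illustration and are not drawn from the project's data.

```python
# Illustrative sketch: compute a metric per sub-network, then compose the
# partial results into the metric for the full network.
from collections import Counter

def degree_counts(edges):
    """Count how many edges touch each node in one sub-network."""
    counts = Counter()
    for u, v in edges:
        counts[u] += 1
        counts[v] += 1
    return counts

# Two sub-networks G1 and G2 obtained by partitioning a large edge list.
g1_edges = [("a", "b"), ("b", "c")]
g2_edges = [("c", "d"), ("a", "d"), ("b", "d")]

# Per-sub-network computation (this step parallelizes trivially) ...
partial = [degree_counts(g1_edges), degree_counts(g2_edges)]

# ... then compose the partial results: each node's degree in the full network.
total = sum(partial, Counter())
print(total)
```

More sophisticated metrics (betweenness, closeness, etc.) do not decompose this cleanly, which is exactly where the research challenge of composable approximations or corrections lies.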
Making sound security decisions when designing, operating, and maintaining a complex enterprise-scope system is a challenging task. Quantitative security metrics have the potential to provide valuable insight on system security and to aid in security decisions. To produce model-based quantitative security metrics, we developed the ADversary VIew Security Evaluation (ADVISE) method and implemented it in the prototype tool Möbius-SE (Möbius Security Edition), which is suitable for use by security modeling experts. Our goal in this project is to extend the ADVISE method and tool to explicitly account for the behavior of human users as part of the system. While cyber security models traditionally model the behavior of the attacker, they usually do not explicitly account for the behavior of the users of a system or for how that use can create or eliminate system vulnerabilities.
Increasingly, accumulated cyber security data indicate that system users can play an important role in the creation or elimination of cyber security vulnerabilities. Thus, there is a need for cyber security analysis tools that take into account the actions and decisions of human users.
We are 1) developing a Möbius-SE-compatible, process-oriented modeling formalism for modeling how human users interact with systems, using the concept of a human decision point to explicitly represent decisions that affect the security of a system; 2) implementing the formalism as an atomic model editor in Möbius-SE that generates models that can interact with other Möbius-SE models, e.g., models of the system itself and the attacker; and 3) demonstrating the use of the implemented tool in a variety of government- and industry-motivated case studies, as suggested by our sponsor and industry partners (HP and GE).
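To give a rough sense of what a human decision point contributes to a quantitative model, the sketch below runs a toy Monte Carlo simulation in which a user decision (clicking a phishing link) creates the opening an attacker can exploit. This is emphatically not the ADVISE or Möbius-SE formalism; the steps, probabilities, and names are invented solely to illustrate how a user-behavior parameter changes a security metric.

```python
# Toy illustration of a "human decision point": a user choice that creates or
# eliminates a vulnerability the attacker can exploit. Probabilities are
# hypothetical, not drawn from ADVISE or any case study.
import random

def run_trial(p_click, p_exploit):
    """One trial: a user decision point followed by an attacker step."""
    user_clicks_phishing_link = random.random() < p_click   # human decision point
    if not user_clicks_phishing_link:
        return False                                         # no vulnerability created
    return random.random() < p_exploit                       # attacker exploits the opening

def estimate_compromise_rate(p_click, p_exploit, trials=100_000):
    random.seed(0)
    return sum(run_trial(p_click, p_exploit) for _ in range(trials)) / trials

# Changing only the user-behavior parameter changes the security metric:
print(estimate_compromise_rate(p_click=0.30, p_exploit=0.5))  # roughly 0.15
print(estimate_compromise_rate(p_click=0.05, p_exploit=0.5))  # roughly 0.025
```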
William H. Sanders, David M. Nicol, Jim Blythe (University of Southern California), and Sean W. Smith (Dartmouth College)