SoS Lablet Quarterly Meeting - Synopses of Research Presentations

 

 

Synopses of Research Presentations

Quarterly Lablet Meeting



College Park, MD

26 – 27 October 2015

Researchers from the four Science of Security Lablets made presentations at the quarterly Lablet meeting held 26-27 October 2015 at the University of Maryland, College Park (UMD). In addition to UMD, the Lablets are centered at the University of Illinois Urbana-Champaign (UIUC), Carnegie Mellon University (CMU), and North Carolina State University (NCSU). Unclassified research from NSA was also presented. 



Yule Williams (NSA): “An Operational Cybersecurity Perspective”

Yule Williams is Technical Director at NSA’s Threat Operations Center (NTOC). He offered an overview of cybersecurity from the perspective of an organization with “retail” cyber knowledge that performs continuous threat monitoring. Mr. Williams described “success” in the cyber domain as maintaining one’s mission in the face of threats. He stated that most organizations lack knowledge of their assets and of the value of those assets; the cybersecurity problem, then, is one of prioritizing cybersecurity. He sees the goal of the Science of Security as taking us to the point where the threat can actually be dealt with: we need to predict the threat instead of reacting to it, because forensics comes too late. He is seeing an emergence into the cloud, a movement from physical space to virtual space that makes threats harder to trace, and he is using the data to learn which signs in various domains indicate an emerging threat. (ID#: 15-7673)

 

Ahmad Ridley (NSA): “Cyber Resilience”

Ahmad Ridley is with the NSA/CSS Research Directorate. His talk focused on the importance of cyber resilience. He described a rise in attacks and growth in the diversity of targets, and noted that only 33% of organizations discovered intrusions themselves; currently, attacks have a median persistence of 229 days. To deal with these issues, he identified good security habits and hygiene and a migration from cybersecurity to cyber resilience. Resilience is, he said, the “measured ability to fight-through cyber attacks to achieve and sustain acceptable levels of mission performance.” The long-term goal should be to develop and implement automated decision making and automated response. (ID#: 15-7674)

 

John Baras (UMD): “Trust, Mistrust, Recommendation Systems, and Collaboration”

Prof. John Baras presented an overview of three research papers on multiagent systems and trust. The goal of the first paper was to develop a transformational framework for a science of trust and for its impact on local policies for collaboration in networked multi-agent systems, taking human behavior into account from the start. The study developed a trust model with various decision rules based on local evidence in a setting with Byzantine adversaries. The proposed Trust-Aware consensus algorithm is flexible and can be extended to incorporate more complicated trust models and decision rules; simulations show it can effectively detect malicious strategies even in sparsely connected networks.

The second study looked at semiring-based trust evaluation for information fusion in social network services (SNS). The researchers modeled trust relationships in an SNS as two-dimensional vectors that capture both the trust and the certainty information contained in opinions, and the model considers both trust and distrust. They proposed a novel semiring structure, called the “Distrust-semiring,” for trust propagation and fusion, in which the transitivity of trust and of distrust are handled differently. A trust inference algorithm, RingTrust, was developed by integrating the trust propagation and aggregation processes in an FATP fashion; the design can be used for trustworthy decision making and information fusion in SNS.

The third paper addressed trust-aware crowdsourcing with domain knowledge. The researchers developed a scalable inference method based on the alternating direction method of multipliers (ADMM) that optimizes the variables in the bottom layer while fixing those in the top layer, then optimizes the top-layer variables while fixing the bottom-layer variables, and iterates these steps until convergence. Using the model, they achieved accuracy rates in excess of 90%. (ID#: 15-7675)
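The alternating scheme described for the crowdsourcing paper can be illustrated with a minimal two-block optimization sketch. The sketch below factors a noisy matrix by repeatedly solving for one block of variables while holding the other fixed; the objective, variable names, and convergence test are illustrative assumptions and are not the paper’s actual ADMM formulation.

    import numpy as np

    def alternating_optimization(A, rank=2, iters=100, tol=1e-6, seed=0):
        """Two-block alternating optimization sketch (illustrative only).

        Minimizes ||A - U V^T||_F^2 by solving for U with V fixed,
        then for V with U fixed, repeating until the objective stops
        improving. It only shows the fix-one-layer, optimize-the-other
        pattern, not the crowdsourcing model from the talk.
        """
        rng = np.random.default_rng(seed)
        m, n = A.shape
        U = rng.standard_normal((m, rank))
        V = rng.standard_normal((n, rank))
        prev = np.inf
        for _ in range(iters):
            # "Bottom layer": least-squares update of U with V held fixed.
            U = np.linalg.lstsq(V, A.T, rcond=None)[0].T
            # "Top layer": least-squares update of V with U held fixed.
            V = np.linalg.lstsq(U, A, rcond=None)[0].T
            obj = np.linalg.norm(A - U @ V.T) ** 2
            if prev - obj < tol:   # stop once the improvement is negligible
                break
            prev = obj
        return U, V

    # Usage with hypothetical data: recover low-rank structure from noise.
    A = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 2.0]) + 0.01 * np.random.randn(4, 3)
    U, V = alternating_optimization(A)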

 

V.S. Subrahmanian (UMD): “The Global Cyber-Vulnerability Report: Behaviors, Vulnerability, and Malware Spread Forecasting”

Maryland’s Prof. V.S. Subrahmanian presented three research works. The studies used real-world operational data from Symantec’s WINE platform to identify human behaviors linked to cyber vulnerability. The research looked at 1.6 million machines and 13.7 billion malware and telemetry reports, giving high confidence in the accuracy of the results. The first study addressed user vulnerability and determined that software developers and professional administrators are the most vulnerable users; despite being professionals, their behavior shows that they do not follow needed cyber hygiene. The second study addressed specific nations. This country vulnerability study showed that the most vulnerable nations are India, South Korea, Saudi Arabia, China/Malaysia, and Russia, while the safest are Norway, Finland, Denmark, Sweden, and Switzerland; a tweak of the measures would put Japan in the top five for safety, Germany is just outside the top five, and the US is twelfth. The third paper was the Country Forecast Study. Again using Symantec data, the researchers sought to predict the extent of malware spread in 40 countries, and they are prototyping a country cyberattack forecast engine (CCAFE) based on this data. (ID#: 15-7676)

 

Özgür Kafali (NCSU): “Policy Governance via Social Norms” 

Dr. Özgür Kafali from NCSU gave a two-part presentation on sociotechnical systems (STS) and policy governance. The first part addressed the application of social norms to policy governance, covering the social aspects of security, policy-governed systems, sociotechnical systems, social norms, security properties, formal methods and tools, and open challenges. Policies, he said, can be used to model expectations about system and user behavior; what is needed is a unifying framework to connect the social and technical aspects of security. Unexplored areas in the relationship between human factors and technical solutions include models, formalizing security properties, understanding real-life policies, and validation and dissemination of the developed models. The second portion of his presentation, “Norm Revision for Sociotechnical Systems: A Formal Approach for Secure, Privacy-enhanced Collaboration,” also addressed STS. Using a HIPAA privacy example, he showed how the rule applies and suggested an “ideal” solution for this problem set. He also addressed the willingness of the user, in this case the physician, to actually use the method employed. He suggested that the studies show that while a formal approach is valuable, it has limitations in terms of expressiveness, scalability, and the boundaries of the scenario. (ID#: 15-7677)

 

David Roberts (NCSU): “Leveraging Input-Device Analytics for Cognition-based Security Proofs”

NCSU researcher David Roberts uses games as a window into personality and mind to aid in understanding and accounting for human behavior; in his presentation, he compared this to looking at messy rooms to infer behaviors and personalities. In the security context, abnormal means not just different, but unusual in a way that causes problems. Microstrategies are ways of accomplishing a task that vary in timing, accuracy, and payoff; he used the analogy of a layup compared to a slam dunk. His broader work is to develop Human Subtlety Proofs (HSPs) as an alternative to CAPTCHAs; these expand on human interactive proofs (HIPs) and human observational proofs (HOPs) and are more precise and less obtrusive. He demonstrated the games used in his research, each a familiar game with a twist: a Scrabble-like word game, “Concentration,” and a typing game. (ID#: 15-7678)

 

Michael Maass (CMU): “Applying Sandboxes Effectively”

Researcher Michael Maass of CMU examined ten years of selected IEEE, USENIX, and ACM conferences to find 101 papers on sandboxing. His research questions concerned the kinds of claims made about sandboxes and how those claims are validated. His method was systematic analysis, followed by qualitative content analysis and then statistical analysis. Using rank-based analysis, he determined that using a sandbox imposes an overhead cost of about 6.9%. (ID#: 15-7679)
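As a rough illustration of how such an overhead figure can be derived from paired benchmark timings, the sketch below computes per-benchmark overhead percentages and reports their median, a simple rank-based summary. The data and the exact statistic are hypothetical and are not taken from the study.

    from statistics import median

    def percent_overheads(baseline_times, sandboxed_times):
        """Per-benchmark overhead (%) of running under a sandbox.

        The inputs are paired lists of wall-clock times for the same
        benchmarks, measured without and with the sandbox.
        """
        return [100.0 * (s - b) / b for b, s in zip(baseline_times, sandboxed_times)]

    # Hypothetical paired timings (seconds) for five benchmarks.
    baseline  = [1.00, 2.40, 0.85, 3.10, 1.75]
    sandboxed = [1.06, 2.58, 0.90, 3.33, 1.87]

    overheads = percent_overheads(baseline, sandboxed)
    print(f"median overhead: {median(overheads):.1f}%")  # rank-based summary of the overheads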

 

Sudarshan Wadkar (CMU): “Quantifying Risk Through Information Ontology”

CMU researcher Sudarshan Wadkar looked at how to quantify risk using an information ontology. Doing so, he avers, requires extracting the ontology from policy text, and terminology must be aligned: texts are not clear when they discuss “information” transfers to “third parties” for vague purposes. He collected policies from websites, itemized them, and identified action verbs along with information types and targets. He then identified refinements in information types and quantified privacy risks. Parsing this way, his team achieved results of about 0.6. He concludes that the perception of risk is unequally distributed across categories and that vague categories conceal risk. (ID#: 15-7680)
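A minimal sketch of this kind of policy itemization and tagging appears below. The statement splitter and the small lexicons of action verbs, information types, and targets are illustrative assumptions, not the project’s actual ontology or extraction pipeline.

    import re

    # Illustrative lexicons (assumptions; not the project's actual ontology).
    ACTION_VERBS = {"collect", "share", "use", "disclose", "store", "transfer"}
    INFO_TYPES   = {"email address", "location", "device identifier", "browsing history"}
    TARGETS      = {"third parties", "advertisers", "service providers"}

    def itemize(policy_text):
        """Split a privacy policy into sentence-level statements."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", policy_text) if s.strip()]

    def extract(statement):
        """Tag a statement with the action verbs, information types, and targets it mentions."""
        lowered = statement.lower()
        return {
            "actions": sorted(v for v in ACTION_VERBS if re.search(rf"\b{v}\b", lowered)),
            "info_types": sorted(t for t in INFO_TYPES if t in lowered),
            "targets": sorted(t for t in TARGETS if t in lowered),
        }

    # Usage on a hypothetical policy fragment.
    policy = ("We may share your email address and browsing history with third parties. "
              "We store your location to improve the service.")
    for stmt in itemize(policy):
        print(extract(stmt))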

 

Sayan Mitra (UIUC): “Model-Based Analysis and Synthesis for Security of Cyber-Physical Systems”

Illinois’ Sayan Mitra provided a project update and outlined three specific research projects. This year the Lablet has produced eight conference papers and two journal articles, has four manuscripts in review, and has earned three best-paper awards. The UIUC research focused on how to build cyber-physical systems “people can bet their lives on.” It addressed automatic controller synthesis, information-flow-sensitive distributed control, and applications of the algorithms to meeting CPS verification goals. Ongoing research is looking at the synthesis of attacks on power networks and of safe controllers for drones. (ID#: 15-7681)

 

Eric Badger (UIUC): “Scalable Data Analytics Pipeline for Validation of Real-Time Attack Detection”

Eric Badger (UIUC) presented a new attack detection tool named Attack Tagger, a validation of the tool, and an outline of future work. The challenges are how to detect attacks before the system is misused, how to validate the tool on real-world data, and how to move the concept from theory to practice. Using data from the National Center for Supercomputing Applications (NCSA), the team determined that, over five years, 26% of NCSA’s incidents involved credential stealing; of these, 28% were not detected. Attack Tagger correctly identified 74.2% of malicious users as malicious. (ID#: 15-7682)

(ID#: 15-7672)

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.