
Technical Papers Presentations

Quarterly Lablet Meeting at CMU

July 2015

The Science of Security (SoS) quarterly Lablet meeting, sponsored by NSA, was hosted by the Carnegie Mellon University Lablet on July 14 and 15, 2015. Quarterly meetings are held to share research, coordinate, present interim findings, and stimulate thought and discussion about the Science of Security. Technical papers were presented on subjects related to the five Hard Problems in the Science of Security. Individual researchers from each Lablet and their teams presented materials from their work.

Alain Forget (CMU) “Early Results from the Security Behavior Observatory: An Infrastructure for Long Term Monitoring of Client Machines”

The Security Behavior Observatory (SBO) consists of a large panel of home users’ computers instrumented for a longitudinal study of security behavior. The research goals are to collect data over a long period of time, to provide usage data for multiple research domains, and to gain ecologically valid insights into the most pressing usable security challenges users face. Since it began a year ago, the SBO has met a number of milestones, including client development, secure server architecture, participant recruitment, and data ingestion.

The SBO now has about 50 clients, though researchers had expected recruiting to be easier and numbers to be larger. Preliminary results identified the prevalence of unwanted software: roughly 3,000 distinct software product names, including malware and suspicious software, were found on the client machines. Clients were advised to use http://shouldiremoveit.com to identify malware. The researchers coded software as “suspicious” if it displayed adware, had a negative online reputation, changed settings, disguised itself, or was difficult to remove or uninstall. Using this rubric, they classified 18 programs as non-suspicious, 27 as malware, and 16 as suspicious. The SBO has looked only at Windows clients so far.
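The coding rubric described above can be sketched as a small classifier. This is a hypothetical illustration, not the SBO’s actual tooling; the trait names are invented for the example.

```python
# Hypothetical sketch of the SBO "suspicious software" coding rubric.
# Trait names are invented; the real study coded these attributes manually.

SUSPICIOUS_TRAITS = {
    "adware",                     # displays adware
    "negative_online_reputation", # has a negative online reputation
    "changed_settings",           # changed system settings
    "disguised_itself",           # disguised itself
    "difficult_to_remove",        # difficult to remove or uninstall
}

def classify(product_traits):
    """Return 'malware', 'suspicious', or 'non-suspicious' for a product."""
    if "malware" in product_traits:
        return "malware"
    if SUSPICIOUS_TRAITS & set(product_traits):
        return "suspicious"
    return "non-suspicious"

print(classify({"adware", "changed_settings"}))  # suspicious
print(classify({"malware"}))                     # malware
print(classify(set()))                           # non-suspicious
```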

(ID#: 15-6138)

Arbob Ahmad et al. (CMU) “Declassification and Authorization in Epistemic Logic”

This work focuses on the hard problem of security policy and governance from the viewpoint of formal logic. Defining epistemic logic as a form of logic for reasoning about knowledge, the researchers aver that knowledge transfers can be extrapolated from a trace of the relevant reads and writes from memory. Non-interference is repositioned as the adequacy of the epistemic model of information flow. The study compares non-interference and epistemic logic.

For example, authorized declassification permits limited violations of non-interference, provided a proof authorizing the violation is supplied according to some authorization policy. In practice, this could be verified by a cryptographic key from an authorized authority; e.g., a doctor may consent to the release of a medical record.

They argue that non-interference is too restrictive: it does not permit limited disclosure of information while the rest remains secret, and it does not account for authorization proofs that are essential to many policies. The statement of non-interference says that if a low observer sees a high input, then false. The researchers want to generalize this to say that if a low observer sees a high input, then there is an authorization proof permitting that particular flow of knowledge.
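Schematically, this generalization replaces an impossibility with an obligation to exhibit a proof. The notation below is assumed for illustration and is not taken from the paper:

```latex
% Classical non-interference: a low observer learning a high input is absurd.
\text{low observes high} \;\Rightarrow\; \bot

% Authorized declassification: any such flow must carry an authorization proof.
\text{low observes high} \;\Rightarrow\; \exists\, p.\ \mathsf{auth}(p,\ \text{this flow})
```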

In contrast, epistemic logic is a form of logic for reasoning about knowledge using the common connectives (e.g., A ⊃ B, A ∧ B) and a knowledge modality [k]A, read “k knows A.” Knowledge transfers are extrapolated from a trace of the relevant reads and writes from memory, and the epistemic logic used is taken from the work of DeYoung and Pfenning (2009).

The authors provide an approach that combines epistemic logic and authorization logic. They are both embedded in the same logic so that a proof that a flow of knowledge is possible also includes the proof that it is authorized.

They conclude that epistemic logic can directly reason about how knowledge is transferred through the execution of a program, so that non-interference is repositioned as the adequacy of the epistemic model. Epistemic logic can also express the consequences of authorized declassification, so that the derivation that a flow of knowledge requiring declassification takes place includes the proof that the declassification is authorized.

(ID#: 15-6139)

Javier Cámara et al. (CMU) “Reasoning about Human Involvement in Self-Protecting Systems”

Addressing the hard problem of human behavior issues, the authors compare human oversight to fully automated systems for system security. The problems, they say, are that modern software systems operate in constantly changing environments and that security must respond to a constant stream of new threats and vulnerabilities. Human oversight has scalability and timeliness issues. Current approaches to self-protection are agnostic to system specifics, are threat-specific, and ignore business context; application-level approaches are often designed as part of the system.

Their approach is to formally reason about human participation in adaptation; to reason about security in the context of other business concerns; to discriminate situations that should involve humans; and to focus on actuation. They conclude that humans are better than fully automated systems at providing context for protection mechanisms.

(ID#: 15-6140)

Kathleen M. Carley, et al. (CMU SEI CERT) “Characterizing Insider Threats Using Network Analytics”

This presentation showed an analysis of two case studies using network analytics and semi-automated metadata extracted from texts.  The case studies were drawn from SEI CERT.

The first case looked at insider threat examples, with metadata extracted semi-automatically from texts. Coding was done from the perspective of a “spy”: roles of actors were coded and attributes of a “spy” determined, using, e.g., PFC Manning as a lone wolf example and, in contrast, John Walker as one running a spy ring. The second case, ENRON, used data from network anomaly detection. In this case, covert actors are not top actors but are interstitial, hiding in plain sight. From these cases, the researchers concluded that inadvertent leaks go down when an organization is under cyber-attack. This finding led to the hypothesis that insider threats are more likely in mesh and hierarchical networks and go down during attacks.

They determined there are differences in the ego networks of covert actors across the insider threat, lone wolf, and gang examples. This research shows patterns of behavior that can inform network analytics using machine learning. Their conclusions about insiders indicate it is hard to distinguish “good” employees from “threat” employees.

(ID#: 15-6141)

Serge Egelman, UC Berkeley / ICSI (CMU team) “Individualizing Privacy and Security Mechanisms”

According to the presenter, systems are now designed for “the user”, who is “33 and has one ovary and one testicle.” [This description conveys the idea that these systems are not accurately identifying the user in a meaningful way.] In contrast, ad targeting can identify geographic, demographic, behavioral, and psychographic distinctions. From this, he infers that security can also be tailored. His hypothesis is that security mitigations can be optimized by tailoring them to the individual rather than the “average.”

His research demonstrates that the best approach will take into account the need for cognition, general decision-making styles, domain-specific risk attitude, the Barratt Impulsiveness Scale, and consideration of future consequences. Since there is no standardized psychometric scale in the security literature about behavior and intentions, the researchers created one, labelled the Security Behavior Intentions Scale (SeBIS). SeBIS allows segmentation, and after segmentation comes customization. Targeted security mitigations may help optimize, rather than continue to satisfice.

(ID#: 15-6142)

Favonia (CMU) “Logic Programming for Social Networking Sites”

Addressing the hard problem of human behavior, the presenter looked at modeling a social networking website in order to reason about its privacy properties, generate IDs, and generate lists of people as IDs. These features were built into a new logic programming language. The researchers concluded that logic programming can facilitate modeling and reasoning, and that there is a sound and complete compilation to a known logic programming language created by DeYoung and Pfenning.

(ID#: 15-6143)

Lindsey McGowen (NC State) “Evaluating the Science of Security”

NCSU has continuing research under way to determine ways to evaluate the quality and placement of Science of Security research. The team is developing a tool to evaluate the quality of SoS work based on custom bibliometrics, because traditional citation-based bibliometrics are not appropriate for SoS evaluation: citations are a lagging indicator in a very fast-paced field, and existing databases are incomplete and sometimes inaccurate. Instead, expert-based evaluation is a more appropriate and accepted method for assessment in computer science.

This work is motivated by the need to assess the potential impact of Lablet research on the security community. Expert-based assessment of publication venues (tier ranking) will allow researchers to demonstrate what percentage of their publications appear in top-tier venues and may be used to identify venues to target for future work. The tool will be shared with all Lablets for optional use.
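The tier-ranking computation can be illustrated with a minimal sketch. The venue names, tier assignments, and function below are hypothetical and are not part of the NCSU tool:

```python
# Hypothetical sketch: given an expert-assigned venue-tier map, compute what
# percentage of a publication list appears in top-tier venues.
# Venue names and tiers are invented for illustration.

venue_tier = {"IEEE S&P": 1, "USENIX Security": 1, "HotSoS": 2, "WkshpX": 3}

def pct_top_tier(pub_venues, tiers, top=1):
    """Percent of publications (with a known venue) in tiers <= `top`."""
    ranked = [v for v in pub_venues if v in tiers]
    if not ranked:
        return 0.0
    return 100.0 * sum(1 for v in ranked if tiers[v] <= top) / len(ranked)

pubs = ["IEEE S&P", "HotSoS", "HotSoS", "USENIX Security"]
print(pct_top_tier(pubs, venue_tier))  # 50.0
```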

(ID#: 15-6144)

Emerson Murphy-Hill (NC State) “Developers’ Adoption and Use of Security Tools”

This presentation examined the adoption and use of security tools by software developers.

They learned that only 8% of developers use security tools and asked why the other 92% do not. Their approach began with a qualitative study consisting of interviews with developers; from these, they developed quantitative surveys.

They conducted 42 interviews, each about an hour long, with developers from US companies and personal contacts. They found that developers thought security mattered less internally than externally, that free tools led to adoption, and that functionality is the top concern; that is, the task is more important than security. They also found a false belief that all tools are equally effective. In the survey round, they found for security tool use: 16 frequent users, 48 occasional users, and 130 developers who never use security tools.

From their work, they concluded that social learning, that is, seeing others use the tools, makes a big difference; the importance of security alone is not a factor. In addition, they found that misconfiguration is a problem: when developers do use tools, they may misuse them, e.g., Spring Security annotations in Java applied with flawed methodologies. The researchers looked at 125 repositories and found 248 misconfiguration fixes. However, they also recognized it is hard to distinguish between misconfigurations and enhancements.

(ID#: 15-6145)

Bill Sanders, et al. (UIUC), “Accounting for User Behavior in Predictive Cyber Security Models”

This study provided evidence of the importance of modeling human behavior for gaining insight into security analysis and assessment. Its overall goal is the development of the Mobius-SE Quantitative Security Evaluation Tool. The Mobius-SE security evaluation approach considers the characteristics and capabilities of adversaries in order to account for user behavior and its impact on system cyber security; considers multi-step attacks; enables trade-off comparisons among alternatives; and measures the aspects of security important to owners and operators of the system.

This software development is informed by theories of human behavior. The large challenge is turning human behavior models into executable mathematical models that can be used for analysis: descriptive theories are closer to reality but harder to quantify, while normative theories are easier to quantify but can differ from real-world behavior. Their initial case study illustrates the use of bounded rationality and deterrence theory in the context of cyber security.
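One common way to make a bounded-rationality model executable is a quantal-response (softmax) choice rule, in which an attacker usually, but not always, picks the highest-payoff action. The sketch below is illustrative only: the payoffs, penalty, and rationality parameter are hypothetical, and this is not the Mobius-SE implementation.

```python
import math

# Illustrative sketch: a quantal-response (softmax) attacker model.
# Higher `rationality` pushes choices toward the pure best response;
# lower values model noisier, boundedly rational behavior.

def choice_probabilities(payoffs, rationality=1.0):
    """Softmax over payoffs: P(i) proportional to exp(rationality * u_i)."""
    weights = [math.exp(rationality * u) for u in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# Deterrence can be modeled as a penalty term that lowers a step's payoff.
payoffs = [3.0 - 1.5,  # attack step A, discounted by an expected penalty
           2.0]        # attack step B, no penalty
probs = choice_probabilities(payoffs, rationality=2.0)
```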

(ID#: 15-6146)

Russ Koppel (Penn, UIUC collaborator) “Progress, Problems, Publications, Plans and Promises of the Group Studying Passwords and Cyber Security Circumvention”

The presenter described some false assumptions of security designers: that circumventions are not common, that they come only from outside, that they reflect laziness, that they never happen, and that they can be solved by technology. In contrast, this study shows people circumvent security controls or make uninformed decisions. The consequences of bad decision making or misuse of controls are pandemic, ubiquitous circumvention that undermines the effectiveness of systems, corrodes belief in administrators, and creates an environment of workarounds. The research challenge is to develop metrics to enable security engineers to fix a broken system. Semiotics (the study of signs and symbols) is one tool that works. Developers and users create workarounds because of the perceived importance of the task, perceived authority to act outside of the rules, and perceived insensitivity or misunderstanding on the part of the security designers and administrators. Some conclusions: password reset is the most common call to help desks; people don’t think about credentialing within the organization, but rather assume the threat is all external; and people are just trying to get their work done.

(ID#: 15-6147)

Dave Levin (UMD), “Analyzing Certificate Management in the Web’s PKI”

Revocation is an issue in the Web’s PKI. If not revoked, invalid certificates can still be accepted. If the Certification Authority (CA) revokes the invalid certificate, the problem is solved, but generally certificates are not revoked, and there are many problems. The browser is supposed to check the certificate revocation list (CRL) periodically to verify validity.

The researchers are looking at revocation in three ways: whether administrators revoke certificates when they should, whether browsers obtain the revocations, and what the hosting provider’s role is. Their data show problems with reissuance of the same key: 93% of certificates were patched, but only 13% were revoked and 27% reissued. The data suggest administrators aren’t doing what the PKI needs them to do. CAs have an incentive to reissue with the old key and a disincentive to revoke; cost is a factor, since storage and issuance of a new key cost the CAs. People aren’t revoking. See http://securepki.org for more information.
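On the client side, requiring a CRL check can be sketched with Python’s standard-library `ssl` module. The CRL itself must be obtained out of band (e.g., from the CA’s CRL distribution point); the commented file name below is a placeholder, not a real path.

```python
import ssl

# Sketch of client-side revocation checking: require a CRL check for the
# leaf (end-entity) certificate before accepting a TLS connection.

ctx = ssl.create_default_context()
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF

# The CRL must be loaded alongside the CA certificates; without it,
# connections will fail verification. Placeholder file name:
# ctx.load_verify_locations(cafile="ca_certs_and_crl.pem")
```

With `VERIFY_CRL_CHECK_CHAIN` instead, every certificate in the chain is checked against a CRL, at the cost of needing CRLs for every issuing CA.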

(ID#: 15-6148)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.