Hot Topics in the Science of Security: Symposium and Bootcamp (HotSoS) 2018
Hosted by the North Carolina State University (NCSU) Lablet, the 2018 Hot Topics in the Science of Security: Symposium and Bootcamp (HotSoS) was held April 10 and 11 in Raleigh, NC. This was the fifth time researchers have come together to interact and to see presentations demonstrating rigorous scientific approaches to prevent, detect and mitigate cyber threats. A major continuing focus of the conference is the advancement of scientific methods in approaching the Hard Problems in cybersecurity. The agenda included research papers, keynote and invited presentations, industry presentations, and tutorials. A panel discussion and poster sessions rounded out the agenda. Details are provided below.
WELCOME
NCSU Lablet co-PIs Laurie Williams and Munindar Singh welcomed the audience to HotSoS 2018. They pointed out that there were about 120 attendees from government, industry, and academia for the two days of presentations in both research and industry tracks. They noted that the 29 research paper submissions and the 57 poster submissions from 15 universities worldwide indicate the growing interest in HotSoS and the growth in collaboration.
OPENING REMARKS
George Coker, Chief of NSA Information Assurance Research, also welcomed the attendees and challenged them to advocate for the science of security. He pointed out that since cybersecurity is the intersection of multiple disciplines, we need to build on the science of those multiple disciplines to build the science of cybersecurity. He concluded by noting that the Science of Security has grown to the Science of Security and Privacy with six Lablets, all addressing the five Hard Problems, with one focused on privacy and one focused on Cyber-Physical Systems.
RESEARCH PAPERS
Nine refereed papers were selected for presentation out of 29 submissions. The paper tracks were organized into three focus areas: Vulnerabilities and Detection, Secure Construction, and Applications and Risk Evaluation.
Vulnerabilities and Detection
1. Robustness of Deep Autoencoder in Intrusion Detection under Adversarial Contamination
Pooria Madani and Natalija Vlajic, York University
Intrusion detection systems (IDSs) generally use some machine learning (ML) algorithms. However, a sophisticated adversary could target the learning module of these IDSs in order to circumvent future detections. Consequently, robustness of ML-based IDSs against adversarial manipulation (i.e., poisoning) will be a key factor for the overall success of these systems. The authors presented a novel evaluation framework for performance testing under adversarial contamination, studying the viability of using deep autoencoders in the detection of anomalies in adaptive IDSs and their overall robustness against adversarial poisoning.
2. Understanding the Challenges to Adoption of the Microsoft Elevation of Privilege Game
Inger Anne Tøndel, Norwegian University of Science and Technology, and Tosin Daniel Oyetoyan, Martin Gilje Jaatun, and Daniela S. Cruzes, SINTEF Digital
In Norway, adoption of threat modeling and other software security practices is very low. This study used a card game, Microsoft Elevation of Privilege (EoP), to make threat modeling more engaging and accessible to developers. The EoP card game walks players through the details of threat modeling and examines possible threats to software and computer systems. The game focuses on spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege, and uses a simple point system in which players compete to pose the biggest threat to the system under discussion. Results of the study suggest that using the game has the potential to improve security interest and awareness and may also be useful in training for threat modeling.
3. Reinventing the Privilege Drop: How Principled Preservation of Programmer Intent Would Prevent Security Bugs
Ira Ray Jenkins, Sergey Bratus, and Sean Smith, Dartmouth College, and Maxwell Koo, Narf Industries
The principle of least privilege requires that components of a program have access to only those resources necessary for their proper function. Defining proper function is a difficult task.
The authors present the use of their ELF-based access control (ELFbac), a technique for policy definition and enforcement. ELFbac leverages the common programmer's existing mental model of scope and allows for policy definition at the Application Binary Interface (ABI) level.
Secure Construction
4. SecureMR: Secure MapReduce Computation Using Homomorphic Encryption and Program Partitioning
Yao Dong and Ana Milanova, Rensselaer Polytechnic Institute, and Julian Dolby, IBM
As customers upload data and computation to cloud providers, they typically give up data confidentiality. The speakers described SecureMR, a system that analyzes and transforms MapReduce programs to operate over encrypted data. SecureMR uses partially homomorphic encryption and a trusted client; their "secret sauce" is using partially homomorphic rather than fully homomorphic encryption, which reduces overhead. They evaluated SecureMR on a set of complex, computation-intensive MapReduce benchmarks on Google Cloud with good results: in their evaluation, 89% of cases required no conversions.
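The additively homomorphic property SecureMR relies on can be illustrated with a toy example. The sketch below implements textbook Paillier encryption with tiny hard-coded primes; it is an illustration of the kind of primitive involved, not SecureMR's actual code, and the parameters are far too small for real security.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic.
# Primes are tiny and for illustration only -- never use in practice.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    """Paillier's L function: L(x) = (x - 1) / n."""
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts,
# so a server can sum encrypted values without ever decrypting them.
c1, c2 = encrypt(7), encrypt(35)
assert decrypt((c1 * c2) % n2) == 7 + 35
```

A MapReduce reducer holding only `c1` and `c2` could compute the encrypted sum this way, with decryption happening solely at the trusted client.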
5. Integrated Instruction Set Randomization and Control Reconfiguration for Securing Cyber-Physical Systems
Bradley Potteiger, Zhenkai Zhang and Xenofon Koutsoukos, Vanderbilt University
Cyber-Physical Systems (CPS) require proper control reconfiguration mechanisms to prevent a loss of availability in system operation. This presentation addressed the problem of maintaining system and security properties of a CPS under attack by integrating ISR, detection, and recovery capabilities to ensure safe, reliable, and predictable system operation. The authors consider the problem of detecting code injection attacks and reconfiguring the controller in real-time for an autonomous vehicle case.
6. Formal Verification of the W3C Web Authentication Protocol
Iness Ben Guirat, INSAT, and Harry Halpin, INRIA
The formal verification of protocols can set the science of security on firm foundations. Automated validation of new protocol designs allows them to be scientifically compared in a neutral manner. The authors demonstrated how formal verification can be used to analyze new protocols such as the W3C Web Authentication protocol.
Applications and Risk Evaluation
7. Application of Capability-Based Cyber Risk Assessment Methodology to a Space System
Martha McNeil, Thomas Llanso and Dallas Pearson, Johns Hopkins University Applied Physics Laboratory
Cyber threats remain a growing concern, requiring stakeholders to perform cyber risk assessments in order to understand potential mission impacts. The authors presented an automated, capability-based risk assessment approach called BluGen, compared it to manual event-based analysis approaches, and described its application to a notional space system ground segment along with the results. BluGen assesses mission impact for every combination of mission, asset, data type, and effect; effectiveness is derived from reference catalog data, which is currently immature, and the resulting risk plots are reusable. Their goal is metrics rather than expert judgment, with the objective of getting humans out of the loop. They acknowledged limitations in what they have produced so far: they are testing it now but need more and better data.
8. Challenges and Approaches of Performing Canonical Action Research in Software Security
Daniela S. Cruzes, Martin Gilje Jaatun, and Tosin Daniel Oyetoyan, SINTEF Digital
The objective of this research was to develop a research-based model of security engineering for agile software development, grounded in the science of security. Their methodology was to create software that can withstand malicious attack by establishing work processes for handling security issues, in order to assure that security is addressed by the software team. Canonical Action Research (CAR) has well-defined principles and is based on a 2004 work by Davison et al. in the Information Systems Journal. The principles of this methodology include a researcher-client agreement, a cyclical process model, theory, change through action, and learning through reflection. Challenges to implementation include building trust, data collection, analysis of data, security, use of other theories (not just technical), and metrics to measure success in adding security.
9. Quantifying the Security Effectiveness of Firewalls and DMZs
Huashan Chen and Shouhuai Xu, University of Texas at San Antonio, and Jin-Hee Cho, Army Research Lab
The authors present a framework for investigating the security effectiveness of Firewalls and Demilitarized Zones (DMZs) in protecting enterprise networks. Their objective is to provide a systematic, fine-grained framework for modeling firewalls and DMZs by treating an entire enterprise network as a whole and by treating individual applications and operating system functions as "atomic" entities. They are accommodating realistic, APT-like attacks. Their global view, they assert, allows them to quantify the network-wide effectiveness of replacing one mechanism with an improved mechanism.
KEYNOTES
1. Foundational Cybersecurity Research: Report of a Study by NASEM
Steve Lipner, Executive Director, SAFECode
From 2012-2014, a committee of the National Academies of Sciences, Engineering, and Medicine (NASEM) conducted a study at the request of NSA Information Assurance Research to look at the science of cybersecurity. The committee included representatives of academia as well as cybersecurity practitioners from industry, and the report reviewers were well-known academicians in the field. The study noted that despite investments, significant problems remained and old approaches hadn’t been adequate for a number of reasons, including asymmetry, difficult routes for solution adoption, system complexity, and risk aversion. The study identified four broad aims for cybersecurity research: strengthening the scientific underpinnings of cybersecurity; integrating the social, behavioral, and decision sciences in security science; integrating engineering, operational, and life-cycle challenges in security science; and supporting and sustaining foundational research for security science. The speaker addressed the following institutional challenges and opportunities: demand SoS standards; support joint projects across disciplines; emphasize operational and lifecycle perspectives in design and evaluation; and integrate with business cases to support adoption. He concluded by calling for scientific rigor, interdisciplinary approaches, and real-world applications as well as theory.
2. You’ve Got a Vuln, I’ve Got a Vuln, Everybody’s Got a Vuln
Ari Schwartz, Venable LLP
The speaker addressed vulnerability disclosure policies and how they affect government, academic, and private security research. Existing vulnerability standards focus on what vendors do when they receive notification of vulnerabilities, and he advocates for standards for Coordinated Vulnerability Disclosure (CVD) that would go beyond vendors. He noted that the Vulnerabilities Equities Process (VEP) had been reinvigorated following media leaks, and he addressed criticisms of and recommendations to improve the VEP. He noted the following issues for the future of CVD:
· How to adapt CVD
· How to encourage research in the right areas while limiting researcher liabilities
· Can increased government hacking, with oversight, make up for lost data from greater end-to-end encryption?
3. Cyber Security for Aviation Weapon Systems
David Burke, Technical Director NAVAIR Cyber Warfare Detachment
The Naval Air Systems Command (NAVAIR) is the acquisition arm for Naval Aviation and is responsible for the cybersecurity posture of all naval weapons systems, systems which represent a diversity of both information and operational technologies. Dr. Burke said that NAVAIR wants to be able to take advantage of advances in cybersecurity, but that poses a difficulty given the number of legacy systems in the inventory. He also addressed the challenge of how to take people who understand military aviation and enable them to deal with cybersecurity challenges, discussing his efforts to train hackers--those who can make systems work better through tweaks and shortcuts. He believes that an opportunity for academic research is how to quantify risk in cybersecurity, specifically as risk relates to CPS.
4. An Access Control Perspective on the Science of Security
Ravi Sandhu, University of Texas at San Antonio
Dr. Sandhu provided comparisons between cybersecurity and the physical sciences but proposed that cybersecurity is an inherently different science and shouldn’t be compared to natural sciences. He believes that there are stronger parallels with medicine and perhaps economics. He argued against the traditional boundary between basic and applied research for cybersecurity and suggested teams that address both aspects. Echoing a point made by many of the other speakers, Dr. Sandhu noted that cybersecurity is asymmetric, and that makes it unique. He discussed the evolution of Access Control and suggested that cybersecurity can learn from the Access Control environment. He concluded by emphasizing the need to combine basic and applied research, treat cybersecurity holistically, and draw inspiration from other sciences without depending on the comparisons.
INVITED PRESENTATIONS
1. Building a Virtually Air-gapped Secure Environment in AWS
Erkang Zheng, Phil Gates-Idem and Matt Lavin, LifeOmic, Inc.
The speakers talked about the work their company is doing building a platform on top of the cloud dealing with health care information where, because of the sensitivity of data, security is critical. They addressed the ten principles on which their work was built, including assuming the cloud is secure, assuring no single point of compromise, engaging everyone, and automation, and talked about the unique aspects of their program, including lessons-learned and future development.
2. You Get Where You’re Looking For: The Impact of Information Sources on Code Security
Michelle Mazurek, University of Maryland
The presented paper was the winner of the NSA Fifth Annual Best Scientific Cybersecurity Paper Competition, by Yasemin Acar, Michael Backes, Sascha Fahl, Doowon Kim, Michelle L. Mazurek, and Christian Stransky, of Saarland University in Germany and the University of Maryland, College Park in the United States. The paper was presented at the 2016 IEEE Symposium on Security and Privacy.
Author Michelle Mazurek gave a presentation on the research, which was inspired by a common problem: why do software developers write programs that have security vulnerabilities? When software developers get "stuck," they often turn to resources such as Stack Overflow to find solutions. Unfortunately, many of the posted solutions are not secure. The researchers investigated how the information sources available to developers influence their ability to program quickly and to program securely, exploring developers’ problem-solving choices and the impact on the software ecosystem. They noticed that an unsettling number of Android apps used readily available, and insecure, code snippets.
They studied 54 developers, both professionals and students, in Germany and the United States in a controlled laboratory setting, having them write security- and privacy-relevant code under time constraints. They examined four conditions: developers were allowed to use 1) any source (free choice); 2) Stack Overflow only; 3) official Android documentation only; or 4) books only. After describing this methodology, the speaker reviewed the findings on both functional correctness and security correctness.
The researchers found that official API documentation is secure but hard to use, while informal documentation such as Stack Overflow is more accessible but often leads to insecurity. Interestingly, books (the only paid resource) perform well for both security and functionality, but are rarely used. While suggesting that project managers should "take developers offline and give them a book," they chose to explore a more practical solution: since Stack Overflow provides quick functional solutions but is less secure, they developed several ideas to integrate both aspects.
They noted that while professionals tended to produce functional code more reliably, they were no better than the students at security.
3. Microarchitectural Attacks: From the Basics to Arbitrary Read and Write Primitives without any Software Bugs
Daniel Gruss, Graz U. of Technology
The speaker used the analogy of a safe to explain how systems may give clues to an attacker and provided multiple examples. He demonstrated the Meltdown attack, which exploits out-of-order execution combined with flush-and-reload cache attacks. He also demonstrated the Rowhammer attack, in which DRAM cells leak charge faster when proximate rows are accessed repeatedly. He noted that microarchitectural attacks have been ignored in the past. Finally, he suggested that what we have learned from these attacks is an opportunity to rethink processor design, to "grow up" as other fields have done, to find trade-offs between security and performance, and to spend more time identifying problems rather than mitigating known ones.
PANEL DISCUSSION
Four Cybersecurity Framework (CSF) Practitioners participated in a panel discussion moderated by Nikola Vouk. The NIST CSF was described as the result of a collaborative effort with government and industry to identify what matters, figure out how to protect it, know when things go bad, and know what to do to correct them.
Jeremy Maxwell, Allscripts, a national provider of health IT, led off with a description of his company’s approach to incident response and risk management, noting that the company went with an ISO approach rather than the CSF because they view ISO as a holistic program that helps in preparation for incidents.
Andrew Porter, Merck Pharmaceuticals, said that Merck's risk-management analytics group was one of the participants that worked on the CSF. He noted that the group is having varying levels of success with the framework within the company. The primary benefit has been the development of a common language, which they chose to frame in business terms with financial outcomes.
Alex Rogozhin, BB&T Bank, described his team as looking for data intelligence and security and a data-driven way to assess risk. In searching for a better reporting framework to keep their leadership aware of issues, they chose CSF. Their main difficulty has been practical implementation. They are trying to be compliant, which does not necessarily also mean secure.
Greg Witte, G2, said NIST was charged to standardize and coordinate cybersecurity and, as a result, developed the CSF. He noted that the framework doesn’t provide a lot of guidance since NIST wanted to avoid squashing innovation, and the design criteria included being flexible, agile, and applicable to companies of many sizes. The CSF aims to drive discussions about the "what is and what should be" and is purposely designed not to be prescriptive.
POSTERS
The posters presented at HotSoS 2018 were:
1. A Comparative Analysis of Manual Methods for Analyzing Security Requirements in Regulatory Documents
Sarah Elder and Anna Mattapallil, North Carolina State University
This presentation is designed to assist analysts in selecting an appropriate approach for developing security requirements from regulatory documents by comparing the output of approaches from academic publications with similar outputs from industry. Initial results show that there is wide variance in how information is aggregated from security regulations at the requirement level.
2. An Expert-Based Bibliometric for a Science of Security
Lindsey McGowen and Angela Stoica, North Carolina State University
The research objective was to develop a scalable bibliometric customized for the Science of Security that would address limitations of existing citation-based bibliometrics. Existing citation databases do not adequately capture conferences and workshops where security researchers often publish, nor are they adaptive enough to be used with emerging fields of study. Computer science databases such as CiteSeerX and DBLP fall short of capturing venues appropriate for disseminating multidisciplinary research. Any citation-based metric will be a lagging indicator for fields that evolve at an extraordinarily fast pace. Expert-based review is a preferred method for evaluating faculty in computer science, and it may be usefully applied to the evaluation of publications. Their expert-based method shows potential for developing custom bibliometrics for evaluating publication venues in emerging and multidisciplinary fields.
3. Cryptography in a Post-Quantum World
Katharine Ahrens, North Carolina State University
Quantum resilience makes lattice-based hard problems a leading candidate for implementation in future public key cryptographic schemes. Lattice cryptosystems can offer both encryption schemes (to securely transmit data from sender to receiver) and signature schemes (used for a receiver to verify that information actually originated from the claimed sender). This presentation gives an overview of past attempts to approach a lattice hard problem known as the Shortest Vector Problem (SVP) in a class of ideal lattices generated using the cyclotomic integers, a type of mathematical object known as a ring. The poster includes preliminary results on the security of the SVP in ideal lattices generated in a previously unstudied ring and discusses the practicality of using that ring in place of the cyclotomic integers in some lattice cryptosystems.
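As a reference point, the Shortest Vector Problem can be stated compactly; the standard formulation (not the poster's exact ring-specific variant) is:

```latex
\text{Given a lattice } \mathcal{L}(B) = \{ Bx : x \in \mathbb{Z}^n \}
\text{ generated by a basis } B \in \mathbb{R}^{m \times n},\quad
\text{find } v \in \mathcal{L}(B) \setminus \{0\} \text{ minimizing } \|v\|.
```

The hardness of this search, even with quantum computers, is what underpins the security of lattice-based encryption and signature schemes.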
4. Detecting Monitor Compromise Using Evidential Reasoning
Uttam Thakore, University of Illinois at Urbana-Champaign
This poster demonstrates a data-driven technique to detect monitor compromise using evidential reasoning. Since hiding from multiple sensors is difficult for an attacker, the technique combines alerts from different sensors using Dempster-Shafer theory to identify potential monitor compromise, then compares the results to find outliers.
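The core combination step can be sketched in a few lines. The example below is a minimal illustration of Dempster's rule over a two-element frame ("monitor ok" vs. "monitor compromised"); the sensor mass values are invented for illustration and are not from the poster.

```python
# Dempster's rule of combination over the frame {"ok", "bad"}.
# Mass functions assign belief to subsets of the frame, encoded as
# frozensets; the full frame THETA represents total ignorance.
OK, BAD = frozenset({"ok"}), frozenset({"bad"})
THETA = OK | BAD

def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                # Empty intersection: the two pieces of evidence conflict
                conflict += w1 * w2
    # Normalize out the conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sensors each weakly suspect monitor compromise ...
sensor1 = {BAD: 0.6, THETA: 0.4}
sensor2 = {BAD: 0.7, THETA: 0.3}
fused = combine(sensor1, sensor2)
# ... but the fused belief in compromise exceeds either source alone.
assert fused[BAD] > 0.7
```

The appeal for monitor-compromise detection is exactly this reinforcement: independent weak evidence from sensors an attacker cannot all evade accumulates into strong combined belief.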
5. Ethics, Values and Personal Agents
Nirav Ajmeri, North Carolina State University
The research question addressed is "How can we engineer an ethical Socially Intelligent Personal Agent (SIPA) such that it understands its user’s preferences among values, and reasons about values to make ethical policy decisions?" They developed Ainur, a framework for engineering value-driven, ethical SIPAs that can make value-promoting ethical decisions, especially in scenarios where the applicable norms conflict.
6. Exploring the Raspberry Pi for Data Summarization in Wireless Sensor Networks
Andres Alejos, Matthew Ball, Conner Eckert, Michael Ma, Hayden Ward, Peter Hanlon, and Suzanne J. Matthews, USMA
Single board computers are good candidates for at-node data summarization tasks in a wireless sensor network. Reducing data transfer in a wireless sensor network is critical for energy efficiency and improved latency. This poster shows the viability of a wireless sensor network composed of Raspberry Pis for video and audio summarization tasks. Contributions include a novel sensor and gateway node design and a user interface implemented as an Android App.
7. Hourglass-Shaped Architecture for Model-Based Development of Safe and Secure Cyber-Physical Systems
Muhammad Umer Tariq and Marilyn Wolf, Georgia Tech
This proposed approach is inspired by the hourglass-shaped architecture of the Internet. It can support the goals of an integrated CPS theory and development methodology while taking into account the differences between the domain-specific skillsets that control system engineers and embedded system engineers typically possess. The poster also outlines the CPS-related safety and security requirements that the proposed hourglass-shaped architecture for networked CPS development must meet.
8. How Bad Is It, Really? An Analysis of Severity Scores for Vulnerabilities
Christopher Theisen and Laurie Williams, North Carolina State University
In this presentation, a distribution of 2,979 vulnerabilities mined from Fedora 24 and 25 was analyzed using a high-medium-low severity evaluation rather than the usual binary vulnerability/no-vulnerability method. The authors also verify that security vulnerabilities reported publicly are actual vulnerabilities and use keyword searches to identify bugs that should be included in vulnerability datasets.
9. Indirect Cyber Attacks by Perturbation of Environment Control: A Data-Driven Attack Model
Keywhan Chung, Zbigniew T. Kalbarczyk, and Ravishankar K. Iyer, University of Illinois at Urbana-Champaign
The indirect attack model targets a supercomputer by perturbing the control of a Cyber-Physical System (CPS) responsible for maintaining the operational environment. The authors’ approach consists of four steps: data preparation, parameter analysis, inference of critical condition, and validation. Initial results indicate that their approach would have effectively identified two CPS-related incidents: a chilled water leakage at the construction site of a new building, which could have caused an outage of the computing infrastructure, and a maintenance operation on the campus chilled water loop, which shut down a set of cabinets of the computer infrastructure.
10. Integrating Historical and Real-Time Anomaly Detection to Create a More Resilient Smart Grid Architecture
Spencer Drakontaidis, Michael Stanchi, Gabriel Glazer, Madison Stark, Caleb Clay, Jason Hussey, Nick Barry, Aaron St. Leger, Suzanne J. Matthews, USMA
The authors developed a novel MapReduce algorithm to detect anomalies in historical grid data that leverages the cluster computing framework Apache Spark. The algorithm checks a sliding "window" of data for power fluctuations that meet the criteria of constraint and temporal anomalies described by Matthews and St. Leger. Experimentation was performed on a 36-core compute node on a supercomputer with a dataset of 1 million real measurements collected from their test bed. Preliminary results show that the algorithm is capable of detecting constraint and temporal anomalies simultaneously.
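The windowed constraint/temporal distinction can be sketched without Spark. The single-node Python sketch below is only an illustration of that idea: the window size, voltage band, and step threshold are invented for the example and are not the authors' parameters or algorithm.

```python
# Sliding-window anomaly check over grid measurements (per-unit voltage).
# All thresholds below are illustrative assumptions.
WINDOW = 5                  # samples per window
V_MIN, V_MAX = 0.95, 1.05   # constraint bounds
MAX_STEP = 0.03             # largest allowed sample-to-sample change

def find_anomalies(samples):
    """Return (window_start, kind) pairs for anomalous windows."""
    anomalies = []
    for i in range(len(samples) - WINDOW + 1):
        window = samples[i:i + WINDOW]
        # Constraint anomaly: a sample outside the allowed band
        if any(not (V_MIN <= s <= V_MAX) for s in window):
            anomalies.append((i, "constraint"))
        # Temporal anomaly: too large a jump between adjacent samples
        elif any(abs(b - a) > MAX_STEP for a, b in zip(window, window[1:])):
            anomalies.append((i, "temporal"))
    return anomalies

readings = [1.00, 1.01, 1.00, 1.00, 1.09, 1.00, 1.00, 1.02, 1.00, 1.00]
hits = find_anomalies(readings)   # every window containing 1.09 flags "constraint"
assert all(kind == "constraint" for _, kind in hits)
```

In the Spark setting, each window check is independent, so windows can be distributed across cores as map tasks and the flagged windows collected in a reduce step.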
11. Investigating TensorFlow for Airport Facial Identification
Nikolay Shopov, Mingu Jeong, Evin Rude, Brennan Nessaralla, Scott Hutchison, Alexander Mentis, and Suzanne J. Matthews, USMA
The authors describe a facial identification approach that can be deployed at airports. Their contributions include facial identification software built on top of Google’s TensorFlow framework; a data collection scheme that can be implemented at airports nationally; and a user interface for collecting data.
12. Quantifying the Security Effectiveness of Network Diversity
Huashan Chen and Shouhuai Xu, University of Texas at San Antonio
This poster demonstrates a framework that quantifies the security effectiveness of network diversity in computer networks. The potential value of enforcing diversity in networks is well recognized, but security effectiveness of enforcing network diversity has not yet been quantified. In this work, they propose a systematic, fine-grained framework for modeling the diversification of software stacks in networks and quantifying network diversity security effectiveness using a suite of security metrics.
13. Quantitative Underpinnings of Secure Graceful Degradation
Ryan Wagner, David Garlan, Matt Fredrikson, Carnegie Mellon University
Defenders need a way to reason about and react to the impact of an attacker with existing presence in a system. It may not be possible to maintain one hundred percent of the system’s original utility; instead, the defender might need to gracefully degrade the system, trading off some functional utility to keep the attacker away from the most critical functionality.
14. Ransomware Research Framework
Dan Wolf and Don Goff, Cyber Pack Ventures, Inc.
This research presented a series of joint efforts designed to produce a framework for studying ransomware. The seven contributors addressed detection, response, mitigation, consequences and attribution, as well as encryption, and an approach to modeling the problem from the behavioral viewpoint of criminology.
15. Toward Extraction of Security Requirements from Text
Özgür Kafalı, University of Kent; Anne-Liz Jeukeng, University of Florida; Laurie Williams, Hui Guo, and Munindar P. Singh, North Carolina State University
The goal of this research was to produce improved security and privacy requirements that accommodate both social and technical considerations. Their framework combines crowdsourcing with automated methods to do so, incorporating knowledge from post-deployment artifacts such as breach reports.
16. Understanding Privacy Concerns of Whatsapp Users in India
Jayati Dev, Sanchari Das, and Dr. L. Jean Camp, Indiana University Bloomington
WhatsApp is a leading platform for mobile messaging with the largest user base being in India, yet research on Indian perspectives towards privacy and security in social networking platforms is sparse. WhatsApp incorporates features which pose privacy challenges, including Last Seen, Live Location, and personal profile information. The researchers implemented a survey, querying both privacy attitudes and privacy behaviors, with 213 Indian participants. They found the majority of participants reported that they actively use the privacy controls provided by WhatsApp to restrict access to their information. They provide visualizations of the raw results and initial recommendations.
17. Using Object Capabilities and Effects to Build an Authority-Safe Module System
Darya Melicher, Yangqingwei Shi, Valerie Zhao, Wellesley College; Alex Potanin, Victoria University of Wellington; and Jonathan Aldrich, Carnegie Mellon University
The research team designed and implemented a capability-based module system that facilitates controlling the security capabilities of software modules. Their approach ensures that a software system maintains the principle of least authority and also allows for attenuation of module authority. This design is implemented as part of the Wyvern programming language.
18. What Proportion of Vulnerabilities Can Be Attributed to Ordinary Coding Errors?
Rick Kuhn and Raghu Kacker, National Institute of Standards and Technology and Mohammad Raunak, Loyola University
The key question the authors sought to address is the degree to which vulnerabilities arise from ordinary programming errors that could be detected in code reviews and functional testing, rather than post-release. Findings include that the proportion of high-severity vulnerabilities has trended downward, declining about 15 percentage points over the last ten years, with about two-thirds of that fraction shifting to medium-severity vulnerabilities. Implementation or coding errors account for roughly two-thirds of the total. They consider the proportion of implementation vulnerabilities, rather than absolute numbers, because the number of vulnerabilities is partially a function of the number of applications released, which has increased over time. Implementation vulnerabilities for 2008-2016 are close to the 64% reported for 1998-2003. This high proportion suggests that little progress has been made in reducing vulnerabilities caused by simple mistakes, and that more extensive use of static analysis tools, code reviews, and testing could lead to significant improvement.
INDUSTRY PRESENTATIONS
DevSecOps: Security at the Speed of Software Development
Larry Maccherone, Comcast
Mr. Maccherone referenced Mr. Lipner’s morning keynote presentation in addressing obstacles to adoption, noting that "bolt-on" security performed by security specialists won’t scale, so security must be a primary concern during development. He defined DevSecOps (DSO) as empowered engineering teams taking ownership of how their product performs in production, including its security. He described a three-part framework for adopting new practices and DSO culture change: 1) adopt principles acceptable to lean/agile development teams; 2) make it easy for DSO teams both to understand what the right thing is and to actually do it; and 3) engage management.
HACSAW: A Trusted Framework for Cyber Situational Awareness
William Glodeck, Department of Defense
Mr. Glodeck discussed the DoD High Performance Computing (HPC) modernization program and the HPC Architecture for Cyber Situational Awareness (HACSAW). The HACSAW initiative is a multi-disciplinary, multi-year project that examines the applicability of HPC to cyber SA using the most comprehensive cybersecurity dataset available to the DoD R&D community. The goal of HACSAW is to meet mission-essential tasks and reduce barriers to data and computing resources.
Compliance as Code: Policy-Governed Automated Security Checkpoints
Nikola Vouk, Independent, and David González, nearForm
The presentation centered on how to move away from the stage-gate model and work at the speed of development. Mr. Vouk noted that governance currently happens at a stage gate at the end of development, where there are not enough resources: roughly one security person per 75 developers. He suggested reframing that ratio as one teacher per 75 students. He proposed an automated governance workflow, still with some manual steps, while noting that manual steps need to be minimized. Mr. González provided brief demos showing how policies can drive metrics and visualization on a dashboard.
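The automated-governance idea can be illustrated with a minimal "compliance as code" sketch: security policy written as executable checks that gate a pipeline automatically rather than at a manual stage gate. The policy names and manifest fields below are hypothetical illustrations, not the presenters' actual tooling.

```python
# Illustrative sketch of "compliance as code": policies are plain functions
# evaluated automatically in CI. All names here are hypothetical examples.

def no_root_containers(manifest):
    # Policy: workloads must not run as root.
    return not manifest.get("run_as_root", False)

def tls_required(manifest):
    # Policy: network endpoints must have TLS enabled.
    return manifest.get("tls", {}).get("enabled", False)

POLICIES = {
    "no-root-containers": no_root_containers,
    "tls-required": tls_required,
}

def evaluate(manifest):
    """Run every policy and return per-policy results plus an overall
    gate decision; the results dict could feed a metrics dashboard."""
    results = {name: check(manifest) for name, check in POLICIES.items()}
    return {"results": results, "approved": all(results.values())}

report = evaluate({"run_as_root": False, "tls": {"enabled": True}})
print(report["approved"])  # True when all policies pass
```

Keeping each policy as a small named function makes the per-policy pass/fail results straightforward to chart on a dashboard, which matches the demos' emphasis on metrics.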
TUTORIALS
Combinatorial Security Testing Course
Rick Kuhn, NIST, and Dimitris Simos, SBA Research
The tutorial explained the background, process, and tools available for combinatorial testing for security, including illustrations based on industry’s experience with the method. Mr. Kuhn presented the basics of combinatorial testing: what it is, how it works, and why it works. He noted that software testing may account for up to half of overall software development cost, and that there is still a need to estimate the residual risk that remains after testing. The talk formulated software security testing as a combinatorial problem, citing the need for empirical data to inform assumptions. Characterizing Combinatorial Security Testing (CST) as large-scale software testing for security, he noted that CST can make software security testing more efficient and effective than conventional approaches. Dr. Simos gave examples of how CST is used in real-world scenarios, developed models to demonstrate CST, provided case studies, and addressed experimental evaluation using different frameworks for different scenarios.
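As an illustration of the core idea (not material from the tutorial itself), the sketch below greedily builds a 2-way (pairwise) test suite for a hypothetical set of configuration parameters: every pair of parameter values appears together in at least one test, using far fewer tests than exhaustive enumeration.

```python
from itertools import combinations, product

# Hypothetical test parameters, chosen only for illustration.
params = {
    "os":      ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "ipv6":    [True, False],
    "auth":    ["token", "password"],
}

def pairwise_suite(params):
    """Greedily pick full configurations until every pair of parameter
    values is covered at least once (2-way combinatorial coverage)."""
    names = list(params)
    # All (param, value) pairs that must co-occur in some test.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))
    suite = []
    while uncovered:
        # Pick the candidate configuration covering the most new pairs.
        best, best_new = None, -1
        for values in product(*(params[n] for n in names)):
            config = dict(zip(names, values))
            new = sum(1 for (a_kv, b_kv) in uncovered
                      if config[a_kv[0]] == a_kv[1]
                      and config[b_kv[0]] == b_kv[1])
            if new > best_new:
                best, best_new = config, new
        suite.append(best)
        uncovered = {p for p in uncovered
                     if not (best[p[0][0]] == p[0][1]
                             and best[p[1][0]] == p[1][1])}
    return suite

suite = pairwise_suite(params)
exhaustive = 3 * 2 * 2 * 2  # 24 configurations if tested exhaustively
print(len(suite), "tests instead of", exhaustive)
```

For these four parameters, pairwise coverage needs only a handful of tests instead of 24; the gap widens rapidly as parameters are added, which is the efficiency argument behind CST.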
Applying the Framework for Improving Critical Infrastructure Cybersecurity
Greg Witte, G2
Mr. Witte provided a history of the Cybersecurity Framework (CSF), pointing out that an Executive Order made the CSF applicable to all sectors, and that the framework was developed in partnership among industry, academia, and government. While there are multiple frameworks to leverage for cybersecurity, the CSF establishes a common language within organizations and among external partners, providing a good way to organize and communicate. He described the three components of the CSF (core, profiles, and implementation tiers) and how they combine to provide a holistic approach. He further described the seven steps in implementing the CSF and addressed the changes in the updated version. Finally, he described resources that can help organizations implement the framework.