Content Moderation and the Problem of Meshed Cognition

Submitted by Katie Dey

Adam Hill is a Postdoctoral Fellow at the UC-Berkeley School of Information focusing on the regulation of technology and the relationship between human and technical norms. He received his PhD in behavioral economics and social policy from Berkeley and a JD from the NYU School of Law. Adam has both public and private sector experience, having worked for Paul, Weiss, Rifkind, Wharton & Garrison in New York and the USDA and the U.S.

Reasoning about Accidental and Malicious Misuse via Formal Methods
Lead PI:
Munindar Singh
Abstract

This project seeks to aid security analysts in identifying and protecting against accidental and malicious actions by users or software. It applies automated reasoning over unified representations of user expectations and software implementations to identify misuses that are sensitive to usage and machine context.
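To make the idea concrete, here is a minimal sketch in Python of checking observed actions against context-sensitive expectations. It is only an illustration of the general approach described above; the `Prohibition` and `Event` structures, the example norm, and the contexts are hypothetical and do not reflect the project's actual formal representation or reasoning engine.

```python
# Minimal illustrative sketch (not the project's formalism): expectations are encoded
# as declarative prohibitions whose applicability depends on usage and machine context,
# and observed actions are checked against them to flag potential misuse.
from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, str]

@dataclass
class Prohibition:
    name: str
    action: str                          # action the norm governs
    applies: Callable[[Context], bool]   # context-sensitive applicability test

@dataclass
class Event:
    user: str
    action: str
    context: Context

def flag_misuses(events: List[Event], norms: List[Prohibition]) -> List[str]:
    """Return human-readable flags for events that violate an applicable prohibition."""
    flags = []
    for ev in events:
        for norm in norms:
            if ev.action == norm.action and norm.applies(ev.context):
                flags.append(f"{ev.user}: '{ev.action}' violates {norm.name} "
                             f"in context {ev.context}")
    return flags

# Hypothetical example: copying records to removable media is prohibited on unmanaged machines.
norms = [Prohibition("no-export-unmanaged", "copy_to_usb",
                     lambda ctx: ctx.get("machine") == "unmanaged")]
events = [Event("alice", "copy_to_usb", {"machine": "unmanaged"}),
          Event("bob", "copy_to_usb", {"machine": "managed"})]
print(flag_misuses(events, norms))   # flags only alice's action
```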

Munindar Singh

Dr. Munindar P. Singh is Alumni Distinguished Graduate Professor in the Department of Computer Science at North Carolina State University. He is a co-director of the DoD-sponsored Science of Security Lablet at NCSU, one of six nationwide. Munindar’s research interests include computational aspects of sociotechnical systems, especially as a basis for addressing challenges such as ethics, safety, resilience, trust, and privacy in connection with AI and multiagent systems.

Munindar is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), ACM (Association for Computing Machinery), and IEEE (Institute of Electrical and Electronics Engineers), and was elected a foreign member of Academia Europaea (honoris causa). He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award. He won NC State University’s Outstanding Graduate Faculty Mentor Award as well as the Outstanding Research Achievement Award (twice). He was selected as an Alumni Distinguished Graduate Professor and elected to NCSU’s Research Leadership Academy.

Munindar was the editor-in-chief of the ACM Transactions on Internet Technology from 2012 to 2018 and the editor-in-chief of IEEE Internet Computing from 1999 to 2002. His current editorial service includes IEEE Internet Computing, Journal of Artificial Intelligence Research, Journal of Autonomous Agents and Multiagent Systems, IEEE Transactions on Services Computing, and ACM Transactions on Intelligent Systems and Technology. Munindar served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. He previously served on the editorial board of the Journal of Web Semantics. He also served on the founding steering committee for the IEEE Transactions on Mobile Computing. Munindar was a general co-chair for the 2005 International Conference on Autonomous Agents and MultiAgent Systems and the 2016 International Conference on Service-Oriented Computing.

Munindar’s research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.

Twenty-nine students have received Ph.D. degrees and thirty-nine students MS degrees under Munindar’s direction.

Institution: North Carolina State University
Sponsor: National Security Agency
Characterizing user behavior and anticipating its effects on computer security with a Security Behavior Observatory
Abstract

Systems that are technically secure may still be exploited if users behave in unsafe ways. Most studies of user behavior take place either in controlled laboratory settings or as large-scale between-subjects measurements in the field. Both methods have shortcomings: lab experiments are not conducted in natural environments and therefore may not accurately capture real-world behaviors (i.e., they have low ecological validity), whereas large-scale measurement studies do not allow researchers to probe user intent or gather explanatory data for observed behaviors, and they offer limited control over confounding factors. The Security Behavior Observatory (SBO) addresses this gap through a panel of participants who consent to our observing their daily computing behavior in situ, so that we can understand what constitutes "insecure" behavior. We use the SBO to address a number of research questions, including: 1) What are the risk indicators of a user's propensity to be infected by malware? 2) Why do victims fail to update vulnerable software in a timely manner? 3) How can user behavior be modeled with respect to security and privacy "in the wild"?
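As an illustration of the kind of analysis such in-situ data enables (research question 2), the sketch below estimates each participant's patch lag from observed software versions. The data layout, field names, and values are hypothetical examples, not the SBO's actual schema or findings.

```python
# Illustrative sketch only: given in-situ observations of installed software versions,
# estimate each participant's patch lag -- the days between a fix's release and the
# first observation of the fixed version on that participant's machine.
from datetime import date
from typing import Dict, List, Tuple

# (participant, software, version, first_observed) -- hypothetical observations
observations: List[Tuple[str, str, str, date]] = [
    ("p01", "browser", "101.0", date(2018, 5, 2)),
    ("p01", "browser", "102.0", date(2018, 7, 9)),
    ("p02", "browser", "102.0", date(2018, 6, 1)),
]
# software -> (patched_version, patch_release_date) -- hypothetical patch metadata
patches: Dict[str, Tuple[str, date]] = {"browser": ("102.0", date(2018, 5, 29))}

def patch_lag_days(obs, patch_info) -> Dict[str, int]:
    """Days from patch release to first observed install of the patched version, per participant."""
    lags: Dict[str, int] = {}
    for participant, software, version, seen in obs:
        patched_version, released = patch_info.get(software, (None, None))
        if version == patched_version:
            lag = (seen - released).days
            lags[participant] = min(lag, lags.get(participant, lag))
    return lags

print(patch_lag_days(observations, patches))   # {'p01': 41, 'p02': 3}
```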

Performance Period: 01/01/2018 - 01/01/2018
Institution: Carnegie Mellon University
Sponsor: National Security Agency
Secure Native Binary Execution
Lead PI:
Prasad Kulkarni
Abstract

Typically, securing software is the responsibility of the software developer. The customer or end-user of the software does not control or direct the steps the developer takes to employ best-practice coding styles or mechanisms to ensure software security and robustness. Current systems and tools also do not give the end-user a way to determine the level of security of the software they use. At the same time, any flaws or security vulnerabilities ultimately affect the end-user of the software. Therefore, our overall project aim is to provide greater control to the end-user to actively assess and secure the software they use.

Our project goal is to develop a high-performance framework for client-side security assessment and enforcement for binary software. Our research is developing new tools and techniques to: (a) assess the security level of binary executables, and (b) enhance the security level of binary software, when and as desired by the user, to protect the binary against various classes of security issues. Our approach combines static and dynamic techniques to achieve efficiency, effectiveness, and accuracy.
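As a rough illustration of client-side security assessment, the sketch below probes an ELF binary for common hardening features (PIE, non-executable stack, RELRO, stack canaries) by scanning `readelf` output, in the spirit of tools like checksec. It is a simplified heuristic, not the framework developed in this project.

```python
# Crude heuristic assessment of a native binary's hardening features via readelf output.
# Substring checks are intentionally simple; a real assessment tool would parse the
# headers properly and cover many more properties.
import subprocess

def readelf(flag: str, path: str) -> str:
    return subprocess.run(["readelf", flag, path],
                          capture_output=True, text=True).stdout

def assess(path: str) -> dict:
    header   = readelf("-h", path)   # ELF header
    segments = readelf("-l", path)   # program headers
    dynamic  = readelf("-d", path)   # dynamic section
    symbols  = readelf("-s", path)   # symbol tables
    return {
        # Position-independent executables are of type DYN (also matches shared libraries).
        "PIE": "DYN" in header,
        # GNU_STACK present and no segment mapped writable+executable (no "RWE" flags).
        "NX stack": "GNU_STACK" in segments and "RWE" not in segments,
        # Read-only relocations; a NOW/BIND_NOW dynamic entry indicates full RELRO.
        "RELRO": "GNU_RELRO" in segments,
        "Full RELRO": "NOW" in dynamic,
        # Stack canaries leave a reference to __stack_chk_fail.
        "Stack canary": "__stack_chk_fail" in symbols,
    }

if __name__ == "__main__":
    print(assess("/bin/ls"))
```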

Prasad Kulkarni
Institution: University of Kansas
Sponsor: National Security Agency
Secure Native Binary Executions
Lead PI:
Prasad Kulkarni
Abstract

Cyber-physical systems (CPS) routinely employ commercial off-the-shelf (COTS) applications and binaries to realize their overall system goals. COTS applications for CPS are typically written in unsafe languages, such as C/C++ or assembly. Such programs are often plagued with memory and other vulnerabilities that attackers can exploit to compromise the system.

Many issues need to be explored and resolved to provide security in this environment. For instance: (a) different systems may desire distinct and customizable levels of protection for the same software; (b) different systems may have varying tolerances for the performance and/or timing penalties imposed by existing security solutions, so a solution applicable in one case may not be appropriate for another; (c) multiple solutions to the same vulnerability or attack may impose varying levels of security and performance penalty, and these tradeoffs and comparisons with other potential solutions are typically unknown or unavailable to users; and (d) solutions to newly discovered attacks and improvements to existing solutions continue to be devised, yet there is currently no efficient mechanism to retrofit existing application binaries with new security patches with minimal disruption to system operation.

The goal of this research is to design a mechanism to: (a) analyze and quantify the level of security provided and the performance penalty imposed by different solutions to various security risks affecting native binaries, and (b) study and build an architecture that can efficiently and adaptively patch vulnerabilities or retrofit COTS applications with chosen security mechanisms with minimal disruption.
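To illustrate the performance side of the tradeoff analysis in goal (a), the following sketch builds the same benchmark under several hardening configurations and compares their runtimes. The benchmark source file and the chosen flag sets are hypothetical examples, not the project's evaluation suite or methodology.

```python
# Illustrative sketch: measure the runtime overhead of common compiler-level hardening
# options by compiling one benchmark under each configuration and timing it.
import statistics
import subprocess
import time

BENCH_SRC = "bench.c"   # hypothetical benchmark source file
CONFIGS = {
    "baseline":        ["-O2"],
    "stack-protector": ["-O2", "-fstack-protector-strong"],
    "fortify":         ["-O2", "-D_FORTIFY_SOURCE=2"],
    "pie+full-relro":  ["-O2", "-fPIE", "-pie", "-Wl,-z,relro,-z,now"],
}

def median_runtime(flags, runs=5):
    """Compile the benchmark with the given flags and return its median wall-clock runtime."""
    subprocess.run(["gcc", *flags, BENCH_SRC, "-o", "bench"], check=True)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./bench"], check=True)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

results = {name: median_runtime(flags) for name, flags in CONFIGS.items()}
base = results["baseline"]
for name, runtime in results.items():
    overhead = 100.0 * (runtime - base) / base
    print(f"{name:16s} {runtime:7.3f} s   overhead {overhead:+5.1f}%")
```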

Successful completion of this project will result in:

  • Exploration and understanding of the security and performance tradeoffs imposed by different proposed solutions to important software problems.
  • Discovery of (a set of) mechanisms to reliably retrofit desired compiler-level, instrumentation-based, or other user-defined security solutions into existing binaries.
  • Study and resolution of the issues involved in the design and construction of an efficient production-quality framework to realize the proposed goals.
Prasad Kulkarni

Analysis of the Automated Vulnerability Discovery Process

Submitted by Anonymous

BIO

Shelby Allen is a research scientist focusing on software assurance at the Georgia Tech Research Institute.

ABSTRACT

The demands of software analysis outpace the capabilities of human analysts, and automated solutions are not yet sophisticated enough to replace them. This research examines how humans and machines can collaborate efficiently on vulnerability discovery.
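One concrete division of labor between human and machine is automated crash triage: a fuzzer generates many crashing inputs, and the machine deduplicates them so the analyst reviews only one representative per apparent bug. The sketch below illustrates this pattern under that assumption; the crash records and the coarse stack-hash signature are hypothetical and are not taken from this research.

```python
# Hedged sketch of machine-assisted triage: group fuzzer crashes by a coarse stack-hash
# signature and surface one representative per bucket for human review.
import hashlib
from collections import defaultdict
from typing import Dict, List

def bucket_key(stack_frames: List[str], depth: int = 3) -> str:
    """Coarse crash signature: hash of the top few frames of the crashing call stack."""
    top = "|".join(stack_frames[:depth])
    return hashlib.sha1(top.encode()).hexdigest()[:12]

def triage(crashes: List[dict]) -> Dict[str, dict]:
    """Group crashes by signature; keep the smallest crashing input per bucket as the representative."""
    buckets: Dict[str, dict] = {}
    counts: Dict[str, int] = defaultdict(int)
    for crash in crashes:
        key = bucket_key(crash["stack"])
        counts[key] += 1
        if key not in buckets or len(crash["input"]) < len(buckets[key]["input"]):
            buckets[key] = crash
    for key, rep in buckets.items():
        print(f"{key}: {counts[key]} crashes, representative input {rep['input']!r}")
    return buckets

# Hypothetical fuzzer output.
crashes = [
    {"input": b"AAAA",      "stack": ["memcpy", "parse_header", "main"]},
    {"input": b"AAAAAA",    "stack": ["memcpy", "parse_header", "main"]},
    {"input": b"\x00" * 64, "stack": ["free", "cleanup", "main"]},
]
triage(crashes)   # the human analyst then examines one representative per bucket
```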
