Spotlight on Lablet Research #40 - Reasoning about Accidental and Malicious Misuse via Formal Methods

Lablet: North Carolina State University

PI: Munindar Singh
Co-PIs: William Enck, Laurie Williams

This project sought to help security analysts identify and protect against accidental and malicious actions by users or software. It applied automated reasoning over unified representations of user expectations and software implementations to identify misuses that are sensitive to usage and machine context.

This research project dealt with the discovery of accidental and malicious misuse cases in sociotechnical systems. System misuse is conceptually a violation of a stakeholder's expectation of how a system should operate. Existing tools make security decisions using the context of usage, including environment, time, and execution state, but they lack the ability to reason about the underlying stakeholder expectations, which are often crucial to identifying misuse. Led by Principal Investigator (PI) Munindar Singh and Co-PIs William Enck and Laurie Williams, the research team's vision was that if the tools that make security decisions could reason about expectations, they could automatically prevent, discover, and mitigate misuse. Unfortunately, automatic extraction of stakeholder expectations remains ineffective. These observations led the team to pose three research questions: (1) What are the key components of stakeholders' expectations, and how may they be represented computationally? (2) How can the relevant stakeholder expectations be identified? (3) In what ways can reasoning about expectations inform the specification of sociotechnical systems so as to promote security? Over the life of the project, the researchers addressed these goals from three perspectives: understanding user expectations, capturing threat intelligence, and aligning the interactions between computational components with user expectations.

The following studies addressed different forms of misuse and the corresponding stakeholder expectations.

A user story in an app review contains one or more sequential events, each of which has an event type; the researchers treated the pattern of event types in a story as the story's structure. Consider this story: "I wanted to sign up, but the app crashes every time I click the sign-up button." The story contains three events: a user intention (I), a user action (A), and an app behavior (B), whose correct sequential order is wanting to sign up, clicking the button, and the app crashing. The structure of the story is therefore IAB. Intuitively, to understand user expectations, developers should focus on stories that include user intentions; to understand app problems, they should focus on stories that include app behavior events. The research team collected stories with three target structures, IA, AB, and BR (R denotes a user reaction), along with a sample of random stories. The team then asked annotators to rate each story on how helpful it was toward each of three developer goals: understanding user expectations, understanding app problems, and understanding user retention. Each rating was on a scale of 1 to 5, where 1 means the story is not at all helpful toward the goal and 5 means it is very helpful; two annotators rated each story for each goal. The researchers averaged the ratings and compared stories with the target structures against random stories, finding that the former were significantly more helpful than the latter.
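As a minimal sketch of how such story structures might be derived computationally (assuming each clause has already been labeled with an event type by an upstream classifier; the labels, example data, and helper functions below are illustrative, not the team's implementation):

```python
# Illustrative sketch: deriving a story's structure string from labeled events.
# Assumes an upstream classifier has tagged each clause in a review with an
# event type; the event labels and example story here are hypothetical.

TARGET_STRUCTURES = {"IA", "AB", "BR"}  # structures the study focused on

# Event types: I = user intention, A = user action,
#              B = app behavior,  R = user reaction
story = [
    ("I wanted to sign up", "I"),
    ("I click the sign-up button", "A"),
    ("the app crashes", "B"),
]

def structure_of(events):
    """Concatenate event-type labels in narrative order, e.g. 'IAB'."""
    return "".join(label for _, label in events)

def matches_target(structure):
    """True if the story contains any of the target two-event patterns."""
    return any(
        structure[i:i + 2] in TARGET_STRUCTURES
        for i in range(len(structure) - 1)
    )

s = structure_of(story)
print(s, matches_target(s))  # IAB True ('IA' and 'AB' both occur)
```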

The research team identified the notion of rogue apps: apps that one user can misuse to infringe upon the privacy or security of another. Many such apps have legitimate uses, and their descriptions mention only those uses. App reviews, however, written both by users who are exploited through an app and by users who exploit others, provide valuable information on an app's prospects for misuse. The team developed a curated dataset and techniques to determine whether an app's reviews indicate the possibility of misuse. An examination of the runtime behavior of those apps suggested that reviews are effective indicators of rogue apps. Knowing an app's potential for misuse is valuable to developers who innocently produce apps that may be misused, as well as to app platforms and potentially to other users.
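One plausible way to operationalize review-based misuse detection is a supervised text classifier over review text. The sketch below uses scikit-learn with invented training examples; it illustrates the general idea only and is not the team's published technique.

```python
# Hypothetical sketch: flagging reviews that suggest app misuse, using a
# TF-IDF bag-of-words classifier. The training examples are invented for
# illustration; a real curated dataset would be far larger.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "great for tracking my own fitness goals",          # benign
    "used this to secretly track my partner's phone",   # misuse indicator
    "helps me find my lost device",                     # benign
    "installed it on her phone without her knowing",    # misuse indicator
]
labels = [0, 1, 0, 1]  # 1 = review suggests misuse potential

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)

# Score unseen reviews; apps whose reviews frequently score high could be
# surfaced to analysts as candidate rogue apps.
print(clf.predict_proba(["I read his messages without him knowing"])[:, 1])
```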

Using electronic payments in mobile applications as an exemplar, the team investigated how to programmatically align user expectations with domain-specific security requirements. When end users are prompted for payment information, they often encounter logos and branding of parties that represent the trust placed in the Payment Card Industry (PCI). Much of this trust is founded on the PCI Data Security Standard (PCI DSS), which defines strong security requirements for the software and hardware systems that consume and process end users' payment information. The team therefore used PCI DSS as a technical, specific definition of expectations and developed program analysis tools to identify potential accidental or malicious misuse. Specifically, they asked three questions: (1) How do mobile applications ask for and process payment card information? (2) How well do mobile Software Development Kits (SDKs) published by Payment Service Providers (PSPs) adhere to secure programming standards? (3) How securely do mobile application developers integrate PSP SDKs? They found that most non-trivial program analysis checks can be encoded as data flow analyses, for which a key challenge is identifying taint sources. They developed methods for identifying taint sources of payment card information using textual analysis of both user interface artifacts and SDK API names, and they performed multiple empirical studies with mixed results. The researchers found that the data flow of payment card information in most Android applications satisfies PCI DSS requirements. However, based on artifacts of secure coding practices, they found that most PSP SDKs have security weaknesses that leave them open to attack. All discovered vulnerabilities and weaknesses were responsibly reported to the vendors.
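To make the textual-analysis idea concrete, here is a minimal sketch that marks Android UI input fields as candidate taint sources when their layout attributes contain payment-card cues. The keyword patterns, layout snippet, and function names are assumptions for illustration, not the team's tool.

```python
# Illustrative sketch: marking Android UI fields as taint sources for
# payment card data by matching textual cues in layout XML attributes.
import re
import xml.etree.ElementTree as ET

# Hypothetical cue patterns for payment card data in widget ids/hints.
CARD_CUES = re.compile(r"card.?(number|num)|cvv|cvc|expir", re.IGNORECASE)

LAYOUT = """
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
  <EditText android:id="@+id/cardNumber" android:hint="Card number"/>
  <EditText android:id="@+id/cvvField"   android:hint="CVV"/>
  <EditText android:id="@+id/username"   android:hint="Username"/>
</LinearLayout>
"""

NS = "{http://schemas.android.com/apk/res/android}"

def taint_sources(layout_xml):
    """Return ids of input widgets whose id/hint text suggests card data."""
    root = ET.fromstring(layout_xml)
    sources = []
    for widget in root.iter("EditText"):
        text = " ".join(
            widget.attrib.get(NS + attr, "") for attr in ("id", "hint")
        )
        if CARD_CUES.search(text):
            sources.append(widget.attrib[NS + "id"])
    return sources

print(taint_sources(LAYOUT))  # ['@+id/cardNumber', '@+id/cvvField']
```

Fields identified this way would then seed a standard data flow analysis that tracks where the tainted values travel within the app.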

The research team systematically investigated articles and reports available online that cover technical and forensic analyses of advanced persistent threat (APT) attacks. Their aim was to understand the patterns of adversarial techniques that attackers deploy and to turn those patterns into actionable threat intelligence, specifically the prioritization of security requirements, safeguards, and countermeasures that can mitigate the most prevalent and severe adversarial techniques. To this end, they investigated the low-level adversarial techniques described in 667 APT attack reports from the past ten years and identified 18 prevalent adversarial techniques, 425 prevalent combinations of techniques, and 7 relationship types among the prevalent combinations. They constructed a natural language processing and Machine Learning (ML) pipeline for extracting sequences of adversarial techniques from cyberthreat intelligence (CTI) reports and then performed centrality analysis on the extracted sequences. They next investigated the extent to which the NIST SP 800-53 revision 5 security controls can mitigate the prevalent techniques by analyzing the mapping between the NIST security controls and MITRE ATT&CK techniques. The researchers proposed a security control metric suite based on each technique's prevalence and centrality, and they used it to prioritize a set of the twenty most critical security controls, such as system monitoring, configuration settings, and malicious code protection. Overall, their findings suggested that organizations' existing implementations of security controls may be lacking in the early phases of APT attacks. Moreover, the findings suggested that even these topmost security controls could be implemented poorly and thus fall short in defending against adversarial attempts.
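The final prioritization step can be illustrated with a small sketch that combines technique prevalence with graph centrality over extracted technique sequences. The edge weights, prevalence counts, control-to-technique mapping, and scoring formula below are invented placeholders, not the published metric suite; the technique IDs are real MITRE ATT&CK identifiers.

```python
# Hypothetical sketch: prioritizing security controls by combining the
# prevalence and graph centrality of the ATT&CK techniques they mitigate.
import networkx as nx

# Directed edges: technique X observed preceding technique Y in reports,
# weighted by how often that ordered pair occurred (counts are invented).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("T1566", "T1059", 40),  # Phishing -> Command and Scripting Interpreter
    ("T1059", "T1105", 25),  # ... -> Ingress Tool Transfer
    ("T1105", "T1071", 18),  # ... -> Application Layer Protocol
])

prevalence = {"T1566": 120, "T1059": 95, "T1105": 60, "T1071": 45}
centrality = nx.pagerank(G, weight="weight")

# NIST SP 800-53 controls mapped to techniques they mitigate
# (mapping abbreviated and illustrative).
controls = {
    "SI-4 System Monitoring": ["T1059", "T1105", "T1071"],
    "CM-6 Configuration Settings": ["T1059"],
    "SI-3 Malicious Code Protection": ["T1566", "T1105"],
}

def control_score(techniques):
    """Sum of prevalence-weighted centrality over mitigated techniques."""
    return sum(prevalence[t] * centrality[t] for t in techniques)

for name, techs in sorted(controls.items(),
                          key=lambda kv: -control_score(kv[1])):
    print(f"{control_score(techs):8.2f}  {name}")
```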
