A Commitment Logic for Reasoning about Trust in Complex Systems


Presented as part of the 2017 HCSS conference.

ABSTRACT

Human-machine collectives (that is, humans plus AIs) will become increasingly commonplace, which ratchets up the assurance challenge: how do we reason about the trustworthiness of these complex, interconnected systems?  An existing approach to trust modeling has been to use modal logic, in particular deontic logic, whose primary operators represent ‘obligation’ and ‘permission’.  This maps naturally to the enforcement of system properties in an assurance argument: for example, module A trusts module B with respect to property p because A knows that B is obligated to enforce p.  However, we claim that enforcement-based logics are not an ideal fit for reasoning about collectives of humans and AIs, mainly because of the separation of responsibility (in our example, A makes the obligation claim, but it is ultimately B’s job to realize that claim); the more nodes in the collective, the more unwieldy and unrealistic the logic becomes.

In this presentation, we instead propose an alternative logic for reasoning about trust: a commitment logic.  In this framework, each node is responsible for making commitments to other nodes; each node broadcasts “commitment tuples,” which are [commitment, evidence] pairs.  Nodes can also use commitment tuples from relevant neighboring nodes to calculate “trust assertion tuples” of the form [reporting-node, trust-level, commitment, justification], which are likewise broadcast for use by other network nodes in making trust assessments.  This gives a clean separation between behavioral claims made by one node and the trust granted by other nodes as a result of those claims.
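The sketch below is a minimal Python rendering of the two broadcast primitives described above.  The field names mirror the tuples in the abstract, but the concrete types, value ranges, and example values are illustrative assumptions rather than part of the proposal.

```python
# A minimal sketch of the two broadcast primitives; types and example
# values are assumptions made for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class CommitmentTuple:
    """A node's behavioral claim: [commitment, evidence]."""
    commitment: str   # e.g., "enforces property p"
    evidence: str     # e.g., a proof artifact, test report, or attestation


@dataclass(frozen=True)
class TrustAssertionTuple:
    """Another node's assessment of that claim:
    [reporting-node, trust-level, commitment, justification]."""
    reporting_node: str
    trust_level: float          # assumed here to lie in [0.0, 1.0]
    commitment: CommitmentTuple
    justification: str          # why this trust level was granted


# Hypothetical example: node A grants trust to node B's commitment to enforce p.
b_commit = CommitmentTuple(commitment="B enforces property p",
                           evidence="verification artifact")
a_assert = TrustAssertionTuple(reporting_node="A",
                               trust_level=0.9,
                               commitment=b_commit,
                               justification="evidence checked by A")
```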

Given these two fundamental primitives, we define a set of inference rules for this logic and show how networks of nodes converge on trust assessments for the collective.  We also illustrate that, in addition to the separation of concerns between trust claims and trust granting, our proposed logic has other desirable properties for reasoning about behavior in complex collectives: granular trust assessments, the ability to handle diverse classes of evidence, and process transparency.
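The abstract does not spell out the inference rules themselves.  The sketch below uses a simple iterative averaging rule, chosen only to illustrate how broadcast trust assertions could converge to a collective assessment; the propagation rule, blend weight, and network topology are assumptions, not the presentation's actual rules.

```python
# Illustrative convergence sketch: each node repeatedly blends its own
# evidence-based assessment with the assessments broadcast by its neighbors
# until the values stop changing (a fixed point) or a round limit is hit.
# The averaging rule and 0.5 blend weight are assumptions for illustration.
from typing import Dict, List


def converge_trust(neighbors: Dict[str, List[str]],
                   local_assessments: Dict[str, float],
                   rounds: int = 50,
                   tol: float = 1e-6) -> Dict[str, float]:
    trust = dict(local_assessments)  # current trust level per node
    for _ in range(rounds):
        updated = {}
        for node, peers in neighbors.items():
            if peers:
                peer_view = sum(trust[p] for p in peers) / len(peers)
                # Blend local evidence with the neighborhood consensus.
                updated[node] = 0.5 * local_assessments[node] + 0.5 * peer_view
            else:
                updated[node] = local_assessments[node]
        if max(abs(updated[n] - trust[n]) for n in trust) < tol:
            return updated
        trust = updated
    return trust


# Hypothetical example: three nodes assessing the same commitment.
topology = {"A": ["B", "C"], "B": ["A"], "C": ["A", "B"]}
initial = {"A": 0.9, "B": 0.4, "C": 0.7}
print(converge_trust(topology, initial))
```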

This presentation is designed to contribute to the “Assurance for AI” theme of HCSS 2017.  We believe it is relevant to this theme because it presents a concrete, quantitative approach to reasoning about trust in complex systems and suggests a new direction for research in that area.

--

David Burke leads the machine cognition research program at Galois, Inc., which investigates techniques for integrating human decision-making with machine intelligence (and vice versa).  Since joining Galois in 2004, his work has included research into logics for reasoning about trust in the design of secure systems, approaches for ensuring robust decision-making in multi-agent systems, and the application of bio-inspired approaches to network security.  His recent experience includes a PI role on DoD-funded projects focused on counterdeception, techniques for reasoning under conditions of extreme uncertainty, and adversarial modeling for cybersecurity.  Other research interests include game theory and bio-inspired AI.

License: CC-2.5