A Framework for Assuring Increasingly Autonomous Systems in Human-Machine Teams


ABSTRACT

As systems become increasingly autonomous, the human-machine role allocation changes. This can introduce new failure modes of concern in safety-critical applications such as Urban Air Mobility (UAM). In the Assured Human Machine Interface for Increasingly Autonomous Systems (AHMIIAS) project with NASA, we developed a framework for 1) specifying the roles of a human operator and an autonomous co-pilot, 2) verifying that the team satisfies safety properties, and 3) verifying that the autonomous co-pilot meets its requirements. Our framework includes: 1) an analyzable architectural model of the human-machine team, specified in the Architecture Analysis and Design Language (AADL), 2) specification and analysis of safety properties using the Assume Guarantee REasoning Environment (AGREE), 3) analysis of safety properties under fault conditions using Architectural Modeling and Analysis for Safety Engineering (AMASE), 4) translation of the autonomous system implementation to a formal model using a novel Soar-to-nuXmv translator, and 5) analysis of the requirements for the autonomous system through model checking with nuXmv.
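To make the style of analysis concrete, the sketch below shows an assume-guarantee-style safety check expressed directly in nuXmv, the model checker the framework uses for agent verification. It is a minimal, hypothetical model: the signal names (pilot_input_valid, copilot_engaged, unsafe_command, command_blocked) and the blocking behavior are illustrative assumptions, not taken from the AHMIIAS models.

-- Minimal, hypothetical nuXmv sketch of an assume-guarantee-style safety
-- check for a human-machine team. All names and behavior are illustrative.
MODULE main
VAR
  pilot_input_valid : boolean; -- environment assumption: pilot inputs are well-formed
  copilot_engaged   : boolean; -- autonomous co-pilot is actively monitoring
  unsafe_command    : boolean; -- a command that would violate the flight envelope
  command_blocked   : boolean; -- co-pilot intervention
ASSIGN
  init(command_blocked) := FALSE;
  -- The co-pilot blocks an unsafe command on the following step.
  next(command_blocked) := copilot_engaged & unsafe_command;
-- Contract in assume-guarantee form: if the environment assumption holds,
-- then while the co-pilot is engaged, any unsafe command is blocked next step.
LTLSPEC (G pilot_input_valid) -> G ((copilot_engaged & unsafe_command) -> X command_blocked)

In the project itself, such contracts are specified in AGREE annexes attached to the AADL architecture; this sketch only shows the shape of the check.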

We used our framework to increase assurance in two selected scenarios relevant to critical systems: 1) an unreliable sensor scenario and 2) an abort landing scenario. The roles of the human operator and autonomous co-pilot in these scenarios were specified and their interactions modeled in AADL/AGREE. Key properties of the piloting team were established with AGREE, and fault scenarios (e.g., delayed pilot response) were reasoned about with AMASE. A prototype autonomous co-pilot agent that performs the selected scenarios was implemented in Soar and simulated in X-Plane with the AgustaWestland AW609 tilt-rotor aircraft model. We translated the agent to nuXmv and verified its properties. To demonstrate the robustness of the reasoning framework, the agent was extended with a learning function that learns the pilot's preferred early-warning threshold for a potentially unreliable sensor, and each of the requirements was re-proved for the resulting system.
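As a concrete (and again hypothetical) illustration of the kind of requirement verified, the sketch below models the unreliable-sensor scenario in nuXmv with a learned early-warning threshold that is held constant after learning. The variable names and the 0..10 error scale are assumptions for illustration only.

-- Hypothetical nuXmv sketch of the unreliable-sensor scenario with a
-- learned early-warning threshold. Names and ranges are illustrative.
MODULE main
VAR
  sensor_error      : 0..10;   -- abstracted sensor disagreement level (free input)
  learned_threshold : 1..10;   -- pilot-preferred warning threshold, fixed after learning
  warning           : boolean; -- co-pilot raises an early warning
ASSIGN
  next(learned_threshold) := learned_threshold; -- threshold does not drift
  init(warning) := FALSE;
  next(warning) := sensor_error >= learned_threshold;
-- Requirement: whenever the sensor error reaches the learned threshold,
-- the co-pilot warns the pilot on the next step.
LTLSPEC G ((sensor_error >= learned_threshold) -> X warning)

The actual agent-level requirements were proved against the Soar-to-nuXmv translation of the implemented agent; this model only illustrates the form of such a property.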

AUTHOR

Dr. Jennifer Davis leads the Trusted Methods Team at Collins Aerospace. This team specializes in formal methods techniques and tools, together with their applications in the aerospace and defense industry. Dr. Davis has served as the Collins Aerospace Principal Investigator (PI) for several government-sponsored research programs on developing and applying high-assurance approaches to the verification and validation of autonomous systems. In particular, she served as PI for the recently concluded, NASA-sponsored Assured Human Machine Interface for Increasingly Autonomous Systems (AHMIIAS) project, which developed a framework for assuring Increasingly Autonomous Systems in human-machine teams. Dr. Davis earned her Ph.D. in Mathematics at the University of Nebraska-Lincoln in 2007.
