Automated Evidence Generation for Continuous Certification



Abstract: This talk will introduce the methodological and tooling foundations of an automated evidence generation workbench devised to support continuous, assurance case-based certification and Authority to Operate (ATO). We will discuss approaches and challenges associated with systematizing and automating diverse evidence generation techniques, including static analysis, testing, and Formal Methods. We will also draw the distinction between information intended for assurance assessment and information intended as developer feedback.

In contrast to activities that require substantial human judgment, such as requirements capture or risk assessment, evidence generation offers substantial opportunities for automation and hence for accelerating certification. However, current practice in evidence generation is still largely manual, producing results that vary widely in content and format across tools and users. Consequently, assessing the relevance and validity of evidence artifacts is difficult. In many cases, assurance engineers fail to consistently record the intent, assumptions, justifications, and context of the evidence. Further, shaping the inputs that feed evidence generation, especially for Formal Methods tools, still requires significant human involvement. We propose reusing and recomposing evidence, in conjunction with automated input transformations, to improve efficiency and speed in continuous assurance regimes that assume incremental or partial system changes.

In our talk, we will describe three essential elements of our approach in detail:

  1. A model of incremental, continuous system evolution and the assurance information supporting continuous certification/ATO.
  2. Evidential Assurance Case Fragments (EACFs), as packages of composable and reusable evidence, including their computational representation.
  3. The Evidence Generation Language (EGL), a domain-specific language devised to systematize and automate diverse evidence generation techniques, including Formal Methods.
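To make the second element more concrete, the following is a minimal sketch of what a computational representation of an EACF might look like. All class and field names here are illustrative assumptions, not the representation presented in the talk; the sketch only shows the abstract's key ideas: recording intent, assumptions, and context alongside each evidence artifact, composing fragments, and checking which artifacts remain reusable after an incremental system change.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EvidenceArtifact:
    """One evidence result, e.g. a static-analysis report or proof output.

    Field names are hypothetical; they mirror the abstract's call to record
    intent, assumptions, and context with the evidence itself."""
    tool: str             # tool that generated the evidence
    claim: str            # claim the evidence supports
    intent: str           # why the evidence was generated
    assumptions: tuple    # conditions under which the result holds
    context: str          # system version/configuration it applies to

@dataclass
class EACF:
    """A composable, reusable package of evidence artifacts (sketch)."""
    name: str
    artifacts: list = field(default_factory=list)

    def add(self, artifact: EvidenceArtifact) -> "EACF":
        self.artifacts.append(artifact)
        return self

    def compose(self, other: "EACF") -> "EACF":
        # Recompose two fragments into a larger fragment, as in the
        # evidence reuse and recomposition described above.
        merged = EACF(name=f"{self.name}+{other.name}")
        merged.artifacts = self.artifacts + other.artifacts
        return merged

    def still_valid(self, current_context: str) -> list:
        # Under incremental system change, only artifacts whose recorded
        # context still matches the current system remain reusable.
        return [a for a in self.artifacts if a.context == current_context]
```

A fragment built this way could be queried after a partial system change to decide which evidence carries over and which must be regenerated, which is the efficiency gain the abstract argues for.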

Finally, we will conclude the presentation by covering several examples of techniques implemented in our evidence generation workbench, along with results.

Mauricio Castillo-Effen is a Senior Researcher and Lockheed Martin Associate Fellow with Lockheed Martin Advanced Technology Laboratories (LM ATL). He leads the Trustworthy AI and Autonomy (TAA) team, which develops tools and methodologies for the deployment of autonomy and AI in high-criticality applications. His team works with Lockheed Martin’s Business Areas to solve challenges in Verification and Validation, Test and Evaluation, and certification. He also collaborates with external industrial and academic researchers to make their innovations accessible to practitioners and engineers.

He has been a Principal Investigator and contributor in multiple research programs funded by DARPA, AFRL, NASA, and DHS. His research centers on cyber-physical system assurance, Artificial Intelligence, and Automated Reasoning. His background includes systems theory, control, and machine perception. Dr. Castillo-Effen has been an adjunct faculty member and a visiting lecturer at multiple universities around the world. His work has been published as patents and in conference proceedings, journals, and books. He holds a Ph.D. from the University of South Florida, an M.Sc. from the University of San Simon/TU Delft, and a B.Sc. from the University of Applied Sciences of Hannover, all in Electrical Engineering.



License: Copyright Lockheed Martin Corporation
