The Challenges of Software Assurance and Supply Chain Risk Management


Cognitive Programming Model for Reliable and Inspectable Automated Decision Making using Large Language Models

Tristan Vanderbruggen & Matthew Sottile, Lawrence Livermore National Laboratory

The past year has seen the rapid progress and adoption of large language models (LLMs) within many customer applications. In particular, chatbots powered by LLMs have demonstrated capabilities that few were expecting: they can write essays and web applications, and help with all manner of prototyping. Unfortunately, these systems often produce incorrect or misleading results. Furthermore, attempting to control their behavior through natural-language prompting is challenging due to the ambiguous nature of natural language. We will discuss a more structured framework for using LLMs than free-flowing chatbots, one that aims to address the reliability issues present in current chat-like systems. Our emphasis is on the reliability of the execution of cognitive applications (not the correctness of the answer): the output of the LLM must follow a predefined syntax that can be reliably parsed by the execution environment.
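The core requirement above, that LLM output follow a predefined syntax the execution environment can parse, can be sketched as follows. This is a minimal hypothetical illustration, not the AutoCog implementation; the `CHOICE:` syntax and label set are invented for the example.

```python
import re

# Hypothetical syntax: the environment only accepts output of the form
# "CHOICE: <label>" where <label> is drawn from a fixed set of actions.
ALLOWED = {"accept", "reject", "escalate"}
PATTERN = re.compile(r"^CHOICE:\s*(\w+)\s*$")

def parse_llm_output(text: str) -> str:
    """Parse the LLM's raw output; reject anything outside the syntax."""
    match = PATTERN.match(text.strip())
    if match is None or match.group(1) not in ALLOWED:
        raise ValueError(f"unparseable LLM output: {text!r}")
    return match.group(1)
```

Because the parser either returns a well-formed action or raises, the surrounding system can retry or fall back deterministically instead of acting on free-form text.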

We will present Automaton & Cognition (AutoCog), a framework based on automata theory that permits us to construct cognitive applications. Within this framework, we are constructing a programming model, Structured Thoughts, which permits the creation of reliable and inspectable cognitive applications. This approach defines applications as state-transition models in which the LLM implements the state-transition functions; a coordination layer that interprets AutoCog programs is responsible for controlling and interpreting the output of the LLM. This layer allows us to define conditions essential to correctness, so that unpredictable behavior of the LLM can be handled.
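The state-transition idea can be illustrated with a small sketch. The state names, transition table, and stubbed LLM below are hypothetical examples, not part of AutoCog; the point is that the coordination loop validates every LLM answer before advancing, and records a trace that makes each decision step inspectable.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical transition table: each state maps a validated LLM answer
# to a successor state; "HALT" terminates the automaton.
TRANSITIONS: Dict[str, Dict[str, str]] = {
    "triage":   {"bug": "diagnose", "feature": "plan"},
    "diagnose": {"done": "HALT"},
    "plan":     {"done": "HALT"},
}

def run(llm: Callable[[str], str], start: str = "triage") -> List[Tuple[str, str]]:
    """Drive the automaton, using the LLM as the transition function."""
    state, trace = start, []
    while state != "HALT":
        answer = llm(state)                   # LLM implements the transition
        if answer not in TRANSITIONS[state]:  # coordination layer rejects bad output
            raise ValueError(f"invalid answer {answer!r} in state {state!r}")
        trace.append((state, answer))         # every step is recorded for inspection
        state = TRANSITIONS[state][answer]
    return trace
```

A scripted stand-in for the LLM, e.g. `run(lambda s: {"triage": "bug", "diagnose": "done"}[s])`, walks the automaton deterministically and returns the full decision trace.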

In our poster, we will describe the compilation and execution processes of cognitive applications written using the Structured Thoughts programming model. In particular, we will illustrate how our Structured Thoughts Language (STL) defines a Structured Thoughts Automaton (STA), which is compiled into a Finite Thoughts Automaton (FTA). The takeaway from this poster is that LLMs can be guided to create complex applications that are reliably executed by any LLM. These applications reproduce the thought processes of their creators, who can inspect every step of the decision process. However, more work is needed to determine how to maximize the probability that these answers are correct.

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-859151 


Dr. Tristan Vanderbruggen is a computer scientist at Lawrence Livermore National Laboratory. He works on various compilation problems, such as translating applications developed in legacy languages to modern languages (C++) and facilitating access to open-source software-analysis tools. Dr. Vanderbruggen received his PhD from the University of Delaware, where he studied the application of deep learning to compilation-related problems.

License: CC-3.0