Cedar is an authorization policy language used to express permissions for an application and authorize users’ actions accordingly. An authorization policy governs which principal in a system can perform what actions on a given resource. For example, a policy for a photo-sharing application might state that the application’s users can delete only the photos they own. The application backend enforces this policy by invoking the authorization engine to check if a delete request is allowed. Cedar is part of Amazon Verified Permissions (AVP), which launched at re:Invent in November 2022. In this talk, we present the idea of verification-guided development, and discuss how we use it to provide a high assurance implementation for the Cedar language.
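The photo-sharing policy above can be sketched as a toy authorization check in Rust. The types and function below are hypothetical stand-ins for illustration only; the real Cedar engine evaluates declarative policies rather than hard-coded Rust logic.

```rust
// Toy model of the policy "users can delete only the photos they own".
// `Action`, `Photo`, and `is_authorized` are illustrative inventions,
// not part of Cedar's actual API.

enum Action {
    DeletePhoto,
    ViewPhoto,
}

struct Photo {
    owner: String,
}

/// Returns true if `principal` may perform `action` on `photo`.
fn is_authorized(principal: &str, action: &Action, photo: &Photo) -> bool {
    match action {
        // Only the owner may delete a photo.
        Action::DeletePhoto => photo.owner == principal,
        // Anyone may view a photo in this toy model.
        Action::ViewPhoto => true,
    }
}

fn main() {
    let photo = Photo { owner: "alice".to_string() };
    assert!(is_authorized("alice", &Action::DeletePhoto, &photo));
    assert!(!is_authorized("bob", &Action::DeletePhoto, &photo));
    assert!(is_authorized("bob", &Action::ViewPhoto, &photo));
    println!("policy checks passed");
}
```

An application backend would make exactly this kind of call at each delete request, delegating the allow/deny decision to the authorization engine.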
Verification-guided development

Authorization is a critical component of the trusted computing base, so we have taken several steps, which we term verification-guided development, to ensure that Cedar is correct. In particular, we (1) constructed an executable formal model of Cedar in a verification-aware language, (2) proved properties of the formal model, and (3) established correspondence between the model and the production code through differential random testing (DRT).
We wrote the Cedar model in Dafny, an open-source, verification-aware programming language that supports extracting (compiling) model code to Java. The Dafny model defines the ground truth, so our goal is to make the model easy to understand and easy to prove properties about. We have proved several core properties about Cedar’s design. For example, we prove that if the evaluator determines that no policies impact an authorization request, then the authorization decision will be deny (“default deny”). We also prove type safety: if the policies, entity store, and request are deemed valid according to the schema, then the policy evaluator will not error.
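The "default deny" property can be illustrated with a toy authorizer in Rust. The `Policy` and `Request` types below are simplified stand-ins invented for this sketch, not Cedar's or the Dafny model's actual definitions; the point is the shape of the property, not the proof.

```rust
// Sketch of "default deny": if no policy is satisfied by a request
// (in particular, if the policy set is empty), the decision is Deny.

#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Deny,
}

struct Request {
    principal: String,
    action: String,
}

/// A permit policy, modeled here as a predicate over requests.
struct Policy {
    condition: fn(&Request) -> bool,
}

/// Allow only if at least one permit policy is satisfied; otherwise Deny.
fn authorize(policies: &[Policy], req: &Request) -> Decision {
    if policies.iter().any(|p| (p.condition)(req)) {
        Decision::Allow
    } else {
        Decision::Deny // default deny: no policy impacted the request
    }
}

fn main() {
    let req = Request {
        principal: "alice".into(),
        action: "deletePhoto".into(),
    };
    // With an empty policy set, the decision is necessarily Deny.
    assert_eq!(authorize(&[], &req), Decision::Deny);
    // Adding a satisfied permit policy flips the decision to Allow.
    let permit = Policy {
        condition: |r| r.principal == "alice" && r.action == "deletePhoto",
    };
    assert_eq!(authorize(&[permit], &req), Decision::Allow);
    println!("default deny holds in this sketch");
}
```

In the Dafny model this property is a machine-checked theorem over the real evaluator, not a test over a toy function.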
Our production implementation of Cedar is written in Rust. Out of the box, Rust’s type system ensures memory safety, ruling out a significant class of security vulnerabilities. We forbid all use of unsafe code blocks (which can subvert those safety guarantees) and use Clippy to catch common Rust bugs.
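One standard way to rule out unsafe code in Rust is a crate-level lint attribute; the snippet below shows the mechanism (whether Cedar uses exactly this attribute versus an equivalent lint configuration is an assumption here).

```rust
// With `forbid` (unlike `deny`), even a local `#[allow(unsafe_code)]`
// override is rejected, so no `unsafe { ... }` block can appear
// anywhere in the crate.
#![forbid(unsafe_code)]

/// An ordinary safe function; the compiler guarantees the whole
/// crate contains no unsafe code.
fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    assert_eq!(add(2, 2), 4);
    println!("compiled with unsafe code forbidden");
}
```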
We check that the model and production implementation agree using DRT, which runs the two implementations on the same randomly generated inputs and reports any differences in their behavior. Inputs are produced by largely handwritten generators that convert random bitstrings into policies, entities, and requests. We use coverage-guided fuzzing on the bitstrings to produce inputs that thoroughly exercise the production code. This gives us confidence that properties proved of the simple Dafny model also hold for the optimized Rust code. DRT is also useful prior to proof: in some cases we treat the Rust code as the specification, checking that our Dafny model is accurate before attempting proofs. We also use our generators to test useful properties of unmodeled implementation code; for example, we test that a generated policy parses back to itself after being pretty-printed. Using verification-guided development, we have found dozens of bugs in both the model and the implementation.
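The differential-testing loop can be sketched in miniature. Here a naive "model" and an optimized "production" implementation of the same function are run on pseudo-random inputs and checked for agreement; the two functions and the xorshift generator are toy stand-ins (so the sketch needs no external crates) for the Dafny model, the Rust engine, and Cedar's bitstring-driven input generators.

```rust
// Miniature differential random testing: any input on which the two
// implementations disagree would be reported as a bug in one of them.

/// Reference "model": count set bits naively, bit by bit.
fn model_popcount(x: u32) -> u32 {
    (0..32).map(|i| (x >> i) & 1).sum()
}

/// "Production" implementation: the optimized built-in.
fn prod_popcount(x: u32) -> u32 {
    x.count_ones()
}

/// Deterministic xorshift generator standing in for fuzzer bitstrings.
fn next_input(state: &mut u32) -> u32 {
    *state ^= *state << 13;
    *state ^= *state >> 17;
    *state ^= *state << 5;
    *state
}

fn main() {
    let mut state = 0x9E37_79B9;
    for _ in 0..10_000 {
        let input = next_input(&mut state);
        // Differential check: the implementations must agree.
        assert_eq!(model_popcount(input), prod_popcount(input));
    }
    println!("model and production agree on 10000 random inputs");
}
```

The pretty-print round-trip test mentioned above has the same harness shape, with the oracle `parse(pretty_print(p)) == p` in place of cross-implementation agreement.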
Kesha is an applied scientist with Amazon Web Services, working on the Cedar authorization language. She received her PhD in 2022 from the University of Maryland where she focused on formally verifying the quantum software stack.