AI Security and AI Safety Need a Common System Engineering Approach

AI systems continue to become more powerful and more deeply embedded in other cyber systems across government, defense, critical infrastructure, and commercial industry. As humans interact with AI more often and rely on it more deeply, we must ensure that our AI systems operate with both security and safety. Mitigating security risks and maintaining safety both depend on understanding how AI functionality is integrated with the rest of the system and how human behavior can affect the operation of AI systems. To this end, system engineering principles should be applied in common across both safety and security in the development and operation of AI systems. A critical aspect of this is that safety and security must be considered across the entire system lifecycle, from initial design, and even model training, through deployment and operation. To illustrate how system engineering principles can be applied, we will examine several examples of related security and safety concerns.
Neal Ziring is the Technical Director for the National Security Agency's Capabilities Directorate, serving as a technical advisor to the Capabilities Director, Deputy Director, and other senior leadership. Mr. Ziring is responsible for setting the technical direction across many parts of the capabilities mission space, including cybersecurity. Mr. Ziring tracks technical activities, promotes the technical health of the staff, and acts as liaison to various industry, intelligence, academic, and government partners. Prior to the formation of the Capabilities Directorate, Mr. Ziring served five years as Technical Director of the Information Assurance Directorate.