C3E 2018 Homepage

For C3E 2018, we will look further into Machine Learning Vulnerability Analysis and Decision Support Vulnerabilities. Advances in machine learning are creating new opportunities to automate human perception and decision making at the scale of increasingly large datasets. Recently, however, researchers have identified vulnerabilities in deep learning-based classifiers that allow an attacker to poison training datasets and invert models in order to evade detection or disrupt classification.
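As an illustration of the first of these attack classes, the short sketch below (not drawn from the workshop materials; it assumes scikit-learn and a synthetic dataset) shows how flipping the labels of a fraction of the training set degrades a classifier's test accuracy.

    # Illustrative sketch: label-flipping data poisoning against a simple
    # classifier, using a synthetic scikit-learn dataset (assumed, not from
    # the workshop). An attacker who corrupts a fraction of training labels
    # degrades the model learned from them.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_with_poisoning(flip_fraction):
        """Flip labels on a random fraction of training points, retrain, and score."""
        rng = np.random.default_rng(0)
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]   # flip binary labels
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return model.score(X_test, y_test)

    for frac in (0.0, 0.1, 0.3):
        print(f"flipped fraction {frac:.1f}: test accuracy {accuracy_with_poisoning(frac):.3f}")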

Machine Learning Vulnerability Analysis. Traditional vulnerability analysis in software and hardware includes static and dynamic code analysis and integration testing to identify known vulnerabilities and attack vectors. Machine learning introduces a new kind of attack surface that has historically gone unaddressed. To address these vulnerabilities, we need new methods, tools, and policies to standardize ML model testing, accounting for both weaknesses in each class of models and weaknesses in the system into which the models are integrated. In particular, new standards for designing ML test environments are needed, as is new research to better understand how to test models based on human behavioral data. When testing machine learning during development or at runtime, it is important to understand the factors leading to software failures. How can we enhance software design and software development processes to recognize these weaknesses earlier at design time, or more comprehensively at runtime?
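One way to make such testing concrete is to probe a trained model with small, loss-increasing input perturbations and measure how quickly its accuracy degrades. The sketch below is a minimal example in that spirit (it assumes scikit-learn, a linear classifier, and a synthetic dataset; it is an evasion test in the style of the fast gradient sign method, not a standardized test procedure).

    # Illustrative sketch: an adversarial-evasion test for a trained linear
    # classifier, in the spirit of the fast gradient sign method (FGSM).
    # Assumes scikit-learn and a synthetic dataset; not a standardized test.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def accuracy_under_evasion(eps):
        """Perturb each test point by eps * sign(d loss / d x) and re-score the model."""
        w = model.coef_.ravel()
        p = model.predict_proba(X_test)[:, 1]
        grad = (p - y_test)[:, None] * w      # gradient of the log loss w.r.t. the input
        X_adv = X_test + eps * np.sign(grad)
        return model.score(X_adv, y_test)

    for eps in (0.0, 0.1, 0.5):
        print(f"epsilon {eps:.1f}: test accuracy under evasion {accuracy_under_evasion(eps):.3f}")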

Decision Support Vulnerabilities. As machine learning is increasingly used to enhance decision support by automating the processing of large datasets, how can decision makers be misled by misinformation arising from adversarial attacks on ML, and how can we use explainable AI to inform decision makers about the provenance of machine-supported decisions? The challenge of influencing decision makers concerns how information is visualized, and how these visualizations can be exploited to take advantage of strengths and weaknesses in human cognition that derive from perception, learning, and memory. How can interface designers help decision makers recognize when information is inaccurate or does not properly fit the given context, and stay aware of the role of ML in facilitating decision support, so that they can defend their decisions against potential manipulation? At runtime, what information can be presented to users to help them diagnose whether a failure is due to weaknesses in the model or to other causes?
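As a small illustration of surfacing provenance to a decision maker, the sketch below (hypothetical feature names, a linear model, and scikit-learn are assumed) attaches each feature's signed contribution to a prediction, so the basis for the score is visible rather than hidden.

    # Illustrative sketch: exposing a simple provenance record for a
    # machine-supported decision. For a linear model, the per-feature
    # contributions w_i * x_i show which inputs drove the score.
    # Feature names are hypothetical; assumes scikit-learn.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=5, random_state=2)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]   # hypothetical names
    model = LogisticRegression(max_iter=1000).fit(X, y)

    def explain(x):
        """Return the predicted class with each feature's signed contribution to the score."""
        contributions = model.coef_.ravel() * x
        label = int(model.predict(x.reshape(1, -1))[0])
        ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
        return label, ranked

    label, ranked = explain(X[0])
    print(f"predicted class: {label}")
    for name, contribution in ranked:
        print(f"  {name}: {contribution:+.3f}")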

Conference Sponsor

Brad Martin
Technical Lead

Lonnie Carey

Conference Chairs

Travis Breaux

Dan Wolf

Conference Organizers

Katie Dey

Anne Dyson

The workshop is sponsored by SCORE.