Research Team Status
- Names of researchers and positions
- Michael Mahoney PI
- Ben Erichson SP
- Serge Egelman SP
- Any new collaborations with other universities/researchers?
- Yaoqing Yang (Dartmouth College)
Project Goals
- Our current focus is on (i) developing strong data augmentation schemes for training more robust image classification models; (ii) developing AI safety metrics to evaluate the robustness of deep neural networks beyond standard test/training curves.
- Developing robust training methods is important for improving robustness to common corruptions and adversarial examples. Most work on robustness in computer vision addresses adversarial or out-of-distribution robustness. However, it is also important for models deployed in the real world to be cognizant that some mistakes are worse than others (e.g., a self-driving car misclassifying a person as a tree is worse than confusing a dog and a cat). Accordingly, prior work has built "taxonomical" trees that define distances between classes.
- The same work also proposed new loss functions, such as Hierarchical Cross Entropy, that weight the loss based on the distance between classes within the tree (see the first sketch after this list).
- Other literature proposed MagicMix, which uses diffusion models to make semantic interpolations from one object to another (e.g., starting from an original image of a watermelon and querying an interpolation toward another object, such as a speaker, yields a new image of a speaker that looks like a watermelon). However, that paper (1) is still only on arXiv, (2) provides no code, and (3) does not show any experiments with this "semantic mixing".
- We aim to utilize generative models to produce a new synthetic dataset through semantic interpolation between hard classes, in such a way that a minimal amount of data generation is required. The hierarchical distance between two classes and its weighted loss could be used not only as a loss function but also as a diagnostic metric, identifying the specific troubled class pairs (which imply a weak decision boundary) for which to generate a concise dataset that robustifies the model (see the second sketch after this list).
- Developing AI safety metrics is important for improving trustworthiness. We are particularly interested in metrics that are data-agnostic and focus on the properties (e.g., spectral properties) of the weight matrices. We believe that the weights of a network can be used as a fingerprint, which can subsequently reveal weaknesses and security risks potentially posed by a given deep neural network. These metrics can also be used to create a taxonomy of models, to understand the training stage of a model (e.g., under-fitted or over-fitted) and, later, its robustness (e.g., sensitive to perturbations or robust to them). A specific interest is to extend these metrics to the class of large language models (see the third sketch after this list).
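The following is a minimal sketch of the tree-weighted loss idea, not the exact Hierarchical Cross Entropy of the cited work: it reweights per-sample cross-entropy by the expected tree distance between the model's prediction distribution and the true class. The function name tree_weighted_loss and the precomputed distance matrix dist are illustrative choices of ours.

```python
import torch
import torch.nn.functional as F

def tree_weighted_loss(logits, targets, dist):
    """Cross-entropy reweighted by hierarchical (tree) distance.

    A simplified variant of hierarchical losses such as HXE: mistakes
    between distant classes (person vs. tree) are penalized more than
    mistakes between nearby classes (dog vs. cat).

    logits:  (batch, num_classes) raw model scores
    targets: (batch,) integer class labels
    dist:    (num_classes, num_classes) tensor of tree distances,
             with dist[i, i] = 0
    """
    probs = F.softmax(logits, dim=1)               # (B, C)
    # Expected tree distance of the prediction: sum_c p(c) * d(c, y).
    exp_dist = (probs * dist[targets]).sum(dim=1)  # (B,)
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Up-weight samples whose probability mass sits on distant classes.
    return (ce * (1.0 + exp_dist)).mean()
```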
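Assuming a validation confusion matrix is available, one way to pick the "troubled" class pairs for targeted generation is to weight each confusion count by the tree distance. The helper hard_pairs below is a hypothetical sketch of that selection step, not an implemented pipeline component.

```python
import numpy as np

def hard_pairs(confusion, dist, k=5):
    """Rank class pairs for targeted synthetic data generation.

    confusion: (C, C) validation counts, confusion[true, pred]
    dist:      (C, C) hierarchical (tree) distances between classes

    Returns the k (true, pred) pairs whose confusions are both frequent
    and semantically far apart; these are the candidates for semantic
    interpolation, which keeps the generated dataset small.
    """
    score = (confusion * dist).astype(np.float64)
    np.fill_diagonal(score, 0.0)  # ignore correct predictions
    flat = np.argsort(score, axis=None)[::-1][:k]
    return [tuple(int(i) for i in np.unravel_index(f, score.shape))
            for f in flat]
```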
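As a sketch of a data-agnostic metric, the function below computes two summary numbers from a single weight matrix: the stable rank and a power-law tail exponent of the spectrum of W^T W (a Clauset-style maximum-likelihood estimate; prior work on heavy-tailed self-regularization associates heavier tails with better-trained layers). This is a simplified illustration under those assumptions, not the full set of metrics we study.

```python
import numpy as np

def spectral_fingerprint(weight, tail_frac=0.1):
    """Data-agnostic diagnostics from a layer's weights alone.

    weight:    layer weight array; conv kernels are flattened to 2-D
    tail_frac: fraction of the largest eigenvalues used for the tail fit
    """
    W = np.asarray(weight, dtype=np.float64)
    W = W.reshape(W.shape[0], -1)          # (out, in * kh * kw) for convs
    sv = np.linalg.svd(W, compute_uv=False)
    eig = sv ** 2                          # eigenvalues of W^T W
    stable_rank = eig.sum() / eig.max()    # ||W||_F^2 / ||W||_2^2
    # Power-law exponent via the continuous MLE (Hill-type estimator)
    # on the largest tail_frac of the spectrum.
    tail = np.sort(eig)[::-1][: max(2, int(tail_frac * eig.size))]
    alpha = 1.0 + tail.size / np.sum(np.log(tail / tail[-1]))
    return {"stable_rank": stable_rank, "alpha": alpha}
```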
Accomplishments
- We prepared the camera-ready version of our paper "NoisyMix: Boosting Model Robustness to Common Corruptions," which appears in the Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS) 2024, Valencia, Spain, PMLR Volume 238. The camera-ready version is available at https://proceedings.mlr.press/v238/erichson24a/erichson24a.pdf.
- NoisyMix combines stability training with noisy augmentation in input and latent space, which leads to neural networks that are more robust to perturbations and domain shifts. Our work significantly advances the state of the art while introducing almost no computational overhead compared to AugMix and other related strong data augmentation schemes. Importantly, NoisyMix improves robustness with respect to both synthetic and real-world distribution shifts (see the sketch after this list).
- Impact of research
- We will present our work at AISTATS 2024.
- Humans make plenty of mistakes, but in a new environment or setting those mistakes are often not critical. An ML model operating out of distribution, by contrast, may make mistakes that would be catastrophic. This matters in critical applications that heavily utilize computer-vision classification (e.g., self-driving cars, tracking, drones). Thus, improving robustness is of critical importance for AI safety.
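A minimal input-space sketch of the noisy-mixup ingredient is below, assuming additive and multiplicative Gaussian noise on top of standard mixup; in the full method the same operation is also applied to intermediate (latent) features, and a stability/consistency term ties the predictions on clean and augmented views together. Parameter names here are illustrative.

```python
import torch

def noisy_mixup(x, y, alpha=1.0, add_std=0.1, mult_std=0.1):
    """Mixup plus noise injection, the core NoisyMix-style augmentation.

    x: (batch, ...) input batch; y: (batch,) labels.
    Returns the augmented batch, both label sets, and the mixing weight.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]          # standard mixup
    add = add_std * torch.randn_like(x_mix)          # additive noise
    mult = 1.0 + mult_std * torch.randn_like(x_mix)  # multiplicative noise
    return mult * x_mix + add, y, y[perm], lam
```

In training, the loss on the augmented batch would combine both label sets, e.g., lam * CE(out, y) + (1 - lam) * CE(out, y[perm]), optionally plus a Jensen-Shannon consistency term between clean and augmented predictions (the stability-training part).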
Publications and presentations
- Erichson, Benjamin, et al. "NoisyMix: Boosting model robustness to common corruptions." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.