This proposed research aims to advance secure and trustworthy deep learning by exploring novel methods for training robust deep neural networks (DNNs) that are free from security violations. We propose to achieve this through advanced metric-informed training and certified adversarial training methods. The first task aims to improve strong data augmentation techniques that enhance the security and trustworthiness of neural networks in unknown environments. Our approach involves developing novel data augmentation techniques based on interpolation and mixing with noise, an approach that prior literature has shown to yield certified robustness guarantees for neural networks. By strengthening neural networks with these new data augmentation methods, we aim to ensure that they remain reliable in unpredictable settings.
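One common instantiation of interpolation-plus-noise augmentation is mixup-style interpolation of example pairs combined with additive Gaussian perturbation. The sketch below is illustrative only, assuming NumPy; the function name, the Beta-distribution parameter `alpha`, and the noise scale `sigma` are hypothetical choices, not details from the proposal.

```python
import numpy as np

def mixup_with_noise(x1, y1, x2, y2, alpha=0.2, sigma=0.1, rng=None):
    """Interpolate two training examples (mixup) and add Gaussian noise.

    alpha: Beta-distribution parameter controlling interpolation strength.
    sigma: standard deviation of the additive Gaussian noise.
    (Names and defaults are illustrative, not taken from the proposal.)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                   # mixing weight in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2                # interpolate inputs
    y = lam * y1 + (1.0 - lam) * y2                # interpolate one-hot labels
    x = x + rng.normal(0.0, sigma, size=x.shape)   # perturb with noise
    return x, y

# Toy usage: mix two 4-dimensional inputs with one-hot labels.
x_a, y_a = np.ones(4), np.array([1.0, 0.0])
x_b, y_b = np.zeros(4), np.array([0.0, 1.0])
x_mix, y_mix = mixup_with_noise(x_a, y_a, x_b, y_b)
```

The mixed label remains a valid probability vector, since the interpolation weights sum to one.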
Michael Mahoney works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, and applications in genetics, astronomy, medical imaging, social network analysis, and Internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the Mathematics Department, at Yahoo Research, and at Stanford University in the Mathematics Department. Among other things, he is on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he runs the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets, and he spent the fall of 2013 at UC Berkeley co-organizing the Simons Foundation's program on the Theoretical Foundations of Big Data Analysis.