"CSU Researchers Funded by DARPA to Demystify Neural Networks, Improve Cybersecurity"

Artificial Neural Networks (ANNs) are computer software systems that process information, recognize patterns, and learn in ways similar to the human brain. Such systems, which underpin everything from weather prediction models to facial recognition technology, are becoming increasingly important today. However, these complex networks are difficult to dissect and interpret, which leaves them vulnerable to cyberattacks. Computer scientists and mathematicians at Colorado State University (CSU) are therefore working to understand how ANNs operate and how they can be better protected against security threats. Their $1 million research project is funded by the US Department of Defense's Defense Advanced Research Projects Agency (DARPA). Although complex, ANNs can be explained and simplified by viewing them not as black-box functions fit to data, but as geometric objects whose lower-dimensional subspaces capture the local geometry of the data. By describing networks as shapes and investigating their underlying mathematical foundations, researchers could make ANNs more explainable, trustworthy, and secure. According to Michael Kirby, a professor in the CSU Department of Mathematics and leader of the study, the central idea is that geometric structure exists in data and can be used to better understand information. The team hopes to use insights into what they call the geometry of learning to better understand how attacks exploiting the flaws of a trained Machine Learning (ML) algorithm can occur, as well as what tools and methods can be used to defend against such attacks. This article continues to discuss the CSU project aimed at understanding how adversarial Artificial Intelligence (AI) emerges and identifying ways to defend against adversarial attacks.
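To illustrate the geometric viewpoint described above (learned representations having low-dimensional local structure), the following is a minimal sketch, not the CSU team's method, that estimates the local subspace dimension of a cloud of feature vectors by running PCA on nearest-neighbor neighborhoods. The function names, parameters, and synthetic data are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch (illustrative only): estimate the dimension of the local
# linear subspace around each point in a set of feature vectors, e.g. the
# hidden-layer activations of a trained network.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def local_subspace_dims(features, k=20, var_threshold=0.95):
    """For each point, take its k nearest neighbors and count how many
    principal directions are needed to explain `var_threshold` of the
    neighborhood's variance."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(features)
    _, idx = nbrs.kneighbors(features)
    dims = []
    for neighborhood in idx:
        local = features[neighborhood]
        local = local - local.mean(axis=0)  # center the neighborhood
        # Singular values measure the spread along each principal direction.
        s = np.linalg.svd(local, compute_uv=False)
        var_ratio = s**2 / np.sum(s**2)
        dims.append(int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1))
    return np.array(dims)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "activations": a 3-D latent structure embedded in 50 dimensions.
    latent = rng.normal(size=(500, 3))
    embedding = rng.normal(size=(3, 50))
    activations = np.tanh(latent @ embedding)
    dims = local_subspace_dims(activations)
    print("median local subspace dimension:", int(np.median(dims)))
```

In this toy example the estimated local dimension stays close to 3 even though the vectors live in 50 dimensions, which is the kind of low-dimensional local geometry the article refers to.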

CSU reports "CSU Researchers Funded by DARPA to Demystify Neural Networks, Improve Cybersecurity"
