"Taking the Time to Implement Trust in AI"

Researchers who value security and privacy have been paying close attention to the rapid development of Machine Learning (ML) technology. Vulnerabilities in these advances and their Artificial Intelligence (AI) applications leave users susceptible to attack. Bo Li, a computer science professor at the University of Illinois Urbana-Champaign, has therefore built her research career around trustworthy ML, focusing on robustness, privacy, generalization, and the underlying connections among these properties. Li noted that ML is now applied across many domains, including autonomous driving, Large Language Models (LLMs), and facial recognition, but that these same advances remain vulnerable to attack. Li received $1 million to align her Secure Learning Lab with the Defense Advanced Research Projects Agency's (DARPA) Guaranteeing AI Robustness Against Deception (GARD) program. This article continues to discuss Li's work with her students on the concept of trustworthy AI.

The University of Illinois Urbana-Champaign reports "Taking the Time to Implement Trust in AI"