"For AI, Secrecy Often Doesn't Improve Security"

A team of researchers has concluded that restricting public access to the inner workings of artificial intelligence (AI) models often does little to improve security and carries consequences of its own. They detail the threats posed by the misuse of AI systems in areas such as disinformation and hacking, assessing each risk and examining whether there are more effective ways to combat it than restricting access to AI models. For example, when discussing how AI could be used to generate text for phishing emails, the researchers point out that strengthening email defenses is more effective than restricting access to the models themselves. This article continues to discuss key points from the team's paper "Considerations for Governing Open Foundation Models."

Princeton University reports "For AI, Secrecy Often Doesn't Improve Security"
