"Mass Event Will Let Hackers Test Limits of AI Technology"

As soon as ChatGPT was released, hackers began "jailbreaking" the Artificial Intelligence (AI) chatbot, attempting to circumvent its safeguards so that it would produce irrational or offensive output. Its creator, OpenAI, along with other major AI providers including Google and Microsoft, is collaborating with the Biden administration to let thousands of hackers test the limits of their technology. The hackers will probe, among other things, how chatbots can be manipulated to cause harm and whether they leak private information provided by users. Anyone who has interacted with ChatGPT, Microsoft's Bing chatbot, or Google's Bard will have discovered their propensity to fabricate information and confidently present it as fact. These systems, built on what are known as Large Language Models (LLMs), also reproduce the cultural biases present in the vast troves of online text on which they were trained. US government officials were drawn to the concept of mass hacking in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON's long-running AI Village, and Austin Carson, president of the responsible AI nonprofit SeedAI, led a workshop inviting community college students to hack an AI model. This article continues to discuss hackers testing the limits of AI technology.

AP reports "Mass Event Will Let Hackers Test Limits of AI Technology"