"HYAS Infosec Groundbreaking Research on AI-Generated Malware Contributes to the AI Act, Other AI Policies and Regulations"

Research from HYAS Infosec's HYAS Labs is contributing to the European Union's Artificial Intelligence (AI) Act, an initiative shaping the trajectory of AI governance, with related US policies and considerations expected to follow. According to researchers who study the AI Act and its framers, the Act reflects a specific conception of AI systems, treating them as non-autonomous statistical software whose potential harms stem mainly from their datasets. These researchers see the concept of "intended purpose," derived from product safety principles, as the paradigm that has significantly shaped the AI Act's initial provisions and regulatory approach. However, they also see a major gap in the AI Act regarding AI systems without an intended purpose, including General-Purpose AI Systems (GPAIS) and foundation models. HYAS' work on AI-generated malware, specifically BlackMamba and its more sophisticated, fully autonomous cousin EyeSpy, is providing further insight into AI systems that lack an intended purpose, such as GPAIS, and the unique challenges GPAIS poses to cybersecurity. This article continues to discuss HYAS Infosec's research contributing to the AI Act and other AI regulations.

Business Wire reports "HYAS Infosec Groundbreaking Research on AI-Generated Malware Contributes to the AI Act, Other AI Policies and Regulations"

Submitted by grigby1