'Utter rubbish': Hackers have criticised current AI security testing

At DEF CON, the world's largest hacker conference, leading cybersecurity researchers stated that current methods for protecting artificial intelligence (AI) systems are fundamentally flawed and require a complete overhaul.
Here's What We Know
The conference's first Hackers' Almanac report, produced in conjunction with the University of Chicago's Cyber Policy Initiative, questions the effectiveness of the "red team" approach, in which security experts probe AI systems for vulnerabilities. Sven Cattell, who heads DEF CON's AI Village, said the approach cannot adequately protect against emerging threats because documentation of AI models is fragmented and the assessments included in that documentation are inadequate.
At the conference, about 500 participants tested AI models, and even novices succeeded in finding vulnerabilities. The researchers called for a framework similar to Common Vulnerabilities and Exposures (CVE), which traditional cybersecurity has used since 1999, to standardise how AI vulnerabilities are documented and remediated.
Source: DEF CON