Stanford University researchers have found that no current large language models (LLMs), including OpenAI’s GPT-4 and Google’s Bard, comply with the European Union’s AI Act. This legislation, the first of its kind, governs AI within the EU, and it is already serving as a blueprint for AI regulators around the world.
The study assessed ten major model providers against 12 requirements drawn from the AI Act. Compliance varied widely, with some providers scoring below 25%. Key areas of non-compliance included disclosure of copyrighted training data, energy use, emissions, and risk-mitigation methodologies.
The study also highlighted a disparity between open-source and closed model releases: open-source releases led to more robust disclosure of resources, but posed greater challenges for monitoring and controlling deployment.
The researchers proposed recommendations for improving AI regulation, including holding the largest foundation model providers accountable for transparency, and equipping regulators with the technical resources and talent needed to enforce the Act.
The findings underscore the gap between current industry practice and the Act’s requirements, and the challenges ahead for both model providers and regulators.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.