AI model developers now required to disclose information about their systems under new EU regulations
The European Union has implemented new rules for providers of General-Purpose Artificial Intelligence (GPAI) models, effective August 2, 2025. These rules, part of the legislation known as the EU AI Act, focus primarily on transparency, intellectual property, and enforcement.
The new rules require developers to disclose comprehensive information about their models, including training data, licenses, computational resources, and compliance with EU copyright law. Developers must also specify what measures they took to protect copyright and provide a summary of the content used to train their models.
Transparency obligations include sharing details on training-data licenses and summaries, energy usage, and model documentation, so that users and regulators can understand a model’s workings and provenance. The Act also mandates that training data respect intellectual property rights, aligning model training with EU copyright protections.
Providers of these AI systems must now disclose how their models work and summarize what data they were trained on. The legislation does not, however, oblige them to name every specific dataset, domain, or source.
Enforcement of the new rules is shared between national competent authorities (NCAs) and the EU-level AI Office. The AI Office supervises GPAI providers, can request information, and has the power to impose sanctions for non-compliance. The European Artificial Intelligence Board coordinates oversight efforts, supported by advisory bodies and scientific panels.
Sanctions or fines can be imposed for breaches. The AI Office will begin enforcing the new rules for newly released models in August 2026; models that came onto the market before August 2, 2025, will fall under its supervision from August 2027.
The new rules apply specifically to GPAI models, which are versatile systems that can generate text, analyze language, or write code. Providers of particularly powerful models that could pose a systemic risk to the public must additionally document their safety measures.
Several national and international alliances of authors, artists, and publishers have complained that the legislation does not adequately protect intellectual property. The Initiative for Copyright has criticized the measures in the new AI legislation as ineffective, and Google has likewise voiced concerns about the law.
Violations of the AI law can be punished with fines of up to 15 million euros or three percent of total global annual turnover.
In a nutshell, the AI Act imposes rigorous transparency obligations on providers of GPAI models — covering training data, licenses, computational resources, and copyright compliance — enforces adherence to intellectual property rules, and establishes a coordinated governance and enforcement ecosystem empowered to levy penalties for violations. Given the versatile nature of GPAI systems, this transparency is essential for users and regulators to understand a model's workings and provenance.