Navigating the AI Act: Obligations and Exemptions for General-Purpose AI Models

We explore the European Commission's guidelines on the AI Act, focusing on the obligations for providers of general-purpose AI models. We explain why these models matter and the specific responsibilities providers bear, including maintaining transparency, complying with copyright law and assessing systemic risks. We also discuss the exemptions for open-source models and the enforcement framework, and why understanding these guidelines is crucial for your organisation.
 

General-purpose AI models under the AI Act

Effective 2 August 2025, the AI Act introduced clear obligations for all providers of general-purpose AI models. The guidelines treat training compute and the ability to perform a wide range of distinct tasks as the key indicators of generality; as an indicative criterion, a model whose training compute exceeds 10^23 floating-point operations (FLOP) and which can generate language is treated as general-purpose. The AI Act defines a general-purpose AI model as ‘an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market’.
 

Lifecycle of a general-purpose AI model

The lifecycle of a general-purpose AI model matters for providers, particularly where the model presents systemic risks: those risks must be assessed and mitigated throughout the lifecycle, with an adequate level of cybersecurity protection maintained as needed. The lifecycle begins with the large pre-training run and includes every subsequent development, whether it takes place before or after the model is placed on the market. The obligations therefore span the model's development, its availability on the market and its use.
 

Obligations for providers

Providers of general-purpose AI models must keep technical documentation up to date, implement a copyright policy and publish a summary of the content used to train the model. Models classified as presenting systemic risk are subject to additional obligations: their providers must continuously assess and mitigate those risks using effective and proportionate measures. A model is classified as presenting systemic risk either because it has high-impact capabilities (presumed under the AI Act where the cumulative compute used for its training exceeds 10^25 FLOP) or because the Commission so decides. Providers must notify the Commission if their model meets the criteria for high-impact capabilities and may contest a classification they consider incorrect; where the Commission designates a model as presenting systemic risk, the provider can request a reassessment of that designation.
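To make the presumption concrete, the short Python sketch below applies the 10^25 FLOP threshold to a hypothetical training-compute figure. It is purely illustrative: classification under the AI Act turns on the legal criteria and the Commission's assessment, not on this arithmetic.

    # Illustrative only: the AI Act presumes high-impact capabilities where
    # the cumulative compute used to train a model exceeds 10^25 FLOP.
    HIGH_IMPACT_PRESUMPTION_FLOP = 1e25

    training_compute_flop = 3e25  # hypothetical cumulative training compute

    if training_compute_flop > HIGH_IMPACT_PRESUMPTION_FLOP:
        print("Presumed high-impact capabilities: notify the Commission.")
    else:
        print("Below the presumption threshold; Commission designation may still apply.")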
 

When an organisation becomes a provider

An organisation becomes a ‘provider’ of a general-purpose AI model when it develops the model, or has it developed, and places it on the market under its own name or trademark, whether free of charge or for payment. ‘Placing on the market’ means making the model available for the first time on the EU market for distribution or use in the course of a commercial activity. This can happen through software libraries, APIs, direct downloads or physical copies. For example, if your organisation develops a model and uploads it to an online repository, or commissions a model from another party and places it on the market under its own name, it will need to comply with the AI Act. A model is also considered placed on the market when the provider integrates it into its own AI system and makes that system available on the EU market. Providers established outside the EU must appoint an authorised representative within the Union.
 

Modifications and reclassification

If your organisation modifies a general-purpose AI model so substantially that its generality, capabilities or systemic risk materially change, it will be considered the provider of the modified model. The Commission's guidelines set an indicative threshold based on the compute used for the modification: more than one third of the compute used to train the original model. Where the original model presents systemic risk, the threshold is adjusted accordingly. Modifiers must comply with the transparency obligations and, if the modified model presents systemic risk, notify the Commission. This forward-looking approach is designed to accommodate future increases in compute use and changes in technology and market conditions.
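As an illustration of that indicative threshold, the Python sketch below compares a hypothetical modification-compute figure against one third of a hypothetical original training compute. Again, this is a sketch of the arithmetic only; whether provider status actually transfers is a legal question decided under the Commission's guidelines.

    # Illustrative only: indicative one-third compute threshold for when a
    # modifier becomes the provider of a modified general-purpose AI model.
    original_training_compute_flop = 6e24   # hypothetical original training compute
    modification_compute_flop = 2.5e24      # hypothetical compute used for the modification

    threshold_flop = original_training_compute_flop / 3

    if modification_compute_flop > threshold_flop:
        print("Above the indicative threshold: the modifier is treated as the provider.")
    else:
        print("Below the indicative threshold: the original provider remains responsible.")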
 

Exemptions for open-source models

In principle, providers of general-purpose AI models must comply with Articles 53 and 54 of the AI Act. Limited exemptions apply to models released under a genuinely free and open-source licence, provided they are not classified as presenting systemic risk. Exempt providers do not have to maintain technical documentation, provide information to downstream AI system providers or, where they are established in a third country, appoint an authorised representative. The rationale is that open-source models can boost research and innovation and ensure transparency. Even where the exemptions apply, providers must still comply with copyright law and publish a summary of the training content.
 

Conditions for exemption eligibility

To qualify for the exemptions under the AI Act, a general-purpose AI model must be released under a free and open-source licence that allows the model to be accessed, used, modified and distributed. The licence must permit the model to be shared openly, without restrictions such as non-commercial-use clauses or user-scale thresholds. The model must not be monetised: no payment may be required for access or use. The model's parameters, including its weights, together with information on its architecture and usage, must be made publicly available to support transparency and practical use. Optional paid services may be offered alongside the model, provided they don't affect its free usage.
 

Enforcement, supervision and the role of the AI Office

The AI Office will oversee and enforce compliance with Chapter V of the AI Act. Providers can demonstrate compliance by adhering to an approved code of practice, which simplifies the process and builds trust. Those who don't adhere to a code must demonstrate compliance through other means and may face greater scrutiny.

The AI Office will monitor adherence, handle complaints, and coordinate with providers, particularly those managing systemic risk models. Providers are encouraged to engage with the AI Office, report any serious incidents, and make use of the support available during the transition period. The Commission will maintain confidentiality and consider the challenges faced by providers as they adapt to the new framework.

The AI Act represents a significant step towards regulating general-purpose AI models to ensure they are developed and used responsibly. By understanding and adhering to these obligations, providers can contribute to a safer and more innovative AI landscape - collaboration and transparency will be key to navigating these changes successfully.

If you have any queries about how your organisation may need to comply with the AI Act, or would like further information, please visit our data protection services section or contact Christopher Beveridge.
