Many organisations still struggle with accountability and governance for AI. Only around 25% have fully implemented AI governance programmes, and fewer than 30% formally define oversight roles, highlighting widespread uncertainty about responsibility when AI systems lead to harm or regulatory exposure.
This uncertainty is precisely what the EU AI Act is designed to eliminate. As general-purpose AI models become embedded across products, platforms, and decision workflows, regulators are no longer asking whether AI introduces risk, but who is accountable for managing it.
The EU AI Act's GPAI obligations establish clear expectations for model provider responsibility, particularly for GPAI models with systemic risk, and compel leadership to address governance, transparency, and lifecycle accountability at the model level.
why does the EU AI Act matter for GPAI model providers?
The EU AI Act (Regulation 2024/1689), which came into force in 2024, is the world's first comprehensive AI regulatory framework, designed to ensure that AI systems placed on the EU market are trustworthy, safe, and respectful of fundamental rights. Within this framework, General-Purpose AI (GPAI) model providers are subject to defined obligations under Articles 53 and 55 of the Act. These obligations differ based on whether a model is classified as presenting systemic risk due to its scale, capabilities, or deployment context.
These legal principles translate into enforceable operational requirements. Central to the framework are documentation and disclosure obligations that require GPAI model providers to maintain current technical records covering model architecture, training and evaluation methods, and performance characteristics. Providers must also share relevant information with the EU AI Office and downstream AI system developers, while safeguarding intellectual property and trade secrets. This enables organisations integrating GPAI models to meet their own compliance responsibilities under the Act.
GPAI with systemic risk: elevated requirements and transparency obligations
Not all GPAI models are treated equally. Models designated as GPAI with systemic risk are subject to enhanced requirements due to their potential societal or economic impact. Providers must implement formal safety and security frameworks, conduct systemic risk assessments, and demonstrate how risks are identified, mitigated, and continuously monitored across the model lifecycle.
Expectations from GPAI model providers
The EU AI Act shifts the focus from reactive risk-handling to proactive stewardship through the following mandates:
- Safety and security framework: Providers must establish, maintain, and report on a safety and security framework throughout the model's entire lifecycle.
- Data transparency: Providers must publish summaries of training data, allowing partners and regulators to assess model limitations and legal exposure.
- Copyright compliance: Providers are required to implement strict data governance policies that align with EU copyright requirements.
- Systemic risk management: Providers of systemic-risk models must identify, assess, and mitigate systemic risks; all providers must demonstrate lawful development to reduce downstream risk for customers.
from voluntary guidance to strategic advantage
To support the practical implementation of the EU AI Act, the European Commission has endorsed a voluntary GPAI Code of Practice. The code offers structured guidance on how providers can operationalise documentation, transparency, and safety requirements. While not legally binding, it is recognised as a mechanism for reducing administrative burden and increasing legal certainty. It also helps organisations translate regulatory intent into consistent execution.
For organisations, the code signals that EU AI Act compliance requires early preparation rather than last-minute remediation. Non-compliance may trigger enforcement actions with tiered penalties:
- Misleading or partial disclosures to oversight bodies: fines of up to €7.5 million or 1% of global annual turnover, whichever is higher.
- Non-compliance with GPAI obligations under Articles 53 and 55: fines of up to €15 million or 3% of global annual turnover, whichever is higher.
- Prohibited AI practices: fines of up to €35 million or 7% of global annual turnover, whichever is higher.
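The "whichever is higher" logic across these tiers reduces to a simple comparison. The sketch below is illustrative only, not legal guidance; the tier names and helper function are hypothetical, while the caps and percentages are the figures listed above.

```python
# Illustrative sketch of the EU AI Act's tiered penalty ceilings.
# Each tier is (fixed cap in EUR, share of global annual turnover);
# the applicable ceiling is whichever amount is higher.

PENALTY_TIERS = {
    "misleading_disclosure": (7_500_000, 0.01),   # €7.5M or 1% of turnover
    "gpai_obligations":      (15_000_000, 0.03),  # €15M or 3% of turnover
    "prohibited_practice":   (35_000_000, 0.07),  # €35M or 7% of turnover
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the penalty ceiling for a tier: the higher of the fixed
    cap and the turnover-based percentage."""
    cap, pct = PENALTY_TIERS[tier]
    return max(cap, pct * annual_turnover_eur)

# For a provider with €2bn global turnover, the prohibited-practice
# ceiling is 7% of turnover (€140M), exceeding the €35M fixed cap.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```

The turnover-linked percentage is what makes these ceilings scale with company size: for large providers, the percentage, not the fixed cap, typically determines exposure.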
By contrast, organisations that invest early in documentation discipline, safety frameworks, and model-level governance are better positioned to manage regulatory exposure and avoid disruption as enforcement timelines mature.
from compliance to competitive advantage
When approached strategically, EU AI Act compliance can become a source of competitive advantage rather than a constraint. Organisations that embed strong AI governance, transparency, and oversight practices tend to earn greater confidence from regulators, partners, and enterprise customers. This trust increasingly influences procurement decisions, partnership viability, and platform adoption in regulated markets.
The advantage is realised through execution. Organisations that integrate AI governance across legal, data science, cybersecurity, and business teams strengthen oversight, reduce operational friction, and better align AI initiatives with enterprise strategy. Mature governance frameworks are associated with 23% fewer AI-related incidents and 31% faster time-to-market for new AI capabilities, demonstrating how integrated governance supports both risk control and innovation velocity.
how can Infosys BPM support responsible GPAI compliance?
Operationalising the EU AI Act at scale requires translating regulation into repeatable governance and execution practices. Infosys BPM supports organisations in embedding responsible AI through governance-led frameworks addressing transparency, human oversight, systemic risk, and compliance readiness. By aligning AI deployment with trust-by-design principles, Infosys BPM helps enterprises manage EU AI Act GPAI obligations proactively, reduce regulatory exposure, and scale AI responsibly.
Frequently asked questions
What are the obligations of GPAI model providers under the EU AI Act?
GPAI model providers under the EU AI Act (Regulation 2024/1689) must maintain comprehensive technical documentation, publish training data summaries, implement copyright compliance policies, and share relevant information with the EU AI Office and downstream developers. These requirements apply under Articles 53 and 55, with enhanced obligations for models designated as presenting systemic risk, including formal safety frameworks, adversarial testing, and continuous risk monitoring across the model lifecycle. Explore how Infosys BPM supports responsible AI compliance with governance-led frameworks.
How does a standard GPAI model differ from one with systemic risk?
A standard GPAI model is subject to documentation, transparency, and copyright compliance obligations under Article 53 of the EU AI Act. A GPAI model designated as presenting systemic risk, typically due to training compute exceeding 10²⁵ FLOPs or significant societal or economic impact, faces elevated requirements under Article 55. These include formal safety and security frameworks, mandatory incident reporting to the EU AI Office, adversarial testing, and continuous risk assessment throughout the model lifecycle. The distinction is not static: regulatory designation can change as deployment scale and capability evolve.
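The compute-based presumption reduces to a single comparison against the 10²⁵ FLOP threshold. A minimal sketch follows; the function name is illustrative, and actual designation also weighs capabilities and market reach, not compute alone.

```python
# Hedged sketch of the training-compute presumption for systemic risk:
# a GPAI model trained with more than 10^25 floating-point operations
# is presumed to present systemic risk. Designation in practice
# considers additional criteria beyond this single threshold.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```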
What are the penalties for non-compliance with GPAI obligations?
Non-compliance with GPAI provisions carries tiered financial penalties. Misleading or incomplete disclosures to oversight bodies carry fines of up to €7.5 million or 1% of global annual turnover. Non-compliance with Articles 53 or 55 obligations, covering documentation, transparency, and safety frameworks, incurs fines of up to €15 million or 3% of global turnover. Violations of prohibited AI practices carry the highest penalty: up to €35 million or 7% of global turnover, whichever is higher. Early governance investment consistently proves more cost-effective than regulatory remediation.
Is the GPAI Code of Practice mandatory?
No. The GPAI Code of Practice, endorsed by the European Commission, is voluntary, but it is formally recognised as a mechanism for demonstrating compliance with EU AI Act obligations and for reducing administrative burden. Providers that adhere to the Code gain increased legal certainty and a structured pathway for operationalising documentation, transparency, and safety requirements. For enterprises under enforcement scrutiny, documented adherence to the Code is likely to carry material weight in regulatory assessments, making early adoption a credible risk mitigation strategy.
What advantages do enterprises gain from proactive AI governance?
Enterprises that embed AI governance proactively, rather than retrofitting controls under enforcement pressure, realise advantages across three dimensions. Organisations with mature AI governance frameworks report 23% fewer AI-related incidents and 31% faster time-to-market for new AI capabilities, directly improving both risk posture and innovation velocity. Beyond operational metrics, demonstrable governance strengthens trust with enterprise customers and regulators, increasingly influencing procurement decisions and platform adoption in regulated markets and translating compliance investment into commercial advantage.


