Governance in artificial intelligence reached a definitive milestone with the EU AI Act. The transition from voluntary frameworks to strict legal mandates has become a boardroom priority. For global enterprises, the financial stakes are unprecedented. Non-compliance with prohibited practices can trigger penalties of up to €35 million or 7% of total global annual turnover, while general breaches of obligations can result in fines of €15 million or 3% of turnover.
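In arithmetic terms, each tier's ceiling is simply the higher of a fixed amount and a share of global annual turnover (for SMEs the Act takes the lower of the two). The `fine_ceiling` helper below is an illustrative sketch covering the standard enterprise case only; the tier names are our own labels, not terms from the Act:

```python
def fine_ceiling(annual_turnover_eur: float, tier: str) -> float:
    """Maximum fine for a standard enterprise: the higher of the fixed
    amount and the percentage of total global annual turnover."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),    # prohibited AI uses
        "other_obligations": (15_000_000, 0.03),       # breaches of obligations
        "incorrect_information": (7_500_000, 0.015),   # misleading authorities
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * annual_turnover_eur)
```

For a provider with €2 billion in turnover, the prohibited-practices ceiling is the turnover-based figure (€140 million); for one with €100 million, the fixed €35 million floor dominates.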
Understanding the nuances of high-risk classification is a strategic necessity for maintaining market access in the European Union.
Navigating the high-risk classification
The Act employs a rigorous risk-based hierarchy, but the "high-risk" designation is where the most significant operational burden lies. Here are some of the essential classifications:
- Under Article 6, a system is classified as high-risk if it serves as a safety component for products already covered by Union harmonisation legislation, such as medical devices, civil aviation, or automotive safety. These applications require a third-party assessment.
- Annex III identifies specific functional areas, including biometrics, critical infrastructure, and employment, as inherently high-risk due to their potential impact on fundamental rights.
It is also important to appreciate the nuance regarding the "profiling" of natural persons. Any AI system that performs profiling of natural persons within an Annex III use case cannot benefit from the Article 6(3) derogations and is always categorised as high-risk. This classification remains mandatory because of the significant harm automated processing can inflict on individuals' health, safety, or legal standing. Enterprises must proactively document their assessment of these systems before they ever reach the market.
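As a rough illustration only (not legal advice), the classification logic above can be sketched as a first-pass screening helper. The area names, the `is_high_risk` function, and the single derogation flag are simplified assumptions for the example:

```python
# Illustrative Article 6 / Annex III triage sketch; a documented legal
# assessment is still required before any system reaches the market.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def is_high_risk(area: str, *, safety_component: bool = False,
                 profiling: bool = False,
                 narrow_procedural_task: bool = False) -> bool:
    """First-pass screen mirroring the rules described in the text."""
    if safety_component:        # safety component of an Annex I regulated product
        return True
    if area not in ANNEX_III_AREAS:
        return False
    if profiling:               # profiling of natural persons: no derogation
        return True
    # Simplified stand-in for the Article 6(3) derogation
    return not narrow_procedural_task
```

Note how the `profiling` check sits before the derogation check, reflecting that profiling systems are denied the exemption outright.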
The EU AI Act compliance checklist
To achieve robust AI governance and compliance, providers must move beyond peripheral checks and integrate a structured Quality Management System (QMS). This framework ensures that risk management is not a one-time event but a continuous, iterative process running throughout the system's entire lifecycle. High-risk providers must implement the following core pillars:
Data governance and management
Training and testing datasets must be relevant, representative, and, to the best extent possible, free of errors. This involves rigorous bias detection and mitigation to prevent discriminatory outcomes in high-stakes sectors like recruitment or credit scoring.
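One simple way to quantify such outcome disparities is a selection-rate ratio (the "four-fifths" heuristic borrowed from employment testing; it is not mandated by the Act, and the `disparate_impact` helper below is our own sketch):

```python
from collections import defaultdict

def disparate_impact(decisions) -> float:
    """decisions: iterable of (group, selected) pairs.
    Returns min selection rate / max selection rate across groups.
    Values well below 0.8 are a common (non-statutory) red flag."""
    pos, tot = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        tot[group] += 1
        pos[group] += bool(selected)
    rates = {g: pos[g] / tot[g] for g in tot}
    return min(rates.values()) / max(rates.values())
```

For example, if group A is selected 80% of the time and group B only 40%, the ratio is 0.5, which would warrant investigation in a recruitment or credit-scoring pipeline.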
Technical documentation
Enterprises must prepare comprehensive technical documentation prior to deployment. This includes system architecture, performance metrics, and a detailed description of the risk management system.
Record-keeping
High-risk systems are required to feature automatic logging capabilities. These logs must ensure the traceability of the system's functioning over its lifetime to facilitate post-market monitoring and incident reporting.
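There are many ways to satisfy this logging duty; one illustrative pattern (our own sketch, not a format prescribed by the Act) is an append-only log in which each record carries a hash of its predecessor, making after-the-fact tampering detectable:

```python
import hashlib
import json
import time

def log_event(path: str, event: dict) -> str:
    """Append a hash-chained record to an audit log and return its hash.
    Chaining each entry to the previous one supports the traceability
    that post-market monitoring and incident reporting rely on."""
    try:
        with open(path, "rb") as f:
            last_line = f.read().splitlines()[-1]
            prev_hash = json.loads(last_line)["hash"]
    except (FileNotFoundError, IndexError):
        prev_hash = "0" * 64  # genesis marker for an empty log
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(
        {k: record[k] for k in ("ts", "event", "prev")}, sort_keys=True
    )
    record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]
```

A verifier can later walk the file and recompute each hash to confirm the chain is unbroken over the system's lifetime.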
Human oversight
Systems must be designed so that natural persons can effectively oversee operations, interpret outputs, and, where necessary, intervene to override the system or bring it to a safe stop.
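A minimal sketch of what such a design hook could look like in application code (the `OversightGate` class and its confidence threshold are illustrative assumptions, not an interface prescribed by the Act):

```python
class OversightGate:
    """Human-in-the-loop gate: low-confidence outputs are escalated for
    review, and an operator can always force the system to a safe stop."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.stopped = False

    def stop(self) -> None:
        """Operator override: place the system in a safe, inert state."""
        self.stopped = True

    def decide(self, confidence: float, prediction: str):
        if self.stopped:
            return ("halted", None)          # safe state: emit nothing
        if confidence < self.threshold:
            return ("needs_review", prediction)  # escalate to a human
        return ("auto", prediction)
```

The point of the sketch is architectural: the override path exists outside the model itself, so a natural person can stop the system regardless of what it predicts.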
A comprehensive EU AI Act compliance checklist serves as the foundation for these efforts. It ensures that transparency measures, such as clear user instructions and performance disclosure, are met with precision.
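In practice, teams often track these pillars as structured data so that open gaps surface automatically. A minimal sketch (the pillar names paraphrase the requirements and are not official identifiers from the Act):

```python
# Hypothetical checklist structure; pillar names are informal paraphrases.
PILLARS = (
    "risk_management", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight",
    "accuracy_robustness_cybersecurity",
)

def gap_report(status: dict) -> list:
    """Return the pillars not yet evidenced; missing keys count as open gaps."""
    return [p for p in PILLARS if not status.get(p, False)]
```

Feeding the report into release gates (for instance, blocking deployment while any pillar lacks evidence) turns the checklist from a document into an enforced control.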
Executing the AI conformity assessment
Before any high-risk AI system is placed on the EU market, it must undergo a formal AI conformity assessment. This procedure verifies that the system meets all mandatory requirements established in the Act. While most Annex III systems may rely on internal control assessments, those integrated into products regulated under Annex I typically require the intervention of a 'notified body', an accredited third-party assessor. Successfully navigating this process culminates in the attainment of the CE marking and an EU Declaration of Conformity.
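The routing logic described above can be caricatured as follows. This is a deliberately simplified sketch of our own: real routing under the Act also depends on harmonised standards, the specific Annex III point, and sectoral law:

```python
from typing import Optional

def assessment_route(annex_i_product: bool,
                     annex_iii_area: Optional[str],
                     harmonised_standards_applied: bool = True) -> str:
    """Very simplified conformity-assessment routing: Annex I products go
    to a notified body; most Annex III systems may use internal control,
    except biometrics where harmonised standards were not fully applied."""
    if annex_i_product:
        return "notified_body"
    if annex_iii_area is None:
        return "not_high_risk"
    if annex_iii_area == "biometrics" and not harmonised_standards_applied:
        return "notified_body"
    return "internal_control"
```

Even in this toy form, the sketch makes the key asymmetry visible: self-assessment is the default for most Annex III systems, while embedded Annex I systems inherit third-party scrutiny.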
For providers established outside the EU, including GPAI providers, these obligations are equally binding if their AI systems are placed on the EU market or if the output produced by the system is used within the Union. Failure to demonstrate conformity upon request during an audit can lead to immediate market withdrawal and the aforementioned tiered penalties.
As the August 2026 deadline for high-risk systems approaches (obligations for AI embedded in Annex I products may apply later), the window for preparatory governance is narrowing. The complexity of the EU AI Act demands a proactive, cross-functional strategy that balances innovative deployment with rigorous legal adherence. Mastery of these regulations will distinguish the market leaders of the next decade: leaders who treat compliance as a foundational tenet of their AI journey.
How can Infosys BPM help enterprises with holistic AI governance and compliance?
At Infosys BPM, we recognise that compliance is not an inhibitor of innovation but a catalyst for trust. Our AI governance and compliance solutions are designed to harmonise the complex requirements of the EU AI Act with your existing operational workflows. By leveraging our deep domain expertise in regulated industries and our AI-first approach, we help brands streamline their EU AI Act compliance checklist and navigate the complexities of the AI conformity assessment process.
Frequently asked questions
What makes an AI system high-risk under the EU AI Act?
Systems are high-risk if they are safety components in regulated products (Annex I) or fall under Annex III functions such as biometrics, critical infrastructure, or employment decisions. Profiling natural persons in Annex III contexts also triggers mandatory high-risk status regardless of the exemptions that might otherwise apply.
What must providers do to comply with the high-risk requirements?
Providers must implement a Quality Management System (QMS), ensure data governance with bias mitigation, maintain technical documentation and automatic logging, design for human oversight, and conduct conformity assessments. Transparency measures, such as clear instructions and performance disclosure, are also required.
What are the penalties for non-compliance?
Prohibited AI practices carry fines of up to €35 million or 7% of global annual turnover, while violations of high-risk obligations attract up to €15 million or 3% of turnover. Other breaches face up to €7.5 million or 1.5%, depending on severity.
Do the obligations apply to providers outside the EU?
Non-EU providers, including GPAI providers, face the same obligations if their systems are placed on the EU market or their outputs are used in the Union, including conformity assessments and the risk of market withdrawal for non-compliance. GPAI models posing systemic risk face additional transparency and risk-mitigation requirements.
What does the AI conformity assessment involve?
High-risk systems undergo a formal assessment to verify the mandatory requirements before market placement, often requiring notified-body review for Annex I products. Successful assessment leads to the CE marking and an EU Declaration of Conformity, with post-market monitoring required thereafter.