
EU AI Act: What Engineering Teams Need to Implement Now

A technical breakdown of EU AI Act requirements — risk classification, documentation, and conformity steps for engineering teams.

Updated: 8 April 2026

The EU AI Act is no longer a policy discussion — it is an engineering requirement. With the prohibition of unacceptable-risk systems already in force and high-risk obligations phasing in through 2027, engineering teams need concrete implementation plans, not slide decks.

This post breaks down what technical teams actually need to build, document, and monitor to achieve compliance.

Understanding Risk Classification

The AI Act classifies AI systems into four risk tiers. Your first task is determining where your systems fall:

Unacceptable risk (banned since February 2025)

  • Social scoring by public authorities
  • Real-time biometric identification in public spaces (with narrow exceptions)
  • Emotion recognition in workplaces and educational institutions
  • AI that exploits vulnerabilities of specific groups

Action: Audit your portfolio. If anything touches these categories, it must be decommissioned or redesigned.

High risk (obligations apply from August 2026)

This is where most enterprise AI systems land. A system is high-risk if it falls under one of the categories listed in Annex III:

  • Employment: CV screening, interview scoring, promotion decisions
  • Credit scoring and insurance: Automated risk assessment for natural persons
  • Critical infrastructure: Energy, water, transport management systems
  • Education: Automated grading, admission decisions
  • Law enforcement and border control: Risk assessment tools

Additionally, AI systems used as safety components of products regulated under existing EU legislation (medical devices, vehicles, machinery) are automatically high-risk.

Watch out: An internal tool that ranks job applicants by CV match score is a high-risk system, even if a human makes the final decision. The AI Act considers systems that "assist" human decisions in these domains as high-risk.

Limited risk (transparency obligations)

  • Chatbots must disclose they are AI
  • AI-generated content (deepfakes, synthetic text) must be labeled
  • Emotion recognition systems must inform users

Minimal risk (no specific obligations)

Spam filters, AI-powered search, recommendation engines for non-critical domains.
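For a first-pass triage of your portfolio, the four tiers above can be sketched as a simple lookup. The use-case labels and the mapping below are illustrative assumptions, not an official taxonomy; anything unmapped should go to manual legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of internal use-case labels to AI Act tiers.
# This is a triage aid, not a legal determination.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "automated_grading": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case; unknown cases go to manual review."""
    if use_case not in USE_CASE_TIERS:
        raise ValueError(f"Unmapped use case {use_case!r}: review manually against Annex III")
    return USE_CASE_TIERS[use_case]
```

A lookup like this forces every production system to have an explicit classification on record, which feeds directly into the inventory step discussed later.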

High-Risk System Requirements: The Technical Checklist

If your system is classified as high-risk, here is what you need to implement:

1. Risk management system (Article 9)

Not a one-time assessment — an ongoing, documented process:

  • Identify and analyze known and foreseeable risks for each AI system
  • Estimate and evaluate risks that emerge during use
  • Adopt risk mitigation measures and document their effectiveness
  • Test against defined metrics to confirm that residual risk is acceptable

Implementation: Integrate risk assessment into your CI/CD pipeline. Every model update should trigger a risk review with documented sign-off.
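One way to sketch such a pipeline gate, assuming a hypothetical `risk_reviews/` directory where each model version gets a JSON review file with sign-off fields (the file layout and field names are assumptions for illustration):

```python
import json
import pathlib

def risk_review_gate(model_version: str, reviews_dir: str = "risk_reviews") -> bool:
    """Return True only if a signed-off risk review exists for this model version.

    Called from CI before deployment: a missing review, unaccepted residual
    risk, or missing sign-off all fail the build.
    """
    path = pathlib.Path(reviews_dir) / f"{model_version}.json"
    if not path.exists():
        return False
    review = json.loads(path.read_text())
    return bool(review.get("residual_risk_accepted")) and bool(review.get("signed_off_by"))
```

Wiring this into the deploy job means a model simply cannot ship without a documented review, which is exactly the "ongoing, documented process" Article 9 asks for.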

2. Data governance (Article 10)

Training, validation, and testing datasets must meet specific quality criteria:

  • Relevance and representativeness — documented analysis of dataset composition
  • Bias examination — proactive identification of potential biases, particularly for protected characteristics
  • Gap analysis — documented assessment of data limitations
  • Statistical properties — recorded and versioned alongside the model

Implementation: Use a data catalog (e.g., Microsoft Purview, DataHub) with lineage tracking. Every training run must reference a specific, versioned dataset with documented properties.
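If you do not have a full data catalog yet, a minimal dataset record can live next to the model artifact. The schema below (`DatasetRecord`, its fields, the fingerprint helper) is a hypothetical sketch of what "versioned dataset with documented properties" can mean in practice:

```python
import hashlib
import json
import statistics
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DatasetRecord:
    """Versioned dataset descriptor stored alongside the trained model."""
    name: str
    version: str
    row_count: int
    feature_stats: dict   # per-feature summary statistics, recorded at training time
    known_gaps: list      # documented limitations, per the gap-analysis requirement

def describe_numeric(feature_name: str, values: list) -> dict:
    """Record basic statistical properties of one numeric feature."""
    return {feature_name: {"mean": statistics.fmean(values),
                           "stdev": statistics.stdev(values)}}

def fingerprint(record: DatasetRecord) -> str:
    """Content hash so every training run references an immutable dataset version."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]
```

Storing the fingerprint in the model registry ties each model version to one exact, documented dataset.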

3. Technical documentation (Article 11)

Before a high-risk system is placed on the market, you need comprehensive documentation covering:

  • General system description and intended purpose
  • Development process, including design choices and trade-offs
  • Monitoring, functioning, and control mechanisms
  • Risk management process details
  • Description of changes throughout the lifecycle

Implementation: Treat this like architecture decision records (ADRs) but mandated by law. Automate documentation generation from your ML pipeline metadata where possible.
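One way to sketch that automation: render an Annex IV-style section from pipeline metadata. The metadata keys below are assumptions for illustration, not a prescribed schema:

```python
def render_technical_doc(meta: dict) -> str:
    """Render a technical-documentation draft from pipeline metadata.

    Each tuple maps one documentation requirement to the metadata field
    that answers it; the output is a reviewable draft, not the final file.
    """
    sections = [
        ("General description", meta["description"]),
        ("Intended purpose", meta["intended_purpose"]),
        ("Design choices and trade-offs", "\n".join(f"- {c}" for c in meta["design_choices"])),
        ("Monitoring and control", meta["monitoring"]),
        ("Lifecycle changes", "\n".join(f"- {c}" for c in meta["changes"])),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)
```

Generating the draft on every release keeps the documentation in step with the system instead of drifting behind it.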

4. Record-keeping and logging (Article 12)

High-risk systems must have automatic logging capabilities:

  • Log events throughout the system's lifecycle
  • Enable traceability of the system's operation
  • Logs must cover, at minimum: periods of use, input data characteristics, reference databases used, and the natural persons involved in verification

Implementation: This is not standard application logging. You need ML-specific observability — log every inference with input features, model version, output, confidence score, and timestamp. Tools like Azure ML's model monitoring or MLflow tracking cover most of these requirements.
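A minimal version of that per-inference logging, assuming an append-only JSONL file (the field names are illustrative, chosen to match the traceability list above):

```python
import json
import time
import uuid

class InferenceLogger:
    """Append-only JSONL log: one record per prediction, for traceability."""

    def __init__(self, path: str):
        self.path = path

    def log(self, model_version: str, features: dict, output, confidence: float) -> dict:
        record = {
            "event_id": str(uuid.uuid4()),   # unique id for later audit lookup
            "timestamp": time.time(),        # period of use
            "model_version": model_version,  # which model produced the output
            "features": features,            # input data characteristics
            "output": output,
            "confidence": confidence,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record
```

In production this would ship to your observability stack rather than a local file, but the record shape is the point: every inference becomes a reconstructable event.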

5. Human oversight (Article 14)

High-risk systems must be designed to allow effective human oversight:

  • Humans must be able to understand the system's capabilities and limitations
  • Operators must be able to override or reverse AI decisions
  • The system must include a stop mechanism

Implementation: Build admin dashboards that show model confidence distributions, flag low-confidence decisions for review, and provide one-click override capabilities.
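The flag-and-override flow might be sketched like this; the 0.85 threshold and the record fields are example assumptions to be tuned per system:

```python
def route_decision(prediction: str, confidence: float, threshold: float = 0.85) -> dict:
    """Route low-confidence outputs to a human review queue instead of auto-applying."""
    if confidence < threshold:
        return {"status": "pending_human_review", "proposed": prediction}
    return {"status": "auto", "decision": prediction, "overridable": True}

def human_override(decision: dict, reviewer: str, new_decision: str) -> dict:
    """Record a human override as a new, auditable decision state."""
    return {**decision, "decision": new_decision, "status": "overridden", "reviewer": reviewer}
```

The design point is that every automated decision carries a state a human can inspect and reverse, rather than being a fire-and-forget output.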

6. Accuracy, robustness, and cybersecurity (Article 15)

  • Accuracy must be declared and measurable against defined metrics
  • Robustness against adversarial inputs and data drift
  • Cybersecurity measures appropriate to the risk level

Implementation: Adversarial testing, automated drift detection, and security hardening of model endpoints are now legal requirements, not nice-to-haves.
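For drift detection, one widely used metric is the Population Stability Index (PSI) between the training distribution and live inputs. The equal-width binning and the 0.2 alert threshold below are simplifying assumptions (a common rule of thumb, not a regulatory value):

```python
import math

def psi(expected: list, actual: list, bins: int = 10, eps: float = 1e-4) -> float:
    """Population Stability Index for one numeric feature.

    Compares the binned distribution of training-time values ("expected")
    against live values ("actual"). A common rule of thumb treats
    PSI > 0.2 as significant drift worth an alert.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def fractions(values: list) -> list:
        counts = [0] * bins
        for v in values:
            i = min(max(int((v - lo) / width), 0), bins - 1)  # clamp out-of-range values
            counts[i] += 1
        # Floor each fraction at eps so the log term stays defined for empty bins.
        return [max(c / len(values), eps) for c in counts]

    p, q = fractions(expected), fractions(actual)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))
```

Running a check like this per feature on a schedule, and failing loudly past the threshold, turns "robustness against data drift" into a concrete monitored control.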

Conformity Assessment: Self or Third-Party?

Most high-risk AI systems under Annex III can undergo self-assessment using harmonized standards. Exceptions requiring third-party assessment:

  • Biometric identification and categorization systems
  • Critical infrastructure safety components
  • Systems covered by specific sectoral legislation that already requires third-party assessment

For self-assessment: Implement an internal quality management system (Article 17) that covers all the above requirements, and maintain a technical file (Annex IV) that can be presented to market surveillance authorities on request.

Timeline: What Is Due When

  • Unacceptable-risk prohibitions (February 2025): Already in force. Ensure no banned systems are operating.
  • General-purpose AI obligations (August 2025): GPAI providers must comply with transparency and documentation rules.
  • High-risk system obligations (August 2026): Full compliance required for Annex III high-risk systems.
  • Existing high-risk systems (August 2027): Systems already on the market must comply if significantly modified.

Practical Steps for Engineering Leaders

  1. Inventory your AI systems. Create a registry of every model in production with its classification under the AI Act. This registry is itself a requirement (Article 49).
  2. Start with logging. Retrofitting observability is the most time-consuming requirement. Begin now.
  3. Standardize documentation. Create templates for technical documentation that align with Annex IV. Automate population from pipeline metadata.
  4. Embed risk assessment in your SDLC. Make it part of pull request reviews for model changes, not a quarterly compliance exercise.
  5. Designate responsibility. Someone in the engineering organization must own AI Act compliance. This is not a task for legal alone.
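The registry in step 1 can start as something as simple as a versioned CSV checked into a repo; the columns below are illustrative, not a prescribed Article 49 schema:

```python
import csv
import io

# Minimal AI system registry: one row per production model.
# Column names are illustrative assumptions for this sketch.
REGISTRY_COLUMNS = ["system_id", "purpose", "risk_tier",
                    "owner", "model_version", "last_risk_review"]

def to_registry_csv(systems: list) -> str:
    """Serialize the registry so it can be versioned and diffed in code review."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=REGISTRY_COLUMNS)
    writer.writeheader()
    for system in systems:
        writer.writerow(system)
    return buf.getvalue()
```

Keeping the registry in version control gives you an audit trail of classifications for free, and a diff on this file is a natural trigger for the risk-review step.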

Bottom line: The EU AI Act is fundamentally an engineering regulation. Legal teams can interpret it, but only engineering teams can implement it. Start with logging and documentation — these are the hardest to retrofit and the most likely to be audited first.


Disclaimer: This article provides general technical guidance on regulatory requirements and should not be construed as legal advice. Regulations may be subject to updates, national transposition differences, and evolving enforcement interpretations. Always consult qualified legal counsel for compliance decisions specific to your organisation.

Questions about classifying your AI systems or building compliant ML pipelines? Contact us — we help engineering teams turn regulatory requirements into technical specifications.


Frequently Asked Questions

When does the EU AI Act apply to high-risk AI systems?
High-risk AI system obligations under the EU AI Act take effect in August 2026. Systems already on the market must comply by August 2027 if significantly modified.
What qualifies as a high-risk AI system under the EU AI Act?
AI systems used in employment (CV screening, interview scoring), credit scoring, critical infrastructure, education (automated grading), and law enforcement are classified as high-risk under Annex III. AI used as safety components in regulated products is also automatically high-risk.
Can companies self-assess their AI systems for EU AI Act compliance?
Most high-risk AI systems under Annex III can undergo self-assessment using harmonized standards. Exceptions requiring third-party assessment include biometric identification systems, critical infrastructure safety components, and systems covered by specific sectoral legislation.
What are the penalties for non-compliance with the EU AI Act?
Penalties can reach up to 35 million EUR or 7% of global annual turnover for prohibited AI practices, and up to 15 million EUR or 3% of turnover for other violations.
What technical documentation is required for high-risk AI systems?
Before placing a high-risk system on the market, you need documentation covering: general system description, development process including design trade-offs, monitoring and control mechanisms, risk management process details, and a description of changes throughout the lifecycle.
