The 12-month countdown calendar
The Act has staggered application dates. For a hiring AI deployer, five of them matter, and the cliff edge is 2 August 2026.[4]
- 1 August 2024 — entry into force; published in the Official Journal of the EU.
- 2 February 2025 — prohibited practices apply (e.g. workplace emotion-recognition systems are now unlawful in the EU regardless of consent), and Article 4 AI literacy obligations begin.
- 2 August 2025 — General-Purpose AI (GPAI) model rules apply; most penalty provisions activate.[4]
- 2 August 2026 — the cliff for hiring teams: full Annex III high-risk obligations apply, including Article 26 deployer duties for any AI system used to evaluate candidates, place targeted job ads, or filter applications.[1][4]
- 2 August 2027 — high-risk obligations for Annex I embedded AI (regulated product safety) apply; the public Member-State register for Annex III deployers is in steady-state operation.
The Commission has tabled a so-called “Digital Omnibus” proposal that would push the Annex III date to 2 December 2027, but it remains in trilogue. Treat 2 August 2026 as binding for planning purposes; if the postponement is enacted you simply gain an extra sixteen months of cushion.[4][6]
What “high-risk” actually means for an AI interviewer
Annex III Section 4(a) explicitly covers “AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.”[1] Read literally, that captures: a CV-screening model, an automated voice or video interview, a coding-assessment grader, a culture-fit scorer, and a ranking model that orders shortlists for human reviewers. The high-risk classification is independent of company size or sector — a ten-person Berlin startup is just as in-scope as a Fortune 500.
Two roles get specific obligations. Providers are the companies that build or substantially train the system (the AI vendor). Deployers are the natural or legal persons using the system under their authority — that is the employer.[7] Most obligations sit on the provider, but Article 26 puts a non-trivial set on the deployer too, and Article 25 promotes a deployer to provider if the employer substantially modifies the system or rebrands it.
The FRIA — Fundamental Rights Impact Assessment
Article 27 introduces the FRIA, a fundamental-rights-focused cousin of the GDPR DPIA. The strict scope is narrow: public-law bodies, private entities providing public services, and deployers of Annex III point 5(b) systems (creditworthiness assessment) and point 5(c) systems (life and health insurance pricing).[3][5] Under the literal text of Article 27, most private-sector hiring deployers are not obliged to conduct one.
The catch: even when not literally required, the FRIA template is becoming the de-facto reasonable-care standard. Article 26(9) makes a GDPR Article 35 DPIA mandatory anyway, regulators are publishing FRIA templates aligned with the AI Office, and a candidate complaint will be handled by data protection authorities that already think in fundamental-rights terms. We recommend treating the FRIA as mandatory-in-practice and producing a single document covering both the DPIA and the FRIA. The headings you must address per Article 27:
- Description of the deployer’s processes in which the AI will operate
- Period of time and frequency of intended use
- Categories of natural persons likely to be affected
- Specific risks of harm, including disparate-impact risks per protected attribute
- Description of human oversight measures (per the provider’s instructions for use)
- Measures taken if risks materialize, including internal governance and complaint mechanisms
The result must be communicated to the relevant market surveillance authority on first use, and the assessment must be updated whenever any of the above elements materially changes.[3]
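The six headings above lend themselves to a structured record that can be versioned and diffed as the deployment changes. A minimal sketch follows; the class and field names are our own illustration, not anything prescribed by Article 27.

```python
from dataclasses import dataclass

# Illustrative only: field names are ours, not the Act's.
# Each field maps to one of the six Article 27 headings.
@dataclass
class FriaRecord:
    deployer_process: str           # where the AI sits in the hiring workflow
    period_and_frequency: str       # e.g. "continuous, ~200 interviews/month"
    affected_categories: list[str]  # e.g. ["external applicants"]
    risks_of_harm: list[str]        # incl. disparate-impact risks per attribute
    oversight_measures: str         # per the provider's instructions for use
    mitigation_measures: str        # governance and complaint mechanisms

    def is_complete(self) -> bool:
        """All six headings must be addressed before first use."""
        return all([
            self.deployer_process,
            self.period_and_frequency,
            self.affected_categories,
            self.risks_of_harm,
            self.oversight_measures,
            self.mitigation_measures,
        ])
```

Keeping the FRIA and DPIA in one such record makes the "update on material change" duty a straightforward re-review of six fields rather than a fresh document.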
Transparency and candidate notice
Two articles compose the candidate-facing transparency stack.
Article 26(11) obliges deployers to inform natural persons that they are subject to the use of a high-risk AI system — before the interaction. In hiring, that means the candidate must know the AI is in the loop before they agree to the interview, in clear language and accessible format.[2]
Article 86 gives the candidate the right to a meaningful explanation of any decision that significantly affects them, on request. This is not a generic “the model said no” — the explanation must be specific enough that the person can challenge or contest the outcome.[7]
Operationally, that translates to two product changes most current hiring stacks have not yet shipped: a pre-interview AI disclosure page, and a post-decision “explain this outcome” surface for candidates who request it.
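For the post-decision surface, a useful forcing function is to define the explanation payload up front. The sketch below is a hypothetical shape of our own devising; Article 86 mandates the substance (the role the AI played and the main elements of the decision), not this structure.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical payload shape; Article 86 prescribes content, not structure.
@dataclass
class DecisionExplanation:
    candidate_id: str
    decision: str            # e.g. "not shortlisted"
    ai_role: str             # how the AI system contributed to the decision
    main_factors: list[str]  # elements specific enough to contest
    human_reviewer: str      # named oversight role that confirmed the outcome
    issued_on: date

def render_explanation(e: DecisionExplanation) -> str:
    """Plain-language text a candidate can use to challenge the outcome."""
    factors = "; ".join(e.main_factors)
    return (
        f"Decision: {e.decision}. AI involvement: {e.ai_role}. "
        f"Main factors: {factors}. Reviewed by: {e.human_reviewer}."
    )
```

Forcing every rejection through a renderer like this makes “the model said no” impossible to ship, because empty `main_factors` is visible at review time.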
Logging, audit, and human oversight
Article 26 layers four ongoing operational duties on the deployer:[2][6]
- Use the system per the instructions for use (Art. 26(1)). The provider’s documentation defines the permissible operating envelope; deviating from it transfers liability.
- Assign competent human oversight (Art. 26(2)). Named individuals, with authority and competence to override the system, must be designated. “The recruiter who happens to be on the call” is not enough; you must document the role.
- Monitor input data relevance (Art. 26(4)). Inputs must remain representative of the population the system was designed for. If you start interviewing at a new seniority level, in a new region, or in a new language, you have to re-validate.
- Retain logs for six months minimum (Art. 26(6)). Includes prompts, model outputs, scoring, and any operator interventions. Six months is a floor; fifteen months is a defensible ceiling for hiring and aligns with most candidate complaint windows.
Workers’ representatives must be informed before a high-risk system is put into use (Art. 26(7)). In countries with strong works councils — Germany, Austria, France — this is procedurally load-bearing: failure to consult the council before rollout is itself a violation under national labour law, separately from any AI Act fine.[5]
Penalties and enforcement
The Article 99 penalty structure has three bands, each capped at the fixed sum or the turnover percentage, whichever is higher. Breaches of prohibited practices reach EUR 35 million or 7% of worldwide annual turnover. Breaches of high-risk obligations — including Article 26 — reach EUR 15 million or 3%. Supplying incorrect information to authorities reaches EUR 7.5 million or 1%.[5]
Each Member State designates a market surveillance authority for AI; most have selected their existing data protection authority or a combined AI/data regulator. The authorities have full GDPR-style audit powers — on-site inspection, document compulsion, and the ability to mandate disclosure of training data and source code.
The 90-day prep checklist
If you are reading this in late April 2026, you have approximately 95 days. The checklist below maps to what enforcers will actually look for in the first audit window. Print it; assign owners; track in weeks.
- Inventory. List every AI system in the hiring pipeline with model name, vendor, role (provider/deployer), and Annex III classification.
- Vendor evidence pack. Pull each provider’s technical documentation, instructions for use, EU declaration of conformity, and CE-marking record. Without these, you cannot lawfully deploy after 2 August 2026.
- FRIA + DPIA. One unified document; six headings per Article 27 plus the GDPR Article 35 fields. File it with your market surveillance authority on first use.
- Human oversight role. Named persons, written job description, escalation procedure for system error or bias signals, authority to override the AI in real time.
- Candidate notice flow. Pre-interview disclosure page (clear language, EU language version where required) plus Article 86 explanation surface for post-decision requests.
- Logging policy. Six-month retention floor; secure export pipeline for regulator requests; access log on the access log (auditors check who reads candidate transcripts).
- Bias and accuracy monitoring. Quarterly disparate impact analysis along sex, age, disability, and ethnicity where consented and lawful. Document remediation actions.
- Works council notice. Where applicable under national law, formal consultation before go-live; minutes retained.
- Annex III register entry. Confirm with your provider that the system is registered in the EU database; deployers cannot use unregistered Annex III systems (Art. 26(8)).[2]
- Incident response. Article 73 serious-incident reporting line; fifteen-day reporting deadline for serious incidents; document the runbook.
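For the bias-monitoring item, the most common screening heuristic is the four-fifths rule: flag any group whose selection rate falls below 80% of the best-performing group’s rate. A minimal sketch (the four-fifths threshold comes from US EEOC practice and is a widely used heuristic, not a threshold mandated by the AI Act):

```python
# Quarterly disparate-impact screen using the four-fifths rule.
# The 0.8 threshold is an EEOC heuristic, not an AI Act requirement.
def selection_rate(selected: int, applicants: int) -> float:
    if applicants == 0:
        raise ValueError("no applicants in group")
    return selected / applicants

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps group label -> (selected, applicants)."""
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

def flagged(groups: dict[str, tuple[int, int]],
            threshold: float = 0.8) -> list[str]:
    """Groups whose selection rate is below threshold x the best rate."""
    return [g for g, ratio in impact_ratios(groups).items()
            if ratio < threshold]
```

A flag from this screen is a trigger for investigation and documented remediation, not by itself proof of unlawful discrimination; small groups in particular need statistical care before drawing conclusions.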
What this looks like with Intrvio
Intrvio is the provider; the employer is the deployer. As provider we ship the EU declaration of conformity, the technical documentation package, an instructions-for-use document, a FRIA template pre-populated with system specifics, and an audit-ready log export (transcripts, model outputs, operator interventions, six- or twelve-month retention configurable). On the deployer side, our dashboard surfaces the disparate-impact analysis quarterly and the per-decision explanation that satisfies Article 86 candidate requests. The work that remains yours: assigning the human oversight role, conducting the works council notice where applicable, and filing the FRIA with your national market surveillance authority.
Sources
- [1] European Commission AI Act Service Desk — Annex III: high-risk AI systems list, Section 4(a) Employment. https://ai-act-service-desk.ec.europa.eu/en/ai-act/annex-3
- [2] ArtificialIntelligenceAct.eu — Article 26: Obligations of deployers of high-risk AI systems. https://artificialintelligenceact.eu/article/26/
- [3] ArtificialIntelligenceAct.eu — Article 27: Fundamental Rights Impact Assessment for high-risk AI systems. https://artificialintelligenceact.eu/article/27/
- [4] European Commission AI Act Service Desk — EU AI Act implementation timeline. https://ai-act-service-desk.ec.europa.eu/en/ai-act/eu-ai-act-implementation-timeline
- [5] aiactblog.nl — Article 26 AI Act: 12 deployer obligations explained (Feb 2026). https://www.aiactblog.nl/en/posts/article-26-deployer-obligations-ai-act
- [6] NicFab Blog — Art. 26 AI Act: operational checklist for deployers of high-risk AI systems (Apr 2026). https://www.nicfab.eu/en/posts/art-26-deployer-checklist/
- [7] Legalithm — Article 26 EU AI Act: deployer obligations for high-risk AI. https://www.legalithm.com/en/ai-act-guide/article-26-deployer-obligations
