What does the EU AI Act say about hiring AI?
EU AI Act Annex III (Regulation 2024/1689) lists eight high-risk use case areas. Point 4 — Employment, workers' management and access to self-employment — explicitly defines point (a) as 'AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates'.[1] This covers CV filtering, automated interviews, candidate scoring, and voice/video analytics during interviews.
Recital 57 clarifies the rationale: these systems can have an appreciable impact on a person's future career prospects and livelihood and may perpetuate historical patterns of discrimination, particularly against women, certain age groups, persons with disabilities, or persons of certain racial/ethnic origin or sexual orientation.[5] That is why the Act imposes strict obligations on both providers (companies that build the system) and deployers (employers using it).
Compliance timeline
- 1 August 2024: AI Act published in the Official Journal.[7]
- 2 February 2025: Prohibited practices (e.g. emotion recognition in the workplace) and AI literacy obligations began to apply.[7]
- 2 August 2025: GPAI (general-purpose AI) model rules and most penalties applied.[2]
- 2 August 2026: Full obligations for standalone Annex III high-risk AI systems apply — including hiring AI.[2][7]
- 2 August 2027: High-risk AI obligations for Annex I products (regulated product safety components) apply.[2]
Note: a 'Digital Omnibus' proposal tabled by the EU Commission in late 2025 proposes pushing the Annex III date to 2 December 2027. Until that is enacted, 2 August 2026 remains the binding date.[7]
Employer obligations checklist
For Annex III systems, the Act layers the Article 26 deployer obligations on top of the Article 86 individual right to an explanation. As an employer (deployer), these are the actions you must take and document:
- System inventory. List every hiring AI tool with deployer/provider role mapping.
- Risk assessment. Run a rights-impact assessment (akin to a GDPR DPIA) for each high-risk system and document mitigations.
- Human oversight. Assign human reviewers with the authority and training to override AI-generated rankings (Article 26(2)).[3]
- Transparency to candidates. Show candidates a clear notice that AI is being used, with a path to request human review (Article 26(11)).[3]
- Worker notification. Inform workers' representatives BEFORE deploying a high-risk AI system in the workplace (Article 26(7)).[3]
- Bias monitoring. Continuously monitor the system's operation for selection-rate disparities and performance drift.[6]
- Technical documentation. Keep the provider's Annex IV technical file with version history.
- Log retention. Keep automatically-generated logs for at least six months (Article 26(6)).[3]
- Incident reporting. Notify the provider, distributor, and national competent authority promptly of fundamental-rights risks or serious incidents.[3]
- AI literacy training. Ensure staff operating the system have adequate AI literacy (applicable since February 2025).[6]
- Candidate explanation channel. Provide a path for rejected candidates to request the main elements of the decision (Article 86).[4]
- Vendor evidence pack. Collect from the provider: risk classification rationale, performance/bias testing methodology, logging and traceability guarantees, and change-notification obligations.
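The bias-monitoring item above can be made concrete. The AI Act does not prescribe a specific fairness metric, so the sketch below uses one common operationalisation borrowed from US adverse-impact analysis: the "four-fifths" rule, which flags any group whose selection rate falls below 80% of the highest group's rate. The function name, input shape, and 0.8 threshold are all illustrative assumptions, not regulatory requirements.

```python
# Minimal sketch of a selection-rate disparity check.
# Assumption: the 0.8 threshold follows the US "four-fifths" rule;
# the AI Act itself mandates monitoring, not this specific metric.
def disparity_alerts(outcomes, threshold=0.8):
    """outcomes: dict mapping group -> (selected_count, total_applicants)."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}
    best = max(rates.values())
    # Impact ratio = group's rate / highest group's rate; alert if below threshold.
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

alerts = disparity_alerts({
    "group_a": (30, 100),  # 30% selection rate (highest)
    "group_b": (18, 100),  # 18% -> impact ratio 0.6, below 0.8
})
print(alerts)  # {'group_b': 0.6}
```

A production check would also account for small sample sizes (e.g. statistical significance tests) before alerting, since raw ratios are noisy for low applicant counts.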
How Intrvio helps
Intrvio is built so that deployers can collect this evidence in one place. The Scale plan ships with:
- Audit log. Every interview session, every score, and every human review action is signed and immutably retained.
- Transparency notice templates. Candidate-side 'AI is being used' notice with an explanation-request link and human review path.
- Bias monitoring dashboard. Tracks selection-rate disparities from your ATS callbacks and alerts when thresholds are exceeded.
- Region tagging. Every interview record carries an EU or US region tag, which proves data residency during audits.
- Decision-record export. One-click export of the full evidence chain for a single candidate.
- Annex IV technical file contribution. Our provider file is published in the Trust Center; you can append it to your deployer file.
Compliance is shared
Let us be plain: Intrvio is a tool. The employer is the controller of the candidates' data. This page is not legal advice — work with your in-house legal and EU compliance team. Intrvio's role is to make collecting deployer evidence a by-product of daily operations rather than an Excel chase.
For more, see our Trust Center, DPA, and sub-processor list.
References
- [1] European Commission AI Act Service Desk — Annex III: high-risk AI systems list, Section 4(a) Employment.
- [2] European Commission AI Act Service Desk — EU AI Act implementation timeline.
- [3] ArtificialIntelligenceAct.eu — Article 26: Obligations of deployers of high-risk AI systems.
- [4] ArtificialIntelligenceAct.eu — What the AI Act means for staffing businesses (Article 86 explanation).
- [5] AI Act Recital 57 — High-risk AI systems in employment and human resources.
- [6] Hunton — The Impact of the EU AI Act on Human Resources Activities.
- [7] Lewis Silkin — Charting the EU AI Act timeline (Apr 2026).
