The vendor mess
A buyer asking “what AI tool should we use for interviews?” will typically see Metaview, HireVue, GoodTime, BarRaiser, Pillar, Honeit, Spark Hire, Willo, Humanly, Calendly, and Intrvio in the same search session. These products are not substitutes for one another. They sit in four distinct categories that compete with each other only at the boundaries, and combining the wrong ones produces duplicate cost without filling the actual gap.
Category 1: Scheduling automation
Examples: Calendly, GoodTime, ModernLoop, Paradox (for retail). What they do: remove the calendar-coordination friction of getting a candidate and an interviewer into the same slot at the same time. Some go further into “intelligent scheduling” — interviewer-load balancing, panel composition, time-zone-aware routing.[4]
What they do not do: anything that touches the content of the interview itself. They are pure infrastructure. The buyer's question is operational efficiency, not assessment quality.
Category 2: Interview intelligence / notetaker
Examples: Metaview, Pillar (now Lever AI Interview Companion), BarRaiser, Honeit, Bluedot, Fireflies (general-purpose).
What they do: join a live human interview as a passive participant, transcribe it, and produce structured notes, summaries, and highlights. Some go into interviewer coaching (real-time intervention if a question is leading or illegal), interviewer performance scoring, and team calibration loops where panelists watch the same clips.[1][3] Most explicitly disclaim candidate scoring — Lever's product page says “AI Interview Companion assists with transcription and summarization. It does not provide candidate scoring or hiring recommendations.”[2]
What they do not do: replace the human interviewer. The human still runs the interview. The AI is back-office documentation and coaching.
Category 3: Async video assessment
Examples: HireVue, Spark Hire, Willo, VidCruiter, myInterview.
What they do: present a candidate with fixed prompts, record their video answers asynchronously, and either route the recordings to humans for review or apply AI scoring. This is the oldest mature category in the AI-for-interviews space.
What they do not do: have a real-time conversation with the candidate. No follow-up probes, no clarification, no adaptation. The candidate gets a single shot per question (with re-record rules varying by vendor). The category took a public hit in January 2021 when HireVue announced it would stop using visual analysis, in response to a 2019 FTC complaint filed by EPIC and its own internal research showing visual cues added negligible predictive power.[5]
Category 4: Agentic interviewer
Examples: Intrvio (GAIA), parts of Mercor, parts of HeyMilo, some newer entrants.
What they do: conduct the interview itself, in real time, with real-time adaptation. The AI asks questions, listens, decides which probe to fire next based on the candidate's answer, and scores against an anchored rubric. From the candidate's felt experience, this is closer to a phone screen than to an async recording — but no human is on the other end.
What they do not do: take notes during a separate human interview (that is Category 2), schedule the interview (Category 1), or rely on async recording (Category 3). The category is newer, smaller, and the most regulatorily exposed under the EU AI Act and NYC Local Law 144.
The 2×2 matrix
| | Structured | Unstructured |
|---|---|---|
| Human-led | Cat. 2: Interview intelligence. Metaview, Pillar, BarRaiser. Notetaker + coaching that enforces structure on a human conversation. The interview itself is human; the AI keeps it disciplined. | Status quo unstructured panel. No AI in the loop, or only Calendly-style scheduling (Cat. 1) on top. The classic failure mode behind unstructured interviews' ~.38 validity coefficient. |
| AI-led | Cat. 4: Agentic interviewer. Intrvio, parts of Mercor. Real-time AI conversation scored against an anchored rubric. The structure is enforced by the AI itself. | Cat. 3: Async video assessment. HireVue, Spark Hire. Fixed prompts; AI scoring of recordings. Less structure than Cat. 4 because no real-time probing. |
A clean read of the matrix: structure and AI-leadership are orthogonal axes. You can be AI-led but unstructured (one-way video with vibes-based scoring). You can be human-led but highly structured (a panel with a strict rubric and a notetaker). The two-by-two clarifies why “AI for interviews” covers such different products.
Which category to buy when
The decision is largely about where your bottleneck is.
- Bottleneck: scheduling. Recruiter time is dominated by calendar back-and-forth. Buy Category 1 (GoodTime, Calendly). Do not buy a notetaker yet.
- Bottleneck: interviews already happen, documentation is uneven. Hiring managers leave with three lines of scratch notes. Buy Category 2 (Metaview, Pillar). Pair with a structured interview rubric — the AI can only document what actually got asked.
- Bottleneck: too many candidates for live screens. 5,000 applicants per req, no recruiter capacity. Buy Category 3 (HireVue) for budget / review-flexibility, or Category 4 (Intrvio) for a screening conversation that is structured, drop-off-friendly, and audit-ready under EU AI Act / LL144.
- Bottleneck: structure, not volume. Hiring managers go off-script regardless of policy. Buy Category 4 (Intrvio) and use it for the technical / behavioural screen; keep humans for the final loop.
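The bottleneck-to-category mapping above can be sketched as a simple lookup. This is an illustrative decision aid only — the bottleneck labels and category strings are this sketch's own, not any vendor's API:

```python
# Illustrative sketch of the bottleneck -> category decision above.
# Labels are this document's shorthand, not a vendor API.

CATEGORY_BY_BOTTLENECK = {
    "scheduling": "Cat. 1: scheduling automation (e.g. GoodTime, Calendly)",
    "documentation": "Cat. 2: interview intelligence (e.g. Metaview, Pillar)",
    "volume": "Cat. 3 async video or Cat. 4 agentic interviewer",
    "structure": "Cat. 4: agentic interviewer for the screen; humans for the final loop",
}

def recommend(bottleneck: str) -> str:
    """Return the category that addresses a named hiring bottleneck."""
    try:
        return CATEGORY_BY_BOTTLENECK[bottleneck]
    except KeyError:
        raise ValueError(f"Unknown bottleneck: {bottleneck!r}")

print(recommend("documentation"))
```

The point of the lookup shape: the decision keys on the bottleneck, not on feature checklists, which is why cross-category feature comparisons mislead.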
The boundary problems
Real-world products do not stay neatly in their categories. The common boundary blurs are:
- Notetaker → light scoring. Some Cat. 2 vendors quietly add “summary recommendations” that operate close to scoring. Once a recommendation is used to substantially assist a hiring decision, the product crosses into AEDT territory under NYC Local Law 144 and high-risk AI under EU AI Act Annex III. Vendors usually stay shy of the line, but buyers should check.
- Async video → real-time. Some Cat. 3 vendors have added live elements to address the drop-off problem. Where a probe-and-respond loop appears, the category drift is toward Cat. 4.
- Agentic interviewer → scheduling. Cat. 4 vendors get pulled into Cat. 1 because customers want a single URL to send candidates. The scheduling layer is usually a thin integration with Cat. 1 tools rather than a real product substitute.
- Interview intelligence → interviewer training. BarRaiser leans here with bias detection, real-time guidance, and interviewer scorecards.[3] This is genuinely additive to documentation but also drifts toward an HR-tech adjacency (talent development) rather than a hiring-stack adjacency.
Pricing maps to category, not features
A common buyer mistake: comparing per-seat or per-interview prices across categories as if they were substitutable. They are not.
- Cat. 1 scheduling tends to price per recruiter seat ($30–$80/month). Cost scales with hiring team size.
- Cat. 2 interview intelligence tends to price per interviewer seat ($50–$120/month) — sometimes per-meeting on the low end. Cost scales with interview panel headcount.
- Cat. 3 async video tends to price per submission or per-req. The pricing reflects volume, not headcount.
- Cat. 4 agentic interviewer tends to price per completed interview, occasionally with a platform base fee. Cost scales with funnel volume — and usually undercuts Cat. 3 because the marginal cost of an AI conversation is lower than the cost of human review of async recordings.
The sensible budget conversation is: which category solves your actual bottleneck, then price-compare within that category. A $5,000/year scheduler does not substitute for a $50,000/year interview intelligence platform; they fill different gaps.
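The different scaling laws can be made concrete with a toy cost model. Every number below is an illustrative assumption drawn from the ranges quoted above, not a real vendor quote:

```python
# Toy cost model contrasting the two scaling laws described above.
# All prices are illustrative assumptions, not vendor quotes.

def annual_cost_cat2(interviewer_seats: int, per_seat_monthly: float = 85.0) -> float:
    """Cat. 2 interview intelligence: cost scales with interviewer headcount."""
    return interviewer_seats * per_seat_monthly * 12

def annual_cost_cat4(completed_interviews: int, per_interview: float = 8.0,
                     platform_base: float = 0.0) -> float:
    """Cat. 4 agentic interviewer: cost scales with funnel volume."""
    return platform_base + completed_interviews * per_interview

# Same company, two different cost drivers:
print(annual_cost_cat2(interviewer_seats=40))       # 40800.0 — headcount-driven
print(annual_cost_cat4(completed_interviews=5000))  # 40000.0 — volume-driven
```

Comparing the two headline numbers as if they bought the same thing is exactly the mistake: one buys documentation of human interviews, the other buys screening capacity.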
Compliance posture by category
The four categories carry different regulatory burdens under EU AI Act Annex III and NYC Local Law 144.
- Cat. 1 scheduling — generally not in scope. The tool does not score, classify, or rank candidates.
- Cat. 2 interview intelligence — usually not in scope, provided the product genuinely does not score candidates. Once a vendor adds “recommendations,” the question tightens; deployers should review the specific product's outputs against the AEDT definition.
- Cat. 3 async video assessment — clearly in scope for both regulations when AI scoring is enabled. The legacy visual-analysis controversy[5] put this category under permanent regulatory scrutiny.
- Cat. 4 agentic interviewer — clearly in scope. The platform must ship deployer-compliance materials (FRIA template, instructions for use, audit-grade log export).
A buyer who only wants Cat. 2 documentation can usually adopt it without triggering high-risk obligations. A buyer adopting Cat. 3 or Cat. 4 must allocate compliance budget alongside the platform spend; it is not optional.
The category map, in one paragraph
You are buying scheduling automation if your problem is calendars, interview intelligence if your problem is what gets remembered, async video assessment if your problem is volume and you accept higher drop-off, or an agentic interviewer if you want the screening conversation itself to be structured. These are four different products. They are not direct substitutes. The next generation of buying RFPs will get this right; today, most still do not.
Sources
- [1] Metaview Help Center — Complete Metaview overview for talent acquisition (interview intelligence / AI notetaker positioning). https://support.metaview.ai/guides/overview-for-ta
- [2] Lever / Pillar — AI Interview Companion product page (transcription, summarization, structured-process aid; explicitly does not score candidates). https://www.lever.co/pillar/
- [3] BarRaiser — BarRaiser vs Metaview comparison (positioning interview intelligence as note-taking + interviewer coaching + bias detection). https://www.barraiser.com/barraiser-vs-metaview-a-better-alternative
- [4] eesel AI blog — 7 best Metaview alternatives for recruiting (2025); category overview of interview intelligence tools. https://www.eesel.ai/blog/metaview-alternatives
- [5] EPIC — HireVue facing FTC complaint, halts use of facial recognition (the canonical async-video-assessment regulatory event). https://epic.org/hirevue-facing-ftc-complaint-from-epic-halts-use-of-facial-recognition/
