The Otter.ai lawsuit is a defining moment for HR compliance. Here’s what TA leaders need to know.
A consolidated federal class action now before Judge Eumi K. Lee in the U.S. District Court for the Northern District of California has put AI note-taking tools squarely in the crosshairs of employment law. The case, In re Otter.AI Privacy Litigation, alleges that Otter’s “Otter Notetaker” and “OtterPilot” tools recorded private conversations, including candidate calls, without the consent of all participants, and then used those recordings to train its AI models without adequate disclosure.
The case has not yet produced binding rulings. But employment attorneys are already calling it a “hot issue” for HR teams, and the liability implications extend well beyond Otter itself.
Here is what TA leaders, HR directors, and recruiting teams need to understand right now.
The employer owns the risk, not just the vendor
The most important legal reality of the Otter lawsuit: even if your organization did not build the tool, you are responsible for how it is used inside your hiring process.
As employment attorneys at Littler Mendelson noted in a February 2026 analysis of the litigation, the practical recommendation for HR teams is to select, configure, and control a vetted tool rather than cede that ground to whatever employees happen to download. Banning general AI note-takers outright is, in practice, unenforceable. One in five professionals reported frequently using AI to draft meeting notes in a 2025 survey, and recruiters are no different. If there is no approved alternative, they will find their own.
The takeaway: you need a platform, not a policy.
Candidate interviews are not just another meeting
General AI note-taking tools were designed for product syncs, all-hands calls, and sales team updates. A candidate interview is categorically different.
Recruiting conversations involve compensation expectations, career frustrations, personal circumstances, location constraints, and employment history. That data is sensitive. It falls under GDPR, CCPA, and state-level biometric and wiretap statutes. And in all-party-consent states such as California and Illinois, along with roughly a dozen others, recording a conversation without the explicit consent of every participant is a legal violation.
The Otter lawsuit alleges that by default, Otter seeks consent only from the meeting host, not from all participants. In a recruiting context, that means candidates are being recorded without their knowledge. That is the liability.
Honeit was built from the ground up with this reality in mind. Our platform includes a structured dual-consent workflow for every interview: candidates receive a clear notification before the call begins, and their consent is logged. It is not a buried settings checkbox. It is a built-in compliance step that happens every time, for every candidate, across every recruiter on your team.
For a deeper look at how consent flows work in recruiting, see our article: Candidate Consent and Recruiting Compliance.
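To make the dual-consent idea concrete, here is a minimal sketch of what logging and enforcing all-party consent can look like. This is illustrative only: the names `ConsentRecord` and `all_parties_consented` are hypothetical and do not describe Honeit's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One participant's logged consent for a recorded interview.
    Illustrative sketch, not Honeit's actual schema."""
    participant: str
    role: str  # e.g. "recruiter" or "candidate"
    consented: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def all_parties_consented(records: list[ConsentRecord]) -> bool:
    """All-party consent: recording may proceed only if every
    participant, including the candidate, explicitly consented."""
    return bool(records) and all(r.consented for r in records)

records = [
    ConsentRecord("recruiter@agency.com", "recruiter", consented=True),
    ConsentRecord("candidate@example.com", "candidate", consented=False),
]
print(all_parties_consented(records))  # False: host-only consent is not enough
```

The design point is that consent is a logged, timestamped record per participant, checked before recording starts, rather than a one-time settings toggle on the host's side.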
The shadow AI problem inside your organization
One of the most significant risks the Otter lawsuit surfaces is not about Otter specifically. It is about the broader pattern of shadow AI adoption across recruiting teams.
Free tools grow through calendar integrations and freemium downloads. A recruiter connects a general note-taking bot to their Google Calendar on a Monday morning, and by the end of the week it has joined every interview, every hiring manager debrief, and every sensitive internal conversation on their schedule. No IT approval. No data retention review. No consent workflow. No audit trail.
Multiply that across a team of 20 recruiters, a staffing agency, or an RPO, and you have dozens of unauthorized bots capturing candidate data and confidential client information with zero governance. As the consolidated case filings detail, Otter’s default configuration does not notify non-users that they are being recorded, and the option to do so is available only on its most expensive enterprise plan.
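An IT or TA ops team can get a first read on this problem by scanning calendar events for known note-taker bot attendees. The sketch below assumes events exported as simple dictionaries; the domain list is a small illustrative sample, not an authoritative registry, and the function name is hypothetical.

```python
# Illustrative sample of AI note-taker domains; not exhaustive.
KNOWN_NOTETAKER_DOMAINS = {"otter.ai", "fireflies.ai", "read.ai"}

def flag_unapproved_bots(events, approved_domains=frozenset()):
    """Return (event_title, bot_email) pairs for attendees whose email
    domain matches a known AI note-taker that is not on the approved list."""
    flagged = []
    for event in events:
        for attendee in event.get("attendees", []):
            domain = attendee.rsplit("@", 1)[-1].lower()
            if domain in KNOWN_NOTETAKER_DOMAINS and domain not in approved_domains:
                flagged.append((event["title"], attendee))
    return flagged

events = [
    {"title": "Candidate screen", "attendees": ["rec@agency.com", "notes@otter.ai"]},
    {"title": "Team sync", "attendees": ["rec@agency.com"]},
]
print(flag_unapproved_bots(events))
# [('Candidate screen', 'notes@otter.ai')]
```

Even a rough audit like this surfaces the gap between the tools IT has approved and the bots actually sitting in candidate calls.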
We wrote about this last summer, and the Otter lawsuit has validated every concern raised there: Attack of the Interview Note-Taking Bots.
The Honeit platform gives IT, HR, and legal teams a single approved solution. Our AI note-taker extends only to selected screening calls and interviews. It never auto-joins every meeting on a recruiter’s calendar. That selective, controlled deployment is not a limitation. It is the point.
Structured evaluation protects you against bias claims
The Otter lawsuit is not the only AI-in-hiring case making headlines. In Harper v. Sirius XM, filed in August 2025, plaintiffs allege an AI-powered applicant tracking system unlawfully discriminated against job seekers by disproportionately downgrading candidates based on proxies for race. The case invokes Title VII of the Civil Rights Act.
Unstructured, untraceable AI evaluation compounds the consent risk with a second exposure: inconsistency. If different candidates are asked different questions, scored on different criteria, and evaluated through different lenses, you have no defensible record when a hiring decision is challenged.
Honeit’s structured call guides and skills-based scorecards ensure every candidate for a given role answers the same questions, scored against the same criteria. That consistency is not just good recruiting practice. It is your legal defense.
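In code terms, the consistency requirement amounts to a fixed question set per role and a scoring function that refuses to produce a partial record. The sketch below is a simplified illustration under those assumptions; `ROLE_QUESTIONS` and `score_candidate` are hypothetical names, not Honeit's schema.

```python
# Fixed question set per role: every candidate gets the same questions.
ROLE_QUESTIONS = {
    "backend_engineer": [
        "Describe a production incident you debugged.",
        "How do you approach API versioning?",
        "What is your experience with SQL performance tuning?",
    ],
}

def score_candidate(role, scores_by_question):
    """Score a candidate against the role's fixed criteria, in a fixed
    order. A missing answer raises instead of being silently skipped,
    so no candidate ends up with an incomplete evaluation record."""
    questions = ROLE_QUESTIONS[role]
    missing = [q for q in questions if q not in scores_by_question]
    if missing:
        raise ValueError(f"Unanswered questions: {missing}")
    # The auditable artifact: (question, score) pairs, identical in
    # shape for every candidate considered for this role.
    return [(q, scores_by_question[q]) for q in questions]
```

The point is structural: because every scorecard for a role has the same shape, candidates can be compared side by side, and the record itself demonstrates consistent treatment.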
Candidate fraud: why recorded interviews matter
There is a growing, underreported risk in remote hiring that the compliance conversation often overlooks: candidate fraud and impersonation.
AI-generated resumes, fabricated LinkedIn profiles, and even impersonation on video calls are increasingly common, particularly for remote technical roles. A candidate who passed an async video screen may not be the same person who shows up on day one.
A recruiter-led, recorded interview with attached soundbites is your strongest verification layer. It is proof of a real conversation with a real human. The “Interviewed by [Recruiter Name]” attribution on a Honeit candidate submission serves double duty: it establishes recruiter credibility with the hiring team and confirms candidate authenticity for your records.
You cannot fake a live, structured conversation the same way you can fabricate a resume.
What should TA leaders do now?
The Littler Mendelson guidance is direct: get ahead of this. Select, configure, and control a vetted tool rather than allow unmanaged adoption to make the decision for you.
For recruiting and HR teams specifically, that means auditing what your recruiters are actually using. If the answer is “whatever they downloaded,” you have a shadow AI problem worth addressing before the next candidate complaint.
It means establishing dual consent as a non-negotiable standard across every candidate interview, every screening call, and every recruiter on your team. Consent must be documented, not assumed.
It means requiring structured evaluation. Call guides and skills-based scorecards are not bureaucratic overhead. They are the evidence you will need if a hiring decision is ever challenged.
And it means using a platform built for recruiting, not repurposed for it. General tools record meetings. Honeit is purpose-built for the recruiter-to-candidate conversation, with compliance, accountability, and candidate delivery built into the workflow from the start.
The Otter.ai lawsuit will continue to develop. But the compliance standard it is clarifying already existed. Recruiters have always had a higher obligation to candidates than to meeting attendees. The tools they use should reflect that.
See how Honeit handles consent and compliance, or book a live demo to see the platform in action.

