AI email assistants in HR comms: time saver or liability?

Been using an AI email assistant for candidate communication for about three months now. It’s mostly helped but there have been a few moments that made me nervous and I want to see if anyone else in HR or recruitment has run into similar things.

The time saving is real for standard messages. Interview confirmations, status updates, rejection emails where the message is basically the same every time. Draft appears, I check it, fix a word or two, send. That part genuinely works.

Where I’ve gotten uncomfortable is the edge cases. Had one where a candidate had disclosed a health situation in their application that wasn’t relevant to the role, and the AI-drafted reply referenced it in passing. Completely unintentional, clearly pulled from the previous email thread, but still. That’s exactly the kind of thing that creates a data handling problem or, worse, something that reads as discriminatory even when it isn’t.

I’ve also had the tool suggest language I’d describe as slightly too warm for a formal rejection. Not wrong exactly, but the kind of phrasing that might read as implied encouragement to reapply when we don’t actually want them to. You can imagine how that plays out.

My current approach is to never send anything from the tool unedited, and to keep it away from anything sensitive entirely. It drafts routine messages only; any email touching something legally adjacent gets written manually. That still saves time overall, but it does mean the tool is doing less than the marketing implied it would.
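That routing rule is simple enough to write down explicitly. A minimal sketch, assuming hypothetical message-type labels (the category names and function here are illustrative, not from any real tool):

```python
# Hypothetical triage: decide whether an AI draft is an acceptable
# starting point, or whether the message must be written manually.
ROUTINE_TYPES = {"interview_confirmation", "status_update", "standard_rejection"}

def triage(message_type: str) -> str:
    """Route routine volume work to AI drafting (with human review);
    everything else, including unknown types, defaults to manual."""
    if message_type in ROUTINE_TYPES:
        return "ai_draft_then_review"   # a human still reads before send
    return "manual_only"                # default to manual when unsure

print(triage("status_update"))      # -> ai_draft_then_review
print(triage("offer_negotiation"))  # -> manual_only
```

The point of the default branch is that anything not explicitly whitelisted as routine falls through to manual, which matches the "when in doubt, write it yourself" posture.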

Is anyone running these with clearer guardrails baked in? Would be useful to know how others are drawing the line.

At the executive level, anything touching employment decisions goes through legal review regardless of how it was drafted. The AI tool is useful for volume work. For anything with liability surface, the review step isn’t optional and the efficiency gains are smaller. That’s not a reason to avoid the tools; it’s a reason to be honest about what they actually save time on.

The health disclosure example is exactly the kind of edge case that surfaces fast in HR AI contexts. The tool has no concept of sensitive categories: it sees text and generates a contextually relevant reply, and the fact that some of that text involves a protected category doesn’t register as anything different. That’s a tool design limitation currently being managed by human review, which means the review step can never be optional in this context.
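One way to make that review step harder to skip is a crude keyword flag on drafts before they reach the send button. This is only an illustrative sketch (the pattern list and function are hypothetical, and keyword matching is no substitute for a human read, let alone legal review):

```python
import re

# Hypothetical guardrail: scan an AI draft for terms suggesting a
# protected or sensitive category has leaked in from the email thread.
# This term list is illustrative only; a real deployment would need a
# far more careful list vetted by legal.
SENSITIVE_PATTERNS = [
    r"\b(diagnos\w+|medical|health condition|surgery|disability)\b",
    r"\b(pregnan\w+|maternity|paternity)\b",
    r"\b(religio\w+|church|mosque|synagogue)\b",
]

def flag_sensitive(draft: str) -> list[str]:
    """Return the sensitive terms found in a draft, so a human can
    review (or rewrite) before anything is sent."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, draft, re.IGNORECASE))
    return hits

draft = "Thanks for letting us know about your surgery; regarding next steps..."
print(flag_sensitive(draft))  # -> ['surgery']
```

A flag like this can only escalate a draft to a human, never clear it: an empty result means nothing matched the list, not that the draft is safe.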

We’ve run into similar things in founder-level hiring communications. The tone calibration issue specifically. AI-drafted messages in high-stakes candidate relationships can come across as either too formal or too warm depending on the context, and calibrating that requires knowing more about the relationship than the tool can infer from an email thread.

The liability framing is the right one for HR contexts. I tell clients this: the tool is a drafting aid, not a compliance tool. If you wouldn’t sign off on a contract clause without a lawyer, don’t send a candidate email generated by AI without a human who understands the legal context reading it first. That sounds obvious, but the time-saving pressure pushes people toward skipping the review.

The ‘too warm on rejection’ problem is interesting. I’ve seen similar things in client communications. The tools are optimized for engagement and positive sentiment, which is completely the wrong calibration for formal HR language where clarity and legal defensibility matter more than warmth.