Blog · Product

Agent Mode: AI replies that book meetings, not just send messages

Most "AI personalization" in the LinkedIn outreach category writes the first message. That's it. Once a prospect replies, you're back to a human staring at an inbox.

That's the wrong half of the funnel to automate. The opener is the cheap part. The reply triage — the hours per day spent reading inbound, deciding whether each prospect is a real opportunity, drafting a context-aware response, and getting a calendar slot on the table — is where the real time goes. And it's where deals get won and lost.

Agent Mode is the layer that handles the reply funnel. Here's how it actually works.

What it does, end to end

When a prospect replies to a message — through any sender on any campaign — Agent Mode runs four steps before the reply ever lands in your unified inbox:

  1. Classify the reply. The model reads the message in context (the sender, the previous outbound, the prospect's profile, the campaign's intent) and tags it: interested, objection, not now, wrong person, negative, auto-reply, or referral.
  2. Score the warmth. A confidence number from 0 to 1 — how sure the model is that this is a genuine, qualified, in-market response.
  3. Draft a response. Tailored to the classification. An interested reply gets a short message that includes a calendar suggestion. An objection reply gets a response that addresses the specific objection. A wrong-person reply gets a polite ask for the right contact.
  4. Propose a meeting slot. Where appropriate, Agent Mode pulls from the operator's calendar (Microsoft Teams, Google Calendar, or both) and includes two or three concrete slot options in the draft.
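The four steps above can be sketched as a single function. Everything here is illustrative — the rule table stands in for the real classifier, and the names (`AgentOutput`, `handle_reply`, `RULES`) are hypothetical, not the product's API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentOutput:
    classification: str  # step 1: reply category
    warmth: float        # step 2: confidence score in [0, 1]
    draft: str           # step 3: context-aware response draft
    proposed_slots: list = field(default_factory=list)  # step 4

# Toy keyword rules standing in for the real model call.
RULES = [
    ("interested",   ("interested", "tell me more", "sounds good"), 0.9),
    ("objection",    ("too expensive", "already use"),              0.8),
    ("wrong_person", ("not my area", "wrong person"),               0.85),
    ("not_now",      ("not right now", "next quarter"),             0.75),
]

DRAFTS = {
    "interested":   "Great - would Tuesday 11am or Wednesday 3pm work for a quick call?",
    "objection":    "Fair point - here's how similar teams handle that concern...",
    "wrong_person": "No problem - who would be the right person to speak with?",
    "not_now":      "Understood - mind if I check back next quarter?",
}

def handle_reply(text: str, free_slots: list) -> AgentOutput:
    """Run the four steps on one inbound reply (illustrative stubs)."""
    lowered = text.lower()
    classification, warmth = "unclear", 0.3  # low warmth -> human review
    for label, keywords, score in RULES:
        if any(k in lowered for k in keywords):
            classification, warmth = label, score
            break
    draft = DRAFTS.get(classification, "")  # empty draft: operator writes it
    slots = free_slots[:2] if classification == "interested" else []
    return AgentOutput(classification, warmth, draft, slots)
```

Only interested replies carry slot suggestions; everything else either gets a template-shaped draft or falls through to a human with a low warmth score.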

You see all four outputs in the inbox feed, ranked by warmth. You can approve a draft as-is, edit and send, or reject and write your own. Or you can flip Agent Mode into autonomous mode, where it sends the draft for high-confidence interested replies without asking, and only flags the ambiguous ones for human review.

How it decides when to act on its own

The autonomy decision is the part that took us the longest to get right. Agent Mode acts autonomously only when three conditions are all true:

  • The classification confidence is above 0.85
  • The classification is one of the "safe to auto-respond" types — interested, auto-reply, not now — and not objection or negative
  • The proposed action is reversible — a sent message can be walked back with a follow-up; a booked meeting lands on the prospect's calendar and cannot

If any of those fail, the draft sits in the queue waiting for a human. We default new accounts to "approve every reply" mode and only suggest enabling autonomous mode after the operator has manually approved 50+ Agent Mode drafts and built a feel for the model's judgement on their specific ICP.
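The gate itself is a simple conjunction of the three conditions. A minimal sketch, with hypothetical names (the 0.85 floor and the safe-category list are from the post; the function is ours):

```python
# Categories the post names as safe to auto-respond to.
SAFE_TO_AUTO = {"interested", "auto_reply", "not_now"}
CONFIDENCE_FLOOR = 0.85  # confidence must be strictly above this

def should_send_autonomously(classification: str, confidence: float,
                             action_reversible: bool) -> bool:
    """All three conditions must hold; otherwise the draft queues for a human."""
    return (
        confidence > CONFIDENCE_FLOOR
        and classification in SAFE_TO_AUTO
        and action_reversible
    )
```

Note that objections and negative replies never pass, no matter how confident the classifier is — the category check runs independently of the score.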

The Microsoft Teams integration

Most calendar integrations stop at "we generated a Calendly link, here you go." That's not enough. The real friction in scheduling is that the prospect doesn't want to click a link, see a wall of slots, and pick one. They want to be told "how about Tuesday at 11am or Wednesday at 3pm?" and reply yes.

Agent Mode reads the operator's actual calendar availability (we ship a Microsoft Teams integration that pulls from Microsoft 365, plus Google Calendar via OAuth) and includes two or three concrete slot suggestions in the reply itself. If the prospect picks one, Agent Mode books the meeting and sends the Teams or Google Meet invite. If the prospect counter-proposes a different time, Agent Mode checks availability and either confirms or counter-counter-proposes.
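The slot-picking step reduces to a classic free-interval sweep: walk the workday, skip busy blocks, and stop after two or three openings. A runnable sketch under the assumption that the calendar integration hands us sorted (start, end) busy pairs — the real Microsoft 365 and Google Calendar plumbing is elided:

```python
from datetime import datetime, timedelta

def propose_slots(busy, day_start, day_end,
                  duration=timedelta(minutes=30), max_slots=3):
    """Return up to max_slots free meeting start times in the workday.

    `busy` is a list of (start, end) datetime pairs pulled from the
    operator's calendar; here it is just input data.
    """
    slots, cursor = [], day_start
    for b_start, b_end in sorted(busy):
        # Collect openings that finish before the next busy block starts.
        while cursor + duration <= b_start and len(slots) < max_slots:
            slots.append(cursor)
            cursor += duration
        cursor = max(cursor, b_end)  # jump past the busy block
    # Remaining free time after the last busy block.
    while cursor + duration <= day_end and len(slots) < max_slots:
        slots.append(cursor)
        cursor += duration
    return slots
```

The same sweep answers a counter-proposal: check whether the prospect's suggested time falls outside every busy pair, and if not, run the sweep again to offer alternatives.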

This sounds small. It's not. The booking-rate lift from "here are two times that work for me" vs "here's my Calendly" is roughly 2x in our internal A/B data. People say yes to a specific question. They don't always click links.

What it's not good at

Honest limitations. Agent Mode handles common reply patterns well — interest, basic objections (price, timing, fit), referrals, wrong-person redirects. It does not handle:

  • Novel, multi-clause objections that combine several concerns ("we already use X, and the budget is committed for Q3, but circle back in October if your roadmap covers Y")
  • Replies in languages the model wasn't tuned on (we currently work well in English, French, German, Spanish, and Portuguese)
  • Highly technical replies that require deep product knowledge specific to your offering
  • Anything that smells like a relationship — long-running conversations where the prospect references prior context Agent Mode doesn't have

For all of those, Agent Mode flags the reply for human review and writes a one-line note explaining why. The model knows what it doesn't know. That's the most important property.
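The fallback path can be pictured as a small router: anything outside the handled categories, or below a review threshold, goes to a human with a reason attached. The category set is from the post; the 0.5 threshold and every name here are hypothetical illustration, not production code:

```python
# Reply categories Agent Mode handles well, per the classifications above.
HANDLED = {"interested", "objection", "not_now", "wrong_person",
           "referral", "auto_reply", "negative"}
REVIEW_THRESHOLD = 0.5  # assumed cutoff for "the model knows it doesn't know"

def route_reply(classification: str, confidence: float) -> tuple:
    """Return (destination, one-line note) for an inbound reply."""
    if classification not in HANDLED:
        return ("human_review", f"unfamiliar reply type: {classification}")
    if confidence < REVIEW_THRESHOLD:
        return ("human_review",
                f"low confidence ({confidence:.2f}) on '{classification}'")
    return ("agent_queue", "")
```

The note string is the "one-line explanation" the operator sees next to the flagged reply.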

The numbers

From our internal use over the last six months, across roughly 14,000 inbound replies handled by Agent Mode:

  • ~40% of replies get fully handled end-to-end — drafted, sent, and either booked into a meeting or qualified out — without a human touching the conversation
  • ~35% of replies get a draft generated that the operator approves with minor edits (median time-to-approve: 38 seconds)
  • ~25% of replies get flagged for human review — the operator writes the response from scratch
  • Meeting-booking rate on interested classifications, when Agent Mode proposes specific slots: 61% (vs 32% for the same cohort using a generic Calendly link)

The 40% number is the one that matters most. That's roughly 5,600 replies that didn't need a human at all — at our team size, that's the difference between needing two SDRs to manage the inbox and managing it ourselves between other work.

Where this goes

The next layer we're building is conversation memory across senders. Right now, Agent Mode treats each conversation as standalone. The improvement we want: when a prospect connects with sender A, doesn't reply, then 60 days later their colleague replies to sender B, Agent Mode should recognise that both sit at the same company, pull the prior context, and adjust the response.

That's a different kind of intelligence — relational, not just conversational. It's the next year of work.