Agent Mode & the AI surface

One agent. Eight workflows. None of them named after a model.

Most outreach tools use AI to write the first message and call themselves "AI-powered." LinkedReach uses AI for eight different jobs — qualifying, personalising, drafting, keeping phrasing fresh, classifying, replying, refusing to reply, and compacting. Here is exactly what each one does.

AI drafted — awaiting approval

Thread with Priya Shah
Sure, send some times Tuesday.
Priya Shah · 2 min ago
Tuesday 2pm or 4pm GMT? — Sarah
AI draft · awaiting approval
Approve & send · Edit · Reject
01 · Pre-spend qualification

The brain decides who is worth a connection request.

LinkedIn caps you at 25 connection requests per sender per day. Those slots are the most expensive resource in the system, and spending one on someone who is obviously not in your ICP — the wrong title, the wrong company size, the wrong region — is a waste you cannot get back.

Before any campaign action fires, the brain reads the lead's profile against your ICP brief. It returns a fit score, the reasoning behind the score, and a recommended action: send, skip, or flag for review. Bad-fit leads never reach the queue. Edge cases get human eyes.

  • Scores every lead against the campaign's stored ICP brief
  • Returns a 0–100 fit score plus the why behind it
  • Routes obvious bad-fit leads to archive, not the queue
  • Surfaces edge cases for a human decision instead of guessing
Qualification · Lead 0042
Pre-send check
LEAD
Sara D. · Account Executive · 12-person Series A startup
Fit score · 41 / 100
Title and company size both off ICP
Skip
REASONING
ICP is VP-level Demand Gen at 200–2,000 person companies. Sara is an AE at a 12-person startup — wrong seniority and wrong company stage. Routed to archive, sender slot preserved.
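
For the mechanically curious, the routing step reduces to a few lines. A minimal sketch, with illustrative thresholds rather than the shipped defaults (the fit score itself comes from the brain's read of the profile against the ICP brief):

```python
# Sketch of the pre-spend routing rule. Thresholds are illustrative,
# not the shipped defaults; the 0-100 fit score comes from the
# brain's LLM read of the profile against the campaign's ICP brief.

def route_lead(fit_score: int, send_above: int = 75, skip_below: int = 50) -> str:
    """Map a 0-100 fit score to a recommended action."""
    if fit_score >= send_above:
        return "send"      # spend a connection-request slot
    if fit_score < skip_below:
        return "skip"      # route to archive, slot preserved
    return "review"        # edge case: gets human eyes

# Sara D. scored 41 against a VP-level ICP: skipped, never queued.
assert route_lead(41) == "skip"
```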
02 · Personalisation at scale

Per-lead openers without per-lead human time.

A sender hand-personalising thirty messages a day burns out at week two. The brain takes that job — producing one or two lines per lead that reference the prospect's role, company, and recent activity, dropped into the message via tokens.

Same model, your voice. The campaign-level ICP brief tells the brain what your offer is and what tone to use. Reply rates do not come from cleverness in the opener — they come from sounding human at scale, which is exactly what Mad Libs templates can't do.

  • Pulls signal from profile, role, company, and recent posts
  • One or two lines per lead, dropped in via personalisation tokens
  • Tone tuned per campaign, not per sender
  • Falls back to your written template if it can't beat a quality bar
Generated opener · Priya S.
28 words · tone 8.2
CONTEXT THE BRAIN USED
VP Demand Gen at Cinder. Posted last week about ABM tooling fatigue and "stitching together five different point solutions."
DRAFT MESSAGE
Hi Priya — saw your post on ABM stack stitching. We're building the orchestration layer specifically to kill that problem for outreach across multiple senders. Worth a 15-min look?
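
The assembly is deliberately simple; the judgment lives in the model calls. A minimal sketch, where generate_opener, quality_score, and the 7.0 bar are illustrative stand-ins, not the shipped interface:

```python
# Sketch of token insertion with the quality-bar fallback.
# generate_opener and quality_score stand in for the brain's LLM
# calls; the bar of 7.0 is illustrative.

TEMPLATE = "Hi {first_name}, {opener} Worth a 15-min look?"

def build_message(lead: dict, generate_opener, quality_score,
                  fallback_opener: str, bar: float = 7.0) -> str:
    """Drop a generated opener into the template, or fall back."""
    opener = generate_opener(lead)     # 1-2 lines from role, company, posts
    if quality_score(opener) < bar:    # can't beat the quality bar?
        opener = fallback_opener       # ship your written template line
    return TEMPLATE.format(first_name=lead["first_name"], opener=opener)
```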
03 · Sequence drafting

Hand the brain a brief. Get back a sequence.

Building a five-step LinkedIn sequence from scratch is the highest-friction setup task in outreach. Most teams ship a passable v1 and never edit it again. The brain solves the cold-start problem: give it your offer, your ICP, and your pod's voice, and it drafts the whole sequence — the connect note, message one, the follow-up, the InMail, the bump — in one pass.

Each step is tuned with the right level of variance for its job. The InMail and the first connect note get tighter, more deterministic generation because they're high-stakes single-shot moments. The follow-ups get looser variance because the brain has more chances to land.

  • Full multi-step sequence drafted from your offer + ICP + voice
  • Per-step variance tuning — high-stakes steps get tighter generation
  • You edit, swap, or accept whole — nothing ships without you reading it
Drafted sequence · Q2 RevOps
5 steps · auto-generated
Step 1 · Connect note (high-stakes)
Variance: tight · one shot, must land
Tuned
Step 2 · Message 1 (medium)
Variance: medium · pitch differentiator
Tuned
Step 3 · Follow-up (looser)
Variance: high · brain has another swing
Tuned
Step 4 · InMail (high-stakes)
Variance: tight · the closer
Tuned
Step 5 · Bump (low-stakes)
Variance: high · light nudge
Tuned
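
One plausible way to read per-step variance is as sampling temperature. A sketch under that assumption, with illustrative numbers (we don't publish the shipped knobs):

```python
# Sketch of per-step variance tuning, read as sampling temperature.
# The mapping and the numbers are illustrative, not the shipped knobs.

STEP_TEMPERATURE = {
    "connect_note": 0.2,  # high-stakes, one shot: tight
    "message_1":    0.5,  # medium: pitch the differentiator
    "follow_up":    0.9,  # looser: the brain has another swing
    "inmail":       0.2,  # high-stakes closer: tight
    "bump":         0.9,  # low-stakes nudge
}

def temperature_for(step: str) -> float:
    return STEP_TEMPERATURE.get(step, 0.5)  # sensible middle default
```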
04 · Phrase-freshness retry

The same prompt does not get to ship the same phrase.

Give an LLM a prompt a thousand times and a few stock phrases will appear in 80% of the outputs. "Hope this finds you well." "Wanted to reach out about." "Quick question." Every recipient who has been targeted by a campaign before recognises the smell.

Every generated message is checked against the pod's prior sends with a phrase-overlap score. Above the threshold, the brain regenerates once with explicit "do not reuse phrasing" instructions. The result is messaging that stays fresh as the pod scales from one sender to ten to thirty.

  • 3-gram overlap check against the pod's prior sends
  • Above threshold → one regeneration with anti-reuse instructions
  • Keeps prospects from receiving the same opener twice via different senders
Freshness check
Lead · Marcus K.
First draft · overlap 47%
Reuses 3-gram patterns from prior sends
Rejected
Regeneration · overlap 11%
Anti-reuse instructions applied
Cleared
SHIPPED MESSAGE
Marcus — the throughput piece in your last post resonated. We've been pushing on the same problem from the orchestration angle. Ten minutes worth a look?
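
The check itself is cheap and deterministic. A minimal sketch of the 3-gram overlap score, with an illustrative 30% threshold (the shipped value is tuned per pod):

```python
# Sketch of the 3-gram freshness check. The 30% threshold is
# illustrative; the shipped value is tuned per pod.

def trigrams(text: str) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap_score(draft: str, prior_sends: list) -> float:
    """Share of the draft's 3-grams already seen in the pod's sends."""
    seen = set()
    for message in prior_sends:
        seen |= trigrams(message)
    grams = trigrams(draft)
    return len(grams & seen) / len(grams) if grams else 0.0

def freshness_check(draft: str, prior_sends: list, threshold: float = 0.30) -> str:
    if overlap_score(draft, prior_sends) > threshold:
        return "regenerate"  # one retry, with anti-reuse instructions
    return "cleared"
```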
05 · Reply classification

Closers stop reading "thanks but not now" all day.

Triage of inbound replies is where the time goes. A campaign at scale produces hundreds of replies a week, and a meaningful share of them are not real opportunities — they're polite passes, OOO notices, "wrong person" referrals, or outright rejections.

Every inbound is classified the moment it arrives: interested, objection, not now, wrong person, negative, OOO, or auto-reply. Each one is routed to the right queue. The closer sees only the queue that matters. The pod sees aggregate signal — which titles convert, which industries push back, where the funnel is leaking.

  • Seven-class taxonomy on every inbound, in real time
  • Routing to the right pod member's queue — closer, manager, archive
  • High-intent replies surfaced first in the unified inbox
  • Aggregate signal: which titles, industries, and offers convert
Inbox routing · this hour
14 replies classified
Priya S. · Interested
Confidence 94% · routed to closer queue
Hot
Anna T. · Wrong person
Referral to "Jordan in revops" · routed to research
Referred
Jordan R. · OOO
Auto-snoozed until Mon · sequence paused
OOO
Devon M. · Not interested
Polite pass · archived · sender does not see it
Archive
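
Once the classifier has labelled a reply, routing is a plain lookup. A sketch with illustrative queue names:

```python
# Sketch of class-to-queue routing. The seven classes are the shipped
# taxonomy; the queue names are illustrative.

ROUTES = {
    "interested":   "closer",    # surfaced first in the unified inbox
    "objection":    "closer",    # a real conversation, just a harder one
    "not_now":      "nurture",   # re-engage later
    "wrong_person": "research",  # chase the referral
    "negative":     "archive",   # the sender never sees it
    "ooo":          "snooze",    # pause the sequence until return
    "auto_reply":   "archive",
}

def route_reply(label: str) -> str:
    return ROUTES.get(label, "review")  # unknown labels get human eyes
```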
06 · Agent Mode auto-reply

Drafts, asks, books — while the pod sleeps.

Reply latency kills conversion. A reply that lands at 11pm and gets answered at 9am the next morning has already lost a meaningful share of intent. Agent Mode answers it at 11:01pm.

On a positive reply, the agent drafts a contextual response, asks the qualifying questions you specified at campaign setup, and proposes real calendar slots. Run it in approve-each-reply mode while you build trust, then graduate to fully autonomous on high-confidence interested replies. Every action is logged for audit.

  • Draft, send, ask follow-up, book — all autonomous when configured
  • Approve-each-reply mode for week one, autonomous after you trust it
  • Calendar integration proposes real slots, not Calendly links
  • Reply preview / approval flow for teams that want a human in the loop
  • Every drafted, edited, and sent message logged for audit
Agent Mode · Priya S.
Sent at 23:14
INBOUND
Yeah, makes sense — happy to chat. Tuesday or Thursday work, mornings ideally.
Intent · Interested
Confidence 94% · auto-send threshold met
Auto-replied
AGENT REPLY (sent)
Tuesday morning works great. I have 9:30 or 10:30 ET open — either of those land for you?
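
The auto-send decision itself is a small gate. A sketch, assuming an illustrative 0.90 threshold (both the threshold and the mode are configurable per campaign):

```python
# Sketch of the auto-send gate. The 0.90 threshold and the mode flag
# are illustrative; both are configurable per campaign.

def dispatch(label: str, confidence: float, autonomous: bool,
             threshold: float = 0.90) -> str:
    if not autonomous:
        return "queue_for_approval"   # approve-each-reply mode
    if label == "interested" and confidence >= threshold:
        return "auto_send"            # Priya at 94% clears the bar
    return "queue_for_approval"       # everything else waits for a human

assert dispatch("interested", 0.94, autonomous=True) == "auto_send"
```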
07 · High-stakes safety gating

Some replies the agent should never touch.

The cost of an autonomous reply going wrong is not symmetric. A clumsy reply to "what's your pricing?" costs you a meeting. A clumsy reply to a GDPR request, a contract objection, or a legal threat costs you a brand — and possibly a regulator letter.

Every inbound is keyword-scanned against a list of high-stakes phrases — legal, GDPR, contract, pricing, refund, complaint, "remove me", lawyer, and the rest. On a match, auto-send is disabled for that thread, an alert is filed, and a human picks it up. The agent gets to do the easy 80% and is locked out of the dangerous 20%.

  • Keyword scan on every inbound for high-stakes phrases
  • Match → auto-send disabled for that thread, alert filed
  • Asymmetric-cost design: the agent owns the easy, humans own the dangerous
  • Configurable: add your own phrases for industries with their own landmines
Safety gate triggered
Thread · Marcus K.
INBOUND
Please remove me from your contact list and confirm the deletion under GDPR.
Detected phrase · "remove me"
Also matched: "GDPR" · auto-send disabled
Gated
Pilot alert filed
Routed to human review queue · thread frozen
Awaiting human
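
The gate is deliberately dumb: a substring scan, not a model call, so the safety check has no failure modes of its own. A sketch using a sample of the default phrases:

```python
# Sketch of the safety gate: a substring scan, no model call. The
# default list is a sample of the shipped phrases; pods append their own.

HIGH_STAKES = ["legal", "gdpr", "contract", "pricing", "refund",
               "complaint", "remove me", "lawyer"]

def gate(inbound: str, extra_phrases: tuple = ()) -> list:
    """Return matched phrases; any match freezes auto-send for the thread."""
    text = inbound.lower()
    return [p for p in [*HIGH_STAKES, *extra_phrases] if p in text]

hits = gate("Please remove me from your contact list and confirm under GDPR.")
# hits == ["gdpr", "remove me"]: auto-send disabled, alert filed
```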
08 · Long-thread compaction

The agent stays on-topic in 30-message threads.

LLMs lose the plot in long conversations. The topic that was set in message one drifts as new messages pile up, and by message twelve the agent is replying to the immediately preceding message with no memory of what the original outreach was actually about.

When a thread runs past ten messages, the brain keeps the very first message (which sets the topic), the most recent nine (which carry the live context), and replaces the dropped middle with a single elision marker. No extra LLM call. The agent stays anchored to what the conversation was originally about.

  • Triggers automatically once a thread runs past 10 messages
  • First message preserved — the topic anchor
  • Most recent nine preserved — the live context
  • Heuristic, not another LLM call — deterministic and free
Thread compaction · 14 msgs
Compacted to 11
Message 01 · topic anchor
"Saw your post on ABM stack stitching..."
Kept
Messages 02–05 · elided
[4 earlier messages elided]
Dropped
Messages 06–14 · live context
Most recent 9 carried into the prompt
Kept
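
The heuristic is pure list slicing, as a sketch makes clear (constants match the behaviour described above):

```python
# Sketch of the compaction heuristic: pure list slicing, no LLM call.
# Keeps the first message plus the most recent nine, marks the rest.

def compact(thread: list, keep_tail: int = 9) -> list:
    if len(thread) <= keep_tail + 1:                  # nothing to elide yet
        return thread
    dropped = len(thread) - 1 - keep_tail
    return [thread[0],                                # topic anchor
            f"[{dropped} earlier messages elided]",   # single marker
            *thread[-keep_tail:]]                     # live context

# A 14-message thread compacts to 11: anchor + marker + last nine.
assert len(compact([f"msg {i}" for i in range(1, 15)])) == 11
```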
A note on what we don't say

We won't tell you which model is behind the brain.

Outreach buyers in 2026 are being marketed at with model names. We think that's the wrong unit of decision — the leading model will change four times in the next two years, and you will care about the workflow that ships, not the badge on the box.

What you should ask instead

Does the system qualify before it spends a daily-cap action? Does it refuse to auto-reply on legal phrases? Does it stay on-topic in a 30-message thread? Does it learn from every reply the pod sends, or does each sender train its own brain in isolation? These questions outlast any model release schedule.

See the agent on your own pipeline.

14-day free trial. No credit card. Connect a sender, run a campaign, watch the brain qualify, draft, classify, and reply.