Most outreach tools use AI to write the first message and call themselves "AI-powered." LinkedReach uses AI for eight different jobs — qualifying, personalising, drafting, classifying, replying, refusing to reply, compacting, and learning. Here is exactly what each one does.
LinkedIn caps you at 25 connection requests per sender per day, and those 25 slots are the most expensive resource in the system. Sending one to someone who is obviously not in your ICP, with the wrong title, the wrong company size, or the wrong region, wastes a slot you cannot get back.
Before any campaign action fires, the brain reads the lead's profile against your ICP brief. It returns a fit score, the reasoning behind the score, and a recommended action: send, skip, or flag for review. Bad-fit leads never reach the queue. Edge cases get human eyes.
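A minimal sketch of how a gate like that can sit in front of the send queue. The `FitVerdict` shape, the function name, and the thresholds are assumptions for illustration, not LinkedReach's published API:

```python
from dataclasses import dataclass

@dataclass
class FitVerdict:
    score: float    # 0.0 (no fit) to 1.0 (perfect fit)
    reasoning: str  # the brain's explanation, logged for audit

def recommend(v: FitVerdict, send_at: float = 0.75, skip_at: float = 0.35) -> str:
    """Only clear fits spend a daily-cap slot; the grey zone gets human eyes."""
    if v.score >= send_at:
        return "send"
    if v.score < skip_at:
        return "skip"
    return "flag"  # edge case: queued for human review, no slot spent
```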
A sender hand-personalising thirty messages a day burns out by week two. The brain takes that job, producing one or two lines per lead that reference the prospect's role, company, and recent activity, dropped into the message via tokens.
Same model, your voice. The campaign-level ICP brief tells the brain what your offer is and what tone to use. Reply rates do not come from cleverness in the opener — they come from sounding human at scale, which is exactly what Mad Libs templates can't do.
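Roughly, the token mechanics look like this. The token names here (`{{first_name}}`, `{{company}}`, `{{ai_personal_line}}`) are hypothetical stand-ins for the sketch, not LinkedReach's documented token set:

```python
TEMPLATE = (
    "Hi {{first_name}}, {{ai_personal_line}} "
    "We help teams like {{company}} book more meetings without more headcount."
)

def render(template: str, lead: dict, personal_line: str) -> str:
    """The brain writes only personal_line; everything else stays fixed, in your voice."""
    return (template
            .replace("{{first_name}}", lead["first_name"])
            .replace("{{company}}", lead["company"])
            .replace("{{ai_personal_line}}", personal_line))

# e.g. render(TEMPLATE, {"first_name": "Dana", "company": "Acme"},
#             "Saw your post on churn benchmarks last week.")
```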
Building a five-step LinkedIn sequence from scratch is the highest-friction setup task in outreach. Most teams ship a passable v1 and never edit it again. The brain solves the cold-start problem: give it your offer, your ICP, and your pod's voice, and it drafts the whole sequence — connect note, follow-up, message two, the InMail, the bump — in one pass.
Each step is tuned with the right level of variance for its job. The InMail and the first connect note get tighter, more deterministic generation because they're high-stakes single-shot moments. The follow-ups get looser variance because the brain has more chances to land.
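As a sketch, think of it as a per-step sampling config. The exact temperature values below are illustrative guesses, not LinkedReach's tuned numbers:

```python
# Lower temperature = tighter, more deterministic output.
STEP_VARIANCE = {
    "connect_note": {"temperature": 0.3},  # single-shot, keep it tight
    "inmail":       {"temperature": 0.3},  # high-stakes, near-deterministic
    "follow_up":    {"temperature": 0.8},  # more chances to land, loosen up
    "message_two":  {"temperature": 0.8},
    "bump":         {"temperature": 0.9},  # short nudge, variety matters most
}
```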
Give an LLM a prompt a thousand times and a few stock phrases will appear in 80% of the outputs. "Hope this finds you well." "Wanted to reach out about." "Quick question." Every recipient who has been targeted by a campaign before recognises the smell.
Every generated message is checked against the pod's prior sends with a phrase-overlap score. Above the threshold, the brain regenerates once with explicit "do not reuse phrasing" instructions. The result is messaging that stays fresh as the pod scales from one sender to ten to thirty.
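One plausible way to implement an overlap score like that is trigram Jaccard similarity against the pod's send history. The metric and the threshold below are assumptions, not the exact check LinkedReach runs:

```python
import re

def trigrams(text: str) -> set[tuple[str, ...]]:
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap_score(candidate: str, prior_sends: list[str]) -> float:
    """Highest trigram Jaccard similarity against anything already sent."""
    cand = trigrams(candidate)
    if not cand:
        return 0.0
    scores = [
        len(cand & prev) / len(cand | prev)
        for prev in map(trigrams, prior_sends) if prev
    ]
    return max(scores, default=0.0)

OVERLAP_THRESHOLD = 0.25  # illustrative; above this, regenerate once with
                          # explicit "do not reuse phrasing" instructions
```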
Triage of inbound replies is where the time goes. A campaign at scale produces hundreds of replies a week, and a meaningful share of them are not real opportunities — they're polite passes, OOO notices, "wrong person" referrals, or genuinely negative.
Every inbound is classified the moment it arrives: interested, objection, not now, wrong person, negative, OOO, or auto-reply. Each one is routed to the right queue. The closer sees only the queue that matters. The pod sees aggregate signal — which titles convert, which industries push back, where the funnel is leaking.
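The taxonomy is fixed; only the routing destinations below are invented for illustration:

```python
from enum import Enum

class ReplyClass(Enum):
    INTERESTED   = "interested"
    OBJECTION    = "objection"
    NOT_NOW      = "not_now"
    WRONG_PERSON = "wrong_person"
    NEGATIVE     = "negative"
    OOO          = "ooo"
    AUTO_REPLY   = "auto_reply"

# Hypothetical queue names: only INTERESTED lands in front of the closer.
ROUTES = {
    ReplyClass.INTERESTED:   "closer_queue",
    ReplyClass.OBJECTION:    "nurture_queue",
    ReplyClass.NOT_NOW:      "snooze_queue",
    ReplyClass.WRONG_PERSON: "referral_queue",
    ReplyClass.NEGATIVE:     "suppress_list",
    ReplyClass.OOO:          "retry_later",
    ReplyClass.AUTO_REPLY:   "retry_later",
}
```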
Reply latency kills conversion. A reply that lands at 11pm and gets answered at 9am the next morning has already lost a meaningful share of intent. Agent Mode answers it at 11:01pm.
On a positive reply, the agent drafts a contextual response, asks the qualifying questions you specified at campaign setup, and proposes real calendar slots. Run it in approve-each-reply mode while you build trust, then graduate to fully autonomous on high-confidence interested replies. Every action is logged for audit.
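The graduation logic reduces to a small gate. This sketch assumes a confidence score from the classifier and a 0.9 floor, both illustrative:

```python
def dispatch(reply_class: str, confidence: float, autopilot: bool,
             auto_floor: float = 0.9) -> str:
    """Approve-each mode holds every draft for a human; autopilot releases
    only high-confidence interested replies. Every decision is logged."""
    if autopilot and reply_class == "interested" and confidence >= auto_floor:
        return "send"               # fully autonomous path
    return "hold_for_approval"      # anything uncertain still gets a human
```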
The cost of an autonomous reply going wrong is not symmetric. A clumsy reply to "what's your pricing?" costs you a meeting. A clumsy reply to a GDPR request, a contract objection, or a legal threat costs you a brand, and possibly a letter from a regulator.
Every inbound is keyword-scanned against a list of high-stakes phrases — legal, GDPR, contract, pricing, refund, complaint, "remove me", lawyer, and the rest. On a match, auto-send is disabled for that thread, an alert is filed, and a human picks it up. The agent gets to do the easy 80% and is locked out of the dangerous 20%.
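A minimal sketch of that guardrail, using only the phrases named above. The production list is longer, and a real implementation would match on word boundaries rather than raw substrings:

```python
HIGH_STAKES = [
    "legal", "gdpr", "contract", "pricing",
    "refund", "complaint", "remove me", "lawyer",
]  # subset for illustration; the real list has more entries

def must_escalate(inbound: str) -> bool:
    """True means auto-send is disabled for the thread and an alert is filed."""
    text = inbound.lower()
    return any(phrase in text for phrase in HIGH_STAKES)
```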
LLMs lose the plot in long conversations. The topic that was set in message one drifts as new messages pile up, and by message twelve the agent is replying to the immediate previous message with no memory of what the original outreach was actually about.
When a thread runs past ten messages, the brain keeps the very first message (which sets the topic), the most recent nine (which carry the live context), and replaces the dropped middle with a single elision marker. No extra LLM call. The agent stays anchored to what the conversation was originally about.
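The compaction itself is plain list surgery, something like this sketch:

```python
def compact(thread: list[str], max_len: int = 10, keep_tail: int = 9) -> list[str]:
    """Keep the first message (topic anchor) and the most recent nine
    (live context); replace the dropped middle with one elision marker.
    No extra LLM call is needed."""
    if len(thread) <= max_len:
        return thread
    return [thread[0], "[... earlier messages elided ...]"] + thread[-keep_tail:]
```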
Outreach buyers in 2026 are being sold on model names. We think that's the wrong unit of decision: the leading model will change four times in the next two years, and you will care about the workflow that ships, not the badge on the box.
Does the system qualify before it spends a daily-cap action? Does it refuse to auto-reply on legal phrases? Does it stay on-topic in a 30-message thread? Does it learn from every reply the pod sends, or does each sender train its own brain in isolation? These questions outlast any model release schedule.
14-day free trial. No credit card. Connect a sender, run a campaign, watch the brain qualify, draft, classify, and reply.