Intent Data for SDRs: Playbook for Intent-Triggered Outreach [2026]
SDR playbook for intent-triggered outreach: which signals to act on, when to call vs. email, exact sequence structure, and how to avoid the 'surveillance creep' mistake.
IntentGPT Team
Intent data gives SDRs the answer to the hardest question in outbound: "Why am I calling this person right now?" A prospect who visited your pricing page twice this week has a reason you can reference. A company whose Bombora score just surged on your product category is in the market. Reaching them without knowing the signal is cold prospecting. Reaching them with the signal is warm, timely, and demonstrably more likely to convert.
The conversion lift is real. SDR teams with properly instrumented intent programs report 2x–3x meeting conversion rates on intent-triggered sequences vs. standard cold outreach — but only when the sequencing logic, messaging, and timing are right. This playbook covers all three.
Bottom line: Intent data does not replace outbound skill — it eliminates the timing problem. The SDR who reaches out the day a prospect is researching wins against the SDR who reaches out a month earlier or a month later.
Quick Reference: Intent Signal → Action Matrix
| Signal | Signal Source | Urgency | First Touch | Sequence Length |
|---|---|---|---|---|
| Pricing page (2+ visits in 7 days) | First-party | Same day | Phone call | 3-step, 5 days |
| Demo/trial page visit (no conversion) | First-party | Same day | Phone call | 3-step, 5 days |
| ROI calculator completion | First-party | Same day | Phone call | 3-step, 5 days |
| Competitor comparison page view | First-party | 24 hours | Email → call | 4-step, 7 days |
| Third-party topic surge (high score) | Third-party | 48 hours | Email | 5-step, 10 days |
| G2 category page view | Second-party | 48 hours | Email → call | 5-step, 10 days |
| Webinar attendance (live) | First-party | 24 hours | Email (personalized) | 3-step, 7 days |
| In-product feature exploration | First-party (product) | 24 hours | Email | 3-step, 5 days |
| Email click to pricing/demo | First-party | 48 hours | Reply thread or call | 2-step, 3 days |
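For teams wiring these plays into a sequencer or alert router, the matrix above can be encoded as a simple lookup table. This is a minimal sketch with hypothetical signal keys and field names, not a specific platform's schema:

```python
from dataclasses import dataclass

@dataclass
class Play:
    urgency_hours: int   # target response window
    first_touch: str     # channel for the first outreach
    steps: int           # touches in the sequence
    days: int            # sequence length in days

# Illustrative encoding of the matrix above; signal keys are assumptions.
PLAYBOOK = {
    "pricing_page_2x_7d":      Play(4,  "call",  3, 5),
    "demo_page_no_convert":    Play(4,  "call",  3, 5),
    "roi_calculator":          Play(4,  "call",  3, 5),
    "competitor_compare":      Play(24, "email", 4, 7),
    "topic_surge_high":        Play(48, "email", 5, 10),
    "g2_category_view":        Play(48, "email", 5, 10),
    "webinar_live":            Play(24, "email", 3, 7),
    "product_feature_explore": Play(24, "email", 3, 5),
    "email_click_pricing":     Play(48, "reply_or_call", 2, 3),
}

def route(signal: str) -> Play:
    """Return the play for a signal; unknown signals fall back to the
    lowest-urgency play rather than the highest."""
    return PLAYBOOK.get(signal, PLAYBOOK["topic_surge_high"])
```

Routing unknown signals to the low-urgency play is a deliberate default: a misclassified signal should never jump the same-day phone queue.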
What Makes Intent-Triggered Outreach Different
Standard cold outreach is based on fit: the account matches your ICP, so you reach out. Intent-triggered outreach adds timing: the account matches your ICP and they are exhibiting active research behavior right now.
The practical difference for an SDR is not just conversion rate. It is the quality of the conversation. A prospect who clicked on your ROI calculator 20 minutes ago is in a different mental state than one who has not thought about your product category in six months. They have a problem on their mind. Your job is to connect your solution to that problem before they close the browser tab.
The mistake most SDR teams make when they get intent data for the first time: they send the same sequence they use for all outbound, with no change to timing, messaging, or channel. Intent data does not improve bad sequencing. It accelerates good sequencing.
The Four Intent Signal Types — and How to Respond to Each
Type 1: High-Urgency First-Party Signals (Pricing, Demo, ROI Calculator)
These signals indicate commercial intent. The prospect has moved past research and is evaluating cost and feasibility. Response time is everything.
Target response window: Same business day, ideally within 4 hours of the signal.
Channel: Phone first, email if no answer. These are not "email and hope" signals — they warrant a call.
Messaging framework — what to say and what NOT to say:
The surveillance creep mistake: "Hi, I noticed you were on our pricing page today — thought I'd reach out." This tells the prospect they are being watched. It makes them uncomfortable, not impressed. Even if your intent is to be helpful, the framing makes you sound like you are monitoring their browser activity.
The right framing: Reference the category pain point that would lead someone to your pricing page, not the visit itself.
"Hey [Name], I reach out to [role] at companies like [theirs] when they're in the middle of evaluating their [category] stack. Timing may be terrible, but if you're at all looking at this space right now, I'd love to show you something specific to [their industry] — takes 15 minutes."
The signal tells you when to call. The messaging treats the timing as a hypothesis ("you might be evaluating") not a fact ("I know you were on our site").
Sequence structure — Type 1:
- Day 0: Phone call (leave voicemail if no answer with the "evaluating their stack" framing above)
- Day 1: Follow-up email — one-line, references the voicemail, asks for 15 minutes
- Day 3: Final call + email combo — "Tried to connect earlier this week. Happy to send a quick comparison doc instead if easier."
If no response after Day 3, move to a standard nurture sequence. Do not chase high-intent signals with a 15-touch cadence — the window has likely closed.
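If you automate enqueueing, the Type 1 cadence above is just day offsets expanded into concrete dates. A minimal sketch, assuming a generic sequencer that accepts dated tasks (the function and touch-plan names are hypothetical):

```python
from datetime import date, timedelta

# Touch plan for a Type 1 (high-urgency first-party) signal, per the
# sequence above: Day 0 call, Day 1 email, Day 3 final call + email.
TYPE_1_TOUCHES = [
    (0, "call", "Evaluating-their-stack framing; voicemail if no answer"),
    (1, "email", "One line, references the voicemail, asks for 15 minutes"),
    (3, "call+email", "Final combo; offer a quick comparison doc instead"),
]

def schedule(signal_date: date, touches=TYPE_1_TOUCHES):
    """Expand day offsets into concrete dates a sequencer can enqueue."""
    return [(signal_date + timedelta(days=offset), channel, note)
            for offset, channel, note in touches]
```

After the last dated touch passes with no response, the account should be reassigned to the standard nurture sequence rather than extended.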
Type 2: Competitor Research Signals
When a prospect visits your competitor comparison page or a third-party review site shows them comparing you to a competitor, they are in active evaluation — and they may not have you on the shortlist.
Target response window: Within 24 hours.
Channel: Email first (less intrusive; gives them something to reference), then phone on Day 2.
Messaging framework: Reference the evaluation context without naming the competitor specifically.
"Hi [Name] — when [role] at [size] companies in [industry] are building their shortlist for [category], the comparison that trips most people up is [the dimension competitors usually lose on]. We have a 10-minute breakdown of the 3 questions that predict which platform wins in your environment. Worth 10 minutes?"
Note the specificity: you are naming a comparison dimension, not a competitor. This positions you as the evaluator's guide, not a vendor defending their own product.
Sequence structure — Type 2:
- Day 0: Email (comparison framing above)
- Day 1: LinkedIn connection request (no message — let the connection speak for itself)
- Day 2: Phone call, reference the email
- Day 4: Second email — "One thing I didn't include in my last note: [specific differentiator relevant to their industry]"
- Day 7: Final touch — "Closing the loop — happy to send comparison matrix directly if useful."
Type 3: Third-Party Topic Surge (Bombora or Similar)
Topic surge signals are weaker than first-party behavioral signals because they tell you the account is researching the category, not that they have found you. They warrant outreach but at a lower urgency level.
Target response window: 48–72 hours.
Channel: Email first. These contacts have not necessarily heard of you — a cold call on a topic surge signal often lands flat.
Messaging framework: The surge tells you the category of their research. Match your opening line to the topic that is surging.
If the surge topic is "sales automation," your opener references sales automation. If it is "data privacy compliance," your opener references compliance pain. Do not send the same opener to every surging account regardless of topic.
Sequence structure — Type 3:
- Day 0: Email (topic-matched opener)
- Day 2: LinkedIn message (brief, references the same topic)
- Day 4: Phone call
- Day 7: Email — include a relevant case study or stat specific to their industry
- Day 10: Final email — "Closing this out unless [X]. Happy to resurface in Q[next quarter] if timing isn't right."
Type 4: Product and Trial Behavior Signals
For PLG companies, in-product behavior is the most immediate trigger. A trial account that activated and is exploring purchase-decision features is more likely to convert on a call than any content-consumption signal.
Target response window: Within 24 hours of the triggering event.
Channel: Email first (the contact is in your product — an email is less disruptive than a cold call mid-trial).
Messaging framework: Acknowledge the trial without referencing specific features they used (which feels surveilled). Instead, offer to accelerate their evaluation.
"Hi [Name] — Akshar from IntentGPT. You're in the middle of a trial and I want to make sure you're seeing the parts most relevant to [their use case]. I can jump on a 15-minute call to walk through the two features that drive the most value for [their company type]. Worth it?"
Sequence structure — Type 4:
- Day 0: Email (trial acceleration offer)
- Day 1: In-app message or product notification (if your platform supports it)
- Day 3: Follow-up email — share one insight relevant to their apparent use case
- Day 5 (if trial expiry is approaching): "Your trial ends [date] — happy to extend it or walk through the buying process if you're still evaluating."
Timing: The Window That Determines Whether Intent Data Works
Intent data has a half-life. The prospect who was actively evaluating today will not be in the same mental state in 10 days. Most third-party intent platforms update signals daily or weekly. Most first-party signals fire in real time.
The timing math:
Sales response-time research supports a general principle: same-day outreach on high-intent signals converts at significantly higher rates than next-day or later outreach. For first-party behavioral signals (pricing page, demo page), a 4-hour response window is the operational standard at high-performing SDR teams.
For third-party intent signals, the window is longer — the account has been surging for several days by the time the score updates in your platform. Reaching them within 48–72 hours of the signal update is the practical target.
Signal freshness by source:
| Signal Source | Typical Freshness When Received | Practical Outreach Window |
|---|---|---|
| First-party (website) | Real-time | Same day |
| First-party (email click) | Real-time | 24–48 hours |
| First-party (product behavior) | Real-time to 24 hours | 24 hours |
| Third-party (Bombora weekly update) | Up to 7 days old | 48–72 hours after receiving |
| Third-party (daily refresh platforms) | Up to 24 hours old | Same day after receiving |
| Second-party (G2, TechTarget) | 24–48 hours typical | 48 hours |
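The freshness table translates directly into a staleness check for the intent queue. A sketch, assuming hypothetical source keys and treating "same day" as roughly an 8-hour business window:

```python
from datetime import datetime, timedelta

# Practical outreach windows (hours) per signal source, per the table
# above. Keys are illustrative, not a specific platform's field names.
WINDOW_HOURS = {
    "first_party_web": 8,          # "same day" ~ one business day
    "first_party_email_click": 48,
    "first_party_product": 24,
    "third_party_weekly": 72,      # signal may already be days old
    "third_party_daily": 8,
    "second_party_review": 48,
}

def outreach_deadline(received_at: datetime, source: str) -> datetime:
    """The time by which the rep should have made the first touch."""
    return received_at + timedelta(hours=WINDOW_HOURS[source])

def is_stale(received_at: datetime, source: str, now: datetime) -> bool:
    """Stale signals go to nurture rather than the intent queue."""
    return now > outreach_deadline(received_at, source)
```

A stale high-intent signal is not worthless, but it has decayed into a standard-outbound account and should be worked at that priority.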
Measuring Whether Your Intent Program Is Working
Management will ask whether the intent data investment is paying off. Here is the measurement framework:
Metric 1: Intent-triggered meeting rate vs. cold outreach meeting rate. Pull conversion rates separately for sequences triggered by intent signals vs. sequences with no intent trigger. If intent-triggered sequences are not outperforming by at least 50%, the signal quality, messaging, or timing is broken — not the concept.
Metric 2: Pipeline velocity — intent-sourced vs. non-intent-sourced. Track the average days from first touch to SQL and from SQL to closed-won for opportunities where an intent signal preceded first outreach. If pipeline moves faster, intent is doing its job.
Metric 3: Rep productivity — meetings per outbound activity. How many calls/emails does it take to book a meeting from an intent-triggered sequence vs. a cold sequence? This is the most direct measure of SDR efficiency and the easiest to sell to management.
Metric 4: False positive rate. Tag every intent-triggered sequence outcome. How many high-intent accounts turned out to be irrelevant? (Existing customers in a different product line, wrong geography, incorrect ICP.) If false positives exceed 30% of high-intent accounts, your ICP definition or signal filtering needs tuning.
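Metrics 1 and 4 reduce to a handful of ratios that any ops analyst can script. A minimal sketch of the arithmetic, using the 50% lift bar and 30% false-positive threshold from above (function names are illustrative):

```python
def meeting_rate(meetings: int, sequences: int) -> float:
    """Meetings booked per sequence started."""
    return meetings / sequences if sequences else 0.0

def intent_lift(intent_meetings, intent_seqs, cold_meetings, cold_seqs) -> float:
    """Relative lift of intent-triggered over cold sequences.
    Below 0.5 (50%), per the framework above, something is broken."""
    cold = meeting_rate(cold_meetings, cold_seqs)
    intent = meeting_rate(intent_meetings, intent_seqs)
    return (intent - cold) / cold if cold else float("inf")

def false_positive_rate(irrelevant_accounts: int, total_high_intent: int) -> float:
    """Share of high-intent accounts that turned out to be irrelevant.
    Above 0.30, tune the ICP definition or signal filtering."""
    return irrelevant_accounts / total_high_intent if total_high_intent else 0.0
```

For example, 30 meetings from 300 intent-triggered sequences against 12 from 300 cold sequences is a 150% lift, comfortably clearing the 50% bar.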
When Intent-Triggered Outreach Does Not Work
Knowing when not to rely on intent data prevents the program from getting a bad reputation internally.
The prospect is a champion inside the account, not the decision-maker. They research on behalf of their boss. The intent signal fires. You call. You get an enthusiastic conversation but no meeting with the economic buyer. Intent data does not tell you seniority — enrich the contact before calling.
The account is a competitor doing reconnaissance. Companies monitor competitors' pricing and feature pages regularly. A pricing page visit from someone at a direct competitor is a competitive intelligence visit, not a buying signal. Build an exclusion list of known competitor domains and filter them from your intent alert queue.
The signal fired because of a PR mention or industry news. If your company or product category was mentioned in a major trade publication this week, you may see a spike in website traffic that looks like intent. Check whether the spike correlates with a specific referral source before assigning rep time to it.
The product trial is an academic or research visit. Students, consultants doing market research, and journalists signing up for free trials all generate product behavioral signals. Look for indicators of non-buying intent: @gmail.com / @outlook.com email domains, job titles that don't match your ICP, company sizes below your minimum deal threshold.
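The exclusion rules above (competitor domains, free-email trials, sub-threshold company sizes) can be applied as a pre-routing filter before a signal ever reaches a rep's queue. A sketch under stated assumptions: the domain lists, the 50-employee floor, and the function name are all hypothetical examples, not real thresholds:

```python
COMPETITOR_DOMAINS = {"rivalco.com", "othervendor.io"}   # hypothetical exclusion list
FREE_EMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def should_route_to_rep(email: str, company_size: int, min_size: int = 50) -> bool:
    """Filter out competitor recon, non-buyer trials, and sub-threshold accounts."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in COMPETITOR_DOMAINS:
        return False   # likely competitive intelligence, not a buying signal
    if domain in FREE_EMAIL_DOMAINS:
        return False   # likely student, consultant, or journalist trial
    if company_size < min_size:
        return False   # below minimum deal threshold
    return True
```

Filtering at the queue level, rather than asking reps to eyeball each alert, is what keeps the false positive rate in Metric 4 from eroding trust in the program.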
Frequently Asked Questions
How do I use intent data without making prospects feel surveilled?
Reference the category pain point or the evaluation context implied by the signal, not the signal itself. "I reach out to [role] when they're evaluating [category]" is informed by intent but not creepy. "I saw you were on our pricing page" is the same information delivered in a way that damages the relationship before it starts. The signal is your trigger to act; the messaging is what the prospect hears.
How many intent-triggered sequences should one SDR manage at a time?
A workable rule: an SDR running intent-triggered sequences exclusively can typically manage 30–50 active sequences simultaneously (vs. 80–120 for cold volume outbound) because each sequence requires more research and personalization per contact. Intent data improves conversion rate but does not eliminate the work of good outreach.
Should I call or email first on an intent signal?
Use the signal type to decide. High-urgency first-party signals (pricing page, demo page, ROI calculator) warrant a phone call first — the prospect is in active commercial evaluation and a call is appropriate. Lower-urgency signals (third-party topic surge, webinar attendance, content download) warrant email first — the contact has not explicitly shown commercial intent, and a cold call with no prior email context can feel presumptuous.
How do I handle an intent signal for an account that already said "no" 3 months ago?
Do not treat it as a fresh start — acknowledge the prior conversation. "We spoke a few months back and the timing wasn't right. Given [signal context], I wanted to check in to see if anything has changed." A "no 3 months ago" with a current intent signal is actually one of the highest-probability scenarios in your pipeline — the objection was timing, not fit, and the intent signal suggests the timing has shifted.
Related Guides
- B2B Intent Data: The Complete Guide — Hub page: all signal types, provider comparison, ROI benchmarks
- First-Party Intent Data Setup — How to instrument your website and CRM to generate first-party signals
- B2B Intent Data Providers: Comparison — Side-by-side evaluation of Bombora, 6sense, Demandbase, and others
- IntentGPT for Sales Teams — How IntentGPT delivers intent signals directly to rep workflows