
Blog · Guide · 2026-05-07

Anatomy of a high-intent B2B buying signal

What a real B2B buying signal actually is, the seven categories that correlate with purchase intent, and how long each one stays useful before it decays.

A B2B buying signal is a specific, observable change in a company's public state that correlates with active purchase consideration. It is not a website pageview or a content download. The categories that vendor and analyst research consistently rank as predictive are funding events, executive hires, hiring velocity, ad activity, new locations, tech-stack changes, and review or evaluation activity. Most signals decay inside 30 to 90 days, which is why timing matters as much as the signal itself.

TL;DR

  • A buying signal is a change in a company's observable state, not a pixel hit. Pixel-based "intent" is anonymous research, and third-party signal accuracy declines 30 to 50 percent versus first-party data (fl0.com).
  • Seven categories carry the bulk of predictive weight: funding, executive turnover, hiring velocity, ad activity, new locations, tech-stack changes, and review or evaluation activity.
  • Decay is brutal. Job-change signals typically expire within 90 days (UserGems); funding intent peaks in the first 90 to 180 days post-round.
  • Compound signals beat single signals. Two correlated changes inside a 30-day window are a stronger predictor than any one signal alone.
  • Speed compounds the effect. First sellers to engage on a trigger event can see roughly 4x conversion lift over generic outbound, per signal-platform benchmarks (Salesmotion).

What counts as a buying signal (and what doesn't)

The intent-data category has muddied the term. Vendors sell three different things under one label.

True buying signals are changes in a company's public state. A Series B announcement, a VP of Engineering hire, a new Indeed posting for ten SDR roles, a switch from Salesforce to HubSpot, a new branch office. These events are timestamped, attributable to a specific company, and verifiable by a third party.

Anonymous research signals are what most "intent data" platforms sell. Bombora's Company Surge, for instance, infers buying interest by aggregating content consumption across a co-op of roughly 5,000 publisher sites and 16.6 billion monthly interactions (Bombora). It is genuine information, but it tells you the company is researching, not buying, and it cannot tell you which person is researching (Influ2 review of Bombora).

Pixel and bidstream signals are the weakest tier. Industry estimates put bidstream-derived intent accuracy as low as 1 to 20 percent in the worst cases (intentdata.io). False positives from researchers, students, and competitors trigger the same signals as buyers. The problem is structural, not vendor-specific.

The practical test: can you point to a specific public artifact (a job posting URL, a press release, an SEC filing, a Maps listing) that proves the change happened? If yes, it is a buying signal. If the only evidence is a probability score from a black box, treat it as a hint, not a signal.
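
The test is mechanical enough to encode. Below is a minimal sketch in Python; the Signal record and its field names (artifact_url, vendor_score) are illustrative assumptions, not any vendor's schema. The point is that the artifact URL is the load-bearing field.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Signal:
    company: str
    category: str                         # e.g. "funding_event", "executive_hire"
    observed_at: datetime                 # when the change was detected
    artifact_url: Optional[str] = None    # job posting, press release, SEC filing, Maps listing
    vendor_score: Optional[float] = None  # black-box probability, if that is all you have


def is_buying_signal(s: Signal) -> bool:
    """True buying signal: a timestamped change backed by a public artifact."""
    return s.artifact_url is not None


# A Series B press release passes the test; a bare intent score does not.
series_b = Signal("Acme Corp", "funding_event", datetime(2026, 4, 1),
                  artifact_url="https://example.com/press/acme-series-b")
surge = Signal("Acme Corp", "anonymous_research", datetime(2026, 4, 1),
               vendor_score=0.83)
assert is_buying_signal(series_b) and not is_buying_signal(surge)
```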

For the related question of how fast contact records go stale on top of these signals, see our B2B contact data decay write-up.

The seven signal categories that actually correlate with purchase intent

Every signal vendor publishes a slightly different taxonomy. Stripped of marketing, they converge on roughly seven categories. The table below summarizes what each category looks like, what it implies, and where the supporting evidence comes from.

| Category | Concrete example | What it implies | Evidence strength |
| --- | --- | --- | --- |
| Funding event | Seed, Series A, or later round closes; SEC Form D filing | Budget unlock; 3 to 6 months of active vendor evaluation | Strong. Crunchbase trigger studies cite peak buying behavior in the 90-to-180-day post-funding window (Crunchbase trigger overview via Landbase) |
| Executive hire or turnover | New CFO, VP of Sales, Head of RevOps | New leaders rebuild stacks; first 100 days = 70 percent of discretionary budget (UserGems) | Strong |
| Hiring velocity | Five SDR roles posted in 30 days; new department creation | Headcount expansion implies budget approval; jobs typically appear within 30 days of budget sign-off per industry trackers (Origami) | Moderate to strong |
| Ad activity | New campaigns visible in Google Ads Transparency Center, Meta Ad Library, or LinkedIn Ad Library | Active demand-gen spend; budget is being deployed | Moderate |
| New locations | New office, retail unit, or warehouse on Maps or in filings | Operational expansion drives tools, services, and vendor procurement | Moderate |
| Tech-stack change | Adding or removing a CRM, analytics, or infra tool tracked by BuiltWith or HG Insights | Replacement or integration window | Moderate |
| Review or evaluation activity | New G2 or Capterra reviews of a competitor; first-party review request from your category | Prospect is comparing options; shortlists are tighter than they used to be, so being on one matters more than ever | Moderate |

Two notes on honesty. First, "ad activity" as a buying signal has the weakest direct evidence; it correlates with budget but not always with category-relevant purchase. Second, "tech-stack change" is moderate, not strong, because public detection lags real adoption, often by weeks.

For a comparison of how the legacy contact-database tools handle these categories (most of them ignore everything except technographics), see our Apollo vs ZoomInfo comparison.

Decay windows: how long each signal stays useful

Signals are perishable. The conversion lift from acting on a trigger comes from being early. UserGems and other signal-platform benchmarks consistently report that new-hire signals expire near 90 days, after which the new executive has settled and built vendor preferences (UserGems). The pattern repeats across categories.

| Signal type | Typical useful window | Why it expires |
| --- | --- | --- |
| Funding event | 90 to 180 days | Budget gets allocated quickly; vendor shortlists form within the first quarter |
| Executive hire | 30 to 90 days | First 100 days drive most discretionary spend; preferences set fast (UserGems) |
| Hiring velocity | 14 to 30 days per posting | Postings fill or get pulled; the urgency that prompted the role fades |
| Ad activity | 7 to 30 days | Campaigns turn over weekly; the signal is about now-state, not history |
| New locations | 60 to 120 days | Procurement for a new site clusters in the first 3 to 4 months of operations |
| Tech-stack change | 30 to 90 days | Integration and replacement work happens fast once a switch begins |
| Review or evaluation activity | 7 to 21 days | Active shortlists close quickly; vendors have a narrow window to be considered |
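
One common way to operationalize these windows is an exponential decay weight with a per-category half-life. The sketch below assumes half-lives loosely derived from the midpoints of the windows in the table; the exponential form and the exact values are modeling choices, not published benchmarks.

```python
from datetime import datetime

# Half-lives (days) loosely derived from the midpoints of the windows above.
# The exponential form and these exact values are assumptions, not research.
HALF_LIFE_DAYS = {
    "funding_event": 90,
    "executive_hire": 45,
    "hiring_velocity": 21,
    "ad_activity": 14,
    "new_location": 75,
    "tech_stack_change": 45,
    "review_activity": 10,
}


def decay_weight(category: str, observed_at: datetime, now: datetime) -> float:
    """Return a 0-to-1 freshness weight using a per-category half-life."""
    age_days = (now - observed_at).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS[category])


# A 90-day-old funding round still carries half its weight;
# a 90-day-old executive hire is already down to a quarter.
now = datetime(2026, 5, 7)
print(decay_weight("funding_event", datetime(2026, 2, 6), now))   # ~0.50
print(decay_weight("executive_hire", datetime(2026, 2, 6), now))  # ~0.25
```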

Two structural reasons signals decay so fast. First, modern B2B buyers spend only about 17 percent of total buying time interacting with sales reps and complete most of the journey independently (Gartner via Brixon Group summary). Second, Forrester's research finds 92 percent of buyers start with at least one vendor in mind and 41 percent already have a preferred vendor before formal evaluation (Forrester via Digital Commerce 360). If you are not in the consideration set in the first window after the trigger, you are working uphill.

A separate 6sense data point reinforces the cost of being late: 82 percent of deals are won by the first vendor a buyer contacts, and 82 percent of buyers have already established their requirements before engaging sellers (6sense via Column Five summary).

Reading signals together: the compound-signal advantage

A single signal is suggestive. Two correlated signals inside a short window are predictive. The most reliable compound patterns we see in the research and in the field (a detection sketch follows the list):

  • Funding plus hiring velocity. A Series B followed by ten engineering postings in 30 days is a clearer "we are buying tooling" signal than either event alone.
  • Executive hire plus tech-stack change. A new VP of Marketing plus a HubSpot-to-Marketo migration is a textbook re-platforming pattern.
  • New location plus hiring velocity. A new branch and ten local postings inside 60 days indicate real operational launch, not a paper announcement.
  • Review activity plus competitor tech detection. A G2 review of a competitor combined with that competitor still being detected in the prospect's stack is the highest-intent late-stage signal there is.
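
Detecting these patterns is a pairwise scan. The sketch below assumes bare (category, timestamp) records; the pair list mirrors the bullets above, and the 30-day window is an illustrative default (the new-location pattern above uses 60).

```python
from collections import namedtuple
from datetime import datetime
from itertools import combinations

Sig = namedtuple("Sig", "category observed_at")  # minimal stand-in record

# Correlated pairs from the patterns above; treat the list as illustrative.
COMPOUND_PAIRS = {
    frozenset({"funding_event", "hiring_velocity"}),
    frozenset({"executive_hire", "tech_stack_change"}),
    frozenset({"new_location", "hiring_velocity"}),
    frozenset({"review_activity", "tech_stack_change"}),
}


def compound_signals(signals, window_days=30):
    """Yield correlated signal pairs observed inside the same window."""
    for a, b in combinations(signals, 2):
        if (frozenset({a.category, b.category}) in COMPOUND_PAIRS
                and abs((a.observed_at - b.observed_at).days) <= window_days):
            yield a, b


events = [
    Sig("funding_event", datetime(2026, 4, 1)),
    Sig("hiring_velocity", datetime(2026, 4, 20)),  # 19 days later: compound
    Sig("ad_activity", datetime(2026, 4, 25)),      # no correlated partner
]
print(list(compound_signals(events)))  # one (funding, hiring) pair
```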

Compound signals also help solve the buying-group problem. Gartner's research finds the typical B2B buying group has roughly 6 to 10 stakeholders representing about five business functions (Gartner), and Forrester puts the average at 13 people involved in the decision with 89 percent of purchases involving two or more departments (Forrester via Outreach summary). One signal points to one person; multiple signals across functions point to a buying group in motion.

How AI scoring should weight signals

This is where most "AI lead scoring" goes wrong. The shortcut is to pile every signal into a single regression and emit a score. The problem is that signals are not interchangeable. A 90-day-old funding round and a 3-day-old executive hire should not contribute equally, even if a model trained on closed-won data says they do.

A defensible scoring approach has four properties (a code sketch follows the list):

  1. Time decay per category. Apply a decay function tuned to the typical window for that signal type, not a global half-life.
  2. Compound bonus. Boost the score when two correlated signals appear within an overlapping window, because that pattern is more predictive than the sum of its parts.
  3. Source verifiability. Down-weight signals you cannot link to a public artifact. If the score depends on a black-box probability, the score is itself a black box.
  4. Plain-language explanation. Every score should resolve to a sentence a human can sanity-check. "Score 8.4 because: Series B closed 41 days ago, three engineering hires posted in the last 14 days, and competitor tool detected in stack." That is auditable. A bare number is not.
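
Here is a minimal sketch that wires the four properties together. Every number in it, the half-lives, the 0.5 penalty for unverifiable sources, the 1.5x compound multiplier, and the factor that squashes the total onto a 0-to-10 scale, is an illustrative assumption, not Keendai's actual model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ScoredSignal:
    category: str
    description: str             # e.g. "Series B closed"
    observed_at: datetime
    artifact_url: Optional[str]  # None means black-box source


def score_account(signals, now, half_lives, compound_pairs):
    total, reasons = 0.0, []
    for s in signals:
        age = (now - s.observed_at).days
        weight = 0.5 ** (age / half_lives[s.category])  # 1. per-category decay
        if s.artifact_url is None:
            weight *= 0.5                               # 3. down-weight unverifiable sources
        total += weight
        reasons.append(f"{s.description} ({age}d ago)")
    categories = {s.category for s in signals}
    if any(pair <= categories for pair in compound_pairs):
        total *= 1.5                                    # 2. compound bonus
        reasons.append("compound pattern detected")
    score = min(10.0, round(total * 4, 1))              # arbitrary squash onto 0-10
    return score, f"Score {score} because: {'; '.join(reasons)}"  # 4. explanation


now = datetime(2026, 5, 7)
half_lives = {"funding_event": 90, "hiring_velocity": 21}
pairs = [frozenset({"funding_event", "hiring_velocity"})]
signals = [
    ScoredSignal("funding_event", "Series B closed",
                 datetime(2026, 3, 27), "https://example.com/press/series-b"),
    ScoredSignal("hiring_velocity", "3 engineering roles posted",
                 datetime(2026, 4, 23), "https://example.com/jobs"),
]
print(score_account(signals, now, half_lives, pairs)[1])
```

On the example inputs, this prints a sentence in the same shape as the one above: a 41-day-old Series B plus fresh hiring, compounded, lands at 8.2.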

This is the approach behind Keendai, the signal-scored prospect generation product that mines Maps and other public sources, scores each lead from 0 to 10, and links every signal back to the source URL and the timestamp it was scraped. If you want to see how Keendai mines signals end to end, the architecture page walks through the full scrape, enrich, score, and dedup pipeline. Keendai pricing starts at a free tier of 50 leads per month.

A final caveat. Vendor-published conversion-lift figures (the "first seller wins 5x as often" style of stat) come from signal-platform marketing teams, not peer-reviewed research. Treat them as directional, not precise. The structural claim, that early action beats late action by a wide margin, is well supported across Gartner, Forrester, and 6sense research. The exact multiplier is not.

Frequently asked questions

What is a B2B buying signal in plain language?

A buying signal is a public, observable change in a company's state that suggests it may be about to buy something. It is grounded in a real artifact such as a press release, a job posting, an SEC filing, or a Maps listing, not in anonymous web traffic.

How is a buying signal different from intent data?

"Intent data" is the broader umbrella, and most of what is sold under that label is anonymous content-consumption inferred from publisher co-ops or bidstream feeds. A buying signal is more specific: a real, attributable event. The two overlap, but signals are easier to verify and act on than aggregated intent scores.

Which buying signals are most predictive?

The strongest evidence supports funding events and executive hires, both of which directly correlate with budget unlock and active vendor evaluation in 90-day windows. Hiring velocity is moderate-to-strong. Ad activity, tech-stack changes, new locations, and review activity are moderate.

How long does a buying signal stay useful?

Most signals decay inside 30 to 90 days. Funding events stay warm for 90 to 180 days. Executive hires for 30 to 90. Job postings for 14 to 30. Review and evaluation activity decays fastest, often inside three weeks, because shortlists close fast.

Are pixel-based intent tools worth using?

Sometimes, as one input among many. The accuracy concerns are real: third-party intent data is estimated to be 30 to 50 percent less reliable than first-party data, and bidstream sources can be far worse (fl0.com). Use them to widen the funnel, not to score purchase intent on their own.

Can AI reliably score buying signals?

Yes, if the model applies category-specific decay, rewards compound signals, and produces an explanation that cites the underlying artifacts. Without those properties, an AI score is just a confident-looking guess. The audit standard is whether a human can read the explanation and verify the cited signals against public sources.


If you want to act on signals instead of static lists, start free with 50 leads/month.

Last updated: May 2026.