Operate

Score Leads & Accounts

Scoring models that reflect how deals actually close — not how they looked when someone set up the CRM.

Average conversion rate lift for leads with intent signal scoring versus leads scored on firmographic and behavioral signals alone — the compound model outperforms any single-dimension approach.

THE BRIEF

Most lead scoring models are optimistic approximations designed once and never revisited — they award points for a job title match and a website visit and call it prioritization. The Score Leads & Accounts agent builds and maintains scoring models that combine firmographic fit, technographic signals, behavioral engagement, and third-party intent data into a dynamic, continuously calibrated score that actually predicts which leads and accounts convert.

Configures multi-dimensional scoring models

A single-dimensional score — firmographic fit, or behavioral engagement, or intent data alone — misses the compound signal that separates in-market accounts from vaguely interested ones. The agent configures scoring models that weight multiple signal categories: firmographic fit (industry, company size, geography against ICP), technographic profile (stack signals indicating category readiness or competitive displacement opportunity), behavioral engagement (email open sequences, website pages visited and in what order, webinar attendance, content downloads), and intent signals from third-party providers (keyword cluster activity, review site visits, competitor comparison behavior). Weights are set from historical win/loss data rather than intuition — the model reflects what actually predicted a closed deal, not what someone guessed would matter.

Scoring model configuration — Enterprise SaaS ICP: Firmographic fit (35% weight): 25 pts industry match, 20 pts company size 200–5,000, 10 pts US/Canada geography. Technographic (25% weight): 30 pts Salesforce CRM, 20 pts Outreach or SalesLoft in stack. Behavioral (20% weight): email engagement sequence scoring, 3+ page visits in 7 days. Intent (20% weight): Bombora topic surge 'sales engagement software' or 'outbound automation'. Threshold scores: MQL ≥ 65, SQL ≥ 80.
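A model like the one above can be sketched as a weighted blend of normalized category subscores. This is an illustrative sketch, not the agent's implementation: the weights, point values, and thresholds mirror the sample Enterprise SaaS configuration, and all function and signal names are hypothetical.

```python
# Illustrative multi-dimensional lead score. Weights and thresholds
# mirror the sample Enterprise SaaS config; all names are hypothetical.

CATEGORY_WEIGHTS = {
    "firmographic": 0.35,
    "technographic": 0.25,
    "behavioral": 0.20,
    "intent": 0.20,
}

MQL_THRESHOLD = 65
SQL_THRESHOLD = 80


def category_score(points: dict[str, int], max_points: int) -> float:
    """Normalize raw signal points into a 0-100 category subscore."""
    raw = sum(points.values())
    return min(raw, max_points) / max_points * 100


def lead_score(signals: dict[str, dict[str, int]],
               max_points: dict[str, int]) -> float:
    """Blend category subscores into one 0-100 lead score."""
    total = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        sub = category_score(signals.get(category, {}), max_points[category])
        total += weight * sub
    return round(total, 1)


def stage(score: float) -> str:
    """Map a blended score onto the configured lifecycle thresholds."""
    if score >= SQL_THRESHOLD:
        return "SQL"
    if score >= MQL_THRESHOLD:
        return "MQL"
    return "unqualified"
```

The key design point is that each category is capped and normalized before weighting, so a lead cannot reach MQL on a single over-performing dimension: a perfect firmographic fit with no engagement or intent still tops out at 35.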

Maintains scores dynamically as signals change

A static score assigned at the moment of first contact becomes wrong within days — the lead's behavior changes, their intent signals evolve, their company posts a funding announcement, and none of it updates the number sitting in the CRM field. The agent maintains scores dynamically: recalculating each lead and account's score as new behavioral signals arrive (email click, new page visit, demo request), as enrichment data refreshes (new technographic signal detected, job change at the account), and as third-party intent data updates. Scores decay when engagement goes cold, preventing stale high scores from clogging the top of sales queues with leads that last showed interest six months ago. Every score reflects the current state of the lead, not their state at first contact.

Score update log — last 24 hours: Acme Corp (account): 54 → 71 (+17) — CMO hired, now matches buyer persona. Trigger: LinkedIn org change detected. Jane Smith at DataStream: 62 → 81 (+19) — visited pricing page twice + opened 3 emails in sequence. Trigger: behavioral score spike. Legacy Inc: 78 → 44 (-34) — 90-day engagement decay applied. Trigger: no activity since January.

Calibrates model accuracy against closed-won data

A scoring model is only credible if it's validated against what actually happened. The agent runs ongoing calibration analyses comparing model predictions against closed-won and closed-lost outcomes: are the accounts that converted to revenue scoring above threshold at the right rates? Are high scores correlating with pipeline conversion? Is a particular signal category over-weighted (lots of points for intent but no conversion lift from intent signals in isolation)? Calibration reports surface model drift — when the correlation between score and conversion starts to weaken — and recommend weight adjustments with supporting data. The model evolves with the business rather than calcifying at the configuration it had when someone first set it up.

Model calibration report — Q1 2026: 312 closed-won deals analyzed. MQL threshold accuracy: 81% of closed-won had MQL score at some point (strong). Intent signal lift: +34% conversion rate for leads with intent score ≥ 20 vs. without (validates weighting). Technographic over-weighting detected: Salesforce signal adds 30 pts but conversion lift is only 12% vs. baseline — recommend reduce to 18 pts. Model performance score: 74/100.
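The lift figure quoted in a report like this is a straightforward comparison of conversion rates above and below a signal threshold. The sketch below shows that calculation in isolation; the data shape and function names are hypothetical, and a real calibration run would also control for segment and cohort effects.

```python
# Hedged sketch of one calibration check: relative conversion lift for
# leads at or above a signal threshold vs. those below it, the same
# shape of comparison as "+34% lift for intent score >= 20" above.


def conversion_rate(leads: list[dict]) -> float:
    """Fraction of leads that ended closed-won."""
    if not leads:
        return 0.0
    won = sum(1 for lead in leads if lead["closed_won"])
    return won / len(leads)


def signal_lift(leads: list[dict], signal: str, threshold: float) -> float:
    """Relative conversion lift for leads at/above the signal threshold.

    Returns e.g. 0.34 for a +34% lift over the below-threshold baseline.
    """
    above = [lead for lead in leads if lead[signal] >= threshold]
    below = [lead for lead in leads if lead[signal] < threshold]
    base = conversion_rate(below)
    if base == 0:
        return float("inf")
    return conversion_rate(above) / base - 1.0
```

Running this per signal category against closed-won/closed-lost history is what surfaces over-weighted signals: a signal worth 30 points that shows only a 12% lift is a candidate for a weight cut.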

Routes and alerts based on score thresholds

A score that lives in a CRM field but doesn't trigger action is just a number. The agent connects scoring to routing and alerting logic: when a lead crosses the MQL threshold, it routes to the assigned SDR queue with the score breakdown showing which signals triggered the threshold. When a dormant account's score spikes — intent surge or a new contact behavior pattern — an alert fires to the AE or CSM with context on what changed. When a whole territory sees an unusual score spike pattern, a manager alert surfaces the trend before the quarterly review. The scoring model becomes an operational system, not a reporting artifact.

Routing and alert triggers — last 7 days: 14 new MQLs routed to SDR queue (avg score: 72). 3 dormant accounts with intent spike alerts sent to AEs — avg account score jump +28. 1 territory alert: West Coast Mid-Market territory average score up 31% QoQ — 6 accounts crossing threshold simultaneously. 2 accounts re-classified from SQL to MQL on score decay.
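Threshold-driven routing reduces to comparing the old and new score against each cutoff on every update. The sketch below illustrates the pattern with the sample model's thresholds; the queue names, alert shape, and spike delta are assumptions, and a real integration would push these actions into the CRM or a message bus rather than return them.

```python
# Sketch of threshold-driven routing and alerting on a score update.
# MQL/SQL cutoffs mirror the sample model; the 15-point dormant-spike
# delta and the action dicts are hypothetical.

MQL_THRESHOLD = 65
SQL_THRESHOLD = 80
SPIKE_ALERT_DELTA = 15  # alert the owner when a dormant account jumps this much


def route(lead: dict, old_score: float, new_score: float) -> list[dict]:
    """Emit routing/alert actions when a score crosses a threshold."""
    actions = []
    # Lead crossed the MQL threshold upward: route to the SDR queue.
    if old_score < MQL_THRESHOLD <= new_score:
        actions.append({"type": "route", "queue": "sdr",
                        "lead": lead["id"], "score": new_score})
    # Dormant account spiking: alert the owning AE/CSM with the delta.
    if lead.get("dormant") and new_score - old_score >= SPIKE_ALERT_DELTA:
        actions.append({"type": "alert", "to": lead.get("owner", "ae"),
                        "lead": lead["id"],
                        "delta": round(new_score - old_score, 1)})
    # Score decayed back below MQL: reclassify downward.
    if new_score < MQL_THRESHOLD <= old_score:
        actions.append({"type": "reclassify", "lead": lead["id"],
                        "from": "MQL", "to": "unqualified"})
    return actions
```

Note that a single update can emit several actions at once, which matches the log above: a dormant account jumping from 54 to 71 both routes as a new MQL and fires a spike alert to its owner.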

Today vs. with Score Leads & Accounts

Today

Scoring model configured once, never revisited — awards points for job title match and an email open regardless of whether those signals predict conversion

Scores calculated at first contact and never updated — high-score leads from six months ago occupy the top of the queue while current hot accounts sit unnoticed

Score sits in a CRM field that reps check or ignore — no automated routing, no alerts, no operational connection to what happens downstream

With Score Leads & Accounts

Multi-dimensional model calibrated from closed-won data — weights reflect what actually predicted revenue, not what seemed reasonable at setup

Dynamic scoring updated in real time as signals arrive — scores decay with inactivity, spike on intent signals, and reflect the lead's current state

Score thresholds trigger routing, alerts, and territory trend signals — the model runs the queue prioritization, not the rep's judgment of which field to sort by

Three layers, one platform by Lantern

Every agent runs on three layers: a unified data model, 150+ enrichment providers, and an open-source engine where every decision is auditable.

Data Waterfall

150+ enrichment providers. Sequential routing optimized per segment. The best answer wins. No vendor lock-in.

Agent Engine

Open-source execution engine. Workflows defined in code. Human-in-the-loop checkpoints. Full audit trail on every action.

Revenue Ontology

Every data source normalized into one model. Entity resolution across systems. Relationships stored, not inferred. Schema that evolves with your business.

FAQ

How long does initial model configuration take?

What intent data providers does the scoring model support?

Can we have separate scoring models for different segments?

How does score decay work?

Prioritization built on guesswork is just noise with extra steps — build a model that actually predicts revenue.

USE CASES

Revenue Team

Marketing Team

Customer Success

PRICING

Pricing

RESOURCES

Blog

About Lantern

Status

Support

© LANTERN 2025

Terms

Privacy

LinkedIn
