By Kakiyo · MQL

MQLs: Definition, Scoring, and Handoff

Practical, audit-friendly approach to defining MQLs, building defensible scoring, and creating handoff packets that increase sales follow-up and pipeline quality.


MQLs are supposed to simplify go-to-market execution: marketing identifies the best leads, sales follows up fast, pipeline becomes more predictable. In practice, MQLs often do the opposite. Teams argue about what “qualified” means, scoring gets gamed by vanity activity, and SDRs stop trusting the queue.

A useful way to think about MQLs is this: an MQL is not a label, it’s a contract. It’s an operational promise that (1) the lead fits who you sell to, (2) there is evidence of intent, and (3) sales can take a consistent next step without redoing marketing’s work.

Below is a practical, audit-friendly approach to MQLs: definition, scoring, and handoff, designed to reduce ambiguity and increase conversion to real sales conversations.

What exactly is an MQL?

An MQL (Marketing Qualified Lead) is a lead that marketing has validated as worth sales follow-up based on agreed criteria. “Agreed” is the key word. If sales would not act on it quickly and consistently, it is not qualified.

A defensible MQL definition typically includes:

  • Fit: Are they the right type of company and persona for your offer?
  • Intent: Did they do something that suggests buying interest (not just content curiosity)?
  • Recency: Did it happen recently enough that sales outreach is likely to land?

MQLs sit in the middle of the funnel (MOFU): the person is not necessarily ready to buy today, but there is enough signal to justify a sales touch.

To avoid stage confusion, it helps to make the handoffs explicit.

| Stage | Owner | What’s true at this stage | Typical next action |
| --- | --- | --- | --- |
| Lead / Inquiry | Marketing | Known contact, limited evidence of buying intent | Nurture or light qualification |
| MQL | Marketing | Meets your fit + intent + recency contract | Route to SDR fast, with context |
| SAL (Sales Accepted Lead) | Sales/SDR | SDR acknowledges and starts the motion | Start conversation, verify basics |
| SQL | Sales/SDR | Sales has confirmed real opportunity potential | Book meeting or create opportunity |

If your team also uses PQLs (Product Qualified Leads), treat them as their own source of MQLs, not a replacement, because the scoring and handoff packet are usually different.

If you want a companion piece on aligning stages across the funnel, see MQLs and SQLs: Align Definitions, Boost Pipeline Health.

A practical MQL definition you can defend (and measure)

Most MQL problems come from one of two failure modes:

  • “Anything that breathes” MQLs: too many low-intent leads, SDRs ignore the queue.
  • “Unicorn-only” MQLs: too strict, marketing under-credits meaningful demand.

A robust definition uses two layers: gating criteria (hard rules) and scoring (soft rules).

1) Set non-negotiable gates

Gates prevent obvious bad leads from becoming “qualified” due to noisy activity.

Common gates:

  • Company is within your serviceable geography (if relevant)
  • Company size is within your target band
  • Persona is within your selling motion (or at least adjacent)
  • No disqualifiers (student, vendor, competitor, job seeker, consultant-only inquiry, etc.)

Gates should be boring, stable, and documented. If gates change weekly, your MQL reporting will be meaningless.
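As a concrete sketch, the gates can live in one small, stable function. Every value here — geographies, employee band, persona labels, disqualifier tags — is illustrative; map them to your own CRM fields.

```python
# Hard gates: a lead that fails any of these can never become an MQL,
# no matter how much activity it generates. All values are illustrative.
ALLOWED_GEOS = {"US", "CA", "UK"}
TARGET_EMPLOYEES = range(50, 1001)
SELLING_PERSONAS = {"revops", "sales_ops", "marketing_ops"}
DISQUALIFIERS = {"student", "vendor", "competitor", "job_seeker"}

def passes_gates(lead: dict) -> bool:
    """Return True only if every non-negotiable gate passes."""
    return (
        lead.get("geo") in ALLOWED_GEOS
        and lead.get("employees", 0) in TARGET_EMPLOYEES
        and lead.get("persona") in SELLING_PERSONAS
        and not (set(lead.get("tags", [])) & DISQUALIFIERS)
    )
```

Because gates are hard rules, keep a definition like this in version control and change it deliberately, not weekly.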

2) Define what counts as intent, in your business

Intent is not “they read a blog post.” Intent is behavior that suggests evaluation.

Examples of higher-intent behaviors:

  • Pricing page visits (with time-on-page thresholds)
  • Demo request, contact sales, “talk to an expert”
  • Attending a product-focused webinar (not a broad thought-leadership event)
  • Replying to outbound with a relevant question
  • LinkedIn conversation signals like “How does this work for X?” or “What’s pricing?”

A key improvement for many teams is to stop treating all channels equally. A “reply with a constraint” in a conversation often carries more buying information than 10 anonymous pageviews.

For a deeper look at channel-specific triggers, see Marketing Qualified Lead: Definition, Triggers, Handoff.

3) Put a recency window in writing

Without a recency window, you will route stale leads that cannot be actioned.

A common starting point:

  • Inbound high-intent (demo/contact): 1 to 7 days
  • Behavioral intent (pricing, comparison): 7 to 14 days
  • Event/webinar intent: 7 to 21 days

Pick windows that match your sales cycle and follow-up capacity, then revisit with data.
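A minimal sketch of a recency check, using the starting windows above; the intent-type names and exact day counts are placeholders to tune against your sales cycle.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Example recency windows in days, matching the starting points above.
RECENCY_DAYS = {
    "inbound_high_intent": 7,   # demo request / contact sales
    "behavioral_intent": 14,    # pricing, comparison pages
    "event_intent": 21,         # webinar or event attendance
}

def within_window(intent_type: str, last_intent_at: datetime,
                  now: Optional[datetime] = None) -> bool:
    """True if the last intent signal is fresh enough to route."""
    now = now or datetime.now(timezone.utc)
    window = RECENCY_DAYS.get(intent_type)
    if window is None:
        return False  # unknown intent types never qualify on recency
    return now - last_intent_at <= timedelta(days=window)
```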

[Figure: a simple funnel diagram with five labeled steps — Lead, MQL, SAL, SQL, Opportunity — each showing the owner (Marketing or Sales) and a brief note of the required evidence (fit, intent, recency, acceptance, validated need).]

MQL scoring: build a model that drives action, not arguments

Once gates are set, scoring should answer one operational question: Should sales act now, nurture, or ignore?

A good scoring model is:

  • Explainable (a rep can see why the score is high)
  • Calibrated (a threshold corresponds to a real likelihood of becoming an SQL)
  • Resistant to gaming (does not over-reward easy-to-trigger activity)

Start with three scoring pillars

To keep scoring both rigorous and usable, score across three pillars, with weights that match your motion:

  • Fit (0–50): company and persona alignment
  • Intent (0–30): buying behaviors and signals
  • Readiness (0–20): near-term indicators that sales can move forward

This structure intentionally separates “they are our target” from “they are active” from “they can take a next step.” Many scoring models blur these and create false positives.
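The pillar split can be sketched as a tiny capping function. The caps match the 50/30/20 bands above; everything else, including the input points, is up to your weighting.

```python
def mql_score(fit_pts: int, intent_pts: int, readiness_pts: int) -> dict:
    """Cap each pillar so no single pillar can dominate the total (max 100)."""
    fit = max(0, min(fit_pts, 50))
    intent = max(0, min(intent_pts, 30))
    readiness = max(0, min(readiness_pts, 20))
    return {
        "fit": fit,
        "intent": intent,
        "readiness": readiness,
        "total": fit + intent + readiness,
    }
```

Reporting the pillars separately, not just the total, is what lets a rep see why a score is high.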

Example scoring rubric (customize to your ICP)

Use this as a starting template, not as universal truth.

| Pillar | Signal type | Example signal | Points (example) | Notes |
| --- | --- | --- | --- | --- |
| Fit | Firmographics | Target employee band | +10 to +20 | Scale by how narrow your ICP is |
| Fit | Industry | In your top industries | +5 to +15 | Use a controlled picklist |
| Fit | Persona | Economic buyer / champion persona | +10 to +20 | Tie to past wins |
| Fit | Account | Existing ABM target account | +10 | Avoid double counting with intent |
| Intent | Web | Pricing or comparison page visit | +8 to +15 | Add time threshold + repeat visits |
| Intent | Form | Demo request / contact sales | +20 to +30 | Often an automatic MQL if gates pass |
| Intent | Email | Reply with question | +10 to +20 | Better than clicks |
| Intent | LinkedIn | Meaningful reply in-thread | +10 to +20 | Score based on content, not just “replied” |
| Readiness | Timing | Explicit project timeframe shared | +5 to +10 | From form, chat, or conversation |
| Readiness | Constraints | Budget/process acknowledged | +5 to +10 | Even “no budget until Q3” is useful |
| Readiness | Meeting | Accepted meeting slot | +20 | This often skips straight to SQL/SAL |

Two rules that keep this clean:

  • Apply decay to intent and readiness signals. Yesterday’s interest is more valuable than last quarter’s.
  • Add negative scoring for disqualifying behaviors (career page only visits, student email domains, “just researching,” vendor outreach).
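Both rules are easy to encode. A sketch with exponential decay plus illustrative negative signals; the 14-day half-life and the penalty values are assumptions to calibrate against your own conversion data.

```python
def decayed_points(points: float, days_ago: float,
                   half_life_days: float = 14.0) -> float:
    """Exponential decay: a signal loses half its value every half-life."""
    return points * 0.5 ** (days_ago / half_life_days)

# Illustrative negative scores for disqualifying behaviors.
NEGATIVE_SIGNALS = {
    "careers_page_only": -15,
    "student_email_domain": -30,
    "just_researching": -10,
    "vendor_outreach": -25,
}
```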

Scoring pitfalls that destroy trust

If SDRs do not trust MQLs, they will work around them. These are the most common causes:

Overweighting easy-to-generate activity

Clicks, opens, and generic content consumption are cheap signals. They can support a story, but they should not define qualification.

No channel normalization

If a webinar attendee and a LinkedIn responder both become MQLs, they should not be treated as equivalent without context. The expected conversion rates differ.

One score trying to do two jobs

If your score tries to predict both “who fits” and “who is ready today,” it will fail at both. Keep pillars distinct and report them separately when possible.

No backtesting

You do not need a data science team to validate scoring. You do need discipline.

At minimum, track:

  • MQL to SAL conversion rate
  • SAL to SQL conversion rate
  • MQL to SQL conversion rate
  • Time-to-first-touch on MQLs
  • “Rejection reasons” (why sales disqualified or ignored)
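Computing the conversion metrics is a few lines once stage counts exist. A sketch, assuming you can pull raw counts per stage from your CRM:

```python
def funnel_rates(counts: dict) -> dict:
    """Derive the core MQL health metrics from raw stage counts."""
    def rate(upper: str, lower: str) -> float:
        return round(counts[lower] / counts[upper], 3) if counts.get(upper) else 0.0
    return {
        "mql_to_sal": rate("mql", "sal"),
        "sal_to_sql": rate("sal", "sql"),
        "mql_to_sql": rate("mql", "sql"),
    }
```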

If you are using predictive scoring in your CRM, it complements, rather than replaces, your operational definition. For Salesforce teams, implementation details and governance matter a lot; see Salesforce Einstein Lead Scoring: Setup, Tips, Pitfalls.

The minimum viable data you need to score MQLs well

Most scoring problems are not math problems. They’re data hygiene problems.

Here is a practical “minimum viable schema” to make MQL scoring explainable and auditable.

| Field | Where it lives | Why it matters | Example |
| --- | --- | --- | --- |
| Lead stage | CRM | Prevents lifecycle ambiguity | Lead, MQL, SAL, SQL |
| MQL date | CRM | Enables SLA measurement | 2026-01-11 |
| MQL reason | CRM or MAP | Explains the trigger | “Pricing page + LinkedIn reply” |
| Fit tier | CRM | Makes routing faster | Tier 1, Tier 2, Tier 3 |
| Intent tier | CRM | Prioritizes outreach | High, Medium, Low |
| Last intent timestamp | MAP/CRM | Powers recency rules | “2026-01-10 14:30 UTC” |
| Source + campaign | MAP/CRM | Supports channel calibration | Webinar, LinkedIn, Paid Search |
| Conversation evidence link | CRM note / system link | Makes handoff actionable | Link to thread summary |

The key field is MQL reason. If sales cannot see why the lead is in their queue, they will assume “marketing inflated it.”
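As one way to make the schema concrete, the fields above map naturally to a single record type. The names here are illustrative, not any specific CRM’s API.

```python
from dataclasses import dataclass

@dataclass
class MqlRecord:
    """Minimum viable MQL fields; map each to your CRM/MAP equivalents."""
    lead_stage: str        # "Lead", "MQL", "SAL", or "SQL"
    mql_date: str          # ISO date, e.g. "2026-01-11"
    mql_reason: str        # the field that earns SDR trust
    fit_tier: str          # "Tier 1" / "Tier 2" / "Tier 3"
    intent_tier: str       # "High" / "Medium" / "Low"
    last_intent_at: str    # ISO timestamp powering recency rules
    source_campaign: str   # e.g. "Webinar", "LinkedIn", "Paid Search"
    evidence_link: str = ""  # link to thread summary or CRM note
```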

MQL handoff: make it a packet, not a ping

Scoring produces an MQL. Handoff determines whether it becomes pipeline.

The best handoffs do two things:

  • Reduce the SDR’s time-to-context
  • Reduce requalification work

Define the MQL to SAL SLA

An SLA is not just a response time goal. It’s a commitment that makes the stage meaningful.

A simple SLA framework:

  • Marketing routes MQLs in near real time (minutes, not days)
  • Sales accepts or rejects within a set window (often same day for high-fit)
  • Rejected MQLs require a coded reason, so marketing can fix the upstream logic

If you do not measure acceptance time and rejection reasons, you do not have an SLA, you have a suggestion.
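Measuring the SLA is mostly timestamp arithmetic. A sketch, where the 8-hour acceptance window is an assumption to set per fit tier:

```python
from datetime import datetime, timedelta
from typing import Optional

def sla_breached(routed_at: datetime, decided_at: Optional[datetime],
                 now: datetime, window_hours: int = 8) -> bool:
    """True if the MQL was not accepted or rejected within the SLA window."""
    deadline = routed_at + timedelta(hours=window_hours)
    # An undecided lead is judged against the current time.
    return (decided_at or now) > deadline
```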

Build a handoff packet SDRs can actually use

A high-performing handoff packet answers:

  • Who is this, and why them?
  • What did they do that indicates intent?
  • What should I say first?
  • What is the next best step if they engage?

Here is a lightweight template you can operationalize in CRM notes or an automated enrichment step:

| Packet element | What “good” looks like | Example |
| --- | --- | --- |
| One-line context | Fit + trigger summary | “VP RevOps at 250-person SaaS, visited pricing twice, replied on LinkedIn asking about integrations.” |
| Evidence | Specific, time-stamped | “Pricing page 2x yesterday, LinkedIn reply at 10:14am.” |
| Suggested opener | Short, relevant, non-creepy | “Saw you were looking at integrations, are you exploring this for your current tooling or a new stack?” |
| Qualification next step | Clear micro-yes | “If it helps, I can share how teams like you handle X, want a 2-minute overview?” |
| Routing note | Who owns it and why | “Tier 1 ICP, route to SDR Team A.” |

This packet is especially important for conversational channels like LinkedIn, where the context often lives outside the CRM.

For a more end-to-end qualification system (including scoring and automation), see Lead Qualification Process: Steps, Scoring, and Automation.

Using LinkedIn conversation signals to improve MQL quality

Many teams treat LinkedIn as a top-of-funnel brand channel. It’s also a high-signal qualification channel because you can capture intent in the prospect’s own words.

Examples of conversation signals that should influence MQL scoring and handoff:

  • Problem acknowledgment (“We’re struggling with X right now.”)
  • Constraint disclosure (“We tried a tool but adoption failed.”)
  • Process detail (“We need security review before we buy.”)
  • Timeline (“Re-evaluating vendors in Q2.”)
  • Stakeholder mapping (“Looping in our Head of Sales Ops.”)

These signals are often more predictive than passive engagement. The challenge is collecting them consistently and translating them into structured fields that route correctly.

This is where purpose-built conversation automation can help.

Where Kakiyo fits (without breaking your funnel)

Kakiyo is designed to manage personalized LinkedIn conversations at scale, from first touch to qualification to meeting booking. For MQL operations, that matters in three concrete ways:

  • Consistency: Kakiyo can run repeatable qualification prompts (using customizable prompts and industry templates) so your “intent capture” does not depend on individual rep style.
  • Experimentation: A/B prompt testing helps you learn which conversation openers and follow-ups generate higher-quality MQLs, not just more replies.
  • Actionability: An intelligent scoring system and real-time analytics make it easier to prioritize conversations and pass clear evidence into the sales handoff.

If you are scaling LinkedIn as a qualification channel, also review Automated LinkedIn Outreach: Do It Safely and Effectively to keep pacing, permission, and human oversight in place.

[Figure: a dashboard-style illustration with three panels — an MQL score breakdown (Fit, Intent, Readiness), a list of active LinkedIn conversations with statuses (New reply, Qualifying, Meeting proposed), and a handoff card containing a short summary and next suggested message.]

A simple operating cadence to stabilize MQLs (in 30 days)

You do not need a quarter-long “redefinition project” to improve MQL quality. You need a short sprint, a pilot, and tight feedback loops.

Week 1: Definition sprint (sales and marketing together)

  • Agree on gates (what can never be an MQL)
  • Agree on the top 3 to 5 intent triggers per channel
  • Define recency windows
  • Decide what must be included in the handoff packet

Output: a one-page MQL contract.

Week 2: Scoring pilot

  • Implement the three-pillar scoring model
  • Add decay and negative scoring
  • Create “MQL reason” visibility for reps
  • Set initial thresholds for action (act now, nurture, disqualify)

Output: a working model, even if imperfect.
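The initial action thresholds from the pilot can be one small function. The cutoffs below are placeholders to replace with backtested values from weeks 3 to 4.

```python
def next_action(total_score: int, act_now_at: int = 70,
                nurture_at: int = 40) -> str:
    """Map a capped 0-100 score to the operational decision."""
    if total_score >= act_now_at:
        return "act_now"
    if total_score >= nurture_at:
        return "nurture"
    return "disqualify"
```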

Weeks 3 to 4: SLA + feedback loop

  • Turn on routing rules and enforce response SLAs
  • Require rejection reasons (with a short picklist)
  • Review weekly: which MQL reasons convert, which fail, and why
  • Adjust weights and thresholds based on conversion, not opinions

Output: fewer MQLs, higher trust, higher conversion.

What “good” looks like when MQLs are working

When MQLs are healthy, you will see these operational behaviors:

  • SDRs treat the MQL queue as a priority, not a chore
  • Rejection reasons decrease over time (or become more specific)
  • MQL to SAL time shrinks, especially for high-fit
  • Conversion improves not just in volume, but in downstream quality (SQL rate, meeting quality)

If your team wants to use conversation-led qualification to make MQLs more evidence-based, Kakiyo can help by automating and supervising LinkedIn conversations through qualification and booking, while preserving human override control when nuance matters. Learn more at Kakiyo or go deeper on the mechanics in the LinkedIn Prospecting Playbook: From First Touch to Demo.
