Product Security Knowledge Base

📈 Top Metrics for a Product Security Director

A Product Security Director rarely wins by reporting more vulnerabilities. The stronger operating model is to report coverage, prioritization quality, release discipline, remediation efficiency, and evidence quality in a way that engineering leaders and executives can connect to business outcomes.

What this page includes

  • why leadership metrics matter
  • a top-10 metrics set for a Product Security Director
  • how to connect those metrics to business goals and board language
  • example formulas, thresholds, and dashboard slices

Working assumptions

  • the audience includes engineering leadership, product leadership, security leadership, and sometimes audit or customer trust stakeholders
  • the metric system should help steer action, not just describe noise

Why these metrics matter

A Product Security Director is usually accountable for four things at once:

  1. risk reduction for the applications that matter most;
  2. delivery enablement so security is not seen as random friction;
  3. governance and evidence for audits, customers, and release confidence;
  4. program scalability across teams, products, and business units.

That means the dashboard cannot stop at:

  • raw vulnerability counts;
  • total findings by severity;
  • scanner coverage percentages with no ownership context.

The better question is:

Are we making our most important applications safer in a measurable way without breaking delivery?


The top 10 metrics

1) Tier-1 application coverage

Definition
Percentage of business-critical applications that have the required baseline controls attached.

Suggested baseline

  • source repository mapped
  • owner assigned
  • SAST or equivalent code signal
  • SCA or SBOM signal
  • secrets detection
  • image or artifact scanning where applicable
  • release evidence path defined

Formula

tier_1_coverage = covered_tier_1_apps / total_tier_1_apps
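
A minimal Python sketch of the coverage calculation, assuming a simple app inventory where each record lists its attached controls; the field and control names are illustrative, not a fixed schema:

# Illustrative baseline control set; adjust to your own required baseline.
REQUIRED_CONTROLS = {
    "repo_mapped", "owner_assigned", "sast", "sca_or_sbom",
    "secrets_detection", "artifact_scanning", "evidence_path",
}

def tier1_coverage(apps: list[dict]) -> float:
    """Fraction of tier-1 apps with every required baseline control attached."""
    tier1 = [a for a in apps if a.get("tier") == 1]
    if not tier1:
        return 0.0
    covered = [a for a in tier1
               if REQUIRED_CONTROLS <= set(a.get("controls", []))]
    return len(covered) / len(tier1)

apps = [
    {"name": "payments-api", "tier": 1,
     "controls": ["repo_mapped", "owner_assigned", "sast", "sca_or_sbom",
                  "secrets_detection", "artifact_scanning", "evidence_path"]},
    {"name": "billing-ui", "tier": 1, "controls": ["repo_mapped", "sast"]},
]
print(f"tier_1_coverage = {tier1_coverage(apps):.0%}")  # 50%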

Why it matters
Coverage is the foundation for every other metric. If your crown-jewel applications are not onboarded, your posture dashboard is misleading.

Business link
Supports customer trust, regulatory readiness, and concentration-of-risk management.


2) Risk-weighted critical exposure backlog

Definition
Total open risk points for critical findings after weighting by business criticality, internet exposure, and exploitability.

Example weighting

risk_score =
  severity_weight *
  exploitability_weight *
  business_criticality_weight *
  exposure_weight
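
A minimal Python sketch of this multiplicative scoring, with illustrative weight tables that you would calibrate to your own portfolio:

# Illustrative weights; calibrate these to your own risk model.
SEVERITY = {"critical": 10, "high": 6, "medium": 3}
EXPLOITABILITY = {"known_exploited": 3.0, "poc_public": 2.0, "theoretical": 1.0}
CRITICALITY = {1: 3.0, 2: 2.0, 3: 1.0}        # keyed by app tier
EXPOSURE = {"internet": 2.0, "internal": 1.0}

def risk_score(finding: dict) -> float:
    return (SEVERITY[finding["severity"]]
            * EXPLOITABILITY[finding["exploitability"]]
            * CRITICALITY[finding["app_tier"]]
            * EXPOSURE[finding["exposure"]])

def risk_weighted_backlog(findings: list[dict]) -> float:
    """Sum of open risk points across critical findings."""
    return sum(risk_score(f) for f in findings
               if f["status"] == "open" and f["severity"] == "critical")

backlog = risk_weighted_backlog([
    {"status": "open", "severity": "critical", "exploitability": "known_exploited",
     "app_tier": 1, "exposure": "internet"},   # 10 * 3 * 3 * 2 = 180
    {"status": "open", "severity": "critical", "exploitability": "theoretical",
     "app_tier": 3, "exposure": "internal"},   # 10 * 1 * 1 * 1 = 10
])
print(backlog)  # 190.0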

Why it matters
This is much more useful than a flat "number of criticals." It reflects the risks that actually threaten the product and the business.

Business link
Supports executive prioritization, funding justification, and release decisions.


3) Median age of open criticals on internet-facing tier-1 apps

Definition
Median number of days that unresolved critical findings remain open on exposed, high-criticality applications.
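
A minimal Python sketch, assuming each finding records its open date, app tier, and exposure; the field names are illustrative:

from datetime import date
from statistics import median

def median_critical_age(findings: list[dict], today: date) -> float:
    """Median open age, filtered to internet-facing tier-1 criticals."""
    ages = [(today - f["opened"]).days
            for f in findings
            if f["status"] == "open"
            and f["severity"] == "critical"
            and f["app_tier"] == 1
            and f["exposure"] == "internet"]
    return median(ages) if ages else 0.0

findings = [
    {"status": "open", "severity": "critical", "app_tier": 1,
     "exposure": "internet", "opened": date(2024, 5, 1)},
    {"status": "open", "severity": "critical", "app_tier": 1,
     "exposure": "internet", "opened": date(2024, 5, 20)},
]
print(median_critical_age(findings, today=date(2024, 6, 1)))  # 21.5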

Why it matters
Aging tells you whether the organization is actually burning down dangerous backlog or simply accumulating it.

Business link
Direct signal for breach likelihood reduction and leadership accountability.


4) Mean time to remediate (MTTR) by severity and by app tier

Definition
Average time from finding creation to verified closure, segmented by severity and criticality tier.

Recommended segments

  • critical / high / medium
  • tier-1 / tier-2 / tier-3
  • internet-facing / internal
  • code / dependency / cloud / container / secret
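
A minimal Python sketch of the severity-by-tier segmentation, assuming each verified-closed finding records opened and closed dates; the field names are illustrative:

from collections import defaultdict
from datetime import date

def mttr_by_segment(findings: list[dict]) -> dict[tuple, float]:
    """Mean days from creation to verified closure, per (severity, tier)."""
    buckets: dict[tuple, list[int]] = defaultdict(list)
    for f in findings:
        if f.get("closed"):  # count verified closures only
            key = (f["severity"], f["app_tier"])
            buckets[key].append((f["closed"] - f["opened"]).days)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

findings = [
    {"severity": "critical", "app_tier": 1,
     "opened": date(2024, 5, 1), "closed": date(2024, 5, 9)},
    {"severity": "critical", "app_tier": 1,
     "opened": date(2024, 5, 3), "closed": date(2024, 5, 17)},
]
print(mttr_by_segment(findings))  # {('critical', 1): 11.0}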

Why it matters
MTTR shows whether the remediation system works, not merely whether findings exist.

Business link
Ties directly to engineering efficiency, delivery confidence, and exposure reduction.


5) Reopen or recurrence rate

Definition
Percentage of previously closed findings that reopen or reappear in the same control family or service.

Formula

reopen_rate = reopened_findings / closed_findings

Why it matters
A high reopen rate usually means weak fixes, poor regression controls, or ineffective secure engineering practices.

Business link
Helps identify wasted engineering effort and quality issues in the secure SDLC.


6) Security gate pass rate for release candidates

Definition
Percentage of release candidates that pass defined security quality gates on the first attempt.

Good segmentation

  • by product team
  • by environment
  • by gate type: SAST, SCA, DAST, secrets, evidence, signing
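
A minimal Python sketch of the per-team first-pass calculation, assuming each release candidate records its gate results with an attempt number; the field names are illustrative:

from collections import defaultdict

def first_pass_rate_by_team(candidates: list[dict]) -> dict[str, float]:
    """A candidate passes first time when no attempt-1 gate fails."""
    passed, total = defaultdict(int), defaultdict(int)
    for rc in candidates:
        total[rc["team"]] += 1
        if all(g["passed"] for g in rc["gates"] if g["attempt"] == 1):
            passed[rc["team"]] += 1
    return {team: passed[team] / total[team] for team in total}

candidates = [
    {"team": "payments", "gates": [{"attempt": 1, "passed": True}]},
    {"team": "payments", "gates": [{"attempt": 1, "passed": False},
                                   {"attempt": 2, "passed": True}]},
]
print(first_pass_rate_by_team(candidates))  # {'payments': 0.5}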

Why it matters
This tells you whether the organization has shifted security left effectively or is still discovering risk too late.

Business link
Connects directly to release predictability and development throughput.


7) Exception debt

Definition
Count and age distribution of active security exceptions or risk acceptances.

Track

  • total active exceptions
  • exceptions older than 30 / 90 / 180 days
  • exceptions on tier-1 apps
  • exceptions with no review date
  • exceptions with no compensating control
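
A minimal Python sketch that rolls active exceptions into these buckets, assuming each exception records a grant date, app tier, and optional review date and compensating control; the field names are illustrative:

from datetime import date

def exception_debt(exceptions: list[dict], today: date) -> dict:
    """Summarize active exceptions into the tracking buckets above."""
    active = [e for e in exceptions if e["status"] == "active"]

    def age(e: dict) -> int:
        return (today - e["granted"]).days

    return {
        "total_active": len(active),
        "older_than_90d": sum(1 for e in active if age(e) > 90),
        "on_tier1": sum(1 for e in active if e["app_tier"] == 1),
        "no_review_date": sum(1 for e in active if not e.get("review_date")),
        "no_compensating_control": sum(
            1 for e in active if not e.get("compensating_control")),
    }

report = exception_debt(
    [{"status": "active", "granted": date(2024, 1, 10), "app_tier": 1}],
    today=date(2024, 6, 1),
)
print(report)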

Why it matters
Exceptions are not bad by themselves; unmanaged exception debt is.

Business link
Critical for audit readiness, governance maturity, and board-level trust.


8) Ownership completeness

Definition
Percentage of applications and critical findings with a named accountable owner and mapped engineering team.

Formula

ownership_completeness = owned_objects / total_objects

Why it matters
Unowned findings do not get fixed consistently. Unowned applications distort the posture model.

Business link
Improves operational accountability and reduces time lost in triage.


9) Secret exposure rate and rotation completion rate

Definition

  • Secret exposure rate: active or newly leaked credentials per 100 repositories or per 1,000 commits
  • Rotation completion rate: percentage of leaked active credentials rotated and verified within SLA
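
A minimal Python sketch of both rates; the per-100-repositories normalization and the field names are illustrative choices, not a fixed standard:

def secret_exposure_rate(active_leaks: int, repo_count: int) -> float:
    """Active leaked credentials per 100 repositories."""
    return 100 * active_leaks / repo_count if repo_count else 0.0

def rotation_completion_rate(leaks: list[dict]) -> float:
    """Share of leaked active credentials rotated and verified within SLA."""
    if not leaks:
        return 1.0
    on_time = sum(1 for l in leaks if l["rotated_within_sla"] and l["verified"])
    return on_time / len(leaks)

print(secret_exposure_rate(active_leaks=6, repo_count=400))   # 1.5
print(rotation_completion_rate([
    {"rotated_within_sla": True, "verified": True},
    {"rotated_within_sla": True, "verified": False},
]))                                                            # 0.5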

Why it matters
Secrets are one of the fastest paths from code hygiene failure to real compromise.

Business link
Strong indicator for incident prevention and cloud risk reduction.


10) Provenance and release evidence completeness

Definition
Percentage of releases for in-scope applications that include required evidence such as:

  • build provenance
  • SBOM
  • signature or attestation where required
  • scan results
  • gate decision record
  • approving actor or workflow trace
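
A minimal Python sketch that checks production releases against the required evidence list, with illustrative evidence keys:

# Illustrative evidence keys mirroring the required list above.
REQUIRED_EVIDENCE = {
    "provenance", "sbom", "signature", "scan_results",
    "gate_decision", "approver_trace",
}

def evidence_completeness(releases: list[dict]) -> float:
    """Fraction of production releases carrying every required artifact."""
    prod = [r for r in releases if r["env"] == "prod"]
    if not prod:
        return 1.0
    complete = [r for r in prod if REQUIRED_EVIDENCE <= set(r["evidence"])]
    return len(complete) / len(prod)

releases = [
    {"env": "prod", "evidence": ["provenance", "sbom", "signature",
                                 "scan_results", "gate_decision",
                                 "approver_trace"]},
    {"env": "prod", "evidence": ["sbom", "scan_results"]},
]
print(f"{evidence_completeness(releases):.0%}")  # 50%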

Why it matters
A Product Security Director is often expected to explain not only whether the release passed, but why it was considered acceptable.

Business link
Supports secure release governance, customer assurance, and compliance.


One view table: metric, audience, business value

Metric | Primary audience | Why leadership cares
Tier-1 application coverage | Director, VP Eng, CISO | shows whether the security program covers what matters
Risk-weighted backlog | Director, CISO | shows true risk concentration
Critical finding age | Director, Eng leadership | shows whether dangerous backlog is stagnating
MTTR | Director, EMs | shows fix-system efficiency
Reopen rate | Director, AppSec managers | shows quality of remediation
Gate pass rate | Release managers, engineering leaders | shows release predictability and early discovery maturity
Exception debt | CISO, audit, Director | shows governance health
Ownership completeness | Director, platform leaders | shows accountability quality
Secret exposure / rotation | Cloud and platform leaders | shows operational hygiene and incident prevention
Evidence completeness | Director, audit, customer trust | shows defensibility of the release process

How to tie these metrics to business goals

A Product Security Director usually has to translate technical data into one of the following business narratives.

A) Revenue protection

Use:

  • tier-1 coverage
  • risk-weighted backlog on revenue-critical apps
  • median age of criticals on internet-facing apps

Narrative:
"We are reducing concentrated risk on the services that directly affect revenue and customer trust."

B) Release confidence and developer productivity

Use:

  • gate pass rate
  • MTTR
  • recurrence rate

Narrative:
"Security is becoming more predictable and less disruptive because issues are found earlier and fixed with less rework."

C) Audit and customer assurance

Use:

  • evidence completeness
  • exception debt
  • coverage by regulated application tier

Narrative:
"We can demonstrate, not just assert, that secure release controls are being executed consistently."

D) Program scale and governance maturity

Use:

  • ownership completeness
  • portfolio coverage
  • trend lines across business units

Narrative:
"The program is scaling beyond heroics and is becoming a reliable operating system."


Example dashboard slices

Executive view

  • Tier-1 coverage trend
  • Risk-weighted critical backlog trend
  • Median critical age for internet-facing apps
  • Exception debt by business unit
  • Release evidence completeness

Engineering leadership view

  • Gate pass rate by team
  • MTTR by team and control family
  • Recurrence rate by repo or service
  • Secret leakage rate by org or project
  • Ownership completeness by portfolio

Product security operations view

  • New findings vs closed findings ratio
  • Reimport stability / dedup quality
  • Exception review SLA breaches
  • Coverage gaps by control family
  • Release blocks by root cause

Example target ranges

These are illustrative, not universal.

Metric | Example target
Tier-1 coverage | > 95%
Median age of criticals on tier-1 internet-facing apps | < 14 days
Critical MTTR | < 10 days
Reopen rate | < 5%
First-pass gate rate | > 85%
Exceptions older than 90 days | trending down every quarter
Ownership completeness | 100% for tier-1 and tier-2 apps
Secret rotation SLA | > 95% on time
Evidence completeness for production releases | 100% for in-scope apps

Why this matters specifically for ASPM and ASOC platforms

ASOC-style (application security orchestration and correlation) and ASPM-style (application security posture management) platforms become strategically useful when they improve these leadership metrics, not when they merely ingest more tools.

A mature platform should make it easier to answer:

  • which tier-1 applications are under-scanned?
  • which teams repeatedly fail gates?
  • which findings are aging on the apps that matter most?
  • which exceptions are accumulating without compensating controls?
  • which releases lack sufficient evidence?

If your platform cannot answer those questions cleanly, it is still acting like a noisy scanner aggregator rather than a product security operating layer.


Example metric mapping in YAML

metrics:
  - id: tier1_coverage
    title: Tier-1 application coverage
    owner: product-security-director
    formula: covered_tier1_apps / total_tier1_apps
    source_objects:
      - applications
      - connectors
      - owners
    target: ">= 0.95"

  - id: gate_pass_rate
    title: First-pass release gate success
    owner: release-security-lead
    formula: first_pass_releases / total_release_candidates
    source_objects:
      - release_candidates
      - gate_results
    target: ">= 0.85"

  - id: evidence_completeness
    title: Production release evidence completeness
    owner: product-security-operations
    formula: releases_with_full_evidence / prod_releases
    source_objects:
      - releases
      - evidence_artifacts
    target: "== 1.0"
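
A minimal Python sketch that could consume a mapping like the one above, assumed saved as metrics.yaml, and evaluate each formula and target against measured values. It requires PyYAML, and the eval call is acceptable here only because the config file is trusted:

import operator
import yaml  # PyYAML

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

def evaluate(metrics_path: str, values: dict[str, float]) -> None:
    """Evaluate each metric formula against measured values and its target."""
    with open(metrics_path) as fh:
        config = yaml.safe_load(fh)
    for m in config["metrics"]:
        # Formulas are simple arithmetic over known value names (trusted config).
        actual = eval(m["formula"], {"__builtins__": {}}, values)
        op_token, threshold = m["target"].split()
        status = "PASS" if OPS[op_token](actual, float(threshold)) else "FAIL"
        print(f"{m['id']}: {actual:.2f} (target {m['target']}) -> {status}")

evaluate("metrics.yaml", {
    "covered_tier1_apps": 46, "total_tier1_apps": 48,
    "first_pass_releases": 88, "total_release_candidates": 100,
    "releases_with_full_evidence": 120, "prod_releases": 120,
})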

Footer note: Good leadership metrics reduce ambiguity. Great leadership metrics make it obvious where to invest, what to fix first, and how security helps the business ship safer software with less friction.