
๐Ÿ“ Security Metrics and KPIs โ€” Coverage, MTTR, Finding Aging, Threat-Model Coverage, Secret Exposure Rate, and Business Translation

Intro: Security metrics matter only when they change decisions. A good metric helps an engineering or product leader decide where to invest, what to escalate, and whether a release, platform, or program is moving in the right direction.

What this page includes

  • the most useful product-security KPIs for engineering-led programs
  • metric definitions, anti-patterns, and business translation
  • examples of targets and governance use

Design rules for useful metrics

  • prefer decision-support metrics over vanity dashboards
  • measure coverage, latency, aging, and exception debt, not only raw finding counts
  • define the denominator explicitly (see the sketch after this list)
  • tie each metric to an owner and a review cadence
  • use trends and segmentation by criticality, not one global average that hides reality
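
As a minimal sketch of what these rules can look like when written down, the Python structure below captures an explicit denominator, a named owner, a review cadence, and segmentation. `MetricDefinition` and all of its field names and example values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    # Illustrative schema: every field name here is an assumption, not a standard.
    name: str
    numerator: str        # what counts as a "hit", written out in words
    denominator: str      # the explicit in-scope population
    owner: str            # the single accountable owner
    review_cadence: str   # e.g. "monthly engineering review"
    segments: list[str]   # report per segment, never one global average

design_review_coverage = MetricDefinition(
    name="Design review coverage",
    numerator="tier-1/tier-2 systems with a completed design review this period",
    denominator="all tier-1/tier-2 systems in the asset inventory",
    owner="AppSec lead",
    review_cadence="monthly engineering review",
    segments=["criticality tier", "business unit"],
)
```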

High-value KPI set

| KPI | Simple definition | Why leaders care | Common anti-pattern |
| --- | --- | --- | --- |
| Security review coverage | % of in-scope systems or changes that received the intended security review | shows whether the program reaches the estate at all | counting ad hoc reviews with no scope definition |
| Design review coverage | % of critical apps/services that had architecture or secure-design review in the target period | leading indicator of insecure design risk | reporting only raw review count |
| % critical apps with current threat model | share of crown-jewel or tier-1 apps with a maintained threat model | tells leadership whether critical business flows were reviewed intentionally | treating a 3-year-old threat model as current |
| MTTR by severity | mean time to remediate or mitigate findings by severity/environment | speaks directly to risk reduction speed | one MTTR for everything, mixing low and critical |
| Findings aging | age distribution of open findings (for example: 0–30, 31–90, 90+ days) | exposes backlog health and chronic neglect | reporting count only, without age buckets |
| Coverage of secret scanning / secret exposure rate | % of target repos and pipelines covered, plus rate of real exposures found per period | ties directly to breach prevention and credential hygiene | measuring detections without repository coverage |
| Release gate pass rate | % of releases that pass security gates without exception | shows whether release governance works and whether controls are practical | celebrating a high pass rate while silently granting waivers |
| Exception debt | number and age of open risk acceptances/waivers, especially on critical assets | turns exceptions into a visible management problem | storing exceptions in email and forgetting expiry |
| Vulnerability SLA attainment | % of findings closed or mitigated inside the policy SLA | aligns security policy with delivery behavior | using SLA attainment without excluding invalid/duplicate findings |
| Runtime detection dwell or triage time | time from a meaningful alert to triage/containment | tells you whether production detection is operationally useful | mixing false-positive alerts with validated incidents |

Metric formulas that are worth standardizing

Design review coverage

reviewed in-scope apps during period / total in-scope apps

Threat-model coverage for critical apps

critical apps with current threat model / total critical apps

Secret exposure rate

confirmed secret exposures during period / repos or pipelines in-scope
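
A minimal sketch of the three ratio formulas above, with the denominator passed explicitly; the helper name `pct` and every count are hypothetical.

```python
def pct(numerator: int, denominator: int) -> float | None:
    # Explicit denominator; None when the in-scope population is empty.
    return None if denominator == 0 else 100.0 * numerator / denominator

# Hypothetical counts; in practice they come from the asset inventory,
# the review tracker, and secret-scanning tooling.
design_review_coverage = pct(34, 50)   # reviewed / total in-scope apps -> 68.0
threat_model_coverage = pct(11, 12)    # current threat models / critical apps -> ~91.7
secret_exposure_rate = 3 / 420         # confirmed exposures per in-scope repo, per period
```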

Findings aging

Bucket open findings by age and show the distribution, not only the mean.
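
One way to compute the buckets, assuming each open finding is represented by the date it was opened; the function name and bucket labels are illustrative.

```python
from datetime import date

def aging_buckets(opened_dates: list[date], today: date) -> dict[str, int]:
    # Buckets match the 0-30 / 31-90 / 90+ day split used above.
    buckets = {"0-30": 0, "31-90": 0, "90+": 0}
    for opened in opened_dates:
        age = (today - opened).days
        if age <= 30:
            buckets["0-30"] += 1
        elif age <= 90:
            buckets["31-90"] += 1
        else:
            buckets["90+"] += 1
    return buckets

# Hypothetical open dates for three findings:
print(aging_buckets([date(2023, 12, 1), date(2024, 3, 1), date(2024, 3, 20)],
                    today=date(2024, 4, 1)))
# {'0-30': 1, '31-90': 1, '90+': 1}
```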

MTTR by severity

Measure separately for critical, high, and medium findings, and separate production from non-production.
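
A sketch of that segmentation, assuming findings were already validity-filtered (no false positives or duplicates) and carry severity, environment, and days-to-remediate fields; the field names are assumptions.

```python
from collections import defaultdict
from statistics import mean

def mttr_by_segment(closed_findings: list[dict]) -> dict[tuple[str, str], float]:
    # Mean days to remediate per (severity, environment) segment.
    segments = defaultdict(list)
    for f in closed_findings:
        segments[(f["severity"], f["environment"])].append(f["days_to_remediate"])
    return {seg: mean(days) for seg, days in segments.items()}

# Hypothetical, pre-validated findings:
findings = [
    {"severity": "critical", "environment": "prod", "days_to_remediate": 4},
    {"severity": "critical", "environment": "prod", "days_to_remediate": 10},
    {"severity": "high", "environment": "non-prod", "days_to_remediate": 21},
]
print(mttr_by_segment(findings))
# {('critical', 'prod'): 7, ('high', 'non-prod'): 21}
```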

Business translation table

| Metric | Business objective it supports | What a healthy trend looks like |
| --- | --- | --- |
| Design review coverage | reduce expensive late-stage redesign and security defects | upward and stable for tier-1 systems |
| Threat-model coverage for critical apps | protect revenue, trust, and regulated business flows | close to full coverage for crown jewels |
| MTTR critical/high | reduce exposure window and breach likelihood | downward trend with few long-tail outliers |
| Findings aging | reduce unmanaged backlog and audit pain | shrinking 90+ day bucket |
| Secret exposure rate | reduce credential-driven incidents and cloud compromise paths | lower rate with higher preventive coverage |
| Release gate pass rate | maintain delivery speed without bypassing risk controls | stable pass rate with controlled, rare exceptions |
| Exception debt | reduce hidden risk accumulation | low, time-bounded, actively burned down |

Suggested target philosophy

Avoid universal targets copied from another company. Use a maturity-aware model.

| Maturity stage | Good first target set |
| --- | --- |
| Early | define ownership, start measuring coverage, establish 30/60/90 aging buckets, create an exception log |
| Growing | set severity-based MTTR targets, require threat models for critical apps, track release gate outcomes |
| Mature | segment by product line, business criticality, and attack surface; tie metrics to portfolio and leadership reviews |

Example KPI pack for a monthly engineering review (see the sketch after the list)

  1. design review coverage for tier-1 and tier-2 systems;
  2. % critical apps with current threat model;
  3. critical / high findings aging distribution;
  4. MTTR for critical and high findings;
  5. secret exposure rate and repo coverage;
  6. count and aging of open exceptions;
  7. release gate exception count and reasons.
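
For illustration, the pack can be assembled into one reviewable structure; every key and number below is hypothetical and would be pulled from the inventory, finding tracker, and exception log in practice.

```python
# Hypothetical monthly pack; all values are illustrative placeholders.
monthly_kpi_pack = {
    "design_review_coverage": {"tier-1": 0.92, "tier-2": 0.74},
    "threat_model_coverage_critical": 11 / 12,
    "aging_critical_high": {"0-30": 14, "31-90": 6, "90+": 2},
    "mttr_days": {"critical": 7.0, "high": 19.5},
    "secret_exposure": {"rate_per_repo": 3 / 420, "repo_coverage": 0.88},
    "exception_debt": {"open": 9, "older_than_90d": 2},
    "release_gate": {"exception_count": 3, "reasons": ["legacy dependency", "pen test pending"]},
}
```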

Anti-patterns to avoid

  • measuring only total findings
  • rewarding teams for hiding or downgrading findings
  • using MTTR with no validity filtering
  • mixing application, infrastructure, and hygiene debt into one meaningless number
  • using metrics without a review cadence or owner