
🧮 Collecting Product Security Metrics Without ASPM or ASOC

Intro: Many teams delay security reporting because they assume an ASPM or ASOC platform must exist first. In practice, you can build a credible measurement system from exports, ownership data, and disciplined definitions.

What this page includes

  • a manual collection model
  • source-of-truth options
  • lightweight data definitions
  • a repeatable monthly workflow

The rule

If a metric cannot be collected automatically yet, collect it manually and consistently.

A manual metric is still useful when:

  • the definition is stable
  • the owner is clear
  • the reporting cadence is predictable
  • the audience understands the confidence level

Minimal source set

You can start with five sources; a small config sketch follows the list:

  1. Repo / pipeline platform
    GitLab or GitHub for coverage, gate pass rate, and pipeline adoption.
  2. Scanner artifacts
    SAST, SCA, IaC, secret scanning, container scanning, mobile scanning.
  3. Issue tracker
    Jira, GitHub Issues, Linear, or ServiceNow for ownership and remediation status.
  4. Exception log
    Markdown, spreadsheet, ticket queue, or lightweight database.
  5. Asset inventory
    System list with product owner, business tier, and environment.
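
If it helps, the source set can live as a small, versioned mapping that the monthly scripts read, so nobody has to remember where each export lands. A minimal sketch in Python; every key and path below is an assumption about your layout, not a requirement:

```python
# sources.py - illustrative map of where each monthly export lands.
# The keys mirror the five sources above; paths are assumptions, not a standard.
SOURCES = {
    "pipeline_platform": {"kind": "api_or_csv",  "path": "exports/pipelines.csv"},
    "scanner_artifacts": {"kind": "json_files",  "path": "exports/findings/"},
    "issue_tracker":     {"kind": "csv_export",  "path": "exports/tickets.csv"},
    "exception_log":     {"kind": "spreadsheet", "path": "exports/exceptions.csv"},
    "asset_inventory":   {"kind": "csv_export",  "path": "exports/inventory.csv"},
}
```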

Data dictionary you need first

Before building charts, define:

  • what counts as an application
  • what counts as tier-1 / tier-2
  • what is a critical finding
  • what date starts the remediation clock
  • what events count as a release
  • what counts as “covered” by a control

If those definitions drift, the dashboard becomes political instead of useful.
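
One way to keep those definitions from drifting is to pin them in a small versioned file that the collection scripts import. A minimal sketch; the wording and thresholds below are placeholders for whatever your team actually agrees on:

```python
# data_dictionary.py - the agreed definitions, versioned next to the metrics.
# All values are placeholders; the point is that definitions live in one place.
DATA_DICTIONARY = {
    "application": "A deployable service or product with its own owner and repo(s).",
    "tier_1": "Customer-facing or revenue-critical systems.",
    "tier_2": "Internal systems whose outage degrades but does not stop the business.",
    "critical_finding": "Scanner severity 'critical', confirmed reachable in production.",
    "remediation_clock_start": "The finding's first_seen date, not the triage date.",
    "release": "Any production deployment tagged in the pipeline platform.",
    "covered_by_control": "The control runs on every default-branch pipeline of the repo.",
}

# Placeholder SLA thresholds, in days, used by the age calculations in Step 4.
SLA_DAYS = {"critical": 15, "high": 30, "medium": 90, "low": 180}
```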

Manual monthly workflow

Step 1 – Export the current system inventory

Fields (a loading sketch follows the list):

  • application or service name
  • owner
  • business tier
  • environment
  • repo(s)
  • cloud account / subscription / project
  • API exposure
  • mobile app yes/no
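
A minimal loading sketch for that export, assuming it arrives as a CSV whose headers roughly match the field names above; adjust the column names to whatever your inventory tool really emits:

```python
# inventory.py - load the monthly inventory export into a simple structure.
import csv
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    owner: str
    tier: str
    environment: str
    repos: list[str]
    cloud_account: str
    api_exposed: bool
    mobile_app: bool

def load_inventory(path: str) -> list[Asset]:
    """Read the inventory CSV; the header names below are assumptions."""
    assets = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            assets.append(Asset(
                name=row["application"],
                owner=row["owner"],
                tier=row["business_tier"],
                environment=row["environment"],
                repos=[r.strip() for r in row["repos"].split(";") if r.strip()],
                cloud_account=row["cloud_account"],
                api_exposed=row["api_exposure"].strip().lower() == "yes",
                mobile_app=row["mobile_app"].strip().lower() == "yes",
            ))
    return assets
```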

Step 2 – Export security findings

Collect from the following; a loader sketch comes after the list:

  • GitLab vulnerability report or JSON artifacts
  • GitHub code scanning / secret scanning views
  • Trivy, Checkov, Semgrep, ZAP, MobSF, or DefectDojo exports
  • cloud posture findings from AWS, Azure, or GCP
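
A minimal collection sketch, assuming each tool's export has already been saved as a JSON file in one directory. The directory layout and file naming are assumptions; per-tool parsing is deliberately deferred to Step 3:

```python
# collect.py - gather the raw scanner exports for one reporting month.
import json
from pathlib import Path

def load_raw_exports(directory: str) -> dict[str, object]:
    """Return {tool_name: parsed JSON} for every *.json export found."""
    raw = {}
    for path in Path(directory).glob("*.json"):
        with path.open(encoding="utf-8") as f:
            raw[path.stem] = json.load(f)   # file name doubles as the tool name
    return raw

# Example usage (paths are illustrative):
# raw = load_raw_exports("exports/findings")
# print(sorted(raw))   # e.g. ['checkov', 'gitlab', 'semgrep', 'trivy']
```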

Step 3 – Normalize with a simple schema

Use columns such as these (a normalization sketch follows the list):

  • source_tool
  • product
  • service
  • finding_title
  • severity
  • confidence
  • state
  • first_seen
  • last_seen
  • owner
  • exception_id
  • release_blocking yes/no
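
A minimal normalization sketch against that schema. The Semgrep extractor is illustrative only; field names vary by tool and export version, so expect to write one small extractor per scanner:

```python
# normalize.py - flatten per-tool findings into one shared row schema.
import csv

COLUMNS = [
    "source_tool", "product", "service", "finding_title", "severity",
    "confidence", "state", "first_seen", "last_seen", "owner",
    "exception_id", "release_blocking",
]

def normalize_semgrep(raw: dict, product: str, service: str, owner: str) -> list[dict]:
    """Illustrative extractor; adjust key names to your Semgrep export version."""
    rows = []
    for result in raw.get("results", []):
        rows.append({
            "source_tool": "semgrep",
            "product": product,
            "service": service,
            "finding_title": result.get("check_id", "unknown"),
            "severity": result.get("extra", {}).get("severity", "unknown").lower(),
            "confidence": "",          # scanner-reported confidence, if available
            "state": "open",
            "first_seen": "",          # filled in by diffing against last month's sheet
            "last_seen": "",
            "owner": owner,
            "exception_id": "",
            "release_blocking": "no",
        })
    return rows

def write_normalized(rows: list[dict], path: str) -> None:
    """Write all normalized rows to the one CSV the later steps work from."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)
```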

Step 4 – Derive leadership metrics

Examples (a derivation sketch comes after the list):

  • critical findings older than SLA
  • new findings this month
  • closed findings this month
  • release pass rate
  • exception volume and age
  • scanner coverage by app tier
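
A sketch of two of those metrics computed over the normalized sheet, assuming ISO dates (YYYY-MM-DD) and the placeholder SLA thresholds from the data-dictionary sketch above:

```python
# metrics.py - derive leadership metrics from the normalized findings sheet.
import csv
from datetime import date

SLA_DAYS = {"critical": 15, "high": 30, "medium": 90, "low": 180}  # placeholders

def load_rows(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def critical_past_sla(rows: list[dict], today: date) -> int:
    """Open critical findings older than their SLA, measured from first_seen."""
    count = 0
    for row in rows:
        if row["state"] != "open" or row["severity"] != "critical":
            continue
        if not row["first_seen"]:
            continue  # unknown age: flag in the Step 5 review instead of guessing
        age = (today - date.fromisoformat(row["first_seen"])).days
        if age > SLA_DAYS["critical"]:
            count += 1
    return count

def new_this_month(rows: list[dict], month: str) -> int:
    """month is 'YYYY-MM'; counts findings whose first_seen falls in that month."""
    return sum(1 for r in rows if r["first_seen"].startswith(month))
```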

Step 5 – Review for obvious distortion

Check for the following; a few sanity-check queries are sketched after the list:

  • duplicate findings from multiple sources
  • findings with missing owners
  • old tickets marked “accepted” with no expiry
  • tools with known noisy rule packs
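
Those checks can be simple queries over the same normalized rows. A sketch; the duplicate key is a heuristic rather than real deduplication, and the expiry check assumes an exception_expiry column has been joined in from the exception log:

```python
# sanity_checks.py - surface obvious distortions before publishing numbers.
from collections import Counter

def missing_owner(rows: list[dict]) -> list[dict]:
    """Findings nobody owns never get fixed; list them explicitly."""
    return [r for r in rows if not r["owner"].strip()]

def likely_duplicates(rows: list[dict]) -> list[tuple[str, str]]:
    """Same title and service reported more than once, usually by multiple tools."""
    seen = Counter((r["finding_title"], r["service"]) for r in rows)
    return [key for key, n in seen.items() if n > 1]

def accepted_without_expiry(rows: list[dict]) -> list[dict]:
    """Accepted findings whose exception has no recorded end date."""
    return [r for r in rows
            if r["state"] == "accepted" and not r.get("exception_expiry")]
```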

Confidence labels

Mark each metric as:

  • A – system-generated and trusted
  • B – system-generated but partially normalized manually
  • C – manually assembled with review

This is far better than pretending low-quality data is fully precise.

A good manual dashboard

A good manual dashboard usually has the following (a per-metric record sketch comes after the list):

  • 8 to 12 metrics
  • one owner for each metric
  • one sentence explaining why it matters
  • one note explaining the data confidence and caveats
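
If you want that structure to be machine-checkable, each dashboard entry can be a small self-describing record. A sketch; the field names are illustrative:

```python
# dashboard.py - one self-describing record per metric on the manual dashboard.
from dataclasses import dataclass

@dataclass
class DashboardMetric:
    name: str
    owner: str
    why_it_matters: str   # one sentence
    confidence: str       # "A", "B", or "C" per the labels above
    caveat: str           # one note on data quality
    value: float | None = None

EXAMPLE = DashboardMetric(
    name="Critical findings older than SLA",
    owner="AppSec lead",
    why_it_matters="Shows whether the riskiest debt is actually shrinking.",
    confidence="B",
    caveat="Scanner exports normalized manually; cross-tool duplicates possible.",
)
```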

Common mistakes

  1. Mixing infrastructure findings and application findings without ownership mapping
  2. Counting scanner events instead of risk-bearing findings
  3. Reporting only totals, not age or trend
  4. Using “number of scans run” as a proxy for security improvement
  5. Hiding exception debt outside the dashboard