
🗓️ Six-Week Product Security Express Audit Plan for a New Director

Intro: A new Product Security director rarely gets the luxury of a slow discovery phase. The organization wants fast answers: where the largest risks are, which controls are real versus theatrical, what evidence exists, and where the first quarter's action plan should focus. This page gives a structured six-week express audit that is deep enough to be credible and fast enough to drive action.

Use this page when

  • you joined a new company and need a fast product-security reality check;
  • leadership wants an outside-in view of control maturity across AppSec, cloud, CI/CD, Kubernetes, identity, and response;
  • you need a first-pass fact base before writing a 90-day or two-quarter transformation plan.

Desired outcomes by the end of week 6

By the end of the audit, the director should be able to say:

  • which product and platform domains are highest risk;
  • which workflows are reliable, inconsistent, or mostly informal;
  • where ownership is unclear;
  • which controls exist only on paper;
  • where evidence is strong enough for audit and incident response;
  • which 5-10 actions should land in the next quarter.

Audit principles

  • do not start with tools; start with flows, ownership, and evidence;
  • prefer a small number of deep samples over a giant spreadsheet of shallow assertions;
  • separate documented policy from operating reality;
  • treat exceptions, admin shortcuts, and "temporary" workarounds as first-class evidence;
  • translate findings into both engineering actions and leadership decisions.

The six workstreams to cover every week

Each workstream is anchored by a core question:

  1. Governance and ownership: who owns risk decisions, release gates, and exceptions?
  2. Secure SDLC and AppSec: how are design review, code review, scanning, and remediation actually run?
  3. CI/CD and supply chain: who can change build, artifact, and deployment trust?
  4. Cloud, Kubernetes, and platform: where can misconfiguration or identity sprawl create blast radius?
  5. Identity, secrets, and crypto: how are privileged access, service identities, secrets, and keys governed?
  6. Detection, evidence, and response: can the org detect, scope, and prove what happened during an incident or audit?

Week-by-week plan

Week 1 — establish scope, inventory, and interview map

Primary goal: understand what the company actually ships and who changes it.

What to do

  • meet engineering leadership, platform owners, SRE, compliance, and key product leads;
  • identify tier-0 / tier-1 systems, crown-jewel data, and internet-facing surfaces;
  • map the software delivery path: repo → CI/CD → artifact → deploy → runtime;
  • request a first-pass artifact pack.

Ask for these artifacts immediately

  • engineering org map and ownership list;
  • service/application inventory;
  • repo inventory and default settings;
  • CI/CD platform inventory;
  • cloud account / subscription / project inventory;
  • cluster inventory;
  • vulnerability-management dashboard or exports;
  • policy/standard list;
  • incident runbooks and last major postmortems.

Output of week 1

  • scope map;
  • interview schedule;
  • first list of "unknowns that should not be unknown."
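The week-1 artifact pack can double as a gap tracker: anything requested but never produced goes straight onto the "unknowns" list. The sketch below is a minimal, hypothetical illustration; the artifact names and the received set are invented, not a fixed standard.

```python
# Hypothetical sketch: track which week-1 artifacts have been received
# and surface the gaps as "unknowns that should not be unknown".
# Artifact names and statuses here are illustrative.

REQUESTED_ARTIFACTS = [
    "engineering org map",
    "service inventory",
    "repo inventory",
    "CI/CD platform inventory",
    "cloud account inventory",
    "cluster inventory",
    "vulnerability dashboard export",
    "policy list",
    "incident runbooks",
]

def unknowns(received: set[str]) -> list[str]:
    """Return requested artifacts that nobody has produced yet."""
    return [a for a in REQUESTED_ARTIFACTS if a not in received]

received = {"service inventory", "repo inventory", "policy list"}
for gap in unknowns(received):
    print(f"UNKNOWN: {gap}")
```

Whatever form the tracker takes, the point is that a gap is itself a finding: an inventory nobody can produce in week 1 rarely appears by week 6.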

Week 2 — AppSec and SDLC operating reality

Primary goal: determine whether security review happens early enough and whether remediation actually closes risk.

Review

  • design review or threat-model intake;
  • security requirements for new systems or high-risk changes;
  • code review expectations and secure coding guidance;
  • SAST / DAST / SCA / secret scanning placement;
  • vuln-triage workflow, SLAs, and exception path;
  • security-champion or developer-enablement model.

Sample-based checks

Pick 3-5 services and ask:

  • what changed in the last 90 days?
  • what security review happened?
  • what findings were opened and how were they prioritized?
  • what was deferred and why?

Output of week 2

  • one-page SDLC maturity snapshot;
  • gaps between policy and sampled reality;
  • first high-confidence findings.

Week 3 — CI/CD, release governance, and supply chain

Primary goal: assess how much trust is concentrated in pipelines, runners, artifacts, and deploy automation.

Review

  • branch protection and repo governance;
  • runner model and isolation;
  • secret handling in pipelines;
  • artifact repositories and image registries;
  • release approvals and protected environments;
  • provenance, signing, or release evidence patterns;
  • third-party build integrations and automation tokens.

Sample-based checks

Take two representative delivery paths and walk them end to end:

  • who can merge;
  • who can re-run or alter pipelines;
  • what identities deploy;
  • what evidence is retained.
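When walking a delivery path, the identities that hold several trust roles at once (merge plus pipeline control plus production deploy) are the fastest route to a trust-boundary diagram. A minimal sketch of that overlap check, with entirely invented role names and identities:

```python
# Hypothetical sketch: flag identities that hold two or more trust
# roles along one delivery path -- a common blast-radius multiplier.
# Role names and identities are illustrative, not a real export.

def overlapping_trust(path: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map identity -> its trust roles, keeping only identities with 2+."""
    holders: dict[str, set[str]] = {}
    for role, identities in path.items():
        for who in identities:
            holders.setdefault(who, set()).add(role)
    return {who: roles for who, roles in holders.items() if len(roles) >= 2}

payments_path = {
    "can_merge": {"alice", "bob", "ci-bot"},
    "can_rerun_pipeline": {"ci-bot", "bob"},
    "can_deploy_prod": {"deploy-bot", "bob"},
}
print(overlapping_trust(payments_path))  # bob holds all three roles
```

A human with end-to-end power over one path, or a bot token shared across stages, usually lands in the week-3 "top 5 weaknesses" list.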

Output of week 3

  • trust-boundary diagram for at least two critical delivery paths;
  • top 5 supply-chain or release-control weaknesses.

Week 4 — Cloud, Kubernetes, and platform control-plane review

Primary goal: identify where infrastructure identity, misconfiguration, or privilege sprawl can magnify product risk.

Review

  • cloud account / subscription / project baseline controls;
  • public exposure governance;
  • IAM / role design and admin boundaries;
  • workload identity vs static secrets;
  • Kubernetes RBAC, namespace model, admission controls, and exception-heavy workloads;
  • posture-finding ownership and remediation loops.

Sample-based checks

For 2-3 high-value environments, verify:

  • who can change production networking or public exposure;
  • who can create privileged workloads;
  • which identities can read secrets or alter runtime policy;
  • whether logs exist for privileged actions.
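For the "which identities can read secrets" check, wildcard grants and secret-read verbs in RBAC are the usual first pass. The sketch below works on a simplified rule shape loosely modeled on the `rules` field of a Kubernetes Role/ClusterRole; the data and role names are made up for illustration.

```python
# Hypothetical sketch: scan simplified RBAC-style rules for wildcard
# grants and secret-read access. The rule shape loosely mirrors a
# Kubernetes Role's `rules` field; the sample data is invented.

def risky_rules(rules: list[dict]) -> list[str]:
    """Return human-readable findings for over-broad grants."""
    findings = []
    for r in rules:
        if "*" in r["resources"] or "*" in r["verbs"]:
            findings.append(f"{r['role']}: wildcard grant")
        elif "secrets" in r["resources"] and {"get", "list"} & set(r["verbs"]):
            findings.append(f"{r['role']}: can read secrets")
    return findings

sample = [
    {"role": "app-reader", "resources": ["pods"], "verbs": ["get", "list"]},
    {"role": "ops-admin", "resources": ["*"], "verbs": ["*"]},
    {"role": "sync-job", "resources": ["secrets"], "verbs": ["get"]},
]
print(risky_rules(sample))
```

In a real review the same questions go to `kubectl` output or a posture tool; the value of the pass is the list of roles nobody can explain.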

Output of week 4

  • cloud/Kubernetes heatmap by domain;
  • list of blast-radius multipliers.

Week 5 — Identity, secrets, crypto, evidence, and response

Primary goal: test whether the organization can both defend itself and prove what happened.

Review

  • joiner/mover/leaver process for privileged roles;
  • emergency access and break-glass design;
  • secrets lifecycle and rotation expectations;
  • KMS/HSM/key ownership and cryptographic usage governance;
  • logging coverage, retention, and immutable evidence path;
  • incident playbooks, containment workflow, and decision ownership.

Sample-based checks

  • review one offboarding case, one privileged-access review, and one emergency-access record if available;
  • test whether a suspicious deployment or privileged action can be reconstructed from logs and records;
  • verify whether backup/restore assurance is documented for critical systems.
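The reconstruction test boils down to: can you stitch one actor's privileged events from the logs into an ordered, defensible timeline? A minimal sketch, with an entirely invented event schema and data (real evidence comes from the SIEM or audit-log exports):

```python
# Hypothetical sketch: reconstruct a suspicious deployment by ordering
# one actor's log events into a timeline. Field names and events are
# illustrative, not a real SIEM schema.

def timeline(events: list[dict], actor: str) -> list[str]:
    """Return the actor's events, ordered by timestamp, as one-line strings."""
    mine = sorted((e for e in events if e["actor"] == actor),
                  key=lambda e: e["ts"])
    return [f"{e['ts']} {e['action']} {e['target']}" for e in mine]

events = [
    {"ts": "09:14", "actor": "svc-deploy", "action": "push-image", "target": "registry/app:9.2"},
    {"ts": "09:02", "actor": "svc-deploy", "action": "assume-role", "target": "prod-deployer"},
    {"ts": "09:20", "actor": "alice", "action": "approve", "target": "release-412"},
    {"ts": "09:21", "actor": "svc-deploy", "action": "rollout", "target": "prod/app"},
]
for line in timeline(events, "svc-deploy"):
    print(line)
```

If a gap in the timeline can only be filled by asking people what they remember, evidence strength for that control is weak by definition.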

Output of week 5

  • evidence-strength rating;
  • identity/secret/crypto control narrative;
  • incident-readiness observations.

Week 6 — synthesis, scoring, and action plan

Primary goal: produce an executive-credible picture plus an engineering-credible next-step plan.

Build the final package

  • domain scorecard;
  • top findings with business effect;
  • top findings with engineering effect;
  • immediate containment items;
  • quarter roadmap candidates;
  • ownership and dependency notes.

Final readout structure

  1. What is working
  2. Where control reliability is weak
  3. Where blast radius is too large
  4. Where evidence is insufficient
  5. What must happen in the next 30 / 60 / 90 days

Suggested scoring model

Use a simple traffic-light score with explanation. Avoid fake precision.

Governance
  • green: owners and decisions are clear
  • yellow: some workflow exists, but exceptions dominate
  • red: unclear ownership and ad hoc decisions

SDLC / AppSec
  • green: review and remediation are repeatable
  • yellow: partial review coverage or weak triage
  • red: mostly reactive or tool-driven only

CI/CD / Supply chain
  • green: delivery trust is well bounded
  • yellow: some strong controls, but weak runner or approval discipline
  • red: broad deploy power and weak evidence

Cloud / Kubernetes
  • green: baseline controls and admin boundaries exist
  • yellow: mixed posture and exception-heavy clusters/accounts
  • red: large blast radius and unclear guardrails

Identity / Secrets / Crypto
  • green: privileged access and key use are governed
  • yellow: some lifecycle controls exist, but rotation/review is inconsistent
  • red: static secrets, broad admin rights, unclear key ownership

Detection / Evidence / Response
  • green: logs and records support scoping and proof
  • yellow: partial visibility, manual reconstruction required
  • red: unreliable evidence and weak response readiness
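For the final readout, it helps to sort the scorecard worst-first so the red domains lead the discussion. A small sketch of that ordering; the scores assigned here are invented examples, not a recommendation.

```python
# Hypothetical sketch: order per-domain traffic-light scores worst-first
# for the week-6 readout. The example scores are invented.

ORDER = {"red": 0, "yellow": 1, "green": 2}

def readout(scores: dict[str, str]) -> list[tuple[str, str]]:
    """Domains sorted worst-first so the riskiest lead the discussion."""
    return sorted(scores.items(), key=lambda kv: ORDER[kv[1]])

scores = {
    "Governance": "yellow",
    "SDLC / AppSec": "green",
    "CI/CD / Supply chain": "red",
    "Cloud / Kubernetes": "yellow",
    "Identity / Secrets / Crypto": "red",
    "Detection / Evidence / Response": "yellow",
}
for domain, color in readout(scores):
    print(f"{color.upper():6} {domain}")
```

The mechanism matters less than the discipline: every color comes with a one-line explanation, and no domain gets a decimal point of fake precision.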

Minimum evidence request list

Ask for these early and refresh the list as you learn more:

  • service inventory;
  • critical-system tiering;
  • repo inventory and default repo controls;
  • CI/CD platform and runner inventory;
  • artifact / registry inventory;
  • cloud account or subscription inventory;
  • cluster inventory;
  • privileged-role membership export;
  • sample access-review record;
  • sample offboarding record;
  • threat-model or architecture-review samples;
  • sample vulnerability queue export;
  • exception register;
  • release sign-off artifact;
  • deployment approval example;
  • posture or compliance finding export;
  • logging-retention standard;
  • incident runbooks;
  • last 1-2 major postmortems;
  • backup/restore test evidence for critical systems.

Interview guide by stakeholder

  • Engineering leader: What security activities slow delivery today, and which ones actually reduce incidents?
  • Platform / SRE: Which identities, environments, or exceptions make you most nervous?
  • AppSec / Product Security: Which risks keep recurring because the workflow never really changed?
  • Compliance / audit: Where is evidence strong on paper but weak in operating detail?
  • Product / delivery owner: Which release or feature paths bypass the normal model under pressure?

Common failure modes for new directors

  • turning the first six weeks into a tool bake-off;
  • accepting policy docs as proof of operation;
  • asking for giant spreadsheets before understanding the delivery flow;
  • treating all domains with equal urgency;
  • reporting findings without naming owners and next steps.


Author attribution: Ivan Piskunov, 2026. Educational and defensive-engineering use.