Product Security Knowledge Base

🧠 Review Cheat Sheets for Code, Design, Cloud, Kubernetes, and Release

Intro: A good reviewer does not try to hold every standard in memory. A good reviewer keeps a small number of sharp questions ready and knows which evidence to ask for next.

Use this page when: you need a fast review prompt during a meeting, PR review, design review, or pre-release checkpoint.

The three-question discipline

In almost every review, ask at least one question from each category:

  1. Identity: who or what is acting?
  2. Exposure: what can be reached, changed, or exfiltrated?
  3. Evidence: what log, config, test, or artifact proves your answer?

If one of these is missing, the review is probably too shallow.

Ten-minute code review cheat sheet

Stop-the-line questions

  • Does untrusted input reach a dangerous sink without validation, normalization, or authorization?
  • Is the control implemented only in the UI, client, or gateway but not in the backend service?
  • Could this code expose secrets, tokens, file paths, stack traces, or internal identifiers?
  • Does the change introduce dangerous parser, deserialization, command, template, or file-handling behavior?
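
The first stop-the-line question, untrusted input reaching a dangerous sink, can be sketched in a few lines. This is a minimal illustration, not taken from any real codebase: the function names and the SQL sink are hypothetical, but the pattern to stop on (string-building a query from input) versus the pattern to accept (parameterization) is the general one.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Untrusted input flows straight into a SQL sink
    # via string formatting: stop the line here.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % username
    ).fetchall()

def find_user_safe(conn, username):
    # A parameterized query keeps the input out of the
    # SQL grammar entirely.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The same shape applies to command execution, template rendering, and file paths: the reviewer looks for the point where data and instructions are mixed.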

Ask-next questions

  • What security regression test was added with the fix or feature?
  • Which abuse case was considered beyond the happy path?
  • Where is the authorization decision recorded or observable?
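
The "what security regression test was added" question has a concrete shape: a test that pins the denial path, not only the happy path. A minimal sketch, assuming a hypothetical check_access(role, action) helper that is not part of any real framework:

```python
def check_access(role, action):
    # Hypothetical authorization helper used only for
    # illustration: only admins may delete.
    allowed = {"admin": {"read", "write", "delete"}, "user": {"read"}}
    return action in allowed.get(role, set())

def test_denial_path_is_pinned():
    # A security regression test asserts what must stay
    # forbidden, so a later refactor cannot silently widen access.
    assert check_access("admin", "delete")
    assert not check_access("user", "delete")
    assert not check_access("unknown", "read")
```

If the only tests attached to an authorization change exercise allowed paths, ask for the denial assertions before approving.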

Evidence to request

  • unit/integration tests;
  • route or handler code;
  • authz middleware/service logic;
  • logging examples for success and denial paths.

Ten-minute design review cheat sheet

Stop-the-line questions

  • What is the trust boundary and where does it move in this design?
  • Which identity calls which resource, and how is that identity established?
  • Can one tenant, user, or service affect another beyond the intended scope?
  • What happens when the new component fails, times out, or receives malformed input?
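
The failure-mode question often reduces to a single design property: does the component fail closed? A sketch of the fail-closed pattern, with a hypothetical policy-service callable standing in for whatever dependency the design introduces:

```python
def is_allowed(user, resource, query_policy_service):
    # Fail closed: any error, timeout, or ambiguous answer from
    # the policy service is treated as a denial, never as an
    # implicit allow.
    try:
        return query_policy_service(user, resource) is True
    except Exception:
        return False
```

In a design review, ask where this decision lives and what the caller does on False; a gateway that falls back to "allow" on dependency failure is a stop-the-line finding.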

Ask-next questions

  • Which part of the design is easiest to abuse, not just easiest to break?
  • What telemetry will tell us the design is being misused?
  • Which decision here deserves a short decision record?

Evidence to request

  • data-flow or sequence diagram;
  • trust-boundary notes;
  • authn/authz model;
  • expected logs, alerts, and ownership list.

Ten-minute cloud / IAM review cheat sheet

Stop-the-line questions

  • Did the change widen who can assume a role, access a project, or administer a resource?
  • Are wildcard actions, broad resource scopes, or public exposure being introduced?
  • Can the workload pivot into more sensitive control planes or data stores?
  • Will the resulting change be visible in audit logs and posture tooling?
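
The wildcard question can be partially automated. The sketch below scans an AWS-style JSON policy document for wildcard actions or a literal "*" resource; the policy shape follows the common Statement/Action/Resource layout, and the flagging rules here are a simplified illustration, not a complete policy linter:

```python
def flag_wildcards(policy):
    # Returns the Sids of statements that grant wildcard
    # actions (e.g. "s3:*") or a literal "*" resource.
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or "*" in resources:
            findings.append(stmt.get("Sid", "<no-sid>"))
    return findings
```

A hit from a check like this is not automatically a block, but it is always a stop-the-line conversation about blast radius.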

Ask-next questions

  • Which non-human identity is now more powerful than before?
  • What is the blast radius if that identity is stolen?
  • Which control would have caught this if the review missed it?

Evidence to request

  • IaC plan or diff;
  • IAM policy and trust-policy view;
  • network exposure details;
  • audit-log destination and posture findings.

Ten-minute Kubernetes review cheat sheet

Stop-the-line questions

  • Is the workload asking for privileges, host access, broad RBAC, or unnecessary service-account tokens?
  • Does the namespace have an explicit security posture, or is it effectively “anything goes”?
  • Can this pod talk to everything or assume a stronger cloud identity than intended?
  • Do runtime signals exist if the workload is abused after deployment?
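
The first stop-the-line question can be turned into a quick scan of a pod spec. The sketch below works on a plain dict shaped like a Kubernetes pod spec and checks a few of the fields named above; it is an illustration of what the reviewer looks for, not a substitute for an admission policy:

```python
def risky_settings(pod_spec):
    # Flags privilege-related settings a reviewer should stop on.
    findings = []
    if pod_spec.get("hostNetwork") or pod_spec.get("hostPID"):
        findings.append("host namespace access")
    if pod_spec.get("automountServiceAccountToken", True):
        findings.append("service-account token auto-mounted")
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append("privileged container: %s" % c.get("name"))
        if sc.get("allowPrivilegeEscalation", True):
            findings.append("privilege escalation allowed: %s" % c.get("name"))
    return findings
```

Note the defaults: a spec that says nothing about token mounting or privilege escalation gets flagged, which mirrors the "anything goes" namespace question.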

Ask-next questions

  • Which admission or policy layer enforces the baseline?
  • How is workload identity scoped and rotated?
  • What would lateral movement look like from this pod?

Evidence to request

  • deployment YAML;
  • service-account and RBAC bindings;
  • namespace labels / admission policy;
  • network policy and runtime detection examples.

Ten-minute release review cheat sheet

Stop-the-line questions

  • Did a sensitive path change: auth, IAM, workflows, deployment logic, secrets, or external exposure?
  • Are the required quality gates present and passing for this trust tier?
  • Is the deployment using the intended environment, identity, and artifact?
  • Has any risk been accepted implicitly instead of explicitly recorded?

Ask-next questions

  • What would need to fail for us to roll this back safely?
  • Which evidence ties commit, build, artifact, and deployment together?
  • What changed in public reachability, privilege, or data handling?
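
The "evidence ties commit, build, artifact, and deployment together" question usually comes down to digest comparison: whatever is being deployed must hash to the value the pipeline recorded at build time. A minimal sketch of that check:

```python
import hashlib

def artifact_matches_record(artifact_bytes, recorded_digest):
    # The deployed artifact must hash to the digest recorded by
    # the build pipeline; any mismatch breaks the evidence chain.
    return hashlib.sha256(artifact_bytes).hexdigest() == recorded_digest
```

Real provenance systems carry signatures and build metadata as well, but if even this digest link cannot be demonstrated, the release review should escalate.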

Evidence to request

  • release notes / change ticket;
  • pipeline run and approvals;
  • artifact provenance, SBOM, or release evidence when applicable;
  • exception record or tracked residual risk.

Decision legend for fast reviews

Result, meaning, and typical next move:

  • Proceed: evidence is sufficient and no material gap is visible. Next move: document assumptions and move on.
  • Proceed with actions: risk is bounded, but follow-up is required. Next move: create tracked tasks with an expiry date.
  • Escalate: trust boundary, privilege, or exposure is unclear. Next move: move to deeper review or the exception board.
  • Block: a material control is missing on a high-risk path. Next move: stop the release or redesign before proceeding.

Cheat-sheet stack to pair with this page

  • General review patterns: Security Review Checklists and Cheat Sheets
  • APIs: API Review Checklist
  • Cloud change: Cloud Change Review Checklist
  • Kubernetes: Kubernetes Deployment Review Checklist
  • Release readiness: Pre-Release Security Checklist
  • Identity changes: IAM Review Checklist
  • Image/build review: Dockerfile Review Checklist

---
Author: Ivan Piskunov, 2026. Educational and defensive-engineering use.