Product Security Knowledge Base

⚔️ Hands-On Attack-to-Defense Playbooks for Product Security

Intro: Reading attack patterns is useful, but engineering judgment gets stronger when the same scenario is viewed as an attack path, a detection problem, a containment problem, and a hardening problem. These playbooks are designed to help AppSec, DevSecOps, cloud, and Kubernetes engineers practice that full loop.

How to use this page: pick one scenario, reproduce it safely in a lab, collect the evidence you would expect in a real environment, and finish only when you can explain both the fix and the proof that the fix works.

Attack-to-Defense Playbook Loop

What a good hands-on playbook contains

Each playbook should answer six questions:

  1. What is the attacker trying to achieve?
  2. Which trust edge is being crossed?
  3. What evidence would we see in code, config, logs, and cloud control planes?
  4. How do we contain the issue without creating more damage?
  5. What secure default or control pattern prevents recurrence?
  6. How do we verify the environment is truly better, not just quieter?

Suggested playbook sequence

| Playbook | Best role fit | Primary skill you build |
| --- | --- | --- |
| Broken API authorization and tenant-boundary abuse | AppSec, backend security | abuse thinking, authorization review, test design |
| SSRF to metadata or internal-control-plane abuse | AppSec, cloud security | trust-boundary reasoning, egress control, identity isolation |
| CI token theft and pipeline-to-repository compromise | DevSecOps, platform security | runner trust, secret handling, blast-radius reduction |
| Kubernetes workload-to-cluster escalation | Kubernetes, cloud-native security | pod security, workload identity, containment |
| IAM / IaC privilege drift and cloud escalation | cloud security, DevSecOps | policy review, blast-radius analysis, detective guardrails |
| Dependency or build-provenance compromise | supply-chain security | artifact trust, attestations, rollback discipline |

Playbook 1 — Broken API authorization and tenant-boundary abuse

Attack path

A user can read or modify another tenant's object because the service validates authentication but not object-level authorization. The attacker iterates IDs, abuses predictable references, or pivots through an admin/helper endpoint.

Practice safely

  • Use a lab like Juice Shop or a deliberately vulnerable internal demo app.
  • Create two users from different tenants.
  • Capture the exact request and response differences when the wrong tenant's object is fetched or mutated.
  • Add a regression test that fails before the fix and passes after the fix.
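The regression-test step above can be sketched as follows. This is a minimal, self-contained illustration: the `Forbidden` exception, the in-memory document store, and the tenant and document IDs are hypothetical stand-ins for the real service under test, not part of any specific framework.

```python
# Minimal sketch of an object-level authorization regression test.
# All names (Forbidden, DOCUMENTS, tenant IDs) are illustrative.

class Forbidden(Exception):
    pass

# In-memory store: object ID -> (owning tenant, payload)
DOCUMENTS = {
    "doc-1": ("tenant-a", "tenant-a invoice"),
    "doc-2": ("tenant-b", "tenant-b invoice"),
}

def get_document(requesting_tenant: str, doc_id: str) -> str:
    owner, payload = DOCUMENTS[doc_id]
    # The explicit ownership check; the bug this playbook targets is its absence.
    if owner != requesting_tenant:
        raise Forbidden(f"{requesting_tenant} may not read {doc_id}")
    return payload

def test_cross_tenant_read_is_denied():
    # Fails before the fix (payload returned), passes after (Forbidden raised).
    try:
        get_document("tenant-a", "doc-2")
    except Forbidden:
        return
    raise AssertionError("cross-tenant read was allowed")

test_cross_tenant_read_is_denied()
```

The point of the test is the shape, not the store: it encodes the abuse request directly, so it fails on the vulnerable code path and keeps failing if the check ever regresses.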

Evidence to collect

  • access logs showing subject, object, route, and decision outcome;
  • application logs for authN success without matching authZ checks;
  • unit/integration tests that prove cross-tenant access was previously allowed;
  • design docs or route definitions that show the object-owner check was implicit instead of explicit.

Defensive response

  • enforce object-level authorization in the service layer, not only in the UI or gateway;
  • make tenant context explicit in request handling and storage access;
  • log denied and allowed sensitive object actions with consistent identifiers;
  • add pre-release abuse tests for horizontal and vertical authorization.
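The first three response items can be combined in one pattern: carry tenant context as an explicit value resolved at authentication time, and log allow and deny decisions with the same field names. A hedged sketch, with all names (`RequestContext`, `authorize`, the log fields) invented for illustration:

```python
# Sketch: explicit tenant context plus consistent decision logging.
# Not tied to any framework; field names are assumptions.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("authz")

@dataclass(frozen=True)
class RequestContext:
    subject: str   # authenticated user
    tenant: str    # resolved at authN time, never taken from user input

def authorize(ctx: RequestContext, action: str, obj_id: str, obj_tenant: str) -> bool:
    allowed = ctx.tenant == obj_tenant
    # Identical fields for allow and deny, so detections and audits
    # can key on subject/tenant/action/object consistently.
    log.info("decision=%s subject=%s tenant=%s action=%s object=%s",
             "allow" if allowed else "deny",
             ctx.subject, ctx.tenant, action, obj_id)
    return allowed
```

Because the tenant rides inside an immutable context object, every storage or service call that forgets to pass it fails loudly at review time instead of silently defaulting.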

“Done” looks like

  • the same abuse request is denied consistently across UI, API, and background jobs;
  • logs now show why the action was denied;
  • automated tests prevent the bug from returning;
  • reviewers can point to the exact policy or code path enforcing ownership.


Playbook 2 — SSRF to metadata or internal-control-plane abuse

Attack path

A file-fetcher, webhook validator, PDF renderer, avatar importer, or URL-preview feature allows server-side requests to attacker-controlled destinations. The attacker then targets metadata services, private admin interfaces, or internal APIs.

Practice safely

  • Use a local SSRF-safe lab or a non-production sandbox.
  • Reproduce the issue against a harmless local endpoint first.
  • Test whether the component can reach link-local or RFC 1918 ranges, internal DNS names, or cloud metadata addresses.
  • Record what identity or credential would be exposed if the same pattern existed in production.

Evidence to collect

  • app logs for outbound fetch destinations;
  • proxy, egress, or VPC flow logs;
  • cloud audit logs if metadata-derived credentials are later used;
  • code paths that accept URL input without scheme/host allow-listing.

Defensive response

  • deny metadata-service and private-range access unless explicitly required;
  • put outbound proxy or egress filtering in front of fetcher-style components;
  • use workload identity instead of static credentials, and scope it narrowly;
  • normalize URL parsing and reject dangerous schemes or redirect chains.
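The allow-listing item above can be sketched with Python's standard library: parse the URL, resolve the host, and reject private, loopback, link-local (including the 169.254.169.254 metadata address), and reserved ranges. This is a sketch, not a complete SSRF defense; in particular it assumes redirects are disabled or each hop is re-validated, and a production fetcher should also connect to the exact IP it validated to resist DNS rebinding.

```python
# Sketch of scheme/host validation for a fetcher-style component.
# Assumes redirects are disabled or re-validated per hop.
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Rejects RFC 1918 ranges, loopback, 169.254.0.0/16 (metadata), and reserved space.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

A stricter variant inverts the logic entirely: allow only an explicit list of destination hosts, which is the right default for components that only ever talk to a handful of known services.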

“Done” looks like

  • the component can only call approved destinations;
  • metadata access is blocked or meaningless because the attached identity is tightly scoped;
  • detections exist for suspicious outbound fetch patterns;
  • tests cover redirect, DNS rebinding, and internal-address edge cases.


Playbook 3 — CI token theft and pipeline-to-repository compromise

Attack path

A pipeline leaks a long-lived token, over-privileged runner identity, or repository secret. The attacker modifies workflows, publishes malicious artifacts, accesses packages, or uses CI as a stepping stone into cloud resources.

Practice safely

  • Use a disposable training repo or pipeline sandbox.
  • Inventory every secret, token, and cloud identity reachable from the runner.
  • Simulate exfiltration by proving which actions the token could take, without actually performing harmful changes.
  • Compare behavior between ephemeral and persistent runners.
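A safe starting point for the inventory step is to list which environment variables on the runner *look* like credentials, without ever printing their values. The name patterns below are illustrative heuristics, not a standard; run something like this inside a disposable pipeline job:

```python
# Sketch: name-only inventory of credential-looking environment variables
# on a CI runner. Patterns are heuristic assumptions; values are never read.
import os
import re

SECRET_HINTS = re.compile(r"(TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def inventory_secret_env() -> list[str]:
    # Report names only, so the inventory itself cannot leak secrets
    # into job logs.
    return sorted(name for name in os.environ if SECRET_HINTS.search(name))
```

The output becomes the left column of your blast-radius table: for each name, record which systems the credential can touch and whether a short-lived federated identity could replace it.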

Evidence to collect

  • pipeline logs and job definitions;
  • runner configuration and network placement;
  • audit trails for repo, package, registry, and cloud actions;
  • evidence of branch protection, CODEOWNERS, approval gates, and environment protections.

Defensive response

  • replace broad static secrets with short-lived federation where possible;
  • isolate runners by trust tier and keep privileged jobs off shared runners;
  • lock down workflow modification paths and deployment approvals;
  • generate provenance, attestations, and release evidence for high-trust builds.

“Done” looks like

  • token blast radius is small and easy to explain;
  • runner compromise does not automatically imply repo-admin or cloud-admin access;
  • critical workflow changes require controlled review;
  • artifact trust can be traced through provenance and release evidence.


Playbook 4 — Kubernetes workload-to-cluster escalation

Attack path

A container starts with unsafe privileges, a mounted service account token, weak admission controls, or excessive namespace permissions. The attacker gets code execution in the pod, then attempts credential theft, secret access, lateral movement, or cluster-control-plane actions.

Practice safely

  • Use Kubernetes Goat, EKS Goat, or a disposable sandbox cluster.
  • Start from a workload with one intentionally weak control at a time: privileged mode, hostPath, broad RBAC, or unrestricted egress.
  • Record which escalation steps are blocked and which remain possible.
  • Compare namespace policy labels and admission decisions before and after hardening.

Evidence to collect

  • Kubernetes audit logs and admission decisions;
  • workload YAML, RBAC bindings, and service-account references;
  • runtime detections from Falco, Tetragon, or cloud-native signals;
  • cloud audit logs if the workload federates into provider IAM.

Defensive response

  • apply Pod Security Standards or equivalent admission policy;
  • minimize service-account privileges and avoid token mounting where unnecessary;
  • enforce network policy, image provenance, and runtime signal collection;
  • make namespace trust tiers explicit instead of letting everything behave the same.
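The decision logic behind an admission policy can be sketched as a check over a parsed pod spec. A real cluster would enforce this with Pod Security Admission or a policy engine such as Kyverno or OPA Gatekeeper; this Python version only shows which fields the playbook's weak controls live in (the field names match the Kubernetes pod spec, the function itself is hypothetical):

```python
# Sketch: flag the weak pod-spec controls this playbook exercises.
# Input is a parsed manifest's spec as a dict; enforcement in a real
# cluster belongs in admission, not application code.

def pod_spec_findings(spec: dict) -> list[str]:
    findings = []
    # Token mounting defaults to on; absence of the field is a finding.
    if spec.get("automountServiceAccountToken", True):
        findings.append("service-account token mounted by default")
    for vol in spec.get("volumes", []):
        if "hostPath" in vol:
            findings.append(f"hostPath volume: {vol.get('name')}")
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"privileged container: {c.get('name')}")
        # allowPrivilegeEscalation also defaults to true if unset.
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"privilege escalation allowed: {c.get('name')}")
    return findings
```

Running the check before and after hardening gives you the "blocked vs. still possible" comparison the practice steps call for, one control at a time.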

“Done” looks like

  • the workload no longer escalates through the same route;
  • the namespace has an explicit security posture;
  • RBAC and workload identity are easy to explain;
  • runtime detections are validated against a safe simulation.


Playbook 5 — IAM / IaC privilege drift and cloud escalation

Attack path

A small Terraform, ARM/Bicep, or CloudFormation change introduces broader trust than intended: wildcard actions, public exposure, cross-account role assumption, or logging gaps. The attacker chains those misconfigurations into privilege escalation or stealth.

Practice safely

  • Use Terragoat, CloudGoat, or a disposable sandbox account/subscription/project.
  • Review the delta at both the IaC layer and the deployed-resource layer.
  • Map which principals, networks, and data stores are newly exposed.
  • Verify whether the same issue would be caught by review, policy-as-code, and detective controls.

Evidence to collect

  • pull request diff and plan/apply output;
  • cloud IAM policy simulator results or equivalent reasoning;
  • posture findings and audit logs;
  • explicit list of resources whose blast radius changed.

Defensive response

  • require least-privilege review for non-human identities and trust policies;
  • codify guardrails with policy-as-code and preventive controls;
  • maintain immutable logging for privilege and trust-path changes;
  • review public exposure and transitive trust, not only resource-local settings.
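A minimal policy-as-code guardrail can run over the JSON form of a Terraform plan (`terraform show -json plan.out`) and flag IAM policies whose actions contain wildcards. The JSON shape follows Terraform's plan format, where `aws_iam_policy` documents appear as JSON strings under `change.after.policy`; the resource address below is an example:

```python
# Sketch: flag wildcard IAM actions in a Terraform plan's JSON output.
# A production guardrail would use OPA/Sentinel/Checkov; this shows the
# decision logic only.
import json

def wildcard_iam_actions(plan_json: str) -> list[str]:
    plan = json.loads(plan_json)
    flagged = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}
        policy_doc = after.get("policy")
        if not policy_doc:
            continue
        # Policies are embedded as JSON strings in the plan output.
        policy = json.loads(policy_doc) if isinstance(policy_doc, str) else policy_doc
        for stmt in policy.get("Statement", []):
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            if any(a == "*" or a.endswith(":*") for a in actions):
                flagged.append(change.get("address", "<unknown>"))
    return flagged
```

Wiring a check like this into the pull-request stage turns "blocked before apply" from a review habit into an enforced gate.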

“Done” looks like

  • the risky change is blocked before apply or flagged immediately after;
  • reviewers can name the principal, trust edge, and resource blast radius;
  • logging exists for the risky control-plane actions;
  • the secure pattern is captured as a reusable baseline.


Playbook 6 — Dependency or build-provenance compromise

Attack path

A malicious dependency update, build-script modification, compromised maintainer path, or unsigned artifact enters the pipeline. The attacker aims to get untrusted code built, published, or deployed under the cover of normal automation.

Practice safely

  • Use a training repository or dependency-management demo project.
  • Compare a normal update flow with one that changes trust assumptions: maintainer, source, lockfile, or build step.
  • Verify whether provenance, attestation, and policy checks would detect the difference.
  • Test the rollback path, not only the prevention path.
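The "changes trust assumptions" comparison can be made concrete by diffing two lockfile snapshots and flagging updates that also move a package to a different source. The snapshot shape (name mapped to version and source URL) and the domains are illustrative; parsing a real lockfile format is left out:

```python
# Sketch: flag dependency updates that shift trust, i.e. change where a
# package comes from, not just which version it is. Input shape is an
# assumed simplification of a parsed lockfile.

def trust_shifts(old: dict, new: dict) -> list[str]:
    findings = []
    for name, (new_ver, new_src) in new.items():
        if name not in old:
            # New dependencies always deserve a trust review.
            findings.append(f"new dependency: {name} from {new_src}")
            continue
        old_ver, old_src = old[name]
        if new_src != old_src:
            findings.append(f"source changed for {name}: {old_src} -> {new_src}")
    return findings
```

A routine version bump produces no findings; a lockfile that quietly redirects a package to another registry does, which is exactly the difference this exercise asks you to detect.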

Evidence to collect

  • dependency PR history and review approvals;
  • SBOM, provenance, or attestation artifacts;
  • package-manager lockfile changes and source redirects;
  • release evidence linking commit, build, artifact, and deploy decision.

Defensive response

  • protect workflow files, release paths, and package publication identities;
  • use automated dependency updates with policy and test controls;
  • generate provenance and verify it before promotion;
  • keep rollback and revocation playbooks ready for suspect releases.

“Done” looks like

  • high-trust releases can be traced back to reviewed source and controlled builds;
  • suspicious dependency or workflow changes are visible quickly;
  • deployment promotion depends on trust evidence, not only build success;
  • rollback is rehearsed and not invented during the incident.


A simple scoring model for practice

Use a 0–2 score for each area after you finish a playbook:

  • Exploit understanding — can you explain the attacker’s path without hand-waving?
  • Evidence mapping — can you name the logs, configs, and artifacts that prove it happened?
  • Containment quality — can you reduce damage before the full fix lands?
  • Prevention quality — did you implement a reusable control rather than a one-off patch?
  • Verification quality — can you prove the same path no longer works?
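The scoring model above fits in a few lines of code, which also makes it easy to track across teams. The maturity threshold here (at least 8 of 10, with no area left at zero) is an assumption, not part of the model; pick your own bar:

```python
# Sketch of the 0-2 per-area scoring model. The maturity threshold is an
# assumed example, not a prescribed standard.

AREAS = ["exploit", "evidence", "containment", "prevention", "verification"]

def playbook_score(scores: dict) -> tuple[int, bool]:
    assert set(scores) == set(AREAS)
    assert all(0 <= s <= 2 for s in scores.values())
    total = sum(scores.values())
    # Assumed bar: strong overall AND no area completely neglected.
    mature = total >= 8 and min(scores.values()) > 0
    return total, mature
```

The "no zeros" clause matters more than the total: a team that exploits and prevents brilliantly but cannot name its evidence has not closed the loop.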

A playbook is mature when the team can repeat the exercise with a new service or environment without needing the same expert every time.

---
Author attribution: Ivan Piskunov, 2026. Educational and defensive-engineering use.