Product Security Knowledge Base

Kubernetes Security Assessment Pack

Intro: These exercises are written like platform-security assessment tasks rather than trivia. Each task gives you a realistic manifest or command fragment, then asks you to find what is broken, risky, or incomplete.

What this page includes

  • five snippet-driven Kubernetes security tasks;
  • hidden worked answers for self-testing;
  • emphasis on Pod posture, RBAC, admission, runtime, and secret handling.

Task 1 - Privileged deployment hidden inside a normal app rollout

Scenario

A team says this is “just a temporary debug deployment.” It is about to be merged into the default application Helm chart.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api
  namespace: prod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
    spec:
      containers:
        - name: api
          image: registry.example.com/billing-api:1.8.2
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          volumeMounts:
            - name: host-root
              mountPath: /host
      volumes:
        - name: host-root
          hostPath:
            path: /

Prompt

Find the main weaknesses, explain the likely blast radius, and propose a safer alternative for debugging.

Reveal the worked answer

What is wrong

  • privileged: true disables most container isolation: the container receives all capabilities and direct device access, roughly equivalent to root on the node;
  • allowPrivilegeEscalation: true removes the guardrail that stops a process gaining more privileges than its parent (for example via setuid binaries); note that with privileged: true the API server forces this field to true anyway;
  • mounting / via hostPath gives the workload read, and typically write, access to the entire host filesystem;
  • the pattern sits in a normal application Deployment, not in a tightly controlled break-glass workflow.

Blast radius

A container compromise can become host compromise. From there, an attacker can target kubelet credentials, runtime sockets, mounted secrets, node-local logs, and neighboring workloads.

Better answer

Use one of these instead:

  1. a separate short-lived debug pod in a dedicated namespace with narrow RBAC and no hostPath;
  2. kubectl debug / ephemeral containers with explicit operator access controls;
  3. node-level break-glass procedures outside the normal application manifest path.
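As a concrete sketch of option 2 (the pod name suffix and debug image are illustrative, not from the original manifest), kubectl debug can attach a short-lived ephemeral container to a running pod without touching the Deployment or its chart:

# attach an ephemeral debug container that shares the target container's
# process namespace; it disappears when the pod does
kubectl debug -it billing-api-<pod-suffix> \
  --image=busybox:1.36 \
  --target=api \
  -n prod

Because ephemeral containers are created through the pods/ephemeralcontainers subresource, operator access to this workflow can be gated with RBAC instead of shipping privileged manifests.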

Safer replacement posture

securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault

Task 2 - Service account token mounted into a workload that does not need cluster access

apiVersion: v1
kind: Pod
metadata:
  name: invoice-exporter
  namespace: reporting
spec:
  serviceAccountName: default
  containers:
    - name: exporter
      image: registry.example.com/exporter:2.0.0
      env:
        - name: DB_HOST
          value: reporting-db

Prompt

Explain why this is risky and show a minimal improvement.

Reveal the worked answer

Main issue

Unless automounting is disabled, Kubernetes mounts a service account token into every pod at /var/run/secrets/kubernetes.io/serviceaccount. If the app is compromised, that token may be used for cluster discovery or lateral movement.

Additional issue

The pod uses the namespace default service account instead of a purpose-built account, which often correlates with sloppy RBAC hygiene.

Better pattern

If the app does not need Kubernetes API access:

apiVersion: v1
kind: Pod
metadata:
  name: invoice-exporter
  namespace: reporting
spec:
  automountServiceAccountToken: false
  containers:
    - name: exporter
      image: registry.example.com/exporter:2.0.0

If it does need API access, create a dedicated service account and bind only the required permissions.
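A minimal sketch of the dedicated-account pattern (the account name and the configmaps permission are placeholders for whatever the app actually needs):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: invoice-exporter
  namespace: reporting
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: exporter-minimal
  namespace: reporting
rules:
  # placeholder: grant only what the exporter actually calls
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: exporter-minimal
  namespace: reporting
subjects:
  - kind: ServiceAccount
    name: invoice-exporter
    namespace: reporting
roleRef:
  kind: Role
  name: exporter-minimal
  apiGroup: rbac.authorization.k8s.io

The pod then sets serviceAccountName: invoice-exporter, and the blast radius of a compromise is bounded by this one Role instead of whatever has accumulated on default.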

Task 3 - RBAC role looks read-only but is still dangerous

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: support-readonly
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "secrets"]
    verbs: ["get", "list", "watch"]

Prompt

Identify the hidden risk and propose a corrected role design.

Reveal the worked answer

Hidden risk

secrets read access is not “readonly” in the business sense. It can still expose database credentials, cloud keys, API tokens, and internal certificates.

Better design

  • split log access from secret access;
  • scope roles by namespace where possible;
  • create a smaller role for support engineers.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: support-pod-reader
  namespace: app-prod
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
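To put the role into use, bind it with a namespaced RoleBinding (the group name is illustrative); because the binding is namespaced, the grant cannot leak beyond app-prod:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: support-pod-reader
  namespace: app-prod
subjects:
  - kind: Group
    name: support-engineers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: support-pod-reader
  apiGroup: rbac.authorization.k8s.io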

Task 4 - Admission policy exists, but it does not actually protect the release path

Scenario

A platform team says “we use Kyverno, so privileged containers are blocked.” You review the cluster and find this policy only in staging.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-privileged
spec:
  validationFailureAction: Audit
  rules:
    - name: no-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed"
        pattern:
          spec:
            containers:
              - securityContext:
                  privileged: false

Prompt

What is incomplete here? What would you change before calling this a control?

Reveal the worked answer

What is incomplete

  • Audit means detection, not prevention: violating pods are still admitted;
  • the pattern only matches spec.containers, so initContainers and ephemeral containers are not checked;
  • the unanchored pattern also flags pods that omit securityContext entirely, producing noise that erodes trust in the audit results;
  • namespace and controller coverage may not match what operators assume, especially if the policy exists only in staging;
  • it focuses on privileged but ignores host namespaces, hostPath mounts, and allowPrivilegeEscalation.

Better answer

  • move high-trust paths to Enforce after testing;
  • extend the policy set to other breakout-relevant fields;
  • validate that production admission paths actually receive the control;
  • monitor policy bypass and exception workflows.
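A sketch of a tightened version, following the shape of Kyverno's published disallow-privileged-containers sample: equality anchors (=()) mean "if the field is present, it must equal this value," so pods that omit securityContext still pass, and Enforce rejects violations instead of logging them (roll out in Audit first):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-privileged
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed"
        pattern:
          spec:
            =(ephemeralContainers):
              - =(securityContext):
                  =(privileged): "false"
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"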

Task 5 - Secret consumed as env var in a long-lived deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  template:
    spec:
      containers:
        - name: api
          image: registry.example.com/payments:4.2.1
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: payments-db
                  key: password

Prompt

This is common. What are the security tradeoffs? What would you check next before approving it?

Reveal the worked answer

Tradeoffs

Using a Kubernetes Secret is better than hardcoding credentials, but environment-variable injection still has limitations:

  • environment values can leak through /proc/&lt;pid&gt;/environ, crash dumps, child processes, and error-reporting tools;
  • env vars are injected at container start, so rotating the Secret does not update a running pod without a restart;
  • anyone with pod exec or env inspection paths may gain access indirectly.

What to check next

  • is etcd encryption enabled?
  • who can read secrets in this namespace?
  • does the workload actually need a static password or can it use workload identity / short-lived auth?
  • is there rotation logic or restart automation?

Better patterns to discuss

  • CSI-mounted secrets with tighter file permissions where appropriate;
  • external secret managers;
  • workload identity for cloud-native services instead of shared passwords.
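As one concrete alternative to env injection, the same Secret can be mounted as a read-only file (mount path and file mode are illustrative). Unlike env vars, volume-mounted Secrets are refreshed in place by the kubelet after rotation (with some propagation delay, and not for subPath mounts), so an app that re-reads the file can pick up new credentials without a restart:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  template:
    spec:
      containers:
        - name: api
          image: registry.example.com/payments:4.2.1
          volumeMounts:
            - name: db-credentials
              mountPath: /etc/payments/secrets
              readOnly: true
      volumes:
        - name: db-credentials
          secret:
            secretName: payments-db
            defaultMode: 0400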