Product Security Knowledge Base

☸️ Container / Kubernetes / Platform Security — Images, Admission, RBAC, Pod Hardening, Isolation, and GitOps / Deployment Plane

Intro: Container and Kubernetes security is a platform problem, not just a scanner problem. The secure state emerges when image trust, admission control, RBAC, pod hardening, workload isolation, and GitOps / deployment governance all reinforce each other.

What this page includes

  • a refreshed platform-security view for container and Kubernetes environments
  • practical controls for images, admission, RBAC, hardening, isolation, and GitOps
  • short examples and links to deeper pages already in this KB

Container / Kubernetes Platform Security

Figure: image trust, admission, runtime isolation, and deploy-plane trust should be treated as one system.

What this domain covers

| Layer | Primary question | Core controls |
| --- | --- | --- |
| Images | What are we running, and can we trust it? | signed images, scanning, SBOMs, minimal bases, registry policy |
| Admission | What is allowed into the cluster? | Pod Security Admission, Kyverno / Gatekeeper, image verification, namespace guardrails |
| RBAC | Who can do what in the cluster and API? | least privilege, no wildcard admin, separate humans / controllers / workloads |
| Pod hardening | How much damage can one workload do if compromised? | seccomp, AppArmor, dropped capabilities, non-root, read-only FS, volume control |
| Isolation | How well can one workload or namespace be separated from others? | NetworkPolicies, node placement, sandbox runtimes, namespace design |
| GitOps / deployment plane | Who is allowed to change desired state, and how is it proven? | protected repos, CODEOWNERS, controller scoping, signed artifacts, environment approvals |

1) Image security

Image trust should start before the cluster sees a manifest.

Minimum baseline

  • use minimal and well-understood base images
  • rebuild frequently rather than “patch inside the running container”
  • scan images for vulnerabilities and obvious secrets
  • produce SBOMs where practical
  • sign release images and verify trust before deploy
  • separate experimental registries from production registries

What to review

  • Are tags mutable, and if so can the deploy plane pin digests anyway?
  • Are unsigned images allowed into production namespaces?
  • Can the same registry host both curated and ad hoc developer images without clear boundaries?
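One way to close the mutable-tag gap described above is a digest-pinning rule at admission. The sketch below uses a Kyverno validation pattern; the policy name, namespace scope, and message are illustrative, not prescriptive:

```yaml
# Sketch: reject Pods in production namespaces whose images are referenced
# by tag instead of an immutable digest. Names here are examples only.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-digests
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-digest
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - prod
      validate:
        message: "Production images must be pinned by digest, not by mutable tag."
        pattern:
          spec:
            containers:
              # Every container image must carry an @sha256:... digest.
              - image: "*@sha256:*"
```

A rule like this pairs naturally with signature verification: the digest pins what runs, the signature proves where it came from.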

Example: hardened container security context

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: ghcr.io/example/app@sha256:deadbeef
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault

This does not make a pod “safe,” but it removes a large amount of default risk.

2) Admission control

Admission is where the platform turns guidance into enforceable guardrails.

Practical pattern

  • namespace labels enforce Pod Security Admission levels
  • a policy engine handles exceptions and richer logic
  • image policy verifies signatures or trusted registries
  • high-risk namespaces get stricter rules and narrower exception paths
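The namespace-label pattern above looks like this in practice. The label keys are the standard Pod Security Admission labels; the namespace name and chosen level are examples:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: prod  # example namespace
  labels:
    # Enforce the "restricted" Pod Security Standard at admission...
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # ...and surface audit and warning signals at the same level.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```

Setting audit and warn alongside enforce gives teams visibility into near-misses without loosening the enforced baseline.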

Modern recommendation

Use Pod Security Standards as the baseline mental model and Pod Security Admission for the built-in control path. Add Kyverno, Gatekeeper, or similar policy layers when you need image verification, mutation, richer conditions, or organization-specific policy bundles.

Example: verify image signatures before deploy

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-prod-images
      match:
        any:
          - resources:
              namespaces:
                - prod
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      REPLACE_ME
                      -----END PUBLIC KEY-----

In real life, tie this to Sigstore or your approved signing path instead of pasting static public keys into policy files without a lifecycle plan.

3) RBAC and API access

Kubernetes RBAC failures often become platform failures.

High-value rules

  • keep cluster-admin exceptional and rare
  • separate human admin, CI/CD, GitOps controller, and workload identities
  • avoid wildcard permissions unless there is a compelling platform reason
  • review list, watch, impersonate, secrets, and admission-related permissions carefully
  • disable automatic service-account token mounting where not needed
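Several of these rules can be expressed directly in manifests. The sketch below shows a narrowly scoped workload identity with token mounting disabled by default; all names and the exact permission set are illustrative:

```yaml
# Sketch: a workload identity with no automatic API token and read-only
# access to one resource type in one namespace. Names are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: prod
automountServiceAccountToken: false  # opt in per-pod only where the API is actually needed
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-config-reader
  namespace: prod
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]  # read-only, no secrets, no wildcards
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-config-reader
  namespace: prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-config-reader
subjects:
  - kind: ServiceAccount
    name: app
    namespace: prod
```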

Common failure patterns

  • one GitOps controller can change everything in every namespace
  • CI jobs can both build and administer the cluster
  • developers have broad write access because “the cluster is internal”

4) Pod hardening

Pod hardening is the direct reduction of attack surface inside the workload.

| Control | Why it matters |
| --- | --- |
| runAsNonRoot | reduces default privilege and accidental root at runtime |
| allowPrivilegeEscalation: false | blocks easy escalation paths |
| dropped capabilities | limits ambient Linux privilege |
| seccompProfile: RuntimeDefault | reduces the syscall surface |
| AppArmor / SELinux (where available) | adds OS-level confinement |
| readOnlyRootFilesystem | makes tampering and persistence harder |
| controlled hostPath usage | prevents easy host escape and host tampering |
| restricted volume / device use | avoids hidden privilege expansion |
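Several of these controls also have pod-level counterparts that complement the container-level securityContext shown earlier. A sketch, with illustrative UID and volume names:

```yaml
# Sketch: pod-level hardening that complements the container-level
# securityContext. The UID, image digest, and names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001  # illustrative non-root UID
    fsGroup: 10001
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: ghcr.io/example/app@sha256:deadbeef
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}  # writable scratch space instead of a writable root FS or a hostPath
```

The emptyDir-for-scratch pattern is what makes readOnlyRootFilesystem practical for workloads that need somewhere to write.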

5) Isolation

Isolation is not one feature. It is the combined effect of namespace design, network segmentation, runtime controls, and scheduling choices.

Isolation options by strength

  • basic: namespace separation + RBAC + NetworkPolicy
  • stronger: dedicated nodes, taints/tolerations, stricter admission, no shared sensitive services
  • highest-assurance pockets: sandbox runtimes such as gVisor or Kata for selected workloads, plus stronger tenant and network boundaries
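The basic tier usually starts from a default-deny posture per namespace. The policy below selects every pod and allows no traffic until explicit allow policies are added; the namespace name is illustrative:

```yaml
# Sketch: default-deny for one namespace. An empty podSelector matches all
# pods; listing both policy types denies ingress and egress until explicit
# allow rules exist. Namespace name is illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

For the highest-assurance tier, selected workloads can additionally set `runtimeClassName` to opt into a sandbox runtime such as gVisor or Kata, assuming the cluster has a matching RuntimeClass installed.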

What to review

  • Can one namespace reach another by default?
  • Do sensitive workloads share nodes with high-risk workloads?
  • Are debug and ephemeral-container paths controlled and audited?

6) GitOps and the deployment plane

Clusters are often protected more strongly than the systems that change them. That is backwards.

GitOps / deployment-plane controls

  • protect the source repositories that define desired state
  • require CODEOWNERS and independent approval for policy, controller, and production-environment changes
  • scope Argo CD / Flux / equivalent controllers to the projects and namespaces they actually own
  • pin image digests and prefer verified artifacts over floating tags
  • separate “can propose desired state” from “can operate the production controller”
  • treat secrets decryption, signing keys, and deploy credentials as crown-jewel assets
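For Argo CD specifically, controller scoping can be expressed as a project boundary. The sketch below assumes illustrative repo, project, and namespace names; the empty cluster-resource allowlist means the project cannot create cluster-scoped resources at all:

```yaml
# Sketch: an Argo CD project that may only deploy from one repo into the
# namespaces it owns. Repo URL and names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  sourceRepos:
    - "https://github.com/example/team-a-deploy.git"
  destinations:
    - server: "https://kubernetes.default.svc"
      namespace: "team-a-*"
  clusterResourceWhitelist: []  # no cluster-scoped resources allowed
```

Flux has equivalent scoping through per-tenant service accounts and namespace-scoped Kustomizations; the principle is the same: one controller identity should not be able to write everywhere.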

Typical anti-patterns

  • GitOps repo writable by too many engineers
  • one controller with broad cluster write access and weak project scoping
  • unverified manifests and images promoted by tag only
  • emergency production changes performed directly in-cluster and never reconciled back to source

Practical control table

| Domain | Baseline control | Better control | Failure if missing |
| --- | --- | --- | --- |
| Images | scan + digest pinning | signed images + admission verification | malicious or drifted image reaches production |
| Admission | PSA baseline | PSA restricted + policy-engine exceptions | unsafe specs slip in "temporarily" |
| RBAC | no routine cluster-admin | per-role least privilege + review cadence | one credential compromises the cluster |
| Pod hardening | runtime defaults | explicit hardened securityContext everywhere | a compromise has easy local escalation paths |
| Isolation | namespace separation | network + node + runtime isolation by trust zone | one compromised workload laterally probes too much |
| GitOps / deploy plane | protected branch | protected repo + controller scoping + provenance | secure cluster, insecure change path |

Two simplified field examples

Example 1 — image trust gap

A team scans images in CI but deploys by mutable tag and never verifies the result at admission. A registry credential is abused and the tag is silently re-pointed. CI looked clean; production now runs something else.

Fix direction: pin digests, sign images, and verify signature or provenance at admission.

Example 2 — deployment plane compromise

The cluster RBAC looks respectable, but the GitOps controller has broad write access and its repo has weak approval rules. An attacker does not need to “hack Kubernetes”; they only need to push malicious desired state through the trusted deployment path.

Fix direction: scope the controller, harden the repo, require review, and protect signing / deploy credentials.

References and best-practice anchors

  • Kubernetes Pod Security Standards
  • Pod Security Admission
  • Kubernetes RBAC and NetworkPolicy docs
  • Sigstore / Cosign for signing and verification
  • Argo CD / Flux security considerations
  • Existing deep pages in this KB for Kyverno, OPA, runtime investigation, and StackRox / ACS