Kubernetes Hardening
Intro: Kubernetes hardening is not one setting. It is a layered control model across Pods, nodes, control plane, identity, network, secrets, logging, and upgrade discipline. The safest clusters assume compromise is possible and reduce blast radius at every layer.
What this page includes
- the hardening areas that matter most in real clusters;
- a rollout order that avoids breaking everything at once;
- practical manifests and config snippets;
- older versus current Kubernetes hardening patterns.
The hardening model
Treat the cluster as several connected trust layers:
- container image and build path;
- Pod security posture;
- service account and RBAC scope;
- namespace and network separation;
- worker node and runtime hardening;
- control plane access;
- secrets and encryption;
- logging, detection, and alerting;
- patching and periodic review.
Important old-versus-current note
Older pattern you will still see
- PodSecurityPolicy (PSP)
- looser service account defaults
- broad kubeconfig distribution
- host-path and privileged exceptions treated as routine
Current pattern
- Pod Security Admission plus policy engines where needed
- default automountServiceAccountToken: false unless a workload truly needs API access
- private control-plane access paths
- default-deny network posture
- stronger runtime and audit visibility
Pod hardening priorities
Start with the workload itself:
- run as non-root where practical;
- use read-only root filesystems where practical;
- drop unnecessary Linux capabilities;
- deny privilege escalation;
- avoid host namespace sharing and host paths except for tightly controlled system workloads;
- restrict service account token mounting unless the app needs it.
Practical snippet: restricted Pod posture
apiVersion: apps/v1
kind: Deployment
metadata:
  name: invoice-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: invoice-api
  template:
    metadata:
      labels:
        app: invoice-api
    spec:
      automountServiceAccountToken: false
      containers:
        - name: api
          image: registry.example.com/team/invoice-api:1.2.3
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
            seccompProfile:
              type: RuntimeDefault
Service account minimization
Service account token note for newer clusters
In newer clusters, the safer default is to prefer projected, rotating tokens only where a workload truly needs API access. Long-lived static token habits from older cluster examples should be treated as historical context, not as a current baseline.
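Where a workload genuinely needs the API, the token can be mounted explicitly as a short-lived projected volume instead of relying on the default automount. A minimal Pod spec fragment as a sketch; the expiry, audience, and the invoice-api names reuse the examples below and are illustrative assumptions, not requirements:

# Pod spec fragment (sketch): mount a short-lived, audience-bound token
# explicitly instead of depending on the default automounted token.
spec:
  serviceAccountName: invoice-api
  automountServiceAccountToken: false
  containers:
    - name: api
      image: registry.example.com/team/invoice-api:1.2.3
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600           # illustrative; the kubelet rotates it before expiry
              audience: https://kubernetes.default.svc   # illustrative audience value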
One of the most common ways cluster compromise gets worse is when workloads inherit credentials they never needed.
Practical snippet: explicit service account with narrow purpose
apiVersion: v1
kind: ServiceAccount
metadata:
  name: invoice-api
  namespace: billing
Pair that with RBAC scoped to the smallest useful resource set.
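As one illustration, a minimal namespace-scoped Role and RoleBinding, assuming the invoice-api workload only needs to read a single ConfigMap; the resource and role names are assumptions for the example, not a universal baseline:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: invoice-api-config-reader
  namespace: billing
rules:
  # Grant read access to one named ConfigMap and nothing else.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["invoice-api-config"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: invoice-api-config-reader
  namespace: billing
subjects:
  - kind: ServiceAccount
    name: invoice-api
    namespace: billing
roleRef:
  kind: Role
  name: invoice-api-config-reader
  apiGroup: rbac.authorization.k8s.io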
Network and namespace posture
Namespaces help organization, but they are not isolation by themselves. Isolation comes from policy.
Recommended pattern:
- separate workloads into intentional namespaces;
- apply a default deny NetworkPolicy stance;
- explicitly allow only required east-west and north-south paths;
- keep control plane access off untrusted networks;
- avoid exposing management surfaces to the public Internet.
Practical snippet: default-deny NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: billing
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Then add explicit allow rules for DNS, ingress controller traffic, and service-to-service paths that are actually needed.
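For example, a minimal DNS-egress allow rule for the billing namespace as a sketch; the kube-system and k8s-app: kube-dns selectors are common CoreDNS defaults and may differ per cluster DNS deployment:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: billing
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS lookups to the cluster DNS pods; the labels below are
    # common defaults (CoreDNS in kube-system) and may need adjusting.
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53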
Control plane priorities
The control plane is high trust because it can read Secrets, schedule Pods, and execute commands inside the cluster.
Protect it with:
- firewall restrictions and private access paths;
- strong authentication;
- RBAC with narrow roles;
- protected kubeconfig files;
- restricted etcd access;
- TLS for control-plane communications;
- audit logging enabled by default (a flag-level sketch follows this list).
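Several of these controls surface as kube-apiserver flags. A static Pod command fragment as a sketch; the file paths and retention values are illustrative assumptions, not required locations:

# kube-apiserver static Pod fragment (sketch); paths and values are
# illustrative and should match where your distribution keeps these files.
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --anonymous-auth=false
        - --authorization-mode=Node,RBAC
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        - --audit-log-path=/var/log/kubernetes/audit.log
        - --audit-log-maxage=30
        - --encryption-provider-config=/etc/kubernetes/encryption-config.yaml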
Practical control-plane checklist
- Is the API server reachable from the public internet?
- Who can exec, port-forward, or read Secrets?
- Are admin kubeconfigs copied onto laptops and bastions without lifecycle control?
- Is etcd encrypted at rest?
- Are audit logs retained outside the cluster?
Secrets and encryption
Practical snippet: encrypt secrets at rest
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: awskms
          endpoint: unix:///var/run/kmsplugin/socket.sock
      - identity: {}
After changing encryption configuration, re-write existing secrets so they are re-encrypted under the active provider.
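One common way to force that rewrite, assuming cluster-admin access and a quiet maintenance window, since it touches every Secret in the cluster:

# Read every Secret and write it back unchanged; the API server re-encrypts
# each object with the provider currently listed first in the configuration.
kubectl get secrets --all-namespaces -o json | kubectl replace -f -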
Protect cloud metadata from Pods
In cloud-hosted clusters, metadata services often become a privilege-escalation path.
Use one or more of:
- network policy restrictions;
- cloud-native metadata hardening controls;
- workload identity instead of inherited node credentials;
- explicit egress restrictions to metadata endpoints (see the sketch after this list).
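A minimal sketch of the last option, assuming the common link-local metadata address 169.254.169.254 and a namespace where broad egress must stay open; a default-deny egress posture with narrow allows is usually the stronger choice where feasible:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-egress
  namespace: billing
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Permit general egress but carve out the cloud metadata endpoint.
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32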
Logging and threat detection
A hardened cluster is easier to investigate.
High-value telemetry includes:
- Kubernetes audit logs (a minimal policy sketch follows this list);
- workload and container logs;
- seccomp or syscall-relevant runtime data where supported;
- node-level and kubelet-relevant logs;
- registry and admission events;
- privileged workload and kubectl exec activity.
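Audit logs usually need a policy before they produce useful signal. A minimal sketch; the levels and resource groupings are illustrative assumptions and should be tuned against log volume:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Capture full request/response for high-risk interactive access.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward"]
  # Record access to sensitive objects at the Metadata level.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Everything else gets a lightweight Metadata record.
  - level: Metadata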
Useful alert candidates
- Pod created with privileged or broad host access;
- change to securityContext on a sensitive deployment;
- anonymous or unexpected API access;
- Pod or workload identity trying to create more workloads;
- access to sensitive files or unexpected shells in containers.
Upgrade and review discipline
Hardening is not complete after initial deployment.
Program expectations:
- prompt patching of cluster components and nodes;
- periodic vulnerability scans;
- periodic penetration testing where appropriate;
- removal of unused components, add-ons, and old privileges;
- regular review of namespaces, RBAC, network policies, and exceptions.
Practical rollout order
- inventory workloads and trust tiers;
- establish namespace and service-account hygiene;
- enable Pod Security Admission in audit or warn mode (see the namespace-label sketch after this list);
- move to restricted enforcement by namespace tier;
- apply default-deny NetworkPolicy posture;
- tighten RBAC and kubeconfig handling;
- enable and centralize audit logs;
- review node and runtime hardening;
- formalize periodic review and patch cadence.
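A minimal sketch of the Pod Security Admission step, assuming the billing namespace starts by warning and auditing against the restricted profile before enforcement is raised; the chosen levels are an example progression, not a mandate:

apiVersion: v1
kind: Namespace
metadata:
  name: billing
  labels:
    # Warn and audit against restricted first, then raise the enforce
    # label once violations reported in audit/warn mode are cleaned up.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: baseline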
Common mistakes
- assuming namespaces isolate traffic by themselves;
- relying on node credentials instead of workload identity;
- leaving service account token mounts enabled everywhere;
- carrying old PSP-era docs forward without modernizing them;
- focusing on Pod settings while leaving control-plane exposure weak.
Cross-links
- Kubernetes Security Baseline
- Kubernetes Top 10 Misconfigurations
- Kubernetes RBAC and ABAC
- Network Policy Patterns
- Runtime Investigation Playbook
- Falco for Runtime Detection - Practical Guide, Legacy Notes, and 2026 Patterns
---
Author attribution: Ivan Piskunov, 2026. Educational and defensive-engineering use.
Control-plane and API access
Workload hardening is important, but clusters often fail first at the management boundary.
Add these review questions:
- is the API server reachable only from expected paths?
- who has cluster-admin and why? (see the permission-review sketch after this list)
- can workloads or humans reach Secrets too broadly?
- is kubelet exposure reviewed explicitly?
- are audit logs preserved outside the cluster?
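One quick way to review what an identity can actually do; the service account and namespace below reuse the earlier examples and are assumptions:

# List everything the API server currently allows for a given identity.
kubectl auth can-i --list --as=system:serviceaccount:billing:invoice-api -n billing

# Check one sensitive verb directly for that identity.
kubectl auth can-i get secrets --as=system:serviceaccount:billing:invoice-api -n billing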
See Kubernetes API Access Hardening.
Standards and practical rollout order
A useful standards-aligned mental model is:
- correct cluster and control-plane configuration
- scan images and workload definitions
- apply network separation
- control what workloads may run
- enable audit and event visibility
This order is often more survivable than jumping straight to strict policy without understanding who currently depends on what.
Tool landscape: older versus current
Older tool lists for Kubernetes security are still useful for orientation, but the modern working set is usually smaller and more opinionated.
Categories that still matter
- image and package scanning
- cluster configuration audit
- policy enforcement
- runtime detection
- network visibility
- RBAC / identity review
Practical shortlist for many teams today
- Trivy or equivalent for image and config scanning
- kube-bench for CIS-style benchmark checking
- Kubescape or equivalent posture and framework mapping
- Kyverno or Gatekeeper for workload policy (see the sketch after this list)
- Falco and sometimes Tetragon for runtime detection
- Cilium / Hubble or equivalent for network visibility where supported
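As one example from the policy-enforcement category, a simplified Kyverno sketch that flags containers which do not explicitly disable privilege escalation; the policy name is an assumption, the rule only covers regular containers (not init or ephemeral containers), and Gatekeeper expresses the same control differently:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-no-privilege-escalation
spec:
  # Start in Audit mode; switch to Enforce once violations are resolved.
  validationFailureAction: Audit
  rules:
    - name: check-allow-privilege-escalation
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "allowPrivilegeEscalation must be set to false."
        pattern:
          spec:
            containers:
              - securityContext:
                  allowPrivilegeEscalation: false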
Why not rely on giant tool lists alone
A cluster rarely becomes safer because the team installed more tools.
It becomes safer when the team can clearly answer:
- which workload classes must be restricted;
- which namespaces are trusted differently;
- which identities can change workload state;
- which network paths are intentional;
- which alerts are acted on by humans.
Tooling selection matters less than control placement
Kubernetes teams often over-index on adding tools and under-invest in deciding where those controls should live.
A more effective pattern is:
- image and manifest checks before deployment;
- admission controls at deployment;
- posture tools for review and backlog shaping;
- runtime detection only when someone owns the output.
For a more structured tool-selection view, see Kubernetes Security Tooling Map and Standards.