Kubernetes API Access Hardening
If an attacker gains broad control over the Kubernetes API, most other controls become cleanup work. API access hardening is therefore not a side topic: it is one of the core control-plane trust boundaries in a cluster.
What this page includes
- why the Kubernetes API server and kubelet are such important targets
- practical access-hardening steps for self-managed and managed clusters
- commands and manifests that help validate access posture
- older versus current access patterns
Why this matters
The Kubernetes API is the management plane for:
- workloads and Pod specs
- Secrets and config references
- RBAC roles and bindings
- exec, logs, and port-forward behavior
- node and control-plane state
Broad or weak API access often means:
- privilege escalation
- secret exposure
- workload tampering
- persistence through RBAC or controller abuse
- easier lateral movement from one compromised workload
Current versus older patterns
Older patterns you still see
- public control-plane endpoints exposed too broadly
- static kubeconfig files shared between people and automation
- cluster-admin granted for convenience
- broad kubelet access tolerated
- weak or inconsistent audit visibility
Current direction
- private or tightly filtered control-plane access
- strong SSO / OIDC-backed human auth where possible
- short-lived credentials for automation
- role-scoped access with explicit bindings
- kubelet lockdown
- audit logging and regular access review
Baseline control objectives
Use this short list as a review lens.
| Objective | What "good enough to start" looks like |
|---|---|
| Strong authentication | no anonymous access, no weak legacy auth, real identity for admins and automation |
| Least-privilege authorization | no routine use of cluster-admin, scoped roles and service accounts |
| Network exposure control | API endpoint reachable only from expected networks or access paths |
| kubelet hardening | anonymous auth disabled, authorization enabled, endpoint not casually exposed |
| Auditability | API audit logs enabled and retained outside the cluster |
| Credential hygiene | kubeconfigs and tokens are minimized, rotated, and not copied everywhere |
Step 1: understand who can do what
Start with what Kubernetes already tells you.
Practical commands
kubectl auth can-i --list
kubectl auth can-i get secrets --all-namespaces
kubectl get clusterrolebinding
kubectl get rolebinding --all-namespaces
kubectl get serviceaccounts --all-namespaces
Use these as review commands, not as a complete audit method.
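If your account is allowed to impersonate other identities, the same mechanism shows what those identities can do, including the anonymous user:
kubectl auth can-i --list --as=system:anonymous
kubectl auth can-i get secrets --as=system:serviceaccount:default:default
On a hardened cluster, the first command should show little beyond public discovery endpoints; the second checks the default service account in the default namespace.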
Step 2: reduce broad human access
Review for these red flags
- many users mapped to cluster-admin
- shared admin identities
- kubeconfigs passed around in wikis or secrets managers with no lifecycle management
- long-lived personal tokens
- direct production cluster access for people who mostly need read-only visibility
Good patterns
- SSO / OIDC-backed identities
- just-in-time or role-scoped access
- separate admin, operator, and read-only roles (see the binding sketch after this list)
- break-glass flow instead of permanent overprivileged accounts
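As a sketch of the read-only pattern above, a namespace-scoped binding of the built-in view ClusterRole to an SSO-backed group might look like this (namespace and group name are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-readers
  namespace: payments              # placeholder namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                       # built-in read-only role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: app-team-readers         # placeholder OIDC/SSO group
Binding a ClusterRole through a RoleBinding scopes it to that one namespace, which is usually what a read-only team grant should be.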
Step 3: minimize workload access
Service accounts are often the bridge from app compromise to cluster compromise.
Practical command
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{" SA="}{.spec.serviceAccountName}{" automount="}{.spec.automountServiceAccountToken}{"\n"}{end}'
Practical manifest pattern
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      serviceAccountName: payments-api
      automountServiceAccountToken: false   # no API token unless the app needs one
      containers:
        - name: payments-api
          image: payments-api:latest        # placeholder image
Only enable token mounting when the application actually needs cluster API access.
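When a workload genuinely needs API access, prefer a bound, short-lived projected token over the default mount. A minimal sketch, with placeholder names and an arbitrary one-hour expiry:
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker              # placeholder
spec:
  serviceAccountName: payments-api
  automountServiceAccountToken: false
  containers:
    - name: worker
      image: payments-worker:latest  # placeholder image
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: api-token
              expirationSeconds: 3600   # kubelet rotates the token before expiry
The application reads the token from /var/run/secrets/tokens/api-token and must tolerate it changing on disk.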
Step 4: harden kubelet access
Kubelet is too often treated like a background detail. It is not.
Review for these controls
- anonymous auth disabled
- authorization enabled
- TLS properly configured
- endpoint not exposed to untrusted networks
- node access restricted to the expected control-plane path
Self-managed kubelet flags you should understand
--anonymous-auth=false        # reject requests that present no credentials
--authorization-mode=Webhook  # delegate kubelet API authorization to the API server
--read-only-port=0            # disable the legacy unauthenticated read-only port
These are not the whole kubelet story, but if reviewers do not know them, they often miss one of the easiest node-side weaknesses.
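The same settings expressed in the kubelet configuration file, which is the current preferred mechanism over command-line flags (a sketch to merge into an existing config, not a complete file):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # equivalent of --anonymous-auth=false
  webhook:
    enabled: true         # authenticate kubelet API requests against the API server
authorization:
  mode: Webhook           # equivalent of --authorization-mode=Webhook
readOnlyPort: 0           # equivalent of --read-only-port=0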
Step 5: protect the API endpoint with network controls
Managed-cluster expectation
- private endpoint where feasible
- public endpoint, if used, filtered to admin source ranges or a controlled access path
- firewall or security-group protection around worker-to-control-plane paths
Self-managed expectation
- port 6443 is not exposed to the open internet
- admin access happens through approved paths such as VPN, bastion, or tightly managed private networking
Practical command
kubectl cluster-info
Then verify the endpoint placement and exposure in the cloud control plane, firewall, load balancer, or reverse proxy that fronts the API.
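A quick reachability probe from a network that should not have access (API_ENDPOINT is a placeholder for your server address); a timeout or refused connection is the outcome you want:
curl -k --connect-timeout 5 https://API_ENDPOINT:6443/version
Any HTTP response at all means the endpoint is network-reachable from that vantage point, whether or not authentication succeeds.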
Step 6: lock down RBAC before adding policy engines
Policy engines help, but they do not fix weak access design.
Practical review commands
kubectl get clusterroles
kubectl describe clusterrole cluster-admin
kubectl get clusterrolebinding -o wide | grep -E 'cluster-admin|system:masters'
Good review questions
- Which roles are truly namespace-scoped?
- Which subjects can read Secrets?
- Which automation identities can create or update workloads?
- Who can exec into Pods?
- Who can approve their own privilege elevation?
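To make those questions concrete, a narrowly scoped Role for a deploy-only automation identity might look like this (name and namespace are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer                   # placeholder
  namespace: payments              # placeholder
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
Note what is absent: no Secrets access, no exec, no cluster scope.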
Step 7: enable and preserve audit logs
Access hardening without auditability becomes guesswork.
Minimal idea
Capture API audit events to an external system and retain enough detail to answer:
- who changed RBAC?
- who read or modified Secrets?
- who created a privileged or policy-violating workload?
- who used exec, attach, or port-forward?
Practical snippet
See ../snippets/k8s/kube-apiserver-audit-policy.yaml.
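For orientation, an illustrative policy fragment covering the event types above; treat this as a sketch, not a replacement for the linked snippet:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # record Secret reads and writes at metadata level (no payloads in the log)
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # record full request/response bodies for RBAC changes
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  # record interactive access to Pods
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach", "pods/portforward"]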
Step 8: review kubeconfig handling
Kubeconfig files are credentials plus routing metadata. Treat them that way.
Red flags
- kubeconfigs committed to repos
- kubeconfigs bundled into CI secrets forever
- one kubeconfig reused across many jobs or users
- local admin kubeconfigs left unencrypted on laptops
- no clear owner or expiration for automation kubeconfigs
Better patterns
- cloud-native federation or workload identity where possible
- short-lived credentials (see the token example after this list)
- separate read-only and write-capable kubeconfigs
- documented rotation and revocation path
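As one building block for short-lived credentials, Kubernetes 1.24 and later can mint a bound, expiring service-account token on demand (service account and namespace are placeholders):
kubectl create token payments-api --namespace payments --duration=1h
The token ages out on its own, so a leaked copy expires instead of living forever in a CI secret store.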
Practical rollout order
- identify public or overly broad API exposure
- remove obvious cluster-admin sprawl
- disable or reduce service-account token mounting
- harden kubelet access
- enable or improve audit logging
- move from static/shared credentials toward stronger identity models
Common mistakes
- focusing on admission control before cleaning up RBAC
- assuming namespaces alone are an access boundary
- leaving kubelet exposure out of the review
- treating managed clusters as "secure by default"
- distributing kubeconfigs as if they were harmless config files