Network Policy Patterns
Intro: This page explains NetworkPolicy as a practical segmentation tool rather than a theoretical Kubernetes checkbox. It focuses on rollout patterns that reduce lateral movement while still respecting how real platform teams discover traffic, stage exceptions, and avoid outages.
What this page includes
- baseline patterns for default deny, app paths, egress, and shared services
- rollout advice that keeps segmentation from turning into chaos
- review questions that connect Kubernetes policy to the broader platform design
Working assumptions
- cluster networking is usually permissive by default, so segmentation has to be designed deliberately and introduced carefully
Kubernetes NetworkPolicy is one of the clearest examples of a control that is simple in YAML but hard in operations. The hard part is usually not syntax. The hard part is discovering real traffic, defining allowed flows, and managing exceptions without returning to a flat cluster.
First reality check
Before relying on NetworkPolicy, confirm that your CNI actually enforces it. The API object alone does nothing without a network plugin that implements policy enforcement.
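A quick smoke test is to apply a deny-all policy in a scratch namespace and confirm that traffic to pods there actually stops. If connectivity is unchanged, the CNI is not enforcing policy. A minimal canary manifest (the namespace name here is an assumption, pick any disposable one):

```yaml
# Canary policy for a scratch namespace, used only to smoke-test enforcement.
# After applying, all ingress and egress for pods in this namespace should be blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: enforcement-canary
  namespace: policy-smoke-test   # assumed scratch namespace
spec:
  podSelector: {}                # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```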
Mental model
Use NetworkPolicy for application-level segmentation at layer 3/4.
It is best at:
- reducing easy east-west movement between workloads;
- making service dependencies explicit;
- shrinking the reachable set after pod compromise;
- documenting intended traffic flows close to the workload.
It is not a substitute for:
- RBAC and workload identity;
- admission policy;
- TLS and service authentication;
- cluster-admin guardrails;
- cloud-network segmentation outside the cluster.
Recommended rollout sequence
1. Start with visibility and ownership
Before broad enforcement, identify:
- namespace owners;
- application-to-application dependencies;
- shared services such as DNS, ingress, service mesh, metrics, and logging;
- sensitive egress destinations such as metadata endpoints, secret stores, and external SaaS dependencies.
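Ownership discovered in this step is most useful when it is machine-readable, because later policies select namespaces by label. A sketch of a namespace manifest carrying an ownership label (the `team` label key and values are assumptions, reused in the examples below):

```yaml
# Namespace with an ownership label that NetworkPolicy namespaceSelectors can match.
apiVersion: v1
kind: Namespace
metadata:
  name: storefront
  labels:
    team: storefront   # assumed label convention for namespace ownership
```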
2. Apply namespace-level defaults
A strong baseline is to make the namespace model explicit.
Default deny ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```
Default deny egress
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
```
These defaults create a clean starting point: nothing talks unless you intentionally allow it.
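When a namespace should be locked down in both directions, the two defaults can also be combined into a single policy; a minimal sketch (the policy name is arbitrary):

```yaml
# Default deny for both directions in one object.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Whether to use one combined object or two separate ones is mostly a review-and-rollout choice: separate objects let you enable ingress and egress denial in different phases.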
3. Add named service paths
Name policies after business or platform flows, not after vague technical ideas. That makes reviews easier.
Example: allow frontend to backend
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-orders-api
spec:
  podSelector:
    matchLabels:
      app: orders-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: storefront
          podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```

Note the indentation: because `namespaceSelector` and `podSelector` sit inside the same `from` element, they combine as a single AND condition (pods labeled `app: frontend` in namespaces labeled `team: storefront`). Written as two separate list elements, they would be ORed, which is a much broader grant.
4. Keep shared-service exceptions explicit
Most segmentation rollouts fail because teams forget platform dependencies. Common examples include:
- cluster DNS;
- ingress or gateway controller paths;
- certificate managers;
- metrics and log collection agents;
- service-mesh control-plane traffic.
Create these as explicit, reusable policies rather than hidden one-off exceptions.
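Cluster DNS is the exception almost everyone hits first, because a default-deny-egress policy also blocks name resolution. A sketch of a reusable DNS allowance, assuming cluster DNS runs in `kube-system` (adjust the selector if yours does not):

```yaml
# Allow all pods in this namespace to reach cluster DNS in kube-system.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system   # well-known namespace label
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```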
5. Add egress control for sensitive workloads
Egress policy is where the security value often jumps. It can reduce:
- secret exfiltration;
- data exfiltration;
- easy command-and-control callbacks;
- opportunistic lateral probing of internal services.
A staged rollout often works best: start with highly sensitive namespaces first, then expand.
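An egress allowance for a known external dependency can be expressed with `ipBlock`. A hedged sketch: the `app: billing` label, port, and CIDR below are placeholders (203.0.113.0/24 is a documentation range), and CIDR-based rules only work when the dependency has stable addresses:

```yaml
# Allow the billing workload to reach one external service over HTTPS, nothing else.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-payments-saas
spec:
  podSelector:
    matchLabels:
      app: billing            # assumed workload label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder external CIDR
      ports:
        - protocol: TCP
          port: 443
```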
Common pattern library
| Pattern | Use it when | What to watch |
|---|---|---|
| Default deny ingress | any namespace with meaningful isolation goals | make platform paths explicit |
| Default deny egress | sensitive workloads, regulated data, admin components | do not forget DNS and required external services |
| Frontend to backend allow | app-to-app service path | label hygiene matters |
| Ingress controller to app | internet or edge traffic reaches workloads through a controller | scope source namespaces carefully |
| Namespace to namespace allow | multi-service trust relationship | avoid re-creating flat trust by over-broad selectors |
| Egress to external CIDR or service | known external dependency | keep exception ownership clear |
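The "ingress controller to app" row from the table can be sketched as follows, assuming an nginx-style controller deployed in its own `ingress-nginx` namespace (swap the namespace label and port for your edge setup):

```yaml
# Allow only the ingress controller namespace to reach the frontend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller-to-frontend
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed controller namespace
      ports:
        - protocol: TCP
          port: 8080   # assumed container port behind the Service
```

Scoping the source to a single namespace rather than a broad label is what keeps edge traffic from becoming another flat-trust path.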
Important limits and caveats
- NetworkPolicy is primarily a layer 4 construct.
- Behavior outside TCP, UDP, and SCTP may vary by network plugin.
- Policy without enforcement support in the CNI has no practical effect.
- A pod cannot use NetworkPolicy to block access to itself.
- Node-local, host-network, and cloud-network realities still matter outside the policy object.
Review questions
- Which namespaces still have no default-deny baseline?
- Which policies describe real business flows, and which are vague or over-broad?
- Are ingress and egress both considered, or only inbound paths?
- Can a compromised workload still reach unrelated services by default?
- Are platform exceptions tracked and reviewed like any other privileged dependency?
- How do cloud segmentation, ingress design, and service identity align with the policy set?
Common anti-patterns
- turning on cluster-wide default deny with no traffic discovery;
- allowing all egress because "the application needs the internet";
- using broad namespace selectors that silently recreate flat connectivity;
- forgetting that identity and admission controls still matter after segmentation;
- never deleting temporary policy exceptions.
Strong target state
A strong target state is not "the most policies." It is a cluster where:
- namespaces start isolated by default;
- allowed paths are named and reviewable;
- sensitive egress is deliberate;
- platform exceptions are visible debt, not invisible magic.
Author attribution: Ivan Piskunov, 2026 - Educational and defensive-engineering use.