Here’s a fun fact: by default, every pod in your Kubernetes cluster can talk to every other pod. No restrictions. No questions asked.
That database in the production namespace? Your random debug pod can reach it. The payment service? Wide open to that compromised container in the dev namespace.
Kubernetes networking is “default allow.” The fix takes 30 seconds.
## The Problem
Without NetworkPolicies, Kubernetes networking looks like this:
```
┌─────────────────────────────────────────────────┐
│                 Cluster Network                 │
│                                                 │
│  ┌─────────┐    ┌─────────┐    ┌─────────┐      │
│  │ dev pod │◄──►│ staging │◄──►│  prod   │      │
│  └─────────┘    └─────────┘    └─────────┘      │
│       ▲              ▲              ▲           │
│       │              │              │           │
│       └──────────────┴──────────────┘           │
│         Everything talks to everything          │
└─────────────────────────────────────────────────┘
```
One compromised pod = lateral movement to everything.
## The Fix: Default Deny
Add this to every namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production  # Change per namespace
spec:
  podSelector: {}  # Applies to ALL pods in namespace
  policyTypes:
    - Ingress
    - Egress
```
That’s it. An empty `podSelector` (`{}`) selects every pod in the namespace, and because the policy lists both `Ingress` and `Egress` under `policyTypes` without defining any allow rules, all traffic to and from those pods is denied.
Now your namespace looks like this:
```
┌─────────────────────────────────────────────────┐
│              production namespace               │
│                                                 │
│  ┌─────────┐    ┌─────────┐    ┌─────────┐      │
│  │   api   │    │ worker  │    │   db    │      │
│  └─────────┘    └─────────┘    └─────────┘      │
│       🚫             🚫             🚫          │
│  No traffic in or out until explicitly allowed  │
└─────────────────────────────────────────────────┘
```
## But Wait, My Pods Need to Talk
Yes. Now you explicitly allow what’s needed:
### Allow Ingress from Specific Pods
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 5432
```
Now only pods with app: api can reach the database on port 5432.
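To confirm the policy behaves as intended, you can probe the database port from different pods. This is a sketch, not a definitive test: the pod name and service DNS name (`database.production.svc.cluster.local`) are placeholders for whatever your cluster actually runs, and it assumes the pod's image ships `nc` (netcat).

```shell
# check_db_access: probe the database port from a given pod and report the result.
# Pod and service names are examples; assumes the pod's image includes nc.
check_db_access() {
  local pod="$1"
  if kubectl exec "$pod" -n production -- \
      nc -z -w 3 database.production.svc.cluster.local 5432 >/dev/null 2>&1; then
    echo "$pod: reachable"
  else
    echo "$pod: blocked"
  fi
}

# With the policy applied, only pods labelled app: api should print "reachable".
```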
### Allow Egress to External Services
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Allow external HTTPS
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
```
This allows DNS lookups and outbound HTTPS, but blocks connections to internal RFC 1918 address ranges. (Consider also allowing TCP port 53, since DNS falls back to TCP for large responses.)
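A quick way to sanity-check the egress rules is to attempt an external and an internal connection from the same pod. A minimal sketch, assuming a pod whose image includes `curl`; the pod name and URLs are illustrative:

```shell
# check_egress: try an outbound request from a pod and report the result.
# Pod name and URLs are examples; assumes the pod's image includes curl.
check_egress() {
  local pod="$1" url="$2"
  if kubectl exec "$pod" -n production -- \
      curl -s -m 5 -o /dev/null "$url" >/dev/null 2>&1; then
    echo "allowed: $url"
  else
    echo "blocked: $url"
  fi
}

# With the policy above, external HTTPS should succeed while direct
# connections to RFC 1918 addresses should time out, e.g.:
#   check_egress api-pod https://example.com
#   check_egress api-pod http://10.0.0.5:8080
```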
## The Full Starter Kit
Here’s what we deploy to every namespace:
```yaml
---
# 1. Default deny all traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# 2. Allow DNS for all pods (essential)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
---
# 3. Allow ingress from ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
```
Apply it:
```bash
kubectl apply -f networkpolicies/ -n production
kubectl apply -f networkpolicies/ -n staging
# ... repeat for all namespaces
```
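Rather than repeating the command per namespace, a small loop can apply the bundle everywhere. A sketch, assuming the policies live in a local `networkpolicies/` directory; it skips the Kubernetes system namespaces:

```shell
# apply_baseline: apply the starter-kit policies to every namespace
# except the Kubernetes system namespaces.
apply_baseline() {
  local ns
  for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
    case "$ns" in
      kube-system|kube-public|kube-node-lease) continue ;;
    esac
    kubectl apply -f networkpolicies/ -n "$ns"
  done
}
```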
## Automating with Kyverno
Tired of manually adding policies? Use Kyverno to auto-inject:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-deny
spec:
  rules:
    - name: add-default-deny-networkpolicy
      match:
        resources:
          kinds:
            - Namespace
      exclude:
        resources:
          namespaces:
            - kube-system
            - kube-public
      generate:
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        name: default-deny-all
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```
Now every new namespace automatically gets the default deny policy.
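A quick way to confirm the generation rule is working: create a throwaway namespace and check that the policy appears in it. A sketch; the namespace name is arbitrary:

```shell
# verify_generation: create a throwaway namespace, confirm Kyverno generated
# the default-deny policy in it, then clean up. Namespace name is arbitrary.
verify_generation() {
  kubectl create namespace netpol-test
  sleep 2  # give Kyverno a moment to generate the resource
  kubectl get networkpolicy default-deny-all -n netpol-test
  kubectl delete namespace netpol-test
}
```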
## Verifying It Works

### Check Policies

```bash
kubectl get networkpolicies -A
```

### Test Connectivity

Before policy:

```bash
kubectl exec -it test-pod -n dev -- curl -m 5 http://api.production:8080
# Works ✓
```

After policy:

```bash
kubectl exec -it test-pod -n dev -- curl -m 5 http://api.production:8080
# curl: (28) Connection timed out
```

### Visualise with kubectl

```bash
kubectl describe networkpolicy default-deny-all -n production
```
## CNI Support
NetworkPolicies need a CNI that supports them:
| CNI | Support |
|---|---|
| Calico | ✅ Full |
| Cilium | ✅ Full + Extended |
| Weave | ✅ Full |
| Flannel | ❌ None |
| AWS VPC CNI | ⚠️ Needs network policy agent |

If you’re on EKS with the default VPC CNI, enable its network policy support (available in recent versions), install Calico, or switch to Cilium.
## Common Gotchas

### 1. DNS Breaks Everything
Forgot to allow DNS egress? Every pod fails to resolve hostnames. Always include the DNS allow rule.
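When in doubt, check resolution directly from a pod. A minimal sketch; the pod name is a placeholder and it assumes the image includes `nslookup`:

```shell
# check_dns: verify that a pod can still resolve cluster DNS names.
# Pod name is a placeholder; assumes the pod's image includes nslookup.
check_dns() {
  local pod="$1"
  if kubectl exec "$pod" -n production -- \
      nslookup kubernetes.default.svc.cluster.local >/dev/null 2>&1; then
    echo "DNS ok"
  else
    echo "DNS broken - check your egress rules"
  fi
}
```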
### 2. Health Checks Fail
Kubelet health probes come from the node, not from pods. You might need:
```yaml
ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/8  # Your node CIDR
    ports:
      - port: 8080
        protocol: TCP
```
### 3. Metrics Collection Breaks
Prometheus needs to scrape pods. Allow it:
```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            name: monitoring
        podSelector:
          matchLabels:
            app: prometheus
```
### 4. Service Mesh Sidecars

If you’re using Istio or Linkerd, the sidecar proxies need network access too. Mesh authorization policies operate at L7 and are enforced alongside NetworkPolicies, not instead of them, so test carefully.
## The Security Win
With default deny in place:
- Compromised pods can’t scan the network
- Lateral movement requires explicit policy gaps
- Blast radius of any breach is contained
- Compliance auditors are happy
It’s not a silver bullet, but it’s the single highest-impact security control you can add to a Kubernetes cluster in under a minute.
## Summary
```yaml
# Add this to every namespace. No exceptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
Then explicitly allow only what’s needed.
Zero trust isn’t a product. It’s a policy. This is where it starts.
Using Cilium? Check out CiliumNetworkPolicy for L7 rules and DNS-aware policies.