Pod Security Standards Enforcement - The PSP Replacement That Actually Works
PodSecurityPolicies (PSPs) were removed in Kubernetes 1.25. If you’re still figuring out the replacement, this is it: Pod Security Standards (PSS) with the built-in Pod Security Admission (PSA) controller.
Unlike PSPs, Pod Security Standards are simple: three profiles (Privileged, Baseline, Restricted) applied at the namespace level via labels. No custom resources, no RBAC bindings, no third-party controllers required.
This post covers how PSS works, how to migrate from PSPs, and production patterns for enforcement.
TL;DR
- Three profiles: Privileged (unrestricted), Baseline (prevent escalations), Restricted (hardened)
- Enforced via namespace labels - no CRDs needed
- Three modes: enforce (block), audit (log), warn (warn the user)
- Built into Kubernetes since 1.23, stable since 1.25
- Use audit mode first to find violations before enforcing
Code Repository: All code from this post is available at github.com/moabukar/blog-code/pod-security-standards
The Three Profiles
```
┌─────────────────────────────────────────────────────────────────┐
│                     Pod Security Profiles                       │
└─────────────────────────────────────────────────────────────────┘

    PRIVILEGED            BASELINE            RESTRICTED
         │                    │                    │
         ▼                    ▼                    ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│ No restrictions │  │ Prevent known   │  │ Current best    │
│                 │  │ privilege       │  │ practices       │
│ • hostNetwork   │  │ escalations     │  │                 │
│ • hostPID       │  │                 │  │ • runAsNonRoot  │
│ • privileged    │  │ • No hostPath   │  │ • drop ALL caps │
│ • anything      │  │ • No privileged │  │ • seccomp       │
│                 │  │ • No hostPorts  │  │ • read-only fs  │
└─────────────────┘  └─────────────────┘  └─────────────────┘
         │                    │                    │
   System/Infra        Most Workloads      Security-Critical
```
Privileged
No restrictions. Use for:
- System components (CNI, CSI drivers)
- Monitoring agents that need host access
- Anything requiring elevated privileges
Baseline
Prevents known privilege escalations. Blocks:
- Privileged containers
- Host namespaces (network, PID, IPC)
- HostPath volumes
- Host ports
- Dangerous capabilities
Good for most workloads.
Restricted
Full hardening. Requires:
- Non-root user
- No privilege escalation
- Drop all capabilities (except NET_BIND_SERVICE)
- Seccomp profile set
- Read-only root filesystem (recommended)
Use for security-critical applications.
Enforcement Modes
Each profile can be applied in three modes:
| Mode | Behavior |
|---|---|
| `enforce` | Reject pods that violate the policy |
| `audit` | Log violations but allow the pod |
| `warn` | Send a warning to the user but allow the pod |
Recommended rollout: warn → audit → enforce
Namespace Labels
Apply policies using namespace labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce the restricted profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Audit baseline violations
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: latest
    # Warn on baseline violations
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: latest
```
Quick Labels
```bash
# Enforce baseline on a namespace
kubectl label namespace myapp \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest

# Add audit for restricted
kubectl label namespace myapp \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest
```
Profile Requirements
Baseline Profile - What’s Blocked
```yaml
# BLOCKED - privileged container
spec:
  containers:
  - securityContext:
      privileged: true      # ❌

# BLOCKED - host namespaces
spec:
  hostNetwork: true         # ❌
  hostPID: true             # ❌
  hostIPC: true             # ❌

# BLOCKED - hostPath volume
spec:
  volumes:
  - name: host-vol
    hostPath:               # ❌
      path: /etc

# BLOCKED - host ports
spec:
  containers:
  - ports:
    - hostPort: 8080        # ❌

# BLOCKED - dangerous capabilities
spec:
  containers:
  - securityContext:
      capabilities:
        add:
        - SYS_ADMIN         # ❌
        - NET_RAW           # ❌
```
Restricted Profile - What’s Required
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-compliant
spec:
  securityContext:
    runAsNonRoot: true                   # ✓ Required
    seccompProfile:
      type: RuntimeDefault               # ✓ Required
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false    # ✓ Required
      capabilities:
        drop:
        - ALL                            # ✓ Required
      readOnlyRootFilesystem: true       # Recommended
      runAsNonRoot: true                 # ✓ Required (if not set at pod level)
```
Migration from PSPs
Step 1: Audit Current State
Before migrating, understand what PSPs allow:
```bash
# List all PSPs
kubectl get psp

# Check which pods use which PSPs
kubectl get pods -A -o custom-columns=\
'NAMESPACE:.metadata.namespace,NAME:.metadata.name,PSP:.metadata.annotations.kubernetes\.io/psp'
```
Step 2: Map PSPs to Profiles
| PSP Characteristic | Profile |
|---|---|
| `privileged: true` | Privileged |
| `hostNetwork`/`hostPID`/`hostIPC: true` | Privileged |
| `allowedHostPaths` defined | Baseline or Privileged |
| `runAsUser: MustRunAsNonRoot` | Restricted |
| `requiredDropCapabilities: ALL` | Restricted |
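The mapping above is mechanical enough to script when you have dozens of PSPs. Here is a hypothetical helper (not part of any official migration tooling) that takes a PSP spec as a dict and suggests the closest profile; the heuristics mirror the table, with the caveat that baseline blocks hostPath entirely, so a PSP permitting hostPath really needs the privileged profile:

```python
def suggest_profile(psp_spec: dict) -> str:
    """Suggest the closest Pod Security Standards profile for a PSP spec.

    Heuristic sketch only: the real PSP API has many more fields
    (allowedCapabilities, volumes, seLinux, etc.) worth inspecting.
    """
    # Privileged containers or host namespaces map straight to privileged
    if psp_spec.get("privileged") or any(
        psp_spec.get(k) for k in ("hostNetwork", "hostPID", "hostIPC")
    ):
        return "privileged"

    # baseline blocks hostPath volumes outright, so allowing any
    # host path also requires the privileged profile
    if psp_spec.get("allowedHostPaths"):
        return "privileged"

    # PSPs that already force non-root and drop ALL are restricted-ready
    run_as_rule = psp_spec.get("runAsUser", {}).get("rule")
    drops = psp_spec.get("requiredDropCapabilities", [])
    if run_as_rule == "MustRunAsNonRoot" and "ALL" in drops:
        return "restricted"

    return "baseline"
```

Feed it the output of `kubectl get psp -o json` and you get a starting point for namespace labels, which you then verify in audit mode.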
Step 3: Test with Audit Mode
Apply audit labels to namespaces:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Keep PSP working, but audit what would happen with PSS
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Check audit logs:
```bash
kubectl logs -n kube-system -l component=kube-apiserver | grep "pod-security"
```
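If you ship the API server audit log somewhere queryable, a short script can tally would-be violations per namespace so you know where to focus. This sketch assumes a JSON-lines audit log and keys off the `pod-security.kubernetes.io/audit-violations` annotation that the admission controller attaches to audit events; the exact log format depends on your audit policy, so treat the field paths as assumptions to verify:

```python
import json
from collections import Counter

# Annotation the Pod Security Admission controller attaches to audit events
AUDIT_KEY = "pod-security.kubernetes.io/audit-violations"

def violations_by_namespace(lines):
    """Tally PSA audit violations per namespace from JSON-lines audit events."""
    counts = Counter()
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or non-JSON log lines
        if AUDIT_KEY in event.get("annotations", {}):
            ns = event.get("objectRef", {}).get("namespace", "<cluster-scoped>")
            counts[ns] += 1
    return counts

# Usage (path is illustrative):
#   with open("audit.log") as f:
#       for ns, n in violations_by_namespace(f).most_common():
#           print(f"{ns}: {n}")
```

Namespaces with zero hits over a representative window are safe to move from audit to enforce.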
Step 4: Fix Violations
Common fixes:
```yaml
# Add seccomp profile
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault

# Add non-root requirement
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000

# Drop capabilities
spec:
  containers:
  - securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```
Step 5: Enforce
Once violations are fixed:
```bash
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest \
  --overwrite
```
Production Patterns
Pattern 1: Tiered Namespaces
```yaml
# System namespace - privileged
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
# Platform namespace - baseline
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
---
# Application namespace - restricted
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
```
Pattern 2: Gradual Rollout with Terraform
```hcl
locals {
  namespace_policies = {
    "kube-system"   = "privileged"
    "monitoring"    = "baseline"
    "logging"       = "baseline"
    "ingress-nginx" = "baseline"
    "cert-manager"  = "baseline"
    "production"    = "restricted"
    "staging"       = "restricted"
    "development"   = "baseline"
  }
}

resource "kubernetes_namespace" "namespaces" {
  for_each = local.namespace_policies

  metadata {
    name = each.key

    labels = {
      "pod-security.kubernetes.io/enforce"         = each.value
      "pod-security.kubernetes.io/enforce-version" = "latest"
      "pod-security.kubernetes.io/audit"           = each.value == "restricted" ? "restricted" : "baseline"
      "pod-security.kubernetes.io/audit-version"   = "latest"
    }
  }
}
```
Pattern 3: Default Restricted, Exceptions via Labels
Set the cluster-wide default to restricted via the API server's admission configuration (passed with the --admission-control-config-file flag), then exempt or relabel the namespaces that need more privilege:
```yaml
# In the kube-apiserver admission configuration
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces:
      - kube-system
      - kube-node-lease
      - kube-public
```
Exemptions
For workloads that legitimately need elevated privileges:
Namespace Exemptions
Configure in the admission configuration:
```yaml
apiVersion: pod-security.admission.config.k8s.io/v1
kind: PodSecurityConfiguration
exemptions:
  namespaces:
  - kube-system
  - istio-system
  - monitoring
```
User Exemptions
For specific service accounts:
```yaml
exemptions:
  usernames:
  - system:serviceaccount:kube-system:*
  - system:serviceaccount:monitoring:prometheus
```
RuntimeClass Exemptions
For workloads using specific runtimes:
```yaml
exemptions:
  runtimeClasses:
  - kata
  - gvisor
```
Validating Pods
Dry-Run Testing
Test if a pod would be allowed:
```bash
# Check against the namespace's enforced profile
kubectl run test --image=nginx --dry-run=server -n production
```
If it fails:
```
Error from server (Forbidden): pods "test" is forbidden:
violates PodSecurity "restricted:latest":
allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false),
unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]),
runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true),
seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
```
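You can also pre-screen manifests in CI before they ever reach the API server. The sketch below checks only the four restricted requirements listed in the error message above; it is deliberately partial (the real admission controller also checks host namespaces, volume types, initContainers, and more) and is no substitute for a server-side dry run:

```python
def restricted_violations(pod_spec: dict) -> list:
    """Return human-readable violations of four key restricted checks.

    Partial, illustrative check: runAsNonRoot and seccompProfile may be
    satisfied at either the pod or the container level.
    """
    problems = []
    pod_sc = pod_spec.get("securityContext", {})
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        name = c.get("name", "?")
        if sc.get("allowPrivilegeEscalation") is not False:
            problems.append(f"{name}: allowPrivilegeEscalation must be false")
        if "ALL" not in sc.get("capabilities", {}).get("drop", []):
            problems.append(f"{name}: must drop ALL capabilities")
        if not (sc.get("runAsNonRoot") or pod_sc.get("runAsNonRoot")):
            problems.append(f"{name}: runAsNonRoot must be true")
        seccomp = sc.get("seccompProfile") or pod_sc.get("seccompProfile") or {}
        if seccomp.get("type") not in ("RuntimeDefault", "Localhost"):
            problems.append(f"{name}: seccompProfile must be RuntimeDefault or Localhost")
    return problems
```

Run it over the `spec` of each pod template in your manifests and fail the pipeline on a non-empty result.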
Compliant Pod Template
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: compliant-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: cache
      mountPath: /var/cache
  volumes:
  - name: tmp
    emptyDir: {}
  - name: cache
    emptyDir: {}
```
Deployment Template
A deployment that passes restricted:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      serviceAccountName: secure-app
      automountServiceAccountToken: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        runAsGroup: 65534
        fsGroup: 65534
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        image: myapp:1.0.0
        ports:
        - containerPort: 8080
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}
```
Third-Party Alternatives
If built-in PSA isn’t enough, consider:
Kyverno
Policy-as-code with custom rules:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-nonroot
spec:
  validationFailureAction: enforce
  rules:
  - name: run-as-non-root
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Containers must run as non-root"
      pattern:
        spec:
          containers:
          - securityContext:
              runAsNonRoot: true
```
OPA Gatekeeper
Rego-based policies:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]
```
When to Use Alternatives
- Need custom policies beyond the three profiles
- Want to enforce on resources other than Pods
- Need mutation (auto-fix violations)
- Require detailed audit trails
Troubleshooting
Pod Rejected - Check Why
```bash
kubectl describe pod <pod-name> -n <namespace>
```
Look for events:
```
Events:
  Type     Reason  Message
  ----     ------  -------
  Warning  Failed  violates PodSecurity "restricted:latest":
                   allowPrivilegeEscalation != false
```
Check Namespace Labels
```bash
kubectl get namespace production --show-labels
```
Check Audit Logs
```bash
# For managed Kubernetes, check control plane logs
# For self-managed, check kube-apiserver logs
kubectl logs -n kube-system -l component=kube-apiserver | grep "pod-security"
```
Common Fixes
| Violation | Fix |
|---|---|
| `allowPrivilegeEscalation` | Add `securityContext.allowPrivilegeEscalation: false` |
| `runAsNonRoot` | Add `securityContext.runAsNonRoot: true` and ensure the image runs as non-root |
| `capabilities` | Add `securityContext.capabilities.drop: ["ALL"]` |
| `seccompProfile` | Add `securityContext.seccompProfile.type: RuntimeDefault` |
| `hostPath` | Replace with `emptyDir`, `configMap`, or a PVC |
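Most of these fixes are mechanical, so they can be applied in bulk. A hypothetical patch helper for pod specs, sketched below; note it cannot verify the image actually runs as non-root, so forcing `runAsNonRoot: true` on a root-only image just moves the failure from admission to container start — always review the result:

```python
def patch_for_restricted(pod_spec: dict) -> dict:
    """Apply the mechanical fixes from the table above, in place.

    Illustrative only: does not touch initContainers or hostPath volumes,
    which need a human decision about replacement storage.
    """
    # Pod-level defaults: non-root and the runtime's default seccomp profile
    pod_sc = pod_spec.setdefault("securityContext", {})
    pod_sc.setdefault("runAsNonRoot", True)
    pod_sc.setdefault("seccompProfile", {"type": "RuntimeDefault"})

    for c in pod_spec.get("containers", []):
        sc = c.setdefault("securityContext", {})
        sc["allowPrivilegeEscalation"] = False
        # Drop ALL capabilities without duplicating an existing entry
        drop = sc.setdefault("capabilities", {}).setdefault("drop", [])
        if "ALL" not in drop:
            drop.append("ALL")
    return pod_spec
```

Pair it with a YAML loader over your manifests and re-run the server-side dry run to confirm admission passes.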
Best Practices
1. Start with Audit Mode
Never enforce immediately. Audit first:
```bash
kubectl label namespace myapp \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
```
Wait a week, review logs, then enforce.
2. Use Version Pinning in Production
Pin to a specific version to avoid surprise changes:
```yaml
pod-security.kubernetes.io/enforce-version: v1.28
```
Use latest only in development.
3. Document Exemptions
If a workload needs privileged access, document why:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    pod-security.kubernetes.io/enforce: baseline
  annotations:
    security.example.com/exemption-reason: "Prometheus node-exporter requires hostPath for metrics"
    security.example.com/exemption-approved-by: "security-team"
```
4. Combine with Network Policies
PSS restricts pod capabilities; Network Policies restrict network access. Use both:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```
Conclusion
Pod Security Standards are the official replacement for PSPs. They’re simpler (three profiles, namespace labels) and built into Kubernetes. Start with audit mode, fix violations, then enforce.
For most workloads, Baseline is enough. For security-critical applications, use Restricted. Reserve Privileged for system components that truly need it.