This is the fourth and final post in my blog post series tracing a packet's journey through a Kubernetes cluster. Parts 1-3 covered the foundations, pod-to-pod (east-west) traffic, and north-south traffic through load balancers. This post gets to that scary and often dreaded topic of encryption. We'll keep it simple: encryption without a service mesh.
Encryption in Kubernetes is not a one-size-fits-all setup. It requires decisions about things like where to terminate TLS, what traffic should be encrypted, and how to manage all those certificates. Now that we have a better understanding of the path of a packet, we can make these decisions.
Traffic flowing into, out of, and through a Kubernetes cluster crosses several boundaries:
Client to Load Balancer: external network traffic
Load Balancer to Node: may be external or internal traffic, depending on where the load balancer sits
Node to Ingress Controller Pod: this is the cluster network
Ingress Controller to a Backend Pod: also cluster network
Pod to Pod: also cluster network
Each boundary you cross is a possible encryption termination point. The questions to answer: which hops on this journey need encryption, and where should TLS terminate?
There are three general patterns for handling TLS in a Kubernetes cluster.
Pattern 1: TLS termination at the load balancer. The load balancer holds the TLS certificate and terminates the encryption. The traffic within the cluster then travels unencrypted.
Client ══════════► Load Balancer ──────────► Node ──────────► Pod
          HTTPS         │            HTTP            HTTP
                 TLS terminates
                      here
This is the simplest configuration pattern. The load balancer handles all that pesky certificate stuff, and applications within the cluster receive plain ol’ HTTP.
Configuration example for an AWS ALB (Application Load Balancer) Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789:certificate/abc-123
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  ingressClassName: alb
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
Pros:
Simple certificate management: in this case AWS handles certificate renewal
No certificate configuration happens inside cluster
Allows the load balancer to inspect traffic for use in things like rate limiting, or logging
Lower CPU usage on application pods since they don’t have to spend compute resources on encryption and decryption.
Cons:
Traffic flows unencrypted within cluster network
Requires that you really trust the security of your internal network infrastructure
Not so good when you have compliance/regulatory requirements that mandate full end-to-end encryption
Use cases:
Internal applications where the cluster network is trusted
When the L7 load balancer features (WAF, header inspection) are required
Environments that are not subject to stricter regulatory requirements
Pattern 2: TLS passthrough. In this case, instead of the load balancer terminating the encryption, it forwards the encrypted traffic as is. The TLS encryption terminates at the Ingress Controller running inside the cluster.
Client ══════════► Load Balancer ══════════► Node ══════════► Ingress Pod ──────► Backend Pod
          HTTPS         │            HTTPS           HTTPS          │       HTTP
                 L4 passthrough                              TLS terminates
                 (no decryption)                                   here
The load balancer operates at Layer 4 rather than Layer 7, forwarding the TCP connections without doing any payload inspection.
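In practice, getting that L4 path usually means exposing the ingress controller with a Service of type LoadBalancer that forwards raw TCP. A minimal sketch (the names, namespace, and selector match a typical ingress-nginx install, and the NLB annotation is AWS-specific; adjust for your environment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Ask AWS for a Network Load Balancer (L4) rather than a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
```

Because this load balancer never sees inside the TLS stream, the certificate lives entirely in the cluster.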
Configuration example for an NGINX Ingress terminating TLS in-cluster (behind an L4 load balancer):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
The tls section tells NGINX to terminate TLS for app.example.com. (A related annotation, nginx.ingress.kubernetes.io/ssl-passthrough: "true", instead forwards the raw TLS stream untouched to the backend pod — that moves termination past the ingress controller, which is not what this pattern wants.) The TLS certificate is stored, in this case, in a Kubernetes Secret:
# Create TLS secret from certificate files (fine for a demo; in production, let a controller such as cert-manager create and renew this Secret)
kubectl create secret tls app-tls-secret \
--cert=tls.crt \
--key=tls.key
# View the secret
kubectl get secret app-tls-secret -o yaml # Blech
Pros:
TLS terminates within cluster boundary
The load balancer does not need any certificate access
Traffic remains encrypted between the LB and the ingress controller
Cons:
The load balancer is unable to inspect traffic, preventing use of any L7 features the LB may have.
You still need certificate management within the cluster: typically a controller that issues and renews the certificate from some source.
Ingress controller to backend still unencrypted (by default)
Use cases:
When TLS must terminate within the cluster
When the load balancer should not have access to certificates (but for a managed service, there should already be sufficient security built-in)
When L4 load balancing is sufficient (it can be but it limits your options)
Pattern 3: re-encryption at every hop. In this case, each hop uses its own TLS connection. Traffic is decrypted and re-encrypted at each hop.
Client ══════════► Load Balancer ══════════► Ingress Pod ══════════► Backend Pod
          HTTPS         │            HTTPS         │          HTTPS        │
                  TLS session 1              TLS session 2           TLS session 3
                  terminates,                terminates,             terminates
                  re-encrypts                re-encrypts             here
This requires certificates for all hops. That’s a lot of certificates depending on the hops.
Configuration example (NGINX Ingress with backend HTTPS):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/backend-ca-secret"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: ingress-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8443 # Backend listens on HTTPS
The backend pod must serve TLS (note the mounted certs on volumes):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest
    ports:
    - containerPort: 8443
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: backend-tls-secret
Pros:
Traffic is encrypted at every segment
It is part of Defense in Depth
It can satisfy stricter requirements
Cons:
More complicated certificate management: you have lots of certs to manage and expiration of just one of them can break the chain
Increased latency for the extra time to do TLS handshakes
Higher CPU usage because of encryption/decryption at each hop
Just a bigger operational pain
Use cases:
Regulatory compliance requiring end-to-end encryption (government, financial institutions, health care)
Zero-trust network architectures
You can instead encrypt traffic at the network layer using your CNI plugin. Several CNI plugins can encrypt all pod traffic transparently. Here are some examples; I haven't run these myself, but I present them for your education and encourage you to experiment.
WireGuard is a VPN protocol built into the Linux kernel (5.6+). It encrypts traffic at Layer 3, so applications don't have to care about it.
How it works:
Every cluster node generates a WireGuard keypair
The CNI configures WireGuard tunnels between nodes
All pod-to-pod traffic between nodes is encrypted
Applications see plain TCP/UDP; encryption happens before it gets that far up the OSI stack.
Both Calico and Cilium support this.
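For Calico, the docs show WireGuard being enabled with a single field on the cluster-wide FelixConfiguration — roughly this (again, I haven't run it myself; check the Calico docs for your version):

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true
```

Calico then generates the node keypairs and sets up the tunnels for you.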
IPsec is an older encryption protocol that also operates at Layer 3.
IPsec requires more configuration than WireGuard, including key exchange (IKE) setup. WireGuard is generally preferred for new deployments due to simpler configuration and better performance.
Calico supports IPsec.
┌────────────────────────────────────────────────────────────────────────────────┐
│ CNI-LEVEL ENCRYPTION │
├────────────────────────────────────────────────────────────────────────────────┤
│ │
│ Pod A (10.244.0.5) Pod B (10.244.1.3) │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ Application │ │ Application │ │
│ │ sends HTTP │ │ receives HTTP │ │
│ │ (plaintext) │ │ (plaintext) │ │
│ └────────┬─────────┘ └────────▲─────────┘ │
│ │ │ │
│ ▼ │ │
│ ┌──────────────────┐ ┌───────┴──────────┐ │
│ │ Kernel TCP/IP │ │ Kernel TCP/IP │ │
│ └────────┬─────────┘ └────────▲─────────┘ │
│ │ │ │
│ ▼ │ │
│ ┌──────────────────┐ ┌───────┴──────────┐ │
│ │ WireGuard │ │ WireGuard │ │
│ │ encrypts │ ════════════════► │ decrypts │ │
│ │ (Layer 3) │ encrypted │ (Layer 3) │ │
│ └──────────────────┘ └──────────────────┘ │
│ │
│ - Application code unchanged │
│ - All pod traffic encrypted automatically │
│ - Encryption/decryption in kernel (fast) │
│ - No certificate management per application │
│ │
└────────────────────────────────────────────────────────────────────────────────┘
Pros of CNI encryption:
Transparent to applications
Encrypts all pod traffic, not just HTTP/HTTPS
No per-application certificate management: the CNI manages keys itself
Faster since it happens in the kernel
Simpler to enable in the whole cluster
Cons of CNI encryption:
No mutual authentication at application level: the traffic is simply encrypted
No application-level identity: just IP address
It encrypts cross-node traffic only; traffic between pods on the same node typically stays unencrypted
Use cases:
When you want to encrypt all cluster traffic without a lot of fuss
When you have to have encryption (but not application-level mTLS)
Defense in depth along with application TLS
# Capture traffic between nodes (should be encrypted)
# On Node 1, capture traffic to Node 2
sudo tcpdump -i eth0 -nn host 192.168.1.11 and udp port 51820
# Output (WireGuard):
# 14:30:01.123 IP 192.168.1.10.51820 > 192.168.1.11.51820: UDP, length 128
# The payload is encrypted - you won't see pod IPs or application data
# Capture on WireGuard interface (sees decrypted traffic)
sudo tcpdump -i wireguard.cali -nn host 10.244.1.3
# Output:
# 14:30:01.123 IP 10.244.0.5.45678 > 10.244.1.3.8080: Flags [P.], seq 1:100
# Compare: without encryption, eth0 would show pod IPs directly
For applications that require mutual TLS (mTLS) or certificate-based identity without a service mesh, you can handle TLS directly in the application or in a sidecar container in your pod.
The application handles the TLS itself and loads certificates from mounted Secrets (again, ideally Secrets managed by a controller that renews them).
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest
    ports:
    - containerPort: 8443
    env:
    - name: TLS_CERT_FILE
      value: /etc/tls/tls.crt
    - name: TLS_KEY_FILE
      value: /etc/tls/tls.key
    - name: TLS_CA_FILE
      value: /etc/tls/ca.crt
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: my-app-tls
Application code (Go example):
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load the server certificate and key
	cert, err := tls.LoadX509KeyPair("/etc/tls/tls.crt", "/etc/tls/tls.key")
	if err != nil {
		log.Fatal(err)
	}

	// Load the CA used to verify client certificates
	caCert, err := os.ReadFile("/etc/tls/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caCertPool := x509.NewCertPool()
	caCertPool.AppendCertsFromPEM(caCert)

	// Configure TLS with mutual authentication
	tlsConfig := &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    caCertPool,
		ClientAuth:   tls.RequireAndVerifyClientCert,
	}

	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: tlsConfig,
	}
	log.Fatal(server.ListenAndServeTLS("", ""))
}
This does, however, place the burden of dealing with TLS on the application developers.
cert-manager is a Kubernetes add-on that automates certificate issuance and renewal within the cluster.
Installing cert-manager, example:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
Create a Certificate Authority (self-signed for internal use):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: internal-ca
  secretName: internal-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: internal-ca-issuer
spec:
  ca:
    secretName: internal-ca-secret
Issue certificates for applications:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-cert
  namespace: default
spec:
  secretName: my-app-tls
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days before expiry
  subject:
    organizations:
    - my-company
  commonName: my-app.default.svc.cluster.local
  dnsNames:
  - my-app
  - my-app.default
  - my-app.default.svc
  - my-app.default.svc.cluster.local
  issuerRef:
    name: internal-ca-issuer
    kind: ClusterIssuer
# Verify certificate was issued
kubectl get certificate my-app-cert
# Output:
# NAME READY SECRET AGE
# my-app-cert True my-app-tls 5m
# View certificate details
kubectl get secret my-app-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout
For external certificates, Let's Encrypt is a time-tested way to handle it:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-example-com
  namespace: default
spec:
  secretName: app-example-com-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - app.example.com
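In practice you often don't even write the Certificate by hand: cert-manager watches Ingress resources, and an annotation naming the issuer makes it create (and renew) the Certificate for you. Roughly:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # cert-manager sees this and creates a Certificate for the tls hosts below
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-com-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```

The certificate lands in the named Secret, and the ingress controller picks it up from there.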
Network Policies are a different level of security and are not encryption, but they are complementary. They determine which pods can communicate with which other pods at Layer 3/4. By default, all pods can talk to all other pods, which is often undesirable.
Network Policies are resources that define ingress and egress rules for pods. The CNI plugin will enforce these rules, for example using iptables rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to: # Allow DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
What this policy does:
Applies to pods with the label app: backend
Allows ingress only from pods with label app: frontend, on port 8080
Allows egress only to pods with label app: database, on port 5432
Allows egress to kube-dns for DNS resolution, which is kinda important!
As mentioned earlier, pods accept traffic from any source by default. You can set a default deny policy and then allow only the traffic you explicitly permit:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {} # Applies to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
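One gotcha with default-deny egress: it also blocks DNS, so nothing can resolve service names anymore. A common companion policy (a sketch; the kube-dns label matches most clusters, but verify yours) re-allows DNS for every pod in the namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {} # All pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

Policies are additive, so this combines with default-deny-all rather than replacing it.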
Limitations:
NetworkPolicy operates at L3/L4 (IP addresses and ports)
There is no application-layer (L7) filtering (HTTP paths, headers)
Pod identity is IP-based, not certificate-based
It does not encrypt traffic
It requires CNI support (not all CNIs implement NetworkPolicy, notably Flannel)
# Check if CNI supports NetworkPolicy
kubectl get pods -n kube-system -l k8s-app=calico-node
# or
kubectl get pods -n kube-system -l k8s-app=cilium
# Test connectivity (should be blocked by policy)
kubectl exec -it frontend-pod -- curl -m 5 http://backend:8080
# Output: curl: (28) Connection timed out
# Test allowed connectivity
kubectl exec -it allowed-pod -- curl -m 5 http://backend:8080
# Output: HTTP 200 OK
# View iptables rules created by NetworkPolicy (Calico)
sudo iptables -L cali-pi-xxxx -n -v
To follow a true defense-in-depth approach, you should combine multiple layers:
TLS at the edge: HTTPS from clients to load balancer
CNI Encryption: WireGuard for all pod-to-pod traffic (it just does it)
Network Policies: Restricting which pods can communicate to each other
Application TLS (not always needed): mTLS for sensitive services and in higher regulatory environments
┌────────────────────────────────────────────────────────────────────────────────┐
│ DEFENSE IN DEPTH: COMBINED APPROACH │
├────────────────────────────────────────────────────────────────────────────────┤
│ │
│ Internet Cluster │
│ │
│ ┌────────┐ ┌──────────┐ ┌────────────────────────────────────────┐ │
│ │ Client │ │ LB │ │ │ │
│ └───┬────┘ └────┬─────┘ │ ┌──────────┐ ┌──────────┐ │ │
│ │ │ │ │ Ingress │ │ Backend │ │ │
│ │ HTTPS │ HTTPS │ │ Pod │ │ Pod │ │ │
│ │◄═════════════►│◄═════════►│◄═│ │◄═════│ │ │ │
│ │ TLS 1.3 │ TLS 1.3 │ └──────────┘ └──────────┘ │ │
│ │ │ │ │ │ │ │
│ │ │ │ │ WireGuard │ │ │
│ │ │ │ │◄═══════════════►│ │ │
│ │ │ │ │ encrypted │ │ │
│ │ │ │ │ │
│ │ │ │ NetworkPolicy: only frontend │ │
│ │ │ │ can reach backend on 8080 │ │
│ │ │ │ │ │
│ │ │ └────────────────────────────────────────┘ │
│ │
│ Layer: Edge TLS Ingress TLS CNI Encryption NetworkPolicy │
│ (HTTPS) (HTTPS) (WireGuard) (L3/L4 ACL) │
│ │
└────────────────────────────────────────────────────────────────────────────────┘
Do you need to encrypt traffic inside the cluster?
├── No → TLS termination at load balancer
└── Yes
├── Is compliance satisfied by network-layer encryption?
│ ├── Yes → CNI encryption (WireGuard/IPsec), Simple
│ └── No (need application-level identity)
│ ├── Can you use a service mesh?
│ │ ├── Yes → Istio/Linkerd mTLS (takes care of a lot of the mess for you)
│ │ └── No → Application TLS with cert-manager (blech)
│ └── Need E2E encryption?
│ └── Yes → Re-encryption at each hop
└── Do you need to restrict which pods can communicate?
└── Yes → Add Network Policies
# Test TLS connection to a service
openssl s_client -connect app.example.com:443 -servername app.example.com
# View certificate details
echo | openssl s_client -connect app.example.com:443 2>/dev/null | openssl x509 -text -noout
# Check certificate expiry
kubectl get secret my-app-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -enddate -noout
# Output: notAfter=Mar 15 12:00:00 2024 GMT
# View cert-manager certificate status
kubectl describe certificate my-app-cert
# Look for Ready condition and any error messages
# View cert-manager logs
kubectl logs -n cert-manager -l app=cert-manager -f
# Check WireGuard status on node
sudo wg show
# Verify peers are connected and traffic is flowing
# Check for WireGuard errors in CNI logs
# Calico:
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i wireguard
# Cilium:
kubectl logs -n kube-system -l k8s-app=cilium | grep -i wireguard
# Verify kernel module is loaded
lsmod | grep wireguard
# Output: wireguard 81920 0
# Check if traffic is actually encrypted (should see UDP 51820)
sudo tcpdump -i eth0 -nn udp port 51820
# List all network policies
kubectl get networkpolicies -A
# Describe a specific policy
kubectl describe networkpolicy backend-policy
# Test connectivity from a debug pod
kubectl run debug --rm -it --image=busybox -- wget -qO- --timeout=5 http://backend:8080
# Check CNI logs for policy enforcement
# Calico:
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i policy
# Cilium:
cilium policy get
cilium monitor --type policy-verdict
Encryption in Kubernetes involves decisions at several levels:
Edge encryption (from client to cluster):
TLS terminates at the load balancer: simplest, but traffic unencrypted inside cluster
TLS passthrough: TLS terminates at Ingress Controller within cluster
Re-encryption: TLS at every hop, the most secure but more to manage
Cluster encryption (from pod to pod):
CNI encryption (WireGuard/IPsec): transparent to applications, encrypts all cross-node pod traffic
Application TLS: application controls certificates, enables mTLS
Service mesh (not covered here): automates mTLS with sidecars (lots of batteries included and more options for filtering traffic)
Network access control:
Network Policies: L3/L4 rules restricting which pods can communicate
This is NOT encryption, but a complementary security layer to go along with encryption.
This all depends on compliance requirements, tolerance for operational complexity, and whatever threat model you are basing your security policies on. In a lot of cases, TLS at the edge combined with CNI encryption is more than enough, and far more manageable. For environments requiring application-level identity and mTLS, application TLS with cert-manager or a service mesh is necessary.
Ingress TLS: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
Securing a Cluster: https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/
cert-manager Documentation: https://cert-manager.io/docs/
Installation: https://cert-manager.io/docs/installation/
ACME Issuer: https://cert-manager.io/docs/configuration/acme/
CA Issuer: https://cert-manager.io/docs/configuration/ca/
Let’s Encrypt: https://letsencrypt.org/
ACME Protocol: https://datatracker.ietf.org/doc/html/rfc8555
Calico WireGuard: https://docs.tigera.io/calico/latest/network-policy/encrypt-cluster-pod-traffic
Cilium Encryption: https://docs.cilium.io/en/stable/security/network/encryption/
WireGuard: https://www.wireguard.com/
Network Policies: https://kubernetes.io/docs/concepts/services-networking/network-policies/
Network Policy Recipes: https://github.com/ahmetb/kubernetes-network-policy-recipes
Mozilla SSL Configuration Generator: https://ssl-config.mozilla.org/
TLS 1.3 RFC: https://datatracker.ietf.org/doc/html/rfc8446
TLS/HTTPS: https://kubernetes.github.io/ingress-nginx/user-guide/tls/
Backend HTTPS: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
SSL Passthrough: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough
NIST Cryptographic Standards: https://csrc.nist.gov/projects/cryptographic-standards-and-guidelines
CIS Kubernetes Benchmark: https://www.cisecurity.org/benchmark/kubernetes