If you’ve ever wondered how your app reaches an external payment gateway or a third-party API from inside the cluster, that’s egress in action.

At a basic level:

  • Ingress = traffic coming in
  • Egress = traffic going out

While ingress is often front-and-center because it deals with public access to your services, egress is just as important—especially when security, compliance, or traffic control are involved.

What egress looks like in practice

Let’s say you’ve got a pod that needs to call an external API:

curl https://api.stripe.com

From the pod’s perspective, this is just a simple HTTPS request. But under the hood, the traffic:

  1. Leaves the pod
  2. Goes through the node’s network stack
  3. Gets SNAT’ed (source network address translated) to use the node’s IP
  4. Exits the cluster to the destination

Unless you configure something else, all outbound traffic is allowed by default, and it exits through the node’s IP.
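You can see this for yourself. A quick check (assuming outbound traffic is currently unrestricted, and using ifconfig.me purely as an example “what is my IP” service):

# Run a throwaway pod and ask an external service which source IP it sees.
# The answer should be the node's (or NAT gateway's) address, not the pod IP.
kubectl run egress-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -s https://ifconfig.me

From the destination’s point of view, every pod on that node looks identical, which is part of what makes unmanaged egress hard to audit.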

That’s where control becomes important.


Why control egress?

If you’re in a regulated environment, or just want to prevent pods from calling arbitrary external services, egress control matters. Common use cases:

  • Security: Prevent pods from calling unknown or malicious IPs
  • Compliance: Ensure only allowed destinations (e.g. specific APIs) are reachable
  • Auditability: Monitor and log which services are communicating outside the cluster
  • Stability: Avoid surprises from rogue services hitting rate limits on external APIs

Common ways to control egress in Kubernetes

There’s no single “egress controller” in Kubernetes. You use a combination of native and external tools depending on how deep you want to go:

1. Network Policies

Kubernetes NetworkPolicies allow you to restrict pod-to-pod and pod-to-external traffic.

Here’s an example that only allows a pod to reach one external IP range:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-stripe-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 34.210.129.0/24

⚠️ Important: This only works if your CNI plugin (like Calico or Cilium) supports egress policies. Not all do.
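A common companion pattern is to start from default-deny and layer allowances on top. One gotcha: as soon as you restrict egress, DNS lookups are blocked too, so the pod can no longer resolve api.stripe.com. Here’s a sketch of a baseline policy that denies everything except cluster DNS (the kube-dns labels below are the usual defaults, but verify them in your cluster):

# Sketch: default-deny egress for payment-service pods, with an explicit
# allowance for cluster DNS so external names still resolve.
# The kubernetes.io/metadata.name label requires a reasonably recent Kubernetes.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-service-egress-baseline
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

NetworkPolicies are additive, so this baseline combines with the allow-stripe-egress policy above: the pod ends up allowed to reach DNS plus that one CIDR, and nothing else.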

2. Egress Gateways (Service Mesh)

If you’re using Istio or another service mesh, you can define Egress Gateways to funnel all external traffic through a controlled node or proxy.

This gives you:

  • Centralized control and logging
  • Fine-grained access policies
  • TLS inspection or traffic shaping if needed
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: stripe-api
spec:
  hosts:
  - "api.stripe.com"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS

This tells the mesh that api.stripe.com is a known external destination, so traffic to it can be allowed, routed, and observed like any other service.
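The ServiceEntry on its own just registers the host. To actually funnel that traffic through an egress gateway, you also bind a Gateway to the egress gateway workload (and route to it with a VirtualService, not shown here). A minimal sketch, assuming the stock istio-egressgateway deployment is installed:

# Sketch: accept TLS traffic for api.stripe.com on the standard
# istio-egressgateway workload and pass it through without terminating TLS.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: stripe-egress-gateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - api.stripe.com
    tls:
      mode: PASSTHROUGH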

3. Egress NAT Gateway (Cloud-native)

In cloud environments like AWS, you can route egress traffic through a NAT Gateway (for private subnets), or use tools like:

  • AWS Egress-only Internet Gateway for IPv6
  • Firewall rules or security groups to restrict outbound destinations
  • VPC routing tables to force traffic through a proxy or filtering service

This is especially useful when you don’t want to rely on app-level control or NetworkPolicies.
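For example, in AWS the route table is what forces a private subnet’s outbound traffic through the NAT gateway. A sketch with placeholder IDs (the rtb-/nat- values below are hypothetical):

# Route all non-VPC-local traffic from this private subnet through a NAT
# gateway; the IDs are placeholders for your own route table and NAT gateway.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0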


How to identify where egress traffic is going

If you want to audit or monitor egress, consider tools like:

  • Cilium Hubble – for real-time flow visibility inside the cluster
  • Istio Telemetry – if you’re already using a service mesh
  • VPC Flow Logs – in AWS, to track traffic leaving your subnets
  • Packet capture (tcpdump) – when in doubt, go low-level (quick example below)
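For that last option, something like this on a node surfaces outbound TLS connections leaving the cluster network (the interface name and the 10.0.0.0/8 in-cluster range are assumptions; adjust for your environment):

# Show outbound HTTPS traffic on the node's primary interface, excluding the
# assumed in-cluster address range so only external destinations appear.
sudo tcpdump -ni eth0 'tcp dst port 443 and not dst net 10.0.0.0/8'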

Final thoughts

Egress in Kubernetes often gets ignored—until someone asks why a pod is calling out to a sketchy IP, or a compliance team demands traffic audits.

The default behavior is wide open, but with the right combination of NetworkPolicies, mesh gateways, and cloud-native routing, you can get full control over what leaves your cluster—and how.

It’s not always simple to set up, but it’s well worth it for anyone running production workloads with external dependencies.

See also