IP addresses are a bad proxy for identity. They tell you nothing about the workload behind them. Inside a Kubernetes cluster, that problem gets even worse: pods come and go, CIDR ranges get reused, and a pod in the payments namespace and a pod in the sandbox namespace might look identical at the network layer.
To get around the fragility of IPs, you typically secure internal service-to-service connections by terminating mTLS or requiring the connecting service to present a JWT or API key. But if you want to enforce access control based on what is actually connecting to your internal services, you need real workload identity.
That's exactly what pod identity variables give you. Starting with v0.22.1 of the ngrok Kubernetes Operator, Traffic Policy on Kubernetes-bound endpoints can now read pod metadata directly: name, namespace, UID, and custom annotations.
If a pod in namespace-a gets compromised and tries to reach a service it shouldn't, a namespace-scoped policy rejects the connection before it ever reaches your upstream. Either the pod is who it claims to be, via Kubernetes-attested identity, or the connection is denied.
When a connection arrives on a Kubernetes-bound endpoint, ngrok resolves the identity of the originating pod and makes it available under the conn.k8s.pod.* namespace in your Traffic Policy expressions.
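As a quick sketch of what that looks like (namespace and error_code appear in the examples below; conn.k8s.pod.metadata.name is an assumption here, following the same pattern for the pod fields listed above), a rule could pin an endpoint to a single expected pod:

```yaml
on_tcp_connect:
  - expressions:
      # Assumed variable: conn.k8s.pod.metadata.name, following the
      # conn.k8s.pod.metadata.* pattern used for namespace below.
      # "payments-worker-0" is a hypothetical pod name.
      - conn.k8s.pod.metadata.name != "payments-worker-0"
    actions:
      - type: deny
```

In practice you'd rarely pin a single pod name like this; the namespace-scoped policies below are the more common shape.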
A few things worth knowing upfront:
- Pod identity variables only work on endpoints with the kubernetes binding. They're not available on public or internal endpoints.
- Pods running with hostNetwork: true share the node's IP, so they can't be uniquely identified, and pod identity won't work for them.
- Pod identity is resolved at connection time, and there are cases where identity information is unavailable: the brief window immediately after a pod starts, before it's fully registered, or CNI configurations that don't preserve source IPs. In those cases, conn.k8s.pod.metadata.error_code gets set instead of the metadata variables, and your policy should always handle this. The two approaches are fail-closed (recommended for sensitive endpoints) and fail-open.
Fail-closed, which you should make your default, denies the connection if identity can't be resolved:
```yaml
on_tcp_connect:
  - expressions:
      - conn.k8s.pod.metadata.error_code != ""
    actions:
      - type: deny
  - expressions:
      - conn.k8s.pod.metadata.namespace != "payments"
    actions:
      - type: deny
  - actions:
      - type: forward-internal
        config:
          url: http://my-service.my-namespace.internal
```

Fail-open still enforces identity when it's present, but lets connections through if identity is unavailable, which makes it useful for less sensitive endpoints or during initial rollout:
```yaml
on_tcp_connect:
  - expressions:
      - conn.k8s.pod.metadata.error_code == "" && conn.k8s.pod.metadata.namespace != "payments"
    actions:
      - type: deny
  - actions:
      - type: forward-internal
        config:
          url: http://my-service.my-namespace.internal
```

Fail-open is a useful escape hatch, but a CNI misconfiguration or a transient resolution failure will silently bypass your policy.
The most common use case is locking a service to a specific namespace so only pods in that namespace can connect. Here's a complete AgentEndpoint that enforces this for a payments service:
```yaml
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: payments-service
spec:
  url: https://payments-service.internal.example.com
  upstream:
    url: http://payments-svc.payments:8080
  trafficPolicy:
    inline:
      on_tcp_connect:
        - expressions:
            - conn.k8s.pod.metadata.error_code != ""
          actions:
            - type: deny
        - expressions:
            - conn.k8s.pod.metadata.namespace != "payments"
          actions:
            - type: deny
        - actions:
            - type: forward-internal
              config:
                url: http://payments-svc.payments.internal
```

Any connection from outside the payments namespace is rejected at the TCP layer, before TLS negotiation, HTTP parsing, or your upstream service ever sees the request.
Pod identity policies pair well with namespace-scoped installations of the ngrok Kubernetes Operator: the infrastructure-level isolation you get from separate installs is backed by policy-level enforcement at the connection layer.
The docs cover more patterns in depth, including allowlisting specific pods by name, annotation-based policies, and multi-tenant cluster isolation.
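To give a flavor of the annotation pattern (a hedged sketch: the document states that custom annotations are exposed, and this assumes they surface as a map under conn.k8s.pod.metadata.annotations; the "acme.dev/tier" key is hypothetical), a rule might deny pods that lack a required annotation value:

```yaml
on_tcp_connect:
  - expressions:
      # Assumed shape: pod annotations as a CEL map under
      # conn.k8s.pod.metadata.annotations; "acme.dev/tier" is a
      # hypothetical annotation key used for illustration.
      - "!('acme.dev/tier' in conn.k8s.pod.metadata.annotations) || conn.k8s.pod.metadata.annotations['acme.dev/tier'] != 'internal'"
    actions:
      - type: deny
```

As with the namespace examples above, you'd pair this with a fail-closed error_code check so unresolvable identities don't slip through.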
Pod identity variables require ngrok Kubernetes Operator v0.22.1 or later. See the Updates & Upgrades guide for instructions on upgrading if you're on an older version.