Kong API Gateway (Hybrid mode)
Precondition
The cluster has to be deployed with the HashiCorp Vault sample and the HAProxy Ingress Controller. Otherwise this demo will not work, because it relies on the autogenerated TLS certificates.
Kong Enterprise: If you want to install Kong in Enterprise mode, you will need to add a file named license.json in the folder examples/kong-gateway.
DNS preparation
You can test Kong Ingress by adding the following entry to your /etc/hosts file:
```
# Append to /etc/hosts
[...]
127.0.0.1 httpbin.example.com
127.0.0.1 httpbin-tls.example.com
127.0.0.1 postgres.example.com
127.0.0.1 openbao.example.com
127.0.0.1 kong-manager.example.com
127.0.0.1 kong-admin.example.com
```

Installation
You can start the installation of Kong API Gateway with the included shell script:
```
cd examples/kong-gateway
bash setup.sh
```

The following components are installed by setup.sh:
- Creates the Kubernetes Gateway API CRDs
- Installs the OpenBao Helm chart and the cert-manager Helm chart (see README)
- Deploys Kong API Gateway via the Kong Ingress Helm chart
  - Installs a Kong Control Plane (Kong Ingress Controller, KIC) instance into the namespace kong
  - Installs two Kong Gateway node (Gateway Proxy) instances into the namespace kong
- Creates the Gateway API configuration
  - GatewayClass resource for Kong Gateway
  - Gateway configuration for the Kong Gateway instance (KIC and Gateway Proxies)
  - httpbin HTTPRoute
After that you can open httpbin at https://httpbin.example.com:8081. Once you have added the Root CA to your system trust store or to your browser, the connection should be reported as secure. You can find the Root CA certificate under:
examples/openbao/root-certs/rootCACert.pem
TLS Passthrough (TLSRoute)
The showcase also demonstrates TLS passthrough via a TLSRoute. This pattern is relevant whenever Kong must not terminate TLS — common examples are:
- Databases (PostgreSQL, MySQL) where the client connects directly over TLS and the DB presents its own certificate
- Services with mutual TLS (mTLS) where the backend validates the client certificate, which Kong cannot do if it terminates TLS first
- Compliance requirements demanding end-to-end encryption without an intermediary decrypting traffic
In this demo a backend service holds its own TLS certificate issued by the internal Vault PKI. Kong routes the connection based on the SNI hostname (httpbin-tls.example.com) without ever seeing the plaintext — the same way it would route to a TLS-enabled database.
```
Client ──TLS──▶ Kong (port 9443, SNI routing) ──TLS──▶ Backend (Vault cert)
                        ↑ no decryption here
```

Required configuration
For KIC to accept the TLS protocol on the Gateway listener, the stream port must be configured with the ssl parameter in values.yaml:
```yaml
proxy:
  stream:
    - containerPort: 9443
      servicePort: 9443
      protocol: TCP
      parameters:
        - ssl
```

This causes Kong to advertise the listener as SSL=true via its Admin API, which is what KIC checks to classify the listener as TLSProtocolType. Without this flag KIC reports UnsupportedProtocol and the TLSRoute stays in NoMatchingParent. The ssl flag does not mean Kong terminates TLS — KIC still programs the route as passthrough.
Note: After a Helm upgrade that changes Kong's deployment, KIC does not automatically re-reconcile the Gateway. Trigger it manually:
```
kubectl annotate gateway kong -n kong reconcile-trigger="$(date +%s)" --overwrite
```
Test via port-forward
```
kubectl port-forward -n kong svc/kong-gateway-proxy 9443:9443

curl --cacert ../vault/root-certs/bundle.pem \
  --resolve 'httpbin-tls.example.com:9443:127.0.0.1' \
  https://httpbin-tls.example.com:9443/
```

The --cacert flag provides the Vault Root CA so curl can verify the backend certificate. The response confirms that TLS was terminated at the backend — not at Kong:
```
{"service":"tls-backend","tls":"terminated-here","issuer":"vault-pki"}
```

PostgreSQL TLS Passthrough (direct SSL)
The showcase also includes a PostgreSQL 17 backend to demonstrate TLS passthrough with real non-HTTP traffic — the most common real-world use case for this pattern.
Why PostgreSQL requires direct SSL
Standard PostgreSQL SSL starts with a proprietary SSLRequest message before the TLS handshake. This means the TLS ClientHello — which carries the SNI hostname Kong needs to route — is never the first byte on the wire. Kong sees an unknown protocol and cannot route the connection.
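The difference is visible in the very first byte on the wire. A minimal sketch (the SSLRequest magic number 80877103 is defined by the PostgreSQL wire protocol; the rest is purely illustrative):

```shell
# Traditional PostgreSQL handshake: an 8-byte SSLRequest (length 8, then the
# magic code 80877103 = 0x04D2162F) is sent before any TLS bytes, so the
# first byte on the wire is 0x00 and carries no SNI.
printf '\000\000\000\010\004\322\026\057' | od -An -tx1

# A TLS ClientHello record starts with content type 0x16 (handshake), which
# is what an SNI-routing proxy like Kong expects as the very first byte.
printf '\026\003\001' | od -An -tx1
```

Kong's stream router only inspects the beginning of the connection, so anything that does not look like a TLS record is unroutable by SNI.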
PostgreSQL 17 introduced direct SSL (sslnegotiation=direct), where the TLS ClientHello is sent immediately as the first message, just like HTTPS. With direct SSL the SNI is visible to Kong and passthrough routing works:
```
Client ──direct-TLS──▶ Kong (port 9443, reads SNI) ──TLS──▶ PostgreSQL 17 (Vault cert)
                              ↑ no decryption here
```

Private key permissions
PostgreSQL refuses to start if the SSL private key is readable by group or others (mode 0600 required). Kubernetes Secret volumes mount files as 0644 (root-owned). An init container (running as root) copies the certificate and key from the read-only Secret mount into a shared emptyDir, sets chown 70:70 (the postgres user in the alpine image) and chmod 0600 before the main container starts.
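A minimal sketch of such an init container (the names, image, and mount paths here are illustrative assumptions, not the exact manifest used by the showcase):

```yaml
initContainers:
  - name: fix-key-permissions
    image: postgres:17-alpine
    securityContext:
      runAsUser: 0                  # must run as root to chown the key
    command:
      - sh
      - -c
      - |
        cp /tls-secret/tls.crt /tls-secret/tls.key /tls/
        chown 70:70 /tls/tls.crt /tls/tls.key   # 70 = postgres UID/GID in the alpine image
        chmod 0600 /tls/tls.key                 # PostgreSQL rejects group/other-readable keys
    volumeMounts:
      - name: tls-secret            # read-only Secret volume
        mountPath: /tls-secret
      - name: tls                   # shared emptyDir, also mounted by the main container
        mountPath: /tls
```

The main PostgreSQL container then points ssl_cert_file and ssl_key_file at the emptyDir copies instead of the Secret mount.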
Option 1: Ephemeral pod inside the cluster (no local tools needed)
The postgres:17-alpine image includes openssl. This pod connects directly to Kong's proxy service inside the cluster, presents postgres.example.com as the SNI, and Kong routes to the PostgreSQL backend — no port-forward needed.
TLS routing check — shows that Kong routes correctly and the backend presents its Vault-issued certificate:
```
kubectl run -it --rm tls-check \
  --image=postgres:17-alpine \
  --restart=Never \
  -- sh -c 'openssl s_client \
       -connect kong-gateway-proxy.kong.svc.cluster.local:9443 \
       -servername postgres.example.com \
       -brief 2>&1 | head -20'
```

Full psql connection — uses hostaddr to connect to Kong's service IP while keeping host=postgres.example.com as the SNI hostname for TLS routing:
```
kubectl run -it --rm psql-test \
  --image=postgres:17-alpine \
  --restart=Never \
  -- sh -c 'psql "hostaddr=$(getent hosts kong-gateway-proxy.kong.svc.cluster.local \
       | awk '"'"'{print $1}'"'"' | head -1) \
       host=postgres.example.com port=9443 \
       sslmode=require sslnegotiation=direct \
       user=demo password=demo dbname=demo"'
```

The hostaddr parameter directs the TCP connection to Kong's ClusterIP, while host sets the SNI in the TLS ClientHello. This is the standard libpq way to separate the routing target from the TLS hostname.
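The awkward-looking '"'"' sequence in the command above is the standard shell idiom for embedding a single quote inside a single-quoted string: close the quote, emit a double-quoted ', then reopen the quote. A self-contained illustration:

```shell
# '…'"'"'…' = end single quotes, add a double-quoted ', reopen single quotes.
# The outer sh -c string stays single-quoted while the inner awk program
# still ends up wrapped in literal single quotes.
echo 'awk program: '"'"'{print $1}'"'"
```

This prints the awk program surrounded by literal single quotes, exactly as the remote shell inside the pod needs to see it.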
Option 2: Docker from outside the cluster (no local psql needed)
Forward Kong's stream port to your local machine, then use docker run with postgres:17-alpine as a throwaway psql client. The --add-host flag makes postgres.example.com resolve to the Docker host (where the port-forward is listening).
```
kubectl port-forward -n kong svc/kong-gateway-proxy 5432:9443 &

docker run --rm -it \
  -v "$(pwd)/examples/vault/root-certs/bundle.pem:/bundle.pem:ro" \
  --add-host "postgres.example.com:host-gateway" \
  postgres:17-alpine \
  psql "host=postgres.example.com port=5432 \
        sslmode=require sslnegotiation=direct \
        user=demo password=demo dbname=demo \
        sslrootcert=/bundle.pem"
```

host-gateway is a Docker special value that resolves to the host machine's IP — available on Docker Desktop (Mac/Windows) and Docker Engine 20.10+.
Option 3: Local psql (if PostgreSQL 17 is installed)
```
kubectl port-forward -n kong svc/kong-gateway-proxy 5432:9443 &

psql "host=postgres.example.com port=5432 \
      sslmode=require sslnegotiation=direct \
      user=demo password=demo dbname=demo \
      sslrootcert=examples/vault/root-certs/bundle.pem"
```

Note: sslnegotiation=direct requires libpq 17 or later (check with psql --version). Earlier clients send the PostgreSQL SSLRequest first, the SNI is not visible to Kong, and the connection will not route correctly.
Show Kong Manager
If your local DNS settings (/etc/hosts) are set correctly, you can open the Kong Manager UI in your browser: https://kong-manager.example.com:8081