The sp_invoke_external_rest_endpoint stored procedure that’s available in SQL Server 2025 allows SQL Server to hit external REST endpoints…which opens up quite a few interesting options.
I was thinking about this the other day and it occurred to me that Kubernetes has a REST API.
So can we hit that from within SQL Server?
Let’s have a look at how to do it. Now there are a few steps, here’s what we’re going to do: –
1. Create a certificate authority and a signed certificate
2. Deploy a reverse proxy to Kubernetes
3. Configure SQL Server to be able to hit the reverse proxy
4. Use the stored procedure to hit the Kubernetes API via the reverse proxy
Ok, let’s go!
First thing to do is create a CA and then a signed certificate. It would be nice if we could just upload the root certificate from the Kubernetes cluster but (and trust me here, I’ve tried for longer than I’m going to admit)…it just doesn’t seem to work. All you’ll get is this infuriating error: –
Msg 31608, Level 16, State 24, Procedure sys.sp_invoke_external_rest_endpoint_internal, Line 1 [Batch Start Line 0]
An error occurred, failed to communicate with the external rest endpoint. HRESULT: 0x80072ee7.
So creating our own signed certificate seems to be the only way forward.
(but I’m going to keep trying, so if I do get it working…I’ll update here)
One thing to note: I’m running SQL Server 2025 in WSL on Windows 11…hitting a local Kubernetes cluster with MetalLB installed, running in Hyper-V.
Cool, ok…first things first, set some variables: –
DOMAIN="api.dbafromthecold.local" DAYS_VALID=365 NAMESPACE="default" CA_KEY="CA.key" CA_CERT="CA.pem"
Create the certificate authority: –
openssl genrsa -out $CA_KEY 2048

openssl req -x509 -new -nodes -key $CA_KEY -sha256 -days 3650 -out $CA_CERT -subj "/C=IE/ST=Local/L=Test/O=TestCA/CN=My Local CA"
Then we’re going to create a key and a certificate signing request: –
openssl genrsa -out $DOMAIN.key 2048

openssl req -new -key $DOMAIN.key -out $DOMAIN.csr -subj "/C=IE/ST=Local/L=Test/O=TestAPI/CN=$DOMAIN"
Use the CA certificate and key to sign the CSR and generate a server certificate: –
openssl x509 -req -in $DOMAIN.csr -CA $CA_CERT -CAkey $CA_KEY -CAcreateserial -out $DOMAIN.crt -days $DAYS_VALID -sha256
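Before going any further, it’s worth a quick sanity check that the server certificate really does chain back to our CA (optional, but it’ll save some head-scratching later):

openssl verify -CAfile $CA_CERT $DOMAIN.crt
openssl x509 -in $DOMAIN.crt -noout -subject -issuer -dates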
OK, great…we have our signed certificate. Now let’s deploy an nginx reverse proxy to Kubernetes!
Create a secret in Kubernetes to hold the signed certificate and key: –
kubectl create secret tls k8s-api-cert --cert=$DOMAIN.crt --key=$DOMAIN.key --namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
This stores the certificate and key which we’ll later mount into the nginx pod.
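If you want to double-check what went into the secret (just a quick look at the keys it holds):

kubectl describe secret k8s-api-cert -n $NAMESPACE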
Deploy the ingress controller, which’ll expose our proxy: –
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Confirm the ingress controller is up and running: –
kubectl get all -n ingress-nginx
Now create a config map containing the reverse proxy configuration: –
kubectl create configmap k8s-api-nginx-conf --from-literal=nginx.conf="
events {}
http {
  server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    location / {
      proxy_pass https://10.0.0.41:6443;
      proxy_ssl_verify off;
    }
  }
}
" -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
N.B. – 10.0.0.41:6443 is the IP address and port of the kube-apiserver
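If you’re not sure what that address is on your cluster, something like this should show it (the values will be different for your setup):

kubectl cluster-info
kubectl get endpoints kubernetes -n default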
This config will tell nginx to accept HTTPS connections (required for sp_invoke_external_rest_endpoint), terminate TLS using the secret we created, and proxy the request to the Kubernetes API server.
Deploy nginx, mounting the config map and TLS secret to enable the reverse proxy: –
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-api-proxy
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-api-proxy
  template:
    metadata:
      labels:
        app: k8s-api-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 443
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginx-cert
          mountPath: /etc/nginx/certs
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: k8s-api-nginx-conf
      - name: nginx-cert
        secret:
          secretName: k8s-api-cert
EOF
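Before moving on, we can check that the deployment rolled out and that nginx is happy with the config we mounted (the nginx -t here just validates the config inside the pod):

kubectl rollout status deployment/k8s-api-proxy -n $NAMESPACE
kubectl exec -n $NAMESPACE deploy/k8s-api-proxy -- nginx -t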
Create a service to expose the nginx deployment: –
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: k8s-api-proxy
  namespace: $NAMESPACE
spec:
  selector:
    app: k8s-api-proxy
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
EOF
And now, create an ingress resource to route traffic to the nginx service: –
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-api-ingress
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - $DOMAIN
    secretName: k8s-api-cert
  rules:
  - host: $DOMAIN
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k8s-api-proxy
            port:
              number: 443
EOF
Getting there! Let’s check the pod and the ingress: –
kubectl get pods
kubectl get ingress
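The ingress should be showing the external IP that MetalLB handed to the ingress controller’s LoadBalancer service…that’s the address we’ll point our domain at shortly. Assuming the default chart naming, it can also be seen with:

kubectl get svc ingress-nginx-controller -n ingress-nginx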
Ha ha! Ok, we’re almost ready to hit the Kubernetes API. But first we have to configure SQL to trust the certificate.
To do that we need to copy the CA certificate to /var/opt/mssql/security/ca-certificates. That location doesn’t exist by default so it needs to be created.
mkdir /var/opt/mssql/security/ca-certificates
cp $CA_CERT /var/opt/mssql/security/ca-certificates/$CA_CERT
chown mssql:mssql /var/opt/mssql/security/ca-certificates/$CA_CERT
Then restart SQL Server: –
sudo systemctl restart mssql-server
Check the SQL Server error log…we’re looking for: –
Successfully placed CA.crt in trusted root store
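A quick way to look for that (using the default error log location for SQL Server on Linux):

sudo grep -i "trusted root" /var/opt/mssql/log/errorlog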
Oh, and if we want to test with curl…we need to place the cert in the server’s trust store: –
sudo cp $CA_CERT /usr/local/share/ca-certificates/myCA.crt
sudo update-ca-certificates
But before we test, we need to make sure that SQL Server can resolve our domain name. That means adding an entry into the /etc/hosts file: –
10.0.0.50 api.dbafromthecold.local
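For example (10.0.0.50 being the external IP assigned to the ingress controller in my setup):

echo "10.0.0.50 api.dbafromthecold.local" | sudo tee -a /etc/hosts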
Let’s try it!
curl https://api.dbafromthecold.local/version
Boooooom! It works! Now let’s try within SQL Server. Enable the stored procedure: –
EXECUTE sp_configure 'external rest endpoint enabled', 1;
RECONFIGURE WITH OVERRIDE;
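And if you want to confirm it took effect:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'external rest endpoint enabled';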
And let’s go….
DECLARE @version NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/version',
    @method = 'GET',
    @response = @version OUTPUT;

PRINT @version;
Niiiiiiice! But what if we want to list the pods in the cluster?
The API endpoint is /api/v1/namespaces/default/pods, but if we try to hit it right now, we’ll get: –
{
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
The reason that’s happening is that the request is reaching the Kubernetes API server, but the user it’s running as — system:anonymous — doesn’t have permission to list pods.
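We can see the same thing from outside SQL Server by impersonating the anonymous user:

kubectl auth can-i list pods --as=system:anonymous -n default

…which should come back with a flat no.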
Let’s fix that by:
1. Creating a service account in Kubernetes
2. Creating a role that allows reading (and creating) of pods
3. Binding the role to the service account
4. Extracting a token from that service account
5. Using that token in the stored procedure
So create the service account: –
kubectl create serviceaccount api-reader
And a role that allows listing and creating pods: –
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "create"]
EOF

Now bind that role to the service account: –

kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=default:api-reader --namespace=default

Create a token for that service account: –

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: api-reader-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: api-reader
type: kubernetes.io/service-account-token
EOF

And grab the token: –

kubectl get secret api-reader-token -o jsonpath="{.data.token}" | base64 -d

This’ll give us a big long token that we can use in the header parameter of the stored procedure…like this: –
DECLARE @pods NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'GET',
    @response = @pods OUTPUT;

PRINT @pods;

But that’ll just give us a big chunk of json back…let’s try and parse it a little.
DECLARE @pods NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'GET',
    @response = @pods OUTPUT;

SELECT pod_name, namespace, container_image, pod_ip, status
FROM OPENJSON(@pods, '$.result.items')
WITH (
    pod_name NVARCHAR(100) '$.metadata.name',
    namespace NVARCHAR(100) '$.metadata.namespace',
    container_image NVARCHAR(100) '$.spec.containers[0].image',
    pod_ip NVARCHAR(50) '$.status.podIP',
    status NVARCHAR(50) '$.status.phase'
);

And that’ll return a row for each pod in the cluster (in the default namespace).
This cluster only has the one pod in the default namespace (the nginx pod) so the results will be: –
pod_name              namespace   container_image   pod_ip       status
k8s-api-proxy-abcde   default     nginx:latest      10.244.0.5   Running

Phew! Still with me? Great! One more thing then…what about deploying a pod to Kubernetes from SQL Server?
That’s why we created the role with the create verb earlier.
Let’s keep it simple and just create another nginx pod in the default namespace in the cluster: –
DECLARE @deploy NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "name": "nginx",
        "namespace": "default"
      },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx:latest",
            "ports": [
              {
                "containerPort": 80
              }
            ]
          }
        ]
      }
    }',
    @response = @deploy OUTPUT;

PRINT @deploy;

And now if we check the cluster: –
kubectl get pods

There’s our pod!
So that’s how to hit the Kubernetes API via a reverse proxy from SQL Server 2025 using sp_invoke_external_rest_endpoint.
Cue maniacal laughter.
Thanks for reading!