Jenkins X — Securing the Cluster

The folks over at CloudBees have done a great job making the installation of Jenkins X as simple as it gets, with sensible defaults. However, in order to be able to use *.domain.xyz wildcard certificates later on in the process, we needed to make two decisions:

  • we need to use a real domain (for cert-manager's DNS challenge)
  • we need to change the urltemplate of the exposed services from
    "{{.Service}}.{{.Namespace}}.{{.Domain}}"
    to
    "{{.Service}}-{{.Namespace}}.{{.Domain}}"
    because the default template would have created a new sub-domain level for every namespace, which a single wildcard certificate cannot cover (see the example after this list).
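For example, a service named "jenkins" in a namespace named "jx" (both names are illustrative) would be exposed as:

jenkins.jx.domain.xyz    (default template; two levels below the domain, not matched by *.domain.xyz)
jenkins-jx.domain.xyz    (dash template; one level below the domain, matched by *.domain.xyz)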

Our install command looked something like:

jx install --provider=gke \
  --git-username=${github_username} \
  --git-api-token=${github_api_token} \
  --default-admin-password=${default_admin_password} \
  --version=${jx_version} \
  --no-default-environments=true \
  --git-private=true \
  --no-tiller=false \
  --domain=${jx_domain} \
  --long-term-storage=false \
  --exposecontroller-urltemplate='"{{.Service}}-{{.Namespace}}.{{.Domain}}"' \
  --urltemplate='"{{.Service}}-{{.Namespace}}.{{.Domain}}"' \
  --buildpack=kubernetes-workloads \
  --batch-mode=true

IMPORTANT: During the installation the jxing-nginx-ingress-controller service will be created.

You will need to create DNS records (type A) pointing to:

  • YOUR-DOMAIN > load balancer IP
  • *.YOUR-DOMAIN > load balancer IP

You can find the load balancer IP with:

kubectl get svc jxing-nginx-ingress-controller -n kube-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
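If your zone happens to be hosted in Google Cloud DNS, the two records can be created along these lines (the zone name "my-zone", the TTL, and the domain are placeholders for your own values):

LB_IP=$(kubectl get svc jxing-nginx-ingress-controller -n kube-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add "${LB_IP}" --name="domain.xyz." --type=A --ttl=300 --zone=my-zone
gcloud dns record-sets transaction add "${LB_IP}" --name="*.domain.xyz." --type=A --ttl=300 --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone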

Within just a few minutes we had a shiny new Jenkins X platform up and running with Jenkins, Nexus, and a multitude of other services running in our secured cluster.

But hang on, the services are http only!?!

Fear not, a simple jx upgrade ingress performs all the necessary steps to upgrade the ingresses to https on our domain. You can read more about this in this post by Viktor Farcic.
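In our case that was simply:

jx upgrade ingress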

A few minutes later and https was available. Nice!

But hang on, all my services are open to the world!?!

Ah, of course! Restricting access to the master only prevents people from accessing the cluster through kubectl. Any ingresses created are open by default.

After weighing up our options, the most practical solution was to address the problem at the source, the “jxing-nginx-ingress-controller”.

We made use of the whitelist-source-range setting from nginx, whose documentation says:

You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.
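For a single service, the per-Ingress variant would look something like this (the ingress name "my-service" and the namespace "jx" are illustrative):

kubectl annotate ingress my-service -n jx nginx.ingress.kubernetes.io/whitelist-source-range="10.0.0.0/24,172.10.0.1"

We went the global route via the ConfigMap instead, as shown below, so that every Ingress in the cluster is covered.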

Pro-Tip #1: for webhooks to work, the GitHub servers need to be whitelisted as well.
You can find them here.
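One way to list the current hook ranges is GitHub's meta API (assuming jq is installed):

curl -s https://api.github.com/meta | jq -r '.hooks[]'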

Pro-Tip #2: don’t forget to add the IP range used for your pods (see the VPC section above).
Otherwise you will find that your own pods cannot access service URLs within the same cluster (2 hours of debugging until I found this one 😅).
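On GKE, the pod range can be read from the cluster itself (the cluster name and zone are placeholders):

gcloud container clusters describe my-cluster --zone=europe-west1-b --format='value(clusterIpv4Cidr)'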

So, patching the jxing-nginx-ingress-controller ConfigMap can be done with:

kubectl patch configmap/jxing-nginx-ingress-controller \
  --type merge \
  -p '{"data" : {"whitelist-source-range" : "...CIDRS..."}}' \
  -n kube-system
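You can check that the value actually landed in the ConfigMap with:

kubectl get configmap jxing-nginx-ingress-controller -n kube-system -o jsonpath='{.data.whitelist-source-range}'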

Damn! Didn’t work!

It turns out that for this to work properly, there was one more thing needed on GKE: in order for the whitelist to match against authorised IP ranges, we needed to preserve the client source IP. With the default externalTrafficPolicy of Cluster, traffic is SNAT'd as it is forwarded between nodes, so nginx only ever sees internal node IPs instead of the real client address.

Luckily, this was just another simple patch.

kubectl patch svc jxing-nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}' \
  -n kube-system
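As a quick sanity check, a request from a non-whitelisted machine should now be rejected by nginx with a 403 (the service URL is illustrative):

curl -I https://jenkins-jx.${jx_domain}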
[Figure: Without VPN]

With both our cluster and Jenkins X services packed securely away behind a list of whitelisted IP ranges, it was time to look at the OAuth integration.