MicroK8s is a surprisingly powerful way to run a fully equipped Kubernetes infrastructure locally, with ease: an open door to deploying and testing directly on your local cluster as if you were in a typical production setup, without much fuss. You can be up and running in a single command. Its popularity is deservedly growing, with some applications already in the wild and more to come. In this brief introduction we’ll look at a typical use case for MicroK8s.
Let’s take the case of a Docker (micro)services environment whose production workloads run on Kubernetes. What an efficient dev team (or a single developer alike) needs most is a consistent, reliable, production-like local environment in which to test and integrate their code, with all the necessary components of the platform up and running, long before deploying to a shared Staging environment, for example.
In practice, each team member brings his or her own customized local setup to the daily integration work: some prefer plain Docker and manual start/stop of containers, some go for docker-compose, others use Minikube, Vagrant, VirtualBox, a combination of the above, or some other esoteric solution. Some people prefer to mock certain services; others take on the challenge of running entire stacks on their own laptop. Many inevitably spend hours setting up and fixing their own ever-changing local environment, when all they needed was to test the new endpoint they coded into one (1) service! I have seen people spend days hunting down a bug, only to find out that their local environment needed a fix, which is of course a silly way to waste time.
Lack of consistency among team members’ local environments leads to an underestimated loss of productivity, later reflected in the final quality of the service(s) delivered. Conversely, if the entire team manages to agree on, say, a common docker-compose definition to spawn a local test environment, they can count on a smooth experience and more predictable results on shared projects. The key is the team’s ability to find and accept the compromise of a common shared solution, then use it. And especially if your platform is non-trivial and uses a truly interconnected mesh of microservices (not just standalone dockerized monolithic services communicating through queues), there are better options than docker-compose for a local environment.
For the lucky ones who work on Linux, or those on Mac or Windows who can afford a fat Linux VM, Canonical came to the rescue with the MicroK8s snap, released in December 2018. Snaps, originally an Ubuntu thing, are now supported on a large number of Linux distributions, and MicroK8s seems like a good incentive to use them. This article will not explain what a snap is or how to enable them on your distribution, though.
MicroK8s installs on a Linux system very easily and, being a snap, is tightly integrated with system resources, so it runs smoothly and fast while staying separate from the rest of your applications. The integration goes as far as running the MicroK8s components as systemd services on your system. What also surprises is the number of addons it ships with, and its general compatibility (it is a Certified Kubernetes distribution), making it a complete platform.
Minikube is another powerful way to run Kubernetes locally: it installs relatively easily and has setup scripts for all platforms, but even on Linux it runs inside a heavyweight virtualized VM (requiring a hypervisor), ships a relatively limited set of addons, and may need a fair amount of work to adapt to advanced needs or to be deployed consistently by multiple team members. Of course, everyone is free to stick to their own preference.
So what does MicroK8s provide, out of the box or as addons (more info here), that you would find in a typical production Kubernetes platform?
- K8S Dashboard
- Kube DNS
- A storage class mapped to a folder on the local host
- Ingress (MicroK8s does not ship a load balancer, but the ingress addon can be used to publish on the external IP if necessary)
- The EFK stack (Elasticsearch, Fluentd and Kibana) for aggregated logs

With a few addons enabled, microk8s.kubectl cluster-info gives an idea of what is running:
$ microk8s.kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
Elasticsearch is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Heapster is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/heapster/proxy
Kibana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Grafana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy
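Getting to a state like the above is mostly a matter of enabling addons. A minimal sketch, assuming MicroK8s is already installed via snap; the addon selection here is just an example:

```shell
# Example addon selection for a production-like local cluster;
# each name below is an addon shipped with MicroK8s.
ADDONS="dns dashboard storage ingress fluentd"

# microk8s.enable accepts one or more addon names.
microk8s.enable $ADDONS

# Check that everything converges to the Running state.
microk8s.kubectl get pods --all-namespaces
```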
Most of the above you will also find as Minikube addons. But MicroK8s goes further, with addons oriented to microservices and more complex stateful setups:
- metrics-server, and also Prometheus, with metrics made visible in Grafana
- the Istio service mesh (more info here)
- Linkerd, another service mesh
- the Jaeger operator
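These are one command away as well. A sketch under the same assumptions (the selection is an example; Istio in particular is resource-hungry on a laptop):

```shell
# Example selection of the microservices-oriented addons named above.
MESH_ADDONS="metrics-server prometheus istio jaeger"
for addon in $MESH_ADDONS; do
  microk8s.enable "$addon"
done
```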
MicroK8s ships its own builtin kubectl tool within the snap (microk8s.kubectl) and can easily generate the full kubeconfig configuration file for local use in case you have your own kubectl installed (microk8s.config > ~/.kube/config). You may also add aliases in /etc/hosts for the most commonly used URLs, like the Dashboard, Grafana and Kibana.
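As a sketch, wiring up a locally installed kubectl and a few convenience hostnames could look like this (the .local names are invented for illustration; MicroK8s listens on 127.0.0.1):

```shell
# Export the cluster credentials for a locally installed kubectl.
mkdir -p ~/.kube
microk8s.config > ~/.kube/config

# Optional: friendly aliases for frequently visited services.
MICROK8S_IP=127.0.0.1
for host in dashboard.local grafana.local kibana.local; do
  echo "$MICROK8S_IP $host"
done | sudo tee -a /etc/hosts
```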
Kubeless runs easily on MicroK8s for serverless setups (who needs Lambda?); why not put Jenkins on it (perhaps in a different way than in the example), and, in case you were wondering, Helm charts are easily supported too.
To install Tiller for functional Helm chart support (assuming your kubeconfig is in place), the usual helm init will do the trick, opening up the world of all available charts. And this is just the beginning!
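Put together, the Helm 2-era flow looks roughly like this (this article predates Helm 3, which removed Tiller and changed the command syntax):

```shell
# Install Tiller into the cluster (Helm 2 only).
helm init

# Wait for the Tiller deployment, then browse the chart catalogue.
microk8s.kubectl -n kube-system rollout status deployment/tiller-deploy
helm search wordpress
```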
Customization and extension of MicroK8s are possible from its source repo, so you can build a customized setup to share with your team, for example choosing the addons enabled by default, and there is your team’s shared local platform.
Going back to old-fashioned monolithic services to close: as an experiment, it took about 10 minutes to have a full-fledged Kubernetes cluster with an up-and-running local WordPress blog (helm install stable/wordpress), and just like that I was installing themes and writing articles. Can you do that with Minikube (including the time it takes to create the VM)? Otherwise, ask your friendly devops how long it took them to get stable DNS, monitoring, logging and RBAC properly set up in a vanilla cluster.
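For reference, the ten-minute experiment boils down to something like the following (Helm 2 syntax; the release name is an example, and secret/value names may vary with the chart version):

```shell
# One command for a local blog; --name is Helm 2 syntax.
helm install stable/wordpress --name my-blog

# Watch the pods come up, then grab the generated admin password.
microk8s.kubectl get pods -w
microk8s.kubectl get secret my-blog-wordpress \
  -o jsonpath='{.data.wordpress-password}' | base64 --decode
```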