Edge resources are often limited. Meeting Kubernetes' bare-minimum requirement of 2–4 GB of RAM is not a problem in a data center or public cloud environment, but it is a limiting factor for edge computing, where users cannot run a heavy Kubernetes distribution with multiple package dependencies.
This is where Rancher's k3s comes to the rescue. k3s packages a full Kubernetes distribution into a single binary of less than 40 MB and requires just 512 MB of RAM, which makes it easy to deploy Kubernetes even in the most resource-constrained, low-resource computing environments. k3OS is another offering based on k3s, packaging the solution as an operating system completely managed by Kubernetes. k3s omits many features that bloat most Kubernetes distributions, such as rarely used plug-ins, and consolidates the various functions of a Kubernetes distribution into a single process.
k3s resolves the infrastructure component; the next major set of challenges concerns deploying and managing applications, since Kubernetes itself is not a traditional, all-inclusive PaaS (Platform as a Service). Deploying microservice-based applications without a PaaS can be simple with a few geographically centralized data centers, but the scenario is totally different with thousands of edge sites, as shown below:
In a traditional PaaS setting, developers would develop an application locally, isolated from where it was going to be deployed; the application and its infrastructure were not connected. That model does not fit a Kubernetes environment, where each piece of an application is its own container (in other words, a microservice). μPaaS represents a shift in the way applications are developed and deployed: rather than being developed in isolation from its infrastructure, an application is developed inside its infrastructure.
Rancher's Rio is a MicroPaaS that can be layered on top of any standard Kubernetes cluster. Consisting of a few Kubernetes custom resources and a CLI to enhance the user experience, it lets users easily deploy services to Kubernetes and automatically get continuous delivery, DNS, HTTPS, routing, monitoring, autoscaling, canary deployments, git-triggered builds, and much more. Work on k3s started as a component of Rio; the two were later separated into distinct first-class open source projects.
k3s is a new distribution of Kubernetes designed for users who need to deploy applications quickly and reliably to resource-constrained environments (for example, edge sites running single-board computers), and for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.
k3s achieves its lightweight goal by stripping a number of features out of the Kubernetes binaries (e.g. legacy, alpha, and cloud-provider-specific features), replacing Docker with containerd, and using SQLite3 as the default datastore (instead of etcd, which is still supported as an option). k3s needs only a minimal kernel and cgroup mounts. As a result, this lightweight Kubernetes consumes only 512 MB of RAM and 200 MB of disk space.
k3s bundles the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy) into combined processes presented as a simple server-and-agent model. Running `k3s server` starts the Kubernetes server and automatically registers the local host as an agent. k3s supports a multi-node model in which additional nodes join using the node token generated at server startup. By default k3s runs both server and agent (combining the kubelet, kube-proxy, and flannel agent processes); the two can be separated using `--disable-agent`, so that server and agent (master and node in Kubernetes terminology) run on different hosts.
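Assuming the k3s binary is installed on each host, the server-and-agent model described above can be sketched roughly like this (host names are placeholders):

```shell
# On the first node: start the server, which by default also
# registers the local host as an agent.
k3s server

# The node token is written at server startup; it is needed to join agents.
cat /var/lib/rancher/k3s/server/node-token

# On an additional node: start an agent pointing at the server,
# passing the token copied from the server host.
k3s agent --server https://<server-host>:6443 --token <node-token>
```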
By default k3s uses Flannel as the CNI; it can be omitted from the install using `--no-flannel`. Other CNIs can be configured, since the CNI is entirely independent of the Kubernetes install today and is mostly treated as an add-on.
k3s minimal resource usage:
Starting the binary creates a full-fledged Kubernetes cluster and by default writes the kubeconfig file to `/etc/rancher/k3s`. k3s installs kubectl on the host and provides the ability to execute kubectl and crictl commands through k3s.
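Since kubectl and crictl are available through the k3s binary itself, the usual inspection commands can be run as subcommands (a quick sketch):

```shell
# kubectl subcommands run through the bundled client
k3s kubectl get nodes
k3s kubectl get pods --all-namespaces

# crictl talks to the embedded containerd
k3s crictl ps
```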
Because all the kube-system components (kube-apiserver, kube-scheduler, kube-proxy, and kube-controller-manager) run as part of a single process rather than as containers, no containers for them are created in the namespace. Only the application containers created through manifests run as containers/pods managed by containerd (the CRI).
Containerd as Container Runtime:
Instead of a full instance of Docker as the runtime engine for containers, k3s uses Docker's core container runtime, containerd. This way, many elements normally included with Docker can be omitted.
containerd is a container runtime that can manage the complete container lifecycle, from image transfer and storage to container execution, supervision, and networking, eliminating the Docker layer. The main difference between Docker and containerd is that Docker uses containerd as one of its components, adding functionality on top of it, such as an engine to interact with the container runtime. As shown below, all containers are managed by containerd directly, making the stack lightweight by removing the Docker components.
Sqlite3 as the default storage mechanism:
By default k3s uses SQLite as the storage backend in place of etcd. etcd3 is still available, but it is not the default. Due to technical limitations of SQLite, k3s currently does not support High Availability (HA), i.e., running multiple master nodes.
Any file found in `/var/lib/rancher/k3s/server/manifests` is automatically deployed to Kubernetes in a manner similar to `kubectl apply`. As shown below, CoreDNS is deployed onto the cluster automatically at cluster initiation.
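As a sketch, dropping a manifest like the following into that directory has the same effect as applying it with kubectl (the nginx Deployment here is purely illustrative):

```yaml
# /var/lib/rancher/k3s/server/manifests/nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
```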
k3s supports a CRD controller for installing Helm charts. With this feature, users can make use of a simple YAML manifest to tell the controller to auto-deploy Helm charts. All required variables can be passed through `set`, which enables users to customize and deploy Helm charts as part of cluster deployment, consuming upstream charts or an internal chart repository.
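A sketch of such a HelmChart manifest, assuming a chart from the upstream stable repository (the chart and `set` values shown are illustrative):

```yaml
# Placed in /var/lib/rancher/k3s/server/manifests/, the controller
# installs the chart automatically with the given values.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: stable/traefik
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
```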
Rio — Kubernetes based MicroPaaS
Rio is a micro-PaaS (μPaaS) that can be layered on top of any standard Kubernetes cluster and that developers can use to easily deploy boilerplate services in their applications to Kubernetes. With a few Kubernetes custom resources and a CLI, Rio orchestrates a set of features essential to application development: setting up Istio for the service mesh (handling security and communication between your microservices), autoscaling, build infrastructure, TLS, routing, seamless canary deployments, A/B deployments, monitoring of HTTP traffic inside the service mesh, and more.
Rio provides a comprehensive CLI that masks the multiple kubectl and istioctl commands needed to configure and deploy services.
Rio uses a multitude of components to facilitate the features above, all deployed to the `rio-system` namespace on the Kubernetes (k3s) cluster.
Rio is a single binary that automates the deployment of all of these components on the Kubernetes cluster.
Deploying a Service with Cluster Domain and TLS
Rio enables users to bring up a full-fledged application, complete with service mesh, autoscaling, TLS, and other integral routing constructs, with a single command. This masks the multiple kubectl commands needed to deploy the individual objects, along with sophisticated service mesh configuration.
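As a sketch, a single `rio run` stands in for that whole set of kubectl objects (the image name and port are illustrative, and exact flags may vary by Rio version):

```shell
# Deploy a service; Rio wires up DNS, TLS, routing and the mesh
rio run -p 80/http --name demo ibuildthecloud/demo:v1

# Inspect the resulting service and its generated endpoint
rio ps
```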
By default Rio will create a DNS record pointing to your cluster. Rio also uses Let’s Encrypt to create a certificate for the cluster domain so that all services support HTTPS by default.
Rio also creates the required Istio configuration, such as DestinationRules and VirtualServices, to enable the service mesh for the deployed service. Rio packs all the required application deployment aspects into one command, and also deploys Kiali for Istio-specific visualization of the service mesh.
Service Mesh and Canary Deployments
Rio has a built-in service mesh, powered by Istio and Envoy. By design, Rio does not require the user to understand much about the underlying service mesh.
Services can have multiple versions deployed at once. The service mesh can then decide how much traffic to route to each revision.
Using `rio stage` to stage multiple versions of the same service:
Rio's `stage` command stages a new revision of an existing service:
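A hedged sketch of staging a new revision (the image tag and service name are illustrative):

```shell
# Stage v3 of the 'demo' service without sending it live traffic yet
rio stage --image=ibuildthecloud/demo:v3 demo:v3
```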
Using `rio promote` to adjust traffic between versions of a service:
Rio's `promote` command enables users to run canary flows on a deployed service, gradually adjusting the weights between versions. Traffic is shifted incrementally, with a default weight increase of 5% every 5 seconds (configurable).
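Promotion might look like the following sketch (service and version names are illustrative, and the exact weight syntax is an assumption that may differ by Rio version):

```shell
# Gradually shift all traffic to the staged revision
rio promote demo:v3

# Or set an explicit traffic split between revisions
rio weight demo:v3=50%
```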
Shown below, the service v3 created above with `rio stage` is promoted:
Adjusting weight to services dynamically:
Visualization with Kiali:
Rio deploys Kiali (the default UI for Istio) with the Istio stack. All the configurations and changes made above can easily be visualized in the web interface.
By default, Rio enables autoscaling for workloads. Depending on QPS and the current number of active requests on your workload, Rio scales the workload to the proper size.
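As a sketch, a scale range can be supplied at deploy time (the `--scale` flag syntax shown here is an assumption and may differ by Rio version):

```shell
# Allow the service to scale between 0 and 10 replicas based on demand
rio run -p 80/http --name demo --scale=0-10 ibuildthecloud/demo:v1
```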
Scaling services from zero based on demand:
Source code to Deployment
Rio supports configuring a Git-based source code repository as the source for the actual workload. It is as easy as giving Rio a valid Git repository URL.
Rio automatically watches any commit or tag pushed to a specific branch (master by default). By default, Rio pulls and checks the branch at a set interval, but it can be configured to use a webhook instead. Rio enables users to configure webhooks dynamically, creating the underlying Kubernetes secret, which can be passed to the build command.
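A sketch of a Git-based deployment (the repository URL is illustrative):

```shell
# Point Rio at a repo; it builds and redeploys on new commits to master
rio run -p 80/http --name demo https://github.com/rancher/rio-demo
```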
Rio deploys Prometheus, Grafana, and Kiali for monitoring and visualization of service deployments.