Kubernetes, from dev to prod

Before diving into how to implement Kubernetes from the development phase up to production, let's have a quick look at what it does. As there are already plenty of articles about Kubernetes available, we will not detail all the basics of k8s here; instead, here is a quick reminder of the main features you need to know to understand this article:

  • Pod: the simplest unit that can be deployed on a cluster. It contains one or several containers.
  • ReplicaSet: declares how many instances of a pod should be running at any given time
  • Service: an abstraction over a logical set of pods, with a policy to access them. It has its own IP address, decoupling communication between services and pods
  • Deployment: manages updates (i.e. state changes) of pods/ReplicaSets
  • Node: aka minion, a member of the cluster. It can be either a physical or a virtual machine
  • Controller: a component that periodically checks the cluster state through the apiserver and tries to restore the desired state whenever a gap is detected
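To make these objects concrete, here is a minimal sketch combining several of them: a Deployment that maintains a ReplicaSet of two NGINX pods, exposed by a Service (names and labels are illustrative):

```yaml
# Deployment: maintains a ReplicaSet of 2 pods running NGINX
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:                  # the Pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
---
# Service: a stable virtual IP in front of the pods labelled app=nginx
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Applying this file with `kubectl apply -f` is enough to get a replicated, load-balanced NGINX running in the cluster.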

Also, the Kubernetes infrastructure relies on two core elements: a master and a set of nodes.

The master hosts several components:

  • kube-apiserver exposes all k8s APIs used to interact with the control plane
  • etcd stores all relevant data about the cluster in a highly available key-value store. It is used, for instance, to recreate a cluster after a crash
  • kube-scheduler assigns newly created pods to available nodes, based on scheduling rules. These take into account hardware, software and constraints potentially defined on each node
  • kube-controller-manager runs all controllers:
    • the node controller, in charge of checking node availability
    • the replication controller, in charge of ensuring that the replication specification of each pod is fulfilled
    • the endpoints controller, in charge of populating Endpoints objects, in other words joining Services and Pods
    • the security controllers, which manage accounts and API keys for all namespaces. There are two kinds: service account and token controllers

Each minion or node hosts two components, plus the pods themselves:

  • kubelet is the agent in charge of checking the status and health of all pods described in the PodSpecs assigned to the node
  • kube-proxy is in charge of networking on the node, managing network rules and connection forwarding

To sum up:

Kubernetes is well suited for a CI/CD pipeline, for many reasons:

  • It is based on containerized components, thus allowing portability amongst environments and easy automation
  • It can operate on-premise or in the cloud
  • It supports “zero-downtime” deployment

It is so popular nowadays that many satellite solutions have appeared, enhancing and extending the core solution.

Today we will focus on Helm, the reference k8s package manager.

Helm helps deploy (install and upgrade) complex, full-stack applications on k8s. You just have to describe a related set of k8s resources in templated k8s descriptors. The collection of templates is called a “chart”. A chart can be a simple NGINX service, a multi-tier application with a frontend, a backend and a database, or even a complex micro-service architecture.
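As a sketch, a minimal chart for a hypothetical “myapp” service is just a directory of files laid out like this (names are illustrative):

```
myapp/
  Chart.yaml          # chart name, version, description
  values.yaml         # default configuration values
  templates/
    deployment.yaml   # templated k8s descriptors, e.g. image: myapp:{{ .Values.image.tag }}
    service.yaml
```

The templates are ordinary k8s descriptors in which placeholders like `{{ .Values.image.tag }}` are filled in from `values.yaml` (or from overrides) at install time.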

A chart can therefore be stored in an SCM repository. When you package the chart, it creates a ready-to-deploy, versioned tar.gz archive that you can push to a “chart repository”.

There’s also a central public repository of charts, called the “Helm Hub” (https://hub.helm.sh), just like Docker Hub for Docker images. You can also host your own private Helm repository.


There’s a strong analogy between Helm and common Linux package managers like Yum or Aptitude, but Helm is much more than that:

  • It supports deployment “hooks”, for custom actions and fine-grained deployment scenarios (e.g. pre-install, post-install)
  • Each deployment is considered a “release”, and you can easily “upgrade” (and “rollback”) your deployment when a new version of the chart is available
  • You can define environment-specific variables, and thus make your charts reusable across dev, staging and production environments
  • And of course, since it describes k8s resources, it supports all of Kubernetes’ core features, such as scalability
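To illustrate the hook mechanism: Helm recognizes a `helm.sh/hook` annotation on a templated resource. As a sketch, a hypothetical database-migration Job that must run after every install or upgrade could be declared like this:

```yaml
# templates/db-migrate-job.yaml (hypothetical) - run migrations after deploy
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": post-install,post-upgrade
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp-migrations:{{ .Values.image.tag }}
```

Environment-specific variables are then supplied at deploy time, e.g. `helm upgrade --install myapp ./myapp -f values-staging.yaml`, so the same chart serves dev, staging and production.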


Now, let’s see how you can build a CI/CD pipeline with this tool (automated through a Jenkins platform, for instance).

The pipeline extends the classic Docker-based CI pipeline, where:

  • You start from a Git tag or branch
  • You build your components and run your unit tests (locally)
  • You create a bunch of Docker images and push them to a registry
  • You launch your containers and run some integration or QA tests
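The Docker part of those steps can be sketched as a small script (image name, registry and test entrypoint are hypothetical; the commands are echoed as a dry run — drop the `echo` to actually execute them):

```shell
#!/bin/sh
# Classic Docker CI steps, starting from a Git tag (dry run).
GIT_TAG="1.4.0"
IMAGE="registry.example.com/myapp:${GIT_TAG}"

echo docker build -t "$IMAGE" .                          # build the image
echo docker push "$IMAGE"                                # store it in the registry
echo docker run --rm "$IMAGE" ./run-integration-tests.sh # integration/QA tests
```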

Now with Helm, along with your Docker images, you will have to:

  • Update your chart template files and the chart version
  • Package the chart
  • Push the packaged chart to your repository
  • Run the helm install or helm upgrade commands to deploy on the kubernetes cluster
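The four Helm steps above can be sketched as follows (chart name, versions and repository URL are hypothetical, and the upload assumes a ChartMuseum-style repository; the commands are echoed as a dry run):

```shell
#!/bin/sh
# Helm steps added to the CI pipeline (dry run - drop the echos to execute).
CHART=myapp
CHART_VERSION="1.4.0"
REPO_URL="https://charts.example.com"

# 1-2. Update the chart version and package it into a versioned tgz archive
echo helm package ./$CHART --version "$CHART_VERSION" --app-version "$CHART_VERSION"

# 3. Push the packaged chart to the chart repository
echo curl --data-binary "@${CHART}-${CHART_VERSION}.tgz" "$REPO_URL/api/charts"

# 4. Deploy: upgrade the release, or install it if it does not exist yet
echo helm upgrade --install "$CHART" "$REPO_URL/charts/${CHART}-${CHART_VERSION}.tgz" \
  --namespace staging -f values-staging.yaml
```

`helm upgrade --install` is handy in a pipeline because the same command works for the first deployment and for every subsequent one.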

If you have an end-to-end Kubernetes infrastructure, you can leverage your cluster (or clusters) to get a repeatable deployment process and coherent test phases along your pipeline. And you are not very far from complete CD (the only remaining question being “should I trust my automated tests?”).

Obviously, Helm is not perfect, and it has some limits (https://medium.com/virtuslab/think-twice-before-using-helm-25fbb18bc822). For example, it relies on a client/server model, and the server part, called Tiller, is a non-HA (single replica) service running in your cluster, which may cause downtime on a production cluster. But Helm now being a top-level project of the Cloud Native Computing Foundation, it will evolve to provide robust k8s package management.

Helm has several competitors, like Draft (https://draft.sh/), ksonnet (https://ksonnet.io/) or Skaffold (https://github.com/GoogleContainerTools/skaffold).

Final thoughts: in the end, development and CI/CD on Kubernetes is all about tools that facilitate the management of multiple k8s configuration files and their deployment on the cluster.

To go further with Kubernetes, we’d like to introduce the service mesh mechanism.

One of the most famous is Istio (https://istio.io), which brings nice features to administrate, supervise and optimize your Kubernetes cluster.

Main features of Istio are:

  • Traffic Management: load balancing, canary releases, A/B testing
  • Security: eases security enablement and configuration, controls traffic
  • Observability: route monitoring and traffic logging, available through user-friendly dashboards
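As a sketch of what traffic management looks like, here is a hypothetical Istio VirtualService that splits traffic 90/10 between two versions of a service — the basis of a canary release (service name and subsets are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 90      # stable version keeps 90% of the traffic
        - destination:
            host: myapp
            subset: v2
          weight: 10      # canary version receives 10% of the traffic
```

Shifting the weights progressively (and watching the dashboards) lets you roll a new version out, or back, without touching the Deployments themselves.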

More to come… Keep in touch… 🙂
