Bring your containers to the cloud

Cloud and containers: two buzzwords of the IT world put together. What could go wrong?

This post is a refresh of a previous one (https://cloudinthealps.mandin.net/2017/03/24/containers-azure-and-service-fabric/), with a focus on containers rather than other micro-service architectures.

As usual, I’ll speak mainly of the solutions provided by Microsoft Azure, but they usually have an equivalent within Google Cloud Platform or Amazon Web Services, and probably other more boutique providers.

And let’s be more specific, considering what has happened in the container orchestrator world in recent weeks. I am of the general opinion that this war is already over and Kubernetes has won. So let’s focus on how to run and use a Kubernetes cluster.

First step: you want to try out Kubernetes on your own. The ideal starter pack is called Minikube (https://github.com/kubernetes/minikube). I already wrote about it; the good thing is that you can run a Kubernetes installation on your laptop in a few minutes, with no need to worry about setting up a cluster and configurations you do not understand yet.
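As a quick sketch (assuming Minikube and kubectl are already installed, and your machine has a supported VM driver), getting a local cluster takes just a couple of commands:

```shell
# Start a local single-node Kubernetes cluster in a VM
minikube start

# Point kubectl at the Minikube cluster and check that the node is ready
kubectl config use-context minikube
kubectl get nodes

# Run a quick test deployment to see a pod scheduled
kubectl run hello --image=nginx --port=80
kubectl get pods
```

When you are done, `minikube delete` throws the whole thing away, which is exactly what you want from a sandbox.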

You might want to play a bit with Kubernetes the Hard Way (https://github.com/kelseyhightower/kubernetes-the-hard-way), in order to understand the underlying components. But that is not necessary if you only want to focus on the running pods themselves.

Now you are ready to run a production-grade Kubernetes cluster, and you would like to handle everything on your own. There are many ways to get there.

First, you want to deploy your own cluster: not manually, but on your own terms. There is a solution, kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), that will help you along the way without having to do everything by hand. It is compatible with any underlying infrastructure: cloud, virtual or physical.
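As a rough sketch of the kubeadm flow (the exact flags depend on your network plugin; the pod CIDR below assumes Flannel, and the token and hash in the join command are placeholders printed by kubeadm itself):

```shell
# On the machine that will become the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your regular user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node, paste the join command that 'kubeadm init' printed,
# which looks something like:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```

You still have to pick and install a pod network add-on afterwards, but kubeadm spares you the certificate and etcd bootstrapping that Kubernetes the Hard Way walks through manually.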

On Azure specifically, there are two competing solutions to build your Kubernetes cluster: ACS (https://azure.microsoft.com/en-us/services/container-service/) and ACS-Engine (https://github.com/Azure/acs-engine).

ACS (Azure Container Service) is mostly a deployment assistant: it asks you the relevant questions about your K8s deployment, then creates and launches the corresponding ARM template. After that, you’re on your own. You may also download the template, edit it and re-use it anytime you want!
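With the Azure CLI, that question-and-answer flow boils down to something like this (resource group and cluster names are placeholders):

```shell
# Create a resource group to hold the cluster
az group create --name myResourceGroup --location westeurope

# Deploy an ACS cluster with Kubernetes as the orchestrator
az acs create --orchestrator-type kubernetes \
  --resource-group myResourceGroup \
  --name myK8sCluster \
  --generate-ssh-keys

# Fetch the kubeconfig so kubectl can talk to the new cluster
az acs kubernetes get-credentials --resource-group myResourceGroup --name myK8sCluster
```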

ACS-Engine is a command-line, customizable version of ACS, with more power to it 🙂
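The typical ACS-Engine workflow looks roughly like this (the API model file name, resource group and output paths are placeholders; check the project’s README for the exact syntax):

```shell
# Describe the cluster in an API model file (kubernetes.json), then
# generate the ARM template and parameters from it into _output/
acs-engine generate kubernetes.json

# Deploy the generated template like any other ARM template
az group deployment create \
  --resource-group myResourceGroup \
  --template-file _output/<instance>/azuredeploy.json \
  --parameters @_output/<instance>/azuredeploy.parameters.json
```

The API model is where the extra power lives: node sizes, counts, versions and networking options that the ACS assistant never asks you about.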

I feel that both are Azure-dedicated versions of kubeadm: good ways to quickly deploy your tailored cluster, but once it is running, the day-to-day operations are still entirely on you.

BTW, if you go to the official webpage for ACS, it now only speaks about AKS, and you’ll have to dig a bit deeper to find out about the other orchestrators 😉

What if you could have your K8s cluster, run your containers, and only have to manage the workload details? There is a brilliant solution called AKS (https://azure.microsoft.com/en-us/services/container-service/), and no, it does not stand for Azure K8s Service… it actually means Azure Container Service. Don’t ask. With that solution you just take care of your worker nodes and the running workloads; Azure manages the control plane for you. Nothing to do on the etcd & control nodes. Cherry on top: you only pay the IaaS cost of the worker nodes, the rest is free!
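A minimal sketch of spinning up a managed cluster (again, group and cluster names are placeholders):

```shell
# Create a managed AKS cluster; only the agent (worker) nodes are billed
az group create --name myResourceGroup --location westeurope
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 2 --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Only the worker nodes are listed; the control plane is Azure's problem
kubectl get nodes
```

Scaling is just as hands-off: `az aks scale` changes the node count without you ever touching etcd or the masters.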

In my opinion, it’s the best solution today: it offers wide flexibility and control over your cluster at a very low cost, and lets you focus on what matters, running your containers.

One last contestant joins the ring: Azure Container Instances (https://azure.microsoft.com/en-us/services/container-instances/). This solution is still in preview, but might become a strong player soon. The idea is that you only care about running your container, and nothing else. For now there is a connector that plugs it into an actual K8s cluster, presenting itself as a dedicated worker node where you can force a pod to run. I did not have time to fully test the solution and see where the limits and constraints are, but we’ll probably hear from this team again soon.
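Even without a cluster, you can see the appeal: a single container running in seconds (names and image below are placeholders, and since the service is in preview the exact flags may change):

```shell
# Run a single container directly, no cluster or VM to manage
az container create --resource-group myResourceGroup \
  --name mycontainer --image nginx \
  --ip-address public --ports 80

# Check its state and public IP
az container show --resource-group myResourceGroup --name mycontainer
```

You pay per second of execution, which is a very different billing model from keeping a pool of worker nodes alive.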
