Autonomous versus autonomic systems

This is a difficult topic. I have to admit I am still not completely comfortable with all the concepts and functions.
However, the thinking is amazingly interesting, and I will take some time to digest everything.
First things first, I will use this post to summarize what I have learned so far.

How did I end up reading that kind of work, you ask? Weeeellll, that’s easy 🙂
Brendan Burns, in one of his Ignite ’17 sessions, used the comparison “autonomous vs autonomic” to discuss Kubernetes.
This got me thinking about the actual comparison, and aided by our trusted friend, Google, I found a NASA paper on the subject (https://www.researchgate.net/publication/265111077_Autonomous_and_Autonomic_Systems_with_Applications_to_NASA_Intelligent_Spacecraft_Operations_and_Exploration_Systems). I started to read it, but it was a bit obscure for me : scientific English, applied to space research, was a bit too hard for an introduction to the topic of autonomic systems.
Some more research, helped by my beloved wife, led to a research thesis, in French, by Rémi Sharrock (https://www.linkedin.com/in/tichadok/). The thesis is available right here : https://tel.archives-ouvertes.fr/tel-00578735/document. It relates to the same topic, but applied to distributed software and infrastructure, which ends up being way more familiar to me 🙂

The point where I am right now is just past getting the definitions and concepts right.
I will try to describe here what I understand about automated, autonomous and autonomic systems.
There is some progression from the first to the second, and from the second to the third concept.
Let’s start with automated. An automated system is just like an automaton in the old world : something that will execute a series of commands, on the order of a human (or another system). For example, you have a thermostat at home that sends the temperature from inside and outside your home to the heater controller.

There is no brain in there, or almost none.
The next step up is an autonomous system. This one is able to take decisions and act on the data it captures. To continue with the thermostat example, you have a heater controller which will take the current temperatures, from both inside and outside, and decide whether to start heating the house, and how.
The short version is that the system is able to execute a task by itself.
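
To make the distinction a bit more concrete, here is a minimal Python sketch (the sensor readings, thresholds and heat levels are all made up for illustration) : the automated part only reports measurements, while the autonomous part decides and acts on them.

    # Minimal sketch of the automated vs autonomous distinction.
    # All readings, thresholds and "heat levels" are invented for the example.

    def read_sensors() -> dict:
        """Automated part: the thermostat just reports measurements."""
        return {"inside_c": 17.5, "outside_c": 4.0}  # hypothetical readings

    def decide_heating(inside_c: float, outside_c: float) -> str:
        """Autonomous part: the controller decides and acts on the data."""
        if inside_c >= 20.0:
            return "off"
        # The colder it is outside, the harder the heater works.
        return "high" if outside_c < 5.0 else "low"

    if __name__ == "__main__":
        readings = read_sensors()
        print("Heater set to:", decide_heating(readings["inside_c"], readings["outside_c"]))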

And then we have an autonomic system. This one has a higher view of its environment, and should be able to ensure that it will always be able to execute its tasks. I have run out of heater examples, so let’s take a smart mower. The first degree of autonomicity it has is the way it monitors its battery level and returns to its base station to recharge, in order to ensure that it will be able to continue its task, which is mowing the lawn.
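
To illustrate that first degree of autonomicity, here is a small Python sketch (all names, battery levels and thresholds are hypothetical) : the mower monitors its own battery and interrupts its primary task just long enough to protect its ability to keep doing it.

    # Sketch of an autonomic behaviour: the mower watches over its own
    # ability to keep mowing, and recharges before it runs out of power.
    # Battery levels and thresholds are invented for the example.

    class SmartMower:
        LOW_BATTERY = 20   # percent, below this we go and recharge
        FULL_BATTERY = 95  # percent, above this we resume mowing

        def __init__(self):
            self.battery = 100
            self.state = "mowing"

        def step(self):
            """One iteration of the self-management loop."""
            if self.state == "mowing":
                self.battery -= 7            # mowing drains the battery
                if self.battery <= self.LOW_BATTERY:
                    self.state = "charging"  # protect the primary task
            else:
                self.battery = min(100, self.battery + 15)
                if self.battery >= self.FULL_BATTERY:
                    self.state = "mowing"
            return self.state, self.battery

    if __name__ == "__main__":
        mower = SmartMower()
        for _ in range(20):
            print(mower.step())
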
There are multiple pillars of autonomicity. Rémi Sharrock described four in his thesis, and I tend to trust him on this :

Each of these four pillars can be implemented into the system, to various degrees.
I am not yet comfortable enough to describe the four pillars precisely, but that will come in a future post!

Going back to my (our) roots

Yes, another post with an obscure reference for a title.

After some time discussing tech subjects, I was of a mind to go back to something that has often been misread in the past by IT teams and IT management. And by that I mean : business. Yes, again.

Do not misunderstand me, I am still a technologist, and I love learning about technology, finding out the limits and possibilities of any new tech that comes out. I am not a sales person, nor a marketing person. However I have been exposed to many well crafted presentations and talks over the years, and what often came out of even the most interesting ones was this : “our tech is fantastic, buy it!”

All right, I love that tech. Be it virtualisation, SAN, VSAN, public cloud, containers, CI/CD, DevOps… choose whatever you like. But technology is not an end in itself in our day-to-day world. What matters is what you will do with it for your company or customers.

I will take an example. An easy shot at someone I admire. Mark Russinovich, CTO of Azure, and longtime Windows expert (I would use a stronger term if I knew one 🙂 ). A few months ago, during a conference, he had a demo running where he could spin up thousands of container instances in a few seconds, with a simple command.

First reaction : “Wow!”

Second reaction : “Wooooooowwww!”

Third reaction : “How can we do the same?”

Fourth reaction (probably the sanest one) : “Wait, what’s the point?”

And there we go. What was the point? For me, Mark’s point was to show how good Azure tech is. Which is his job, and this demo made that very clear. But Mark did go further, as he usually does, during his speech and encouraged everyone to think about the usages. Unfortunately, most of the people I have discussed with seem to miss that point. They see the wow effect, and want to share it. But few of us decide to sit down and think about what the use cases could be.

And that is the difficult, and probably multi-million dollar question : how to turn amazing technology into a business benefit.

Never forget that, apart from some very lucky people, we are part of a company that is trying to make money, and our role is to contribute to that goal. We should always think about our customers, internal or external, and how we can help them. If doing that involves playing with some cool toys and being able to brag about it, go for it! But it does not work the other way around.

PS : to give one answer to how we could use Azure Container Instances in the real world, especially the kubelet version of ACI, try and think about batch computing, where you would periodically need to spin up dozens or hundreds of container instances for a very short time. Does that ring any bells for you?
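
To make that batch idea a bit more tangible, here is a rough sketch that shells out to the Azure CLI to spin up a handful of short-lived container instances, then deletes them. The resource group, image and naming are placeholders, and the exact az container create options may vary between CLI versions, so treat this as an illustration rather than a recipe.

    # Rough sketch: spin up a few short-lived container instances for a batch
    # job, then tear them down. Resource group, image and container names are
    # placeholders; check `az container create --help` for the exact options.
    import subprocess

    RESOURCE_GROUP = "my-batch-rg"                        # hypothetical resource group
    IMAGE = "myregistry.azurecr.io/batch-worker:latest"   # hypothetical image
    COUNT = 5

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    for i in range(COUNT):
        run([
            "az", "container", "create",
            "--resource-group", RESOURCE_GROUP,
            "--name", f"batch-worker-{i}",
            "--image", IMAGE,
            "--restart-policy", "Never",   # batch jobs run once and stop
        ])

    # ... wait for the jobs to finish, then clean up ...
    for i in range(COUNT):
        run(["az", "container", "delete", "--yes",
             "--resource-group", RESOURCE_GROUP,
             "--name", f"batch-worker-{i}"])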

PPS : I could not find the exact session from Mark I am describing here, but there is an almost identical session from Corey Sanders and Rick Claus here : Azure Container Instances: Get containers up and running in seconds

My very first public presentation – feedback

There we are, I have finally given my talk about Kubernetes and Azure.

It was both more and less than I expected.

It was easier than I expected to settle into the position of a speaker once I got there. My fellow speakers were very kind and supportive, which helped with the pre-stage jitters 🙂 It was also easier because the room was of a reasonable size, and I was not on stage in front of 500 people.

And it was less of a deep dive than I expected, which also allowed me to relax a bit. I could not get a feel for the audience before going there, which left me in the dark regarding their needs and expectations.

 

Let’s set the stage. The event took place at Microsoft’s Building 20, which is a Reactor (https://developer.microsoft.com/en-us/reactor/). So the building is definitely designed to host events comfortably. That helped a lot, as we even had someone from the A/V team to help us and ensure all the screens and microphones would be working correctly. And yes, the free coffee might also have been a huge help 🙂

The room was large, without any raised platform for the speaker, but with multiple repeat screens all around.

I was the third speaker, so I definitely had some time to review my slides and demo setup a few times.

I did set up the demo environment the night before, to avoid any deployment issue at the last minute (which did happen 2 days before, while I was practicing). Once again, having a scripted demo ensured that I would not forget any step, or mess up some command line options.

 

I did have a few issues during the talk. First, the mic stopped working at some point : a failed battery. I kept on speaking without it, as the room was small enough to let me speak louder for a short time and still be heard. The support guy came shortly after to replace the battery, so no big issue there.

My remote clicker worked perfectly, but not the pointer part. That’s a shame, because it made it more difficult to point at a precise section of a slide or demo. Afterwards I found out why, and I should be able to avoid that particular issue in the future.

I did not get as much interaction as I hoped I would. I think that was mostly due to my anxiety, which prevented me from behaving like my normal self and being engaging.

 

What would I change for the future? First, for a set event like this one, I would practice in front of a camera, or a mirror, to actually see and hear my speech. That would probably ensure that I keep the correct pace and articulation, and also make sure that the flow of slides is comprehensible.

Second, I would work more on knowing the expectations of the audience. It turns out that my talk was way too technical and faster than it should have been. While discussing with the attendees afterwards, I realized that I did not get many of the points through, probably because I went over them too fast. This brings me back to the interaction point above : had I been more comfortable and interactive, I could have noticed that during the session and corrected it.

Third, I should probably think about learning a bit more about controlling my voice and projecting it. I realized that during the week leading up to the event, as I had to speak in a loud environment, and present/discuss the same kind of subjects.

 

Labs

A word on the hands-on labs we had in the afternoon. I was just glad to have stayed for that part.

First, because I had never been on the proctor side before, and it’s really fascinating to see a problem through the eyes of someone with a different mindset and culture. I really learned a lot, and realized a lot, during these 2 hours.

Second, because it showed me the areas where my presentation had been lacking, and how much I had not been clear enough to be understood by everyone. I think these discussions with the attendees were the deepest feedback and best improvement tips that I could get.

For the record, the container labs we used are there : https://github.com/Azure/blackbelt-aks-hackfest/

That’s it for now. This first talk has unlocked something and made me realize that I should talk at every occasion I can, and that I love it, at least when it’s done 😉

 

My very first public presentation – preparation

I’m writing this a bit ahead of time, as I plan to write a follow-up to compare what is planned against what will have happened.

 

As the title suggests, I will be hosting my very first public session on the 21st of April. I am taking part in Global Azure Bootcamp, a worldwide community event where experts from around the world gather locally to share their experience and knowledge on Azure. I would probably have preferred to be involved in an event in France, however I am in Seattle that week, so my event of choice will be directly @Microsoft in Redmond.

This will be an occasion for multiple first times for me : first time on my own as a public speaker, first participation in Global Azure Bootcamp, first time presenting fully in English, and first time presenting in Redmond of course 🙂 So, big step far out of my comfort zone.

 

The aim of this post, as stated above, is to record what I did to prepare for the event, and afterwards, write down what went right and wrong, and how I can progress and do better.

 

I have chosen the topic of containers & Kubernetes on Azure for two reasons : first, I am rather comfortable with the subject, and second, a colleague, Jean Poizat, https://www.linkedin.com/in/jean-poizat-0a97bb/, had already built a slide deck and demo which I could expand from.

Obvious first step then : I have chosen familiar ground and existing material, to limit the amount of work needed. This however presented a challenge : starting from slides which I did not write, getting familiar with them, then rearranging and completing them to suit my purpose and comfort.

A word on how I got out of my comfort zone : a nice kick in the back end! I saw on some social networks a few friends and colleagues getting ready for GAB in France, which prompted me to start collaborating, at least to give a hand. Once I realized I would be in Seattle at that time, I contacted the local event owner, Manesh Raveendran, https://www.linkedin.com/in/maneshraveendran/, to offer my help, in broad terms. It took me a while to be able to suggest the session I will be presenting, and I almost chickened out a few times. But once Manesh wrote me in, that was it, I had to make this work!

The next step was to get very familiar with the presentation and with the associated demos. I started presenting to myself, but out loud and standing. This allowed me to work on my speech, content and speed, and fine-tune the slides. I also quickly incorporated the demos, to work out how to time things, and how to work around a failing demo.

I started 10 days before the set date, with the slides & demo mostly ready. I planned a minimum of one deck run every two days, which I would then adjust depending on my comfort and accuracy.

During these dry runs, I would keep a piece of paper next to me, to write down whatever thoughts, questions or clarifications were needed. These would affect either the speech or the slides, and even the demo.

In between these runs, I would review the slides as much as I could every day.

I did not spend as much time reviewing the demo, as Jean had provided me with a solid script that would mostly run by itself, on my cue. The few manual demos were quite simple, and worked every time.

I was also lucky enough to meet with several architects during that time, who were kind enough to give me their feedback on my slides, and even to let me rehearse in front of them, and give me their impressions and advice. That was a big help, and a great comfort as showtime loomed closer 🙂

I am now a few hours from the actual session; I will submit this post and start writing the follow-up right after the session.

Stay tuned!

 

PS : the program for the Redmond event is here : https://www.azurecommunityevents.com/#/event?181C8806-AFB7-4142-B0D3-B1858E9E8956

IoT everywhere, for everyone

Today is another attempt to explain part of the Microsoft Azure catalog of solutions.

As I did write about the different flavors of containers in Azure, I feel that it’s time for a little explanation of the different ways of running your IoT solution in Azure.

There are three major ways of running an IoT platform in Azure : build your own, Azure IoT Suite and IoT Central.

There are some sub-versions of those, which I will mention as I go along, but these are the main players. I have listed them in a specific order, on purpose :

There you have it, I actually do not have to write another word 🙂

 

Alright, some words anyway. At the far end of the figure, you have what has always existed in the cloud and before. If you want a software stack, you just build it. You will probably use some third-party software, unless you really want to write everything from the ground up. Let’s assume you will at least use a DBMS, probably a queuing system, etc. You might go as far as using some PaaS components from Azure (IoT Hub is a good candidate obviously, along with Stream Analytics). Long story short, you will have complete control over the stack and how you use it. But with great power… etc. It is a costly solution, in terms of time, money and people. And not only as an upfront investment : you will also have to maintain all that stack, and even provide your users with some kind of SLA.
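
As an example of using one of those PaaS building blocks, here is a minimal sketch of a device pushing telemetry to IoT Hub with the Python device SDK (azure-iot-device). The connection string and the payload are placeholders, so this is just to show the general shape of the code.

    # Minimal device-to-cloud telemetry sketch using the azure-iot-device SDK.
    # The connection string and the payload are placeholders.
    # pip install azure-iot-device
    import json
    import time
    from azure.iot.device import IoTHubDeviceClient, Message

    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()

    for i in range(10):
        payload = {"temperature": 20 + i * 0.5, "sequence": i}  # fake sensor data
        client.send_message(Message(json.dumps(payload)))
        time.sleep(1)

    client.disconnect()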

 

Let’s say you are not ready to invest two years of R&D into your platform, and want to be able at least to get your pilot on track in a few days. Here comes Azure IoT Suite. It is a prepackaged set of Azure PaaS components that are ready to use. There are several use cases fully ready to deploy : Remote Monitoring, Predictive Maintenance, Connected Factory. You can start with one of those, and customize it for your own use. Once it is deployed, you have full access to each Azure component and you may evolve the model to suit your own needs. There are some very good trainings available, with device simulators. You can start playing with a suite in a few hours, and see the messages and data go back and forth. You still have to manage the components once they are deployed, even though they are PaaS, so the management overhead is rather limited. But it is your responsibility to operate.

 

At the other end of the scope, we have Azure IoT Central. IoT Central is a very recent solution to help you start your IoT project. We have been lucky enough to discuss the solution early on, and I have to admit I was convinced very early on by the product and the team behind it. So, the point is : you have a business to run, and you might not want to invest millions to build and run something that is not your core business. Start your IoT Central solution, configure a few devices, sensors, users and applications, and you’re done. Pilot in minutes, production in a few hours.

And like a good SaaS solution, you do not operate anything, you do not even have to know what is under the hood.

 

To conclude, I’ll say that the SaaS, PaaS and IaaS subtitles on the figure were there to remind you that the same choice principles apply here as anywhere in the cloud world : it is a trade-off you have to make between control and responsibility.

Azure SLAs

Another quite short post today, but for a complex topic.

I have had this discussion several times with our customers, and more recently with several Microsoftees and MS partners.
The discussion boils down to “SLAs for Azure are complex, and you might not get what you think”.
And I’ll add “you might get better or worse than what you are used to on-premises”.

Quick reminder, the official SLA website is here : https://azure.microsoft.com/en-us/support/legal/sla/
They are updated quite frequently, and what I write today might be proven wrong very soon. Yes, it happens, sometimes I am right for a long time 🙂

Back to our SLAs. I will focus on one service, but the idea can be expanded to almost all services.

Some service SLAs are quite easy to figure out. Take Virtual Machines (Azure or not) for example. You just have to decide which metric proves that a VM is alive (a ping reply for example), and measure that. Do some computation at the end of the month, and you’re done.
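
As a back-of-the-envelope sketch (the probe data is invented and the service credit tiers are only illustrative, the real numbers are in each SLA document), the computation could look like this :

    # Back-of-the-envelope monthly uptime computation for a VM-style SLA.
    # The downtime figure is invented; real SLAs define downtime much more precisely.

    MINUTES_IN_MONTH = 30 * 24 * 60   # 43,200 minutes for a 30-day month
    downtime_minutes = 95             # e.g. minutes where the ping probe got no reply

    uptime_pct = 100 * (MINUTES_IN_MONTH - downtime_minutes) / MINUTES_IN_MONTH
    print(f"Monthly uptime: {uptime_pct:.3f}%")

    # Illustrative service credit tiers (check the actual SLA for real values):
    if uptime_pct < 99:
        credit = 25
    elif uptime_pct < 99.9:
        credit = 10
    else:
        credit = 0
    print(f"Service credit: {credit}%")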

With backups, the official SLA is a monthly uptime percentage. Which does not mean much to me, speaking of backups. Luckily, there is a definition of “downtime” :
“Downtime” is the total accumulated Deployment Minutes across all Protected Items scheduled for Backup by Customer in a given Microsoft Azure subscription during which the Backup Service is unavailable for the Protected Item. The Backup Service is considered unavailable for a given Protected Item from the first Failure to Back Up or Restore the Protected Item until the initiation of a successful Backup or Recovery of a Protected Item, provided that retries are continually attempted no less frequently than once every thirty minutes.

Meaning basically that the “backup service” has to be available at all times, whether you try to back up or restore. But, and there are actually two buts, there is no hard commitment there. Microsoft will give you back a service credit if the service is not provided, up to a limit of a 25% credit. In the worst case, you could get no service at all for a month, and you would get a 25% service credit. And the second, more important, but : there is absolutely nothing about a guarantee on your data. You could lose all of your data, and at most get a 25% service credit.
Some people would then point you to the storage SLA, stating that once the backup is stored, the SLA that applies is the one from storage. Another but here, as we are in the same situation : no commitment about your data.

One note : I never looked closely at the SaaS services SLAs (Office 365 for example), but I remember someone from Microsoft IT saying that it was too difficult, and expensive, even for them, to build the infrastructure and services to compete with what Office 365 offers. So yes, you might dig into their SLAs, and find that they have a light hand… but think hard about what you could do yourself, and how much it would cost you 🙂

Do not get me wrong, Microsoft does quite a good job with its SLAs, and from my experience, a way better job than most companies can do internally or for their customers. I worked for a hosting company, and I can assure you that we could write down an SLA about backups, and even commit to it. We could pray that we would be right, and prepare the compensations in case we were at fault, but that was it. There was no way for us to economically handle a complete guarantee.

Microsoft Tech Summit France

As the summit has just closed its doors, I would like to share my feedback on this first Tech Summit to happen in France.
As far as I know there are already Tech Summits in several other countries around the world. From what I have heard, they are supposed to be “local Ignite” events. For honesty’s sake, I have to say that I have not attended Ignite so far, only Tech-Ed Europe a few years ago, so I will not compare the two events too much. However, according to the community website (http://aka.ms/community/techsummit), the sessions were exactly the same as the ones played at Ignite.

I did not see any numbers published so far, but it was a rather small event. Attendance at the first keynote, on Microsoft 365, was not really high; however, the Azure keynote attracted more people and the room was almost full. I had the feeling that Azure was more exciting than Microsoft 365, but maybe 9:30 was too early for most 🙂 Or maybe I am biased toward Azure 😉
The conference took place in one hall of Paris Expo, on one level. And we were far from crowding it.
As it was a free event, right in Paris, it seems that a lot of people came and went, just for a session or two, rather than stay for the whole two days. Which is rather smart, as it lets local people continue running their business, while being able to attend some sessions. And it lent a quiet feeling to the event itself.

For once, I managed to attend a few sessions, and they were very interesting, each very focused on a tight subject. I was never misled by a catchy title enticing me into a session that had nothing to do with what I expected.
The speakers were a mix of Microsoft Corp and Microsoft France, most sessions were in English, and we could easily interact with every speaker afterwards. Overall the sessions raised some good ideas for me to pitch, and subjects to talk about with my customers. I would have liked more technical sessions, but I think deep dives need a specific environment and audience to run properly.

In conclusion, I liked the event overall, but I do not find it as attractive as Experiences. And it was much smaller!
Also, Experiences had been criticized as being less technical than the previous event it replaced, Tech Days. From my point of view, Tech Summit is on the same level as Experiences, just smaller and 6 months later (or earlier, depending on how you look at it 🙂 )

As usual, the strategy is a bit difficult to read, but the local speakers and content providers were present and accessible, which is almost always my first reason to come 🙂

One final word about the technical levels used to sort the sessions : levels are standard, from 100 to 400, with 100 being introductory and 400 being expert. My advice would be to change the description, as the level describes the current knowledge you need to have about the product (Azure for example) more than the depth of the session. 400 does not mean you will see live coding and the innards of the platform. It means that you already know where you’re going, and have probably already used the product.

GDPR, my love

The original title was supposed to be “in bed with GDPR”, but it might have been a little too clickbait 🙂

Anyway, short post today, but an important one, I think.

To be honest, I feel like screaming every time I see/read/hear someone telling me that “we need to have a GDPR offer/business/thing”. Alright, it is a buzzword, and I have to live with that. I have made my peace with AI, Blockchain, Big Data, IoT, Cloud, etc. But I still struggle with GDPR. Here is why.

First, this regulation is a very important one in Europe, and will impact every business that comes anywhere close to us. You cannot ignore it. And every company has to look into it and find out what is needed to be compliant.

Second, the deadline is looming, but the national law for France is not yet in force. There is a text under discussion (https://www.legifrance.gouv.fr/affichLoiPreparation.do;jsessionid=?idDocument=JORFDOLE000036195293&type=contenu&id=2&typeLoi=proj&legislature=15), but there might still be many changes before the law is applicable in France. That means that we should hurry up and wait, but be prepared… a tough one.

Last, and most important, and the main reason for my screaming : it is mostly a question of law, for lawyers. Sure, IT has to get ready to comply, but most of the consulting and debating and discussing has to be handled by law experts, who are the right people to understand what it will mean to be compliant.

Sure, an IT company can put some services in place, and offer some broad suggestions and consulting. But without a lawyer trained for that (and a properly written and voted law…), our job is almost meaningless.

The risk of innovation burnout

Catchy title, isn’t it? It could have been copied from a Management magazine, or CIO Monthly. Note to self : check before getting a copyright infringement lawsuit.

What I wanted to write about is mostly how to deal with the fast pace of innovation in the IT cloud business.

And mostly, how I deal with it, in my specific role, and how I dealt with it before.

As IT pros, we need to always keep an eye on the market, to check emerging technologies, to check where the existing ones are going and which ones are dying. This serves two purposes :

  • Keep our company and infrastructure up to date
  • Keep our own profile up to date, or at least on the track for the future

In French we have an expression for that : “veille technologique”, which would roughly translate to “technological watch”.

In some French schools this subject is even taught. It mostly describes how to identify the proper sources of information to track, and how to track them. The sources are mostly tech websites and influencers. The tools are more diverse : RSS feeds, LinkedIn, Twitter, Facebook, Reddit…

In my previous position, as an infrastructure consultant & architect, I had to keep up with a limited set of technologies, mostly around databases and virtualization. My watch was purely technical, and dealt with the detailed evolution of some components : which new features were available in the latest version of vSphere ESX, what capabilities were expected in the next release of Oracle DB, etc. In that scenario, using RSS feeds and attending some virtual events from the software editors was enough. I could keep up with the innovation pace by investing something along the lines of one day per month of my time.

Today, if I consider my CTO-like role, the job is more complex. The scope I have to watch is much broader. If you consider only Microsoft Azure and the services it provides, it is already almost impossible to keep up. For example, if you use the “Last week in Azure” blog posts, which only relate official news from the Azure blog, you get around 30 news items per week (https://azure.microsoft.com/en-us/blog/last-week-in-azure-week-of-2018-02-12/). If you want to dig into each announcement, and find out how it might affect you, this will take more time than you have in a week 🙂

And that does not count anything outside of official Azure news. If you add some specific content creators, from Microsoft or not, who also post every week, and then add news and trends around DevOps… you get the point. And I forgot the podcasts and videos…

 

The main risk, as the title stated, is innovation burnout, or innovation overload. From what I have seen with colleagues, partners and customers, most of them do not want to keep up with that mass of information. Fortunately, I love learning new stuff, and I love information. Here is how I am currently working to get the most relevant information in my mind, and keep up with the news stream.

I have separate tools for separate needs, and most importantly, I do not use them in the same environment or at the same pace :

  1. I use RSS feeds to track some news websites and blogs. I use them for both ends of the spectrum : news websites that publish a lot, which makes RSS feed scanning worthwhile if you are a quick reader; and professional blogs where the authors publish irregularly (see the small sketch after this list).
  2. We also pipe the official Azure blog RSS feed directly into Slack, so that we can easily discuss any announcement that might be interesting for us or for our customers.
  3. I tend to avoid in-person micro events where you get to see a session on one subject, for half a day. This usually means that you consume at least half a day, or more, for an unguaranteed return on investment. Unless the speaker is well renowned, or the session is a way of meeting people you need to meet anyway.
  4. By the same token, when I go to official conferences, I mostly do not attend sessions. I attend some of them, but my main information medium is people. I’d rather talk to five different people during the same 45 minutes, and get their opinions and feedback on the same tech. Unless the tech is not out yet and no one has been using it 🙂
  5. I also use podcasts, mostly for business and market trends, as well as general information. I listen to those usually when I travel, and am mostly way behind in my podcast list 😉
  6. Lastly, I use discussions with my fellow humans to share and debate tech (and non-tech) trends : colleagues, partners, competitors, customers, prospects etc.
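
For the RSS part mentioned in the first item, here is a tiny sketch of what a quick programmatic scan could look like with the feedparser library (the feed URL is just an example) :

    # Quick scan of an RSS feed, printing the most recent titles and links.
    # pip install feedparser
    import feedparser

    FEED_URL = "https://azure.microsoft.com/en-us/blog/feed/"  # example feed

    feed = feedparser.parse(FEED_URL)
    print(f"{len(feed.entries)} entries in: {feed.feed.get('title', FEED_URL)}")

    for entry in feed.entries[:10]:
        print(f"- {entry.title}\n  {entry.link}")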

 

All in all, I tend to remain at a high level of information on tech trends, until I have a real need to dig into one, and find out how it is applicable to a specific scope and project. This allows me to keep my sanity, and have some productivity every day!

Bring your containers to the cloud

Cloud and containers, two buzzwords of the IT world put together. What can go wrong?

This post is a refresh on a previous one (https://cloudinthealps.mandin.net/2017/03/24/containers-azure-and-service-fabric/) with a focus on containers, rather than the other micro-services architectures.

As usual, I’ll speak mainly of the solutions provided by Microsoft Azure, but they usually have an equivalent within Google Cloud Platform or Amazon Web Services, and probably other more boutique providers.

And let’s be more specific, considering what has happened in the container orchestrator world in recent weeks. I am of the general opinion that this war is already over, and Kubernetes has won. Let’s focus on how to run/use/execute a Kubernetes cluster.

First step : you want to try out Kubernetes on your own. The ideal starter pack is called Minikube (https://github.com/kubernetes/minikube). I already wrote about it; the good thing is that you can run a Kubernetes installation on your laptop in a few minutes. No need to worry about setting up clusters and configurations you do not understand at all.

You might want to play a bit with Kubernetes the Hard Way, in order to understand the underlying components. But that is not necessary if you only want to focus on the running pods themselves.
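
Once a local cluster like Minikube is up, you can indeed focus on the pods. As a minimal sketch, the official Kubernetes Python client can list them (this assumes your kubeconfig already points to the Minikube cluster) :

    # List the pods of the current cluster (e.g. a local Minikube) using the
    # official Kubernetes Python client. Assumes ~/.kube/config points to it.
    # pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()   # reads the current kubeconfig context
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")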

Now you are ready to run a production Kubernetes cluster, and you would like to handle everything on your own. There are many ways to get there.

First, you want to deploy your own cluster, not manually but on your own terms. There is a solution, kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), that will help you along the way, without having to do everything by hand. This is a solution that is compatible with any underlying hardware, cloud, virtual or physical.

On Azure specifically, there are two competing solutions to build your Kubernetes cluster : ACS (https://azure.microsoft.com/en-us/services/container-service/) & ACS-engine (https://github.com/Azure/acs-engine).

ACS (Azure Container Service) is mostly a deployment assistant that will ask you the relevant questions about your K8s deployment, and then create and launch the corresponding ARM template. After that, you’re on your own. And you may download the template, edit it and re-use it anytime you want!

ACS-Engine is a customizable, command-line version of ACS, with more power to it 🙂

I feel that both are Azure-dedicated versions of kubeadm, but they do not add value to your production operations. They still are good ways to quickly deploy your tailored cluster!

BTW, if you go to the official webpage for ACS, it now just speaks about AKS, and you’ll have to dig a bit deeper to find out about the other orchestrators 😉

What if you could have your K8s cluster, be able to run your containers, and only have to manage the node and workload details? There is a brilliant solution called AKS (https://azure.microsoft.com/en-us/services/container-service/), and no, it does not stand for Azure K8s Service… It actually means Azure Container Service. Don’t ask. With that solution you just have to take care of your worker nodes, and the running workloads. Azure will manage the control plane for you. Nothing to do on the etcd & control nodes. Cherry on top : you only pay for the IaaS cost of the worker nodes, the rest is free!

In my opinion, it’s the best solution today : it offers you wide flexibility and control over your cluster, at a very low cost, and lets you focus on what is important : running your containers.

One last contestant joins the ring : Azure Container Instances (https://azure.microsoft.com/en-us/services/container-instances/). This solution is still in preview, but might become a strong player soon. The idea is that you just care about running your container, and nothing else. For now, it can also plug into an actual K8s cluster, presenting itself as a dedicated worker node where you can force a pod to run. I did not have time to fully test the solution and see where the limits and constraints are, but we’ll probably hear from this team again soon.