PaaS and Managed Services

If you know me, or have read some of my previous articles, you will know that I am a big fan of PaaS services.

They provide an easy way for architects and developers to design and build complex applications, without having to spend a lot of time and resources on components that can be used out of the box. And they relieve us IT admins of having to manage lower-level components and irrelevant questions. Those questions are the ones that led me to switch my focus to cloud platforms a few years ago. One day I’ll write an article on my personal journey 🙂

Anyway, my subject today concerns the later stages of the application lifecycle. Let’s say we have designed and built a truly modern app, using only PaaS services. To be concrete, here is a possible design.


I will not dig into this design, that is not my point today.

My point is: now that it is running in production, how do you manage and monitor the application and its components?

I mean from a Managed Services Provider perspective, what do you expect of me?

I recently heard an approach that I did not agree with, but that had its benefits. I will start with that one, and then share my own approach.

The careful position

What I heard was a counterpoint to the official Microsoft standpoint, which is “we take care of the PaaS components, just write your code properly and run it”. I may have twisted the words here… The customer’s position was then: “we want to monitor that the PaaS components are indeed running, and that they meet their respective SLAs. And we want to handle security, from code scanning to intrusion detection”.

This vision puts both a light and a heavy load on the IT team. The infrastructure monitoring part is quite easy to define and build: you just have to read the SLAs of each component and find the best probe to check against them. Nothing very fancy here.

The security part is more complicated, as it requires you to handle vulnerability scanning, including code scanning, which is more often a developer skill, as well as vulnerability watching.

This vulnerability scanning and intrusion detection part is difficult, as you are using shared infrastructure in Azure datacenters, and you are not allowed to run these kinds of tools there. I will write a more complete article sometime this year on what we can do on this front, and how.

Then comes the remediation process that will need to be defined, including the emergency iteration, as you will have some emergencies to handle on the security front.

The application-centric position

My usual approach is somewhat different. I tend to work with our customers to focus on the application, from an end-user perspective. Does that user care that your cloud provider did not meet the SLA on the Service Bus you are using? Probably not. However, he will call when the application is slow or not working at all, or when he experiences a situation that he thinks is unexpected. What we focus our minds on is finding out which metrics we have to monitor on each PaaS component that are meaningful for the application behavior. And if the standard provided metrics are not sufficient, then we work on writing new ones, or composite metrics, that tell us whether everything is running smoothly, or not.
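To make the idea of a composite metric concrete, here is a minimal sketch; the metric names and thresholds are invented for the example, not taken from any real deployment. Several component-level values are folded into one application-health verdict:

```python
# Sketch: combine individual PaaS component metrics into a single
# application-health verdict. Names and thresholds are illustrative.

THRESHOLDS = {
    "web_response_ms": 800,    # end-user page response time
    "queue_depth": 1000,       # messages waiting in the service bus
    "db_dtu_percent": 90,      # database utilization
}

def app_health(metrics):
    """Return 'healthy' or 'degraded' plus the offending metrics."""
    breaches = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, 0) > limit]
    return ("degraded", breaches) if breaches else ("healthy", [])

# Individual components can be busy without the application suffering:
print(app_health({"web_response_ms": 350, "queue_depth": 200, "db_dtu_percent": 85}))
# → ('healthy', [])
```

The point of the composite is that a busy database is not, by itself, an application problem; only when the user-facing numbers drift do we consider the application degraded.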

The next step would be, if you have the necessary time and resources, to build a Machine Learning solution that reads the data from each of the components (PaaS and code) and is able to predict that an issue is about to arise.

In that approach we do not focus on the cloud provider SLAs. We will know from our monitoring that a component is not working, and work with the provider to solve that, but it’s not the focus. We also assume that the application owners already have code scanning in place. At least we suggest that they should have it.

Monitoring and alerting

Today is another rant day, or, to put it politely, a clarification that needs to be made.

As you probably know by now, I’m an infra/Ops guy. So monitoring has always been our core interest and tooling.

There are many tools out there, some dating back to the pre-cloud era, some brand new and cloud-oriented, some focused on the application, some on the infrastructure. And with some tuning, you can always find the right one for you.

But beware of a fundamental misunderstanding that is very common: monitoring is not alerting, and vice versa.

Let me explain a bit. Monitoring is the act of gathering information about the value of a probe. This probe can measure anything, from CPU load to an application return code. Monitoring then stores this data and gives you the ability to graph/query/display/export it.

Alerting is one of the possible actions taken when a probe reaches a defined value. The alert can be an email sent to your Ops team when a certain CPU reaches 80%, or it could be a notification on your iPhone when your spouse gets within 50m of your home.

Of course, most tools have both abilities, but that does not mean that you need to mix them and setup alerting for any probe that you have setup.

My regular use case is a cloud-based IoT solution, where we manage the cloud infrastructure backing the IoT devices and application. In that case we usually have a minimum of two alerting probes: the number of live connected devices, and the response time of the cloud infrastructure (based on an application scenario).

And that would be it for alerting, in a perfect world. Yes, we would have many statistics and probes gathering information about the state of the cloud components (web applications, databases, load balancers, etc.). And these would make nice and pretty graphs, and provide data for analytics. But in the end, who cares if the CPU on one instance of the web app reaches 80%? As long as the response time is still acceptable and there is no significant variation in the number of connected devices, everything is fine.
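A tiny sketch of that separation, with invented probe names and thresholds: every sample is stored for graphing, but only the two application-level probes are allowed to raise an alert.

```python
# Sketch: store every probe value, but alert on only two of them.
# Probe names and thresholds are illustrative, not from a real deployment.

history = []          # the "monitoring" side: everything gets stored
ALERT_RULES = {       # the "alerting" side: deliberately kept tiny
    "connected_devices": lambda v: v < 9000,   # devices dropping off
    "app_response_ms":   lambda v: v > 1500,   # scenario running slow
}

def ingest(probe, value):
    """Record the sample; return an alert only for the chosen probes."""
    history.append((probe, value))
    rule = ALERT_RULES.get(probe)
    if rule and rule(value):
        return f"ALERT: {probe}={value}"
    return None       # stats like CPU load are stored, never alerted on

ingest("web_cpu_percent", 82)             # stored for graphs, no alert
print(ingest("connected_devices", 8500))  # below the floor: alerts
```

The CPU sample still ends up in `history` for graphing and analytics; it simply has no alert rule attached to it.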

When one of the alerting probes goes off, you then look into the other probes and statistics to figure out what is going on.

About the solution

There are so many tools available for monitoring and alerting that there cannot be a one-size-fits-all solution.

Some tools are focused on gathering data and alerting, but not really on the graphing/monitoring part (like Sensu, or some basic Nagios setups), and some are good at both (Nagios+Centreon, New Relic). Some are mostly application-oriented (Application Insights, New Relic), some are focused on infrastructure, or even hardware (HPE SIM, for example).

I have worked with many, and they all have their strengths and weaknesses. I will not use this blog to promote one or the other, but if you’re interested in discussing the subject, drop me a tweet or an email!

The key thing here is to keep your alerting to a minimum, so that your support team can work in a decluttered environment and be very reactive when an alert is triggered, rather than drowning in a ton of fake alarms, false warnings and “this is red but it’s normal, don’t worry” 🙂

Note: the idea for this post goes to a colleague of mine, and the second screenshot comes from a tool another colleague wrote.

WPC 2016

It has almost been a year since my first Worldwide Partner Conference, organized by Microsoft in Toronto.

At the time, I wanted to share some insights, and some tips to survive the week.

Before WPC, I had attended multiple TechEd Europe and VMworld Europe events, in several locations over the years. WPC is slightly different, as it is a partner-dedicated event, without any customers or end users. That gives a very different tone to the sessions and discussions, as well as a very good opportunity to meet with Microsoft execs.

As it was my first time, I signed up for the FTA (First Time Attendee) program, which gave me access to a mentor (someone who had already attended at least once) and a few dedicated sessions to help us get the most out of the conference.


The buildup weeks

In the months preceding the event, Microsoft will be pushing to get you registered. They are quite right to do so, for two reasons.

First the registration fee is significantly lower when you register early. So if you are certain to attend, save yourself a few hundred dollars and register as soon as you can. Note that you may even register during the event for the next one.

Second, the hotels fill up very quickly, and if you want to be in a decent area, or even in the same place as your country delegation, be quick!


A few weeks before the event, I had a phone call with my mentor, who gave me some advice and opinions, as well as pointers on how to survive the packed 5 days. This helped me focus on the meetings with potential partners, and meetings with microsoftees, rather than on the sessions themselves. More on that subject later.

During that period, you are also given the opportunity to complete your online WPC profile, which may help get in touch with other partners, and organize some meetings ahead of time.


You also get the session schedule, which lets you organize your coming days and see what the focus is.

A few days before the event, I was surprised to learn that we had “graduated” in the Microsoft partner program, from remotely managed to fully managed. So we had a new PSE (the Microsoft representative handling us as a partner), who was very helpful and set up a lot of meetings with everyone we needed to meet from Microsoft France. It helped, as a first-timer, to be guided by someone who knew the drill.

I was very excited to get there, and a bit anxious as we were scheduled to meet a lot of people, in addition to my original agenda with many sessions planned.



The event

I’ll skip the traveling part and will just say that I was glad I came one day early, so that I had time to settle into my new timezone, visit a bit and get comfortable with the layout of the city and the venue.

I will not give you a blow-by-blow recount, but I will try to sum up the main points I found worthy of note.

The main point, which I am still struggling to classify as good or bad, is that we met almost exclusively people from France, microsoftees or partners. I was somewhat prepared for that, having heard the talk from other attendees, but it is still surprising to realize that you have traveled halfway across the world to spend 5 days meeting with fellow countrymen.


There is an explanation for that: this is the one time in the year when all the Microsoft execs are available to all partners, and they are all in the same place. So it is a good opportunity to meet them all, at least for your first event. I may play things differently next time.

Nevertheless, we managed to meet some interesting partners from other countries, and started some partner-to-partner relationships from there.


I did not go to any sessions other than the ones organized for the French delegation. These were kind of mandatory, and all the people we were meeting went there too. So I cancelled all my other plans to watch sessions.

I did not really miss the technical sessions, as I work exclusively on cloud technologies, which are rather well documented and discussed all year round in dedicated events and training sessions. But on some other subjects, technical or more business/marketing-oriented, some sessions looked very interesting, and I might be a bit more forceful about attending those next time.


I attended the keynotes, which were of varying levels of interest and quality. They are a great show, and mostly entertaining. The level of interest is different for every attendee, depending on your role and profile.

What I did not expect, even with my experience of other conferences, was the really packed schedule. A standard day ran like this:

  • 8:30 to 11:30: keynote
  • 11:30 to 18:30 (sometimes more): back-to-back meetings, with a short window to grab a sandwich
  • 19:30 to whatever time suits you: evening events, either country-organized or general events
  • 22:30: get to bed, and start again


You may also insert into that schedule a breakfast meeting, or a late business talk during a party/event.

So, be prepared 🙂



A word on the parties/events: some countries organize day trips to do some sightseeing together. Niagara Falls is not far from Toronto, so it was a choice destination for many of them. We had an evening of BBQ on one of the islands facing Toronto, with splendid views of the city skyline at sunset. Some of the parties are just dinners in quiet places, others are more hectic parties in nightclubs. The main event is usually a big concert, with nothing businesslike and everything fun-oriented!


The cooldown time

There are a few particularities to that event, mostly linked to Microsoft’s organization and fiscal-year schedule.

The event is planned at the beginning of Microsoft’s fiscal year. This means that microsoftees get their annual targets right before the event, and start fresh from there.

The sales people from MS also have a specific event right after WPC in July, which means they are 120% involved in July, and will get you to commit to yearly target numbers and objectives during the event.

To top it off, August is a dead month in France, where almost every business is closed or slowed to a crawl. That means that by the time you get to September, your year will start for good, but Microsoft will already be closing its first quarter!


Practical advice

Remember to wear comfortable shoes, as you will walk and stand almost all day long. Still in the clothing department, bring a jacket/sweater, as the A/C is very heavy in these parts. We had a session in a room set at 18°C when it was almost 30°C outside…

The pace of your week may really depend on the objectives you set with your PSE. Our first year was mostly meeting with Microsoft France staff. Next year may not be the same.

And obviously, be wise with your sleep and jetlag, those are very long days, especially when English is not your native language.


This year the event will be hosted in Washington DC, in July, and it has been rebranded Inspire.

I would not specially comment on the name, but anything sounds better than WPC 🙂


The first steps of your cloud trip

When I talk to customers who are already knowledgeable about the cloud but still have not started their trip, the main subject we discuss is: what is the first step to take to move into the cloud?

Usually at that point we all know about the cloud and its various flavors, on a personal level. I have already touched on the subject of how to start playing with the cloud as an individual. But it is not that easy to translate a personal journey and knowledge into a corporate view and strategy.

There are two major ways to plan that journey.

The first is: move everything, then transform.

The second is: pick the best target for each application, then transform and migrate as needed.


Lift and shift

I will touch quickly on the first path. It is quite simple to plan, if difficult to implement. The aim is to perform a full migration of your datacenter into the cloud, lift-and-shift style. This can be done in one shot or in multiple steps. But in the end you will have moved all of your infrastructure, mostly as it is, into the cloud. Then you start transforming your applications and workloads to take advantage of the capabilities offered by the cloud, in terms of IaaS, PaaS or SaaS offerings. The difficulty there, for me, is that not all workloads or applications are a good fit for the cloud.


Identify your application portfolio

Enter the second solution: tailor the migration to your applications. Because the application is what matters in the end, along with its impact on and use by the business. The question of how you virtualize, or which storage vendor to choose, is not relevant to your business.

In that case you will have to identify your whole application portfolio, and split it into four categories:

  1. Existing custom apps

Mostly business-critical applications.

  2. New business apps

Applications that you would like to build.

  3. Packaged apps

Bought off the shelf.

  4. Application operations

Everything else: low use, low criticality.

Application breakdown

And here is a breakdown of each category and what you should do with those applications.


For your business-critical applications, which you have built yourself or heavily customized, please leave those alone. They are usually heavily dependent on other systems and network-sensitive, and will not be the most relevant candidates. Keep those as they are, and maybe migrate them to the cloud when you change the application. Exceptions to that rule would be hardware that is at its end of life, or high-burst/high-performance computing, which could take advantage of the cloud solutions provided for those use cases.


The new business apps, which you want to build and develop: just build those cloud-native, and empower them with all the benefits of the cloud: scalability, mobility, global scale, fast deployment. Use as many PaaS components as you can, let the platform provider handle resiliency, servicing and management, and focus on your business outcome.


Packaged apps are the easiest ones: they are the apps that you buy off the shelf and use as they are. Find the SaaS version of those applications, and use it. The main examples are office apps (Office 365, G Apps), storage and sharing (Dropbox, Box, G. Drive, OneDrive) and email (Exchange, Gmail).


The largest part of your application portfolio will most likely fall under application operations. These cover most of your apps: all of your infrastructure tools, all of the low-use business applications. These applications can be moved to the cloud, and transformed along the way, without any impact on their use, while giving you a very good ROI at low risk. Some examples: ITSM/ITIL tooling (inventory, CMDB, ticketing), billing, HR tools, etc.

With that in hand, how do you expect to start?

Containers, Azure and Service Fabric


Today I will try to gather some explanations about containers, how they are implemented or used on Azure, and how this all relates to micro-services and Azure Service Fabric.

First let’s share some basic knowledge and definitions.


Containers in a nutshell

To make a very long story short, a container is a higher-level virtual machine. You just pack your application and its dependencies in it, and let it run.

The good thing about those is that you do not have to pack the whole underlying OS in there. This gives us lightweight packages, which could be around 50MB for a web server, for example. Originally, containers were designed to be stateless: you were supposed to keep permanent data out of them, and be able to spin up as many instances of your application as you wanted to run in parallel, without having to bother about data.

This is not completely true of most deployments. Today many containers are used as lightweight virtual machines, to run multiple identical services, each with its own instance.

For example, if you need a monitoring poller for each new customer you have, you might package it in a container and run one instance for each client, where you just configure that client’s specifics. It’s simple, modular and quick. The stateless-versus-stateful debate over containers is a long-standing one.
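As a sketch of that per-customer poller idea (the image layout and variable names are illustrative, not from any real product), the image is built once and each client gets its own instance, configured at run time:

```dockerfile
# Hypothetical per-customer monitoring poller: build once, run once per client.
FROM python:3-alpine
WORKDIR /app
COPY poller.py .
# Client-specific settings are injected at run time, not baked into the image.
ENV CUSTOMER_ID="" TARGET_URL="" POLL_INTERVAL="60"
CMD ["python", "poller.py"]
```

You would then run something like `docker run -e CUSTOMER_ID=acme -e TARGET_URL=https://acme.example.com poller` once per customer: same image, different configuration.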



Just like in virtualization, the game is mostly not about the container technology and its limits, but rather about the tools to orchestrate it all. VMware vCenter versus Microsoft SCVMM, anyone?

You may run containers manually on Linux or Windows, with some limitations, but the point is not to have a single OS instance running several services. The point is to have a framework where you can integrate a container and instantiate it without having to tinker with all the details: high availability, load balancing, registration into a catalog/registry, etc. The video below is very good at explaining that:

The Illustrated Children’s Guide to Kubernetes

There are several major orchestrators in the field today. Kubernetes is the open-sourced one from Google’s own datacenters. It has gained a lot of traction and is used in many production environments. DC/OS is the one based on Apache Mesos, which has been pushed a lot by Microsoft. It uses Marathon as the orchestration brick. And of course Docker has its own orchestrator: Docker Swarm. I will not go into comparing these, as there is a lot of content on that already.


Containers on Azure

As on-premises, there are at least two ways to run containers on Azure. The first is simply to spin up a virtual machine with a container engine (Docker Engine, Windows Server 2016, Hyper-V containers…) and start from scratch.

The easy way is to use Azure Container Service, which will do all the heavy lifting for you :

  • Create the needed controller VM(s) to run the orchestrator
  • Create the underlying network and services
  • Create the execution nodes which will run the containers
  • Install the chosen orchestrator

ACS is basically an automated Azure Resource Manager (ARM) deployment that sets all of this up for you.

You have a choice of the orchestrator you want to use, and the number of nodes, and voilà!
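To illustrate how little is left to do, at the time of writing an ACS cluster could be created with a couple of Azure CLI commands; the resource group, cluster name and region below are placeholders, and the exact flags may have evolved since:

```shell
# Create a resource group, then an ACS cluster with Kubernetes as orchestrator.
az group create --name myResourceGroup --location westeurope
az acs create --resource-group myResourceGroup \
              --name myContainerCluster \
              --orchestrator-type kubernetes \
              --generate-ssh-keys
```

Swap `kubernetes` for `dcos` or `swarm` to get one of the other orchestrators; the controller VMs, network and nodes are created for you either way.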


A bit anticlimactic, don’t you think? I have to disagree, somewhat. I do not think that the value of IT is in the installation and configuration of the various tools and frameworks, but rather in finding the right tool to help the business/users do their job. If someone automates the lengthy part of that installation, I’ll back them up!

A note on pricing for these solutions: you only pay for the storage and the IaaS VMs underlying the container infrastructure (controllers and nodes).



If you really do not want to handle the IaaS part of a container infrastructure, you can get a CaaS (Containers as a Service) offer from the Azure Marketplace. These solutions are priced specifically, with a platform cost (for the Docker Engines running on Azure) and a license cost for the product. With that you get all the nifty modules and support you want:

  • The control plane (Docker Universal Control Plane)
  • Docker Trusted Registry
  • Docker Engine
  • Support desk from Docker teams


Azure Service Fabric and Micro-services

I will not go deep into this subject; it deserves a whole post of its own. However, to complete the subject around containers, let me say a few things about Service Fabric.

Azure Service Fabric will be able to run containers, as a service, in the coming months.

The main target for Azure Service Fabric is more the development side of micro-services, in the sense that it is a way of segmenting the different functions and roles needed in an application, to render the architecture highly adaptable, resilient and scalable.

Mark Russinovich wrote a great article on that subject:



How to embrace Azure and the Cloud

For the last year, I have been meeting with customers and partners inside and outside the Microsoft ecosystem.

I have talked with friends that are involved, at different levels, with IT whether Dev or Ops.

I have been trying to explain what the public Cloud is, especially Azure, to many different people.

Of course, I have been using the same evolution charts we have all seen everywhere to illustrate my speech and explain where I believe we are headed.

What hit me while speaking with all these different people were the recurring themes: I want/need/must start on the public Cloud, but how? Where do I start?

And very recently I finally found the right analogy, the one that will put your mind at ease and allow you to relax and tackle the Cloud with some confidence. It has to do with someone reminding me of the Impostor’s syndrome.

The full MSDN library

Some of you will remember, as I do, the golden days when we had a TechNet/MSDN subscription and received every month the full catalog of Microsoft products, on an indecent number of CD-ROMs. I don’t know how you handled the amount of different products, but my approach was usually to fill the provided disc-book with the latest batch and leave it at that. Occasionally, I would need to deploy a product and would pull out the matching disc.

Did anyone ever try to grasp what all these products were and how to use them? I would venture to say that we certainly did not.

And yet, that is what some of us are trying to do with cloud services. We get an overview of the services, quickly skimming over the names and vague target of each service, and we go home. The next day, we are willing to try new things and enjoy the breadth of options that are now just a credit card away.

And we are stuck.

Let’s take an example, with what is named Cortana Analytics services. I chose this example because it is way out of my comfort zone, and I will not be tempted to go technical on you.

Here is what the overview looks like:

When you were on the receiving end of the speech/session/introduction about these services, it all made sense, right?

What about now, do you have the slightest idea of what you could really do with Azure Stream Analytics?

All right, I am also stuck, what now?

And this is where I will disappoint everyone. I do not have a magic recipe to solve that.

However, I might be able to give some pointers, or at least tell you how I try to sort that out for myself.

If you are lucky, you have access to an expert, architect or presales consultant who has a good understanding of one scope of services (hosting websites on Azure, Power BI, Big Data, etc.). In that case, you should talk to that person, discuss what customer cases have been published, and try to get some inspiration by describing your current plans/projects/issues. This seems better suited to businesses outside of IT, where you have a product and customers for it. Innovation around your product, using cloud services and components, will probably have a quick positive impact for your company, and then for you.

If you work for an IT company, whether an ISV, a consulting firm or a Managed Services Provider, things get more difficult. In that case, what we have found helpful was to run a multi-step proof of concept.

Start by gathering the innovation-minded people in your company. They might not be in your own organization, but can come from different teams and have different jobs. It does not matter, as long as they can help you brainstorm what kind of PoC you could start building.

Then… brainstorm! Discuss any idea of an application/solution that you could build, no matter how simple or useless.

Then you choose one of your candidates and start building it, using every cloud service you can around it. It can be a complex target solution that you build step by step, or a simple one that gets more complex or feature-rich over time.

We went for a simple but useless app that made use of some basic cloud components, in order to get the team to build some real know-how. When we had a skeleton application that did nothing, but was able to use the cloud services we wanted, we gathered again and discussed the next evolution, where the app was transformed into something useful for us.

Start small, and expand

What I really find attractive in this process is that it allowed us to start by just focusing on small technical bits, without being drowned in large-scale app issues and questions. For example, we wanted part of the app to show interaction with the internal CRM/HR systems. We focused on just one part of it, the competencies database, which we queried and then synchronized to our own Azure SQL database. This topic is not that wide or complex, but it allowed us to work on getting data from an outside source, Salesforce, and transforming it to fit another cloud service in Azure. With a bit of mind-stretching, if you look again at the Cortana Analytics diagram earlier in this article, you can fit the topic into the first two blocks: Information Management, and (Big) Data Store. Our first iteration just added a visualization step on top of that, in a web app we built for the purpose. But we also added authentication, based on Azure AD, as you do not hand this information to just anyone out there.
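To give an idea of what such a first iteration can look like, here is a hedged sketch of the sync step. The field names and the `fetch`/`load` stubs are purely illustrative, not our actual code (the real version would use the Salesforce and Azure SQL client libraries); the heart of it is a small, testable transformation between the two schemas.

```python
# Sketch of a minimal sync step: pull records from a source CRM,
# reshape them to fit the target database schema.
# fetch() and load() are stubs standing in for the real Salesforce
# and Azure SQL clients.

def fetch():
    # Placeholder for a Salesforce query returning raw records.
    return [
        {"Name": "Alice", "Skill__c": "Azure IaaS", "Level__c": "3"},
        {"Name": "Bob", "Skill__c": "Machine Learning", "Level__c": "1"},
    ]

def transform(records):
    """Map CRM fields to the columns of a competencies table."""
    return [
        {"person": r["Name"], "skill": r["Skill__c"], "level": int(r["Level__c"])}
        for r in records
    ]

def load(rows):
    # Placeholder for the INSERTs into the Azure SQL database.
    for row in rows:
        print(row)

if __name__ == "__main__":
    load(transform(fetch()))
```

Keeping the transformation as a pure function is what makes this kind of step small enough to build and test in a first PoC iteration, before any real credentials or infrastructure exist.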

Once you are done with your first hands-on experience, start the next iteration: build on what you learned and expand. Do not hesitate to go for something completely different; we discarded 90% of our first step when we started the second. Don’t forget, the point is to learn, not necessarily to deliver!

Originally published here :

I know Kung-Fu

Almost everyone who has seen The Matrix remembers that scene. Neo, played by Keanu Reeves, has just spent the day learning martial arts through some brain-writing sci-fi process. His mentor comes in at the end of one of these “lessons”; Neo opens his eyes and says “I know Kung-Fu”.

Learning is difficult

Of course, learning is not that easy in real life: it takes a certain amount of time, long hours of work and practice. And it probably never ends. Take my current favorite subject, the cloud. To be precise, I should say public cloud services on Azure. The scope of what those services cover is extremely wide, and some of them are so specific that they need a specialist to deep-dive into them.

It can be overwhelming. If you work in this field, or a similar one, you may already have had that feeling that you will never get to the bottom of things, the impression that you can never master the domain, because it keeps evolving. To be honest, it is probably true. There are probably thousands of people working to broaden and deepen cloud services every day, and there is, probably, only one of you (or me).

For the last 15 months, I have been trying to learn as much as possible about Azure services, in any field possible, from IaaS networking to Machine Learning, from Service Bus Relay to Logic Apps. And after all that time and numerous talks, webcasts, seminars and data camps, I almost always ended up thinking: “OK, I think I understand how these services work. I could probably do a demo similar to what I have just watched. But how can I use them in a real-life scenario?”

And last week, thanks to a very dedicated person, I finally found some insight.

Meet the expert

Allow me to set the stage. We were invited to an Azure Data Camp by Microsoft. The aim of these 3 days was to teach us as much as possible about Azure Data Services (the Cortana Intelligence Suite). The team was amazing, knowledgeable and open, the organization perfect, the attendees very curious and full of questions and scenarios that we could relate to. Overall these 3 days were amazing. However, the technical scope was so wide and deep that we covered some very complex components in under an hour, which, even with the help of night-time labs, was too fast to process and absorb. It left me with the usual feeling: I could probably talk a bit about these components or areas, but my knowledge felt far from operational, not even at business-presales level. And I am supposed to be an architect, to have all this knowledge and be able to create and design Azure solutions to solve business needs.

So, after two days, that was the stage. Then came in one of the trainers/specialists. I will tell you a bit more about him later on; just do not call him a data scientist. His area of expertise, as far as we are concerned, covers the whole Cortana Suite, with an angle that I would qualify as data analysis. He had already taken the stage earlier, to explain to us what the methodology for handling data was, and how every step related to Cortana Suite services. He even gave this speech on multiple occasions. Every time we heard and read it, it made sense, it was useful and relevant.

So, Chris started his part by showing us the same diagram and asking us “Are you comfortable with that?”, followed by a deep, uneasy silence. My own feeling was that I did understand the process, but did not feel able to apply it or even explain it. I see several reasons for that. The first is that data analysis is far from my comfort zone. I am an IT infrastructure guy; I know virtualization, SAN, networking. I have touched the Azure PaaS services around these topics, and extended to some IoT matters. The second is that we did not have time that week to let the acquired knowledge settle and be absorbed. Admittedly, I could have spent more hours in the evening rehearsing what we had learned during the day, but we were in London, and I couldn’t miss that. And the last is that I feel we are getting so used to talks and presentations about subjects we just float on the surface of, that we are numb and do not dive too deeply into them, probably out of fear. Fear of realizing that we are out of our depth. Impostor’s syndrome, anyone?

Enter the “I know kung fu!” moment

Because that was how we felt: having been force-fed a lot of knowledge, but having never really thought about it, or even used it. We felt knowledgeable, until someone asked us to prove it.

Remember what happened next in the Matrix? Neo’s mentor, Morpheus, asks him “Show me”. And kicks his ass, more or less. But still manages to get him to learn something more.

Chris did that to us. He realized that, under the varnish of knowledge, we were actually feeling lost. He then spent 45 minutes explaining the approach, and finally gave us a simplified scenario, which felt familiar to those of us who had studied for previous design certification exams. And he asked us which services we would use in that case, what the key words were, and so on.

And magically made us realize that we could indeed use our newfound knowledge to design data analytics solutions based on Cortana Intelligence Suite.

It might seem obvious that examples and scenarios are an excellent way to teach. Don’t get me wrong, we had tons of those during these days; we actually spent a fair amount of time that evening with Chris and part of the team and attendees discussing just that. The trick was to deliver the scenario at exactly the right moment: when we felt lost, but had the tools to understand and analyze it.

My point, to make that long story short, is: it’s OK to feel drowned by the extent of available cloud services and options. We all do. Depending on your job, you may be the go-to person for cloud topics, an architect trying to be familiar with almost everything in order to know when to use a particular tool, or merely someone trying to wrap your mind around the cloud. In any case, you just have to find the time to get some hands-on practice, or read/watch a business case that matches something you are familiar with. This way you can see how the shiny new thing you’ve learnt is put to use.

And you will be able to say “I know kung fu”.

Useful links:

Chris Testa-O’Neill’s blog:

Originally published here:

DevOps, NoOps and No Future

In the wake of the recent MongoDB happy hour debacle, there have been a few mentions of DevOps and NoOps. The pieces were mostly about the fact that this incident proved that the IT business is not really in full DevOps mode, not to mention NoOps. I am not confident that NoOps will be the future for a vast majority of shops. Being from the Ops side of things, I am obviously biased against anyone stating that NoOps is the future, because that would mean no job left for me and my comrades in arms. But let me explain 🙂

I would like to be a bit more thorough than usual and explain what I see there, in terms of practices and trends.


First, let me set the stage and define what I mean by DevOps and NoOps.

At its simplest, DevOps means that Dev teams and Ops teams have to cooperate daily to ensure that they both get what they are responsible for: functionalities for Dev, and stability for Ops. A quick reminder though: business is the main driver, above all. This implies that both teams have to work together and define processes and tooling that enable fast and controlled deployment, accurate testing and monitoring.

We could go deeper into DevOps, but that is not the point here. Of course, Ops teams should learn a thing or two from Scrum or any agile methodology. On the other hand, Dev teams should at least grasp the bare minimum of ITIL or ITSM.

What I imagine NoOps would be is the next step of DevOps, where the dev team is able to design, deploy and run the application without the need for an Ops team. I do not find that realistic for now, but I’ll come back to this point later.

How are DevOps, and the cloud, influencing our processes and organizations?

In my few years of experience I have worked in several managed-services contexts and environments, where sometimes Dev and Ops were very close, and sometimes completely walled off. On the Ops side, the main driver for DevOps, usually linked to cloud adoption, is automation. Nothing new here, you’ve read about it already. But there are several kinds of automation, and the main ones are automated deployment and automated incident recovery.

The second kind has a deep impact, in the long term, on how I’ve seen IT support organizations and their processes evolve. Most of the time, when you ask your support desk to handle an incident, they have to follow a written procedure, step by step. The logical progression is to automate these steps, either by scripting them or by using an IT automation tool (Rundeck, Azure Automation, PowerShell, etc.). You may want to keep the decision to apply the procedure human-based, but that is not always necessary: many incidents can be resolved automatically by directly applying a correctly written script.
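To make that concrete, here is a minimal sketch of such an automated recovery procedure, in Python. The probe and restart actions are deliberately abstract and entirely illustrative (not tied to Rundeck, Azure Automation or any real tool), but the same logic applies whatever executes the steps:

```python
from typing import Callable

# Illustrative sketch: a written recovery procedure turned into code.
# The probe and restart callables stand in for whatever your real
# monitoring check and documented remediation steps are.

def recover(
    service: str,
    probe: Callable[[str], bool],    # returns True if the service is healthy
    restart: Callable[[str], None],  # applies the documented recovery steps
    human_approval: bool = False,    # keep the go/no-go decision human-based?
) -> str:
    """Run the recovery procedure for one service and report what was done."""
    if probe(service):
        return "healthy"
    if human_approval:
        # Do not act automatically: raise a ticket for the support desk instead.
        return "escalated"
    restart(service)
    return "restarted" if probe(service) else "failed"

# Usage with fake probe/restart actions standing in for real checks:
state = {"web": False}
print(recover("web",
              probe=lambda s: state[s],
              restart=lambda s: state.update({s: True})))  # prints "restarted"
```

The `human_approval` flag is the point made above: the same procedure can run fully automatically, or stop at the decision step and escalate to a human.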

If you associate that with the expanding use of PaaS services, which removes most of the monitoring and management tasks, you get a new trend that has already been partly identified in a study:

If you combine PaaS services, which remove most of the usual L2 troubleshooting, with automated incident recovery, you get what I try to convince my customers to buy these days: 24*7 managed services relying only on L1.

Let me explain a little more with an example based on a standard public cloud project. Most of the time it is an application or platform which the customer will develop and maintain, and the customer does not have the resources to organize 24*7 monitoring and management of the solution. What we can build together is a solution like this:

• We identify all known issues where the support team is able to intervene and apply a recovery procedure

• Obviously, my automation genes will kick in and we automate all the recovery procedures

• All other issues will usually fall into three categories:

○ Cloud platform issues, escalated to the provider

○ Basic OS/VM issues, which should either be automated or solved by the L1 support team (or removed by the use of PaaS services)

○ Customer software unknown issue/bug which will have to be escalated to the dev team
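The triage above can be sketched as a simple routing function. Everything here is illustrative: the known-issues set and the category labels are made up for the example, not taken from any real ticketing system:

```python
# Illustrative incident triage, mirroring the categories above.
KNOWN_ISSUES = {"app-pool-hung", "temp-disk-full"}  # documented, auto-recoverable

def route(issue: str, layer: str) -> str:
    """Return who (or what) should handle an incident."""
    if issue in KNOWN_ISSUES:
        return "automated recovery"        # scripted procedure, no human needed
    if layer == "cloud-platform":
        return "escalate to provider"      # the cloud platform's problem
    if layer == "os-vm":
        return "L1 support"                # or automate it / move to PaaS
    return "escalate to dev team"          # unknown application issue or bug

print(route("app-pool-hung", "application"))  # prints "automated recovery"
print(route("unknown-crash", "application"))  # prints "escalate to dev team"
```

Once the first branch is automated and the middle two are escalations to third parties, the only work left for a 24*7 team is dispatching, which is exactly the L1-only model described above.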

I am sure you now see my point : once you remove the automated recovery, and the incidents where the Support Team has to escalate to a third party… nothing remains!

And that is how you can remove 24*7 coverage for your L2/L3 teams while providing better service to your customers and users. Remember that one of the benefits of automated recovery is that it is guaranteed to be applied exactly the same way every time.

A word about your teams

We have experienced this fading out of traditional Level 2 support teams firsthand. These teams are usually the most difficult to recruit and retain, as they need to be savvy enough to handle themselves in very different situations and to agree to work in shifts. As soon as they get experienced enough, they want to move to L3/consulting and to daytime jobs.

The good thing is that you will no longer have to worry about staffing and retaining these teams.

The better thing is that these are usually people with lots of potential, so just train them a bit more, involve them in automating their own work, and they will be happier and will probably move on to implementation or consulting roles.

How about you, how is your organization coping with these cloud trends and their impact?

Why I love working on IT & the cloud

I remember when I started working full time in IT: all the young professionals were employed by large contractors and consulting firms. The word then was “please help me find a job with a customer/end-user!”. When I recruit today, mostly people a bit younger than me, the word has shifted to “I love working for a contractor, as it does not lock me into one function”.

OK, I thought about that early today and wanted to write it down somewhere, so I used it as an intro, to show off my deep thinking in the wee hours of the morning.

However, what I wanted to write about more extensively is how much I love working in IT today, particularly on cloud solutions, and how gratifying it is compared to what we experienced a few years back.

Technology-centric, and a support function

Not so long ago, IT was a support function, supposed to keep the hassle of computers to a bare minimum. When interacting with our customers and users, the main issues and questions were about how we kept printers running and emails flowing. If you worked on an ERP or any management system, same thing: please keep that running so that we can do our jobs. For years, we had team members who loved technology, who delved deep into configuration and setups so that we could congratulate ourselves on building shiny new infrastructures, trying to keep up with users’ demand.

I will keep the example to my own situation. I went through technological phases, from Windows 2000 Active Directory to Cisco networking, to virtualization, to SAN storage and blade servers, ending up on hyper-converged systems. For years I would generally not talk shop with friends, family or even friends from school (I went to a mixed business/engineering school, so that could explain things). I did not see the point in digging into technical topics with people from outside my “technological comfort zone”.

Don’t misunderstand the situation: I was aware that the IT department was trying to shift its role from support function to business enabler, but it felt a bit far-fetched to me. Then came the public cloud…

Business-centric, and a solution provider

At first we had a simplistic and limited public cloud (hello 2010!), and a private cloud which was just virtualization with a layer of self-service and automation. I could begin to see the point, but still… it was a technologist’s dream of being able to remove a large portion of our day-to-day routine.

The situation evolved to a point where we had real PaaS and SaaS offerings that could solve complex technical problems with a few clicks (or command lines, don’t throw your penguin at me!). And I started to talk with my customers about how we could help them build new solutions for their business and give them a better quality of service, and to have them understand me!

Of course some of that is linked to my experience, and to the fact that I am not in the same role as I was 10 years ago, but still. I now enjoy discussing with my former schoolmates and helping them figure out a solution to a business issue, being able to help a friend’s business grow and expand.

IT can now be a real solutions provider. We have to work at gaining sufficient knowledge of all the cloud building blocks to be able to build the house our business does not yet know it needs.

Originally published here: