Kubernetes, the hard way, AZCLI style

Finally a tech post!

I have been busy this week with command lines and Kubernetes.

The starting point was the recent announcement of Azure Container Instances and the related Kubernetes connector : https://github.com/azure/aci-connector-k8s

I admit I did try what Corey Sanders showed on his show : https://channel9.msdn.com/Shows/Tuesdays-With-Corey/Tuesdays-with-Corey-Azure-Container-Instances-with-WINDOWS-containers. However, what I found really interesting and wanted to try was the ACI connector for Kubernetes, and how we would work with it.

Of course we have a test Kubernetes cluster here, that someone from our team built, but it felt too easy to just add the connector. Also, I am not yet comfortable with Kubernetes, and I wanted to get my hands dirty and learn more about the inner workings of a k8s cluster.

I remembered a quote from the Geek Whisperers’ show featuring Kelsey Hightower. He said that he had written a guide to building a K8s cluster from the ground up, without any shortcuts. The guide is available here : https://github.com/kelseyhightower/kubernetes-the-hard-way

The downside is that the guide is aimed at Google Cloud Platform, and I am an Azure guy.

And there it was, my pet project for this week : adapt the guide to Azure, using only Azure CLI commands!
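
To give a flavor of it, here is a minimal sketch of the kind of Azure CLI commands the adaptation is built from. The resource names, region and address ranges are illustrative, not the exact ones from my repository :

    # Resource group to hold the whole lab
    az group create --name kthw-rg --location westeurope

    # Virtual network and subnet for the cluster nodes
    az network vnet create \
      --resource-group kthw-rg \
      --name kubernetes-vnet \
      --address-prefix 10.240.0.0/16 \
      --subnet-name kubernetes-subnet \
      --subnet-prefix 10.240.0.0/24

    # One of the controller VMs (repeated for each controller and worker)
    az vm create \
      --resource-group kthw-rg \
      --name controller-0 \
      --image UbuntuLTS \
      --vnet-name kubernetes-vnet \
      --subnet kubernetes-subnet \
      --private-ip-address 10.240.0.10 \
      --admin-username kuberoot \
      --generate-ssh-keys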

There was one final trick for me to learn : storing and sharing all of that on GitHub. As I had never had to work with Git by myself, it was also a good way to learn the moves.
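
For the record, the handful of Git moves that covered most of my workflow (the file name below is just an example) :

    git clone https://github.com/frederimandin/Kubernetes-the-azcli-way.git
    cd Kubernetes-the-azcli-way
    # ...edit a chapter of the guide, then :
    git add docs/03-compute-resources.md
    git commit -m "Adapt the compute resources chapter to Azure CLI"
    git push origin master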

So, lots of new stuff learnt :

  • Creating a K8s cluster from scratch
  • GitHub and Git
  • Making progress with Azure CLI
  • A good refresher on Azure infrastructure

The project is hosted here : https://github.com/frederimandin/Kubernetes-the-azcli-way

There are several next steps to work on :

  • Integrating properly with Kelsey’s guide
  • Testing my own guide again
  • Adding the ACI connector to my cluster and playing with it (and writing about it, of course!); a first sketch of that step is just below
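
For that last step, based on the connector’s README at the time, the deployment should be roughly this simple. Treat it as a sketch : the manifest and the node name may differ depending on the connector version :

    # Deploy the ACI connector into the existing cluster
    kubectl apply -f aci-connector.yaml

    # The connector should then show up as an extra node
    kubectl get nodes

    # Pods scheduled on that node run as Azure Container Instances
    kubectl describe node aci-connector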

I’ll keep you posted, of course!

Inspire ’17

We are almost halfway through the first quarter of Microsoft’s financial year, a month after the partner convention, which has been rebranded “Inspire”.

Now that I am not a newbie any more, I can step back a bit and see past the awe of the first event.

The setting this year was Washington DC, which is a great place for this kind of event. There are many hotels nearby, the city center is small enough to walk around, and there are many chic places for the evenings.

This is not a travel blog, so I will not go further into the tourism information.

This year we had decided, with our PSE, to have a lighter Microsoft agenda, and to be able to attend more sessions and impromptu meetings. I have to say that it was a wise choice. It allowed us to make new connections, to network quietly and to enjoy the Expo and the other partners. Note that I found it way easier to network this time, as our company was better known in the ecosystem, and we also had a better knowledge of the various people, names and acronyms used throughout Microsoft.

This year I was able to attend several sessions, in different formats : roundtables, breakouts, demo theater, workshops and of course keynotes. The content was really good, though it is definitely not a technical event.

The best way to have a technical discussion is to go to the Microsoft pods with a specific subject in mind and ask for an expert on that matter. Also these pods provide good help and advice on how to build or develop your business along the current track or toward a brand new scope (yes GDPR was a recurrent topic, I’ll write separately about that later on).

I have met many amazing partners and vendors, through the social events or at their booths, and we have started to build new relationships that will hopefully help develop our business and knowledge.

Once again, it is an event where you have to be prepared, and be prepared to change your plans.

First, you need to have an idea of your goal beforehand. Do you want to find new partners within the ecosystem? Would you rather gain some traction or visibility in that ecosystem, both from Microsoft and from the other partners? Are you open to new business opportunities? Are you here to listen to the keynote and get a feeling of what is coming in the near future?

Then, you need to build your agenda around that goal : sessions, meetings, events etc. But do remember to leave some room to continue a discussion with an unexpected partner, or to skip a session and catch the recording later, because something else popped up.

And mostly, have fun 🙂

My journey to the cloud

I may have skimmed over that subject a few times before, but as I get to the end of the (Microsoft) year and begin a new one, it feels right to reflect for a while on what got me where I am now.

The short version is : I had had enough of cabling, servers, storage and operating systems, and wanted to move on to something else, though still related. Okay, that is VERY short. Allow me to develop it further.

I started working in IT about 15 years ago. I did my duties in user support, then moved to network engineering and implementation. At the same time, I discovered the wonderful world of Microsoft training and certification, and got my first cert around 2003, quickly followed by an MCSE (yes, on Windows 2000!).

I switched back and forth between networking and systems engineering for several customers. I collected some knowledge along the way, mainly about hardware installation, cabling, storage and servers, but also about virtualization, networking and SAN. I continued my certification journey in parallel, maintaining my MCSE up to Windows 2016 and Azure. I also collected a few other certs : ITIL, Red Hat RHCE (6 & 7), VMware VCP & VCAP-DCD, PRINCE2 etc. I will say more about certification in a later article, keep in touch!

To complete the brush-up, I tried my hand at project management, as well as people management.

Let’s get to the point where it gets interesting. The first time I heard about public cloud was at TechEd Europe, probably in 2010. The offer was mostly limited to SQL Server databases, with many limitations. It was not really a hit for me. The subject kept reappearing : public cloud, private cloud, elastic computing, you’ve heard the talk.

There were actually two triggers to my “Frederi, meet Cloud” moment.

The first one was rather a long-term evolution of my area of interest. After years spent working with the same company, and on the same software, I got to the point where I could understand the business side of my actions and responsibilities for our customers. It was a slow shift to a more end-user/application-centric approach. This is what I try to push today : the major focus and metric is the end user. If this user is not happy with his experience, then we (the whole team behind the software, from IT infrastructure to developers and designers) have failed. This is why I tend to ask the question early in the discussions : how is the application used? By whom?

The second trigger was more of an “a-ha” moment, specifically about public cloud. In a previous job, I was in an outsourcing team, focused on infrastructure. We had a whole Services department, whose job was to design, build and deliver custom software. We almost never had a project in common. Until one day we had a developer on the phone, and we had the most classic conversation between dev and ops :

Dev : “we have built a php application for that customer, and he wants to know if we can host and operate it, and what the cost would be”

Ops (me) : “OK, tell me your exact needs : OS, VM size, which web server, which version, how much disk space, a public IP, etc.?”

Dev : “I do not know that”

Ops : “In that case, I cannot give you an estimate. We can operate it, but we need to know what.”

There followed a few days of emails trying to get those details ironed out and to write a proposal. Two weeks later, we had the same dev on the phone : “Drop it, the customer has already deployed it in Azure by himself”.

That is when I realized that we, ops and infra, could not stay on the defensive and only ask about what we knew best. We had to ask about the application itself, and we had to get into that “Azure” stuff.

And that’s how I ended up in Azure, and mostly PaaS oriented 😉

Choosing between IaaS, PaaS, SaaS (and something else?)

 

I know, there are tons of materials and trainings that will explain to you how to choose between SaaS and custom software.

I’ll summarize their usual points, but I wanted to add some details on how you might have to look at the full scope of cloud services : from IaaS, through PaaS, to SaaS, with a detour through containers.

 

First, the usual discussion, which I have seen unfold dozens of times : why choose SaaS over a custom/on-premises solution? You know the drill, right?

On one side, you have full control and can customize the solution. This means the software will be tailored to your exact needs, and you will control exactly what is done with it, how it is updated, where data is stored, accessed, replicated, backed up etc. You will know the exact setup of the deployment, which layer is connected to which other layer, and how, where traffic goes, how each layer is protected, and replicated. You will handle failover, high availability etc. In a few words : you will be the master of your own kingdom. The problem with that path : you are, mostly, on your own. All of the domains I just listed are your responsibility, and you have to have the knowledge and skills to handle them. You might need to expand those skills to cover 24/7. You’ll need a strong IT team, in addition to a trained software team.

On the other side, you have SaaS : brand new, quick and easy. You set it up in a flash, connect the solution to your other enterprise software, create user accounts and voilà! No administrative overhead; the only skill you have to master is the configuration of the solution. You’ve seen the downside coming : you have absolutely no control over the software, its release cycle, or the mechanisms in place to provide high availability. Sometimes you have some control over your data, but it’s not a given.

In the end it’s your call to choose the balance you need.

The cloud embeds the same choices and trade-offs. You will have to decide whether you want to use IaaS, PaaS or SaaS. The basic triggers are the same : you choose the right balance between control, freedom and responsibility.

A good explanation can be found here : https://docs.microsoft.com/fr-fr/azure/app-service-web/choose-web-site-cloud-service-vm

I would like to add something to that horizon, something spicier, which could probably give you the best of each solution, provided you are ready to learn some new skills. We have had the same discussion several times with our customers, revolving around the limitations of Azure App Service for some Java applications, its lack of control, and how moving from that to full-blown IaaS virtual machines felt like dropping out of the cloud.

Here is what we built with some of those customers. We wanted to provide them with the flexibility and ease of use of Azure App Service, tailored to their needs, without adding much IT admin overhead. We had already been running a Kubernetes cluster for our own internal needs for a while, and it was an easy leap to suggest that solution.

Kubernetes is becoming the leader in container orchestration, but you could choose any other solution (DC/OS, Swarm etc.).

Here is a short list of the benefits the customer gained from that solution :

  • Flexibility of the deployment and settings of the application, down to every Java VM option (see the sketch after this list)
  • Scalability of an enterprise-ready container orchestrator, running on a reliable cloud platform
  • Ease of deployment : these are containers after all!
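
As an illustration of that first point, here is a minimal sketch, with a recent kubectl and a made-up image name, of how a Java container can be deployed, tuned and scaled in a few commands :

    # Deploy the containerized Java application (image name is an example)
    kubectl create deployment billing-api --image=myregistry.azurecr.io/billing-api:1.0

    # Tune the JVM exactly as needed, down to every option
    kubectl set env deployment/billing-api JAVA_OPTS="-Xms512m -Xmx2g -XX:+UseG1GC"

    # Scale out in one command
    kubectl scale deployment/billing-api --replicas=5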

 

The only thing you have to keep in mind here is that someone has to learn and master containers and the orchestration layer for those. Kubernetes might not be the most accessible solution here, but it is, in my mind, the most mature and powerful.

 

One last word, for the sceptics who still believe that Microsoft and open source are far apart : try making a new build of your software for containers using Visual Studio :

 

https://blogs.msdn.microsoft.com/jcorioland/2016/08/19/build-push-and-run-docker-images-with-visual-studio-team-services/

Voice control and security

I will assume that I am definitely not the first one to write about that, but I feel the need to write anyway.

We saw during a few recent events that our new beloved always-listening devices can interpret an order from almost anyone (Someone ordered a Whopper? Burger King: OK Google!).

It seems trivial and a bit childish, but when you start integrating many services into a system like that, you may have to think about security.

This can happen at different levels : from limiting commands to voice-print recognition.

https://xkcd.com/1807/

The first issue that comes to mind, related to several recent events, is that you may want to include some kind of limitation on your Google, Alexa or whatever voice-activated device you are using, just to prevent anyone within listening range from ordering everything from your favorite shopping website. This is pretty simple : you may set up your system to only load your shopping cart, and wait for your physical confirmation on your phone/laptop.

Second, you may want to rethink the voice activation of many devices. It has been proven that this can be hacked quite easily (http://www.zdnet.com/article/how-to-hack-mobile-devices-using-youtube-videos/). The good thing is that you can limit what can be done on your phone without providing an unlock code. It depends on your phone, but you should at least check that out.

Then come the issues that are not really solved, at least on publicly sold devices. There are two that I can think of right now : voice printing, and content securing.

Voice printing would allow your devices to recognize your voice only, so that no one else can use your device. Pretty simple, and some applications are already providing that, in a limited way. I know, it goes a bit against the current flow of speech recognition, which has been improved up to the point where it does not need to be trained any more. If anyone remembers having to go through hundreds of sentences with Dragon Dictate…

Content securing is the other end of the scope : how do you make sure that some content cannot be spoken out loud from your device when it is private? “Siri, how much is there in my bank account?” You might not want Siri to tell the amount out loud on the bus, right? I agree, you should not ask the question if you do not want to hear the answer, but still, being able to limit some outputs might provide additional security.

I have been told about a device designed to provide privacy for your conversations and voice commands : HushMe. I have to admit I am a bit puzzled by it : http://gethushme.com

I am not sure whether it is a real solution, and a viable one 🙂

Sharding your data, and protecting it

I am quite certain that there are many articles, posts and even books already written on that subject.
To be honest, I did not search for any of those. For some reason, I had to figure out sharding almost by myself while building a customer design.
So this post will just be my way of walking through the process, and confirm that I can explain it again. If someone finds this useful, I will be happy 🙂

Here is the information I started with. We wanted to build an application that uses a database. In our case, we chose DocumentDB, but the technology itself is irrelevant. The pain point was that we wanted to be able to expand the application worldwide, while keeping a single data set for all users, wherever they were living or connecting from.
That meant finding a way of having a local copy of the data, writable, in every location we needed.

Having a readable replica of a database is quite standard. You may even be able to get multiple replicas of this kind.
Having a writable replica is not very standard, and certainly not a simple operation to set up.
Having multiple writable replicas… let’s say that even with reading the official guide from Microsoft (https://docs.microsoft.com/fr-fr/azure/cosmos-db/multi-region-writers) it took us a while to fully understand.

As I said, we chose to use DocumentDB, which already provides the creation of a readable replica in a few clicks.
This is not enough, as we need a locally writable database. But we also need to be able to read data that is written from the other locations. What we can start with is creating a multi-way replica set.
We could have a writable database in each of our three locations, with a readable copy in each of the other two regions.
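
With today’s Azure CLI (the commands have been renamed since the DocumentDB days, and the flag syntax has changed across versions), one of those three accounts would look roughly like this, with the write region first and the two readable copies behind it. Account and region names are examples :

    # Account writable in West Europe, readable from the two other regions.
    # The same command is repeated for the two other accounts,
    # rotating the region that gets failoverPriority=0.
    az cosmosdb create \
      --resource-group my-rg \
      --name orders-westeurope \
      --locations regionName=westeurope failoverPriority=0 isZoneRedundant=False \
      --locations regionName=eastus failoverPriority=1 isZoneRedundant=False \
      --locations regionName=southeastasia failoverPriority=2 isZoneRedundant=False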

And that is where you have to realize that your database design is done.
Have a closer look at that design. A very close look. And think about our prerequisites : we need a locally writable database, check. We need to read data written from the other locations, check. We do not need to solve the last step with database mechanisms.

The final step is handled in the application itself. The app needs to write into its local database to maximise performance and limit data transfer costs between geographically distant regions. This data will then be replicated, with a small delay, to the other regions. And when the application needs to read data, it will query the three data sets available in its region, and consolidate the data from all three into a single view.

And there it is. Now tell me whether I was a bit thick not to understand that from the Microsoft guide at first read, or please tell me that I am not alone in having struggled a bit with this design!

Note : this issue, at least for DocumentDB on Azure, has since been solved by the introduction of CosmosDB, which provides multiple writable replicas of a database, a click away.
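
With the current CLI, turning that on is indeed close to a single flag (account name is an example) :

    # Enable multiple writable replicas on an existing account
    az cosmosdb update \
      --resource-group my-rg \
      --name orders-westeurope \
      --enable-multiple-write-locations true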

PaaS and Managed Services

If you know me, or have read some of my previous articles, you will know that I am a big fan of PaaS services.

They provide an easy way for architects and developers to design and build complex applications, without having to spend a lot of time and resources on components that can be used out of the box. And they relieve us IT admins from having to manage lower-level components and irrelevant questions. These questions are the ones that led me to switch my focus to cloud platforms a few years ago. One day I’ll write an article on my personal journey 🙂

Anyway, my subject today concerns the later stages of the application lifecycle. Let’s say we have designed and built a truly modern app, using only PaaS services. To be concrete, here is a possible design.

 

I will not dig into this design, as that is not my point today.

My point is : now that it is running in production, how do you manage and monitor the application and its components?

I mean from a Managed Services Provider perspective, what do you expect of me?

I recently heard an approach that I did not agree with, but that had its benefits. I will start with that one, and then share my own approach.

The careful position

What I heard was a counterpoint to the official Microsoft standpoint, which is “we take care of the PaaS components, just write your code properly and run it”. I may have twisted the words here… The customer’s position was then : “we want to monitor that the PaaS components are indeed running, and that they meet their respective SLAs. And we want to handle security, from code scanning to intrusion detection”.

This vision is both heavy and light on the IT team. The infrastructure monitoring is quite easy to define and build : you just have to read the SLA of each component and find the best probe to check it. Nothing very fancy here.

The security part is more complicated as it requires you to be able to handle vulnerability scanning, including code scanning, which is more often a developer skill, and also vulnerability watching.

This vulnerability scanning and intrusion detection part is difficult, as you are using shared infrastructure in Azure datacenters, and you are not allowed to run this kind of tool there. I will write a more complete article on what we can do on this front, and how, sometime this year.

Then comes the remediation process that will need to be defined, including the emergency iteration, as you will have some emergencies to handle on the security front.

The application-centric position

My usual approach is somewhat different. I tend to work with our customers to focus on the application, from an end-user perspective. Does that user care that your cloud provider did not meet the SLA regarding the Service Bus you are using? Probably not. However, he will call when the application is slow or not working at all, or when he experiences a situation that he thinks is unexpected. What we focus on is finding out which metrics to monitor on each PaaS component that say something meaningful about the application behavior. And if the standard metrics are not sufficient, then we work on writing new ones, or composites, that let us know whether everything is running smoothly or not.
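
To make that concrete, here is a sketch of what such an application-level alert can look like with the Azure CLI. The resource, metric name, threshold and action group are illustrative and depend on the actual components :

    # Alert when the web front-end gets slow, regardless of what the
    # underlying PaaS components report about their own SLAs
    az monitor metrics alert create \
      --name webapp-slow-responses \
      --resource-group my-rg \
      --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-webapp" \
      --condition "avg HttpResponseTime > 3" \
      --window-size 5m \
      --evaluation-frequency 1m \
      --action my-ops-action-group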

The next step would be, if you have the necessary time and resources, to build a Machine Learning solution that reads the data from each of the components (PaaS and code) and is able to determine that an issue is about to arise.

In that approach we do not focus on the cloud provider SLAs. We will know from our monitoring that a component is not working, and work with the provider to solve that, but it’s not the focus. We also assume that the application owners already have code scanning in place. At least we suggest that they should have it.

Monitoring and alerting

Today is another rant day or, to put it politely, a clarification that needs to be made.

As you probably know by now, I’m an infra/Ops guy, so monitoring has always been our core interest and tooling.

There are many tools out there, some dating back to pre-cloud era, some brand new and cloud oriented, some focused on the application, some on the infrastructure. And with some tuning, you can always find the right one for you.

But beware of a fundamental misunderstanding, that is very common : monitoring is not alerting, and vice-versa.

Let me explain a bit. Monitoring is the action of gathering some information about the value of a probe. This probe can measure anything, from CPU load to an application return code. Monitoring will then store this data and give you the ability to graph/query/display/export that.

Alerting is one of the possible actions taken when a probe reaches a defined value. The alert can be an email sent to your Ops team when a certain CPU reaches 80%, or it could be a notification on your iPhone when your spouse gets within 50m of your home.

Of course, most tools have both abilities, but that does not mean you need to mix them and set up alerting for every probe you have set up.

My regular use case is a cloud-based IoT solution. We would manage the cloud infrastructure backing the IoT devices and application. In that case we would usually have a minimum of two alerting probes : the number of live connected devices, and the response time of the cloud infrastructure (based on an application scenario).

And that would be it for alerting, in a perfect world. Yes, we would have many statistics and probes gathering information about the state of the cloud components (web applications, databases, load balancers etc.). And these would make nice and pretty graphs, and provide data for analytics. But in the end, who cares if the CPU on one instance of the web app reaches 80%? As long as the response time is still acceptable and there is no abnormal variation in the number of connected devices, everything is fine.
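
As a toy illustration of keeping alerting down to those two probes, something like this would be the entire alerting logic; the URLs, thresholds and device-count endpoint are entirely made up :

    #!/bin/bash
    # Probe 1 : end-to-end response time of the application scenario
    RESPONSE_TIME=$(curl -o /dev/null -s -w '%{time_total}' https://api.example.com/health)

    # Probe 2 : number of currently connected devices (hypothetical endpoint)
    DEVICE_COUNT=$(curl -s https://api.example.com/devices/connected/count)

    # Alert only on these two values; everything else is just monitoring data
    if (( $(echo "$RESPONSE_TIME > 2.0" | bc -l) )); then
      echo "ALERT : response time is ${RESPONSE_TIME}s" | mail -s "Slow responses" ops@example.com
    fi

    if [ "$DEVICE_COUNT" -lt 900 ]; then
      echo "ALERT : only ${DEVICE_COUNT} devices connected" | mail -s "Device drop" ops@example.com
    fi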

When one of the alerting probes fires, then you need to look into the other probes and statistics to figure out what is going on.

About the solution

There are so many tools available for alerting and monitoring that there cannot be a one-size-fits-all.

Some tools are focused on gathering data and alerting, but not really on the graphing/monitoring part (like Sensu, or some basic Nagios setups), and some are good at both (Nagios+Centreon, New Relic). Some are mostly application-oriented (Application Insights, New Relic), some are focused on infrastructure, or even hardware (HPE SIM for example).

I have worked with many of them, and they all have their strengths and weaknesses. I will not use this blog to promote one or the other, but if you’re interested in discussing the subject, drop me a tweet or an email!

The key thing here is to keep your alerting to a minimum, so that your support team can work in a decluttered environment and be very reactive when an alert is triggered, rather than drowning in a ton of fake alarms, false warnings and “this is red but it’s normal, don’t worry” 🙂

Note : the idea for this post goes to a colleague of mine, and the second screenshot comes from a tool another colleague wrote.

WPC 2016

It has been almost a year since my first Worldwide Partner Conference (WPC), organized by Microsoft in Toronto.

At the time, I wanted to share some insights, and some tips to survive the week.

Before WPC, I had attended multiple TechEd Europe and VMworld Europe events, in several locations over the years. WPC is slightly different, as it is a partner-dedicated event, without any customers or end users. That gives a very different tone to the sessions and discussions, as well as a very good opportunity to meet Microsoft execs.

As it was my first time, I signed up for the FTA (First Time Attendee) program, which gave me access to a mentor (someone who had already attended at least once) and a few dedicated sessions to help us get the most out of the conference.

 

The buildup weeks

In the months preceding the event, Microsoft will be pushing to get you registered. They are quite right to do so, for two reasons.

First the registration fee is significantly lower when you register early. So if you are certain to attend, save yourself a few hundred dollars and register as soon as you can. Note that you may even register during the event for the next one.

Second, the hotels fill up very quickly, and if you want to be in a decent area, or even in the same place as your country delegation, be quick!

 

A few weeks before the event, I had a phone call with my mentor, who gave me some advice and opinions, as well as pointers on how to survive the packed five days. This helped me focus on meetings with potential partners and with microsoftees, rather than on the sessions themselves. More on that subject later.

During that period, you are also given the opportunity to complete your online WPC profile, which may help you get in touch with other partners and organize some meetings ahead of time.

 

You also get the session schedule, which lets you organize your coming days and see what the focus is.

I had the surprise, a few days before the event, to learn that we had “graduated” in the Microsoft partner program, from remotely managed to fully managed. So we had a new PSE (the Microsoft representative handling us as a partner), who was very helpful and set up a lot of meetings with everyone we needed to meet from Microsoft France. As a first-timer, it helped to be guided by someone who knew the drill.

I was very excited to get there, and a bit anxious as we were scheduled to meet a lot of people, in addition to my original agenda with many sessions planned.

 

 

The event

I’ll skip the traveling part and will just say that I was glad I came one day early, so that I had time to settle into my new timezone, visit a bit and get comfortable with the layout of the city and the venue.

I will not give you a blow-by-blow account, but I will try to sum up the main points that I found worth noting.

The main point, which I am still struggling to classify as good or bad, is that we met almost exclusively people from France, microsoftees or partners. I was somewhat prepared for that, having heard the talk from other attendees, but it is still surprising to realize that you have traveled halfway across the world to spend five days meeting with fellow countrymen.

 

There is some explanation for that : this is the one time in the year when all the Microsoft execs are available to all partners, and they are all in the same place. So it is a good opportunity to meet them all, at least for your first event. I may play things differently next time.

Nevertheless, we managed to meet some interesting partners from other countries, and started some partner-to-partner relationships from there.

 

I did not go to any sessions, other than the ones organized for the French delegation. These were kind of mandatory, and all the people we were meeting were going there too. But I cancelled all the other sessions I had planned to attend.

I did not really miss those technical sessions, as I work exclusively on cloud technologies, which are rather well documented and discussed all year round in dedicated events and training sessions. But on some other subjects, technical or more business/marketing oriented, some sessions looked very interesting, and I might be a bit more forceful about attending those next time.

 

I attended the keynotes, which were of varying levels of interest and quality. They are a great show, and mostly entertaining. The level of interest differs for every attendee, depending on your role and profile.

What I did not expect, even with my experience of other conferences, was the really packed schedule. A standard day ran like this :

  • 8.30 to 11.30 : keynote
  • 11.30 to 18.30 (sometimes more) : back-to-back meetings, with a short break to grab a sandwich
  • 19.30 to whatever time suits you : evening events, either country-organized or general
  • 22.30 : get to bed, and start again

 

You may also insert into that schedule a breakfast meeting, or a late business talk during a party/event.

So, be prepared 🙂

 

 

A word on the parties/events : some countries organize day trips to do some sightseeing together. Niagara Falls is not far from Toronto, so it was a destination of choice for many of them. We had an evening of BBQ on one of the islands facing Toronto, with splendid views of the city skyline at sunset. Some of the parties are just dinners in quiet places, others are more hectic parties in nightclubs. The main event is usually a big concert, with nothing businesslike and everything fun-oriented!

 

The cooldown time

There are a few particulars to that event, mostly linked to Microsoft’s organization and fiscal-year schedule.

The event is scheduled at the beginning of Microsoft’s fiscal year. This means that microsoftees get their annual targets right before the event, and start fresh from there.

The sales people from MS also have a specific event right after WPC in July, which means they are 120% involved in July, and will get you to commit to yearly target numbers and objectives during the event.

To top that off, August is a dead month in France, where almost every business is closed or slowed to a crawl. That means that when you get to September, the year will start for good, but Microsoft will already be closing its first quarter!

 

Practical advice

Remember to wear comfortable shoes, as you will walk and stand almost all day long. Still in the clothing department, bring a jacket/sweater, as the A/C is very heavy in these parts. We had a session in a room set at 18°C, when it was almost 30°C outside…

The pace of your week may really depend on the objectives you set with your PSE. Our first year was mostly meeting with Microsoft France staff. Next year may not be the same.

And obviously, be wise with your sleep and jet lag; those are very long days, especially when English is not your native language.

 

This year the event will be hosted in Washington DC, in July, and it has been rebranded Inspire.

I would not specially comment on the name, but anything sounds better than WPC 🙂

 

The first steps of your cloud trip

When I talk to customers who are already knowledgeable about the cloud but still have not started their trip, the main subject we discuss is : what is the first step to take to move into the cloud?

Usually at that point we all know about the cloud and its various flavors, on a personal level. I have already touched on the subject of how to start playing with the cloud as an individual : https://cloudinthealps.mandin.net/2017/03/17/how-to-embrace-azure-and-the-cloud/. But it’s not that easy to translate a personal journey and knowledge into a corporate view and strategy.

There are two major ways to plan that journey.

The first is : move everything, then transform.

The second is : pick the best target for each application, transform and migrate if needed.

 

Lift and shift

I will touch quickly on the first path. It’s quite a simple plan, if difficult to implement. The aim is to perform a full migration of your datacenter into the cloud, lift-and-shift style. This can be done in one shot or in multiple steps. But in the end you will have moved all of your infrastructure, mostly as it is, into the cloud. Then you start transforming your applications and workloads to take advantage of the capabilities offered by the cloud, in terms of IaaS, PaaS or SaaS offerings. The difficulty there, for me, is that not all workloads or applications are a good fit for the cloud.

 

Identify your application portfolio

Enter the second solution : tailor the migration to your applications. Because the application is what matters in the end, along with the impact and use of this application for the business. The question of how you virtualize, or which storage vendor to choose, is not relevant to your business.

In that case you will have to identify your whole application portfolio, and split it into four categories :

  1. Existing custom apps

Mostly business-critical applications.

  2. New business apps

Applications that you would like to build.

  3. Packaged apps

Bought off the shelf.

  4. Application operations

Everything else : low use, low criticality.

Application breakdown

And here is a breakdown of each category and what you should do with those applications.

 

For your business-critical applications, which you have built yourself or heavily customized, please leave those alone. They are usually heavily dependent on other systems and network-sensitive, and will not be the most relevant candidates. Keep those as they are, and maybe migrate them to the cloud when you change the application. Some exceptions to that rule would be hardware that is reaching its end of life, or high-burst/high-performance computing, which could take advantage of the cloud solutions provided for these use cases.

 

The new business apps, the ones you want to build and develop : just build those cloud-native, and empower them with all the benefits of the cloud : scalability, mobility, global scale, fast deployment. Use as many PaaS components as you can, let the platform provider handle resiliency, servicing and management, and focus on your business outcome.

 

Packaged apps are the easiest ones : they are the apps that you buy off the shelf and use as they are. Find the SaaS version of those applications, and use it. The main examples are office apps (Office 365, Google Apps), storage and sharing (Dropbox, Box, Google Drive, OneDrive) and email (Exchange, Gmail).

 

The largest part of your application portfolio will most likely fall under Application Operations. They cover most of your apps : all of your infrastructure tools, and all of the low-use business applications. These applications can be moved to the cloud, and transformed along the way, without any impact on their use, while giving you a very good ROI at low risk. Some examples : ITSM/ITIL tooling (inventory, CMDB, ticketing), billing, HR tools etc.

With that in hand, how do you expect to start?