Containers, Azure and Service Fabric

 

Today I will try to gather some explanations about containers, how they are implemented or used on Azure, and how this all relates to micro-services and Azure Service Fabric.

First let’s share some basic knowledge and definitions.

 

Containers in a nutshell

To make a very long story short, a container is like a higher-level, lighter-weight virtual machine: you pack your application and its dependencies in it, and let it run.

The good thing about containers is that you do not have to pack the whole underlying OS in there. This gives us lightweight packages, which can be around 50MB for a web server, for example. Originally, containers were designed to be stateless: you were supposed to keep permanent data out of them, and be able to spin up as many instances of your application as needed to run in parallel, without having to worry about data.

In practice, most deployments do not quite follow that ideal. Today many containers are used as lightweight virtual machines, to run multiple identical services, each in its own instance.

For example, if you need a monitoring poller for each new customer you sign, you might package it in a container and run one instance per client, where you only have to configure that client's specifics. It is simple, modular and quick. The stateless versus stateful container debate is a long-standing one, see http://www.infoworld.com/article/3106416/cloud-computing/containerizing-stateful-applications.html
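To make that idea concrete, here is a minimal sketch of the per-customer poller pattern using the Docker SDK for Python (docker-py). The image name, customer list and environment variables are hypothetical placeholders; the point is simply that each customer gets an identical container with its own configuration.

```python
# Minimal sketch: run one poller container per customer (hypothetical image and settings).
import docker

client = docker.from_env()  # talks to the local Docker engine

# Hypothetical customer-specific settings, injected as environment variables.
customers = {
    "contoso":  {"POLL_URL": "https://contoso.example.com/health"},
    "fabrikam": {"POLL_URL": "https://fabrikam.example.com/health"},
}

for name, env in customers.items():
    client.containers.run(
        "monitoring-poller:latest",          # placeholder image name
        name=f"poller-{name}",
        environment=env,
        detach=True,                         # run in the background
        restart_policy={"Name": "always"},   # let the engine restart it if it dies
    )
```

Onboarding a new customer is then just one more entry in that dictionary, which is exactly the modularity described above.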

 

Orchestration

Just as in virtualization, the real question is usually not about the container technology and its limits, but about the tools you use to orchestrate it. VMware vCenter versus Microsoft SCVMM, anyone?

You may run containers manually on top of Linux or Windows, with some limitations, but the point is not to have a single OS instance running several services. The point is to have a framework where you can plug in a container and instantiate it without having to tinker with all the details: high availability, load balancing, registration into a catalog/registry, and so on. The video below is very good at explaining that:

The Illustrated Children’s Guide to Kubernetes

There are several major orchestrators in the field today. Kubernetes is the open-source one, born from Google's experience running its own datacenters. It has gained a lot of traction and is used in many production environments. DC/OS is the one based on Apache Mesos, which has been pushed a lot by Microsoft; it uses Marathon as its orchestration brick. And of course Docker has its own orchestrator: Docker Swarm. I will not go into comparing these, as there is a lot of content on that already.
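To give a taste of what "not tinkering with the details" looks like, here is a minimal sketch using the official Kubernetes Python client: you declare the desired state (three replicas of a container image) and the orchestrator takes care of scheduling, restarts and spreading the load. The names and image are placeholders, and it assumes a working kubeconfig on your machine.

```python
# Minimal sketch: declare a 3-replica deployment and let Kubernetes keep it running.
from kubernetes import client, config

config.load_kube_config()  # assumes ~/.kube/config points at a cluster

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three instances alive, wherever it can
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:alpine")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Note that nothing in there says which node runs what, or what happens when one instance crashes; that is precisely the orchestrator's job.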

 

Containers on Azure

Just as on-premises, there are at least two ways to run containers on Azure. The first is simply to spin up a virtual machine with a container engine (Docker Engine, Windows Server 2016 containers, Hyper-V containers…) and start from scratch.

The easier way is to use Azure Container Service (ACS), which does all the heavy lifting for you:

  • Create the needed controller VM(s) to run the orchestrator
  • Create the underlying network and services
  • Create the execution nodes which will run the containers
  • Install the chosen orchestrator

ACS is basically an automated Azure Resource Manager (ARM) configurator to deploy all of this simply.

You choose the orchestrator you want to use and the number of nodes, and voilà!
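Since ACS is essentially ARM under the hood, you can also script that deployment instead of clicking through the portal. Below is a rough sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resource); the resource group, template and parameter files are placeholders (the template itself would typically come from a quickstart gallery and carries the orchestrator type, node count, SSH keys and so on), so treat this as an illustration rather than a recipe.

```python
# Rough sketch: deploy a container-service ARM template with the Azure SDK for Python.
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"      # placeholder
resource_group = "containers-demo-rg"      # placeholder, assumed to exist already

# DefaultAzureCredential relies on environment credentials or a prior az login.
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Hypothetical local copies of a quickstart template and its parameters.
with open("acs-template.json") as f:
    template = json.load(f)
with open("acs-parameters.json") as f:
    parameters = json.load(f)["parameters"]

poller = client.deployments.begin_create_or_update(
    resource_group,
    "acs-deployment",
    {"properties": {"mode": "Incremental", "template": template, "parameters": parameters}},
)
print(poller.result().properties.provisioning_state)
```

The portal does the same thing behind the scenes, which is why the experience feels so simple.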

 

A bit anticlimactic, don't you think? I have to disagree, somewhat. I do not think the value of IT lies in installing and configuring the various tools and frameworks, but rather in finding the right tool to help the business and its users do their job. If someone automates the lengthy part of that installation, I'll back them up!

A note on pricing for these solutions: you only pay for the storage and the IaaS VMs underlying the container infrastructure (controllers and nodes).

 

Marketplace

If you really do not want to handle the IaaS part of a container infrastructure, you can get a CaaS (Container as a Service) option from the Azure Marketplace. These solutions are priced separately, with a platform cost (for the Docker Engines running on Azure) and a license cost for the product (https://www.docker.com/products/docker-datacenter#/pricing). With that you get all the nifty modules and support you want:

  • The control plane (Docker Universal Control Plane)
  • Docker Trusted Registry
  • Docker Engine
  • Support desk from Docker teams

 

Azure Service Fabric and Micro-services

I will not go deep into this subject, as it deserves a whole post of its own. However, to round out the container story, let me say a few things about Service Fabric.

Azure Service Fabric will be able to run containers, as a service, in the coming months.

The main target for Azure Service Fabric is more the development side of micro-services, in the sense that it is a way of segmenting the different functions and roles an application needs, so as to make the architecture highly adaptable, resilient and scalable.

Mark Russinovich wrote a great article on that subject: https://azure.microsoft.com/fr-fr/blog/microservices-an-application-revolution-powered-by-the-cloud/

 

 

How to embrace Azure and the Cloud

For the last year, I have been meeting with customers and partners inside and outside the Microsoft ecosystem.

I have talked with friends that are involved, at different levels, with IT whether Dev or Ops.

I have been trying to explain what the public Cloud is, especially Azure, to many different people.

Of course, I have been using the same evolution charts we have all seen everywhere to illustrate my speech and explain where I believe we are headed.

What struck me while speaking with all these different people was a recurring theme: I want/need/must start on the public Cloud, but how? Where do I start?

And very recently I finally found the right analogy, the one that will put your mind at ease and allow you to relax and tackle the Cloud with some confidence. It has to do with someone reminding me of the Impostor's syndrome.

https://en.wikipedia.org/wiki/Impostor_syndrome

https://xkcd.com/451/

The full MSDN library

Some of you will remember, as I do, the golden days when we had a TechNet/MSDN subscription and received, every month, the full catalog of Microsoft products on an indecent number of CD-ROMs. I don't know how you handled the amount of different products, but my approach was usually to fill the provided disc-book with the latest batch and leave it at that. Occasionally, I would need to deploy a product and would pull out the matching disc.

Did anyone ever try to grasp what all these products were and how to use them? I would venture to say that we certainly did not.

And yet, that is what some of us are trying to do with Cloud services. We get an overview of the services, quickly skimming over the names and vague purpose of each one, and we go home. The next day, we are willing to try new things and enjoy the breadth of options now sitting just a credit card away.

And we are stuck.

Let’s take an example: the Cortana Analytics services. I chose this example because it is way out of my comfort zone, so I will not be tempted to go technical on you.

Here is what the overview looks like:

When you were on the receiving end of the speech/session/introduction about these services, it all made sense, right?

What about now, do you have the slightest idea of what you could really do with Azure Stream Analytics?

All right, I am also stuck, what now?

And this is where I will disappoint everyone. I do not have a magic recipe to solve that.

However, I might be able to give some pointers, or at least tell you how I try to sort that out for myself.

If you are lucky, you have access to an expert, architect or presales consultant who has a good understanding of one service area (hosting websites on Azure, Power BI, Big Data, etc.). In that case, you should talk to that person, look at the customer cases that have been published, and try to get some inspiration by describing your current plans/projects/issues. This seems better suited to businesses outside of IT, where you have a product and customers for it. Innovation around your product, using cloud services and components, will probably have a quick positive impact for your company, and then for you.

If you work for an IT company, whether an ISV, a consulting firm or a Managed Services Provider, things get more difficult. In that case, what we found helpful was to run a multi-step proof of concept.

Start by gathering the innovation-minded people in your company. They might not be in your own organization, and can come from different teams and have different jobs. It does not matter, as long as they can help you brainstorm what kind of PoC you could start building.

Then… Brainstorm! Discuss any idea of an application or solution that you could build, no matter how simple or useless.

Then you choose one of your candidates, and start working on building it, using every cloud service you can around it. It can be a complex target solution, that you build step by step, or a simple one, that could get more complex or feature-rich over time.

We went for a simple but useless app that made use of some basic cloud components, so the team could build some real know-how. Once we had a skeleton application that did nothing, but was able to use the cloud services we wanted, we gathered again and discussed the next evolution, where the app was transformed into something actually useful for us.

Start small, and expand

What I really find attractive in this process is that it allowed us to start by focusing on small technical bits, without being drowned in large-scale application issues and questions. For example, we wanted part of the app to show interaction with the internal CRM/HR systems. We just focused on one piece of it, the competencies database, which we queried and then synchronized to our own Azure SQL database. This topic is not that wide or complex, but it allowed us to work on getting data from an outside source, Salesforce, and transforming it to fit another cloud service in Azure (a rough sketch of that kind of sync is shown below). With a bit of mind stretching, if you look again at the Cortana Analytics diagram earlier in this article, you can fit the topic into the first two blocks: Information Management, and (Big) Data Store. Our first iteration just added a visualization step on top of that, in a web app we built for the purpose. But we also added authentication, based on Azure AD, as you do not want to hand this information to just anyone out there.
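For what it is worth, the "pull from Salesforce, push to Azure SQL" plumbing we started with can be sketched in a few lines. This is only an illustration: the Salesforce object, the column names and the target table are made up, and it assumes the simple-salesforce and pyodbc packages plus an existing table in Azure SQL.

```python
# Illustrative sketch: copy a (made-up) competencies object from Salesforce to Azure SQL.
import pyodbc
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

# Hypothetical custom object and fields on the Salesforce side.
records = sf.query_all("SELECT Employee__c, Skill__c, Level__c FROM Competency__c")["records"]

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;Database=skills;"
    "Uid=appuser;Pwd=***;Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("DELETE FROM dbo.Competencies")  # naive full refresh, good enough for a PoC
for r in records:
    cursor.execute(
        "INSERT INTO dbo.Competencies (Employee, Skill, SkillLevel) VALUES (?, ?, ?)",
        r["Employee__c"], r["Skill__c"], r["Level__c"],
    )
conn.commit()
```

Crude, certainly, but it was enough to exercise two cloud services end to end, which was the whole point of the exercise.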

Once you are done with your first hands-on experience, start the next iteration, building on what you learned and expand. Do not hesitate to go for something completely different. We discarded 90% of our first step when we started the second. Don’t forget, the point is to learn, not necessarily to deliver!

Originally published here: https://gooroo.io/GoorooTHINK/Article/17176/How-to-embrace-Azure-and-the-Cloud/27339#.WMMD7TE2se0

I know Kung-Fu

Almost everyone who has seen The Matrix remembers that scene. Neo, played by Keanu Reeves, has just spent the day learning martial arts through some sci-fi brain-writing process. His mentor comes in at the end of one of these “lessons”; Neo opens his eyes and says, “I know Kung-Fu”.

Learning is difficult

Of course, learning is not that easy in real life: it takes a certain amount of time, long hours of work and practice. And it probably never ends. Take my current favorite subject, the cloud, or to be precise, public cloud services on Azure. The scope of what those services cover is extremely wide, and some of them are so specific that they need a specialist to dive deep into them.

It can be overwhelming. If you work in this field, or a similar one, you may already know that feeling: the feeling that you will never get to the bottom of things, the impression that you can never master the domain because it keeps evolving. To be honest, it is probably true. There are probably thousands of people working to broaden and deepen cloud services every day, and there is, probably, only one of you (or me).

For the last 15 months, I have been trying to learn as much as possible about Azure services, in every field possible, from IaaS networking to Machine Learning, from Service Bus Relay to Logic Apps. And after all that time and numerous talks, webcasts, seminars and data camps, I almost always ended up thinking "OK, I think I understand how these services work. I could probably do a demo similar to what I have just watched. But how can I use these in a real-life scenario?"

And last week, thanks to a very dedicated person, I finally found some insight.

Meet the expert

Allow me to set the stage. We were invited to an Azure Data Camp by Microsoft. The aim of these 3 days was to teach us as much as possible about Azure Data Services (the Cortana Intelligence Suite). The team was amazing, knowledgeable and open, the organization perfect, the attendees very curious and full of questions and scenarios we could relate to. Overall these 3 days were amazing. However, the technical scope was so wide and deep that we covered some very complex components in under an hour, which, even with the help of night-time labs, was too fast to process and absorb. It left me with the usual feeling: I could probably talk a bit about these components or areas, but my knowledge felt far from operational, or even business-presales, level. And I am supposed to be an architect, to have all this knowledge and be able to create and design Azure solutions that solve business needs.

So, after two days, that was the stage. Then came in one of the trainers/specialists. I will tell you a bit more about him later on; just do not call him a data scientist. His area of expertise, as far as we are concerned, covers the whole Cortana Suite from an angle I would describe as Data Analysis. He had already taken the stage earlier, to explain to us the methodology for handling data and how every step of it related to the Cortana Suite services. He even gave this speech on multiple occasions. Every time we heard or read it, it made sense; it was useful and relevant.

So, Chris started his part by showing us the same diagram and asking us, "Are you comfortable with that?" Followed by a deep, uneasy silence. My own feeling was that I did understand the process, but did not feel able to apply it or even explain it. I see several reasons for that. The first is that data analysis is far from my comfort zone: I am an IT infrastructure guy, I know virtualization, SAN, networking. I have touched Azure PaaS services around these topics, and extended into some IoT matters. The second is that we did not have time that week to let the acquired knowledge settle and be absorbed. Admittedly, I could have spent more hours in the evening rehearsing what we had learned during the day, but we were in London, and I couldn't miss that. And the last is that I feel we are getting so used to talks and presentations about subjects we just float on the surface of that we are numb and do not dive too deeply into them, probably out of fear. Fear of realizing that we are out of our depth. Impostor's syndrome, anyone?

Enter the “I know Kung-Fu!” moment

Because that was how we felt: having been force-fed a lot of knowledge, but having never really thought about it, or even used it. We felt knowledgeable, until someone asked us to prove it.

Remember what happened next in the Matrix? Neo’s mentor, Morpheus, asks him “Show me”. And kicks his ass, more or less. But still manages to get him to learn something more.

Chris did that to us. He realized that we were actually feeling lost under the varnish of knowledge. He then spent 45 minutes explaining the approach, and finally gave us a simplified scenario, which felt familiar to those of us who had studied for previous design certification exams. He asked us which services we would use in that case, what the key words were, and so on.

And, magically, he made us realize that we could indeed use our newfound knowledge to design data analytics solutions based on the Cortana Intelligence Suite.

It might seem obvious that examples and scenarios are an excellent way to teach. Don’t get me wrong, we had tons of those during these days. We actually spent a fair amount of time with Chris and part of the team and attendees that evening discussing that. The trick was to deliver the scenario at exactly the right moment: when we felt lost, but had the tools to understand and analyze it.

My point, to make that long story short, is: it's OK to feel drowned by the extent of available cloud services and options. We all do. Depending on your job, you may be the go-to person for Cloud topics, or an architect trying to be familiar with almost everything in order to know when to use a particular tool, or merely trying to wrap your mind around the cloud. In any case, you just have to find the time to get some hands-on experience, or read/watch a business case that matches something you are familiar with. This way you can see how the shiny new thing you've learnt is put to use.

And you will be able to say “I know Kung-Fu”.

Useful links:

Chris Testa-O'Neill's blog: https://ctestaoneill.wordpress.com

Originally published here: https://gooroo.io/GoorooTHINK/Article/17175/I-know-KungFu/26728#.WMMDqzE2se0

DevOps, NoOps and No Future

In the wake of the recent MongoDB happy hour debacle, there have been a few mentions of DevOps and NoOps. The pieces were mostly about the fact that this incident proved the IT business is not really in full DevOps mode, let alone NoOps. I am not confident that NoOps will be the future for the vast majority of shops. Being from the Ops side of things, I am obviously biased against anyone stating that NoOps is the future, because that would mean no job left for me and my comrades in arms. But let me explain 🙂

I would like to be a bit more thorough than usual and explain what I see there, in terms of practices and trends.

Definitions

First let me set the stage and define what I mean by DevOps, and NoOps.

https://en.wikipedia.org/wiki/DevOps

http://www.realgenekim.me/devops-cookbook/

At its simplest, DevOps means that Dev teams and Ops teams have to cooperate daily to ensure that each gets what it is responsible for: functionality for Dev, and stability for Ops. A quick reminder, though: the business is the main driver, above all. This implies that both teams have to work together and define processes and tooling that enable fast and controlled deployment, accurate testing and monitoring.

We could go deeper into DevOps, but that is not the point here. Of course, Ops teams should learn a thing or two from Scrum or any agile methodology. On the other hand, Dev teams should at least grasp the bare minimum of ITIL or ITSM.

What I could imagine as NoOps would be the next step after DevOps, where the dev team is able to design, deploy and run the application without the need for an Ops team. I do not find that realistic for now, but I'll come back to this point later.

How DevOps and the cloud are influencing our processes and organizations

I have worked in several managed services contexts and environments in my few years of experience, where Dev and Ops were sometimes very close, sometimes completely walled off. The main driver for DevOps on the Ops side, usually linked to the adoption of cloud technologies, is automation. Nothing new here, you've read about it already. But there are several kinds of automation, and the main ones are automated deployment and automated incident recovery.

The second kind has had a deep impact, over the long term, on how I've seen IT support organizations and their processes evolve. Most of the time, when you ask your support desk to handle an incident, they have to follow a written procedure, step by step. The logical progression is to automate these steps, either by scripting them or by using an IT automation tool (Rundeck, Azure Automation, PowerShell, etc.). You may want to keep the decision to apply the procedure in human hands, but it's not always the case: many incidents can be resolved automatically by directly applying a correctly written script.
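To illustrate, here is a deliberately simple sketch of what turning a written procedure into an automated recovery might look like: a hypothetical "web service not responding" runbook that probes the endpoint, restarts the service if needed, and escalates if the restart does not help. The endpoint, service name and commands are all placeholders; the real thing would live in your automation tool of choice (Rundeck, Azure Automation, a PowerShell runbook, and so on).

```python
# Hypothetical runbook: automate the "web service not responding" recovery procedure.
import subprocess
import time
import urllib.request

ENDPOINT = "http://localhost:8080/health"   # placeholder health probe
SERVICE = "mywebapp"                         # placeholder systemd unit name


def is_healthy(url: str, timeout: int = 5) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


def recover() -> None:
    # Step 1 of the written procedure: restart the service.
    subprocess.run(["systemctl", "restart", SERVICE], check=True)
    # Step 2: wait and re-check, exactly the same way every time.
    time.sleep(30)
    if not is_healthy(ENDPOINT):
        # Step 3: the automation gives up and escalates to a human (open a ticket, page L2...).
        raise RuntimeError(f"{SERVICE} still unhealthy after restart, escalating")


if __name__ == "__main__":
    if not is_healthy(ENDPOINT):
        recover()
```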

If you combine that with the expanding use of PaaS services, which remove most of the monitoring and management tasks, you get a new trend that has already been partly identified in a study:

https://azure.microsoft.com/en-us/resources/total-economic-impact-of-microsoft-azure-paas/

If you combine PaaS services, which remove most of the usual L2 troubleshooting, with automated incident recovery, you get what I try to convince my customers to buy these days: 24*7 managed services relying only on L1.

Let me explain a little more what this looks like, with an example based on a standard project on the public Cloud. Most of the time it is an application or platform which the customer develops and maintains. This customer does not have the resources to organize 24*7 monitoring and management of the solution. What we can build together is a solution like this:

• We identify all known issues where the support team is able to intervene and apply a recovery procedure

• Obviously, my automation genes kick in and we automate all of those recovery procedures

• All other issues will usually fall into three categories:

○ Cloud platform issues, escalated to the provider

○ Basic OS/VM issues, which should either be automated or solved by the L1 support team (or removed altogether by the use of PaaS services)

○ Unknown issues/bugs in the customer's software, which will have to be escalated to the dev team

I am sure you now see my point: once you take away the automatically recovered incidents and the incidents where the support team escalates to a third party… nothing remains!

And that is how you can remove 24*7 coverage for your L2/L3 teams while providing better service to your customers and users. Remember that one of the benefits of automated recovery is that it is guaranteed to be applied exactly the same way every time.

A word about your teams

We have experienced this fading out of traditional Level 2 support teams firsthand. These teams are usually the most difficult to recruit for and retain, as their members need to be savvy enough to handle themselves in very different situations and be willing to work shifts. As soon as they get experienced enough, they want to move to L3/consulting and to daytime jobs.

The good thing is that you will no longer have to worry about staffing and retaining these teams.

The better thing is that these are usually profiles with lots of potential, so train them a bit more, involve them in automating their own work, and they will be happier and will probably move on to implementation or consulting roles.

How about you, how is your organization coping with these cloud trends and their impact?

Why I love working in IT & the cloud

I remember that when I started working full time in IT, all the young professionals were employed by large contractors and consulting firms. The refrain then was “please help me find a job with a customer/end-user!”. When I recruit today, mostly people a bit younger than me, the refrain has shifted to “I love working for a contractor, as it does not lock me into one function”.

OK, I did think about that early today, and wanted to write it somewhere, so I used it as an intro, to show my deep thinking in the wee hours of the morning.

However, what I wanted to write about more extensively is how much I love working in IT today, particularly on Cloud solutions, and how gratifying it is compared to what we experienced a few years back.

Technology-centric, and a support function

Not so long ago, IT was a support function, and was supposed to keep the hassle of computers to a bare minimum. When interacting with our customers and users, the main issues and questions were about how we kept printers running and emails flowing. If you worked on an ERP or any management system, same thing: please keep that running so that we can do our job. For years, we had team members who loved technology, who delved deep into configuration and setup so that we could congratulate ourselves on building shiny new infrastructure, trying to keep up with users' demands.

I will keep the example to my own situation. I went through technological phases, from Windows 2000 Active Directory, to Cisco networking, to virtualization, to SAN storage and blade servers, ending up on hyper-converged systems. For years I would generally not talk shop with friends, family or even friends from school (I went to a mixed business/engineering school, so that could explain things). I did not see the point in digging into technical topics with people outside my “technological comfort zone”.

Don’t misunderstand the situation: I was aware that the IT department was trying to shift its role from support function to business enabler, but it felt a bit far-fetched to me. Then came the public cloud…

Business-centric, and a solution provider

At first we had a simplistic and limited public cloud (hello 2010!), and a private cloud which was just virtualization with a layer of self-service and automation. I could begin to see the point, but still… it was a technologist's dream of being able to remove a large portion of our day-to-day routine.

The situation evolved to a point where we had real PaaS and SaaS offerings that could deliver complex technical solutions with a few clicks (or command lines, don't throw your penguin at me!). And I started to talk with my customers about how we could help them build new solutions for their business, give them a better quality of service, and have them understand me!

Of course some of that is linked to my experience, and to the fact that I am not in the same role as I was 10 years ago, but still. I now enjoy discussing with my former schoolmates and helping them figure out a solution to a business issue, and being able to help a friend's business grow and expand.

IT can now be a real solutions provider. We have to work at gaining sufficient knowledge of all the cloud building blocks to be able to build the house our business does not yet know it needs.

Originally published here: https://www.linkedin.com/pulse/why-i-love-working-cloud-frederi-mandin