Microsoft Tech Summit France

As the summit has just closed its doors, I would like to share my feedback on this first Tech Summit to be held in France.
As far as I know there are already Tech Summits in several other countries around the world. From what I have heard, they are supposed to be “local Ignite” events. For honesty’s sake, I have to say that I have not attended Ignite so far, only Tech-Ed Europe a few years ago, so I will not compare the two events too much. However, according to the community website (http://aka.ms/community/techsummit) the sessions were exactly the same as the ones presented at Ignite.

I did not see any numbers published so far, but it was a rather small event. Attendance at the first keynote, on Microsoft 365, was not very high; the Azure keynote attracted more people and the room was almost full. I had the feeling that Azure was more exciting than Microsoft 365, but maybe 9:30 was too early for most 🙂 Or maybe I am biased toward Azure 😉
The conference took place in a single hall at Paris Expo, on one level, and we were far from crowding it.
As it was a free event, right in Paris, it seems that a lot of people came and went for just a session or two, rather than staying for the whole two days. That is rather smart, as it lets local people keep their business running while still attending some sessions, and it lent a quiet feeling to the event itself.

For once, I managed to attend a few sessions, and they were very interesting, each focused on a tight subject. I was never misled by a catchy title into a session that had nothing to do with what I expected.
The speakers were a mix of Microsoft Corp and Microsoft France, most sessions were in English, and we could interact easily with every speaker afterwards. Overall the sessions raised some good ideas for me to pitch, and subjects to talk about with my customers. I would have liked more technical sessions, but I think deep dives need a specific environment and audience to run properly.

In conclusion, I liked the event overall, but I do not find it as attractive as Experiences. And it was much smaller!
Also, Experiences had been criticized as being less technical than Tech Days, the event it replaced. From my point of view, Tech Summit is on the same level as Experiences, just smaller and six months later (or earlier, depending on how you look at it 🙂 )

As usual, the strategy is a bit difficult to read, but the local speakers and content providers were present and accessible, which is almost always my first reason to come 🙂

One final word about the technical levels used to sort the sessions: the levels are standard, from 100 to 400, with 100 being introductory and 400 being expert. My advice would be to change the description, as the level mostly describes the prior knowledge you need about the product (Azure, for example) rather than the depth of the session. A 400 does not mean you will see live coding and the internals of the platform; it means you already know where you are going and have probably already used the product.

GDPR, my love

The original title was supposed to be “in bed with GDPR”, but it might have been a little too clickbait 🙂

Anyway, short post today, but an important one, I think.

To be honest, I feel like screaming every time I see/read/hear someone telling me that “we need to have a GDPR offer/business/thing”. Alright, it is a buzzword, and I have to live with that. I have made my peace with AI, Blockchain, Big Data, IoT, Cloud, etc. But I still struggle with GDPR. Here is why.

First, this regulation is a very important one in Europe and will impact every business that comes anywhere close to us. You cannot ignore it, and every company has to look into it and find out what is needed to be compliant.

Second, the deadline is looming, but the national law for France is not yet in force. There is a text under discussion (https://www.legifrance.gouv.fr/affichLoiPreparation.do;jsessionid=?idDocument=JORFDOLE000036195293&type=contenu&id=2&typeLoi=proj&legislature=15), but there might still be many changes before the law applies in France. That means we should hurry up and wait, while staying prepared… a tough one.

Last, most important, and the main reason for my screaming: it is mostly a question of law, for lawyers. Sure, IT has to get ready to comply, but most of the consulting, debating and discussing has to be handled by legal experts, who are the right people to understand what it will mean to be compliant.

Sure, an IT company can put some services in place and offer some broad suggestions and consulting. But without a lawyer trained for that (and a properly written and voted law…), our job is almost meaningless.

The risk of innovation burnout

Catchy title, isn’t it? It could have been copied from a management magazine, or CIO Monthly. Note to self: check before getting hit with a copyright infringement lawsuit.

What I wanted to write about is mostly how to deal with the fast pace of innovation in the IT cloud business.

And mostly, how I deal with it, in my specific role, and how I dealt with it before.

As IT pros, we always need to keep an eye on the market, to check emerging technologies, to see where existing ones are going and which ones are dying. This serves two purposes:

  • Keep our company and infrastructure up to date
  • Keep our own profile up to date, or at least on the track for the future

In French we have an expression for that: “veille technologique”, which would roughly translate to “technology watch”.

In some French schools this subject is actually taught. It mostly describes how to identify the right sources of information to track, and how to track them. The sources are mostly tech websites and influencers. The tools are more diverse: RSS feeds, LinkedIn, Twitter, Facebook, Reddit…

In my previous position, as an infrastructure consultant and architect, I had to keep up with a limited set of technologies, mostly around databases and virtualization. My watch was purely technical and dealt with the detailed evolution of a few components: which new features were available in the latest version of vSphere ESX, what capabilities were expected in the next release of Oracle Database, etc. In that scenario, using RSS feeds and attending a few virtual events from the software vendor was enough. I could keep up with the pace of innovation by investing something along the lines of one day per month of my time.

Today, if I consider my CTO-like role, the job is more complex. The scope I have to watch is much broader. If you consider only Microsoft Azure and the services it provides, it is already almost impossible to keep up. For example, if you follow the “Last week in Azure” blog posts, which only relay official news from the Azure blog, you get around 30 news items per week (https://azure.microsoft.com/en-us/blog/last-week-in-azure-week-of-2018-02-12/). If you want to dig into each announcement and find out how it might affect you, it will take more time than you have in a week 🙂

And that does not count anything outside of official Azure news. If you add specific content creators, from Microsoft or not, who also post every week, and then add news and trends around DevOps… you get the point. And I have not even mentioned the podcasts and videos…

 

The main risk, as the title stated, is innovation burnout, or innovation overload. From what I have seen with colleagues, partners and customers, most of them do not want to keep up with that mass of information. Fortunately, I love learning new stuff, and I love information. Here is how I am currently working to get the most relevant information in my mind, and keep up with the news stream.

I have separate tools for separate needs and, most importantly, I do not use them in the same environment or at the same pace:

  1. I use RSS feeds to track some news websites and blogs. I use them for both ends of the spectrum: news websites that publish a lot, which makes RSS feed scanning worthwhile if you are a quick reader, and professional blogs whose authors publish irregularly.
  2. We also pipe the official Azure blog RSS feed directly into Slack, so that we can easily discuss any announcement that might be interesting for us or for our customers (see the sketch after this list).
  3. I tend to avoid in-person micro-events where you see a session on one subject for half a day. This usually means spending at least half a day, often more, for an unguaranteed return on investment, unless the speaker is renowned or the session is a way of meeting people you need to meet anyway.
  4. By the same token, when I go to official conferences, I mostly do not attend sessions. I attend some, but my main information medium is people. I would rather talk to five different people during the same 45 minutes and get their opinions and feedback on the same tech, unless the tech is not out yet and no one has been using it 🙂
  5. I also use podcasts, mostly for business and market trends, as well as general information. I usually listen to them when I travel, and I am mostly way behind on my podcast list 😉
  6. Lastly, I rely on discussions with my fellow humans to share and debate tech (and non-tech) trends: colleagues, partners, competitors, customers, prospects, etc.
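
As a small illustration of points 1 and 2, here is a minimal sketch, not our actual setup, that reads the Azure blog RSS feed and relays the latest entries to a Slack channel. It assumes the feedparser and requests packages and a Slack incoming-webhook; both URLs below are placeholders to adapt.

```python
# Minimal sketch: relay the latest Azure blog announcements to Slack.
# Both URLs below are placeholders / assumptions, adjust them to your setup.
import feedparser
import requests

AZURE_FEED = "https://azure.microsoft.com/en-us/blog/feed/"        # assumed feed URL
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"     # placeholder webhook

feed = feedparser.parse(AZURE_FEED)
for entry in feed.entries[:5]:                # only the latest few items
    message = f"{entry.title} - {entry.link}"
    requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)
```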

 

All in all, I tend to stay at a high level of information on tech trends until I have a real need to dig into one and find out how it applies to a specific scope and project. This allows me to keep my sanity, and to get some productive work done every day!

Bring your containers to the cloud

Cloud and containers, two buzzwords of the IT world put together. What can go wrong?

This post is a refresh of a previous one (https://cloudinthealps.mandin.net/2017/03/24/containers-azure-and-service-fabric/), with a focus on containers rather than the other micro-services architectures.

As usual, I’ll speak mainly of the solutions provided by Microsoft Azure, but they usually have an equivalent within Google Cloud Platform or Amazon Web Services, and probably other more boutique providers.

And let’s be more specific, considering what has happened in the container orchestrator world in recent weeks. I am of the general opinion that this war is already over and Kubernetes has won, so let’s focus on how to run a Kubernetes cluster.

First step: you want to try out Kubernetes on your own. The ideal starter pack is called Minikube (https://github.com/kubernetes/minikube). I have already written about it; the good thing is that you can run a Kubernetes installation on your laptop in a few minutes, with no need to worry about setting up clusters and configurations you do not understand at all.
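
To make the “cluster on your laptop” idea concrete, here is a minimal sketch, assuming you have already run `minikube start` and installed the official `kubernetes` Python client: it connects to the local cluster and lists the nodes and pods.

```python
# Minimal sketch: talk to the Minikube cluster running on the laptop.
# Assumes `minikube start` has been run and `pip install kubernetes` is done.
from kubernetes import client, config

config.load_kube_config()            # reads the kubeconfig written by Minikube
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    print("node:", node.metadata.name)

for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", f"{pod.metadata.namespace}/{pod.metadata.name}")
```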

You might want to play a bit with Kubernetes The Hard Way, in order to understand the underlying components, but that is not necessary if you only want to focus on the running pods themselves.

Now you are ready to run a production-grade Kubernetes cluster, and you would like to handle everything on your own. There are many ways to get there.

First, you may want to deploy your own cluster, not manually but on your own terms. There is a tool, kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), that will help you along the way without your having to do everything by hand. It works on any underlying infrastructure: cloud, virtual or physical.
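
To give a rough idea of what kubeadm takes care of, here is a minimal sketch, not a production recipe, that drives it from Python on the first control-plane node; the pod network CIDR is just an example value for a Flannel-style network.

```python
# Minimal sketch: bootstrap a control plane with kubeadm, then print the
# command the worker nodes must run to join. Assumes kubeadm is installed
# and this script runs as root on the first node.
import subprocess

subprocess.run(
    ["kubeadm", "init", "--pod-network-cidr=10.244.0.0/16"],   # example CIDR
    check=True,
)

join_cmd = subprocess.run(
    ["kubeadm", "token", "create", "--print-join-command"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print("Run this on each worker node:", join_cmd)
```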

On Azure specifically, there are two competing solutions to build your Kubernetes cluster: ACS (https://azure.microsoft.com/en-us/services/container-service/) and ACS-Engine (https://github.com/Azure/acs-engine).

ACS (Azure Container Service) is mostly a deployment assistant that asks you the relevant questions about your K8s deployment, then creates and launches the corresponding ARM template. After that, you are on your own, although you may download the template, edit it and re-use it anytime you want!

ACS-Engine is a customizable, command-line version of ACS, with more power to it 🙂

I feel that both are Azure-dedicated equivalents of kubeadm: they do not add value to your production environment once it is deployed, but they are still good ways to quickly deploy a tailored cluster!

By the way, if you go to the official web page for ACS, it now only talks about AKS, and you will have to dig a bit deeper to find the other orchestrators 😉

What if you could have your K8s cluster, run your containers, and only have to manage the clustering and workload details? There is a brilliant solution called AKS (https://azure.microsoft.com/en-us/services/container-service/), and no, it does not stand for Azure K8s Service… it officially means Azure Container Service. Don’t ask. With that solution you only have to take care of your worker nodes and the running workloads; Azure manages the control plane for you, so there is nothing to do on the etcd and control nodes. Cherry on top: you only pay the IaaS cost of the worker nodes, the rest is free!
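
To show how little there is to do, here is a minimal sketch, assuming the Azure CLI is installed and logged in; the resource group, region and cluster names are placeholders.

```python
# Minimal sketch: create a small AKS cluster and fetch kubectl credentials.
# All names and the region are placeholders; the control plane is managed by Azure.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["az", "group", "create", "--name", "demo-rg", "--location", "westeurope"])
run(["az", "aks", "create",
     "--resource-group", "demo-rg",
     "--name", "demo-aks",
     "--node-count", "2",                 # the worker nodes are the only IaaS cost
     "--generate-ssh-keys"])
run(["az", "aks", "get-credentials", "--resource-group", "demo-rg", "--name", "demo-aks"])
run(["kubectl", "get", "nodes"])          # only the worker nodes show up here
```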

In my opinion, it is the best solution today: it offers wide flexibility and control over your cluster at a very low cost, and lets you focus on what matters, running your containers.

One last contestant joining the ring: Azure Container Instances (https://azure.microsoft.com/en-us/services/container-instances/). This solution is still in preview but might become a strong player soon. The idea is that you only care about running your container, and nothing else. For now it is a plugin for an actual K8s cluster, presenting itself as a dedicated worker node where you can force a pod to run. I have not had time to fully test the solution and see where the limits and constraints are, but we will probably hear from this team again soon.
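
For completeness, here is a minimal sketch of the “just run my container” promise, again assuming the Azure CLI; all names and labels are placeholders.

```python
# Minimal sketch: run a single public container with Azure Container Instances.
import subprocess

subprocess.run([
    "az", "container", "create",
    "--resource-group", "demo-rg",
    "--name", "hello-aci",
    "--image", "nginx",                    # any public image works
    "--ports", "80",
    "--dns-name-label", "hello-aci-demo",  # gives the container a public FQDN
], check=True)

# Check the state and the public address that was assigned.
subprocess.run([
    "az", "container", "show",
    "--resource-group", "demo-rg",
    "--name", "hello-aci",
    "--query", "{state:instanceView.state, fqdn:ipAddress.fqdn}",
    "--output", "table",
], check=True)
```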

DevOps is the new black

Yes, DevOps is the new black. I might not be the first to use the phrase, but it’s so obviously true.
I am currently working on building some kind of offer around DevOps, so you’ll probably see more posts on the topic.
But two things struck me recently and I decided to make a post out of them. Both are related to the people side of DevOps. The first is the importance of the people involved in your DevOps transformation or organization. The second, a corollary to the first, is the recruitment of these people.

People matter

People are important, that’s obvious. However, a recent customer experience highlighted just how important they are to a successful DevOps transformation. I cannot go into many details, but the broad outline is quite simple.
The organization is a software team within a large company. It delivers its own product, used by other business units, and has decided to run its own operations. A perfect candidate for DevOps, right? Using a custom approach based on industry standards, an Agile/DevOps organization was designed and implemented.
Fast forward one year. The transformation is quite successful: the stability and quality of the product have improved. The only thing preventing the team from being outstanding seems to come from the people. Do not misunderstand me, I am not judging this team or its members. But for an Agile/DevOps transformation to succeed you need the right people, with the right mindset, and not everyone fits the bill. Just as some people are more comfortable in an open space while others prefer a closed-off office. It was the same with Agile practices, which could not be applied to every situation and team.
We need to pay extra attention to the people we include in these transformation projects, if we want them to succeed.

Recruitment is crucial

As a follow-up to the above assessment, recruiting people for your team is important. Yes, I know, it has always been important. However, take a second look at the title of this post. Done? Alright. Now have a look at the job offers in IT. See any pattern?
DevOps is written everywhere.
It is somewhat justified, as DevOps encompasses many modern practices that we have implemented or would like to implement: automation, continuous delivery, continuous deployment, testing, QA, etc.
The issue is that not every job offer is for a DevOps team or project. Most of the offers are for traditional sysadmins or developers with a hint of DevOps, which is a good trend but not a good fit for a full DevOps profile.

People matter in DevOps environments, so take care of your profile 🙂

Note: this post was inspired by a LinkedIn post, in French, that I cannot find any more, about the abusive use of DevOps in job postings in France. If anyone can find it, I would love to thank and credit its author!

Cloud is for poor companies

I heard that statement from Greg Ferro (@etherealmind) in a podcast a few weeks back.

I have to admit, I was a bit surprised and had a look at Greg’s tweets and posts, while finishing up the podcast.

Of course, the catchphrase is meant to shock, but it is quite well defended, and I have to agree with Greg on it, up to a point.

Let me try to explain Greg’s point, as far as I have understood it.

 

The IaaS/PaaS platforms, and some of the SaaS ones, aim to provide you with off-the-shelf functionality and apps so you can develop your product faster, and to let you focus on your own business rather than building in-house every expertise needed to support it. However, there are some underlying truths, and even drawbacks:

  1. When you are using someone else’s “product”, you are tied to what this company will do with it. For example, if you were not a Citrix shop and wanted to use Microsoft Remote Desktop on Azure… you are stuck, as Microsoft has discontinued RDS support on Azure in favor of Citrix. To be a little less extreme, you will have to follow the lifecycle of the product you are using in the cloud, whether or not it matches your own priorities and planning. If you stay on-premises, even with a commercial product, you can still keep an old version, admittedly without support at some point in time. If you build your own solution… then you’re the boss!
  2. Cloud services aim to be up and running in minutes, which helps young companies and startups focus their meager resources on their business. And that’s good! Can you imagine starting a company today and having to set up your whole messaging/communication/office/email solution over a few weeks before being able to really work? Of course not! You will probably get started on Google Apps or Office 365 in a few minutes. It is the same if you start building a software solution, IoT for example. Will you write every service you need from the ground up? Probably not. You will start with PaaS building blocks to manage message queuing and device authentication. Nevertheless, as your product gains traction and your needs become very specific, you will surely start building your own services to match those needs exactly.
  3. And last but not least… the cloud companies are here to make money, which means that at some point it will no longer be profitable for you to use their services rather than build your own.

 

There might be other drawbacks, but I would like to point out a few advantages of cloud services, even in the long term.

  1. Are you ready to invest the kind of money these companies invested in building their services and infrastructure? Granted, you might not need their level of coverage (geography, breadth of services etc.).
  2. Are you ready to make long-term plans and commitments? You need them if you want to invest and build those services yourself.
  3. You might be a large, rich company, but if you want to start a new product/project, cloud services may still be a good solution, until you have validated that long-term plan.

 

Say you are building a video streaming service. You would start by using AWS or Azure to support all of your services (backend, storage, interface, CDN, etc.). But the cloud providers have built their services to satisfy the broadest spectrum, and they may not be able to deliver exactly what you need. When your service becomes more popular, you will start building your own CDN, or supplement the one you have with specific caching infrastructure hosted directly within ISPs’ networks. Yes, this is Netflix 😀

 

Any thoughts on that?

Velocity London ’17 – content

I already posted about this event a few weeks ago, with a focus around my experience and the organization : https://cloudinthealps.mandin.net/2017/11/03/velocity-london-2017/

This time, I would like to share a short summary of what I have learned during these 4 days.
The first two days were a Kubernetes training, so nothing very specific here. I learnt a lot about Kubernetes, which is to be expected 🙂

During the two conference days, I attended the keynotes, and several sessions.
The keynotes are difficult to sum up, as they were very different, and each was a succession of short talks. I have attended several large-scale conferences in the past, and this was the first time I felt the speakers were really on the cutting edge of research and technology. They were not there specifically to sell us their new product, but to share where their work was headed, what the outcomes could be, etc.
They broached subjects ranging from bio-software to chaos engineering, from blockchain to edge computing. Some talks were really oriented toward IT and DevOps, and some brought a completely different view of our world.
Overall, it felt energizing to hear so many brilliant minds talk about what is mostly our future!
The sessions were a bit more down to earth and provided data, content and feedback that could drive some changes back home. I was surprised that most sessions concentrated on general information and experience rather than on specific tools and solutions. I expected more sessions from the DevOps toolchain vendors (Chef, Puppet, GitLab, Sensu and so on). Actually, even when the sessions were presented by these software companies (Datadog, Yahoo, Bitly, Puppet, PagerDuty), they never sold their products; they used their experience and data to provide very useful insights and feedback.
What I brought back can be split into two categories: short-term improvements and decisions that could be implemented as soon as I got back (which I partly did), and trends that need to be thought about and analyzed, then maybe crafted into a new offer or approach.
In the first category:
• Blameless post-mortems. A lot of data analyzed, with one takeaway for us: keep the story focused and short. If you do not have anything to add apart from the basic timeline… maybe you are not the right team to handle the post-mortem 🙂
• Solving over-monitoring and alert fatigue. This talk was a game-changer for me. What Kishore Jalleda (https://twitter.com/KishoreJalleda) stated was this: you may stop monitoring applications and services that are not respectful. For example, if an application generates more than X alerts every day, you can go to its owner and say: “as you are generating too much noise, we will disable monitoring for a while, until the situation comes back to something manageable by the 24/7 team”. Of course you have to help the product team get back on track and identify what is monitoring versus what is alerting (https://cloudinthealps.mandin.net/2017/05/12/monitoring-and-alerting/), and you need top management support before you go and apply that 🙂 A small sketch after this list illustrates the idea.
• On the same topic, a session about monitoring containers came down to the same issue: how do you monitor the health of your application? Track the data 🙂
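
To illustrate the “too noisy to monitor” rule from the second bullet, here is a toy sketch, with an arbitrary threshold and made-up alert data, that flags the applications exceeding a daily alert budget.

```python
# Toy sketch: count today's alerts per application and flag the noisy ones.
from collections import Counter

DAILY_ALERT_BUDGET = 20    # arbitrary threshold, to be negotiated with the owners

# (application, alert message) pairs, e.g. exported from your alerting tool
alerts = [
    ("billing-api", "disk > 90%"),
    ("billing-api", "disk > 90%"),
    ("web-front", "latency spike"),
    # ...
]

per_app = Counter(app for app, _ in alerts)
for app, count in per_app.most_common():
    if count > DAILY_ALERT_BUDGET:
        print(f"{app}: {count} alerts today, candidate for a monitoring time-out")
```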

The second group covered mostly higher-level topics, on how to organize your teams and company for a successful DevOps transformation. I noted an ever-spreading use of the term “SRE”, which I would qualify as misused most of the time; SRE now seems to describe any team or engineer in charge of running your infrastructure.
Another trend, in terms of organization, was the model where this famous SRE team provides tooling and best practices for each DevOps/feature/product team. I will probably post about it at length later.

To be certified or not to be certified?

I have been pushing my team to get certified on Azure technologies for the past 24 months, with varying degrees of success. I am quite lucky to have a team that does not question the value of certification, however much they question the relevance of the questions.
But, as I am now closing in on almost 15 years of IT certifications, I feel quite entitled to share my views and opinions.
Keep in mind that I work in infrastructure/operations, and in France, which probably gives some bias to my analysis 🙂

I will start with some general comments on the value of certifications from a career perspective, then dive into some specifics for each vendor I have certified with over the years. Some of my exams are a bit dated, so please be nice. I will conclude with my general tips for preparing for an exam.

As I said, it has been almost 15 years since my first cert, and I started that one before even being employed, which gives me some insight into the relevance of such an investment in my career. I took my first dip into the certification world during a recruitment process with a consulting company. There were two candidates: I was the young guy, and the other one already held his Microsoft MCP. I felt at the time that I could benefit from one myself and compensate for some of my lack of experience. As I registered for my first MCP exam, for Windows 2000 (!), I was contacted to join a kick-start program to raise my certification level up to Microsoft MCSE, and everything started from there.
After a few months, I passed the final MCSE exam (out of seven at that time) and was recruited, to work on Cisco networking, which had nothing to do with my skills, by the very same company that had interviewed me when I discovered the MCP. I still think that having gone through the certification path did a lot to convince my boss of my motivation and ability to work hard. Over the years I refreshed my MCSE with each version of Windows (from 2000 to 2016) and added a few new certifications, depending on what I worked on in my various positions: Cisco, Red Hat, VMware and PRINCE2.

Even though it was not obvious in my first job, the following ones were pretty clear cases where my certifications held some value for my employer. We discussed it rather openly during some of the interviews. I have also been in a recruiter’s shoes a few times myself, and here is why I feel certifications are useful.
First, they show that you can focus on sometimes gruelling work for a while. Passing these kinds of exams almost always forces you to learn tons of new information about software or devices you may never handle.
Then, maintaining them over time shows dedication, when they have at least some value to your current position.
And, let us be candid, they show you can take one for the team, because almost every vendor partnership requires some level of certification.
And, as I said, I know for a fact that I was recruited twice, at least partly, thanks to my certs.
On the salary side, I am not sure about the impact of certifications. I do not feel that certs play a part there, but I cannot prove or disprove it.

That being said, when you take one of these exams, you will experience very different things depending on the vendor, and sometimes on the level of certification. Let’s take a closer look.
We will start with my longest-running vendor: Microsoft. Apart from one beta test ten years ago, I have always had some kind of multiple-choice exam with them. There are some variations: drag and drop, point and click, etc., but by and large nothing close to a simulator or designer. This led to a bad reputation a while ago, when you could hold an MCSE (which was like the Holy Grail of Microsoft certification) while having absolutely no hands-on experience with Windows. They have kept the same format for the Azure exams and are taking some heat there too, because the exams are deprecated almost as soon as they come out. I wonder whether they are working on some other way to certify.
Cisco has had a router/switch simulator for a long time, which made for some rather interesting exams at the lower levels. I only took the CCNA, 15 years ago, so I do not know how it goes at higher levels. The only caveat, from my perspective, was that the simulator did not allow inline help and auto-completion, which you do have in real life.
Red Hat, for the RHCE exams, offered the most interesting experience in my view. The exam was entirely lab-based, split into two sections. First you had to repair a broken RHEL server, three times. Then you were given a list of objectives to meet with a RHEL server. You could choose whichever configuration you preferred, as long as the requirements were met (with SELinux enforcing, obviously 🙂). You had a fully functional RHEL, with the man pages and documentation, but no internet access. To this day I still feel that this approach lets you prove that you really are knowledgeable and have the necessary skills to design and implement a Linux infrastructure. And the trainers were always fun and very skilled.

I also certified on VMware vSphere for a while, and that brought me to a whole new level of pain. The basic VCP level is fine, along the same lines as an MCP. But when I started studying for the next level, VCAP-DCD (which stands for VMware Certified Advanced Professional - Data Center Design), I had to find new ways of preparing and learning. You see, where a usual exam requires you to learn some basics by heart (like the default OSPF timers, or the minimum Windows 2000 workstation hardware requirements), the scope is still limited. For this exam, you had to be able to completely design a vSphere infrastructure, following the official VMware guidelines, from every perspective (compute, storage, network). There were only a few multiple-choice questions; most were design exercises, where you had to draw, Visio-style, the required infrastructure based on a list of existing constraints and requirements. Believe me, it is one of the few exams where you also truly need to manage your time. I am a fast thinker and had always completed my exams in under 25% of the allotted time. On my first attempt at the VCAP-DCD, I barely finished within the 3h30 I had. And I failed. There has been talk in the certification world that this exam was the second hardest in the world at that level (at least in its version 5).

Across all of these exams, I can share some advice that is quite general and absolutely not groundbreaking. First, you need to work to pass the exam. Get to know the exam blueprint and identify the areas you do not know about. Watch some videos, practice in labs, learn the vendor’s recommendations. Second, get a decent practice exam, to get used to the form and type of questions and to check whether you are ready to register for the real thing. And last: work again, read, practice, and if you can, discuss with other people preparing the same exam, or with someone who has already taken it. We are not at liberty to discuss everything in an exam, but at least we can help each other.

The short version is: get certified, it is worth it, at least in France where most IT people do not take the time to go through the exercise.

Velocity London 2017

This October I had the opportunity to attend the Velocity conference in London (https://conferences.oreilly.com/velocity/vl-eu). The exact tagline of the conference is “Build and maintain complex distributed systems”. That is an ambitious subject. The event had been suggested by a customer who went to one of the US editions and found it a brilliant event, both in terms of the DevOps subjects covered and in terms of the attendees and networking. So here I am, back in London, for four days of DevOps and cloud talks.

I started the conference with a special two-day training on Kubernetes, by Sebastien Goasguen (@sebgoa).

The training was really intense, as Sebastien walked us through many standard objects and tools of the platform, as well as a few custom options we can use. We played with Minikube on our laptops, which is a great way to get the experience of a Kubernetes cluster in a small box. The schedule was really packed, and we had to rush to keep up with Sebastien’s tests and labs, even with his GitHub repo containing most of the scripts and K8s manifests. I came out of those days a bit tired from all the things I had learned and tested, with a long list of new features and tools to try and new ideas to explore. It was immensely fun, thank you Sebastien!

The conference itself was rather overwhelming, and a big surprise for me. I am used to large conferences like VMworld or Tech-Ed, where you get the good word for the year to come from a vendor and its ecosystem. Most of the sessions there obviously dive into the products and how to use them.
At Velocity, almost all the keynote speakers were working in research, or in domains so bleeding-edge that they might not even exist yet. I loved being shown what might be coming, by people who are scientists at heart, not marketing infused with a light touch of technology. Moreover, the sessions themselves were mostly feedback on the speakers’ own experience with a specific domain, issue or subject. They were usually not about a particular tool or software suite, but rather about how to make things work with DevOps and large distributed systems.

Overall, I really enjoyed this conference, because it was very well organized and small enough to remain a human experience. As you spend four days with the same group of around 400 people (227 to this day, according to the attendee directory) in a rather small area, you often cross paths with the same people, which makes it easy to start conversations. They also came up with a lot of ribbons that you can attach to your badge, to let others know what you are there for:

I was used to much larger conferences, where I always found networking a bit difficult if you did not know a few people beforehand. In Velocity’s case it is easy: there are relatively few attendees and speakers, and you can meet everyone informally, during lunch breaks or just by asking. It came as a surprise to me to be able to chat with some of the speakers who had impressed me, like Juergen Cito and Kolton Andrus, just by going and sitting with them at a lunch table.

I’ll probably write about what I learned and took away from this conference in the near future, so stay tuned!

New security paradigms

Obviously you have heard a lot of talk around security, recently and less recently.
I have been in the tech/IT trade for about 15 years, and every time I have met a new vendor or startup, they would start by saying that we were doing security wrong and that they could help us build next-gen security.

I am here to help you move to the Next Gen 🙂
All right, I am not. I wanted to share a short synthesis of what I have seen and heard over the past months around security in general, and in the public cloud in particular.
There are a few statements I found interesting:
• Perimeter lockdown, a.k.a. perimeter firewalls, is over.
• There is no more need for IDS/IPS in the public cloud; you just need clean code (and maybe a Web Application Firewall).
• Public cloud PaaS services are moving to a hybrid delivery mode.

Of course, these statements are not very clear on their own, so let me dig into them.

First, perimeter security. The “old” security model was built like a medieval castle, with a strong outer wall and some heavily defended entry points (firewalls). There were some secret passages (VPNs), and some corrupt guards (open ACLs 🙂).

Herstmonceux Castle with moat

This design has had its day and is no longer relevant. It is far too difficult to manage and maintain thousands of access lists, VPNs, exceptions and parallel internet accesses, not to mention the hundreds of connected devices we have floating around.

A more modern design for enterprise networking often relies on device security and identity management. You will still need some firewalling around your network, just to make sure that some dumb threat cannot get in by accident. But the core of your protection, network-wise, will be based on a very stringent device policy that allows only safe devices to connect to your resources.
This approach also requires good identity management, ideally with some advanced threat detection in place: something that can tell you when accounts should be deactivated or expired, or when there is abnormal behavior, for example two connection attempts for the same account from places thousands of kilometers apart.
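
As a toy example of the kind of abnormal behavior such a system looks for, here is a sketch of the “impossible travel” check: two sign-ins for the same account whose distance over time would require an implausible speed. The coordinates and speed threshold are illustrative only.

```python
# Toy sketch: flag two logins for the same account as "impossible travel".
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_impossible(login_a, login_b, max_speed_kmh=900):
    """True if covering the distance between the two logins needs a jet or better."""
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b      # (lat, lon, epoch seconds)
    hours = max(abs(t2 - t1) / 3600, 1 / 3600)                 # avoid division by zero
    return distance_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

# Paris, then Sydney 30 minutes later: clearly not the same traveler.
print(looks_impossible((48.85, 2.35, 0), (-33.87, 151.21, 1800)))   # True
```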
Those who have already set up 802.1X authentication and Network Access Control on the physical network for workstations know that it requires good discipline and organization to work properly without hampering actual work.
To complete the setup, you will need to secure the data itself, ideally with a solution that manages the various levels of confidentiality and can also track the usage and distribution of documents.

Second, as I said, no more need for IPS/IDS. Actually, I do not think I have ever seen a real implementation actually used in production. Rather, there was almost always an IPS/IDS somewhere on the network, to comply with the CSO office’s requirements, but nothing was done with it, mostly because of all the noise it generated. Do not misunderstand me, there are surely many real deployments in use that are perfectly valid! But for a cloud application, it is strange to want to go down to that level when your cloud provider is in charge of the lower infrastructure layers. The “official” approach is to write clean code, make sure your data entry points are protected, and then trust the defenses your provider has in place.
However, as many of us do not feel comfortable enough to skip the WAF (Web Application Firewall) step, Microsoft at least has heard the clamor and will shortly add the possibility of connecting a WAF in front of your App Service. Note: it is already possible to insert a firewall in front of an Azure App Service, but it requires a Premium service plan, which comes at an, ahem, premium price.

And that was my third point: PaaS services moving to a hybrid delivery mode. Usually, PaaS services in the public cloud have public endpoints. You may secure these endpoints with ACLs (or NSGs on Azure), but this is not always easy, for example if you do not have a precise IP range for your consumers. This point has been discussed and worked on for a while, at least at Microsoft, and we are now seeing the first announcements of PaaS services usable through a VNet, and thus a private IP. This leads to a new model where you can use these services, Azure SQL for example, for your internal applications, through a site-to-site VPN.
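
As a sketch of what that hybrid model could look like from the application side, here is a hypothetical connection to an Azure SQL database over an address reachable only through the VNet and the site-to-site VPN; the server address, database and credentials are all placeholders, and it assumes the pyodbc package plus the Microsoft ODBC driver.

```python
# Minimal sketch: an internal application reaching Azure SQL over the VNet.
# Server address, database and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=10.1.2.4;"            # address reachable only through the S2S VPN / VNet
    "DATABASE=internal-app;"
    "UID=appuser;PWD=<secret>;"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```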

These statements are open to discussion and will not fit every situation, but I think they make a good conversation starter with a customer, don’t they?