New security paradigms

You have obviously heard a lot of talk around security, recently and less recently.
I have been in the tech/IT trade for about 15 years, and every time I have met with a new vendor/startup, they would start by saying that we did security wrong and they could help us build Next Gen security.

I am here to help you move to the Next Gen 🙂
All right, I am not. I wanted to share a short synthesis of what I have seen and heard over the past months around security in general, and in the public cloud in particular.
There are a few statements I found interesting:
• Perimeter lockdown, AKA perimeter firewalls, is over.
• There is no more need for IDS/IPS in the public cloud; you just need clean code (and maybe a Web Application Firewall).
• Public cloud PaaS services are moving to a hybrid delivery mode.

Of course, these sentences are not very clear, so let me dig into those.

First, perimeter security. The “old” security model was built like a medieval castle, with a strong outer wall and some heavily defended entry points (firewalls). There were some secret passages (VPNs), and some corrupted guards (open ACLs 🙂 ).

Herstmonceux Castle with moat

This design has had its day and is no longer relevant. It is way too difficult to manage and maintain thousands of access lists, VPNs, exceptions and parallel Internet accesses, not to mention the hundreds of connected devices we have floating around.

A more modern design for enterprise networking often relies on device security and identity management. You will still need some firewalling around your network, just to make sure that some dumb threat cannot get in by accident. But the core of your protection, networking-wise, will be based on a very stringent device policy that allows only safe devices to connect to your resources.
This solution will also require good identity management, ideally with some advanced threat detection in place: something that can tell you when accounts should be deactivated or expired, or when you see abnormal behavior, for example two connection attempts for the same account from places thousands of kilometers apart.
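That last check, often called “impossible travel” detection, can be sketched in a few lines. This is an illustrative toy, not a real threat-detection product, and the 900 km/h threshold (roughly airliner speed) is my own assumption:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag two logins (lat, lon, unix_time) for the same account as
    suspicious if covering the distance would require travelling
    faster than an airliner."""
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    hours = abs(t2 - t1) / 3600.0
    distance = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return distance > 0
    return distance / hours > max_speed_kmh

# Paris then Sydney one hour later: clearly impossible
print(impossible_travel((48.85, 2.35, 0), (-33.87, 151.21, 3600)))   # True
# Paris then Lyon five hours later: plausible
print(impossible_travel((48.85, 2.35, 0), (45.76, 4.84, 5 * 3600)))  # False
```

Real products of course correlate many more signals (device, IP reputation, time of day), but the core idea is this simple.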
Those who have already set up 802.1X authentication and Network Access Control on the physical network for workstations know that it requires good discipline and organization to work properly without hampering actual work.
To complete the setup, you will need to secure your data itself, ideally using a solution that manages the various levels of confidentiality, and can also track the usage and distribution of the documents.

As I said, no more need for IPS/IDS. Actually, I do not think I have ever seen a real implementation used in production. Rather, there was almost always an IPS/IDS somewhere on the network to comply with the CSO’s office requirements, but nothing was done with it, mostly because of all the noise it generated. Do not misunderstand me, there surely are many deployments in use that are perfectly valid! But for a cloud application, it is strange to want to get down to a level where your cloud provider is in charge of the lower infrastructure layers. The “official” approach is to write clean code, make sure your data entry points are protected, and then trust the defenses your provider has in place.
However, as many of us do not feel comfortable enough to skip the WAF (Web Application Firewall) step, Microsoft at least has heard the clamor and will shortly add the possibility to connect a WAF in front of your App Service. Note: it is already possible to insert a firewall in front of an Azure App Service, but this requires a Premium service plan, which comes at an *ahem* premium price.

And that was my third point: PaaS services coming to a hybrid delivery mode. PaaS services in the public cloud tend to have public endpoints. You may secure these endpoints with ACLs (or NSGs on Azure), but this is not always easy, for example if you do not have a precise IP range for your consumers. This point has been discussed and worked on for a while, at least at Microsoft, and we are now seeing the first announcements of PaaS services that are reachable through a VNet, and thus over private IPs. This leads to a new model where you may use these services, Azure SQL for example, for your internal applications, through a Site-to-Site VPN.
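On Azure, the announced mechanism (VNet service endpoints, in preview at the time of writing) looks roughly like this with the Azure CLI. All resource names here are placeholders, and the exact commands may evolve while the feature is in preview:

```shell
# Enable the Microsoft.Sql service endpoint on the application subnet
az network vnet subnet update \
  --resource-group my-rg --vnet-name my-vnet --name app-subnet \
  --service-endpoints Microsoft.Sql

# Then allow traffic from that subnet on the Azure SQL logical server
az sql server vnet-rule create \
  --resource-group my-rg --server my-sql-server \
  --name allow-app-subnet \
  --vnet-name my-vnet --subnet app-subnet
```

Once the rule is in place, you can also turn off the “Allow access to Azure services” switch and the server stops being reachable from arbitrary public IPs.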

These statements are open to discussion and will not fit every situation, but I think they make a good conversation starter with a customer, don’t they?

Testing days, K8S, Minikube and Azure Stack

As I was getting ready for the Velocity conference (https://conferences.oreilly.com/velocity/vl-eu) and the Kubernetes training by Sebastien Goasguen, I happened to get caught in a spiral of testing.

First, I needed to have a K8s cluster running for said training. Sebastien suggested Minikube, which is a nifty way of having a local K8s cluster on your workstation to play with. As that was too easy, I went through my K8s the hard way (https://cloudinthealps.mandin.net/2017/09/14/kubernetes-the-hard-way-revival/) on Azure again, to be able to work on the real stuff and use kubectl from my Linux environment (embedded in Windows 10). And then I realized that I might have Internet issues during the conference and would be happy to have Minikube running locally.

So back to square one, and to setting up Minikube and kubectl properly on Windows.
I tried the easy way, which was to download Minikube for Windows and run it. It promptly failed, and I could not find out why. After some trial and error, I just updated VirtualBox, which I was already using for personal stuff. I then only had to reset my existing Minikube setup with “minikube delete” and start fresh with “minikube start”, and voilà, I had a brand new Minikube+kubectl setup fully on Windows 10 (and a backup on Linux and Azure).
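For reference, the reset sequence boils down to these commands (the `--vm-driver` flag assumes VirtualBox is the hypervisor, as in my setup):

```shell
# Wipe any stale local cluster state left over from the failed attempts
minikube delete

# Start a fresh single-node cluster on the VirtualBox driver
minikube start --vm-driver=virtualbox

# Check that kubectl now points at the new local cluster
kubectl get nodes
```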

But as I was working on that, I stumbled on some news about Azure Stack (https://social.msdn.microsoft.com/Forums/azure/en-US/131985bd-bc56-4c35-bde8-640ac7a44299/microsoft-azure-stack-development-kit-201709283-now-available-released-10102017?forum=AzureStack), and specifically the ASDK (Azure Stack Development Kit), which allows for a single-node setup of Azure Stack.

This tickled my curiosity gene. A quick Google search to see whether there were any tutorials or advice on running a nested Azure Stack on Azure, and here I am, setting up just that.
Keep in mind that the required VM size (E16s v3) costs just above 1 €/hour, which means close to 800 € monthly, so do not forget the auto-shutdown if you need to control your costs 🙂
The guide I followed is there : https://azurestack.blog/2017/07/deploy-azure-stack-development-kit-on-an-azure-vm/
I did almost everything using the Azure portal; it might be useful to build a script to do it more quickly.
Note that the email with the download link takes some time to be sent, so you might start with that. Or you can use the direct link : https://aka.ms/azurestackdevkitdownloader
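To put that cost warning in numbers, here is a quick sketch. The 1.10 €/hour rate is an assumed illustrative figure, not an official price (actual pricing varies by region):

```python
# Rough monthly cost of the ASDK host VM.
RATE_EUR_PER_HOUR = 1.10  # assumed rate, "just above 1 €/hour"

def monthly_cost(hours_per_day=24, days=30, rate=RATE_EUR_PER_HOUR):
    """Estimated monthly VM cost in euros."""
    return hours_per_day * days * rate

always_on = monthly_cost()  # running 24/7
with_shutdown = monthly_cost(hours_per_day=10, days=22)  # office hours, workdays only

print(round(always_on))      # 792 -- close to the 800 € figure above
print(round(with_shutdown))  # 242
```

With auto-shutdown outside office hours, the bill drops to roughly a third, which is why that checkbox matters.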

And this first test did not work out the way I expected: there were many differences between the article, the official doc, and what I encountered while deploying. So, back to the first step: I redeployed the VM, redownloaded the SDK, and started from scratch, following the official doc (https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-deploy). I just added the tweak to skip the physical host check, so that the installation would continue even though it was running on a VM.

After a few hours, voilà, I had a fully running Azure Stack within an Azure VM!
Now I just have to read the manual and play with it. That will be the subject of a future post, so keep checking!

My experience @ Experiences’17

It has been a long two-day event for Microsoft France.
I wanted to summarize this event and what happened during those two days.
I will not be extensive about all the announcements and sessions that were offered.
This will just be my experience (pun intended) of the event.

This year I did not present a session, mainly because the process to submit one was very unclear, and I did not want to fight against smoke. One last precision: it was only my second Experiences, and I never attended its predecessor, Techdays.

As I said, it is a two-day event, split between a business day and a technical day. I attended both, as my role also spans both aspects. I found that the distinction was very visible in the content of the various sessions, apart from the keynotes (and the Partner Back to School session). Overall the technical level is rather low, but most of the MS staff is onsite and you can have very interesting discussions with them, as well as with the other attendees.

A word on the attendees: there are very different groups there. I have met with numerous Psellers and MVPs, as well as Microsoftees. Obviously, there are many customers and partners around, some of them just for show, some with a very specific project/problem in mind. And there are people I am not accustomed to seeing at business events, but who bring a refreshing variety to the general attendee population: students from multiple schools (engineering, but not only), and employees who managed to get their managers’ approval because the event is free.
I am not sure whether it is the case in other countries, but in France we usually have difficulties getting approval to travel abroad and pay for a conference. It is not true of every company, but it has been widespread enough that some European-wide events are replicated at a smaller scale in France, so French techies can get the content as well.

Back to the event itself: the rhythm was rather intense this year, and I missed many sessions in order to meet and talk with everyone I wanted to. As with all events, the quality of a session depends heavily on the quality of the speaker. The ones I attended were very good and made a lot of effort to stay focused on their topic and keep everyone on board.
As for the keynotes, well, they were of the expected quality, on par with Inspire, with several videos, demos, interviews, etc. As was the case at Ignite, some talks were highly specific (to AI or quantum computing) and made me believe that Satya Nadella is taking some moves from Elon Musk. It was very different from the Tech’Ed days, when we were shown the new interface for System Center, or a new tablet.

The buzzword this year at Experiences was AI (it was Blockchain last year). I have to admit that the AI Hackademy included some very interesting ideas and startups. I did not manage to visit them all but I was pretty impressed to see so many startups working on the subject, and bringing fresh ideas and concepts to our world.

All right, everything was very positive, I am convinced. I will share one mildly negative thought though: AI was sometimes stretched thinly over a piece of software or an idea. I have seen some interesting uses of statistics, or even good programming and algorithms, but to call these truly AI was going a bit far. At least that’s my opinion, but we may not all share the same definition… as for what a cloud is 🙂

https://experiences.microsoft.fr/business/experiences-17-morceaux-choisis/