The DevOps transformation – step 1

I know, everyone talks about DevOps, often wrongly, sometimes rightly. It is a term frequently used to polish the image of a team or a practice, and to add a sexy touch to a slide deck. I often see it used to describe what is essentially just automation within an Ops/infrastructure team. That is a good start, but it is only a start, not the whole journey.

Still, having been an actor, sometimes a major one, in several of these DevOps transformations, and a witness to others, I wanted to share my opinion with anyone willing to hear it. What I am about to say will probably sound trivial to some of you, but it might give a few pointers to those wondering where the managed services business is heading.

I am talking about DevOps transformation specifically when it affects a Managed Services/outsourcing team. It is a difficult subject in that context, because these teams must provide modern support to customers who are sometimes already organized in a DevOps way, and even help them live through that transformation.

I will not go into contracts and SLAs; that is a very sensitive and complex topic. Not today, anyway.

I have observed two ways of approaching the subject. The first is a complete, radical transformation of the existing organization, no quarter given. The second is to build a new team dedicated to DevOps, alongside the existing one.

I have only lived through the second approach personally. It came as an opportunity, since we were bringing a whole set of new technologies and customers into an existing managed services team. I was able to start from a blank page and build a set of tools and methods to serve new customers under a new model, relying only minimally on what already existed. That gave us room to be creative and to demonstrate the viability of new ways of working and of interacting with our customers.

The hard part was adapting to existing services. For example, we had to start by keeping the existing monitoring tools, because we could not ask the 24/7 monitoring teams to watch multiple tools and dashboards. Our success in the other areas (infrastructure as code, focus on the application) later allowed us to drive a change of monitoring platform, a few months after the first steps. And once the movement was launched, it kept going.

The other method, aka the Big Bang, I have witnessed. It is a very disruptive approach, and not one that just anyone can carry. You need support from your management, up to the highest levels, because the implementation will have repercussions. First on the teams themselves, because not every profile is ready to change overnight, even with personalized coaching. Then on delivery and on the customers, because they are not necessarily convinced either, or thrilled to endure the side effects of your internal transformations.

The very positive side, on the other hand, is that you avoid having to run two parallel teams for the same customer service. And above all, you do not keep legacy around for very long.

I can only suggest two reads to learn more about what is at stake in this transformation: The Phoenix Project, a novelized account of a DevOps evolution, and The DevOps Handbook, which gives concrete keys for leading the transformation.

Next time, I will talk about how to choose the right target, internal or external, to start this journey toward the wonderful world of DevOps.

DevOps transformation for Managed Services

I know, DevOps is still a buzzword. It is often used to rebrand an old team or practice, and to add some sexiness to a marketing slide deck. I have seen it used to describe what is essentially an automation team within an Ops team. A good start, but missing the point.

Anyway, having been a major actor in a DevOps transformation, and a witness to many others, I wanted to give some advice to anyone out there patient enough to listen. I think most of what I am going to say will seem trivial, but it might help a few of us put words on what is happening to good ol’ outsourcing.

I am talking right now about DevOps transformation, but specifically when it happens to a Managed Services team, as in an outsourcing company or an MSP. This is a difficult spot, as such teams should be able to provide support to customers that are already organized around DevOps, and even help them get there.

I will skip the contract and SLA part, as it is a very tricky subject. At least for now.

I have seen two ways of approaching the subject. The first is a full-on transformation of the existing team, no quarter, no mercy. The second is building a new team dedicated to DevOps, in parallel with the existing one.

I have only experienced the second first hand. It came as an opportunity, as we were bringing a new set of skills and customers into an existing managed services organization. I chose to break from the past and build a new set of tools and processes almost outside of the existing system. This created the possibility to prove the viability of these new ways of working and of interacting with our customers.

The difficult part is where you have to merge with the existing tools and processes. For example, we had to start with the monitoring tools we had, as we could not ask our 24/7 monitoring team to juggle multiple dashboards and tools. The fact that we were successful with the other tools and habits we developed allowed us to push for a new monitoring solution a few months after the initial move. And we kept the momentum after that 🙂

The other way, transforming the whole team at once, I have witnessed. It is challenging and cannot be carried by just anyone on the team. I would advise starting down this path only if you have sufficient executive weight, or support from the executive team, because it will be disruptive. That can be good, but the cost might be high, both in terms of people inside the team becoming disgruntled and maybe leaving, and in terms of your existing customers (and prospects) who might not understand what you are undertaking. The upside is that you avoid having two different teams working with different mindsets and toolsets.

Obviously I would recommend reading The DevOps Handbook, and The Phoenix Project, to get a basic understanding of what you are getting yourself into 🙂

Next up, I will try to help you target which team and/or customer would be the best fit to start your DevOps journey. Stay tuned!

The comeback of research

What follows is a personal opinion, a feeling drawn from my own experience, and may not reflect reality, or even the feelings of all my colleagues; please do not hold it against them 🙂

When I started my professional career, and even back during my studies, we had a rather negative image of computer science researchers. They were certainly very smart, with deep knowledge, but of little use for everyday work. Knowing how a compiler works could be fascinating, and useful in a few optimization cases. But to claim that this was what we would need day to day…

During the first fifteen years of this century, the trend persisted. What I could observe around me was not very flattering for researchers and academics. We found them disconnected from reality, lost in theories or in problems very far removed from ours. A few stirrings were felt in some domains with the rise of today’s big players, Google first among them. Questions of semantic analysis and of the sheer volume of data to process led these players to work directly with scientific research, because no off-the-shelf product was built for that kind of case.

From where I sit, that was the quiet beginning of the change we can observe today. Researchers are solicited, approached, courted. We need their leading-edge vision, sometimes ahead of the leading edge, to solve specific problems.

What has changed, in my view, is the mindset, probably pushed by start-ups and massive digitalization. We went from a “product” approach (what can I do with what I know) to a “business solution” approach (what does it take to solve the problem the business is facing).

And that changes everything.

Where we used to limit ourselves to using the capabilities of a few products and putting them into service for predefined functions, we are now able to dig into the business problem, which often has nothing to do with an IT problem. We then translate that problem into technical criteria, and we go looking for the best compromise to solve it. And if needed, we turn to researchers.

On the laboratory side, again in my view, what has changed in France is that these teams must now find the largest share of their budget through external funding. The positive outcome is that we have grown closer. As in a nice Disney Christmas story (it is the season!), each side took a step toward the other, and together we are stronger. 😉

The private market is realizing that the way public research operates and is funded is particular. The private sector can hear that and adapt to it, because it makes it possible to create new solutions with the support of the best minds and technologies, even ones that do not exist yet.

And public research has accepted that it must work on projects that are perhaps more precise, in terms of schedule and objectives, and above all of ROI.

At the end of the chain, we find the emergence of Deep Tech startups. These are companies in the making that bring together one or more researchers working on a subject that is very advanced and far from industrialized, and business people who are able to project the possibilities of that technology and imagine uses for it. The ultimate association of research and business!

I have said it many times: this analysis comes from a narrow point of view, mine, and only reflects a biased perception of reality. I am not a researcher, I am just an (almost) IT infrastructure engineer.

The comeback of Research

What follows below is the expression of a purely personal opinion, based on my experience. Not everybody would agree, please excuse them 🙂

When I started working, and even during my studies, we had a rather negative view of IT researchers. We felt they were really smart, with deep technical knowledge, but on subjects that had no relation to what we would do on a daily basis. Knowing how a compiler works can be enticing, and useful in some edge cases of optimization, but we would never need that daily.

During the first fifteen years of this century, this trend endured. From what I could observe around me, the IT world did not hold researchers in high regard. We found them disconnected from reality, lost in theories or problems far removed from our own. The first stir was felt with the advent of what would become today’s cloud giants (Google first). The issues they had with the sheer volume of data to analyse, and the semantic analysis of said data, pushed them to work directly with the world they were born from: research.

From my couch, this was the subtle beginning of the change we can observe today. Researchers are wanted, hunted, asked for. We need their advanced knowledge and vision to solve very specific problems.

What changed, in my opinion, is the mindset, probably pushed by start-ups and digitization. We went from a product approach (what can I do with what I have) to a business solution approach (how can I solve this business issue).

And that changes everything.

Where we used to limit ourselves to the possibilities offered by a few products and set those up along predefined models, we are now able to consider the business issue, which has mostly nothing to do with IT. This problem is then translated into technical terms and we go look for a solution to said problem. And, if needed, we turn to research.

On the labs side, again in my opinion, what changed, at least in France, is that these teams now must get most of their budget outside of their usual public funding. The outcome is that we got closer. Just like in a Disney Christmas story (that’s the season!), we each took a step toward the other, and together we are stronger. 😉

The private market is now aware that the way the labs function and are financed is different from its own. And it can adapt to that, because it allows for the creation of new solutions, with the aid of the best minds and techs, even if they do not exist yet.

And public research has probably admitted that it could work with more specific projects, with hard deadlines and, above all, a stricter view of ROI and real-world needs.

At the end of the chain, we find the emergence of Deep Tech startups. These are newborn companies that associate research on a very advanced topic, still far from industrialization, with business partners who can project what the use could be on the market. The ultimate bonding of research and business!

I have said it several times: this analysis is born from a very restricted view, mine, and just reflects my own perception of reality. I am no researcher, I am only an infrastructure engineer 🙂

The end of POCs

Having worked in a team dedicated to them, I find it hard to admit the truth. However, the facts are here: POCs are dying.

Let’s step back a little: a POC, or proof of concept, was often the starting point for a large project. Its purpose was to prove the technical feasibility of the project, including the ability of the actors to deliver. This tool has often been used by vendors and providers to convince a customer about a new piece of tech.

Alas, winds have changed. Today vendors are pulling the plug on POCs.

In my humble opinion, the cause is pretty simple. A POC is often paid for almost exclusively by the vendor and its partners. The acknowledged purpose, as stated before: to validate the technology. But there have been a few hiccups on an otherwise smooth ride.

First, a few customers or users have abused the concept of a POC in order to get some play material and time. They were able to get their hands on some shiny new hardware or software, and brag about it, without having any intention of deploying it for real.

Second, and this is particularly true for IoT or AI topics, the vendors themselves had a different purpose for the POC: to create customer cases to communicate about, and to prove to the world that they have the technical know-how to deliver that tech.

If you pick a large company and search a little for press releases and testimonials about IoT, for example, you will find many firms that have delivered THE IoT platform for that customer, each with a glowing testimonial from one of the customer’s teams. Which raises the question: how many unique and mind-blowing IoT platforms does this customer need? Are they all for real? How many preferred IoT partners can a company have?

The wheel has turned, then, and it is becoming more difficult, with clear-minded actors anyway, to deliver a POC. Not everything is completely blocked; there are cases where the POC has real value. It is sometimes even known as a POV (proof of value), because its purpose is extended to proving the value and ROI of the whole project, beyond mere technical feasibility.

And the money comes jointly from all parties, not just the vendor. This tends to commit the customer, and limits POCs to genuine projects for the company.

So yes, recess is over. We can still play a little, though, as long as it is serious play 😀

The end of POCs

Having spent a few years in a team dedicated to this kind of activity, it was hard for me to accept reality. However, the facts are here: POCs are dying.

A quick step back: a POC, or proof of concept, is often the starting point of a large-scale project. Its goal is to prove the technical feasibility of the project, including the mastery of the various actors involved. This tool has often been used by hardware vendors and resellers to convince a customer about a new technology.

Alas, the wind has turned. Today hardware and software vendors are starting to turn down POCs.

In my opinion, the cause is fairly simple. The POC was often financed almost exclusively by the vendor and its partners. The stated goal, as mentioned above: to validate the technology. Except that a few grains of sand have come to disturb this little world.

First, some customers and users abused the POC to play with a new technology at someone else’s expense, often without any real project behind it. Sometimes it was about looking good internally, or simply filling time…

Second, and this is particularly true for IoT or AI, the vendors themselves had a primary objective different from the customer’s: to create a customer case they could communicate about, and prove to the world that they had the technical ability to deliver that technology.

Combine the two problems and you can clearly see the situation many large accounts have experienced: countless POCs, on the same technologies, but run by different internal entities and different vendors. Search a little, picking a large company at random, and see how many POCs have been delivered on the same technology by different actors…

The trend has therefore flipped, and it is becoming much harder, with clear-sighted actors anyway, to run POCs. Not everything is blocked; there are cases where the POC holds real value. It is sometimes even called a Proof of Value, because its goal is extended to proving the value and ROI of a project, beyond mere technical feasibility.

And often, the POC is financed jointly by all the actors, including the customer. That ensures a real, shared interest in the project as a whole.

So yes, recess is over. We can still play a little, though, as long as we are serious about it 😀

Fresh news

And here we are, a new post to inaugurate a few changes!

First, as you will have noticed, I now also write in French. The goal is to reach my French colleagues, to share information that is sometimes only available in French, and also to satisfy a few French grumblers who struggle with the language of Freddie Mercury. As far as possible, I will write both versions of my articles, but it will not be systematic 🙂

In practice, I have created two tags to filter the articles:

https://cloudinthealps.mandin.net/tag/english/

https://cloudinthealps.mandin.net/tag/francais/

Next, I am inaugurating French by apologizing for not writing much this week, and instead sharing articles already published elsewhere.

The first two are about making AI accessible to everyone, and were written by Frédéric Wickert:

https://sway.office.com/VJDbCZHkfAHw1qeo?ref=Link

https://sway.office.com/LCmjkDlRi7kVwkFd?ref=Link

Then, an article about the impact of AI on the world of work and employment, written by a brilliant guy:

http://www.aucoeurdesmetiers.fr/ia-des-postes-en-moins-des-emplois-en-plus/

And that will be all for this first time in French!

Brainwave, TensorFlow: AI at the edge

About two years ago, Google announced the availability of Tensor Processing Units (TPUs) in its cloud.
They are dedicated chips built for training and running machine learning models. TPUs are available within Google Cloud as an execution platform for ML (optimized, of course, for TensorFlow).
During the summer, Google unveiled the edge equivalent of these TPUs, which are named… Edge TPUs 🙂
These are very specific ASICs designed to execute ML models on an edge device, i.e. a small device close to the sensors gathering the data. This allows for fast decisions, without the need to send a truckload of data back up to the cloud.
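To make that last point concrete, here is a minimal sketch of what running a model at the edge can look like in code, using the TensorFlow Lite interpreter on a generic device. The model file, the assumed NHWC input shape, the fake camera frame and the classify() helper are my own illustrative assumptions, not a description of the Edge TPU toolchain; on an actual Edge TPU you would additionally load the Edge TPU delegate.

```python
# Hypothetical sketch: local inference on an edge device with TensorFlow Lite.
import numpy as np
import tensorflow as tf

MODEL_PATH = "model.tflite"  # assumption: a (quantized) TFLite model exported beforehand

# In a real device loop you would create the interpreter once, not per frame.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one inference on-device and return the index of the most likely class."""
    # Resize/cast the sensor frame to whatever the model expects (assumed NHWC input).
    height, width = input_details["shape"][1:3]
    data = np.resize(frame, (1, height, width, 3)).astype(input_details["dtype"])

    interpreter.set_tensor(input_details["index"], data)
    interpreter.invoke()  # the heavy lifting happens locally, on the edge device
    scores = interpreter.get_tensor(output_details["index"])[0]
    return int(np.argmax(scores))  # only this tiny result needs to leave the device

if __name__ == "__main__":
    fake_frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    print("Predicted class:", classify(fake_frame))
```

The point of the sketch is the last line of classify(): instead of streaming raw frames to the cloud, the device only ships a class index (or an alert) upstream.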

But wait for it… Microsoft just unveiled a device called Data Box Edge. I know, the main purpose of this device is to provide a storage gateway that helps you use Azure storage locally and move data between the device and Azure, hence the name. Bear with me, the path is a bit convoluted, and I would like you to enjoy every turn of it.
Data Box Edge also ships with Azure IoT Edge. This nifty piece of technology lets you run Azure-based workloads on an edge device, such as Azure Functions, Azure ML, Azure Stream Analytics, etc. IoT Edge has been out in the open for about a year now, ready to be deployed onto compatible devices.
And, and that’s where we hit the Edge TPU spot, also included in Data Box Edge is a shiny new piece of Microsoft hardware, called Brainwave. The name kind of gives away the purpose, especially after I have guided you through the maze. Anyway, this chip is designed to run AI models on an edge device, and to do it with impressive performance and efficiency.

I know, at this point you would point out that it might again be a case of “We did it first!” from Google.

I’d like to focus on a big difference between the two approaches. For once, I could not say which will win in the long term. In theory I prefer Microsoft’s approach, but that does not mean it will prevail (or that they will not change tactics and build something more like the Edge TPU).
The difference is that Google built an ASIC, whereas Microsoft used Intel FPGA to deploy its Brainwave architecture.
OK, this needs some explaining. First, the names:
ASIC means Application-Specific Integrated Circuit.
FPGA means Field-Programmable Gate Array.

https://newsroom.intel.com/news/intel-fpgas-bring-power-artificial-intelligence-microsoft-azure/
Courtesy of Intel Newsroom

You see where this is going?
An ASIC is a very specific chip, designed to do only one thing, but optimized to its core. It should be able to execute only one kind of job, but do it perfectly.
On the other hand, an FPGA is reprogrammable after its deployment, so it can adapt to future needs. Its performance is close to an ASIC’s, but not quite equal.
To complete the panorama, going from specific to general use, we would then add GPUs (Graphics Processing Units, as in your graphics card) and then CPUs (ye good ol’ Pentium).

Microsoft took the path of versatility, whereas Google focused on a particular use.
As I mentioned, I’m not sure who has the best strategy, and whether there will even be a fight, but I am very curious to see both chips in the wild!

Testing out HoloLens

During the summer I had the chance to visit the Porsche Museum in Stuttgart.

And specifically, to try out two technologies I had never experienced myself before.

 

First we had a tour through the original Porsche workshop, and built some components of the 356. Of course, that was using VR glasses. I could not find the maker of the set, glasses and controllers, but they looked a lot like HTC’s Vive.

https://newsroom.porsche.com/en/company/porsche-museum-digital-offers-virtual-reality-experience-app-future-heritage-porscheplatz-stuttgart-zuffenhausen-15868.html

Anyway, the VR experience is really immersive and you have to be careful not to try to run around with the headset on.

The motion control needs some adaptation period, but after the first tries, you usually get very comfortable grabbing a hammer and forming the body parts of the 356, or holding the spray gun to paint your very own Porsche in your favorite color.

 

Overall, a good experience, the only limitation I see would be how to interact with the real world, or rather how to avoid bumping into the objects around you. And of course, it is a fully immersive VR, so you cannot see your body inside, apart from your arms, as you handle the motion controllers.

 

I can see some uses where you could have enough empty space around you to walk around and see a future building before the furniture and all the fittings are in.

 

I was definitely more impressed by the HoloLens, mostly because mixed reality opens up a lot more uses.

In that case the point was to be able to see inside a hybrid Panamera, and understand all the components and moving parts involved in the hybrid technology.

I had seen a lot of demos using HoloLens before, but I was really curious about the level of interaction, and the finesse of the controls using specific gestures.

I have to admit the design is slick and the experience, although a bit disturbing, is both impressive and immersive.

I say disturbing, as the fact that some of the real world in your vision is overlaid by a virtual object can feel a little strange at first. You quickly get used to it, but it might be an adoption issue when deploying this technology into a daily worker toolset.

Nevertheless, I was able to quickly navigate around the car, see the insides and get some information and advice. The controls are pretty obvious and do not get in the way. And you are able to avoid anyone (or any wall) getting in your way while you tour the car.

 

There are so many businesses and industries where this tech could be used:

  • Any maintenance team dealing with very specific hardware and highly complex tooling in industry: airplane engines, industrial automation, remote sites where you could send any technician and have them guided by a remote expert, etc.
  • Training for the same hardware, for your own maintenance team
  • Anything involving 3D design: architecture, fitting and refitting of stores and offices, in-store merchandising to ensure the right placement of all the items and furniture
  • You could create guided tours, using augmented reality, to provide detailed information for the visitors

 

Argh, so many ideas!!!

Finding my way in the AI world

Wow, it has already been almost a month since I started!

My new playground covers IoT and AI, and I am supposed to have a broad understanding of both.

Regarding IoT, my recent background helped me build a solid foundation. I am fairly comfortable with the concepts, and with the technologies involved. Moreover, I have a colleague whose sole purpose is to understand and build IoT solutions, so my bases are well covered.

When it comes to Artificial Intelligence, the picture is less clear.

First, it is not a domain where I have any background, neither theoretical (math, bio science…) nor practical (any implementation of AI).

Second, AI is the 2018 version of the Cloud in 2014: everyone wants to do it, but no one has a clear definition of what we are talking about.

Last but not least, the very term AI covers almost anything, from a chatbot to augmented reality to self-driving cars.

My process has been a bit convoluted so far.

The first thing I tried was to register for e-learning sessions (MOOCs or otherwise) on the topic. I tried several, from OpenEDX to Microsoft AI school, to Google and TensorFlow. The content ranged from very high level (mostly too high for me) to algebra (a bit too deep for me).

Then I tried to read about the market. So I read a lot of whitepapers, from Microsoft, from Dataiku, from Forrester, etc.

This was rather useful, as it gave me a basic understanding of where things stood.

I recommend Dataiku’s Machine Learning Demystified: https://pages.dataiku.com/machine-learning-basics-illustrated-guidebook
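To give an idea of what those basics boil down to in practice, here is a minimal, self-contained sketch (my own toy illustration, not taken from the Dataiku guide) of the core workflow most of these resources teach: split the data, train a model, measure how it does on data it has not seen.

```python
# Toy illustration of the basic ML workflow: split data, train, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small, built-in dataset stands in for real business data.
X, y = load_iris(return_X_y=True)

# Keep part of the data aside to measure how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)  # "training" is just this call

predictions = model.predict(X_test)
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")
```

Toy examples like this one are easy to find; the hard part, as I quickly realized, is mapping them onto an actual business problem.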

But still, I felt I was stuck in the theory and could not find the practical applications.

After some discussions with my usual suspect, Microsoft, I had a look at their business use cases and testimonials.

I have to admit, some of them were pretty interesting… however there is absolutely no information about the architecture or implementation of the solution, which left me wanting.

I finally found two Microsoft websites that do a good job of describing architectural templates, along with potential use cases.

https://azure.microsoft.com/en-us/solutions/architecture/?solution=big-data

https://docs.microsoft.com/en-us/azure/architecture

This is where I started digging, and it made my mind spin with all the possibilities. You will have to wait a bit for the outcomes, and follow what SCC will be doing on this market in the coming weeks 😉

Last note, one of the smartest guy I have met at Microsoft, Frederic Wickert has started an AI business, and is writing, in French, to help debunk AI for us. I definitely recommend reading his posts! I admit I have not yet read the whole post, to avoid repeating everything here 😉