The end of POCs

Having spent a few years on a team dedicated to this kind of activity, the reality was hard for me to accept. But the facts are there: POCs are dying.

A quick step back: a POC, or proof of concept, is often the starting point of a large-scale project. Its goal is to prove the technical feasibility of the project, including its mastery by the project's various stakeholders. Vendors and resellers have often used this tool to convince a customer to adopt a new technology.

Alas, the wind has turned. Today, hardware vendors and software publishers are starting to refuse POCs.

In my view, the cause is fairly simple. The POC was often financed almost exclusively by the supplier and its partners. The stated goal, as mentioned above: validate the technology. Except that a few grains of sand got into the gears.

First, some customers and users abused the POC to play with a new technology at someone else's expense, and often without any real project behind it. Sometimes the point was just to look good internally, or to fill one's time…

Second, and this is particularly true for IoT and AI, the suppliers themselves had a primary objective different from the customer's: create a customer case study they could communicate about, proving to the world that they had the technical capacity to deliver this technology.

Combine the two problems and you can clearly see the situation many large accounts have experienced: countless POCs on the same technologies, run by different internal entities and different suppliers. Pick a large company at random and look at how many POCs have been delivered on the same technology by different players…

The tide has therefore turned, and it is becoming much harder, at least with clear-sighted players, to run a POC. Not everything is blocked, though: there are cases where the POC holds real value. It is sometimes even called a Proof of Value, since its objective is extended to proving the value and ROI of a project, beyond mere technical feasibility.

And the POC is then often financed jointly by all the players, including the customer. That guarantees a real, shared interest in the project as a whole.

So yes, recess is over. We can still play a little, though, as long as we stay serious 😀

Fresh news

And here we go, a new post to inaugurate some changes!

First, as you will have noticed, I now also write in French. The goal is to reach my French colleagues, to share information that is sometimes only available in French, and to satisfy a few French grumblers who struggle with the language of Freddie Mercury. As far as possible, I will write both versions of my articles, but it will not be systematic 🙂

 

In practice, I have created two tags that will let you filter the articles.

https://cloudinthealps.mandin.net/tag/english/

https://cloudinthealps.mandin.net/tag/francais/

 

Next, I am inaugurating French by apologizing for not writing much this week, and instead sharing articles already published elsewhere.

The first two are about making AI accessible, and were written by Frédéric Wickert:

https://sway.office.com/VJDbCZHkfAHw1qeo?ref=Link

https://sway.office.com/LCmjkDlRi7kVwkFd?ref=Link

 

Then comes an article about the impact of AI on the workplace and employment, written by a brilliant guy:

http://www.aucoeurdesmetiers.fr/ia-des-postes-en-moins-des-emplois-en-plus/

 

And that will be all for this first time in French!

Brainwave, TensorFlow: AI at the edge

About two years ago, Google announced the availability of Tensor Processing Units in its cloud.
They are dedicated chips built for training and running machine learning models. TPUs are available within Google Cloud as an execution platform for ML (optimized, of course, for TensorFlow).
During the summer, they unveiled the edge equivalent of these TPUs, which are named… Edge TPUs 🙂
These are very specific ASICs designed to execute ML models on an edge device, i.e. a small device close to the sensors gathering the data. This allows for fast decisions, without the need to send a truckload of data back up to the cloud.
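To make the edge idea concrete, here is a minimal Python sketch of the pattern (the sensor readings, model weights and alert threshold are all made up for illustration): the device scores each reading locally with a tiny linear model and only uplinks the anomalies, instead of streaming everything to the cloud.

```python
# Minimal sketch of edge-side inference: score sensor readings locally
# and only ship anomalies to the cloud. Weights, thresholds and sensor
# values are invented for illustration.

def edge_score(reading, weights=(0.8, 0.2), bias=-40.0):
    """Tiny linear 'model' run on-device: weighted sum of the features."""
    temperature, vibration = reading
    return weights[0] * temperature + weights[1] * vibration + bias

def filter_anomalies(readings, threshold=10.0):
    """Keep only the readings whose score exceeds the alert threshold."""
    return [r for r in readings if edge_score(r) > threshold]

readings = [(20.0, 5.0), (70.0, 30.0), (25.0, 8.0)]
alerts = filter_anomalies(readings)
# Only the hot, vibrating reading crosses the threshold and gets uplinked.
print(alerts)  # → [(70.0, 30.0)]
```

A real Edge TPU or Brainwave deployment would run a compiled neural network instead of this toy linear model, but the decide-at-the-edge pattern is the same.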

But wait for it… Microsoft just unveiled a device called Data Box Edge. I know, the main purpose of this device is to provide a storage gateway that helps you use Azure storage locally and move data between the device and Azure, hence the name. Bear with me, the path is a bit convoluted, and I would like you to enjoy every turn of it.
Data Box Edge also ships with what has been called IoT Edge. This nifty piece of technology enables you to run Azure-based workloads on an edge device, such as Azure Functions, Azure ML, Azure Stream Analytics, etc. IoT Edge has been out in the open for about a year now, ready to be deployed onto compatible devices.
And, and that's where we hit the Edge TPU spot, Data Box Edge also includes shiny new Microsoft hardware, called Brainwave. The name kind of gives away the purpose, especially after I guided you through the maze. Anyway, this chip is designed to run AI models on an edge device, and to do it with impressive performance and efficiency.

I know, at this point you might point out that this could again be a case of "We did it first!" from Google.

I'd like to focus on a big difference between the two approaches. For once, I could not say which one will win in the long term. In theory I prefer Microsoft's approach, but that does not mean it will prevail (or that they will not change tactics and build something more like the Edge TPU).
The difference is that Google built an ASIC, whereas Microsoft used Intel FPGAs to deploy its Brainwave architecture.
OK, this needs some explaining. First, the names:
ASIC means Application-Specific Integrated Circuit.
FPGA means Field-Programmable Gate Array.

https://newsroom.intel.com/news/intel-fpgas-bring-power-artificial-intelligence-microsoft-azure/
Courtesy of Intel Newsroom

You see where this is going?
An ASIC is a very specific chip, designed to do only one thing, but optimized to its core. It should be able to execute only one kind of job, but do it perfectly.
On the other hand, an FPGA is reprogrammable after its deployment, able to adapt to future needs. Its performance is close to an ASIC's, but not quite equal.
To complete the panorama, going from specific to general use, we would then add GPUs (Graphics Processing Units, as in your graphics card) and then CPUs (ye good ol' Pentium).

Microsoft took the path of versatility, whereas Google focused on a particular use.
As I mentioned, I’m not sure who has the best strategy, and whether there will even be a fight, but I am very curious to see both chips in the wild!

Testing out Hololens

During the summer I had the chance to visit the Porsche Museum in Stuttgart.

And specifically, to try out two technologies I had never experienced myself before.

 

First we had a tour through the original Porsche workshop, and built some components of the 356. Of course, that was using VR glasses. I could not find the maker of the set, glasses and controllers, but they looked a lot like HTC’s Vive.

https://newsroom.porsche.com/en/company/porsche-museum-digital-offers-virtual-reality-experience-app-future-heritage-porscheplatz-stuttgart-zuffenhausen-15868.html

Anyway, the VR experience is really immersive and you have to be careful not to try to run around with the headset on.

The motion control needs some adaptation period, but after the first tries, you usually get very comfortable grabbing a hammer and forming the body parts of the 356, or holding the spray gun to paint your very own Porsche in your favorite color.

 

Overall, a good experience, the only limitation I see would be how to interact with the real world, or rather how to avoid bumping into the objects around you. And of course, it is a fully immersive VR, so you cannot see your body inside, apart from your arms, as you handle the motion controllers.

 

I can see some uses where you could have enough empty space around you to walk around and see a future building before the furniture and all the fittings are in.

 

I was definitely more impressed by the Hololens, mostly because the mixed reality opens up a lot more usages.

In that case the point was to be able to see inside a hybrid Panamera, and understand all the components and moving parts involved in the hybrid technology.

I had seen a lot of demos using Hololens before, but I was really curious about the level of interaction, and the finesse of the controls using specific gestures.

I have to admit the design is slick and the experience, although a bit disturbing, is both impressive and immersive.

I say disturbing because having part of the real world in your vision overlaid by a virtual object can feel a little strange at first. You quickly get used to it, but it might be an adoption issue when deploying this technology into a daily worker's toolset.

Nevertheless, I was able to quickly navigate around the car, see the insides and get some information and advice. The controls are pretty obvious and do not get in the way. And you are able to avoid anyone (or any wall) getting in your way while you tour the car.

 

There are so many businesses and industries where this tech could be used:

  • Any maintenance team dealing with very specific hardware and highly complex tooling in industry: airplane engines, industrial automation, remote stations where you could send any technician, guided by a remote expert, etc.
  • Training on the same hardware, for your own maintenance team
  • Anything involving 3D design: architecture, fitting and refitting of stores and offices, in-store merchandising to ensure the right placement of all the items and furniture
  • Guided tours, using augmented reality, to provide detailed information for the visitors

 

Argh, so many ideas!!!

Finding my way in the AI world

Wow, it has already been almost a month since I started!

My new playground covers IoT and AI, and I am supposed to have a broad understanding of both.

Regarding IoT, my recent background helped me build a solid groundwork. I am fairly comfortable with the concepts, and with the technologies involved. Moreover, I have a colleague whose sole purpose is to understand and build IoT solutions, so my bases are well covered.

When it comes to Artificial Intelligence, the picture is less clear.

First, it is not a domain where I have any background, neither in theory (math, bio science…) nor in practice (any implementation of AI).

Second, AI is the 2018 version of the Cloud in 2014: everyone wants to do it, but no one has a clear definition of what we are talking about.

Last but not least, the very term AI covers almost anything, from a chatbot to augmented reality to self-driving cars.

My process has been a bit convoluted so far.

The first thing I tried was to register for e-learning (MOOC or otherwise) sessions on the topic. I tried several, from OpenEDX to Microsoft AI school, to Google and TensorFlow. The content ranged from very high level (mostly too shallow for me) to algebra (a bit too deep for me).

Then I tried to read about the market. So I read a lot of whitepapers, from Microsoft, from Dataiku, from Forrester, etc.

This was rather useful, as it gave me a basic understanding of the state of the market.

I recommend Dataiku's Machine Learning Demystified: https://pages.dataiku.com/machine-learning-basics-illustrated-guidebook

But still, I felt I was stuck in the theory and could not find the practical applications.

After some discussions with my usual suspect, Microsoft, I had a look at their business use cases and testimonials.

I have to admit, some of them were pretty interesting… however there is absolutely no information about the architecture or implementation of the solutions, which left me wanting.

I finally found two Microsoft websites that do a good job of describing architectural templates, along with potential use cases.

https://azure.microsoft.com/en-us/solutions/architecture/?solution=big-data

https://docs.microsoft.com/en-us/azure/architecture

This is where I started digging, and it made my mind spin with all the possibilities. You will have to wait a bit for the outcomes, and follow what SCC will be doing on this market in the coming weeks 😉

Last note: one of the smartest guys I have met at Microsoft, Frédéric Wickert, has started an AI business and is writing, in French, to help demystify AI for us. I definitely recommend reading his posts! I admit I have not yet read them all, to avoid repeating everything here 😉

Blameless post-mortem

Nope, my new position is not dead yet, thank you very much.
What I mean by this title is the meeting held in any IT department after a major incident has been resolved, where all the team members who worked on the incident gather to discuss what went wrong, and how to improve tools and processes to do better next time.

I say blameless because it is a very good practice to avoid finger pointing, in general and particularly in these meetings. If you want people to be honest and share their best insights, keep in mind that these post-mortems have to cultivate an atmosphere of trust. The aim is really to find out how the events unfolded, which information was gathered, what went wrong, which steps were smart, which ones did not work properly, etc.
For more on this, I recommend some DevOps sessions and talks, like this one by @Jasonhand from VictorOps: It's Not Your Fault – Blameless Post-mortems

But my point today is to write about another kind of post-mortem which I discussed with a friend a few months back.
The post-mortem methodology could and should be used in settings other than IT infrastructure incidents. It should be extended to sales, whether you win or lose the deal. It could be applied personally to any job interview, even if there are usually not that many people involved. And it could be used after any major event in your life, personal or professional.

The main focus for me right now would be the sales post-mortem. In most companies I have worked for, the sales pipeline strategy is mostly to respond to as many RFPs as possible. Statistically, it makes sense, as you are doomed to succeed every once in a while. In terms of smart strategy… let's say I am not completely convinced. I tend to prefer a targeted answer for the cases where my team/company can bring real value and help the customer, while bringing an attractive project to our team. I usually do not hesitate to forgo any RFP where there is nothing interesting, or that puts us in jeopardy without bringing any value, or sexiness, to our job.
When you have time to focus on very interesting cases and invest time in those, you will usually find this time well spent, in both the short and long term. And you should take time, whether you win or lose, to hold this post-mortem meeting with your team. It is good to get the feelings and insights about the outcome from everyone involved. And I mean everyone. The first stakeholder you should at least get feedback from is the customer. I try to build a trust relationship with a potential customer during the RFP process, so we can exchange honest points of view about our positioning and the project expectations. During the process, this helps everyone stay on the right track. And afterwards, it helps to know why you have not been chosen.

Beyond knowing, the most important aspect of these post-mortems is to implement some changes on your process, to be more relevant and have a better chance for success the following time around.

And that’s it for early morning musings, ’til next time!

IoT Challenges

After a long summer break, getting back to writing is a bit difficult, so here is a first post for a new era. I'll be switching jobs in early September, so there might be a slight variation in the subjects I'll write about.

As highlighted in Gartner's 2018 Hype Cycle study, IoT is now a mature tech and we will see more and more large-scale projects deployed in the wild. I would like to expand a bit on what it entails to start an IoT initiative, whether it be to design a new product to sell, or to gain insight and improve your own processes.
The steps are familiar to anyone who has ever come close to a project in his/her life:

1. Design the solution
2. Gather the requirements
3. Choose the components, protocols
4. Build all the processes (logistics, operations, IT, support)
5. Market and sell
6. Maintain and deliver new functionalities

In terms of project management, there is nothing to learn here. I just wanted to highlight the specifics of an IoT project for these steps. There are some particularities due to the type of project, and some points to remember that should be obvious but are often forgotten.

Design the solution

What I mean here is a high-level, functional design that describes what you are aiming to deliver to your users or customers. Nothing fancy, nothing technical, just plain business.

Gather the requirements

Nothing new here, just make sure you include the future functions and the way you are going to develop them. For example, if you start with an MVP (Minimum Viable Product) and build from there in short cycles, you need a long-term plan/strategy that will keep everything on track. This plan should help you define your long-term requirements.

Choose the components/protocols

This is a technical step, rather complex to execute today, as there are so many solutions to one single question out there. And you have to keep in mind the current state of the art, along with what you expect this state to be in 3, 5 or even 10 years.

Build all the processes (logistics, operations, IT, support)

From my experience, this is an often disregarded step, even by some companies that have been in the industry for decades. The simple question is: you are going to deliver a physical product to your users. What happens when the product breaks? Who are they going to call (and no, the answer has nothing to do with an 80s movie 🙂 )? How are you going to manage your replacements, stock, warranties, etc.? How do you handle servicing the device: remotely, using your current support team, or locally? One specific suggestion, coming from experience: remember to include the ability to remotely upgrade your firmware 😉
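To illustrate that last suggestion, here is a small sketch of the device-side logic for an over-the-air update check. The version strings and the injected fetch function are hypothetical, and a real implementation would also verify the firmware image's signature before flashing anything.

```python
# Sketch of an over-the-air firmware version check. Version strings and
# the fetch function are made up; real devices must also verify a
# signature on the downloaded image before flashing it.

def parse_version(v):
    """Turn '1.4.2' into (1, 4, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def needs_update(current, latest):
    return parse_version(latest) > parse_version(current)

def check_for_update(current, fetch_latest):
    """fetch_latest is injected, so the logic stays testable offline."""
    latest = fetch_latest()
    if needs_update(current, latest):
        return f"upgrade {current} -> {latest}"
    return "up to date"

# A naive string comparison would call "1.10.0" older than "1.4.2";
# the tuple comparison gets it right.
print(check_for_update("1.4.2", lambda: "1.10.0"))  # → upgrade 1.4.2 -> 1.10.0
```

The point of injecting `fetch_latest` is that the update decision can be unit-tested without any network or hardware, which matters when a bad update can brick devices in the field.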

Market and sell

Nothing to declare here. This should be rather standard. One word of advice: most IoT projects that succeed build on their ecosystem and the integration of new functionalities. You should probably add that to your strategy, and to the marketing materials.

Maintain and deliver new functionalities

This point relates both to the maintenance and support I raised earlier, and to the lifecycle of your product.
Think about the many products we have seen with an incredible start in sales or customer acquisition, that dropped off the board after a few weeks because nothing happened beyond the first wow effect. There is nothing more infuriating, as an end user, than a product with no bugfixes, or without any new functionality beyond what came out of the box. Take a mobile game, Pokemon Go: it had an amazing start, with millions of daily users. But as the hype faded, rumored functions and abilities did not come out, and the game's statistics went down.
https://www.wandera.com/pokemon-go-data-analysis-popular-game/

The short version is: a connected product is a physical product, with all the requirements that such a project should include. Do not go too fast once your Proof of Concept works. Think long term, and try not to be dazzled by a partner or consultant showing off what a POC platform does on a demo screen 😉

Managed Kubernetes and security

Almost a sponsored post today, or better: a shared announcement.

You probably know that I follow Kubernetes rather closely, especially managed Kubernetes services (AKS, EKS or OpenShift, for example). One domain where these offerings have been lacking is networking and security.

It is still a very sensitive subject for our customers, for container-related projects, and for public cloud projects in general. Security and networking teams have trouble adapting to public cloud paradigms and architectures. There is some fear of losing control, some basic fear of the unknown, and some real worry about how to handle networking and security.
Kubernetes (and the other orchestrators) adds another abstraction layer on top of the existing public cloud platforms, which does nothing to alleviate those fears, to say nothing of complexity and transparency.

There are some very good solutions out there to manage network overlays into Kubernetes. My favourite is Calico, but you may like any of those. I’ll stick with Calico for a simple reason, which you will see below.

Microsoft and AWS are both working hard to provide a network overlay in their managed Kubernetes offerings. They each chose their own path, but both will reach approximately the same point in a short time.

Thanks to Jean Poizat, we have the two announcements.
1) Calico for Azure: https://www.tigera.io/tigera-calico-coming-to-azure-kubernetes-service-aks/
2) For AWS: https://itnext.io/kubernetes-is-hard-why-eks-makes-it-easier-for-network-and-security-architects-ea6d8b2ca965

The summary is that Calico will be integrated into AKS in a few weeks/months, and EKS will include AWS CNI.
And that is exactly what we, and our customers, were waiting for: managed Kubernetes, with security!
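To give an idea of what this unlocks, here is a sketch of the kind of Kubernetes NetworkPolicy object that Calico enforces, built as a plain Python dict (the namespace, label values and port are placeholders):

```python
# Build a Kubernetes NetworkPolicy manifest as a dict. This is the
# standard networking.k8s.io/v1 object that Calico enforces; names,
# labels and the port are placeholders for illustration.

def network_policy(name, namespace, app_label, allowed_from, port):
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            # Which pods this policy protects:
            "podSelector": {"matchLabels": {"app": app_label}},
            "policyTypes": ["Ingress"],
            # Who may talk to them, and on which port:
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": allowed_from}}}],
                "ports": [{"protocol": "TCP", "port": port}],
            }],
        },
    }

policy = network_policy("allow-frontend", "shop", "backend", "frontend", 8080)
print(policy["spec"]["podSelector"])
```

Once such an integration is live, submitting a manifest like this restricts ingress so that only `frontend` pods can reach the `backend` pods on TCP 8080, which is exactly the kind of control security teams have been asking for.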

Designing your own job

Depending on how you look at it, this is the third time it has happened to me.
Being able to design your own job, within certain limits, is an amazing opportunity.
I will not go into too many details, as some of it is work in progress, but the process was amazingly energizing and I wanted to share a bit of that energy.
For my current job, I met my future boss on the recommendation of a former colleague. We discussed many things, from ITIL to Managed Services, and also public cloud and the need to bring dev and ops teams closer. We went through those kinds of talks several times, at least four if memory serves. We went from a job that looked like an Ops engineer/ITIL practitioner to something closer to an Azure tech lead.
In my previous position I was also offered a promotion, and was able to discuss some of the content and responsibilities of the future role. I was also able to step down when the time came for me to admit that it was not an ideal position, for me or for the company. Which was really appreciated, at least on my part.

And once again, a few weeks ago, I was called out of the blue by a colleague's boss. He started by discussing his own future and what he was trying to design. He wanted to build something new, and was searching for a partner to build it with. And within that scheme, he described a position very similar to my dream job, and offered it to me.
I almost fell off my chair.
At that point I was ready to accept, without having any more details about the exact role and responsibilities, or even the salary. That’s where my future boss started to ask me what I would include or exclude from that job description, and how I could make it my own. My mind just froze.
It took some time for me to recover and start thinking again. After some lame jokes, we discussed the position, and what we would like to build together. It took us several meetings and calls to see through the fog, as we are really going to build something new together, and we cannot rely much on what exists around us.
The last funny thing to happen was that my next interview was with the CEO of the company, who was convinced by both of us in less than 35 minutes. I could not believe my luck in getting there.
Anyway, that’s it for the bragging post. I really needed to write that down to make it real (even if I signed and will start by the end of the summer 🙂 )

Autonomous versus autonomic systems

This is a difficult topic. I have to admit I am still not completely comfortable with all the concepts and functions.
However, the thinking is amazingly interesting, and I will take some time to ingest everything.
First things first, I will use this post to summarize what I have learned so far.

How did I end up reading that kind of work, you ask? Weeeellll, that's easy 🙂
Brendan Burns, in one of his Ignite '17 sessions, used the comparison "autonomous vs autonomic" to discuss Kubernetes.
This got me thinking about the actual comparison, and aided by our trusted friend, Google, I found a NASA paper on the subject (https://www.researchgate.net/publication/265111077_Autonomous_and_Autonomic_Systems_with_Applications_to_NASA_Intelligent_Spacecraft_Operations_and_Exploration_Systems). I started to read it, but it was a bit obscure for me, and scientific English, applied to space research, was a bit too hard for an introduction to the topic of autonomic systems.
Some more research, helped by my beloved wife, led to a research thesis, in French, by Rémi Sharrock (https://www.linkedin.com/in/tichadok/). The thesis is available right here: https://tel.archives-ouvertes.fr/tel-00578735/document. It covers the same topic, but applied to distributed software and infrastructure, which is way more familiar to me 🙂

The point where I am right now is just past getting the definitions and concepts straight.
I will try to describe what I understand about automated, autonomous and autonomic systems.
There is a progression from the first to the second, and from the second to the third concept.
Let's start with automated. An automated system is just like an automaton in the old world: something that executes a series of commands, at the order of a human (or another system). For example, the thermostat at home that sends the inside and outside temperatures to the heater controller.

There is no brain in there, or almost none.
The next step up is an autonomous system. This one is able to make decisions and act on the data it captures. To continue with the thermostat example: a heater controller that takes the current temperature, both inside and outside, and decides whether to start heating the house, and how.
The short version is that the system is able to execute a task by itself.

And then we have an autonomic system. This one has a higher view of its environment, and should be able to ensure that it will always be able to execute its tasks. The heater example does not stretch that far, so let's take a smart mower. Its first degree of autonomicity is the way it monitors its battery level and returns to its base station to recharge, to make sure it can continue its task, which is mowing the lawn.
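A toy Python sketch may help fix the three levels, reusing the thermostat and mower examples (the thresholds and battery figures are arbitrary): the automated function merely reports, the autonomous one decides, and the autonomic mower also manages its own capacity to keep working.

```python
# Toy illustration of the three levels from the text. Thresholds and
# battery figures are arbitrary.

def automated_report(sensor_temp):
    """Automated: just relays a measurement, no decision at all."""
    return f"temperature={sensor_temp}"

def autonomous_heater(inside, target=20.0):
    """Autonomous: decides by itself whether to heat."""
    return "heat" if inside < target else "idle"

class AutonomicMower:
    """Autonomic: also watches over its own ability to keep working."""
    def __init__(self, battery=100):
        self.battery = battery

    def step(self):
        if self.battery < 20:      # self-management: protect the mission
            self.battery = 100     # recharge at the base station
            return "recharging"
        self.battery -= 30         # mowing drains the battery
        return "mowing"

mower = AutonomicMower(battery=50)
print([mower.step() for _ in range(4)])
# → ['mowing', 'mowing', 'recharging', 'mowing']
```

The mower never has to be told to recharge: keeping itself able to mow is part of its own behavior, which is the essence of the autonomic level.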
There are multiple pillars of autonomicity. Rémi Sharrock described four in his thesis, and I tend to trust him on this:

Each of these four pillars can be implemented in the system, to various degrees.
I am not yet comfortable enough to describe the four pillars precisely, but that will come in a future post!