The comeback of Research

What follows is the expression of a purely personal opinion, based on my experience. Not everybody will agree; please excuse them 🙂

When I started working, and even during my studies, we had a rather negative view of IT researchers. We felt they were really smart, with deep technical knowledge, but on subjects that had no relation to what we would do on a daily basis. Knowing how a compiler works can be enticing, and useful in some edge cases of optimization, but we would never need that daily.

During the first 15 years, this trend endured. From what I could observe around me, researchers held little attraction for the IT world. We found them disconnected from reality, lost in theories or problems far removed from our own. The first stir was felt with the advent of what would become today’s cloud giants (Google first). The issues they faced around the volume of data they had to analyse, and the semantic analysis of that data, pushed them to work directly with the world they were born from: research.

From my couch, this was the subtle beginning of the change we can observe today. Researchers are wanted, hunted, sought after. We need their advanced knowledge and vision to solve very specific problems.

What changed, in my opinion, is the mindset, probably pushed by start-ups and digitization. We went from a product approach (what can I do with what I have) to a business solution approach (how can I solve this business issue).

And that changes everything.

Where we used to limit ourselves to the possibilities offered by a few products and set those up along predefined models, we are now able to consider the business issue, which has mostly nothing to do with IT. This problem is then translated into technical terms and we go look for a solution to said problem. And, if needed, we turn to research.

On the labs side, again in my opinion, what changed, at least in France, is that these teams now have to get most of their budget outside of their usual public funding. The outcome is that we got closer. Just like in a Disney Christmas story (that’s the season!), we each took a step toward the other, and together we are stronger. 😉

The private market is now aware that the way labs function and are financed is different. And it can adapt to that, because it allows for the creation of new solutions, with the aid of the best minds and technologies, even if those solutions do not exist yet.

And public research has probably admitted that it could work on more specific projects, with hard deadlines and mostly a strict view of ROI and real-world needs.

At the end of the road, we find the emergence of Deep Tech startups. These are newborn companies that associate research on a very advanced topic, still very far from industrialization, with business partners who can project what the use could be on the market. The ultimate bonding of research and business!

I have said it several times: this analysis is born from a very restricted view, mine, and just reflects my own perception of reality. I am no researcher, I am only an infrastructure engineer 🙂

Brainwave, TensorFlow : AI at the edge

About two years ago, Google announced the availability of Tensor Processing Units (TPUs) in its cloud.
They are dedicated chips built for training and running Machine Learning models. TPUs are available within Google Cloud as an execution platform for ML (of course, optimized for TensorFlow).
During the summer, they unveiled the edge equivalent of these TPUs, which are named… Edge-TPUs 🙂
These are very specific ASICs designed to execute ML models on an edge device, i.e. a small device close to the sensors gathering the data. This allows for a fast decision, without the need to send a truckload of data back up to the cloud.
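
To make this a bit more concrete, here is a minimal sketch of what the software side of edge deployment can look like, assuming a recent TensorFlow and a model already exported as a SavedModel in ./my_model (both are my assumptions, not part of Google’s announcement):

```python
# Minimal sketch: convert a trained model to TensorFlow Lite for edge inference.
# Assumes a recent TensorFlow and a SavedModel exported to ./my_model.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # size/latency optimizations (quantization)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# For an Edge-TPU, this .tflite file would then be compiled with Google's
# edgetpu_compiler so the ASIC can execute it right next to the sensors.
```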

But wait for it… Microsoft has just unveiled a device called Data Box Edge. I know, the main purpose of this device is to provide a storage gateway to help you use Azure storage locally, and move data between the device and Azure, hence the name. Bear with me, the path is a bit convoluted, and I would like you to enjoy every turn of it.
Data Box Edge is also equipped with what is called Azure IoT Edge. This nifty piece of technology enables you to run Azure-based workloads on an edge device, such as Azure Functions, Azure ML, Azure Stream Analytics, etc. IoT Edge has been out in the open for about a year now, to be deployed onto compatible devices.
And, and that’s where we hit the Edge-TPU spot, also included in Data Box Edge is a shiny new piece of Microsoft hardware, called Brainwave. The name kind of gives away the purpose, especially after I guided you through the maze. Anyway, this chip is designed to run AI models on an edge device, and to do it with impressive performance and efficiency.

I know, at this point, you would point out that it might again be a case of “We did it first!” from Google.

I’d like to focus on a big difference between the two approaches. For once, I could not say which would win in the long term. In theory I prefer the approach from Microsoft, but that does not mean it will prevail (or that they would not change tactics and build something more like the Edge-TPU).
The difference is that Google built an ASIC, whereas Microsoft used Intel FPGAs to deploy its Brainwave architecture.
OK, this needs some explaining. First the names :
ASIC means Application Specific Integrated Circuit.
FPGA means Field Programmable Gate Array.

https://newsroom.intel.com/news/intel-fpgas-bring-power-artificial-intelligence-microsoft-azure/
Courtesy of Intel Newsroom

You see where this is going?
An ASIC is a very specific chip, designed to do only one thing, but optimized to its core. It should be able to execute one kind of job, but do it perfectly.
On the other hand, an FPGA is reprogrammable after its deployment, to be able to adapt to future needs. Its performance is close to an ASIC’s, but not quite equal.
To complete the panorama, going from specific to general use, we would then add GPUs (Graphics Processing Units, as in your graphics card) and then CPUs (ye good ol’ Pentium).

Microsoft took the path of versatility, whereas Google focused on a particular use.
As I mentioned, I’m not sure who has the best strategy, and whether there will even be a fight, but I am very curious to see both chips in the wild!

Testing out HoloLens

During the summer I had the chance to visit the Porsche Museum in Stuttgart.

And specifically, to try out two technologies I had never experienced myself before.

 

First we had a tour through the original Porsche workshop, and built some components of the 356. Of course, that was using VR glasses. I could not find the maker of the set, glasses and controllers, but they looked a lot like HTC’s Vive.

https://newsroom.porsche.com/en/company/porsche-museum-digital-offers-virtual-reality-experience-app-future-heritage-porscheplatz-stuttgart-zuffenhausen-15868.html

Anyway, the VR experience is really immersive and you have to be careful not to try to run around with the headset on.

The motion control needs some adaptation period, but after the first tries, you usually get very comfortable grabbing a hammer and forming the body parts of the 356, or holding the spray gun to paint your very own Porsche in your favorite color.

 

Overall, a good experience; the only limitation I see would be how to interact with the real world, or rather how to avoid bumping into the objects around you. And of course, it is fully immersive VR, so you cannot see your body inside, apart from your arms, as you handle the motion controllers.

 

I can see some uses where you could have enough empty space around you to walk around and see a future building before the furniture and all the fittings are in.

 

I was definitely more impressed by the HoloLens, mostly because mixed reality opens up a lot more uses.

In that case the point was to be able to see inside a hybrid Panamera, and understand all the components and moving parts involved in the hybrid technology.

I had seen a lot of demos using HoloLens before, but I was really curious about the level of interaction, and the finesse of the controls using specific gestures.

I have to admit the design is slick and the experience, although a bit disturbing, is both impressive and immersive.

I say disturbing, as the fact that some of the real world in your vision is overlaid by a virtual object can feel a little strange at first. You quickly get used to it, but it might be an adoption issue when deploying this technology into a daily worker toolset.

Nevertheless, I was able to quickly navigate around the car, see the insides and get some information and advice. The controls are pretty obvious and do not get in the way. And you are able to avoid anyone (or any wall) getting in your way while you tour the car.

 

There are so many businesses and industries where this tech could be used :

  • Any maintenance team for very specific hardware and high-complexity tooling in industry : airplane engines, industrial automatons, remote stations where you could send any technician to be guided by a remote expert, etc.
  • Training for the same hardware, for your own maintenance team
  • Anything involving 3D design : architecture, fitting and refitting of stores and offices, in-store merchandising to ensure the right placement of all the items and furniture
  • You could create guided tours, using augmented reality, to provide detailed information for the visitors

 

Argh, so many ideas!!!

Finding my way in the AI world

Wow, it has already been almost a month since I started!

My new playground covers IoT and AI, and I am supposed to have a broad understanding of both.

Regarding IoT, my recent background helped me build a solid groundwork. I am fairly comfortable with the concepts, and with the technologies involved. Moreover, I have a colleague whose sole purpose is to understand and build IoT solutions, so my bases are well covered.

When it comes to Artificial Intelligence, the picture is less clear.

First, it is not a domain where I have any background, neither in the theory (math, bio science…) nor in practice (any implementation of AI).

Second, AI is the 2018 version of the Cloud in 2014 : everyone wants to do it, but no one has a clear definition of what we are talking about.

Last but not least, the very term AI covers almost anything, from a chatbot to augmented reality to self-driving cars.

My process has been a bit convoluted so far.

The first thing I tried was to register for e-learning (MOOC or otherwise) sessions on the topic. I tried several, from OpenEDX to Microsoft AI school, to Google and TensorFlow. The content ranged from very high level (which was mostly too high for me) to algebra (which was a bit too deep for me).

Then I tried to read about the market. So I read a lot of whitepapers, from Microsoft, from Dataiku, from Forrester, etc.

This was rather useful, as it gave me a basic understanding of where things stood.

I recommend Dataiku Machine Learning Demystified : https://pages.dataiku.com/machine-learning-basics-illustrated-guidebook

But still, I felt I was stuck in the theory and could not find the practical applications.

After some discussions with my usual suspect, Microsoft, I had a look at their business use cases and testimonials.

I have to admit, some of them were pretty interesting… however there is absolutely no information about the architecture or implementation of the solution, which left me wanting.

I finally found two Microsoft websites that did a good job of describing architectural templates, along with potential use cases.

https://azure.microsoft.com/en-us/solutions/architecture/?solution=big-data

https://docs.microsoft.com/en-us/azure/architecture

This is where I started digging, and it made my mind spin with all the possibilities. You will have to wait a bit for the outcomes, and follow what SCC will be doing on this market in the coming weeks 😉

Last note: one of the smartest guys I have met at Microsoft, Frederic Wickert, has started an AI business and is writing, in French, to help demystify AI for us. I definitely recommend reading his posts! I admit I have not yet read them all, to avoid repeating everything here 😉

Blameless post-mortem

Nope, my new position is not dead yet, thank you very much.
What I mean by this title is a meeting, common in any IT service after a major incident has been resolved, where all the team members who worked on the incident gather and discuss what went wrong, and how to improve tools and processes to do better next time.

I specify blameless, as it is a very good practice to avoid finger-pointing, generally and particularly in these meetings. If you want people to be honest and share their best insights, you have to keep in mind that these post-mortems have to cultivate an atmosphere of trust. The aim is really to find out how the events unfolded, what information was gathered, what went wrong, which steps were smart, which ones did not work properly, etc.
For more information about that, I recommend some DevOps sessions and talks, like this one from @Jasonhand from VictorOps : It’s Not Your Fault – Blameless Post-mortems

But my point today is to write about another kind of post-mortem which I discussed with a friend a few months back.
The methodology of a post mortem could and should be used in different settings than just IT infrastructure incidents. It should be extended to sales, whether you manage to win or lose the deal. It could be applied personally to any job interview, even if there are usually not that many people involved. And it could be used after any major event in your life, personal or professional.

The main focus for me right now would be the sales post-mortem. In most companies I have worked for, the sales pipeline strategy is mostly to respond to as many RFPs as possible. Statistically, it makes sense, as you are bound to succeed every once in a while. In terms of smart strategy… let’s say I am not completely convinced. I tend to prefer a targeted answer to the cases where my team/company can bring real value and help the customer, while bringing an attractive project to our team. I usually do not hesitate to forgo any RFP where there is nothing interesting, or that puts us in jeopardy without bringing any value, or sexiness, to our job.
When you have time to focus on very interesting cases and invest time in those, you will usually find this time useful, in both the short and long term. And you should take time, whether you win or lose, to have this post-mortem meeting with your team. It is good to get the feelings and insights from everyone involved about the outcome. And I mean everyone. The first stakeholder you should at least get feedback from is the customer. I try to build a trust relationship with a potential customer during the RFP process, where we can exchange honest points of view about our positioning and the project expectations. During the process this helps everyone stay on the right track. And afterwards, it helps to know why you have not been chosen.

Beyond knowing, the most important aspect of these post-mortems is to implement some changes to your process, to be more relevant and have a better chance of success the next time around.

And that’s it for early morning musings, ’til next time!

IoT Challenges

After a long summer break, getting back to writing is a bit difficult, so here is a first post for a new era. I’ll be switching jobs in early September, so there might be a slight variation in the subjects I’ll write about.

As highlighted in Gartner’s 2018 Hype Cycle study, IoT is now a mature tech and we will see more and more large-scale projects deployed in the wild. I would like to expand a bit on what it entails to start an IoT initiative, whether it be to design a new product to sell, or to gain some insight and improve your own processes.
The steps are familiar to anyone who has ever come close to a project in his/her life:

1. Design the solution
2. Gather the requirements
3. Choose the components, protocols
4. Build all the processes (logistics, operations, IT, support)
5. Market and sell
6. Maintain and deliver new functionalities

In terms of project management, there is nothing to learn here. I just wanted to highlight the specifics of an IoT project for these steps. There are some particularities due to the type of project, or just points to remember that should be obvious but are often forgotten.

Design the solution

What I mean here is a high level design, functional, that will describe what you are aiming to deliver to your users or customers. Nothing fancy, nothing technical, just plain business.

Gather the requirements

Nothing new here, just make sure you include the future functions and the way you are going to develop. For example, if you start with an MVP (Minimum Viable Product) and build from there in short cycles, you need to have a long-term plan/strategy that will keep everything on track. And this plan should help you define your long-term requirements.

Choose the components/protocols

This is a technical step, rather complex to execute today, as there are so many solutions to one single question out there. And you have to keep in mind the current state of the art, along with what you expect this state to be in 3, 5 or even 10 years.

Build all the processes (logistics, operations, IT, support)

From my experience, this is an often disregarded step, even by some companies that have been in the industry for decades. The simple question is : you are going to deliver a (physical) product to your users. What happens when the product breaks? Who are they going to call (and no, the answer has nothing to do with an 80s movie 🙂 )? How are you going to manage your replacements, stock, warranties, etc.? How do you handle servicing the device? Remotely, using your current support team, or locally? One specific suggestion, coming from experience : remember to include the ability to remotely upgrade your firmware 😉
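
Since that last point bites so many projects, here is a very rough sketch of what a remote firmware update check could look like on the device side. Everything here (the endpoint, the manifest format, the file path) is hypothetical; it only illustrates the idea of checking a version, verifying integrity, and only then handing over to the actual flashing routine:

```python
# Hypothetical over-the-air (OTA) update check for a connected device.
import hashlib
import json
import urllib.request

CURRENT_VERSION = "1.2.0"
MANIFEST_URL = "https://updates.example.com/device/firmware.json"  # hypothetical endpoint

def check_and_fetch_update():
    """Return the new version if an update was downloaded and verified, else None."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)  # e.g. {"version": "1.3.0", "url": "...", "sha256": "..."}
    if manifest["version"] == CURRENT_VERSION:
        return None  # already up to date
    with urllib.request.urlopen(manifest["url"], timeout=60) as resp:
        firmware = resp.read()
    # Verify integrity before touching anything on the device.
    if hashlib.sha256(firmware).hexdigest() != manifest["sha256"]:
        raise ValueError("checksum mismatch, refusing to install")
    with open("/tmp/firmware.bin", "wb") as f:
        f.write(firmware)
    return manifest["version"]  # hand over to the actual flashing routine
```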

Market and sell

Nothing to declare here. This should be rather standard. One word of advice : most IoT projects that succeed build on their ecosystem and on the integration of new functionalities. You should probably add that to your strategy, and to the marketing materials.

Maintain and deliver new functionalities

This point relates both to the maintenance and support I have raised earlier, and to the lifecycle of your product.
Think about the many products we have seen with an incredible start in sales or customer acquisition, that dropped off the board after a few weeks, because nothing happened beyond the first wow effect. There is nothing more infuriating, as an end user, than a product with no bugfixes, or without any new functionality beyond what came out of the box. For example, take a mobile game, Pokémon Go : they had an amazing start, with millions of daily users. But, as the hype faded out, rumored functions and abilities did not materialize, and the game statistics went down.
https://www.wandera.com/pokemon-go-data-analysis-popular-game/

The short version is : a connected product is a physical product, with all the requirements that should be included in such a project. Do not go too fast when your Proof of Concept works. Think long term, and try not to be dazzled by a partner or consultant showing off what a POC platform does on a demo screen 😉

Managed Kubernetes and security

Almost a sponsored post today, or better : a shared announcement.

You probably know that I am following Kubernetes rather closely, especially managed Kubernetes services (AKS, EKS or OpenShift, for example). One domain where these offerings have been lacking is networking and security.

It is still a very sensitive subject for our customers, for container-related projects, and still for public cloud projects. Security and networking teams have trouble adapting to public cloud paradigms and architectures. There is some fear of loss of control, some basic fear of the unknown, and some real worry about how to handle networking and security.
Kubernetes (and the other orchestrators) adds another abstraction layer on top of the existing public cloud platforms, which does nothing to alleviate that fear, to say nothing of complexity and transparency.

There are some very good solutions out there to manage network overlays in Kubernetes. My favourite is Calico, but you may prefer any of the others. I’ll stick with Calico for a simple reason, which you will see below.

Microsoft and AWS are both working hard to provide a network overlay in their managed Kubernetes offerings. They each chose their own path, but we will get to approximately the same point in a short time.

Thanks to Jean Poizat, we have the two announcements.
1) From Calico for Azure : https://www.tigera.io/tigera-calico-coming-to-azure-kubernetes-service-aks/
2) For AWS : https://itnext.io/kubernetes-is-hard-why-eks-makes-it-easier-for-network-and-security-architects-ea6d8b2ca965

The summary is that Calico will be integrated into AKS in a few weeks/months, and EKS will include AWS CNI.
And that is exactly what we were waiting for, along with our customers : managed Kubernetes, with security!
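
To illustrate what “security” means here in practice: once the cluster runs a network-policy-capable CNI like Calico, you can describe which pods are allowed to talk to which. Here is a minimal sketch using the official Kubernetes Python client; the labels, namespace and port are my own assumptions, not something from the announcements:

```python
# Minimal sketch: only pods labelled app=frontend may reach app=backend pods on port 8080.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig already pointing at the AKS/EKS cluster

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="backend-allow-frontend", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(port=8080)],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```

Without a CNI that enforces policies, such an object is accepted by the API server but silently does nothing, which is exactly why the Calico/CNI integrations matter.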

Designing your own job

Depending on how you consider things, this is the third time it has happened to me.
Being able to design your own job, within certain limits, is an amazing opportunity.
I will not go into too many details as some of it is work in progress, but the process was amazingly energizing and I wanted to share a bit of that energy.
For my current job, I met my future boss on the recommendation of a former colleague. We discussed many things, from ITIL to Managed Services, and also public cloud and the need to bring dev and ops teams closer. We went through those kinds of talks several times, at least four if memory serves. We went from a job which looked like an Ops engineer/ITIL practitioner, to something closer to an Azure tech lead.
In my previous position I also had the opportunity to be offered a promotion, and was able to discuss some of the content and responsibilities of the future role. I was also able to step down when the time came for me to admit that it was not an ideal position, for me or for the company. Which was really appreciated, at least on my part.

And once again a few weeks ago, I was called out of the blue by a colleague’s boss. He started to discuss his own future and what he was trying to design. He wanted to build something new, and was searching for a partner to build that together. And in that scheme, he discussed a position very similar to my dream job, and offered it to me.
I almost fell off my chair.
At that point I was ready to accept, without having any more details about the exact role and responsibilities, or even the salary. That’s where my future boss started to ask me what I would include or exclude from that job description, and how I could make it my own. My mind just froze.
It took some time for me to recover and start thinking again. After some lame jokes, we discussed the position, and what we would like to build together. It took us several meetings and calls to see through the fog, as we are really going to build something new together, and we cannot rely much on what exists around us.
The last funny thing to happen was that my next interview was with the CEO of the company, who was convinced by the two of us in less than 35 minutes. I could not believe my luck in getting there.
Anyway, that’s it for the bragging post. I really needed to write that down to make it real (even if I signed and will start by the end of the summer 🙂 )

Autonomous versus autonomic systems

This is a difficult topic. I have to admit I am still not completely comfortable with all the concepts and functions.
However, the thinking is amazingly interesting, and I will take some time to ingest everything.
First things first, I will use this post to summarize what I have learned so far.

How did I end up reading that kind of work, you ask? Weeeellll, that’s easy 🙂
Brendan Burns, in one of the Ignite ’17 sessions, used the comparison “autonomous vs autonomic” to discuss Kubernetes.
This got me thinking about the actual comparison, and aided by our trusted friend, Google, I found a NASA paper about it (https://www.researchgate.net/publication/265111077_Autonomous_and_Autonomic_Systems_with_Applications_to_NASA_Intelligent_Spacecraft_Operations_and_Exploration_Systems). I started to read it, but it was a bit obscure for me, and scientific English, applied to space research, was a bit too hard for an introduction to the topic of autonomic systems.
Some more research, helped by my beloved wife, led to a research thesis, in French, by Rémi Sharrock (https://www.linkedin.com/in/tichadok/). The thesis is available right here : https://tel.archives-ouvertes.fr/tel-00578735/document. This one relates to the same topic, but applied to distributed software and infrastructure, which ends up being way more familiar to me 🙂

The point where I am right now is just getting the definitions and concepts right.
I will try to describe what I understand here about automated, autonomous and autonomic systems.
There is some progression from the first to the second, and from the second to the third concept.
Let’s start with automated. An automated system is just like an automaton in the old world : something that will execute a series of commands, on the order of a human (or another system). For example, you have a thermostat at home that sends the temperature from inside and outside your home to the heater controller.

There is no brain in there, or almost none.
The next step up is an autonomous system. This one is able to take decisions and act on the data it captures. To continue with the thermostat example, you have a heater controller which takes the current temperature, from both inside and outside, and decides whether to start heating the house, and how.
The short version is that the system is able to execute a task by itself.

And then we have an autonomic system. This one has a higher view of its environment, and should be able to make sure that it will always be able to execute its tasks. I have run out of heater examples, but let’s take a smart mower. Its first degree of autonomicity is the way it monitors its battery level and returns to its base station to recharge, in order to ensure that it will be able to continue its task, which is mowing the lawn.
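
To fix these three levels in my mind, here is a toy sketch of my own (nothing from the thesis, just the thermostat/heater/mower examples above turned into code):

```python
class AutomatedThermostat:
    """Automated: measures and reports, every decision stays with a human (or another system)."""
    def report(self, inside_c: float, outside_c: float) -> dict:
        return {"inside": inside_c, "outside": outside_c}


class AutonomousHeater:
    """Autonomous: decides and acts by itself on the data it captures."""
    def __init__(self, target_c: float = 20.0):
        self.target_c = target_c

    def regulate(self, inside_c: float) -> str:
        return "heat_on" if inside_c < self.target_c else "heat_off"


class AutonomicMower:
    """Autonomic: also watches over its own ability to keep performing its task."""
    def __init__(self, battery_pct: float = 100.0):
        self.battery_pct = battery_pct

    def step(self) -> str:
        if self.battery_pct < 20.0:      # protects its capacity to keep working
            return "return_to_base_and_recharge"
        self.battery_pct -= 5.0          # mowing drains the battery
        return "keep_mowing"
```
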
There are multiple pillars of autonomicity. Rémi Sharrock described four in his thesis, and I tend to trust him on this :

Each of these four pillars can be implemented into the system, to various degrees.
I am not yet comfortable enough to describe the four pillars precisely, but that will come in a future post!

Going back to my (our) roots

Yes, another post with an obscure reference for a title.

After some time discussing tech subjects, I was of a mind to go back to something that has often been misread in the past by IT teams and IT management. And by that I mean : business. Yes, again.

Do not misunderstand me, I am still a technologist, and I love learning about technology, finding out the limits and possibilities of any new tech that comes out. I am not a sales person, nor a marketing person. However I have been exposed to many well-crafted presentations and talks over the years, and what often came out of even the most interesting ones was this : “our tech is fantastic, buy it!”

All right, I love that tech. Be it virtualisation, SAN, vSAN, public cloud, containers, CI/CD, DevOps… choose whatever you like. But technology is not an end in itself in our day-to-day world. What matters is what you will do with it for your company or customers.

I will take an example. An easy shot at someone I admire. Mark Russinovich, CTO of Azure, and longtime Windows expert (I would use a stronger term if I knew one 🙂 ). A few months ago, during a conference, he had a demo running where he could spin up thousands of container instances in a few seconds, with a simple command.

First reaction : “Wow!”

Second reaction : “Wooooooowwww!”

Third reaction : “How can we do the same?”

Fourth reaction (probably the sanest one) : “Wait, what’s the point?”

And there we go. What was the point? For me, Mark’s point was to show how good Azure tech is. Which is his job, and this demo made that very clear. But Mark did go further, as he usually does, during his talk and encouraged everyone to think about the usages. Unfortunately, most of the people I have discussed it with seem to miss the point. They see the wow effect, and want to share it. But few of us decide to sit down and think about what the use case could be.

And that is the difficult, and probably multi-million dollar question : how to turn amazing technology into a business benefit.

Never forget that, apart from some very lucky people, we are part of a company that is trying to make money, and our role is to contribute to that goal. We should always think about our customers, internal or external, and how we can help them. If doing that involves playing with some cool toys and being able to brag about it, go for it! But it does not work the other way around.

PS : to give one answer to how we could use Azure Container Instances in the real world, especially the virtual kubelet version of ACI, try and think about batch computing, where you would periodically need to spin up dozens or hundreds of container instances for a very short time. Does that ring any bells?
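
For what it is worth, here is a rough sketch of how that batch scenario could be expressed once ACI is exposed to a cluster through the virtual kubelet: a plain Kubernetes Job, pushed to the ACI-backed node. The image, the entry point, the node selector and the toleration values are my assumptions for illustration, not an official recipe:

```python
# Rough sketch: submit a short-lived batch Job targeted at a virtual-kubelet (ACI) node.
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="nightly-batch"),
    spec=client.V1JobSpec(
        completions=50,   # e.g. 50 short-lived workers, purely illustrative
        parallelism=50,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="worker",
                    image="myregistry/batch-worker:latest",   # hypothetical image
                    command=["python", "process_chunk.py"],   # hypothetical entry point
                )],
                # Steer the pods onto the virtual-kubelet node backed by ACI (assumed values).
                node_selector={"type": "virtual-kubelet"},
                tolerations=[client.V1Toleration(
                    key="virtual-kubelet.io/provider", operator="Exists")],
            ),
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```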

PPS : I could not find the exact session from Mark that I am describing here, but there is an almost identical session from Corey Sanders and Rick Claus here : Azure Container Instances: Get containers up and running in seconds