Container-based deployment models have rapidly gained popularity across a full spectrum of hybrid IT architectures — from edge, to cloud, to data center.
This next edition of the BriefingsDirect Voice of the Innovator podcast series examines how IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate.
Stay with us to explore the escalating benefits that come from broad container use with Robert Christiansen, Evangelist in the Office of the Chief Technology Officer at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Containers are being used in more ways and in more places. What was not that long ago a fringe favorite for certain developer use cases is becoming broadly disruptive. How disruptive has the container model become?
Christiansen: Well, it’s the new paradigm shift. We are looking to accelerate software releases. This is the Agile motion. At the end of the day, software is consuming the world, and if you don’t release software quicker — with more features more often — you are being left behind.
The mechanism to do that is to break applications out into smaller consumable segments that we can manage. Typically that motion has been adopted at the public cloud level on containers, and it is now spreading into the broader ecosystem of enterprises and private data centers. That is the fundamental piece — and containers have that purpose.
Gardner: Robert, users are interested in that development and deployment advantage, but are there also operational advantages, particularly in terms of being able to move workloads more freely across multiple clouds and hybrid clouds?
Christiansen: Yes, the idea is twofold. First off was to gain agility and motion, and then people started to ask themselves, “Well, I want to have choice, too.” So as we start abstracting away the dependencies of what runs a container, such as one tied to a particular cloud provider, I can actually start saying, “Hey, can I write my container platform services and APIs to be compatible across multiple platforms? How do I go between platforms? How do I go between on-premises and the public cloud?”
Gardner: And because containers can be tailored to specific requirements needed to run a particular service, can they also extend down to the edge and in resource-constrained environments?
Adjustable containers add flexibility
Christiansen: Yes, and more importantly, they can adjust to sizing issues, too. So think about pushing a container that’s very lightweight into a host that needs to have high availability of compute but may be limited on storage.
There are lots of different use cases. A container collapses the virtualization of an app — it virtualizes the app’s components, parts, and dependencies, and you deploy only the smallest bit of code needed to execute that one thing. That works in niche uses like a hospital, in telecommunications on a cell tower, in an automobile, on the manufacturing floor, or if you want to run multiple pieces inside a cloud platform that services a large telco. However you structure it, that’s the type of flexibility containers provide.
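To make that sizing flexibility concrete, here is a minimal sketch using the Kubernetes Python client: a pod that asks for generous compute but very little storage, the kind of profile Christiansen describes. The names, image, and resource figures are illustrative assumptions, not details from the interview.

```python
# A hedged sketch, not a production spec: a lightweight pod sized for a
# host with ample compute but very limited storage. All names, the image,
# and the resource figures are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster and local kubeconfig

container = client.V1Container(
    name="edge-worker",                    # hypothetical workload name
    image="example.com/edge-worker:1.0",   # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "256Mi", "ephemeral-storage": "100Mi"},
        limits={"cpu": "4", "memory": "512Mi", "ephemeral-storage": "200Mi"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-worker"),
    spec=client.V1PodSpec(containers=[container]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```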
Gardner: And we know this is becoming a very popular model, because the likes of VMware, the leader in virtualization, is putting a lot of emphasis on containers. They don’t want to lose their market share; they want to be in the next game, too. And then Microsoft with Azure Stack is now also talking about containers — more than I would have expected. So that’s a proof point, when you have VMware and Microsoft agreeing on something.
Christiansen: Yes, that was really interesting actually. I just saw this little blurb that came up in the news about Azure Stack switching over to a container platform, and I went, “Wow!” Didn’t they just put in three or five years’ worth of R&D? They are basically saying, “We might be switching this over to another platform.” It’s the right thing to do.
And no one saw Kubernetes coming, or maybe OpenShift did. But the reality is that containers seemed to come out of nowhere. The technology had been around for a while, but it had never been adopted the way it is being adopted now.
Gardner: And Kubernetes is an important part because it helps to prevent complexity and sprawl from getting out of hand. It allows you to have a view over these different, disparate deployment environments. Tell us why Kubernetes is an accelerant to container adoption.
Christiansen: Kubernetes fundamentally is an orchestration platform that allows you to take containers and put them in the right place, manage them, shut them down when they are not working or having behavior problems. We need a place to orchestrate, and that’s what Kubernetes is meant to be.
It basically displaced a number of other private, what we call opinionated, orchestrators. There were a number of them out there being worked on. And then Google released Kubernetes, which was fundamentally the platform they had been running their own world on for 10 years. They are doing for this ecosystem what Android did for cell phones: releasing and open sourcing the operating model, which is an interesting move.
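As a concrete illustration of that orchestration role, here is a hedged sketch using the Kubernetes Python client: a Deployment that keeps three replicas placed and running, with a liveness probe so the platform restarts a container that stops responding. The image, port, and health path are hypothetical.

```python
# A hedged sketch of orchestration in action: a Deployment keeps three
# replicas placed and running, and a liveness probe lets Kubernetes restart
# a container that misbehaves. Image, port, and health path are hypothetical.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="web",
    image="example.com/web:1.0",       # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=client.V1Probe(     # restart on failed health checks
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator maintains three copies
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```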
Gardner: It’s very rapidly become the orchestration mechanism to rule them all. Has Kubernetes become a de facto industry standard?
Christiansen: It really has. We have not seen a technology platform gain acceptance in the ecosystem this fast. In all my years, decades even, I personally have not seen something come up this quickly.
Gardner: I agree, and the fact that everyone is all-in is very powerful. How far will this orchestration model go? Beyond containers, perhaps into other aspects of deployment infrastructure management?
Christiansen: Let’s examine the next step. It could be a code snippet: if you are familiar with functions, or with Amazon Web Services (AWS) Lambda [serverless functions], that’s what we are talking about. That would be the next step for orchestration; it allows anyone to run code only. I don’t care about a container. I don’t care about the networking. I don’t care about any of that stuff — just execute my code.
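A minimal sketch of that “just execute my code” model, in the shape of an AWS Lambda-style Python handler. Nothing about containers, hosts, or networking appears in the code; the platform supplies all of that. The event payload shown is a hypothetical example.

```python
# A hedged sketch of the "just run code" model: an AWS Lambda-style Python
# handler. No container, host, or network concerns appear in the code; the
# platform supplies all of that. The event shape is a hypothetical example.
import json

def lambda_handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```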
Gardner: So functions-as-a-service (FaaS) and serverless get people used to that idea. Maybe you don’t want to buy into one particular serverless approach versus another, but we are starting to see that idea of much more flexibility in how services can be configured and deployed — not based on a platform, an OS, or even a cloud.
Containers’ tipping point
Christiansen: Yes, you nailed it. If you think about the stepping stones to get across this space, it’s a dynamic fluid space. Containers are becoming, I bet, the next level of abstraction and virtualization that’s necessary for the evolution of application development to go faster. That’s a given, I think, right now.
Malcolm Gladwell talked about tipping points. Well, I think we have hit the tipping point on containers. This is going to happen. It may take a while before the ecosystem changes over, but if you are a decision-maker putting a strategy together, your container strategy matters. It matters now, not a year from now, not two years from now. It matters now.
Gardner: The flexibility that containers and Kubernetes give us refocuses the emphasis of how to do IT. It means that you are going to be thinking about management and you are going to be thinking about economics and automation. As such, you are thinking at a higher abstraction than just developing and deploying applications and then attaching and integrating data points to them.
How does this higher abstraction of managing a hybrid estate benefit organizations when they are released from the earlier dependencies?
Christiansen: That’s a great question. I believe we are moving into a state where you cannot run platforms with manual systems or ticketing-based systems, that type of thing. You cannot do that, right? We have so many systems, and so much interoperability between the systems, that there has to be some sort of autonomic or artificial intelligence (AI)-based platform that makes the workflow move for you.
There will still be someone to make a decision. Let’s say a ticket goes through, and it says, “Hey, there is the problem.” Someone approves it, and then a workflow will naturally happen behind that. These are the evolutions, and containers allow you to continue to remove the pieces that cause you problems.
Right now we have big, hairy IT operations problems. We have a hard time nailing down where they are. The more you can start breaking these apart and start looking at your hotspots of areas that have problems, you can be more specifically focused on solving those. Then you can start using some intelligence behind it, some actual workload intelligence, to make that happen.
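To illustrate the approve-then-automate flow Christiansen describes, here is a hedged, plain-Python sketch. Every name in it is hypothetical glue rather than a real ticketing or AIOps API: a person approves the ticket, and the remaining workflow runs without manual steps.

```python
# A hedged, plain-Python sketch of the approve-then-automate flow. Every
# function here is hypothetical glue, not a real ticketing or AIOps API:
# a person approves the ticket, then the workflow runs without manual steps.

def clear_cache():        # hypothetical remediation step
    print("clearing cache...")

def restart_service():    # hypothetical remediation step
    print("restarting service...")

def run_workflow(ticket):
    """Run the automated steps once a human has approved the ticket."""
    if not ticket["approved"]:       # someone still makes the decision
        return "waiting for approval"
    for step in ticket["workflow"]:  # the rest happens hands-free
        step()
    return "remediated"

ticket = {
    "summary": "Hey, there is the problem",
    "approved": True,                # the human decision point
    "workflow": [clear_cache, restart_service],
}
print(run_workflow(ticket))          # -> remediated
```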
Gardner: The good news is we have lots of new flexibility, with microservices, very discrete approaches to combining them into services, workflows, and processes. The bad news is we have all that flexibility across all of those variables.
Auspiciously, we are also seeing a lot of interest and emphasis in what’s called AIOps, AI-driven IT operations. How do we now rationalize what containers do, but keep that from getting out of hand? Can we start using programmatic and algorithmic approaches? What are you seeing when we combine AIOps and containers?
Simplify your stack
Christiansen: That’s like what happens when I mix oranges with apples. It’s kind of an interesting dilemma. But I can see why people want to say, “Hey, how does my container strategy help me manage my asset base? How do I get to a better outcome?”
One reason is that these approaches enable you to collapse the stack. When you examine which layers of your stack are actually necessary to operate toward a given outcome, you gain the ability to remove the rest, and complexity goes with them.
We are talking about dropping the containers all the way to bare metal. And if you drop to bare metal, you have taken not only cost out of the equation, but also some complexity and some operational dependencies, and you keep reducing those. So that’s number one.
Number two, you have to have some sort of service mesh across this thing. With containers come a whole bunch of little hotspots all over the place, and a service manager must know where those hotspots are. If you don’t have an operating model that’s intelligent enough to know where they are (that’s what a service mesh provides, connecting all of these things), you are not going to get the autonomous behaviors on top that will help you.
So yes, you can now connect the dots between your containers to get autonomous behavior, but you have to have that layer in between telling you where the problems are — and then intelligence above that deciding how to handle them.
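Here is a minimal sketch of that “layer that tells you where the problems are,” using the Kubernetes Python client to flag pods with high container restart counts as hotspots. A real service mesh or AIOps platform watches far richer signals; the threshold here is an arbitrary illustration.

```python
# A hedged sketch of the "layer that tells you where the problems are":
# flag pods whose containers restart repeatedly as hotspots. A real service
# mesh or AIOps platform watches far richer signals; this threshold is an
# arbitrary illustration.
from kubernetes import client, config

config.load_kube_config()
RESTART_THRESHOLD = 5  # hypothetical cutoff for calling a pod a hotspot

hotspots = []
for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    for status in (pod.status.container_statuses or []):
        if status.restart_count > RESTART_THRESHOLD:
            hotspots.append((pod.metadata.namespace, pod.metadata.name,
                             status.name, status.restart_count))

# Hand the hotspot list to whatever intelligence sits above this layer.
for namespace, pod_name, container_name, restarts in hotspots:
    print(f"{namespace}/{pod_name} [{container_name}]: {restarts} restarts")
```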
Gardner: We have been talking, Robert, at an abstract level. Let’s go a bit more to the concrete. Are there use-case examples that HPE is working through with customers that illustrate the points we have been making around containers and Kubernetes?
Practice, not permanence
Christiansen: I just met with the team, and they are working right now with a very large Fortune 500 client, applying the exact functions I just talked about.
The first thing that needed to happen was a development environment where they are actually developing code in a solid continuous integration, continuous delivery (CI/CD), and DevOps practice. We use the word “practice” deliberately; like medicine and law, it’s a practice, and nothing is permanent.
So we put that in place for them. The second thing is that they’re trying to release code at speed. This is the first goal. Once you start releasing code at speed, with containers as the mechanism by which you are doing it, then you start saying, “Okay, now the platform I’m dropping onto is going through development, quality assurance, integration, and then finally into production.”
By the time you get to production, you need to know how you’re managing your platform. So it’s a client evolution. We are in that process right now — from end-to-end — to take one of their applications that is net new and another application that’s a refactor and run them both through this channel.
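That development-to-production path can be sketched as a promotion pipeline. The following hedged example uses the Kubernetes Python client to roll one image tag through four hypothetical stage namespaces; a real pipeline would gate each stage on tests and approvals.

```python
# A hedged sketch of promoting one release through the stages Christiansen
# names. The namespaces, deployment, and image tag are hypothetical, and a
# real pipeline would gate each stage on tests and approvals.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

STAGES = ["dev", "qa", "integration", "production"]  # hypothetical namespaces

def promote(deployment, container, image):
    """Patch the same Deployment in each stage namespace, in order."""
    for namespace in STAGES:
        patch = {"spec": {"template": {"spec": {"containers": [
            {"name": container, "image": image}]}}}}
        apps.patch_namespaced_deployment(
            name=deployment, namespace=namespace, body=patch)
        print(f"promoted {image} to {namespace}")
        # A real pipeline would run stage-specific tests here before moving on.

promote("web", "web", "example.com/web:1.1")  # hypothetical app and tag
```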
Now, most clients we engage with are in that early stage; they’re doing proofs of concept. There are a couple of companies out there with very large Kubernetes installations in production, but those are born-in-the-cloud companies, and they have an advantage: they can build that whole thing I just talked about from scratch. But 90 percent of the people out there today, the ones with what I call the crown jewels of applications, have to deal with legacy IT. They have to deal with what’s going on today, their data sources have gravity, and they still have to deal with that existing infrastructure.
Those are the people I really care about. I want to give them a solution that goes to that agile place. That’s what we’re doing with our clients today, getting them off the ground, getting them into a container environment that works.
Gardner: How can we take legacy applications and deployments and then use containers as a way of giving them new life — but not lifting and shifting?
Improve past, future investments
Christiansen: Organizations have to make some key decisions on investment. This is all about where the organization is in its investment lifecycle. Which applications are they going to make bets on, and which ones are they going to build new?
We are involved with clients going through that process. What we say to them is, “Hey, there is a set of applications you are just not going to touch. They are done. The best we can hope for is to put the whole darn thing in a container, leave it alone, and then we can manage it better.” That’s about cost, about economics, about performance, and that’s it. There are no release cycles, nothing like that.
The next set are legacy applications where I can do something to help. Maybe I can take a big, beefy application and break it into four parts, making a service group out of it. That’s called a refactor. It gives them a bit of agility, because they can release code for each segment independently.
And then there are the applications that we are going to rewrite. These are subject to what we call app entanglement. They may have multiple dependencies on other applications to give them data feeds and connections, which are probably services. They have API calls, or direct calls right into them, that allow them to do this and that. There are all sorts of middleware, and it’s just a gnarly mess.
If you try to move those applications to public cloud and try to refactor them there, you introduce what I call data gravity issues or latency issues. You have to replicate data. Now you have all sorts of cost problems and governance problems. It just doesn’t work.
You have to keep those applications in the data centers, and you have to give them a platform to do it there. If you can’t give it to them there, you have a real problem. What we try to do is break those applications into parts, in ways where the teams can work in cloud-native methodologies — like they are doing in public cloud — but do it on-premises. That’s the best way to get it done.
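A hedged sketch of what breaking a monolith “into four parts” can look like at the seam: a thin routing shim sends each path prefix to the service that now owns it, while everything not yet split out still lands on the legacy application. All of the service names and URLs are hypothetical.

```python
# A hedged sketch of a refactor seam: a routing shim sends each path prefix
# to the service that now owns it, while everything not yet split out still
# lands on the legacy application. All service names and URLs are hypothetical.
ROUTES = {
    "/orders":    "http://orders-svc:8080",      # refactored segment
    "/billing":   "http://billing-svc:8080",     # refactored segment
    "/inventory": "http://inventory-svc:8080",   # refactored segment
    "/":          "http://legacy-monolith:8080", # everything else
}

def route(path: str) -> str:
    """Pick the backend whose prefix matches the request path (longest wins)."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return ROUTES["/"]

assert route("/orders/42") == "http://orders-svc:8080"
assert route("/reports") == "http://legacy-monolith:8080"
```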
Gardner: And so the decision about on-premises or in a cloud, or to what degree a hybrid relationship exists, isn’t so much dependent upon cost or ease of development. We are now rationalizing this on the best way to leverage services, use them together, and in doing so we attain backward compatibility and future-proof it, too.
Christiansen: Yes, you are really nailing it, Dana. The key is thinking about where the app appropriately needs to live. You have laws of physics to deal with, legacy issues, and cultural issues. And then you have all sorts of data issues, what we call data nationalization: dealing with GDPR and with where all of this stuff is going to live. And then you have edge issues. It goes on and on, and on, and on.
So getting that right — or at least having the flexibility to get it right — is a super important aspect. It’s not the same for every company.
Gardner: We have been addressing containers mostly through an applications discussion. Is there a parallel discussion about data? Can we begin thinking about data as a service, and not necessarily in a large traditional silo database, but perhaps more granular, more as a call, as an API? What is the data lifecycle and DataOps implications of containers?
Everything as a service
Christiansen: Well, here is what I call the Achilles heel of the container world: it doesn’t handle persistent data well at all. One of the things HPE has been focused on is providing stateful, legacy, highly dependent persistent data stores that live in containers. That is a unique piece of intellectual property that we offer, and I think it is really groundbreaking for the industry.
Kubernetes is a stateless container platform, which is appropriate for cloud-native microservices and those fast, agile motions. But the legacy IT world is stateful, with highly persistent data stores, and those don’t work well in a stateless environment.
Through the work we’ve been doing over the last several years, specifically with an HPE-acquired company called BlueData, we’ve been able to solve that legacy problem. We put that platform into the AI, machine learning (ML), and big data areas first to really flesh it all out. We are joining those two systems together and offering a platform that is going to be really useful out in the marketplace.
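The mechanics of giving a container a persistent data store can be sketched with standard Kubernetes primitives, shown here via the Python client: claim storage with a PersistentVolumeClaim, then mount it into a pod. The names, sizes, and the choice of Postgres are illustrative assumptions, not a description of HPE’s or BlueData’s implementation.

```python
# A hedged sketch of standard Kubernetes persistence primitives, not HPE's
# or BlueData's implementation: claim storage with a PersistentVolumeClaim,
# then mount it into a pod. Names, sizes, and the Postgres image are
# illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="db"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="db",
            image="postgres:13",  # any stateful store would do here
            env=[client.V1EnvVar(name="POSTGRES_PASSWORD", value="example")],
            volume_mounts=[client.V1VolumeMount(
                name="data", mount_path="/var/lib/postgresql/data")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="db-data"),
        )],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```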
Gardner: Another aspect of this is the economics. So one of the big pushes from HPE these days is everything as a service, of being able to consume and pay for things as you want regardless of the deployment model — whether it’s on premises, hybrid, in public clouds, or multi-clouds. How does the container model we have been describing align well with the idea of as a service from an economic perspective?
Christiansen: As-a-service is really about how I get my services when I need them, and I only want to pay for what I need at the time I need it. I don’t want to overpay when I don’t use it, and I don’t want to be stuck without something when I do need it.
Solving that problem in various places in the ecosystem is a different equation; it comes up differently. Some clients want to buy stuff; they want to capitalize it and just put it on the books. So we have to deal with that.
You have other people who say, “Hey, I’m willing to take on this hardware burden as a financier, and you can rent it from me.” You can consume all the pieces you need, and then you’ve got the cloud providers’ as-a-service offerings. But more importantly, let’s go back to how containers allow you to have much finer granularity in what you’re buying. If you want to deploy an app, maybe you are paying for that app to be deployed, as opposed to paying for the container. But the containers are the encapsulation of it, and they are where you want to have it.
So you still have to get to what I call the basic currency. The basic currency is a container. Where does that container run? It has to run either on premises, in the public cloud, or on the edge. If people are going to agree on that basic currency model, then we can agree on an economic model.
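To put numbers on that “basic currency” idea, here is a hedged sketch that prices a deployment as container-hours across venues. Every rate and figure is made up for illustration; the point is only that once the container is the unit, the economics compose across on-premises, cloud, and edge.

```python
# A hedged sketch of the container as "basic currency": price a deployment
# as container-hours, wherever the containers run. Every rate and figure is
# made up for illustration.
RATES = {                 # hypothetical cost per container-hour, by venue
    "on-premises": 0.04,
    "public-cloud": 0.09,
    "edge": 0.12,
}

def monthly_cost(placements):
    """placements: list of (venue, container_count, hours_per_month)."""
    return sum(RATES[venue] * count * hours
               for venue, count, hours in placements)

app = [
    ("on-premises", 12, 730),   # steady-state services, always on
    ("public-cloud", 4, 200),   # burst capacity, paid only when used
    ("edge", 2, 730),           # remote sites, always on
]
print(f"${monthly_cost(app):,.2f} per month")   # -> $597.60 per month
```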
Gardner: Even if people are risk-averse, I don’t think they’re in trouble by making some big bets on containers as their new currency, and by attaining capabilities and skills around both containers and Kubernetes. Recognizing that this is not a big leap of faith, what do you advise people to do right now to get ready?
Christiansen: Get your arms around the Kubernetes installations you already have, because you know they’re happening. This is just like when the public cloud was arriving and there was shadow IT going on. You know it’s happening; you know people are writing code, and they’re pushing it into a Kubernetes cluster. They’re not talking to the central IT people about how to manage or run it — or even if it’s going to be something they can handle. So you’ve got to get a hold of them first.
Go find your hand raisers. That’s what I always say. Who are the volunteers? Who has their hands in the air? Openly say, “Hey, come in. I’m forming a containers, Kubernetes, and new development model team.” Give it a name. Call it the Michael Jordan team of containers. I don’t care. But go get them. Go find out who they are, right?
And then form and coalesce that team around that knowledge base. Learn how they think, and find the best of what is going on inside your own culture. This is about culture, culture, culture, right? And do it in public so people can see it. This is why people got such black eyes when they did their first public cloud projects: they snuck off and did it, and then they were really reluctant to say anything. Bring it out in the open. Let’s start talking about it.
The next thing is to look for instantiations of applications that you are either going to build net new or going to refactor. Then decide on your container strategy around that Kubernetes platform, and work it as a program. Be transparent about what you’re doing, and make sure you’re funded.
And most importantly, above all things, know where your data lives. If your data lives on-premises and the application you’re talking about is going to need that data, you’re going to need an on-premises solution for containers, specifically one that handles legacy and public cloud at the same time. If that data decides it needs to go to the public cloud, you can always move it there.
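As a hedged first step toward that advice to “get your arms around the Kubernetes installations you already have,” the following read-only sketch inventories every cluster context in a local kubeconfig and counts what is running in each.

```python
# A hedged, read-only sketch of a first inventory pass: enumerate every
# cluster context in the local kubeconfig and count what is running in each.
from kubernetes import client, config

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=ctx["name"]))
    pods = api.list_pod_for_all_namespaces().items
    namespaces = {pod.metadata.namespace for pod in pods}
    print(f"{ctx['name']}: {len(pods)} pods across {len(namespaces)} namespaces")
```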