How containers are the new basic currency for pay-as-you-go hybrid IT

Container-based deployment models have rapidly gained popularity across a full spectrum of hybrid IT architectures — from edge, to cloud, to data center.

This next edition of the BriefingsDirect Voice of the Innovator podcast series examines how IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to explore the escalating benefits that come from broad container use with Robert Christiansen, Evangelist in the Office of the Chief Technology Officer at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Containers are being used in more ways and in more places. What was not that long ago a fringe favorite for certain developer use cases is becoming broadly disruptive. How disruptive has the container model become?

Christiansen: Well, it’s the new change in the paradigm. We are looking to accelerate software releases. This is the Agile motion. At the end of the day, software is consuming the world, and if you don’t release software quicker — with more features more often — you are being left behind.



The mechanism to do that is to break them out into smaller consumable segments that we can manage. Typically that motion has been adopted at the public cloud level on containers, and that is spreading into the broader ecosystem of the enterprises and private data centers. That is the fundamental piece — and containers have that purpose.

Gardner: Robert, users are interested in that development and deployment advantage, but are there also operational advantages, particularly in terms of being able to move workloads more freely across multiple clouds and hybrid clouds?

Christiansen: Yes, the idea is twofold. First off was to gain agility and motion, and then people started to ask themselves, “Well, I want to have choice, too.” So as we start abstracting away the dependencies of what runs a container, such as one tied to a particular cloud provider, I can actually start saying, “Hey, can I write my container platform services and APIs to be compatible across multiple platforms? How do I go between platforms? How do I go between on-premises and the public cloud?”

Gardner: And because containers can be tailored to specific requirements needed to run a particular service, can they also extend down to the edge and in resource-constrained environments?

Adjustable containers add flexibility 

Christiansen: Yes, and more importantly, they can adjust to sizing issues, too. So think about pushing a container that’s very lightweight into a host that needs to have high availability of compute but may be limited on storage.

There are lots of different use cases. As you collapse the virtualization of an app — that’s what a container really does, it virtualizes app components, it virtualizes app parts and dependencies. You only deploy the smallest bit of code needed to execute that one thing. That works in niche uses like a hospital, telecommunications on a cell tower, on an automobile, on the manufacturing floor, or if you want to do multiple pieces inside a cloud platform that services a large telco. However you structure it, that’s the type of flexibility containers provide.
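That sizing flexibility can be sketched as a toy placement check in Python. The hosts, resource figures, and field names below are invented for illustration; a real scheduler, such as the one in Kubernetes, weighs many more factors.

```python
# Hypothetical sketch: pick the first host whose free resources
# cover a container's requests. Names and numbers are illustrative.

def place(container, hosts):
    """Return the name of a host that can fit the container, or None."""
    for host in hosts:
        if (host["free_cpu"] >= container["cpu"] and
                host["free_storage"] >= container["storage"]):
            return host["name"]
    return None

hosts = [
    {"name": "cell-tower-edge", "free_cpu": 8, "free_storage": 4},   # compute-rich, storage-poor
    {"name": "datacenter-rack", "free_cpu": 2, "free_storage": 500},
]

# A lightweight, compute-hungry container fits the edge host despite its small disk.
print(place({"cpu": 6, "storage": 2}, hosts))   # cell-tower-edge
```

The same check returns `None` when nothing fits, which is exactly the signal an orchestrator uses to queue or reject a workload.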

Gardner: And we know this is becoming a very popular model, because the likes of VMware, the leader in virtualization, is putting a lot of emphasis on containers. They don’t want to lose their market share, they want to be in the next game, too. And then Microsoft with Azure Stack is now also talking about containers — more than I would have expected. So that’s a proof-point, when you have VMware and Microsoft agreeing on something.

Christiansen: Yes, that was really interesting actually. I just saw this little blurb come up in the news about Azure Stack switching over to a container platform, and I went, “Wow!” Didn’t they just put in three or five years’ worth of R&D? They are basically saying, “We might be switching this over to another platform.” It’s the right thing to do.

How to Modernize IT Operations and Accelerate App Performance with Container Technology

And no one saw Kubernetes coming, or maybe OpenShift did. But the reality is that containers suddenly came out of nowhere. Adoption had been building for a while, but never at the pace it has now.

Gardner: And Kubernetes is an important part because it helps to prevent complexity and sprawl from getting out of hand. It allows you to have a view over these disparate deployment environments. Tell us why Kubernetes is an accelerant to container adoption.

Christiansen: Kubernetes fundamentally is an orchestration platform that allows you to take containers and put them in the right place, manage them, shut them down when they are not working or having behavior problems. We need a place to orchestrate, and that’s what Kubernetes is meant to be.
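The orchestration behavior Christiansen describes, putting containers in the right place and restarting or shutting them down, is at heart a reconciliation loop: compare desired state against observed state and emit corrective actions. A minimal sketch, with illustrative names and statuses rather than the real Kubernetes API:

```python
# Toy reconciliation pass: desired vs. observed container state.
# All names and statuses here are invented for illustration.

def reconcile(desired, observed):
    """Return the actions an orchestrator would take this pass."""
    actions = []
    for name in desired:
        status = observed.get(name)
        if status is None:
            actions.append(("start", name))          # missing: place it somewhere
        elif status != "healthy":
            actions.append(("restart", name))        # behavior problem: restart it
    for name in observed:
        if name not in desired:
            actions.append(("stop", name))           # shouldn't be running: shut it down
    return actions

desired = {"web", "api"}
observed = {"api": "crash-looping", "batch": "healthy"}
print(sorted(reconcile(desired, observed)))
# [('restart', 'api'), ('start', 'web'), ('stop', 'batch')]
```

Real orchestrators run this kind of loop continuously, which is what makes the system self-correcting rather than script-driven.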

It basically displaced a number of other private, what we call opinionated, orchestrators. There were a number of them out there being worked on. And then Google released Kubernetes, which was fundamentally the platform they had been running their world on for 10 years. They are doing for this ecosystem what the Android system did for cell phones: releasing and open sourcing the operating model, which is an interesting move.

Gardner: It’s very rapidly become the orchestration mechanism to rule them all. Has Kubernetes become a de facto industry standard?

Christiansen: It really has. We have not seen a technology platform gain acceptance in the ecosystem as fast as this. I personally in all my years or decades have not seen something come up this fast.

Gardner: I agree, and the fact that everyone is all-in is very powerful. How far will this orchestration model go? Beyond containers, perhaps into other aspects of deployment infrastructure management?

Christiansen: Let’s examine the next step. It could be a code snippet, or, if you are familiar with functions, something like Amazon Web Services (AWS) Lambda [serverless functions]. That would be the next step for orchestration: letting anyone just run code. I don’t care about a container. I don’t care about the networking. I don’t care about any of that stuff, just execute my code.
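The “just execute my code” idea can be sketched as a toy dispatcher: the platform owns everything below the function, and the user registers only code. This is a conceptual sketch, not the AWS Lambda programming model.

```python
# Minimal FaaS-style sketch. The registry stands in for everything the
# platform hides: containers, networking, scaling. Illustrative only.

def make_faas():
    registry = {}

    def deploy(name, fn):
        registry[name] = fn           # user supplies code only

    def invoke(name, event):
        return registry[name](event)  # platform decides where/how this runs

    return deploy, invoke

deploy, invoke = make_faas()
deploy("greet", lambda event: f"hello, {event['user']}")
print(invoke("greet", {"user": "dana"}))   # hello, dana
```

Everything the container abstracted away is still there, but neither the developer nor the caller sees it.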

Gardner: So functions-as-a-service (FaaS) and serverless get people used to that idea. Maybe you don’t want to buy into one particular serverless approach versus another, but we are starting to see that idea of much more flexibility in how services can be configured and deployed — not based on a platform, an OS, or even a cloud.

Containers’ tipping point 

Christiansen: Yes, you nailed it. If you think about the stepping stones to get across this space, it’s a dynamic fluid space. Containers are becoming, I bet, the next level of abstraction and virtualization that’s necessary for the evolution of application development to go faster. That’s a given, I think, right now.

Malcolm Gladwell talked about tipping points. Well, I think we have hit the tipping point on containers. This is going to happen. It may take a while before the ecosystem changes over. If you put the strategy together, if you are a decision-maker, you are making decisions about what to do. Now your container strategy matters. It matters now, not a year from now, not two years from now. It matters now.

Gardner: The flexibility that containers and Kubernetes give us refocuses the emphasis of how to do IT. It means that you are going to be thinking about management and you are going to be thinking about economics and automation. As such, you are thinking at a higher abstraction than just developing and deploying applications and then attaching and integrating data points to them.

Learn More About Cloud and Container Trends

How does this higher abstraction of managing a hybrid estate benefit organizations when they are released from the earlier dependencies?

Christiansen: That’s a great question. I believe we are moving into a state where you cannot run platforms with manual systems, or ticketing-based systems, that type of thing. You cannot do that, right? We have so many systems and so much interoperability between them that there have to be some sort of autonomic or artificial intelligence (AI)-based platforms that make the workflow move for you.

There will still be someone to make a decision. Let’s say a ticket goes through, and it says, “Hey, there is the problem.” Someone approves it, and then a workflow will naturally happen behind that. These are the evolutions, and containers allow you to continue to remove the pieces that cause you problems.

Right now we have big, hairy IT operations problems, and we have a hard time nailing down where they are. The more you can break these apart and look at the hotspot areas that have problems, the more specifically you can focus on solving them. Then you can start using some intelligence behind it, some actual workload intelligence, to make that happen.

Gardner: The good news is we have lots of new flexibility, with microservices, very discrete approaches to combining them into services, workflows, and processes. The bad news is we have all that flexibility across all of those variables.

Auspiciously, we are also seeing a lot of interest and emphasis in what’s called AIOps, AI-driven IT operations. How do we now rationalize what containers do, but keep that from getting out of hand? Can we start using programmatic and algorithmic approaches? What are you seeing when we combine AIOps and containers?

Simplify your stack 

Christiansen: That’s like what happens when I mix oranges with apples. It’s kind of an interesting dilemma. But I can see why people want to say, “Hey, how does my container strategy help me manage my asset base? How do I get to a better outcome?”

One reason is that these approaches enable you to collapse the stack. When you examine the layers of your stack, meaning those necessary to operate toward a given outcome, you gain the ability to remove complexity.

We are talking about dropping the containers all the way to bare metal. And if you drop to bare metal, you have taken not only cost out of the equation, you have taken some complexity out of the equation. You have taken operational dependencies out of the equation, and you start reducing those. So that’s number one.

Number two, you have to have some sort of service mesh across this thing. With containers comes a whole bunch of little hotspots all over the place, and a service manager must know where those hotspots are. If you don’t have an operating model that’s intelligent enough to know where they are (that’s the service mesh, which connects all of these things), you are not going to have autonomous behaviors on top that will help you.

So yes, you can connect the dots now between your containers to get autonomous, but you have got to have that layer in between that’s telling where the problems are — and then you have intelligence above that that says how do I handle it.
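The service-mesh role described here, a layer that knows where every instance lives and flags the hotspots so a smarter layer above can act, can be sketched in a few lines. The service names and error threshold are invented for the example.

```python
# Sketch of hotspot detection over mesh telemetry. A real mesh (e.g. a
# sidecar-based one) gathers these stats automatically; here they are
# hard-coded illustrative figures.

def find_hotspots(mesh, error_threshold=0.05):
    """Return instances whose observed error rate exceeds the threshold."""
    return [inst for inst, stats in mesh.items()
            if stats["errors"] / stats["requests"] > error_threshold]

mesh = {
    "cart-7f9": {"requests": 1000, "errors": 3},
    "cart-a12": {"requests": 1000, "errors": 120},  # misbehaving instance
    "auth-c44": {"requests": 500,  "errors": 1},
}
print(find_hotspots(mesh))   # ['cart-a12']
```

The intelligence layer Christiansen mentions would consume this list and decide what to do: restart, reroute, or escalate to a human.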

Gardner: We have been talking, Robert, at an abstract level. Let’s go a bit more to the concrete. Are there use cases examples that HPE is working through with customers that illustrate the points we have been making around containers and Kubernetes?

Practice, not permanence 

Christiansen: I just met with the team, and they are working with a client right now, a very large Fortune 500 company, where they are applying the exact functions that I just talked to you about.

First thing that needed to happen is a development environment where they are actually developing code in a solid continuous integration, continuous delivery, and DevOps practice. We use the word “practice”; it’s like medicine and law. It’s a practice, nothing is permanent.

So we put that in place for them. The second thing is they’re trying to release code at speed. This is the first goal. Once you start releasing code at speed, with containers as the mechanism by which you are doing it, then you start saying, “Okay, now the platform I’m dropping onto is going through development, quality assurance, integration, and then finally into production.”

By the time you get to production, you need to know how you’re managing your platform. So it’s a client evolution. We are in that process right now — from end-to-end — to take one of their applications that is net new and another application that’s a refactor and run them both through this channel.

More Enterprises Are Choosing Containers — Here’s Why

Now, most clients we engage with are in that early stage. They’re doing proofs of concept. There are a couple of companies out there with very large Kubernetes installations in production, but they are born-in-the-cloud companies. And those companies have an advantage: they can build that whole thing I just talked about from scratch. But 90 percent of the people out there today, the ones with what I call the crown jewels of applications, have to deal with legacy IT. They have to deal with what’s going on today, their data sources have gravity, and they still have to deal with that existing infrastructure.

Those are the people I really care about. I want to give them a solution that goes to that agile place. That’s what we’re doing with our clients today, getting them off the ground, getting them into a container environment that works.

Gardner: How can we take legacy applications and deployments and then use containers as a way of giving them new life — but not lifting and shifting?

Improve past, future investments 

Christiansen: Organizations have to make some key decisions on investment. This is all about where the organization is in its investment lifecycle. Which ones are they going to make bets on, and which ones are they going to build new?

We are involved with clients going through that process. What we say to them is, “Hey, there is a set of applications you are just not going to touch. They are done. The best we can hope for is put the whole darn thing in a container, leave it alone, and then we can manage it better.” That’s about cost, about economics, about performance, that’s it. There are no release cycles, nothing like that.

The best we can hope for is put the whole darn thing in a container and we can manage it better. That’s about cost, economics, and performance.

The next set are legacy applications where I can do something to help. Maybe I can take a big, beefy application and break it into four parts, making a service group out of it. That’s called a refactor. That gives them a bit of agility, because they can release code for each segment separately.

And then there are the applications that we are going to rewrite. These are dependent on what we call app entanglement. They may have multiple dependencies on other applications to give them data feeds, to give them connections that are probably services. They have API calls, or direct calls right into them that allow them to do this and that. There is all sorts of middleware, and it’s just a gnarly mess.

If you try to move those applications to public cloud and try to refactor them there, you introduce what I call data gravity issues or latency issues. You have to replicate data. Now you have all sorts of cost problems and governance problems. It just doesn’t work.

You have to keep those applications in the data centers, and you have to give them a platform to do it there. If you can’t give it to them there, you have a real problem. What we try to do is break those applications into parts in ways where the teams can work with cloud-native methodologies, like they are doing in public cloud, but on-premises. That’s the best way to get it done.

Gardner: And so the decision about on-premises or in a cloud, or to what degree a hybrid relationship exists, isn’t so much dependent upon cost or ease of development. We are now rationalizing this on the best way to leverage services, use them together, and in doing so, we attain backward compatibility – and future-proof it, too.

Christiansen: Yes, you are really nailing it, Dana. The key is thinking about where the app appropriately needs to live. And you have laws of physics to deal with, you have legacy issues to deal with, and you have cultural issues to deal with. And then you have all sorts of data issues, what we call data nationalization. That means dealing with GDPR and where all of this stuff is going to live. And then you have edge issues. And this goes on and on, and on, and on.

So getting that right — or at least having the flexibility to get it right — is a super important aspect. It’s not the same for every company.

Gardner: We have been addressing containers mostly through an applications discussion. Is there a parallel discussion about data? Can we begin thinking about data as a service, and not necessarily in a large traditional silo database, but perhaps more granular, more as a call, as an API? What is the data lifecycle and DataOps implications of containers?

Everything as a service

Christiansen: Well, here is what I call the Achilles heel of the container world: it doesn’t handle persistent data well at all. One of the things that HPE has been focused on is providing stateful, legacy, highly dependent persistent data stores that live in containers. That is unique intellectual property we offer, and I think it is really groundbreaking for the industry.

Kubernetes is a stateless container platform, which is appropriate for cloud-native microservices and those fast, agile motions. But the legacy IT world is stateful, with highly persistent data stores, and those don’t work well in that stateless environment.

Through the work we’ve been doing over the last several years, specifically with an HPE-acquired company called BlueData, we’ve been able to solve that legacy problem. We put that platform into the AI, machine learning (ML), and big data areas first to really flesh that all out. We are joining those two systems together and offering a platform that is going to be really useful out in the marketplace.

Gardner: Another aspect of this is the economics. One of the big pushes from HPE these days is everything as a service, being able to consume and pay for things as you want regardless of the deployment model — whether it’s on-premises, hybrid, in public clouds, or multi-clouds. How does the container model we have been describing align with the idea of as-a-service from an economic perspective?

Christiansen: As-a-service is really how I want to get my services when I need them. And I only want to pay for what I need at the time I need it. I don’t want to overpay for it when I don’t use it. I don’t want to be stuck without something when I do need it.

Top Trends — Stateful Apps Are Key to Enterprise Container Strategy

Solving that problem inside various places in the ecosystem is a different equation; it comes up differently. Some clients want to buy stuff; they want to capitalize it and just put it on the books. So we have to deal with that.

You have other people who say, “Hey, I’m willing to take on this hardware burden as a financer, and you can rent it from me.” You can consume all the pieces you need and then you’ve got the cloud providers as a service. But more importantly, let’s go back to how the containers allow you to have much finer granularity about what it is you’re buying. And if you want to deploy an app, maybe you are paying for that app to be deployed as opposed to the container. But the containers are the encapsulation of it and where you want to have it.

So you still have to get to what I call the basic currency. The basic currency is a container. Where does that container run? It has to run either on premises, in the public cloud, or on the edge. If people are going to agree on that basic currency model, then we can agree on an economic model.
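If the container is the basic currency, a pay-as-you-go bill reduces to container-time multiplied by a venue-specific rate. Here is a hypothetical sketch; the venues and prices are invented purely for illustration.

```python
# Illustrative metering model: charge per container-second, with a
# different (made-up) rate for each venue the container runs in.

RATES = {  # assumed prices per container-second, not real pricing
    "on-prem": 0.00002,
    "public-cloud": 0.00005,
    "edge": 0.00008,
}

def bill(usage):
    """usage: list of (venue, container_seconds) tuples -> total cost."""
    return sum(RATES[venue] * seconds for venue, seconds in usage)

usage = [("on-prem", 3_600_000), ("public-cloud", 900_000)]
print(round(bill(usage), 2))   # 117.0
```

The point of the sketch is that once the unit of account is the container, the same meter works on-premises, in the cloud, and at the edge; only the rate changes.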

Gardner: Even if people are risk averse, I don’t think they’re in trouble by making some big bets on containers as their new currency and to attain capabilities and skills around both containers and Kubernetes. Recognizing that this is not a big leap of faith, what do you advise people to do right now to get ready?

Christiansen: Get your arms around the Kubernetes installations you already have, because you know they’re happening. This is just like when the public cloud was arriving and there was shadow IT going on. You know it’s happening; you know people are writing code, and they’re pushing it into a Kubernetes cluster. They’re not talking to the central IT people about how to manage or run it — or even if it’s going to be something they can handle. So you’ve got to get a hold of them first.

Teamwork works

Go find your hand-raisers. That’s what I always say. Who are the volunteers? Who has their hands in the air? Openly say, “Hey, come in. I’m forming a containers, Kubernetes, and new development model team.” Give it a name. Call it the Michael Jordan team of containers. I don’t care. But go get them. Go find out who they are, right?

If your data lives on-premises and an application is going to need data, you’re going to need to have an on-premises solution for containers that can handle legacy and cloud at the same time. If that data goes to the cloud, you can always move the container there, too.

And then form and coalesce that team around that knowledge base. Learn how they think, and what is the best of what’s going on inside your own culture. This is about culture, culture, culture, right? And do it in public so people can see it. This is why people got such black eyes when they were doing their first work around public cloud: they snuck off and did it, and then they were really reluctant to say anything. Bring it out in the open. Let’s start talking about it.

Next, look for instantiations of applications that you are either going to build net new or refactor. Then decide on your container strategy around that Kubernetes platform, and work it as a program. Be open and transparent about what you’re doing. Make sure you’re funded.

And most importantly, above all things, know where your data lives. If your data lives on-premises and that application you’re talking about is going to need data, you’re going to need to have an on-premises solution for containers, specifically those that handle legacy and public cloud at the same time. If that data decides it needs to go to public cloud, you can always move it there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


HPE strategist Mark Linesch on the surging role of containers in advancing the hybrid IT estate

Openness, flexibility, and speed to distributed deployments have been top drivers of the steady growth of container-based solutions. Now, IT operators are looking to increase automation, built-in intelligence, and robust management as they seek container-enabled hybrid cloud and multicloud approaches for data and workloads.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

This next edition of the BriefingsDirect Voice of the Innovator podcast series examines the rapidly evolving containers innovation landscape with Mark Linesch, Vice President of Technology Strategy in the CTO Office and Hewlett Packard Labs at Hewlett Packard Enterprise (HPE). The composability strategies interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s look at the state of the industry around containers. What are the top drivers for containers adoption now that the technology has matured?

Linesch: The history of computing, as far back as I can remember, has been about abstraction; abstraction of the infrastructure and then a separation of concern between the infrastructure and the applications.

It used to be it was all bare metal, and then about a decade ago we went on the journey to virtualization. And virtualization is great; it’s an abstraction that allows for a certain amount of agility. But it’s fairly expensive, because you are virtualizing the entire infrastructure, if you will, and dragging along a unique operating system (OS) each time you do that.

So the industry for the last few years has been saying, “Well, what’s next, what’s after virtualization?” And clearly things like containerization are starting to catch hold.

Why now? Well, because we are living in a hybrid cloud world, and we are moving pretty aggressively toward a more distributed edge-to-cloud world. We are going to be computing, analyzing, and driving intelligence in all of our edges — and all of our clouds.

Things such as performance- and developer-aware capabilities, DevOps, the ability to run an application in a private cloud and then move it to a public cloud, and being able to drive applications to edge environments on a harsh factory floor — these are all aspects of this new distributed computing environment that we are entering into. It’s a hybrid estate, if you will.

Containers have advantages for a lot of different constituents in this hybrid estate world. First and foremost are the developers. If you think about development and developers in general, they have moved from the older, monolithic and waterfall-oriented approaches to much more agile and continuous integration and continuous delivery models.

And containers give developers a predictable environment wherein they can couple not only the application but the application dependencies, the libraries, and all that they need to run an application throughout the DevOps lifecycle. That means from development through test, production, and delivery.

Containers carry and encapsulate all of the app’s requirements to develop, run, test, and scale. With bare metal or virtualization, as the app moved through the DevOps cycle, I had to worry about the OS dependencies and the type of platforms I was running that pipeline on.

Developers’ package deal

A key thing for developers is they can package the application and all the dependencies together into a distinct manifest. It can be version-controlled and easily replicated. And so the developer can debug and diagnose across different environments and save an enormous amount of time. So developers are the first beneficiaries, if you will, of this maturing containerized environment.
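The “distinct manifest” idea, the app plus its pinned dependencies packaged as one versionable artifact, can be sketched with a content digest that changes exactly when the contents change. The app name, dependency pins, and digest length below are illustrative, not any real image format.

```python
# Sketch: a manifest whose short digest is stable across dependency
# ordering but changes whenever app or dependency versions change.

import hashlib
import json

def manifest_digest(app, dependencies):
    manifest = {"app": app, "dependencies": sorted(dependencies)}
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]   # short id for version control

d1 = manifest_digest("payments:2.3", ["libssl=1.1", "python=3.11"])
d2 = manifest_digest("payments:2.3", ["python=3.11", "libssl=1.1"])  # same deps, any order
d3 = manifest_digest("payments:2.3", ["libssl=3.0", "python=3.11"])  # changed dependency

print(d1 == d2, d1 == d3)   # True False
```

That stable, content-addressed identity is what lets a developer replicate and debug the exact same environment across development, test, and production.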

How to Modernize Your IT with Container Technology

But next are the IT operations folks because they now have a good separation of concern. They don’t have to worry about reconfiguring and patching all these kinds of things when they get a hand-off from developers into a production environment. That capability is fundamentally encapsulated for them, and so they have an easier time operating.

And increasingly in this more hybrid distributed edge-to-cloud world, I can run those containers virtually anywhere. I can run them at the edge, in a public cloud, in a private cloud, and I can move those applications quickly without all of these prior dependencies that virtualization or bare metal required. It contains an entire runtime environment and application, plus all the dependencies, all the libraries, and the like.

The third area that’s interesting for containers is isolation. Containers virtualize the CPU, memory, storage, and network resources, and they do that at the OS level. So they use resources much more efficiently for that reason.

Unlike virtualization, which includes your entire OS as well as the application, containers run on a single OS. Each container shares the OS kernel with other containers, so it’s lightweight, uses much fewer resources, and spins up almost instantly — in seconds versus virtual machines (VMs) that spin up in minutes.
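A back-of-the-envelope sketch makes the kernel-sharing point concrete. All of the size figures below are illustrative assumptions, not benchmarks.

```python
# Assumed sizes, for illustration only: each VM carries its own guest OS,
# while containers share one kernel and add only small per-container overhead.

GUEST_OS_MB = 2048          # assumed guest OS footprint per VM
APP_MB = 256                # assumed application footprint
CONTAINER_OVERHEAD_MB = 10  # assumed per-container runtime bookkeeping
SHARED_KERNEL_MB = 512      # assumed single shared host kernel

def vm_memory(n_apps):
    return n_apps * (GUEST_OS_MB + APP_MB)

def container_memory(n_apps):
    return SHARED_KERNEL_MB + n_apps * (APP_MB + CONTAINER_OVERHEAD_MB)

for n in (1, 10, 50):
    print(n, vm_memory(n), container_memory(n))
```

With these assumptions, ten VM-hosted apps need about 23 GB while ten containerized apps need about 3 GB, and the gap widens as the count grows, which is the "lightweight" argument in numeric form.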

When you think about this fast-paced, DevOps world we live in — this increasingly distributed hybrid estate from the many edges and many clouds we compute and analyze data in — that’s why containers are showing quite a bit of popularity. It’s because of the business benefits, the technical benefits, the development benefits, and the operations benefits.

Gardner: It’s been fascinating for me to see the portability and fit-for-purpose containerization benefits, and being able to pass those along a DevOps continuum. But one of the things that we saw with virtualization was that too much of a good thing spun out of control. There was sprawl, lack of insight and management, and eventually waste.

How do we head that off with containers? How do containers become manageable across that entire hybrid estate?

Setting the standard 

Linesch: One way is standardizing the container formats, and that’s been coming along fairly nicely. There is an initiative called the Open Container Initiative, part of the Linux Foundation, that develops industry standards so that container formats and the runtime software associated with them are standardized across the different platforms. That helps a lot.

Number two is using a standard deployment option. And the one that seems to be gripping the industry is Kubernetes. Kubernetes is an open source capability that provides mechanisms for deploying, maintaining, and scaling containerized applications. The combination of the standard runtime formats with the ability to manage them through capabilities like Mesosphere or Kubernetes has provided the tooling to move this forward.
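The deploy-and-scale mechanism Linesch mentions can be reduced to a small decision function: compare the desired replica count with what is running and compute the corrective action. The names below are illustrative, not the Kubernetes API.

```python
# Toy scaling decision, in the spirit of a replica controller.
# Illustrative only; real controllers also weigh readiness, budgets, etc.

def scale_action(desired_replicas, running_replicas):
    if running_replicas < desired_replicas:
        return ("create", desired_replicas - running_replicas)
    if running_replicas > desired_replicas:
        return ("delete", running_replicas - desired_replicas)
    return ("noop", 0)

print(scale_action(5, 3))   # ('create', 2)
print(scale_action(2, 4))   # ('delete', 2)
```

Because the input is a declared count rather than a script of steps, the same function corrects both under- and over-provisioning, which is the essence of declarative deployment.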

Gardner: And the timing couldn’t be better, because as people are now focused on as-a-service for so much — whether it’s an application, infrastructure, and increasingly, entire data centers — we can focus on the business benefits and not the underlying technology. No one really cares whether it’s running in a virtualized environment, on bare metal, or in a container — as long as you are getting the business benefits.

Linesch: You mentioned that nobody really cares what they are running on, and I would postulate that they shouldn’t care. In other words, developers should develop, operators should operate. The first business benefit is the enormous agility that developers get and that IT operators get in utilizing standard containerized environments.

How to Extend the Cloud Experience Across Your Enterprise

Not only do they get an operations benefit, faster development, lower cost to operate, and those types of things, but they take less resources. So containers, because of their shared and abstracted environment, really take a lot fewer resources out of a server and storage complex, out of a cluster, so you can run your applications faster, with less resources, and at lower total cost.

This is very important when you think about IT composability in general because the combination of containerized environments with things like composable infrastructure provides the flexibility and agility to meet the needs of customers in a very time sensitive and very agile way.


Gardner: How are IT operators making a tag team of composability and containerization? Are they forming a whole greater than the sum of the parts? How do you see these two spurring innovation?

Linesch: I have managed some of our R&D centers. These are usually 50,000-square-foot data centers where all of our developers and hardware and software writers are off doing great work.

And we did some interesting things a few years ago. We were fully virtualized, a kind of private cloud environment, so we could deliver infrastructure-as-a-service (IaaS) resources to these developers. But as hybrid cloud hit and became more of a mature and known pattern, our developers were saying, “Look, I need to spin this stuff up more quickly. I need to be able to run through my development-test pipeline more effectively.”

And containers-as-a-service was just a super hit for these guys. They are under pressure every day to develop, build, and run these applications with the right security, portability, performance, and stability. The containerized systems — and being able to quickly spin up a container, to do work, package that all, and then move it through their pipelines — became very, very important.

From an infrastructure operations perspective, it provides a perfect marriage between the developers and the operators. The operators can use composition and things like our HPE Synergy platform and our HPE OneView tooling to quickly build container image templates. These then allow those developers to populate that containers-as-a-service infrastructure with the work that they do — and do that very quickly.

Gardner: Another hot topic these days is understanding how a continuum will evolve between the edge deployments and a core cloud, or hybrid cloud environment. How do containers help in that regard? How is there a core-to-cloud and/or core-to-cloud-to-edge benefit when containers are used?

Gaining an edge 

Linesch: I mentioned that we are moving to a much more distributed computing environment, where we are going to be injecting intelligence and processing through all of our places, people, and things. And so when you think about that type of an environment, you are saying, “Well, I’m going to develop an application. That application may require more microservices or more modular architecture. It may require that I have some machine learning (ML) or some deep learning analytics as part of that application. And it may then need to be provisioned to 40 — or 400 — different sites from a geographic perspective.”

When you think about edge-to-cloud, you might have a set of factories in different parts of the United States. For example, you may have 10 factories all seeking to develop inferencing and analyzed actions on some type of an industrial process. It might be video cameras attached to an assembly line looking for defects and ingesting data and analyzing that data right there, and then taking some type of a remediation action.

How to Optimize Your IT Operations 

With Composable Infrastructure 

And so as we think about this edge-to-cloud dance, one of the things that’s critical there is continuous integration and continuous delivery — being able to develop these applications and the artificial intelligence (AI) models associated with analyzing the data on an ongoing basis. The AI models, quite frankly, drift and need to be updated periodically. And so continuous integration and continuous delivery types of methodologies are becoming very important.

Then, how do I package up all of those application bits, analytics bits, and ML bits? How do I provision that to those 10 factories? How do I do that in a very fast and fluid way?
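As a rough sketch of that provisioning fan-out, one common pattern is to roll a packaged release out to sites in sequential batches, so a bad build never reaches every factory at once. Everything here (the site names, artifact tag, and batch size) is hypothetical.

```python
def build_rollout_plan(artifact, sites, batch_size=3):
    """Group target sites into sequential batches so a bad release can
    be halted before it reaches every site."""
    batches = [sites[i:i + batch_size] for i in range(0, len(sites), batch_size)]
    return [{"artifact": artifact, "batch": n + 1, "sites": batch}
            for n, batch in enumerate(batches)]

# Ten hypothetical factory sites receiving one packaged release.
sites = [f"factory-{i:02d}" for i in range(1, 11)]
plan = build_rollout_plan("defect-model:2.0", sites)
```

The same packaged container image goes to every batch; only the rollout order changes, which is what makes the "10 factories or 400 sites" problem tractable.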

That’s where containers really shine. They will give you bare-metal performance. They are packaged and portable – and that really lends itself to the fast-paced development and delivery cycles required for these kinds of intelligent edge and Internet of Things (IoT) operations.

Gardner: We have heard a lot about AIOps and injecting more intelligence into more aspects of IT infrastructure, particularly at the June HPE Discover conference. But we seem to be focusing on the gathering of the data and the analysis of the data, and not so much on what you do with that analysis – the execution based on the inferences.

It seems to me that containers provide a strong means when it comes to being able to exploit recommendations from an AI engine and then doing something — whether to deploy, to migrate, to port.

Am I off on some rough tangent? Or is there something about containers — and being able to deftly execute on what the intelligence provides — that might also be of benefit?

Linesch: At the edge, you are talking about many applications where a large amount of data needs to be ingested. It needs to be analyzed, and then take a real-time action from a predictive maintenance, classification, or remediation perspective.

And so containers spin up very quickly and use very few resources. The whole cycle time of ingesting data, analyzing that data through a container framework, and taking some action back to the thing you are analyzing is made a whole lot easier and a whole lot more performant, with fewer resources, when you use containers.
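That ingest-analyze-act cycle can be sketched as a simple loop. The readings, threshold, and "divert" action below are invented stand-ins for a real inference model and remediation step.

```python
def analyze(reading, threshold=0.8):
    # Stand-in for a real inference step, e.g. a defect classifier
    # running in a container next to the assembly line.
    return "defect" if reading > threshold else "ok"

def edge_cycle(readings):
    """Ingest a stream of readings, analyze each one locally, and
    collect the remediation actions taken."""
    actions = []
    for reading in readings:
        if analyze(reading) == "defect":
            actions.append(("divert", reading))
    return actions

actions = edge_cycle([0.2, 0.95, 0.5, 0.83])
```

The point is that the whole loop runs at the edge; only summaries or model updates need to travel back to the core.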

Now, virtualization still has a very solid set of constituents, both at the hybrid cloud and at the intelligent edge. But we are seeing the benefits of containers really shine in these more distributed edge-to-cloud environments.

Gardner: Mark, we have chunked this out among the developer to operations and deployment, or DevOps implications. And we have talked about the edge and cloud.

But what about at the larger abstraction of impacting the IT organization? Is there a benefit for containerization where IT is resource-constrained when it comes to labor and skills? Is there a people, skills, and talent side of this that we haven’t yet tapped into?

Customer microservices support 

Linesch: There definitely is. One of the things that we do at HPE is try to help customers move into these new models like containers, DevOps, and continuous integration and delivery. We offer a set of services that help customers, whether they are medium-sized customers or large customers, to think differently about development of applications. As a result, they are able to become more agile and microservices-oriented.

Microservice-oriented development really lends itself to this idea of containers, and the ability of containers to interact with each other as a full-set application. What you see happening is that you have to have a reason not to use containers now.

How to Simplify and Automate 

Across Your Datacenter 

That’s pretty exciting, quite frankly. It gives us an opportunity to help customers to engage from an education perspective, and from a consulting, integration, and support perspective as they journey through microservices and how to re-architect their applications.

Our customers are moving to a more continuous integration-continuous development approach. And we can show them how to manage and operate these types of environments with high automation and low operational cost.

Gardner: A lot of the innovation we see along the lines of digital transformation at a business level requires taking services and microservices from different deployment models — oftentimes multi-cloud, hybrid cloud, software-as-a-service (SaaS) services, on-premises, bare metal, databases, and so forth.

Are you seeing innovation percolating in that way? If you have any examples, I would love to hear them.

Linesch: I am seeing that. You see that every day when you look at the Internet. It’s a collaboration of different services based on APIs. You collect a set of services for a variety of different things from around these Internet endpoints, and that’s really as-a-service. That’s what it’s all about — the ability to orchestrate all of your applications and collections of service endpoints.

Furthermore, beyond containers, there are new function-based, or serverless, types of computing. These innovators basically say, “Hey, I want to consume a service from someplace, from an HTTP endpoint, and I want to do that very quickly.” They very effectively use service-oriented methodologies and the container model.
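At its core, a function-as-a-service endpoint is a named function invoked on demand. This toy registry (no real cloud provider involved; all names are invented) shows the dispatch pattern those innovators are consuming.

```python
# A toy function-as-a-service registry. Handler names and payloads are
# hypothetical; a real FaaS platform adds routing, scaling, and billing.
_functions = {}

def register(name):
    """Decorator that publishes a function under an endpoint name."""
    def wrap(fn):
        _functions[name] = fn
        return fn
    return wrap

@register("resize-image")
def resize_image(event):
    # A real handler would do actual work; this one echoes the request.
    return {"status": 200, "handled_by": "resize-image", "input": event}

def invoke(name, event):
    """Dispatch a request to a named function, as a FaaS gateway would."""
    if name not in _functions:
        return {"status": 404, "error": f"no function named {name}"}
    return _functions[name](event)

resp = invoke("resize-image", {"width": 100})
```

Because each function is stateless and self-contained, the platform is free to run it in a fresh container anywhere capacity exists.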

We are seeing a lot of innovation in these function-as-a-service (FaaS) capabilities that some of the public clouds are now providing. And we are seeing a lot of innovation in the overall operations at scale of these hybrid cloud environments, given the portability of containers.

At HPE, we believe the cloud isn’t a place — it’s an experience. The utilization of containers provides a great experience for both the development community and the IT operations community. It truly helps better support the business objectives of the company.

Investing in intelligent innovation

Gardner: Mark, for you personally, as you are looking for technology strategy, how do you approach innovation? Is this something that comes organically, that bubbles up? Or is there a solid process or workflow that gets you to innovation? How do you foster innovation in your own particular way that works?

Linesch: At HPE, we have three big levers that we pull on when we think about innovation.

The first is we can do a lot of organic development — and that’s very important. It involves understanding where we think the industry is going, and trying to get ahead of that. We can then prove that out with proof of concepts and incubation kinds of opportunities with lead customers.

We also, of course, have a lever around inorganic innovation. For example, you saw recently an acquisition by HPE of Cray to turbocharge the next generation of high-performance computing (HPC) and to drive the next generation of exascale computing.

The third area is our partnerships and investments. We have deep collaboration with companies like Docker, for example. They have been a great partner for a number of years, and we have, quite frankly, helped to mature some of that container management technology.

We are an active member of the standards organizations around containers. Being able to mature the technology with partners like Docker, to get at the business value of some of these big advancements, is important. So those are just three ways we innovate.

Longer term, with other HPE core innovations, such as composability and memory-driven computing, we believe that containers are going to be even more important. You will be able to hold the containers in memory-driven computing systems, in either dynamic random-access memory (DRAM) or storage-class memory (SCM).

You will be able to spin them up instantly or spin them down instantly. The composition capabilities that we have will increasingly automate a very significant part of bringing up such systems, of bringing up applications, and really scaling and moving those applications to where they need to be.

One of the principles that we are focused on is moving the compute to the data — as opposed to moving the data to the compute. And the reason for that is when you move the compute to the data, it’s a lot easier, simpler, and faster — with fewer resources.

This next generation of distributed computing, memory-driven computing, and composability is really ripe for what we call containers in microseconds. And we will be able to do that all with the composability tooling we already have.

Gardner: When you get to that point, you’re not just talking about serverless. You’re talking about cloudless. It doesn’t matter where the FaaS is being generated as long as it’s at the right performance level that you require, when you require it. It’s very exciting.

Before we break, I wonder what guidance you have for organizations to become better prepared to exploit containers, particularly in the context of composability and leveraging a hybrid continuum of deployments? What should companies be doing now in order to be getting better prepared to take advantage of containers?

Be prepared, get busy

Linesch: If you are developing applications, then think deeply about agile development principles; developing applications with a microservices bent is very, very important.

If you are in IT operations, it’s all about being able to offer bare metal, virtualization, and containers-as-a-service options — depending on the workload and the requirements of the business.

How to Manage Your Complex 

Hybrid Cloud More Effectively 

I recommend that companies not stand on the sidelines but get busy and get to a proof of concept with containers-as-a-service. We have a lot of expertise here at HPE. We have a lot of great partners, such as Docker, and so we are happy to help and engage.

We have quite a bit of on-boarding and helpful services along the journey. And so jump in and crawl, walk, and run through it. There are always some sharp corners on advanced technology, but containers are maturing very quickly. We are here to help our customers on that journey.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


The venerable history of IT systems management meets the new era of AIOps-fueled automation over hybrid and multicloud complexity

The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT management.

IT operators have for decades been playing catch-up to managing their systems amid successive waves of heterogeneity, complexity, and changing deployment models. IT management technologies and methods have evolved right along with the challenge, culminating in the capability to optimize and automate workloads to exacting performance and cost requirements.

But now automation is about to get an AIOps boost from new machine learning (ML) and artificial intelligence (AI) capabilities — just as multicloud and edge computing deployments become more common — and more demanding.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we explore the past, present, and future of IT management innovation with a 30-year veteran of IT management, Doug de Werd, Senior Product Manager for Infrastructure Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Management in enterprise IT has for me been about taking heterogeneity and taming it, bringing varied and dynamic systems to a place where people can operate over more, using less. And that’s been a 30-year journey.

Yet heterogeneity these days, Doug, includes so much more than it used to. We’re not just talking about platforms and frameworks – we’re talking about hybrid cloud, multicloud, and many software-as-a-service (SaaS) applications. It includes working securely across organizational boundaries with partners and integrating business processes in ways that have never happened before.

With all of that new complexity, with an emphasis on intelligent automation, where do you see IT management going next?

Managing management 

de Werd: Heterogeneity is known by another term, and that’s chaos. In trying to move from the traditional silos and tools to more agile, flexible things, IT management is all about your applications — human resources and finance, for example — that run the core of your business. There’s also software development and other internal things. The models for those can be very different, and trying to do that in a single manner is difficult because you have widely varying endpoints.

Gardner: Sounds like we are now about managing the management.

de Werd: Exactly. Trying to figure out how to do that in an efficient and economically feasible way is a big challenge.

Gardner: I have been watching the IT management space for 20-plus years and every time you think you get to the point where you have managed everything that needs to be managed — something new comes along. It’s a continuous journey and process.

But now we are bringing intelligence and automation to the problem. Will we ever get to the point where management becomes subsumed or invisible?

de Werd: You can automate tasks, but you can’t automate people. And you can’t automate internal politics and budgets and things like that. What you do is automate to provide flexibility.

How to Support DevOps, Automation,

And IT Management Initiatives 

But it’s not just the technology, it’s the economics and it’s the people. By putting that all together, it becomes a balancing act to make sure you have the right people in the right places in the right organizations. You can automate, but it’s still within the context of that broader picture.

Gardner: When it comes to IT management, you need a common framework. For HPE, HPE OneView has been core. Where does HPE OneView go from here? How should people think about the technology of management that also helps with those political and economic issues?

de Werd: HPE OneView is just an outstanding core infrastructure management solution, but it’s kind of like a car. You can have a great engine, but you still have to have all the other pieces.

And so part of what we are trying to do with HPE OneView, and we have been very successful, is extending that capability out into other tools that people use. This can be with more traditional tools, such as through our Microsoft or VMware partnerships, exposing and bringing HPE OneView functionality into those environments.

The integration allows the confidence of using HPE OneView as a core engine. All those other pieces can still be customized to do what you need to do — yet you still have that underlying core foundation of HPE OneView.

But it also has a lot to do with DevOps and the continuous integration development types of things with Docker, Chef, and Puppet — the whole slew of at least 30 partners we have.


Gardner: And now with HPE increasingly going to an as-a-service orientation across many products, how does management-as-a-service work?

Creativity in the cloud 

de Werd: It’s an interesting question, because part of management in the traditional sense — where you have a data center full of servers with fault management or break/fix, such as hard-drive failure detection — is that you want to be close; you want that notification immediately.

As you start going up into the cloud with deployments, you have connectivity issues, you have latency issues, so it becomes a little bit trickier. As you move up the stack, where you have software that can be more flexible, you can do more coordination. Then the cloud makes a lot of sense.

Management in the cloud can mean a lot of things. If it’s the infrastructure, you tend to want to be closer to the infrastructure, but not exclusively. So, there’s a lot of room for creativity.

Gardner: Speaking of creativity, how do you see people innovating both within HPE and within your installed base of users? How do people innovate with management now that it’s both on- and off-premises? It seems to me that there is an awful lot you could do with management beyond red-light, green-light, and seek out those optimization and efficiency goals. Where is the innovation happening now with IT management?

de Werd: The foundation of it begins with automation, because if you can automate you become repeatable, consistent, and reliable, and those are all good in your data center.

Transform Compute, Storage, and Networking

Into Software-Defined Infrastructure 

You can free up your IT staff to do other things. The truth is if you can do that reliably, you can spend more time innovating and looking at your problems from a different angle. You gain the confidence that the automation is giving you.

Automation drives creativity in a lot of different ways. You can be faster to market, have quicker releases, those types of things. I think automation is the key.

Gardner: Any examples? I know sometimes you can’t name customers, but can you think of instances where people are innovating with management in ways that would illustrate its potential?

Automation innovation 

de Werd: There’s a large biotech genome-sequencing company with a very innovative IT group. They can change their configuration on the fly based on what they want to do. They can flex their capacity up and down based on a task — how much compute and storage they need. They have a very flexible way of doing that. They have it all automated, all scripted. They can turn on a dime, even as a very large IT organization.

And they have had some pretty impressive ways of repurposing their IT. Today we are doing X and tonight we are doing Y. They can repurpose that literally in minutes — versus days for traditional tasks.
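That kind of scripted repurposing boils down to applying a different resource profile per task and computing the delta to reconcile. The workload names and node counts below are invented; a real system would drive composition tooling rather than just return a plan.

```python
# Hypothetical resource profiles for two recurring workloads; the node
# counts are invented for illustration.
PROFILES = {
    "daytime-analytics": {"nodes": 40, "gpu": False},
    "nightly-sequencing": {"nodes": 120, "gpu": True},
}

def repurpose(current, task):
    """Return the desired profile for a task and the node delta needed
    to get there from the current state."""
    profile = PROFILES[task]
    delta = profile["nodes"] - current["nodes"]
    # A real system would call composition APIs here to add or release
    # nodes; this sketch only computes the reconciliation required.
    return profile, delta

state, delta = repurpose({"nodes": 40, "gpu": False}, "nightly-sequencing")
```

Because the profiles are data rather than manual runbooks, switching from "today we are doing X" to "tonight we are doing Y" is a single scripted step.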

Gardner: Are your customers also innovating in ways that allow them to get a common view across the entire lifecycle of IT? I’m thinking from requirements, through development, deployment, test, and continuous redeployment.

de Werd: Yes, they can string all of these processes together using different partner tools, yet at the core they use HPE OneView and HPE Synergy underneath the covers to provide that real, raw engine.

By using the HPE partner ecosystem integrated with HPE OneView, they have that visibility. Then they can get into things like Docker Swarm. It may not be HPE OneView providing that total visibility. At the hardware and infrastructure level it is, but because we are feeding into upper-level and broader applications, they can see what’s going on and determine how to adjust to meet the needs across the entire business process.

Gardner: In terms of HPE Synergy and composability, what’s the relationship between composability and IT management? Are people making the whole greater than the sum of the parts with those?

de Werd: They are trying to. I think there is still a learning curve. Traditional IT has been around a long time. It just takes a while to change the mentality, skill sets, and internal politics. It takes a while to get to that point of saying, “Yeah, this is a good way to go.”

But once they dip their toes into the water and see the benefits — the power, flexibility, and ease of it — they are like, “Wow, this is really good.” One step leads to the next and pretty soon they are well on their way on their composable journey.

Gardner: We now see more intelligence brought to management products. I am thinking about how HPE InfoSight is being extended across more storage and server products.

How to Eliminate Complex Manual Processes 

And Increase Speed of IT Delivery 

We used to access log feeds from different IT products and servers. Then we had agents and agent-less analysis for IT management. But now we have intelligence as a service, if you will, and new levels of insight. How will HPE OneView evolve with this new level of increasingly pervasive intelligence?

de Werd: HPE InfoSight is a great example. You see it being used in multiple ways, things like taking the human element out, things like customer advisories coming out and saying, “Such-and-such product has a problem,” and how that affects other products.

If you are sitting there looking at 1,000 or 5,000 servers in your data center, you’re wondering, “How am I affected by this?” There are still a lot of manual spreadsheets out there, and you may find yourself poring over a list.

Today, you have the capability of getting an [intelligent alert] that says, “These are the ones that are affected. Here is what you should do. Do you want us to go fix it right now?” That’s just an example of what you can do.
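Matching an advisory against a live inventory, as opposed to poring over spreadsheets, is essentially a set intersection. The server IDs and model strings below are hypothetical.

```python
def affected_servers(inventory, advisory_models):
    """Return the IDs of servers whose model appears in an advisory."""
    models = set(advisory_models)
    return [s["id"] for s in inventory if s["model"] in models]

# Hypothetical inventory; IDs and model strings are invented.
inventory = [
    {"id": "srv-001", "model": "model-a"},
    {"id": "srv-002", "model": "model-b"},
    {"id": "srv-003", "model": "model-a"},
]
hits = affected_servers(inventory, ["model-a"])
```

An intelligent alerting system does this lookup automatically and can then offer the remediation step, which is the "do you want us to go fix it right now?" part.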

It makes you more efficient. You begin to understand how you are using your resources, where your utilization is, and how you can then optimize that. Depending on how flexible you want to be, you can design your systems to respond to those inputs and automatically flex [deployments] to the places that you want to be.

This leads to autonomous computing. We are not quite there yet, but we are certainly going in that direction. You will be able to respond to different compute, storage, and network requirements and adjust on the fly. There will also be self-healing and self-morphing into a continuous optimization model.

Gardner: And, of course, that is a big challenge these days … hybrid cloud, hybrid IT, and deploying across on-premises cloud, public cloud, and multicloud models. People know where they want to go with that, but they don’t know how to get there.

How does modern IT management help them achieve what you’ve described across an increasingly hybrid environment?

Manage from the cloud down 

de Werd: They need to understand what their goals are first. Just running virtual machines (VMs) in the cloud isn’t really where they want to be. That was the initial thing. There are economic considerations involved in the cloud, CAPEX and OPEX arguments.

Simply moving your infrastructure from on-premises up into the cloud isn’t going to get you where you really need to be. You need to look at it from a cloud-native-application perspective, where you are using microservices, containers, and cloud-enabled programming languages — your Javas and .NETs and all the other stateless types of things — all of which give you new flexibility to flex performance-wise.

From the management side, you have to look at different ways to do your development and different ways to do delivery. That’s where the management comes in. To do DevOps and exploit the DevOps tools, you have to flip the way you are thinking — to go from the cloud down.

Cloud application development on-premises is one of the great things about containers and cloud-native, stateless types of applications. There are no hardware dependencies, so you can develop the apps and services on-premises, and then run them in the cloud, run them on-premises, and/or use your hybrid cloud vendor’s capabilities to burst up into a cloud if you need to. That’s the joy of having those types of applications. They can run anywhere. They are not dependent on any particular underlying operating system.

But you have to shift and get into that development mode. And the automation helps you get there, and then helps you respond quickly once you do.

Gardner: Now that hybrid deployment continuum extends to the edge. There will be increasing data analytics, measurement, and making deployment changes dynamically from that analysis at the edge.

It seems to me that the way you have designed and architected HPE IT management is ready-made for such extensibility out to the edge. You could have systems run there that can integrate as needed, when appropriate, with a core cloud. Tell me how management as you have architected it over the years helps manage the edge, too.

de Werd: Businesses need to move their processing further out to the edge, and gain the instant response, instant gratification. You can’t wait to have an input analyzed on the edge, to have it go all the way back to a data source or all the way up to a cloud. You want to have the processing further and further toward the edge so you can get that instantaneous response that customers are coming to expect.

But again, being able to automate how to do that, and having the flexibility to respond to differing workloads and moving those toward the edge, I think, is key to getting there.

Gardner: And Doug, for you, personally, do you have some takeaways from your years of experience about innovation and how to make innovation a part of your daily routine?

de Werd: One of the big impacts on the team that I work with is in our quality assurance (QA) testing. It’s a very complex thing to test various configurations; that’s a lot of work. In the old days, we had to manually reconfigure things. Now, as we use an Agile development process, testing is a continuous part of it.

We can now respond very quickly and keep up with the Agile process. It used to be that testing was always the tail-end and the longest thing. Development testing took forever. Now because we can automate that, it just makes that part of the process easier, and it has taken a lot of stress off of the teams. We are now much quicker and nimbler in responses, and it keeps people happy, too.

How to Get Simple, Automated Management 

Of Your Hybrid Infrastructure 

Gardner: As we close out, looking to the future, where do you see management going, particularly how to innovate using management techniques, tools, and processes? Where is the next big green light coming from?

Set higher goals 

de Werd: First, get your house in order in terms of taking advantage of the automation available today. Really think about how not to just use the technology as the end-state. It’s more of a means to get to where you want to be.

Define where your organization wants to be. Where you want to be can have a lot of different aspects; it could be about how the culture evolves, or what you want your customers’ experience to be. Look beyond just, “I want this or that feature.”

Then, design your full IT and development processes. Get to that goal, rather than just saying, “Oh, I have 100 VMs running on a server, isn’t that great?” Well, if it’s not achieving the ultimate goal of what you want, it’s just a technology feat. Don’t use technology just for technology’s sake. Use it to get to the larger goals, and define those goals, and how you are going to get there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How the Catalyst UK program seeds the next generations of HPC, AI, and supercomputing

The next BriefingsDirect Voice of the Customer discussion explores a program to expand the variety of CPUs that support supercomputer and artificial intelligence (AI)-intensive workloads.

The Catalyst program in the UK is seeding the advancement of the ARM CPU architecture for high performance computing (HPC) as well as establishing a vibrant software ecosystem around it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to learn about unlocking new choices and innovation for the next generations of supercomputing with Dr. Eng Lim Goh, Vice President and Chief Technology Officer for HPC and AI at Hewlett Packard Enterprise (HPE), and Professor Mark Parsons, Director of the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, why is there a need now for more variety of CPU architectures for such use cases as HPC, AI, and supercomputing?

Mark Parsons


Parsons: In some ways this discussion is a bit odd because we have had huge variety over the years in supercomputing with regard to processors. It’s really only the last five to eight years that we’ve ended up with the majority of supercomputers being built from the Intel x86 architecture.

It’s always good in supercomputing to be on the leading edge of technology, and getting more variety in the processor is really important. It is interesting to seek out different processor designs to get better performance for AI or supercomputing workloads. We want the best types of processors for what we want to do today.

Gardner: What is the Catalyst program? Why did it come about? And how does it help address those issues?

Parsons: The Catalyst UK program is jointly funded by a number of large companies and three universities: The University of Bristol, the University of Leicester, and the University of Edinburgh. It is UK-focused because Arm Holdings is based in the UK, and there is a long history in the UK of exploring new processor technologies.

Through Catalyst, each of the three universities hosts a 4,000-core ARM processor-based system. We are running them as services. At my university, for example, we now have a number of my staff using this system. But we also have external academics using it, and we are gradually opening it up to other users.

Catalyst for change in processor

We want as many people as possible to understand how difficult it will be to port their code to ARM. Or, rather — as we will explore in this podcast — how easy it is.

You only learn by breaking stuff, right? And so, we are going to learn which bits of the software tool chain, for example, need some work. [Such porting is necessary] because ARM predominantly sat in the mobile phone world until recently. The supercomputing and AI world is a different space for the ARM processor to be operating in.

Gardner: Eng Lim, why is this program of interest to HPE? How will it help create new opportunity and performance benchmarks for such uses as AI?



Goh: Mark makes a number of very strong points. First and foremost, we are very keen as a company to broaden the reach of HPC among our customers. If you look at our customer base, a large portion of them come from the commercial HPC sites, the retailers, banks, and across the financial industry. Letting them reach new types of HPC is important and a variety of offerings makes it easier for them.

The second thing is the recent reemergence of more AI applications, which also broadens the user base. There is also a need for greater specialization in certain areas of processor capabilities. We believe in this case the ARM processor — given that it enables different companies to build innovative variations of the processor — will provide a rich set of new options in the area of AI.

Gardner: What is it, Mark, about the ARM architecture and specifically the Marvell ThunderX2 ARM processor that is so attractive for these types of AI workloads?

Expanding memory for the future 

Parsons: It’s absolutely the case that all numerical computing — AI, supercomputing, and desktop technical computing — is controlled by memory bandwidth. This is about getting data to the processor so the processor core can act on it.

What we see in the ThunderX2 now, as well as in future iterations of this processor, is the strong memory bandwidth capabilities. What people don’t realize is a vast amount of the time, processor cores are just waiting for data. The faster you get the data to the processor, the more compute you are going to get out with that processor. That’s one particular area where the ARM architecture is very strong.
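The point that cores spend much of their time waiting for data can be made concrete with a simple roofline-style calculation: attainable performance is the lesser of a processor's peak compute rate and its memory bandwidth multiplied by the kernel's arithmetic intensity (FLOPs performed per byte moved). The sketch below uses illustrative, assumed numbers rather than actual ThunderX2 specifications.

```python
# Roofline sketch: attainable performance is capped by either peak compute
# or by memory bandwidth times arithmetic intensity (FLOPs per byte moved).
# The figures below are illustrative assumptions, not vendor specs.

PEAK_GFLOPS = 1000.0   # assumed peak compute of one socket, GFLOP/s
BANDWIDTH_GBS = 250.0  # assumed sustained memory bandwidth, GB/s

def attainable_gflops(arithmetic_intensity: float) -> float:
    """Attainable GFLOP/s for a kernel with the given FLOPs-per-byte ratio."""
    return min(PEAK_GFLOPS, BANDWIDTH_GBS * arithmetic_intensity)

# A memory-bound kernel such as a large stencil (~0.5 FLOPs/byte) is limited
# by bandwidth, not by the cores:
print(attainable_gflops(0.5))   # 125.0 GFLOP/s -- cores mostly wait for data
# A compute-bound kernel such as dense matrix multiply (~10 FLOPs/byte) can
# approach peak:
print(attainable_gflops(10.0))  # 1000.0 GFLOP/s
```

On these assumed numbers, raising memory bandwidth directly lifts the ceiling for every memory-bound kernel, which is why the bandwidth story matters so much for both simulation and AI workloads.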

Goh: Indeed, memory bandwidth is the key. Not only in supercomputing applications, but especially in machine learning (ML) where the machine is in the early phases of learning, before it does a prediction or makes an inference.

How UK Universities Collaborate with HPE to Advance ARM-Based Supercomputing

It has to go through the process of learning, and this learning is a highly data-intensive process. You have to consume massive amounts of historical data and examples in order to tune itself to a model that can make good predictions. So, memory bandwidth is utmost in the training phase of ML systems.

And related to this is the fact that the ARM processor’s core intellectual property is available to many companies to innovate around. More companies therefore recognize they can leverage that intellectual property and build high-memory bandwidth innovations around it. They can come up with a new processor. Such an ability to allow different companies to innovate is very valuable.


Gardner: Eng Lim, does this fit in with the larger HPE drive toward memory-intensive computing in general? Does the ARM processor fit into a larger HPE strategy?

Goh: Absolutely. The ARM processor together with the other processors provide choice and options for HPE’s strategy of being edge-centric, cloud-enabled, and data-driven.

Across that strategy, the commonality is data movement. And as such, the ARM processor allowing different companies to come in to innovate will produce processors that meet the needs of all these various kinds of sectors. We see that as highly valuable and it supports our strategy.

Gardner: Mark, Arm Holdings controls the intellectual property, but there is a budding ecosystem both on the processor design as well as the software that can take advantage of it. Tell us about that ecosystem and why the Catalyst UK program is facilitating a more vibrant ecosystem.

The design-to-build ecosystem 

Parsons: The whole Arm story is very, very interesting. This company grew out of home computing about 30 to 40 years ago. The interesting thing is the way that they are an intellectual property company, at the end of the day. Arm Holdings itself doesn’t make processors. It designs processors and sells those designs to other people to make.

So, we’ve had this wonderful ecosystem of different companies making their own ARM processors or making them for other people. With the wide variety of different ARM processors in mobile phones, for example, there is no surprise that it’s the most common processor in the world today.

Now, people think that x86 processors rule the roost, but actually they don’t. The most common processor you will find is an ARM processor. As a result, there is a whole load of development tools that come both from ARM and also within the developer community that support people who want to develop code for the processors.

In the context of Catalyst UK, in talking to Arm, it’s quite clear that many of their tools are designed to meet their predominant market today, the mobile phone market. As they move into the higher-end computing space, it’s clear we may find things in the programs where the compiler isn’t optimized. Certain libraries may be difficult to compile, and things like that. And this is what excites me about the Catalyst program. We are getting to play with leading-edge technology and show that it is easy to use all sorts of interesting stuff with it.


Gardner: And while the ARM CPU is being purpose-focused for high-intensity workloads, we are seeing more applications being brought in, too. How does the porting process of moving apps from x86 to ARM work? How easy or difficult is it? How does the Catalyst UK program help?

Parsons: All three of the universities are porting various applications that they commonly use. At the EPCC, we run the national HPC service for the UK called ARCHER. As part of that we have run national [supercomputing] services since 1994, but as part of the ARCHER service, we decided for the first time to offer many of the common scientific applications as modules.

You can just ask for the module that you want to use. Because we saw users compiling their own copies of code, we had multiple copies, some of them identically compiled, others not compiled particularly well.

So, we have a model of offering about 40 codes on ARCHER as precompiled modules, where we try to keep them up to date, patch them, and so on. We have 100 staff at EPCC who look after code. I have asked those staff to get an account on the Catalyst system, take their code across, and spend an afternoon trying to compile it. We already know that some codes just compile and run. Others may have problems, and it’s those that we’re passing on to ARM and HPE, saying, “Look, this is what we found out.”

The important thing is that we found there are very few programs [with such problems]. Most code is simply recompiling very, very smoothly.
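The triage workflow Parsons describes — attempt a build for each code, note which succeed out of the box, and escalate the rest — can be sketched as a small script. The code names and build commands below are hypothetical stand-ins, not the actual ARCHER module list.

```python
import subprocess

# Hypothetical sketch of the porting triage described above: try to build
# each application on the ARM system and record which ones need follow-up.
# Code names and build commands are illustrative assumptions.

def triage(build_commands, runner=None):
    """Run each build command; return (succeeded, needs_work) lists of names."""
    if runner is None:
        # Default: actually invoke the build and inspect its exit status.
        runner = lambda cmd: subprocess.run(cmd, capture_output=True).returncode
    ok, needs_work = [], []
    for name, cmd in build_commands.items():
        (ok if runner(cmd) == 0 else needs_work).append(name)
    return ok, needs_work

# Dry run with a stub runner standing in for real compiler invocations:
codes = {"cfd_solver": ["make", "cfd"], "md_engine": ["make", "md"]}
ok, needs_work = triage(codes, runner=lambda cmd: 0 if cmd[1] == "cfd" else 1)
print(ok, needs_work)  # ['cfd_solver'] ['md_engine']
```

The `needs_work` list is what would get passed on to the compiler and library teams, exactly as described in the interview.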

Gardner: How does HPE support that effort, both in terms of its corporate support but also with the IT systems themselves?

ARM’s reach 

Goh: We are very keen about the work that Mark and the Catalyst program are doing. As Mark mentioned, the ARM processor came more from the edge-centric side of our strategy. In mobile phones, for example.

Now we are very keen to see how far these ARM systems can go. Already we have shipped a large ARM processor-based supercomputer called Astra to the US Department of Energy’s Sandia National Laboratories. These efforts are ongoing in the area of HPC applications. We are very keen to see how this processor and its compilers work with various HPC applications in the UK and the US.

Gardner: And as we look to the larger addressable market, with the edge and AI being such high-growth markets, it strikes me that supercomputing — something that has been around for decades — is not fully mature. We are entering a whole new era of innovation.

Mark, do you see supercomputing as in its heyday, sunset years, or perhaps even in its infancy?

Parsons: I absolutely think that supercomputing is still in its infancy. There are so many bits in the world around us that we have never even considered trying to model, simulate, or understand on supercomputers. It’s strange because quite often people think that supercomputing has solved everything — and it really hasn’t. I will give you a direct example of that.

A few years ago, a European project I was running won an award for simulating the highest accuracy of water flowing through a piece of porous rock. It took over a day on the whole of the national service [to run the simulation]. We won a prize for this, and we only simulated 1 cubic centimeter of rock.

People think supercomputers can solve massive problems — and they can, but the universe and the world are complex. We’ve only scratched the surface of modeling and simulation.

This is an interesting moment in time for AI and supercomputing. For a lot of data analytics, we have at our fingertips for the very first time very, very large amounts of data. It’s very rich data from multiple sources, and supercomputers are getting much better at handling these large data sources.

The reason the whole AI story is really hot now, and lots of people are involved, is not actually about the AI itself. It’s about our ability to move data around and use our data to train AI algorithms. The link directly into supercomputing is because in our world we are good at moving large amounts of data around. The synergy now between supercomputing and AI is not to do with supercomputing or AI – it is to do with the data.

Gardner: Eng Lim, how do you see the evolution of supercomputing? Do you agree with Mark that we are only scratching the surface?

Top-down and bottom-up data crunching 

Goh: Yes, absolutely, and it’s an early scratch. It’s still very early. I will give you an example.

Solving games is important for developing methods and strategies for cyber defense. Take the most recent game at which machines are beating the best human players: the game of Go. It is much more complex than chess in terms of the number of potential combinations. The number of combinations is actually 10^171 if you comprehensively go through all the different combinations of the game.


You know how big that number is? Well, if we took all the computers in the world together — all the supercomputers, all the computers in the data centers of the Internet companies — and ran them for 100 years, all you could do is about 10^30 combinations, which is still very far from 10^171. So, you can see from this one game example alone that we are very early in that scratch.
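Goh's scale gap is easy to sanity-check with back-of-envelope arithmetic. All of the inputs below are rough assumptions chosen only to illustrate the order of magnitude.

```python
import math

# Back-of-envelope check of the scale gap described above.
# Every input is a rough, assumed figure for illustration only.
devices = 1e9                # assumed number of computers worldwide
positions_per_second = 1e12  # assumed game positions each evaluates per second
seconds_in_100_years = 100 * 365 * 24 * 3600

positions_explored = devices * positions_per_second * seconds_in_100_years
print(f"~10^{math.log10(positions_explored):.0f} positions in a century")

go_positions = 10.0 ** 171
print(f"shortfall factor: ~10^{math.log10(go_positions) - math.log10(positions_explored):.0f}")
```

Even with these generous assumptions, a century of global computing covers roughly 10^30 positions, leaving a shortfall of well over a hundred orders of magnitude against Go's 10^171 combinations.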

A second group of examples relates to new ways that supercomputers are being used. From ML to AI, there is now a new class of applications changing how supercomputers are used. Traditionally, most supercomputers have been used for simulation. That’s what I call top-down modeling. You create your model out of physics equations or formulas and then you run that model on a supercomputer to try and make predictions.

The new way of making predictions uses the ML approach. You do not begin with physics. You begin with a blank model and you keep feeding it data — the outcomes of history and past examples. You keep feeding data into the model, which is written in such a way that, for each new piece of data that is fed in, a new prediction is made. If the accuracy is not high, you keep tuning the model. Over time — with thousands, hundreds of thousands, and even millions of examples — the model gets tuned to make good predictions. I call this the bottom-up approach.
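The bottom-up loop described above — blank model, feed an example, nudge the model by its prediction error, repeat — can be shown in miniature. The single-weight model and toy data below are assumptions for illustration, not any particular HPE or production training code.

```python
# Minimal sketch of the "bottom-up" approach described above: start with a
# blank model, feed it examples one at a time, and nudge it toward better
# predictions after each one. The model and data are toy assumptions.

def train(examples, lr=0.01, epochs=200):
    w = 0.0  # "blank" model: a single weight
    for _ in range(epochs):
        for x, y in examples:
            prediction = w * x
            error = prediction - y
            w -= lr * error * x  # tune the model by its prediction error
    return w

# Historical example data where the true relationship is y = 2x:
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # converges toward 2.0
```

No physics equation is ever written down; the relationship y = 2x is recovered purely from examples, which is the essence of the bottom-up approach Goh contrasts with top-down simulation.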

Now we have people applying both approaches. Supercomputers used traditionally in a top-down simulation are also employing the bottom-up ML approach. They can work in tandem to make better and faster predictions.

Supercomputers are therefore now being employed for a new class of applications in combination with the traditional or gold-standard simulations.

Gardner: Mark, are we also seeing a democratization of supercomputing? Can we extend these applications and uses? Is what’s happening now decreasing the cost, increasing the value, and therefore opening these systems up to more types of uses and more problem-solving?

Cloud clears the way for easy access

Parsons: Cloud computing is having a big impact on everything that we do, to be quite honest. We have all of our photos in the cloud, our music in the cloud, et cetera. That’s why EPCC last year got rid of its file server. All our data running the actual organization is in the cloud.

The cloud model is great inasmuch as it allows people who don’t want to operate and run a large system 100 percent of the time the ability to access these technologies in ways they have never been able to do before.

The other side of that is that there are fantastic software frameworks now that didn’t exist even five years ago for doing AI. There is so much open source for doing simulations.

It doesn’t mean that an organization like EPCC, which is a supercomputing center, will stop hosting large systems. We are still great aggregators of demand. We will still have the largest computers. But it does mean that, for the first time through the various cloud providers, any company, any small research group and university, has access to the right level of resources that they need in a cost-effective way.

Gardner: Eng Lim, do you have anything more to offer on the value and economics of HPC? Does paying based on use rather than a capital expenditure change the game?

More choices, more innovation 

Goh: Oh, great question. There are some applications and institutions with processes that work very well in a cloud, and there are some applications and processes that don’t. That’s part of the reason why you embrace both. And, in fact, we at HPE embrace the cloud, and we also build on-premises solutions for our customers, like the one at the Catalyst UK program.

We also have something that is a mix of the two. We call that HPE GreenLake, which is the ability for us to acquire the system the customer needs, while the customer pays per use. This delivers a software-defined experience with consumption-based economics.

These are some of the options we put together to allow choice for our customers, because there is a variation of needs and processes. Some are more CAPEX-oriented in the way they acquire resources and others are more OPEX-oriented.
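The CAPEX-versus-consumption trade-off reduces to a break-even calculation: at what utilization does an up-front purchase become cheaper than paying per use? The prices and cluster size below are entirely made-up assumptions, not GreenLake pricing.

```python
# Hypothetical comparison of the two acquisition models mentioned above.
# All prices and utilization figures are made-up assumptions.

capex_total = 1_000_000.0       # assumed up-front purchase price
payg_rate_per_node_hour = 0.50  # assumed pay-per-use price per node-hour
nodes = 100

def payg_cost(hours_used: float) -> float:
    """Total consumption-based cost for the whole cluster."""
    return nodes * hours_used * payg_rate_per_node_hour

breakeven_hours = capex_total / (nodes * payg_rate_per_node_hour)
print(breakeven_hours)  # 20000.0 node-usage hours (~2.3 years at 100% utilization)
```

On these assumed numbers, a site that keeps the system busy for years comes out ahead buying outright, while bursty or uncertain workloads favor the pay-per-use model — which is why offering both matters.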

Gardner: Do you have examples of where some of the fruits of Catalyst, and some of the benefits of the ecosystem approach, have led to applications, use cases, and demonstrated innovation?

Parsons: What we are trying to do is show how easy ARM is to use. We have taken some really powerful, important code that runs every day on our big national services and have simply moved them across to ARM. Users don’t really understand or don’t need to understand they are running on a different system. It’s that boring.

We have picked up one or two problems with code that probably exist in the x86 version too, but because you are running on a new processor, they are exposed more readily, and we are fixing them. But in general — and this is absolutely the wrong message for an interview — we are proceeding in a very boring way. The reason I say that is that it’s really important that this is boring, because if we don’t show this is easy, people won’t put ARM on their next procurement list. They will think that it’s too difficult, that it’s going to be too much trouble to move codes across.

One of the aims of Catalyst, and I am joking, is definitely to be boring. And I think at this point in time we are succeeding.

More interestingly, though, another aim of Catalyst is about storage. The ARM systems around the world today still tend to do storage on x86. The storage will be running on Lustre or BeeGFS server, all sitting on x86 boxes.

We have made a decision to do everything on ARM, if we can. At the moment, we are looking at different storage software on ARM servers. We are looking at Ceph, at Lustre, and at BeeGFS, because unless you have the ecosystem running on ARM as well, people won’t think it’s as pervasive a solution as x86, or Power, or whatever.

The benefit of being boring 

Goh: Yes, in this case boring is good. Seamless movement of code across different platforms is the key. It’s very important for an ecosystem to be successful. It needs to be easy to develop code for, and it needs to be easy to port to. And those are just as important with our commercial HPC systems for the broader HPC customer base.

In addition to customers writing their own code and compiling it well and easily to ARM, we also want to make it easy for the independent software vendors (ISVs) to join and strengthen this ecosystem.

Parsons: That is one of the key things we intend to do over the next six months. We have good relationships, as does HPE, with many of the big and small ISVs. We want to get them on a new kind of system, let them compile their code, and get some help to do it. It’s really important that we end up with ISV code on ARM, all running successfully.

Gardner: If we are in a necessary, boring period, what will happen when we get to a more exciting stage? Where do you see this potentially going? What are some of the use cases using supercomputers to impact business, commerce, public services, and public health?

Goh: It’s not necessarily boring, but it is brilliantly done. There will be richer choices coming to supercomputing. That’s the key. Supercomputing and HPC need to reach a broader customer base. That’s the goal of our HPC team within HPE.

Over the years, we have increased our reach to the commercial side, such as the financial industry and retailers. Now there is a new opportunity coming with the bottom-up approach of using HPC. Instead of building models out of physics, we train the models with example data. This is a new way of using HPC. We will reach out to even more users.


So, the success of our supercomputing industry is getting more users, with high diversity, to come on board.

Gardner: Mark, what are some of the exciting outcomes you anticipate?

Parsons: As we get more experience with ARM, it will become a serious player. If you look around the world today — in Japan, for example — they have a big new ARM-based supercomputer that’s going to be similar to the ThunderX2 when it’s launched.

I predict in the next three or four years we are going to see some very significant supercomputers up at the X2 level, built from ARM processors. Based on what I hear, the next generations of these processors will produce a really exciting time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


HPE and PTC join forces to deliver best manufacturing outcomes from the OT-IT productivity revolution

The next BriefingsDirect Voice of the Customer edge computing trends discussion explores the rapidly evolving confluence of operational technology (OT) and Internet of Things (IoT).

New advances in data processing, real-time analytics, and platform efficiency have prompted innovative and impactful OT approaches at the edge. We’ll now explore how such data analysis platforms bring manufacturers data-center caliber benefits for real-time insights where they are needed most.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To hear more about the latest capabilities in gaining unprecedented operational insights, we sat down with Riaan Lourens, Vice President of Technology in the Office of the Chief Technology Officer at PTC, and Tripp Partain, Chief Technology Officer of IoT Solutions at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Riaan, what kinds of new insights are manufacturers seeking into how their operations perform?

Riaan Lourens


Lourens: We are in the midst of a Fourth Industrial Revolution, which is really an extension of the third, where we used electronics and IT to automate manufacturing. Now, the fourth is the digital revolution, a fusion of technology and capabilities that blur the lines between the physical and digital worlds.

With the influx of these technologies, both hardware and software, our customers — and manufacturing as a whole, as well as the discrete process industries — are finding opportunities to either save or make more money. The trend is focused on looking at technology as a business strategy, as opposed to just pure IT operations.

There are a number of examples of how our customers have leveraged technology to drive their business strategy.

Gardner: Are we entering a golden age by combining what OT and IT have matured into over the past couple of decades? If we call this Industrial Revolution 4.0 (I4.0) there must be some kind of major opportunities right now.

Lourens: There are a lot of initiatives out there, whether it’s I4.0, Made in China 2025, or the Smart Factory Initiative in the US. By democratizing the process of providing value — be it with cloud capabilities, edge computing, or anything in between — we are inherently providing options for manufacturers to solve problems that they were not able to solve before.

If you look at it from a broader technology standpoint, in the past we had very large, monolith-like deployments of technology. If you look at it from the ISA-95 model, at Level 3 or Level 4 — your manufacturing execution system (MES) deployments or large-scale enterprise resource planning (ERP) — those were very large deployments that took many years. And the return on investment (ROI) the manufacturers saw would potentially pay off over many years.

The opportunity that exists for manufacturers today, however, allows them to solve problems that they face almost immediately. There is quick time-to-value by leveraging technology that is consumable. Then they can lift and drop and so scale [those new solutions] across the enterprise. That does make this an era the likes of which nobody has seen before.

Gardner: Tripp, do you agree that we are in a golden age here? It seems to me that we are able to both accommodate a great deal of diversity and heterogeneity of the edge, across all sorts of endpoints and sensors, but also bring that into a common-platform approach. We get the best of efficiency and automation.

Tripp Partain


Partain: There is a combination of two things. One, due to the smartphone evolution over the last 10 years, the types of sensors and chips that were created to drive that at the consumer level are now at such reasonable price points that you can apply them to industrial areas.

To Riaan’s point, the price points of these technologies have gotten really low — but the capabilities are really high. A lot of existing equipment in a manufacturing environment that might have 20 or 30 years of life left can be retrofitted with these sensors and capabilities to give insights and compute capabilities at the edge. The capability to interact in real-time with those sensors provides platforms that didn’t exist even five years ago. That combines with the right software capabilities so that manufacturers and industrials get insights that they never had before into their processes.
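The kind of real-time, edge-side interaction with retrofitted sensors described above often amounts to simple streaming logic: keep a short rolling window of readings and flag values that drift far from the recent baseline. The monitor below is an illustrative sketch with made-up thresholds, not any PTC or HPE product code.

```python
from collections import deque

# Illustrative sketch of edge-side processing on a retrofitted sensor:
# keep a rolling window of readings and flag values far from the recent
# average. Window size and threshold are made-up assumptions.

class VibrationMonitor:
    def __init__(self, window=5, threshold=2.0):
        self.readings = deque(maxlen=window)  # rolling window of recent values
        self.threshold = threshold

    def ingest(self, value: float) -> bool:
        """Return True if the reading is anomalous vs the rolling mean."""
        anomalous = bool(
            self.readings
            and abs(value - sum(self.readings) / len(self.readings)) > self.threshold
        )
        self.readings.append(value)
        return anomalous

monitor = VibrationMonitor()
stream = [1.0, 1.1, 0.9, 1.0, 5.0]  # last reading is a spike
print([monitor.ingest(v) for v in stream])  # [False, False, False, False, True]
```

Because the check needs only the last few readings, it can run on a small edge gateway next to the machine, which is exactly the low-cost, high-capability deployment pattern being discussed.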

Gardner: How is the partnership between PTC and HPE taking advantage of this new opportunity? It seems you are coming from different vantage points but reinforcing one another. How is the whole greater than the sum of the parts when it comes to the partnership?

Partnership for progress, flexibility

Lourens: For some context, PTC is a software vendor. Over the last 30 years we targeted our efforts at helping manufacturers either engineer software with computer-aided design (CAD) or product lifecycle management (PLM). We have evolved to our growth areas today of IoT solution platforms and augmented reality (AR) capabilities.

The challenge that manufacturers face today is not just a software problem. It requires a robust ecosystem of hardware vendors, software vendors, and solutions partners, such as regional or global systems integrators.

The reason we work very closely with HPE as an alliance partner is because HPE is a leader in the space. HPE has a strong offering of compute capabilities — from very small gateway-level compute all the way through to hybrid technologies and converged infrastructure technologies.

Ultimately our customers need flexible options to deploy software at the right place, at the right time, and throughout any part of their network. We find that HPE is a strong partner on this front.

Gardner: Tripp, not only do we have lower cost and higher capability at the edge, we also have a continuum of hybrid IT. We can use on-premises micro-datacenters, converged infrastructure, private cloud, and public cloud options to choose from. Why is that also accelerating the benefits for manufacturers? Why is a continuum of hybrid IT – edge to cloud — an important factor?

Partain: That flexibility is required if you look at the industrial environments where these problems are occurring for our joint customers. If you look at any given product line where manufacturing takes place — no two regions are the same and no two factories are the same. Even within a factory, a lot of times, no two production lines are the same.

There is wide diversity in how manufacturing takes place. You need to be able to meet those challenges with the customers and give them deployment options that suit each of those environments.

It’s interesting. Factories don’t do enterprise IT-like deployments, where every factory takes on new capabilities at the same time. It’s much more balanced in the way that products are made. You have to be able to have that same level of flexibility in how you deploy the solutions, to allow it to be absorbed the same way the factories do all of their other types of processes.

We have seen the need for different levels of IT to match up to the way they are implemented in different types of factories. That flexibility meets them where they are and allows them to get to the value much quicker — and not wait for some huge enterprise rollout, like what Riaan described earlier with ERP systems that take multiple years.

By leveraging new, hybrid, converged, and flexible environments, we allow a single plant to deploy multiple solutions and get results much quicker. We can also still work that into an enterprise-wide deployment — and get a better balance between time and return.

Gardner: Riaan, you earlier mentioned democratization. That jumped out at me. How are we able to take these advances in systems, software, and access and availability of deployments and make that consumable by people who are not data scientists? How are we able to take the results of what the technology does and make it actionable, even using things like AR?

Lourens: As Tripp described, every manufacturing facility is different. There are typically different line configurations, different programmable logic controller (PLC) configurations, different heterogeneous systems — be it legacy IT systems or homegrown systems — so the ability to leverage what is there is inherently important.

From a strategic perspective, PTC has two core platforms; one being our ThingWorx Platform that allows you to source data and information from existing systems that are there, as well as from assets directly via the PLC or by embedding software into machines.

We also have the ability to simplify and contextualize all of that information and make sense of it. We can then drive analytical insights out of the data that we now have access to. Ultimately we can orchestrate with end users in their different personas — be that the maintenance operator, supervisor, or plant manager — enabling and engaging with these different users through AR.

Four capabilities for value

There are four capabilities that allow you to derive value. Ultimately our strategy is to bring that up a level and to provide capabilities solutions to our end customers across four different areas.

One, we look at it from an enterprise operational intelligence perspective; the second is intelligent asset optimization; the third, digital workforce productivity; and the fourth, scalable production management.

So across those four solution areas we can apply our technology together with that of our sourced partners. We allow our customers to find use-cases within those four solution areas that provide them a return on investment.

One example of that would be leveraging augmented work instructions. So instead of an operator going through a maintenance procedure by opening a folder of hundreds of pages of instructions, they can leverage new technology such as AR to guide the operator in process, and in situ, in terms of how to do something.

There are many use cases across those four solution areas that leverage the core capabilities across the IoT platform, ThingWorx, as well as the AR platform, Vuforia.

Gardner: Tripp, it sounds like we are taking the best of what people can do and the best of what systems and analytics can do. We also move from batch processing to real time. We have location-based services so we can tell where things and people are in new ways. And then we empower people in ways that we hadn’t done before, such as AR.

Are we at the point where we’re combining the best of cognitive human capabilities and machine capabilities?

Partain: I don’t know if we have gotten to the best yet, but probably the best of what we’ve had so far. As we continue to evolve these technologies and find new ways to look at problems with different technology — it will continue to evolve.

We are getting to the new sweet spot, if you will, of putting the two together and being able to drive advancements forward. One of the things that’s critical has to do with where our current workforce is.

A number of manufacturers I talk to — and I’ve heard similar from PTC’s customers and our joint customers — are at a tipping point in terms of the current talent pool, with many of those currently employed getting close to retirement age.

The next generation that’s coming in is not going to have the same longevity and the same skill sets. Having these newer technologies and bringing these pieces together, it’s not only a new matchup based on the new technology – it’s also better suited for the type of workers carrying these activities forward. Manufacturing is not going away, but it’s going to be a very different generation of factory workers and types of technologies.

The solutions are now available to really enhance those jobs. We are starting to see all of the pieces come together. That’s where both IoT solutions — and especially AR solutions like PTC Vuforia — really come into play.

Gardner: Riaan, in a large manufacturing environment, even small iterative improvements can make a big impact on the economics, the bottom line. What sort of future categorical improvements in value are we looking at? To what degree do we have an opportunity to make manufacturing more efficient, more productive, and more economically powerful?

Tech bridges skills gap, talent shortage

Lourens: If you look at it from the angle that Tripp just referred to, there are a number of increasing pressures across the board in the industrial markets, including the workers’ skills gap. Products are also becoming more complex. Workspaces are becoming more complex. There are also increasing customer demands and expectations. Markets are just becoming more fiercely competitive.

But if you leverage capabilities such as AR — which provides augmented 3-D work instructions, expert guidance, and remote assistance, training, and demonstrations — that’s one area. If you combine that, to Tripp’s point, with the new IoT capabilities, then I think you can look at improvements such as reducing waste in processes and materials.

We have seen customers reducing by 30 percent unplanned downtime, which is a very common use case that we see manufacturers target. We also see reducing energy consumption by 3 to 7 percent at a very large ship manufacturer, a customer of PTC’s. And we’re generally looking at improving productivity by 20 to 30 percent.

By leveraging this technology in a meaningful way to get iterative improvements, you can then scale it across the enterprise very rapidly, and multiple use cases can become part of the solution. In these areas of opportunity, very rapidly you get that ROI.

Gardner: Do we have concrete examples to help illustrate how those general productivity benefits come about?

Joint solutions reduce manufacturing pains 

Lourens: A joint customer of HPE and PTC focuses on manufacturing and distributing reusable and recyclable food packaging containers. The company, CuBE Packaging Solutions, targeted predictive maintenance in manufacturing. Their goal is to have the equipment notify them when attention is needed. That allows them to service what they need when they need to and focus on reducing unplanned downtime.

In this particular example, there are a number of technologies at play across our two companies. The HPE Nimble Storage capability and HPE Synergy technology were leveraged, as well as a whole variety of HPE Aruba switches and wireless access points, along with PTC’s ThingWorx solution platform.

The CuBE Packaging solution ultimately was pulled together through an ecosystem partner, Callisto Integration, which we both worked with very closely. In this use case, we not only targeted the plastic molding assets that they were monitoring, but also the peripheral equipment, such as cooling and air systems, that may impact their operations. The goal is to avoid anything that could pause their injection molding equipment and plants.

Gardner: Tripp, any examples of use-cases that come to your mind that illustrate the impact?

Partain: Another joint customer that comes to mind is Texmark Chemicals in Galena Park, Texas. They are using a number of HPE solutions, including HPE Edgeline, our micro-datacenter. They are also using PTC ThingWorx and a number of other solutions.

They have very large pumps critical to the operation as they move chemicals and fluids in various stages around their plant in the refining process. Being able to monitor those in real time, predict potential failures before they happen, and use a combination of live data and algorithms to predict wear and tear allows them to determine the optimal time to make replacements and minimize downtime.
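The kind of live-data-plus-algorithms monitoring Tripp describes can be illustrated with a minimal, generic sketch (a rolling-baseline anomaly check with invented signal names and thresholds, not Texmark's or HPE's actual implementation):

```python
# Illustrative sketch of condition monitoring for pump telemetry.
# All names and thresholds are hypothetical, not from any actual deployment.
from collections import deque

class PumpMonitor:
    """Flags readings that drift outside the pump's recent normal band."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # rolling baseline
        self.z_threshold = z_threshold

    def observe(self, vibration_mm_s: float) -> bool:
        """Record a vibration reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:  # need a baseline first
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5
            if std > 0 and abs(vibration_mm_s - mean) / std > self.z_threshold:
                anomalous = True
        self.readings.append(vibration_mm_s)
        return anomalous

monitor = PumpMonitor()
# Steady baseline readings, then a sudden spike in vibration.
for reading in [2.0, 2.1, 1.9, 2.0, 2.05, 1.95, 2.0, 2.1, 1.9, 2.0]:
    monitor.observe(reading)
print(monitor.observe(9.5))  # spike well outside the baseline band: True
```

Real predictive-maintenance systems layer trained models and wear curves on top of this, but the core idea is the same: establish each asset's normal operating band from live data and alert before a failure, not after.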

Such use cases are one of the advantages when customers come and visit our IoT Lab in Houston. From an HPE standpoint, not only do they see our joint solutions in the lab, but we can actually take them out to the Texmark location, and Texmark will host them and allow them to see these technologies working in real time at their facility.

As Riaan mentioned, we started at Texmark with condition monitoring, and now the solutions have moved into additional use cases — whether it’s mechanical integrity, video as a sensor, or employee-safety-related use cases.

We started with condition monitoring, proved that out, got the technology working, then took that framework — including best-in-class hardware and software — and continued to build and evolve on top of that to solve expanded problems. Texmark has been a great joint customer for us.

Gardner: Riaan, when organizations hear about these technologies and the opportunity for some very significant productivity benefits, when they understand that more-and-more of their organization is going to be data-driven and real-time analysis benefits could be delivered to people in their actionable context, perhaps using such things as AR, what should they be doing now to get ready?

Start small

Lourens: Over the last eight years of working with ThingWorx, I have noticed an initial tendency to focus on the technology, versus focusing on specific use-cases that provide real business value and working backward from that business value.

My recommendation is to target use cases that provide quick time-to-value. Apply the technology in a way that allows you to start small, and then iterate from there, versus trying to prove your ROI based on the core technology capabilities.

Ultimately understand the business challenges and how you can grow your top line or your bottom line. Then work backward from there, starting small by looking at a plant or operations within a plant, and then apply the technology across more people. That helps create a smart connected people strategy. Apply technology in terms of the process and then relative to actual machines within that process in a way that’s relevant to use cases — that’s going to drive some ROI.

Gardner: Tripp, what should the IT organization be newly thinking? Now, they are tasked with maintaining systems across a continuum of cloud-to-edge. They are seeing micro-datacenters at the edge; they’re doing combinations of data-driven analytics and software that leads to new interfaces such as AR.

How should the IT organization prepare itself to take on what goes into any nook and cranny in almost any manufacturing environment?

IT has to extend its reach 

Partain: It’s about doing all of that IT in places where typically IT has had little or no involvement. In many industrial and manufacturing organizations, as we go in and start having conversations, IT has usually stopped at the datacenter back-end. Now there’s lots of technology on the manufacturing side, too, but it has not typically involved the IT department.

One of the first steps is to get educated on the new edge technologies and how they fit into the overall architecture. They need to have the existing support frameworks and models in place that are instantly usable, but also work with the business side to frame up the problems they are trying to solve.

As Riaan mentioned, being able to say, “Hey, here are the types of technologies we in IT can apply to this that you [OT] guys haven’t necessarily looked at before. Here’s the standardization we can help bring so we don’t end up with something completely different in every factory, which runs up your overall cost to support and run.”

It’s a new world. And IT is going to have to spend much more time with the part of the business they have probably spent the least amount of time with. IT needs to get involved as early as possible in understanding what the business challenges are and getting educated on these newer IoT, AR, virtual reality (VR), and edge-based solutions. These are becoming the extension points of traditional technology and are the new ways of solving problems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


IT and HR: Not such an odd couple

How businesses perform has always depended on how well their employees perform. Yet never before has the relationship between how well employees work and the digital technology that they use been so complex.

At the same time, companies are grappling with the transition to increasingly data-driven and automated processes. What’s more, the top skills at all levels are increasingly harder to find — and hold onto — for supporting strategic business agility.

As a result, business leaders must enhance and optimize today’s employee experience so that they in turn can optimize the customer experience and — by extension — better support the success of the overall business.

Stay with us as BriefingsDirect explores how those writing the next chapters of human resources (HR) and information technology (IT) interactions are finding common ground to significantly improve the modern employee experience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

We’re now joined by two leaders in this area who will share their thoughts on how intelligent workspace solutions are transforming work — and heightening worker satisfaction. Please welcome Art Mazor, Principal and Global Human Resources Transformation Practice Leader at Deloitte, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Art, is there more of a direct connection now between employee experience and overall business success?

Mazor: There has been a longstanding sense on the part of leaders intuitively that there must be a link. For a long time people have said, “Happy employees equal happy customers.” It’s been understood.

But now, what’s really powerful is we have true evidence that demonstrates the linkage. For example, in our Deloitte Global Human Capital Trends Report 2019, in its ninth year running, we noticed a very important finding in this regard: Purpose-focused companies outperformed their S&P 500 peers by a factor of 8. And, when you think about, “Well, how do you get to purpose for people working in an organization?” It’s about creating that strong experience.

What’s more, I was really intrigued when MIT recently published a study that demonstrated the direct linkage between positive employee experience and business performance. They showed that those with really strong employee experiences have twice the innovation, double the satisfaction of customers, and 25 percent greater profitability.

So those kinds of statistics tell me pretty clearly that it matters — and it’s driving business results.

Gardner: It’s seemingly commonsense and an inevitable outcome when employees and their positive experiences impact the business. But reflecting on my own experiences, some companies will nonetheless talk the talk, but not always walk the walk on building better employee experiences, unless they are forced to.

Do you sense, Art, that there are some pressures on companies now that hadn’t been there before?

Purposeful employees produce profits 

Mazor: Yes, I think there are. Some of those pressures, appropriately, are coming from the market. Customers have a very high bar with which they measure their experience with an organization. We know that if the employee or workforce experience is not up to par, the customers feel it.

That demand, that pressure, is coming from customers who have louder voices now than ever before. They have the power of social media, the ability to make their voices known, and their perspectives heard.

There is also a tremendous amount of competition for customers. As a result, leaders recognize that they have to get this right. They have to get their workers in a place where those workers feel they can be highly productive in the service of customer outcomes.

Minahan: Yes, I totally agree with Art. In addition, there is an added pressure going on in the market today and that is the fact that there is a huge talent crunch. Globally McKinsey estimates there is a shortage of 95 million medium- to high-skilled workers.

We are beginning to see even forward-thinking digital companies like Amazon saying, “Hey, look, we can’t go out and hire everyone we need; certainly not in one location.” So that’s why you have the HQ2 competition, and the like.

Just in July, Amazon committed to investing more than $700 million to retrain a third of their workforce with the skills that they need to continue to advance. This is part of that pressure companies are feeling: “Hey, we need to drive growth. We need to digitize our businesses. We need to provide a greater customer experience. But we need these new skills to do it, and there just is not enough talent in the market.”

So companies are rethinking that whole employee engagement model to advance.

Gardner: Tim, the concept of employee experience was largely in the domain of people like Art and those that he supports in the marketplace — the human resources and human capital management (HCM) people.

How does IT now have more of a role? Why do IT and HR leaders need to be more attached at the hip?

Download The Economist Research On How Technology Drives The Modern Employee Experience

Minahan: Much of what chief human resources officers (CHROs) and chief people officers (CPOs) have done to advance the culture and physical environment with which to attract and retain the right talent has gone extremely far. That includes improving benefits, ensuring there is a purpose, and ensuring that the work environment is very pleasurable.

However, we just conducted a global study together with The Economist into employee experience and how companies are prioritizing it. And one of the things that we found is that organizations have neglected to take a look at the tools and the access to information that they give their employees to get their jobs done. And that seems to be a big gap.

This gap was reaffirmed by a recent global Gallup study where, right behind the manager, the number one indicator of employee engagement was whether employees feel they have the right access to the information and tools they need to do their best job.

So technology — the digital workspace, if you will — plays an increasingly important role, particularly in how we work today. We don’t always work at a desk or in a physical environment. In fact, most of us work in multiple locations throughout the day. And so our digital workspace needs to travel with us, and it needs to simplify our day — not make it more complex.

Gardner: Art, as part of The Economist study that Tim cited, “ease of access to information required to get work done” was one of the top things those surveyed identified as being part of a world-class employee experience.

That doesn’t surprise me because we are asking people to be more data-driven. But to do so we have to give them that data in a way they can use it.

Are you seeing people thinking more about the technology and the experience of using and accessing technology when it comes to HR challenges and improvement?

HR plus IT gets the job done 

Mazor: Yes, for sure. And in the HR function, technology has been front and center for many years. In fact, HR executives, their teams, and the workers they serve have been at an advantage in that technology investments have been quite rich. The HR space was one of the first to move to the cloud. That’s created lots of opportunities beyond those that may have been available even just a few short years ago.

To your point, though, and building on Tim’s comments, [employee experience requirements] go well beyond the traditional HR technologies. They are focused around areas like collaboration, knowledge sharing, and interaction, and extend into the toolsets that foster those necessities. They are at the heart of being able to drive work in the way that work needs to get done today.

The days of traditional hierarchies — where your manager tells you what to do and you go do it — are quickly dwindling. Now, we still have leaders and they tell us to do things and that’s important; I don’t mean to take away from that. Yet, we are moving to a world where, in order to act with speed, teams are forming in a more agile way. Networked groups are operating together cross-functionally and across businesses, and geographies — and it’s all demanding, to your point, new toolsets.

Fortunately, there are a lot of tools that are out there for that. Like with any new area of innovation, though, it can be overwhelming because there are just so many technologies coming into the marketplace to take advantage of.

The trick we are finding is for organizations to be able to separate the noise from the impactful technologies and create a suite of tools that are easy to navigate and remove that kind of friction from the workplace.

Gardner: Tim, a fire hose of technology is certainly not the way to go. From The Economist survey we heard that making applications simple — with a consumer-like user experience — and with the ability to work from anywhere are all important. How do you get the right balance between the use of technology, but in a simplified and increasingly automated way?

A workspace to unify work

Minahan: Art hit the exact right word. All this choice and access to technology that we use to get our jobs done has actually created a lot more complexity. The typical employee now uses a dozen or more apps throughout the day, and oftentimes needs to navigate four or more applications just to complete a single task or find a bit of information. As a result, they need to navigate a whole bunch of different environments, remember a whole bunch of different usernames and passwords, and it’s creating a lot of noise in their day.

To Art’s point, there is an emergence of a new category of technology, a digital workspace that unifies everything for an employee, gives them single sign-on access to everything they need to be productive, and one unified experience, so they don’t need to have as much noise in their day.

Certainly, it also provides an added layer of security around things. And then the third component that gets very, very exciting is that forward-thinking companies are beginning to infuse things like machine learning (ML) and simplified workflows or micro apps that connect some of these technologies together so that the employee can be guided through their day — very much like they are in their personal lives, where Facebook might guide you and curate your day for the news and social interactions you want.

Netflix, for example, will make recommendations based on your historical behaviors and preferences. And that’s beginning to work its way into the workplace. So the study we just did with The Economist clearly points to bringing that consumer-like experience into the workplace as a priority among IT and HR leaders.

Gardner: Art, you have said that a positive employee experience requires removing friction from work. What do you mean by friction and is that related to this technology issue, or is it something even bigger?

Remove friction, maximize productivity 

Mazor: I love that you are asking that, Dana. I think it is something bigger than technology — yet technology plays a massively important role.

When we think about friction, what I love about that word in this context is that it’s a plain English word. We know what friction means. It’s what causes something to slow down.

And so it’s bigger than just technology in the sense that to create that positive worker experience we need to think about a broader construct, which is the human experience overall. And elevating that human experience is about, first and foremost, recognizing that everyone wakes up every morning as a human. We might play the role of a worker, we might play the role of customer, or some other role. But in our day-to-day life, anything that slows us down from being as productive as possible is, in my view, friction.

So that could be process-oriented; it could be policy and bureaucracy that gets in the way. It could be managers who may be struggling with empowering their teams. It might even be technology, to your point, that makes it more difficult to, as Tim was rightly saying, navigate through all the different apps or tools.

And so this idea of friction and removing it is really about enabling that workforce to be focused squarely on delivering results for customers, the business, and the other workers in the enterprise. Whatever it may be, anything that stands in the way should be evaluated as a potential cause of friction.

Sometimes that friction is good in the sense of slowing things down for purposes like compliance or risk management. In other cases, it’s bad friction that just gets in the way of good results.

View Video on How Companies Drive Improved Employee Experience To Foster Better Business Results

Minahan: I love what Art’s talking about. That is the next wave we will see in technology. When we talk about these digital workspaces, we are moving from traditional enterprise applications built around given functions to modern collaboration tools focused on team-based collaboration. Still, individuals need to navigate all of these environments — and oftentimes work in different ways.

And so this idea of people-centric computing, in which you put the person at the center, makes it easy for them to interact with all of these different channels and remove some of the noise from their day. They can do much more meaningful work — or in some cases, as one person put it to me, “Get the job done that I was hired to do.” I really believe this is where we are now going.

And you have seen it in consumer technologies. The web came about to organize the world’s information, and apps came about to organize the web. Now you have this idea of the workspace coming about to organize all of those apps so that we can finally get all the utility that had been promised.

Gardner: If we return to our North Star concept, the guiding principle, that this is all about the customer experience, how do we make a connection between solidifying that employee experience as Tim just described but to the benefit of the customer experience?

Art, who in the organization needs to make sure that there isn’t a disconnect or dissonance between that guiding principle of the customer experience and buttressing it through the employee experience?

Leaders emphasize end-customer experience 

Mazor: We are finding this is one of the biggest challenges, because there isn’t a clear-cut owner for the workforce experience. That’s probably a good thing in the long run, because there are way too many individual groups, teams, and leaders who must be involved to have only one accountable leader.

That said, we are finding a number of organizations achieving great success by at least appointing either an existing function — and in many cases we are finding that happens to be HR — or, in some organizations, a different way of having accountability for orchestrating the experience. The best approaches bring together a variety of groups — HR, IT, real estate, marketing, finance, and certainly the business leaders — to all play their roles inside of that experience.

Delivering on that end-customer experience as the brass ring, or the North Star to mix metaphors, becomes a way of thinking. It requires a different mindset that enterprises are shaping for themselves — and their leaders can model that behavior.

I will share with you one great example of this. In the typical world of an airline, you would expect that flight attendants are there — as you hear on the announcements — for your safety first, and then to provide services. But one particular major airline recognized that those flight attendants are also the ones who can create the greatest stickiness to customer relationships because they see their top customers in flight, where it matters the most.

And they have equipped that group of flight attendants with data in the form of a mobile device app that they use to see who is on board and where those passengers rank in importance as customers in terms of revenue and other important factors. That provides triggers to those flight attendants, and others on the flight staff, to help recognize those customers and to ensure that they are having a great experience. And when things don’t go as well as possible, perhaps due to Mother Nature, those flight attendants are there to keep watch over their most important customers.

That’s a very new kind of construct in a world where the typical job was not focused on customers. Now, in an unwitting way, those flight attendants are playing a critical role in fostering and advancing those relationships with key customers.

There are many, many examples like that that are the outcome of leaders across functions coming together to orchestrate an experience that ultimately is centered around creating a rich customer experience where it matters the most.

Minahan: Two points. One, what Art said is absolutely consistent with the findings of the study we conducted jointly with The Economist. There is no clear-cut leader on employee experience today. In fact, both CHROs and CIOs equally indicated that they were on-point as the lead for driving that experience.

We are beginning to see a digital employee experience officer emerging at some organizations to help drive the coordination that Art is talking about.

But the second point to your question, Dana, around how we keep employees focused on the customer experience: it goes back to your opening question around purpose. Increasingly, as Art indicated, there is clear demonstration that companies with a clear purpose perform better — and that’s because that purpose tends to center on some business outcome. It drives some greater experience, innovation, or business outcome for their customers.

If we can ensure that employees have the right tools, information, skills, and training to deliver that customer experience, then they are clearly aligned. I think it all ties very well together.

Gardner: Tim, when I heard Art talking about the flight attendants, it occurred to me that there is a whole class of such employees that are in that direct-interaction-with-the-customer role. It could be retail, the person on the floor of a clothing seller; or it could be a help desk operator. These are the power users that need to get more data, help, and inference knowledge delivered to them. They might be the perfect early types of users that you provide a digital workspace to.

Let’s focus on that workspace. What sort of qualities does that workspace need to have? Why are we in a better position, when it comes to automation and intelligence, than ever before to empower those employees, the ones on the front lines interacting with the customers?

Effective digital workspace requirements

Minahan: Excellent question. There are three capabilities required for an effective digital workspace, and an emerging fourth. The first is that it needs to be unified. We talked about all of the complexity and noise that bogs down an employee’s day, and all of the applications they need to navigate. Well, the digital workspace must unify that by giving a single sign-on experience into the workspace to access all the apps and content that an employee needs to be productive and to do engaging work, whether they are at the office on the corporate network, on their tablet at home, or on their smartphone on a train or a plane.

The second part, obviously — in this day and age, considering especially those front-line employees who are touching customer information — is that it all needs to be secure. The apps and content need to be more secure within the workspace than when accessed natively. That means dynamically applying security policies, and perhaps asking for a second layer of authentication, based on that employee’s behavior.
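A minimal sketch of that kind of contextual, step-up security policy (the signal names, sensitivity levels, and rules here are hypothetical illustrations, not Citrix's actual policy engine):

```python
# Hypothetical sketch of contextual access: step-up authentication decided
# from device state, network location, and resource sensitivity.
def access_decision(on_corporate_network: bool, managed_device: bool,
                    resource_sensitivity: str) -> str:
    """Return 'allow', 'step_up' (ask for a second factor), or 'deny'."""
    # Highly sensitive resources are off-limits to unmanaged devices.
    if resource_sensitivity == "high" and not managed_device:
        return "deny"
    # Off-network access to sensitive resources triggers a second factor.
    if not on_corporate_network and resource_sensitivity in ("medium", "high"):
        return "step_up"
    return "allow"

print(access_decision(True, True, "high"))     # allow
print(access_decision(False, True, "medium"))  # step_up
```

The design point is that security is evaluated per request from context, rather than granted once at the network perimeter, which is what lets the same workspace travel safely across office, home, and mobile sessions.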

The third part is around intelligence: bringing things like machine learning and simplified workflows into the workspace to create a consumer-like experience, where the employee is presented with the right information and the right task within the workspace so that they can quickly access those — rather than needing to log in to multiple applications and go four layers deep.

The fourth capability that’s emerging, and that we hear a lot about, is the assurance that those applications — especially for front-line employees who are engaged with customers — are performing at their very best within the workspace. [Such high-level performance needs to be delivered] whether that employee is at a corporate office or more likely at a remote retail branch.

Bringing some of the historical infrastructure like networking technology to bear in order to ensure those applications are always on and reliable is the fourth pillar of what’s making new digital workspace strategies emerge in the enterprise.


Gardner: Art, for folks like Tim and me, we live in this IT world and we sometimes get lost in the weeds and start talking in acronyms and techy-talk. At Deloitte, you are widely regarded as the world’s number-one HR transformation consultancy.

First, tell us about the HR consultancy practice at Deloitte. And then, is explaining what technology does and is capable of a big part of what you do? Are you trying to explain the tech to the HR people, and then perhaps HR to the tech people?

Transforming HR with technology 

Mazor: First, thanks for the recognition. We are truly humbled and yet proud to be the world’s leading HR transformation firm. By having the opportunity as we do to partner with the world’s leading enterprises to shape and influence the future of HR, it gives us a really interesting window into exactly what you are describing.

At a lot of the organizations we work with, the HR leaders and their teams are increasingly well-versed in the various technologies out there. The biggest challenge we find is being able to harness the value of those technologies, to find the ones that are going to produce impact at a pace and at a cost and return that really is valued by the enterprise overall.

For sure, the technology elements are critical enablers. We recently published a piece on the future of HR-as-a-function that’s based on a combination of our research and field experience. What we identified is that the future of HR requires a shift in four big areas:

  • The mindset, meaning the culture and the behaviors of the HR function.

  • The focus, meaning focusing in on the customers themselves.

  • The lens through which the HR function operates, meaning the operating model and the shift toward a more agile-network kind of enterprise HR function.

  • The enablers, meaning the wide array of technologies from core HR platform technologies to collaboration tools to automation, ML, artificial intelligence (AI), and so on.

The combination of these four areas enables HR-as-a-function to shift into what we’re referring to as a world that is exponential. I will give you one quick example though where all this comes together.

There is a solution set that we are finding is incredibly powerful in driving employee experiences, which we refer to as a unified engagement platform: the blend of all these technologies in a simple-to-navigate experience that empowers the workers across an enterprise.

We, Deloitte, have actually created one of those platforms in the market that leads the space, called ConnectMe, and there are certainly others. And in that, what we are essentially finding is that HR leaders are looking for that simple-to-navigate, frictionless kind of environment where people can get their jobs done and enjoy doing them at the same time using technology to empower them.

The premise that you described is spot-on. HR leaders are navigating this complex set of technologies out there that are terrific because they’re providing advantages for the business functions. A lot of the technology firms are investing heavily in worker-facing technology platforms, for exactly the reason we have been chatting about here.

Gardner: Tim, when it comes to the skills gap, it is an employee’s market. Unemployment rates are very low, and the types of skills in demand are hard to find. And so the satisfaction of that top-tier worker is essential.

It seems to me that the better tools you can give them, the more they want to work. If I were a top-skilled employee, I would want to go with the place that has the best information that empowers me in the best way and brings contextual information with security to my fingertips.

But that’s really difficult to do. How do businesses then best enhance and entice employees by giving them the best intelligence tools?

Intelligent tools support smart workers 

Minahan: If you think about your top-performing employees, they want to do their most meaningful work and to perform at their best. As a result, they want to eliminate a lot of the noise from their day, and, as Art mentioned before, that friction.

And that friction is not solely technological, it’s often manifested through technology due to certain tasks or requirements that we need to do that may not pertain to our core jobs.

So, last time I checked, I don’t think either Art or I was hired to review and approve expense reports, or to spend a good chunk of our time approving vacations or doing full-scale performance reviews. Yet those types of applications, which may not be pertinent to our core jobs or processes, tend to take up a good part of our time.

What digital workspaces or digital work platforms do in the first phase is remove that noise from your day so that your best-performing employees can do their best work. The second phase uses those same platforms to help employees do better work through making sure that information is pushed to them as they need it.

That’s information that is pertinent to their jobs. In a salesperson’s environment that might be a change in pipeline status, or a change in a prospect or customer activity. Not only do they get information at their fingertips, they can take action.

And what gets very exciting about that is you have the opportunity now to elevate the skills of every employee. We talk about the skills gap, but this is one way to re-train everybody.

Another way is to make sure that you’re giving them an unfair advantage within the work platforms you are using to guide them through the right process. So a great example is sales force productivity. A typical company takes 9-12 months to get a salesperson up to full productivity. Average tenure of a salesperson is somewhere around 36 months. So a company is getting a year-and-a-half of productivity out of a salesperson.

What if by eliminating all that noise, and by using this digital work platform to help push the right information, tasks, right actions, and the right customer sales pitches to them at the right time, you can cut that time to full productivity in half?

Think about the real business value that comes from using technology to actually elevate the skill set of the entire workforce, rather than bog it down.

Gardner: Tim, do you have any examples that illustrate what you just described? Any named or use case types of examples that show how what you’re doing at Citrix has been a big contributor?

Minahan: One example that’s top-of-mind not only helps improve employee experiences to elevate the experience for customers, but also allows companies to rethink work models in ways they probably haven’t since the days of Henry Ford. And the example that comes to mind is eBay.

We are all familiar with eBay, one of the world’s largest online digital marketplaces. Like many other companies, they have a large customer call center where buyers and sellers ask questions. These call center employees have to have the right information at their fingertips to get things done.

Well, the challenge they faced was with the talent gap and labor shortage. Traditionally they would build a big call center, hire a bunch of employees, and train them at the call center. But now, it’s harder to do that; they are competing with the likes of Amazon, Google and others who are all trying to do the same thing.

And so they used technology to break the traditional mold and to create a new work model. Now they go to where the talent is, such as the stay-at-home parent in Montana, the retiree in Florida, or the gig worker in Boston or New York. They can now arm them with a digital workspace and push the right information and toolsets to them. By doing so, they ensure the job gets done; if you or I call in, we can’t tell that the agent isn’t sitting in a centralized call center.

This is just one example as we begin to harness and unify this technology of how we can change work models. We can create not just the better employee experience, but entirely new ways to work.


Gardner: Art, it’s been historically difficult to measure productivity, and especially to find out what contributes to that productivity. The same unfortunately is the case with technology. It’s very difficult to measure quantitatively and qualitatively what technology directly does for both employee productivity and overall organizational productivity.

Are there ways for us to try to measure how new workspaces and good HR contribute to good employee satisfaction — and ultimately customer satisfaction? How do we know when we are doing this all right?

Success, measured 

Mazor: This is the holy grail in many ways, right? You get what you measure, and this whole space of workforce experience in many ways is a newer discipline. Customer experience has been around for a while and gained great traction and measurement. We can measure customer feedback. We can measure net promoter scores, and a variety of other indicators, not the least of which may be revenue, for example, or even profitability relative to customer base. We equally are now starting to see the emergence of measurements in the workforce experience arena.

And at the top line we can use measurements like workforce engagement. As that rises, there is likely a connection to positive worker experience. We can measure productivity. We can even measure the growth of capabilities within the workforce that are being gained as a result of — as we like to say — learning in the flow of work.

That path is really important to chart out because it has similarities to those tools, methods, and approaches used inside the customer space. We think about it in very simple terms, we need to first look, listen, and understand to sense what’s happening with the workforce.

We need to generate and prioritize different ideas of ways in which the experience for the workforce can be improved. Then we need to iterate, test, refine, and plan the kinds of changes you might prototype; that provides you the foundation to measure. And in the workforce experience space, it’s a variety of measures that we are starting to see, to get down into the granular levels below those top-line measures that I mentioned.

What comes to mind for me are things like measuring the user experience for all of the workers. How effective is the product or service that they are being asked to use? How quickly can they deliver their work? What feedback do we get from workers? So kind of a worker feedback category.

And then there is a set of operational measures that can track inputs and outputs from various processes and various portions of the experience. That kind of categorization into those three buckets really seems to be working well for many of our clients in measuring workforce experience — to your point of, “Did we get it right?”

But in the end, as I shared at the beginning, I think it’s really critical that organizations measure that workforce experience through the ultimate lens, which is, “How are we dealing with our customers?” When that’s performing well, chances are pretty good, based on the research that we have seen, that the connection is there to the employee or workforce experience.

Minahan: When we are talking about the employee experience, we should be careful — it’s not synonymous with just productivity. It’s a balance of productivity and employee engagement that together ultimately drives greater business results, customer experience, satisfaction, and improved profitability. Employee experience has been treated as synonymous with productivity; productivity is certainly a key input, but it’s not the only one.

Gardner: Tim, how should IT people be thinking differently when it comes to how they view their own success? It was not that long ago where simply the performance of the systems — when all the green lights were on and the networks were not down — was the gauge of success. Should IT be elevating how it perceives itself and therefore how it should rate itself when it’s successful within these larger digital transformation endeavors?

Information, technology, and better business

Minahan: Yes, absolutely. I think this could be the revitalization of IT as it moves beyond the items that you mentioned: keeping the networks up, keeping the applications performing well. IT can now drive better business outcomes and results.

Those forward-thinking companies looking to digitize their business realize that it’s very hard to ask an employee base to drive a greater digital customer experience without arming them with the right tools, information, and experience in their own right in order to get that done. IT plays a very major role here, locking arms in unison with the CHRO, to move the needle and turn employee experience into a competitive edge — not just for attracting and retaining talent, but ultimately for driving better business results.

Gardner: I hope, if anything, this conversation prompts more opportunity for the human resources leadership and the IT leadership to spend time together and brainstorm and find commonality.

Before we sign off, just a quick look to the future. Art, for you, what might be changing soon that will help remove even more friction for employees? What is  it that’s down the pike over the next three to five years — technologies, processes, market forces — that might be an accelerant to removing friction? Are there bright spots in your thinking about the future?

Bright symphony ahead

Mazor: I think the future is really bright. We are optimistic by nature, and we see enterprises making terrific, bold moves to embrace their future as challenging as the future is.

One of the biggest opportunities is the recognition of the imperative for executives and their teams to operate in a more symphonic way. And when I say that I mean to work together to achieve a common set of results, moving away from the historical silos that were emerging from a zeal for efficiency and that led to organizations having these various departments, and then the departments working within themselves and finding it a struggle to create integration.

We are seeing a huge unlocking of that, in the spirit of creating more cross-functional teams and more agile ways of working — truly operating in the digital age. As we talked about in one of our recent capital trends reports, the idea of driving this is a more symphonic C-Suite, which then has a cascading effect for teams across the board inside of enterprises all to be working better together.

And then, secondly, there is a big recognition by enterprises now around the imperative to create meaning in the work that workers are doing. Increasingly, we are seeing this as a demand. This is not a single-generational demand. It’s not that the younger generation needs meaning or anything like that, that fits into stereotypes.

Rather, it’s a recognition that when we create purpose and meaning for the workers in an enterprise, they are more committed. They are more focused on outcomes, as opposed to activities. They begin to recognize the outcomes’ linkage to their own personal purpose, meaning for the enterprise, and for the work itself.

And so, I think those two things will continue to emerge on a fairly rapid basis, to be able to embrace that need for symphonic operations and symphonic collaboration, as well as the imperative to create meaning and purpose for the workers of an enterprise. This will all unlock and unleash those capabilities focused on the customer through creating terrific employee or workforce experiences.

Gardner: Tim, last word to you. How do you foresee over the next several years technology evolving to support and engender the symphonic culture that Art just described?

Minahan: We have gotten to the point where employees are asking for a simplification of their environment, a unified access to everything, and to remove noise from their days so they can do that meaningful, purposeful work.

But what’s exciting is that same platform can be enabled to elevate the skill sets of all employees, giving them the right information, and the right task at the right time so they can perform at their very best.

But what gets me very excited about the future is the technology and a lot of the new thinking that’s going on. In the next few years, we’re going to see work models similar to the example I shared about eBay. We will see change in the ways we work that we haven’t seen in the past 100 years, where the lines between different functions and different organizations begin to evaporate.

Instead we will have work models where companies are beginning to organize around pools of talent, where they know who has the right skills and the right knowledge, regardless if they are full-time employees or a contractor. Technology will pull them together into workgroups no matter where they are in the world, to solve the given problem or produce a given outcome, and then dissolve them very quickly again. So I am very excited about what we are going to see in just the next five years ahead.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.



How rapid machine learning at the racing edge accelerates Venturi Formula E Team to top-efficiency wins

The next BriefingsDirect Voice of the Customer advanced motorsports efficiency innovation discussion explores some of the edge computing and deep analytics technology behind the Formula E auto racing sport.

Our interview focuses on how data-driven technology and innovation make high-performance electric racing cars an example for all endeavors where limits are continuously tested and bested.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest in Formula E efficiency strategy and refinement, please welcome Susie Wolff, Team Principal at Venturi Formula E Team in Monaco. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Aside from providing a great viewing and fan experience, what are the primary motivators for Formula E racing? Why did it start at all?

Wolff: It’s a really interesting story, because Formula E is like a startup. We are only in our fifth season, and Formula E and its management disrupted the world of motorsport because they brought a new concept of racing to the table.

We race in city centers. That means that the tracks are built up just for one-day events, right in the heart of some of the most iconic capitals throughout the world. Because it’s built up within a city center and it’s usually only a one-day event, you get very limited track time, which is quite unusual in motorsport. In the morning we get up, we test, we go straight into qualifying, and then we race.

Yet, it’s attracting a new audience because people don’t need to travel to a race circuit. They don’t need to buy an expensive ticket. The race comes to the people, as opposed to the people going out to see racing.

Obviously, the technology is something completely new for people. There is very little noise, mostly you hear the whooshing of the cars going past. It’s a showcase for new technologies, which we are all going to see appearing on the road in the next three to five years.

Race down to Electric Avenue 

The automotive industry is going through a massive change with electric mobility and motorsport is following suit with Formula E.

We already see some of the applications on the roads, and I think that will increase year on year. What motorsport is so good at is testing and showcasing the very latest technology.

Gardner: I was going to ask you about the noise because I had the privilege and joy of watching a Formula One event in Monaco years ago, and the noise was a big part of it. Aside from these cars being so quiet, what is also different in terms of an electric Formula E race compared to traditional Formula One?

Wolff: The noise is the biggest factor, and that takes a bit of getting used to. It’s the roaring of the engines that creates emotion and passion. Obviously, in the Formula E cars you are missing any kind of noise.

Even the cars we are starting to drive on the roads now have a little electric start and every time I switch it on I think, “Oh, the car is not working, I have a problem.” I forget that there is no noise when you switch an electric car on.

Also, in Formula E, the way that technology is developing and how quickly it’s developing is very clear through the racing. Last season, the drivers had two cars and they had to switch cars in the middle of the race because the battery wouldn’t last long enough for a complete race distance. Now, because the battery technology has advanced so quickly, we are doing one race with one car and one battery. So I think that’s really the beauty of what Formula E is. It’s showcasing this new technology and electric mobility. Add to this the incredible racing and the excitement that brings, and you have a really enticing offering.

Gardner: Please tell us about Venturi, as a startup, and how you became Team Principal. You have been involved with racing for quite some time.

A new way to manage a driving career

Wolff: Yes, my background is predominately in racing. I started racing cars when I was only eight years old, and I made it through the ranks as a racing driver, all the way to becoming a test driver in Formula One.

Then I stepped away and decided to do something completely different and started a second career. I was pretty sure it wouldn’t be in motorsport, because my husband, Toto Wolff, works in motorsport. I didn’t want to work for him and didn’t want to work against him, so I was very much looking for a different challenge and then Venturi came along.

The President of Venturi, a great gentleman, Gildo Pastor, is a pioneer in electric mobility. He was one of the first to see the possibility of using batteries in cars, and he set a number of land speed records — all electric. He joined Formula E from the very beginning, realizing the potential it had.

The team is based in Monaco, which is a very small principality, but one with a very rich history in racing because of the Grand Prix. Gildo had approached me previously when I was still racing to drive for his team in Formula E. I was one of the cynics, not sure Formula E was going to be for the long-term. So I said, “Thank you, but no thank you.”

But then he contacted me last year and said, “Look, I think we should work together. I think you will be fantastic running the team.” We very quickly found a great way to work together, and for me, it was just the perfect challenge. It’s a new form of racing, it’s breaking new ground and it’s at such an exciting stage of development. So, it was the perfect step for me into the business and management side of motorsports.

Gardner: For me, the noise difference is not much of an issue because the geek factor gets me jazzed about automobiles, and I don’t think I am alone in that. I love the technology. I love the idea of the tiny refinements that improve things and that interaction between the best of what people can do and what machines can do.

Tell us about your geek factor. What is new and fascinating for you about Formula E cars? What’s different from the refinement process that goes on with traditional motorsport and the new electric version?

The software challenge 

Wolff: It’s a massively different challenge than what we are used to within traditional forms of motorsport.

The new concept behind Formula E has functioned really well. Just this season, for example, we had eight races with eight different winners. In other categories, for example in Formula One, you just don’t get that. There is only the possibility for three teams to win a race, whereas in Formula E, the competition is very different.

Also, as a team, we don’t build the cars from scratch. A Formula One team would be responsible for the design and build of their whole car. In Formula E, 80 percent of the car is standardized. So every team receives the same car up to that 80 percent. The last part is the power train, the rear suspension, and some of the rear-end design of the car.

The big challenge within Formula E, then, is in the software. It’s ultimately a software race: who can develop, upgrade, and react quickly enough on the software side. And obviously, as soon as you deal with software, you are dealing with a lot of data.

That’s one of the biggest challenges in Formula E — it’s predominantly a software race as opposed to a hardware race. If it’s hardware, it’s set at the beginning of the season, it’s homologated, and it can’t be changed.

In Formula E, the performance differentiators are the software and how quickly you can analyze, use, and redevelop your data to enable you to find the weak points and correct them quickly enough to bring to the on-track performance.

Gardner: It’s fascinating to me that this could be the ultimate software development challenge, because the 80/20 rule applies to a lot of other software development, too. The first 80 percent can be fairly straightforward and modular; it’s the last 20 percent that can make or break an endeavor.

Tell us about the real-time aspects. Are you refining the software during the race day? How does that possibly happen?

Winning: When preparation meets data 

Wolff: Well, the preparation work is a big part of a race performance. We have a simulator based back at our factory in Monaco. That’s where the bulk of the preparation work is done. Because we are dealing with only a one-day event, it means we have to get everything crammed into an eight-hour window, which leaves us very little time between sessions to analyze and use the data.

The bulk of the preparation work is done in the simulator back at the factory. Each driver does four to six days in a simulator preparing for a race. That’s where we do all of the coding and try to find the most efficient ways to get from the start to the finish of the race. That’s where we do the bulk of the analytical work.

When we arrive at the actual race, we are just doing the very fine tweaks because the race day is so compact. It means that you need to be really efficient. You need to minimize the errors and maximize the opportunities, and that’s something that is hugely challenging.

If you had a team of 200 engineers, it would be doable. But in Formula E, the regulations limit you to 20 people on your technical team on a race day. So that means that efficiency is of the utmost importance to get the best performance.

Gardner: I’m sure in the simulation and modeling phase you leverage high-performance computing (HPC) and other data technologies. But I’m particularly curious about that real-time aspect, with a limit of 20 people and the ability to still make some tweaks. How did you solve the data issues in a real-time, intensive, human-factor-limited environment like that?

Wolff: First of all, it’s about getting the right people on-board and being able to work with the right people to make sure that we have the knowhow on the team. The data is real-time, so in a race situation we are aware if there is a problem starting to arise in the car. It’s very much up to the driver to control that themselves, from within the car, because they have a lot of the controls. The very important data numbers are on their steering wheel.

They have the ability to change settings within the car — and that’s also what makes it massively challenging for the driver. This is not just about how fast you can go, it’s also how much extra capacity you have to manage in your car and your battery — to make sure that you are being efficient.
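As a purely illustrative sketch of that kind of in-race energy management — not Venturi’s actual race software, with made-up function names and numbers — the per-lap budgeting logic a driver manages from the steering wheel might look like:

```python
# Illustrative sketch only -- not Venturi's race software. With one
# battery now lasting a full race, the driver must ration energy; a
# simple per-lap budget compares remaining capacity against laps left.

def lap_energy_budget(remaining_kwh: float, laps_left: int) -> float:
    """Energy (kWh) the driver can spend on each remaining lap."""
    if laps_left <= 0:
        raise ValueError("race is over")
    return remaining_kwh / laps_left

def pace_advice(last_lap_kwh: float, budget_kwh: float,
                tolerance: float = 0.05) -> str:
    """Crude steering-wheel-style advice from the last lap's usage."""
    if last_lap_kwh > budget_kwh * (1 + tolerance):
        return "lift and coast"   # burning energy too fast to finish
    if last_lap_kwh < budget_kwh * (1 - tolerance):
        return "push"             # energy in hand, safe to attack
    return "hold pace"
```

The real systems are of course far richer — live telemetry, regeneration under braking, attack-mode strategy — but the core trade-off Wolff describes is exactly this: speed spent now against energy needed to reach the finish.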

The data is of utmost importance — how it’s created, and then how quickly it can be analyzed and used to help performance. That’s where Hewlett Packard Enterprise (HPE) has been a huge benefit to us. First of all, HPE has been able to increase the speed at which we can send data from factory to race track, between engineers. That technology has also increased the level of our simulator and what it’s able to crunch through in the preparation work.

And that was just the start. We are now looking at all the different areas where we can apply that ability to crunch the numbers more quickly. It allows us to look at every different aspect, and it will all come down to those marginal gains in the end.

Gardner: Given that this is a team sport on many levels, you are therefore working with a number of technology partners. What do you look for in a technology partner?

Partner for performance 

Wolff: In motorsport, you very quickly realize if you are doing a good job or not. Every second weekend you go racing, and the results are shown on the track. It’s brutal because if you are at the top step of the podium, you have done a great job. If you are at the end, you need to do a better job. That’s a reality check we get every time we go racing.

For us to be the best, we need to work with the best. We’re obviously very keen to always work with the best-in-field, but also with companies able to identify the exact needs we have and build a product or a package that helps us. Within motorsports, it’s very specific. It’s not like a normal IT company or a normal business where you can plug-and-play. We need to redefine what we can do, and what will bring added performance.

We need to work with companies that are agile. Ideally they have experience within motorsports. They know what you need, and they are able to deliver. They know what's not needed in motorsports because everything is very time sensitive. We need to make sure we are working on the areas that bring performance — and not wasting resources and time in areas that ultimately are not going to help our track performance.

Gardner: A lot of times with motorsports it’s about eking out the most performance and the highest numbers when it comes to variables like skidpad and the amounts of friction versus acceleration. But I can see that Formula E is more about the interplay between the driver, the performance, and the electrical systems efficiency.

Is there something we can learn from Formula E and apply back to the more general electric automobile industry? It seems to me they are also fighting the battle to make the batteries last longest and make the performance so efficient that every electron is used properly.

Wolff: Absolutely. That's why we have so many manufacturers in Formula E … the biggest names in the industry, like BMW, Audi, Jaguar, and now Mercedes and Porsche. They are all in Formula E because they are all using it as a platform to develop and showcase their technology. And there are huge sums of money being spent within the automotive industry now because there is such a race on to get the right technology into the next generation of electric cars. The technology is advancing so quickly. The beauty of Formula E is that we are at the very pinnacle of that.

We are purely performance-based and it means that those race cars and power trains need to be the most efficient, and the quickest. All of the technology and everything that’s learned from the manufacturers doing Formula E eventually filters back into the organizations. It helps them to understand where they can improve and what the main challenges are for their electrification and electric mobility in the end.

Gardner: There is also an auspicious timing element here. You are pursuing the refinement and optimization of electric motorsports at the same time that artificial intelligence (AI) and machine learning (ML) technologies are becoming more pervasive, more accessible, and brought right to the very edge … such as on a steering wheel.

Is there an opportunity for you to also highlight the use of such intelligence technologies? Will data analytics start to infer what should be happening next, rather than just people analyzing data? Is there a new chapter, if you will, in how AI can come to bear on your quest for the Formula E best?

AI accelerates data 

Wolff: A new chapter is just beginning. Certainly, in some of the conversations we’ve had with our partners — and particularly with HPE — it’s like opening up a treasure chest, because the one thing we are very good at in motorsports is generating lots of data.

The one thing that we are constrained by, and it's purely down to manpower, time, and resources, is the analysis of the data. There is only so much that we have capacity for. And with AI there are a couple of examples that I wouldn't even want to share, because I wouldn't want my competitors to know what's possible.

There are a couple of examples where we have seen that AI can crunch the numbers in a matter of seconds and spit out the results. I can't even comprehend how long it would take us to get to those numbers otherwise. It's a clear example of how much AI is going to accelerate our learning on the data side, particularly because, with software, there is so much analysis of the data needed to bring new levels of performance. For us it's going to be a game changer, and we are only at the start.

It’s incredibly exciting but also so important to make sure that we are getting it right. There is so much possibility that if we don’t get it right, there could be big areas that we could end up losing on.

Gardner: Perhaps soon, race spectators will not only be watching the cars and how fast they are going. Perhaps there will be a dashboard that provides views of the AI environment's performance, too. It could be a whole new type of viewer experience — when you're looking at what the AI can do as well as the car. Whoever thought that AI would be a spectator sport?

Wolff: It’s true and it’s not far away. It’s very exciting to think that that could be coming.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


The budding storage relationship between HPE and Cohesity brings the best of startup innovation to global enterprise reach

The next BriefingsDirect enterprise storage partnership innovation discussion explores how the best of startup culture and innovation can be married to the global reach, maturity, and solutions breadth of a major IT provider.

Stay with us to unpack the budding relationship between an upstart in the data management space, Cohesity, and venerable global IT provider Hewlett Packard Enterprise (HPE).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in total storage efficiency strategies and HPE's Pathfinder program, we welcome Rob Salmon, President and Chief Operating Officer at Cohesity in San Jose, California, and Paul Glaser, Vice President and Head of the Pathfinder Program at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, how have technology innovation, the nature of startups, and the pace of business today made a deep relationship between HPE and Cohesity the right fit for your mutual customers?

Glaser: That’s an interesting question, Dana. To start, the technology ecosystem and the startup ecosystem in Silicon Valley, California — as well as other tech centers on a global basis — fuel the fire of innovation. And so, the ample funding that’s available to startups, and the research that’s coming out of top-tier universities such as Stanford, Carnegie Mellon, or MIT out on the East Coast, fuel a lot of interesting ideas — disruptive ideas that find their way into small startups.

The challenge for HPE as a large, global technology player is to figure out how to tap into the ecosystem of startups and the new disruptive technologies coming out of the universities, as well as serial entrepreneurs, foster and embrace that, and deliver those solutions and technologies to our customers.

Gardner: Paul, please describe the Pathfinder thesis and approach. What does it aim to do?

Insight, investment, and solutions

Glaser: Pathfinder, at the top level, is the venture capital (VC) program of HPE, and it can be subdivided into three core functions. First is insight, second is investments, and third is the solutions function. The insight component acts like a center of excellence; it keeps a finger on the pulse, if you will, of disruptive innovation in the startup community. It helps HPE as a whole interact with the startup community and the VC community, and it identifies and curates leading technology innovations that we can ultimately deliver to our customers.

The second component is investments. It’s fairly straightforward. We act like a VC firm, taking small equity stakes in some of these startup companies.

And third, solutions. For the companies that are in our portfolio, we work with them to make introductions to product and technical organizations inside of HPE, fostering dialogue from a product evolution perspective and a solution perspective. We intertwine HPE’s products and technologies with the startup technology to create one-plus-one-equals-three. And we deliver that solution to customers and solve their challenges from a digital transformation perspective.

Gardner: How many startup companies are we talking about? How many in a year have typically been included in Pathfinder?

Glaser: We are a very focused program, so we align around the strategies for HPE. Because of that close collaboration with our portfolio companies and the business units, we are limited to about eight investments or new portfolio companies on an annual basis.

Today, the four-and-a-half-year-old program has about two dozen companies in the portfolio. We expect to add another eight over the next 12 months.

Gardner: Rob, tell us about Cohesity and why it’s such a good candidate, partner, and success story when it comes to the Pathfinder program.

Salmon: Cohesity is a five-year-old company focused on data management for about 70 to 80 percent of all the data in an enterprise today. This is for large enterprises trying to figure out the next great thing to make them more operationally efficient, and to give them better access to data.

Companies like HPE are doing exactly the same thing, looking to figure out how to bring new conversations to their customers and partners. We are a software-defined platform. The company was founded by Dr. Mohit Aron, who has spent his entire 20-plus-year career working on distributed file systems. He is one of the key architects of the Google File System and co-founder of Nutanix. The hyperconverged infrastructure (HCI) movement, really, was his brainchild.

He started Cohesity five years ago because he realized there was a new, better way to manage large sets of data. Not only in the data protection space, but for file services, test/dev, and analytics. The company has been selling the product for more than two and a half years now, and we’ve been a partner with Paul and the HPE Pathfinder team for more than three years. It’s been quite a successful partnership between the two companies.

Gardner: As I mentioned in my set-up, Rob, speed-to-value is the name of the game for businesses today. How have HPE and Cohesity together been able to help each other be faster to market for your customers?

One plus one equals three

Salmon: The partnership is complementary. What HPE brings to Cohesity is experience and reach. We get a lot of value by working with Paul, his team, and the entire executive team at HPE to bring our product and solutions to market.

When we think about the combination between the products from HPE and Cohesity, one-plus-one-equals-three-plus. That’s what customers are seeing as well. The largest customers we have in the world running Cohesity solutions run on HPE’s platform.

HPE brings credibility to a company of our size, in all areas of the world, and with large customers. We just could not do that on our own.

Gardner: And how does working with HPE specifically get you into these markets faster?

Salmon: In fact, we just announced an original equipment manufacturer (OEM) relationship with HPE whereby they are selling our solutions. We’re very excited about it.

I can give you a great example. I met with one of the largest healthcare providers in the world a year ago. They loved hearing about the solution. The question they had was, “Rob, how are you going to handle us? How will you support us?” And they said, “You are going to let us know, I’m sure.”

They immediately introduced me to the general manager of their account at HPE. We took that support question right off the table. Everything has been done through HPE. It’s our solution, wrapped around the broad support services and hardware capabilities of HPE. That made for a total solution for our customers, because that’s ultimately what these kinds of customers are looking for.

They are not just looking for great, new innovative solutions. They are looking for how they can roll that out at scale in their environments and be assured it’s going to work all the time.

Gardner: Paul, HPE has had huge market success in storage over the past several years, being on the forefront of flash and of bringing intelligence to how storage is managed on a holistic basis. How does the rest of storage, the so-called secondary level, fit into that? Where do you see this secondary storage market’s potential?

Glaser: HPE’s internal product strategy has been around primary storage capabilities. You mentioned flash, so brands such as 3PAR and Nimble Storage. That’s where HPE has a lot of its own intellectual property today.

On the secondary storage side, we’ve looked to partners to round out our portfolio, and we will continue to do so going forward. Cohesity has become an important part of that partner portfolio for us.

But we think about more than just secondary storage from Cohesity. It’s really about data management. What does the data management lifecycle of the future look like? How do you get more insights on where your data is? How do you better utilize that?

Cohesity and that ecosystem will be an important part of how we think about rounding out our portfolio and addressing what is a tens of billions of dollars market opportunity for both companies.

Gardner: Rob, let’s dig into that total data management and lifecycle value. What are the drivers in the market making a holistic total approach to data necessary?

Cohesity makes data searchable, usable 

Salmon: When you look at the sheer size of the datasets that enterprises are dealing with today, there is an enormous data management copy problem. You have islands of infrastructure set up for different use cases for secondary data and storage. Oftentimes the end users don’t know where to look, and the data may be in the wrong place. After a time, the data has to be moved.

The Cohesity platform indexes the data on ingest. We therefore have Google-like search capabilities across the entire platform, regardless of the use-case and how you want to use the data.


When we think about the legacy storage solutions out there for data protection, for example, all you can do is protect the data. You can’t do anything else. You can’t glean any insights from that data. Because of our indexing on ingest, we are able to provide insights into the data and metadata in ways customers and enterprises have never seen before. As we think about the opportunity, the larger the datasets that run on the Cohesity platform and solution, the more insight customers can have into their data.
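Cohesity's actual engine is proprietary, but the general technique Salmon describes (building a search index at write time, so later queries never have to scan the raw data) can be sketched with a toy inverted index. Everything below, including the class and method names and the sample documents, is illustrative and is not Cohesity's API:

```python
from collections import defaultdict

class IngestIndex:
    """Toy inverted index: documents are indexed the moment they are
    ingested, so search is a lookup rather than a scan of the raw data."""

    def __init__(self):
        self._postings = defaultdict(set)  # token -> set of doc ids
        self._docs = {}                    # doc id -> original text

    def ingest(self, doc_id, text):
        # Index at write time: tokenize once, record postings immediately.
        self._docs[doc_id] = text
        for token in text.lower().split():
            self._postings[token].add(doc_id)

    def search(self, query):
        # A query is a set intersection over precomputed postings,
        # independent of how large the underlying documents are.
        tokens = query.lower().split()
        if not tokens:
            return set()
        result = set(self._postings.get(tokens[0], set()))
        for token in tokens[1:]:
            result &= self._postings.get(token, set())
        return result

idx = IngestIndex()
idx.ingest("backup-001", "quarterly sales report EMEA")
idx.ingest("backup-002", "quarterly sales report APAC")
print(idx.search("quarterly emea"))  # {'backup-001'}
```

A legacy backup store would have to restore or crawl the protected data to answer the same question; indexing on ingest pays the tokenization cost once, up front, which is what makes "Google-like" search across all copies feasible.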

And it’s not just about our own applications. We recently introduced a marketplace where applications such as Splunk reside and can sit on top and access the data in the Cohesity platform. It’s about bringing compute, storage, networking, and the applications all together to where the data is, versus moving data to the compute and to the applications.

Gardner: It sounds like a solution tailor-made for many of the new requirements we’re seeing at the edge. That means massive amounts of data generated from the Internet of Things (IoT) and the Industrial Internet of Things (IIoT). What are you doing with secondary storage and data management that aligns with the many things HPE is doing at the edge?

Seamless at the edge

Salmon: When you think about both the edge and the public cloud, the beauty of a next-generation solution like Cohesity is we are not redesigning something to take advantage of the edge or the public clouds. We can run a virtual edition of our software at the edge, and in public cloud. We have a multiple-cloud offering today.

So, from the edge all the way to on-premises and into public clouds it’s a seamless look at all of your data. You have access and visibility to all of the data without moving the data around.

Gardner: Paul, it sounds like there’s another level of alignment here, and it’s around HPE’s product strategies. With HPE InfoSight and OneView — managing core-to-edge issues across multiple clouds as well as a hybrid cloud — this all sounds quite well-positioned. Tell us more about the product strategy synergy between HPE and Cohesity.

Glaser: Dana, I think you hit it spot-on. HPE CEO Antonio Neri talks about a strategy for HPE that’s edge-centric, cloud-enabled, and data-driven. As we think about building our infrastructure capabilities — both for on-premises data centers and extending out to the edge — we are looking for partners that can help provide that software layer, in this case the data management capability, that extends our product portfolio across that hybrid cloud experience for our customers.

As you think about a product strategy for HPE, you really step up to the macro strategy, which is, how do we provide a solution for our customers that allows us to span from the edge all the way to the core data center? We look at partners that have similar capabilities and similar visions. We work through the OEMs and other types of partnership arrangements to embed that down into the product portfolio.

Gardner: Rob, anything to offer additionally on the alignment between Cohesity and HPE, particularly when it comes to the data lifecycle management?

Salmon: The partnership started with Pathfinder, and we are absolutely thrilled with the partnership we have with HPE’s Pathfinder group. But when we did the recent OEM partnership with HPE, it was actually with HPE’s storage business unit. That’s really interesting because as you think about competing or not, we are working directly with HPE’s storage group. This is very complementary to what they are doing.

We understand our swim lane. They understand our swim lane. And yet this gives HPE a far broader portfolio into environments where they are looking at what the competitors are doing. They are saying, “We now have a better solution for what we are up to in this particular area by working with Cohesity.”

We are excited not just to work with the Pathfinder group but by the opportunity we have with Antonio Neri’s entire team. We have been welcomed into the HPE family quite well over the last three years, and we are just getting started with the opportunity as we see it.

Gardner: Another area that is top-of-mind for businesses is not just the technology strategy, but the economics of IT and how it’s shifted given the cloud, Software as a Service (SaaS), and pay-on-demand models. Is there something about what HPE is doing with its GreenLake Flex Capacity approach that is attractive to Cohesity? Do you see the reception in your global market improved because of the opportunity to finance, acquire, and consume IT in a variety of different ways?

Flexibility increases startups’ strength 

Salmon: Without question! Large enterprises want to buy it the way they want to buy it, whether that’s perpetual licenses or a subscription model. They want to dictate how it will be used in their environments. By working with HPE and GreenLake, we are able to offer the flexible options required to win in this market today.

Gardner: Paul, any thoughts about the economics of consuming IT and how Pathfinder might be attractive to more startups because of that?

Glaser: There are two points Rob touched on that are important. One, working with HPE as a large company, it’s a journey. As a startup you are looking for that introduction or that leg up that gives you visibility across the global HPE organization. That’s what Pathfinder provides. So, you start working directly with the Pathfinder organization, but then you have the ability to spread out across HPE.

For Cohesity, it’s led to the OEM agreement with the storage business unit. It is the ability to leverage different consumption models utilizing GreenLake, and some of our flexible pricing and flexible consumption offers.

The second point is that Amazon Web Services has conditioned customers to think about pay-per-use. Customers are asking for that, and they are looking for flexibility. As a startup, it can be hard to figure out how to provide that capability economically. Being able to partner with HPE and Pathfinder, and to utilize GreenLake or some of our other tools, really provides them a leg up in the conversation with customers. It helps customers trust that the solution will be there and that somebody will be there to stand behind it over the coming years.

Gardner: Before we close out, I would like to peek in the crystal ball for the future. When you think about the alignment between Cohesity and HPE, and when we look at what we can anticipate — an explosion of activity at the edge and rapidly growing public cloud market — there is a gorilla in the room. It’s the new role for inference and artificial intelligence (AI), to bring more data-driven analytics to more places more rapidly.

Any thoughts about where the relationship between HPE and Cohesity will go on an AI tangent product strategy?

AI enhances data partnership

Salmon: You touched earlier, Dana, on HPE InfoSight, and we are really excited about the opportunity to partner even closer with HPE on it. That’s an incredibly successful product in its own right. The opportunity for us to work closer and do some things together around InfoSight is exciting.

On the Cohesity side, we talk a lot about not just AI but machine learning (ML) and where we can go proactively to give customers insights into not only the data, but also the environment itself. It can be very predictive. We are working incredibly hard on that right now. And again, I think this is an area that is really just getting started in terms of what we are going to be able to do over a long period of time.

Gardner: Paul, anything to offer on the AI future?

Glaser: Rob touched on the immediate opportunity for the two companies to work together, which is around HPE InfoSight and marrying our capabilities in terms of predictability and ML around IT infrastructure and creative solutions around that.

As you extend the vision to being edge-centric, as you look into the future where applications become more edge-centric and compute is going to move toward the data at the edge, the lifecycle of what that data looks like from a data management perspective at the edge — and where it ultimately resides — is going to become an interesting opportunity. Some of the AI capabilities can provide insight on where the best place is for that computation, and for that data, to live. I think that will be interesting down the road.

Gardner: Rob, for other startups that might be interested in working with a big vendor like HPE through a program like Pathfinder, any advice that you can offer?

Salmon: As a startup, you know you are good at something, and it’s typically around the technology itself. You may have a founder like Mohit Aron, who is absolutely brilliant in his own right in terms of what he has already done in the industry and what we are going to continue to do. But you have got to do all the building around that brilliance and that technology and turn it into a true solution.

And again, back to this notion of a solution: the solution needs global scale and the support customers expect — not just one experience with you, but what they are used to experiencing from the enterprises that support them. You can learn a lot from working with large enterprises. They may not be the ones to tell you exactly how you are going to code your product; we have that figured out with the brilliance of Mohit and the engineering team around him. But as we think about getting to scale, and scaling the operation in terms of what we are doing, leaning on someone like the Pathfinder group at HPE has helped us an awful lot.


The other great thing about working with the Pathfinder group is, as Paul touched on earlier, they work with other portfolio companies. They are working with companies that may be in a slightly different space than we are, but that are seeing similar challenges.

How do you grow? How do you open up a market? How do you look at bringing the product to market in different ways? We talked about consumption pricing and the new consumption models. Since they are experiencing that with others, and what they have already done at HPE, we can benefit from that experience. So leveraging a large enterprise like an HPE and the Pathfinder group, for what they know and what they are good at, has been invaluable to Cohesity.

Gardner: Paul, for those organizations that might want to get involved with Pathfinder, where should they go and what would you guide them to in terms of becoming a potential fit?

Glaser: I’d just point them to the Pathfinder website. You can find information on the program there, contact information, portfolio companies, and that type of thing.

We also put out a set of perspectives that describe some of our investment theses, so you can see our areas of interest. At a high level, we look for companies that are aligned to HPE’s core strategies, which are around building up the hybrid IT business as well as the intelligent edge.

So we have those specific swim lanes from a strategic perspective. And second, we are looking for companies that have demonstrated success from a product perspective — perhaps a couple of initial customer wins — and that now need help to scale the business. Those are the types of opportunities we are looking for.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


HPE’s Erik Vogel on what’s driving success in hybrid cloud adoption and optimization

The next BriefingsDirect Voice of the Innovator discussion explores the latest insights into hybrid cloud success strategies.

As with the often ad hoc adoption of public cloud services by various groups across an enterprise, getting the right mix and operational coordination required of true hybrid cloud cannot be successful if it’s not well managed. While many businesses recognize there’s a hybrid cloud future, far fewer are adopting a hybrid cloud approach with due diligence, governance, and cost optimization.

Stay with us as we examine the innovation maturing around hybrid cloud models and operations and learn how proper common management of hybrid cloud can make or break the realization of its promised returns.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to explain how to safeguard successful hybrid cloud deployments and operations is Erik Vogel, Global Vice President of Hybrid IT and Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The cloud model was very attractive, people jumped into it, but like with many things, there are unintended consequences. What’s driving cloud and hybrid cloud adoption, and what’s holding people back?

Vogel: All enterprises are hybrid at this point, and whether they have accepted that realization depends on the client. But pretty much all of them are hybrid. They are all using a combination of on-premises, public cloud, and software-as-a-service (SaaS) solutions. They have brought all of that into the enterprise. There are very few enterprises we talk to that don’t have some hybrid mix already in place.

Hybrid is here, but needs rationalization

But when we ask them how they got there; most have done it in an ad hoc fashion. Most have had developers who went out to one or multiple hyperscale cloud providers, or the business units went out and started to consume SaaS solutions, or IT organizations built their own on-premises solutions whether that’s an open private cloud or a Microsoft Azure Stack environment.

They have done all of this in pockets within the organization. Now, they are seeing the challenge of how to start managing and operating this in a consistent, common fashion. There are a lot of different solutions and technologies, yet everyone has their own operating model, own consoles, and own rules to work within.

And that is where we see our clients struggling. They don’t have a holistic strategy or approach to hybrid, but rather they’ve done it in this bespoke or ad hoc fashion. Now they realize they are going to have to take a step back to think this through and decide what is the right approach to enforce common governance and gain common management and operating principles, so that they’re not running 5, 6, 8 or even 10 different operating models. Rather, they need to ask, “How do we get back to where we started?” And that is a common operating model across the entire IT estate.

Gardner: IT traditionally over the years has had waves of adoption that led to heterogeneity that created complexity. Then that had to be managed. When we deal with multicloud and hybrid cloud, how is that different from the UNIX wars, or distributed computing, and N-tier computing? Why is cloud a more difficult heterogeneity problem to solve than the previous ones?

Vogel: It’s more challenging. It’s funny, we typically referred to what we used to see in the data center as the Noah’s Ark data center. You would walk into a data center and see two of everything, two of every vendor, two of just about everything within the data center.

And it was about 15 years ago when we started to consolidate all of that into common infrastructures, common platforms to reduce the operational complexity. It was an effort to reduce total cost of ownership (TCO) within the data center and to reduce that Noah’s Ark data center into common, standardized elements.

Now that pendulum is starting to swing back. It’s becoming more of a challenge because it’s now so easy to consume non-standard and heterogeneous solutions. Before, there was still a gatekeeper for everything within the data center. Somebody had to make a decision that a certain piece of infrastructure or component would be deployed within the data center.

Now, we have developers who go to a cloud and, with just a swipe of a credit card, consume any of the three or four hyperscale solutions, plus literally thousands of SaaS solutions. Just look at any one of those platforms and all of the different options that surround it.

All of a sudden, we lost the gatekeeper. Now we are seeing sprawl toward more heterogeneous solutions occurring even much faster than what we saw 10 or 15 years ago with the Noah’s Ark data center.

The pendulum is definitely shifting back toward consuming lots of different solutions with lots of different capabilities and services. And we are seeing it moving much faster than it did before because of that loss of a gatekeeper.

Gardner: Another difference is that we’re talking mostly about services. By consuming things as services, we’re acquiring them not as a capital expenditure that has a three- to five-year cycle of renewal, this is on-demand consumption, as you use it.

That makes it more complicated, but it also makes it a problem that can be solved more easily. Is there something about the nature of an all-services hybrid and multicloud environment on an operations budget that makes it more solvable?

Services become the norm 

Vogel: Yes, absolutely. The economics definitely play into this. I have this vision that within the next five years, we will no longer call things “as a service” because it will be the norm, the standard. We will only refer to things that are not as a service, because as an industry we are seeing a push toward everything being consumed as a service.

From an operating standpoint, the idea of consuming and paying for only what we use is very, very attractive. Again, if you look back 10 or 15 years, typically within a data center, we’d be buying for a three- or four-year lifespan. That forced us to make predictions as to what type of demand we would be placing on capital expenditures.

And what would happen? We would always overestimate. If you looked at utilization of CPU, disk, and memory, it was always 20 to 25 percent; very low utilization, especially pre-virtualization. We would end up overbuying, pay the full load, and still pay for full maintenance and support, too.

There was very little ability to dial that up or down. The economic capability of being able to consume everything as a service is definitely changing the game, even for things you wouldn’t think of as a service, such as buying a server. Our enterprise customers are really taking notice of that because it gives them the ability to flex the expenditures as their business cycles go up and down.

Rarely do we see enterprises with constant demand for compute capacity. So, it’s very nice for them to be able to flex that up and down, adjust the normal seasonal effects within a business, and be able to flex that operating expense as their business fluctuates.

That is a key driver of moving everything to an as-a-service model, giving flexibility that just a few years ago we did not have.
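
The overbuying arithmetic behind this point is easy to make concrete. A minimal sketch, with hypothetical prices, of how low utilization inflates the effective cost of fixed capacity compared with pay-per-use:

```python
# Hypothetical numbers: a server bought outright versus capacity paid for as used.

def capex_cost_per_used_unit(purchase_price, lifespan_months, avg_utilization):
    """Effective monthly cost per unit of capacity actually used."""
    monthly_cost = purchase_price / lifespan_months
    return monthly_cost / avg_utilization

# A $36,000 server on a 3-year lifespan costs $1,000/month nominally, but at
# 25 percent utilization each unit of work effectively costs four times that.
print(capex_cost_per_used_unit(36_000, 36, 0.25))  # 4000.0
```

At 100 percent utilization the effective and nominal costs coincide, which is the economics a pay-per-use, as-a-service model approximates.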

Gardner: The good news is that these are services — and we can manage them as services. The bad news is these are services coming from different providers with different economic and consumption models. There are different application programming interfaces (APIs), stock keeping unit (SKU) definitions, and management definitions that are unique to their own cloud organization. So how do we take advantage of the fact that it’s all services but conquer the fact that it’s from different organizations speaking, in effect, different languages?

Vogel: You’re getting to the heart of the challenge in terms of managing a hybrid environment. If you think about how applications are becoming more and more composed now, they are built with various different pieces, different services, that may or may not be on-premises solutions.

One of our clients, for example, has built an application for their sales teams that provides real-time client data and analytics before a seller goes in and talks to a customer. When you look at the complexity of that application: they have an on-premises customer database, and they get point-of-sale data from another SaaS provider.

Why You Need Everything As a Service

They also have analytics engines they get from one of the cloud hyperscalers. And all of this comes together to drive a mobile app that presents all of this information seamlessly to their end-user seller in real-time. They become better armed and have more information when they go meet with their end customer.

When we look at these new applications, and I don’t even call them applications because they are really services composed from multiple applications, we see them crossing multiple service providers, multiple SaaS providers, and multiple hyperscalers.

And as you look at how we interface and connect with those, how we pass data, exchange information across these different service providers, you are absolutely right, the taxonomies are different, the APIs are different, the interfaces and operations challenges are different.

When that seller goes to make that call, and they bring up their iPad app and all of a sudden, there is no data or it hasn’t been refreshed in three months, who do you call? How do you start to troubleshoot that? How do you start to determine if it’s a Salesforce problem, a database problem, a third-party service provider problem? Maybe it’s my encrypted connection I had to install between Salesforce and my on-premises solution. Maybe it’s the mobile app. Maybe it’s a setting on the iPad itself.

Adding up all of that complexity is what’s building the problem. We don’t have consistent APIs, consistent taxonomies, or even the way we look at billing and the underlying components for billing. And when we break that out, it varies greatly between service providers.

This is where we understand the complexity of hybrid IT. We have all of these different service providers working and operating independently, yet we’re trying to bring them together to provide end-customer services. Composing those different services creates one of the biggest challenges we have today within hybrid cloud environments.
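
At its core, the taxonomy problem described here is a data-normalization problem: each provider’s billing export uses different field names and units. A minimal sketch of mapping them into one schema; every field name below is hypothetical, and real exports are far richer:

```python
# Map billing line items from two hypothetical providers into a common schema.

def normalize(provider, record):
    if provider == "cloud_a":     # imagined export: dollars, service under "product"
        return {"service": record["product"], "usd": record["cost"]}
    if provider == "cloud_b":     # imagined export: cents, service under "sku_family"
        return {"service": record["sku_family"], "usd": record["amount_cents"] / 100}
    raise ValueError(f"no mapping for provider: {provider}")

def total_by_service(line_items):
    """Aggregate spend per service across providers, post-normalization."""
    totals = {}
    for provider, record in line_items:
        row = normalize(provider, record)
        totals[row["service"]] = totals.get(row["service"], 0.0) + row["usd"]
    return totals

bills = [
    ("cloud_a", {"product": "compute", "cost": 120.0}),
    ("cloud_b", {"sku_family": "compute", "amount_cents": 8050}),
]
print(total_by_service(bills))  # {'compute': 200.5}
```

Each provider mapping is one small adapter, so supporting another cloud means writing one more adapter rather than reworking the aggregation.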

Gardner: Even if we solve the challenge on the functional level — of getting the apps and services to behave as we want — it seems as much or more a nightmare for the chief financial officer (CFO) who has to determine whether you’re getting a good deal or buying redundancy across different cloud providers. A lot of times in procurement you cut a deal on volume. But how do you do that if you don’t know what you’re buying from whom?

How do we pay for these aggregate cloud services in some coordinated framework with the least amount of waste?

How to pay the bills

Vogel: That is probably one of the most difficult jobs within IT today, the finance side of it. There are a lot of challenges in putting that bill together. What does that bill really look like? And not just at an individual component level. I may be able to see what I’m paying Amazon Web Services (AWS) or what Azure Stack is costing me. But how do we aggregate that? What is the cost to provide a service? This has been a challenge for IT forever; it’s always been difficult to slice costs by service.

We knew what compute costs, what network costs, and what the storage costs were. But it was always difficult to make that vertical slice across the budget. And now we have made that problem worse because we have all these different bills coming in from all of these different service providers.

The procurement challenge is even more acute because now we have these different service providers. How do we know what we are really paying? Developers swipe credit cards and never see the bill, so there is no true accounting of what’s being spent across the public clouds. It comes through as a credit card expense and is never really directed to IT.

We need to get our hands around these different expenses, where we are spending money, and think differently about our procurement models for these services.

In the past, we talked about this as a brokerage, but it’s a lot more than that. It’s more about strategic sourcing and procurement models for cloud and hybrid cloud-related services.

Our IT procurement models have to change to address the problem of how we really know what we are paying for. Are we getting the strategic value out of the expenses within hybrid that we had expected?

It’s less about brokerage and looking for that lowest-cost provider and trying to reduce the spend. It’s more about, are we getting the service-level agreements (SLAs) we are paying for? Are we getting the services we are paying for? Are we getting the uptime we are paying for?
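
The uptime question, at least, is mechanically checkable. A minimal sketch, with illustrative numbers, of comparing measured downtime in a billing period against the SLA target you are paying for:

```python
# Did a provider meet its availability SLA for a billing period?

def check_sla(downtime_minutes, period_minutes, sla_target):
    """Return (met, achieved_availability) for the period."""
    achieved = 1 - downtime_minutes / period_minutes
    return achieved >= sla_target, achieved

# 50 minutes of downtime in a 30-day month (43,200 minutes) vs. a 99.9% SLA:
# only about 99.88% availability was delivered, so the SLA was missed.
met, achieved = check_sla(50, 43_200, 0.999)
print(met, round(achieved, 5))  # False 0.99884
```

Running this check per provider, per period, turns “are we getting the uptime we are paying for?” from a feeling into a report.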

Gardner: In business over the years, when you have a challenge, you can try to solve it yourself and employ intelligence technologies to tackle complexity. Another way is to find a third-party that knows the business better than you do, especially for small- to medium-sized businesses (SMBs).

Are we starting to see an ecosystem develop where the consumption model for cloud services is managed more centrally, and then those services are repurposed and resold to the actual consumer business?

Third-parties help hybrid manage costs 

Vogel: Yes, I am definitely starting to see that. There’s a lot being developed to help customers consume and buy these services more intelligently. I always joke that the cheapest thing you can buy is somebody else’s experience, and that is absolutely the case when it comes to hybrid cloud services providers.

The reality is no enterprise can have expertise in all three of the hyperscalers, in all of the hundreds of SaaS providers, for all of the on-premises solutions that are out there. It just doesn’t exist. You just can’t do it all.

It really becomes important to look for people who can aggregate this capability and bring the collective experience back to you. You have to reduce overspend and make smarter purchasing decisions. Buying via these third-party services can prevent things like lock-in and reduce risk. There is tremendous value being created by the firms that are jumping into that model and helping clients address these challenges.

The third-parties have people who have actually gone out and consumed and purchased within the hyperscalers, who have run workloads within those environments, and who can help predict what the true cost should be — and, more importantly, maintain that optimization going forward.

How to Remove Complexity From Multicloud and Hybrid IT

It’s not just about going in and buying anymore. There is ongoing optimization that has to occur, ongoing cost optimization where we’re continuously evaluating whether we’re making the right decisions. And we are finding that the calculus changes over time.

So, while it might have made a lot of sense to put a workload, for example, on-premises today, based on the demand for that application and on pricing changes, it may make more sense to move that same workload off-premises tomorrow. And then in the future it may also make sense to bring it back on-premises for a variety of reasons.

You have to constantly be evaluating that. That’s where a lot of the firms playing in the space can add a lot of value now, in helping with ongoing optimization, by making sure that we are always making the smart decision. It’s a very dynamic ecosystem, and the calculus, the metrics are constantly changing. We have the ability to constantly reevaluate. That’s the beauty of cloud, it’s the ability to flex between these different providers.

Gardner: Erik, for those organizations interested in getting a better handle on this, are there any practical approaches available now?

The right mix of data and advice 

Vogel: We have a tool, called HPE Right Mix Advisor, which gives us the ability to go in and assess very large application portfolios. The nice thing is, it scales up and down very nicely. It is delivered in a service model, so we are able to assess a set of applications against the variables I mentioned, with the weighting of the factors, and come up with a concrete list of recommendations as to what our clients should do right now.

In fact, we like to talk not about the thousand things they could do — but what are the 10 or 20 things they should start on tomorrow morning. The ones that are most impactful for their business.

The Right Mix Advisor tool helps identify those things that matter the most for the business right now, and provides a tactical plan to say, “This is what we should start on.”

And it’s not just the tool, we also bring our expertise, whether that’s from our Cloud Technology Partners (CTP) acquisition, RedPixie, or our existing HPE business where we have done this for years and years. So, it’s not just the tool, but also experts, looking at that data, helping to refine that data, and coming up with a smart list that makes sense for our clients to get started on right now.

And of course, once they have accomplished those things, we can come back and look at it again and say, “Here is your next list, the next 10 or 20 things.” And that’s really how Right Mix Advisor was designed to work.

Gardner: It seems to me there would be a huge advantage if you were able to get enough data about what’s going on at the market level; that is, to aggregate how the cloud providers are selling and charging, and what the consumption patterns are.

If you were in a position to gather all of the data about enterprise consumption among and between the cloud providers, you would have a much better idea of how to procure properly, manage properly, and optimize. Is such a data well developing? Is there anyone in the right position to be able to gather the data and start applying machine learning (ML) technologies to develop predictions about the best course of action for a hybrid cloud or hybrid IT environment?

Vogel: Yes. In fact, we have started down that path. HPE has started to tackle this by developing an expert system, a set of logic rules that helps make those decisions. We did it by combining a couple of fairly large datasets that we have developed over the last 15 years, primarily with HPE’s history of doing a lot of application migration work. We really understand on the on-premises side where applications should reside based on how they are architected and what the requirements are, and what type of performance needs to be derived from that application.

We have combined that with other datasets from some of our recent cloud acquisitions, CTP and RedPixie, for example. That has brought us a huge wealth of information based on a tremendous number of application migrations to the public clouds. And we are able to combine those datasets and develop this expert system that allows us to make those decisions pretty quickly as to where applications should reside based on a number of factors. Right now, we look at about 60 different variables.

But what’s really important when we do that is to understand from a client’s perspective what matters. This is why I go back to that strategic sourcing discussion. It’s easy to go in and assume that every client wants to reduce cost. And while every client wants to do that — no one would ever say no to that — usually that’s not the most important thing. Clients are worried about performance. They also want to drive agility, and faster time to market. To them that is more important than the amount they will save from a cost-reduction perspective.

The first thing we do when we run our expert system is weight the variables based on what’s important to that specific client, aligned to their strategy. This is where it gets challenging for any enterprise trying to make smart decisions. To make strategic sourcing decisions, you have to understand strategically what’s important to your business before you can make intelligent decisions about where workloads should go across the hybrid IT options you have. So we run an expert system to help make those decisions.
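
As a sketch of the general technique, not of HPE’s actual Right Mix Advisor logic, client-weighted placement can be reduced to a scoring function. The three variables below are made up and stand in for the roughly 60 the expert system evaluates:

```python
# Toy weighted scoring: pick the venue with the highest client-weighted score.

def best_placement(venue_scores, weights):
    """venue_scores: {venue: {criterion: score 0-10}}; weights should sum to 1."""
    def total(scores):
        return sum(weights[c] * s for c, s in scores.items())
    return max(venue_scores, key=lambda v: total(venue_scores[v]))

# A client that weights performance and agility over cost reduction.
weights = {"cost": 0.2, "performance": 0.5, "agility": 0.3}
venues = {
    "on_premises":  {"cost": 8, "performance": 9, "agility": 4},  # 7.3 weighted
    "public_cloud": {"cost": 6, "performance": 7, "agility": 9},  # 7.4 weighted
}
print(best_placement(venues, weights))  # public_cloud
```

Reweighting the same scores for a cost-first client can flip the answer for the same workload, which is why the weighting step has to come before any placement recommendation.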

Now, as we collect more data, this will move toward more artificial intelligence (AI). As everybody is aware, AI requires a lot of data, and since we are still in the very early stages of true hybrid cloud and hybrid IT, we don’t have a massive enough dataset yet to make these decisions in a truly automated, learning-type model.

We started with an expert system to help us do that, to move down that path. But very quickly we are learning, and we are building those learnings into our models that we use to make decisions.

So, yes, there is a lot of value in people who have been there and done that. Being able to bring that data together in a unified fashion is exactly what we have done to help our clients. On your own, these decisions can take a year to figure out, yet you have to be able to make them quickly because it’s a very dynamic model. A lot of things are constantly changing. You have to keep loading the models with the latest and greatest data so you are always making the best, smartest decision, and always optimizing the environment.

Innovation, across the enterprise 

Gardner: Not that long ago, innovation in a data center was about speeds and feeds. You would innovate on technology and pass along those fruits to your consumers. But now we have innovated on economics, management, and understanding indirect and direct procurement models. We have had to innovate around intelligence technologies and AI. We have had to innovate around making the right choices — not just on cost but on operations benefits like speed and agility.

How has innovation changed such that it used to be a technology innovation but now cuts across so many different dynamic principles of business?

Vogel: It’s a really interesting observation. That’s exactly what’s happening. You are right, even as recently as five years ago we talked about speeds and feeds, trying to squeeze a little more out of every processor, trying to enhance the speed of the memory or the storage devices.

But now, as we have pivoted toward a services mentality, nobody asks when you buy from a hyperscaler — Google Cloud, for example — what central processing unit (CPU) chips they are running or what the chip speeds are. That’s not really relevant in an as-a-service world. So, the innovation then is around the service sets, the economic models, the pricing models, that’s really where innovation is being driven.

At HPE, we have moved in that direction as well. We provide our HPE GreenLake model and offer a flex-capacity approach where clients can buy capacity on-demand. And it becomes about buying compute capacity. How we provide that, what speeds and feeds we are providing becomes less and less important. It’s the innovation around the economic model that our clients are looking for.

We are only going to continue to see that type of innovation going forward, where it’s less about the underlying components. In reality, if you are buying the service, you don’t care what sort of chips and speeds and feeds are being provided on the back end as long as you are getting the service you have asked for, with the SLA, the uptime, the reliability, and the capabilities you need. All of what sits behind that becomes less and less important.

Think about how you buy electricity. You just expect 110 volts at 60 hertz coming out of the wall, and you expect it to be on all the time. You expect it to be consistent, reliable, and safely delivered to you. How it gets generated, where it gets generated — whether it’s a wind turbine, a coal-burning plant, a nuclear plant — that’s not important to you. If it’s produced in one state and transferred to another over the grid, or if it’s produced in your local state, that all becomes less important. What really matters is that you are getting consistent and reliable services you can count on.

How to Leverage Cloud, IoT, Big Data, and Other Disruptive Technologies

And we are seeing the same thing within IT as we move to that service model. The speeds and feeds, the infrastructure, become less important. All of the innovation is now being driven around the as-a-service model and what it takes to provide that service. We innovate at the service level, whether that’s for flex capacity or management services, in a true as-a-service capability.

Gardner: What do your consumer organizations need to think about to be innovative on their side? How can they be in a better position to consume these services such as hybrid IT management-as-a-service, hybrid cloud decision making, and the right mixture of decisions-as-a-service?

What comes next when it comes to how the enterprise IT organization needs to shift?

Business cycles speed IT up 

Vogel: At a business level, within almost every market and industry, we are moving from what used to be slow-cycle business to standard-cycle, and in a lot of cases from standard-cycle to fast-cycle business. Even businesses that were traditionally slow- or standard-cycle are accelerating, and the underlying technology is what’s creating that.

So every company is a technology company. That is becoming more and more true every day. As a result, it’s driving business cycles faster and faster. So, IT, in order to support those business cycles, has to move at that same speed.

And we see enterprises moving away from a traditional IT model when those enterprises’ IT cannot move at the speed the business is demanding. We will still see IT, for example, take six months to provide a platform when the business says, “I need it in 20 minutes.”

We will see a split between traditional IT and a digital innovation group within the enterprise. This group will be owned by the business unit as opposed to core IT.

So, when IT cannot move fast enough or provide the responsiveness and level of service required, businesses respond by looking outside and consuming services externally.

As we move forward, how can clients start to move in this direction? At HPE, as we look at some of the services we have announced and will be rolling out in the next six to 12 months, they are designed to help our clients move faster. They provide operational support and management for hybrid IT to take that burden away from IT, especially where IT may not have the skill sets or capabilities, and to deliver a seamless operating experience to our IT customers. Those customers need to focus on the things that accelerate their business; that is what the business units are demanding.

To stay relevant, IT is going to have to do that, too. They are going to have to look for help and support so that they can move at the same speed and pace that businesses are demanding today. And I don’t see that slowing down. I don’t think anybody sees that slowing down; if anything, we see the pace continuing to accelerate.

When I talk about fast-cycle, services or solutions we put into the market that once had a shelf life of two to three years are now compressed to six months. It’s amazing how fast competition comes in, even when we are delivering innovative solutions. So, IT has to accelerate at that speed as well.

The HPE GreenLake hybrid cloud offering, for example, gives our clients the ability to operate at that speed by providing managed services capabilities across the hybrid estate. It provides a consistent platform, and then allows them to innovate on top of it. It takes away the management operation from their focus and lets them focus on what matters to the business today, which is innovation.

Gardner: For you personally, Erik, where do you get inspiration for innovation? How do you think out of the box when we can now see that that’s a necessary requirement?

Inspired by others

Vogel: One of the best parts about my job is the time I get to spend with our customers and to really understand what their challenges are and what they are doing. One of the things we look at are adjacent businesses.

We try to learn what is working well in retail, for example. What innovation is there, and what lessons learned can we apply elsewhere? A lot of times the industry shifts so quickly that we don’t have all of the answers. We can’t take a product-out approach any longer; we have to start from the customer and work back. Having that broad view and looking outside is really helping us. It’s where we are getting a lot of our inspiration.

For example, we are really focused on the overall experience that our clients have with HPE, and trying to drive a very consistent, standardized, easy-to-choose type of experience with us as a company. And it’s interesting as an engineering company, with a lot of good development and engineering capabilities, that we tend to look at it from a product-out view. We build a portal that they can work within, we create better products, and we get that out in front of the customer.

But by looking outside, we are saying, “Wait a minute, what is it, for example, about Uber that everybody likes?” It’s not necessarily that their app is good. It’s about the clean car, about not having to pay when you get out of the car or fumble for a credit card. It’s about seeing a map and knowing where the driver is. It’s about predictable cost, knowing up front what the ride will be. That overall experience is what makes Uber, Uber. It’s not just creating an app and saying, “Well, the app is the experience.”

We are learning a lot from adjacent businesses, adjacent industries, and incorporating that into what we are doing. It’s part of that as-a-service mentality: we have to think about the experience our customers are asking for and build solutions that meet that experience requirement, not just the technical requirement. We are very good at the technical side; now we have to meet the experience requirement as well.

How to Develop Hybrid Cloud Strategies With Confidence

And this has been a real eye-opener for me personally. It has been a really fun part of the job, to look at the experience we are trying to create. How do we think differently? Rather than producing products and putting them out into the market, how do we think about creating that experience first and then designing and creating the solutions that sit underneath it?

When you talk about where we get inspiration, it’s really about looking at those adjacencies. It’s understanding what’s happening in the broader as-a-service market and taking the best of what’s happening and saying, “How can we employ those types of techniques, those tricks, those lessons learned into what we are doing?” And that’s really driving a lot of our development and inspiration in terms of how we are innovating as a company within HPE.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How total deployment intelligence overcomes the growing complexity of multicloud management


The next BriefingsDirect Voice of the Innovator discussion focuses on the growing complexity around multicloud management and how greater accountability is needed to improve business impacts from all-too-common haphazard cloud adoption.

Stay with us to learn how new tools, processes, and methods are bringing insights and actionable analysis that help regain control over the increasing challenges from hybrid cloud and multicloud sprawl.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explore a more pragmatic path to modern IT deployment management is Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving the need for multicloud at all? Why are people choosing multiple clouds and deployments?

Singh: That’s a very interesting question, especially today. However, you have to step back and think about why people went to the cloud in the first place – and what were the drivers – to understand how sprawl expanded to a multicloud environment.

Initially, when people began moving to public cloud services, the idea was speed, agility, and quick access to resources. IT was seen as an obstacle to getting on-premises resources. People said, “Let me get the work going and let me deploy things faster.”

And they were able to quickly launch applications, which increased their velocity and time-to-market. Cloud helped them get there very fast. However, we now have a choice among multiple public cloud environments, as well as private cloud environments where people can do similar things on-premises. There came a time when people realized, “Oh, certain applications fit in certain places better than others.”

From cloud sprawl to cloud smart

For example, if I want to run a serverless environment, I might want to run in one cloud provider versus another. But if I want to run more machine learning (ML), artificial intelligence (AI) kinds of functionality, I might want to run that somewhere else. And if I have a big data requirement, with a lot of data to crunch, I might want to run that on-premises.

So you now have more choices to make. People are thinking about where’s the best place to run their applications. And that’s where multicloud comes in. However, this doesn’t come for free, right?

How to Determine Ideal Workload Placement

As you add more cloud environments and different tools, it leads to what we call tool sprawl. You now have people tying all of these tools together trying to figure out the cost of these different environments. Are they in compliance with the various norms we have within our organization? Now it becomes very complex very fast. It becomes a management problem in terms of, “How do I manage all of these environments together?”

Gardner: It’s become too much of a good thing. There are very good reasons to do cloud, hybrid cloud, and multicloud. But there hasn’t been a rationalization about how to go about it in an organizational way that’s in the best interest of the overall business. It seems like a rethinking of how we go about deploying IT in general needs to be part of it.

Singh: Absolutely right. I see three pillars that need to be addressed in terms of looking at this complexity and managing it well: people, process, and technology. The technology exists, but unless you have the right skill sets in your people and the right processes in place, it’s going to be the Wild West. Everything is just going to be chaos, and in the end you falter, not achieving what you really set out to achieve.

I look at people, process, and technology as the three pillars of taming this tool sprawl, and getting all three right is absolutely necessary for any company as it traverses its multicloud journey.

Gardner: This is a long-term, thorny problem. And it’s probably going to get worse before it gets better.

Singh: I do see it getting worse, but I also see a lot of people beginning to address these problems. Vendors, including HPE, are looking at this problem and trying to get ahead of it before a lot of enterprises crash and burn. We have experience with our customers, and we have engaged with them to help them on this journey.

It is going to get worse and people are going to realize that they need professional help. It requires that we work with these customers very closely and take them along based on what we have experienced together.

Gardner: Are you taking the approach that the solution for hybrid cloud management and multicloud management can be done in the same way? Or are they fundamentally different?

Singh: Fundamentally, it’s the same problem set. You must deploy the applications to the right places that are right for your business — whether it’s multicloud or hybrid cloud. Sometimes the terminology blurs. But at the end of the day, you have to manage multiple environments.

You may be connecting private or off-premises hybrid clouds, and maybe there are different clouds. The problem will be the same — you have multiple tools, multiple environments, and the people need training and the processes need to be in place for them to operate properly.

Gardner: What makes me optimistic about the solution is there might be a fourth leg on that stool. People, process, and technology, yes, but I think there is also economics. One of the things that really motivates a business to change is when money is being lost and the business people think there is a way to resolve that.

The economics issue — about cost overruns and a lack of discipline around procurement — is both a part of the problem and the solution.

Economics elevates visibility 

Singh: I am laughing right now because I have talked to so many customers about this.  A CIO from an entertainment media company, for example, recently told me she had a problem. They had a cloud-first strategy, but they didn’t look at the economics piece of it. She didn’t realize, she told me, where their virtual machines (VMs) and workloads were running.

“At the end of the month, I’m seeing hundreds of thousands of dollars in bills. I am being surprised by all of this stuff,” she said. “I don’t even know whether they are in compliance. The overhead of these costs — I don’t know how to get a handle on it.”

So this is a real problem that customers are facing. I have heard this again and again: They don’t have visibility into the environment. They don’t know what’s being utilized. Sometimes resources are underutilized, sometimes they are overutilized. And they don’t know what they are going to end up paying at the end of the day.

A common example is, in a public cloud, people will launch a very large number of VMs because that’s what they are used to doing. But they consume maybe 10 to 20 percent of that. What they don’t realize is that they are paying for the whole bill. More visibility is going to become key to getting a handle on the economics of these things.
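
Singh’s example lends itself to a quick back-of-the-envelope check. This is only a hedged sketch — the VM names, prices, and utilization figures below are hypothetical, and nothing here reflects an actual HPE tool — but it shows how basic visibility data turns into a cost signal:

```python
# Hypothetical inventory of cloud VMs: the hourly price is billed in full
# regardless of how much of the instance is actually used.
vms = [
    {"name": "web-01", "cloud": "aws", "hourly_usd": 0.68, "avg_util": 0.15},
    {"name": "etl-02", "cloud": "gcp", "hourly_usd": 1.21, "avg_util": 0.82},
    {"name": "dev-03", "cloud": "aws", "hourly_usd": 0.34, "avg_util": 0.09},
]

HOURS_PER_MONTH = 730
UTIL_THRESHOLD = 0.20  # flag anything below 20 percent average utilization

def monthly_waste(vm):
    """Estimate the monthly spend attributable to unused capacity."""
    return vm["hourly_usd"] * HOURS_PER_MONTH * (1 - vm["avg_util"])

for vm in vms:
    if vm["avg_util"] < UTIL_THRESHOLD:
        print(f"{vm['name']} ({vm['cloud']}): "
              f"~${monthly_waste(vm):,.0f}/month paid for idle capacity")
```

Even a toy report like this makes the “10 to 20 percent consumed, 100 percent billed” gap concrete, which is the visibility Singh argues for.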

Gardner: We have seen these kinds of problems before in general business procurement. Many times it’s the Wild West, but then they bring it under control. Then they can negotiate better rates as they combine services and look for redundancies. But you can’t do that until you know what you’re using and what it costs.

So, is the first step getting an inventory of where your cloud deployments are, what the true costs are, and then start to rationalize them?

Guardrails reduce risk, increase innovation

Singh: Absolutely right. That’s where you start, and at HPE we have services to do that. The first thing is to understand where you are. Get a base level of what is on-premises, what is off-premises, and which applications are required to run where. What’s the footprint that I require in these different places? What is the overall cost I’m incurring, and where do I want to be? Answering those questions is the first step to getting a mixed environment you can control — and getting away from the Wild West.

Put in the compliance guardrails so that IT can avoid the problems we are seeing today.

Gardner: As a counterpoint, I don’t think that IT wants to be perceived as the big bad killjoy that comes to the data scientists and says, “You can’t get those clusters to support the data environment that you want.” So how do you balance that need for governance, security, and cost control with not stifling innovation and allowing creative freedom?

How to Transform

The Traditional Datacenter 

Singh: That’s a very good question. When we started building out our managed cloud solutions, a key criterion was to provide the guardrails without stifling innovation for the line-of-business managers and developers. The way you do that is by not becoming the man in the middle. The idea is that you allow the lines of business and the developers to access the resources they need. However, you put guardrails around which resources they can access and how much they can access, and you provide visibility into the budgets. You still let them access the direct APIs of the different multicloud environments.

You don’t say, “Hey, you have to put in a request to us to do these things.” You have to be more behind-the-scenes, hidden from view. At the same time, you need to provide those budgets and those controls. Then they can perform their tasks at the speed they want and access the resources they need — but within the guardrails, compliance, and business requirements that IT has set.
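
The guardrail pattern Singh describes — direct API access, bounded by policy — can be sketched in a few lines. All of the policy values, field names, and limits below are hypothetical assumptions for illustration, not any real HPE or cloud-provider API:

```python
# Minimal guardrail sketch (all policy values are hypothetical): developers
# call the cloud APIs directly, but each request is first checked against
# budget and compliance limits set by IT.
POLICY = {
    "monthly_budget_usd": 5000,
    "allowed_regions": {"us-east-1", "eu-west-1"},
    "max_vcpus_per_vm": 16,
}

def check_request(spent_usd, request):
    """Return (allowed, reason) without IT acting as a man in the middle."""
    if request["region"] not in POLICY["allowed_regions"]:
        return False, "region not approved for this workload"
    if request["vcpus"] > POLICY["max_vcpus_per_vm"]:
        return False, "instance size exceeds policy"
    if spent_usd + request["est_monthly_usd"] > POLICY["monthly_budget_usd"]:
        return False, "would exceed monthly budget"
    return True, "ok"

ok, reason = check_request(4200, {"region": "us-east-1", "vcpus": 8,
                                  "est_monthly_usd": 300})
print(ok, reason)  # True ok
```

The check runs before the request reaches the provider, so developers keep their speed while IT keeps visibility and control.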

Gardner: Now that HPE has been on the vanguard of creating the tools and methods to get the necessary insights, make the measurements, recognize the need for balance between control and innovation — have you noticed changes in organizational patterns? Are there now centers of cloud excellence or cloud-management bureaus? Does there need to be a counterpart to the tools, of management structure changes as well?

Automate, yet hold hands, too

Singh: This is the process and the people parts that you want to address. How do you align your organizations, and what are the things that you need to do there? Some of our customers are beginning to make those changes, but organizations are difficult to change on this journey. Some of them are early; some of them are at a much later stage. A lot of customers, frankly, are still in the early phases of multicloud and hybrid cloud. We are working with them to make sure they understand the changes they’ll need to make in order to function properly in this new world.

Gardner: Unfortunately, these new requirements come at a time when cloud management skills — of understanding data and ops, IT and ops, and cloud and ops — are hard to find and harder to keep. So one of the things I’m seeing is the adoption of automation around guidance, strategy, and analysis. The systems start to do more for you. Tell me how automation is coming to bear on some of these problems, and perhaps mitigate the skill shortage issues.


Singh: The tools can only do so much. So you automate. You make sure the infrastructure is automated. You make sure your access to public cloud — or any other cloud environment — is automated.

That can mitigate some of the problems, but I still see a need for hand-holding from time to time in terms of the process and people. That will still be required. Automation will help tie in storage, network, and compute, and you can put all of that together. This [composability] reduces the need for and dependency on some of the process and people. Automation mitigates the physical labor and the need for someone to take days to do it. However, you need that expertise to understand what needs to be done. And this is where HPE is helping.

Automation will help tie in storage, network, and compute, and you can put all of that together. Composability reduces the need for and dependency on some of the process and the people. Automation mitigates the physical labor and the need for someone to take days to do it.

You might have heard about our HPE GreenLake managed cloud services offerings. We are moving toward an as-a-service model for a lot of our software and tooling. We are using the automation to help customers fill the expertise gap. We can offer more of a managed service by using automation tools underneath it to make our tasks easier. At the end of the day, the customer only sees an outcome or an experience — versus worrying about the details of how these things work.

Gardner: Let’s get back to the problem of multicloud management. Why can’t you just use the tools that the cloud providers themselves provide? Maybe you might have deployments across multiple clouds, but why can’t you use the tools from one to manage more? Why do we need a neutral third-party position for this?

Singh: Take a hypothetical case: I have deployments in Amazon Web Services (AWS) and I have deployments in Google Cloud Platform (GCP). And to make things more complicated, I have some workloads on premises as well. How would I go about tying these things together?

Now, if I go to AWS, they are very, very opinionated about AWS services. They have no interest in looking at bills coming out of GCP or Microsoft Azure. They are focused on their services and what they are delivering. The reality is, however, that customers are using these different environments for different things.

The multiple public cloud providers don’t have an interest in managing other clouds or looking at other environments. So third parties come in to tie everything together, so that no customer is locked into one environment.

If they go to AWS, for example, they can only look at billing, services, and performance metrics of that one service. And they do a very good job. Each one of these cloud guys does a very good job of exposing their own services and providing you visibility into their own services. But they don’t tie it across multiple environments. And especially if you throw the on-premises piece into the mix, it’s very difficult to look at and compare costs across these multiple environments.

Gardner: When we talk about on-premises, we are not just talking about the difference between your data center and a cloud provider’s data center. We are also talking about the difference between a traditional IT environment and the IT management tools that came out of that. How has HPE crossed the chasm between traditional IT management — with its automation and composability types of benefits — and the higher-level multicloud management?

Tying worlds together

Singh: It’s a struggle to tie these worlds together, in my experience, and I have been doing this for some time. I have seen customers spend months, and sometimes years, putting together a solution from various vendors, tying the pieces together, deploying something on-premises, and also trying to tie that to an off-premises environment.

At HPE, we fundamentally changed how on-premises and off-premises environments are managed by introducing our own Software as a Service (SaaS) management environment, which customers do not have to manage themselves. That SaaS portal connects to on-premises environments. Since we have a native, programmable, API-driven infrastructure, we were able to connect that. And being able to drive it from the cloud itself made it very easy to hook up to other cloud providers like AWS, Azure, and GCP. This capability ties the two worlds together. As you build out the tools, the key is understanding automation on the infrastructure piece, and how you can connect and manage all of this from a centralized portal that ties everything together with a click.

Through this common portal, people can onboard their multicloud environments, get visibility into their costs, get visibility into compliance — look at whether they are HIPAA compliant or not, PCI compliant or not — and get access to resources that allow them to begin to manage these environments.

How to Better Manage

Hybrid and Multicloud Economics 

For example, onboarding into any public cloud is very, very complex. Setting up a private cloud is very complex. But today, with the software we are building — and some of our customers are already using — we can set up a private cloud environment for people within hours. All you have to do is connect with our tools, like HPE OneView, and the other pieces we have built for the infrastructure and automation. You then tie that together to a public cloud-facing tenant portal and onboard it with a few clicks. We can connect with their public cloud accounts and give them visibility into their complete environment.

And then we can bring in cost analytics. We have consumption analytics as part of our HPE GreenLake offering, which allows us to look at cost for on-premises as well as off-premises resources. You can get a dashboard that shows you what you are consuming and where.
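
At its core, the consumption-analytics step Singh mentions is an aggregation over normalized cost records drawn from every environment. The records and field names below are invented for illustration — HPE’s actual pipeline is not public — but the shape of the dashboard roll-up looks roughly like this:

```python
# Hypothetical normalized cost records pulled from several environments.
# A single dashboard view is then just a group-by over these records.
from collections import defaultdict

records = [
    {"env": "aws",     "service": "compute", "usd": 1200.0},
    {"env": "azure",   "service": "storage", "usd": 310.0},
    {"env": "on-prem", "service": "compute", "usd": 890.0},
    {"env": "aws",     "service": "storage", "usd": 150.0},
]

totals = defaultdict(float)
for rec in records:
    totals[rec["env"]] += rec["usd"]

for env, usd in sorted(totals.items()):
    print(f"{env:>8}: ${usd:,.2f}")
```

The hard part in practice is normalizing each provider’s billing export into one schema; once that is done, comparing on-premises and off-premises spend side by side is straightforward.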

Gardner: That level of management and the capability to be distributed across all these different deployment models strikes me as a gift that could keep on giving. Once you have accomplished this and get control over your costs, you are next able to rationalize what cloud providers to use for which types of workloads. It strikes me that you can then also use that same management and insight to start to actually move things around based on a dynamic or even algorithmic basis. You can get cost optimization on the fly. You can react to market forces and dynamics in terms of demand on your servers or on your virtual machines anywhere.

Are you going to be able to accelerate the capability for people to move their fungible workloads across different clouds, both hybrid and multicloud?

Optimizing for the future

Singh: Yes, absolutely right. There is more complexity in terms of moving workloads here and there, because there are data-proximity requirements and various other requirements. But the optimization piece is absolutely something we can do on the fly, especially if you start throwing AI into the mix.

You will be learning over time what needs to be deployed where, and where your data gravity might be, and where you need applications closer to the data. Sometimes it’s here, sometimes it’s there. You might have edge environments that you might want to manage from this common portal, too. All that can be brought together.

And then with those insights, you can make optimization decisions: “Hey, this application is best deployed in this location for these reasons.” You can even automate that. You can make that policy-driven.

Think about it this way — you are a person who wants to deploy something. You request a resource, and that gets deployed for you based on the algorithm that has already decided where the optimal place to put it is. All of that works behind the scenes without you having to really think about it. That’s the world we are headed to.
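
The policy-driven placement Singh sketches can be illustrated as a simple scoring function. The candidate locations, weights, and scoring formula here are all assumptions for illustration, not an actual HPE placement algorithm:

```python
# Hedged sketch of policy-driven placement: score each candidate location
# on cost and data proximity, then pick the best match for the workload.
CANDIDATES = [
    {"name": "on-prem", "cost_index": 0.9, "data_proximity": 1.0},
    {"name": "aws",     "cost_index": 0.6, "data_proximity": 0.4},
    {"name": "gcp",     "cost_index": 0.7, "data_proximity": 0.3},
]

def place(workload_weights, candidates=CANDIDATES):
    """Lower cost_index and higher data_proximity are better."""
    w_cost, w_data = workload_weights
    def score(c):
        return w_cost * (1 - c["cost_index"]) + w_data * c["data_proximity"]
    return max(candidates, key=score)["name"]

# A data-heavy workload lands near its data; a cost-sensitive one may not.
print(place((0.2, 0.8)))  # on-prem
print(place((0.9, 0.1)))  # aws
```

A real system would fold in compliance constraints and live telemetry, and the weights could be learned over time as Singh suggests, but the policy-as-scoring shape stays the same.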

Gardner: We have talked about some really interesting subjects at a high level, even some thought leadership involved. But are there any concrete examples that illustrate how companies are already starting to do this? What kinds of benefits do they get?

Singh: I won’t name the company, but there was a business in the UK that was able to deploy VMs within minutes on their on-premises environment, as well as gain cost benefits out of their AWS deployments.

We were able to go in, connect to their VMware environment, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs. They saved 40 percent in operational efficiency. They gained self-service access.

We were able to go in, connect to their VMware environment, in this case, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs and request resources in that environment. They saved 40 percent in operational efficiency. So now they were mostly cost optimized, their IT team was less pressured to go and launch VMs for their developers, and they gained direct self-service access through which they could go and deploy VMs and other resources on-premises.

At the same time, IT had the visibility into what was being deployed in the public cloud environments. They could then optimize those environments for the size of the VMs and assets they were running there and gain some cost advantages there as well.

How to Solve Cost and Utilization

Challenges of Hybrid Cloud 

Gardner: For organizations that recognize they have a sprawl problem when it comes to cloud, that their costs are not being optimized, but that they are still needing to go about this at sort of a crawl, walk, run level — what should they be doing to put themselves in an advantageous position to be able to take advantage of these tools?

Are there any precursor activities that companies should be thinking about to get control over their clouds, and then be able to better leverage these tools when the time comes?

Watch your clouds

Singh: Start with visibility. You need an inventory of what you are doing. And then you need to ask the question, “Why?” What benefit are you getting from these different environments? Ask that question, and then begin to optimize. There are very good reasons for using multicloud environments, and I have seen many customers use them for the right reasons.

However, there are other people who have struggled because there was no governance and guardrails around this. There were no processes in place. They truly got into a sprawled environment, and they didn’t know what they didn’t know.

So first and foremost, get an idea of what you want to do and where you are today — get a baseline. And then understand the impact and what the levers of cost are. What are the drivers of efficiency? Make sure you understand the people and the process — more than the technology, because the technology does exist, but you need to make sure your people and processes are aligned.

And then lastly, call me. My phone is open. I am happy to have a talk with any customer that wants to have a talk.

How to Achieve Composability

Across Your Datacenter 

Gardner: On that note of the personal approach, people who are passionate in an organization around things like efficiency and cost control are looking for innovation. Where do you see the innovation taking place for cloud management? Is it the IT Ops people, the finance people, maybe procurement? Where is the innovative thinking around cloud sprawl manifesting itself?

Singh: All three are good places for innovation. I see IT Ops at the center of the innovation. They are the ones who will be effecting change.

Finance and procurement could benefit from these changes, and they could be drivers of the requirements. They are going to be saying, “I need to do this differently because it doesn’t work for me.” And the innovation also comes from developers and line-of-business managers who have been doing this for a while and who understand what they really need.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


How an agile focus for Enterprise Architects builds competitive advantage for digital transformation

The next BriefingsDirect business trends discussion explores the reinforcing nature of Enterprise Architecture (EA) and agile methods.

We’ll now learn how Enterprise Architects can embrace agile approaches to build competitive advantages for their companies.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about retraining and rethinking for EA in the Digital Transformation (DT) era, we are joined by Ryan Schmierer, Director of Operations at Sparx Services North America, and Chris Armstrong, President at Sparx Services North America. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ryan, what’s happening in business now that’s forcing a new emphasis for Enterprise Architects? Why should Enterprise Architects do things any differently than they have in the past?

Schmierer: The biggest thing happening in the industry right now is DT. We have been hearing about DT for the last couple of years, and most companies have embarked on some sort of a DT initiative, modernizing their business processes.

But now companies are looking beyond the initial transformation and asking, “What’s next?” We are seeing them focus on real-time, data-driven decision-making, with the ultimate goal of enterprise business agility — the capability for the enterprise to be aware of its environments, respond to changes, and adapt quickly.

For Enterprise Architects, that means learning how to be agile both in the work they do as individuals and how they approach architecture for their organizations. It’s not about making architectures that will last forever, but architectures that are nimble, agile, and adapt to change.

Gardner: Ryan, we have heard the word, agile, used in a structured way when it comes to software development — Agile methodologies, for example. Are we talking about the same thing? How are they related?

Agile, adaptive enterprise advances 

Schmierer: It’s the same concept. The idea is that you want to deliver results quickly, learn from what works, adapt, change, and evolve. It’s the same approach used in software development over the last few years. Look at how you develop software that delivers value quickly. We are now applying those same concepts in other contexts.

First is at the enterprise level. We look at how the business evolves quickly, learn from mistakes, and adapt the changes back into the environment.

Second, in the architecture domain, instead of waiting months or quarters to develop an architecture, vision, and roadmap, how do we start small, iterate, deliver quickly, accelerate time-to-value, and refine it as we go?

Gardner: Many businesses want DT, but far fewer of them seem to know how to get there. How does the role of the Enterprise Architect fit into helping companies attain DT?

The core job responsibility for Enterprise Architects is to be an extension of the company leadership and its executives. They need to look at where a company is trying to go … and develop a roadmap on how to get there.

Schmierer: The core job responsibility for Enterprise Architects is to be an extension of company leadership and its executives. They need to look at where a company is trying to go, all the different pieces that need to be addressed to get there, establish a future-state vision, and then develop a roadmap on how to get there.

This is what company leadership is trying to do. The EA is there to help them figure out how to do that. As the executives look outward and forward, the Enterprise Architect figures out how to deliver on the vision.

Gardner: Chris, tools and frameworks are only part of the solution. It’s also about the people and the process. There’s the need for training and best practices. How should people attain this emphasis for EA in that holistic definition?

Change is good 

Armstrong: I want to take a step back and look at how Ryan was describing the elevation of value propositions and best practices that seem to be working for agile solution delivery. How might that work for delivering continual, regular value? In our experience, one of the major measures of the goodness of any architecture is how well it responds to change.

In some ways, agile and EA are synonyms. If you’re doing good Enterprise Architecture, you must be agile, because responding to change is one of those quality attributes. That’s part of the traditional approach to architecture — being concerned with interoperability and integration.

As it relates to the techniques, tools, and frameworks we want to exploit — the experiences that we have had in the past – we try to push those forward into more of an operating model for Enterprise Architects and how they engage with the rest of the organization.

Learn About Agile Architecture

At The Open Group July Denver Event 

So not starting from scratch, but trying to embrace the concept of reuse, particularly reuse of knowledge and information. It’s a good best practice, obviously. That’s why in 2019 you certainly don’t want to be inventing your own architecture method or your own architecture framework, even though there may be various reasons to adapt them to your environment.

Starting with things like the TOGAF® Framework, particularly its Architecture Development Method (ADM) and reference models — those are there for individuals or vertical industries to accelerate the adding of value.

The challenge I’ve seen for a lot of architecture teams is they get sucked into the methodology and the framework, the semantics and concepts, and spend a lot of time trying to figure out how to do things with the tools. What we want to think about is how to enable the architecture profession in the same way we enable other people to do their jobs — with instant-on service offerings, using modern common platforms, and the industry frameworks that are already out there.

We are seeing people more focused on not just what the framework is but helping to apply it to close that feedback loop. The TOGAF standard, a standard of The Open Group, makes perfect sense, but people often struggle with, “Well, how do I make this real in my organization?”

Partnering with organizations that have had that kind of experience helps close that gap and accelerates the use in a valuable fashion. It’s pretty important.

Gardner: It’s ironic that I’ve heard of recent instances where Enterprise Architects are being laid off. But it sounds increasingly like the role is a keystone to DT. What’s the mismatch there, Chris? Why do we see in some cases the EA position being undervalued, even though it seems critical?

EA here to stay 

Armstrong: You have identified something that has happened multiple times. Pendulum swings happen in our industry, particularly when there is a lot of change going on and people get a little conservative. We’ve seen this before in the context of economic downturns.

But to me, it really points to the irony of what we perceive in the architecture profession based on the successes we have had. Enterprise Architecture is an essential part of running your business. But if executives don’t believe that and have not experienced it, then it’s not surprising that when there’s an opportunity to change investment priorities, Enterprise Architecture might not be at the top of the list.

We need to be mindful of where we are in time with the architecture profession. A lot of organizations struggle with the glass ceiling of Enterprise Architecture. It’s something we have encountered pretty regularly, where executives say, “I really don’t get what this EA thing is, and what’s in it for me? Why should I give you my support and resources?”


But what’s interesting about that, of course, is that if you take a step back, you don’t see executives saying the same thing about human resources or accounting. Not to suggest that they aren’t thinking about ways to optimize those as a core competency or as strategic functions. We still have an issue with acceptance of Enterprise Architecture based on the educational and developmental experiences a lot of executives have had.

We’re very hopeful that that trend is going to move in a different direction, particularly as it relates to new master’s and doctorate programs in the Enterprise Architecture field. Those elevate and legitimize Enterprise Architecture as a profession. When people go through an MBA program, they will have heard of Enterprise Architecture as an essential part of delivering on strategy.

Gardner: Ryan, looking at what prevents companies from attaining DT, what are the major challenges? What’s holding up enterprises from getting used to real-time data, gaining agility, and using intelligence about how they do things?

Schmierer: There are a couple of things going on. One of them ties back to what Chris was just talking about — the role of Enterprise Architects, and the role of architects in general. DT requires a shift in the relationship between business and IT. With DT, business functions and IT functions become entirely and holistically integrated and inseparable.

There are no separate IT processes and no separate business processes — there are just processes, because the two are intertwined. As we use more real-time data and as we leverage Enterprise Architecture, how do we move beyond the traditional relationship between business and IT? How do we look at such functions as data management and data architecture? How do we bring them into an integrated conversation with the folks who were part of the business and IT teams of the past?

A good example of how companies can do this comes in a recent release from The Open Group, the Digital Practitioner Body of Knowledge™ (DPBoK™). It says that there’s a core skill set that is general and describes what it means to be such a practitioner in the digital era, regardless of your job role or focus. It says we need to classify job roles more holistically and that everyone needs to have both a business mindset and a set of technical skills. We need to bring those together, and that’s really important.

As we look at what’s holding up DT we need to take functions that were once considered centralized assets like EA and data management and bring them into the forefront. … Enterprise Architects need to be living in the present.

As we look at what’s holding up DT — taking the next step to real-time data, broadening the scope of DT — we need to take functions that were once considered centralized assets, like EA and data management, and bring them into the forefront, and say, “You know what? You’re part of the digital transformation story as well. You’re key to bringing us along to the next stage of this journey, which is looking at how to optimize, bring in the data, and use it more effectively. How do we leverage technology in new ways?”

The second thing we need to improve is the mindset, and this is particularly an issue with Enterprise Architects right now: Enterprise Architects — and everyone in digital professions — need to be living in the present.

You asked why some EAs are getting laid off. Why is that? Think about how they approach their job in terms of the questions that would be asked in a performance review.

Those might be, “What have you done for me over the years?” If your answer focuses on what you did in the past, you are probably going to get laid off. What you did in the past is great, but the company is operating in the present.

What’s your grand idea for the future? Some ideal situation? Well, that’s probably going to get you shoved in a corner some place and probably eventually laid off because companies don’t know what the future is going to bring. They may have some idea of where they want to get to, but they can’t articulate a 5- to 10-year vision because the environment changes so quickly.

What have you done for me lately? That’s a favorite thing to ask in performance-review discussions. You got your paycheck because you did your job over the last six months. That’s what companies care about, and yet that’s not what Enterprise Architects should be supporting.

Instead, the EA emphasis should be what can you do for the business over the next few months? Focus on the present and the near-term future.

That’s what gets Enterprise Architects a seat at the table. That’s what gets the entire organization, and all the job functions, contributing to DT. It helps them become aligned to delivering near-term value. If you are entirely focused on delivering near-term value, you’ve achieved business agility.

Gardner: Chris, because nothing stays the same for very long, we are seeing a lot more use of cloud services. We’re seeing composability and automation. It seems like we are shifting from building to assembly. Doesn’t that fit in well with what EAs do, focusing on the assembly and the structure around automation? That’s an abstraction above putting in IT systems and configuring them.

Reuse to remain competitive 

Armstrong: It’s ironic that the profession that’s often been coming up with the concepts and thought-leadership around reuse struggles with how to internalize that within their organizations. EAs have been pretty successful at the implementation of reuse on an operating level, with code libraries, open-source, cloud, and SaaS.

There is no reason to invent a new method or framework. There are plenty of them out there. Better to figure out how to exploit those to competitive advantage and focus on understanding the business organization, strategy, culture, and vision — and deliver value in the context of those.

For example, one of the common best practices in Enterprise Architecture is to create things called reference architectures, basically patterns that represent best practices, many of which can be created from existing content. If you are doing cloud or microservices, elevate that up to different types of business models. There’s a lot of good content out there from standards organizations that give organizations a good place to start.

Learn About Agile Architecture

At The Open Group July Denver Event 

But one of the things that we’ve observed is a lot of architecture communities tend to focus on building — as you were saying — those reference architectures, and don’t focus as much on making sure the organization knows that content exists, has been used, and has made a difference.

We have a great opportunity to connect the dots among different communities that are often not working together. We can provide that architectural leadership to pull it together and deliver great results and positive behaviors.

Gardner: Chris, tell us about Sparx Services North America. What do you all do, and how you are related to and work in conjunction with The Open Group?

Armstrong: Sparx Services is focused on helping end-user organizations be successful with Enterprise Architecture and related professions such as solution architecture, solution delivery, and systems engineering. We do that by taking advantage of the frameworks and best practices that standards organizations like The Open Group create, helping make those standards real, practical, and pragmatic for end-user organizations. We provide guidance on how to adapt and tailor them and provide support while they use those frameworks for doing real work.

And we provide a feedback loop to The Open Group to help understand what kinds of questions end-user organizations are asking. We look for opportunities for improving existing standards, areas where we might want to invest in new standards, and to accelerate the use of Enterprise Architecture best practices.

Gardner: Ryan, moving onto what’s working and what’s helping foster better DT, tell us what’s working. In a practical sense, how is EA making those shorter-term business benefits happen?

One day at a time 

Schmierer: That’s a great question. We have talked about some of the challenges. It’s important to focus on the right path as well. So, what’s working that an enterprise architect can do today in order to foster DT?

Number one, embrace agile approaches and an agile mindset in both architecture development (how you do your job) and the solutions you develop for your organizations. A good way to test whether you are approaching architecture in an agile way is to look at your first iteration of the architecture. Can you go through the entire process of the Architecture Development Method (ADM) on a cocktail napkin in the time it takes you to have a drink with your boss? If so, great. It means you are focused on that first simple iteration and then able to build from there.

Number two, solve problems today with the components you have today. Don’t just look to the future. Look at what you have now and how you can create the most value possible out of those. Tomorrow the environment is going to change, and you can focus on tomorrow’s problems and tomorrow’s challenges tomorrow. So solve today’s problems today.

Third, look beyond your current DT initiative and what’s going on today, and talk to your leaders. Talk to your business clients about where they need to go in the future. That goal is enterprise business agility, which is helping the company become more nimble. DT is the first step, then start looking at steps two and three.

Fourth, Architects need to understand technology better. Fast-moving, emerging technologies — new cloud services, Internet of Things (IoT), edge computing, machine learning (ML), and artificial intelligence (AI) — are more than just buzzwords and initiatives. They are real technology advancements. They are going to have disruptive effects on your businesses and the solutions that support those businesses. You need to understand the technologies; you need to start playing with them so you can truly be a trusted advisor to your organization about how to apply those technologies in a business context.

Gardner: Chris, we hear a lot about AI and ML these days. How do you expect Enterprise Architects to help organizations leverage AI and ML to get to that DT? It seems really essential to me to become more data driven and analytics driven and then to re-purpose to reuse those analytics over and over again to attain an ongoing journey of efficiency and automation.

Better business outcomes 

Armstrong: We are now working with our partners to figure out how to best use AI and ML to help run the business, to do better product development, to gain a 360-degree view of the customer, and so forth.

It’s one of those weird things where we see the shoemaker’s children not having any shoes because they are so busy making shoes for everybody else. There is a real opportunity, when we look at some of the infrastructure that’s required to support the agile enterprise, to exploit those same technologies to help us do our jobs in enterprise architecture.

It is an emerging part of the profession. We and others are beginning to do some research on that, but when I think of how much time we and our clients have spent on the nuts and bolts collection of data and normalization of data, it sure seems like there is a real opportunity to leverage these emerging technologies for the benefit of the architecture practice. Then, again, the architects can be more focused on building relationships with people, understanding the strategy in less time, and figuring out where the data is and what the data means.

Obviously humans still need to be involved, but I think there is a great opportunity to eat your own dog food, as it were, and see if we can exploit those learning tools for the benefit of the architecture community and its consumers.

Gardner: Chris, do we have concrete examples of this at work, where EAs have elevated themselves and exposed their value for business outcomes? What’s possible when you do this right?

Armstrong: A lot of organizations are working things from the bottom up, and that often starts in IT operations and then moves to solution delivery. That’s where there has been a lot of good progress, in improved methods and techniques such as scaled agile and DevOps.

But a lot of organizations struggle to elevate it higher. The Digital Practitioner Body of Knowledge (DPBoK™) from The Open Group provides a lot of guidance to help organizations navigate that journey, particularly getting to the fourth level of the learning progression, which is at the enterprise level. That’s where Enterprise Architecture becomes essential. It’s great to develop software fast, but that’s not the whole point of agile solution delivery. It should be about building the right software the right way to meet the right kind of requirements — and doing that as rapidly as possible.

We need an umbrella over different release trains, for example, to make sure the organization as a whole is marching forward. We have been working with a number of Fortune 100 companies that have made good progress at the operational implementation levels. Yet they are now finding it particularly difficult to connect that progress up to business architecture.

There have been some great advancements from the Business Architecture Guild, and those have been influencing the TOGAF framework, helping connect the dots across those agile communities so that the learnings of a particular release train and the strategy of the enterprise are clearly understood and delivered to all of those different communities.

Gardner: Ryan, looking to the future, what should organizations be doing with the Enterprise Architect role and function?

EA evolution across environments 

Schmierer: The next steps don’t just apply to Enterprise Architects but really to all types of architects. So look at the job role and how your job role needs to evolve over the next few years. How do you need to approach it differently than you have in the past?

For example, we are seeing Enterprise Architects increasingly focus on issues like security, risk, reuse, and integration with partner ecosystems. How do you integrate with other companies and work in the broader environments?

We are seeing Business Architects who have been deeply engaged in DT discussions over the last couple of years start looking forward and shifting the role to focus on how we light up real-time decision-making capabilities. Solution Architects are shifting from building and designing components to designing assembly and designing the end systems that are often built out of third-party components instead of things that were built in-house.

Look at the job role and understand that the core need hasn’t changed. Companies need Enterprise Architects and Business Architects and Solution Architects more than ever right now to get them where they need to be. But the people serving those roles need to do that in a new way — and that’s focused on the future, what the business needs are over the next 6 to 18 months, and that’s different than what they have done in past.

Gardner: Where can organizations and individuals go to learn more about Agile Architecture as well as what The Open Group and Sparx Services are offering?

Schmierer: The Open Group has some great resources available. We have a July event in Denver focused on Agile Architecture, where they will discuss some of the latest thoughts coming out of The Open Group Architecture Forum, the Digital Practitioners Work Group, and more. It’s a great opportunity to learn about those things, network with others, and discuss how other companies are approaching these problems. I definitely point them there.

I mentioned the DPBoK™. This is a recent release from The Open Group, looking at the future of IT and the roles for architects. There’s some great, forward-looking thinking in there. I encourage folks to take a look at that, provide feedback, and get involved in that discussion.

And then Sparx Services North America, we are here to help architects be more effective and add value to their organizations, be it through tools, training, consulting, best practices, and standards. We are here to help, so feel free to reach out at our website. We are happy to talk with you and see how we might be able to help.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

For a UK borough, solving security issues leads to operational improvements and cost-savings across its IT infrastructure

The next BriefingsDirect enterprise IT productivity discussion focuses on solving tactical challenges around security to unlock strategic operational benefits in the public sector.

For a large metropolitan borough council in South Yorkshire, England, an initial move to thwarting recurring ransomware attacks ended up a catalyst to wider IT infrastructure performance, cost, operations, and management benefits.

This security innovations discussion then examines how the Barnsley Metropolitan Borough Council information and communications technology (ICT) team rapidly deployed malware protection across 3,500 physical and virtual workstations and servers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the story of how that one change in security software led to far higher levels of user satisfaction — and a heightened appreciation for the role and impact of small IT teams — is Stephen Furniss, ICT Technical Specialist for Infrastructure at Barnsley Borough Council. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stephen, tell us about the Barnsley Metropolitan Borough. You are one of 36 metropolitan boroughs in England, and you have a population of about 240,000. But tell us more about what your government agencies provide to those citizens.

Stephen Furniss


Furniss: As a Council, we provide wide-ranging services to all the citizens here, from things like refuse collection on a weekly basis; maintaining roads, potholes, all that kind of stuff, and making sure that we look after the vulnerable in society around here. There is a big raft of things that we have to deliver, and every year we are always challenged to deliver those same services, but actually with less money from central government.

So it does make our job harder, because then there is not just a squeeze across a specific department in the Council when we have these pressures, there is a squeeze across everything, including IT. And I guess one of our challenges has always been how we deliver more or the same standard of service to our end users, with less budget.

So we turn to products that provide single-pane-of-glass interfaces, to make the actual management and configuration of things a lot easier. And [we turn to] things that are more intuitive, that have automation. We try to drive toward making everything that we do easier and simpler for us as an IT service.

Gardner: So that boils down to working smarter, not harder. But you need to have the right tools and technology to do that. And you have a fairly small team, 115 or so, supporting 2,800-plus users. And you have to be responsible for all aspects of ICT — the servers, networks, storage, and, of course, security. How does being a small team impact how you approach security?

Furniss: We are even smaller than that. In IT, we have around 115 people, and that’s the whole of IT. But just in our infrastructure team, we are only 13 people. And our security team is only three or four people.

In IT, we have around 115 people, but just in infrastructure we are only 13 people. It can become a hindrance when you get overwhelmed with security incidents, yet it’s great to have  a small team to bond and come up with solutions.

It can become a hindrance when you get overwhelmed with security incidents or issues that need resolving. Yet sometimes it’s great to have that small team of people. You can bond together and come up with really good solutions to resolve your issues.

Gardner: Clearly with such a small group you have to be automation-minded to solve problems quickly or your end users will be awfully disappointed. Tell us about your security journey over the past year-and-a-half. What’s changed?

Furniss: A year-and-a-half ago, we were stuck in a different mindset. With our existing security product, every year we went through a process of saying, “Okay, we are up for renewal. Can we get the same product for a cheaper price, or the best price?”

We didn’t think about which security issues we were seeing the most, what new technologies were coming out, or whether there were any new products that could mitigate all of these issues and make our jobs — especially being a smaller team — a lot easier.

But we had a mindset change about 18 months back. We said, “You know what? We want to make our lives easier. Let’s think about what’s important to us from a security product. What issues have we been having that potentially the new products that are out there can actually mitigate and make our jobs easier, especially with us being a smaller team?”

Gardner: Were recurring ransomware attacks the straw that broke the camel’s back?

Staying a step ahead of security breaches

Furniss: We had been suffering with ransomware attacks. Every couple of years, some user would be duped into clicking on a file, email, or something that would cause chaos and mayhem across the network, infecting file-shares, and not just that individual user’s file-share, but potentially the files across 700 to 800 users all at once. Suddenly they found their files had all been encrypted.

From an IT perspective, we had to restore from the previous backups, which obviously takes time, especially when you start talking about terabytes of data.

That was certainly one of the major issues we had. And the previous security vendor would come to us and say, “All right, you have this particular version of ransomware. Here are some settings to configure and then you won’t get it again.” And that’s great for that particular variant, but it doesn’t help us when the next version or something slightly different shows up, and the security product doesn’t detect it.

That was one of our real worries and a pain that we suffered: every so often we were just going to get hit with ransomware. So we had to change our mindset to want something that’s actually going to be able to do things like machine learning (ML) and have ransomware protection built-in so that we are not in that position. We could actually get on with our day-to-day jobs and be more proactive — rather than reactive — in the environment. That was a big thing for us.

Also, we need to have a lot of certifications and accreditations, being a government authority, in order to connect back to the central government of the UK for such things as pensions. So there were a lot of security things that would get picked up. The testers would do a penetration test on our network and tell us we needed to think about changing stuff.

Gardner: It sounds like you went from a tactical approach to security to more of an enterprise-wide security mindset. So let’s go back to your thought process. You had recurring malware and ransomware issues, you had an audit problem, and you needed to do more with less. Tell us how you went from that point to get to a much better place.

Safe at home, and at work 

Furniss: As a local authority, with any large purchase, usually over 2,500 pounds (US$3,125), we have to go through a tender process. We write in our requirements, what we want from the products, and that goes on a tender website. Companies then bid for the work.

It’s a process I’m not involved in. I am purely involved in the techie side of things, the deployment, and managing and looking after the kit. That tender process is all done separately by our procurement team.

So we pushed out this tender for a new security product that we wanted, and obviously we got responses from various different companies, including Bitdefender. When we do the scoring, we work on the features and functionality required. Some 70 percent of the scoring is based on the features and functionality, with 30 percent based on the cost.

What was really interesting was that Bitdefender scored the highest on all the features and functionalities — everything that we had put down as a must-have. And when we looked at the actual costs involved — what they were going to charge us to procure their software and also provide us with deployment with their consultants — it came out at half of what we were paying for our previous product.

Bitdefender scored the highest on all the features and functionalities — everything that we had put down as must-have. And the actual costs were half of what we were paying.

So you suddenly step back and you think, “I wish that we had done this a long time ago, because we could have saved money as well as gotten a better product.”
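As a rough sketch of the 70/30 weighting described above — the vendor names and raw scores here are invented for illustration, not Barnsley’s actual tender data:

```python
# Hypothetical illustration of the tender scoring described above:
# 70 percent of the total comes from features and functionality,
# 30 percent from cost. Vendor names and raw scores are invented.

def tender_score(feature_score: float, cost_score: float,
                 feature_weight: float = 0.70, cost_weight: float = 0.30) -> float:
    """Combine normalized (0-100) feature and cost scores into a weighted total."""
    return feature_weight * feature_score + cost_weight * cost_score

bids = {
    "Vendor A": tender_score(feature_score=95, cost_score=90),
    "Vendor B": tender_score(feature_score=80, cost_score=100),
}
winner = max(bids, key=bids.get)  # highest weighted total wins the tender
```

Because features carry 70 percent of the weight, a bid that scores highest on the must-have functionality can win even against a cheaper rival, which matches the outcome Furniss describes.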

Gardner: Had you been familiar with Bitdefender?

Furniss: Yes, a couple of years ago my wife had some malware on her phone, and we started to look at what we were running on our personal devices at home. And I came up with Bitdefender as one of the best products after I had a really good look around at different options.

I went and bought a family pack, so effectively I deployed Bitdefender at home on my own personal mobile, my wife’s, my kids’, on the tablets, on the computers in the house, and what they used for doing schoolwork. And it’s been great at protecting us from anything. We have never had any issues with an infection or malware or anything like that at home.

It was quite interesting to find out, once we went through the tender process, that it was Bitdefender. I didn’t even know at that stage who was in the running. When the guys told me we are going to be deploying Bitdefender, I was thinking, “Oh, yeah, I use that at home and they are really good.”

Monday, Monday, IT’s here to stay 

Gardner: Stephen, what was the attitude of your end users around their experiences with their workstations, with performance, at that time?

Furniss: We had had big problems with end users’ service desk calls to us. Our previous security product required a weekly scan that would run on the devices. We would scan their entire hard drives every Friday around lunchtime.

You try to identify when the quiet periods are, when you can run an end-user scan on their machines, and we had come up with Friday lunchtime. In the Council we can take our lunch between noon and 2 p.m., so we would kick it off at 12 and hope it would finish before users came back and did some work on their devices.

And with the previous product — no matter what we did, trying to change dates, trying to change times — we couldn’t get anything that would work in a quick enough time frame and complete the scans rapidly. It could be running for two to three hours, taking high resources on their devices. A lot of that was down to the spec of the end-user devices not being very good. But, again, when you are constrained with budgets, you can only put so many resources into buying kit for your users.

So, we would end up with service desk calls, with people complaining, saying, “Is there any chance you can change the date and time of the scan? My device is running slow. Can I have a new device?” And so, we received a lot of complaints.

And we also noticed, usually Monday mornings, that we would also have issues. The weekend was when we did our server scans and our full backup. So we would have the two things clashing, causing issues. Monday morning, we would come in expecting those backups to have completed, but because it was trying to fight with the scanning, neither was fully completed. We worried if we were going to be able to recover back to the previous week.

Our backups ended up running longer and longer as the scans took longer. So, yes, it was a bit painful for us in the past.

Gardner: What happened next?

Smooth deployer 

Furniss: Deployment was a really, really good experience. In the past, we have had suppliers come along and provide us a deployment document, some description, and it would be their standard document, there was nothing customized. They wouldn’t speak with us to find out what’s actually deployed and how their product fit in. It was just, “We are going to deploy it like this.” And we would then have issues trying to get things working properly, and we’d have to go backward and forward with a third party to get things resolved.

In this instance, we had Bitdefender’s consultants. They came on-site to see us, and we had a really good meeting. They were asking us questions: “Can you tell us about your environment? Where are your DMZs? What applications have you got deployed? What systems are you using? What hypervisor platforms have you got?” And all of that information was taken into account in the design document that they customized completely to best fit their best practices and what we had in place.

We ended up with something we could deploy ourselves, if we wanted to. We didn’t do that. We took their consultancy as a part of the deployment process. We had the Bitdefender guys on-site for a couple of days working with us to build the proper infrastructure services to run GravityZone.

And it went really well. Nothing was missed from the design. They gave us all the ports and firewall rules needed, and it went really, really smoothly.

We initially thought we were going to have a problem with deploying out to the clients, but we worked with the consultants to come up with a way to avoid impacting our end users during the deployment.

One of our big worries was that when you deploy Bitdefender, the first thing it does is see if there is a competitive vendor’s product on the machine. If it finds that, it will remove it, and then restart the user’s device to continue the installation. Now, that was going to be a concern to us.

So we came up with a scripted solution that we pushed out through Microsoft System Center Configuration Manager. We were able to run the uninstall command for the third-party product and then trigger the Bitdefender install straightaway. The devices didn’t need rebooting, and it didn’t impact any of our end users at all. They didn’t even know there was anything happening. The only thing they would see is the little icon in the taskbar changing from the previous vendor’s icon to Bitdefender’s.

It was really smooth. We got the automation to run and push out the client to our end users, and they just didn’t know about it.
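The Council’s actual script isn’t published, but the sequence Furniss describes — silently uninstall the old agent, then immediately launch the new installer, with no reboot in between — can be sketched along these lines. The executable names and command-line switches below are hypothetical placeholders, not real vendor commands; in practice this would be packaged and pushed through Configuration Manager:

```python
# Hedged sketch of the agent-swap sequence described above. The executable
# names and switches are invented placeholders, NOT real vendor commands.
import subprocess

def build_swap_commands(old_uninstall, new_install):
    """Return the ordered steps: remove the old agent, then install the new one."""
    return [old_uninstall, new_install]

def run_swap(commands, dry_run=True):
    """Run each step in order; check=True aborts the swap if a step fails."""
    for cmd in commands:
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

commands = build_swap_commands(
    old_uninstall=["legacy_av_uninstall.exe", "/silent", "/noreboot"],  # hypothetical
    new_install=["bitdefender_agent_setup.exe", "/quiet"],              # hypothetical
)
run_swap(commands)  # dry run: only prints the planned steps
```

The ordering is the whole point: the uninstall completes without forcing a reboot, the install follows immediately, and the only visible change to the user is the taskbar icon.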

Gardner: What was the impact on the servers?

Environmental change for the better 

Furniss: Our server impact has completely changed. The full scanning that Bitdefender does, which might take 15 minutes, is far less time than the two to three hours before on some of the bigger file servers.

And then once it’s done with that full scan, we have it set up to do more frequent quick scans that take about three minutes. The resource utilization of this new scan setup has just totally changed the environment.

Because we use virtualization predominantly across our server infrastructure, we have even deployed the Bitdefender scan servers, which allow us to do separate scans on each of our virtualized server hosts. It does all of the offloading of the scanning of files and malware and that kind of stuff.

It’s a lightweight agent, it takes less memory, less footprint, and less resources. And the scan is offloaded to the scan server that we run.

The impact from a server perspective is that you no longer see spikes in CPU or memory utilization with backups. We don’t have any issues with that kind of thing anymore. It’s really great to see a vendor come up with a solution to issues that people seem to have across the board.

Gardner: Has that impacted your utilization and ability to get the most virtual machines (VMs) per CPU? How has your total costs equation been impacted?

Furniss: The fact that we are not getting all these spikes across the virtualization platform means we can squeeze in more VMs per host without an issue. It means we can get more bang for our buck, if you like.

Gardner: When you have a mixed environment — and I understand you have Nutanix hyperconverged infrastructure (HCI), Hyper-V and vSphere VMs, some Citrix XenServer, and a mix of desktops — how does managing such heterogeneity with a common security approach work? It sounds like that could be kind of a mess.

Furniss: You would think it would be a mess. But from my perspective, Bitdefender GravityZone is really good because I have this all on a single pane of glass. It hooks into Microsoft Active Directory, so it pulls back everything in there. I can see all the devices at once. It hooks into our Nutanix HCI environment. I can deploy small scan servers into the environment directly from GravityZone.

If I decide on an additional scan server, it automatically builds that scan server in the virtual environment for me, and it’s another box that we’ve got for scanning everything on the virtual servers.

Bitdefender GravityZone is really good because I have this all on a single pane of glass. I can see all the devices at once. I can deploy small scan servers into the environment directly from GravityZone.

It’s nice that it hooks into all these various things. We currently have some legacy VMware. Bitdefender lets me see what’s in that environment. We don’t use the VMware NSX platform, but it gives me visibility across an older platform even as I’m moving to get everything to the Nutanix HCI.

So it makes our jobs easier. The additional patch management module that we have in there, it’s one of the big things for us.

For example, we have always been really good at keeping our Windows updates on devices and servers up to the latest level. But we tended to have problems keeping updates ongoing for all of our third-party apps, such as Adobe Reader, Flash, and Java, across all of the devices.

You can get lost as to what is out there unless you do some kind of active scanning across your entire infrastructure, and the Bitdefender patch management allows us to see where we have different versions of apps and updates on client devices. It allows us to patch them up to the latest level and install the latest versions.

From that perspective, I am again using just one pane of glass, but I am getting so much benefit and extra features and functionality than I did previously in the many other products that we use.
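The patch-visibility idea described above — compare what each device reports against the latest known versions and flag the stragglers — can be sketched in a few lines. The device names, app names, and version strings here are invented sample data, and a simple string inequality stands in for real version comparison:

```python
# Minimal sketch of the third-party patch check described above.
# Inventory and "latest" catalog are invented sample data.

def outdated_apps(inventory, latest_versions):
    """Map device -> list of (app, installed, latest) entries needing a patch."""
    report = {}
    for device, apps in inventory.items():
        stale = [
            (app, installed, latest_versions[app])
            for app, installed in apps.items()
            if app in latest_versions and installed != latest_versions[app]
        ]
        if stale:
            report[device] = stale
    return report

inventory = {
    "desktop-01": {"adobe-reader": "21.001", "java": "8u301"},
    "desktop-02": {"adobe-reader": "23.006", "java": "8u391"},
}
latest = {"adobe-reader": "23.006", "java": "8u391"}
report = outdated_apps(inventory, latest)  # only desktop-01 needs patching
```

A real patch-management module does far more (download, staging, scheduling), but the core visibility step is this diff between the fleet’s inventory and the latest-version catalog.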

Gardner: Stephen, you mentioned a total cost of ownership (TCO) benefit when it comes to server utilization and the increased VMs. Is there another economic metric when it comes to administration? You have a small number of people. Do you see a payback in terms of this administration and integration value?

Furniss: I do. We only have 13 people on the infrastructure team, but only two or three of us actively go into the Bitdefender GravityZone platform. And on a day-to-day basis, we don’t have to do that much. If we deploy a new system, we might have to monitor and see if there is anything that’s needed as an exception if it’s some funky application.

But once our applications are deployed and our servers are up and running, we don’t have to make any real changes. We only have to look at patch levels with third parties, or to see if there are any issues on our endpoints that need our attention.

The actual amount of time we need to be in the Bitdefender console is quite reduced so it’s really useful to us.

Gardner: What’s been the result this last year that you have had Bitdefender running in terms of the main goal — which is to be free of security concerns?

Proactive infection protection 

Furniss: That’s just been the crux of it. We haven’t had any malware or ransomware attacks on our network. We have not had to spend days, weeks, or hours restoring files or anything like that — or rebuilding hundreds of machines because they have something on them. So that’s been a good thing.

Another interesting thing for us: we began looking at the Bitdefender reports from day one. And it actually found malware and viruses, going back five, six, or seven years, that were still out there in our systems.

And the weird thing is, our previous security product had never even seen this stuff. It had obviously let it through to start with. It got through all our filtering and everything, and it was sitting in somebody’s mailbox, ready to launch and infect the entire network if anyone clicked on it.

Straightaway from day one, we were detecting stuff that sat for years in people’s mailboxes. We just didn’t even know about it.

So, from that perspective, it’s been fantastic. We’ve not had any security outbreaks that we had to deal with, or anything like that.

And just recently, we had our security audit from our penetration testers. One of the things they try to do is actually put some malware on to a test device. They came back and said they had not been able to do that. They have been unable to infect any of our devices. So that’s been a really, really good thing from our perspective.

Gardner: How has that translated into the perception among your end users and your overseers, the people managing your budgets? Has there been a sense of getting more value? What’s the satisfaction quotient, if you will, from your end users?

Furniss: A really good, positive thing has been that they have not come back and said that there’s anything that we’ve lost. There are no complaints about machines being slow.

We even had one of our applications guys say that their machine was running faster than it normally does on Fridays. When we explained that we had swapped out the old version of the security product for Bitdefender, it was like, “Oh, that’s great, keep it up.”

There are no complaints about machines being slow. One of our apps guys said his machine was running faster than normal. From IT, we are really pleased.

For the people higher up, at the minute, I don’t think they appreciate what we’ve done. That will come in the next month as we start presenting our security reports to them, along with the reports from the audit showing how the testers were unable to infect an end-user device.

From our side, from IT, we are really, really pleased with it. We understand what it does and how much it’s saving us from the pains of having to restore files. We are not being seen as one of these councils or entities that’s suddenly plastered across the newspapers with its reputation tarnished because it has lost all its systems or been infected.

Gardner: Having a smoothly running organization is the payoff.

Before we close out, what about the future? Where would you like to see your security products go in terms of more intelligence, using data, and getting more of a proactive benefit?

Cloud on the horizon 

Furniss: We are doing a lot more now with virtualization. We have only about 50 physical servers left. We are also thinking about the cloud journey. So we want the security products working with all of that stuff up in the cloud. It’s going to be the next big thing for us. We want to secure that area of our environment if we start moving infrastructure servers up there.

Can we protect stuff up in the cloud as well as what we have here?

Gardner: Yes, and Stephen, you mentioned that at home you are using Bitdefender on your mobile devices. Is that also the case with your users in the council, in the governance there? Is there a bring-your-own-device benefit, or some way that you are looking to allow people to use more of their own devices in the context of work? How does that mobile edge work in the future?

Furniss: Well, I don’t know. I think mobile devices are quite costly for councils to deploy, but we have taken the approach that if you need it for work, then you get one. We currently have a project to look at deploying the mobile version of Bitdefender to our existing Android users.

Gardner: Now that you have 20/20 hindsight with using this type of security environment over the course of a year, any advice for folks in a similar situation?

Furniss: Don’t be scared of change. I think one of the things that always used to worry me was that we knew what we were doing with a particular vendor. We knew what our difficulties were. Are we going to be able to remove it from all the devices?

Don’t worry about that. If you are getting the right product, it’s going to take care of a lot of the issues that you currently have. We found that deploying the new product was relatively easy and didn’t cause any pain to our end users. It was seamless. They didn’t even know we had done it.

Some people might be thinking that they have a massive estate and it’s going to be a real headache. But with automation and a bit of thinking about how and what you are going to do, it’s fairly straightforward to deploy a new antivirus product to your end users. Don’t be afraid of change and moving into something new. Get the best use out of the new products that are out there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender


Financial stability, a critical factor for choosing a business partner, is now much easier to assess

The next BriefingsDirect digital business risk remediation discussion explores new ways companies can gain improved visibility, analytics, and predictive indicators to better assess the financial viability of partners and global supply chains.

Businesses are now heavily relying upon their trading partners across their supply chains — and no business can afford to be dependent on suppliers that pose risks due to poor financial health.

We will now examine new tools and methods that create a financial health rating system to determine the probability of bankruptcy, default, or disruption for both public and private companies — as much as 36 months in advance.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the exploding sophistication around gaining insights into supply-chain risk of a financial nature, please welcome Eric Evans, Managing Director of Business Development at RapidRatings in New York, and Kristen Jordeth, Go-to-Market Director for Supplier Management Solutions, North America at SAP Ariba. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Eric, how do the technologies and processes available now provide a step-change in managing supplier risk, particularly financial risk?

Evans: Platform-to-platform integrations enabled by application programming interfaces (APIs), which we have launched over the past few years, allow us to partner with SAP Ariba Supplier Risk. It’s become a nice way for our clients to combine actionable data with their workflow in procurement processes to better manage suppliers end to end — from sourcing to on-boarding to continuous monitoring.

Gardner: The old adage of “garbage in, garbage out” still applies to the quality and availability of the data. What’s new about access to better data, even in the private sector?

Dig deep into risk factors

Evans: We go directly to the source, the suppliers our customers work with. They introduce us to those suppliers and we get the private company financial data, right from those companies. It’s a quantitative input, and then we do a deeper “CAT scan,” if you will, on the financials, using that data together with our predictive scoring.

Gardner: Kristen, procurement and supply chain integrity trends have been maturing over the past 10 years. How are you able to focus now on more types of risk? It seems we are getting better and deeper at preventing unknown unknowns.

Jordeth: Exactly, and what we are seeing is customers managing risk from all aspects of the business. The most important thing is to bring it all together through technology.

Within our platform, we enable a Controls Framework that identifies key areas of risk that need to be addressed for a specific type of engagement. For example, do they need to pull a financial rating? Do they need to do a background check? We use the technology to manage the controls across all of the different aspects of risk in one system.

Gardner: And because many companies are reliant on real-time logistics and supplier services, any disruption can be catastrophic.

Jordeth: Absolutely. We need to make sure that the information gets to the system as quickly as it’s available, which is why the API connect to RapidRatings is extremely important to our customers. On top of that, we also have proactive incidents tracking, which complements the scores.

If you see a medium-risk business, from a financial perspective, you can look into that incident to see if they are under investigation, or if there are things going on, such as whether they might be laying off departments.

It’s fantastic to have it all in one place, with one view. You can then slice and dice the data and roll it up into scores. It’s very helpful for our customers.

Gardner: And this is a team sport, with an ecosystem of partners, because there is such industry specialization. Eric, how important is it being in an ecosystem with other specialists examining other kinds of risk?

Evans: It’s really important. We listen to our customers and prospects. It’s about the larger picture of bringing data into an end-to-end procurement and supplier risk management process.

We feel really good about being part of SAP PartnerEdge and an app extension partner to SAP Ariba. It’s exciting to see our data and the integration for clients.

Gardner: Rapid Ratings International, Inc. is the creator of the proprietary Financial Health Rating (FHR), also known as RapidRatings. What led up to the solution? Why didn’t it exist 30 years ago?

Rate the risk over time

Evans: The company was founded by someone with a background in econometrics and modeling. We have 24 industry models that drive the analysis. It’s that kind of deep, precise, and accurate modeling — plus the historical database of more than 30 years of data that we have. When you combine those, it’s much more accurate and predictive; it’s really forward-looking data.

Gardner: You provide a 0 to 100 score. Is that like a credit rating for an individual? How does that score work in being mindful of potential risk?

Evans: The FHR is a short-term score, from 0 to 100, that looks at the next 12 months with a probability of default. Then a Core Health Score, which is around 24 to 36 months out, looks at operating efficiency and other indicators of how well a company is managing the business and operationalizing.

We can identify companies that are maybe weak short-term, but look fine long-term, or vice versa. Having industry depth — and the historical data behind it — that’s what drives the go-forward assessments.

When you combine the two, or look at them individually, you can identify companies that are maybe weak short-term but look fine long-term, or vice versa. Even companies that don’t look good in the long term may carry less short-term risk because they have cash on hand. And that’s happening out in the marketplace these days with a lot of the initial public offerings (IPOs) such as Pinterest or Lyft. They have a medium-risk FHR because they have cash, but their long-term operating efficiency needs to be improved because they are not yet profitable.
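As a rough illustration of how two horizon scores might combine, here is a hypothetical sketch; the thresholds and tier names are invented for illustration and are not RapidRatings’ actual bands or methodology:

```python
# Hypothetical illustration: combining a short-term score (FHR, 0-100,
# ~12-month default horizon) with a longer-term Core Health Score
# (CHS, ~24-36 months out) into a single supplier risk tier.
# All thresholds below are made up for the example.

def risk_tier(fhr: int, chs: int) -> str:
    """Classify a supplier from two 0-100 health scores (higher = healthier)."""
    if fhr < 40 and chs < 40:
        return "high risk"    # weak now and weak long-term
    if fhr >= 40 and chs < 40:
        return "medium risk"  # cash on hand today, weak fundamentals
    if fhr < 40 and chs >= 40:
        return "watch list"   # short-term stress, sound operations
    return "low risk"

# e.g. a cash-rich but not-yet-profitable company (the IPO pattern above)
print(risk_tier(fhr=65, chs=30))  # medium risk
```

The point of the two-score split is exactly the cash-rich IPO case: a single blended number would hide that the short-term and long-term pictures disagree.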

Gardner: How are you able to determine risk going 36 months out when you’re dealing mostly with short-term data?

Evans: It’s because of the historical data and the discrete modeling underneath; that’s what makes the analysis precise for the industry each company is in. Having 24 unique industry models is very different from taking all of the companies out there and stuffing them into a plain-vanilla industry template. A software company is very different from pharmaceuticals, which is very different from manufacturing.

Having that industry depth — and the historical data behind it — is what drives the go-forward assessments.

Gardner: And this is global in nature?

Evans: Absolutely. We have gone out to more than 130 countries to get data from those sources, those suppliers. It is a global data set that we have built on a one-to-one basis for our clients.

Gardner: Kristen, how does somebody in the Ariba orbit take advantage of this? How is this consumed?

Jordeth: As with everything at SAP Ariba, we want to simplify how our customers get access to information. The PartnerEdge program works with our third parties and partners to create an API so that all our customers need to do is get a license key from RapidRatings and apply it to the system.

The infrastructure and connection are already there. Our deployment teams don’t have to do anything, just add that user license and the key within the system. So, it’s less touch, and easy to access the data.
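To make the low-touch setup concrete, here is a hypothetical sketch of what such a key-based integration implies; the endpoint, header, and field names are invented placeholders, not the real RapidRatings or SAP Ariba API:

```python
# Hypothetical sketch: the client supplies a license/API key, and the
# platform pulls a supplier's financial health rating over HTTPS.
# The URL, header, and path below are invented for illustration only.

import urllib.request

API_KEY = "example-license-key"          # key issued to the client
BASE_URL = "https://api.example.com/v1"  # placeholder endpoint

def build_fhr_request(supplier_id: str) -> urllib.request.Request:
    """Construct (but do not send) an authenticated rating lookup."""
    return urllib.request.Request(
        url=f"{BASE_URL}/suppliers/{supplier_id}/fhr",
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Accept": "application/json"},
        method="GET",
    )

req = build_fhr_request("ACME-0042")
print(req.full_url)
```

The design point being described is that the connection plumbing already exists in the platform; the customer only contributes the credential.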

Gardner: For those suppliers that want to be considered good partners with low financial risk, do they have access to this information? Can they work to boost up their scores?

To reduce risk, discuss data details 

Evans: Our clients actually own the subscription and the license, and they can share the data with their suppliers. The suppliers can also foster a dialogue with our tool, called the Financial Dialogue, and they can ask questions around areas of concern. That can be used to foster a better relationship and build transparency; the conversation doesn’t have to be a negative one, it can be a positive one.

A client may want to invest in the supplier, extend payment terms or credit, work with them on service-level agreements (SLAs), or send in people to help manage. So, it can be a good way to build a deeper relationship with that supplier and use it as a better foundation.

Gardner: Kristen, when I put myself in the position of a buyer, I need to factor in lots of other issues, such as sustainability, compliance, and availability. So how do you see the future unfolding for a holistic approach to risk mitigation, one that takes advantage not only of financial risk assessments but of the whole compendium of other risks? It’s not a simple, easy task.

Jordeth: When you look at financial data, you need to understand the whole story behind it. Why does that financial data look the way it does today? What I love about RapidRatings is they have financial scores, and it’s more about the health of the company in the future.

But in our SAP Ariba solution, we provide insights on other factors such as sustainability and information security, and on whether they are funding things such as women’s rights in Third World countries. Once you start looking at the proactive awareness of what’s going on — and all the good and the bad together — you can weigh the suppliers in a total sense.

Their financials may not be up to par, but they are not high risk because they are funding women’s rights or doing a lot of things with the youth in America. To me, that may be more important. So I might put them on a tracker to address their financials more often, but I am not going to stop doing business with them because one of my goals is sustainability. That holistic picture helps tell the true story, a story that connects to our customers, and not just the story we want them to have. So, it creates and crafts that full picture for them.

Gardner: Empirical data that can then lead to a good judgment that takes into full account all the other variables. How does this now get to the SAP Ariba installed base? When is the general availability?

Customize categories, increase confidence 

Jordeth: It’s available now. Our supplier risk module is the entryway for all of these APIs, and within that module we connect to the companies that provide financial data, compliance screening, and information on forced labor, among others. We are heavily expanding in this area for categories of risk with our partners, so it’s a fantastic approach.

Within the supplier risk module, customers have the capability to not only access the information but also create their own custom scores on that data. Because we are a technology organization, we give them the keys so an administrator can go in and alter that the way they want. It is very customizable.

It’s all in our SAP Ariba Supplier Risk solution, and we recently released the connection to RapidRatings.

Evans: Our logo is right in there, built in, under the hood, and visible. In terms of getting it enabled, there are no professional services or implementation wait times. Once the data set is built out on our end (for a new client, that happens through our implementation team), we just give the API key credentials to our client. They take it and enable it in SAP Ariba Supplier Risk and they can instantly pull up the scores. So there is no wait time and no further development needed to get at the data.

Once the data set is built on our end, we just give the API key to our client. They take it and enable it in SAP Ariba Supplier Risk and they can instantly pull up the scores. There is no wait time.

Jordeth: That helps us with security, too, because with all of the compliance concerns we have, everybody wants to ensure that any data going in and out of a system is secure. So our partner team also ensures a secure connection back and forth between their data system and our technology. That’s very important for customers.

Gardner: Are there any concrete examples? Maybe you can name them, maybe you can’t, instances where your rating system has proven auspicious? How does this work in the real world?

Evans: GE Healthcare did a joint webinar with our CEO last year, explained their program, and showed how they were able to de-risk their supply base using RapidRatings. They were able to reduce the number of companies that were unhealthy financially, and they put mitigation plans and corrective actions in place. So it was an across-the-board win-win.

Oftentimes, it’s not about the return on investment (ROI) on the platform, but the fact that companies were thwarting a disruption. An event did not happen because we were able to address it before it happened.

On the flip side, you can see how resilient companies are regardless of all the disruptions out there. They can use the financial health scores to observe the capability of a company to be resilient and bounce back from a cyber breach, a regulatory issue, or maybe a sustainability issue.

By looking at all of these risks inside SAP Ariba Supplier Risk, a buyer may want to order an FHR for a new company they hadn’t thought of, prompted by other risks they are seeing, such as operational risks. So that’s another way to tie it in.

Another interesting example involves a large international retailer. A supplier was flagged as high risk because it had just filed for bankruptcy, which alerted the buyer. The buyer had already signed a contract and had the product on the shelf, so it had to be re-sourced and a new supplier found. They mitigated the risk, but they had to take quick action to get another product, and some scrambling had to be done. Still, they avoided brand reputation damage by having done so. They hadn’t looked at that company before; it was a new supplier, and the system alerted them. So it’s not just about running a rating at the time of contract; it’s also about running it when you’re going to market.

Identify related risks 

Gardner: It also seems logical that if a company is suffering on the financial aspects of doing business, it might be an indicator that they’re not well-managed in general. It may not just be a cause, but an effect. Are there other areas, you could call them adjacencies, where risks to quality, delivery times, or logistics can be inferred from financial indicators?

Evans: It’s a really good point. What’s interesting is we took a look at some data our clients had around timeliness, quality, performance, and delivery, and overlaid it with the financial data on those suppliers. The companies that were weak financially were more than two times as likely to ship a defective product. And companies that were weak financially were more than 2.5 times as likely to ship wrong or late.

The whole just-in-time shipping or delivery value went out the window. To your point, it can be construed that companies — when they are stressed financially — may be cutting corners, with things getting a little shoddy. They may not have replaced someone. Maybe there are infrastructure investments that should have been made but weren’t. So, all of those things have a reverberating effect in other operational risk areas.
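A back-of-envelope calculation shows what those multipliers imply in practice; the baseline rates below are invented for illustration, and only the 2x and 2.5x multipliers come from the overlay described above:

```python
# Illustrative arithmetic: applying the quoted relative-risk
# multipliers to assumed baseline shipment-failure rates.
# The 4% and 6% baselines are made-up figures for the example.

base_defect_rate = 0.04  # assumed baseline: 4% of shipments defective
base_late_rate = 0.06    # assumed baseline: 6% shipped wrong or late

weak_defect_rate = base_defect_rate * 2.0  # "more than two times as likely"
weak_late_rate = base_late_rate * 2.5      # "more than 2.5 times as likely"

print(f"defective: {weak_defect_rate:.0%}, wrong/late: {weak_late_rate:.0%}")
```

Even with modest baselines, a doubling of defect and delay rates across a supplier base is the kind of shift that undermines just-in-time delivery.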

Gardner: Kristen, now that we know that more data is good, and that you have more services like at RapidRatings, how will a big platform and network like SAP Ariba be able to use machine learning (ML) and artificial intelligence (AI) to further improve risk mitigation?

Jordeth: The opportunity exists for this to not only impact the assessment of a supplier, but throughout the full source-to-pay process, because it is embedded into the full SAP Ariba suite. So, even though you’re accessing it through risk, it’s visible when you’re sourcing, when you’re contracting, when you’re paying. So that direct connect is very important.

We want our customers to have it all. So I don’t cringe at the fact that they ask for it all because they should have it all. It’s just visualizing it in a manner that makes sense and it’s clear to them.

Gardner: And specifically on your set of solutions, Eric, where do you see things going in the next couple years? How can the technology get even better? How can the risk be reduced more?

Evans: We will be innovating products so our clients can bring in more scope around their supply base, not just the critical vendors but the longer tail of the supply base, and look at scores across different segments of suppliers. That could include traversing sub-tiers, with third and fourth parties, particularly in the banking and manufacturing industries.

That, coupled with more intelligence, enhanced APIs, and data visualization, is what we are looking into, as well as additional scoring capabilities.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Using AI to solve data and IT complexity — and thereby better enable AI

The next BriefingsDirect data disruption discussion focuses on why the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence (AI) to the rescue.

Stay with us to explore how the latest AI innovations improve both data and services management across a cloud deployment continuum — and in doing so set up an even more powerful way for businesses to exploit AI.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how AI will help conquer complexity to allow for higher abstractions of benefits from across all sorts of analysis, we welcome Rebecca Lewington, Senior Manager of Innovation Marketing at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We have been talking about massive amounts of data for quite some time. What’s new about data buildup that requires us to look to AI for help?

Lewington: Partly it is the sheer amount of data. IDC’s Data Age Study predicts the global data sphere will be 175 zettabytes by 2025, which is a rather large number. That’s 175 followed by 21 zeros. But we have always been in an era of exploding data.


Yet, things are different. One, it’s not just the amount of data; it’s the number of sources the data comes from. We are adding in things like mobile devices, and we are connecting factories’ operational technologies to information technology (IT). There are more and more sources.

Also, the time we have to do something with that data is shrinking to the point where we expect everything to be real-time or you are going to make a bad decision. An autonomous car, for example, might do something bad. Or we are going to miss a market or competitive intelligence opportunity.

So it’s not just the amount of data — but what you need to do with it that is challenging.

Gardner: We are also at a time when Al and machine learning (ML) technologies have matured. We can begin to turn them toward the data issue to better exploit the data. What is new and interesting about AI and ML that make them more applicable for this data complexity issue?

Data gets smarter with AI

Lewington: A lot of the key algorithms for AI were actually invented long ago, in the 1950s, but at that time the computers were hopeless relative to what we have today, so it wasn’t possible to harness them.

For example, you can train a deep-learning neural net to recognize pictures of kittens. To do that, you need to run millions of images to train a working model you can deploy. That’s a huge, computationally intensive task that only became practical a few years ago. But now that we have hit that inflection point, things are just taking off.
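The training loop that the kitten example describes can be sketched in miniature. This stdlib-only toy trains a one-feature logistic regression on invented data; it stands in for the idea of iteratively nudging a model’s weights from labeled examples, not for the deep neural nets and millions of images a real image classifier needs:

```python
# Toy sketch of supervised training: show the model labeled examples,
# nudge its weights to reduce error, repeat. The data is invented:
# feature x in [0, 1], label 1 if x > 0.5 (a stand-in for kitten/not).

import math
import random

random.seed(0)
data = [random.random() for _ in range(200)]
labeled = [(x, 1 if x > 0.5 else 0) for x in data]

w, b = 0.0, 0.0  # model parameters, learned from data
lr = 0.5         # learning rate

for epoch in range(500):  # many passes over the training set
    for x, y in labeled:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
        grad = p - y                          # gradient of the log loss
        w -= lr * grad * x
        b -= lr * grad

accuracy = sum((1 / (1 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
               for x, y in labeled) / len(labeled)
print(f"training accuracy: {accuracy:.0%}")
```

The computational point of the passage holds even at this scale: the work is dominated by repeated passes over the data, which is why training only became practical for large models once hardware caught up.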

Gardner: We can begin to use machines to better manage data that we can then apply to machines. Does that change the definition of AI?

Lewington: The definition of AI is tricky. It’s malleable, depending on who you talk to. For some people, it’s anything that a human can do. To others, it means sophisticated techniques, like reinforcement learning and deep learning.

How to Remove Complexity From Multicloud and Hybrid IT

One useful definition is that AI is what you use when you know what the answer looks like, but not how to get there.

Traditional analytics effectively does at scale what you could do with pencil and paper. You could write the equations to decide where your data should live, depending on how quickly you need to access it.

But with AI, it’s like the kittens example. You know what the answer looks like, it’s trivial for you to look at the photograph and say, “That is a cat in the picture.” But it’s really, really difficult to write the equations to do it. But now, it’s become relatively easy to train a black box model to do that job for you.

Gardner: Now that we are able to train the black box, how can we apply that in a practical way to the business problem that we discussed at the outset? What is it about AI now that helps better manage data? What’s changed that gives us better data because we are using AI?

The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Lewington: It’s a circular thing. The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Now, there are many ways you can apply that. You can apply it to the trivial case of the cat we just talked about. You can apply it to helping a surgeon review many more MRIs, for example, by allowing him to focus on the few that are borderline, and to do the mundane stuff for him.

But, one of the other things you can do with it is use it to manipulate the data itself. So we are using AI to make the data better — to make AI better.

Gardner: Not only is it circular, and potentially highly reinforcing, but when we apply this to operations in IT — particularly complexity in hybrid cloud, multicloud, and hybrid IT — we get an additional benefit. You can make the IT systems more powerful when it comes to the application of that circular capability — of making better AI and better data management.

AI scales data upward and outward

Lewington: Oh, absolutely. I think the key word here is scale. When you think about data — and all of the places it can be, all the formats it can be in — you could do it yourself. If you want to do a particular task, you could do what has traditionally been done. You can say, “Well, I need to import the data from here to here and to spin up these clusters and install these applications.” Those are all things you could do manually, and you can do them for one-off things.

But once you get to a certain scale, you need to do them hundreds of times, thousands of times, even millions of times. And you don’t have the humans to do it. It’s ridiculous. So AI gives you a way to augment the humans you do have, to take the mundane stuff away, so they can get straight to what they want to do, which is coming up with an answer instead of spending weeks and months preparing to start to work out the answer.

Gardner: So AI directed at IT, what some people call AIOps, could be an accelerant to this circular, advantageous relationship between AI and data? And is that part of what you are doing within the innovation and research work at HPE?

Lewington: That’s true, absolutely. The mission of Hewlett Packard Labs in this space is to assist the rest of the company to create more powerful, more flexible, more secure, and more efficient computing and data architectures. And for us in Labs, this tends to be a fairly specific series of research projects that feed into the bigger picture.

For example, we are now doing the Deep Learning Cookbook, which allows customers to find out ahead of time exactly what kind of hardware and software they are going to need to get to a desired outcome. We are automating the experimenting process, if you will.

And, as we talked about earlier, there is the shift to the edge. As we make more and more decisions — and gain more insights — at the edge, where the data is created, there is a growing need to deploy AI there. That means you need a data strategy to get the data in the right place, together with the AI algorithm, at the edge. That’s because there often isn’t time to move that data into the cloud before making a decision and waiting for the required action to return.

Once you begin doing that, once you start moving from a few clouds to thousands and millions of endpoints, how do you handle multiple deployments? How do you maintain security and data integrity across all of those devices? As researchers, we aim to answer exactly those questions.

And, further out, we are looking to move the learning phase itself to the edge, to do what we call swarm learning, where devices learn from their environment and each other, using a distributed model that doesn’t use a central cloud at all.

Gardner: Rebecca, given your title is Innovation Marketing Lead, is there something about the very nature of innovation that you have come to learn personally that’s different than what you expected? How has innovation itself changed in the past several years?

Innovation takes time and space 

Lewington: I began my career as a mechanical engineer. For many years, I was offended by the term innovation process, because that’s not how innovation works. You give people the space and you give them the time, and ideas appear organically. You can’t have a process to have ideas. You can have a process to put those ideas into reality, to weed out the ones that aren’t going to succeed, and to promote the ones that work.

How to Better Understand What AI Can Do For Your Business

But the term innovation process to me is an oxymoron. And that’s the beautiful thing about Hewlett Packard Labs. It was set up to give people the space where they can work on things that just seem like a good idea when they pop up in their heads. They can work on these and figure out which ones will be of use to the broader organization — and then it’s full steam ahead.

Gardner: It seems to me that the relationship between infrastructure and AI has changed. It wasn’t that long ago when we thought of business intelligence (BI) as an application — above the infrastructure. But the way you are describing the requirements of management in an edge environment — of being able to harness complexity across multiple clouds and the edge — this is much more of a function of the capability of the infrastructure, too. Is that how you are seeing it, that only a supplier that’s deep in its infrastructure roots can solve these problems? This is not a bolt-on benefit.

Lewington: I wouldn’t say it’s impossible as a bolt-on; it’s impossible to do efficiently and securely as a bolt-on. One of the problems with AI is that it’s a black box; you don’t know how it works. There were a number of news stories recently about AIs becoming corrupted, biased, and even racist, for example. Those kinds of problems are going to become more common.

And so you need to know that your systems maintain their integrity and cannot be breached by bad actors. If you are just working on the very top layers of the software, it’s going to be very difficult to attest that the integrity of what’s underneath hasn’t been violated.

If you are someone like HPE, which has its fingers in lots of pies, either directly or through our partners, it’s easier to make a more efficient solution.


Gardner: Is it fair to say that AI should be a new core competency, for not only data scientists and IT operators, but pretty much anybody in business? It seems to me this is an essential core competency across the board.

Lewington: I think that’s true. Think of AI as another layer of tools that, as we go forward, becomes increasingly sophisticated. We will add more and more tools to our AI toolbox. And this is one set of tools that you just cannot afford not to have.

Gardner: Rebecca, it seems to me that there is virtually nothing within an enterprise that won’t be impacted in one way or another by AI.

Lewington: I think that’s true. Anywhere in our lives where there is an equation, there could be AI. There is so much data coming from so many sources. Many things are now overwhelmed by the amount of data, even if it’s just as mundane as deciding what to read in the morning or what route to take to work, let alone how to manage my enterprise IT infrastructure. All things that are rule-based can be made more powerful, more flexible, and more responsive using AI.

Gardner: Returning to the circular nature of using AI to make more data available for AI — and recognizing that the IT infrastructure is a big part of that — what are you doing in your research and development to make data services available and secure? Is there a relationship between things like HPE OneView and HPE OneSphere and AI when it comes to efficiency and security at scale?

Let the system deal with IT 

Lewington: Those tools historically have been rules-based. We know that if a storage disk gets to a certain percentage full, we need to spin up another disk — those kinds of things. But to scale flexibly, at some point that rules-based approach becomes unworkable. You want to have the system look after itself, to identify its own problems and deal with them.
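The distinction Lewington draws here can be sketched in a few lines of Python. This is a purely hypothetical illustration: the 80 percent threshold, the metric, and the detector are invented for the example and are not drawn from HPE InfoSight or HPE OneView internals. The point is only that a fixed rule needs a hand-picked threshold, while a learned approach flags whatever deviates from a system's own baseline:

```python
def rules_based_check(disk_used_pct: float) -> str:
    # Classic fixed-rule approach: act when a hand-picked threshold trips.
    if disk_used_pct >= 80.0:
        return "provision-new-disk"
    return "ok"

class RunningAnomalyDetector:
    """Flags metrics that drift far from their own learned baseline,
    instead of relying on one fixed threshold for every system."""

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.samples: list[float] = []
        self.window = window
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.samples) >= 10:  # need a little history first
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                anomalous = True
        self.samples.append(value)
        self.samples = self.samples[-self.window:]
        return anomalous

detector = RunningAnomalyDetector()
for v in [50, 51, 49, 50, 52, 50, 51, 49, 50, 51, 50]:
    detector.observe(v)          # builds a baseline around ~50
print(detector.observe(90.0))    # a sudden spike is flagged: True
```

The rules-based function breaks as soon as workloads change; the detector adapts because its baseline is learned from the data itself, which is the shift Lewington describes.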

Including AI techniques in things like HPE InfoSight, HPE Clearpath, and network user identity behavior software on the HPE Aruba side allows the AI algorithms to make those tools more powerful and more efficient.

You can think of AI here as another class of analytics tools. It’s not magic, it’s just a different and better way of doing IT analytics. The AI lets you harness more difficult datasets, more complicated datasets, and more distributed datasets.

Gardner: If I’m an IT operator in a global 2000 enterprise, and I’m using analytics to help run my IT systems, what should I be thinking about differently to begin using AI — rather than just analytics alone — to do my job better?

Lewington: If you are that person, you don’t really want to think about the AI. You don’t want the AI to intrude upon your consciousness. You just want the tools to do your job.

For example, I may have 1,000 people starting a factory in Azerbaijan, or somewhere, and I need to provision for all of that. I want to be able to put on my headset and say, “Hey, computer, set up all the stuff I need in Azerbaijan.” You don’t want to think about what’s under the hood. Our job is to make those tools invisible and powerful.

Composable, invisible, and insightful 

Gardner: That sounds a lot like composability. Is that another tangent that HPE is working on that aligns well with AI?

Lewington: It would be difficult to have AI be part of the fabric of an enterprise without composability, and without extending composability into more dimensions. It’s not just about being able to define the amount of storage and computer networking with a line of code, it’s about being able to define the amount of memory, where the data is, where the data should be, and what format the data should be in. All of those things – from the edge to cloud – need to be dimensions in composability.


You want everything to work behind the scenes for you in the best way with the quickest results, with the least energy, and in the most cost-effective way possible. That’s what we want to achieve — invisible infrastructure.

Gardner: We have been speaking at a fairly abstract level, but let’s look to some examples to illustrate what we’re getting at when we think about such composability sophistication.

Do you have any concrete examples or use cases within HPE that illustrate the business practicality of what we’ve been talking about?

Lewington: Yes, we have helped a tremendous number of customers either get started with AI in their operations or move from pilot to volume use. A couple of them stand out. One particular manufacturing company makes electronic components. They needed to improve the yields on their production lines, and they didn’t know how to attack the problem. We were able to partner with them to use such things as vision systems and photographs from their production tools to identify defects that could only be picked up by a human if they had a whole lot of humans watching everything all of the time.

This gets back to the notion of augmenting human capabilities. Their machines produce terabytes of data every day, and it just gets thrown away. They don’t know what to do with it.


We began running some research projects with them using some very sophisticated techniques, visual autoencoders, that allow you, without a labeled training set, to characterize a production line that is performing well versus one that is on the verge of moving away from the sweet spot. Those techniques can fingerprint a good line and also identify when a line goes just slightly bad, at a point where a human looking at the line would think it was working perfectly.
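The idea behind that autoencoder approach, flag anything the model cannot reconstruct well after training only on "good" examples, can be shown with a toy sketch. This is an illustration, not the actual system: a real deployment would train a deep visual autoencoder on camera images, whereas here a one-component PCA stands in as a linear "autoencoder" over made-up four-dimensional sensor readings:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Good line" training data: readings that vary along one dominant axis,
# plus a little sensor noise. The direction vector is invented for the demo.
direction = np.array([1.0, 0.5, 0.2, 0.1])
train = rng.normal(0, 1, (200, 1)) * direction + rng.normal(0, 0.05, (200, 4))

mean = train.mean(axis=0)
centered = train - mean
# The top principal component is the learned "fingerprint" of a healthy line.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
component = vt[0]

def reconstruction_error(x: np.ndarray) -> float:
    """Encode onto the learned component, decode back, measure the miss."""
    z = (x - mean) @ component           # encode: 4 numbers -> 1 number
    recon = mean + z * component         # decode: 1 number -> 4 numbers
    return float(np.linalg.norm(x - recon))

# Anything reconstructing worse than the training data is suspicious.
threshold = max(reconstruction_error(x) for x in train) * 1.1

good_sample = mean + 0.8 * direction                       # on-fingerprint
drifting_sample = mean + np.array([0.0, 2.0, -1.5, 1.0])   # off-fingerprint

print(reconstruction_error(good_sample) <= threshold)      # True
print(reconstruction_error(drifting_sample) > threshold)   # True
```

The drifting sample would look unremarkable dimension by dimension, which mirrors Lewington's point: the line can sit in ranges a human would call fine while its overall fingerprint has already moved off the sweet spot.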

This takes the idea of predictive maintenance further into what we call prescriptive maintenance, where we have a much more sophisticated view into what represents a good line and what represents a bad line. Those are a couple of examples from manufacturing that I think are relevant.

Gardner: If I am an IT strategist, a Chief Information Officer (CIO) or a Chief Technology Officer (CTO), for example, and I’m looking at what HPE is doing — perhaps at the HPE Discover conference — where should I focus my attention if I want to become better at using AI, even if it’s invisible? How can I become more capable as an organization to enable AI to become a bigger part of what we do as a company?

The new company man is AI

Lewington: For CIOs, their most important customers these days may be developers and increasingly data scientists, who are basically developers working with training models as opposed to programs and code. They don’t want to have to think about where that data is coming from and what it’s running on. They just want to be able to experiment, to put together frameworks that turn data into insights.

It’s very much like the programming world, where we’ve gradually abstracted things from bare-metal, to virtual machines, to containers, and now to the emerging paradigm of serverless in some of the walled-garden public clouds. Now, you want to do the same thing for that data scientist, in an analogous way.

Today, it’s a lot of heavy lifting, getting these things ready. It’s very difficult for a data scientist to experiment. They know what they want. They ask for it, but it takes weeks and months to set up a system so they can do that one experiment. Then they find it doesn’t work and move on to do something different. And that requires a complete re-spin of what’s under the hood.

Now, using things like software from the recent HPE BlueData acquisition, we can make all of that go away. And so the CIO’s job becomes much simpler because they can provide their customers the tools they need to get their work done without them calling up every 10 seconds and saying, “I need a cluster, I need a cluster, I need a cluster.”

That’s what a CIO should be looking for, a partner that can help them abstract complexity away, get it done at scale, and in a way that they can both afford and that takes the risk out. This is complicated, it’s daunting, and the field is changing so fast.

Gardner: So, in a nutshell, they need to look to the innovation that organizations like HPE are doing in order to then promulgate more innovation themselves within their own organization. It’s an interesting time.

Containers contend for the future 

Lewington: Yes, that’s very well put. Because it’s changing so fast they don’t just want a partner who has the stuff they need today, even if they don’t necessarily know what they need today. They want to know that the partner they are working with is working on what they are going to need five to 10 years down the line — and thinking even further out. So I think that’s one of the things that we bring to the table that others can’t.

Gardner: Can give us a hint as to what some of those innovations four or five years out might be? How should we not limit ourselves in our thinking when it comes to that relationship, that circular relationship between AI, data, and innovation?

Lewington: It was worth coming to HPE Discover in June, because we talked about some exciting new things across many different areas. The push toward increasing automation and abstraction is only going to accelerate.


For example, containers are at only about 10 percent adoption across enterprises today, because they are not the simplest thing in the world. But we are going to get to the point where using containers seems as complicated as bare metal does today, and that’s really going to help simplify the whole data pipeline story.

Beyond that, the elephant in the room for AI is that model complexity is growing incredibly fast. The compute requirements are going up, something like 10 times faster than Moore’s Law, even as Moore’s Law is slowing down.
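The arithmetic behind that claim is worth spelling out. The numbers below are illustrative assumptions, not figures from the transcript: Moore's Law is read as compute supply doubling every 24 months, and "10 times faster" is read as demand doubling every 2.4 months:

```python
# Quick arithmetic behind the "AI compute gap" claim. Both growth rates
# are assumed readings of the statement above, not measured values.
months = 24  # look two years ahead

supply_growth = 2 ** (months / 24)   # Moore's Law: doubles in 24 months
demand_growth = 2 ** (months / 2.4)  # 10x-faster exponent: doubles in 2.4 months

print(supply_growth)                  # 2.0    -> supply merely doubles
print(demand_growth)                  # 1024.0 -> demand grows ~1000x
print(demand_growth / supply_growth)  # 512.0  -> the gap after two years
```

Under those assumptions, two years of growth leaves demand roughly 500 times ahead of supply, which is why Lewington frames this as a gap that transistor scaling alone cannot close.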

We are already seeing an AI compute gap between what we can achieve and what we need to achieve — and it’s not just compute, it’s also energy. The world’s energy supply can only grow slowly, but if we need exponentially more data, exponentially more compute, and exponentially more energy, that’s just not going to be sustainable.

So we are also working on something called Emergent Computing, a super-energy-efficient architecture that moves data to wherever it needs to be — or, better yet, doesn’t move the data at all and instead brings the compute to the data. That will help us close that gap.


And that includes some very exciting new accelerator technologies: special-purpose compute engines designed specifically for certain AI algorithms. Not only are we using regular transistor-logic, we are using analog computing, and even optical computing to do some of these tasks, yet hundreds of times more efficiently and using hundreds of times less energy. This is all very exciting stuff, for a little further out in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How IT can fix the broken employee experience

The next BriefingsDirect intelligent workspaces discussion explores how businesses are looking to the latest digital technologies to transform how employees work.

There is a tremendous amount of noise, clutter, and distraction in the scattershot, multi-cloud workplace of today — and it’s creating confusion and frustration that often pollute processes and hinder innovative and impactful work. 

We’ll now examine how IT can elevate the game of sorting through apps, services, data, and delivery of simpler, more intelligent experiences that enable people — in any context — to work on relevancy and consistently operate at their informed best. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To illustrate new paths to the next generation of higher productivity work, please welcome Marco Stalder, Team Leader of Citrix Workspace Services at Bechtle AG, one of Europe’s leading IT providers, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, improving the employee experience has become a hot topic, with billions of productivity dollars at stake. Why has how workers do or don’t do their jobs well become such a prominent issue?

Minahan: The simple answer is the talent crunch. Just about everywhere you look, workforce, management, and talent acquisition have become a C-suite level, if not board level, priority.


And this really boils down to three things. Number one, demographically there is not enough talent. You have heard the numbers from McKinsey that within the next year there will be a shortage of 95 million medium- to high-skilled workers around the globe. And that’s being exacerbated by the fact that our traditional work models — where we build a big office building or a call center and try to hire people around it — are fundamentally broken.

The second key reason is a skills gap. Many companies are reengineering their business to drive digital transformation and new digital business or engagement models with their customers. But oftentimes their employee base doesn’t have the right skills and they need to work on developing them. 

The third issue exacerbating the talent crunch is the fact that if you are fortunate enough to have the talent, it’s highly likely they are disengaged at work. Gallup just did its global Future of Work Study and found that 85 percent of employees are either disengaged or highly disengaged at work. A chief reason is they don’t feel they have access to the information and the tools they need to get their jobs done effectively.

Gardner: We have dissatisfaction, we have a hard time finding people, and we have a hard time keeping the right people. What can we bring to the table to help solve that? Is there some combination of what human resources (HR) used to do and IT maybe didn’t think about doing but has to do?

Enhance the employee experience 

Minahan: The concept of employee experience is working its way into the corporate parlance. The chief reason is that you want to be able to ensure the employees have the right combination of physical space and an environment conducive with interacting and partnering with their project teams — and for getting work done. 

Digital spaces, right? That is not just access to technology, but a digital space that is simplified and curated to ensure workers get the right information and insights to do their jobs. And then, obviously, cultural considerations, such as, “Who is my manager, what’s my development career, am I continuing to move forward?”

Those three things are combining when we talk about employee experience.

Gardner: And you talked about the where, the physical environment. A lot of companies have experimented with at-home workers, remote workers, and branch offices. But many have not gotten the formula right. At the same time, we are seeing cities become very congested and very expensive. 


Do we need to give people even more choice? And if we do, how can we securely support that? 

Minahan: The traditional work models of old just aren’t working, especially in light of the talent crunch and skills gap we are seeing. The high-profile example is Amazon, right? So over the past year in the US there was a big deal over Amazon selecting their second and third headquarters. Years ago Amazon realized they couldn’t hire all the talent they needed in Seattle or Silicon Valley or Austin. Now they have 17-odd tech centers around the US, with anywhere from 400 to several thousand people at each one. So you need to go where the talent is. 

When we think about traditional work models — where we would build a call center and hire a lot of people around that call center — they are fundamentally broken. As evidence of this, we did a study recently where we surveyed 5,000 professional knowledge workers in the US. These were folks who moved to cities because they had opportunities and they got paid more. Yet 70 percent of them said that they would move out of the city if they could have more flexible work schedules and reliable connectivity. 

Gardner: It’s pretty attractive when you can get twice the house for half the money, still make city wages, and have higher productivity. It’s a tough equation to beat. 

Minahan: Yes, there is that higher productivity thing, this whole concept of mindfulness that’s working its way into the lingo. People should be hired to do a core job, not spending their days doing things like expense report approvals, performance reviews, or purchase requisitions. Yet those are a big part of everyone’s job, when they are in an office. 

You compound that with two-hour commutes, and the fact that there are a lot of distractions in the office. We often need to navigate multiple different applications just to get the information we need, or to complete a single business process — and that’s not just dealing with all the different interfaces, it’s dealing with all the different authentications, and so on. All of that noise in your day really frustrates workers. They feel they were hired to do a job based on core skills they are really passionate about – but they spend all their time doing task work. 

Gardner: I feel like I spend way too much time in email. I think everybody knows and feels that problem. Now, how do we start to solve this? What can the technology side bring to the table, and how can that start to move into the culture, the methods, and the rethinking of how work gets done?

De-clutter intelligently

Minahan: The simple answer is you need to clear way the clutter. And you need to bring intelligence to bear. We believe that artificial intelligence (AI) and machine learning (ML) play a key role. And so Citrix has delivered a digital workspace that has three primary attributes. 

First, it’s unified. Users and employees gain everything they need to be productive in one unified experience. Via single sign-on they gain access to all of their Software as a service (SaaS) apps, web apps, mobile apps, virtualized apps, and all of their content in one place. That all travels consistently with them wherever they are — across their laptop, to a tablet, to a smartphone, or even if they need to log on from a distinct terminal. 

The second component, in addition to being unified, is being secure. When things are within the workspace, we can apply contextual security policies based on who you are. We know, for example, that Dana logs in every day from a specific network, using his device. If you were to act abnormally or outside of that pattern, we could apply an additional level of authentication, or some other rules like shutting off certain functionalities such as downloading. So your applications and content are far more secure inside of the workspace than outside. 
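That contextual-security pattern, step up authentication and restrict risky functions when a login falls outside a user's normal pattern, can be sketched in a few lines. This is a hypothetical illustration of the logic Minahan describes, not Citrix Workspace's actual policy engine; the user names, networks, and policy fields are invented:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    user: str
    network: str
    device_id: str

# Baseline behavior per user, learned or configured (illustrative data).
KNOWN_PATTERNS = {
    "dana": {"networks": {"office-net", "home-net"}, "devices": {"laptop-01"}},
}

def evaluate(ctx: LoginContext) -> dict:
    """Decide which contextual policies to apply to a login attempt."""
    pattern = KNOWN_PATTERNS.get(ctx.user)
    if pattern is None:
        # Unknown user: deny outright.
        return {"allow": False, "step_up_auth": False, "allow_downloads": False}

    familiar = (ctx.network in pattern["networks"]
                and ctx.device_id in pattern["devices"])
    if familiar:
        # Normal pattern: full access, no extra friction.
        return {"allow": True, "step_up_auth": False, "allow_downloads": True}
    # Abnormal context: demand an additional authentication factor and
    # shut off risky functionality such as downloads.
    return {"allow": True, "step_up_auth": True, "allow_downloads": False}

print(evaluate(LoginContext("dana", "office-net", "laptop-01")))
print(evaluate(LoginContext("dana", "hotel-wifi", "unknown-tablet")))
```

The same login succeeds in both cases; what changes with context is the amount of friction and the functionality left enabled, which is the "contextual security policies based on who you are" idea in the transcript.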


The third component, intelligence, gets to the frustration part for the employees. Infusing ML and simplified workflows — what we call micro apps — within the workspace brings in a lot of those consumer-like experiences, such as curating your information and news streams, like Facebook. Or, like Netflix, it provides recommendations on the content you would like to see.

We can bring that into the workspace so that when you show up you get presented in a very personalized way the insights and tasks that you need, when you need them, and remove that noise from your day so you can focus on your core job. 

Gardner: Getting that triage based on context and that has a relevancy to other team processes sounds super important.

When it comes to IT, they may have been part of the problem. They have just layered on more apps. But IT is clearly going to be part of the solution, too. Who else needs to play a role here? How else can we re-architect work other than just using more technology?

To get the job done, ask employees how 

Minahan: If you are going to deliver improved employee experiences, one of the big mistakes a lot of companies make is they leave out the employee. They go off and craft the great employee experience and then present it to them. So definitely bring employees in. 

When we do research and engage with customers who prioritize on the employee experience, it’s usually a union between IT and human resources to best understand what the work is that an employee needs to get done. What’s the preferred environment? How do they want to work? With that understanding, you can ensure you are adapting the digital workspaces — and the physical workplaces — to support that.

Gardner: It certainly makes sense in theory. Let’s learn how this works in practice. 

Marco, tell us about Bechtle, what you have been doing, and why you made solving employee productivity issues a priority.

Stalder: Bechtle AG is one of Europe’s leading IT providers. We currently have about 70 systems integrators (SIs) across Germany, Switzerland, and Austria, as well as e-commerce businesses in 14 different European countries. 


We were founded in 1983 and our company headquarters is in Neckarsulm, a small town in the southern part of Germany. We currently have 10,300 employees spread across all of Europe.

As an IT company, one of our key priorities is to make IT as easy as possible for the end users. In the past, that wasn’t always the case because the priorities had been set in the wrong place. 

Gardner: And when you say the priorities were set in the wrong place, when you tried to create the right requirements and the right priorities, how did you go about that, what were the top issues you wanted to solve?

Stalder: The hard part is striking the balance between security and user experience. In the past, the priorities were focused more on the security side. Through our Corporate Workspace Project, we have tried to shift this, to give users the right kind of experience back again and let them focus on the work they have to do.

Gardner: And just to be clear, are we talking about the users that are just within your corporation or did this extend also to some of your clients and how you interact with them?

Stalder: The primary focus was our internal user base, but of course we also have contractors that externally have to access our data and our applications.

Gardner: Tim, this is yet another issue companies are dealing with: contingent workforces, contractors that come and go, and creative people that are often on another continent. We have to think about supporting that mix of workers, too.

Synchronizing the talent pool 

Minahan: Absolutely. We are seeing a major shift in how companies think of the workforce, between full-time and part-time contractors, and the like. Leading companies are looking around for pools of talent. They are asking, “How do I organize the right skills and resources I need? How do I bring them together in an environment, whether it’s physical or digital, to collaborate around a project and then dissolve them when that project is complete?”

And these new work models excite me when we talk about the workspace opportunity that technology can enable. A great example is a customer of ours, eBay, which people are familiar with. A long time ago, eBay recognized that they could not get ahead of the older call center model. They kept training people, but the turnover was too fast. So they began using the Citrix Workspace together with some of our networking technologies to go to where the employees are. 

Now they can go to the stay-at-home parent in Montana, the retiree in Florida, or the gig worker in New York. In this way, they can Uberfy the call center model by giving them, regardless of location, the applications, knowledge base, and reliable connectivity they need. So when you or I call in, it sounds like we are calling into a call center, and we get the answers we need to solve our problems.

Gardner: Marco, your largely distributed IT organization has permeable boundaries. There isn’t a hard wall between you and where your customers start and end. The Citrix Workspace helped you solve that. What were some of the other problems, and what was the outcome?

Stalder: One of the main criteria for Bechtle is agility. We have been growing constantly for the last 36 years. Bechtle started as a small company with only 100 employees, but organic and inorganic growth continues, and we are still growing quite rapidly. We just acquired another four companies at the end of last year, for example, with 400 to 500 employees. We need to on-board them quickly. 


And our teams are spread around different office locations; even my team, for example. I am based in Switzerland with four people. Another part of our group is in Germany, and I have one colleague in Budapest. Giving all of these people the correct and secure access to all of their applications and data is definitely key.

As an IT company, we also have to adapt to new technologies rapidly, probably faster than other companies, because we want to be ahead of the technology for our employees. We are selling these same solutions to our customers, along with the same experience — and a good experience.

Gardner: We often call that drinking your own champagne. Tell us about the process through which you evaluated the Citrix Workspace solution and why that’s proven so powerful.

One platform to rule them all 

Stalder: In early 2016, we began with a high-level design for a basic corporate workspace. We began with an on-premises design, like a lot of companies. Then we were introduced to something called Citrix Cloud services by our partner manager in Germany.

In January 2017, we started to think about the Citrix Cloud solution as an interesting addition to what we were already planning. And we quickly realized that the team I am leading — we are six to eight people with limited resources – could only deliver all those services out to our end users with help. The Citrix Cloud services were a perfect fit for the project we wanted to do.

There are different reasons. One is standardization: to build and use one platform to access all of our applications, data, and services. Another is flexibility. While most of our workloads are currently in our own data centers in Germany, we are also thinking about moving workloads and data out to the cloud. It doesn’t matter if it’s Microsoft Azure, Amazon Web Services (AWS), you name it.

Another benefit, of course, is scalability. As I said, we have been growing a lot and we are going to grow a lot more in the future. We really need to be able to scale out, and it doesn’t matter where the workload is going to be or where the data is going to be at the end. 

And, as an IT company, we are facing another issue. We are selling different kinds of IT products to our customers, and people tend to like to use the product they are selling to their customers. So we have to explore and use different kinds of applications for different tasks. 

For example, we use Microsoft Teams, Cisco WebEx Teams, as well as Skype for Business. We are using many other kinds of applications, too. That fits perfectly into what we have seen [from Citrix at their recent Synergy conference keynote]. It brings it all together in the Citrix Workspace using micro apps and microservices.

Another important attribute is efficiency. As I said before, with seven or eight IT support people, we cannot build very complex and large things. You have to focus on doing things very efficiently.

Another really important thing for us as we set up the workspaces project is engaging with the executive board of Bechtle. If we find that those people are not standing behind the idea and understanding what we are trying to do, then the project is definitely going to fail. 

It was not that easy, just telling those board people what we would like to do. We had to build a proof of concept system to let them see, touch, and feel it themselves. Only in this way can one really understand it.

Gardner: Of course, such solutions are a team sport. You don’t just buy this out of the box. Digital transformation doesn’t come with a pretty ribbon on it. How did you go about creating this workspace?

There is IT in team 

Stalder: It was via teamwork spread between different kinds of groups. We have been working very closely with Citrix Consulting Services in Germany, for example. We have been working together with the engineers within our business units who are selling and implementing those solutions within our customers.

And another very important part, in my opinion, was not just engaging the Citrix people, but also engaging with the application owners. It doesn’t really help if I give them a very nice virtual desktop and they are able to log on fast, but they don’t have any applications on it. Or the application doesn’t work very well. Or they have to log on again, for example, or configure it before using it. We tried to provide an end-to-end solution by engaging with all of the different people — from the front-end client, to the networking, and through to the applications’ back end.

And we have been quite successful. For example, for our main business applications, SAP or Microsoft, we have been telling the people what we want to do to get those application guys on board. They understand what it means for them. In the past we had been rolling out version updates for 70 different locations.

They were sending out emails saying, “Can you please go to the next version? Can you please update to this or that?” That, of course, requires a lot of time and is very hard to troubleshoot and configure.

But now, by standardizing those things together [as a workspace], we can deploy it once, configure it once, and it doesn’t matter who is going to use it. It has made those rollouts much easier. For example, our virtual apps and desktops have just reached about 30 percent of our employees. It’s being done on a highly standardized project basis across every business unit.

We also realized the importance of informing and guiding the people on how to use the new solutions, because it’s changing and some people react a bit slowly to change. At first they say, “I don’t want to try it. I don’t need it.” It was a learning process to see what kind of documentation and guidance the people needed.

The changes are simple things [that deliver big paybacks]. Because if the people can take a PC back home and use a VPN to connect to their company resources, they may no longer need that PC. They can simply use any device to access their work from home or from on the road. Those are very simple things, but people have to understand that they can do that now.

Gardner: As I like to say, we used to force people to conform to the apps and now we can get the apps and services to conform to what the people want and need. 

But we have talked about this in terms of the productivity of the employee. How about your IT department? How have your IT people reacted to this?

Stalder: I also needed a lot of time to convince the IT people, especially some security guys. They said, “You are going to go to Citrix Cloud? What does it mean for security?”

We have been working very closely with Citrix to explain to the security officer what kind of data goes to the cloud, how it’s stored, and how it’s processed. And that took quite a while to get approval, but in the end it went through.

The IT guys have to understand and use the solution. They sometimes think that it’s just for the end users. But IT is also an end user. They have to get on board and use the solutions. Only in this way does everyone know what the others are talking about.

Gardner: Now that you have been through this process and the workspace is in place, what have you found? What are the metrics of success? When you do it well, what do you get back?

Positive feedback 

Stalder: Unfortunately, measuring productivity is very hard to do. I don’t have any numbers on that yet. I just get feedback from employees who are talking about different things as they try the system.

And I have quite an interesting story. For example, one guy in our application consulting group was a bit skeptical. One day his notebook PC was broken so he had to use the new Citrix Workspace. He had no choice but to try it.

He wrote back some very interesting facts and figures, saying it was faster. It was faster to log on and the applications started faster. And it was easy to use. Because he does a lot of presentations and training, he realized he could start the work on one device and then switch back to another device, maybe in the meeting room or go to the training room, and just continue the work.

We also get feedback saying they can work from everywhere and can access everything they need, especially when they go out to a customer, and that they only have to remember one place to log on. They just log on once and they have all the data and all the applications they are going to need.

Gardner: Tim, when you hear about such feedback from Marco, what jumps out at you? 

Minahan: What stands out is the universal challenge we are all experiencing now. The employee experience is less than adequate in most organizations. It is impacting not only the ability to develop and retain great talent, but it’s also impacting your overall business.

What also stands out is that when technology is harnessed in a way that puts the employee first — and drives superior experience to allow them to have access to the information and the tools they need to get their jobs done — not only does employee retention go up, but you also drive better customer experiences, and better business end results.

The third thing that stands out is the recognition that traditionally we in the IT sector focused on putting security in the way of the experience. Now, if you put the employee at the center, we are beginning to attain a better balance between experience and security. It’s not an either-or equation anymore. This story at Bechtle is a great example of that in reality.

Gardner: What was interesting for me, too, was that employees get used to the way things are. You hit inertia. But when a necessity crops up, and somebody was forced to try something new, they found that there are better ways to do things.

Minahan: Right, it’s the old saw … If you only asked folks what they wanted, they would want a faster horse — and we never would have had the car.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


Architectural firm attains security and performance confidence across virtualized and distributed desktops environment

Better security over data and applications remains a foremost reason IT organizations embrace and extend the use of client virtualization. Yet performance requirements for graphics-intense applications and large files remain one of the top reasons the use of thin clients and virtualized desktops trails the deployment of full PC clients.

For a large architectural firm in Illinois, gaining better overall security, management, and data center consolidation had to go hand in hand with preserving the highest workspace performance — even across multiple distributed offices.

The next BriefingsDirect security innovations discussion examines how BLDD Architects, Inc. developed an IT protection solution that fully supports all of its servers and mix of clients in a way that’s invisible to its end users. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to share the story of how to gain the best cloud workload security, regardless of the apps and the data, is Dan Reynolds, Director of IT at BLDD Architects in Decatur, Illinois. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dan, tell us about BLDD Architects. How old is the firm? Where you are located? And what do you have running in your now-centralized data center?

Reynolds: We are actually 90 years old this year, founded in 1929. It has obviously changed names over the years, but the same core group of individuals have been involved the entire time. We used to have five offices: three in central Illinois, one in Chicago, and one in Davenport, Iowa. Two years ago, we consolidated all of the Central Illinois offices into just the Decatur office.

When we did that, part of the initiative was to allow people to work from home. Because we are virtualized, that was quite easy. Their location doesn’t matter. The desktops are still here, in the central office, but the users can be wherever they need to be.

On the back-end, we are a 100 percent Microsoft shop, except for VMware, of course. I run the desktops from a three-node Hewlett Packard Enterprise (HPE) DL380 cluster. I am using a Storage Area Network (SAN) product called the StarWind Virtual SAN, which has worked out very well. We are all VMware for the server and client virtualization, so VMware ESXi 6.5 and VMware Horizon 7.

Gardner: Please describe the breadth of architectural, design, and planning work you do and the types of clients your organization supports.

Architect the future, securely 

Reynolds: We are wholly commercial. We don’t do any residential designs, or only very, very rarely. Our biggest customers are K-12 educational facilities. We also design buildings for religious institutions, colleges, and some healthcare clinics. 

Recently we have begun designing senior living facilities. That’s an area of growth that we have pursued. Our reason for opening the office in Davenport was to begin working with more school districts in that state. 

A long time ago, I worked as a computer-aided design (CAD) draftsman. The way the architecture industry has changed since then has been amazing. They now work with clients from cradle to grave. With school districts, for example, they need help at the early funding level. We go in and help them with campaigns, to put projects on the ballot, and figure out ways to help them – from gaining money all the way to long-term planning. There are several school districts where we are their architect-of-record. We help them plan for the future. It’s amazing. It really surprises me.

Gardner: Now that we know what you do and your data center platforms, let’s learn more about your overall security posture. How do you approach security knowing that it’s not from one vendor, it’s not one product? You don’t just get security out of a box. You have to architect it. What’s your philosophy, and what do you have in place as a result?

Reynolds: I like to have a multilayered approach. I think you have to. It can’t just be antivirus, and it can’t just be firewall. You have to allow the users freedom to do what they need to do, but you also have to figure out where they are going to screw up — and try to catch that. 

And it’s always a moving target. I don’t pretend to know this perfectly at all. I use OpenDNS as a content filter. Since it’s at the DNS level, and OpenDNS is so good at whitelisting, we pick up on some of the content choices and that keeps our people from accidentally making mistakes. 

In addition, last year I moved us to Cisco Meraki Security Appliances, and their network-based malware protection. I have a site-to-site virtual private network (VPN) for our Davenport office. All of our connections are Fiber Ethernet. In Illinois, it’s all Comcast Metro E. I have another broadband provider for the Davenport office. 

And then, on top of all of that, I have Bitdefender GravityZone Enterprise Security for the endpoints that are not thin clients. And then, of course, for the VMware environment I also use GravityZone; that works perfectly with VMware NSX virtual networking on the back-end and the scanning engine that comes with that.

Gardner: Just to be clear Dan, you have a mix of clients; you have got some zero clients, fat clients, both Mac and Windows, is that right?

Diversity protects mixed clients

Reynolds: That’s correct. For some of the really high-end rendering, you need the video hardware. You just can’t do everything with virtualization, but you can knock out probably 90 to 95 percent of all that we do with it. 

And, of course, on those traditional PC machines I have to have conventional protection, and we also have laptops and Microsoft Surfaces. The marketing department has Mac OS X machines. There are just times you can’t completely do everything with a virtual machine. 

Gardner: Given such a diverse and distributed environment to protect, is it fair to say that being “paranoid about security” has paid off? 

Reynolds: I am confident, but I am not cocky. The minute you get cocky, you are setting yourself up. But I am definitely confident because I have multi-layers of protection. I build my confidence by making sure these layers overlap. It gives me a little bit of cushion so I am not constantly afraid.

And, of course, another factor many of us in the IT security world are embracing is better educating the end users. We try to make them aware, to share our paranoia with them so they understand. That is really important. 

On the flip side, I also use a product called StorageCraft and I encrypt all my backups. Like I said, I am not cocky. I am not going to put a target on my back and say, “Hit me.” 

Gardner: Designers, like architects, are often perfectionists. It’s essential for them to get apps, renderings, and larger 3D files the way they want them. They don’t want to compromise.

As an IT director, you need to make sure they have 100 percent availability — but you also have to make sure everything is secure. How have you been able to attain the combined requirements of performance and security? How did you manage to tackle both of them at the same time?

Reynolds: It was an evolving process. In my past life I had experience with VMware and I knew of virtual desktops, but I wasn’t really aware of how they would work under [performance] pressure. We did some preliminary testing using VMware ESXi on high-end workstations. At that point we weren’t even using VMware View. We were just using remote desktops. And it was amazing. It worked, and that pushed me to then look into VMware View.

Of course, when you embrace virtualization, you can’t go without security. You have to have antivirus (AV); you just have to. The way the world is now, you can’t live without protecting your users — and you can’t depend on them to protect themselves because they won’t do it.

The way that VMware had approached antivirus solutions — knowing that native agents and the old-fashioned types of antivirus solutions would impact performance — was they built it into the network. It completely insulated the user from any interaction with the antivirus software. I didn’t want anything running on the virtual desktop. It was completely invisible to them, and it worked.

Gardner: When you go to fully virtualized clients, you solve a lot of problems. You can centralize to better control your data and apps. That in itself is a big security benefit. Tell me your philosophy about security and why going virtualized was the right way to go.

Centralization controls chaos, corruption 

Reynolds: Well, you hit the nail on the head. By centralizing, I can have one image or only a few images. I know how the machines are built. I don’t have desktops out there that users customize and add all of their crap to. I can control the image. I can lock the image down. I can protect it with Bitdefender. If the image gets bad, it’s just an image. I throw it away and I replace it.

I tend to use full clones and non-persistent desktops simply for that reason. It’s so easy. If somebody begins having a problem with their machine or their Revit software gets corrupted or something else happens, I just throw away the old virtual machine (VM) and roll a new one in. It’s easy-peasy. It’s just done.

Gardner: And, of course, you have gained centralized data. You don’t have to worry about different versions out there. And if corruption happens, you don’t lose that latest version. So there’s a data persistence benefit as well.

Reynolds: Yes, very much so. That was the problem when I first arrived here. They had five different silos [one for each branch office location]. There were even different versions of the same project in different places. They were never able to bring all of the data into one place.

I saw that as the biggest challenge, and that drove me to virtualization in the first place. We were finally able to put all the data in one place and back it up in one place.

Gardner: How long have you been using Bitdefender GravityZone Enterprise Security, and why do you keep renewing? 

Reynolds: It’s been about nine years. I keep renewing because it works, and I like their support. Whenever I have a problem, or whenever I need to move — like from different versions of VMware or going to NSX and I change the actual VMware parts — the Bitdefender technology is just there, and the instructions are there, too. 

It’s all about relationships with me. I stick with people because of relationships — well, the performance as well, but that’s part of the relationship. I mean, if your friend kept letting you down, they wouldn’t be your friend anymore.

Gardner: Let’s talk about that performance. You have some really large 2-D and 3-D graphics files at work constantly. You’re using Autodesk Revit, as you mentioned, plus Bluebeam Revu, Microsoft Office, and Adobe apps, so quite a large portfolio.

These are some heavy-lifting apps. How does their performance hold up? How do you keep the virtualized delivery invisible across your physical and virtualized workstations? 

High performance keeps users happy 

Reynolds: Number one, I must keep the users happy. If the users aren’t happy and if they don’t think the performance is there, then you are not going to last long.

I have a good example, Dana. I told you I have Macs in the marketing department, and the reason they kept Macs is because they want their performance with the Adobe apps. Now, they use the Macs as thin clients and connect to a virtual desktop to do their work. It’s only when they are doing big video editing that they resume using their Macs natively. Most of the time, they are just using them as a thin client. For me, that’s a real vote of confidence that this environment works.

Gardner: Do you have a virtualization density target? How are you able to make this as efficient as possible, to get full centralized data center efficiency benefits?

Reynolds: I have some guidelines that I’ve come up with over the years. I try to limit my hosts to about 30 active VMs at a time. We are actually now at the point where I am going to have to add another node to the cluster. It’s going to be compute-only; it won’t be involved in the storage part. I want to keep the ratio of CPUs and RAM about the same. But generally speaking, we have about 30 active virtual desktops per host.
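That rule of thumb can be expressed as a back-of-the-envelope capacity check. The 30-VM ceiling comes from this conversation; everything else below is an illustrative sketch, not a vendor sizing formula:

```python
# Rough capacity check for a virtual desktop cluster, based on the
# guideline above: about 30 active VMs per host, keeping the
# CPU-to-RAM ratio roughly constant as nodes are added.

def hosts_needed(active_desktops, vms_per_host=30):
    """Minimum hosts required for a target number of active desktops."""
    return -(-active_desktops // vms_per_host)  # ceiling division

def cluster_capacity(hosts, vms_per_host=30):
    """Active desktops a cluster of this size can comfortably run."""
    return hosts * vms_per_host

print(hosts_needed(90))     # three hosts cover 90 active desktops
print(cluster_capacity(4))  # a fourth node lifts capacity to 120
```

Under this rule of thumb, crossing 90 active desktops is exactly the point at which a fourth, compute-only node becomes necessary.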

Gardner: How does Bitdefender’s approach factor into that virtualization density?

Reynolds: The way that Bitdefender does it — and I really like this — is they license by the socket. So whether I have 10 or 100 on there, it’s always by the socket. And these are HPE DL380s, so they are two sockets, even though I have 40 cores.

I like the way they license their coverage. It gives me a lot of flexibility, and it helps me plan out my environment. Now, I’m looking at adding another host, so I will have to add a couple of more cores. But that still gives me a lot of growth room because I could have 120 active desktops running and I’m not paying by the core, and I’m not paying by the individual virtual desktop. I am paying for Bitdefender by the socket, and I really like it that way.

Gardner: You don’t have to be factoring the VMs along the way as they spin up and spin down. It can be a nightmare trying to keep track of them all.

Reynolds: Yes, I am glad I don’t have to do that. As long as I have the VMware agent installed and NSX on the VMware side, then it just shows up in GravityZone, and it’s protected.

Prevent, rather than react, to problems

Gardner: Dan, we have been focusing on performance from the end-user perspective. But let’s talk about how this impacts your administration, your team, and your IT organization. 

How has your security posture, centralization, and reliance on virtualization allowed your team to be the most productive?

Reynolds: I use GravityZone’s reporting features. I have it tell me weekly the posture of my physical machines and my virtual machines. I use the GravityZone interface. I look at it quite regularly, maybe two or three times a week. I just get in and look around and see what’s going on.

I like that it keeps itself up to date or lets me know it needs to be updated. I like the way that the virus definitions get updated automatically and pushed out automatically, and that’s across all environments. I really like that. That helps me, because it’s something that I don’t have to constantly do.

I would rather watch than do. I would rather have it tell me or e-mail me than I find out from my users that their machines aren’t working properly. I like everything about it. I like the way it works. It works with me.

Gardner: It sounds like Bitdefender had people like you, a jack of all trades, in mind when it was architected, and that wasn’t always the case with security. Usually the security played catch-up to the threats rather than anticipating the needs of those in the trenches fighting the security battle.

Reynolds: Yes, very much so. At other places I have worked and with other products, that was an absolute true statement, yes.

Gardner: Let’s look at some of the metrics of success. Tell us how you measure that. I know security is measured best when there are no problems.

But in terms of people, process, and technology, how do we evaluate in terms of costs, man hours, of being proactive? How do we measure success when it comes to a good security posture for an organization like yours?

Security supports steady growth

Reynolds: I will be the first to admit I am a little weak in describing that. But I do have some metrics that work. For example, we didn’t need to replace our desktops often. We had been using our desktops for eight years, which is horrible in one sense, but in another sense, it says we didn’t have to. And then when those desktops were about as dead as dead could be, we replaced them with less expensive thin clients, which are almost disposable devices.

I envision a day when we’re using Raspberry Pi as our thin clients and we don’t spend any big money. That’s the way to sum it up. All my money is spent on maintenance for applications and platform software, and you are not going to get rid of that.

Another big payoff is around employee happiness. A little over two years ago, when we had to consolidate the offices, more people could work from home. It kept a lot of people who probably would have walked out. That happened because of the groundwork and foundation I had put in. Since that time, we have had two of the best years the company has ever had, even after that consolidation.

And so, for me, personally, that was kind of like I had something to do with that, and I can take some pride in that.

Gardner: Dan, when I hear your story, the metrics of success that I think about are that you’re able to accommodate growth, you can scale up, and if you had to – heaven forbid — you could scale down. You’re also in a future-proofing position because you’ve gone software-defined, you have centralized and consolidated, you’ve gone highly virtualized across-the-board, and you can accommodate at-home users and bring your own devices (BYOD).

Perhaps you have a merger and acquisition in the works, who knows? But you can accommodate that and that means business agility. These are some of the top business outcome metrics of success that I know companies large and small look for. So hats off to you on that.

Reynolds: Thank you very much. I hate to use the word “pride” but I’m proud of what I’ve been able to accomplish the last few years. All the work I have done in the prior years is paying off.

Gardner: One of my favorite sayings is, “Architecture is destiny.” If you do the blocking and tackling, and you think strategically — even while you are acting tactically — it will pay off in spades later.

Okay, let’s look to the future before we end. There are always new things coming out for modernizing data centers. On the hardware side, we’re hearing about hyper-converged infrastructure (HCI), for example. We’re also seeing use of automated IT ops and using artificial intelligence (AI) and machine learning (ML) to help optimize systems.

Where does your future direction lead, and how does your recent software and security posture work enable you to modernize when you want?

Future solutions, scaled to succeed 

Reynolds: Obviously, hyper-converged infrastructure is upon us and many have embraced it. I think the small- to medium-sized business (SMB) has been a little reluctant because the cost is very high for an SMB.

I think that cost of entry is going to come down. I think we are going to have a solution that offers all the benefits but is scaled down for a smaller firm. When that happens, everything I have done is going to transfer right over.

I have software-based storage. I have some software-based networking, but I would love to embrace that even more. That would be the icing on the cake and take some of the physical load off of me. The work that I have to do with switches and cabling and network adapters — if I could move that into the hyper-converged arena, I would love that.

Gardner: Also, more companies are looking to use cloud, multi-cloud, and hybrid cloud. Because you’re already highly virtualized, and because your security is optimized for that, whatever choices your company wants to make vis-à-vis cloud and Software-as-a-Service (SaaS), you’re able to support them.

Reynolds: Yes, we have a business application that manages our projects, does our time keeping, and all the accounting. It is a SaaS app. And, gosh, I was glad when it went SaaS. That was just one thing that I could get off of my plate — and I don’t mean that in a bad way. I wanted it to be handled even better by moving to SaaS where you get economy of scale that you can’t provide as an IT individual.

Gardner: Any last words of advice for organizations — particularly those wanting to recognize all the architectural and economic benefits, but might be concerned about security and performance?

Reynolds: Research, research, research — and then more research. When I started, everybody said there’s no way we could virtualize Revit and Autodesk. Of course, we did and it worked fine. I ignored them, and you have to be willing to experiment and take some chances sometimes. But by researching, testing, and moving forward gently, it’s a long road, but it’s worth it. It will pay off.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Qlik’s CTO on why the cloud data diaspora forces businesses to rethink their analytics strategies

The next BriefingsDirect business intelligence (BI) trends discussion explores the impact of dispersed data in a multicloud world. 

Gaining control over far-flung and disparate data has been a decades-old struggle, but now, as hybrid and public clouds join the mix of legacy and distributed digital architectures, new ways of thinking are demanded if comprehensive analysis of relevant data is going to become practical. 

Stay with us now as we examine the latest strategies for making the best use of data integration, data catalogs and indices, as well as highly portable data analytics platforms. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about closing the analysis gap between data and multiple — and probably changeable — cloud models, we are joined by Mike Potter, Chief Technology Officer (CTO) at Qlik. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mike, businesses are adopting cloud computing for very good reasons. The growth over the past decade has been strong and accelerating. What have been some of the — if not unintentional — complicating factors for gaining a comprehensive data analysis strategy amid this cloud computing complexity? 

Potter: The biggest thing is recognizing that it’s all about where data lives and where it’s being created. Obviously, historically most data have been generated on-premises. So, there is a strong pull there, but you are seeing more and more cases now where data is born in the cloud and spends its whole lifetime in the cloud. 

And so now the use cases are different because you have a combination of those two worlds, on-premises and cloud. To add further complexity, data is now being born in different cloud providers. Not only are you dealing with having some data and legacy systems on-premises, but you may have to reconcile that you have data in Amazon, Google, or Microsoft.

Our whole strategy around multicloud and hybrid cloud architectures is being able to deploy Qlik where the data lives. It allows you to leave the data where it is, but gives you options so that if you need to move the data, we can support the use cases on-premises to cloud or across cloud providers. 

Gardner: And you haven’t just put on the patina of cloud-first or Software-as-a-Service (SaaS)-first. You have rearchitected and repositioned a lot of what your products and technologies do. Tell us about being “SaaS-first” as a strategy.

Scaling the clouds

Potter: We began our journey about 2.5 years ago, when we started converting our monolith architecture into a microservices-based architecture. That journey struck to the core of the whole product. 

Qlik’s heritage was a Windows Server architecture. We had to rethink a lot of things. As part of that we made a big bet 1.5 years ago on containerization, using Docker and Kubernetes. And that’s really paid off for us. It has put us ahead of the technology curve in many respects. When we did our initial release of our multicloud product in June 2018, I had conversations with customers who didn’t know what Kubernetes was. 

One enterprise customer had an infrastructure team who had set up an environment to provision Kubernetes cluster environments, but we were only the second vendor that required one, so we were ahead of the game quite a bit. 

Gardner: How does using a managed container platform like Kubernetes help you in a multicloud world?

Potter: The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes. 

More importantly it allows you to manage your costs. One of the biggest advantages of a microservice-based architecture is that you can scale up and scale down to a much finer grain. For most on-premises, server-based, monolith architectures, customers have to buy infrastructure for peak levels of workload. We can scale up and scale down those workloads — basically on the fly — and give them a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it. 
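The scale-up/scale-down behavior Potter describes is what orchestration layers like Kubernetes automate. As a rough sketch, the function below mirrors the documented Horizontal Pod Autoscaler rule, desired = ceil(current * currentMetric / targetMetric); the utilization figures are hypothetical:

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=1, max_replicas=10):
    """Mirror of the Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric), clamped
    to the configured minimum and maximum replica counts."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# Peak workload: CPU at 90% against a 50% target -> scale out.
print(desired_replicas(4, 90, 50))   # 8
# Quiet period: CPU at 10% -> scale back in, cutting infrastructure spend.
print(desired_replicas(8, 10, 50))   # 2
```

This is the contrast with a monolith sized for peak load: instead of buying hardware for the worst case, individual microservices grow and shrink with demand.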

Gardner: Another aspect of the cloud evolution over the past decade is that no one enterprise is like any other. They have usually adopted cloud in different ways. 

Has Qlik’s multicloud analytics approach come with the advantage of being able to deal with any of those different topologies, enterprise by enterprise, to help them each uniquely attain more of a total data strategy?

Potter: Yes, I think so. The thing we want to focus on is, rather than dictate the cloud strategy — often the choice of our competitors — we want to support your cloud strategy as you need it. We recognize that a customer may not want to be on just one cloud provider. They don’t want to lock themselves in. And so we need to accommodate that. 

There may be very valid reasons why they are regionalized, from a data sovereignty perspective, and we want to accommodate that. There will always be on-premises requirements, and we want to accommodate that.

The reality is that, for quite a while, you are not going to see as much convergence around cloud providers as you are going to see around microservices architectures, containers, and the way they are managed and orchestrated.

You are not going to see as much convergence around cloud providers as you are going to see around microservices architectures, containers, and the way they are managed and orchestrated. 

Gardner: And there is another variable in the mix over the next years — and that’s the edge. We have an uncharted, immature environment at the edge. But already we are hearing that a private cloud at the edge is entirely feasible. Perhaps containers will be working there.

At Qlik, how are you anticipating edge computing, and how will that jibe with the multicloud approach?

Running at the edge

Potter: One of the key features of our platform architecture is not only can we run on-premises or in any cloud at scale, we can run on an edge device. We can take our core analytics engine and deploy it on a device or machine running at the edge. This enables a new opportunity, which is taking analytics itself to the edge.

A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge. Then only the analytics events are transmitted back for central processing, which obviously has a huge impact from a data-scale perspective.

But more importantly, it creates a new opportunity to have the analytic context be very immediate in the field, where the point of occurrence is. So if you are sitting there on a sensor and you are doing analytics on the sensor, not only can you benefit at the sensor, you can send the analytics data back to the central point, where it can be analyzed as well.
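The edge pattern Potter outlines — analyze raw feeds locally and transmit only analytic events upstream — can be sketched in a few lines. All names, window sizes, and thresholds here are illustrative assumptions, not part of Qlik's product:

```python
from statistics import mean

THRESHOLD = 75.0   # illustrative alert threshold
WINDOW = 5         # readings per local analysis window

def analyze_at_edge(readings):
    """Process raw sensor readings locally; emit only analytic events.

    Instead of shipping every reading to a central system, the edge
    node summarizes each window and forwards an event only when the
    summary is interesting, a large reduction in transmitted data.
    """
    events = []
    for i in range(0, len(readings), WINDOW):
        window = readings[i:i + WINDOW]
        if not window:
            continue
        avg = mean(window)
        if avg > THRESHOLD:  # only anomalies leave the edge
            events.append({"window_start": i, "avg": round(avg, 2)})
    return events

raw = [70, 71, 69, 72, 70,   # normal window: no event
       80, 82, 79, 81, 83]   # hot window: one event
print(analyze_at_edge(raw))  # -> [{'window_start': 5, 'avg': 81.0}]
```

Ten raw readings become one small event, and the same event payload can still be forwarded for central analysis, which is the dual benefit Potter describes.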

Gardner: It’s auspicious that Qlik’s approach of cataloging, indexing, and abstracting out information about where data resides can now be used really well in an edge environment.

Potter: Most definitely. Our entire data strategy is intricately linked with our architectural strategy in that respect, yes.

Gardner: Analytics and being data-driven across an organization is the way of the future. It makes sense to not cede that core competency of being good at analytics to a cloud provider or to a vendor. The people, process, and tribal knowledge about analytics seems essential.

Do you agree with that, and how does Qlik’s strategy align with keeping the core competency of analytics of, by, and for each and every enterprise?

Potter: Analytics is a specialization organizationally within all of our customers, and that’s not going to go away. What we want to do is parlay that into a broader discussion. So our focus is enabling three key strategies now.

It’s about enabling the analytics strategy, as we always have, but broadening the conversation to enabling the data strategy. More importantly, we want to close the organizational, technological, and priority gaps to foster creating an integrated data and analytics strategy.

By doing that, we can create what I describe as a raw-to-ready analytics platform based on trust, because we own the process of the data from source to analysis. That not only makes the analytics better, it promotes the third part of our strategy, which is around data literacy. That’s about creating a trusted environment in which people can interact with their data and do the analysis that they want to do without having to be data scientists or data experts.

So owning that whole end-to-end architecture is what we are striving to reach.

Gardner: As we have seen in other technology maturation trend curves, applying automation to the problem frees up the larger democratization process. More people can consume these services. How does automation work in the next few years when it comes to analytics? Are we going to start to see more artificial intelligence (AI) applied to the problem?

Automated, intelligent analytics

Potter: Automating those environments is an inevitability, not only from the standpoint of how the data is collected, but in how the data is pushed through a data operations process. More importantly, automation enables the other end, too, by embedding artificial intelligence (AI) and machine learning (ML) techniques all the way along that value chain — from the point of source to the point of consumption.

Gardner: How does AI play a role in the automation and the capability to leverage data across the entire organization?

Potter: How we perform analytics within an analytic system is going to evolve. It’s going to be more conversational in nature, and less about just consuming a dashboard and looking for an insight into a visualization.

The analytics system itself will be an active member of that process, where the conversation is not only with the analytics system but the analytics system itself can initiate the conversation by identifying insights based on context and on other feeds. Those can come from the collective intelligence of the people you work with, or even from people not involved in the process.

The analytics system itself will be an active member of that process, where the conversation is not only with the analytics system but it will initiate the conversation by identifying insights based on context and other feeds.

Gardner: I have been at some events where robotic process automation (RPA) has been a key topic. It seems to me that there is this welling opportunity to use AI with RPA, but it’s a separate track from what’s going on with BI, analytics, and the traditional data warehouse approach.

Do you see an opportunity for what’s going on with AI and use of RPA? Can what Qlik is doing with the analytics and data assimilation problem come together with RPA? Would a process be able to leverage analytic information, and vice versa?

Potter: It gets back to the idea of pushing analytics to the edge, because an edge isn’t just a device-level integration. It can be the edge of a process. It can be the edge of not only a human process, but an automated business process. The notion of being able to embed analytics deep into those processes is already being done. Process analytics is an important field.

But the newer idea is that analytics is in service of the process, as opposed to the other way around. The world is getting away from analytics being a separate activity, done by a separate group, and as a separate act. It is as commonplace as getting a text message, right?

Gardner: For the organization to get to that nirvana of total analytics as a common strategy, this needs to be part of what the IT organization is doing, with full-stack architecture and evolution. So AIOps and DataOps are also getting closer over time.

How does DataOps in your thinking relate to what the larger IT enterprise architects are doing, and why should they be thinking about data more?

Optimizing data pipelines

Potter: That’s a really good question. From my perspective, when I get a chance to talk to data teams, I ask a simple question: “You have this data lake. Is it meeting the analytic requirements of your organization?”

And often I don’t get very good answers. And a big reason why is because what motivates and prioritizes the data team is the storage and management of data, not necessarily the analytics. And often those priorities conflict with the priorities of the analytics team. 

What we are trying to do with the Qlik integrated data and analytic strategy is to create data pipelines optimized for analytics, and data operations optimized for analytics. And our investments and our acquisitions in Attunity and Podium are about taking that process and focusing on the raw-to-ready part of the data operations.

Gardner: Mike, we have been talking at a fairly abstract level, but can you share any use cases where leading-edge organizations recognize the intrinsic relationship between DataOps and enterprise architecture? Can you describe some examples or use cases where they get it, and what it gets for them?

Potter: One of our very large enterprise customers deals in medical devices and related products and services. They realized an essential need to have an integrated strategy. And one of the challenges they have, like most organizations, is how to not only overcome the technology part but also the organizational, cultural, and change-management aspects as well.

They recognized the business has a need for data, and IT has data. If you intersect that, how much of that data is actually a good fit? How much data does IT have that isn’t needed? How much of the remaining need is unfulfilled by IT? That’s the problem we need to close in on.

Gardner: Businesses need to be thinking at the C-suite level about outcomes. Are there some examples where you can tie together such strategic business outcomes back to the total data approach, to using enterprise architecture and DataOps?

Data decision-making, democratized

Potter: The biggest ones center on end-to-end governance of data for analytics, the ability to understand where the data comes from, and building trust in the data inside the organization so that decisions can be made, and those decisions have traceability back to results.

The other aspect of building such an integrated system is a total cost of ownership (TCO) opportunity, because you are no longer expending energy managing data that isn’t relevant to adding value to the organization. You can make a lot more intelligent choices about how you use data and how you actually measure the impact that the data can have.

Gardner: On the topic of data literacy, how do you see the behavior of an organization — the culture of an organization — shifting? How do we get the chicken-and-egg relationship going between the data services that provide analytics and the consumers to start a virtuous positive adoption pattern?

One of the biggest puzzles a lot of IT organizations face is around adoption and utilization. They build a data lake and they don’t know why people aren’t using it.

Potter: One of the biggest puzzles a lot of IT organizations face is around adoption and utilization. They build a data lake and they don’t know why people aren’t using it. 

For me, there are a couple of elements to the problem. One is what I call data elitism. When you think about data literacy and you compare it to literacy in the pre-industrial age, the people who had the books were the people who were rich and had power. So church and state, that kind of thing. It wasn’t until technology created, through the printing press, a democratization of literacy that you started to see interesting behavior. Those with the books, those with the power, tried to subvert reading in the general population. They made it illegal. Some argue that the French Revolution was, in part, caused by rising rates of literacy.

If you flash-forward this analogy to today in data literacy, you have the same notion of elitism. Data is only allowed to be accessed by the senior levels of the organization. It can only be controlled by IT.

Ironically, the most data-enabled organizations are typically oriented to the Millennials or younger users. But they are in the wrong part of the organizational chart to actually take advantage of that. They are not allowed to see the data they could use to do their jobs.

The opportunity from a democratization-of-data perspective is understanding the value of data for every individual and allowing that data to be made available in a trusted environment. That’s where this end-to-end process becomes so important.

Gardner: How do we make the economics of analytics an accelerant to that adoption and the democratization of data? I’ll use another historical analogy, the Model T and the assembly line. They didn’t sell Model Ts nearly to the degree they expected until they paid their own people enough to afford one.

Is there a way of looking at that and saying, “Okay, we need to create an economic environment where analytics is paid for on-demand, it’s fit-for-purpose, it’s consumption-oriented.” Wouldn’t that market effect help accelerate the adoption of analytics as a total enterprise cultural activity?

Think positive data culture

Potter: That’s a really interesting thought. The consumerization of analytics is a product of accessibility and of cost. When you build a positive data culture in an organization, data needs to be as readily accessible as email. From that perspective, turning it into a cost model might be a way to accomplish it. It’s about a combination of leadership, of just going there and making it occur at the grassroots level, where the value it presents is clear.

And, again, I reemphasize this idea of needing a positive data culture.

Gardner: Any added practical advice for organizations? We have been looking at what will be happening and what to anticipate. But what should an enterprise do now to be in an advantageous position to execute a “positive data culture”?

Potter: The simplest advice is to know that technology is not the biggest hurdle; it’s change management, culture, and leadership. When you think about the data strategy integrated with the analytics strategy, that means looking at how you are organized and prioritized around that combined strategy.

Finally, when it comes to a data literacy strategy, define how you are going to enable your organization to see data as a positive asset to doing their jobs. The leadership should understand that data translates into value and results. It’s a tool, not a weapon.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Qlik.


Happy employees equal happy customers — and fans. Can IT deliver for them all?

The next BriefingsDirect workplace productivity discussion explores how businesses are using the latest digital technologies to re-imagine the employee experience — and to transform their operations and results.

Employee experience isn’t just a buzz term. Research shows that engaged employees are happier, more productive, and deliver a superior customer experience, all of which translates into bottom line results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how, our panel will now explore how IT helps deliver a compelling experience that enables employees to work when, where, and how they want — and to perform at their best. Joining us are Adam Jones, Chief Revenue Officer, who oversees IT for the Miami Marlins Major League Baseball team and organization, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, when it comes to employee experience, Citrix has been at the forefront of the conversation and of the technology shaping it. In fact, I remember covering one of the first press conferences that Citrix had, and this is going back about 30 years, and the solutions were there for people to work remotely. It seemed crazy at the time, delivering apps over the wire, over the Internet.

But you are still innovating. You’re at it again. About a year ago, you laid out an aggressive plan to help companies power their way to even better ways to work. So, it begs the question: Tim, what’s wrong with the way people are working today and the way that employees are experiencing work today?

From daily grind to digital growth 

Minahan: That topic is top of mind both for C-level executives and board members around the globe. We are entering an era of a new talent crisis. What’s driving it is, number one, there are just too few workers. Demographically, McKinsey estimates that in the next few years we will be short by 95 million medium- to high-skilled workers around the globe.


And that’s exacerbated by our traditional work models, which tend to organize around physical hubs. I build an office building, call center, or manufacturing facility and I do my best to hire the best talent around that hub. But the talent isn’t always located there.

The second thing is, as more companies become digital businesses — trying to develop new digital business lines, engage customers through new digital business channels, develop new digital business revenue streams — oftentimes they lack the right skills. They lack skills to help drive to this next level of transformation. If companies are fortunate enough to identify employees with those skills, there is a huge likelihood that they will be disengaged at work. 

In fact, the latest Gallup study finds that globally 85 percent of workers are disengaged at work. A key component of that frustration has to do with their work environment.

We spend a lot of time talking about vision alignment and career development — and all of that is important. But a key gap that many companies are overlooking is that they have a frustrating work environment. They are not giving their employees the tools or resources they need to do their jobs effectively.

In fact, all the choice we have around our applications and our devices has actually begun to create noise that distracts us from doing our core jobs in the best way possible.

Gardner: Is this a case of people being distracted by the interfaces? Is there too much information and overload? Are we not adding enough intelligence to provide a contextual approach? All of the above? 

Minahan: It is certainly “all of the above,” Dana. First off, there are just too many applications. The typical enterprise IT manager is responsible for more than 500 applications. At the individual employee level, a typical worker uses more than a dozen applications through the course of their day, and oftentimes needs to traverse four or five different applications just to do a single business process. That could be something as simple as finding the change in a deal status, or even executing one particular transaction.

Work Isn’t Working for Your Employees.

Find Out How Technology Can Help

And that would be bad enough, except consider that oftentimes we are distracted by apps that aren’t even core to our jobs. Last time I checked, Dana, neither you nor I, nor Adam were hired to spend our day approving expense reports in something like SAP Concur, which is a great application. But it’s not core to my job. Or, we are approving performance reviews in Workday, or a purchase request in SAP Ariba. Certainly, these distract from our day. By doing so, we need to constantly navigate via new application interfaces. We need to learn new applications that aren’t even core to our jobs. 

To your point around disruption and context switching, today — because we have all of these different channels, and not just e-mail, but Slack and Microsoft Teams and all of these applications – just finding information consumes a large part of our day. We can’t remember where we stored something, or we can’t remember the change in that deal status. So we have to spend about 20 percent of our day switching between all of these different contexts, just to get the information or insight we need to do our jobs.

Gardner: Clearly too much of a good thing. And to a large degree, IT has brought about that good thing. Has IT created this problem?

Minahan: In part. But I think employees share a bit of responsibility themselves. As an employee, I know I’m always pushing IT by saying, “Hey, absolutely, this is the one tool we need to do a more effective job at marketing, strategy, or what-have-you.”

We keep adding to the top of what we already have. And IT is in a tough position of either saying, “No,” or finding a way to layer on more and more choices. And that has the unintended side effect of what we have just mentioned — which is the complexity that frustrates today’s employee experience.

Workspace unity and security 

Gardner: Now, the IT people have faced complexity issues before, and many times they have come up with solutions to mitigate the complexity. But we also have to remember that you can’t just give employees absolute freedom. There have to be guardrails, and rules, compliance, and regulatory issues must be addressed. 

So, security and digital freedom need to be in balance. How do we get to the point, Tim, where we can create that balance, and give freedom — but not so much that they are at risk?

Minahan: You’re absolutely right. At Citrix, we firmly believe this problem needs to be solved. We are making the investments, working with our customers and our partners, to go out and solve it. We believe the right way to solve it is through a digital workspace that unifies everything your employees need to be productive in one, unified experience that wrappers those applications and content, and makes them available across any device or platform, no matter where you are.

A workspace that’s just unified but not secure doesn’t fully address the needs of the enterprise. We believe the workspace should wrapper in a layer of contextual security policies that know who you are.

If you are in the office, on the corporate network using your laptop, perfect. You also need to have access to the same content and applications to do your job on the train ride home, on your smartphone, and maybe while visiting a friend. You need to be able to log on through a web interface. You want your work to travel with you, so you can work anytime, anywhere. 

But such a workspace that’s just unified — but not secure — doesn’t fully address the needs of the enterprise. The second attribute of what’s required for a true digital workspace is that it needs to be secure. When you have those applications and content within the workspace, we believe the workspace should wrapper that in a layer of contextual security policies that know who you are, what you typically have access to, and how you typically access it. The security must know if you do your work through one device or another, and then apply the right policies when there are anomalies outside of that realm. 

For example, maybe you are logging in from a different place. If so, we are going to turn off certain capabilities within your applications, such as the capability to download, print, or screen-capture. Maybe we need to require a second layer of authentication, if you are logging on from a new device. 
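The contextual policy Minahan describes can be sketched as a simple rules function. This is an illustrative sketch only; the context attributes and capability names are hypothetical, not Citrix's actual implementation:

```python
def workspace_policy(user_ctx):
    """Return capability adjustments based on login context.

    A contextual-security layer compares the current session against
    the user's typical pattern and tightens capabilities on anomalies.
    """
    # Default: full capabilities, no extra authentication.
    policy = {"download": True, "print": True,
              "screen_capture": True, "require_mfa": False}
    if not user_ctx.get("known_device", False):
        policy["require_mfa"] = True   # new device: step-up authentication
    if not user_ctx.get("usual_location", False):
        policy["download"] = False     # unusual place: lock down exports
        policy["print"] = False
        policy["screen_capture"] = False
    return policy

# Office laptop on the corporate network: full capabilities apply.
print(workspace_policy({"known_device": True, "usual_location": True}))
```

The design point is that capabilities are adjusted per session rather than granted or denied wholesale, which is how the workspace balances freedom with security.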

And so, this approach brings together the idea of employee experience and balances it with the security that the enterprise needs. 

Gardner: We are also seeing more intelligence brought into this process. We are seeing more integration end-to-end, and we are anticipating the best worker experience. But companies, of course, are looking for productivity improvements to help their bottom line and their top line.

Want Employees to Perform at Their Best?

Let Them Thrive Using

An Intelligent Workspace

Is there a way to help businesses understand the economic benefits of the best digital workspace? How do we prove that this is the right way to go?

Minahan: Dana, you hit the nail on the head. I mentioned there are three attributes required for an effective digital workspace. We talked about the first two, unifying everything an employee needs to be productive with one unified experience, and secondly securing that to ensure that applications’ content is more secure in the workspace than when native. So that organizes your workday, and that’s a phenomenal head start. 

Work smart, with intelligence 

But, to your point, we can do better by building on that foundation and injecting intelligence into the workspace. You can then begin to help employees work better. You can help employees remove that noise from their day by using things such as machine learning (ML), artificial intelligence (AI), simplified workflows, and what we call micro apps to guide an employee through their workdays. The workspace is not just someplace they go to launch an application, but it is someplace they go to get work done.

We have begun providing capabilities that literally reach into your enterprise applications and extract out the key insights and tasks that are personal to each employee. So when you log into the workspace, Dana, it would say, “Hey, Dana, it’s time for you to approve that expense report.”

You don’t need to log in to the app again. You just quickly open and review it. If you want, you can click “approve” and move on, saving yourself minutes. And you multiply that throughout the course of the day. We estimate you can give an employee back 10 to 20 percent of their workweek. So, an added day each week of improved productivity. 

But it’s not just about streamlined tasks. It’s also about improved insights, making sure you get the information you need. Maybe it’s that change in a deal status, surfaced to you so you don’t need to log in to Salesforce and check a dashboard. It’s presented to you because the workspace knows it’s of interest to you. 

To your point, this could dramatically improve the overall productivity for an employee, improve their overall experience at work, and by extension allow them to serve their customers in a much better way. They have the resources, tools, and the information at their fingertips to deliver a superior customer experience. 

The Miami Marlins have a very sophisticated approach to user experience. They look at their employees and their fan base across multiple ways of making the experience exceptional.

Gardner: We are entering an age, Tim, where we let the machines do what they do best and know the difference, so that then allows people to do what they can do best, creatively, and most productively. It’s an exciting time. 

Let’s look at a compelling use case. The Miami Marlins have a very sophisticated approach to user experience. And they are not just looking at their employees, they are looking at the end-users — their fan base across multiple different ways of entertainment and for intercepting the baseball experience. 

Baseball, in a sense, was hibernating over the winter, and now the new season has played out well in 2019. And your fans in Miami are getting treated to a world-class experience. 

Tell me, Adam, what went on behind-the-scenes that allows you in IT to make this happen? What is the secret sauce for providing such great experiences?

Marlins’ Major League IT advantage 

Jones: The Marlins is a 25-year-old franchise. We find ourselves in build mode coming into the mid-2019 season, following a change in ownership and leadership. We have elevated the standards and vision for the organization.

We are becoming a world-class sports and entertainment enterprise, and so are building a next-generation IT infrastructure to enable the 300-plus employees who operate across our lines of business and the various assets of the organization. We are very pleased to have our longtime partner, Citrix, deploy their digital workspace solutions to enable our employees to deliver against the higher standards that we have set. 

Gardner: Is it difficult to create a common technological approach for different types of user experience requirements? You have fans, scouts, and employees. There are a lot of different endpoints. How does a common technological approach work under those circumstances?

Jones: The diversity within our enterprise necessitates having tools and solutions that have a lot of agility and can be flexible across the various requirements of an organization such as ours. We are operating a very robust baseball operation — as well as a sophisticated business. We are looking to scale and engage a very diverse audience. We need to have the resources available to invest and develop talent on the baseball side. So, what we have within the Citrix environment is the capability to enable that very diverse set of activities within one environment.

Gardner: And we have become used to, in our consumer lives, having a sort of seamless segue between different things that we are doing. Are you approaching that same seamless integration when it comes to how people encounter your content across multiple channels? Is there a way for you to present yourselves in such a way that the technology takes over and allows people to feel like they are experiencing the same Miami Marlins experience regardless of how they actually intercept your organization and your sport?

Jones: Like many of our peers, we are looking to establish more robust, rounded relationships with our fans and community. And that means going beyond our home schedule to more of a 365-day relationship, with a number of touch points and a variety of content.

The mobility of our workforce to get out into the community — but not lose productivity — is incredibly important as we evolve into a more sophisticated and complex set of activities and requirements.

Gardner: Controlling your content, making sure you can make choices about who gets to see what, to protect your franchise, is important. Are you reaching a balance between offering a full experience of interesting content and technology, but at the same time protecting and securing your assets and your franchise?

Safe! at digital content distribution 

Jones: Security is our highest priority, particularly as we continue to develop more content and more intellectual property. What we have within the Citrix environment is very robust controls, with the capability to facilitate fairly broad collaboration among our workforce. So again, we are able to disseminate that content in near real-time so that we are creating impactful and timely moments with our fans.

Gardner: Tim, this sounds like a world-class use case for advanced technology. We have scale, security, omni-channel distribution, and a dynamic group of people who want to interact as much as they can. Why is the Miami Marlins such a powerful and interesting use-case from your perspective?

Minahan: The Marlins are a fantastic example of a world-champion organization now moving into the digital age. They are rethinking the fan experience, not just at the stadium but in how they engage across their digital properties and in the community. Adam and the other leadership there are looking across the board to figure out how the sport of baseball and fan experience evolve. They are exploring the linkage between the fan experience, or customer experience, and the employee experience, and they are learning about the role that technology plays in connecting the two.

They are a great example of a customer at the forefront of looking at these new digital channels and how they impact customer relationships — and how they impact values for employees as well.

Gardner: Tim, we have heard over the past decade about how data and information are so integral to making a baseball team successful. It’s a data-driven enterprise as much as any. How will the intelligence you are baking into more of the Citrix products help make the Miami Marlins baseball team a more intelligent organization? What are the tools behind the intelligent baseball future?

Minahan: A lot of the same intelligence capabilities we are incorporating into the workspace for our customers — around ML, AI, and micro apps — will ensure that the Marlins organization — everyone from the front office to the field manager — has the right insights and tasks presented to them at the right time. As a result, they can deliver the best experience, whether that is recruiting the best talent for the team or delivering the best experience for the fans. 

We are going to learn a lot, as we always have from our customers, from the Miami Marlins about how we can continue to adapt that to help them deliver that superior employee experience and, hence, the superior fan experience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix

You may also be interested in:


How enterprises like McKesson digitize procurement and automate spend management to slash waste

The next BriefingsDirect intelligent enterprise innovations discussion explores new ways that leading enterprises like McKesson Corp. are digitizing procurement and automating spend management.

We’ll now examine how new intelligence technologies and automation methods like robotic process automation (RPA) help global companies reduce inefficiencies, make employees happier, cut manual tasks, and streamline the entire source-to-pay process.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the role and impact of automation in business-to-business (B2B) finance, please welcome Michael Tokarz, Senior Director of Source to Pay Processes and Systems at McKesson, in Alpharetta, Georgia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s never been a better time to bring efficiency and intelligence to end-to-end, source-to-pay processes. What is it about the latest technologies and processes that provides a step-change improvement?

Tokarz: Our internal customers are asking us to move faster and engage deeper in our supplier conversations. By procuring intelligently, we are able to shift where resources are allocated so that we can better support our internal customers.

Gardner: Is there a sense of urgency here? If you don’t do this, and others do, is there a competitive disadvantage? 

Tokarz: There’s a strategic advantage to first-movers. It allows you to set the standard within an industry and provide greater feedback and value to your internal customers.

Gardner: There are some major trends driving this. As far as new automation and the use of artificial intelligence (AI), why are they so important?

The AI advantage 


Tokarz: AI is important for a couple of reasons. Number one, we want to process transactions as cost-effectively as we possibly can. Leveraging a “bot” to do that, versus a human, is strategically advantageous to us. It allows us to write protocols that process automatically without any human touch, which, in turn is extremely valuable to the organization.

AI also allows workers to change their value-quotient within the organization. You can go from someone doing manual processes to working at a much higher level for the organization. They now work on things that are change-driven and that bring much more value, which is really important to the organization.

Gardner: What do you mean by bots? Is that the same as robotic process automation (RPA), or are they overlapping? What’s the relationship?

Tokarz: I consider them the same technology, RPA and bots. It’s essentially a computer algorithm that’s written to help process transactions that meet a certain set of circumstances.
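That idea — an algorithm that processes transactions meeting a defined set of circumstances and routes everything else to a person — can be sketched in a few lines. The rules, field names, and thresholds below are hypothetical illustrations, not McKesson’s actual logic:

```python
# Hypothetical sketch of an RPA-style rule: transactions that meet a
# defined set of circumstances are processed automatically ("touchless");
# the rest fall back to a human review queue.

def route_transaction(txn, catalog_items, auto_limit=5000):
    """Return 'auto' if the bot can process the transaction untouched."""
    is_catalog = txn["item_id"] in catalog_items      # known catalog item
    within_limit = txn["amount"] <= auto_limit        # below approval threshold
    po_matches = txn["amount"] == txn["po_amount"]    # invoice matches the PO
    if is_catalog and within_limit and po_matches:
        return "auto"
    return "human_review"

catalog = {"SKU-100", "SKU-200"}
print(route_transaction({"item_id": "SKU-100", "amount": 1200, "po_amount": 1200}, catalog))  # auto
print(route_transaction({"item_id": "SKU-999", "amount": 1200, "po_amount": 1200}, catalog))  # human_review
```

In practice the "bot" is usually a commercial RPA tool driving existing systems rather than hand-written code, but the rule structure is the same.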

Gardner: E-sourcing technology is also a big trend and an enabler these days. Why is it important to you, particularly across your supplier base?

Tokarz: E-sourcing helps us drive conversations internally in the organization. It forces the businesses to pause. Everyone’s always in a hurry, and when they’re in a hurry they want to get something published for the organization and out on the street. Having the e-sourcing tool forces people to think about what they really need from the marketplace and to structure it in a format so that they can actually go faster.

With e-sourcing, while you have to do a little bit of work on the front end, you gain speed on the back end because you have everything from all of the suppliers aligned in one central place, so you can easily compare and make solid business decisions.

Gardner: Another important thing for large organizations like McKesson is the ability to extend and scale globally. Rather than region-by-region there is standardization. Why is that important?

Tokarz: First and foremost, getting to one technology across the board allows us to have a global standard. And what does a global standard mean? It doesn’t mean that we’re going to do the same things the same way in every country. But it gives us a common platform to build our processes on.

It gives us a way to unify our organization so that we can have more informed conversations within the organization. It becomes really important when you begin managing global relationships with large suppliers.

Gardner: Tell us about McKesson and your role within vendor operations and management.

Tokarz: McKesson is a global provider of healthcare solutions — from pharmaceuticals to medical supplies to services. We’re mainly in the United States, Canada, and Europe.

I’m responsible for indirect sourcing here in the United States, but I also oversee the global implementations of solutions in Ireland, Europe, and Canada in the near future. Currently in the United States, we process about $1.6 billion in direct transactions. That’s more than 60,000 transactions on our SAP Ariba system. We also leverage other vendor management solutions to help us process our services transactions.

Gardner: A lot of people like you are interested in becoming touchless – of leveraging automation, streamlining processes, and using data to apply analytics and create virtuous adoption cycles. How might others benefit from your example of using bots and why that works well for you?

Bots increase business 

Tokarz: The first thing we did was leverage SAP Ariba Guided Buying. We also then reformatted our internal website to put Guided Buying forefront for all of our end users. We actually tag it for novice users because Guided Buying works much like a tablet interface. It gives you smart icons that you can tap to begin and make decisions for your organization. It now drives purchasing behavior. 

The next thing we did is push as much buying through catalogs and indirect spend that we possibly could. We’ve implemented enough catalogs in the United States that we now have 80 percent of our transactions fully automated through catalogs. It provides people really nice visual cues and point-and-click accessibility. Some of my end users tell me they can find what they need within three minutes, and then they can go about their day, which is really powerful. Instead of focusing on buying or purchasing, it allows them to do their jobs, their specialty, which brings more value to the organization.

We use the RPA and bot technology to take the entire organization to the next level. We’re always striving to get to 90 percent touchless transactions. If we are at 80 percent, that means an additional 50 percent reduction in the touch transactions that we’re currently processing, which is very significant.

The last thing we’ve done is use the RPA and bot technology to take the entire organization to the next level. We’re always striving to get to 90 percent touchless transactions. If we are at 80 percent, that means an additional 50 percent reduction in the touch transactions that we’re currently processing, which is very significant. 

That has allowed me to refocus some of my efforts with my business process outsourcing (BPO) providers where they’re not having to touch the transactions. I can have them instead focus on acquisitions, integrations, and doing different work that might have been at a cost increase. This all saves me money from an operations standpoint.

Gardner: And we all know how important user experience is — and also adoption. Sometimes you can bring a horse to water and they don’t necessarily drink.

So it seems to me that there is a double-benefit here. If you have a good interface like Guided Buying, using that as a front end, that can improve user satisfaction and therefore adoption. But by also using bots and automation, you are taking away the rote, manual processes and thereby making life more exciting. Tell us about any cultural and human capital management benefits.

Smarts, speed, and singular focus 

Tokarz: It allows my procurement team to focus differently. Before, they were focused on the transactions in the queue and how fast to get them processed, all to keep the internal customers happy. Now a bot looks at the queue and processes it three times a day, so we don’t have to worry about those transactions anymore. The team only watches the bot to make sure it isn’t kicking out any errors.

From an acquisition integration standpoint, when I need to add suppliers to the network I don’t have to go for a change request to my management team and request more money. I can operate within the original budget with my BPO providers. If there are another 300 suppliers that I need added to the network, for example, I can process them more effectively and efficiently.

Gardner: What have been some challenges with establishing the e-sourcing technology? What have you had to overcome to make e-sourcing more prevalent and to get as digital as possible?

Tokarz: Anytime I begin working on a project, I focus not only on the technology component, but also the process, organization, and policy components. I try to focus on all of them.

So first, we hired someone to manage e-sourcing via an e-sourcing administrator role. That becomes really important: we have a single point of contact. Everyone knows where to go within the organization to make things happen as people learn the technology, and what the technology is actually capable of. Instead of having to train 50 people, I have one expert who can help guide them through the process.

From a policy standpoint, we’ve also dictated that people are supposed to be leveraging the technology. We all know that not all policies are adhered to, but it sets the right framework for discussion internally. We can now go to a category manager and access the right technology to do the jobs better, faster, cheaper.

As a result, you have a more intriguing job versus doing administrative work, which ultimately leads to more value to the organization. They’re acting more as a business consultant to our internal customers to drive value — not just about price but on how to create value using innovations, new technology, and new solutions in the marketplace.

To me, it’s not just about the technology — it’s about developing the ecosystem of the organization.

Gardner: Is there anything about Guided Buying and the added intelligence that helps with e-sourcing – of getting the right information to the right person in the right format at the right time?

Seamless satisfaction for employees

Tokarz: The beautiful thing about Guided Buying is that it’s seamless. It’s interesting — people see Guided Buying and don’t realize they are using SAP Ariba. It’s basically a looking glass into the SAP Ariba architecture behind the scenes.

That helps with transparency for them to understand what they are buying and get to it as quickly as possible. It allows them to process a transaction via a really nice, simple checkout screen. Everyone knows what it costs, and it just routes seamlessly across the organization.

Gardner: So what do you get when you do e-sourcing right? Are there any metrics or impacts that you can point to such as savings, efficiencies, employee satisfaction?

The biggest impact is employee satisfaction. Instead of having a category manager working in Microsoft Outlook, sending e-mails to 30 different suppliers on a particular event, they have a simple dashboard where they can combine all of the answers and push all of that information out seamlessly across all the participants.

Tokarz: The biggest impact is employee satisfaction. Instead of having a category manager working in Microsoft Outlook, sending e-mails to 30 different suppliers on a particular event, they have a simple dashboard where they can combine all of the answers, or questions, and develop all of the answers and push all of that information out seamlessly across all the participants. Instead of working administratively, they’re working strategically with internal customers. They are asking the hard questions about how to solve business problems at hand and creating value for the organization.

Gardner: Let’s dig deeper into the need for extensibility for globalization. To me this includes seeking a balance between the best of centralized and the best of distributed. You can take advantage of regional particulars, but also leverage and exploit the repeatability and standard methods of centralization.

What have you been doing in procurement using SAP Ariba that helps get to that balance?

Global insights grow success 

Tokarz: We’re in the process of rolling out SAP Ariba globally. We have different regions, and they all have different requirements. What we’ve learned is that our EMEA region wants to do some things differently than we were doing them. It forces us to answer the question, “Why were we doing things the way we were doing them, and should we be changing? Are their insights valuable?”

We learned that their insights are valuable, whether it be the partners that they are working with, from an integration standpoint, or the people on the ground. They have valuable insights. We’re beginning to work with our Canadian colleagues as well, and they’ve done a tremendous amount of work around change management. We want to capitalize on that, and we want to leverage it. We want to learn so that we can be better here in the United States at how we implement our systems.

Gardner: Let’s look to the future. What would you like to see improved, not only in terms of the technology but the way the procurement is going? Do you see more AI, ML, and bots progressing in terms of their contribution to your success?

Tokarz: The bots’ technology is really interesting, and I think it’s going to change pretty dramatically the way we work. It’s going to take a lot of the manual work that we do in processing transactions and it’s going to alleviate that.

And it’s not just about the transactions. You can leverage the bot technology or RPA technology to do manual work and then just have people do the audit. You’re eliminating three to five hours’ worth of work so that the workers can go focus their time on higher value-add.

For my organization, I’d like us to extend the utilization of the solutions that we currently own. I think we can do a better job of rolling out the technology broadly across the organization and leverage key features to make our business more powerful.

Gardner: We have been hearing quite a bit from SAP Ariba and SAP at-large about integrating more business applications and data sets to find process efficiencies across different types of spend and getting a better view of total spend. Does that fit into your future vision? 

Tokarz: Yes, it does. Data is really important. It’s a huge initiative at McKesson. We have teams that are specifically focused on data and integrating the data so that we can have meaningful information to make more broad decisions. They can be made not by, “Hey, I think I have the right knowledge.” Instead insights are based on the concrete details that guide you to making smart business decisions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:


CEO Henshall on Citrix’s 30-year journey to make workers productive, IT stronger, and partners more capable

The next BriefingsDirect intelligent workspaces discussion explores how for 30 years Citrix has pioneered ways to make workers more productive, IT operators stronger, and a vast partner ecosystem more capable.

We will now hear how Citrix is by no means resting on its laurels by charting a new future of work that abstracts productivity above apps, platforms, data, and even clouds. The goal: To empower, energize, and enlighten disaffected workers while simplifying and securing anywhere work across any deployment model.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To hear more about Citrix’s evolution and ambitious next innovations, please welcome David Henshall, President and CEO of Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: To me Citrix is unique in that for 30 years it has been consistently disruptive, driven by vision, and willing to take on both technology and culture — which are truly difficult things to do. And you have done it over and over again.

As Citrix was enabling multiuser remote access — or cloud before there was even a word for it — you knew that changing technology for delivering apps necessitated change in how users do their jobs. What’s different now, 30 years later? How has your vision of work further changed from delivery of apps?

Do your best work

Henshall: I think you said it well. For 30 years, we have focused on connecting people and information on-demand. That has allowed us to let people be productive on their terms. The fundamental challenge of people is to have access to the tools and resources necessary to get their jobs done — or as we describe it, to do their best work.

We look at that as an absolute necessity. It’s one of the things that makes people feel empowered, feel accomplished, and it allows them to drive better productivity and output. It allows engagement at the highest levels possible. All of these have been great contributing factors.

What’s changed? The technology landscape continues to evolve as applications have evolved over the years – and so have we. You referred to the fact that we’ve reinvented ourselves many times in the last three decades. All great companies go through the same regeneration against a common idea, over-and-over again. We are now in what I would describe as the cloud-mobile era, which has created unprecedented flexibility from the way people used to manage IT. Everything from new software-as-a-service (SaaS) services are being consumed with much less effort, all the way to distributed edge services that allow us to compute in new ways that we’ve never imagined.

And then, of course, on the device side, the choices are frankly nearly infinite. Being able to support the device of your choice is a critical part of what we do — and we believe that matters.

Gardner: I was fortunate enough to attend a press conference back in 1995 when Citrix WinFrame, as it was called at that time, was delivered. The late Citrix cofounder Ed Iacobucci was leading the press conference. And to me, looking back, that set the stage for things like desktop as a service (DaaS), virtual desktop infrastructure (VDI), multi-tenancy, and later SaaS. We all think of these as major mainstream technologies.

Do you feel that what you’re announcing about the future of work, and of inserting intelligence in context to what people do at work, will similarly set off a new era in technology? Are we repeating the past in terms of the scale and magnitude of what you are biting off?

Future productivity goes beyond products 

Henshall: The interesting thing about the future is that it keeps changing. Against that backdrop we are rethinking the way people work. It’s the same general idea about just giving people the tools to be productive on their terms.


A few years back that was about location, of being able to work outside of a traditional office. Today more than half the people do not work in a typical corporate headquarters environment. People are more distributed than ever before.

The challenge we are now trying to solve takes it another step forward. We think about it from a productivity standpoint and an engagement standpoint. The downside of technology is that it does make everything possible, so the level of complexity has gone up dramatically. The level of interruptions — and what we call context shifting — has gone up dramatically. And so, we are looking for ways to help simplify, automate common workflows, and modernize the way people engage with applications. All of these point toward the same common outcome of, “How do we make people more productive on their terms?”

Gardner: To solve that problem of location flexibility years ago, Citrix had to deal with the network, servers, performance and capacity, and latency — all of which were invisible. End users didn’t know that it was Citrix behind-the-scenes.

Will people know the Citrix name and associate it with workspaces now that you are elevating your value above IT?

Henshall: We are solving broader challenges. We have moved gradually over the years from being a behind-the-scenes infrastructure technology. People have actually used the company’s name as a verb. “I have Citrixed into my environment,” for example. That will slowly evolve into still leveraging Citrix as a verb, but meaning something like, “I Citrixed to get my job done.” That takes on an even broader definition around productivity and simplification, and it allows us more degrees of freedom.

We are working with ecosystem partners across the infrastructure landscape, all types of application vendors. We therefore are a bridge between all of those. It doesn’t mean we necessarily have to have our name front and center, but Citrix is still a verb for most people in the way they think about getting their jobs done.

Gardner: I commend you for that because a lot of companies can’t resist making their name part-and-parcel of a solution. Perhaps that’s why you’ve been such a good partner over the years. You’ve been supplying a lot of the workhorses to get jobs done, but without necessarily having to strut your stuff.

Let’s get back to the issues around worker talent, productivity, and worker user experience. It seems to me we have a lot of the bits and parts for this. We have great apps, great technology, and cloud distribution. We are seeing interactivity via chatbots, and robotic process automation (RPA).

Why do you think being at the middle is the right place to pull this all together? How can Citrix uniquely help, whereas none of the other individual parts can?

Empower the people, manage the tech

Henshall: It’s a problem they are all focused on solving. So take a SaaS application, for example. You have applications that are incredibly powerful, best of breed, and they allow for infinite flexibility. Therein lies part of the challenge. The vast majority of people are not power users. They are not looking for every single bell and whistle across a workflow. They are looking for the opportunity to get something done, and it’s usually something fairly simple.

We are designing an interface to help abstract away a lot of complexity from the end user so they can focus on the task more than the technology itself. It’s an interesting challenge because so much technology is focused on the tech and how great and powerful and infinitely flexible it is, and they lose sight of what people are trying to accomplish.

We start by working backward. We start with the end user, understand what they need to be productive, empowered, and engaged. We let that be a guiding principle behind our roadmap. That gives us flexibility to empathize, to understand more about customers.

We start by working backward. We start with the end user, understand what they need to be productive, empowered, and engaged. We let that be a guiding principle behind our roadmap. That gives us flexibility to empathize, to understand more about customers and end users more effectively than if we were building something purely for technology’s sake.

Gardner: For younger workers who have grown up all-digital all the time, they are more culturally attuned to being proactive. They want to go out and do things with choice. So culturally, time is on your side.

On the other hand, getting people to change their behaviors can be very challenging. They don’t know that it could be any better, so they can be resistant. This is more than working with an IT department on infrastructure. We are talking about changing people’s thinking and how they relate to technology.

How do you propose to do that? Do you see yourself working in an ecosystem in such a way that this is not just, “If we build it, they will come,” affair, but evangelizing to the point where cognitive patterns can be changed?

Henshall: A lot of our relationships and conversations have been evolving over the last few years. We’ve been moving further up what I would call “the IT hierarchy.” We’re having conversations with CIOs now about broad infrastructure, ways that we can help address the use cases of all their employees, not just those that historically needed all the power of virtualization.

But as we move forward, there is a large transformation going on. Whether we use terms like digital transformation and others, those are less technology conversations and more about business outcomes – more than any time in my 30-year-career.

Because of that, you’re not only engaging the CIO, you may have the same conversation with a line of business executive, a chief people officer, the chief financial officer (CFO), or someone in another functional organization. And this is because they’re all trying to accomplish a specific outcome more than focusing on the technology itself.

And that allows us to elevate the discussion in a way that is much more interesting. It allows us to think about the human and business outcomes more so than ever before. And again, it’s just one more extension of how we are getting out of the “technology for technology’s sake” view and much more into the, “What is it that we are actually trying to accomplish” view.

Gardner: David, as we tackle these issues, elevate the experience, and let people work the way they want, it seems we are also opening up the floodgates for addition of more intelligence.

Whether you call it artificial intelligence (AI), machine learning (ML), or augmented intelligence, the fact is that we are able to deal with more data, derive analytics from it, learn patterns, reapply those learning lessons, and repeat. So injecting that into work, and how people get their jobs done, is the big question these days. People are trying to tackle it from a variety of different directions.

You have said an advantage Citrix has, is in access to data. What kind of data are we talking about, and why is that going to put Citrix in a leadership position?

Soup to nuts supervision of workflow 

Henshall: We have a portfolio that spans everything from the client device through the application, files, and the network. We are able to instrument many different parts of the entire workflow. We can capture information about how people are using technologies, what their usage patterns look like, where they are coming in from, and how the files are being used.

In most cases, we take that and apply it into contextual outcomes. For example, in the case of security, we have an analytics platform, and we use those security analytics to create a risk score — very similar to a credit score — for an individual user’s behavior when something anomalous happens. For example, you’re here with me and you’re in front of your computer, but you also tried to log on from another part of the globe at the same time.

Things like that can be flagged almost instantaneously, allowing the organization to identify and — in many cases — automatically address those types of scenarios. In that case, it may immediately ask for two-factor authentication.
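The “logged on from two parts of the globe at once” check is commonly known as impossible-travel detection. A minimal sketch of the idea — not Citrix Analytics’ actual algorithm — compares the implied travel speed between two logins against a plausible upper bound:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Flag two logins whose implied travel speed exceeds a plausible limit.

    Each login is (timestamp_hours, lat, lon); 900 km/h is roughly
    airliner speed, a hypothetical threshold for this sketch.
    """
    t1, lat1, lon1 = login_a
    t2, lat2, lon2 = login_b
    hours = abs(t2 - t1)
    dist = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return dist > 0  # same instant, different places
    return dist / hours > max_kmh

# Login from Miami, then "from" Singapore one hour later -> step-up auth.
miami = (0.0, 25.76, -80.19)
singapore = (1.0, 1.35, 103.82)
if impossible_travel(miami, singapore):
    print("anomalous: require two-factor authentication")
```

A production system would fold signals like this into a cumulative risk score rather than a single yes/no flag, which matches the “credit score” framing above.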

We are not capturing personally identifiable information (PII) and other types of broader data that fall under a privacy umbrella. We access a lot of anonymized things that provide the insights.

Citrix operates in about 100 countries around the world. We are already very familiar with local compliance and data privacy regulations. We are making sure that we can operate within those and give our customers in those markets the tools to make sure they are operating within those constraints as well.

Every company has [had privacy discussions] and will continue to evolve over time as technology evolves because the underlying platforms are becoming very powerful. Citrix operates in about 100 countries around the world. We are already very familiar with local compliance and data privacy regulations. We are making sure that we can operate within those and certainly give our customers in those markets the tools to make sure that they are operating effectively within the constraints as well.

Gardner: The many resources people rely on to do their jobs come from different places — public clouds, private clouds, a hybrid between them, different SaaS providers, and different legacy systems of record.

You are in a unique position in the middle of that. You can learn from it and begin to suggest how people can improve. Those patterns can be really powerful. It’s not something we’ve been able to do before.

What do we call that? Is it AI? Or a valet or digital assistant to help in your work while protective of privacy and adhering to all the laws along the way? And where do you see that going in terms of having an impact on the economy and on companies?

AI, ML to assist and automate tasks

Henshall: Two very broad questions. From the future standpoint, AI and ML capabilities are helping turn all the data we have into more useful or actionable information. And in our case, you mentioned virtual assistants. We will be using intelligent assistants to help you automate simple tasks.

And many of those could be tasks between applications. For example, you could ask your assistant to move a meeting to next Thursday or any time your other meeting participants happen to be available. The bots will go out, search for that optimal time, and take those actions. Those are the types of things that we envision more for the virtual assistants going forward, and I think those will be interesting.
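Stripped of the calendar-API plumbing, the “find a time everyone is available” task reduces to intersecting the participants’ free time. A hypothetical sketch, with each calendar modeled simply as a set of busy hours in the work day:

```python
# Hypothetical sketch of the assistant's core scheduling step: find the
# first hour of the work day when no participant is busy.

def first_common_slot(calendars, day_hours=range(9, 17)):
    """Return the first free hour shared by all calendars, or None."""
    for hour in day_hours:
        if all(hour not in busy for busy in calendars):
            return hour
    return None

busy_calendars = [
    {9, 10, 13},   # participant A's busy hours
    {9, 11, 14},   # participant B's busy hours
    {10, 11, 13},  # participant C's busy hours
]
print(first_common_slot(busy_calendars))  # 12
```

The hard part of the real feature is not this search but integrating with each participant’s calendar system and acting on the result — which is exactly the cross-application automation being described.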

Beyond that, it becomes a learning mechanism whereby we can identify that your bot came back and told you you’ve had the same conflict two meetings in a row. Do you want to change all future meetings so that this doesn’t happen again? It can become much more predictive.

And so, this journey that Citrix has been on for many years started with helping to simplify IT so that it became easier to deliver the infrastructure. The second part of that journey was making it easier for people to consume those resources across the complexities we have talked about.

Now, the products we announced at our May 2019 Citrix Synergy Conference are more about guiding work to help simplify the workflows. We will be doing more in this last space on how to anticipate what you will need so that we can automate it ahead of time. And that’s an interesting journey. It will take a few years to get there, but it’s going to be pretty powerful when we do.

Gardner: As you’re conducting product development, I assume you’re reflecting these capabilities back to your own workforce, the Citrix global talent pool. Do you drink your own champagne? What are you finding? Does it give you a sense as the CEO that your workforce has an advantage by employing these technologies? Do we have any proof points that the productivity is in fact enhanced?

Henshall: It’s still early days. A lot of these are brand-new technologies that don’t have enough of a base of learning yet.

But some of the early learnings can identify areas where you’re multitasking too much or are in an inefficient process. In my case, I tend to look at how much I am multitasking inside of a meeting. That helps me understand whether I should be in that meeting in the first place — whether I am 100 percent focused and committed, or have been distracted by other elements.

Those are interesting learnings that are more about personal productivity and how we can optimize from that respect.

More broadly speaking, our workforce is globally distributed. We absolutely drink our own champagne when it comes to engaging a global team. We have teams now in about 40 countries around the world and we are very, very virtual. In fact, among my leadership team, I am the only member that lives full-time in [Citrix’s headquarters] in South Florida. We make that work because we embrace all of our own technology, stay on top of common projects, communicate across all the various mediums, and collaborate where need be.

That allows us to tap into nontraditional workforce populations, to differentiate, and enable folks who need different types of flexibility for their own lifestyles. You miss great talent if you are far too rigid. Personally, I believe the days are gone when everybody is expected to work inside a corporate headquarters. It’s just not practical anymore.

Gardner: For those businesses that recognize there is tremendous change afoot, are using new models like cloud, and don’t want complexity to outstrip productivity – what advice do you have for them as they start digital transformation efforts? What should they be putting in place now to take advantage of what companies like Citrix will be providing them in a few years?

Business-first supports global collaboration 

Henshall: The number one thing on any digital transformation project is to be crystal clear about the outcome you are trying to achieve. Start with the outcome and work backward. You can leverage platforms like Citrix, for example, to look across multiple technologies, focus on those business outcomes, and leave the technology decision in many cases to last. It shouldn’t be the other way around, because if you do, you will self-limit what those outcomes should be.

Make sure you have buy-in across all stakeholders. As I talked about earlier, have a conversation with the CFO, head of marketing, head of human resources, and many others. Look for breadth of outcomes, because you don’t want to solve problems for one small team, you want to solve problems across the enterprise. That’s where you get the best leverage. It allows you the best opportunity to simplify the complexity that has built up over the last 30 to 40 years. This will help people get out from under that problem.

Gardner: Lastly, for IT departments specifically, the people who have been most aware of Citrix as a brand, how should IT be thinking about entering this new era of focusing on work and productivity? What should IT be thinking about to transform themselves to be in the best position to attain these better business outcomes?

Henshall: I have already seen the transformation happening. Most IT administrators want to focus on larger business problems, more than just maintaining the existing infrastructure. Unfortunately, the budgets have been relatively limited for innovation because of all the complexity we have talked about.

But my advice for everyone is: take a step back, and understand how to be the champion of the business, to be the hero by providing great outcomes, great experiences, and higher productivity. That’s not a technology conversation first and foremost. Obviously it has a technology element, but understand and be empathetic to the needs of the business. Then work backward, and Citrix will help you get there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How real-time data streaming and integration set the stage for AI-driven DataOps

The next BriefingsDirect business intelligence (BI) trends discussion explores the growing role of data integration in a multi-cloud world.

Just as enterprises seek to gain more insights and value from their copious data, they’re also finding their applications, services, and raw data spread across a continuum of hybrid and public clouds. Raw data is also piling up closer to the edge — on factory floors, in hospital rooms, and anywhere digital business and consumer activities exist.

Stay with us now as we examine the latest strategies for uniting and governing data wherever it resides. By doing so, businesses are enabling rapid and actionable analysis — as well as entirely new levels of human-to-augmented-intelligence collaboration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the foundational capabilities that lead to total data access and exploitation, we’re now joined by Dan Potter, Vice President of Product Marketing at Attunity, a Division of Qlik. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dan, what are the business trends forcing a new approach to data integration?

Potter: It’s all being driven by analytics. The analytics world has gone through some very interesting phases of late: Internet of Things (IoT), streaming data from operational systems, artificial intelligence (AI) and machine learning (ML), predictive and preventative kinds of analytics, and real-time streaming analytics.

So, it’s analytics driving data integration requirements. Analytics has changed the way in which data is being stored and managed for analytics. Things like cloud data warehouses, data lakes, streaming infrastructure like Kafka — these are all a response to the business demand for a new style of analytics.


As analytics drives data management changes, the way in which the data is being integrated and moved needs to change as well. Traditional approaches to data integration – such as batch processes, ETL, and script-oriented integration – are no longer good enough. All of that is changing. It’s all moving to a much more agile, real-time style of integration, driven by things like the movement to the cloud and the need to move more data, in greater volume and greater variety, into data lakes, and then to shape that data and make it analytics-ready.

With all of these movements, there have been new challenges and new technologies. The pace of innovation is accelerating, and the challenges are growing. The demand for digital transformation and the move to the cloud has changed the landscape dramatically. With that came great opportunities for us as a modern data integration vendor, but also great challenges for companies that are going through this transition.

Gardner: Companies have been doing data integration since the original relational database (RDB) was kicked around. But it seems the core competency of managing the integration of data is more important than ever.

Innovation transforms data integration

Potter: I totally agree, and if done right, in the future, you won’t have to focus on data integration. The goal is to automate as much as possible because the data sources are changing. You have a proliferation of NoSQL databases, graph databases; it’s no longer just an Oracle database or RDB. You have all kinds of different data. You have different technologies being used to transform that data. Things like Spark have emerged along with other transformation technologies that are real-time-oriented. And there are different targets to where this data is being transformed and moved to.

It’s difficult for organizations to maintain the skill set, and you don’t want them to have to. We want to move to an automated process of data integration. The more we can achieve that, the more valuable all of this becomes. You don’t spend time on mundane data integration; you spend time on the analytics — and that’s where the value comes from.

Gardner: Now that Attunity is part of Qlik, you are an essential component of a larger undertaking, of moving toward DataOps. Tell me why automated data migration and integration translates into a larger strategic value when you combine it with Qlik?

Potter: DataOps resonates well for the pain we’re setting out to address. DataOps is about bringing the same discipline that DevOps has brought to software development. Only now we’re bringing that to data and data integration for analytics.

How do we accelerate and remove the gap between IT, which is charged with providing analytics-ready data to the business, and all of the various business and analytics requirements? That’s where DataOps comes in. DataOps is technology, but that’s just a part of it. It’s as much or more about people and process — along with enabling technology and modern integration technology like Attunity.

We’re trying to solve a problem that’s been persistent since the first bit of data hit a hard drive. Data integration challenges will always be there, but we’re getting smarter about the technology that you apply and gaining the discipline to not boil the ocean with every initiative.

The new goal is to get more collaboration between what business users need and to automate the delivery of analytics-ready data, knowing full-well that the requirements are going to change often. You can be much more responsive to those business changes, bring in additional datasets, and prepare that data in different ways and in different formats so it can be consumed with different analytics technologies.

That’s the big problem we’re trying to solve. And now, being part of Qlik gives us a much broader perspective on these pains as they relate to the analytics world. It gives us a much broader portfolio of data integration technologies. The Qlik Data Catalyst product is a perfect complement to what Attunity does.

Our role in data integration has been to help organizations move data in real-time as that data changes on source systems. We capture those changes and move that data to where it’s needed — like a cloud, data lake, or data warehouse. We prepare and shape that data for analytics.

Qlik Data Catalyst then comes in to catalog all of this data and make it available to business users so they can discover and govern that data. And it easily allows for that data to be further prepared, enriched, or to create derivative datasets.

So, it’s a perfect marriage in that the data integration world brings together the strength of Attunity with Qlik Data Catalyst. We have the most purpose-fit, modern data integration technology to solve these analytics challenges. And we’re doing it in a way that fits well with a DataOps discipline.

Gardner: We not only have the different data types, we have another level of heterogeneity to contend with and that’s cloud, hybrid cloud, multi-cloud, and edge. We don’t even know what more is going to be coming in two or three years. How does an organization stay agile given that level of dynamic complexity?

Real-time analytics deliver agility 

Potter: You need a different approach, and a different style of integration technology, to support these topologies, which are themselves very different. And what the ecosystem looks like today is going to be radically different two years from now.

The pace of innovation just within the cloud platform technologies is very rapid. New databases, transformation engines, and orchestration engines just proliferate. And now you have multiple cloud vendors. There are great reasons for organizations to use multiple clouds, to use the best of the technologies or approaches that work for your organization, your workgroup, your division. You need to prepare yourself for that, and modern integration approaches definitely help.

One of the interesting technologies to help organizations provide ongoing agility is Apache Kafka. Kafka is a way to move data in real-time and make the data easy to consume even as it’s flowing. We see that as an important piece of the evolving data infrastructure fabric.

At Attunity we create data streams from systems like mainframes, SAP applications, and RDBs. These systems weren’t built to stream data, but we stream-enable that data. We publish it into a Kafka stream and that provides great flexibility for organizations to, for example, process that data in real time for real-time analytics such as fraud detection. It’s an efficient way to publish that data to multiple systems. But it also provides the agility to be able to deliver that data widely and have people find and consume that data easily.
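The publish-once, consume-many pattern Potter describes can be sketched with an in-memory stand-in for a topic. A real deployment would use a Kafka client library against a broker; the event fields and the two consumers here (a fraud check and a data-lake sink) are purely illustrative.

```python
# In-memory stand-in for a Kafka topic: each published change event
# is delivered to every subscriber.

class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

alerts, lake = [], []
txns = Topic()
# Consumer 1: flag suspiciously large transactions in real time.
txns.subscribe(lambda e: alerts.append(e) if e["amount"] > 10_000 else None)
# Consumer 2: land every event in the data lake.
txns.subscribe(lake.append)

txns.publish({"id": 1, "amount": 250})
txns.publish({"id": 2, "amount": 50_000})
print(len(alerts), len(lake))  # -> 1 2
```

The point of the pattern is the decoupling: the producer that stream-enables the mainframe or SAP data does not need to know how many downstream systems consume it.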

Such new, evolving approaches enable a mentality that says, “I need to make sure that whatever decision I make today is going to future-proof me.” So, setting yourself up right and thinking about that agility and building for agility on day one is absolutely essential.

Gardner: What are the top challenges companies have for becoming masterful at this ongoing challenge — of getting control of data so that they can then always analyze it properly and get the big business outcomes payoff?

Potter: The most important competency is at the enterprise architecture (EA) level, more than with the people who traditionally build ETL scripts and integration routines. I think those are the pieces you want to automate.

The real core competency is to define a modern data architecture and build it for agility so you can embrace the changing technologies and requirements landscape. It may be that you have all of your eggs in one cloud vendor today. But you certainly want to set yourself up so you can evolve and push processing to the most efficient place, and to attain the best technology for the kinds of analytics or operational workloads you want.

That’s the top competency that organizations should be focused on. As an integration vendor, we are trying to reduce the reliance on technical people to do all of this integration work in a manual way. It’s time-consuming, error-prone, and costly. Let’s automate as much as we can and help companies build the right data architecture for the future.

Gardner: What’s fascinating to me, Dan, in this era of AI, ML, and augmented intelligence is that we’re not just creating systems that will get you to that analytic opportunity for intelligence. We are employing that intelligence to get there. It’s tactical and strategic. It’s a process, and it’s a result.

How do AI tools help automate and streamline the process of getting your data lined up properly?

Automated analytics advance automation 

Potter: This is an emerging area for integration technology. Our focus initially has been on preparing data to make it available for ML initiatives. We work with vendors such as Databricks at the forefront of processing, using a high performance Spark engine and processing data for data science, ML, and AI initiatives.

We need to ask, “How do we apply cognitive engines, things like Qlik, within our own technology and get smarter about the patterns of integration that organizations are deploying, so we can further automate?” That’s really the next wave for us.

Gardner: You’re not just the president, you’re a client.

Potter: Yeah, that’s a great way to put it.

Gardner: How should people prepare for such use of intelligence?

Potter: If it’s done right — and we plan on doing it right — it should be transparent to the users. This is all about automation done right. It should just be intuitive. Going back 15 years when we first brought out replication technology at Attunity, the idea was to automate and abstract away all of the complexity. You could literally drag your source, your target, and make it happen. The technology does the mapping, the routing, and handles all the errors for me. It’s that same elegance. That’s where the intelligence comes in, to make it so intuitive that you are not seeing all the magic that’s happening under the covers.
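As a rough illustration of the “drag your source, your target” automation Potter describes, consider the type mapping such a tool generates behind the scenes. This is a sketch only; the type names on both sides are illustrative, not Attunity’s actual mapping tables.

```python
# Toy sketch of automatic schema mapping: translate source column
# metadata into target column definitions via a type-mapping table.

SOURCE_TO_TARGET = {
    "NUMBER":   "BIGINT",
    "VARCHAR2": "VARCHAR",
    "DATE":     "TIMESTAMP",
}

def map_schema(source_columns):
    """Build target DDL column definitions from source metadata."""
    out = []
    for name, src_type in source_columns:
        tgt_type = SOURCE_TO_TARGET.get(src_type, "VARCHAR")  # safe default
        out.append(f"{name} {tgt_type}")
    return ", ".join(out)

ddl = map_schema([("id", "NUMBER"), ("name", "VARCHAR2"), ("created", "DATE")])
print(f"CREATE TABLE orders ({ddl})")
# -> CREATE TABLE orders (id BIGINT, name VARCHAR, created TIMESTAMP)
```

The “magic under the covers” in a real product is a far larger mapping matrix plus error handling, but the principle is the same: the user names the source and target, and the tool derives the rest.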

We follow that same design principle in our product. As the technologies get more complex, it’s harder for us to do that, so applying ML and AI becomes even more important to us. That’s really the future for us: as we automate more of these processes, you’ll see less and less of what is happening under the covers.

Gardner: Dan, are there any examples of organizations on the bleeding edge? They understand the data integration requirements and core competencies. They see this through the lens of architecture.

Automation insures insights into data 

Potter: Zurich Insurance is one of the early innovators in applying automation to their data warehouse initiatives. Zurich had been moving to a modern data warehouse to better meet the analytics requirements, but they realized they needed a better way to do it than in the past.

Traditional enterprise data warehousing employs a lot of people, building a lot of ETL scripts. It tends to be very brittle. When source systems change you don’t know about it until the scripts break or until the business users complain about holes in their graphs. Zurich turned to Attunity to automate the process of integrating, moving it to real-time, and automatically structuring their data warehouse.

Their time to respond to business users is now a fraction of what it was. They reduced 45-day cycles to two-day cycles for updating and building out new data marts for users. Their agility is off the charts compared to the traditional way of doing it. They can now better meet the needs of the business users through automation.

As organizations move to the cloud to automate processes, a lot of customers are embracing data lakes. It’s easy to put data into a data lake, but it’s really hard to derive value from the data lake and reconstruct the data to make it analytics-ready.

For example, you can take transactions from a mainframe and dump all of those things into a data lake, which is wonderful. But how do I create any analytic insights? How do I ensure all those frequently updated files I’m dumping into the lake can be reconstructed into a queryable dataset? The way people have done it in the past is manually: they wrote scripts, using Pig and other languages, to try to reconstruct it. We fully automate that process. For companies using Attunity technology, our big investments in data lakes have had a tremendous impact on demonstrating value.
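A minimal sketch of that reconstruction step: fold ordered change files into a base snapshot to produce one queryable, current-state dataset. The file contents here are simple in-memory lists for illustration; a real lake would read Parquet or similar files in arrival order.

```python
# Sketch of data-lake "compaction": merge a base snapshot with
# ordered change files to yield the latest state of each row.

def compact(base, change_files):
    state = {row["id"]: row for row in base}
    for change_file in change_files:          # files in arrival order
        for rec in change_file:
            if rec["op"] == "delete":
                state.pop(rec["id"], None)    # drop the row if present
            else:                             # insert or update: upsert
                state[rec["id"]] = rec["row"]
    return sorted(state.values(), key=lambda r: r["id"])

base = [{"id": 1, "amount": 100}, {"id": 2, "amount": 200}]
changes = [
    [{"op": "update", "id": 1, "row": {"id": 1, "amount": 150}}],
    [{"op": "delete", "id": 2}],
    [{"op": "insert", "id": 3, "row": {"id": 3, "amount": 75}}],
]
print(compact(base, changes))
# -> [{'id': 1, 'amount': 150}, {'id': 3, 'amount': 75}]
```

Done by hand in Pig scripts, this logic is brittle; automating it is exactly the value the interview is pointing at.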

Gardner: Attunity recently became part of Qlik. Are there any clients that demonstrate the combination of two-plus-two-equals-five effect when it comes to Attunity and the Qlik Catalyst catalog?

DataOps delivers the magic 

Potter: It’s still early days for us. As we look at our installed base — and there is a lot of overlap between who we sell to — the BI teams and the data integration teams in many cases are separate and distinct. DataOps brings them together. 

In the future, as we take the Qlik Data Catalyst and make that the nexus of where the business side and the IT side come together, the DataOps approach leverages that catalog and extends it with collaboration. That’s where the magic happens.

So business users can more easily find the data. They can send requirements back to the data engineering team as they need them. And again, applying AI and ML to the patterns we are seeing from the analytics side will help us better match that to the data that’s required, and automate the delivery and preparation of that data for different business users.

That’s the future, and it’s going to be very interesting. A year from now, after being part of the Qlik family, we’ll bring together the BI and data integration side from our joint customers. We are going to see some really interesting results.

Gardner: As this next, third generation of BI kicks in, what should organizations be doing to get prepared? What should the data architect, who is starting to think about DataOps, do to put them in an advantageous position to exploit this when the market matures?

Potter: First they should be talking to Attunity. We get engaged early and often in many of these organizations. The hardest job in IT right now is [to be an] enterprise architect, because there are so many moving parts. But we have wonderful conversations, because at Attunity we’ve been doing this for a long time, we speak the same language, and we bring a lot of knowledge and experience from other organizations to bear. It’s one of the reasons we have deep strategic relationships with many of these enterprise architects and others on the IT side of the house.

They should be thinking about what’s the next wave and how to best prepare for that. Foundationally, moving to more real-time streaming integration is an absolute requirement. You can take our word for it. You can go talk to analysts and other peers around the need for real-time data and streaming architectures, and how important that is going to be in the next wave.

Data integration is strategic, it unlocks the value of the data. If you do it right, you’re going to set yourself up for long-term success. 

So prepare for that, and think about the agility and automation that will get you the desired results. Organizations that are not preparing for that now are going to be left behind; and if IT is left behind, the business is left behind. It is a very competitive world, and organizations are competing on data and analytics. The faster you can deliver the right data and make it analytics-ready, the faster and better the decisions you can make, and the more successful you’ll be.

So it really is a do-or-die proposition. That’s why data integration is strategic: it unlocks the value of this data, and if you do it right, you’re going to set yourself up for long-term success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Qlik.


How HCI forms a simple foundation for hybrid cloud, edge, and composable infrastructure

The next BriefingsDirect Voice of the Innovator podcast discussion explores the latest insights into hybrid cloud and hyperconverged infrastructure (HCI) strategies.

Speed to business value and simplicity in deployments have been top drivers of the steady growth around HCI solutions. IT operators are now looking to increased automation, built-in intelligence, and robust security as they seek such turnkey appliance approaches for both cloud and traditional workloads.

Stay with us now as we examine the rapidly evolving HCI innovation landscape, which is being shaped just as much by composability, partnerships, and economics, as it is new technology.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the next chapter of automated and integrated IT infrastructure solutions is Thomas Goepel, Chief Technologist for Hyperconverged Infrastructure at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Thomas, what are the top drivers now for HCI as a business tool? What’s driving the market now, and how has that changed from a few years ago?


Goepel: HCI has gone through a really big transformation in the last few years. When I look at how it originally started, it was literally people looking for a better way of building virtual desktop infrastructure (VDI) solutions. They wanted to combine servers and storage in a single device and make it easier to operate.

What I am seeing now is HCI spreading throughout data centers and becoming one of the core elements of a lot of the data centers around the world. The use cases have expanded significantly. It started out with VDI, but now people are running all kinds of business applications on HCI — all the way to critical databases like SAP HANA.

Gardner: People are using HCI in new ways. They are innovating in the market, and that often means they do things with HCI that were not necessarily anticipated. Do you see that happening with HCI?

Ease of use encourages HCI expansion 

Goepel: Yes, it’s happened with HCI quite a bit. The original use cases were very much focused on VDI and end-user computing. It was just a convenient way of having a platform for all of your virtual desktops and an easy way of managing them.

But people saw that this ease of management could be expanded into other use cases. They then began to bring core business applications, such as Microsoft Exchange or SharePoint, onto the platform, saw there were more and more things they could put there, and gained the entire simplicity that hyperconverged brings to operating in this environment.

How Hyperconverged Infrastructure Delivers Unexpected Results for VDI Users

You no longer had to build a separate server farm, separate storage farm, or even manage your network independently. You could now do all of that from a single interface, a single-entry point, and gain a single point of management. Then people said, “Well, this ease makes it so beneficial for me, why don’t we bring the other things in here?” And then we saw it spread out in the data centers.

What we now have is people saying, “Hey, let me take this a step further. If I have remote offices, branch offices, or edge use-cases where I also need compute resources, why not try to take HCI there? Because typically on the edge I don’t even have system administrators, so I can take this entire simplicity down to this point, too.”

And the nice thing with hyperconvergence is that — at least in the HPE version of hyperconvergence, which is HPE SimpliVity — it’s not only simple to manage, it has also built in all of the enterprise features such as high availability and data efficiency, so it makes it really a robust solution. It has come a very long way on this journey.

Gardner: Thomas, you mentioned the role of HCI at the edge gaining traction and innovation. What’s a typical use case for this sort of micro datacenter at the edge? How does that work?

Losing weight with HCI wins the race

Goepel: Let me give you a really good example of a super-fast-paced industry: Formula One car racing. It really illustrates how edge is having an impact — and also how this has a business impact.

One of our customers, Aston Martin Red Bull Racing, has been very successful in Formula One racing. The rules of the International Automobile Federation (FIA), the governing board of Formula One racing, say that each race team can only bring a certain amount of weight to a racetrack during the races.

This is obviously a high-tech race. They are adjusting the car during the race, lap by lap, making adjustments based on the real-time performance of the car to get the last inch possible out of the car to win that race. All of these cars are very close to each other from a performance perspective.

Traditionally, they shipped racks and racks of IT gear to the racetrack to calculate the performance of the car and make adjustments during the race. They have now replaced all of these racks with HPE SimpliVity HCI gear and significantly reduced the amount of gear. It means having significantly less weight to bring to the racetrack.

How Hyperconvergence Plays Pivotal Role at Red Bull

There are two benefits. First, reducing the weight of the IT gear allows them to bring additional things to the racetrack because what counts is the total weight – and that includes the car, spare parts, people, equipment — everything. There is a certain mandated limit.

By taking that weight out, having less IT equipment on the racetrack, the HCI allows them to bring extra personnel and spare parts. They can perform better in the races.

The other benefit is that HCI performs significantly better than traditional IT infrastructure. They can now make adjustments to the car within one lap of the race, versus the three laps it took them previously.

This is a huge competitive advantage. When you look at the results, they are doing great when it comes to Formula One racing, especially for being a smaller team compared to the big teams out there.

From that perspective, at the edge, HCI is making some big improvements, not only in a high-end industry like Formula One racing, but in all kinds of other industries, including manufacturing and retail. They are seeing similar benefits.

Gardner: I wrote a research paper about four years ago, Thomas, that laid out the case that HCI will become a popular on-ramp to private clouds and ultimately hybrid cloud. Was I ahead of my time?

HCI on-ramp to the clouds

Goepel: Yes, I think you were a little bit ahead of your time. But you were also a visionary to lay out that groundwork. When you look at the industry, hyperconvergence is a fast-growing industry segment. When it comes to server and data center infrastructure, HCI has the highest growth rate across the entire IT industry.

What you were foreseeing four years ago is exactly what we now have, and I don’t see an end anytime soon. HCI continues to grow as people discover new use cases. The edge is one new element, but we are just scratching the surface.

Edge use cases are a fascinating new world in general — from such distributed environments as smart cities and smart manufacturing. We are just starting to get into this world. There’s a huge opportunity for innovation and this will become an attractive area for hyperconvergence. 

Gardner: How does HCI innovation align with other innovations at HPE around automation, composability, and intelligence derived to make IT behave as total solutions? Is there a sense that the whole is greater than the sum of the parts?

HCI innovations prevent problems

Goepel: Absolutely there is. We have leveraged a lot of innovation in the broader HPE ecosystem, including the latest generation of the ProLiant DL380 Server, the most secure server in the industry. All of these elements flowed into the HPE SimpliVity HCI platform, too.

But we are not stopping there. A lot of other innovations in the HPE ecosystem are being brought into hyperconvergence. A perfect example is HPE InfoSight, a management platform that allows you to operate your infrastructure better by understanding what’s going on in a very efficient way. It uses artificial intelligence (AI) to detect when something is going wrong in your IT environment so you can proactively take action and avoid ending up with a disaster.
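As a toy illustration of the kind of proactive baseline check such a platform performs (this is not InfoSight’s actual method; the metric, history window, and threshold are illustrative), here is a simple deviation test on a device metric:

```python
import statistics

# Flag a metric sample that strays far from its recent baseline,
# so an operator can act before the drift becomes an outage.

def is_anomalous(history, sample, z_threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean          # flat baseline: any change is news
    return abs(sample - mean) / stdev > z_threshold

latencies_ms = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 2.2, 2.1]
print(is_anomalous(latencies_ms, 2.3))   # -> False: within normal range
print(is_anomalous(latencies_ms, 9.5))   # -> True: worth investigating
```

Production systems learn baselines per workload and across a fleet, but the core idea is the same: compare each new reading against what “normal” has looked like, and surface the outliers early.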

How to Tell if Your Network Is Really Aware of Your Infrastructure

HPE InfoSight originally started out in storage, but we are now taking it into the full HPE SimpliVity HCI ecosystem. It’s not just a support portal; it gives you intelligence to understand what’s going on before you run into problems. Those problems can be solved so your environment keeps running at top performance. You’ll have what you need to run any mission-critical business on HCI.
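The proactive-detection idea Goepel describes can be illustrated with a toy baseline check. This is a generic z-score sketch, not InfoSight’s actual algorithm; the metric values and threshold are illustrative assumptions.

```python
# Toy illustration of the predictive-analytics idea: flag a metric sample
# that deviates sharply from its recent baseline so operators can act
# before a failure. This is NOT InfoSight's algorithm, just a z-score check.

from statistics import mean, stdev

def is_anomalous(history: list[float], sample: float, threshold: float = 3.0) -> bool:
    """Flag the sample if it lies more than `threshold` standard
    deviations away from the mean of the recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

# Hypothetical storage-latency readings in milliseconds.
latency_ms = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1]
ok = is_anomalous(latency_ms, 2.3)     # small wobble: within normal range
alert = is_anomalous(latency_ms, 9.5)  # sharp spike: worth alerting on
```

Real tools build far richer baselines from fleet-wide telemetry, but the principle is the same: compare live behavior against a learned norm and surface deviations before they become outages.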

More and more of these innovations in our ecosystem will be brought into the hyperconverged world. Another example is around composability. We have been developing a lot of platform capabilities around composability and we are now bringing HPE SimpliVity and composability together. This allows customers to actually change the infrastructure’s personality depending on the workload, including bringing on HPE SimpliVity. You can get the best of these two worlds.

This leads to building a private cloud environment that can be easily connected to a public cloud or clouds. You will ultimately build out a hybrid IT environment in such a way that your private cloud environment, or your on-premises environment, runs in the most optimized way for your business and for your specific needs as a company.

Gardner: You are also opening up that HCI ecosystem with new partners. Tell us how innovation around hyperconverged is broadening and making it more ecumenical for the IT operations consumer.

Welcome to the hybrid world

Goepel: HPE has always been an open player. We never believed in locking down an environment or making it proprietary and basically locking out everyone else. We have always been a company that listens to what our customers want and what our customers need, and then gives them the best solution.

Now, customers are looking to run their HCI environment on HPE equipment and infrastructure because they know it is reliable. It works, they feel comfortable with it, and they trust it. But we also have customers who say, “Hey, you know, I want to run this piece of software or that solution on this HPE environment. Can you make sure this runs and works perfectly?”

We are in a hybrid world. And in a hybrid world there is not a single vendor that can cover the entire hybrid market. We need to innovate in such a way that we allow an ecosystem of partners to all come together and work collaboratively and jointly to provide new solutions.

We have recently announced new partnerships with other software vendors, and that includes HPE GreenLake Flex Capacity. With that, instead of making big, upfront investments in equipment, you can pay in a more financially flexible way. It delivers a solution that solves the customers’ real problems, rather than locking the customer into a particular infrastructure.

Flexibility improves performance 

Gardner: You are broadening the idea of making something consumable when you innovate, not only around the technology and the partnerships, but also the economic model, the consumption model. Tell us more about how HPE GreenLake Flex Capacity and acquiring a turnkey HPE SimpliVity HCI solution can accelerate value when you consume it, not as a capital expense, but as an operating cost affair.

Goepel: No industry is 100 percent predictable; at least I haven’t seen it, and I haven’t found it. Not even the most conservative government institution with a five-year plan is predictable. There are always factors that will disrupt the plan, and you have to react to them.

How Hyperconverged Infrastructure Solves Unique Challenges for Datacenters at the Edge

Traditionally, what we have done in the industry is oversize our environments: calculate the anticipated growth over five years, then add another 25 percent on top of that, and then another 10 percent buffer on top of that, hoping we have not undersized the environment by the time the equipment reaches the end of its life.

That is a lot of capital invested in something that just sits there with no value and no use, basically standing around, and that you eventually have to take off your books from a financial perspective.

Now, HPE GreenLake gives you a flexible-capacity model. You only pay literally for what you consume. If you grow faster than you anticipated, you just use more. If you grow slower, you use less. If you have an extremely successful business — but then something in the economic model changes and your business doesn’t perform as you have anticipated — then you can reduce your spending. That flexibility better supports your business.
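The sizing arithmetic Goepel describes, a five-year forecast plus 25 percent plus a further 10 percent buffer, can be made concrete with a toy cost comparison. All numbers, prices, and function names here are illustrative assumptions, not an actual GreenLake pricing model.

```python
# Toy comparison (hypothetical numbers): traditional oversized provisioning
# versus a consumption-based model. Not a real pricing calculator.

def oversized_capacity(forecast_peak: float) -> float:
    """Traditional sizing: five-year forecast peak, plus a 25% growth
    margin, plus a further 10% safety buffer on top of that."""
    return forecast_peak * 1.25 * 1.10

def fixed_model_cost(forecast_peak: float, unit_price: float, months: int) -> float:
    """Carry the cost of the full oversized capacity every month, used or not."""
    return oversized_capacity(forecast_peak) * unit_price * months

def consumption_cost(monthly_usage: list[float], unit_price: float) -> float:
    """Pay-per-use: billed only for what is actually consumed each month."""
    return sum(usage * unit_price for usage in monthly_usage)

# Hypothetical year: forecast peak of 100 units at $10 per unit-month,
# while actual usage only ramps linearly from 40 up to 80 units.
usage = [40 + 40 * m / 11 for m in range(12)]
fixed = fixed_model_cost(100.0, 10.0, 12)  # carries 137.5 units all year
flex = consumption_cost(usage, 10.0)       # pays only for the actual ramp
```

In this illustrative scenario the consumption model costs well under half of the oversized fixed model, and crucially the risk flips: if growth outpaces the forecast, you consume more instead of hitting a capacity wall.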

We are ultimately doing IT to help our businesses to perform better. IT shouldn’t be a burden that slows you down, it should be an accelerator. By having a flexible financial model, you get exactly that. HPE GreenLake allows you to scale up and scale down your environment based on your business needs with the right financial benefits behind it.

Gardner: There is such a thing as too much of a good thing. And I suppose that also applies to innovation. If you are doing so many new and interesting things — allowing for hybrid models to accelerate and employing new economic models — sometimes things can spin out of control.

But you can also innovate around management to prevent that from happening. How does management innovation fit into these other aspects of a solution, to keep it from getting out of control?

Checks and balances extend manageability

Goepel: You bring up a really good point. One of the things we have learned as an industry is that things can spin out of control very quickly. And for me, the best example is when I go back two years when people said, “I need to go to the cloud because that is going to save my world. It’s going to reduce my costs, and it’s going to be the perfect solution for me.”

What happened is people went all-in on the cloud, and every developer and IT person heard, “Hey, if you need a virtual machine, just get it from whatever your favorite cloud provider is. Go for it.” People very quickly learned that this meant exploding costs. There were no controls, no checks and balances.

On both the HCI and general IT side, we have learned from that initial mistake in the public cloud and have put the right checks and balances in place. HPE OneView is our infrastructure management platform that allows the system administrator to operate the infrastructure from a single-entry point or single point of view.

How Hyperconverged Infrastructure Helps Trim IT Complexity Without Sacrificing Quality

That gives you a very simple way of managing, and it matches the way HCI is operated: from a single point of view. You don’t have five consoles or five screens; you literally have one screen to operate from.

You need to have a common way of managing checks and balances in any environment. You don’t want the end user or every developer to go in there and just randomly create virtual machines, because then your HCI environment quickly runs out of resources, too. You need the right access controls so that only people with the right justification can do that, but it still needs to happen quickly. We are in a world where a developer doesn’t want to wait days to get a virtual machine. If he is working on something, he needs the virtual machine now.
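The guardrails described above, instant self-service but only for authorized roles and within a capacity limit, can be sketched generically. The class and role names are hypothetical illustrations, not an HPE OneView API.

```python
# Minimal sketch of self-service with guardrails: developers get VMs
# immediately, but only within role permissions and a resource quota,
# so the shared HCI pool cannot be drained by unchecked requests.
# All names here are hypothetical, not a real management API.

class VmQuotaGate:
    def __init__(self, capacity_vms: int, allowed_roles: set[str]):
        self.capacity_vms = capacity_vms
        self.allowed_roles = allowed_roles
        self.in_use = 0

    def request_vm(self, role: str) -> bool:
        """Grant a VM instantly if the role is authorized and capacity remains."""
        if role not in self.allowed_roles:
            return False  # access control: request lacks justification
        if self.in_use >= self.capacity_vms:
            return False  # quota: the pool is exhausted
        self.in_use += 1
        return True

gate = VmQuotaGate(capacity_vms=2, allowed_roles={"developer"})
granted_now = gate.request_vm("developer")      # immediate, no ticket queue
denied_role = gate.request_vm("contractor")     # role not authorized
```

The point of the sketch is that speed and control are not in tension: the check runs in microseconds, so the developer still gets the VM “now” while the environment stays within bounds.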

Similarly, when it comes to a hybrid environment, when we bring together the private cloud and the public cloud, we want a consistent view across both worlds. This is where HPE OneSphere comes in. HPE OneSphere is a cloud management platform that manages hybrid clouds, spanning both private and public clouds.

It allows you to gain a holistic view of what resources you are consuming, what’s the cost of these resources, and how you can best distribute workloads between the public and private clouds in the most efficient way. It is about managing performance, availability, and cost. You can put in place the right control mechanisms to curb rogue spending, and control how much is being consumed and where.

Gardner: From all of these advancements, Thomas, have you made any personal observations about the nature of innovation? What is it about innovation that works? What do you need to put in place to prevent it from becoming a negative? What is it about innovation that is a force-multiplier from your vantage point?

Faster is better 

Goepel: The biggest observation I have is that innovation is happening faster and faster. In the past, it took quite a while to get innovation out there. Now it is happening so fast that one innovation arrives and the next one is right on its heels, and we are taking advantage of that, too. This is just the nature of the world we are living in; everything is moving much faster.

There are obviously some really great benefits from the innovation we are seeing. We have talked about a few of them, like AI and how HCI is being used in edge use cases. In manufacturing, hospitals, and these kinds of environments, you can now do things in better and more efficient ways. That’s also helping on the business side.

How One Business Took Control of their Hybrid Cloud

But there’s also the human factor, because innovation makes things easier for us or makes it better for us to operate. A perfect example is in hospitals, where we can provide the right compute power and intelligence to make sure patients get the right medication. It is controlled in a good way, rather than just somebody writing on a piece of paper and hoping the next person can read it. You can now do all of these things electronically, with the right digital intelligence to ensure that you are actually curing the patient.

I think we will see more and more of these types of examples happening and bringing compute power to the edge. That is a huge opportunity, and there is a lot of innovation in the next two to three years, specifically in this segment, and that will impact everyone’s life in a positive way. 

Gardner: Speaking of impacting people’s lives, I have observed that the IT operator is being greatly impacted by innovation. The very nature of their job is changing. For example, I recently spoke with Gary Thome, CTO for Composable Cloud at HPE, and he said that composability allows for the actual consumers of applications to compose their own supporting infrastructure.

Because of ease, automation, and intelligence, we don’t necessarily need to go to IT to say, “Set up XYZ infrastructure with these requirements.” Using composability, we can move innovation to the very people who are in the most advantageous position to define what it is they need.

Thomas, how do you see innovation impacting the very definition of what IT people do?

No more mundane tasks 

Goepel: This is a very positive impact, and I will give you a really good example. I spend a lot of time talking to customers and to a lot of IT people out there. And I have never encountered a single systems administrator in this industry who comes to work in the morning and says, “You know, I am so happy that I am here this morning so I can do a backup of my environment. It’s going to take me four hours, and I am going to be the happiest person in the world if the backup goes through.” Nobody wants to do this. 

Nobody goes to work in the morning and says, “You know, I really hope I get a hard problem to solve today, like my network crashing so I can be the hero who fixes it, or making a configuration change in my virtual environment.”

These are boring tasks that nobody is looking for, but we have to do them because we don’t have the right automation or the right management tools in our environments. We hand our administrators a lot of mundane tasks, and they don’t really look forward to them.

How Hyperconverged Infrastructure Gives You 54 Minutes Back Every Hour

Innovation takes these burdens away from the systems administrator and frees up their time to do things that are not only more interesting, but also add to the bottom line of the company. They can better help drive the businesses and spend IT resources on something that makes the difference for the company’s bottom line.

Ultimately, you don’t want to be the one watching backups going through or restoring files. You want this to be automatic, with a couple of clicks, and then you spend your time on something more interesting.

Every systems administrator I talk to really likes this new way of working. I haven’t seen anyone coming back to me and saying, “Hey, can you take this automation and all this hyperconvergence away? I want to go back to the old way and do things manually so I know how to spend my eight hours of the day.” People have much more to do than the hours they have. This is just freeing them up to focus on the things that add value.

HCI to make IT life easier and easier 

Gardner: Before we close out, Thomas, how about some forward-looking thoughts about what innovation is going to bring next to HCI? We talked about the edge and intelligence, but is there more? What are we going to be talking about when it comes to innovation in two years in the HCI space?

Goepel: I touched on the edge. I think there will be a lot of things happening across the entire edge space, where HCI will clearly be able to make a difference. We will take advantage of the capabilities that HCI brings in all these segments, and it will drive innovation outside of the hyperconverged world, enabled by HCI.

But there are a couple of other things to look at. Self-healing using AI in IT troubleshooting, I think, will become a big innovation point in the HCI industry. What we are doing with HPE InfoSight is a start, but there is much more to come. This will continue to make the life of the systems administrator easier.

Ideally, we want HCI as a platform to be almost invisible to the end user because they shouldn’t care about the infrastructure. It will behave like a cloud, but just be on-premises and private, and in a better, more controlled way.

The next element of innovation you will see is HCI acting very similarly to a cloud environment. And some of the first steps with that are what we are doing around composability. This will drive forward to where you change the personality of the infrastructure depending on the workload needed. It becomes a huge pool of resources. And if you need it to look like a bare-metal server or a virtual server, a big one or a small one, you can just change it, and this will all be software controlled. I think that innovation element will then enable a lot of other innovations on top of it.
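The composability idea, carving server “personalities” out of a shared resource pool and returning them for reuse, can be sketched in a few lines. The class, field, and profile names below are illustrative assumptions, not HPE’s composability API.

```python
# Generic sketch of software-defined composition: a shared pool of compute
# and storage from which servers of different personalities (bare-metal or
# virtual, big or small) are carved out and later returned. Names are
# hypothetical, not a real composable-infrastructure API.

from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    kind: str        # "bare-metal" or "virtual"
    cores: int
    tb_storage: int

class ComposablePool:
    def __init__(self, cores: int, tb_storage: int):
        self.free_cores = cores
        self.free_tb = tb_storage

    def compose(self, profile: Profile) -> bool:
        """Carve a server with the requested personality out of the pool."""
        if profile.cores > self.free_cores or profile.tb_storage > self.free_tb:
            return False
        self.free_cores -= profile.cores
        self.free_tb -= profile.tb_storage
        return True

    def decompose(self, profile: Profile) -> None:
        """Return the server's resources to the pool for reuse."""
        self.free_cores += profile.cores
        self.free_tb += profile.tb_storage

pool = ComposablePool(cores=64, tb_storage=100)
db_node = Profile("db-node", "bare-metal", cores=48, tb_storage=60)
web_vm = Profile("web-vm", "virtual", cores=8, tb_storage=5)
pool.compose(db_node)
pool.compose(web_vm)
pool.decompose(db_node)  # same hardware, available for a new personality
```

The key design point is that hardware identity is decoupled from workload identity: the pool tracks only fungible capacity, so repurposing a server is a bookkeeping change in software rather than a physical reconfiguration.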

How to Achieve Composability Across Your Datacenter

If you take these three elements — AI, composability of the infrastructure, and driving that into the edge use cases — that will enable a lot of business innovation. It’s like the three legs of a stool. And that will help us drive even further innovation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
