Legacy IT evolves: How cloud choices like Microsoft Azure can conquer the VMware Tax

The next BriefingsDirect panel discussion explores cloud adoption strategies that can simplify IT operations, provide cloud deployment choice — and that make the most total economic sense.

Many data center operators face a crossroads now as they consider the strategic implications of new demands on their IT infrastructure and the new choices that they have when it comes to a cloud continuum of deployment options. These hybrid choices span not only cloud hosts and providers, but also platform technologies such as containers, intelligent network fabrics, serverless computing, and, yes, even good old bare metal.

For thousands of companies, the evaluation of their cloud choices also impacts how they can help conquer the “VMware tax” by moving beyond a traditional server virtualization legacy.

The complexity of choice goes further because long-term decisions about technology must also include implications for long-term recurring costs — as well as business continuity. As IT architects and operators seek to best map a future from a VMware hypervisor and traditional data center architecture, they also need to consider openness and lock-in.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our panelists review how public cloud providers and managed service providers (MSPs) are sweetening the deal to transition to predictable hybrid cloud models. The discussion is designed to help IT leaders find the right trade-offs and the best rationale for making the strategic decisions for their organization’s digital transformation.

The panel consists of David Grimes, Vice President of Engineering at Navisite; David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting; and Tim Crawford, CIO Strategic Advisor at AVOA. The discussion is moderated by BriefingsDirect’s Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly, over the past decade or two, countless virtual machines have been spun up to redefine data center operations and economics. And as server and storage virtualization were growing dominant, VMware was crowned — and continues to remain — a virtualization market leader. The virtualization path broadened over time from hypervisor adoption to platform management, network virtualization, and private cloud models. There have been a great many good reasons for people to exploit virtualization and adopt more of a software-defined data center (SDDC) architecture. And that brings us to where we are today.

Dominance in virtualization, however, has not translated into an automatic path from virtualization to a public-private cloud continuum. Now, we are at a crossroads, specifically for the economics of hybrid cloud models. Pay-as-you-go consumption models have forced a reckoning on examining your virtual machine past, present, and future.

My first question to the panel is … What are you now seeing as the top drivers for people to reevaluate their enterprise IT architecture path?

The cloud-migration challenge

Grimes: It’s a really good question. As you articulated it, VMware radically transformed the way we think about deploying and managing IT infrastructure, but cloud has again redefined all of that. And the things you point out are exactly what many businesses face today, which is supporting a set of existing applications that run the business. In most cases they run on very traditional infrastructure models, but they’re looking at what cloud now offers them in terms of being able to reinvent that application portfolio.

But that’s going to be a multiyear journey in most cases. One of the things that I think about as the next wave of transformation takes place is how we enable development in these new models, such as containers and serverless, using all of the platform services of the hyperscale cloud. How do we bring those to the enterprise in a way that will keep them adjacent to the workloads? Separating the application from its data is very challenging.

Gardner: Dave, organizations would probably have it easier if they’re just going to go from running their on-premises apps to a single public cloud provider. But more and more, we’re quite aware that that’s not an easy or even a possible shift. So, when organizations are thinking about the hybrid cloud model, and moving from traditional virtualization, what are some of the drivers to consider for making the right hybrid cloud model decision, where they can do both on-premises private cloud as well as public cloud?

Know what you have, know what you need

Linthicum: It really comes down to the profiles of the workloads, the databases, and the data that you’re trying to move. And one of the things that I tell clients is that cloud is not necessarily something that’s automatic. Typically, they are going to be doing something that may be even more complex than they have currently. But let’s look at the profiles of the existing workloads and the data — including security, governance needs, what you’re running, what platforms you need to move to — and that really kind of dictates which resources we want to put them on.

As an architect, when I look at the resources out there, I see traditional systems, I see private clouds, virtualization — such as VMware — and then the public cloud providers. And many times, the choice is going to be all four. Having pragmatic hybrid clouds, traditional systems paired with private and public clouds, means running multiple clouds at the same time. And so, this really becomes an analysis of the existing as-is state. The to-be state then follows functionally from the business requirements that you see. So, it’s a little easier than I think most people think, but the outcome is typically going to be more expensive and more complex than they originally anticipated.

Gardner: Tim Crawford, do people under-appreciate the complexity of moving from a highly virtualized on-premises, traditional data center to hybrid cloud?

Crawford: Yes, absolutely. Dave’s right. There are a lot of assumptions that we take as IT professionals and we bring them to cloud, and then find that those assumptions kind of fall flat on their face. Many of the myths and misnomers of cloud start to rear their ugly heads. And that’s not to say that cloud is bad; cloud is great. But we have to be able to use it in a meaningful way, and that’s a very different way than how we’ve operated our corporate data centers for the last 20, 30, or 40 years. It’s almost better if we forget what we’ve learned over the last 20-plus years and just start anew, so we don’t bring forward some of those assumptions.

And I want to touch on something else that I think is really important here, which has nothing to do with technology but has to do with organization and culture, and some of the other drivers that go into why enterprises are leveraging cloud today. And that is that the world is changing around us. Our customers are changing, the speed in which we have to respond to demand and need is changing, and our traditional corporate data center stacks just aren’t designed to be able to make those kinds of shifts.

And so that’s why it’s going to be a mix of cloud and corporate data centers. We’re going to be spread across these different modes like peanut butter in a way. But having the flexibility, as Dave said, to leverage the right solution for the right application is really, really important. Cloud presents a new model because our needs could not be fulfilled in the past.

Gardner: David Grimes, application developers helped drive initial cloud adoption. These were new apps and workloads of, by, and for the cloud. But when we go to enterprises that have a large on-premises virtualization legacy — and are paying high costs as a result — how frequently are we seeing people move existing workloads into a cloud, private or public? Is that gaining traction now?

Lift and shift the workload

Grimes: It absolutely is. That’s really been a core part of our business for a while now, certainly the ability to lift and shift out of the enterprise data center. As Dave said, the workload is the critical factor. You always need to understand the workload to know which platform to put it on. That’s a given. With a lot of that existing legacy application stacks running in traditional infrastructure models, very often those get lifted and shifted into a like-model — but in a hosting provider’s data center. That’s because many CIOs have a mandate to close down enterprise data centers and move to the cloud. But that does, of course, mean a lot of different things.

You mentioned the push by developers to get into the cloud, and really that was what I was alluding to in my earlier comments. Such a reinventing of the enterprise application portfolio has often been led by the development that takes place within the organization. Then, of course, there are all of the new capabilities offered by the hyperscale clouds — all of them, but notably some of the higher-level services offered by Azure, for example. You’re going to end up in a scenario where you’ve got workloads that best fit in the cloud because they’re based on the services that are now natively embodied and delivered as-a-service by those cloud platforms.

But you’re going to still have that legacy stack that still needs to leave the enterprise data center. So, the hybrid models are prevailing, and I believe will continue to prevail. And that’s reflected in Microsoft’s move with Azure Stack, of making much of the Azure platform available to hosting providers to deliver private Azure in a way that can engage and interact with the hyperscale Azure cloud. And with that, you can position the right workloads in the right environment.

Gardner: Now that we’re into the era of lift and shift, let’s look at some of the top reasons why. We will ask our audience what their top reasons are for moving off of legacy environments like VMware. But first let’s learn more about our panelists. David Grimes, tell us about your role at Navisite and more about Navisite itself.

Panelist profiles

Grimes: I’ve been with Navisite for 23 years, really most of my career. As VP of Engineering, I run our product engineering function. I do a lot of the evangelism for the organization. Navisite’s a part of Spectrum Enterprise, which is the enterprise division of Charter. We deliver voice, video, and data services to the enterprise client base of Navisite, and also deliver cloud services to that same base. It’s been a very interesting 20-plus years to see the continued evolution of managed infrastructure delivery models rapidly accelerating to where we are today.

Gardner: Dave Linthicum, tell us a bit about yourself, particularly what you’re doing now at Deloitte Consulting.

Linthicum: I’ve been with Deloitte Consulting for six months. I’m the Chief Cloud Strategy Officer, the thought leadership guy, trying to figure out where the cloud computing ball is going to be kicked and what the clients are doing, what’s going to be important in the years to come. Prior to that I was with Cloud Technology Partners. We sold that to Hewlett Packard Enterprise (HPE) last year. I’ve written 13 books. And I do the cloud blog on InfoWorld, and also do a lot of radio and TV. And the podcast, Dana.

Gardner: Yes, of course. You’ve been doing that podcast for quite a while. Tim Crawford, tell us about yourself and AVOA.

Crawford: After spending 20-odd years within the rank and file of the IT organization, also as a CIO, I bring a unique perspective to the conversation, especially about transformational organizations. I work with Fortune 250 companies, many of the Fortune 50 companies, in terms of their transformation, mostly business transformation. I help them explore how technology fits into that, but I also help them along their journey in understanding the difference between the traditional and transformational. Like Dave, I do a lot of speaking, a fair amount of writing and, of course, with that comes travel and meeting a lot of great folks through my journeys.

Survey says: It’s economics

Gardner: Let’s now look at our first audience survey results. I’d like to add that this is not scientific. This is really an anecdotal look at where our particular audience is in terms of their journey. What are their top reasons for moving off of legacy environments like VMware?

The top reason, at 75 percent, is a desire to move to a pay-as-you-go versus a cyclical CapEx model. So, the economics here are driving the move from traditional to cloud. They’re also looking to get off of dated software and hardware infrastructure. A lot of people are running old hardware; it’s not that efficient, it can be costly to maintain, and in some cases it is difficult or impossible to replace. There is a tie at 50 percent each between concern about the total cost of ownership, probably trying to get that down, and a desire to consolidate and integrate more apps and data, so seeking a transformation of their apps and data.

Coming up on the lower end of their motivations are complexity and support difficulties, and the developer preference for cloud models. So, the economics are driving this shift. That should come as no surprise, Tim, that a lot of people are under pressure to do more with less and to modernize at the same time. The proverbial changing of the wings of the airplane while keeping it flying. Is there any more you would offer in terms of the economic drivers for why people should consider going from a traditional data center to a hybrid IT environment?

Crawford: It’s not surprising, and the reason I say that is this economic upheaval actually started about 10 years ago when we really felt that economic downturn. It caused a number of organizations to say, “Look, we don’t have the money to be able to upgrade or replace equipment on our regular cycles.”

And so instead of having a four-year cycle for servers, or a five-year cycle for storage, or in some cases as much as a 10-plus-year cycle for network — they started kicking that can down the road. When the economic situation improved, rather than put money back into infrastructure, people started to ask, “Are there other approaches that we can take?” Now, at the same time, cloud was really beginning to mature and become a viable solution, especially for mid- to large-size enterprises. And so, the combination of those two opened the door to a different possibility that didn’t have to do with replacing the hardware in corporate data centers.

And then you have the third piece to that trifecta, which are the overall business demands. We saw a very significant change in customer buying behavior at the same time, which is people were looking for things now. We saw the uptick of Amazon use and away from traditional retail, and that trend really kicked into gear around the same time. All of these together lead into this shift to demand for a different kind of model, looking at OpEx versus CapEx.

Gardner: Dave, you and I have talked about this a lot over the past 10 years, economics being a driver. But you don’t necessarily always save money by going to cloud. To me, what I see in these results is not just seeking lower total cost — but simplification, consolidation and rationalization for what enterprises do spend on IT. Does that make sense and is that reflected in your practice?

Savings, strategy and speed

Linthicum: Yes, it is, and I think that the primary reason for moving to the cloud has morphed in the last five years from the CapEx saving money, operational savings model into the need for strategic value. That means gaining agility, ability to scale your systems up as you need to, to adjust to the needs of the business in the quickest way — and be able to keep up with the speed of change.

A lot of the Global 2,000 companies out there are having trouble maintaining change within the organization, to keep up with change in their markets. I think that’s really going to be the death of a thousand cuts if they don’t fix it. They’re seeing cloud as an enabling technology to do that.

In other words, with cloud they can have the resources they need, they can get to the storage levels they need, they can manage the data that they need — and do so at a price point that typically is going to be lower than the on-premise systems. That’s why they’re moving in that direction. But like we said earlier, in doing so they’re moving into more complex models. They’re typically going to be spending a bit more money, but the value of IT — in its ability to delight the business in terms of new capabilities — is going to be there. I think that’s the core metric we need to consider.

Gardner: David, at Navisite, when it comes to cost balanced by the business value from IT, how does that play out in a managed hosting environment? Do you see organizations typically wanting to stick to what they do best, which is create apps, run business processes, and do data science, rather than run IT systems in and out of every refresh cycle? How is this shaking out in the managed services business?

Grimes: That’s exactly what I’m seeing. Companies are really moving toward focusing on their differentiation. Running infrastructure has become almost like having power delivered to your data center. You need it, it’s part of the business, but it’s rarely differentiating. So that’s what we’re seeing.

One of the things in the survey results that does surprise me is the relatively low scoring for the operations complexity and support difficulties. With the pace of technology innovation, within VMware and the enterprise context but certainly within the cloud platforms, Azure in particular, the skillsets to use those platforms, manage them effectively, and take the biggest advantage of them are in exceedingly high demand. Many organizations are struggling to acquire and retain that talent. That’s certainly been my experience in dealing with my clients and prospects.

Gardner: Now that we know why people want to move, let’s look at what it is that’s preventing them from moving. What are the chief obstacles that are preventing those in our audience from moving off of a legacy environment like VMware?

There’s more than just a technological decision here. Dell Technologies is the major controller of VMware, even with VMware being a publicly traded company. But Dell Technologies, in order to go private, had to incur enormous debt, still in the vicinity of $48 billion. There have been reports recently of a reverse merger, in which VMware as a public company would take over Dell as a private company. The markets didn’t necessarily go for that, and it creates a bit of confusion and concern in the market. So, Dave, is this something IT operators and architects should concern themselves with when they’re thinking about which direction to go?

Linthicum: Ultimately, we need to look at the health of the company we’re buying hardware and software from in terms of their ability to be around over the next few years. The reality is that VMware, Dell, and [earlier Dell merger target] EMC are mega forces in terms of a legacy footprint in a majority of data centers. I really don’t see any need to be concerned about the viability of that technology. And when I look at viability of companies, I look at the viability of the technology, which can be bought and sold, and the intellectual property can be traded off to other companies. I don’t think the technology is going to go away, it’s just too much of a cash cow. And the reality is, whoever owns VMware is going to be able to make a lot of money for a long period of time.

Gardner: Tim, should organizations be concerned in that they want to have independence as VMware customers and not get locked in to a hardware vendor or a storage vendor at the same time? Is there concern about VMware becoming too tightly controlled by Dell at some point?

Partnership prowess

Crawford: You always have to think about who it is that you’re partnering with. These days when you make a purchase as an IT organization, you’re really buying into a partnership, so you’re buying into the vision and direction of that given company.

And I agree with Dave about Dell, EMC, and VMware in that they’re going to be around for a long period of time. I don’t think that’s really the factor to be as concerned with. I think you have to look beyond that.

You have to look at what it is that your business needs, and how does that start to influence changes that you make organizationally in terms of where you focus your management and your staff. That means moving up the chain, if you will, and away from the underlying infrastructure and into applications and things closely tied to business advantage.

As you start to do that, you start to look at other opportunities beyond just virtualization. You start breaking down the silos, you start breaking down the components into smaller and smaller components — and you look at the different modes of system delivery. That’s really where cloud starts to play a role.

Gardner: Let’s look now to our audience for what they see as important. What are the chief obstacles preventing you from moving off of a legacy virtualization environment? Again, the economics are quite prevalent in their responses.

By a majority, they are not sure that there are sufficient return on investment (ROI) benefits. They might be wondering why they should move at all. Their fear of a lock-in to a primary cloud model is also a concern. So, the economics and lock-in risk are high, not just from being stuck on a virtualization legacy — but also concern about moving forward. Maybe they’re like the deer in the headlights.

The third concern, a close tie, is issues around compliance, security, and regulatory restrictions on moving to the cloud. Complexity, and uncertainty that the migration process will be successful, are also of concern. They’re worried about that lift and shift process.

They are less concerned about lack of support for moving from the C-Suite or business leadership, of not getting buy-in from the top. So … If it’s working, don’t fix it, I suppose, or at least don’t break it. And the last issue of concern, very low, is that it’s still too soon to know which cloud choices are best.

So, it’s not that they don’t understand what’s going on with cloud. They’re concerned about risk, and the complexity of staying is a concern — but the complexity of moving is nearly as big a concern. David, does anything in these results jump out at you?

Feel the fear and migrate anyway

Grimes: Not being sure of the ROI benefits has been a common thread for quite some time when looking at these cloud migrations. But in our experience, what I’ve seen are clients choosing to move to a VMware cloud hosted by Navisite. They ultimately end up unlocking the business agility of their cloud, even if they weren’t 100 percent sure going into it that they would be able to.

But time and time again, moving away from the enterprise data center, repurposing the spend on IT resources to become more valuable to the business — as opposed to the traditional keeping the lights on function — has played out on a fairly regular basis.

I agree with the audience and the response here around the fear of lock-in. And it’s not just lock-in from a basic deployment infrastructure perspective, it’s fear of lock-in if you choose to take advantage of a cloud’s higher-level services, such as data analytics or all the different business things that are now as-a-service. If you buy into them, you certainly increase your ability to deliver. Your own pace of innovation can go through the roof — but you’re often then somewhat locked in.

You’re buying into a particular service model, a set of APIs, et cetera. It’s a form of lock-in. It is avoidable if you want to build in layers of abstraction, but it’s not necessarily the end of the world either. As with everything, there are trade-offs. You’re getting a lot of business value in your own ability to innovate and deliver quickly, yes, but it comes at the cost of some lock-in to a particular platform.

Gardner: Dave, what I’m seeing here is people explaining why hybrid is important to them, that they want to hedge their bets. All or nothing is too risky. Does that make sense to you, that what these results are telling us is that hybrid is the best model because you can spread that risk around?

IT in the balance between past and future

Linthicum: Yes, I think it does say that. I live this on a daily basis in terms of ROI benefits and concern about not having enough, and also the lock-in model. And the reality is that when you get to an as-is architecture state, it’s going to be a variety — as we mentioned earlier — of resources that we’re going to leverage.

So, this is not all about taking traditional systems — and the application workloads around traditional systems — and then moving them into the cloud and shutting down the traditional systems. That won’t work. This is about a balanced modernization of technology. And if you look at that, all bets are on the table — including traditional, private cloud, public cloud, and hybrid-based computing. Typically, the best path to success is looking at all of that. But like I said, the solution is really going to depend on the requirements of the business and what we’re looking at.

Going forward, these kinds of decisions are falling into a pattern, and I think that we’re seeing that this is not necessarily going to be pure-cloud play. This is not necessarily going to be pure traditional play, or pure private cloud play. This is going to be a complex architecture that deals with a private and public cloud paired with traditional systems.

And so, people who do want to hedge their bets will do that around making the right decisions that they leverage the right resources for the appropriate task at hand. I think that’s going to be the winning end-point. It’s not necessarily moving to the platforms that we think are cool, or that we think can make us more money — it’s about localization of the workloads on the right platforms, to gain the right fit.

Gardner: From the last two survey result sets, it appears incumbent on legacy providers like VMware to try to get people to stay on their designated platform path. But at the same time, because of this inertia to shift, because of these many concerns, the hyperscalers like Google Cloud, Microsoft Azure, and Amazon Web Services also need to sweeten their deals. What are these other cloud providers doing, David, when it comes to trying to assuage the enterprise concerns of moving wholesale to the cloud?

Grimes: There are certainly those hyperscale players, but there are also a number of regional public cloud players in the form of the VMware partner ecosystem. And I think when we talk about public versus private, we also need to make a distinction between public hyperscale and public cloud that still could be VMware-based.

I think one interesting thing that ties back to my earlier comments is when you look at Microsoft Azure and their Azure Stack hybrid cloud strategy. If you flip that 180 degrees, and consider the VMware on AWS strategy, I think we’ll continue to see that type of thing play out going forward. Both of those approaches actually reflect the need to be able to deliver the legacy enterprise workload in a way that is adjacent both in technology equivalence and in latency. Because one thing that’s often overlooked is the need to examine hybrid cloud deployment models in terms of the acceptable latency between applications that are inherently integrated. That can often be a deal-breaker for a successful implementation.

What we’ll see is this continued evolution of ensuring that we can solve what I see as a decade-forward problem. And that is, as organizations continue to reinvent their applications portfolio they must also evolve the way that they actually build and deliver applications while continuing to be able to operate their business based on the legacy stack that’s driving day-to-day operations.
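
Grimes’s latency point can be tested before any workload moves. As a minimal sketch (the hostnames, port, and 5 ms budget below are hypothetical assumptions, not anything Navisite or Microsoft prescribes), a simple TCP round-trip probe run from the application tier against each candidate database location can flag placements that would exceed an acceptable-latency budget:

```python
import socket
import statistics
import time

def tcp_round_trip_ms(host: str, port: int, samples: int = 5, timeout: float = 2.0) -> float:
    """Measure median TCP connect time (ms) to host:port as a rough latency proxy."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; we only care about handshake time
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)

# Hypothetical placements: the app tier stays put, and we compare two
# candidate database locations (addresses are made up for the example).
CANDIDATES = {
    "adjacent-private-cloud-db": ("10.0.12.5", 5432),
    "hyperscale-region-db": ("db.example-region.net", 5432),
}
LATENCY_BUDGET_MS = 5.0  # assumed acceptable budget for a chatty legacy app

if __name__ == "__main__":
    for name, (host, port) in CANDIDATES.items():
        try:
            rtt = tcp_round_trip_ms(host, port)
            verdict = "OK" if rtt <= LATENCY_BUDGET_MS else "exceeds budget"
            print(f"{name}: {rtt:.1f} ms ({verdict})")
        except OSError as err:
            print(f"{name}: unreachable ({err})")
```

In practice, sustained application-level measurements matter more than a connect-time probe, but even a crude check like this surfaces the adjacency problem early.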

Moving solutions

Gardner: Our final survey question asks: What are your current plans for moving apps and data from a legacy environment like VMware, from a traditional data center?

And two strong answers out of the offerings come out on top. Public clouds such as Microsoft Azure and Google Cloud, and a hybrid or multi-cloud approach. So again, they’re looking at the public clouds as a way to get off of their traditional infrastructure — but they’re not looking for just one cloud or a lock-in; they’re looking at a hybrid or multi-cloud approach.

Coming up zero, surprisingly, is VMware on AWS, which you just mentioned, David. Private cloud hosted and private cloud on-premises both come up at about 25 percent, along with no plans to move. So, staying on-premises in a private cloud has traction for some, but for those that want to move to the dominant hyperscalers, a multi-cloud approach is clearly the favorite.

Linthicum: I thought there would be a few that would pick VMware on AWS, but it looks like the audience doesn’t necessarily see that that’s going to be the solution. Everything else is not surprising. It’s aligned with what we see in the marketplace right now. Public cloud movement to Azure and Google Cloud, and also the movement to complex clouds like hybrid and multi-cloud, seem to be the two trends worth watching right now in the space, and this is reflective of that.

Gardner: Let’s move our discussion on. It’s time to define the right trade-offs and rationale when we think about these taxing choices. We know that people want to improve, they don’t want to be locked in, they want good economics, and they’re probably looking for a long-term solution.

Now that we’ve mentioned it several times, what is it about Azure and Azure Stack that provides appeal? Microsoft’s cloud model seems to be differentiated in the market, by offering both a public cloud component as well as an integrated – or adjacent — private cloud component. There’s a path for people to come onto those from a variety of different deployment histories including, of course, a Microsoft environment — but also a VMware environment. What should organizations be thinking about, what are the proper trade-offs, and what are the major concerns when it comes to picking the right hybrid and multi-cloud approach?

Strategic steps on the journey

Grimes: At the end of the day, it’s ultimately a journey and that journey requires a lot of strategy upfront. It requires a lot of planning, and it requires selecting the right partner to help you through that journey.

Because whether you’re planning an all-in on Azure, or an all-in on Google Cloud, or you want to stay on VMware but get out of the enterprise data center, as Dave has mentioned, the reality is everything is much more complex than it seems. And to maximize the value of the models and capabilities that are available today, you’re almost necessarily going to end up in a hybrid deployment model — and that means you’re going to have a mix of technologies in play, a mix of skillsets required to support them.

And so I think one of the key things that folks should do is consider carefully how they partner. Regardless of where they are in that journey, whether on step one or step three, continuing that journey is going to depend on selecting the right partner to help them.

Gardner: Dave, when you’re looking at risk versus reward, cost versus benefits, when you’re wanting to hedge bets, what is it about Microsoft Azure and Azure Stack in particular that help solve that? It seems to me that they’ve gone to great pains to anticipate the state of the market right now and to try to differentiate themselves. Is there something about the Microsoft approach that is, in fact, differentiated among the hyperscalers?

A seamless secret

Linthicum: The paired private and public cloud, with similar infrastructures and similar migration paths, and dynamic migration paths, meaning you could move workloads between them — at least this is the way that it’s been described — is going to be unique in the market. Kind of the dirty little secret.

It’s going to be very difficult to port from a private cloud to a public cloud because most private clouds are not built on AWS or Google technology; those providers don’t make private clouds. Therefore, you have to port your code between the two, just like you’ve had to port systems in the past. And the normal issues about refactoring and retesting, and all the other things, really come home to roost.

But Microsoft could have a product that provides a bit more of a seamless capability of doing that. And the great thing about that is I can really localize on whatever particular platform I’m looking at. And if I, for example, “mis-localize” or I misfit, then it’s a relatively easy thing to move it from private to public or public to private. And this may be at a time where the market needs something like that, and I think that’s what is unique about it in the space.

Gardner: Tim, what do you see as some of the trade-offs, and what is it about a public, private hybrid cloud that’s architected to be just that — that seemingly Microsoft has developed? Is that differentiating, or should people be thinking about this in a different way?

Crawford: I actually think it’s significantly differentiating, especially when you consider the complexity that exists within the mass of the enterprise. You have different needs, and not all of those needs can be serviced by public cloud, not all of those needs can be serviced by private cloud.

There’s a model that I use with clients to go through this, and it’s something that I used when I led IT organizations. When you start to pick apart these pieces, you start to realize that some of your components are well-suited for software as a service (SaaS)-based alternatives, some of the components and applications and workloads are well-suited for public cloud, some are well-suited for private cloud.

A good example of that is if you have sovereignty issues, or compliance and regulatory issues. And then you’ll have some applications that just aren’t ready for cloud. You’ve mentioned lift and shift a number of times, and for those that have been down that path of lift and shift, they’ve also gotten burnt by that, too, in a number of ways.

And so, you have to be mindful of what applications go in what mode, and I think the fact that you have a product like Azure Stack and Azure being similar, that actually plays pretty well for an enterprise that’s thinking about skillsets, thinking about your development cycles, thinking about architectures and not having to create, as Dave was mentioning, one for private cloud and a completely different one for public cloud. And if you get to a point where you want to move an application or workload, then you’re having to completely redo it over again. So, I think that Microsoft combination is pretty unique, and will be really interesting for the average enterprise.
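
The segmentation model Crawford describes can be approximated with a few explicit rules. The sketch below is only an illustration of that kind of triage; the workload attributes, rule order, and example portfolio are assumptions made up for the example, not Crawford’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    commodity_function: bool   # e.g., email, CRM: better bought as SaaS
    data_sovereignty: bool     # data must stay in a specific jurisdiction
    regulated: bool            # compliance regime restricts multi-tenant hosting
    cloud_ready: bool          # architecture tolerates cloud (stateless, automatable)

def placement(w: Workload) -> str:
    """Very rough triage of a workload into a deployment mode."""
    if w.commodity_function:
        return "SaaS"
    if not w.cloud_ready:
        return "retain on-premises / modernize first"
    if w.data_sovereignty or w.regulated:
        return "private or hosted private cloud"
    return "public cloud"

# Hypothetical application portfolio
portfolio = [
    Workload("corporate email", True, False, False, True),
    Workload("payroll engine", False, True, True, True),
    Workload("legacy billing batch", False, False, True, False),
    Workload("customer web front end", False, False, False, True),
]

for w in portfolio:
    print(f"{w.name}: {placement(w)}")
```

The value is less in these specific rules than in forcing each application to declare the attributes, such as sovereignty, compliance, and cloud readiness, that drive its placement.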

Gardner: From the managed service provider (MSP) perspective, at Navisite you have a large and established hosted VMware business, and you’re helping people transition and migrate. But you’re also looking at the potential market opportunity for an Azure Stack and a hosted Azure Stack business. What is it for the managed hosting provider that might make Microsoft’s approach differentiated?

A full-spectrum solution

Grimes: It comes down to what both Dave and Tim mentioned. Having a light stack and being able to be deployed in a private capacity, which also — by the way — affords the ability to use bare metal adjacency, is appealing. We haven’t talked a lot about bare metal, but it is something that we see in practice quite often. There are bare metal workloads that need to be very adjacent, i.e., LAN-adjacent, to the virtualization-friendly workloads.

Being able to have the combination of all three of those things is what makes Azure Stack attractive to a hosting provider such as Navisite. With it, we can solve the full spectrum of the needs of the client, covering bare metal, private cloud, and hyperscale public — and really in a seamless way — which is the key point.

Gardner: It’s not often you can be as many things to as many people as that given the heterogeneity of things over the past and the difficult choices of the present.

We have been talking about these many cloud choices in the abstract. Let’s now go to a concrete example. There’s an organization called Ceridian. Tell us about how they solved their requirements problems.

Grimes: Ceridian is a global human capital management company, global being a key point. They are growing like gangbusters and have been with Navisite for quite some time. It’s been a very long journey.

But one thing about Ceridian is they have had a cloud-first strategy. They embraced the cloud very early. A lot of those barriers to entry that we saw, and have seen over the years, they looked at as opportunity, which I find very interesting.

Requirements around security and compliance are critical to them, but they also recognized that a SaaS provider that does a very small set of IT services — delivering managed infrastructure with security and compliance — is actually likely to be able to do that at least as effectively, if not more effectively, than doing it in-house, and at a competitive and compelling price point as well.

So some of their challenges really were around all the reasons that we see, that we talked about here today, and see as the drivers to adopting cloud. It’s about enabling business agility. With the growth that they’ve experienced, they’ve needed to be able to react quickly and deploy quickly, and to leverage all the things that virtualization and now cloud enable for the enterprises. But again, as I mentioned before, they worked closely with a partner to maximize the value of the technologies and ensure that we’re meeting their security and compliance needs and delivering everything from a managed infrastructure perspective.

Overcoming geographical barriers

One of the core challenges that they had with that growth was a need to expand into geographies where we don’t currently operate our hosting facilities, so Navisite’s hosting capabilities. In particular, they needed to expand into Australia. And so, what we were able to do through our partnership with Microsoft was basically deliver to them the managed infrastructure in a similar way.

This is actually an interesting use case in that they’re running VMware-based cloud in our data center, but we were able to expand them into a managed Azure-delivered cloud locally out of Australia. Of course, one thing we didn’t touch on today — but is a driver in many of these decisions for global organizations — is a lot of the data sovereignty and locality regulations are becoming increasingly important. Certainly, Microsoft is expanding the Azure platform. And so their presence in Australia has enabled us to deliver that for Ceridian.

As I think about the key takeaways and learnings from this particular example, Ceridian had a very clear, very well thought out cloud-centric and cloud-first strategy. You, Dana, mentioned it earlier, that that really enables them to continue to keep their focus on the applications because that’s their bread and butter, that’s how they differentiate.

By partnering, they’re able to not worry about the keeping the lights on and instead focus on the application. Second, of course, is they’re a global organization and so they have global delivery needs based on data sovereignty regulations. And third, and I’d say probably most important, is they selected a partner that was able to bring to bear the expertise and skillsets that are difficult for enterprises to recruit and retain. As a result, they were able to take advantage of the different infrastructure models that we’re delivering for them to support their business.

Gardner: We’re now going to go to our question and answer portion. Kristen Allen of Navisite is moderating our Q and A section.

Bare metal and beyond

Kristen Allen: We have some very interesting questions. The first one ties into a conversation you were just having, “What are the ROI benefits to moving to bare metal servers for certain workloads?”

Grimes: Not all software licensing is yet virtualization-friendly, or at least virtualization-platform-agnostic, and so there are really two things that play into the selection of bare metal, at least in my experience. There is a model of bare metal computing, small cartridge-based computers, that is very specific to certain workloads. But when we talk in more general terms for a typical enterprise workload, it really revolves around either software licensing incompatibility with some of the cloud deployment models or a belief that there is a performance need that requires bare metal, though in practice I think that’s more optics than reality. But those are the two things that typically drive bare metal adoption in my experience.

Linthicum: Ultimately, people want direct access to the underlying platforms, and if there’s some performance reason, or some security reason, or some kind of direct access needed to the input-output systems, we do see these kinds of one-offs for bare metal. I call them special needs applications. I don’t see it as something that’s going to be widely adopted, but from time to time it’s needed, and the capabilities are there depending on where you want to run it.

Allen: Our next question is, “Should there be different thinking for data workloads versus apps ones, and how should they be best integrated in a hybrid environment?”

Linthicum: Ultimately, the compute aspect of an application and the data aspect of that application really should be decoupled. Then, if you want to, you can assemble them on different platforms. I would typically think that we’re going to place them either on all public or all private, but you can certainly do one on private and one on public, and one on public and one on private, and link them that way.

As we’re migrating forward, the workloads are getting even more complex. And there are some application workloads that I’ve seen, that I’ve developed, where the database would be partitioned across the private cloud and the public cloud for disaster recovery (DR) purposes or performance purposes, and things like that. So, it’s really up to you as the architect as to where you’re going to place the data in relation to the workload. Typically, it’s a good idea to place them as close to each other as they can be so they have the highest bandwidth to communicate with each other. However, it’s not necessary, depending on what the application is doing.
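
One way to read Linthicum’s advice is to keep the data location out of the application code entirely, so compute and data can land on different platforms, or be re-pointed to a DR partition, without refactoring. The snippet below is a minimal sketch of that idea; the environment variable names and connection strings are hypothetical:

```python
import os

# The application never hard-codes where its data lives; it reads the location
# at startup, so compute can run in a public cloud while the primary database
# stays in a private cloud (or vice versa), and a DR replica can be swapped in.
PRIMARY_DSN = os.environ.get(
    "APP_DB_PRIMARY_DSN",
    "postgresql://app@private-cloud-db.internal:5432/orders",  # assumed default
)
DR_REPLICA_DSN = os.environ.get(
    "APP_DB_DR_DSN",
    "postgresql://app@public-cloud-replica.example.net:5432/orders",
)

def choose_dsn(primary_healthy: bool) -> str:
    """Fail over to the DR replica when the primary partition is unavailable."""
    return PRIMARY_DSN if primary_healthy else DR_REPLICA_DSN

if __name__ == "__main__":
    print("Active data endpoint:", choose_dsn(primary_healthy=True))
```

As a usage example, running the same container image in a public cloud with APP_DB_PRIMARY_DSN pointed at a private-cloud database keeps the compute and data decoupled in the way described above, at the cost of whatever latency sits between the two.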

Gardner: David, maybe organizations need to place their data in a certain jurisdiction but might want to run their apps out of a data center somewhere else for performance and economics?

Grimes: The data sovereignty requirement is something that we touched on and that’s becoming increasingly important and increasingly, that’s a driver too, in deciding where to place the data.

Just following on Dave’s comments, I agree 100 percent. If you have the opportunity to architect a new application, I think there’s some really interesting choices that can be made around data placement, network placement, and decoupling them is absolutely the right strategy.

I think the challenge many organizations face is having that mandate to close down the enterprise data center and move to the “cloud.” Of course, we know that “cloud” means a lot of different things but, do that in a legacy application environment and that will present some unique challenges as well, in terms of actually being able to sufficiently decouple data and applications.

Curious, Dave, if you’ve had any successes in kind of meeting that challenge?

Linthicum: Yes. It depends on the application workload and how flexible the applications are and how the information is communicating between the systems; also security requirements. So, it’s one of those obnoxious consulting responses, “it depends” as to whether or not we can make that work. But the thing is the architecture is a legitimate architectural pattern that I’ve seen before and we’ve used it.

Allen: Okay. How do you meet and adapt for Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements and still maintain stable connectivity for the small business?

Grimes: HIPAA, like many of the governance programs, is a very large and co-owned responsibility. I think from our perspective at Navisite, part of Spectrum Enterprise, we have the unique capability of delivering both the network services and the cloud services in an integrated way that can address the particular question around stable connectivity. But ultimately, HIPAA is a blended responsibility model where the infrastructure provider, the network provider, the provider managing up to whatever layer of the application stack will have certain obligations. But then the partner, the client, would also retain some obligations as well.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Navisite.


How new tools help any business build ethical and sustainable supply chains

The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.

We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks — from use of illegal and unethical labor practices to hidden environmental malpractices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation are our panelists: Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft; and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?

Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.

First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good.

Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society — and not just generating dividends and capital growth for shareholders.

Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent — particularly Millennials and Generation Z — and they are more likely to get better investment rates as well.

Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your costumers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?

Better business decisions with AI

Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.

We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings.

These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.

Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba. 

Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.

McVeigh: There’s been a lot of shift in the market in the last five to 10 years. I think that predominantly it really shifted with environmental regulatory compliance. Companies were being forced to look at issues that they never really had to dig underneath and understand — not just their own footprint, but to understand their supply chain’s footprint. And then 10 years ago, of course, we had the California Transparency Act, and then from that we had the UK Modern Slavery Act, and we keep seeing more governance compliance requirements.

But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.

We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.

At Verisk Maplecroft, we came to business about 12 years ago, which was really interesting because it came out of a number of individuals who were getting their master’s degrees in supply-chain risk. They began to look at how to quantify risk issues that are so difficult and complex to understand and to make it simple, easy, and intuitive.

They began with a subset of risk indices. I think probably initially we looked at 20 risks across the board. Now we’re up to more than 200 risk issues across four thematic issue categories. We begin at the highest pillar of thinking about risks — like politics, economics, environmental, and social risks. But under each of those risk themes are specific issues that we look at. So, if we’re talking about social risk, we’re looking at diversity and labor, and then under each of those risk issues we go a step further, and it’s the indicators — it’s all that data matrix that comes together that tells the actionable story.

Some companies still just want to check a [compliance] box. Other companies want to dig deeper — but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips.
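
The theme, issue, and indicator hierarchy McVeigh describes maps naturally onto a nested structure whose indicator values roll up into issue and theme scores. The sketch below is purely illustrative; the indicators, values, and equal-weight averaging are invented for the example and are not Verisk Maplecroft’s methodology:

```python
# Each theme holds issues; each issue holds indicator scores on a 0-10 scale
# (10 = highest risk). The roll-ups here are plain averages for simplicity.
risk_index = {
    "social": {
        "labor": {"forced_labor": 7.2, "child_labor": 4.1, "working_hours": 5.5},
        "diversity": {"gender_pay_gap": 6.0, "discrimination_cases": 3.8},
    },
    "environmental": {
        "water_stress": {"basin_scarcity": 8.3, "regulatory_pressure": 5.9},
    },
}

def average(values):
    values = list(values)
    return sum(values) / len(values)

def issue_score(indicators):
    """Roll indicator values up into a single issue-level score."""
    return average(indicators.values())

def theme_score(issues):
    """Roll issue scores up into a theme-level score."""
    return average(issue_score(ind) for ind in issues.values())

for theme, issues in risk_index.items():
    print(f"{theme}: {theme_score(issues):.1f}")
    for issue, indicators in issues.items():
        print(f"  {issue}: {issue_score(indicators):.1f}")
```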

Global compliance

Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.

Rakowski: EcoVadis conducts business sustainability ratings, and the way that we’re used in the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé will come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”

They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business.

What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out a questionnaire to the supplier, what we call a right-sized questionnaire, and the supplier responds to material questions based on what kind of goods or services they provide, what geography they are in, and what size of business they are.

Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score.

And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices.
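
As a rough illustration of the 0-to-100 roll-up Rakowski describes, the sketch below weights per-theme sub-scores into an overall rating. The themes mirror the ones she lists, but the weights and example scores are assumptions for the example, not EcoVadis’s actual methodology:

```python
# Hypothetical theme weights; a real methodology would adjust them by the
# supplier's industry, size, and geography ("right-sized" questionnaires).
THEME_WEIGHTS = {
    "environment": 0.30,
    "labor_and_human_rights": 0.30,
    "ethics": 0.20,
    "sustainable_procurement": 0.20,
}

def overall_score(theme_scores: dict) -> float:
    """Weighted 0-100 score from per-theme 0-100 sub-scores."""
    assert abs(sum(THEME_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(THEME_WEIGHTS[t] * theme_scores[t] for t in THEME_WEIGHTS)

# Example supplier assessed on evidence of policies, actions, and results
supplier = {
    "environment": 62,
    "labor_and_human_rights": 71,
    "ethics": 55,
    "sustainable_procurement": 48,
}
print(f"Overall sustainability score: {overall_score(supplier):.0f}/100")
```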

Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own — and SAP Ariba is now aggregating the aggregators.

How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from?

Timely insights, best business decisions

Harris: Beyond some of the technological challenges of aggregating this data across different providers, there is the need to link it to the relevant aspects of the procurement process in support of what our customers are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions.

The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process — trying to surface those risk insights — but also understanding that where there’s risk, there is opportunity.

So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.

We’re looking at not just trying to support risk through that source-to-settlement process — trying to surface those risk insights — but also understanding that where there is risk there is opportunity.

Gardner: And what about the opportunity for compression of time? Not all challenges are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?

Harris: If we look at risk aspects such as natural disasters, nothing demands a timelier reaction. When our data sources alert us to an earthquake, for example, we’re able to very quickly ascertain which suppliers are affected and where their distribution centers and factories are.

When you can understand what the impacts are going to be very quickly, and how to respond to that, your mitigation plan is going to prevent the supply chain from coming to a complete halt.
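
To make the idea concrete, here is a minimal sketch of the kind of geospatial cross-check Harris describes, matching an earthquake alert against known supplier locations. The supplier records, coordinates, and alert radius are hypothetical placeholders, not SAP Ariba's actual data model.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical supplier-site records; a real system would pull these
# from the procurement master data.
supplier_sites = [
    {"supplier": "Acme Components", "site": "Osaka DC", "lat": 34.69, "lon": 135.50},
    {"supplier": "Globex Plastics", "site": "Lyon plant", "lat": 45.76, "lon": 4.84},
]

def affected_sites(quake_lat, quake_lon, radius_km=300):
    """Return supplier sites within radius_km of an earthquake epicenter."""
    return [
        s for s in supplier_sites
        if haversine_km(quake_lat, quake_lon, s["lat"], s["lon"]) <= radius_km
    ]

# Example: an alert with an epicenter near Osaka flags the Osaka DC.
print(affected_sites(34.7, 135.4))
```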

Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them?

AI risk assessment reaps rewards

Harris: If you look at AI, this is a great technology, and what we’re trying to do is simplify the process for our customers of figuring out how they can take action on the information we’re providing. Rather than them having to be experts in risk analysis and doing all of that analysis themselves, AI allows us to surface those risks through the technology — through our procurement suite, for example — to inform the decisions they’re making.

For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time.

A determination can be made based on the goods or services I’m looking to award to the supplier, the part of the world they operate in, or where I’m looking to distribute those goods or services. If a particular supplier has a risk issue that we feel is too high, we can act on that. That might mean we postpone the award decision until we do some further investigation, or it may mean we choose not to award that business at all. So, AI can really help in those kinds of areas.

Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?

Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.

We collect a great amount of documentation from the suppliers we’re evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information our analysts are looking for and compress the evaluation time from about six or seven hours per supplier down to three or four hours. That essentially allows us to double our workforce of analysts in a heartbeat.

AI is helping us scan through the documentation more quickly. That way we can find the relevant information that our analysts are looking for, allowing us to double our workforce of analysts.

The other thing it’s doing is helping scan through material news feeds, so we’re collecting more than 2,500 news sources from around all kinds of reports, from China Labor Watch or OSHA. These technologies help us scan through those reports from material information, and then puts that in front of our analysts. It helps them then to surface that real-time news that we’re for sure at that point is material.

And that way we we’re combining AI with real human analysis and validation to make sure that what we we’re serving is accurate and relevant.

Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.

We use ML in that context to find an issue: an article, for example, or a piece of bad news or negative media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article.

By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contracts reduction, it may be the company just lost a piece of business and they had to downsize, and so that presents a potential risk to our business because maybe this supplier is on their way out of business.”

And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly.
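
As an illustration of the general technique (not SAP Ariba's actual models), here is a small sketch that scores news text for one risk type with a TF-IDF classifier in scikit-learn. The training snippets and labels are invented; a production system would train on large labeled corpora and add entity resolution to match each article to a tracked supplier.

```python
# A minimal sketch of scoring news text for one risk type; the training
# snippets and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Supplier announces plant closure after losing major contract",
    "Firm reports record quarterly revenue and new hiring plans",
    "Regulator fines manufacturer over labor violations",
    "Company opens new distribution center to meet demand",
]
train_labels = [1, 0, 1, 0]  # 1 = potential supplier risk, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

article = "Contract reduction forces supplier to downsize workforce"
risk_probability = model.predict_proba([article])[0][1]
print(f"risk score: {risk_probability:.2f}")
```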

Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem?

The AI-aggregation advantage

McVeigh: As an aggregator of data, it’s basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.

We look at many indices. We actually just revamped our social indices a couple of years ago.

Before that you had a human who was sitting there, maybe they were having a bad day and they just sort of checked the box. But now we have the capabilities to validate that data against true sources.

Just as Emily mentioned, we were able to significantly reduce the number of human-rights analysts it takes to create an index, which frees them to work on additional types of projects for our customers. That, in turn, helps our customers utilize the data that’s being automated and generated for them.

We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it.

Gardner: Let’s get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this, how people intercept with it. It sounds great in theory, but how does this actually work in practice?

Partnership prowess

Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.

We do not yet have the solution generally available, so we are currently working on building out the integration with our partners. We have a number of common customers working with us as what we call design partners. There’s no better design partner, ultimately, than a customer already using these solutions from our companies. We anticipate making this available in the Q3 2018 time frame.

And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly.

What it means is that users can get that aggregated view, that high-level view across all of these different risk types and metrics, in one place. However, if they ultimately need to get to the nth degree of detail, they have the ability to click through and go directly into the solutions from our partners here as well, to drill right down to that level of detail. The aim is to give them that high-level view to help them with their overall assessments of these suppliers.

Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?

Customers that have an active subscription to our combined solutions are then able to benefit from the integration and see all that data within SAP Ariba’s supplier risk solutions directly.

Harris: Yes, and that’s a great question. We already addressed that in our solutions today. We cover risk across more than 200 types, and we categorized those into four primary risk categories. The way the risk exposure score works is that any of the feeding attributes that go into that calculation the customer gets to decide on how they want to weigh those.

If I have more of a bias toward financial risk aspects, or toward ESG metrics, for example, then I can weight that part of the score, the algorithm, appropriately.
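
A simple sketch of that kind of configurable weighting appears below. The category names, sub-scores, and weights are hypothetical; the point is that the buyer, not the vendor, decides how each category contributes to the overall exposure score.

```python
# A simplified illustration of a configurable risk-exposure score; the
# category names, sub-scores, and weights are hypothetical.
category_scores = {          # 0 (low risk) to 100 (high risk), per category
    "environmental": 42,
    "social": 55,
    "governance": 30,
    "financial": 70,
}

# A customer biased toward financial risk might configure weights like this.
weights = {"environmental": 0.2, "social": 0.2, "governance": 0.1, "financial": 0.5}

def exposure_score(scores, weights):
    """Weighted average of category scores; weights are normalized first."""
    total = sum(weights.values())
    return sum(scores[c] * (w / total) for c, w in weights.items())

print(round(exposure_score(category_scores, weights), 1))  # e.g. 57.4
```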

Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well — or not so well.

Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency — how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?

Nice-to-have ROI

McVeigh: One of the things that we’re still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn’t happen.

How do you put a tangible dollar value on something that didn’t occur? What we can look at is data acquired over the past few years and understand that, as risk is reduced over time, companies can source from more suppliers, add diversity to their supply chain, or even pare back their supply chain, depending on how they want to move forward with their risk landscape and supplier diversification programs. It gives them the power to make those decisions faster and to act on them.

And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”

And, it’s how other companies are beginning to see that it’s not being pushed down on them anymore from these large retailers, these large organizations. It’s a choice they have to make to do better business. They are also realizing that there’s a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business process.

We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it — but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.

Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen?

Pay attention, be prepared

Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that.

From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great.

They can use their company’s resources to bet on supply-chain actors that are doing the right thing: that are paying living wages, that are not overworking their employees, that are not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are really a must, regardless of regulations.

Hopefully there are some positive mavericks out there who are willing to stake their reputations on moving in this direction. The power they have in the procurement function is great.

And so, it’s really those individuals that are willing to stand up, take a stand and think about how they are going to put in place a program that will really drive this culture into the business, and educate the business. Even if you’re starting from a very little group that’s dedicated to it, you can find a way to make it grow within a culture. I think it’s critical.

Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?

Synergistic risk management

Harris: Organizationally, there tend to be a couple of different teams inside a business that manage risk. On the one hand, there can be the governance, risk, and compliance team. On the other hand, there can be the corporate social responsibility team.

I think, first of all, bringing those two teams together in some capacity makes complete sense because there are synergies across them. They are both ultimately trying to achieve the same outcome for the business: safeguard it against unforeseen risks, and ensure that the business is doing the right thing in the first place, which itself helps guard against those risks.

I think getting the organizational model right, and also thinking about how they can best begin to map out their supply chains are key. One of the big challenges here, which we haven’t quite solved yet, is figuring out who are the players or supply-chain actors in that supply chain? It’s pretty easy to determine now who are the tier-one suppliers, but who are the suppliers to the suppliers — and who are the suppliers to the suppliers to the suppliers?

We’ve yet to actually build a better technology that can figure that out easily. We’re working on it; stay posted. But I think trying to compile that information upfront is great because once you can get that mapping done, our software and our partner software with EcoVadis and Verisk Maplecroft is here to surfaces those kinds of risks inside and across that entire supply chain.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Panel explores new ways to solve the complexity of hybrid cloud monitoring

The next BriefingsDirect panel discussion focuses on improving performance and cost monitoring of various IT workloads in a multi-cloud world.

We will now explore how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.

Our panel of Micro Focus experts will unpack new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. You will learn about their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.

We will also hear about new solutions and explore examples of how automation leverages machine learning (ML) and rapidly improves cloud management at a large Barcelona bank.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To share more about interesting new cloud trends, we are joined by Harald Burose, Director of Product Management at Micro Focus, based in Stuttgart; Ian Bromehead, Director of Product Marketing at Micro Focus, based in Grenoble, France; and Gary Brandt, Product Manager at Micro Focus, based in Sacramento. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing — and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.

Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.

They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.

Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.

The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure.

In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing — but without commensurate hybrid management and monitoring.

They can’t assure security or compliance and they cannot determine true and comparative costs — never mind gain optimization for efficiency across the cloud computing spectrum.

Old management into the cloud

But there’s more to fixing the equation of multi-cloud complexity than extending yesterday’s management means into the cloud. IT executives today recognize that IT operations’ divisions and adjustments must be handled in a much different way.

Even with the best data assets and access and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.

Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.

Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured — and at compelling prices.

Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.

Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.

IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.

We are on the cusp of being able to take advantage of ML to tackle the complexity of multi-cloud deployments and keep business services safe.

Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to now learn more about that survey information.

Ian, please tell us more about the survey findings.

IT leaders respond to their needs

Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.

More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.

The lion’s share, or nearly two-thirds, of this population that we surveyed are using between two to five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].

What are people doing with those multiple cloud instances? As to be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments — whether they’re Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.

What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.

The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.

The business impacts range from resources tied up in collaborating with the cloud vendor to try to solve these issues, to the extra time required to resolve issues that affect service level agreements (SLAs) and contractual commitments, to prolonged downtime.

What we regularly see is that the adoption of cloud often translates into a loss of transparency into what’s deployed, the health of what’s deployed, and how that can impact the business. This insight strongly shapes our investments and some of the solutions we will talk about. The primary concern is visibility into what’s being deployed — and what depends on internal, on-premises resources as well as private and public cloud instances.

People need to see what is impacting the delivery of services as a provider, and whether that’s due to issues with local or remote resources, or the connectivity between them. It’s compounded by the fact that people are interconnecting services, as we just saw in the survey, from multiple cloud providers. So the weak part could be anywhere; it could be any one of those links. The ability to know where those issues are isn’t coming fast enough for many people, with some 96 percent indicating that issues are being resolved too slowly.

How to gain better visibility?

What are the key changes that need to be addressed when monitoring hybrid IT environments? People have challenges with discovery, understanding, and visualizing what has actually been deployed, and how it impacts the end-to-end business.

They have limited access to the cloud infrastructure, and face things like inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics needed to properly understand what’s happening.

It shows some of the real challenges that people are facing. And as the world shifts to being more dependent on the services that they consume, traditional methods are not going to be properly adapted to the new environment. Newer solutions are needed. New ways of gaining visibility, and of measuring availability and performance, are going to be needed.

I think what’s interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to be able to properly understand how service delivery might be impacting their own businesses. For instance, you might think that IT is actually flying blind in the clouds as it were.

The cloud vendors are not providing the visibility. They are not providing enough information for people to be able to understand service delivery impacts.

So, one of my next questions was: Across the different monitoring types, what’s needed for the hybrid IT environment? What should people be focusing on? Security and infrastructure monitoring, better visibility, end-user experience monitoring, service delivery monitoring, and cloud costs all ranked highly among what people believe they need to be able to monitor. Whether you are a provider or a consumer, most people end up being both. Monitoring is really key.

People say they really need to span infrastructure monitoring, metrics monitoring, and end-user, security, and compliance monitoring. But even that’s not enough, because to properly govern service delivery you are going to have to have an eye on the costs — the cost of what’s being deployed — and how you can optimize the resources according to those costs. You need that analysis whether you are the consumer or the provider.

The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data to find the root causes of issues, even from a predictive perspective. Clearly, people expect scalability, and they expect to be able to use a big data platform.

Consumers of cloud services should be measuring what they are receiving and be capable of seeing what’s impacting service delivery. No one is really so naive as to say that infrastructure is somebody else’s problem. When it’s part of the service, equally impacting the service that you are paying for, and that you are delivering to your business users — then you had better have the means to see where the weak links are. That should be the minimum to seek, but there’s also the need to prove to your providers when they’re underperforming and to renegotiate what you pay for.

Ultimately, when you are sticking such composite services together, IT needs to become more of a service broker. We should be able to govern the aspects of detecting when the service is degrading.

So when that service degrades, workers’ productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.

So that, Dana, is the set of the different results that we got out of this survey.

A new need for analytics

Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.

Gary Brandt: Thanks, Dana. As the survey results outlined, and as Ian described, there are many challenges and numerous types of monitoring for enterprise hybrid IT environments. With the variety and volume of data generated in these complex hybrid environments, humans simply can’t look at dashboards or use traditional tools and make sense of the data efficiently. Nor can they take the necessary actions in a timely manner.

So how do we deal with all of this? It’s where analytics, advanced analytics via ML, really brings value. What’s needed is a set of automated capabilities such as those described in Gartner’s definition of AIOps, and these include management of traditional and streaming data, and ingestion of logs, wire data, metrics, and documents from many different types of sources in these complex hybrid environments.

Dealing with all of this, when you are not quite sure where to look and all of this information is coming in, requires advanced analytics and some clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is what Gartner is really trying to guide the market toward and show where the industry is moving. The key capabilities that they speak about are predictive analytics, the ability to find anomalies in vast amounts of data, and then pinpointing the root cause, or at least eliminating the noise so you can focus on those areas.

We are making this Gartner report available for a limited time. What we have also found is that people don’t have the time, or often the skill set, to deal with all of these activities. They need to focus on the business users and the different issues that come up in these hybrid environments, and the AIOps capabilities that Gartner speaks about are great for that.

But without the automation to drive the activities or the response that needs to occur, there is a missing piece. When we look at our survey results and what our respondents said, it was clear that well over 90 percent are telling us that automation is considered highly critical. You need to see which event or metric trend impacts a business service, and whether that service pertains to a local, on-premises solution or a remote solution in a cloud somewhere.

Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated, both so it can be declared more easily and, more importantly, so it can be kept up to date. In complex environments, things change rapidly and constantly.

Sense and significance of all that data?

Micro Focus’ approach uses analytics to make sense of this vast amount of data that’s coming in from these hybrid environments to drive automation. The automation of discovery, monitoring, service analytics, they are really critical — and must be applied across hybrid IT against your resources and map them to your services that you define.

Those are the vast amounts of data that we just described. They come in the form of logs and events and metrics, generated from lots of different sources in a hybrid environment across cloud and on-prem. You have to begin to use analytics as Gartner describes to make sense of that, and we do that in a variety of ways, where we use ML to learn behavior, basically of your environment, in this hybrid world.

And we need to be able to suggest what the most significant data is, what the significant information is in your messages, to really try to help find the needle in a haystack. When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to give them the chance to anticipate and to remediate issues before they disrupt the services in a company’s environment.
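
As a rough illustration of that baseline-learning idea (not Micro Focus's actual algorithms), here is a toy anomaly detector for a single metric stream that learns what is normal from a rolling window and flags strong deviations.

```python
# A toy illustration of baseline learning on one metric stream using a
# rolling mean and standard deviation; real AIOps engines use far richer
# models, but the idea (learn normal behavior, flag deviations) is the same.
from collections import deque
from statistics import mean, pstdev

class MetricBaseline:
    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent samples define "normal"
        self.threshold = threshold          # z-score that counts as anomalous

    def observe(self, value):
        """Return True if value deviates strongly from the learned baseline."""
        anomalous = False
        if len(self.window) >= 10:
            mu, sigma = mean(self.window), pstdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

baseline = MetricBaseline()
for latency_ms in [20, 22, 19, 21, 20, 23, 18, 22, 21, 20, 250]:
    if baseline.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms")   # fires on the 250 ms spike
```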

When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to remediate issues before they disrupt.

And then we take this further because we have the analytics capability that’s described by Gartner and others. We couple that with the ability to execute different types of automation as a means to let the operator, the operations team, have more time to spend on what’s really impacting the business and getting to the issues quicker than trying to spend time searching and sorting through that vast amount of data.

And we built this on different platforms. One of the key things that’s critical when you have this hybrid environment is to have a common way, or an efficient way, to collect information and to store information, and then use that data to provide access to different functionality in your system. And we do that in the form of microservices in this complex environment.

We like to refer to this as autonomous operations, and it’s part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.

Operations Bridge in more detail

Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.

We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.

Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.

All core use cases for Operations Bridge are underpinned by the Vertica big data analytics platform. We’re consolidating all the different types of data that we are getting — whether business transactions, IT infrastructure, application infrastructure, or business services data. All of that is moved into a single data repository and then reduced in order to understand what the original root cause is.

And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.

This all makes it easy for the stakeholders to understand what the status is and provide the right dashboarding, reporting via the right interface to the right user across the full hybrid cloud infrastructure.

As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We provide the ability to understand that connectivity through a dynamically updated model, and to show how these services are interconnecting — independent of the technology — whether deployed in a public cloud, a private cloud, or even a classical, non-cloud infrastructure. Customers can then understand how the pieces connect, and they can use the toolset, a modern HTML5-based interface, to navigate through it all and look at all the data in one place.

They are able to consolidate information from more than 250 different technologies into a single place: their log files, events, metrics, topology — everything together to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.
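
To give a flavor of what such consolidation involves, here is an illustrative sketch of normalizing events from different sources into one common shape before they land in a shared repository. The source formats and field names are invented stand-ins for the 250-plus integrations mentioned above.

```python
# An illustrative sketch of mapping source-specific payloads onto a common
# event schema; the source formats shown here are invented examples.
from datetime import datetime, timezone

def normalize(source, raw):
    """Map a source-specific payload onto a common event shape."""
    if source == "cloud_metric":
        return {
            "timestamp": raw["Timestamp"],
            "resource": raw["Dimensions"]["InstanceId"],
            "severity": "warning" if raw["Value"] > raw.get("Threshold", 80) else "info",
            "message": raw["MetricName"],
        }
    if source == "syslog":
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "resource": raw["host"],
            "severity": raw["level"],
            "message": raw["msg"],
        }
    raise ValueError(f"unknown source: {source}")

event = normalize("syslog", {"host": "db01", "level": "error", "msg": "disk 92% full"})
print(event)
```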

Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work for the different cloud stacks, whether a private cloud, your own stack implementations, a hyperconverged (HCI) stack like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.

We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.

Once we integrate those toolsets, we integrate them tightly with the native tools on Amazon or the Docker tools, for example. You can include those tools so that you can then automate processes from within our console.

Customers vote a top choice

And, best of all, we have been getting positive feedback from the cloud monitoring community, from the customers. That feedback helped earn us a Readers’ Choice Award from Cloud Computing Insider in 2017 for being ahead of the competition.

This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on the activity.

Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks — or potentially kick off provisioning tasks that solve whatever problems you can imagine. You can use this not only to identify the root cause, but also to automatically kick off a workflow to address the specific problem.
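
The mapping from a diagnosed root cause to an automated runbook can be pictured with a minimal sketch like the one below. The workflow names and event fields are hypothetical; a real deployment would call the orchestration engine's API rather than return a string.

```python
# A minimal sketch of mapping a diagnosed root cause to an automated
# remediation workflow; the workflow names and event fields are hypothetical.
REMEDIATION_WORKFLOWS = {
    "disk_full":     "cleanup_temp_and_expand_volume",
    "service_down":  "restart_service_and_notify_owner",
    "cert_expiring": "renew_certificate",
}

def remediate(event):
    """Kick off the workflow mapped to the event's root cause, if any."""
    workflow = REMEDIATION_WORKFLOWS.get(event.get("root_cause"))
    if workflow is None:
        return f"no automated workflow; routing {event['id']} to an operator"
    # In a real system this would call the orchestration engine's API.
    return f"triggered workflow '{workflow}' for event {event['id']}"

print(remediate({"id": "EVT-1042", "root_cause": "disk_full"}))
print(remediate({"id": "EVT-1043", "root_cause": "unknown_pattern"}))
```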

If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.

Having a full-blown orchestration capability with OpsBridge provides more than 5,000 automated workflows to automate many different remediation tasks.

Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data — not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?

And you can have this on a network operations center (NOC) wall, on your tablet, or on your phone — wherever you’d like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, building graphical, very appealing dashboards for your different stakeholders.

Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.

Multicloud monitoring in action

Next up is David Herrera, Cloud Service Manager at Banco Sabadell in Barcelona. Let’s find out about this use case and their use of Micro Focus’s OpsBridge solution.

David Herrera: Banco Sabadell is the fourth largest Spanish banking group. We had a big project to migrate several systems into the cloud, and we realized that we didn’t have any kind of visibility into what was happening in the cloud.

We are working with private and public clouds and it’s quite difficult to correlate the information in events and incidents. We need to aggregate this information in just one dashboard. And for that, OpsBridge is a perfect solution for us.

We started to develop new functionalities on OpsBridge, to customize for our needs. We had to cooperate with a project development team in order to achieve this.

The main benefit is that we have a detailed view about what is happening in the cloud. In the dashboard we are able to show availability, number of resources that we are using — almost in real time. Also, we are able to show what the cost is in real time of every resource, and we can do even the projection of the cost of the items.

The main benefit is we have a detailed view about what is happening in the cloud. We are able to show what the cost is in real time of every resource.

[And that’s for] every single item that we have in the cloud now, even across the private and public cloud. The bank has invested a lot of money in this solution and we need to show them that it’s really a good choice in economical terms to migrate several systems to the cloud, and this tool will help us with this.

Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing because they will be able to see all the information, and also select information that we haven’t gathered. They will be more aligned with our work, and we can develop and deliver better solutions because we will also understand them better.

We were able to build a new monitoring system from scratch that doesn’t exist on the market. Now, we are able to aggregate a lot of detailing information from different clouds.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Micro Focus.

How HudsonAlpha transforms hybrid cloud complexity into an IT force multiplier

The next BriefingsDirect hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments.

We’ll now learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and simplified management interface to rule them all.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving the need to solve hybrid IT complexity at HudsonAlpha?

Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.

We support hundreds of research programs with data and trend analysis. Our infrastructure requires quickly iterating to identify the approaches that are both cost-effective and the best fit for the needs of our users.

Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?

Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT. Even carefully contained shadow IT is a great way to develop new approaches and attain breakthroughs.

Gardner: You want to give people enough leash where they can go and roam and experiment, but perhaps not so much that you don’t know where they are, what they are doing.

Software-defined everything 

Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.

Gardner: How do you gain balance from too hard-to-manage complexity, with a potential of chaos, to the point where you can harness and optimize — yet allow for experimentation, too?

Mullican: IT is ultimately responsible for the security and the up-time of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.

Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.

Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.

Gardner: What about the IT requirements? How do you fulfill that mandate using technology?

Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers.

Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?

Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.

Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?

Agnostic arrangements of clouds

Mullican: It’s important for us to keep our local datacenters agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open-source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.

There are a lot [of services] that you can combine with the right toolsets, not only in your local datacenter but also in the clouds. If you put in the effort to write the code with that in mind — so you don’t lock into any one solution necessarily — then you can optimize and put everything together.

Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.

Cloud cost control, granted

Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud — from storage to compute — and that cost is covered by the grant.

So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.

Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?

Variety is the spice of IT

Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our datacenter, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us.

The key is: How do we rapidly provision those resources in an automated fashion? I think the key there is not only for IT to be aware of those resources, but for developers to be as well. We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our datacenter. What HPE OneSphere does is enable them to access — through a common API — that infrastructure. So it’s very exciting.
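
For a sense of what a common provisioning API means in practice, here is a hedged sketch of a deployment request made through a unified REST endpoint. The base URL, paths, and payload fields are hypothetical placeholders, not the actual HPE OneSphere API.

```python
# A hedged sketch of calling a unified provisioning API; the endpoint paths
# and payload fields are hypothetical placeholders, not HPE OneSphere's API.
import requests

BASE_URL = "https://onesphere.example.internal"   # hypothetical address
TOKEN = "REPLACE_WITH_SESSION_TOKEN"

def provision(project, template, target_zone):
    """Request a deployment from a named template into a target zone."""
    response = requests.post(
        f"{BASE_URL}/api/deployments",             # placeholder path
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"project": project, "template": template, "zone": target_zone},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# The same call shape works whether the zone maps to on-premises VMware
# capacity or to an AWS region; that abstraction is the appeal.
# provision("genomics-pipeline", "spark-cluster-small", "aws-us-east-1")
```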

Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?

Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully — and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.

Gardner: What might this capability bring to you and your organization?

Cross-training the clouds

Mullican: We want to do more with cross-cloud. Right now we are very adept at provisioning within our datacenters and within each individual cloud. HudsonAlpha has a presence in all the major public clouds — AWS, Google, and Microsoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.

For example, you might have an application that runs as a series of microservices. So you can have one microservice take advantage of your on-premises datacenter, such as for local storage. And then another piece could take advantage of object storage in the cloud. And even another piece could be in another separate public cloud.

But the key here is that our developers and researchers — the end users of OneSphere — don’t need to know all of the specifics of provisioning in each of those environments. That level of expertise is not in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline — and that’s what the researchers will use. Then it’s up to us in IT to keep an eye on what they are doing through the analytics that HPE OneSphere provides.

Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.

Mullican: Right, and that will be an ongoing effort to always optimize cost — but not at the risk of slowing the research. We want the research to happen, and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.

Gardner: There may be other organizations that are going to be more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that we have the flexibility to choose an algorithmic approach to business, if you will.

Mullican: Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance to be able to conduct that research at the fastest possible speed.

Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.

The research that we do at HudsonAlpha saves lives, and the utmost importance is to be able to conduct the research at the fastest speed.

As an early tester, and someone who has been a long-time user of HPE infrastructure, is there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE 3PAR intelligent storage — in conjunction with OneSphere — that’s given you a “whole greater than the sum of the parts” effect?

Mullican: HPE Synergy and composable infrastructure is something that is very near and dear to me. I have a lot of hours invested with HPE Synergy Image Streamer, customizing open-source operating systems and applications on it.

The ability to utilize that in the mix that I have architected natively with OneSphere — in addition to the public clouds — is very powerful, and I am excited to see where that goes.

Gardner: Any words of wisdom for others who may not have yet gone down this road? What do you advise others to consider as they seek to better compose, automate, and optimize their infrastructure?

Get adept at DevOps

Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.

As far as putting an emphasis on automation — and being able to provision infrastructure in the datacenter and the cloud through automated APIs — a lot of companies are probably still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. Then, once your IT department is adept with DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation. So it needs to start with IT.

It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources — that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools.

That’s the partnership that companies really need to foster — and it’s between IT and developers. And something like HPE OneSphere is a good fit for that, because it provides a unified API.

On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.

Then when it all comes together with a service catalog of options, in the end it’s just a simple interface. That’s what we want, to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure, they are just choosing the proper workflow and pipeline for future projects.

We want to provide a simple interface to the researchers. They don’t have to think about all the work that went into the infrastructure.

Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction, and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?

HCI advances across the globe

Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can be still quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a distinguished department, but in some companies that’s not the case.

And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.

Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons.

Is there something about having that visibility into hybrid IT that extends into hybrid data environments?

Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration.

So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that is back to your on-premises object storage or, for example, out to AWS Simple Storage Service (S3) in the cloud.
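As a rough illustration of that data-locality choice, the sketch below points one code path at either AWS S3 or an S3-compatible on-premises object store simply by switching the endpoint. The endpoint variable, credentials, and bucket names are assumptions made for the example, not HudsonAlpha's actual configuration.

```python
import os
import boto3

# One code path, two data locations: point the same S3 client either at AWS
# or at an S3-compatible object store on-premises. The endpoint, credentials,
# and bucket names are illustrative assumptions.
def object_store_client(location: str):
    if location == "on-prem":
        return boto3.client(
            "s3",
            endpoint_url=os.environ["ONPREM_S3_ENDPOINT"],
            aws_access_key_id=os.environ["ONPREM_S3_KEY"],
            aws_secret_access_key=os.environ["ONPREM_S3_SECRET"],
        )
    return boto3.client("s3")  # standard AWS credential/region resolution

def upload_dataset(location: str, path: str, bucket: str, key: str) -> None:
    """Push a data set to whichever store the data-locality policy selects."""
    object_store_client(location).upload_file(path, bucket, key)

# Example: keep regulated data on-premises, share public results via AWS S3.
# upload_dataset("on-prem", "sample.vcf", "restricted-data", "cohort1/sample.vcf")
# upload_dataset("aws", "summary.csv", "public-results", "cohort1/summary.csv")
```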

Gardner: Now we can think of HPE OneSphere as also elevating data scientists — and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.

Hybrid hard work pays off

Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer.

HPE OneSphere is a way to showcase to the end user all of the efforts that have been, and are being, made by IT. That's why it's a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

South African insurer King Price gives developers the royal treatment as HCI meets big data

The next BriefingsDirect developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from their expanding HCI-enabled architecture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. The discussion is moderated by  Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was that the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible to allow the business to make informed decisions — literally in almost real-time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?

Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy — like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.

Gardner: One of the benefits of going at the infrastructure level for such a solution is that you not only solve one problem, but you probably solve multiple ones; things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

Time and storage savings

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.

At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.
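To make the storage impact concrete, here is a back-of-the-envelope calculation. The 10 copies come from the discussion above; the data set size and deduplication ratio are assumed figures for illustration only, not King Price's actual numbers.

```python
# Back-of-the-envelope storage math; all figures are assumed for illustration.
production_tb = 20           # assumed size of one production copy, in TB
copies = 10                  # production plus dev, test, and UAT copies
dedup_ratio = 10             # assumed effective deduplication/compression ratio

raw_footprint = production_tb * copies           # what 10 full copies would need
deduped_footprint = raw_footprint / dedup_ratio  # what a deduplicating HCI layer stores

print(f"Raw footprint:     {raw_footprint} TB")      # 200 TB
print(f"Deduped footprint: {deduped_footprint} TB")  # 20.0 TB
```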

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.

Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers, because the developers needed resources and we just couldn't provide them. And it was not because we didn't have resources at hand; it was just the time to spin them up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?

Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa — things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It's also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.

Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?

Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

Worth the HCI investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing?

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.

But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

Keep storage simple

Steyn: We are currently using HPE 3PAR storage, and it's working quite well. We have some production environments running there, and we use a lot of it for archiving. It's still very complementary to our environment.

Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that was going to be it. We very soon adopted HCI into the production environments. And it was at that point that we literally had an entire cube dedicated to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys demand extreme data performance; it's scary.

Gardner: I have also seen organizations on a slippery slope: once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery, security, and risk-avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

Cloud common sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Containers, microservices, and HCI help governments in Norway provide safer public data sharing

The next BriefingsDirect digital transformation success story examines how local governments in Norway benefit from a common platform approach for safe and efficient public data distribution.

We'll now learn how Norway's 18 counties are gaining a common shared pool for data on young people's health and other sensitive information thanks to the streamlined benefits of hyperconverged infrastructure (HCI), containers, and microservices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us discover the benefits of a modern platform for smarter government data sharing is Frode Sjovatsen, Head of Development for FINT Project in Norway. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving interest in having a common platform for public information in your country?

Sjovatsen: We need interactions between the government and the community to be more efficient. So we needed to build the infrastructure that supports automatic solutions for citizens. That's the main driver.

Gardner: What problems do you need to overcome in order to create a more common approach?

Common API at the core

Sjovatsen: One of the biggest issues is that [our users] buy business applications, such as human resources systems for school administrators, and everyone is happy. They have a nice user interface on the data. But when we need to use that data across all the other processes — that's where the problem is. And that's what the FINT project is all about.

[Due to application heterogeneity] we then need to have developers create application programming interfaces (APIs), which costs a lot of money, and the quality is variable. What we're doing now is creating a common API that's horizontal — for all of those business applications. It gives us the ability to use our data much more efficiently.
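A minimal sketch of that idea follows: each vendor application gets a small adapter that maps its records into one shared model, and the common API serves only that model. The field names and adapter below are hypothetical and are not the actual FINT information model.

```python
from dataclasses import dataclass, asdict
from typing import List, Protocol

# Sketch of a horizontal API layer: each vendor system gets an adapter that
# maps its records into one shared model. Field names are hypothetical, not
# the real FINT information model.

@dataclass
class Student:
    student_id: str
    name: str
    school: str

class SourceAdapter(Protocol):
    def fetch_students(self) -> List[Student]: ...

class VendorAAdapter:
    """Adapter for one school-administration product (illustrative)."""
    def fetch_students(self) -> List[Student]:
        raw = [{"elevId": "123", "navn": "Kari Nordmann", "skole": "Example VGS"}]
        return [Student(r["elevId"], r["navn"], r["skole"]) for r in raw]

def common_api_students(adapters: List[SourceAdapter]) -> List[dict]:
    """What the shared endpoint returns, regardless of the back-end vendor."""
    return [asdict(s) for adapter in adapters for s in adapter.fetch_students()]

print(common_api_students([VendorAAdapter()]))
```

Consumers of the common API code against the shared model once, instead of against each vendor's interface.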

Gardner: Please describe for us what the FINT project is and why this is so important for public health.

Sjovatsen: It's all about taking back the power over the information we've handed to the vendors. There is an initiative in Norway where the government talks about getting control of all the information. And the thought behind the FINT project is that we need to get hold of all the information, describe it, define it, and then make it available via APIs — both for public use and also for internal use.

Gardner: What sort of information are we dealing with here? Why is it important for the general public health?

Sjovatsen: It's all kinds of information. For example, it's school information, such as how the everyday processes run, the schedules, the grades, and so on. All of that data is necessary to create good services for the teachers and students. We also want to make that data available so that businesses that want to create new and better solutions for us can build new innovations on it.

Gardner: When you were tasked with creating this platform, why did you seek an API-driven, microservices-based architecture? What did you look for to maintain simplicity and cost efficiency in the underlying architecture and systems?

Agility, scalability, and speed

Sjovatsen: We needed something that was agile, so that we could roll out updates continuously. We also needed a way to roll back quickly if something fails.

The reason we are running this on one of the county council’s datacenters is we wanted to separate it from their other production environments. We need to be able to scale these services quickly. When we talked to Hewlett Packard Enterprise (HPE), the solution they suggested was using HCI.

Gardner: Where are you in the deployment and what have been some of the benefits of such a hyperconverged approach?

Sjovatsen: We are in the late stage of testing and we're going into production in early 2018. At the moment, we're looking into using HPE SimpliVity.

Container comfort

Gardner: Containers are an important part of moving toward automation and simplicity for many people these days. Is that another technology that you are comfortable with and, if so, why?

Sjovatsen: Yes, definitely. We are very comfortable with that. The biggest reason is that when we use containers, we isolate the application; the whole container is the application and we are able to test the code before it goes into production. That's one of the main drivers.

The second reason is that it's easy to roll out and easy to roll back. We also have developers in and out of the project, and containers make it easy for them to quickly get into the environment they are working on. It's not much work if they need to install on another computer to get a working environment running.

Gardner: A lot of IT organizations are trying to reduce the amount of money and time they spend on maintaining existing applications, so they can put more emphasis into creating new applications. How do containers, microservices, and API-driven services help you flip from an emphasis on maintenance to an emphasis on innovation?

Sjovatsen: The container approach is very close to the DevOps environment, so the time from code to production is very small compared to what we did before, when we had some operations guys installing the stuff on servers. Now, we have a very rapid way to go from code to production.

Gardner: With the success of the FINT Project, would you consider extending this to other types of data and applications in other public sector activities or processes? If your success here continues, is this a model that you think has extensibility into other public sector applications?

Unlocking the potential

Sjovatsen: Yes, definitely. At the moment, there are 18 county councils in this project. We are just beginning to introduce this to all of the 400 municipalities [in Norway]. So that’s the next step. Those are the same data sets that we want to share or extend. But there are also initiatives with central registers in Norway and we will add value to those using our approach in the next year or so.

Gardner: That could have some very beneficial impacts, very good payoffs.

Sjovatsen: Yes, it could. There are other uses. For example, in Oslo we have made an API that extends over the locks on many doors. So we can now have one API to open multiple locking systems. That's another way to use this approach.

Gardner: It shows the wide applicability of this. Any advice, Frode, for other organizations that are examining more of a container, DevOps, and API-driven architecture approach? What might you tell them as they consider taking this journey?

Sjovatsen: I definitely recommend it — it's simple and agile. The main thing with containers is to separate the storage from the applications. That's probably what we worked on the most to make it scalable. We wrote the application so it's scalable, and we separated the data from the presentation layer.
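A minimal sketch of that last point, separating storage from the application so any container replica can be rolled out or rolled back without losing data: the service below keeps all persistent state behind an external storage endpoint configured through the environment, never on the container's own filesystem. The service URL and record layout are assumptions made for the example.

```python
import json
import os
import urllib.request

# Stateless-container sketch: persistent data lives behind an external storage
# service configured via the environment, never on the container's own
# filesystem, so replicas can be replaced or rolled back freely. The URL and
# record layout are illustrative assumptions.
STORAGE_URL = os.environ.get("STORAGE_URL", "http://storage-service:8080")

def save_record(record_id: str, payload: dict) -> None:
    """Persist a record via the external storage API, not local disk."""
    request = urllib.request.Request(
        f"{STORAGE_URL}/records/{record_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(request, timeout=10)

def load_record(record_id: str) -> dict:
    with urllib.request.urlopen(f"{STORAGE_URL}/records/{record_id}", timeout=10) as resp:
        return json.loads(resp.read())
```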

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Ericsson and HPE accelerate digital transformation via customizable mobile business infrastructure stacks

The next BriefingsDirect agile data center architecture interview explores how an Ericsson and Hewlett Packard Enterprise (HPE) partnership establishes a mobile telecommunications stack that accelerates data services adoption in rapidly advancing economies.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll now learn how this mobile business support infrastructure possesses a low-maintenance common core — yet remains easily customizable for regional deployments just about anywhere.

Here to help us define the unique challenges of enabling mobile telecommunications operators in countries such as Bangladesh and Uzbekistan, we are joined by Mario Agati, Program Director at Ericsson, based in Amsterdam, and Chris James-Killer, Sales Director for HPE. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the unique challenges that mobile telecommunications operators face when they go to countries like Bangladesh?

Agati: First of all, these are countries with a very low level of revenue per user (RPU). That means for them cost efficiency is a must. All of the solutions that are going to be implemented in those countries should be, as much as possible, focused on cost efficiency, reusability, and industrialization. That’s one of the main reasons for this program. We are addressing those types of needs — of high-level industrialization and reusability across countries where cost-efficiency is king.

Gardner: In such markets, the technology needs to be as integrated as possible because some skill sets can be hard to come by. What are some of the stack requirements from the infrastructure side to make it less complex?

James-Killer: These can be very challenging countries, and it’s key to do the pre-work as systematically as you can. So, we work very closely with the architects at Ericsson to ensure that we have something that’s repeatable, that’s standardized and delivers a platform that can be rolled out readily in these locations.

Even countries such as Algeria are very difficult to get goods into, and so we have to work with customs, we have to work with goods transfer people; we have to work on local currency issues. It’s a big deal.

Gardner: In a partnership like this between such major organizations as Ericsson and HPE, how do you fit together? Who does what in this partnership?

Agati: At Ericsson, we are the prime integrator responsible for running the overall digital transformation. This is for a global operator that is presently in multiple countries. It shows the complexity of such deals.

We are responsible for delivering a new, fully digital business support system (BSS). This is core for all of the telco services. It includes all of the business management solutions — from the customer-facing front end, to billing, to charging, and the services provisioning.

In order to cope with this level of complexity, we at Ericsson rely on a number of partners that are helping us where we don’t have our own solutions. And, in this case, HPE is our selected partner for all of the infrastructure components. That’s how the partnership was born.

Gardner: From the HPE side, what are the challenges in bringing a data center environment to far-flung parts of the world? Is this something that you can do on a regional basis, with a single data center architecture, or do you have to be discrete to each market?

Your country, your data center

James-Killer: It is more bespoke than we would like. It’s not as easy as just sending one standard shipping container to each country. Each country has its own dynamic, its own specific users.

The other item worth mentioning is that each country needs its own data center environment. We can’t share them across countries, even if the countries are right next to each other, because there are laws that dictate this separation in the telecommunications world.

So there are unique attributes for each country. We work with Ericsson very closely to make sure that we remove as many itemized things as we can. Obviously, we have the technology platform standardized. And then we work out what’s additionally required in each country. Some countries require more of something and some countries require less. We make sure it’s all done ahead of time. Then it comes down to efficient and timely shipping, and working with local partners for installation.

Gardner: What is the actual architecture in terms of products? Is this heavily hyper-converged infrastructure (HCI)-oriented, and software-defined? What are the key ingredients that allow you to meet your requirements?

James-Killer: The next iterations of this will become a lot more advanced. It will leverage a composable infrastructure approach to standardize resources and ensure they are available to support required workloads. This will reduce overall cost, reduce complexity, and make the infrastructure more adaptable to the end customers’ business needs and how they change over time. Our HPE Synergy solution is a critical component of this infrastructure foundation.

At the moment we have to rely on what’s been standardized as a platform for supporting this BSS portfolio.

We have worked with Ericsson for a long time on this. This platform has been established for years and years. So it is not necessarily on the latest technology; the latest is being tested right now. For example, the Ericsson Karlskrona BSS team in Sweden is currently testing HPE Synergy. But, as we speak, the current platform is HPE Gen9 so it’s ProLiant Servers. HPE Aruba is involved; a lot of heavy-duty storage is involved as well.

But it’s a good, standardized, virtualized environment to run this all in a failsafe way. That’s really the most critical thing. Instead of being the most advanced, we just know that it will work. And Ericsson needs to know that it will work because this platform is critical to the end-users and how they operate within each country.

Gardner: These so-called IT frontiers countries — in such areas as Southeast Asia, Oceania, the Middle East, Eastern Europe, and the Indian subcontinent — have a high stake in the success of mobile telecommunications. They want their economies to grow. Having a strong mobile communications and data communications infrastructure is essential to that. How do we ensure the agility and speed? How are you working together to make this happen fast?

Architect globally, customize locally

Agati: This comes back to the industrialization aspect. By being able to define a group-wide solution that is replicable in each of these countries, you are automatically providing a de facto solution in countries where it would be very difficult to develop locally. They obtain a complex, state-of-the-art core telco BSS solution. Thanks to this group initiative, we are able to define a strong set of capabilities and functions, an architecture that is common to all of the countries.

That becomes a big accelerator because the solution comes pre-integrated, pre-defined, and is just ready to be customized for whatever remains to be done locally. There are always aspects of the regulations that need to be taken care of locally. But you can start from a predefined asset that is already covering some 80 percent of your needs.

In a relatively short time, in those countries, they obtain a state-of-the-art, brand-new, digital BSS solution that otherwise would have required a local and heavy transformation program — with all of the complexity and disadvantages of that.
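The "common core, local customization" model Agati describes can be pictured as a group-wide baseline configuration merged with a thin per-country overlay, as in the sketch below. The keys and values are hypothetical; only the biometric SIM-activation requirement mentioned later for Bangladesh is drawn from the discussion.

```python
# Sketch of "common core plus local customization": a group-wide baseline is
# merged with a thin per-country overlay. Keys and values are hypothetical.
GROUP_BASELINE = {
    "billing": {"cycle": "monthly", "tax_model": "standard"},
    "provisioning": {"sim_activation": "online"},
    "compliance": {"data_residency": "in-country"},
}

COUNTRY_OVERRIDES = {
    "bangladesh": {"provisioning": {"sim_activation": "biometric-at-point-of-sale"}},
    "uzbekistan": {"billing": {"tax_model": "local-vat"}},
}

def merge(base: dict, override: dict) -> dict:
    """Deep-merge a country override onto a copy of the group baseline."""
    result = {k: dict(v) if isinstance(v, dict) else v for k, v in base.items()}
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = {**result[key], **value}
        else:
            result[key] = value
    return result

print(merge(GROUP_BASELINE, COUNTRY_OVERRIDES["bangladesh"]))
```

The baseline carries the roughly 80 percent that is common; each country only maintains its own small overlay.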

Gardner: And there’s a strong economic incentive to keep the total cost of IT for these BSS deployments at a low percentage of the carriers’ revenue.

Shared risk, shared reward

Agati: Yes. The whole idea of the digital transformation is to address different types of needs from the operator's perspective. Cost efficiency is probably the biggest driver because it's the one where the shareholders immediately recognize the value. There are other rationales for digital transformation, too, such as the flexibility to offer new services and to embrace new business models that improve customer experiences.

On the topic of cost efficiency, we have created an innovative revenue-share deal with a global operator. From our side, we commit to providing them a solution that enables a certain level of operational cost reduction.

The current industry average cost of IT is 5 to 6 percent of total mobile carrier revenue. Now, thanks to the efficiency that we are creating from the industrialization and re-use across the entire operator’s group, we are committed to bringing the operational cost down to the level of around 2 percent. In exchange, we will receive a certain percentage of the operator’s revenue back.
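In rough numbers, the economics look like the sketch below. The 5 to 6 percent industry average and the roughly 2 percent target come from the discussion; the operator revenue and the revenue-share percentage are assumed purely for illustration.

```python
# Rough illustration of the cost commitment. Only the 5-6 percent industry
# average and the ~2 percent target come from the discussion; the revenue
# figure and revenue-share percentage are assumed.
annual_revenue_musd = 500                       # assumed operator revenue, $M
it_cost_today = 0.055 * annual_revenue_musd     # midpoint of the 5-6% average
it_cost_target = 0.02 * annual_revenue_musd     # the ~2% committed level
savings = it_cost_today - it_cost_target

revenue_share = 0.01                            # hypothetical share paid back
share_paid = revenue_share * annual_revenue_musd

print(f"IT cost today:  ${it_cost_today:.1f}M")   # $27.5M
print(f"IT cost target: ${it_cost_target:.1f}M")  # $10.0M
print(f"Savings:        ${savings:.1f}M, vs. ${share_paid:.1f}M shared back")
```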

That is for us, of course, a bold move. I need to say this clearly, because we are betting on our capability of not only providing a simple solution, but also of providing actual shareholder value, because that's the game we are playing in now.

We are risking our own money on it at the end of the game. That's what makes the big difference between this deal and any other deal I have seen in my career — or in this industry. There is probably no one else really taking on such a huge challenge.

Gardner: It’s very interesting that we are seeing shared risks, but then also shared rewards. It’s a whole different way of being in an ecosystem, being in a partnership, and investing in big-stakes infrastructure projects.

Agati: Yes.

Gardner: There has been recent activity for your solutions in Bangladesh. Can you describe what’s been happening there, and why that is illustrative of the value from this approach?

Bangladesh blueprint

Agati: Bangladesh is one of the countries in the pipeline, but it is not yet one of the most active. We are still working on the first implementation of this new stack. That will be the one that will set the parameters and become the template for all the others to come.

The logic of the transformation program is to identify a good market where we can challenge ourselves and deliver the first complete solution, and then reuse that solution for all of the others. This is what is happening now; we’re in the advanced stages of this pilot project.

Gardner: Yes, thank you. I was more referring to Bangladesh as an example of how unique and different each market can be. In this case, people often don’t have personal identification; therefore, one needs to use a fingerprint biometric approach in the street to sell a SIM to get them up and running, for example. Any insight on that, Chris?

James-Killer: It speaks to the importance of the work that Ericsson is doing in these countries. We have seen in Africa and in parts of the Middle East how important telecommunications is to an individual. It’s a real quality of life issue. We take it for granted in Sweden; we certainly take advantage of it in my home country of Australia. But in some of these countries you are actually making a genuine difference.

These people need to be connected and haven’t been connected before. And you can see what has happened politically when the people have been exposed to this kind of technology. So it’s admirable, I believe, what Ericsson is doing, particularly commercially, and the way that they are doing it.

It also speaks to Ericsson’s success and the continued excitement around LTE and 4G in these markets; not actually 5G yet. When you visit Ericsson’s website or go to Ericsson’s shows, there’s a lot of talk about autonomous vehicles and working with Volvo and working with Scania, and the potential of 5G for smart cities initiatives. But some of the best work that Ericsson does is in building out the 4G networks in some of these frontier countries.

Agati: If I can add one thing. You mentioned the specific requirements coming from countries such as Bangladesh, where we have a specific issue related to identity management. This is one of the big challenges we are now facing: striking the proper balance between coping with different local needs, such as different regulations, habits, and cultures, and at the same time industrializing the means — making them repeatable, and keeping them as simple and consistent as possible across all of these countries.

There is a continuous battle between the attempts to simplify and the reality check of what does not allow simplification and industrialization. That is the daily battle we are waging: what do you need and what don't you need? We keep asking, "What is the business value behind a specific capability? Why do you really need this instead of that?"

At the end of the game, this is the bet that we are making together with our customers — that there is a path to the right kind of simplification. Ericsson has recently launched our new brand, and it is about this quest for making things easier. That's exactly our challenge. We want to be the champion of simplicity, and this project is the cornerstone of going in that direction.

Gardner: And only a global integrator with many years of experience in many markets can attain that proper combination of simplicity and customization.

Agati: Yes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Envisioning procurement technology and techniques in 2025: The future looks bright

The advent of increased automation, data-driven decisions, powerful business networks, and the firepower of artificial intelligence (AI) and blockchain are combining to elevate procurement — and the professionals who drive it — to a new plane of greater influence and impact.

To learn more about how rapidly evolving technology changes the future of procurement, the next BriefingsDirect panel discussion explores how and why innovation-fueled procurement will fill an increasingly strategic role for businesses.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Our future of procurement panelists are Julie Gerdeman, Vice President of the Digital Transformation Organization at SAP Ariba; Shivani Govil, Vice President of Artificial Intelligence and Cognitive Products at SAP Ariba, and Matt Volker, Vice President of Supply Chain at NatureSweet in San Antonio, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Julie, SAP Ariba recently published a point-of-view on what procurement will look like in 2025, and it shows procurement as far different from how we know it today. Paint a picture, if you would, of what we should expect from procurement over the next several years?

Gerdeman: We are on the brink of more change than ever before in procurement. We think procurement organizations are going to rethink everything — from technology to resource allocation to talent and skill sets. This all can entirely remake the function.

And how they will do that means a few things. First, they are going to leverage new and emerging technologies to automate the mundane tasks to refocus on more strategic-level work, and that will allow them to become key drivers of corporate goals and business innovation.

How are they going to do that? It will be through use of intelligent systems that are self-learning and that provide a consumer-like, highly personalized user experience that makes purchasing easy.

We also believe that procurement strategists will become ambassadors of the brand for their companies. All companies want to do well financially, but unlike in the past, we believe procurement can also help them do good, and we are going to see more and more of that in the future. Procurement will become the steward of corporate reputation and brand perception by ensuring a sustainable supply chain.

In the future, procurement will become even more collaborative — to achieve cost-savings goals. And that means companies will be connected like never before through leveraged networks. They are going to take the lead in driving collaboration and using networks to connect buyers, partners, and suppliers globally for efficiency. On the tech side, we believe super networks may emerge that create value. These super networks will network with other networks, and this hyper-connected ecosystem will become the standard.

Finally, data is the new currency and buyers and sellers are going to leverage things that I believe Shivani will be talking about, like predictive analytics, real-time insights, AI, blockchain, and using all of that to move the business forward. We are really excited about the future of procurement.

Gardner: The adoption and the pace at which people can change in order to acknowledge these technical changes — and to also put in place the organizational shifts — takes time, it’s a journey. What can companies be doing now to think about how to be a leader — rather than a laggard — when it comes to this procurement overhaul?

Gerdeman: That’s such a great question. Adoption has to be the key focus, and every company will begin at a different place, and probably move at a different pace — and that’s okay.

What we see first is a focus on business outcomes. This is what makes the difference for successful adoption. So that's the focus — on outcomes. Next comes supporting that through organizational design, resource allocation, and developing the talent that's needed. Finally, you must leverage the right technology to help drive that adoption at the pace that's desired. This is key. Those are some of the core components for improved adoption.

Gardner: Matt at NatureSweet, you are well on your way to making procurement a strategic element for your company. Tell us about your transformation, what your company does, and how procurement has become more of a kingpin to how you succeed.

Greenhouse growth with IT

Volker: We have more than 9,000 associates across our operations and we consider them our competitive advantage. That’s because people create the expertise at the greenhouse — for the best growing and harvesting techniques.

With that said, our intention is to bring more innovation to the table, to allow them to work smarter versus harder — and that entails technology. As an example, we currently have five different initiatives that are linked to automation and systems solutions. And SAP Ariba Snap happens to be one of those initiatives. We are creating a lot of change management improvements around processes. As a result, our people's accountability shifts from a transactional solution in procurement to a strategic sourcing play.

Gardner: Tell us more about NatureSweet. You are an innovative grower of vegetables and other organic produce?

Volker: Yes, we grow fresh produce, predominantly tomatoes, cucumbers, and peppers, both conventional and organic. We service North America — so Canada, Mexico, and the US — with a brand of products across a spectrum of small and large tomatoes, varieties of peppers, and varieties of cucumbers.

Gardner: Now that you’ve put strategic procurement into place, what are the demonstrable benefits?

Volker: Being in the mid-market, we were in a position where the sky was the limit. In a transactional sense, we still had basic processes that were not fully in place — down to identifying item numbers and stock items across a broad range of purchases spanning 8,000 different commodity items.

That led us to do the due diligence to be able to identify what those items were, which then drove us to category management, siloing specific purchases into categories, and then applying our resources against those categories.

The net result was that we moved from transactional to strategic sourcing. We didn't know what we didn't know. But once we brought SAP Ariba Snap internally into our operations, we gained access to three million-plus vendors. It provided us the visibility into worldwide opportunities across each of those purchase categories. It was a benefit, both on spend and on the services and products available.

Gardner: Just to be clear, SAP Ariba Snap is a new, packaged solution to help small and medium-sized businesses (SMBs) adopt top-line technology, processes, and networks when it comes to procurement. In this case, size really doesn't matter.

Shivani, SMB companies like NatureSweet want to be thinking about the top-line capabilities of AI and machine learning, but perhaps they don’t want to take that on as a core competency. It’s really outside of their wheelhouse. How can an organization like SAP Ariba tackle the complexities of these technologies, get the deep data, and then help companies like NatureSweet better accomplish their goals?

Govil: At SAP Ariba, we are enabling transactions across more than three million buyers and suppliers, and for more than $1.6 trillion in value across the network. If you think about that statistic, and you think about how much data resides across our network and in our systems, what we are looking at is applying AI- and machine learning-type technologies to unlock the value of that data and deliver actionable insights to companies, such as NatureSweet, so that they can perform and do their functions better.

Our focus is really on the technology and how we can apply it to deliver value to the end user. Our goal is to bring together better connections between the people, the processes, the systems, the data, and the context or intent of the user in order to enable them to do their jobs better.

Gardner: And looking at this initiative predicting how procurement will change through 2025, tell us how you see technology supporting that. What is it about these new technologies that you think will accelerate change in procurement?

The future is now 

Govil: I see the technology as being a key enabler to help make the transformation happen. In fact, a lot of it is happening around us today already. Some of us can point to watching science fiction movies or cartoons like The Jetsons, where they talk about a future age of flying cars, robot maids, and moving roadways. If you look around us, that's all happening today. We have self-driving cars, we have companies working on flying cars, and we have robot vacuums.

Technology has already become the enabler to allow people to think, act, and behave in very, very different ways than before. That’s what we’re seeing happening around us.

Gardner: For end users who are not data scientists, nor even interested in becoming data scientists, the way mainstream folks now experience AI, bots, and machine learning is often through voice-recognition technologies like Siri and Alexa.

What is it about those technologies that allow us to educate people on AI? How can your average line-of-business person appreciate the power of what AI and a cloud business network can do?

The conversational AI advantage 

Gerdeman: Alexa and Siri have become household names, and everyone is familiar with those types of technologies. Those are the conversational interfaces that enable us to have interactions with our applications in a seamless manner, as if we are talking to a friend or trying to get something done.

When we think about this whole space of technology, AI, and machine learning, we actually span multiple different types of technologies. Natural language processing, speech-to-text, and text-to-speech are the types of technologies used in conversational interactions. You also have things like machine learning, deep learning, and neural networks that span large amounts of data, deliver insights, and uncover hidden patterns to enable better business outcomes.

Let me give you an example. We think about Siri and Alexa in our everyday life. But imagine an intelligent assistant in a business sourcing solution, one that is guiding a category owner through the process of running a sourcing event. This can really help the category owner better understand what they should be doing and get intelligent recommendations — all through conversational interactions.

Now, you can take that even further, to where I talk about some of the other types of technologies. Think about the fact that companies get thousands of invoices today. How do you actually go through those invoices and then classify them so that you can understand what your spend categories are? How do you do analysis around it to get a better handle on your spend?

We are now applying techniques like deep learning and convolutional neural networks (CNNs) to automatically classify those invoices into the right spend categories. By doing so, we have seen a task that used to take people days drop to less than 11 minutes.
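For a sense of what automated spend classification involves, here is a compact sketch. It uses a TF-IDF and logistic-regression pipeline as a simple stand-in for the deep-learning and CNN models described, and the invoice lines and categories are made up for the example.

```python
# Compact spend-classification sketch: TF-IDF plus logistic regression as a
# stand-in for the deep-learning/CNN approach described. Invoice lines and
# categories are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

invoice_lines = [
    "ProLiant server maintenance renewal",
    "Office chairs and standing desks",
    "Cloud hosting charges, March",
    "Greenhouse irrigation pump replacement",
]
categories = ["IT hardware", "Facilities", "Cloud services", "Equipment"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(invoice_lines, categories)

print(model.predict(["Server warranty extension and maintenance"]))  # likely 'IT hardware'
```

A production system would train on many thousands of labeled invoice lines; the point here is only the shape of the pipeline, from text to spend category.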

Gardner: Matt, now that you have heard about some of these AI capabilities, does this resonate with you? Are you excited about it, and do you prefer to see a company like SAP Ariba tackle this first and foremost?

AI for excellence 

Volker: Our long-term strategy is to become a consumer packaged goods company, and that places us in the same conversation as folks like Procter & Gamble, Hershey, and PepsiCo. That strategy is ambitious because produce has traditionally been a commodity market driven by temporary workforces that are migratory by nature. But that's not who we are.

We are a branded company. Our claim to fame is that we have the best products in the market: always the best products, always at the same price, always available, 100 percent vertically integrated, and in a direct relationship with the consumer. I mention that because, for us to continue to excel at our growth potential, automation, AI, and digitized processes are critical to our success.

In my mind, from a supply chain perspective, we are building toward an end-to-end integrated supply chain, and we must have that link of a qualified tool in procurement to make that a successful transformation.

Gardner: Shivani, we’ve also heard that people are concerned that their jobs are going to get taken over by a robot. We don’t expect necessarily that 100 percent of buying is going to be done automatically, by some algorithm. There has to be a balance. You want to automate as much as you can, but you want to bring in people to do what they do best.

Is there anything about your view towards 2025 that addresses this idea of the right mix of people and machines to leverage each other to the mutual benefit of the business?

Human insights add value 

Govil: We don't think technology is going to replace humans. In fact, what we think is happening is that technology is augmenting the abilities of humans to do their jobs better. The way we look at these technologies is to ask whether they really allow procurement professionals to be smarter, faster, and more efficient in what they are doing. Let me give you an example.

We all know that digitization has happened. Paper invoices, for example, are a thing of the past. Now imagine, if you can, also adding in what's stored in many digital records; you'd get the insights from all of that together. Let's say there's a drop in commodity prices: you'll be able to intelligently advise the contract negotiator that it's time to negotiate a contract, as well as provide benchmarks and insights into what kind of prices they should be looking for.
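The commodity-price example can be reduced to a simple rule, sketched below: flag a category for renegotiation when the market price falls more than a threshold below the contracted baseline. All prices and the threshold are hypothetical.

```python
# Sketch of a renegotiation trigger: flag a category when its market price has
# fallen more than a threshold below the contracted baseline. All figures are
# hypothetical.
CONTRACT_BASELINES = {"steel": 720.0, "polypropylene": 1180.0}   # price at signing
LATEST_PRICES = {"steel": 615.0, "polypropylene": 1165.0}        # current market price
DROP_THRESHOLD = 0.10                                            # act on a 10% drop

def renegotiation_candidates():
    flagged = []
    for commodity, baseline in CONTRACT_BASELINES.items():
        drop = (baseline - LATEST_PRICES[commodity]) / baseline
        if drop >= DROP_THRESHOLD:
            flagged.append(f"{commodity}: price down {drop:.0%}, consider renegotiating")
    return flagged

print(renegotiation_candidates())  # steel (~15% drop) qualifies; polypropylene does not
```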

The way we're approaching this is to look at the type of user and the type of task they are performing and, based on that, to evaluate the different ways these technologies can help. For example, if you have a casual user and the task is very repetitive, like a customer support activity where the questions are fairly generic, commonly asked questions, can you automate those tasks? In this case, we would look at automation-type functions.

On the other hand, if you have an expert user doing a very deep, complex task, which is highly variable, such as a contract negotiation, how can you then best help?  How do you then use these technologies to help that contract negotiator to amplify what they are doing and get even better results for the business?

Depending on the type of user, whether casual or expert, and whether it's a repetitive task or a highly variable, unique, customized task, we can apply these types of technologies in very different ways.

Gardner: Julie, for organizations to prepare themselves to make that right call between what machines can do well and what people do best at a strategic level requires a digital transformation of the organization, the people, process and the technology. Is there anything that you can offer from what you’ve seen in the Procurement 2025 vision that would help organizations get to a digital-first advantage?

Gerdeman: What we have seen in doing the research is that it plays out at three structural levels. First, at the corporate procurement level, it means addressing risk and security and thinking about the experience of satisfaction. Then, at the business unit level, we're seeing procurement get more embedded, working more closely within a line of business or in a production or manufacturing facility, and that helps the transformation.

And then finally, at the shared services level, a lot of what Shivani was referring to happens: some of those more mundane tasks get digitized. The shared-services teams become arbiters of satisfaction for the end users — overseeing the digitization rather than performing the tasks themselves.

To get on that journey, you start at the corporate level, then move to the business unit and line-of-business level, where procurement gets more embedded. Finally, shared services come to be viewed as complete lights-out facilities, which changes the dynamic.

Gardner: It’s interesting to me that this can be quite easily customized. You could be a smaller organization, or a giant enterprise across different vertical industries with different geographies. You can enter this transformation on your own trajectory and begin to layer in more use of AI and automation. It can be done at your own pace.

But one of the things we have seen is the pace has been slower than we expected. Matt, from your organization’s experience, what can increase the pace of adoption when it comes to digital transformation, automation, and ultimately robots and AI and artificial intelligence benefits?

Increase the speed of adoption 

Volker: As an organization, we pride ourselves on being transformational. In our world, there are 9,000 people whose living standards we want to improve, and from that we develop a strategy that asks, "How do you go about doing that?" Well, it's about transformation.

So, through automation and system solutions, we intend to get there by December 31, 2019. It was referenced earlier by Shivani that you move folks away from transactional to strategic sourcing by categorizing your vendor community and allowing them to say, "Now I can move away from requisitions and purchase orders to really searching globally for partners that will allow us to accelerate our spend the right way, from a cost, service, and quality perspective."

Gardner: Shivani, we have been looking at this in the future, but let’s just go short-term for a second. In my experience with software development and moving from traditional software to the cloud, it’s really important to demonstrate upfront the benefits that would then accelerate adoption.

While we are still looking several years out, what could be done around AI, machine learning, and data analysis now that would become a catalyst to the adoption and the benefits in the future?

The value-add bucket list 

Govil: We are seeing the values of these technologies manifest in many different ways. I largely put them into four buckets. The first one is around driving deeper engagement and user adoption. If you imagine these types of conversational interactions, intelligent assistants that really drive engagement and adoption of the systems by end users, that then means bringing more spend under your management and getting the better outcomes for the business that you are trying to deliver. So that's number one.

The second is in terms of being able to unlock the data and discover hidden patterns or insights that you don’t have access to otherwise. If you think about it, there’s so much data that exists today — whether it’s in your structured enterprise systems or your unstructured systems out in the web, on social media, from different sources – that by being able to bring those data elements together and understanding the context and what the user is trying to achieve, you can actually help discover patterns or trends and enable the user to do their jobs even better.

I talked about sourcing and contract negotiations; you can think about it in terms of ethical sourcing, too, and how you find the right types of suppliers that are ethically appropriate for your business. So that's the second one: it's around unlocking the data and discovering hidden patterns and insights.

The third one is around driving retention of talent and knowledge. A lot of companies are facing an ageing workforce where people are beginning to retire, and you don’t want the knowledge that they have to go with them; you want to retain that in your business and your applications.

These types of technologies enable that to happen. Also, being data-driven helps you attract new talent, because everyone wants to work with the latest and greatest technologies, and so this becomes a way for new talent to be drawn to your business.

And the fourth, and the most important, is around driving better business outcomes. This can be in the form of efficiency, it can be in the form of automating repetitive tasks, or it can be in the form of driving increased accuracy. Together these all allow you to create strategic value for procurement as a function and become a value creator for the business.

Gardner: Julie, you had some other thoughts on accelerating adoption?

Support at the top

Gerdeman: Because so much of what we are about is amazing technology, it can only be successful through effective change management at the organizational level. Embracing the technology is fabulous, but it's on the process and people side that the work happens: structuring the organization, and supporting and enabling folks — whether they are users, category managers, or sourcing managers — to adapt to the change and lead through that change, while gaining support at the highest levels. That's what we have seen be really effective in driving successful digital transformation. It comes through that high level of change management.

Gardner: We’ve often seen patterns where it takes longer for change to happen than we thought, but the impact is even greater than we anticipated.

So, looking out several years for an understanding of what that greater impact might be, my final question to you all is, what’s it going to be like in 10 years? What will astonish us in 10 years?

Outstanding People and processes

Volker: In our world of produce, it’s about the investment. As land and resources become more scarce, we need to be much more productive from the standpoint of being able to deliver a highly perishable mix of products with speed to market. Part of the solution is automation, AI, and digitizing processes, because it’s going to allow us the opportunity to consistently reinvent ourselves, just like any other organization. This will allow us to maintain our competitive advantage in the marketplace.

We are extremely well-suited to pull the consumer and customer along with us in an emerging economy where speed to market and flexibility are mandatory.

Organizations will look completely different, and people in them will be data scientists and strategic consultants.

Gerdeman: I actually think it’s about the people, and I’m most passionate about this. Organizations will look completely different, and the people in them will be data scientists, strategic consultants, and people that we never had the opportunity to use well before because we were so focused on executing tasks associated with procurement. But how the new organizations and the people and the talent are organized will be completely astonishing to us.

Gardner: A workforce of force multipliers, right?

Gerdeman: Absolutely.

Gardner: Shivani, same question, what’s going to astound us 10 years out?

Govil: I agree with Julie on the people side of it. I also think that the technologies are going to enable new things to happen in ways that we can't even imagine today. If you think about what we talked a lot about — AI and machine learning, blockchain and the possibilities it opens, 3D printing, drones — you can imagine sometime in the near future, rather than sourcing parts and materials, you might be sourcing designs and then printing them locally. You might have a drone delivering goods from a local 3D printer to your facility in an as-needed manner, with no advance ordering required.

I think that it’s just going to be amazing to see where this combination of technology and skills will take the procurement function.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Bridging the educational divide–How business networks level the playing field for those most in need

The next BriefingsDirect panel discussion explores how Step Up For Students (SUFS), a non-profit organization in Florida, has collaborated with SAP Ariba to launch MyScholarShop, a digital marketplace for education that bridges the information gap and levels the playing field for those students most in need.

Now assisting some 10,000 K-12 special needs and low-income students, the user-friendly marketplace empowers parents and guardians to find and purchase the best educational services for their children. In doing so, it also helps maximize availability of scholarship funds to enhance their learning.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share more about how this first-of-a-kind solution actually works, are panelists Jonathan Beckham, Vice President of Technology Strategy and Innovation at Step Up For Students in Jacksonville, Florida; Mike Maguire, Global Vice President of New Market Development at SAP Ariba, and Katie Swingle, a Florida Gardiner Scholarship Program recipient. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mike, there’s no doubt that technology has transformed procurement. We’ve gone from an emphasis on efficiency and spend to seeking better user experiences and more analytics capabilities. We’re also entering a new era where we see that businesses are trying to do “good,” in addition to doing “well.”

You had a very personal revelation about this a few years ago. Tell us why doing well and doing good can go hand-in-hand.

Maguire: I was thrilled to have the opportunity to work with Jonathan and the SUFS team for both personal and professional reasons. First, I am a parent of a special needs young adult. My wife, Carole, and I have a 19-year-old daughter, Allyson, and we have lived with having no special needs solutions out there that help optimize the spend for such extra things as tuition, educational supplies, and services.

If you go to a hospital for surgery or you need medications, there’s always somebody there to help you with the process. But when you go into this world of tuition reimbursement and educational optimization, there’s no guidance for how that spend should be effectively executed. So now, many years later in my professional life, it is terrific to have the opportunity to use a solution like SAP Ariba SNAP to help SUFS in their mission and open that up to parents through the Ariba supplier network.

Gardner: Tell us how cloud applications and the SAP Ariba business network platform are structured and architected in a way that lends them to this kind of marketplace-plus benefit.

Maguire: Networks and cloud apps at their very core are about connecting people, processes, and information in a way that's simple and transparent to all those who are involved, with the outcome of making smart choices. We've done this for multinational corporations for years. They end up saving money on their bottom lines by having good information to make smart choices. Now we're doing the exact same thing to optimize the bottom line for families.

Gardner: Jonathan, at SUFS, you probably faced the same kinds of challenges that many businesses do. They don’t want manual processes. They don’t want to be bogged down with time-consuming approaches. They need to broaden their horizons, to see all available assets, and then analyze things better. But were there particular problems that you were trying to solve when it came to using marketplaces like Ariba’s?

Optimized Opportunities

Beckham: We’re trying to solve a lot of problems by optimizing processes for our families. It’s very important to us that we choose a partner that provides a really great user interface (UI) and user experience (UX). You know, we’re all about not just optimizing our bottom line — like you think of for traditional corporations — we’re about optimizing the experience.

Any funds or any resources that we gain, we’re about putting those back into the families, and investing those, and helping them to accelerate their educational path or learning goals. So that was really something that we were looking to do and use this process for.

Gardner: Tell us about your organization and MyScholarShop. Was this something that depended on electronic digital marketplaces at the outset, or was it something you have now greatly enhanced?

Beckham: At SUFS, we provide scholarships for low-income and special needs students in kindergarten through grade 12. As part of that, we administer a program called an educational savings account. That allows parents and students to customize their learning options: to go out and buy instructional materials, buy curriculum, pay tuition, or purchase technology as part of that process. It has largely been a reimbursement process for families. They go out and purchase services — using their own funds — and then seek reimbursement.

We were then really searching for a platform — something to change that model for us. The number one need was to not have to take money out of our families’ pockets. And then number two was to connect them with high-quality providers and suppliers so they can find better options.

Gardner: In a business environment, it's about matching buyers and sellers — and then bringing a value-add to that discussion, with collaboration. This also powerfully enhances the ability of people who are looking for the right place to donate scholarships and to provide educational support. How has the network helped on the seller side, if you will, when it comes to non-profits and charitable organizations? Do they see this as beneficial, too?

Suppliers Sought, and Found

Beckham: Absolutely. We’ve had a lot of great conversations with suppliers that have approached us, and with some that we’ve approached directly. There are a lot of terrific products that are out there for students with special needs that we wanted to bring into this network. And some of them are already on the Ariba Network, which was great for us.

But, at the same time, one of the things that we looked for is optimizing our spend. From a reporting standpoint, we wanted insights to help negotiate better pricing. And using the Ariba Network does that for us. So when we engage with suppliers, we know if we can get free shipping, or if we get discounts and better payment terms. Those are all things that we can pass on directly to our families and to the students. We’re a non-profit. We’re not looking to make extra money. We’re looking to reduce the cost, labor, and the processes for our families in our program.

Gardner: Katie, your son, Gregory, is a Florida Gardiner Scholarship Program recipient. Tell us how you came to learn about these services, and how they have been beneficial and impactful for you and your family.

Swingle: As a Gardiner Scholarship recipient, we are under the special needs side of what SUFS does. My son is diagnosed with autism. He has been since he was three years old. So it’s been quite a journey for us, lots of ups and downs.

And what we came to find through our journey was needing the right educational environment. We needed the right educational tools if we were going to make progress. And unfortunately public school was just not the right option at that time, especially in those early years when you’re trying to help them the most.

SUFS is the administrator of our scholarship, and that’s how I became involved with them. So we go and we spend our money on tuition, products, and different therapies for Gregory. We pay for them. And then SUFS — because he’s a recipient of the scholarship — reimburses us for those. It’s been absolutely life changing for us.

Once we got Gregory into the right environment, with the school that he is in, with the right therapists, and with the right products — it felt like everything started to come together. All of the disappointment that we had had over and over and over again over the years was starting to go away, and it was exciting.

I was meeting my son for the first time — to be quite honest. We had had so many roadblocks, and all of the sudden this child was blossoming. And it was because we had the financial means from SUFS and from the scholarship to put him in the right environment where he could blossom.

And it’s been amazing ever since then. The trajectory for my child’s life has changed. We went from a pretty dire prognosis to …  I don’t know where he’s going to be, but I know it’s going to be great. And we’re just really excited to be a part of this on the ground level.

Gardner: And for those in our audience who might not be that familiar with autism, there can be a great amount of improvement when the right processes, services, and behavioral therapies are brought to bear. For those who don't understand autism, it is a different way of being "wired," so to speak, but you can work with that. These young folks can learn new ways to solve many problems that they might not have been able to solve on their own. So, getting those services is huge.

Jonathan, are we just talking about scholarships, or are you also allowing families and individuals to find the services? Are we at the point where we're linking services in the marketplace as well as the funding? How does it work?

Share the Wealth of Data

Beckham: That’s a great question. At SUFS we have an amazing department called the Office of Student Learning, and these are tried-and-true educators who have been in classrooms, and administrators that also work with professional development with teachers throughout the State of Florida.

As part of that, they’re helping us to identify some of these high-quality suppliers that are available. They’re really helping us with the SAP Ariba’s Guided Buying capabilities to curate and customize that platform for our individuals. So, we have great visions that we share with SAP Ariba, and we’re very happy to have a partner that is helping to make recommendations around the products and services.

All of the sudden, this child was blossoming. And it was because we had the financial means from SUFS and from the scholarship to put him in the right environment.

For example, if Katie and her family identify a great therapist, or a great technology tool that can help her son, then why can't we make those recommendations to other families in similar situations? It becomes a sort of Amazon-like buying experience — you know, where people who purchase one thing may be interested in purchasing other, similar things.

Identifying those suppliers that are high quality, whose products and services are working for our families – we can now help make recommendations around those.
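As a rough illustration of the "people who purchase one thing may be interested in purchasing other similar things" idea Beckham describes, the short Python sketch below counts co-purchases across a handful of hypothetical orders and suggests related items. The item names, data, and scoring are illustrative assumptions only, not the MyScholarShop or SAP Ariba Guided Buying implementation.

from collections import defaultdict
from itertools import combinations

# Hypothetical order history: each order is the set of items one family purchased.
orders = [
    {"speech_therapy_app", "noise_canceling_headphones"},
    {"speech_therapy_app", "visual_schedule_cards"},
    {"noise_canceling_headphones", "visual_schedule_cards", "speech_therapy_app"},
    {"weighted_blanket", "visual_schedule_cards"},
]

# Count how often each pair of items appears in the same order.
co_counts = defaultdict(int)
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=3):
    # Return items most often bought together with `item` (illustrative only).
    scores = {}
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] = scores.get(b, 0) + count
        elif b == item:
            scores[a] = scores.get(a, 0) + count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("speech_therapy_app"))
# ['noise_canceling_headphones', 'visual_schedule_cards'] on this toy data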

Gardner: Mike, as we know from the business world, marketplaces can develop organically — but they can then go viral. The more buyers there are, the more sellers come on board; and the more sellers there are, the richer the environment – and the more viable the economics become.

Are we starting to see that with autism support services? Some of the recent studies show that somewhere close to one in 40 boys are autistic, and perhaps one in 190 girls are autistic. We’re talking about a fairly large portion of our society, around the world. So, how does this work as a marketplace? And is it large enough to be sustainable?

Autism-Support Savings

Maguire: I think it absolutely is. When we think about the Ariba Network, we're about like-minded people and like-minded causes optimizing their goals. And in the area of disabilities, from what I've seen, technology is a godsend for these kids growing up in this generation.

When you think about technologies and connectedness — which the Ariba Network is all about — in the disabled community, the use of such technologies as driverless cars can bring new levels of freedom to this population of differently abled people. As these children become adults, this is just going to open up to complete independence that the prior generations never knew.

Ariba Network is about like-minded causes optimizing their goals. In the area of disabilities, technology is a godsend for these kids growing up in this generation.

Gardner: It seems to me that if this works for an autism marketplace that there are many other flavors or variations on the theme — whether it’s other sorts of disabilities or learning challenges.

Maguire: An example: I am a board member of the Massachusetts Arc, and we spend most of our time working on policy and legislation for independent skills and options across the full spectrum of a lifespan.

When you become 18 and you are out of the school system, you have the same exact requirement to optimize Social Security disability payments. There are the same exact challenges around an entitlement that a young adult receives at 18 years old, probably with some help from their parents. It goes into their own account because they are now young adults.

How do you optimize that spend, right? How do you optimize that for the different things to make for better life skills and tools? I believe that MyScholarShop could be extended well beyond K-12 because there’s a need for a lifetime of spend optimization for intellectually challenged people.

Gardner: Jonathan, this was introduced in January 2018, and your larger implementation is slated for the 2018-2019 school year. What should we expect in the next year or two?

Beckham: The program we’re talking about with Katie is the Gardiner Scholarship Program, and we have about 10,000 students there. It’s about $100 million in scholarships that we utilize. But next year we’re actually looking to bring in the Florida Tax Credit Program as well.

These are lower-income families, and about 100,000 students, and we’re actually at some $630 million in spend this year. As we grow with this program, and we look for high quality suppliers and providers, we look to bring both of those together ultimately so that we can use all of that data, use all those recommendations to help many, many more families.

Gardner: And the scope beyond Florida? Is this going to be a state, regional, or national program, too?

National Expansion

Beckham: We already have a subsidiary in Alabama. We also work with the State of Illinois. We’ve worked with other states in the past, and we absolutely have plans to help provide this service and help expand this nationwide so we can help many, many more students.

Gardner: Mike, any more to offer in terms of how this expands beyond its base?

Maguire: One of the things that expands is the connectedness to the network. And this is going to unleash availability and capabilities not only for people with intellectual needs, but also for the elderly. I mean, we could talk about this for every piece of the population that has a need for assistance in this space.

Gardner: Katie, any thoughts about where you'd like to see it go, or how you think people should become aware of it?

Swingle: SUFS and other organizations are trying to spread the word about educational choice and education savings accounts like mine, specifically the Gardiner Scholarship or the Florida Tax Credit Scholarship. There are states that don't have anything at all available to families like this. I'm so blessed to live in Florida, which has been one of the more progressive states in offering this kind of service.

I hope the success of the network gets people talking across the nation. They can then push their legislators to start looking into this. I’m just a Florida mom. But there’s a mom in California or Washington State who has no options, and I hope that she would hear about this and be able to push her legislators to open this up to even more families.

Gardner: Jonathan or Mike, this also strikes me as a really great example of a public-private cooperation — of leveraging a little bit of what government can offer but also financial support in a marketplace in the private sector. Let’s tease that out a little bit.

Parent-friendly purchasing

Maguire: I think about this a lot. Traditionally, when a company buys procurement software, it is justified based on all the savings from getting rid of maverick spend and bringing all spend under management, and that's what the Return on Investment (ROI) is based on.

The key piece of that ROI is adoption by end-users. What we’re finding now as we go into the mid-market with good partners like Premikati and SUFS is that you can’t force adoption. But the only way you get the savings in the ROI is if everyone is a procurement services user. And that means you need a good user buying experience that is very natural — and actually fun.

The end-users are thousands of moms and dads. If their user experience is not much fun, if it’s not that easy, it’s not going to be used — and the whole pyramid of results will break down.

We’re now in an environment with SUFS where it’s not about, “Hey, our people in human resources are using the SAP Ariba system,” or, “The sales guy is using the SAP Ariba system.” Their end-users are thousands of moms and dads. And those moms and dads have to have an experience just like they’re buying from home, buying at any website. And if it’s not much fun, if it is not that easy, it’s not going to be used — and the whole pyramid of results will break down.

Gardner: It's like Metcalfe's Law, whereby the value of a network grows with the number of people on it. You have to have the right user experience in order for adoption to take off.
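Gardner's reference to Metcalfe's Law can be made concrete with a little arithmetic: the number of possible connections among n users grows roughly with the square of n, which is why every additional adopter matters so much. The figures below are purely illustrative.

def possible_connections(n_users: int) -> int:
    # Pairwise connections among n users: n * (n - 1) / 2
    return n_users * (n_users - 1) // 2

# Doubling participation far more than doubles the potential interactions,
# which is the economic argument for a buying experience people actually adopt.
for n in (1_000, 2_000, 10_000):
    print(n, possible_connections(n))
# 1000 -> 499500, 2000 -> 1999000, 10000 -> 49995000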

Let’s go back to Jonathan to that public-private sector issue. How does this work in terms of local governments and also in the private sector?

Empowered Education 

Beckham: This is the way that we see educational choice throughout the country happening right now. You see a lot of states that don’t have any options out there for the students. You see some that are running them from the government side of things. And then you see some that are very successful like SUFS — legislated to have an opportunity for these educational choice programs.

But it's run as a very lean non-profit. We take only 3 percent of our funds to administer our program. We're a very highly rated program on Charity Navigator, so we have an organization that's really looking to empower our families, empower our students, and use our funds the best way that we can.

And then we’re able to find really high-quality partners like SAP Ariba to help us implement these things. So you put all those things together and I think you have an amazing program that really helps families.

Gardner: Katie, as a practical matter for other parents who might be intrigued, and who have a special needs student, how might they start to prepare? Where would you say, with 20-20 hindsight, they should begin this process?

Raise your Voice 

Swingle: Let me start with parents in Florida, Arizona, or another state where this is already starting to move. You need to know what services your child is going to need. If, for example, they are going to need occupational therapy, you're going to want to read the reviews, read up a lot on behavior analysis, and get some ideas about what your child might need.

As any autism parent who has shopped for products on multiple websites knows, our kids need all kinds of products. You now have an idea of where you can buy those via learning exchanges. You begin having an idea of what your child is going to need with their funds. And you can really begin gathering your keywords — occupational therapy (OT), Applied Behavior Analysis (ABA) therapy, and physical therapy. You're going to be reading reviews about them on the network and seeing how they might be able to help.

Don’t be afraid to tell your story, but the people who need to hear it are your legislators, your local and state representatives.

For people who are in states that don't have options like we do, you need to be writing your state representatives; you need to be telling your story just like I am. Sometimes there's a little bit of shame, sometimes there's a little bit of embarrassment. I'll be honest. My husband still has a hard time saying the word "autism."

We've been in this game now for seven years, and he still sometimes can't spit it out. It's time to spit it out, it's time to be honest, and it's time to tell your story. Don't be afraid to tell your story, but the people who need to hear it are your legislators; your local and state representatives need to know about this.

They need to know about states like Florida that use SAP Ariba and MyScholarShop. They need to ask, “Excuse me? I live in California or I live in Colorado, why don’t I have this option? Look at what this woman is getting in Florida; look at what this family has in Arizona. I need this here and why don’t we have this?”

Put the pressure on, and don't be afraid. You have a voice, you're a voter, and they are there to represent you. Also give them some enthusiasm, let them meet your child, bring pictures. I brought pictures of my son and said, you know, "Look, this is my child, please help me!" The legwork has already been done by states like Florida and organizations like SUFS and SAP Ariba. Now get your voice up there.

Gardner: What Katie is pointing to is that this is a very repeatable model. Mike, we know that doing well and doing good are very important to a lot of businesses now. How is this not only repeatable, but also extensible to other areas of doing well and doing good?

Principled Procurement

Maguire: Everyone has a purpose and every organization has a purpose. If you don’t, then you’re just wandering around in the woods. What are the pieces of your organization that you really want to have an ethical and moral stand with?

And that's why we've worked with the United Nations Global Compact for fair and decent work. We work with Made in a Free World to stamp out human trafficking, and with companies like Verisk MapleCroft and EcoVadis for sustainable and ethical supply chains.

We try to make sure that procurement with a purpose is actually in action at SAP Ariba because we like to oversee what’s actually happening, and we have the capability through the network — and through the transparency the network brings — to actually look, see, measure, and make some change.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Pay-as-you-go IT models provide cost and operations advantages for Northrop Grumman

The next BriefingsDirect IT business model innovation interview explores how pay-as-you-go models have emerged as a new way to align information technology (IT) needs with business imperatives.

We’ll now learn how global aerospace and defense integrator Northrop Grumman has sought a revolution in business model transformation in how it acquires and manages IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore how cloud computing-like consumption models can be applied more broadly is Ron Foudray, Vice President, Business Development for Technology Services at Northrop Grumman. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What trends are driving the need to change how IT is acquired? People have been buying IT for 40 or more years. Why a change now?

Foudray: Our customers, who are primarily in the government sector across the globe, understand the dynamic nature of how IT and technology innovation occurs. It can be a very expensive investment to maintain and manage your own infrastructure as part of that.

In parallel, they see the benefits of where technology is going from a cloud perspective, and how that can drive innovation — and even affordability. So there is a cultural transformation around how to do more relative to IT and where it’s going.

That gets to the point you were just making in your opening comments about how we transform the business model and provide that to our customers, who traditionally haven't thought about those business models.

Gardner: I suppose this is parallel to some creative financing trends we saw 10 or 15 years ago in other sectors – manufacturing and transportation, for example – where they found more creative ways of sharing and spreading the risk of capital.

Pay-as-you-go or buy?

Foudray: I think it’s a great analogy. You can look at it as if you are going to lease a car instead of buying one. In the future, maybe we don’t buy cars; maybe we just access them via Uber or Lyft, or some other pieces. But it’s that kind of transformation and that kind of model that we need to be willing to embrace — both culturally and financially — and learn how we can leverage that.

Gardner: Ron, tell us about Northrop Grumman and why your business is a good fit for these new models.

Foudray: I have been in the aerospace and defense market for 36 years. Northrop Grumman clearly is a market-leading, global security company, and we focus primarily on building manned and unmanned platforms.

We have as part of our portfolio the sensors that go along with those platforms. You may have heard of something called C4ISR, for Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance. It's those types of sensors and systems that we bring to the table.

In my portfolio, on the technology services side, we are also providing differentiated capabilities for how we support, maintain, upgrade and modernize that infrastructure. That includes the capabilities of how we can provide the services more broadly to our customers. So we focus primarily on five core pillar areas: autonomous systems, strike platforms, logistics, cyber-security, and C4ISR.

Gardner: You are not only in the delivery of these solutions, but you are an integrator for the ecosystem that has to come together to provide them. And, of course, that includes IT.

Foudray: Exactly. In fact, sometimes when I go talk to a customer, it's like we're Northrop Grumman Information Technology. They are trying to connect the dots. So, yes, I think of Northrop Grumman not only as the platforms, sensors, and systems, but as the enterprise IT infrastructure as well.

The edge for our war fighters is anywhere that their systems and sensors are being deployed.

That comes with the digital transformation that’s been ongoing inside of our war-fighting apparatus around the world for some time. And so when you hear about the [transformation] of things in the data center or at the edge — well, the edge for our war fighters is anywhere that their systems and sensors are being deployed.

We need to be able to do more of that processing, and that storage, in real time, at that closer point-of-need. We therefore need to be driving innovation with enterprise IT on how to connect into and leverage that all back across those systems, sensors, and platforms.

When you put it in that context — the digital interconnectedness that we have, not just as a society but in a war-fighting sense as well — it becomes more and more clear why an integrator, a company like Northrop Grumman, wants to drive enterprise IT innovation and solutions. By doing so, we can drive essentially the three things I think all customers are looking for, which are mission effectiveness, mission efficiency, and affordability.

Gardner: The changes we have seen in IT and software over the past decade — of Software-as-a-Service (SaaS) and other cloud-driven models — make a lot of sense. You pay as you consume. You may not own the systems; they are in somebody else’s data center, typically referred to as the cloud.

But I’m going to guess that in your business, public cloud isn’t where you are going to put your data centers – this is probably more of an on-premises, close to the point of value, if you will, deployment model. So how do you translate SaaS consumption models and economics to an on-premises data center?

Control and compliance in the cloud?

Foudray: You are astute in pointing that out, because government customers traditionally have had a greater need for a level of control and compliance. With those types of data and applications — whether it’s the clearance level of the information or just the type of information that’s being collected — there is sensitivity.

That said, there are still some types of information — back office type of things – that may be appropriate for a public cloud that you could commingle with today. But very clearly there is more and more of a push for that on-premises solution set.

When our customers begin thinking about cloud — and they are modeling their enterprise on a cloud capability — they tend to ask, "Well, how can I get the same affordability outcomes that a public cloud provider is going to be able to offer?" The public cloud providers are amortizing their costs across all of those other customers, versus an on-premises solution that is only theirs.

The business model innovation is that consumption-based, on-premises solution that gets more creative on how you look at the residual values.

And so the business model innovation that we are talking about and driving is that consumption-based, on-premises solution, which gets more creative about how you look at residual values. In our space, there's a lot of digital data that won't come back into the equation and so can't realize residual value. Compare that with the leased car we talked about earlier: even if you go over 30,000 miles, it still has value after your lease period.

In a lot of cases in the government environment, depending on where it lives, those digital fingerprints are going to have to stay on the customers’ side or get destroyed, so you can’t assume that into the model.

There are a lot of different variables driving it. That’s where the innovation comes in, and defines how you work as an integrator. With partners — like we see with Hewlett Packard Enterprise (HPE) and others in the marketplace — we can drive that innovation.
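To make the residual-value and amortization point concrete, here is a hypothetical back-of-the-envelope comparison of owning infrastructure outright versus a consumption-style arrangement. Every figure, including the provider markup, is an illustrative assumption rather than actual vendor or program pricing.

# Hypothetical figures for illustration only.
purchase_price = 1_200_000      # upfront CapEx for the gear
residual_value = 180_000        # what the hardware might fetch after the term
term_months = 36
avg_utilization = 0.45          # share of owned capacity actually used

# Ownership: you pay for all installed capacity whether or not it is used.
# In many government settings the residual value cannot be recovered because
# media stay on-site or are destroyed, so the full price is amortized.
owned_monthly = purchase_price / term_months
owned_monthly_with_residual = (purchase_price - residual_value) / term_months

# Consumption model: the bill tracks what is actually consumed; the 1.25
# factor stands in for an assumed provider margin and service overhead.
consumed_monthly = (purchase_price / term_months) * avg_utilization * 1.25

print(round(owned_monthly), round(owned_monthly_with_residual), round(consumed_monthly))
# 33333 28333 18750 with these made-up numbers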

Gardner: In a case where there's a major government or military organization, they may want to acquire on a pay-per-use basis, but the supply chain that supports that might want to be paid upfront on a CapEx basis. How are you able to drive this innovation in end-pricing and in economics for entire solutions that extend back into such supply chains? Or are you stuck in the middle?

Trusted partners essential

Foudray: That hits on a very core part of the challenge, and why having a partner that is going to help you provide the IT infrastructure is so important — not just in terms of managing that supply chain holistically but in having a trusted partner, and making sure that the integrity and the security of that supply chain is maintained. We haven’t talked about the security element yet, but there is a whole cybersecurity piece of that supply chain from an integrity perspective that has to be maintained as well.

The more trust you build up in that partnership, and across those relationships with your downstream suppliers, the better. That trust extends to how they are getting paid and the terms associated with that: working out those terms, conditions, and parameters upfront, and getting them laid in so that the desired expectations are met. Then you must work with your customer to set the right expectations on their terms and conditions to provide them a new consumption-based model. It's all, from an agreement perspective, very closely aligned.

Gardner: Is there something about newer data center technology that is better tuned to this sort of payment model change? I’m thinking of software-defined data center (SDDC) and the fact that virtualization allows you to rapidly spin-up cloud infrastructure applications. There’s more platform agility than we had several years ago. Does that help in being able to spread the risk because the IT vendors know that they can be fleet and agile with their systems, more than in the past?

Hardware clearly is an enabling feature and function, but software is what’s really driving digital transformation … not just on the technology side, but also on the business side and how it’s consumed.

Foudray: We do a lot from a software perspective as a systems integrator in the defense market space. Software is really the key. Hardware clearly is an enabling feature and function that’s driving that, but software is what’s really driving digital transformation. And that element in and of itself is really what’s helping to transform the way that we think about innovation — not just on the technology side, but also on the business side, and in how it’s consumed.

We are putting a lot of energy into software transformation, as part of the digitization aspect — not just in terms of how quickly we can provide those drops from an agile development, DevOps, development-security-operations (SecOps) perspective, but in terms of the type of services that are delivered with it, and how you look at it.

Changing the business model in parallel risks offending engineering principle 101: never introduce more than one key change at a time. You have to be careful, culturally, depending on the organization that you are interacting with, that you are not trying to drive too much change in adoption patterns at the same time.

But you are right to hit on the software. If I had to pick one element, software is going to be the driver. Next is the culture — the human behavior, of where someone lives, and what he or she is used to. That’s also going to be transformative.

Gardner: For mainstream enterprises and businesses, what do you get when you do this? What are some of the payoffs in terms of your ability to execute in your business, keep your customers satisfied, and maybe even speed up innovation? What do you get when you do this acquisitions model transformation thing right?

Scale in, scale out, securely

Foudray: First, it’s important to recognize that you don’t lose control, you don’t lose compliance, and you don’t lose those things that traditionally may have caused you not to embrace [these models].

What you get is the ability to leverage innovation from a technology perspective as it happens because your provider is going to be able to scale in and scale out technology as needed. You are going to be able to provision more dynamically in such an environment.

You get the ability to leverage innovation from a technology perspective as it happens.

If you have the right partner in your integrator and their provider, you should be able to anticipate and get in front of the changes that drive today's scalability challenges, so you can get the provisioning and the resourcing that you need. You are also going to have much better predictability about where you need to be on the financial elements of your system.

There are some other benefits. If you implement it correctly, not only are you going to get the performance that you need, your utilization rates should go way up. That’s because you are not going to be paying for underutilized systems as part of your infrastructure. You will see that added affordability piece.

If you do it right, and if you pick integrators who are also tying in the added dimension of security, which we very much are focused on providing, you are going to get a high level of compliance with the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF). On the US side, there is also the National Defense Authorization Act, which requires organization and agency heads to certify that their enterprise is at a certain level of hygiene. If you have implemented this correctly, you should be able to instrument your environment in such a way that at any given time you know what level of security you are at, from a risk perspective.

There are a lot of benefits you get for cost, schedule, and performance — all of that tied together in a way that you never would have been able to see from an ecosystem perspective, all at the same time. You may get one or two of those, but not all three. So I think there are some benefits that go along those lines that you are going to be able to see as a customer, whether you are in the defense space or not.
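Foudray's earlier point about instrumenting the environment so you always know your security posture can be sketched as a simple rollup over automated control checks. The control identifiers below are examples drawn from NIST SP 800-53, while the pass/fail data, scoring, and threshold are hypothetical placeholders, not the Risk Management Framework itself.

# Hypothetical control-check results, e.g. fed by scanners or agents.
control_status = {
    "AC-2 account management": True,
    "AU-6 audit review": True,
    "CM-6 configuration settings": False,
    "SI-2 flaw remediation": True,
}

def compliance_score(status: dict) -> float:
    # Fraction of tracked controls currently passing (illustrative rollup).
    return sum(status.values()) / len(status)

score = compliance_score(control_status)
print(f"{score:.0%} of tracked controls passing")
if score < 0.9:  # threshold is an assumption, set by the organization
    print("Posture below target -- flag for remediation")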

Gardner: Yes, I think we're going to see these models across more industry ecosystems and supply chains. Clearly vendors like HPE have heard you. They recently announced some very innovative new flex-capacity types of pricing, and GreenLake-branded ways to acquire technology differently in most markets.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A tale of two hospitals—How healthcare economics in Belgium hastens need for new IT buying schemes

The next BriefingsDirect data center financing agility interview explores how two Belgian hospitals are adjusting to dynamic healthcare economics to better compete and cooperate.

We will now explore how a regional hospital seeking efficiency — and a teaching hospital seeking performance — are meeting their unique requirements thanks to modern IT architectures and innovative IT buying methods.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us understand the multilevel benefits of the new economics of composable infrastructure and software-defined data center (SDDC) in the fast-changing healthcare field are Filip Hens, Infrastructure Manager at UZA Hospital in Antwerp, and Kim Buts, Infrastructure Manager at Imelda Hospital in Bonheiden, both in Belgium. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends disrupting the healthcare industry in Belgium? Filip, why do things need to change? Why do you need to have better IT infrastructure?

Hens: That’s a good question. There are many up-and-coming trends. One is new regulations around governance, which is quite important. Due to these new rules, we are working more closely together with other hospitals to share more data, and therefore need better data security. This is one of the main reasons that we need to change.

In Belgium, we have many hospitals, with some of them only a few kilometers apart. Yet there have been very few interactions between them.

New demands around the augmentation of services mean that patient data is a growing concern. So it's not only about the needs of new governance, but also about the demand to provide better medical services across hospitals.

Gardner: Kim, how are the economics of healthcare — of doing more with less — an ongoing requirement? How are you able to conserve on the costs?

Buts: We are trying to do everything we can within our financial possibilities. We are constantly looking for good solutions that are affordable. The obligation to work in a [hospital] cluster presents us with a lot of new challenges.

A major challenge for us was around security. We have invested hugely in security. Many of the new applications are now shared across the hospital cluster. So we chose to take on the role of innovator. And to continue innovating, we have to spend a lot of money. That was not foreseen in the annual budget. So we took advantage of Hewlett Packard Enterprise’s (HPE’s) new financial services approaches, to make things happen much faster than usual.

Gardner: We'll get back to some of those services, but I'd like to help our readers and listeners better understand this interesting combination of needing to compete — that is, to attract patients — while at the same time cooperating and sharing data across the hospital cluster. Filip, tell us about UZA and how you're unique compared to a regional hospital. What makes you different?

Sharing is caring, and saving

Hens: Our main focus remains patient care, but for us it is not necessarily general medicine. It is more the specialist cases, for such things as specialized surgery. That is our main goal. Also we are a teaching hospital, so we have an emphasis on learning from patients and from patient data.

Gardner: You have unique IT and big data requirements from your researchers. You have a more intense research and development environment, and that comes with a different set of IT requirements?

Hens: Yes, and that is very important. We are more demanding of the quality of the data, the need to gather more information, and to provide our researchers a better infrastructure platform.

That is one difference between a general hospital and a university hospital. A teaching facility has more complex patient analytics requirements, the need for complex data mining and stuff like that.

Gardner: Kim, how are you in your healthcare cluster now able to share and cooperate? What is it that you’re sharing, and how do you that securely to creating better healthcare outcomes?

Buts: A big difference for us is financial. Since we are a smaller hospital, we must offer a very broad portfolio of treatments. That means we need to have a lot of patients to then have enough income to survive. The broad offering, that portfolio of treatments, also means we are going to need to work more together with the other cluster members.

We are now trying to buy new IT equipment together, because we cannot afford to each buy for every kind of surgery, or for every kind of treatment. So we have combined our budgets together and we are hosting different things in our hospital that are then used by the other cluster members, too.

Financially, due to the regulations, we have less income than a university hospital. The benefits of education funding do not get to us. We only get income from patients, and that is why we need to have a broad portfolio.

Hens: Unlike a general hospital, we have income from the government and we also have an income flow from scientific research. It is a huge amount of funding. That is really what makes us different. That is why we need to use all of that data, to elaborate on the scientific research from the data.

If not an advantage, it is an extra benefit that we have as a university hospital. In the end, it is very important that we maintain and add extra business functionality via an updated IT infrastructure.

If we maintain those clusters well — the general hospitals together with the university hospitals — then those clusters can share among themselves how to best meet patient needs, and concentrate on using the budget as sparingly as possible.

Robust research, record keeping, required

Gardner: You are therefore both trying to grapple with the use and sharing of electronic medical records (EMR) applications. Are you both upgrading to using a different system? How are you going about the difficult task of improving and modernizing EMR?

Buts: One big difference between our hospitals is our doctors; they are working for the hospital on a self-employed basis at Imelda. They are not employees of the hospital as at UZA. The demands of our doctors are therefore very high, so we have to improve all of our facilities — and our computer storage systems — very fast.

We try to innovate for the doctors, so we have to spend a lot of money on innovation. That is a big difference, I think, from the university hospitals, because the doctors are employees there.

Gardner: How does that impact your use of EMR systems?

Buts: We are in the process of changing. We are looking for a new EMR system. We are discussing and we are choosing, but the demands of the doctors are sometimes different from the demands of the general hospital management.

Gardner: Filip, EMR, is that something you are grappling with, too?

Hens: We did the same evaluations and we have already chosen a new EMR. For us, implementing an EMR is now all about consolidation of a very scattered data landscape, of moving toward a centralized organization, and of centralizing databases for sharing and optimization of that data.

There is some tension between what physicians want and what we as IT can deliver with the EMR. Let's just say it is an opportunity: an opportunity to understand each other better, to know why they have high demands, and why we have other demands.

That comparison between the physicians and us IT guys makes it a challenging landscape. We are busier with the business side and with full IT solutions, rather than just implementing something.

It is not just about implementing something new, but about adapting to a new structure for people. Our people are rethinking how everybody's role is changing in the hospital, and what is needed for interaction with everybody. So, we are in the process of that transformation.

Gardner: What is it about the underlying IT infrastructure that is going to support the agility needed to solve both of your sets of problems, even though they are somewhat different?

Filip, tell us about what you have chosen for infrastructure and why composable infrastructure helps solve many of these business-level challenges.

Composable confidence

Hens: That is a good question, because choosing a solution is not like going to the supermarket and just buying something. It is a complex process. We still keep data storage and computing power separate.

We still separate that kind of stuff because we want to concentrate on the things that really bring added value, and that are also trustworthy. For us, that means virtualization on the server and network platforms, to make it more composable.

A more software-defined and composable approach will make us more independent from the underlying hardware. We have chosen for our data center the HPE Synergy platform. In our opinion, we are ready because after many years as an HPE customer — it just works.

For me, knowing that something is working is very important, but understanding the pitfalls of a project is even more important.

And for me, knowing that something is working is very important, but understanding the pitfalls of a project is even more important. For me, the open discussion that you can have with HPE about those pitfalls, of how to prepare for them and how to adapt your people to know what’s to come in the future — that is all very important.

It's not only a decision about the metal, but also about what the weaknesses in the metal are and how we can overcome them. That is why we stick with HPE — because we have a good relationship.

Gardner: Kim, what are you doing to modernize, but also to innovate around those all-important economic questions? How are you using pay-as-you-go models to afford more complex technology, and to advance how you serve your customers?

One-stop shopping

Buts: The obligations of the new hospital-cluster regulations had a huge impact on our IT infrastructure. We had to modernize. We needed more compute power and more storage. When we began calculating, it showed us that replacing all of the hard drives at one time was the best option, instead of spreading it over the next three to four years.

Also, the new workload demands on the infrastructure meant we needed to replace it as fast as possible, but the budget was not available at our hospitals. So HPE Financial Services provided us with a solution that meant we could replace all of our equipment on very short notice. We exchanged servers, storage, and our complete network, including our Wi-Fi network.

So we actually started with a completely brand new data center thanks to the financial services of HPE.

Gardner: How does that financing work? Is it a pay-as-you-go model, or are payments spread over time?

Buts: It’s spread over the coming five years. That was the only solution that was good for us. We could not afford to do it any other way.

Gardner: So that is more like an operating costs budget than an upfront capital outlays budget?

We actually started with a completely brand new data center thanks to the financial services of HPE. We could not afford to do it any other way.

Buts: Yes, and the other thing we wanted to do was do everything with HPE — because they could offer us a complete range of servers, storage, and Wi-Fi networking. That way we could reduce the complexity of all our work, and it guaranteed us a fast return on the investment.

Gardner: It is all more integrated, upfront.

Buts: Yes, that is correct.
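As a rough sketch of what spreading an outlay over five years can mean for the budget, the arithmetic below converts a single hypothetical refresh cost into level annual payments using the standard annuity formula. The amount and rate are made-up assumptions, not the hospital's actual agreement with HPE Financial Services.

def level_annual_payment(principal: float, annual_rate: float, years: int) -> float:
    # Standard annuity formula: equal payments that repay principal plus interest.
    if annual_rate == 0:
        return principal / years
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

# Hypothetical: a 2,000,000 euro refresh financed over 5 years at 4 percent.
payment = level_annual_payment(2_000_000, 0.04, 5)
print(round(payment))  # roughly 449,254 per year instead of one upfront outlay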

Gardner: At UZA, what are you doing to even further modernize your infrastructure to accommodate more data, research, sharing, and security?

Hens: It is not about what I want to deliver; it is about what the business wants that we can deliver, and what we can together deliver to the hospital. So, for me, the next step is the EMR program.

So, the next step is implementing the EMR, looking for the outcomes from it, and offering something better to end users. Then those outcomes can be used to further modernize the infrastructure.

That, for me, is the key. I will not necessarily say that we will buy more HPE Synergy. For me, the process I just described is what will set the margins of what we will need.

Gardner: Kim, now that you have a new data center, where do you take it next in terms of people, process or even added technology efficiencies? Improved data and analytics, perhaps?

Cloud in the Cluster?

Buts: That is a difficult one because the cluster is very new for us. We are still looking at good ways to incorporate and decide where the data is going to be placed, and what services are going to be required.

It is still brand new for us, and we have to find a good way to incorporate it all with the different hospital cluster members. A big issue is how we are going to exchange the critical patient data, and how we are going to store it safely and securely.

Gardner: Is cloud computing going to be a part of that?

Buts: I do not know. Everything is "cloud" now, so maybe. I am not a huge fan of public cloud. If you can stay in a private cloud, yeah, then okay. But public cloud, I do not know. In a hospital, regulations are so strong and the demands are so high.

Gardner: Maybe a shared private cloud environment of some sort?

Buts: Yeah. I think that could be a good solution.

Hens: For public cloud in general, I think that is a no-go. But with what we are already doing with our EMR, we can work together with a couple of hospitals and choose to build a private cloud at one of our hospital sites.

You do not need to define it as a cloud. Really, it’s like public Internet cloud, but you have to make your IT cloud-aware and cloud-defined inside the walls of your hospital. That is the first track you need to take.

Buts: That is why, in our hospital cluster, we chose to host a lot of the new applications on the new hardware. It gave us the ability to learn and adapt quickly to the new innovations. And for the other hospitals, we are now becoming a kind of service provider to them. That was a big change for us, because now we are more of a service-level agreement (SLA)-driven organization than we used to be.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Retail gets a makeover thanks to data-driven insights, edge computing, and revamped user experiences

The next BriefingsDirect Voice of the Customer vertical industry disruption solutions interview explores how intelligence, edge computing, and a rethinking of the user experience come together to give retailers a business-boosting makeover.

We’ll now learn how Deloitte and Hewlett Packard Enterprise (HPE) are helping traditional retailers — as well as hospitality organizations and restaurants — provide a more consistent, convenient, and contiguous user experience across their businesses.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help to define the new digitally enhanced retail experience are Kalyan Garimella, IoT Manager at Deloitte Consulting, and Jeff Carlat, Senior Director of Technology Solutions at HPE. The interview is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jeff, what are the top trends now driving the amazing changes in retail?

Carlat: First off, I want to clear the air. Retail is not dead. Everywhere I go I hear that the retailer is dead, no more brick and mortar. It’s a fallacy. There is a retail apocalypse out there, but quite honestly 85 to 90 percent of purchases still go through the brick-and-mortar retailer.

The retail apocalypse does apply to brick-and-mortar stores that are failing to transform and fully embrace the digitalization expected by consumers today. We are here to do something about it.

Gardner: Kalyan, user experiences have always been important. You can go back to Selfridges in London more than 100 years ago. People understand the importance of user experience. What’s different now in the digital age?

Garimella: Unfortunately, if you think about it, going back over the past four decades, retailers have relied on brand names and the strength of their merchandise to attract more customers. They never really differentiated themselves through the experiences they were creating versus what their competitors were creating.

With the advent of changing customer demographics — with Millennials, Gen Ys, Gen Xs coming into the picture — retailers now need to produce a more customized shopping experience. They need to give shoppers a reason to escape their online retail channels, to come to brick-and-mortar shops and make more purchases there. It’s high time we give that to them — and make them come back to the stores.

Gardner: There are still things in the physical world that need to remain in the physical world, right, Jeff?

Virtual-real hybrid

Carlat: Exactly right! Take me, for example. We recently bought a new house and I wanted to get a nice La-Z-Boy chair. I’m the kind of guy who’s not going to just push a button on a computer or a handheld to buy a new chair. I’m going to want to go sit in it. I want to know is this right for me, and so I go to a traditional brick-and-mortar outlet.

Yes, I may do my research [online]. I may actually end up [online] doing my purchase and having it shipped directly to my home. But while I’m at the store, I want to have an experience — an immersive experience — that’s going to help suggest to me, “Oh what’s the perfect side table that should go with that? What’s the complementary piece of art that actually matches the fabric?”

I want the capability to know what that chair will look like in my own decor, via virtually imposing that chair into my environment. That’s where the world is going. Those are the demands of the new retail environment, and they will separate those that continue to thrive in the retail environment from those that suffer and decline.

Gardner: And, of course, the people in that physical environment might actually know quite a bit about the purchase, which you could benefit from. They have been doing this for some time. There is a consultative effect to the interaction when you are in a sales environment.

Garimella: People are always going to be a key asset, no matter the setting and no matter the industry. If we can complement the knowledge that already exists in the retail stores with intelligence, the analytics and data that go along with it — that's a powerful combination. We want to provide that.

That’s why we are talking about helping brick and mortars attract more customers — not just by increasing the customer experience and optimizing your digital store operations — by combining data and insights, and not relying only on opinions.

Gardner: Is that what we mean by cross-channel experiences, Jeff?

Easy as 1-2-3

Carlat: We, together with Deloitte, are delivering in early 2018 the Connected Consumer for Retail offering. It’s definitely a cross-channel experience. This takes the cross-channel experience and enhances it for the brick-and-mortar environment.

The Connected Consumer for Retail offering is based on three core principles. Principle number one is providing that enhanced customer experience, that immersive experience, which ultimately increases revenues and basket sizes for retailers.

The second principle is based on optimizing in-store operations. How do you ensure that you have the right amount of stock — not overstocking and not under-stocking? How do you reduce the amount of lost inventory? This Connected Consumer offering will help reduce shrinkage and cost structures in a brick-and-mortar environment.

And finally, as Kalyan mentioned, the third key principle is around driving new insights from the in-store analytics. That data and intelligence is derived from the customers — coming through video-location analytics and all kinds of integration into social networks. You can know so much more about the customer, and then give that customer a personalized experience that brings them back and increases brand loyalty.

Gardner: I suppose it’s important to connect all of the dots across an entire shopping ecosystem process – from research to purchase to installation to service. Is that what we need?

Garimella: Absolutely, and that is what we refer to as an omni-channel experience, or a unified commerce experience. Our customers these days expect a seamless continuous shopping experience — be it online or in a store. If you can create that consistent behavior and shopping experience, that is a powerful channel to attract even more customers.

There are many retail concepts very much in demand right now, such as online delivery or pickup at the store. Or you can order in-store and have delivery to your house. Or you can order in one store and pick up in other stores, if the inventory is not currently available in the initial store.

So whatever channel they choose, you can provide value in each of those steps back to the customer – and in doing so you are attracting loyalty, you are building the brand. And that is a powerful medium.

Gardner: And the more interactions, the more data, the more feedback, the more analysis, and the better the experience. It can all tie together.

Let's talk about how the technology accomplishes that. You mentioned a new retail initiative at HPE in partnership with Deloitte. What are the fundamental technology underpinnings that allow this to happen?

Solid foundations for success

Garimella: The Connected Consumer for Retail begins at the infrastructure level — solutions around HPE Aruba, the HPE Edgeline systems portfolio, and other converged infrastructure systems. For location-based analysis, we are using the wireless LAN from Aruba and their Meridian App Platform for mobile. For the security layer, we are using Niara and ClearPass, but we are also working with a set of third-party vendors for radio-frequency identification (RFID) and for video analytics. So it amounts to an ecosystem of the right partners to solve the right business problem for each of those retailers.

Gardner: And, of course, it has to be integrated properly, and that is where Deloitte comes in. How does that come together into an actual solution?

Carlat: This is the beauty of working with a group like Deloitte. They bring together the consultative and advisory capabilities, along with the technical integration needed. Deloitte brings the ability to help the customer figure out how to get started on this journey.

First off, the methodology helps a customer think big about what they can do, then helps them actually build a business plan internally to drive change and get the right business approvals to start changing. Then they proceed to solution execution that starts small – and builds a proof of concept.

In as little as eight weeks, we can deliver value that can then be extrapolated across all of the retail sites. That is what projects the true savings. The proper approach is to think big where you can, then start small, and lastly scale fast across all of the sites.

Gardner: Kalyan, any more to offer on the importance of proper integration at a solutions level?

Garimella: Internet of Things (IoT) is such a complex ecosystem of technologies that you need subject matter experts from each of the technologies — such as RFID, Bluetooth beacons, Wi-Fi, analytics, artificial intelligence (AI), your core enterprise resource planning (ERP) systems, the customer relationship management (CRM) systems, and the list goes on.

That’s where we come in, with the right people, and with the vast resources that we have. That’s deep industry expertise. We come and we look at the problems, create the customer journey for our clients, and then create the right level of systems integration that can help achieve the business objective.

Gardner: Let’s look at some examples. What are some of the ways that retailers are doing things right to improve on that all-important user experience?

Carlat: As a consumer, I know what I like — and I know what I do not like. I have seen overly aggressive advertising, pushiness that repels me as much as waiting in a long line at a retail brick-and-mortar. There needs to be a correct balance, if you will, of suggestive selling, cross-selling, and upselling. But you have to have the right learning, the right analytics, to be right more times than you are wrong. It means providing a value versus becoming a pest.

This new offering allows that balance to be struck. Other best practices would be providing push notifications that issue a discount to get me, as a consumer, over the buying hump, to say, "You know, that is a good deal. I cannot pass this up." Then as a seller, I can naturally dovetail into increasing the basket size, cross-sell, and upsell.

Gardner: How can the brick-and-mortar company better extend itself beyond the threshold of the physical building into the lifestyle, the experience, and the needs of the consumer?

Customized consumer choices

Garimella: You are talking about bringing the retailer into the houses of the customers. That is where the successful online retailers have been. We are working with our brick-and-mortar clients to create similar experiences.

Some of the options to do that would be having a digital voice assistant included in your retailer or shopping app. You could add items to a wish list, look up those items, and see whether they are available close by and which store is nearest to my house. Maybe I could go and check those out instead of waiting a couple of days for them to be delivered.

So those are some of the experiences that we are trying to create — not just inside the brick-and-mortar store, but outside as well.
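To make that concrete, here is a minimal, purely hypothetical sketch of the "nearest store" lookup behind such an app feature: a haversine distance over a small, made-up store catalog. The store names and coordinates are placeholders, not anything from Deloitte's or HPE's actual offering.

```python
from math import asin, cos, radians, sin, sqrt

# Hypothetical store catalog: name -> (latitude, longitude).
STORES = {
    "Downtown": (40.7128, -74.0060),
    "Uptown": (40.7831, -73.9712),
    "Brooklyn": (40.6782, -73.9442),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_store(user_lat, user_lon):
    """Return (name, distance_km) of the closest store to the shopper."""
    return min(
        ((name, haversine_km(user_lat, user_lon, lat, lon)) for name, (lat, lon) in STORES.items()),
        key=lambda item: item[1],
    )

if __name__ == "__main__":
    name, km = nearest_store(40.7484, -73.9857)  # e.g., a shopper in midtown
    print(f"Nearest store: {name} ({km:.1f} km away)")
```

In a real shopping app this lookup would also check live inventory per store, but the distance ranking is the core of the "which store is nearest" experience described above.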

Gardner: Jeff, tell us a bit more about the Connected Consumer for Retail. Where can we find out more information?

Carlat: We are rolling out this offering in Q1 2018. It is being delivered consultatively, initially through Deloitte as the lead. We are happy to come in and do demos, as well as deliver proofs of concept. We are also happy to help build a business model and conduct workshops to understand the best path for retailers to get on the on-ramp to this digital transformation.

The easiest way to get to us is via our websites at either HPE or at Deloitte. We have business leads in all regions, all parts of the world.

Gardner: We have talked mostly about brick-and-mortar retailers, but this applies to hospitality organizations, restaurants, and other consumer services. How should they too be thinking about the user experience and extending it to a life cycle and a lifestyle?

From pain to gain

Garimella: Wherever there’s a possibility of converting a pain point in a customer journey into an engagement point, I think IoT can definitely help. We are calling this the Connected Consumer for Retail for a reason. The same concepts and the same technologies that we have developed for the retail solution can be extended to hospitality, or travel, or food services, et cetera, et cetera.

For example, based on a user's location and proximity, you can use location-based services to create improved experiences that cater to individuals in hospitality and hotels, giving them the right offers at the right time and thereby increasing basket size in those industries.

Gardner: It seems that across these vertical industries we are at the threshold of something that had never been possible before.

Carlat: This is the beginning of a new era for retail. What is clear to me is those retailers that choose to adopt change are going to be the winners — and more importantly those that do not choose to change are going to be the losers.

Garimella: I think Jeff hit it right on. Retail is changing, and changing fast, and other industries will follow suit as well. If you do not put enough emphasis on customer engagement, while also optimizing your operations, you are at risk.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How VMware, HPE, and Telefonica together bring managed cloud services to a global audience

The next BriefingsDirect Voice of the Customer optimized cloud design interview explores how a triumvirate of VMware, Hewlett Packard Enterprise (HPE), and Telefonica together bring managed cloud services to global audiences.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Learn how Telefonica's vision for delivering flexible cloud services capabilities to Latin American and European markets has proven so successful. Here to explain how they developed the right recipe for rapid delivery of agile Infrastructure-as-a-Service (IaaS) deployments are Joe Baguley, Vice President and CTO of VMware EMEA, and Antonio Oriol Barat, Head of Cloud IT Infrastructure Services at Telefonica. The interview is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What challenges are mobile and telecom operators now facing as they transition to becoming managed service providers?

Oriol Barat: The main challenge we face at this moment is to help customers navigate in a multi-cloud environment. We now have local platforms, some legacy, some virtualized platforms, hyperscale public cloud providers, and data communications networks. We want to help our customers manage these in a secure way.

Gardner: How have your cloud services evolved? How have partnerships allowed you to enter new markets to quickly provide services?

Oriol Barat: We have had to transition from being a hosting provider with data centers in many countries. Our movement to cloud was a natural evolution of those hosting services. As a telecommunications company (telco), our main business is shared networks, and the network is a shared asset between many customers. So when we thought about the hosting business, we similarly wanted to be able to have shared assets. VMware, with its virtualization technology, came as a natural partner to help us evolve our hosting services.

Gardner: Joe, it’s as if you designed the VMware stack with customers such as Telefonica in mind.

Baguley: You could say that, yes. The vision has always been for us at VMware to develop what was originally called the software-defined data center (SDDC). Now, with multi-cloud, for me, it’s an operating system (OS) for clouds.

We’re bringing together storage, networking and compute into one OS that can run both on-premises and off-premises. You could be running on-premises the same OS as someone like Telefonica is running for their public cloud — meaning that you have a common operating environment, a common infrastructure.

So, yes, entirely, it was built as part of this vision that everyone runs this OS to build his or her clouds.

Gardner: To have a core, common infrastructure — yet have the ability to adapt on top of that for localized markets — is the best of all worlds.

Baguley: That's entirely it. Like someone said, "If all of the clouds are running the same OS, what's the differentiation?" Well, the differentiation is that you want to go with the biggest player in Latin America. You want to go with the player that has the best direct connections: the guys that can give you service levels that maybe the big cloud providers can't give. They can give you over-the-top services that other cloud providers don't provide. They can give you an integrated solution for your business that includes the cloud — and other enterprise services.

It’s about providing the tools for cloud providers to build differentiated powerful clouds for their customers.

Gardner: Antonio, please, for those of our listeners and readers that aren’t that familiar with Telefonica, tell us about the breadth and depth of your company.

Oriol Barat: Telefonica is one of the top 10 global telco providers in the world. We are in 21 countries. We have fixed and mobile data services, and now we are in the process of digital transformation, where we have our focus in four areas: cloud, security, Internet of Things (IoT), and big data.

We used to think that our core business was in communications. Now we see what we call a new core of our business at the intersection of data communications, cloud, and security. We think this is really the foundation, the platform, of all the services that come on top.

Gardner: And, of course, we would all like to start with brand-new infrastructure when we enter markets. But as you know, we have to deal with what is already in place, too. When it came time for you to come up with the right combination of vendors, the right combination of technologies, to produce your new managed services capabilities, why did you choose HPE and VMware to create this full solution?

Sharing requires trust

Oriol Barat: VMware was our natural choice with its virtualization technologies to start providing shared IT platforms — even before cloud, as a word, was invented. We launched “virtual hosting” in 2007. That was 10 years ago, and since then we have been evolving from this virtual hosting that had no portal but was a shared platform for customers, to the cloud services that we have today.

The hardware part is important; we have to have reliable and powerful technology. For us, it’s very important to provide trust to the customers. Trust, because what they are running in their data centers is similar to what we have in our data centers. Having VMware and HPE as partners provides this trust to the customers so that they will move the applications, and they know it will work fine.

Gardner: HPE is very fond of its Synergy platform, with composable infrastructure. How did that help you and VMware pull together the full solution for Telefonica, Joe?

Baguley: We have been on this journey together, as Antonio mentioned, since 2007 — since before cloud was a thing. We don’t have a test environment that’s as big as Telefonica’s production environment — and neither does HPE. What we have been doing is working together — and like any of these journeys, there have been missteps along the way. We stumbled occasionally, but it’s been good to work together as a partnership.

As we have grown, we have also both understood how the requirements of the market are changing and evolving. Ten years ago providing a combined cloud platform on a composable infrastructure was unheard of — and people wouldn’t believe you could do it. But that’s what we have evolved together, with the work that we have done with companies such as Telefonica.

The need for something like HPE Synergy and the Gen10 stack — where there are these very configurable stacks that you can put together — has literally grown out of the work that we have done together, along with what we have done in our management stack, with the networking, compute, and storage.

Gardner: The combination of composable infrastructure and SDDC makes for a pretty strong tag team.

Baguley: Yes, definitely. It gives you that flexibility and the agility that a cloud provider needs to then meet the agility requirements of their customers, definitely.

Gardner: When it comes to bringing more end users into the clouds for your managed services providers, one of the important things is for end users to move into that cloud with as much ease as possible. Because VMware is a de facto standard in many markets with its vSphere Hypervisor, how does that help you, being a VMware stack, create that ease of joining these clouds?

Seamless migrations

Oriol Barat: Having the same technology in the customer data center and in our cloud makes things a lot easier. First, the customer can be confident that it is going to work well once it is in place. The other thing is that VMware provides us with the tools that make these migrations easier.

Baguley: At VMworld 2017, we announced VMware Hybrid Cloud Extension (HCX), which is our hybrid cloud connector. It allows customers to install software locally that connects at a Layer 2 [network] level, and it reaches right back to vSphere 5.0 environments. Those target clouds today are IBM's and VMware's, but we are extending it to other service providers like Telefonica in 2018.

So a customer can truly feel that their connections and migrations will be seamless. Things like vSphere vMotion across that gap are going to be possible, too. I think the important thing here is that by going down this road, people can take some of the fear out of going to the cloud, because some of the fear is about getting locked in: "I am going to make decisions that I will regret in two years by converting my virtual machines (VMs) to run on another platform." Right here, there isn't that fear; there is just more choice, and Telefonica is very much part of that story of choice.

Gardner: It sounds like you have made things attractive for managed service providers in many markets. For example, they gain ease of migration from enterprises into the provider's cloud. In the case of Telefonica, users gain support, services, and integration, knowing that venerable vendors like VMware and HPE are behind the underlying services.

Do you have any examples where you have been able to bring this total solution to a typical managed service provider account? How has it worked out for them?

Everyone’s doing it

Oriol Barat: We have use cases in all the vertical industries. Because cloud is a horizontal technology, it’s the foundation of everything. I would say that all companies of all verticals are in this process of transformation.

We have a lot of customers in retail that are moving their platforms to cloud. We have had, for example, US companies coming to Europe and deploying their SAP systems on top of our platforms.

For example in Spain, we have a very strong tourism industry with a lot of hotel chains that are also using our cloud services for their reservation systems and for more of their IT.

We have use cases in healthcare, of companies moving their medical systems to our clouds.

We have use cases of software vendors that are growing software-as-a-service (SaaS) businesses and they need a flexible platform that can grow as their businesses grow.

A lot of people are using these platforms as disaster recovery (DR) for the platforms that they have on-premises.

I would say that all verticals are into this transformation.

Gardner: It’s interesting, you mentioned being able to gain global reach from a specific home economy by putting data centers in place with a managed service provider model.

It is also important for data sovereignty, compliance, the General Data Protection Regulation (GDPR), and other issues that this can happen. It sounds like a very good market opportunity.

And that brings us to the last part of our discussion. What happens next? When we have proven technology in place, and we have cloud adoption, where would you like to be in 12 months?

Gaining the edge

Baguley: There has been a lot of talk at recent events, like HPE Discover, about intelligent edge developments. We are doing a lot at the edge, too. When you look at telcos, the edge is going to become something quite interesting.

What we are talking about is taking that same blend of storage, networking and compute, and running it on as small a device as possible. So think micro data centers, nano data centers. How far out can we push this cloud? How much can we distribute this cloud? How close to the point of need can we get our customers to execute their workloads, to do their artificial intelligence (AI), to do their data gathering, et cetera?

And working in partnership with someone who has a fantastic cloud and a fantastic network means that some kind of distributed edge-to-cloud core capability is something that Telefonica and VMware could probably build for customers over the next 12 months. That could be really, really strong.

Gardner: Antonio?

Oriol Barat: In this transformation that all the enterprises are in, maybe we are at the 20 percent execution mark. So we still have 80 percent of the transformation ahead of us. The potential is huge.

Looking ahead with our services, for example, it is very important that the network is also in transformation, leveraging software-defined networking (SDN) technologies. These networks are going to be more flexible. We think that we are in a good position to put together cloud services with such network services, along with security and more software-defined capabilities, and create really flexible solutions for our customers.

Baguley: One example that I would like to add: Imagine that Real Madrid C.F. are playing at home next weekend. In theory, Telefonica could use the compute at the bottom of those network base stations — because of VMware Network Functions Virtualization (NFV), it's no longer specific base station hardware; it's x86 HPE servers in there. They could then turn around to a betting company and say, "Would you like to move your front-end web servers, running in containers, to the base station in Real Madrid's stadium for the four hours of the match that afternoon?" And suddenly they are the best-performing website.

Those are the kind of out-there, transformative ideas that are now possible because new application infrastructures, new cloud infrastructures, the edge, and technologies like the network are all coming together. Those are the kinds of things you are going to see from this kind of solutions approach going forward.
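As an illustration of the base-station idea, here is a rough sketch, using the official Kubernetes Python client, of pinning a containerized web front end to edge nodes for a limited window. The cluster context name, node label (edge-site), namespace, and container image are all hypothetical; nothing here reflects Telefonica's or VMware's actual tooling.

```python
from kubernetes import client, config

# Hypothetical names: an "edge" cluster context and nodes labeled edge-site=stadium.
EDGE_CONTEXT = "telco-edge"
NAMESPACE = "default"

def deploy_edge_frontend():
    api = client.AppsV1Api(config.new_client_from_config(context=EDGE_CONTEXT))
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="match-frontend"),
        spec=client.V1DeploymentSpec(
            replicas=4,
            selector=client.V1LabelSelector(match_labels={"app": "match-frontend"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "match-frontend"}),
                spec=client.V1PodSpec(
                    node_selector={"edge-site": "stadium"},  # run only on base-station nodes
                    containers=[
                        client.V1Container(
                            name="web",
                            image="example/frontend:1.0",
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ],
                ),
            ),
        ),
    )
    api.create_namespaced_deployment(namespace=NAMESPACE, body=deployment)

def tear_down_edge_frontend():
    # Remove the workload once the match-day window is over.
    api = client.AppsV1Api(config.new_client_from_config(context=EDGE_CONTEXT))
    api.delete_namespaced_deployment(name="match-frontend", namespace=NAMESPACE)
```

In practice the deploy and tear-down steps would be triggered by a scheduler or event pipeline tied to the match window; the sketch only shows the mechanics of placing and removing the workload.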

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Infatuation leads to love—How container orchestration and federation enables multi-cloud competition

The use of containers by developers — and now increasingly IT operators — has grown from infatuation to deep and abiding love. But as with any long-term affair, the honeymoon soon leads to needing to live well together … and maybe even getting some relationship help along the way.

And so it goes with container orchestration and automation solutions, which are rapidly emerging as the means to maintain the bliss between rapid container adoption and broad container use among multiple cloud hosts.

This BriefingsDirect cloud services maturity discussion focuses on new ways to gain container orchestration, to better use serverless computing models, and employ inclusive management to keep the container love alive.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help unpack insights into the new era of using containers to gain ease with multi-cloud deployments are our panelists: Matt Baldwin, Founder and CEO at StackPointCloud, based in Seattle; Nic Jackson, Developer Advocate at HashiCorp, based in San Francisco, and Reynold Harbin, Director of Product Marketing at DigitalOcean, based in New York. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Nic, HashiCorp has gone a long way to enable multi-cloud provisioning. What are some of the trends now driving the need for multi-cloud? And how does container management and orchestration fit into the goal of obtaining functional multi-cloud use, or even interoperability?

Jackson: What we see mainly from our enterprise customers is that people are looking for a number of different ways to avoid getting locked into one particular cloud provider. They are looking for high availability and redundancy across cloud providers. They are looking for a migration path from private cloud to a public cloud. Or they want burstable capacity, which means that they can take that private cloud and burst it out into public cloud, if need be.

Containers — and orchestration platforms like Kubernetes, Nomad, and Swarm — are providing standard interfaces to developers. So once you have the platform set up, the running of an application can be mostly cloud-agnostic.

Gardner: There’s a growing need for container management and orchestration for not only cloud-agnostic development, but potentially as a greasing of the skids, if you will, to a multi-cloud world.

Harbin: Yes. If you make the investment now to architect and package your applications with containers and intelligent orchestration, you will have much better agility to move your application across cloud providers.

This will also enable you to quickly leverage any new products on any cloud provider. For example, DigitalOcean recently upgraded our High CPU Droplet plans, providing some of the best value for accessing the latest chipsets from Intel. Users with containerized applications and orchestration could easily improve application performance by moving workloads over to that new product.

Gardner: And, Matt, at StackPointCloud you have created a universal control plane for Kubernetes. How does that help in terms of ease of deployment choice and multi-cloud use?

Ease-of-use increases flexibility

Baldwin: We've basically built a management control plane for Kubernetes that gives you a single pane of glass across all your cloud providers. We deal with the top four: Amazon, Microsoft Azure, Google, and DigitalOcean. Because we provide that single pane of glass, you can build the clusters you need with those providers and you can stand up federation.

In Kubernetes, multi-cloud is done via that federation. The federation control plane connects all of those clusters together. We also manage workloads to balance them across providers: say, some on Amazon Web Services (AWS) and some on DigitalOcean, if you like.

That’s what we have been doing with our star product. We are still on that journey, still building more things. Because it’s moving quite fast, federation is shifting and changing. We are keeping pace and trying to make it all easier to use.

Our whole point is usability. We think that all this tooling needs to become really, really easy to use. You need to be able to manage multi-cloud as if it’s a single cloud.
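To ground the "single pane of glass" idea, here is a minimal sketch that uses the official Kubernetes Python client to read node status from several clusters by switching kubeconfig contexts. The context names are invented; this illustrates the pattern, not StackPointCloud's implementation.

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names, one per cloud provider.
CONTEXTS = ["aws-prod", "do-nyc1", "gke-us-east1"]

for ctx in CONTEXTS:
    # Reuse the same kubeconfig file, switching the active context per cluster.
    api_client = config.new_client_from_config(context=ctx)
    core = client.CoreV1Api(api_client)
    nodes = core.list_node().items
    print(f"{ctx}: {len(nodes)} nodes")
    for node in nodes:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
        )
        print(f"  {node.metadata.name:40s} Ready={ready}")
```

A management plane like the one described above adds cluster provisioning and federation on top, but the basic multi-cluster read path looks much like this.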

Gardner: Reynold, with DigitalOcean being one of the major cloud providers that Matt mentioned, why is it important for you to enable this level of multi-cloud use? Is it a matter of letting the best public cloud services values win? Why do you want to see the floodgates open for public cloud choice and interoperability?

Harbin: Thousands of businesses and over a million developers use DigitalOcean — primarily because of the ease of provisioning and of being able to spin up and manage their infrastructure. This next step of having orchestration tools and containers puts even more flexibility into the hands of developers and businesses.

For customers who want to use data centers on DigitalOcean, or data centers on other providers, we want to enable flexibility. We want developers to more easily burst into public clouds as they need, and gain all the visibility they want in a common way across the various infrastructure providers that they want to use.

Gardner: Developers are increasingly interested in a serverless model, where they let the clouds manage the allocation of machine resources. This also helps in cost optimization. How do the container orchestration and management tools help? How does serverless, and the demand for it, also fit in?

Jackson: Serverless adds an extra layer of complexity, because the different cloud providers have different approaches to doing serverless. A serverless function running on Google or Azure or AWS — they all have different interfaces. They have different ways of deploying, and the underlying code has to be abstracted enough so that it can run across all the different providers. You really have to think about that as a software architecture problem.

Serverless pros and cons 

In my opinion, you would allow yourself to get locked in if you use things like native queuing or Pub/Sub services, which work really well with a particular cloud provider's serverless platform.

One of the recent projects I'm super-excited about is OpenFaaS, by Alex Ellis. What OpenFaaS tries to do is provide that cloud-agnostic method of running functions-as-a-service (FaaS). This is not necessarily serverless, because you still have to manage the underlying servers, but it does allow you to take advantage of your existing Kubernetes, Nomad, or Docker Swarm clusters. It then gives you the developer workflow, which I think is the ultimate end goal, rather than having to think about the complexity of the infrastructure underneath.
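For a sense of that developer workflow, here is roughly what a function body looks like with the OpenFaaS Python template: a single handle() entry point, with the HTTP plumbing and container packaging supplied by the template. Treat it as a sketch and check the template you actually generate; the payload logic here is just an example.

```python
# handler.py -- a minimal OpenFaaS-style function body (Python template).
# The platform invokes handle() with the raw request body and returns the
# result to the caller; the HTTP server and container packaging come from the
# template, so the same logic stays cloud-agnostic.
import json

def handle(req):
    """Echo the payload back with a computed field."""
    try:
        payload = json.loads(req) if req else {}
    except ValueError:
        payload = {"raw": req}
    payload["length"] = len(req or "")
    return json.dumps(payload)
```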

Gardner: Reynold, any thoughts on serverless?

Harbin: I agree. We are on this road of making it easier for the application developer so they don’t have to worry about the underlying infrastructure. For certain applications, serverless can help in that goal, but at the same time you’re adding complexity. You have to think about the application, the architecture, and which services are going to be the most useful in terms of applying serverless.

We want to enable our developers to use whatever technologies will help them the most. And for certain applications, serverless will be relevant. OpenFaaS is really interesting, because it makes it easier to write to one standard, and not have to worry about the underlying virtual servers or cloud providers.

Jackson: The other neat thing about OpenFaaS is the maintainability. When you look at application lifecycle management (ALM), which not enough people pay attention to, serverless is so new that its ALM story is still unknown.

But with OpenFaaS — and one of the things that I love about that platform — you are baking functions into Docker containers so you can run those as standard microservices outside of the OpenFaaS platforms, if you want. So you can see that kind of maintainability. It gives you an upgrade path, despite being completely decoupled from any particular cloud provider’s platform. So you gain flexibility.

If you want to go multi-cloud, you can run OpenFaaS on a federated Nomad or federated Kubernetes cluster and you have your own private multi-cloud FaaS approach, which I think is super cool.
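To illustrate the point about running the same function as a standard microservice outside OpenFaaS, here is a minimal Flask wrapper around the handler from the previous sketch. In OpenFaaS the template's watchdog process plays this role, so this stand-in is an assumption-laden sketch, not the platform's actual mechanism.

```python
# app.py -- running the same handler as a plain microservice, outside OpenFaaS.
from flask import Flask, request

from handler import handle  # the function body from the previous sketch

app = Flask(__name__)

@app.route("/", methods=["POST"])
def invoke():
    # Pass the raw request body to the function, much as the platform would.
    return handle(request.get_data(as_text=True))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Because the function itself stays a plain Python module, the same image can be shipped to a FaaS platform or run as a long-lived service, which is the upgrade path being described.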

Gardner: It sounds as if we would like to see the same trajectory we saw with containers take place with serverless, there is just a bit of a lag there in terms of the interoperability and the extensibility.

Baldwin: There is also the Serverless Framework, which helps abstract the serverless endpoints, whether that is Lambda, Kubeless, Fission, or another platform. Kubeless and Fission are two projects that are more geared toward Kubernetes than the others.

Gardner: Nic, tell us about your organization, HashiCorp. What are you up to?

Simplify, simplify

Jackson: We are all about delivering developer tooling to enable modern applications. We have products like Nomad, which is a scheduler; Terraform, for infrastructure-as-code; Consul, which you can use for key-value configuration and service discovery; Packer, for creating gold master images; and Vault, which is becoming very popular for managing "secrets" and things like that.

We are putting together a suite of products that can make integration super-easy, but they actually work well standalone, too. You could just run Terraform if you want to, or maybe you are just going to use Nomad and Consul, or maybe Consul and Vault. But the aim is that we want to simplify a lot of the problems that people have when they start building highly available, highly distributed and scalable infrastructures.
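As a small, concrete taste of that standalone use, here is a sketch of reading and writing shared configuration through Consul's HTTP KV API with plain requests, assuming a local agent on the default port; the key names and values are placeholders.

```python
# Minimal sketch of Consul's HTTP KV API for shared configuration, assuming a
# local agent listening on the default address (127.0.0.1:8500).
import base64
import requests

CONSUL = "http://127.0.0.1:8500"

def put_config(key, value):
    # PUT /v1/kv/<key> stores the raw bytes and returns true on success.
    resp = requests.put(f"{CONSUL}/v1/kv/{key}", data=value)
    resp.raise_for_status()
    return resp.json()

def get_config(key):
    # GET /v1/kv/<key> returns a JSON list; the Value field is base64-encoded.
    resp = requests.get(f"{CONSUL}/v1/kv/{key}")
    resp.raise_for_status()
    return base64.b64decode(resp.json()[0]["Value"]).decode()

if __name__ == "__main__":
    put_config("myapp/feature-flag", "enabled")
    print(get_config("myapp/feature-flag"))  # -> "enabled"
```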

Gardner: Reynold, tell us about DigitalOcean, and why you are interested in supporting organizations like StackPointCloud and HashiCorp as they better provide services and value to their customers.

Harbin: DigitalOcean is a very intuitive cloud services platform on which to run applications. We are designed to help developers and businesses build their applications, deploy them, and scale them faster, more efficiently, and more cost effectively. Our products basically are cloud services with various configurations to maximize CPU or memory available in our data centers around the world.

We also have storage, including object storage for unlimited scale, and block storage volumes of any size that you can attach, depending on your needs. And then we also include networking services for securing and scaling — from firewalls to load balancing for your applications.

All of these products are designed to be controlled, either through a simplified UI or through a very simple API, a RESTful API, so that tools like Terraform or Kubernetes orchestration through StackPointCloud can all be done through the single pane of glass of your choice. And the infrastructure that underlies it is all controlled via the API.

The reason we are leaning to these kinds of partnerships and tooling is because that’s what our users want, what developers want. They want easier ways to provision and manage infrastructure. So if you want to use an orchestration tool, then we want to make that as easy and as seamless as possible.
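For illustration, here is a minimal sketch against the DigitalOcean v2 REST API that lists Droplets using a personal access token; the environment-variable name and the page size are arbitrary choices for the example, not DigitalOcean recommendations.

```python
# Minimal sketch: list Droplets via the DigitalOcean v2 REST API.
import os
import requests

API = "https://api.digitalocean.com/v2"
headers = {"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"}

resp = requests.get(f"{API}/droplets", headers=headers, params={"per_page": 50})
resp.raise_for_status()

for droplet in resp.json()["droplets"]:
    print(droplet["id"], droplet["name"], droplet["region"]["slug"], droplet["status"])
```

Tools like Terraform or an orchestration control plane are, at bottom, driving the same kind of API calls on the user's behalf.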

Gardner: The infatuation with containers has moved to the full love affair level, at least based on what I see in the market. But how do we keep this from going off the rails? We have seen other cases where popularity leads to complexity; for example, virtual machines (VMs) were adopted to the point where sprawl became a real issue.

What are the challenges we are facing, and how can organizations better prepare themselves for a world of far more containers, and perhaps a world of more serverless?

Container complexity 

Baldwin: Containers are going to introduce a lot of complexity. I will just dig into one level of complexity, which is security. How do you protect one host talking to another host? You need to figure out how to protect one service talking to another service. How do you secure that, how do you encrypt that traffic, and how do you ensure that identity is handled?

Then you begin looking at other pieces of the puzzle, things like a service mesh. We look at things like Kubernetes and Istio as complementary, because you are going to need to be able to observe all of these environments. You are going to have to do all the things that you would have done with VMs, but there is just an abundance of these things now. That is kind of what we are seeing, and that is the level of complexity.

The tooling is still trying to catch up, and a lot of the open source tools are still in development, with some of the components still in alpha. There is a lot of need for ease of use around these tools, and a lot of need for better user interfaces. We are at the beginning: we are trying to handle containers, lots of containers all over the place, figure out how these things are talking to each other, and be able to troubleshoot all of that.

How do you trace when your application starts to have an issue? How do you figure out where in that environment the issue is showing up? You start to learn how to use tools like Zipkin, or you introduce OpenTracing into your stack, things like that.
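As a sketch of what that instrumentation can look like, here is a small OpenTracing example in Python. The span and tag names are invented, and the default global tracer is a no-op; in a real stack you would configure a Zipkin- or Jaeger-compatible tracer before relying on the traces.

```python
# Minimal OpenTracing instrumentation sketch (opentracing-python 2.x API).
# The global tracer defaults to a no-op; a concrete tracer must be installed
# for spans to be reported anywhere.
import time
import opentracing

tracer = opentracing.global_tracer()

def fetch_recommendations(user_id):
    with tracer.start_active_span("fetch_recommendations") as scope:
        scope.span.set_tag("user.id", user_id)
        with tracer.start_active_span("query_catalog"):
            time.sleep(0.05)  # stand-in for a downstream service call
        with tracer.start_active_span("rank_results"):
            time.sleep(0.02)
        return ["sku-1", "sku-2"]

if __name__ == "__main__":
    print(fetch_recommendations("abc123"))
```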

Gardner: Matt, what would you encourage people to do now? Experiment with more tools, acquaint themselves with those tools, make demands on the tools? How do they head this off from a user perspective?

Tiptoe through the technology

Baldwin: I would begin by stepping into the water, going into the shallow end of the pool by just starting to explore the technology.

I have seen organizations jump into these technologies. Take Kubernetes as an example. I have seen organizations adopt Kubernetes really early and then start to build their own Platform as a Service (PaaS) on top of it without actually being involved in the project or being aware of what is happening in it.

So there is the danger of duplicating things that are happening on the roadmap, of duplicating something that will be done in the project in six months. And now you are stuck on Kubernetes version 1.2, and how do you move to the next version of Kubernetes?

So I think there is a danger there with too early an adoption, if you start to build too much. But at the same time there is a need to conduct proofs of concept (POCs), to start to shift some of your smaller services into new areas.

I think you need to introduce Istio into test environments, start to look at what it does for you, and look at the use cases around it, things like traffic shifting. There are issues like how to do A/B deployments, and service meshes can actually give you that. So start to play with it and plan for the future, but maybe do not completely customize whatever you just built, because there is always a threat that the project is not fully baked yet.

Gardner: Sounds like it might be time to be thinking strategically, as well as tactically in how you approach these things. Maybe even get some enterprise architects involved so that you don’t get too bogged down before the standards are cooked.

Nic, what do you see as the challenges with bringing containers to use in a multi-cloud environment? What should people be thinking about to hedge against those challenges?

Sensible speed

Jackson: Look at just how fast things have moved. I mean, Kubernetes as a product practically didn’t exist two years ago. Nomad didn’t really exist two years ago. I think it was only just launched at HashiCorp in 2015. And those products are still evolving.

And I think it was a really good comment that you have to be careful about building on top of these things and straying too far away from the stable branch. You could end up in a situation where you cannot follow an upgrade path — because one thing is for certain: the speed of evolution is not going to slow down.

Always try to keep abreast of where the technology is, and always make sure you have an upgrade path. You can do that by being sensible about abstraction. In the same way that you would not necessarily depend on a concrete implementation in your code, you would depend on interfaces. You have to take a similar approach to your infrastructure, so we should be depending upon interfaces, so that if a new component comes along (something that is better than Kubernetes) you can actually hot-swap them out without having to go through years of re-platforming.
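Here is a toy sketch of that interface-first idea applied to infrastructure code: application logic talks to a small scheduler interface, and concrete back ends can be swapped without touching the release logic. The class names and print statements are stand-ins, not real integrations.

```python
# "Depend on interfaces, not concrete implementations" applied to infrastructure.
from typing import Protocol

class Scheduler(Protocol):
    def deploy(self, name: str, image: str, replicas: int) -> None: ...
    def scale(self, name: str, replicas: int) -> None: ...

class KubernetesScheduler:
    def deploy(self, name: str, image: str, replicas: int) -> None:
        print(f"[k8s] creating Deployment {name} ({image}) x{replicas}")

    def scale(self, name: str, replicas: int) -> None:
        print(f"[k8s] scaling {name} to {replicas}")

class NomadScheduler:
    def deploy(self, name: str, image: str, replicas: int) -> None:
        print(f"[nomad] registering job {name} ({image}) x{replicas}")

    def scale(self, name: str, replicas: int) -> None:
        print(f"[nomad] updating job {name} count={replicas}")

def release(scheduler: Scheduler) -> None:
    # Release logic never names a concrete platform, so platforms can be hot-swapped.
    scheduler.deploy("web", "example/web:1.4.2", replicas=3)
    scheduler.scale("web", replicas=5)

if __name__ == "__main__":
    release(KubernetesScheduler())
    release(NomadScheduler())
```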

Gardner: Reynold, how do you see solving complexity in the evolution of these technologies, and ways that early-adopters can resist getting bogged down as they continue to mature?

Harbin: The two main points that Matt and Nic have brought up are really good ones. Certainly, visibility and security of these applications and these environments are really important from a functionality perspective.

As Nic mentioned, the pace at which new technologies are being developed is intense. You have to have an environment where you can test out these various tools, see what works for you, and run these ideas to see how the technology can help your particular business. And a lot of this infrastructure is in many ways almost disposable, because you can spin it up as you need to, test it, and then spin it down — it might only need to live for an hour or a couple of days.

Being aware of the tools, what’s happening in terms of new functionality, and then being able to test that either locally or in a cloud environment is really going to be important.

Gardner: I was expecting at least one of you to bring up DevOps. That thinking about development in conjunction with production, and making this more of a seamless process would help. Am I off base? Matt, should DevOps be part of this solution set?

Shared language

Baldwin: Yes, it should be part of it. I guess my personal opinion on DevOps is that we are moving toward a place where Ops needs to become more and more invisible. It is more about shipping, and it is more about focusing on the apps versus the infrastructure. So I see the capital O going to a lowercase o.

What I do think is interesting right now is that developers and operators are speaking the same language. If you are looking at Kubernetes, they are speaking in Kubernetes, and that is a very big deal. The developer is building things in a way the operator will understand, and the operator understands how the microservice is built. They all understand everything.

And then with multi-cloud, you can also do things like keep your staging environment in one cloud while your operators run the code in production on another provider, and promote that code across the network. You can do things like that, too.

I think some of the traditional DevOps tooling, things like Chef and Puppet, does not have as much of a future as it used to, because those tools did a lot of application management on the hosts, and now that the apps are not living on the host anymore, there is not a lot for them to do. You just build out a host on Amazon AWS, deploy Kubernetes, and let Kubernetes take over from there.

The importance of some of those tools will lessen; you will not have to know Puppet as much, and you likely will not ever need to know Puppet.

Gardner: Nic, are you in the same camp, more Dev, less Ops, lowercase o?

More Dev, less Ops?

Jackson: I think it depends on two things. The first is the scale of your organization. When you look at a lot of tools, and a lot of the information that is out there, it assumes that everybody is operating at the same scale, and I do not think that is the case. Pretty much any business operating in a digital world, which is pretty much any business these days, can take advantage of modern development techniques. And depending on the scale, it also shifts who is potentially going to be doing the infrastructure side of things.

At smaller companies, I think you are going to get more Dev than Ops, because they may not be at a scale that can support a dedicated operations team. At larger enterprise organizations, you may have more of a platform team, or operations people who use code to manage infrastructure.

In either case, developers have to have an appreciation and an understanding of the platform to which they are deploying their code. They need that because they need to understand how things like service discovery work, how volumes work for persistent storage, and how things are going to behave in terms of scale and scalability. If you are going to be load testing, what are the operational thresholds in terms of I/O, CPU, or disk, and things like that?

I think DevOps is a really powerful concept. I certainly love working in a world where I can interact and work with the operations and infrastructure teams. I benefit as a software engineer, and I think the infrastructure engineers benefit, because we can share the skills that we both have. So I really hope DevOps does not go away, but I think the level at which that interaction occurs very much depends on the scale of your organization.

Shop around

Gardner: Are there examples of some organizations, large or small, that have embraced containers, have multi-cloud in their sights, are maybe thinking about serverless?

Baldwin: I have an example. This customer was a full-on Amazon shop, and they had not migrated to microservices. Their first step was to move to Docker, and then we moved them up to Kubernetes. They were an adtech firm and, as you can imagine, they had ingress traffic that carried a high charge, and that was billed by Amazon.

So they spent a lot of time negotiating a better cloud price point with Google. What they were able to do is stand up a Kubernetes cluster on Google Cloud and then shift the workload that was needed at that better price point. At the same time, they kept the rest of the workload at Amazon, because they were still relying on some of the other underlying services of Amazon, things like Amazon Relational Database Service (Amazon RDS).

They did not want to move completely to Google, but they wanted to move the piece where they were taking a really large hit on cost. So I think you are going to see multi-cloud first get used as a negotiating tactic against the cloud providers to try to get a better price point. If you are doing adtech, you are now in a position where you can actually negotiate with Amazon, Google, or whomever, get a better price, and move your workload to whoever gives it to you.

So that makes it a lot more competitive. That was an early example, one of the earlier federation examples we have.
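A rough sketch of what that workload shift can look like operationally, using the Kubernetes Python client to scale the same Deployment down in one cluster and up in another; the context names, namespace, and deployment name are hypothetical, and a real move would also account for data gravity and traffic cutover.

```python
# Shift replicas of the same Deployment between two peer clusters by scaling.
from kubernetes import client, config

def scale(context, name, namespace, replicas):
    apps = client.AppsV1Api(config.new_client_from_config(context=context))
    body = {"spec": {"replicas": replicas}}
    apps.patch_namespaced_deployment_scale(name, namespace, body)
    print(f"{context}: {name} -> {replicas} replicas")

if __name__ == "__main__":
    # Move most of the ingest workload off one provider and onto the cheaper cluster.
    scale("aws-prod", "ad-ingest", "default", 2)
    scale("gke-cheap", "ad-ingest", "default", 10)
```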

Gardner: The economic paybacks from that could be very significant, if you can leverage better deals from your cloud providers. That could be a very significant portion of your overall expenses.

Baldwin: It is giving the power back to the consumer. We basically have a cloud monopoly in Amazon AWS, and then smaller players. So how do you push back against Amazon to reduce price points? How do you try to break that?

And once you start to get power back to the consumer, that starts to weaken the hold on the end-user.

Gardner: Nic, an example that we can look to perhaps in a different way, one that provides a business advantage?

Go public 

Jackson: One of the things that we see for a lot of enterprise customers is the cloud adoption phase. So I can’t give you the exact numbers, but the total market in terms of compute for the big four cloud providers is about 30 percent. There is something like 60 percent to 70 percent of all of the existing compute still running in private data centers. A lot of organizations are looking at moving that forward. They want to be able to adopt cloud, for whatever reason. They want better tooling to be able to do that.

You can create a federated Kubernetes cluster, or a federated Nomad cluster, and begin shifting your workload away from the private data center and into the cloud. You gain that clear migration path. It allows you to run both of those platforms side by side: the distinct platform that the organization understands, and also the modern platform that requires learning in terms of tooling and behavior.

That’s going to be a typical approach for a lot of the large enterprises. We are going to see a lot of the shift from private data centers into public clouds. A lot of the cloud providers are offering pretty attractive reasons in terms of licensing to do that rather than renew your license for your physical infrastructure. Why don’t you just move it off into your cloud provider?

But if you’re running tens of billions of dollars worth of business, then any downtime is incredibly expensive. So you will want to ensure that you have the maximum high availability.

Baldwin: You can see that Microsoft is converting a lot of their enterprise agreements to move people over to Azure.

Jackson: Well, it's not just Microsoft. I mean, Dell/EMC is one of the most aggressive. I could imagine a great sales strategy for them is to say, "Well, hey, rather than buying a new Dell server, why don't you just lease one of these servers in the Dell cloud and we will manage it for you." And basically you are just shifting from a capital expenditure (CapEx) to an operational expenditure (OpEx) model.

I think Oracle has a similar strategy; the Oracle cloud is up and coming. The potential is that rather than paying for an Oracle database license, you could just move that database into the Oracle cloud and save yourself a lot of trouble around maintaining the physical data center.

Gardner: Reynold, any thoughts on examples of how orchestration of containers may be moving more toward serverless models that have great benefits for your end users? As a public cloud, where do you see a good example of how this all works to everyone's advantage?

No more either/or

Harbin: As developers move toward containers and orchestration, they can begin looking at cloud providers not as a choice of either/or but as, “I get to use all of them, and I get to use the products and services that are best for my particular application.”

An example of that would be a customer who was hosting their application and their storage on Amazon AWS. A month ago, DigitalOcean released our new object storage product called Spaces. Essentially, that customer could gain all the benefits of AWS S3 object storage, but at a cost 10 times lower, at least for bandwidth.

If this particular customer could containerize their application, which basically publishes content to object storage and delivers a lot of it to end users, they would have the flexibility to take advantage of new products like Spaces that are being rolled out all the time by various cloud providers. In this case, they could easily move their application to DigitalOcean, take advantage of our new object storage product, and essentially lower their total cost.
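
The portability argument rests on Spaces speaking the S3 API. As a minimal, hedged sketch (the region, bucket name, and credentials are placeholders), the same boto3 code that talks to AWS S3 can be pointed at a Spaces endpoint just by changing the endpoint URL.

```python
# Sketch only: the same S3 client code targets DigitalOcean Spaces by swapping the
# endpoint. Region, bucket, and credentials below are placeholders, not real values.
import boto3

session = boto3.session.Session()
spaces = session.client(
    "s3",
    region_name="nyc3",                                  # assumed Spaces region
    endpoint_url="https://nyc3.digitaloceanspaces.com",  # S3-compatible Spaces endpoint
    aws_access_key_id="SPACES_KEY",                      # placeholder credential
    aws_secret_access_key="SPACES_SECRET",               # placeholder credential
)

# Upload one published asset; omitting endpoint_url would send the same call to AWS S3.
spaces.upload_file("article.html", "my-content-bucket", "articles/article.html")
```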

But it’s not just DigitalOcean products. New technologies that can make your applications better are being released all the time, as open source projects and commercial products. Companies will gain agility if their applications are containerized, as they will be able to use new technologies much more easily.

Baldwin: There are some great abstraction layers — things like Minio — so you don’t necessarily need to interact with the underlying object storage directly. You have a layer that lets you stay ignorant of that detail, and such de-coupling is super-useful.
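
A minimal sketch of that de-coupling, assuming the MinIO Python client and placeholder credentials: the application code stays the same whether the backend is a MinIO server, AWS S3, or DigitalOcean Spaces, because only the endpoint changes.

```python
# Sketch only: one S3-protocol client, interchangeable backends. Endpoint and keys
# below are placeholders.
from minio import Minio

store = Minio(
    "nyc3.digitaloceanspaces.com",   # could equally be "s3.amazonaws.com" or a MinIO host
    access_key="ACCESS_KEY",         # placeholder
    secret_key="SECRET_KEY",         # placeholder
)
store.fput_object("my-content-bucket", "articles/article.html", "article.html")
```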

Gardner: I’m afraid we are about out of time, but I wanted to give each of you an opportunity to tell us how to learn more about your organization.

Matt Baldwin, how could people follow you and also learn more about StackPointCloud?

Baldwin: If you wanted to give Kubernetes a shot, we provide a turnkey marketplace and management platform. So you just hit the site, log in with social credentials like GitHub, and then you can start to build clusters. You can check it out via our blog on Stackpoint.io. We also run all of the major markets for the Kubernetes community, up and down the West and East Coasts.

So you can engage with us at any of the Kubernetes events in Seattle, San Francisco, New York, and elsewhere. You can also drop into any Kubernetes Slack channel and ping us; ping me at baldwinmathew, and I’m also @baldwinmathew on Twitter.

Gardner: Nic, same thing, how can people follow you and learn more about HashiCorp?

Jackson: HashiCorp.com is a great landing site because you can bounce out to the various product sites from there. We also have a blog, which we are pretty active with. We are generally publishing at least a couple of pieces of information ourselves on there every week but we are also syndicating other stuff that we find, not necessarily always related to HashiCorp but just interesting technology things.

So you can get access to the blog through there, and on Twitter you can follow HashiCorp and me; I am @sheriffjackson, and I try to share stuff that I find interesting.

Gardner: And Reynold, learning more about DigitalOcean as well as following you or other evangelists that you think are worthy?

Harbin: The community site on DigitalOcean has 1,700 really well-curated articles, so do.co/community would be a good start. We have several technology-agnostic articles about containerization, as well as articles on specific technologies like Kubernetes. They are well written and will teach you how to get started. And then, of course, the DigitalOcean website is a good resource for our own products.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: DigitalOcean.

How a large Missouri medical center developed an agile healthcare infrastructure security strategy

Healthcare provider organizations are among the most challenging environments in which to develop and implement comprehensive, agile security infrastructures.

These healthcare providers usually operate sprawling campuses with large ecosystems of practitioners, suppliers, and patient-facing facilities. They also operate under stringent compliance requirements, with data privacy as a top priority.

At the same time, large hospitals and their extended communities are seeking to become more focused on patient outcomes as they deliver ease of use, the best applications, and up-to-date data analysis to their staffs and physicians.

The next BriefingsDirect security insights discussion examines how a large Missouri medical center developed a comprehensive healthcare infrastructure security strategy from the edge to the data center — and everything in between.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how healthcare security can become more standardized and proactive with unified management and lower total costs, BriefingsDirect sat down with Phillip Yarbro, Network and Systems Engineer at Saint Francis Healthcare System in Cape Girardeau, Missouri. The discussion was moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When it comes to security nowadays, Phil, there’s a lot less chunking it out, of focusing on just devices or networks separately, or on data centers alone. It seems that security needs to be deployed holistically — or at least strategically — with standardized solutions focused on across-the-board levels of coverage.

Tell us how you’ve been able to elevate security to that strategic level at Saint Francis Healthcare System.

Yarbro: As a healthcare organization, we have a wide variety of systems — from the electronic medical records (EMR) system we currently use, to our 10-plus legacy EMRs, our home health system, and payroll, time, and attendance. Like you said, that’s a wide variety of systems to keep up to date with antivirus solutions and to keep secure, especially with them being virtualized. All of those systems require a bunch of different exclusions and whatnot.

With our previous EMR, it was really hard to get those exclusions working and to minimize false positives. Over the past several years, security demands have increased. There are a lot more PCs and servers in the environment. There are a lot more threats taking place in healthcare systems, some targeting protected health information (PHI) or financial data, and we needed a solution that would protect a wide variety of endpoints; something that we could keep up-to-date extremely easily, and that would cover a wide variety of systems and devices.

Gardner: It seems like they’re adding more risk to this all the time, so it’s not just a matter of patching and keeping up. You need to be proactive, whenever possible.

Yarbro: Yes, being proactive is definitely key. Some of the features that we like about our latest systems are that you can control applications, and we’re looking at doing that to keep our systems even more secure, rather than just focusing on real-time threats, and things like that.

Gardner: Before we learn more about your security journey, tell us about Saint Francis Healthcare System, the size of organization and also the size of your IT department.

Yarbro: Saint Francis is between St. Louis and Memphis. It’s the largest hospital between the two cities. It’s a medium-sized hospital with 308 beds. We have a Level III neonatal intensive care unit (NICU) and a Level III trauma center. We see and treat more than 700,000 people within a five-state area.

With all of those beds, we have about 3,000 total staff, including referring physicians, contractors, and so on. The IT help desk, infrastructure, and networking teams amount to about 30 people who support the entire infrastructure.

Gardner: Tell us about your IT infrastructure. To what degree are you using thin clients and virtual desktop infrastructure (VDI)? How many servers? Perhaps a rundown of your infrastructure in total?

Yarbro: We have about 2,500 desktops, all of which are Microsoft Windows desktops. Currently, they are all supplied by our organization, but we are looking at implementing a bring-your-own-device (BYOD) policy soon. Most of our servers are virtualized now. We do have a few physical ones left, but we have around 550 to 600 servers.

Of those servers, we support about 60 Epic servers and close to 75 Citrix servers. On the VDI side, we are using VMware Horizon View, and we are supporting about 2,100 virtual desktop sessions.

Gardner: Data center-level security is obviously very important for you. This isn’t just dealing with the edge and devices.

Virtual growth

Yarbro: Correct, yes. As technology increases, we’re utilizing our virtual desktops more and more. The data center virtualization security is going to be a lot more important going forward because that number is just going to keep growing.

Gardner: Let’s go back to your security journey. Over the past several years, requirements have gone up, scale has gone up, complexities have gone up. What did you look for when you wanted to get more of that strategic-level security approach? Tell us about your process for picking and choosing the right solutions.

Yarbro: A couple of lessons we learned from our previous suppliers shaped what we looked for in a new security solution. We wanted something that wouldn’t subject us to scan storms. Our previous system didn’t have the capability to stagger our virus scans, and as a result, whenever the staff came in, in the mornings and evenings, users were negatively affected by latency because our virtual servers all scanned at the same time.

So whenever those were set to scan, our network just dragged to a halt. We were looking for a new solution that didn’t have a huge impact on our virtual environment. We have a wide variety of systems and applications. Epic is our main EMR, but we also have 10 legacy EMRs, a picture archiving and communication system (PACS), rehab, home health, payroll, as well as time and attendance apps. There are a wide variety of systems that all have different exclusions and require different security processes. So we were hoping that our new solution would minimize false positives.

Since we are a healthcare organization, there is PHI and there is sensitive financial data. We needed a solution that was Health Insurance Portability and Accountability Act (HIPAA)-compliant as well as Payment Card Industry Data Security Standard (PCI DSS)-compliant. We wanted a system that complemented those requirements and made it easy to manage everything.

With our previous solutions (we were using Trend Micro in conjunction with Malwarebytes), we had two consoles. A lot of the time it was hard to get the exclusions to apply down to the devices when it came time to upgrade the clients. We had to spend time upgrading clients twice, and it didn’t always work right. It was a very disruptive, do-it-yourself operation, requiring a lot of resources on the back end. We were just looking for something that was much easier to manage.

Defend and prevent attacks

Gardner: Were any of the recent security breaches or malware infections something that tripped you up? I know that ransomware attacks have been on people’s minds lately.

Yarbro: With the WannaCry and Petya attacks, we actually received a proactive e-mail from Bitdefender saying that we were protected. For the most recent one, Bad Rabbit, the notice came in the next day, and Bitdefender had already confirmed we were good for that one as well. It’s been a great peace-of-mind benefit for our leadership here, knowing that we weren’t affected and were already protected whenever such news made its way to them in the morning.

Gardner: You mentioned Bitdefender. Tell me about how you switched, when, and what’s that gotten for you at Saint Francis?

Yarbro: After we evaluated Bitdefender, we worked really closely with their architects to make sure that we followed best practices and had everything set up, because we wanted to get our current solutions out of there as fast as possible.

For a lot of our systems we have test servers and test computers. We were able to push Bitdefender out to those devices within minutes of having the console set up. After we applied the exclusion lists and tested against those systems, we made sure that Bitdefender didn’t catch or flag anything it shouldn’t.

We were able to deploy Bitdefender on 2,200 PCs, all of our virtual desktops and VDI, and roughly 425 servers between May and July with minimal downtime; the only downtime we had was simply to reboot the servers after we uninstalled our previous antivirus software.

We recently upgraded the remaining 150 or so servers, which we don’t have test systems for. They were all of our critical servers that couldn’t go down, such as our backup systems. We were able to push Bitdefender out to all of those within a week, again, without any downtime, and straight from the console.

Gardner: Tell us about that management capability. It’s good to have one screen, of course, but depth and breadth are also important. Has there been any qualitative improvement, in addition to the consolidation improvement?

Yarbro: Yes. Within the Bitdefender console we have different policies in place for our various servers, and we can now get very granular with them. The workloads that take up a lot of resources we have set to scan every other day instead of every day, and you can also carve out specific groups of servers.
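
The staggering idea behind avoiding scan storms is simple to illustrate. The sketch below is generic scheduling logic, not the Bitdefender GravityZone API, and the host names and maintenance window are assumptions.

```python
# Generic illustration of spreading scan start times across a window so that many
# virtual servers never scan at the same moment (the "scan storm" problem).
from datetime import datetime, timedelta

def staggered_schedule(hosts, window_start="01:00", window_minutes=240):
    """Assign each host an evenly spaced scan start time inside the window."""
    start = datetime.strptime(window_start, "%H:%M")
    step = window_minutes / max(len(hosts), 1)
    return {host: (start + timedelta(minutes=i * step)).strftime("%H:%M")
            for i, host in enumerate(hosts)}

servers = [f"vm-{n:03d}" for n in range(1, 7)]   # placeholder host names
print(staggered_schedule(servers))
```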

Bitdefender also has a firewall option that we are looking at implementing soon, where you can group servers together and apply the same firewall rules to them, and things like that. It gives us great visibility and helps make sure our servers and data center are protected and secured.

Gardner: You mentioned that some of the ransomware attacks recently didn’t cause you difficulty. Are there any other measurements that you use in order to qualify or quantify how good your security is? What did you find improved with your use of Bitdefender GravityZone?

Yarbro: It reduced our time to add new exclusions to our policies. That used to take us about 60 minutes, because we had to log in to both consoles, make the change, and make sure it got pushed out. That’s down to five minutes for us. So that’s a huge time savings.

From the security administration side, by going into the console and making sure that everything is still reporting, that everything still looks good, making sure there haven’t been any viruses on any machines — that process went down from 2.5 to three hours a week to less than 15 minutes.
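
To put those figures in rough annual terms, here is a quick bit of arithmetic. The number of exclusion changes per month is an assumption added for illustration; the per-task times are the ones quoted above.

```python
# Back-of-the-envelope math on the quoted time savings.
exclusion_changes_per_month = 8                              # assumed frequency
old_min_per_change, new_min_per_change = 60, 5               # minutes, as quoted
old_weekly_admin_min, new_weekly_admin_min = 2.75 * 60, 15   # midpoint of 2.5-3 hours vs 15 min

monthly_saved = exclusion_changes_per_month * (old_min_per_change - new_min_per_change)
weekly_saved = old_weekly_admin_min - new_weekly_admin_min

annual_hours_saved = (monthly_saved * 12 + weekly_saved * 52) / 60
print(f"Roughly {annual_hours_saved:.0f} staff hours saved per year")
```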

GravityZone has a good reporting setup. I actually have a schedule set every morning to give me the malware activity and phishing activity from the day before. I don’t even have to go into the console to look at all that data. I get a nice e-mail in the morning and I can just visually see what happened.

At the end of the month we also have a report that tells us the 10 endpoints most affected by malware, so we can be proactive and, for example, re-educate staff if it keeps happening with a certain person. Beyond the security administration time it has saved us, it also helps with security-related trouble calls. I would say those have probably dropped at least 10 percent to 15 percent since we rolled out Bitdefender hospital-wide.

Gardner: Of course, you also want to make sure your end-users are seeing improvement. How about the performance degradation and false positives? Have you heard back from the field? Or maybe not, and that’s the proof?

User-friendly performance

Yarbro: You said it best right there. We haven’t heard anything from end users; they don’t even know it’s there. With this type of rollout, no news is good news. They didn’t even notice the transition, except for an increase in performance, and the false positives just haven’t been there.

We have our exclusion policy set, and it really hasn’t given us any headaches. It has helped our physicians quite a bit because they need uninterrupted access to medical information. They used to have to call whenever our endpoints lost their exclusion list and their software was getting flagged. It was very frustrating for them. They must be able to get into our EMR systems and log that information as quickly as possible. With Bitdefender, they haven’t had to call IT or anything like that, and it’s just helped them greatly.

Gardner: Back to our high-level discussion about going strategic with security, do you feel that using GravityZone and other Bitdefender technologies and solutions have been able to help you elevate your security to being comprehensive, deep, and something that’s more holistic?

Multilayered, speedier security

Yarbro: Yes, definitely. We did not have this level of control with our old systems. First of all, we didn’t have antivirus on all of our servers because it impacted them so negatively. Some of our more critical servers didn’t even have protection.

Just having our entire environment at 100 percent coverage has made us a lot more secure. The extra features that Bitdefender offers — not just the antivirus piece, but also application blocking, device control, and firewall rule control — add another level of security that we didn’t even dream about with our old solutions.

Gardner: How about the network in the data center? Is that something you’ve been able to better apply policies and rules to in ways that you couldn’t before?

Yarbro: Yes, now with Bitdefender there is an option to offload scanning to a security server. We decided at first not to go with that solution because when we installed Bitdefender on our VDI endpoints, we didn’t see any increased CPU or memory utilization across any of our hosts, which is a complete 180-degrees from what we had before.

But for some of our other servers, servers in our DMZ, we are thinking about using the security server approach to offload all of the scanning. It will further increase performance across our virtualized server environment.

Gardner: From an economic standpoint, that also gives you more runway, so to speak, in terms of having to upgrade the hardware. You are going to get more bang for your buck in your infrastructure investments.

Yarbro: Yes, exactly. And with that server-level security, it’s worth noting that if there’s ever a software upgrade or a patch, once one server has checked a file, then when another server or desktop checks in, the verdict is already there. It doesn’t have to send that file back or check it again — it already knows. So it just speeds things up dramatically on those other devices.
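
A toy model of that caching behavior, purely for illustration (this is not vendor code): a shared table keyed by file hash means the second endpoint asking about the same content gets the stored verdict instead of triggering another full scan.

```python
# Toy model of shared scan-verdict caching across endpoints.
import hashlib

verdict_cache: dict[str, str] = {}   # file hash -> "clean" / "infected"

def expensive_full_scan(path: str) -> str:
    return "clean"                   # stub standing in for the real scan engine

def check_file(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in verdict_cache:          # another endpoint already scanned this content
        return verdict_cache[digest]     # no second scan needed
    verdict = expensive_full_scan(path)
    verdict_cache[digest] = verdict
    return verdict
```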

Gardner: Just a more intelligent way to go about it, I would think.

Yarbro: Yes.

Gardner: Have you been looking to some of the other Bitdefender technologies? Where do you go next in terms of expanding your horizon on security?

One single pane of secure glass

Yarbro: The extra Bitdefender components that we’re testing right now are device control and the firewall: being able to make sure that only devices we allow can be hooked up, say via USB ports. That’s critical in our environment. We don’t want someone coming in here with a flash drive and installing or uploading a virus or anything along those lines.

The application and website blacklisting is also coming in the near future. We want to make sure that any malware that does get in can’t get past those controls. We are also looking to consolidate two more management systems into our Bitdefender console: encryption and patch management.

Bitdefender can do encryption as well, so we can just roll our current third-party software into Bitdefender. It will give us one pane of glass to manage all of these security features. For patch management, we are using two different systems: one for servers, one for Windows endpoints. If we can consolidate all of that into Bitdefender, because those policies are already in there, it would be a lot easier to manage and make us a lot more secure.

Gardner: Anything in terms of advice for others who are transitioning off of other security solutions? What would you advise people to do as they are going about a change from one security infrastructure to another?

Slow and steady saves the servers

Yarbro: That’s a good question. Make sure that you have all of your exclusion lists set properly. Bitdefender already has Windows, VMware, and Citrix best practices built into the policies in its console.

You only have to worry about your own applications, as long as you structure it properly from the beginning. Bitdefender’s engineers helped us quite a bit with that. Just go slow and steady. From May to July last year we were able to do 425 servers. We probably could have done more than that, but we didn’t want to risk breaking something. Luckily, we didn’t push it to the more critical servers right away, because we did change a few of our policy settings that probably would have broken some of those and had us down for a while if we had put it all in at once.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

Inside story on HPC’s role in the Bridges Research Project at Pittsburgh Supercomputing Center

The next BriefingsDirect Voice of the Customer high-performance computing (HPC) success story interview examines how Pittsburgh Supercomputing Center (PSC) has developed a research computing capability, Bridges, and how that’s providing new levels of analytics, insights, and efficiencies.

We’ll now learn how advances in IT infrastructure and memory-driven architectures are combining to meet the new requirements for artificial intelligence (AI), big data analytics, and deep machine learning.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe the inside story on building AI Bridges are Dr. Nick Nystrom, Interim Director of Research, and Paola Buitrago, Director of AI and Big Data, both at Pittsburgh Supercomputing Center. The discussion is moderated by Dana Gardner, principal analyst, at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s begin with what makes Bridges unique. What is it about Bridges that is possible now that wasn’t possible a year or two ago?

Nystrom: Bridges allows people who have never used HPC before to use it for the first time. These are people in business, social sciences, different kinds of biology and other physical sciences, and people who are applying machine learning to traditional fields. They’re using the same languages and frameworks that they’ve been using on their laptops and now that is scaling up to a supercomputer. They are bringing big data and AI together in ways that they just haven’t done before.

Gardner: It almost sounds like the democratization of HPC. Is that one way to think about it?

Nystrom: It very much is. We have users who are applying tools like R and Python and scaling them up to very large memory — up to 12 terabytes of random access memory (RAM) — and that enables them to gain answers to problems they’ve never been able to answer before.

Gardner: There is a user experience aspect, but I have to imagine there are also underlying infrastructure improvements that also contribute to user democratization.

Nystrom: Yes, democratization comes from two things. First, we stay closely in touch with the user community and look at this opportunity from their perspective: What are the applications they need to run? What do they need to do? From there, we worked with hardware vendors to understand what we had to build, and what we came up with is a very heterogeneous system.

We have three tiers of nodes, with memory ranging from 128 gigabytes to 3 terabytes to 12 terabytes of RAM, all coupled on the same very-high-performance fabric. We were the first installation in the world with the Intel Omni-Path interconnect, and we designed it in a custom topology, developed at PSC expressly to make big data available as a service to all of the compute nodes with equally high bandwidth and low latency, and to let these new things become possible.
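
A minimal sketch of how such a heterogeneous layout can be used, assuming the three memory classes quoted above (the routing logic itself is an illustration, not PSC's scheduler): route each job to the smallest node class whose RAM satisfies the request.

```python
# Illustrative tier selection for a heterogeneous system with three memory classes.
TIERS_GB = [128, 3 * 1024, 12 * 1024]   # regular, large-memory, extreme-memory nodes

def pick_tier(required_gb: int) -> int:
    """Return the smallest node class (in GB of RAM) that can hold the job."""
    for tier in TIERS_GB:
        if required_gb <= tier:
            return tier
    raise ValueError("request exceeds the largest node class")

print(pick_tier(64))      # -> 128
print(pick_tier(2000))    # -> 3072
print(pick_tier(9000))    # -> 12288
```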

Gardner: What other big data analytics benefits have you gained from this platform?

Buitrago: A platform like Bridges enables that which was not available before. There’s a use case that was recently described by Tuomas Sandholm, [Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University. It involves strategic machine learning using Bridges HPC to play and win at Heads-Up, No-limit Texas Hold’em poker as a capabilities benchmark.]

This is a perfect example of something that could not have been done without a supercomputer. A supercomputer enables massive and complex models that can actually give an accurate answer.

Right now, we are collecting a lot of data. There’s a convergence of having great capabilities right in the compute and storage — and also having the big data to answer really important questions. Having a system like Bridges allows us to, for example, analyze all that there is on the Internet, and put the right pieces together to answer big societal or healthcare-related questions.

Explore the New Path to High Performance Computing

Gardner: The Bridges platform has been operating for some months now. Tell us some other examples or use cases that demonstrate its potential.

Dissecting disease through data

Nystrom: Paola mentioned use cases for healthcare. One example is a National Institutes of Health (NIH) Center of Excellence in the Big Data to Knowledge program called the Center for Causal Discovery.

They are using Bridges to combine very large data sets, such as genomics, lung-imaging data, and brain magnetic resonance imaging (MRI) data, to come up with real cause-and-effect relationships among them. That was never possible before because the algorithms could not scale. Such scaling is now possible thanks to very large memory architectures and because the data is available.

At CMU and the University of Pittsburgh, we have those resources now and people are making discoveries that will improve health. There are many others. One of these is on the Common Crawl data set, which is a very large web-scale data set that Paola has been working with.

Buitrago: Common Crawl is a data set that collects all the information on the Internet. The data is currently available on the Amazon Web Services (AWS) cloud in S3. They host these data sets for free. But, if you want to actually analyze the data, to search or create any index, you have to use their computing capabilities, which is a good option. However, given the scale and the size of the data, this is something that requires a huge investment.

So we are working on actually offering the same data set, putting it together with the computing capabilities of Bridges. This would allow the academic community at large to do such things as build natural language processing models, or better analyze the data — and they can do it fast, and they can do it free of charge. So that’s an important example of what we are doing and how we want to support big data as a whole.
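
For readers who want a feel for what "putting the data next to the compute" replaces, here is a minimal sketch of querying Common Crawl through its public index service. The crawl label is an assumption; substitute any current crawl ID listed at index.commoncrawl.org.

```python
# Sketch only: look up captures of a domain in a Common Crawl index. The crawl ID is a
# placeholder; each returned line is a JSON record pointing into a WARC file on S3.
import requests

CRAWL_ID = "CC-MAIN-2017-47"   # assumed crawl label
INDEX_URL = f"https://index.commoncrawl.org/{CRAWL_ID}-index"

resp = requests.get(INDEX_URL,
                    params={"url": "example.com/*", "output": "json"},
                    timeout=30)
resp.raise_for_status()
for line in resp.text.splitlines()[:5]:
    print(line)
```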

Explore the New Path to High Performance Computing Solutions

Gardner: So far we’ve spoken about technical requirements in HPC, but economics plays a role here. Many times in the evolution of technology we’ve seen that as off-the-shelf technologies become commercially available, they can be deployed in new ways that just weren’t economically feasible before. Is there an economics story here with Bridges?

Low-cost access to research

Nystrom: Yes, with Bridges we have designed the system to be extremely cost-effective. That’s part of why we designed the interconnect topology the way we did. It was the most cost-effective way to build that for the size of data analytics we had to do on Bridges. That is a win that has been emulated in other places.

So, what we offer is available to research communities at no charge — and that’s for anyone doing open research. It’s also available to the industrial sector at a very attractive rate, because it’s a cost-recovery rate. So, we do work with the private sector, and we are looking to do even more of that in the future.

Also, the future systems we are looking at will leverage lots of developing technologies. We’re always looking at the best available technology for performance, for price, and then architecting that into a solution that will serve research.

Gardner: We’ve heard a lot recently from Hewlett Packard Enterprise (HPE) recently about their advances in large-scale memory processing and memory-driven architectures. How does that fit into your plans?

Nystrom: Large, memory-intensive architectures are a cornerstone of Bridges. We’re doing a tremendous amount of large-scale genome sequence assembly on Bridges. That’s individual genomes, and it’s also metagenomes with important applications such as looking at the gut microbiome of diabetic patients versus normal patients — and understanding how the different bacteria are affected by and may affect the progression of diabetes. That has tremendous medical implications. We’ve been following memory technology for a very long time, and we’ve also been following various kinds of accelerators for AI and deep learning.

Gardner: Can you tell us about the underlying platforms that support Bridges that are currently commercially available? What might be coming next in terms of HPE Gen10 servers, for example, or with other HPE advances in the efficiency and cost reduction in storage? What are you using now and what do you expect to be using in the future?

Ever-expanding memory, storage

Nystrom: First of all, I think the acquisition of SGI by HPE was very strategic. Prior to Bridges, we had a system called Blacklight, which was the world’s largest shared-memory resource. That system taught us how productive large shared memory can be for new communities in terms of human productivity. We can’t scale smart humans, and so that’s essential.

In terms of storage, there are tremendous opportunities now for integrating storage-class memory, increasing degrees of flash solid-state drives (SSDs), and other stages. We’ve always architected our own storage systems, but now we are working with HPE to think about what we might do for our next round of this.

Gardner: For those out there listening and reading this information, if they hadn’t thought that HPC and big data analytics had a role in their businesses, why should they think otherwise?

Nystrom: From my perspective, AI is permeating all aspects of computing. The way we see AI as important in an HPC machine is that it is being applied to applications that were traditionally HPC only — things like weather and protein folding. Those were apps that people used to run on just big iron.

Now, they are integrating AI to help them find rare events and to do longer-term simulations in less time. And they’ll be doing this across other industries as well. These will be enterprise workloads where AI has a key impact. It won’t necessarily turn companies into AI companies, but they will use AI as an empowering tool to make what they already do better.

Gardner: An example, Nick?

Nystrom: A good example of the way AI is permeating other fields is what people are doing at the Institute for Precision Medicine, [a joint effort between the University of Pittsburgh and the University of Pittsburgh Medical Center], and the Carnegie Mellon University Machine Learning and Computational Biology Departments.

They are working together on a project called Big Data for Better Health. Their objective is to apply state of the art machine learning techniques, including deep learning, to integrated genomic patient medical records, imaging data, and other things, and to really move toward realizing true personalized medicine.

Gardner: We’ve also heard a lot recently about hybrid IT. Traditionally HPC required an on-premises approach. Now, to what degree does HPC-as-a-service make sense in order to take advantage of various cloud models?

Explore the New Path to High Performance Computing

Nystrom: That’s a very good question. One of the things that Bridges makes available through the democratizing of HPC is big data-as-a-service and HPC-as-a-service. And it does that in many cases by what we call gateways. These are web portals for specific domains.

At the Center for Causal Discovery, which I mentioned, they have the Causal Web. It’s a portal, it can run in any browser, and it lets people who are not experts with supercomputers access Bridges without even knowing they are doing it. They run applications with a supercomputer as the back-end.

Another example is Galaxy Project and Community Hub, which are primarily for bioinformatic workflows, but also other things. The main Galaxy instance is hosted elsewhere, but people can run very large memory genome assemblies on Bridges transparently — again without even knowing. They don’t have to log in, they don’t have to understand Linux; they just run it through a web browser, and they can use HPC-as-a-service. It becomes very cloud-like at that point.

Super-cloud supercomputing

Buitrago: Depending on the use case, an environment like the cloud can make sense. HPC can be used for an initial stage, if you want to explore different AI models, for example. You can fine-tune your AI and benefit from having the data close. You can reduce the time to start by having a supercomputer available for only a week or two. You find the right parameters, you get the model, and then when you are actually generating inferences you can go to the cloud and scale there. It supports high peaks in user demand. So, cloud and traditional HPC are complementary across different use cases, for what’s called for in different environments and across different solutions.

Gardner: Before we sign off, a quick look to the future. Bridges has been here for over a year, let’s look to a year out. What do you expect to come next?

Nystrom: Bridges has been a great success. It’s very heavily subscribed, fully subscribed, in fact. It seems to work; people like it. So we are looking to build on that. We’re looking to extend that to a much more powerful engine where we’ve taken all of the lessons we’ve learned improving Bridges. We’d like to extend that by orders of magnitude, to deliver a lot more capability — and that would be across both the research community and industry.

Gardner: And using cloud models, what should we look for in the future when it comes to a richer portfolio of big data-as-a-service offerings?

Buitrago: We are currently working on a project to make data more available to the general public and to researchers. We are trying to democratize data and let people do searches and inquiries and processing that they wouldn’t be able to do without us.

We are integrating big data sets that go from web crawls to genomic data. We want to offer them paired with the tools to properly process them. And we want to provide this to people who haven’t done this in the past, so they can explore their questions and try to answer them. That’s something we are really interested in and we look forward to moving into a production stage.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How UBC gained TCO advantage via flash for its EduCloud cloud storage service

The next BriefingsDirect cloud efficiency case study explores how a storage-as-a-service offering in a university setting gains performance and lower total cost of ownership through a move to all-flash storage.

We’ll now learn how the University of British Columbia (UBC) has modernized its EduCloud storage service and attained both efficiency as well as better service levels for its diverse user base.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to help us explore new breeds of SaaS solutions is Brent Dunington, System Architect at UBC in Vancouver. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How is satisfying the storage demands at a large and diverse university setting a challenge? Is there something about your users and the diverse nature of their needs that provides you with a complex requirements list?

Dunington: A university setting isn’t much different than any other business. The demands are the same. UBC has about 65,000 students and about 15,000 staff. The students these days are younger kids, they all have iPhones and iPads, and they just want to push buttons and get instant results and instant gratification. And that boils down to the services that we offer.

We have to be able to offer those services, because as most people know, there are choices — and they can go somewhere else and choose those other products.

Ours is a rather small team of 15 members, so we have to be agile, we have to be able to automate things, and we need tools that work and fulfill those needs. So it’s just like any other business, even though it’s a university setting.

Gardner: Can you give us a sense of the scale that describes your storage requirements?

Dunington: We do SaaS, and we also do infrastructure-as-a-service (IaaS). EduCloud is a self-service IaaS product that we deliver to UBC, but we also deliver it to 25 other higher-education institutions in the Province of British Columbia.

We have been doing IaaS for five years, and we have been very, very successful. So more people are looking to us for guidance.

HPE Delivers Flash Performance

Because we are not just delivering to UBC, we have to be up and running and always able to deliver, because each school has different requirements. At different times of the year — because there is registration, there are exam times — these things have to be up. You can’t be down during an exam and have 600 students unable to take the tests they have been studying for. It impacts their lives, and we want to make sure that we are there and can provide the services they need.

Gardner: In order to maintain your service levels within those peak times, do you in your IaaS and storage services employ hybrid-cloud capabilities so that you can burst? Or are you doing this all through your own data center and your own private cloud?

On-Campus Cloud

Dunington: We do it all on-campus. British Columbia has a law that says all the data has to stay in Canada. It’s a data-sovereignty law; the data can’t leave the country’s borders.

That’s why EduCloud has been so successful, in my opinion: because of that option, they can just go and put things in the private cloud.

The public cloud providers are providing more services in Canada: Amazon Web Services (AWS) and Microsoft Azure cloud are putting data centers in Canada, which is good and it gives people an option. Our team’s goal is to provide the services, whether it’s a hybrid model or all on-campus. We just want to be able to fulfill those needs.

Gardner: It sounds like the best of all worlds. You are able to give that elasticity benefit, a lot of instant service requirements met for your consumers. But you are starting to use cloud pay-as-you-go types of models and get the benefit of the public cloud model — but with the security, control and manageability of the private clouds.

What decisions have you made about your storage underpinnings, the infrastructure that supports your SaaS cloud?

Dunington: We have a large storage footprint. For our site, it’s about 12 petabytes of storage. We realized that we weren’t meeting our needs with spinning disks. One of the problems was runaway virtual workloads that would impact other services, and we needed some mechanism to fix that.

We went through the whole request for proposal (RFP) process, and all the IT infrastructure vendors responded, but we did have some guidelines we wanted to follow. One of the things we did was present our problems and make sure the vendors understood what they were being asked to solve.

And there were some minimum requirements. We have a backup vendor of choice that the solution needed to integrate with. Quality of service was also a big thing. We wanted to make sure we had the ability to attain quality-of-service levels and control those runaway virtual machines in our footprint.

Gardner: You gained more than just flash benefits when you got to flash storage, right?

Streamlined, safe, flash storage

Dunington: Yes, for sure. With an entire data center full of spinning disks, it gets to the point where the disks start to manage you; you are no longer managing the disks. With teams out there changing drives and moving volumes around, it becomes unwieldy. I mean, the power, the footprint, and all of that starts to grow.

Also, Vancouver is in a seismic zone; we are right up against the Pacific plate and it’s a very active seismic area. Heaven forbid anything happens, but one of the requirements we had was to move the data center into the interior of the province. So that is what we did.

When we brought this new data center online, one of the decisions the team made was to move to an all-flash storage environment. We wanted to be sure that it made financial sense because it’s publicly funded, and also improved the user experience, across the province.

Gardner: As you were going about your decision-making process, you had choices, what made you choose what you did? What were the deciding factors?

Dunington: There were a lot of deciding factors. There’s the technology: being able to meet the performance requirements and manage that performance. One of the things was to lock down runaway virtual machines and to put performance tiers on others.

But it’s not just the technology; it’s also the business part, too. The financial part had to make sense. When you are buying any storage platform, you are also buying the support team and the sales team that come with it.

Our team believes that technology is a certain piece of the pie, and the rest of it is relationship. If that relationship part doesn’t work, it doesn’t matter how well the technology part works — the whole thing is going to break down.

Because software is software, hardware is hardware — it breaks, it has problems, there are limitations. And when you have to call someone, you have to depend on him or her. Even though you bought the best technology and got the best price — if it doesn’t work, it doesn’t work, and you need someone to call.

So those service and support issues were all wrapped up into the decision.

HPE Delivers Flash Performance

We chose the Hewlett Packard Enterprise (HPE) 3PAR all-flash storage platform. We have been very happy with it. We knew the HPE team well. They came and worked with us on the server blade infrastructure, so we knew the team. The team knew how to support all of it.

We also use the HPE OneView product for provisioning, and it integrated into all of that. It also supports the performance optimization tool (IT Operations Management for HPE OneView), which lets us set those values, because one of the things in EduCloud is that customers choose their own storage tier, and we mark the price on it. So basically all we do is present that new tier as a new datastore within VMware, and then they just move their workloads across non-disruptively. So it has worked really well.

The 3PAR storage piece also integrates with VMware vRealize Operations Manager. We offer that to all our clients as a portal so they can see how everything is working and they can do their own diagnostics. Because that’s the one goal we have with EduCloud, it has to be self-service. We can let the customers do it, that’s what they want.

Gardner: Not that long ago people had the idea that flash was always more expensive and that they would use it for just certain use-cases rather than pervasively. You have been talking in terms of a total cost of ownership reduction. So how does that work? How does the economics of this over a period of time, taking everything into consideration, benefit you all?

Economic sense at scale

Dunington: Our IT team and our management team are really good with that part. They were able to break it all down, and they found that this model would work at scale. I don’t know the numbers per se, but it made economic sense.

Spinning disks will still have a place in the data center. I don’t know whether a year from now an all-flash data center will make sense, because there are some records that people will put in and never touch. But right now, with the numbers we worked out, it makes sense, because we are using the standard bronze, silver, and gold tiers.

The 3PAR solution also has dedupe functionality and the compression that they just released. We are hoping to see how well that trends. Compression has only been around for a short period of time, so I can’t really say, but the dedupe has done really well for us.
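
As a purely illustrative bit of arithmetic (every figure below is an assumption, not UBC's pricing), this is the kind of comparison that can make all-flash pencil out once data reduction, power, and a multi-year view are folded in alongside raw cost per terabyte.

```python
# Illustrative 4-year comparison of spinning disk vs. all-flash. All numbers assumed.
usable_tb = 500
disk_cost_per_tb, flash_cost_per_tb = 80.0, 300.0        # assumed street prices, USD/TB
flash_data_reduction = 3.0                               # assumed dedupe + compression ratio
disk_opex_per_tb_yr, flash_opex_per_tb_yr = 6.0, 1.5     # assumed power/cooling, USD/TB/year
years = 4

disk_total = usable_tb * (disk_cost_per_tb + disk_opex_per_tb_yr * years)
flash_raw_tb = usable_tb / flash_data_reduction          # less raw flash needed after reduction
flash_total = flash_raw_tb * (flash_cost_per_tb + flash_opex_per_tb_yr * years)

print(f"Spinning disk, {years}-year view: ${disk_total:,.0f}")
print(f"All-flash,     {years}-year view: ${flash_total:,.0f}")
```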

Gardner: The technology overcomes some of the other baseline economic costs and issues, for sure.

We have talked about the technology and performance requirements. Have you been able to qualify how, from a user experience, this has been a benefit?

Dunington: The best benchmark is the adoption rate. People are using it, everything is ramping up, and we are not getting help desk tickets; no one is complaining about the price or the availability. Our operational team isn’t complaining about it being harder to manage or that the backups aren’t working. That makes me happy.

The big picture

Gardner: Brent, maybe a word of advice to other organizations that are thinking about a similar move to private cloud SaaS. Now that you have done this, what might you advise them to do as they prepare for or evaluate a similar activity?

Dunington: Look at the full picture, look at the total cost of ownership. There’s buying the hardware, and there’s also supporting the hardware. Make sure that you understand your requirements and what your customers are looking for before you go out and buy. Not everybody needs that speed, not everybody needs that performance, but it is the future and things will move there. We will see in a couple of years how it went.

Look at the big picture, step back. It’s not just the new shiny toy, and you might have to take a stepped approach to buying, but for us it worked. It’s a solid platform, our team sleeps well at night, and I think our customers are really happy with it.

Gardner: This might be a little bit of a pun in the education field, but do your homework and you will benefit.

HPE Delivers Flash Performance

Dunington: Yes, for sure.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to — available tools.

It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.  

Gardner: We sometimes hear that even at the board-of-directors level, people are being told to go cloud-first, or to just dump IT altogether. That strikes me as an overreaction. If we’re looking at tools and what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well — that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.

Running away from internal IT and on-premises IT is not going to be a good idea for most organizations — at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Learn More About Hybrid IT Management Solutions From HPE

Peters: I often talk about what I call the assumption gap. The assumption gap is that moment where we move from one side, where it’s okay to have lots of questions about something, in this case about IT, to the other side of this gap or chasm, to use a well-worn phrase, where it’s no longer okay to ask anything because people will see you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly and very fast at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring out what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational — they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out (Yahoo's IM client, to be precise), they used to do the maintenance between 9 am and 5 pm because it was just a tool for chatting with your friends at night. And now you have businesses that rely on it. So, clearly, the ability to instant message between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, "Well, X application should go onsite and Y application should go offsite or into a cloud," because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.


But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark.

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage — and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?


Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds — and you might want to have a multi-cloud approach as well. We see that increasingly.


Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects, as if they are the answer to everything. What we should be looking for are not bright, shiny objects — but bright, shiny outcomes. That's all we should be looking for.

Focus on the outcome that you want, and then you figure out how to get it. You should not be sitting IT managers down and saying, "How do I get to 50 percent of my data in the cloud?" I don't think that's a sensible approach to business.

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing it, bringing in more intelligence and software-defined approaches, and making the network, through hyper-convergence, more of a consideration than an afterthought — all of these illustrate where we're going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular — their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve beaten home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got — because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of a different way of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It's a challenge with shadow IT because you have to know what's in the shadows, and you can't, by definition, see into that, so it's a tough thing to do. Even once you find out what's going on, the second step is how do you gain control? Control — not for control's sake — but control that comes only from knowing all the things you're trying to do and how you're trying to do them across an organization. Only then can you hope to optimize them.

You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured.

Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users — but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have — and finding out that you’re in a mess — doesn’t help you.


Visibility is not the same as solving. The point is not just finding out what you have – it's actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, and the extremely high expectations for speed, flexibility, and performance mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage what is automated. Manual management just will not cut it — even if you know everything that you’re doing wrong. 
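
To make that visibility-before-control point concrete, here is a minimal sketch in Python, assuming hypothetical per-environment collector adapters rather than any particular vendor's API, of what the first step toward a single inventory might look like.

```python
# Minimal sketch: pull resource inventories from several environments into
# one view so that spend and sprawl can be measured before they are managed.
# The collector adapters are assumptions you would write per environment.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Resource:
    environment: str    # e.g., "on-prem-vmware", "public-cloud-a", "saas-x"
    kind: str           # e.g., "vm", "volume", "subscription"
    owner: str
    monthly_cost: float


def build_inventory(collectors: Dict[str, Callable[[], List[Resource]]]) -> List[Resource]:
    """Run every collector and merge the results into a single inventory."""
    inventory: List[Resource] = []
    for name, collect in collectors.items():
        try:
            inventory.extend(collect())
        except Exception as err:  # an unreachable environment is itself a finding
            print(f"could not inventory {name}: {err}")
    return inventory


def cost_by_owner(inventory: List[Resource]) -> Dict[str, float]:
    """Roll monthly spend up by owner: the first step from visibility toward control."""
    totals: Dict[str, float] = {}
    for res in inventory:
        totals[res.owner] = totals.get(res.owner, 0.0) + res.monthly_cost
    return totals
```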

Gardner: Yes, I agree 100 percent that the automation — in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises — means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays — that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared — it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And I’d go for a whole day class, when I was on the sales side of the business, just to learn something like dual parsing or about cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. I mean, you and I used to use a choke, or we’d have to really go and check everything on the car before we went on 100-mile journey. Now, we press the button and it better work in any temperature and at any speed. Now, we just demand so much from cars.

To stretch that analogy, I'm mixing cars and storage — and we'll make it all come together with hybrid IT in that it's better to do things in an automated fashion. There's always one person in every crowd I talk to who still believes that a stick shift is more economical and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99 percent of the people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That's just become old hat now because you don't talk about it in those terms. Storage has moved to be — how do we serve applications? How do we serve up the right data from the right place at the right time, get it to the right person at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters — and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.


We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.
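
As a toy illustration of "set the parameters and let the machine execute," here is a short sketch in Python. The tiers, the rules, and the mover hook are assumptions made for the sake of the example, not a description of any shipping product.

```python
# Toy placement engine: humans set the parameters once; the system evaluates
# them continuously and executes moves rather than recommending them.
# Tier names, rules, and the 'mover' hook are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Workload:
    name: str
    latency_sensitive: bool
    regulated_data: bool
    current_tier: str   # "on-prem", "private-cloud", or "public-cloud"


def choose_tier(w: Workload) -> str:
    """The parameters: a few rules that encode cost, risk, and performance intent."""
    if w.regulated_data:
        return "on-prem"        # sovereignty or compliance pins the data locally
    if w.latency_sensitive:
        return "private-cloud"  # keep the round trip short
    return "public-cloud"       # elastic capacity for everything else


def reconcile(workloads: List[Workload], mover: Callable[[Workload, str], None]) -> None:
    """An executing tool, not a recommending one: apply the decision immediately."""
    for w in workloads:
        target = choose_tier(w)
        if target != w.current_tier:
            mover(w, target)    # 'mover' stands in for whatever migration tooling is trusted
            w.current_tier = target
```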


Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It's all very well talking about the future; it doesn't sound great and perfect, but you have to get there. We do quite a lot of research at ESG. I will throw out just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity — whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can't just plunk down some new magic piece of software and expect everything to be okay, because it might not work with what you already have — the legacy systems and the applications already in place. One of the other questions we need to ask is, how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.


People want to enhance what they currently have so that it can embrace the cloud. It's enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure — not just your physical infrastructure but your applications, too — can use that, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You're not just going to pick one cloud model and hope that it will come backward and make everything else work. You start with what you have and you gradually embrace these alternative tools.

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.


Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I’ve heard of one from HPE, it’s called Project New Hybrid IT Stack. I’m thinking that HPE can’t be the only company. We can’t be the only analysts that are seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this.

Many vendor organizations are focused either on the cloud side of the business — and there are some very big names there — or on the on-premises side of the business. Embracing both is not that difficult for them to do, but it's really not at the top of their want-to-do list until they're absolutely forced into it.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it's really tough to imagine that large enterprises in particular are going to risk [standardizing on them]. They will probably even start to try to write it themselves, which is possible – unlikely, but possible.

Where I think we will find the other candidates is among some of the other big organizations — Oracle and IBM spring to mind in terms of being able to embrace both on-premises and off-premises. But the commonality among those that we've mentioned is that they are systems companies. At the end of the day, they win by delivering the best overall solution and package to their clients, not the individual components within it.

If you’re going to look for a successful hybrid IT deployment took, you probably have to look at a hybrid IT vendor.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management from request for proposals (RFPs) from 50 or 60 different companies to find some solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT but the on-premises and the off-premises environments, and how you manage these things. Those are all parts of the solution, or the answer. The real thing, and it's absolutely crucial, is that you don't start with those bright shiny objects. You don't start with, "How can I deploy more cloud? How can I do hybrid IT?" Those are not good questions to ask. Good questions to ask are, "What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?"

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Kansas Development Finance Authority gains peace of mind and a virtual shield for endpoints using hypervisor-level security

Implementing and managing IT security has leaped in complexity for organizations ranging from small and medium-sized businesses (SMBs) to massive government agencies.

Once-safe products used to thwart intrusions have now themselves been exploited. E-mail phishing campaigns are far more sophisticated, leading to damaging ransomware attacks.

What’s more, the jack-of-all-trades IT leaders of the mid-market concerns are striving to protect more data types on and off premises, their workload servers and expanded networks, as well as the many essential devices of the mobile workforce.

Security demands have gone up, yet there is a continual need for reduced manual labor and costs — while protecting assets sooner and better.

The next BriefingsDirect security strategies case study examines how a Kansas economic development organization has been able to gain peace of mind by relying on increased automation and intelligence in how it secures its systems and people.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

To explore how an all-encompassing approach to security has enabled improved results with fewer hours at a smaller enterprise, BriefingsDirect sat down with Jeff Kater, Director of Information Technology and Systems Architect at Kansas Development Finance Authority (KDFA) in Topeka. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: As a director of all of IT at KDFA, security must be a big concern, but it can’t devour all of your time. How have you been able to balance security demands with all of your other IT demands?

Kater: That’s a very interesting question, and it has a multi-segmented answer. In years past, leading up to the development of what KDFA is now, we faced the trends that demanded very basic anti-spam solutions and the very basic virus threats that came via the web and e-mail.


What we’ve seen more recently is the growing trend of enhanced security attacks coming through malware and different exploits — that were once thought impossible — are now are the reality.

Therefore in recent times, my percentage of time dedicated to security had grown from probably five to 10 percent all the way up to 50 to 60 percent of my workload during each given week.

Gardner: Before we get to how you’ve been able to react to that, tell us about KDFA.

Kater: KDFA promotes economic development and prosperity for the State of Kansas by providing efficient access to capital markets through various tax-exempt and taxable debt obligations.

KDFA works with public and private entities across the board to identify financial options and solutions for those entities. We are a public corporate entity operating in the municipal finance market, and therefore we are a conduit finance authority.

KDFA is a very small organization — but a very important one. Therefore we run enterprise-ready systems around the clock, enabling our staff to be as nimble and as efficient as possible.

There are about nine or 10 of us that operate here on any given day at KDFA. We run on a completely virtual environment platform via Citrix XenServer. So we run XenApp, XenDesktop, and NetScaler — almost the full gamut of Citrix products.

We have a few physical endpoints, such as laptops and iPads, and we also have the mobile workforce on iPhones as well. They are all interconnected using the virtual desktop infrastructure (VDI) approach.

Gardner: You’ve had this swing, where your demands from just security issues have blossomed. What have you been doing to wrench that back? How do you get your day back, to innovate and put in place real productivity improvements?


Kater: We went with virtualization via Citrix. It became our solution of choice due to not being willing to pay the extra tax, if you will, for other solutions that are on the market. We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

When we embraced virtualization, the security approaches were very traditional in nature. The old way of doing things worked fantastically for a physical endpoint.

The traditional approaches to security had been on our physical PCs for years. But when that security came over to the virtual realm, it bogged down our systems. It still required updates to be done manually. It just wasn't innovating at the same speed as the virtualization, which was allowing us to create new endpoints.

And so the maintenance, the updating, and the growing threats were no longer being addressed by the traditional approaches to security. We had endpoint security in place on our physical stations, but when we went virtual we no longer had endpoint security. We then had to focus on antivirus and anti-spam at the server level.

What we found out very quickly was that this was not going to solve our security issues. We then faced a lot of growing threats again, via e-mail and via the web, that were coming in through malware, spyware, and other activities that were embedding themselves on our file servers – and then trickling down and moving laterally across our network to our endpoints.

Gardner: Just as your organization went virtual and adjusted to those benefits, the malware and the bad guys, so to speak, adjusted as well — and started taking advantage of what they saw as perhaps vulnerabilities as organizations transitioned to higher virtualization.

Security for all, by all

Kater: They did. One thing that a lot of security analysts, experts, and end-users forget in the grand scheme of things is that this virtual world we live in has grown so rapidly — and innovated so quickly — that the same stuff we use to grow our businesses is also being used by the bad actors. So while we are learning what it can do, they are learning how to exploit it at the same speed — if not a little faster.

Gardner: You recognized that you had to change; you had to think more about your virtualization environment. What prompted you to increase the capability to focus on the hypervisor for security and prevent issues from trickling across your systems and down to your endpoints?

Kater: Security has always been a concern here at KDFA. And there has been more of a security focus recently, with the latest news and trends. We honestly struggled with CryptoLocker, and we struggled with ransomware.

While we never had to pay out any ransom or anything — and they were stopped in place before data could be exfiltrated outside of KDFA’s network — we still had two or three days of either data loss or data interruption. We had to pull back data from an archive; we had to restore some of our endpoints and some of our computers.


As we battled these things over a very short period of time, they were progressively getting worse and worse. We decided that we needed a solution for our virtual environment – one that would not only be easy to deploy and easy to manage, but would also be centrally managed, enabling me to have more time to focus back on my workload — and not have to worry so much about the security thresholds that had to be updated and maintained via the traditional model.

So we went out to the market. We ran very extensive proof of concepts (POCs), and those POCs very quickly illustrated that the underlying architecture was only going to be enterprise-ready via two or three vendors. Once we started running those through the paces, Bitdefender emerged for us.

I had actually been watching the Hypervisor Introspection (HVI) product development for the past four years, since its inception came with a partnership between Citrix, Intel, the Linux community and, of course, Bitdefender. One thing that was continuous throughout all of that was that in order to deploy that solution you would need GravityZone in-house to be able to run the HVI workloads.

And so we became early adopters of Bitdefender GravityZone, and we are able to see what it could do for our endpoints, our servers, and our Microsoft Exchange Servers. Then, Hypervisor Introspection became another security layer that we are able to build upon the security solution that we had already adopted from Bitdefender.

Gardner: And how long have you had these solutions in place?

Kater: We are going on one and a half to two years with GravityZone. HVI went to general availability earlier this year, in 2017, and we were one of the first adopters to deploy it across our production environment.

Gardner: If you had a “security is easy” button that you could pound on your desk, what are the sorts of things that you look for in a simpler security solution approach?

IT needs brains to battle breaches

Kater: The “security is easy” button would operate much like the human brain. It would need that level of intuitive instinct, that predictive insight ability. The button would generally be easily managed, automated; it would evolve and learn with artificial intelligence (AI) and machine learning what’s out there. It would dynamically operate with peaks and valleys depending on the current status of the environment, and provide the security that’s needed for that particular environment.
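
As a toy illustration of that idea of security that adapts to the peaks and valleys of the environment (only an illustration, not any vendor's engine), the sketch below keeps an exponentially weighted baseline of a metric and flags observations that drift far from it.

```python
# Toy adaptive baseline: learns what "normal" looks like for one metric and
# adjusts its alert threshold as behavior changes. Purely illustrative.
class AdaptiveBaseline:
    def __init__(self, alpha: float = 0.1, sensitivity: float = 3.0, warmup: int = 5):
        self.alpha = alpha              # how quickly the baseline follows new data
        self.sensitivity = sensitivity  # how many deviations count as anomalous
        self.warmup = warmup            # observations to learn from before alerting
        self.mean = None
        self.var = 0.0
        self.count = 0

    def observe(self, value: float) -> bool:
        """Return True if this observation looks anomalous against the learned baseline."""
        self.count += 1
        if self.mean is None:
            self.mean = value
            return False
        deviation = value - self.mean
        anomalous = (self.count > self.warmup and self.var > 0
                     and deviation ** 2 > self.sensitivity ** 2 * self.var)
        # Update the baseline whether or not we alerted, so it keeps adapting.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return anomalous


# Example: a stream of, say, outbound connections per minute; only the spike trips it.
detector = AdaptiveBaseline()
for count in [20, 22, 19, 21, 23, 20, 250]:
    if detector.observe(count):
        print(f"unusual activity: {count} connections/minute")
```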

Gardner: Jeff, you really are an early adopter, and I commend you on that. A lot of organizations are not quite as bold. They want to make sure that everything has been in the market for a long time. They are a little hesitant.

But being an early adopter sounds like you have made yourselves ready to adopt more AI and machine learning capabilities. Again, I think that’s very forward-looking of you.

But tell us, in real terms, what has being an early adopter gotten for you? We’ve had some pretty scary incidents just in the recent past, with WannaCry, for example. What has being an early adopter done for you in terms of these contemporary threats?

Kater: The new threats, including the EternalBlue exploit that happened here recently, are very advanced in nature. Oftentimes when these breaches occur, it takes several months before they have even become apparent. And oftentimes they move laterally within our network without us knowing, no matter what you do.

Some of the more advanced and persistent threats don’t even have to infect the local host with any type of software. They work in the virtual memory space. It’s much different than the older threats, where you could simply reboot or clear your browser cache to resolve them and get back to your normal operations.

Earlier, when KDFA still made use of non-persistent desktops, if the user got any type of corruption on their virtual desktop, they were able to reboot, and get back to a master image and move on. However, with these advanced threats, when they get into your network, and they move laterally — even if you reboot your non-persistent desktop, the threat will come back up and it still infects your network. So with the growing ransomware techniques out there, we can no longer rely on those definition-based approaches. We have to look at the newer techniques.

As far as why we are early adopters, and why I have chosen some of the principles that I have, I feel strongly that you are really only as strong as your weakest link. I strive to provide my users with the most advanced, nimble, and agnostic solutions possible.


We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations. It allows us to have discussions about increasing productivity at that point, and to maximize the potential of our smaller number of users — versus having to worry about the latest news of security breaches that are happening all around us.

Gardner: You’re able to have a more proactive posture, rather than doing the fire drill when things go amiss and you’re always reacting to things.

Kater: Absolutely.

Gardner: Going back to making sure that you’re getting a fresh image and versions of your tools …  We have heard some recent issues around the web browser not always being safe. What is it about being able to get a clean version of that browser that can be very important when you are dealing with cloud services and extensive virtualization?

Virtual awareness, secure browsing

Kater: Virtualization in and of itself has allowed us to remove the physical element of our workstations when desirable and operate truly in that virtual or memory space. And so when you are talking about browsers, you can have a very isolated, a very clean browser. But that browser is still going to hit a website that can exploit your system. It can run in that memory space for exploitation. And, again, it doesn’t rely on plug-ins to be downloaded or anything like that anymore, so we really have to look at the techniques that these browsers are using.

What we are able to do with the secure browsing technique is publish, in our case, via XenApp, any browser flavor with isolation out there on the server. We make it available to the users that have access for that particular browser and for that particular need. We are then able to secure it via Bitdefender HVI, making sure that no matter where that browser goes, no matter what interface it’s trying to align with, it’s secure across the board.

Gardner: In addition to secure browsing, what do you look for in terms of being able to keep all of your endpoints the way you want them? Is there a management approach of being able to verify what works and what doesn’t work? How do you try to guarantee 100 percent security on those many and varied endpoints?

Kater: I am a realist, and I realize that nothing will ever be 100 percent secure, but I really strive for that 99.9 percent security and availability for my users. In doing so — being that we are so small in staff, and being that I am the one that should manage all of the security, architecture, layers, networking and so forth — I really look for that centralized model. I want one pane of glass to look at for managing, for reporting.


I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go, and what did it do to me and how was I protected. I need that so that I can report to my management staff and say, “Hey, honestly, this is what happened, this is what was happening behind the scenes. This is how we remediated and we are okay. We are protected. We are safe.”

And so I really look for that centralized management. Automation is key. I want something that will automatically update, with the latest virus and malware definitions, but also download the latest techniques that are seen out there via those innovative labs from our security vendors to fully patch our systems behind the scenes. So it takes that piece of management away from me and automates it to make my job more efficient and more effective.

Gardner: And how has Bitdefender HVI, in association with Bitdefender GravityZone, accomplished that? How big of a role does it play in your overall solution?

Kater: It has been a very easy deployment and management, to be honest. Again, entities large and small, we are all facing the same threats. When we looked at ways to attain the best solution for us, we wanted to make sure that all of the main vendors that we make use of here at KDFA were on board.

And it just so happened this was a perfect partnership, again, between Citrix, Bitdefender, Intel, and the Linux community. That close partnership really developed into HVI, and it is not an evolutionary product. It did not grow from anything else. It really is a revolutionary approach. It's a different way of looking at security models. It's a different way of protecting.

HVI allows for security to be seen outside of the endpoint, and outside of the guest agent. It’s kind of an inside-looking-outward approach. It really provides high levels of visibility, detection and, again, it prevents the attacks of today, with those advanced persistent threats or APTs.

With that said, since the partnership between GravityZone and HVI is so easy to deploy, so easy to manage, it really allows our systems to grow and scale when the need is there. And we just know that with those systems in place, when I populate my network with new VMs, they are automatically protected via the policies from HVI.

Given that the security has to be protected from the ground all the way up, we rest assured that the security moves with the workload. As the workload moves across my network, it’s spawned off and onto new VMs. The same set of security policies follows the workloads. It really takes out any human missteps, if you will, along the process because it’s all automated and it all works hand-in-hand together.
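
A generic sketch of that policy-follows-the-workload idea appears below. It is not the GravityZone API; the group, policy, and provisioning-hook names are assumptions used to show how a newly spawned VM can inherit protection with no manual step.

```python
# Generic sketch (not any vendor's API): security policy is attached to the
# group a VM lands in, so every VM spawned into that group inherits it.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SecurityPolicy:
    name: str
    memory_introspection: bool
    content_filtering: bool


@dataclass
class VmGroup:
    name: str
    policy: SecurityPolicy
    members: List[str] = field(default_factory=list)


def on_vm_created(vm_name: str, group: VmGroup, enforcement_log: Dict[str, str]) -> None:
    """Hypothetical provisioning hook: protection is applied before any workload runs."""
    group.members.append(vm_name)
    enforcement_log[vm_name] = group.policy.name


# Example: a web tier scales out by three VMs; each one inherits the same policy.
web_policy = SecurityPolicy("web-tier-hvi", memory_introspection=True, content_filtering=True)
web_group = VmGroup("web-tier", web_policy)
log: Dict[str, str] = {}
for i in range(3):
    on_vm_created(f"web-{i:02d}", web_group, log)
# 'log' now maps every new VM to the policy that covers it, with no manual step.
```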

Behind the screens

Gardner: It sounds like you have gained increased peace of mind. That’s always a good thing in IT; certainly a good thing for security-oriented IT folks. What about your end-users? Has the ability to have these defenses in place allowed you to give people a bit more latitude with what they can do? Is there a productivity, end-user or user experience benefit to this?

Kater: When it comes to security agents and endpoint security as a whole, I think a lot of people would agree with me that the biggest drawback when implementing them into your work environment is loss of productivity. It's really not the end-user's fault, and it's not a limitation of what they can and can't do. It's what happens when security puts an extra load on your CPU and your RAM: it bogs down your systems. Your systems don't operate as efficiently or effectively, and that decreases your productivity.

With Bitdefender, and the approaches that we adopted, we have seen very limited, almost immeasurable, impact on our network and on our endpoints. So user adoption has been greater than it ever has been for a security solution.

I’m also able to manipulate our policies within that Central Command Center or Central Command Console within Bitdefender GravityZone to allow my users, at will, if they would like, to see what they are being blocked against, and which websites they are trying to run in the background. I am able to pass that through to the endpoint for them to see firsthand. That has been a really eye-opening experience.

We used to compute daily, thinking we were protected, and that nothing was running in the background. We were visiting the pages, and those pages were acting as though we thought that they should. What we have quickly found out is that any given page can launch several hundred, if not thousands, of links in the background, which can then become an exploit mechanism, if not properly secured.

Gardner: I would like to address some of the qualitative metrics of success when you have experienced the transition to more automated security. Let’s begin with your time. You said you went from five or 10 percent of time spent on security to 50 or 60 percent. Have you been able to ratchet that back? What would you estimate is the amount of time you spend on security issues now, given that you are one and a half years in?

Kater: Dating back 5 to 10 years ago with the inception of VDI, my security footprint as far as my daily workload was probably around that 10 percent. And then, with the growing threats in the last two to three years, that ratcheted it up to about 50 percent, at minimum, maybe even 60 percent. By adopting GravityZone and HVI, I have been able to pull that back down to only consume about 10 percent of my workload, as most of it is automated for me behind the scenes.

Gardner: How about ransomware infections? Have you had any of those? Or lost documents, any other sort of qualitative metrics of how to measure efficiency and efficacy here?


Kater: I am happy to report that since the adoption of GravityZone, and now with HVI as an extra security layer on top of Bitdefender GravityZone, that we have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Gardner: Well, that speaks for itself. Let’s look to the future, now that you have obtained this. You mentioned earlier your interest in AI, machine learning, automating, of being proactive. Tell us about what you expect to do in the future in terms of an even better security posture.

Safety layers everywhere, all the time

Kater: In my opinion, again, security layers are vital. They are key to any successful deployment, whether you are large or small. It’s important to have all of your traditional security hardware and software in place working alongside this new interwoven fabric, if you will, of software — and now at the hypervisor level. This is a new threshold. This is a new undiscovered territory that we are moving into with virtual technologies.

As that technology advances, and more complex deployments are made, it’s important to protect that computing ability every step of the way; again, from that base and core, all the way into the future.

More and more of my users are computing remotely, and they need to have the same security measures in place for all of their computing sessions. What HVI has been able to do for me here in the current time, and in moving to the future, is I am now able to provide secure working environments anywhere — whether that’s their desktop, whether that’s their secure browser. I am able to leverage that HVI technology once they are logged into our network to make their computing from remote areas safe and effective.

Gardner: For those listening who may not have yet moved toward a hypervisor-level security – or who have maybe even just more recently become involved with pervasive virtualization and VDI — what advice could you give them, Jeff, on how to get started? What would you suggest others do that would even improve on the way you have done it? And, of course, you have had some pretty good results.

Kater: It’s important to understand that everybody’s situation is very different, so identifying the best solutions for everybody is very much on an individual corporation basis. Each company has its own requirements, its own compliance to follow, of course.


The best advice that I can give is to pick at least two or three vendors and run very stringent POCs. No matter what the solutions may be, make sure that they are able to identify your security restraints, try to break them, run them through the phases, and see how they affect your network. Then, when you have two or three that come out of that and that you feel strongly about, continue to break them down.

I cannot stress the importance of POCs enough. It's very important to identify that one or two that you really feel strongly about. Once you identify those, then talk to the industry experts who support those technologies, talk to the engineers, and really get the insight from the inside out on how they are innovating and what their plan is for the future of their products, to make sure that you are on a solid footing.

Most success stories involve a leap of faith. With machine learning and AI, we are now taking a leap that is backed by factual knowledge and analyzing techniques to stay ahead of threats. No longer are we relying on those virus definitions and those virus updates that can be lagging sometimes.

Gardner: Before we sign off, where do you go to get your information? Where would you recommend other people go to find out more?

Kater: Honestly, I was very fortunate that HVI at its inception fell into my lap. When I was looking around at different products, we just hit the market at the right time. But to be honest with you, I cannot stress enough, again, run those POCs.

If you are interested in finding out more about Bitdefender and its product line up, Bitdefender has an excellent set of engineers on staff; they are very knowledgeable, they are very well-rounded in all of their individual disciplines. The Bitdefender website is very comprehensive. It contains many outside resources, along with inside labs reporting, showcasing just what their capabilities are, with a lot of unbiased opinions.

They have several video demos and technical white papers listed out there; you can find them all across the web. You can request the full product demo when you are ready for it and run that POC of Bitdefender products in-house on your network. Also, they have presales support that will help you all along the way.

Bitdefender HVI will revolutionize your data center security capabilities.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Bitdefender.



Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional cloud requirements into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud — especially hybrid cloud — when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.


The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it’s in the data that’s the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural implementation and legal complexities that can’t be discounted.

So, cloud and globalization can go together — but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues — such as compliance, regulation, and data sovereignty — how is that different from taking an applications-centric view of things?


Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and about ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.


Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data — either small or very large — across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
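
A back-of-the-envelope calculation makes that physics concrete. The fiber speed, overhead factor, route distance, and throughput below are assumptions chosen for illustration, not figures from the discussion.

```python
# Rough latency and bulk-transfer math for a long-haul route. All inputs are
# illustrative assumptions; real paths add routing, queuing, and protocol overhead.
FIBER_KM_PER_S = 200_000.0  # light in fiber travels at roughly 2/3 of c


def round_trip_ms(route_km: float, overhead_factor: float = 1.5) -> float:
    """Approximate round trip for one request/response over the given route."""
    one_way_s = route_km / FIBER_KM_PER_S
    return 2 * one_way_s * overhead_factor * 1000


def transfer_hours(dataset_gb: float, usable_mbps: float) -> float:
    """How long a bulk copy takes at a sustained, usable throughput."""
    bits = dataset_gb * 8 * 1000 ** 3
    return bits / (usable_mbps * 1_000_000) / 3600


# Roughly 12,000 miles (about 19,300 km) point to point:
print(round_trip_ms(19_300))     # on the order of 250-300 ms per round trip
print(transfer_hours(500, 200))  # a 500 GB dataset at 200 Mbps: about 5.5 hours
```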

Gardner: As you are factoring your architecture, you need to take these local considerations into account, particularly when you are factoring costs. If you have to do some heavy lifting to make your bandwidth capable, it might be better to have a local, closet-sized data center, because they are small and efficient these days, and stick with a private cloud or on-premises approach. At the least, you should factor in the economic basis for comparison, along with all these other variables you brought up.

Edge centers

Burris: That’s correct. In fact, we call them “edge centers.” For example, if the application features any familiarity with Internet of Things (IoT), then there will likely be some degree of latency considerations obtained, and the cost of doing a round trip message over a few thousand miles can be pretty significant when we consider the total cost of how fast computing can be done these days.

The first consideration is, what are the impacts of latency for an application workload like IoT, and is it intended to drive more automation into the system? Imagine, if you will, the businessperson who says, "I would like to enter a new market, or expand my presence in a market, in a cost-effective way. And to do that, I want the system to be more fully automated as it serves that particular market or that particular group of customers. Perhaps it's something that looks more like process manufacturing, or something along those lines, that has IoT capabilities."


The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor cost associated with the implementation.

The only way you are going to do that is if you introduce a fair amount of automation, and if, in fact, that automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round trip for moving the data from a remote global location back to somewhere in North America — independent of whether it's legal or not — takes longer than the automation moment allows, then you just flat out can't do it. Now, that is the most obvious and stringent consideration.

On top of that, these moments of automation necessitate significant amounts of data being generated and captured. We have done model studies where, for example, moving data out of a small wind farm can be 10 times as expensive. It can cost hundreds of thousands of dollars a year to do relatively simple and straightforward types of data analysis on the performance of that wind farm.
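
The trade-off can be sketched with rough arithmetic. The turbine count, telemetry volumes, and per-gigabyte egress rate below are illustrative assumptions, not the figures from the model studies Burris mentions.

```python
# Rough comparison: ship all raw telemetry off-site versus process locally and
# ship only summaries. Volumes and the egress rate are illustrative assumptions.
def annual_egress_cost(turbines: int,
                       gb_per_turbine_per_day: float,
                       egress_usd_per_gb: float = 0.09) -> float:
    """Yearly cost of moving the given telemetry volume out of the site."""
    gb_per_year = turbines * gb_per_turbine_per_day * 365
    return gb_per_year * egress_usd_per_gb


# 50 turbines, 40 GB/day of raw signals each, versus 0.5 GB/day of local-model summaries:
raw_cost = annual_egress_cost(50, 40.0)      # about $65,700 per year
summary_cost = annual_egress_cost(50, 0.5)   # about $820 per year
print(f"raw: ${raw_cost:,.0f}/yr vs summaries: ${summary_cost:,.0f}/yr")
```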

Process locally, act globally

It’s a lot better to have a local presence that can handle local processing requirements against models that are operating against locally derived data or locally generated data, and let that work be automated with only periodic visibility into how the overall system is working closely. And that’s where a lot of this kind of on-premise hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power that’s necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work be pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very, very closely in some of those devices. We will try to keep as much of that data as we can local.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.

Gardner: It’s a fascinating subject as to what to push down the edge given that the storage cost and processing costs are down and footprint is down and what to then use the public cloud environment or Infrastructure-as-a-Service (IaaS) environment for.

But before we go any further, Peter, tell us about yourself, and your organization, Wikibon.


Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon’s objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It’s not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.

The difference between a business and a digital business is the degree to which data is used as an asset.

The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and advanced analytics like artificial intelligence (AI), machine learning, and deep learning are about. Then we look at how to create the next generation of applications that act on behalf of the brand with a fair degree of autonomy, which is what we call “systems of agency.” And finally, we look at how cloud and historical infrastructure are going to come together and be optimized to support all of those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the required services are appropriately contracted for and can be sustained as technology becomes an increasingly central feature of any company’s value proposition. That’s what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment through speeds and feeds, and technology. I would like to elevate it to the issues of culture, politics, and perception. Because in recent years, with offshoring and intellectual property concerns in other countries, the fact is that all the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of second-tier providers, but the top tier is dominated by US companies.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that’s US-based?

Burris: There are two perspectives on that, but let me add one more check on this. Alibaba clearly is one of the top tier, and they are not based in the US, and that may be one of the advantages that they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one will emerge in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let’s just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.
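
To put a number on the first of those concerns, the physics of latency, here is a quick lower-bound calculation based only on the speed of light in optical fiber; the distances are rough great-circle figures, and real network paths add routing and queuing delay on top.

```python
# Lower bound on round-trip time imposed by physics alone: light in optical
# fiber travels at roughly 200,000 km per second (about two-thirds of c).
SPEED_IN_FIBER_KM_PER_MS = 200

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round trip over a straight fiber path; real routes add more."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for label, km in [("Sydney to the US West Coast", 12000),
                  ("Frankfurt to the US East Coast", 6200)]:
    print(f"{label}: at least {min_round_trip_ms(km):.0f} ms round trip")
```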


From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity.

Money, you can put it here or you can put it there. Time and people, you can put here or there. That machine can be dedicated to this kind of work or that kind of work.

Data is weird, because data can be copied, data can be shared. The value of data appreciates as we use it more successfully, as we integrate it and share it across multiple applications.

Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could possibly happen. One is that it gets copied or stolen, and there are a lot of implications to that. And two, there may be rules and regulations in place that restrict how I can combine that data with other sources of data. That means that, for example, my customer data in Germany may not appreciate, or may not be able to generate the same types of returns, as my customer data in the US.

Now, that sets aside any moral question of whether or not Germany or the US has better privacy laws and protects the consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right. On the one hand, you probably aren’t going to be able to generate the total value of your data because of restrictions on its use; and number two, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there’s some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren’t necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.

All hyperscalers are going to have to be able to demonstrate that they can protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated.

All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated. [The rationale] for basing your business in these types of services is really immature. We have made enormous progress, but there’s a long way yet to go here, and that’s something that businesses must factor as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise — their legal, marketing, security, risk avoidance and governance units — right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern — or that type of concern — is minuscule relative to the broader questions of cloud computing, of the grabbing of your data and holding it hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.


Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud — think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would go back and do research on outsourcing, I discovered that a lot of the outsourcing was not driven by business needs, but driven by executive compensation schemes, literally. So, where executives were told that they would be paid on the basis of return on net assets, there was a high likelihood that the business was going to go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.

Think about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

The same type of thinking pertains here — the goal is not simply to get rid of IT assets just because those assets, generally speaking, are becoming less important features of the overall proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, “How can I get the cloud experience wherever the data demands it?” The cloud experience, which is a very, very powerful concept, ultimately means being able to get access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is of having a technology stack that provides a consistent cloud experience wherever the data has to run — whether that’s because of IoT or because of privacy issues or because of intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience.

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It must involve a degree of distribution that completely breaks most models. We’re thinking about infrastructure and architecture, but also data management, system management, security management, and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we’re going to use increasingly from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop — and they will continue to drop — it means ultimately that the fundamental cost determinant will be, How long does it take an application to complete? How long does it take this transaction to complete? And that’s not so much a throughput question, as it is a question of, “I have all these multiple sources that each on their own are contributing some degree of time to how this piece of work finishes, and can I do that piece of work in less time if I bring some of the work, for example, in-house, and run it close to the event?”

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.
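
A toy calculation of that time-as-the-metric idea: if a piece of work touches several services, its completion time is roughly the sum of each contribution plus data movement, so relocating one step closer to the event changes the answer. The latency figures below are invented purely to show the shape of the comparison.

```python
# Toy comparison of end-to-end completion time for one piece of work, with the
# analytics step either run in a remote cloud region or next to the event.
# All latency figures are illustrative assumptions.

steps_remote_analytics = {
    "ingest at edge": 5,                # ms
    "move data to remote region": 120,
    "analytics in remote region": 40,
    "return decision to edge": 120,
}

steps_local_analytics = {
    "ingest at edge": 5,
    "analytics on local cluster": 55,   # assume less efficient hardware locally
}

def completion_time(steps: dict) -> int:
    """Total time as the sum of every contributing step."""
    return sum(steps.values())

print("Remote analytics:", completion_time(steps_remote_analytics), "ms")
print("Local analytics: ", completion_time(steps_local_analytics), "ms")
```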

The fifth issue, which really places an enormous strain on IT, is how we think about backing up and restoring data. Backup/restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services — and as these applications increasingly are the basis for the business and the end-value that we’re creating — we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up applications and, even more importantly, entire businesses. The issue becomes associated more with restoring: How do we restore applications and businesses across this incredibly complex arrangement of services, data locations, and sources?

There’s a new data regime that’s emerging to support application development. How’s that going to work — the role the data scientists and analytics are going to play in working with application developers?

I listed five areas that are going to be very important. We haven’t even talked about the new regime that’s emerging to support application development and how that’s going to work. The role the data scientists and analytics are going to play in working with application developers – again, we could go on and on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in to bring in new types of management with maturity and with visibility. Who are some of the players that might fill that role? One that I am familiar with, and I think I have seen them on theCUBE, is Hewlett Packard Enterprise (HPE) with what they call Project New Hybrid IT Stack. We still don’t know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about a single hyperscaler or a handful of hyperscale providers. What’s the opportunity for a business to come in to this problem set and start to solve it? What do you think from what you’ve heard so far about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When companies wanted to put those networks together to build more distributed applications, the mini-computer companies said, “Yeah, just bridge our network.” That was a deeply unsatisfying answer for the users. So along came Cisco, TCP/IP, and they flattened out all those mini-computer networks, and in the process flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

We are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized.

The second thing is that to build the next generations of more complex applications — and especially applications that involve capabilities like deep learning or machine learning with increased automation — we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications if we don’t take full advantage of the technologies that we want to use in the applications — inside how we run our infrastructures and run our subsystems, and do all the things we need to do from a hybrid cloud standpoint.

Ultimately, the companies are going to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure, that really understand the problem. They need a lot of experience with a lot of different environments, not just one operating system or one cloud platform. They will need a lot of experience with these advanced applications, and have both the brainpower and the inclination to appropriately invest in those capabilities so they can build the type of platforms that we are talking about. There are not a lot of companies out there that can.

There are few out there, and certainly HPE with its New Stack initiative is one of them, and we at Wikibon are especially excited about it. It’s new, it’s immature, but HPE has a lot of piece parts that will be required to make a go of this technology. It’s going to be one of the most exciting areas of invention over the next few years. We really look forward to working with our user clients to introduce some of these technologies and innovate with them. It’s crucial to solve the next generation of problems that the world faces; we can’t move forward without some of these new classes of hybrid technologies that weave together fabrics that are capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


As enterprises face hybrid IT complexity, new management solutions beckon

The next BriefingsDirect Voice of the Analyst interview examines how new machine learning and artificial intelligence (AI) capabilities are being applied to hybrid IT complexity challenges.

We’ll explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to report on how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT is Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. There is also lingering concern about the complexity of managing so many fast-moving parts. We have legacy IT, private cloud, public cloud, software as a service (SaaS) and, of course, multi-cloud. So as someone who tracks technology and its consumption, how much has technology itself been tapped to manage this sprawl, if you will, across hybrid IT?


Teich: So far, not very much, mostly because of the early state of multi-cloud and the hybrid cloud business model. As you know, it takes a while for management technology to catch up with the actual compute technology and storage. So I think we are seeing that management is the tail of the dog, it’s getting wagged by the rest of it, and it just hasn’t caught up yet.

Gardner: Things have been moving so quickly with cloud computing that few organizations have had an opportunity to step back and examine what’s actually going on around them — never mind properly react to it. We really are playing catch up.

Teich: As we look at the options available, the cloud giants — the public cloud services — don’t have much incentive to work together. So you are looking at a market where there will be third parties stepping in to help manage multi-cloud environments, and there’s a lag time between having those services available and having the cloud services available and then seeing the third-party management solution step in.

Gardner: It’s natural to see that a specific cloud environment, whether it’s purely public like AWS or a hybrid like Microsoft Azure and Azure Stack, wants to help its customers, but it wants to help those customers get to its own solutions first and foremost. It’s a natural thing. We have seen this before in technology.

There are not that many organizations willing to step into the neutral position of being ecumenical, of saying they want to help the customer first and manage it all from the start.

As we look to how this might unfold, it seems to me that the previous models of IT management — agent-based, single-pane-of-glass, and unfortunately still in some cases spreadsheets and Post-It notes — have been brought to bear on this. But we might be in a different ball game with hybrid IT, Paul, where there are just too many moving parts and too much complexity, and where we might need to look at data-driven approaches. What is your take on that?


Teich: I think that’s exactly correct. One of the jokes in the industry right now is if you want to find your stranded instances in the cloud, cancel your credit card and AWS or Microsoft will be happy to notify you of all of the instances that you are no longer paying for because your credit card expired. It’s hard to keep track of this, because we don’t have adequate tools yet.
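
Short of cancelling the credit card, a first pass at that visibility problem is simply to enumerate what is running in each account. The sketch below uses the AWS boto3 SDK to list running EC2 instances and show which ones lack an “owner” tag; the tag name, and the idea of repeating this per account, are assumptions made for illustration.

```python
# Sketch: inventory running EC2 instances in one AWS account and show which
# ones carry no 'owner' tag, as a crude first pass at finding orphaned spend.
import boto3

ec2 = boto3.client("ec2")            # uses the default credentials and region
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            owner = tags.get("owner", "<untagged>")
            print(instance["InstanceId"], instance["InstanceType"], owner)
```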

When you are an IT manager and you have a lot of folks on public cloud services, you don’t have a full picture.

That single pane of glass, looking at a lot of data and information, is soon overloaded. When you are an IT manager at a mid-sized or large corporation, you have a lot of folks paying out of pocket right now, slapping a credit card down on public cloud services, so you don’t have a full picture. Where you do have a picture, there are so many moving parts.

I think we have to get past having a screen full of data, a screen full of information, and to a point where we have insight. And that is going to require a new generation of tools, probably borrowing from some of the machine learning evolution that’s happening now in pattern analytics.
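
As a trivial stand-in for the richer pattern-analytics approaches Teich alludes to, the snippet below surfaces only the metric samples that sit far from a rolling baseline, rather than displaying every data point; the series and the threshold are invented.

```python
# Minimal rolling z-score check: surface only the samples that look anomalous
# instead of showing every data point on a dashboard. The data is invented.
import statistics

cpu_utilization = [31, 33, 30, 35, 32, 34, 31, 88, 33, 30, 29, 91, 32]
WINDOW = 6
THRESHOLD = 3.0                      # flag points more than 3 sigma from baseline

for i in range(WINDOW, len(cpu_utilization)):
    window = cpu_utilization[i - WINDOW:i]
    mean = statistics.mean(window)
    stdev = statistics.stdev(window) or 1e-9   # avoid dividing by zero
    z = (cpu_utilization[i] - mean) / stdev
    if abs(z) > THRESHOLD:
        print(f"sample {i}: value={cpu_utilization[i]} z={z:.1f} -> investigate")
```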

Gardner: The timing in some respects couldn’t be better, right? Just as we are facing this massive problem of complexity of volume and velocity in managing IT across a hybrid environment, we have some of the most powerful and cost-effective means to deal with big data problems just like that.

Life in the infrastructure

Paul, before we go further let’s hear about you and your organization, and tell us, if you would, what a typical day is like in the life of Paul Teich?

Teich: At TIRIAS Research we are boutique industry analysts. By boutique we mean there are three of us — three principal analysts; we have just added a few senior analysts. We are close to the metal. We live in the infrastructure. We are all former engineers and/or product managers. We are very familiar with deep technology.

My day tends to be first, a lot of reading. We look at a lot of chips, we look at a lot of service-level information, and our job is to, at a very fundamental level, take very complex products and technologies and surface them to business decision-makers, IT decision-makers, folks who are trying to run lines of business (LOB) and make a profit. So we do the heavy lifting on why new technology is important, disruptive, and transformative.

Gardner: Thanks. Let’s go back to this idea of data-driven and analytical values as applied to hybrid IT management and complexity. If we can apply AI and machine learning to solve business problems outside of IT — in such verticals as retail, pharmaceutical, transportation — with the same characteristics of data volume, velocity, and variety, why not apply that to IT? Is this a case of the cobbler’s kids having no shoes? You would think that IT would be among the first to do this.

Dig deep, gain insight

Teich: The cloud giants have already implemented systems like this because of necessity. So they have been at the front-end of that big data mantra of volume, velocity — and all of that.

To successfully train for the new pattern recognition analytics, especially the deep learning stuff, you need a lot of data. You can’t actually train a system usefully without presenting it with a lot of use cases.

The public clouds have this data. They are operating social media services, large retail storefronts, and e-tail, for example. As the public clouds became available to enterprises, the IT management problem ballooned into a big data problem. I don’t think it was a big data problem five or 10 years ago, but it is now.

That’s a big transformation. We haven’t actually internalized what it means operationally when your internal IT department no longer runs all of your IT jobs.

We are generating big data and that means we need big data tools to go analyze it and to get that relevant insight.

That’s the biggest sea change — we are generating big data in the course of managing our IT infrastructure now, and that means we need big data tools to go analyze it, and to get that relevant insight. It’s too much data flowing by for humans to comprehend in real time.

Gardner: And, of course, we are also talking about islands of such operational data. You might have a lot of data in your legacy operations. You might have tier 1 apps that you are running on older infrastructure, and you are probably happy to do that. It might be very difficult to transition those specific apps into newer operating environments.

You also have multiple SaaS and cloud data repositories and logs. There’s also not only the data within those apps, but there’s the metadata as to how those apps are running in clusters and what they are doing as a whole. It seems to me that not only would you benefit from having a comprehensive data and analytics approach for your IT operations, but you might also have a workflow and process business benefit by being an uber analyst, by being on top of all of these islands of operational data.


To me, moving toward a comprehensive intelligence and data analysis capability for IT is the gift that keeps giving. You would then be able to also provide insight for an uber approach to processes across your entire organization — across the supply chains, across partner networks, and back to your customers. Paul, do you also see that there’s an ancillary business benefit to having that data analysis capability, and not ceding it to your cloud providers?

Manage data, improve workflow

Teich: I do. At one end of the spectrum it’s simply what do you need to do to keep the lights on, where is your data, all of it, in the various islands and collections and the data you are sharing with your supply chain as well. Where is the processing that you can apply to that data? Increasingly, I think, we are looking at a world in which the location of the stored data is more important than the processing power.

The management of all the data you have needs to segue into visible workflows.

We have processing power pretty much everywhere now. What’s key is moving data from place to place and setting up the connections to acquire it. It means that the management of all the data you have needs to segue into visible workflows.

Once I know what I have, and I am managing it at a baseline effectively, then I can start to improve my processes. Then I can start to get better workflows, internally as well as across my supply chain. But I think at first it’s simply, “What do I have going on right now?”

As an IT manager, how can I rein in some of these credit card instances and credit card storage on the public clouds, and put that all into the right mix? I have to know what I have first — then I can start to streamline. Then I can start to control my costs. Does that make sense?

Gardner: Yes, absolutely. And how can you know which people you want to give even more credit to on their credit cards – and let them do more of what they are doing? It might be very innovative, and it might be very cost-effective. There might also be those wasting money, spinning their wheels, repaving cow paths, over and over again.

If you don’t have the insight and the visibility, you can’t make those decisions, nor further analyze how best to go about them. To me, gaining that capability is a no-brainer.

It also comes at an auspicious time as IT is trying to re-factor its value to the organization. If in fact they are no longer running servers and networks and keeping the trains running on time, they have to start being more in the business of defining what trains should be running and then how to make them the best business engines, if you will.

If IT departments need to rethink their role and step up their game, then they need to use technologies like advanced hybrid IT management from vendors with a neutral perspective. Then they become the overseers of operations at a fundamentally different level.

Data revelation, not revolution

Teich: I think that’s right. It’s evolutionary stuff. I don’t think it’s revolutionary. I think that in the same way you add servers to a virtual machine farm, as your demand increases, as your baseline demand increases, IT needs to keep a handle on costs — so you can understand which jobs are running where and how much more capacity you need.

One of the things they are missing with random access to the cloud is bulk purchasing. So, at a very fundamental level, IT can help the organization manage which clouds it is spending on by aggregating the purchase of storage and the purchase of compute instances to get better buying power, and by doing price arbitrage when it can. To me, those are fundamental qualities of IT going forward in a multi-cloud environment.

They are extensions of where we are today; it just doesn’t seem like it yet. IT has always added new servers to increase internal capacity, and this is just the next evolutionary step.
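
To make the aggregation and arbitrage point concrete, a sketch like the one below compares what separate teams would pay at list price against a pooled commitment at a discounted rate. The team demands, unit price, and discount are hypothetical and merely stand in for whatever volume or reserved-capacity terms a provider actually offers.

```python
# Hypothetical illustration of the buying power gained by aggregating cloud
# purchases. Prices, discount, and team demand figures are invented.

team_storage_tb = {"marketing": 40, "analytics": 220, "product": 90}
ON_DEMAND_PER_TB_MONTH = 23.0        # assumed list price, USD
POOLED_DISCOUNT = 0.30               # assumed discount for a pooled commitment

separate = sum(tb * ON_DEMAND_PER_TB_MONTH for tb in team_storage_tb.values())
pooled = sum(team_storage_tb.values()) * ON_DEMAND_PER_TB_MONTH * (1 - POOLED_DISCOUNT)

print(f"Each team buying separately: ${separate:,.0f}/month")
print(f"IT aggregating the purchase: ${pooled:,.0f}/month")
print(f"Savings: ${separate - pooled:,.0f}/month")
```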

Gardner: It certainly makes sense that you would move as maturity occurs in any business function toward that orchestration, automation and optimization – rather than simply getting the parts in place. What you are describing is that IT is becoming more like a procurement function and less like a building, architecture, or construction function, which is just as powerful.

Not many people can make those hybrid IT procurement decisions without knowing a lot about the technology. Someone with just business acumen can’t walk in and make these decisions. I think this is an opportunity for IT to elevate itself and become even more essential to the businesses.

Teich: The opportunity is a lot like the Sabre airline scheduling system that nearly every airline uses now. That’s a fundamental capability for doing business, and it’s separate from the technology of Sabre. It’s the ability to schedule — people and airplanes — and it’s a lot like scheduling storage and jobs on compute instances. So I think there will be this step.

But to go back to the technology versus procurement, I think some element of that has always existed in IT in terms of dealing with vendors and doing the volume purchases on one side, but also having some architect know how to compose the hardware and the software infrastructure to serve those applications.

Connect the clouds

We’re simply translating that now into a multi-cloud architecture. How do I connect those pieces? What network capacity do I need to buy? What kind of storage architectures do I need? I don’t think that all goes away. It becomes far more important as you look at, for example, AWS as a very large bag of services. It’s very powerful. You can assemble it in any way you want, but in some respect, that’s like programming in C. You have all the power of assembly language and all the danger of assembly language, because you can walk up in the memory and delete stuff, and so, you have to have architects who know how to build a service that’s robust, that won’t go down, that serves your application most efficiently and all of those things are still hard to do.

So, architecture and purchasing are both still necessary. They don’t go away. I think the important part is that the orchestration part now becomes as important as deploying a service on a single set of infrastructure, because you’ve got multiple sets of infrastructure.


Gardner: For hybrid IT, it really has to be an enlightened procurement, not just blind procurement. And the people in the trenches that are just buying these services — whether the developers or operations folks — they don’t have that oversight, that view of the big picture to make those larger decisions about optimization of purchasing and business processes.

That gets us back to some of our earlier points of, what are the tools, what are the management insights that these individuals need in order to make those decisions? Like with Sabre, where they are optimizing to fill every hotel room or every airplane seat, we’re going to want in hybrid IT to fill every socket, right? We’re going to want all that bare metal and all those virtualization instances to be fully optimized — whether it’s your cloud or somebody else’s.

It seems to me that there is an algorithmic approach eventually, right? Somebody is going to need to be the keeper of that algorithm as to how this all operates — but you can’t program that algorithm if you don’t have the uber insights into what’s going on, and what works and what doesn’t.

What’s the next step, Paul, in terms of the technology catching up to the management requirements in this new hybrid IT complex environment?

Teich: People can develop some of that experience on a small scale, but there are so many dimensions to managing a multi-cloud, hybrid IT infrastructure business model. It’s throwing off all of this metadata for performance and efficiency. It’s ripe for machine learning.

We’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale.

In a strong sense, we’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale. It’s just going to be looking at a bigger picture, it’s going to be managing more variables, and learning across a lot more data points than a human can possibly comprehend.

We are at this really interesting point in the industry where we are getting deep-learning approaches that are coming online cost effectively; they can help us do that. They have a little while to go before they are fully mature. But IT organizations that learn to take advantage of these systems now are going to have a head start, and they are going to be more efficient than their competitors.

Gardner: At the end of the day, if you’re all using similar cloud services then that differentiation between your company and your competitor is in how well you utilize and optimize those services. If the baseline technologies are becoming commoditized, then optimization — that algorithm-like approach to smartly moving workloads and data, and providing consumption models that are efficiency-driven — that’s going to be the difference between a 1 percent margin and a 5 percent margin over time.

The deep-learning difference

Teich: The important part to remember is that these machine-training algorithms are somewhat new, so there are several challenges with deploying them. First is the transparency issue. We don’t quite yet know how a deep-learning model makes specific decisions. We can’t point to one aspect and say that aspect is managing the quality of our AWS services, for example. It’s a black box model.

We can’t yet verify the results of these models. We know they are being efficient and fast but we can’t verify that the model is as efficient as it could possibly be. There is room for improvement over the next few years. As the models get better, they’ll leave less money on the table.

We also need to validate that, when you build a machine-learning model, it covers all the situations you want it to cover. You need an audit trail for specific sets of decisions, especially with data that is subject to regulatory constraints. You need to know why you made decisions.

So the net is, once you are training a machine-learning model, you have to keep retraining it over time. Your model is not going to do the same thing as your competitor’s model. There is a lot of room for differentiation, a lot of room for learning. You just have to go into it with your eyes open that, yeah, occasionally things will go sideways. Your model might do something unexpected, and you just have to be prepared for that. We’re still in the early days of machine learning.

Gardner: You raise an interesting point, Paul, because even as the baseline technology services in the multi-cloud era become commoditized, you’re going to have specific, unique, and custom approaches to your own business’ management.

Your hybrid IT optimization is not going to be like that of any other company. I think getting that machine-learning capability attuned to your specific hybrid IT panoply of resources and assets is going to be a gift that keeps giving. Not only will you run your IT better, you will run your business better. You’ll be fleet and agile.

If some risk arises — whether it’s a cyber security risk, a natural disaster risk, a business risk of unintended or unexpected changes in your supply chain or in your business environment — you’re going to be in a better position to react. You’re going to have your eyes to the ground, you’re going to be well tuned to your specific global infrastructure, and you’ll be able to make good choices. So I am with you. I think machine learning is essential, and the sooner you get involved with it, the better.

Before we sign off, who are the vendors and some of the technologies that we will look to in order to fill this apparent vacuum on advanced hybrid IT management? It seems to me that traditional IT management vendors would be a likely place to start.

Who’s in?

Teich: They are a likely place to start. All of them are starting to say something about being in a multi-cloud environment, about being in a multi-cloud-vendor environment. They are already finding themselves there with virtualization, and the key is they have recognized that they are in a multi-vendor world.

There are some start-ups, and I can’t name them specifically right now. But a lot of folks are working on the problem of how to manage hybrid IT: in-house IT plus multi-cloud orchestration. There’s a lot of work going on there. We haven’t seen a lot of it publicly yet, but there is a lot of venture capital being placed.

I think this is the next step. Just as PCs came into the office and then smartphones came into the office, the move from server farms to cloud, and from cloud to multi-cloud, is attracting a lot of attention. The hard part right now is deciding whom to place your faith in. The name brands that people are buying their internal IT from right now are probably good near-term bets. As the industry gets more mature, we’ll have to see what happens.


Gardner: We did hear a vision described on this from Hewlett Packard Enterprise (HPE) back in June at their Discover event in Las Vegas. I’m expecting to hear quite a bit more on something they’ve been calling New Hybrid IT Stack that seems to possess some of the characteristics we’ve been describing, such as broad visibility and management.

So at least one of the long-term IT management vendors is looking in this direction. That’s a place I’m going to be focusing on, wondering what the competitive landscape is going to be, and if HPE is going to be in the leadership position on hybrid IT management.

Teich: Actually, I think HPE is the only company I’ve heard from so far talking at that level. Everybody is voicing some opinion about it, but from what I’ve heard, it does sound like a very interesting approach to the problem.

Microsoft has actually constrained its positioning of Azure Stack to a very small set of problems, and is actively saying “no” to uses beyond that. If you’re looking at doing virtual machine migration and taking advantage of multi-cloud for general-purpose solutions, it’s probably not something that you want to do yet. It was very interesting for me, then, to hear about the HPE Project New Hybrid IT Stack and what HPE is planning to do there.

Gardner: For Microsoft, the more automated and constrained they can make it, the more likely you’d be susceptible or tempted to want to just stay within an Azure and/or Azure Stack environment. So I can appreciate why they would do that.

Before we sign off, one other area I’m going to be keeping my eyes on is around orchestration of containers, Kubernetes, in particular. If you follow orchestration of containers and container usage in multi-cloud environments, that’s going to be a harbinger of how the larger hybrid IT management demands are going to go as well. So a canary in the coal mine, if you will, as to where things could get very interesting very quickly.

The place to be

Teich: Absolutely. And I point out that the Linux Foundation’s CloudNativeCon in early December 2017 looks like the place to be — with nearly everyone in the server and cloud infrastructure communities signing on. Part of the interest is in basically interchangeable container services. We’ll see that become much more important. So that sleepy little technical show is going to be invaded by “suits” this year, and we’re paying a lot of attention to it.

Gardner: Yes, I agree. I’m afraid we’ll have to leave it there. Paul, how can our listeners and readers best follow you to gain more of your excellent insights?

Teich: You can follow us at www.tiriasresearch.com, and also we have a page on Forbes Tech, and you can find us there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT’s ability to grow and thrive

The next BriefingsDirect Voice of the Analyst interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice.

We’ll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order to have businesses grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles, joins us to report on how companies are managing an increasingly complex transition to sustainable hybrid IT. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.

As someone who regularly advises chief information officers (CIOs), who or which group is surfacing that is tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?

Crawford: For the short-term, I would say everyone. It’s not as simple as it has been in the past where we look to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models — and cloud is a relatively new consumption model for technology — it changes the dynamics of it. It’s the combination of changing that consumption model — but then there’s another factor that comes into this. There is also the consumerization of technology, right? We are “democratizing” technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.

It’s not as it used to be, where we would say, “Okay, I’m not sure how to turn on a computer.” Now, businesses may be more familiar outside of the IT organization with certain technologies. Bringing that full-circle, the answer is that we have to look beyond just IT. Cloud is something that is consumed by IT organizations. It’s consumed by different lines of business, too. It’s consumed even by end-consumers of the products and services. I would say it’s all of the above.


Gardner: The good news is that more and more people are able to — on their own – innovate, to acquire cloud services, and they can factor those into how they obtain business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation become something that spins out of control, or becomes a negative over time?

Crawford: To some degree, we’ve already hit that inflection-point where technology is being used in inappropriate ways. A great example of this — and it’s something that just kind of raises the hair on the back of my neck — is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “Go cloud.”

The board should be very business-focused and instead they’re dictating specific technology — whether it’s the right technology or not. That’s really what this comes down to.

What’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application?

Another example is folks that try and go all-in on cloud but aren’t necessarily thinking about what’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application? It’s not a one-size-fits-all answer.

We in the enterprise IT space haven’t really done enough work to truly understand how best to leverage these new sets of tools. We need to both wrap our head around it but also get in the right frame of mind and thought process around how to take advantage of them in the best way possible.

Another example that I’ve worked through from an economic standpoint is to do the math, which I have done a number of times with clients — the math to compare the IT you’re doing on-premises in your corporate data center for any given application versus doing it in a public cloud.

Think differently

If you do the math, taking an application from a corporate data center and moving it to public cloud will cost you four times as much money. Four times as much money to go to cloud! Yet we hear the cloud is a lot cheaper. Why is that?

When you begin to tease apart the pieces, the bottom line is that we get that four-times-as-much number because we’re using the same traditional mindset where we think about cloud as a solution, the delivery mechanism, and a tool. The reality is it’s a different delivery mechanism, and it’s a different kind of tool.

When used appropriately, in some cases, yes, it can be less expensive. The challenge is you have to get yourself out of your traditional thinking and think differently about the how and why of leveraging cloud. And when you do that, then things begin to fall into place and make a lot more sense both organizationally — from a process standpoint, and from a delivery standpoint — and also economically.
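
One way that four-times figure can arise is when an always-on virtual machine is simply re-hosted at on-demand rates rather than redesigned to run only when needed. The comparison below is purely illustrative; the rates and utilization are assumptions, not data from Crawford’s client work.

```python
# Illustrative math behind "moving it as-is can cost four times as much":
# an always-on VM at on-demand rates vs. running the work only when needed.
# Every figure here is an assumption, not data from an actual client study.

HOURS_PER_MONTH = 730
ON_PREM_COST_PER_MONTH = 400.0       # assumed amortized server + operations cost
CLOUD_VM_PER_HOUR = 2.20             # assumed on-demand rate for a like-for-like VM
BUSY_FRACTION = 0.25                 # the workload only really runs 25% of the time

lift_and_shift = CLOUD_VM_PER_HOUR * HOURS_PER_MONTH
run_on_demand = lift_and_shift * BUSY_FRACTION

print(f"On-premises today:         ${ON_PREM_COST_PER_MONTH:,.0f}/month")
print(f"Lift-and-shift, always on: ${lift_and_shift:,.0f}/month "
      f"({lift_and_shift / ON_PREM_COST_PER_MONTH:.1f}x)")
print(f"Re-worked to run on demand: ${run_on_demand:,.0f}/month")
```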

Gardner: That “appropriate use of cloud” is the key. Of course, that could be a moving target. What’s appropriate today might not be appropriate in a month or a quarter. But before we delve into more … Tim, tell us about your organization. What’s a typical day in the life for Tim Crawford like?

It’s not tech for tech’s sake, rather it’s best to say, “How do we use technology for business advantage?”

Crawford: I love that question. AVOA stands for that position in which we sit between business and technology. If you think about the intersection of business and technology, of using technology for business advantage, that’s the space we spend our time thinking about. We think about how organizations across a myriad of different industries can leverage technology in a meaningful way. It’s not tech for tech’s sake, and I want to be really clear about that. But rather it’s best to say, “How do we use technology for business advantage?”

We spend a lot of time with large enterprises across the globe working through some of these challenges. It could be as simple as changing traditional mindsets to transformational, or it could be talking about tactical objectives. Most times, though, it’s strategic in nature. We spend quite a bit of time thinking about how to solve these big problems and to change the way that companies function, how they operate.

A day in a life of me could range from, if I’m lucky, being able to stay in my office and be on the phone with clients, working with folks and thinking through some of these big problems. But I do spend a lot of time on the road, on an airplane, getting out in the field, meeting with clients, understanding what people really are contending with.

I spent well over 20 years of my career before I began doing this within the IT organization, inside leading IT organizations. It’s incredibly important for me to stay relevant by being out with these folks and understanding what they’re challenged by — and then, of course, helping them through their challenges.

Any given day is something new and I love that diversity. I love hearing different ideas. I love hearing new ideas. I love people who challenge the way I think.

It’s an opportunity for me personally to learn and to grow, and I wish more of us would do that. So it does vary quite a bit, but I’m grateful that the opportunities that I’ve had to work with have been just fabulous, and the same goes for the people.


Gardner: I’ve always enjoyed my conversations with you, Tim, because you always do challenge me to think a little bit differently — and I find that very valuable.

Okay, let’s get back to this idea of “appropriate use of cloud.” I wonder if we should also expand that to be “appropriate use of IT and cloud.” So including that notion of hybrid IT, which includes cloud and hybrid cloud and even multi-cloud. And let’s not forget about the legacy IT services.

How do we know if we’re appropriately using cloud in the context of hybrid IT? Are there measurements? Is there a methodology that’s been established yet? Or are we still in the opening innings of how to even measure and gain visibility into how we consume and use cloud in the context of all IT — to therefore know if we’re doing it appropriately?

The monkey-bread model

Crawford: The first thing we have to do is take a step back to provide the context of that visibility — or a compass, as I usually refer to these things. You need to provide a compass to help understand where we need to go.

If we look back for a minute, and look at how IT operates — traditionally, we did everything. We had our own data center, we built all the applications, we ran our own servers, our own storage, we had the network – we did it all. We did it all, because we had to. We, in IT, didn’t really have a reasonable alternative to running our own email systems, our own file storage systems. Those days have changed.

Fast-forward to today. Now, you have to pick apart the pieces and ask, “What is strategic?” When I say, “strategic,” it doesn’t mean critically important. Electrical power is an example. Is that strategic to your business? No. Is it important? Heck, yeah, because without it, we don’t run. But it’s not something where we’re going out and building power plants next to our office buildings just so we can have power, right? We rely on others to do it because there are mature infrastructures, mature solutions for that. The same is true with IT. We have now crossed the point where there are mature solutions at an enterprise level that we can capitalize on, or that we can leverage.

Part of the methodology I use is the monkey bread example. If you’re not familiar with monkey bread, it’s kind of a crazy thing where you have these balls of dough. When you bake it, the balls of dough congeal together and meld. What you’re essentially doing is using that as representative of, or an analogue to, your IT portfolio of services and applications. You have to pick apart the pieces of those balls of dough and figure out, “Okay. Well, these systems that support email, those could go off to Google or Microsoft 365. And these applications, well, they could go off to this SaaS-based offering. And these other applications, well, they could go off to this platform.”

And then, what you’re left with is this really squishy — but much smaller — footprint that you have to contend with. That problem in the center is much more specific — and arguably that’s what differentiates your company from your competition.

Whether you run email [on-premises] or in a cloud, that’s not differentiating to a business. It’s incredibly important, but not differentiating. When you get to that gooey center, that’s the core piece, that’s where you put your resources in, that’s what you focus on.

This example helps you work through determining what’s critical, and — more importantly — what’s strategic and differentiating to my business, and what is not. And when you start to pick apart these pieces, it actually is incredibly liberating. At first, it’s a little scary, but once you get the hang of it, you realize how liberating it is. It brings focus to the things that are most critical for your business.

That’s what we have to do more of. When we do that, we identify opportunities where cloud makes sense — and where it doesn’t. Cloud is not the end-all, be-all for everything. It definitely is one of the most significant opportunities for most IT organizations today.

So it’s important: Understand what is appropriate, how you leverage the right solutions for the right application or service.

Gardner: IT in many organizations is still responsible for everything around technology. And that now includes higher-level strategic undertakings of how all this technology and the businesses come together. It includes how we help our businesses transform to be more agile in new and competitive environments.

So is IT itself going to rise to this challenge, of not doing everything, but instead becoming more of that strategic broker between IT functions and business outcomes? Or will those decisions get ceded over to another group? Maybe enterprise architects, business architects, business process management (BPM) analysts? Do you think it’s important for IT to both stay in and elevate to the bigger game?

Changing IT roles and responsibilities

Crawford: It’s a great question. For every organization, the answer is going to be different. IT needs to take on a very different role and sensibility. IT needs to look different than how it looks today. Instead of being a technology-centric organization, IT really needs to be a business organization that leverages technology.

The CIO of today and moving forward is not the tech-centric CIO. There are traditional CIOs and transformational CIOs. The transformational CIO is the business leader first who happens to have responsibility for technology. IT, as a whole, needs to follow the same vein.

For example, if you were to go into a traditional IT organization today and ask them about the nature of their business — ask an administrator or a developer to explain how their work impacts the company and the business — unfortunately, most of them would have a really hard time doing that.

The IT organization of the future will articulate clearly the work they’re doing and how that impacts their customers and their business, and how making different changes and tweaks will impact their business. They will have an intimate knowledge of how their business functions much more than how the technology functions. That’s a very different mindset, that’s the place we have to get to for IT on the whole. IT can’t just be this technology organization that sits in a room, separate from the rest of the company. It has to be integral, absolutely integral to the business.

Gardner: If we recognize that cloud is here to stay — but that the consumption of it needs to be appropriate, and if we’re at some sort of inflection point, we’re also at the risk of consuming cloud inappropriately. If IT and leadership within IT are elevating themselves, and upping their game to be that strategic player, isn’t IT then in the best position to be managing cloud, hybrid cloud and hybrid IT? What tools and what mechanisms will they need in order to make that possible?

Learn More About Hybrid IT Management Solutions From HPE

Crawford: Theoretically, the answer is that they really need to get to that level. We’re not there, on the whole, yet. Many organizations are not prepared to adopt cloud. I don’t want to be a naysayer of IT, but in terms of where IT needs to go, on the whole, we need to move into a position where we can manage the different types of delivery mechanisms — whether it’s public cloud, SaaS, private cloud, or appropriate data centers — those are all just different levers we can pull depending on the business need.

As you mentioned earlier, businesses change, customers change, demand changes, and revenue comes from different places. In IT, we need to be able to shift gears just as fast and be prepared to shift those gears in anticipation of where the company goes. That’s a very different mindset. It’s a very different way of thinking, but it also means we have to think of clever ways to bring these tools together so that we’re well-prepared to leverage things like cloud.

The challenge is many folks are still in that classic mindset, which unfortunately holds back companies from being able to take advantage of some of these new technologies and methodologies. But getting there is key.

Gardner: Some boards of directors, as you mentioned, are saying, “Go cloud,” or be cloud-first. People are taking them at their word, and so we are facing a sort of cloud sprawl. Developers are doing microservices, spinning up cloud instances and object storage instances. Sometimes they’ll keep those running into production; sometimes they’ll shut them down. We have line of business (LOB) managers going out and acquiring services like SaaS applications, running them for a while, perhaps making them a part of their standard operating procedures. But, in many organizations, one hand doesn’t really know what the other is doing.

Are we at the inflection point now where it’s simply a matter of measurement? Would we stifle innovation if we required people to at least mention what it is that they’re doing with their credit cards or petty cash when it comes to IT and cloud services? How important is it to understand what’s going on in your organization so that you can begin a journey toward better management of this overall hybrid IT?

Why, oh why, oh why, cloud?

Crawford: It depends on how you approach it. If you’re doing it from an IT command-and-control perspective, where you want to control everything in cloud — full stop, that’s failure right out of the gate. But if you’re doing it from a position of using it as an opportunity to understand why these folks are leveraging cloud, why they are not coming to IT, and how you as CIO can be better positioned to support them — then great! Go forth and conquer.

The reality is that different parts of the organization are consuming cloud-based services today. I think there’s an opportunity to bring those together where appropriate. But at the end of the day, you have to ask yourself a very important question. It’s a very simple question, but you have to ask it, and it has to do with each of the different ways that you might leverage cloud. Even when you go beyond cloud and talk about just traditional corporate data assets — especially as you start thinking about Internet of Things (IoT) and edge computing — you know that public cloud becomes problematic for some of those things.

The important question you have to ask yourself is, “Why?” A very simple question, but it can have a really complicated answer. Why are you using public cloud? Why are you using three different forms of public cloud? Why are you using private cloud and public cloud together?

Once you begin to ask yourself those questions, and you keep asking … it’s like that old adage: ask yourself why three times and you get to the core — the true reason why. You’ll bring greater clarity to the reasons, typically the business reasons, why you’re actually going down that path. When you start to understand that, it brings clarity to which decisions are smart decisions — and which decisions you might want to think about doing differently.

Learn More About Hybrid IT Management Solutions From HPE

Gardner: Of course, you may begin doing something with cloud for a very good reason. It could be a business reason, a technology reason. You’ll recognize it, you gain value from it — but then over time you have to step back with maturity and ask, “Am I consuming this in such a way that I’m getting it at the best price-point?” You mentioned a little earlier that sometimes going to public cloud could be four times as expensive.

So even though you may have an organization where you want to foster innovation, you want people to spread their wings, try out proofs of concept, be agile and democratic in terms of their ability to use myriad IT services, at what point do you say, “Okay, we’re doing the business, but we’re not running it like a good business should be run.” How are the economic factors driven into cloud decision-making after you’ve done it for a period of time?

Cloud’s good, but is it good for business?

Crawford: That’s a tough question. You have to look at the services that you’re leveraging and how that ties into business outcomes. If you tie it back to a business outcome, it will provide greater clarity on the sourcing decisions you should make.

For example, if you’re spending $5 to make $6 in a specialty industry, that’s probably not a wise move. But if you’re spending $5 to make $500, okay, that’s a pretty good move, right? There is a trade-off that you have to understand from an economic standpoint. But you have to understand what the true cost is and whether there’s sufficient value. I don’t mean technological value, I mean business value, which is measured in dollars.

If you begin to understand the business value of the actions you take — how you leverage public cloud versus private cloud versus your corporate data center assets — and you match that against the strategic decisions of what is differentiating versus what’s not, then you get clarity around these decisions. You can properly leverage different resources and gain them at the price points that make sense. If that gets above a certain amount, well, you know that’s not necessarily the right decision to make.

Economics plays a very significant role — but let’s not kid ourselves. IT organizations haven’t exactly been the best at economics in the past. We need to be better at it moving forward. And so it’s just one more thing on that overflowing plate that we call demand and requirements for IT, but we have to be prepared for it.

Gardner: There might be one other big item on that plate. We can allow people to pursue business outcomes using any technology that they can get their hands on — perhaps at any price – and we can then mature that process over time by looking at price, by finding the best options.

But the other item that we need to consider at all times is risk. Sometimes that means considering whether we’re getting too far into a model — a public cloud, for example — that we can’t get back out of. Maybe we have to consider that being completely dependent on external cloud networks across a global supply chain has inherent cyber security risks. Isn’t it up to IT also to help organizations factor in these risks — along with compliance, regulation, and data sovereignty issues? It’s a big barrel of monkeys.

Before we sign off, as we’re almost out of time, please address for me, Tim, the idea of IT being a risk factor mitigator for a business.

Safety in numbers

Crawford: You bring up a great point, Dana. Risk — whether it is risk from a cyber security standpoint or it could be data sovereignty issues, as well as regulatory compliance — the reality is that nobody across the organization truly understands all of these pieces together.

It really is a team effort to bring it all together — where you have the privacy folks, the information security folks, and the compliance folks — that can become a united team. I don’t think IT is the only component of that. I really think this is a team sport. In any organization that I’ve worked with, across the industry it’s a team sport. It’s not just one group.

It’s complicated, and frankly, it’s getting more complicated every single day. When you have these huge breaches that sit on the front page of The Wall Street Journal and other publications, it’s really hard to get clarity around risk when you’re always trying to fight against the fear factor. So that’s another balancing act that these groups are going to have to contend with moving forward. You can’t ignore it. You absolutely shouldn’t. You should get proactive about it, but it is complicated and it is a team sport.

Gardner: Some take-aways for me today are that IT needs to raise its game. Yet again, they need to get more strategic, to develop some of the tools that they’ll need to address issues of sprawl, complexity, cost, and simply gaining visibility into what everyone in the organization is — or isn’t — doing appropriately with hybrid cloud and hybrid IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.


Case study: How HCI-powered private clouds accelerate efficient digital transformation

The next BriefingsDirect cloud efficiency case study examines how a world-class private cloud project evolved in the financial sector.

We’ll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly on-premises model built on hyper-converged infrastructure for a risk-sensitive financial services company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Jim McKittrick joins to help explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model, and that total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn’t necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?

McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost and data access, and the ease of data egress because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, “I am not doing it.” And he doesn’t care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud or staying private, sure. But that delta is closing. So you can have the best of both worlds — and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively — or even more cost effectively — than you can in the public clouds. There are certainly places for both in the market.

Gardner: It’s going to vary, of course, from company to company — and even department to department within a company — but the fact is that that choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry.

Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen. Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what’s really coming down the pipe —  we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that as well as very senior engineering leadership in our organization to tell us what we are supposed to be doing.

Gardner: I know you probably can’t mention the company by name, but tell us about a recent project that seems a harbinger of things to come.

Hyper-convergent control 

McKittrick: It began as a proof of concept (POC), but it’s in production, it’s live globally.

I have been with ACS for 18 years, and I have had this client for 17 of those years. We have been through multiple data center iterations.

When this last one came up, three things happened. Number one, they were under tremendous cost pressure — but public cloud was not an option for them.

The second thing was that they had grown by acquisition, and so they had dozens of IT fiefdoms. You can imagine culturally and technologically the challenges involved there. Nonetheless, we were told to consolidate and globalize all these operations.

Thirdly, I was brought in by a client who had run the US presence for this company. We had created a single IT infrastructure in the US for them. He said, “Do it again for the whole world, but save us a bunch of money.” The gauntlet was thrown down. The customer was put in the position of having to make some very aggressive choices. And so he effectively asked me to bring them “cool stuff.”

They asked, “What’s new out there? How can we do this?” Our senior engineering staff brought a couple of ideas to the table, and hyper-converged infrastructure (HCI) was central to that. HCI provided the ability to simplify the organization, as well as the IT management for the organization. You could give control of it to anybody in the organization across the globe and they would be able to manage it, working with partners in other parts of the world.

Gardner: Remote management being very important for this.

Learn How to Transform To A Hybrid IT Environment

McKittrick: Absolutely, yes. We also gained failover capabilities, and disaster recovery within these regional data centers. We ended up going from — depending on whom you spoke to — somewhere between seven and 19 data centers globally, down to three. The data center footprint shrank massively. Just in the US, we went to one data center; we got rid of the other data center completely. We went from 34 racks down to 3.5.

Gardner: Hyper-convergence being a big part of that?

McKittrick: Correct, that was really the key, hyper-convergence and virtualization.

The other key enabling technology was data de-duplication, so the ability to shrink the data and then be able to move it from place to place without crushing bandwidth requirements, because you were only moving the changes, the change blocks.
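
To make the change-block idea concrete, here is a minimal sketch — not the SimpliVity or VMware implementation, just an assumed fixed-block, hash-based approach — showing why only modified blocks need to cross the WAN once both sides share a hash index:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size in bytes

def changed_blocks(data: bytes, known_hashes: set):
    """Yield (offset, block) pairs whose content hash is not already known.

    Only these blocks would need to travel; unchanged blocks are referenced
    by hash on the receiving side.
    """
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in known_hashes:
            known_hashes.add(digest)
            yield offset, block

# Example: after the first sync, a small edit sends only the touched block.
store = set()
original = b"A" * 16384
list(changed_blocks(original, store))          # first pass: all 4 blocks travel
edited = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
changes = list(changed_blocks(edited, store))  # second pass: only 1 block travels
print(len(changes))                            # -> 1
```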

Gardner: So more of a modern data lifecycle approach?

McKittrick: Absolutely. The backup and recovery approach was built into the solution itself. We also deployed a separate data archive, but that’s different than backup and recovery. Backup and recovery were essentially handled by VMware and the capability to have the same machine exist in multiple places at the same time.

Gardner: Now, there is more than just the physical approach to IT, as you described it; there is also the budgetary, financial approach. So how did they get the benefit of the OpEx approach that people are fond of in public cloud models and apply that in a private cloud setting?

Budget benefits 

McKittrick: They didn’t really take that approach. I mean we looked at it. We looked at essentially leasing. We looked at the pay-as-you-go models and it didn’t work for them. We ended up doing essentially a purchase of the equipment with a depreciation schedule and traditional support. It was analyzed, and they essentially said, “No, we are just going to buy it.”

Gardner: So total cost of ownership (TCO) is a better metric to look at. Did you have the ability to measure that? What were some of the metrics of success other than this massive consolidation of footprint and better control over management?

McKittrick: We had to justify TCO relative to what a traditional IT refresh would have cost. That’s what I was working on for the client until the cost pressure came to bear. We then needed to change our thinking. That’s when hyper-convergence came through.

The cost analysis was already done, because I was already costing it with a refresh, including compute and traditional SAN storage. The numbers I had — just what we would have spent on hardware and infrastructure costs, not including network and bandwidth — would have been $55 million over five years, and we ended up doing it for $15 million.
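
For what it’s worth, the savings those two figures imply are easy to check — a trivial sketch using only the numbers quoted above:

```python
traditional_refresh = 55_000_000  # five-year hardware and infrastructure estimate quoted above
hci_build_out = 15_000_000        # what the HCI-based build actually cost

savings = traditional_refresh - hci_build_out
print(f"${savings:,} saved, roughly {savings / traditional_refresh:.0%}")
# -> $40,000,000 saved, roughly 73%
```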

Gardner: We have mentioned HCI several times, but you were specifically using SimpliVity, which is now part of Hewlett Packard Enterprise (HPE). Tell us about why SimpliVity was a proof-point for you, and why you think that’s going to strengthen HPE’s portfolio.

Learn How to Transform To A Hybrid IT Environment

McKittrick: This thing is now built and running, and it’s been two years since inception. So that’s a long time in technology, of course. The major factors involved were the cost savings.

As for HPE going forward, the way the client looked at it — and he is a very forward-thinking technologist — he always liked to say, “It’s just VMware.” So the beauty of it, from their perspective, was that they could just deploy on VMware virtualization. Everyone in the organization knows how to work with VMware; they just deploy that and move things around. Everything is managed in that fashion, as virtual machines, as opposed to traditional storage and all the other layers that have to be involved in traditional data centers.

The HCI-based data centers also included built-in WAN optimization, built-in backup and recovery, and were largely on solid-state disks (SSDs). All of the other pieces of the hardware stack that you would traditionally have — from the server on down — folded into a little box, so to speak, a physical box. With HCI, you get all of that functionality in a much simpler and much easier to manage fashion. It just makes everything easier.

Gardner: When you bring all those HCI elements together, it really creates a solution. Are there any other aspects of HPE’s portfolio, in addition now to SimpliVity, that would be of interest for future projects?

McKittrick: HPE is able to take this further. You have to remember, at the time, SimpliVity was a widget, and they would partner with the server vendors. That was really it, and with VMware.

Now with HPE, SimpliVity has behind them one of the largest technology companies in the world. They can really build out their roadmap. There is all kinds of innovation that’s going to come. When you then pair that with things like Microsoft Azure Stack and HPE Synergy and its composable architecture — yes, all of that is going to be folded right in there.

I give HPE credit for having seen what HCI technology can bring to them and can help them springboard forward, and then also apply it back into things that they are already developing. Am I going to have more opportunity with this infrastructure now because of the SimpliVity acquisition? Yes.

Gardner: For those organizations that want to take advantage of public cloud options, also having HCI-powered hybrid clouds, composable infrastructure, automated bursting and scale-out — and soon combining that with multi-cloud options via HPE New Stack — gives them the best of all worlds.

Learn How to Transform To A Hybrid IT Environment

McKittrick: Exactly. There you are. You have your hybrid cloud right there. And certainly one could do that with traditional IT, and still have that capability that HPE has been working on. But now, [with SimpliVity HCI] you have just consolidated all of that down to a relatively simple hardware approach. You can now quickly deploy and gain all those hybrid capabilities along with it. And you have the mobility of your applications and workloads, and all of that goodness, so that you can decide where you want to put this stuff.

Gardner: Before we sign off, let’s revisit this notion of those organizations that have to have a private cloud. What words of advice might you give them as they pursue such dramatic re-architecting of their entire IT systems?

A people-first process

McKittrick: Great question. The technology was the easy part. This was my first global HCI roll out, and I have been in the business well over 20 years. The differences come when you are messing with people — moving their cheese, and messing with their rice bowl. It’s profound. It always comes back to people.

The people and process were the hardest things to deal with, and quite frankly, still are. Make sure that everybody is on-board. They must understand what’s happening, why it’s happening, and then you try to get all those people pulling in the same direction. Otherwise, you end up in a massive morass and things don’t get done, or they become almost unmanageable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Inside story on HPC’s AI role in Bridges ‘strategic reasoning’ research at CMU

The next BriefingsDirect high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable — even using imperfect information.

We’ll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intense systems architectures.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. 

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about strategic reasoning and why imperfect information is often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can’t just optimize as if you were the only actor — because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.


That’s what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning — not all of it, but most of it until about 12 years ago — was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don’t really model real business situations very well. Most business situations are of imperfect information.

So you don’t know the other guy’s resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent’s mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don’t always know the rules. All the variables are dynamic, and we don’t know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig in to how is this being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold’em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms really have nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and across the different research groups around the world. Heads-Up No-Limit Texas Hold’em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the power of 161 different situations that a player can face. That is a one followed by 161 zeros. And if you think about it, that’s not only more than the number of atoms in the universe; even if, for every atom in the universe, you had a whole other universe and counted all the atoms in those universes — it would still be more than that.
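
A quick back-of-the-envelope check of that claim, assuming the commonly cited estimate of roughly 10^80 atoms in the observable universe (that estimate is my assumption, not a figure from the interview):

```python
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80   # common order-of-magnitude estimate (assumption)
SITUATIONS = 10 ** 161                    # decision points quoted for Heads-Up No-Limit Hold'em

# One whole universe of atoms for every atom still falls short by a factor of ten.
atoms_squared = ATOMS_IN_OBSERVABLE_UNIVERSE ** 2   # 10^160
print(SITUATIONS > atoms_squared)                   # True
print(SITUATIONS // atoms_squared)                  # 10
```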

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly does the algorithmic process and platform learn? I imagine that being reactive, creating a pattern that creates better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it’s not really the only part here — or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses.

However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don’t have that many holes to exploit and they are experts at counter-exploiting. When you start to exploit opponents, you typically open yourself up for exploitation, and we didn’t want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”

One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

All three of these modules run on the Bridges supercomputer at the Pittsburgh Supercomputing Center (PSC), for which the hardware was built by Hewlett Packard Enterprise (HPE).
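
The Libratus code itself is not described in this interview, but the first module’s idea — approximating equilibrium play from nothing except the rules of the game — can be illustrated with a much smaller, standard technique. The sketch below runs regret matching in self-play on rock-paper-scissors; it is only an analogy for what Sandholm describes, not the actual algorithm or game:

```python
import random

ACTIONS = 3  # rock, paper, scissors -- a toy stand-in for a real imperfect-information game

def payoff(a, b):
    """Return +1 if action a beats b, 0 on a tie, -1 on a loss."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def regret_matching(iterations=200_000):
    regret_sum = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        # Play positive regrets in proportion; fall back to uniform if none are positive.
        positive = [max(r, 0.0) for r in regret_sum]
        total = sum(positive)
        strategy = [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        me = random.choices(range(ACTIONS), weights=strategy)[0]
        opponent = random.choices(range(ACTIONS), weights=strategy)[0]  # self-play
        # Accumulate regret: how much better each alternative action would have done.
        for a in range(ACTIONS):
            regret_sum[a] += payoff(a, opponent) - payoff(me, opponent)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

print(regret_matching())  # approaches [1/3, 1/3, 1/3], the Nash equilibrium of the game
```

The average strategy converges toward the game’s equilibrium (an even mix of the three actions), which is the same flavor of guarantee the first module pursues using only the rules of the game — just at vastly larger scale and with far more sophisticated algorithms.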

HPC from HPE Overcomes Barriers To Supercomputing and Deep Learning

Gardner: Is this being used in any business settings? It certainly seems like there’s potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you’re describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we have just reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don’t want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can’t have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don’t, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we’re seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That’s the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world — but that is far in the future.

Gardner: Looking back at what is enabling this, clearly the software, the algorithms, and finding the right benchmark — in this case the poker game — are essential. But with that large a potential data set — the probabilities you mentioned — the underlying computer systems need to keep up. Where are you in terms of the threshold that holds you back? Is it a price issue? Is it a performance limit, the amount of time required? What are the limits, the governors to continuing?

Sandholm: It’s all of the above, and we are very fortunate that we had access to Bridges; otherwise this wouldn’t have been possible at all.  We spent more than a year and needed about 25 million core hours of computing and 2.6 petabytes of data storage.

This amount is necessary to conduct serious absolute superhuman research in this field — but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.

Gardner: Let’s examine the commercialization potential of this. You’re not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.

HPC from HPE Overcomes Barriers To Supercomputing and Deep Learning

Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how you price strategically, taking into account the opponent’s strategic response in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let’s say you’re a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.
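
As an illustration of what “pricing into the future against a strategic response” means in game-theoretic terms, here is a toy duopoly sketch. The price grid and the linear demand curve are invented purely for the example — this is not Strategic Machine’s technology or any real pricing model:

```python
from itertools import product

PRICES = [8, 10, 12]  # hypothetical price points each firm can choose

def profit(my_price, their_price):
    # Invented linear demand: a lower own price and a higher rival price both lift demand.
    demand = 120 - 10 * my_price + 4 * their_price
    return my_price * max(demand, 0)

def pure_nash_equilibria():
    """Price pairs where neither firm can improve by unilaterally changing its price."""
    equilibria = []
    for p1, p2 in product(PRICES, PRICES):
        best1 = all(profit(p1, p2) >= profit(alt, p2) for alt in PRICES)
        best2 = all(profit(p2, p1) >= profit(alt, p1) for alt in PRICES)
        if best1 and best2:
            equilibria.append((p1, p2))
    return equilibria

print(pure_nash_equilibria())  # -> [(8, 8)] for these invented numbers
```

Pricing “reactively” would mean optimizing only against the rival’s current price; the equilibrium view instead anticipates that the rival re-optimizes as well.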

Gardner: And these are some of the most difficult problems that businesses face. They have huge billion-dollar investments that they need to line up behind for these types of decisions. Because of that pipeline, by the time they get to a dynamic environment where they can assess — it’s often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

Sandholm: Exactly! If you think about machine learning traditionally, it’s about learning from the past. But strategic reasoning is all about figuring out what’s going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. For example, cyber security has several applications, such as zero-day vulnerabilities. You can run your custom algorithms and standard algorithms to find them, and what algorithms you should run depends on what the other opposing governments run — so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that’s also a very strategic application.

And in upstairs block trading, in finance, it’s the same thing: a few players, very big, very strategic.

Gaming your own immune system

The most radical application is something that we are working on currently in the lab where we are doing medical treatment planning using these types of sequential planning techniques. We’re actually testing how well one can steer a patient’s T-cell population to fight cancers, autoimmune diseases, and infections better by not just using one short treatment plan — but through sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent — but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we’ve heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you’re keeping a keen eye on?

HPC from HPE Overcomes Barriers To Supercomputing and Deep Learning

Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I’m looking at it from the outside. I’m trusting that they will continue to build the best hardware and maintain it in the best way — so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well.

Sandholm: Actually here in the live game in Las Vegas they don’t allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don’t put our AI in there; it’s against their site rules. Also, I think it’s unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.


Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcomes

The next BriefingsDirect healthcare transformation use-case discussion focuses on how an ecosystem approach to big data solutions brings about improved healthcare informatics-driven outcomes.

We’ll now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients by managing data and intelligence better.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Joining us to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?

Heemskerk: It’s exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution — or the total value proposition — for our customers.

Gardner: I’m sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?

Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar.

Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.

Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed architecture scale. Please explain how big data changes healthcare.

Big data value add

Heemskerk: What is very interesting is what happens when you combine big data with value-based care. For example, nowadays a hospital is not reimbursed for every procedure that it does — the value is based more on the total outcome of how a patient recovers.

This means that more analytics need to be gathered across different elements of the process chain before reimbursement will take place. In that sense, analytics become very important for hospitals on how to measure on how things are being done efficiently, and determining if the costs are okay.

Gardner: The same data that can be used to become more efficient can also be used for better healthcare outcomes and for understanding the path of a disease, or the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.

Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kind of different things.

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and how does HPE fill their role on the technology side?

Healthy hardware relationships 

Heemskerk: HPE has been a highly valued supplier of Philips for quite a long time. We use their technologies for all kinds of different clinical solutions. For example, all of the hardware that we use for our back-end solutions or for advanced visualization is sourced by HPE. I am focusing very much on the commercial side of the game, so to speak, where we are really looking at how can we jointly go to market.

As I said, customers are really looking for one-stop shopping, a complete value proposition, for the challenges that they are facing. That’s why we partner with HPE on a holistic level.

Gardner: Does that involve bringing HPE into certain accounts and vice versa, and then going in to provide larger solutions together?

Heemskerk: Yes, that is exactly the case, indeed. We recognized that we are not focusing only on the clinical side of the problem, nor only on the side that HPE addresses — the IT infrastructure and the connectivity side of the value chain. Instead, we are really looking at the problems that the C-suite-level healthcare executives are facing.

You can think about healthcare industry consolidation, for example, as a big topic. Many hospitals are now moving into a cluster or into a network and that creates all kinds of challenges, both on the clinical application layer, but also on the IT infrastructure. How do you harmonize all of this? How do you standardize all of your different applications? How do you make sure that hospitals are going to be connected? How do you align all of your processes so that there is a more optimized process flow within the hospitals?

By addressing these kinds of questions and jointly going to our customers with HPE, we can improve user experiences for the customers, we can create better services, we have optimized these solutions, and then we can deliver a lot of time savings for the hospitals as well.

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Gardner: We have certainly seen in other industries that if you try IT modernization without including the larger organization — the people, the process, and the culture — the results just aren’t as good. It is important to go at modernization and transformation, consolidation of data centers, for example, with that full range of inputs and getting full buy-in.

Who else makes up the ecosystem? It takes more than two players to make an ecosystem.

Heemskerk: Yes, that’s very true, indeed. In this, system integrators also have a very important role. They can have an independent view on what would be the best solution to fit a specific hospital.

Of course, we think that the Philips healthcare solutions, jointly with the solutions from HPE, are quite often the best fit, but from time to time you can be partnering with different vendors.

Besides that, we don’t have all of the clinical applications. By partnering with other vendors in the ecosystem, you can sometimes enhance the solutions that we offer — think of 3D solutions and 3D printing solutions, for example.

Gardner: When you do this all correctly, when you leverage and exploit an ecosystem approach, when you cover the bases of technology, finance, culture, and clinical considerations, how much of an impressive improvement can we typically see?

Saving time, money, and people

Heemskerk: We try to look at it customer by customer, but generically what we see is that there are really a lot of savings.

First of all, addressing standardization across the clinical application layer means that a customer doesn’t have to spend a lot of money on training all of its hospital employees on different kinds of solutions. So that’s already a big savings.

Secondly, by harmonizing and making better effective use of the clinical applications, you can drive the total cost of ownership down.

Thirdly, it means that on the clinical applications layer, there are a lot of efficiency benefits possible. For example, advanced analytics make it possible to reduce the time that clinicians or radiologists are spending on analyzing different kinds of elements, which also creates time savings.

Gardner: Looking more to the future, as technologies improve, as costs go down, as they typically do, as hybrid IT models are utilized and understood better — where do you see things going next for the healthcare sector when it comes to utilizing technology, utilizing informatics, and improving their overall process and outcomes?

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Heemskerk: What would be very interesting for me is to see if we can create some kind of a patient-centric data file for each patient. You see that consumers are increasingly engaged in their own health, with all the different devices like Fitbit, Jawbone, Apple Watch, and so on coming up. This is creating a massive amount of data. But there is much more data that you can put into such a patient-centric file — chronic disease information, for example, now that people are being monitored much more, and much more often.

If you can have a chronological view of all of the different touch points that the patient has in the hospital, combined with the drugs that the patient is using, and so on, and you have all of that in this patient-centric file — it will be very interesting. And everything, of course, needs to be interconnected. Therefore, Internet of Things (IoT) technologies will become more important. And as the data grows, you will have smarter algorithms that can interpret that data — and so artificial intelligence (AI) will become much more important.
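
As a rough sketch of what such a patient-centric file might look like as a data structure — the field names and categories below are my assumptions, not a Philips or HPE schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    timestamp: datetime
    source: str   # e.g. "wearable", "hospital-visit", "pharmacy" (illustrative categories)
    kind: str     # e.g. "heart-rate", "radiology-report", "prescription"
    value: str

@dataclass
class PatientRecord:
    patient_id: str
    observations: list = field(default_factory=list)

    def add(self, obs: Observation) -> None:
        self.observations.append(obs)

    def timeline(self) -> list:
        """Chronological view across hospital touch points, devices, and medications."""
        return sorted(self.observations, key=lambda o: o.timestamp)
```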

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.


Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe

The next BriefingsDirect cloud ecosystem strategies interview explores how a Canadian software provider delivers a hybrid cloud platform for enterprises and service providers alike.

We’ll now learn how Ormuco has identified underserved regions and has crafted a standards-based hybrid cloud platform to allow its users to attain world-class cloud services just about anywhere.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to help us explore how new breeds of hybrid cloud are coming to more providers around the globe thanks to the Cloud28+ consortium is Orlando Bayter, CEO and Founder of Ormuco in Montréal, and Xavier Poisson Gouyou Beachamps, Vice President of Worldwide Indirect Digital Services at Hewlett Packard Enterprise (HPE), based in Paris. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s begin with this notion of underserved regions. Orlando, why is it that many people think that public cloud is everywhere for everyone when there are many places around the world where it is still immature? What is the opportunity to serve those markets?

Bayter: There are many countries underserved by the hyperscale cloud providers. If you look at Russia, United Arab Emirates (UAE), around the world, they want to comply with regulations on security, on data sovereignty, and they need to have the clouds locally to comply.

 


Ormuco targets those countries that are underserved by the hyperscale providers and enables service providers and enterprises to consume cloud locally, in ways they can’t do today.

Gardner: Are you allowing them to have a private cloud on-premises as an enterprise? Or do local cloud providers offer a common platform, like yours, so that they get the best of both the private and public hybrid environment?

Bayter: That is an excellent question. There are many workloads that cannot leave the firewall of an enterprise. With that, you now need to deliver the economics, ease of use, flexibility, and orchestration of a public cloud experience in the enterprise. At Ormuco, we deliver a platform that provides the best of the two worlds. You can still leverage your own data center, and you don’t need to worry about whether a workload is on-premises or off-premises.

It’s a single pane of glass. You can move the workloads in that global network via established providers throughout the ecosystem of cloud services.

It’s a single pane of glass. You can move the workloads in that global network via established providers throughout the ecosystem of cloud services.

Gardner: What are the attributes of this platform that both your enterprise and service provider customers are looking for? What’s most important to them in this hybrid cloud platform?

Bayter: As I said, there are some workloads that cannot leave the data center. In the past, you couldn’t get the public cloud inside your data center. You could have built a private cloud, but you couldn’t get an Amazon Web Services (AWS)-like solution or a Microsoft Azure-like solution on-premises.

We have been running this now for two years, and what we have noticed is that enterprises want to have that ease of use, self-service, and orchestration on-premises. Now they can connect to a public cloud based on the same platform, and they don't have to worry about how to connect it or how it will work. They just decide where to place the workload.

They have security, can comply with regulations, and gain control — plus 40 percent savings compared with VMware, and up to 50 percent to 60 percent compared with AWS.

Gardner: I’m also interested in the openness of the platform. Do they have certain requirements as to the cloud model, such as OpenStack?  What is it that enables this to be classified as a standard cloud?

Bayter: At Ormuco, we went out and looked at the best solutions and the best platforms that we could bring together to build this experience on-premises and off-premises.

We saw OpenStack, we saw Docker, and then we looked at how to take, for example, OpenStack and make it like a public cloud solution. The way I see OpenStack is as concrete, or a foundation. If you want to build a house or a condo on that, you also need the attic. Ormuco builds that software to deliver the cloud look and feel, that self-service, all with open tools and with the same APIs on both private and public clouds.
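
To make the "same APIs on private and public clouds" point concrete, here is a minimal sketch using the standard openstacksdk client. It is illustrative only, not Ormuco's product API; the cloud names are hypothetical entries in a local clouds.yaml file, and the image and flavor names are assumed to exist in the target cloud.

```python
# Illustrative sketch: launching the same workload against two OpenStack-based
# clouds with identical code. "private-region" and "public-region" are
# hypothetical clouds.yaml entries, not real Ormuco endpoints.
import openstack

def launch(cloud_name: str):
    # Connect to whichever cloud is named in clouds.yaml; the call is the same
    # whether the target sits on-premises or off-premises.
    conn = openstack.connect(cloud=cloud_name)
    return conn.create_server(
        name="demo-app",
        image="ubuntu-22.04",   # assumed image name
        flavor="m1.small",      # assumed flavor name
        wait=True,
    )

if __name__ == "__main__":
    for target in ("private-region", "public-region"):
        print(launch(target).id)
```

Only the clouds.yaml entry decides whether the workload lands in the enterprise data center or with a public provider; the calling code does not change.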

Gardner: What is it about the HPE platform beneath that that supports you? How has HPE been instrumental in allowing that platform to be built?

Community collaboration

Bayter: HPE has been a great partner. Through Cloud28+ we are able to go to markets in places that HPE has a presence. They basically generate that through marketing, through sales. They were able to bring deals to us and help us grow our business.

From a technology perspective, we are using HPE Synergy. With Synergy, we can provide composability, and we can combine storage and compute into a single platform. Now we go together into a market, we win deals, and we solve the enterprise challenges around security and data sovereignty.

Gardner: Xavier, how is Cloud28+ coming to market, for those who are not familiar with it? Tell us a bit about Cloud28+ and how an organization like Ormuco is a good example of how it works.

Poisson: Cloud28+ is a community of IT players — service providers, technology partners, independent software vendors (ISVs), value added resellers, and universities — that have decided to join forces to enable digital transformation through cloud computing. To do that, we pull our resources together to have a single platform. We are allowing the enterprise to discover and consume cloud services from the different members of Cloud28+.

We launched Cloud28+ officially to the market on December 15, 2016. Today, we have more than 570 members from across the world inside Cloud28+. Roughly 18,000 distributed services may be consumed and we also have system integrators that support the platform. We cover more than 300 data centers from our partners, so we can provide choice.

In fact, we believe our customers need to have that choice. They need to know what is available for them. As an analogy, if you have your smartphone, you can have an app store and do what you want as a consumer. We wanted to do the same and provide the same ease for an enterprise globally anywhere on the planet. We respect diversity and what is happening in every single region.

Ormuco has been one of the first technology partners. Docker is another one. And Intel is another. They have been working together with HPE to really understand the needs of the customer and how we can deliver very quickly a cloud infrastructure to a service provider and to an enterprise in record time. At the same time, they can leverage all the partners from the catalog of content and services, propelled by Cloud28+, from the ISVs.

Global ecosystem, by choice

Because we are bringing together a global ecosystem, including the resellers, if a service provider builds a project through Cloud28+, with a technology partner like Ormuco, then all the ISVs are included. They can push their services onto the platform, and all the resellers that are part of the ecosystem can convey onto the market what the service providers have been building.

We have a lot of collaboration with Ormuco to help them to design their solutions. Ormuco has been helping us to design what Cloud28+ should be, because it’s a continuous improvement approach on Cloud28+ and it’s via collaboration.

As I like to say, “If you want to join Cloud28+ to take, don’t come. If you want to give, and take a lot afterward, yes, please come, because we all receive a lot.”

Gardner: Orlando, when this all works well, what do your end-users gain in terms of business benefits? You mentioned reduction in costs; that's very important, of course. But is there more about your platform, from a development perspective and an operational perspective, that we can share to encourage people to explore it?

Bayter: So imagine yourself with an ecosystem like Cloud28+. They have 500 members. They have multiple countries, many data centers.

Now imagine that you can have the Ormuco solution on-premises in an enterprise and then be able to burst to a global network of service providers, across all those regions. You get the same performance, you get the same security, and you get the same compliance across all of that.

For an end-customer, you don’t need to think anymore where you’re going to put your applications. They will go to the public cloud, they will go to the private cloud. It is agnostic. You basically place it where you want it to go and decide the economies you want to get. You can compare with the hyperscale providers.

That is the key, you get one platform throughout our ecosystem of partners that can deliver to you that same functionality and experience locally. With a community such as Cloud28+, we can accomplish something that was not possible before.

Gardner: So, just to delineate between development and then operations in production: Are you offering the developer an opportunity to develop there and seamlessly deploy, or are you more focused on the deployment after the applications are developed, or both?

Development to deployment 

Bayter: With our solution, just as AWS or Azure allows, a developer can develop their app via APIs, in an automated way, using a database of choice (it could be MySQL or Oracle), along with load balancing and the different features we have in the cloud, whether that's Kubernetes or Docker. They build all of that, and then, when the application is ready, they can decide in which region to deploy it.

So you go from development, to the deployment technology of your choice, whether it's Docker or Kubernetes, and then you can deploy to the global network that we're building on Cloud28+. You can go to any region, and you don't have to worry about how to get a service provider contract in Russia or Brazil, or who is going to provide the service. Now you can get that service locally through a reseller or a distributor, or have an ISV deploy the software worldwide.
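
As a rough sketch of that develop-then-choose-a-region flow, the snippet below uses the official Kubernetes Python client to push the same container image to whichever regional cluster is selected. It is a hypothetical illustration, not Ormuco's or Cloud28+'s actual tooling; the kubeconfig context names and the image URL are invented.

```python
# Hypothetical sketch of "pick a region, deploy the same app" using the official
# Kubernetes Python client. Context names and the image URL are invented.
from kubernetes import client, config

def deploy_to_region(region_context, image="registry.example.com/demo-app:1.0"):
    # Each regional cluster is assumed to be reachable as a kubeconfig context.
    config.load_kube_config(context=region_context)

    container = client.V1Container(
        name="demo-app",
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# The same call works for any region the developer chooses, for example:
deploy_to_region("brazil-east")
```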

Gardner: Xavier, what other sorts of organizations should be aware of the Cloud28+ network?

Poisson: We have the technology partners like Ormuco, and we are thankful for what they have brought to the community. We have service providers, of course, and software vendors, because you can publish your software in Cloud28+ and provision it on-premises or off-premises. We accelerate go-to-market for startups; they gain immediate global reach with Cloud28+. So to all the ISVs, I say, "Come on, come on guys, we will help you reach out to the market."

System integrators, too, because we see an opportunity with large enterprises and governments, where a lot of multi-cloud projects are taking shape and carry security requirements. And you know what is happening with security today; it's a hot topic. So people are thinking about how they can have a multi-cloud strategy. System integrators are now turning to Cloud28+ because they find here a reservoir of all the capabilities needed to find the right solution and answer the right question.

Universities are another kind of member we are working with. Just to explain, we know that all the technologies are created first at the university and then they evolve. All the startups are starting at the university level. So we have some very good partnerships with some universities in several regions in Portugal, Germany, France, and the United States. These universities are designing new projects with members of Cloud28+, to answer questions of the governments, for example, or they are using Cloud28+ to propel the startups into the market.

Ormuco is also helping to change the business model of distribution. So distributors now also are joining Cloud28+. Why? Because a distributor has to make a choice for its consumers. In the past, a distributor had software inventory that they were pushing to the resellers. Now they need to have an inventory of cloud services.

There is more choice. They can purchase hyperscale services, resell, or maybe source to the different members of Cloud28+, according to the country they want to deliver to. Or they can own the platform using the technology of Ormuco, for example, and put that in a white-label model for the reseller to propel it into the market. This is what Azure is doing in Europe, typically. So new kinds of members and models are coming in.

Digital transformation

Lastly, an enterprise can use Cloud28+ to make their digital transformation. If they have services and software, they can become a supplier inside of Cloud28+. They source cloud services inside a platform, do digital transformation, and find a new go-to-market through the ecosystem to propel their offerings onto the global market.

Gardner: Orlando, do you have any examples that you could share with us of a service provider, ISV or enterprise that has white-labeled your software and your capabilities as Xavier has alluded to? That’s a really interesting model.

Bayter: We have been able to go to market in countries where Cloud28+ was a tremendous help. If you look at Western Europe, Xavier was just speaking about Microsoft Azure. They chose our platform and we are deploying it in Europe, making it available to the resellers to help them transform their consumption models.

If you look at the Europe, Middle East and Africa (EMEA) region, we have one of the largest managed service providers. They provide public cloud and they serve many markets. They provide a community cloud for governments and they provide private clouds for enterprises — all from a single platform.

We also have several of the largest telecoms in Latin America (LATAM) and EMEA. We have a US presence, where we have Managed.com as a provider. So things are going very well and it is largely thanks to what Cloud28+ has done for us.

Gardner: While this consortium is already very powerful, we are also seeing new technologies coming to the market that should further support the model. Such things as HPE New Stack, which is still in the works, HPE Synergy’s composability and auto-bursting, along with security now driven into the firmware and the silicon — it’s almost as if HPE’s technology roadmap is designed for this very model, or very much in alignment. Tell us how new technology and the Cloud28+ model come together.

Bayter: So HPE New Stack is becoming the control point of multi-cloud. Now what happens when you want to have that same experience off-premises and on-premises? New Stack could connect to Ormuco as a resource provider, even as it connects to other multi-clouds.

With an ecosystem like Cloud28+ all working together, we can connect those hybrid models with service providers to deliver that experience to enterprises across the world.

Gardner: Xavier, anything more in terms of how HPE New Stack and Cloud28+ fit?

Partnership is top priority

Poisson: It’s a real collaboration. I am very happy with that because I have been working a long time at HPE, and New Stack is a project that has been driven by thinking about the go-to-market at the same time as the technology. It’s a big reward to all the Cloud28+ partners because they are now de facto considered as resource providers for our end-user customers – same as the hyperscale providers, maybe.

At HPE, we say we put partnership first — with our partners, our ecosystem, our channel. I believe that what we are doing with Cloud28+, New Stack, and all the other projects that we are describing — this will be the reality around the world. We deliver on-premises for the channel partners.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How Nokia refactors the video delivery business with new time-managed IT financing models

The next BriefingsDirect IT financing and technology acquisition strategies interview examines how Nokia is refactoring the video delivery business. Learn both about new video delivery architectures and the creative ways media companies are paying for the technology that supports them.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe new models of Internet Protocol (IP) video and time-managed IT financing is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary to rethink architectures? How are pricing models and business models changing, too?

Larbey: We sit here in 2017, but let's look back 10 years to 2007. There were a couple of key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.

It’s been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device-types, moving eventually into the iPad. Not only that, Apple underneath developed a lot of technology in terms of how you stream video, how you protect video over IP, and the technology underneath that, which we still use today. Not only did they create a new device-type and avenue for us to watch video, they also created new underlying protocols.

It was also 10 years ago that Netflix began to first offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.

If we fast-forward and look to where that goes in the future, there are two trends we see today that will create challenges tomorrow. Video has become truly mobile. When most people talk about mobile video today, they mean watching films on an iPad or an iPhone rather than on a big TV screen.

The future is personalized

When you can take your video with you, you want to take all your content with you. You can’t do that today. That has to happen in the future. When you are on an airplane, you can’t take your content with you. You need connectivity to extend so that you can take your content with you no matter where you are.

Take the simple example of a driverless car. Now, you are driving along and you are watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what are you going to be doing? You are still going to be keeping the kids quiet, but there is a void, a space that needs to be filled with activity, and clearly extending the content into the car is the natural next step.

And the final challenge is around personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are all on the same service provider, it looks the same — it’s the same color, it’s the same grid. There is no reason why that should all be the same. There is no reason why my kids shouldn’t have a different user interface.

The user interface presented to me in the morning may be different than the user interface presented to me in the evening. There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch. Why aren’t all those channels specifically curated for me? That’s what we mean by personalization. So if you put those all together and extrapolate those 10 years into the future, then 2027 will be a very different place for video.

Gardner: It sounds like a few things need to change between the original content’s location and those mobile screens and those customized user scenarios you just described. What underlying architecture needs to change in order to get us to 2027 safely?

Larbey: It’s a journey; this is not a step-change. This is something that’s going to happen gradually.

But if you step back and look at the fundamental changes — all video will be streamed. Today, the majority of what we view is via broadcasting, from cable TV, or from a satellite. It’s a signal that’s going to everybody at the same time.

If you think about the mobile video concept, if you think about personalization, that is not going be the case. Today we watch a portion of our video streamed over IP. In the future, it will all be streamed over IP.

And that clearly creates challenges for operators in terms of how to architect the network, how to optimize the delivery, and how to recreate that broadcast experience using streaming video. This is where a lot of our innovation is focused today.

Gardner: You also mentioned in the case of an airplane, where it’s not just streaming but also bringing a video object down to the device. What will be different in terms of the boundary between the stream and a download?

IT’s all about intelligence

Larbey: It’s all about intelligence. Firstly, connectivity has to extend and become really ubiquitous via technology such as 5G. The increase in fiber technology will dramatically enable truly ubiquitous connectivity, which we don’t really have today. That will resolve some of the problems, but not all.

But, by the fact that television will be personalized, the network will know what’s in my schedule. If I have an upcoming flight, machine learning can automatically predict what I’m going to do and make sure it suggests the right content in context. It may download the content because it knows I am going to be sitting in a flight for the next 12 hours.
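
As a deliberately simple illustration of the kind of decision described here, the sketch below uses a plain rule in place of the machine-learning step; a real system would use a trained model and the operator's recommendation engine. All of the data structures and thresholds are hypothetical.

```python
# Hypothetical sketch: pre-download content ahead of a predicted offline period.
# A real system would replace the rule below with a trained prediction model.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CalendarEvent:
    kind: str              # e.g. "flight"
    start: datetime
    duration_hours: float

def plan_offline_downloads(events, recommendations, on_wifi, now=None):
    """Queue roughly one recommended title per offline hour before a long flight."""
    now = now or datetime.utcnow()
    horizon = now + timedelta(hours=24)
    for ev in events:
        if ev.kind == "flight" and now <= ev.start <= horizon and ev.duration_hours >= 3:
            if on_wifi:
                return recommendations[: int(ev.duration_hours)]
    return []

# Example: a 12-hour flight tomorrow morning triggers a 12-title download queue.
flight = CalendarEvent("flight", datetime.utcnow() + timedelta(hours=10), 12)
print(plan_offline_downloads([flight], [f"title-{i}" for i in range(20)], on_wifi=True))
```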

Gardner: We are putting intelligence into the network to be beneficial to the user experience. But it sounds like it’s also going to give you the opportunity to be more efficient, with just-in-time utilization — minimal viable streaming, if you will.

How does the network becoming more intelligent also benefit the carriers, the deliverers of the content, and even the content creators and owners? There must be an increased benefit for them on utility as well as in the user experience?

Larbey: Absolutely. We think everything moves into the network, and the intelligence becomes the network. So what does that do immediately? That means the operators don’t have to buy set-top boxes. They are expensive. They are very costly to maintain. They stay in the network a long time. They can have a much lighter client capability, which basically just renders the user interface.

The first obvious example of all this, that we are heavily focused on, is the storage. So taking the hard drive out of the set-top box and putting that data back into the network. Some huge deployments are going on at the moment in collaboration with Hewlett Packard Enterprise (HPE) using the HPE Apollo platform to deploy high-density storage systems that remove the need to ship a set-top box with a hard drive in it.

Now, what are the advantages of that? Everybody thinks first of the cost: you've taken the hard drive out and put the storage in the network, and that's clearly one element. But actually, if you talk to any operator, their biggest cause of subscriber churn is when somebody's set-top box fails and they lose their personalized recordings.

The personal connection you had with your service isn’t there any longer. It’s a lot easier to then look at competing services. So if that content is in the network, then clearly you don’t have that churn issue. Not only can you access your content from any mobile device, it’s protected and it will always be with you.

Taking the CDN private

Gardner: For the past few decades, part of the solution to this problem was to employ a content delivery network (CDN) and use that in a variety of ways. It started with web pages and the downloading of flat graphic files. Now that’s extended into all sorts of objects and content. Are we going to do away with the CDN? Are we going to refactor it, is it going to evolve? How does that pan out over the next decade?

Larbey: The CDN will still exist. That still becomes the key way of optimizing video delivery — but it changes. If you go back 10 years, the only CDNs available were CDNs in the Internet. It was a shared service; you bought capacity on that shared service.

Even today that’s how a lot of video from the content owners and broadcasters is streamed. For the past seven years, we have been taking that technology and deploying it in private network — with both telcos and cable operators — so they can have their own private CDN, and there are a lot of advantages to having your own private CDN.

You get complete control of the roadmap. You can start to introduce advanced features such as targeted ad insertion, blackout, and features like that to generate more revenue. You have complete control over the quality of experience, which you don’t if you outsource to a shared service.

What we’re seeing now is both the programmers and broadcasters taking an interest in that private CDN because they want the control. Video is their business, so the quality they deliver is even more important to them. We’re seeing a lot of the programmers and broadcasters starting to look at adopting the private CDN model as well.

The challenge is how do you build that? You have to build for peak. Peak is generally driven by live sporting events and one-off news events. So that leaves you with a lot of capacity that’s sitting idle a lot of the time. With cloud and orchestration, we have solved that technically — we can add servers in very quickly, we can take them out very quickly, react to the traffic demands and we can technically move things around.
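
The capacity arithmetic behind "build for peak" can be sketched in a few lines. This is a simplified illustration, not Nokia's orchestration logic; the sessions-per-node figure and the node limits are invented for the example.

```python
# Simplified illustration of sizing a private CDN for peak demand -- not Nokia's
# orchestration logic. The per-node capacity and node limits are invented.
import math

SESSIONS_PER_NODE = 5_000       # assumed concurrent streams one cache node can serve
MIN_NODES, MAX_NODES = 4, 200   # steady-state floor and live-event ceiling

def target_node_count(concurrent_sessions, headroom=0.25):
    """How many cache nodes to run for the current load, with 25% headroom."""
    needed = math.ceil(concurrent_sessions * (1 + headroom) / SESSIONS_PER_NODE)
    return max(MIN_NODES, min(MAX_NODES, needed))

# A quiet weekday evening versus a live sporting event:
print(target_node_count(12_000))    # -> 4, stays at the steady-state floor
print(target_node_count(800_000))   # -> 200, capped at the configured ceiling
```

The technical side of adding and removing those nodes is straightforward with cloud orchestration; as noted next, it is the commercial model that has had to catch up.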

But the commercial model has lagged behind. So we have been working with HPE Financial Services to understand how we can innovate on that commercial model as well and get that flexibility — not just from an IT perspective, but also from a commercial perspective.

Gardner: Tell me about private CDN technology. Is that a Nokia product? Tell us about your business unit and the commercial models.

Larbey: As a business unit, we basically help anyone who has content — be that broadcasters or programmers — and the operators they pay, to stream that content over IP and to launch new services. We have a product focused on video networking: how to optimize video, how it's delivered, how it's streamed, and how it's personalized.

That includes a private CDN product, which we have deployed for the last seven years, and a cloud digital video recorder (DVR) product, which is all about moving the storage capacity into the network. We also have a systems integration arm, which brings a lot of technology together and allows operators to combine vendors and partners from the ecosystem into a complete end-to-end solution.

Gardner: With HPE being a major supplier for a lot of the hardware and infrastructure, how does the new cost model change from the old model of pay up-front?

Flexible financial formats

Larbey: I would not classify HPE as a supplier; I think they are our partner. We work very closely together. We use HPE ProLiant DL380 Gen9 Servers, the HPE Apollo platform, and the HPE Moonshot platform, which are, as you know, world-leading compute-storage platforms that deliver these services cost-effectively. We have had a long-term technical relationship.

We are now moving toward how we advance the commercial relationship. We are working with the HPE Financial Services team to look at how we can get additional flexibility. There are a lot of pay-as-you-go-type financial IT models that have been in existence for some time — but these don’t necessarily work for my applications from a financial perspective.

In the private CDN and the video applications, our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate. With the traditional IT payment model for storage, my application fundamentally breaks that. So having a partner like HPE that was flexible and could understand the application is really important.

We also needed flexibility of compute scaling. We needed to be able to deploy for the peak, but not pay for that peak at all times. That’s easy from the software technology side, but we needed it from the commercial side as well.

And thirdly, we have been trying to enter a new market, focused on the programmers and broadcasters, which is not our traditional segment. We have been deploying our CDN to the largest telcos and cable operators in the world, but now we are selling to the programmer and broadcaster segment. They are used to buying a service from the Internet, they work in a different way, and they have different requirements.

So we needed a financial model that allowed us to address that, but also a partner who would take some of the risk, too, because we didn’t know if it was going to be successful. Thankfully it has, and we have grown incredibly well, but it was a risk at the start. Finding a partner like HPE Financial Services who could share some of that risk was really important.

Gardner: These video delivery organizations are increasingly operating on subscription basis, so they would like to have their costs be incurred on a similar basis, so it all makes sense across the services ecosystem.

Our tolerance just doesn’t exist anymore for buffering and we demand and expect the highest-quality video.

Larbey: Yes, absolutely. That is becoming more and more important. If you go back to the very first Internet video you watched, a cat falling off a chair on YouTube, it didn't matter if it was buffering; that wasn't relevant. Now, our tolerance for buffering just doesn't exist anymore, and we demand and expect the highest-quality video.

If TV in 2027 is going to be purely IP, then clearly that has to deliver exactly the same quality of experience as the broadcasting technologies. And that creates challenges. The biggest obvious example is if you go to any IP TV operator and look at their streamed video channel that is live versus the one on broadcast, there is a big delay.

So there is a lag between the live event and what you are seeing on your IP stream, which is 30 to 40 seconds. If you are in an apartment block, watching a live sporting event, and your neighbor sees it 30 to 40 seconds before you, that creates a big issue. A lot of the innovations we’re now doing with streaming technologies are to deliver that same broadcast experience.

Gardner: We now also have to think about 4K resolution, the intelligent edge, no latency, and all with managed costs. Fortunately at this time HPE is also working on a lot of edge technologies, like Edgeline and Universal IoT, and so forth. There’s a lot more technology being driven to the edge for storage, for large memory processing, and so forth. How are these advances affecting your organization?

Optimal edge: functionality and storage

Larbey: There are two elements. The compute at the edge is absolutely critical. We are going to move all the intelligence into the network, and clearly you need to reduce the latency and you need to be able to scale that functionality. This functionality used to be scaled across millions of households, and now it has to be done in the network. The only way you can effectively build the network to handle that scale is to put as much functionality as you can at the edge of the network.

The HPE platforms allow you to deploy that compute and storage deep into the network, and they are absolutely critical for our success. We will run our CDN, our ad insertion, and all that capability as deeply into the network as an operator wants to go — and certainly the deeper, the better.

The other thing we try to optimize all of the time is storage. One of the challenges with network-based recording — especially in the US due to the content-use regulations compliance — is that you have to store a copy per user. If, for example, both of us record the same program, there are two versions of that program in the cloud. That’s clearly very inefficient.

The question is how do you optimize that, and also support the just-in-time transcoding techniques that have been talked about for some time? Those would create the right bitrate and quality on the fly, so you don't have to store all the different formats. That would dramatically reduce storage costs.
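
Conceptually, just-in-time transcoding keeps one mezzanine copy on disk and produces a requested rendition on demand. The sketch below shows that idea with ffmpeg, which is assumed to be installed; it is not the Bell Labs skim-storage technique discussed next, and the bitrate ladder is invented for the example.

```python
# Conceptual sketch of just-in-time transcoding: keep one mezzanine copy and
# produce a requested bitrate on the fly with ffmpeg (assumed installed).
# This is not the Bell Labs skim-storage technique; the ladder is illustrative.
import subprocess

BITRATE_LADDER = {"480p": "1500k", "720p": "3000k", "1080p": "6000k"}

def transcode_on_demand(mezzanine_path, rendition):
    """Yield MPEG-TS chunks at the requested bitrate instead of storing every format."""
    bitrate = BITRATE_LADDER[rendition]
    cmd = [
        "ffmpeg", "-i", mezzanine_path,
        "-c:v", "libx264", "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        "-f", "mpegts", "pipe:1",   # stream the result rather than writing a file
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
    try:
        for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
            yield chunk
    finally:
        proc.wait()
```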

The challenge has always been the central processing unit (CPU) power needed to do that, and that's where HPE and the Moonshot platform, which has great compute density, come in. We have the Intel media library for doing the transcoding. It's a really nice storage platform. But we still wanted to get even more out of it, so at our Bell Labs research facility we developed a capability called skim storage, which, for a slight increase in storage, allows us to double the number of transcodes we can do on a single CPU.

That approach takes a really, really efficient hardware platform with nice technology and doubles the density we can get from it — and that’s a big change for the business case.

Gardner: It’s astonishing to think that that much encoding would need to happen on the fly for a mass market; that’s a tremendous amount of compute, and an intense compute requirement.

Content popularity

Larbey: Absolutely, and you have to be intelligent about it. At the end of the day, human behavior works in our favor. If people do not watch a recorded program within the first seven days, they are probably not going to watch that recording at all. That content in particular can then be optimized from a storage perspective. You still need the ability to recreate it on the fly, but it improves the scale model.

Gardner: So the more intelligent you can be about users' behavior and usage patterns, the more efficient you can be. Intelligence seems to be the real key here.

Larbey: Yes, we have a number of algorithms, even within the CDN itself today, that predict content popularity. We want to maximize the disk usage. We want the popular content on the disk, so what's the point of deleting a piece of popular content just because a piece of long-tail content has been requested? We run a lot of algorithms that look at and try to predict content popularity, so that we can make sure we are optimizing the hardware platform accordingly.
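
To illustrate how this differs from plain least-recently-used eviction, here is a toy popularity-aware cache in which each piece of content carries an exponentially decaying request score, and a long-tail request only displaces content whose decayed popularity has fallen below its own. The decay constant and scoring are invented for the example and are not Nokia's algorithm.

```python
# Toy popularity-aware cache: items are scored by exponentially decayed request
# counts, and eviction removes the least popular item rather than the least
# recently used one. All constants are illustrative.
import math
import time

class PopularityCache:
    def __init__(self, capacity, half_life_s=6 * 3600):
        self.capacity = capacity
        self.decay = math.log(2) / half_life_s   # popularity halves every 6 hours
        self.items = {}                          # content_id -> (score, last_update)

    def _score_now(self, content_id, now):
        score, last = self.items[content_id]
        return score * math.exp(-self.decay * (now - last))

    def request(self, content_id):
        now = time.time()
        current = self._score_now(content_id, now) if content_id in self.items else 0.0
        self.items[content_id] = (current + 1.0, now)
        if len(self.items) > self.capacity:
            # A single long-tail request may itself be the least popular entry,
            # in which case it is the one dropped -- popular content stays on disk.
            victim = min(self.items, key=lambda cid: self._score_now(cid, now))
            del self.items[victim]

cache = PopularityCache(capacity=2)
for cid in ["hit-show", "hit-show", "hit-show", "news", "long-tail-doc"]:
    cache.request(cid)
print(sorted(cache.items))   # the long-tail title displaces "news", not "hit-show"
```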

Gardner: Perhaps we can deepen our knowledge about all of this through some examples. Do you have some examples that demonstrate how your clients and customers are taking these new technologies and making better business decisions that help them with their cost structure, but also deliver a far better user experience?

In-house control

Larbey: One