Logicalis chief technologist defines the new ideology of hybrid IT

The next BriefingsDirect thought leader interview explores how digital disruption demands that businesses develop a new ideology of hybrid IT.

We’ll hear how such trends as the Internet of Things (IoT), distributed IT, data sovereignty requirements, and pervasive security concerns are combining to challenge how IT operates. And we’ll learn how IT organizations are shifting to become strategists and internal service providers, and how that supports adoption of hybrid IT. We will also delve into how converged and hyper-converged infrastructures (HCI) provide an on-ramp to hybrid cloud strategies and adoption.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To help us define a new ideology for hybrid IT, we’re joined by Neil Thurston, Chief Technologist for the Hybrid IT Practice at Logicalis Group in the UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why don’t we start at this notion of a new ideology? What’s wrong with the old ideology of IT?

Thurston: Good question. What we are facing now is what we’ve done for an awfully long time versus what the emerging large hyper-scale providers with cloud, for example, have been developing. 


The two clashing ideologies that we have are: Either we continue with the technologies that we’ve been developing (and the skills and processes that we’ve developed in-house) and push those out to the cloud, or we adopt the alternative ideology, exemplified by things such as Microsoft Azure and the forthcoming Azure Stack, in which those technologies are pulled from the cloud into our on-premise environments. The two opposing ideologies we have are: Do we push out, or do we pull in?

The technologies allow us to operate in a true hybrid environment. By that we mean not having isolated islands of innovation anymore. It’s not just standing things up in hybrid hyper-scale environments, or clouds, where you have specific skills, resources, teams and tools to manage those things. Moving forward, we want to have consistency in operations, security, and automation. We want to have a single toolset or control plane that we can put across all of our workloads and data, regardless of where they happen to reside.


Gardner: One of the things I encounter, Neil, when I talk to chief information officers (CIOs), is their concern that as we move to a hybrid environment, they’re going to be left with having the responsibility — but without the authority to control those different elements. Is there some truth to that?

Thurston: I can certainly see where that viewpoint comes from. A lot of our own customers reflect it. We’re seeing a lot of organizations that have dabbled in, and cherry-picked from, service management practices such as ITIL. We’re now seeing more pragmatic IT service management (ITSM) frameworks, such as IT4IT, coming to the fore. These are really about pushing that responsibility level up the stack.

You’re right in that people are becoming more of a supply-chain manager than the actual manager of the hardware, facilities, and everything else within IT. There definitely is a shift toward that, but there are also frameworks coming into play that allow you to deal with that as well. 

Gardner: The notion of shadow IT becoming distributed IT was once a very dangerous and worrisome thing. Now, it has to be embraced and perhaps is positive. Why should we view it as positive?

Out of the shadow

Thurston: The term shadow IT is controversial. Within our organization, we prefer to say that the shadow IT users are the digital users of the business. You have traditional IT users, but you also have digital users. I don’t really think it’s a shadow IT thing; it’s that they’re a totally different use-case for service consumption. 

But you’re right. They definitely need to be serviced by the organization. They deserve to have the same level of service, the same governance, security, and everything else applied to them.

Gardner: It seems that the new ideology of hybrid IT is about getting the right mix and keeping that mix of elements under some sort of control. Maybe it’s simply on the basis of management, or an automation framework of some sort, but you allow that to evolve and see what happens. We don’t know what this is going to be like in five years. 

Thurston: There are two pieces of the puzzle. There’s the workload, the actual applications and services, and then there’s the data. There is more importance placed on the data. Data is the new commodity, the new cash, in our industry. Data is the thing you want to protect. 

The actual workload and service consumption piece is the commodity piece that could be worked out. What you have to do moving forward is protect your data, but you can take more of a brokering approach to the actual workloads. If you can reach that abstraction, then you’re fit-for-purpose and moving forward into the hybrid IT world.

Gardner: It’s almost like we’re controlling the meta-processes over that abstraction without necessarily having full control of what goes on at those lower abstractions, but that might not be a bad thing. 

Thurston: I have a very quick use-case. A customer of ours for the last five years has been using Amazon Web Services (AWS), and they were getting the feeling they were getting tied into the platform. Their developers over the years had been using more and more of the platform services and they weren’t able to make all that code portable and take it elsewhere. 

This year, they made the transformation and they’ve decided to develop against Cloud Foundry, an open Platform as a Service (PaaS). They have instances of Cloud Foundry across Pivotal on AWS, also across IBM Bluemix, and across other cloud providers. So, they’re now coding once — and deploying anywhere for the compute workload side. Then, they have a separate data fabric that regulates the data underneath. There are emerging new architectures that help you to deal with this.
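As a rough illustration of that code-once, deploy-anywhere pattern, the sketch below builds the same push commands for every Cloud Foundry target. The endpoint URLs, app name, and artifact path are hypothetical, and a real deployment would invoke the `cf` CLI against each API; the point is only that the application artifact itself never changes per provider.

```python
# Hypothetical sketch of "code once, deploy anywhere" with Cloud Foundry:
# the same artifact is pushed, unchanged, to several CF endpoints
# (e.g., Pivotal on AWS, IBM Bluemix). Endpoint URLs are placeholders.

CF_TARGETS = {
    "pivotal-aws": "https://api.run.pivotal.example",
    "ibm-bluemix": "https://api.ng.bluemix.example",
}

def push_commands(app_name: str, artifact: str) -> list[list[str]]:
    """Return one `cf api` + `cf push` command pair per target.

    The application artifact is identical for every provider; only the
    API endpoint differs, which is what keeps the workload portable.
    """
    commands = []
    for _name, api in CF_TARGETS.items():
        commands.append(["cf", "api", api])                 # point CLI at target
        commands.append(["cf", "push", app_name, "-p", artifact])  # same artifact
    return commands

for cmd in push_commands("orders-service", "build/orders.jar"):
    print(" ".join(cmd))
```

The separate data fabric Thurston mentions would sit underneath this: the compute tier is brokered freely across providers, while the data layer stays under its own controls.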

Gardner: It’s interesting that you just described an ecosystem approach. You’re no longer seeing as many organizations that are supplier “XYZ” shops, where 80 or 90 percent of everything would be one brand name. You just described a highly heterogeneous environment. 

Thurston: People have used cloud services, and hyper-scale cloud services, for specific use-cases, typically the more temporary types of workloads. Even companies born in the cloud, such as Uber and Netflix, reach inflection points where going on-premise is actually far cheaper and makes compliance with regulations far easier. People are slowly realizing, through what other people are doing, and also from their own good or bad experiences, that hybrid IT really is the way forward.

Gardner: And the good news is that if you do bring it back from the cloud or re-factor what you’re doing on-premises, there are some fantastic new infrastructure technologies. We are talking about converged infrastructure, hyper-converged infrastructure, software-defined data center (SDDC). At recent HPE Discover events, we’ve seen memory-driven computing, and we’re seeing some interesting new powerful speeds and feeds along those lines.

So, on the economics and the price-performance equation, the public cloud is good for certain things, but there’s some great attraction to some of these new technologies on-premises. Is that the mix that you are trying to help your clients factor?


Thurston: Absolutely. We’re pretty much in parallel with the way that HPE approaches things, with the right mix. We see that in certain industries there are always going to be things like regulated data. Regulated data is really hard to control in a public-cloud space, where you have no real idea where things are physically located.

Having on-premise provides you with that far easier route to regulation, and today’s technologies, the hyper-converged platforms, for example, allow us to really condense the footprint. We don’t need these massive data centers anymore.

We’re working with customers where we have taken 10 or 12 racks of legacy, classic equipment and, with a new hyper-converged platform, put in less than two racks of equipment. The operational footprint and facilities cost are much lower. That makes a far more compelling argument for those types of use-cases than using public cloud.

Gardner: Then you can mirror that small footprint data center into a geography, if you need it for compliance requirements, or you could mirror it for reasons of business continuity and backup and recovery. So, there are lots of very interesting choices. 

Neil, tell us a little bit about Logicalis. I want to make sure all of our listeners and readers understand who you are and how you fit into helping organizations make these very large strategic decisions.

Cloud-first is not cloud-only 

Thurston: Logicalis is essentially a digital business enabler. We take technologies across multiple areas and help our customers become digital-ready. We cover a whole breadth of technologies. 

I look at the hybrid IT practice, but we also have the more digital-focused parts of our business, such as collaboration and analytics. The hybrid IT side is where we’re working with our customers through the pains that they have, through the decisions that they have to make, and very often board-level decisions are made where you have to have a “cloud-first” strategy.

It’s unfortunate when that gets interpreted as “cloud-only.” There is some process to go through for cloud readiness, because some applications are not going to be fit for the cloud. Some cannot be virtualized; most can, but there are always regulations. Certainly, in Europe at present there is a lot of fear, uncertainty, and doubt (FUD) in the market, and there is a lot of uncertainty around European Union General Data Protection Regulation (EU GDPR), for example, and overall data protection.

There are a lot of reasons why we have to take a more factored, measured approach to looking at where workloads and data are best placed moving forward, and at the models we want to operate in.

Gardner: I think HPE agrees with you. Their strategy is to put more emphasis on things like high performance computing (HPC), the workloads of which won’t likely be virtualized, that won’t work well in a public cloud, one-size-fits-all environment. It’s also factoring in the importance of the edge, even thinking about putting the equivalent of a data center on the edge for demands around information for IoT, and analytics and data requirements there as well as the compute requirements.

What’s the relationship between HPE and Logicalis? How do you operate as an alliance or as a partnership?

Thurston: We have a very strong partnership. We have a 15- or 16-year relationship with HPE in the UK. As everyone else did, we started out selling servers and storage, but we’ve taken the journey with HPE and with our customers. The great thing about HPE is that they’ve always managed to innovate and keep up with the curve, and that has enabled us to work with our customers to decide what the right technologies are. Today, this allows us to work out the right mix of on-premise and off-premise equipment for our customers.

HPE is ahead of the curve in various technologies in our area, and one of those is HPE Synergy. We’re now talking with a lot of our customers about the next curve that’s coming with infrastructure-as-code, and about what the possible benefits and outcomes of enabling that technology will be.

The on-ramp to that is that we’re using hyper-converged technologies to virtualize all the workloads and make them portable, so that we can abstract them and place them either within platform services or within cloud platforms, as our security policies dictate.


Gardner: Getting back to this ideology of hybrid IT, when you have disparate workloads and you’re taking advantage of these benefits of platform choice, location, model and so forth, it seems that we’re still confronted with that issue of having the responsibility without the authority. Is there an approach that HPE is taking with management, perhaps thinking about HPE OneView that is anticipating that need and maybe adding some value there?

Thurston: With the HPE toolsets, we’re able to set things such as policies. Today, we’re really at Platform 2.5, and the inflection that takes us on to the third platform is policy automation. This is one thing HPE OneView allows us to do across the board.

It’s policies on our storage resources, policies on our compute resources, and policies on non-technology items, such as quotas on public cloud. It enables us to leverage the software-defined infrastructure underneath to set the policies that define the operational windows we want our infrastructure to work in, and the decisions it’s allowed to make itself within those windows, and then we just let it go. We really want to take IT from “high touch” to “low touch,” which we can do today with policy, and potentially, in the future with infrastructure-as-code, to “no touch.”
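A minimal sketch of what such a policy window might look like in code is below. The rule names, tags, and quota threshold are illustrative assumptions, not the OneView API: requests that fall inside the window are placed automatically with no human touch, and regulated data or quota overruns are kept on-premises.

```python
# Illustrative policy-driven placement (assumed rules, not the OneView API):
# anything inside the "operational window" is approved automatically.

POLICY = {
    "public_cloud_quota_vms": 50,          # assumed quota on public-cloud VMs
    "regulated_tags": {"pii", "finance"},  # data classes that must stay local
}

def place(workload: dict, public_cloud_vms_in_use: int) -> str:
    """Return the placement decision for a workload request."""
    if POLICY["regulated_tags"] & set(workload.get("tags", [])):
        return "on-premises"               # regulated data never leaves
    if public_cloud_vms_in_use + workload["vms"] > POLICY["public_cloud_quota_vms"]:
        return "on-premises"               # quota exceeded: keep it local
    return "public-cloud"                  # inside the window: no touch needed

print(place({"vms": 4, "tags": ["web"]}, 20))   # public-cloud
print(place({"vms": 4, "tags": ["pii"]}, 20))   # on-premises
```

The "low touch" in the interview is exactly this: operators set the window once, and only requests that fall outside it ever need a human decision.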

Gardner: As you say, we are at Platform 2.5, heading rapidly towards Platform 3. Do you have some examples you can point to, customers of yours and HPE’s, and describe how a hybrid IT environment translates into enablement and business benefits and perhaps even economic benefits? 

Time is money

Thurston: The University of Wolverhampton is one of our customers, where we’ve taken this journey with them with HPE, with hyper-converged platforms, and created a hybrid environment for them. 

Today, the hybrid environment means that we’re wholly virtualized on HPE hyper-converged platform. We’ve rolled the solutions out across their campus. Where we normally would have had disparate clouds, we now have a single plane controlled by OneView that enables them to balance all the workloads across the whole campus, all of their departments. It’s bringing them new capabilities, such as agility, so they can now react a lot quicker. 

Before, a lot of the departments were coming to them with requirements that took 12 to 16 weeks to fulfill. Now, we can do the technology piece within hours, and the whole process within days. We’re talking about a factor-of-10 reduction in the time to deliver services.

As they say, success breeds success. Once someone sees what the other department is able to do, that generates more questions, more requests, and it becomes a self-fulfilling prophecy. 

We’re working with them to enable the next phase of this project: to leverage the hyper-scale of public clouds, but again, in a more controlled environment. Today, they’re used to the platform; it’s all embedded in. They’re reaping the benefits mainly from an agility perspective, and, from an operational perspective, from vastly reduced system and, more importantly, storage administration.

The storage administrators have seen an 85 percent reduction in the time required to administer the storage, by having it wholly virtualized, which is fantastic from their perspective. It means they can concentrate more on developing the next phase, which is taking this ideology out to the public cloud.

Gardner: Let’s look to the future before we wrap this up. What would you like to see, not necessarily from HPE, but what can the vendors, the suppliers, or the public-cloud providers do to help you make that hybrid IT equation work better? 

Thurston: A lot of our mainstream customers always think that they’re late into adoption, but typically, they’re late into adoption because they’re waiting to see what becomes either a de-facto standard that is winning in the market, or they’re looking for bodies to create standards. Interoperability between platforms and standards is really the key to driving better adoption.

Today with AWS, Azure, and the rest, there’s no real compatibility between platforms; we can only abstract things further up. This is why I think platform-as-a-service offerings, things like Cloud Foundry and open platforms, will become the future platforms of choice for those forward thinkers who want to adopt hybrid IT.

Gardner: It sounds like what you are asking for is a multi-cloud set of options that actually works and is attainable. 

Thurston: It’s like networking with Ethernet. We have a standard, everyone adheres to it, and it’s a commodity. Everyone says public cloud is a commodity. It is, but unfortunately what we don’t have is interoperability standards, such as we find in networking. That’s what we need to drive better adoption moving forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


About danalgardner

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years. Gardner tracks and analyzes a critical set of enterprise software technologies and business development issues: Cloud computing, SOA, business process management, business intelligence, next-generation data centers, and application lifecycle optimization. His specific interests include Enterprise 2.0 and social media, cloud standards and security, as well as integrated marketing technologies and techniques. Gardner is a former senior analyst at Yankee Group and Aberdeen Group, and a former editor-at-large and founding online news editor at InfoWorld. He is a former news editor at IDG News Service, Digital News & Review, and Design News.