How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to — available tools.

It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.  

Gardner: We sometimes hear that, even at the board of directors level, they are telling people to go cloud-first, or just dump IT altogether. That strikes me as an overreaction. If we’re looking at tools and at what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw in a Microsoft cloud ad on TV and decided is right, well — that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.

All of the research we do shows that the world is hybrid for as far ahead as we can see.

Running away from internal IT and on-premises IT is not going to be a good idea for most organizations — at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Peters: I often talk about what I call the assumption gap. And the assumption gap is just that moment where we move from one side, where it’s okay to have lots of questions about something, in this case, in IT. And on the other side of this gap or chasm, to use a well-worn phrase, it’s not okay to ask anything, because you’ll look like you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly and very fast at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational — they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out (Yahoo IM, to be precise), they used to do the maintenance between 9 am and 5 pm, because it was just a tool to chat with your friends at night. And now you have businesses that rely on it. So, clearly, the ability to instant message and text between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, X application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.

You have to figure out what you’re trying to get done before you figure out what you’re going to do with it.

But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark.

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage — and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?

Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds — and you might want to have a multi-cloud approach as well. We see that increasingly.

What we should be looking for are not bright, shiny objects — but bright, shiny outcomes.

Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.
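
To make that parallel concrete, here is a minimal, purely illustrative Python sketch of weighing placement options against the criteria just named: cost, risk, requirements (treated here as compliance), and performance. The venues, scores, and weights are invented assumptions, not ESG data; as Peters notes, such decisions are situational and dynamic, so the inputs would differ by organization and change over time.

```python
# Hypothetical illustration: scoring candidate venues for a workload against weighted
# criteria. All venues, criteria weights, and scores below are invented for this sketch;
# a real assessment would use an organization's own numbers and revisit them over time.

CRITERIA_WEIGHTS = {"cost": 0.30, "risk": 0.25, "compliance": 0.25, "performance": 0.20}

# Scores run 0-10; higher means the venue suits this particular workload better.
venues = {
    "on-premises":    {"cost": 5, "risk": 9, "compliance": 9, "performance": 8},
    "public-cloud-a": {"cost": 8, "risk": 6, "compliance": 6, "performance": 7},
    "public-cloud-b": {"cost": 7, "risk": 7, "compliance": 8, "performance": 6},
}

def weighted_score(scores: dict) -> float:
    """Collapse per-criterion scores into one placement score for ranking."""
    return sum(weight * scores[criterion] for criterion, weight in CRITERIA_WEIGHTS.items())

if __name__ == "__main__":
    ranked = sorted(venues.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for venue, scores in ranked:
        print(f"{venue:15s} {weighted_score(scores):.2f}")
```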

People get so distracted by bright, shiny objects. Like they are the answer to everything. What we should be looking for are not bright, shiny objects — but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then figure out how to get it. You should not be sitting down with IT managers and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business. 

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought — all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular — their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve beaten home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got — because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of a different way of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows. You can’t, by definition, see into that, so that’s a tough thing to do. Even once you find out what’s going on, the second step is how do you gain control? Control — not for control’s sake — only by knowing all the things you were trying to do and how you’re trying to do them across an organization. And only then can you hope to optimize them.
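
As a rough sketch of that first step, gaining visibility before control, the following Python snippet reconciles what IT believes it manages against what discovery actually finds and flags the gap. The inventories are toy placeholders; real environments would pull from CMDB exports, cloud billing data, network scans, and similar sources.

```python
# Hypothetical illustration of "visibility before control": compare what IT believes it
# manages against what discovery actually finds, and flag the gap. The inventories are
# toy data; in practice they would come from CMDB exports, cloud bills, network scans, etc.

managed = {"crm-prod", "erp-prod", "mail-gateway"}            # sanctioned, tracked workloads
discovered = {"crm-prod", "erp-prod", "mail-gateway",
              "marketing-saas-trial", "dev-iaas-sandbox"}     # everything actually found in use

shadow_it = discovered - managed      # running, but invisible to IT operations
stale_entries = managed - discovered  # tracked, but apparently no longer running

print("Shadow IT to bring under management:", sorted(shadow_it))
print("Stale inventory entries to review:", sorted(stale_entries))
```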

You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured.

Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users — but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have — and finding out that you’re in a mess — doesn’t help you.

Visibility is not the same as solving. The point is not just finding out what you have, but actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, and the extremely high expectations for speed, flexibility, and performance mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way of managing that is automated. Manual management just will not cut it — even if you know everything that you’re doing wrong. 

Gardner: Yes, I agree 100 percent that the automation — in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises — means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays — that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared — it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And I’d go to a whole-day class, when I was on the sales side of the business, just to learn something like dual parity or about cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. I mean, you and I used to use a choke, or we’d have to really go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it better work in any temperature and at any speed. We just demand so much from cars.

To stretch that analogy, I’m mixing cars and storage — and we’ll make it all come together with hybrid IT in that it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economical and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99 percent of the people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That’s just become old hat now because you don’t talk about it in those terms. Storage has moved to be — how do we serve applications? How do we serve up data in the right place at the right time, and get it to the right person at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters — and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.
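
A deliberately tiny Python sketch of “set the parameters, let the machine execute” follows. The tier names, thresholds, and datasets are invented for illustration; real arrays, and the AI-driven systems described next, act on far richer telemetry than a single access-frequency number.

```python
# Deliberately tiny sketch of a parameter-driven placement policy: humans set the
# thresholds once, and the machine applies them. Tier names, thresholds, and datasets
# are invented; real systems act on much richer telemetry than one access-rate number.

TIER_POLICY = [          # evaluated top-down: first matching threshold wins
    ("flash", 100),      # hot data: 100+ accesses per day
    ("disk", 5),         # warm data: 5-99 accesses per day
    ("archive", 0),      # everything else goes cold
]

def place(accesses_per_day: int) -> str:
    """Return the tier this dataset should live on under the current policy."""
    for tier, threshold in TIER_POLICY:
        if accesses_per_day >= threshold:
            return tier
    return TIER_POLICY[-1][0]

for name, rate in {"orders-db": 450, "last-quarter-reports": 12, "2014-email-archive": 0}.items():
    print(f"{name}: {place(rate)}")
```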

It will be self-managing and self-optimizing. It will not be a “recommending tool,” it will be an “executing tool.”

We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.

Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It’s all very well talking about the future. It doesn’t sound great and perfect, but you have to get there. We do quite a lot of research at ESG. I will throw out just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity — whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software, and everything is okay, because it might not work with what you already have — the legacy systems, and the applications you already have. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.

People want to enhance what they currently have so that it can embrace the cloud. It’s enhancing your choice of tiers so you can embrace change.

People want to enhance what they currently have so that it can embrace the cloud. It’s enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure — not just your physical infrastructure but your applications, too — can use that, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You’re not just going to pick one cloud model and hope that it will come backward and make everything else work. You start with what you have and you gradually embrace these alternative tools.

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructure offerings, like Azure Stack, and their public cloud, Azure.

Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I’ve heard of one from HPE, it’s called Project New Hybrid IT Stack. I’m thinking that HPE can’t be the only company. We can’t be the only analysts that are seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this.

Many vendor organizations are either focused on the cloud side of the business — and there are some very big names — or on the on-premises side of the business. Embracing both is something that is not as difficult for them to do, but really not top of their want-to-do list before they’re absolutely forced to.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it’s really tough to imagine that particularly large enterprises are going to risk [standardizing on them]. They might even start to try to write it themselves, which is possible – unlikely, but possible.

The other side of the list will come from some of the other big organizations — Oracle and IBM spring to mind in terms of being able to embrace both on-premises and off-premises. But, at the end of the day, the commonality among those that we’ve mentioned is that they are systems companies. They win by delivering the best overall solution and package to their clients, not individual components within it.

If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management from request for proposals (RFPs) from 50 or 60 different companies to find some solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT, but into the on-premises and off-premises pieces, and how you manage these things — those are all parts of the solution, or the answer. The real thing, and it’s absolutely crucial, is that you don’t start with those bright shiny objects. You don’t start with, “How can I deploy more cloud? How can I do hybrid IT?” Those are not good questions to ask. Good questions to ask are, “What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?”

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Kansas Development Finance Authority gains peace of mind and an end-point virtual shield using hypervisor-level security

Implementing and managing IT security has leaped in complexity for organizations ranging from small and medium-sized businesses (SMBs) to massive government agencies.

Once-safe products used to thwart invasions now have been exploited. E-mail phishing campaigns are far more sophisticated, leading to damaging ransomware attacks.

What’s more, the jack-of-all-trades IT leaders of the mid-market concerns are striving to protect more data types on and off premises, their workload servers and expanded networks, as well as the many essential devices of the mobile workforce.

Security demands have gone up, yet there is a continual need for reduced manual labor and costs — while protecting assets sooner and better.

The next BriefingsDirect security strategies case study examines how a Kansas economic development organization has been able to gain peace of mind by relying on increased automation and intelligence in how it secures its systems and people.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

To explore how an all-encompassing approach to security has enabled improved results with fewer hours at a smaller enterprise, BriefingsDirect sat down with Jeff Kater, Director of Information Technology and Systems Architect at Kansas Development Finance Authority (KDFA) in Topeka. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: As a director of all of IT at KDFA, security must be a big concern, but it can’t devour all of your time. How have you been able to balance security demands with all of your other IT demands?

Kater: That’s a very interesting question, and it has a multi-segmented answer. In years past, leading up to the development of what KDFA is now, we faced the trends that demanded very basic anti-spam solutions and the very basic virus threats that came via the web and e-mail.

What we’ve seen more recently is the growing trend of enhanced security attacks coming through malware and different exploits — things that were once thought impossible are now the reality.

Therefore in recent times, my percentage of time dedicated to security had grown from probably five to 10 percent all the way up to 50 to 60 percent of my workload during each given week.

Gardner: Before we get to how you’ve been able to react to that, tell us about KDFA.

Kater: KDFA promotes economic development and prosperity for the State of Kansas by providing efficient access to capital markets through various tax-exempt and taxable debt obligations.

KDFA works with public and private entities across the board to identify financial options and solutions for those entities. We are a public corporate entity operating in the municipal finance market, and therefore we are a conduit finance authority.

KDFA is a very small organization — but a very important one. Therefore we run enterprise-ready systems around the clock, enabling our staff to be as nimble and as efficient as possible.

There are about nine or 10 of us that operate here on any given day at KDFA. We run on a completely virtual environment platform via Citrix XenServer. So we run XenApp, XenDesktop, and NetScaler — almost the full gamut of Citrix products.

We have a few physical endpoints, such as laptops and iPads, and we also have the mobile workforce on iPhones as well. They are all interconnected using the virtual desktop infrastructure (VDI) approach.

Gardner: You’ve had this swing, where your demands from just security issues have blossomed. What have you been doing to wrench that back? How do you get your day back, to innovate and put in place real productivity improvements?

We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

Kater: We went with virtualization via Citrix. It became our solution of choice due to not being willing to pay the extra tax, if you will, for other solutions that are on the market. We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

When we embraced virtualization, the security approaches were very traditional in nature. The old way of doing things worked fantastically for a physical endpoint.

The traditional approaches to security had been on our physical PCs for years. But when that security came over to the virtual realm, those approaches bogged down our systems. They still required updates to be done manually. They just weren’t innovating at the same speed as the virtualization, which was allowing us to create new endpoints.

And so, the maintenance, the updating, the growing threats were no longer being seen by the traditional approaches of security. We had endpoint security in place on our physical stations, but when we went virtual we no longer had endpoint security. We then had to focus on antivirus and anti-spam at the server level.

What we found out very quickly was that this was not going to solve our security issues. We then faced a lot of growing threats again, via e-mail and via the web, that were coming in through malware, spyware, and other activities that were embedding themselves on our file servers – and then trickling down and moving laterally across our network to our endpoints.

Gardner: Just as your organization went virtual and adjusted to those benefits, the malware and the bad guys, so to speak, adjusted as well — and started taking advantage of what they saw as perhaps vulnerabilities as organizations transitioned to higher virtualization.

Security for all, by all

Kater: They did. One thing that a lot of security analysts, experts, and end-users forget in the grand scheme of things is that this virtual world we live in has grown so rapidly — and innovated so quickly — that the same stuff we use to grow our businesses is also being used by the bad actors. So while we are learning what it can do, they are learning how to exploit it at the same speed — if not a little faster.

Gardner: You recognized that you had to change; you had to think more about your virtualization environment. What prompted you to increase the capability to focus on the hypervisor for security and prevent issues from trickling across your systems and down to your endpoints?

Kater: Security has always been a concern here at KDFA. And there has been more of a security focus recently, with the latest news and trends. We honestly struggled with CryptoLocker, and we struggled with ransomware.

While we never had to pay out any ransom or anything — and they were stopped in place before data could be exfiltrated outside of KDFA’s network — we still had two or three days of either data loss or data interruption. We had to pull back data from an archive; we had to restore some of our endpoints and some of our computers.

We needed to have a solution for our virtual environment — one that would be easy to deploy, easy to manage, and it would be centrally managed.

As we battled these things over a very short period of time, they were progressively getting worse and worse. We decided that we needed to have a solution for our virtual environment – one that would not only be easy to deploy and easy to manage, but would be centrally managed as well, enabling me to have more time to focus back on my workload — and not have to worry so much about the security thresholds that had to be updated and maintained via the traditional model.

So we went out to the market. We ran very extensive proof of concepts (POCs), and those POCs very quickly illustrated that the underlying architecture was only going to be enterprise-ready via two or three vendors. Once we started running those through the paces, Bitdefender emerged for us.

I had actually been watching the Hypervisor Introspection (HVI) product development for the past four years, since its inception came with a partnership between Citrix, Intel, the Linux community and, of course, Bitdefender. One thing that was continuous throughout all of that was that in order to deploy that solution you would need GravityZone in-house to be able to run the HVI workloads.

And so we became early adopters of Bitdefender GravityZone, and we are able to see what it could do for our endpoints, our servers, and our Microsoft Exchange Servers. Then, Hypervisor Introspection became another security layer that we are able to build upon the security solution that we had already adopted from Bitdefender.

Gardner: And how long have you had these solutions in place?

Kater: We are going on one and a half to two years for GravityZone. And HVI went to general availability earlier this year, in 2017; we were one of the first adopters to deploy it across our production environment.

Gardner: If you had a “security is easy” button that you could pound on your desk, what are the sorts of things that you look for in a simpler security solution approach?

IT needs brains to battle breaches

Kater: The “security is easy” button would operate much like the human brain. It would need that level of intuitive instinct, that predictive insight ability. The button would generally be easily managed, automated; it would evolve and learn with artificial intelligence (AI) and machine learning what’s out there. It would dynamically operate with peaks and valleys depending on the current status of the environment, and provide the security that’s needed for that particular environment.

Gardner: Jeff, you really are an early adopter, and I commend you on that. A lot of organizations are not quite as bold. They want to make sure that everything has been in the market for a long time. They are a little hesitant.

But being an early adopter sounds like you have made yourselves ready to adopt more AI and machine learning capabilities. Again, I think that’s very forward-looking of you.

But tell us, in real terms, what has being an early adopter gotten for you? We’ve had some pretty scary incidents just in the recent past, with WannaCry, for example. What has being an early adopter done for you in terms of these contemporary threats?

Kater: The new threats, including the EternalBlue exploit that happened here recently, are very advanced in nature. Oftentimes when these breaches occur, it takes several months before they even become apparent. And oftentimes they move laterally within our network without us knowing, no matter what we do.

Some of the more advanced and persistent threats don’t even have to infect the local host with any type of software. They work in the virtual memory space. It’s much different than the older threats, where you could simply reboot or clear your browser cache to resolve them and get back to your normal operations.

Earlier, when KDFA still made use of non-persistent desktops, if the user got any type of corruption on their virtual desktop, they were able to reboot, and get back to a master image and move on. However, with these advanced threats, when they get into your network, and they move laterally — even if you reboot your non-persistent desktop, the threat will come back up and it still infects your network. So with the growing ransomware techniques out there, we can no longer rely on those definition-based approaches. We have to look at the newer techniques.

As far as why we are early adopters, and why I have chosen some of the principles that I have, I feel strongly that you are really only as strong as your weakest link. I strive to provide my users with the most advanced, nimble, and agnostic solutions possible.

We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations.

We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations. It allows us to have discussions about increasing productivity at that point, and to maximize the potential of our smaller number of users — versus having to worry about the latest news of security breaches that are happening all around us.

Gardner: You’re able to have a more proactive posture, rather than doing the fire drill when things go amiss and you’re always reacting to things.

Kater: Absolutely.

Gardner: Going back to making sure that you’re getting a fresh image and versions of your tools …  We have heard some recent issues around the web browser not always being safe. What is it about being able to get a clean version of that browser that can be very important when you are dealing with cloud services and extensive virtualization?

Virtual awareness, secure browsing

Kater: Virtualization in and of itself has allowed us to remove the physical element of our workstations when desirable and operate truly in that virtual or memory space. And so when you are talking about browsers, you can have a very isolated, a very clean browser. But that browser is still going to hit a website that can exploit your system. It can run in that memory space for exploitation. And, again, it doesn’t rely on plug-ins to be downloaded or anything like that anymore, so we really have to look at the techniques that these browsers are using.

What we are able to do with the secure browsing technique is publish, in our case, via XenApp, any browser flavor with isolation out there on the server. We make it available to the users that have access for that particular browser and for that particular need. We are then able to secure it via Bitdefender HVI, making sure that no matter where that browser goes, no matter what interface it’s trying to align with, it’s secure across the board.

Gardner: In addition to secure browsing, what do you look for in terms of being able to keep all of your endpoints the way you want them? Is there a management approach of being able to verify what works and what doesn’t work? How do you try to guarantee 100 percent security on those many and varied endpoints?

Kater: I am a realist, and I realize that nothing will ever be 100 percent secure, but I really strive for that 99.9 percent security and availability for my users. In doing so — being that we are so small in staff, and being that I am the one that should manage all of the security, architecture, layers, networking and so forth — I really look for that centralized model. I want one pane of glass to look at for managing, for reporting.

I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go,  what did it do to me and how was I protected.

I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go, and what did it do to me and how was I protected. I need that so that I can report to my management staff and say, “Hey, honestly, this is what happened, this is what was happening behind the scenes. This is how we remediated and we are okay. We are protected. We are safe.”

And so I really look for that centralized management. Automation is key. I want something that will automatically update, with the latest virus and malware definitions, but also download the latest techniques that are seen out there via those innovative labs from our security vendors to fully patch our systems behind the scenes. So it takes that piece of management away from me and automates it to make my job more efficient and more effective.

Gardner: And how has Bitdefender HVI, in association with Bitdefender GravityZone, accomplished that? How big of a role does it play in your overall solution?

Kater: It has been a very easy deployment and management, to be honest. Again, entities large and small, we are all facing the same threats. When we looked at ways to attain the best solution for us, we wanted to make sure that all of the main vendors that we make use of here at KDFA were on board.

And it just so happened this was a perfect partnership, again, between Citrix, Bitdefender, Intel, and the Linux community. That close partnership, it really developed into HVI, and it is not an evolutionary product. It did not grow from anything else. It really is a revolutionary approach. It’s a different way of looking at security models. It’s a different way of protecting.

HVI allows for security to be seen outside of the endpoint, and outside of the guest agent. It’s kind of an inside-looking-outward approach. It really provides high levels of visibility, detection and, again, it prevents the attacks of today, with those advanced persistent threats or APTs.

With that said, since the partnership between GravityZone and HVI is so easy to deploy, so easy to manage, it really allows our systems to grow and scale when the need is there. And we just know that with those systems in place, when I populate my network with new VMs, they are automatically protected via the policies from HVI.

Given that security has to be built in from the ground all the way up, we rest assured that the security moves with the workload. As the workload moves across my network and is spawned off onto new VMs, the same set of security policies follows the workloads. It really takes out any human missteps, if you will, along the process, because it’s all automated and it all works hand-in-hand together.

Behind the screens

Gardner: It sounds like you have gained increased peace of mind. That’s always a good thing in IT; certainly a good thing for security-oriented IT folks. What about your end-users? Has the ability to have these defenses in place allowed you to give people a bit more latitude with what they can do? Is there a productivity, end-user or user experience benefit to this?

Kater: When it comes to security agents and endpoint security as a whole, I think a lot of people would agree with me that the biggest drawback when implementing those into your work environment is loss of productivity. It’s really not the end-user’s fault. It’s not a limitation of what they can and can’t do, but it’s what happens when security puts an extra load on your CPU, it puts extra load on your RAM; therefore, it bogs down your systems. Your systems don’t operate as efficiently or effectively and that decreases your productivity.

With Bitdefender, and the approaches that we adopted, we have seen very, very limited, almost uncomputable limitations as far as impacts on our network, impacts on our endpoints. So user adoption has been greater than it ever has, as far as a security solution.

I’m also able to manipulate our policies within that Central Command Center or Central Command Console within Bitdefender GravityZone to allow my users, at will, if they would like, to see what they are being blocked against, and which websites they are trying to run in the background. I am able to pass that through to the endpoint for them to see firsthand. That has been a really eye-opening experience.

We used to compute daily, thinking we were protected, and that nothing was running in the background. We were visiting the pages, and those pages were acting as though we thought that they should. What we have quickly found out is that any given page can launch several hundred, if not thousands, of links in the background, which can then become an exploit mechanism, if not properly secured.

Gardner: I would like to address some of the qualitative metrics of success when you have experienced the transition to more automated security. Let’s begin with your time. You said you went from five or 10 percent of time spent on security to 50 or 60 percent. Have you been able to ratchet that back? What would you estimate is the amount of time you spend on security issues now, given that you are one and a half years in?

Kater: Dating back 5 to 10 years ago with the inception of VDI, my security footprint as far as my daily workload was probably around that 10 percent. And then, with the growing threats in the last two to three years, that ratcheted it up to about 50 percent, at minimum, maybe even 60 percent. By adopting GravityZone and HVI, I have been able to pull that back down to only consume about 10 percent of my workload, as most of it is automated for me behind the scenes.

Gardner: How about ransomware infections? Have you had any of those? Or lost documents, any other sort of qualitative metrics of how to measure efficiency and efficacy here?

We have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Kater: I am happy to report that since the adoption of GravityZone, and now with HVI as an extra security layer on top of Bitdefender GravityZone, that we have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Gardner: Well, that speaks for itself. Let’s look to the future, now that you have obtained this. You mentioned earlier your interest in AI, machine learning, automating, of being proactive. Tell us about what you expect to do in the future in terms of an even better security posture.

Safety layers everywhere, all the time

Kater: In my opinion, again, security layers are vital. They are key to any successful deployment, whether you are large or small. It’s important to have all of your traditional security hardware and software in place working alongside this new interwoven fabric, if you will, of software — and now at the hypervisor level. This is a new threshold. This is a new undiscovered territory that we are moving into with virtual technologies.

As that technology advances, and more complex deployments are made, it’s important to protect that computing ability every step of the way; again, from that base and core, all the way into the future.

More and more of my users are computing remotely, and they need to have the same security measures in place for all of their computing sessions. What HVI has been able to do for me here in the current time, and in moving to the future, is I am now able to provide secure working environments anywhere — whether that’s their desktop, whether that’s their secure browser. I am able to leverage that HVI technology once they are logged into our network to make their computing from remote areas safe and effective.

Gardner: For those listening who may not have yet moved toward a hypervisor-level security – or who have maybe even just more recently become involved with pervasive virtualization and VDI — what advice could you give them, Jeff, on how to get started? What would you suggest others do that would even improve on the way you have done it? And, of course, you have had some pretty good results.

Kater: It’s important to understand that everybody’s situation is very different, so identifying the best solutions for everybody is very much on an individual corporation basis. Each company has its own requirements, its own compliance to follow, of course.

Pick two or three vendors and run very stringent POCs; make sure that they are able to identify your security restraints, try to break them, run them through the phases, see how they affect your network.

The best advice that I can give is pick two or three vendors, at the least, and run very stringent POCs; no matter what they may be, make sure that they are able to identify your security restraints, try to break them, run them through the phases, see how they affect your network. Then, when you have two or three that come out of that and that you feel strongly about, continue to break them down.

I cannot stress the importance of POCs enough. It’s very important to identify that one or two that you really feel strongly about. Once you identify those, then talk to the industry experts that support those technologies, talk to the engineers, really get the insight from the inside out on how they are innovating and what their plan is for the future of their products to make sure that you are on a solid footprint.

Most success stories involve a leap of faith. With machine learning and AI, we are now taking a leap that is backed by factual knowledge and analyzing techniques to stay ahead of threats. No longer are we relying on those virus definitions and those virus updates that can be lagging sometimes.

Gardner: Before we sign off, where do you go to get your information? Where would you recommend other people go to find out more?

Kater: Honestly, I was very fortunate that HVI at its inception fell into my lap. When I was looking around at different products, we just hit the market at the right time. But to be honest with you, I cannot stress enough, again, run those POCs.

If you are interested in finding out more about Bitdefender and its product line up, Bitdefender has an excellent set of engineers on staff; they are very knowledgeable, they are very well-rounded in all of their individual disciplines. The Bitdefender website is very comprehensive. It contains many outside resources, along with inside labs reporting, showcasing just what their capabilities are, with a lot of unbiased opinions.

They have several video demos and technical white papers listed out there; you can find them all across the web, and you can request the full product demo when you are ready for it and run that POC of Bitdefender products in-house on your network. Also, they have presales support that will help you all along the way.

Bitdefender HVI will revolutionize your data center security capacity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Bitdefender.


Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional cloud requirements into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud — especially hybrid cloud — when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.

The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it’s in the data that’s the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural implementation and legal complexities that can’t be discounted.

So, cloud and globalization can go together — but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues — such as compliance, regulation, and data sovereignty — how is that different from taking an applications-centric view of things?

Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and in ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.

It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data — either small or very large — across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
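To make the physics concrete, here is a minimal back-of-the-envelope sketch in Python; the distance, link speed, and data volume are illustrative assumptions, not figures from the discussion.

    # Back-of-the-envelope latency and transfer math (hypothetical figures)
    SPEED_OF_LIGHT_KM_S = 300_000      # in a vacuum; light in fiber travels roughly 2/3 as fast
    FIBER_FACTOR = 0.67

    def round_trip_ms(distance_km):
        one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
        return 2 * one_way_s * 1000

    def transfer_hours(gigabytes, megabits_per_second):
        return (gigabytes * 8000) / megabits_per_second / 3600

    # Roughly half the globe away (~19,000 km), 1 TB of data, a 100 Mbps link
    print(round_trip_ms(19_000))        # ~190 ms best case, before any routing or queuing
    print(transfer_hours(1000, 100))    # ~22 hours just to move the data once

Even in this idealized case, the speed of light alone puts a floor under response times, which is why latency-sensitive work tends to stay near the data.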

Gardner: As you are factoring your architecture, you need to take these local considerations into account, particularly when you are factoring costs. If you have to do some heavy lifting and make your bandwidth capable, it might be better to have a local closet-sized data center, because they are small and efficient these days, and you can stick with a private cloud or on-premises approach. At the least, you should factor the economic basis for comparison, with all these other variables you brought up.

Edge centers

Burris: That’s correct. In fact, we call them “edge centers.” For example, if the application involves the Internet of Things (IoT), then there will likely be latency considerations in play, and the cost of doing a round-trip message over a few thousand miles can be pretty significant relative to how fast computing can be done these days.

The first consideration is: What are the impacts of latency for an application workload like IoT, and is that workload intended to drive more automation into the system? Imagine, if you will, the businessperson who says, “I would like to enter a new market or expand my presence in an existing market in a cost-effective way. And to do that, I want to have the system be more fully automated as it serves that particular market or that particular group of customers. And perhaps it’s something that looks more process manufacturing-oriented, or something along those lines, that has IoT capabilities.”

The goal is to bring in the technology in a way that does not explode the administration, management, and labor cost associated with the implementation.

The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor cost associated with the implementation.

The only way you are going to do that is to introduce a fair amount of automation, and that works only if, in fact, the automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round trip of moving the data from a remote global location back to somewhere in North America — independent of whether it’s legal or not — takes longer than the automation moment allows, then you just flat out can’t do it. Now, that is the most obvious and stringent consideration.
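As a minimal sketch of that constraint, the following check compares a hypothetical automation-moment deadline against the remote round trip; all of the numbers are illustrative.

    # Can a remote round trip fit inside an "automation moment"? (hypothetical values)
    def remote_feasible(deadline_ms, network_rtt_ms, remote_compute_ms):
        return network_rtt_ms + remote_compute_ms <= deadline_ms

    print(remote_feasible(deadline_ms=50, network_rtt_ms=190, remote_compute_ms=20))  # False: must run locally
    print(remote_feasible(deadline_ms=50, network_rtt_ms=5, remote_compute_ms=20))    # True: an edge center works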

On top of that, these moments of automation necessitate significant amounts of data being generated and captured. We have done model studies where, for example, moving data out of a small wind farm can be 10 times as expensive as processing it locally. It can cost hundreds of thousands of dollars a year to do relatively simple and straightforward types of data analysis on the performance of that wind farm.

Process locally, act globally

It’s a lot better to have a local presence that can handle local processing requirements against models that are operating on locally derived or locally generated data, and to let that work be automated with only periodic visibility into how the overall system is working. And that’s where a lot of this kind of on-premises hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power that’s necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very close to, or even within, some of those devices. We will try to keep as much of that data as we can local.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.

Gardner: It’s a fascinating subject: what to push down to the edge, given that storage and processing costs are down and footprints are shrinking, and what to then use the public cloud or Infrastructure-as-a-Service (IaaS) environments for.

But before we go any further, Peter, tell us about yourself and your organization, Wikibon.

Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon’s objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It’s not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.

The difference between a business and a digital business is the degree to which data is used as an asset.

The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and these advanced analytics like artificial intelligence (AI), machine learning and deep learning are all about. And then finally, how to create the next generation of applications that actually act on behalf of the brand with a fair degree of autonomy, which is what we call “systems of agency.” And then, ultimately, how cloud and historical infrastructure are going to come together and be optimized to support all of those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the services that are required are appropriately contracted for and can be sustained as they become an increasing feature of any company’s value proposition. That’s what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment through speeds and feeds, and technology. I would like to elevate it to the issues of culture, politics and perception. Because in recent years, with offshoring and intellectual property concerns in other countries, the fact is that all the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of other second-tier providers, but the top tier is certainly US-based.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that’s US-based?

Burris: There are two perspectives on that, but let me first add one check on this. Alibaba clearly is one of the top tier, and they are not based in the US, and that may be one of the advantages that they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one will emerge in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let’s just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.

From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity.

Money, you can put it here or you can put it there. Time and people, you can put here or you can put there. That machine can be dedicated to this kind of work or that kind of work.

Data is weird, because data can be copied, data can be shared. The value of data appreciates as we use it more successfully, as we integrate it and share it across multiple applications.

Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could possibly happen. One is that it gets copied and stolen, and there are a lot of implications to that. And two, that there are rules and regulations in place that restrict how I can combine that data with other sources of data. That means, for example, that my customer data in Germany may not appreciate, or may not be able to generate the same types of returns, as my customer data in the US.

Now, that sets aside any moral question of whether or not Germany or the US has better privacy laws and protects consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right. Number one, you probably aren’t going to be able to generate the total value of your data because of restrictions on its use; and number two, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there’s some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren’t necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.

All hyperscalers are going to have to be able to demonstrate that they can protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated.

All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated. [The rationale] for basing your business in these types of services is really immature. We have made enormous progress, but there’s a long way yet to go here, and that’s something that businesses must factor as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise — their legal, marketing, security, risk avoidance and governance units — right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern — or that type of concern — is minuscule relative to the broader questions of cloud computing, of someone grabbing your data and holding it hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.

Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud — think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would go back and do research on outsourcing, I discovered that a lot of the outsourcing was not driven by business needs, but driven by executive compensation schemes, literally. So, where executives were told that they would be paid on the basis of return on net assets, there was a high likelihood that the business was going to go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.

Think about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

The same type of thinking pertains here — the goal is not simply to get rid of IT assets, even though those assets, generally speaking, are becoming less important features of the overall value proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, “How can I get the cloud experience wherever the data demands?” The cloud experience, which is a very, very powerful concept, ultimately means being able to get access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is to have a technology stack that provides a consistent cloud experience wherever the data has to run — whether that’s because of IoT or because of privacy issues or because of intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience.

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It must involve a degree of distribution that completely breaks most models. We’re thinking about infrastructure and architecture, but also data management, system management, security management and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we’re going to use increasingly from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop — and they will continue to drop — it means ultimately that the fundamental cost determinant will be, How long does it take an application to complete? How long does it take this transaction to complete? And that’s not so much a throughput question, as it is a question of, “I have all these multiple sources that each on their own are contributing some degree of time to how this piece of work finishes, and can I do that piece of work in less time if I bring some of the work, for example, in-house, and run it close to the event?”

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.
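As a rough illustration of time as the cost metric, the sketch below sums hypothetical per-step times for the same transaction composed from remote services versus services run close to the event; the step names and timings are invented for the example.

    # Elapsed time as the cost metric for a composite transaction (hypothetical timings, in ms)
    remote_pipeline = {"ingest": 40, "enrich_saas": 120, "score_public_cloud": 90, "act": 30}
    local_pipeline  = {"ingest": 40, "enrich_local": 25, "score_edge": 35, "act": 30}

    def completion_ms(pipeline):
        # Serial composition for simplicity; real applications overlap some steps
        return sum(pipeline.values())

    print(completion_ms(remote_pipeline))   # 280 ms
    print(completion_ms(local_pipeline))    # 130 ms -- bringing work close to the event cuts the "cost"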

The fifth issue, which really places an enormous strain on IT, is how we think about backing up and restoring data. Backup/restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services — and as these applications increasingly are the basis for the business and the end-value that we’re creating — we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up, and even more importantly to restore, applications and even businesses. The issue becomes more about restoring: How do we restore applications and businesses across this incredibly complex arrangement of services and data locations and sources?

There’s a new data regime that’s emerging to support application development. How’s that going to work — the role the data scientists and analytics are going to play in working with application developers?

I listed five areas that are going to be very important. We haven’t even talked about the new regime that’s emerging to support application development and how that’s going to work. The role the data scientists and analytics are going to play in working with application developers – again, we could go on and on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in and bring new types of management, with maturity and with visibility. Who are some of the players that might fill that role? One that I am familiar with, and I think I have seen them on theCUBE, is Hewlett Packard Enterprise (HPE) with what they call Project New Hybrid IT Stack. We still don’t know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about a single hyperscaler or a handful of hyperscale providers. What’s the opportunity for a business to come into this problem set and start to solve it? What do you think, from what you’ve heard so far, about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When companies wanted to put those networks together to build more distributed applications, the mini-computer companies said, “Yeah, just bridge our network.” That was a deeply unsatisfying answer for the users. So along came Cisco and TCP/IP, and they flattened out all those mini-computer networks, and in the process flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

We are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized.

The second thing is that to build the next generations of more complex applications — and especially applications that involve capabilities like deep learning or machine learning with increased automation — we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications if we don’t take full advantage of the technologies that we want to use in the applications — inside how we run our infrastructures and run our subsystems, and do all the things we need to do from a hybrid cloud standpoint.

Ultimately, companies are going to have to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure, that really understand the problem. They need a lot of experience with a lot of different environments, not just one operating system or one cloud platform. They will need a lot of experience with these advanced applications, and have both the brainpower and the inclination to appropriately invest in those capabilities so they can build the type of platforms that we are talking about. There are not a lot of companies out there that can.

There are a few out there, and certainly HPE, with its New Stack initiative, is one of them, and we at Wikibon are especially excited about it. It’s new, it’s immature, but HPE has a lot of piece parts that will be required to make a go of this technology. It’s going to be one of the most exciting areas of invention over the next few years. We really look forward to working with our user clients to introduce some of these technologies and innovate with them. It’s crucial to solve the next generation of problems that the world faces; we can’t move forward without some of these new classes of hybrid technologies that weave together fabrics that are capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

As enterprises face hybrid IT complexity, new management solutions beckon

The next BriefingsDirect Voice of the Analyst interview examines how new machine learning and artificial intelligence (AI) capabilities are being applied to hybrid IT complexity challenges.

We’ll explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to report on how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT is Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. There is also lingering concern about the complexity of managing so many fast-moving parts. We have legacy IT, private cloud, public cloud, software as a service (SaaS) and, of course, multi-cloud. So as someone who tracks technology and its consumption, how much has technology itself been tapped to manage this sprawl, if you will, across hybrid IT?

Paul Teich

Teich

Teich: So far, not very much, mostly because of the early state of multi-cloud and the hybrid cloud business model. As you know, it takes a while for management technology to catch up with the actual compute technology and storage. So I think we are seeing that management is the tail of the dog, it’s getting wagged by the rest of it, and it just hasn’t caught up yet.

Gardner: Things have been moving so quickly with cloud computing that few organizations have had an opportunity to step back and examine what’s actually going on around them — never mind properly react to it. We really are playing catch up.

Teich: As we look at the options available, the cloud giants — the public cloud services — don’t have much incentive to work together. So you are looking at a market where there will be third parties stepping in to help manage multi-cloud environments, and there’s a lag time between having those services available and having the cloud services available and then seeing the third-party management solution step in.

Gardner: It’s natural to see that a specific cloud provider, whether it’s purely public like AWS or hybrid like Microsoft Azure and Azure Stack, wants to help its customers, but it wants to help those customers get to its own solutions first and foremost. It’s a natural thing. We have seen this before in technology.

There are not that many organizations willing to step into the neutral position of being ecumenical, of saying they want to help the customer first and manage it all from the start.

As we look to how this might unfold, it seems to me that the previous models of IT management — agent-based, single-pane-of-glass, and unfortunately still in some cases spreadsheets and Post-It notes — have been brought to bear on this. But we might be in a different ball game, Paul, with hybrid IT, in that there are just too many moving parts and too much complexity, and we might need to look at data-driven approaches. What is your take on that?

Teich: I think that’s exactly correct. One of the jokes in the industry right now is if you want to find your stranded instances in the cloud, cancel your credit card and AWS or Microsoft will be happy to notify you of all of the instances that you are no longer paying for because your credit card expired. It’s hard to keep track of this, because we don’t have adequate tools yet.

When you are an IT manager and you have a lot of folks on public cloud services, you don’t have a full picture.

That single pane of glass, looking at a lot of data and information, is soon overloaded. When you are an IT manager at a mid-sized or large corporation, you have a lot of folks paying out-of-pocket right now, slapping a credit card down on public cloud services, so you don’t have a full picture. And where you do have a picture, there are so many moving parts.

I think we have to get past having a screen full of data, a screen full of information, and to a point where we have insight. And that is going to require a new generation of tools, probably borrowing from some of the machine learning evolution that’s happening now in pattern analytics.
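As one small example of borrowing from machine learning and pattern analytics, a first step past screens full of data is to flag metrics that deviate sharply from their recent baseline; the sketch below uses a simple z-score test on hypothetical readings, standing in for the far richer models real tooling would apply.

    # Flag readings that deviate sharply from the recent baseline (illustrative only)
    from statistics import mean, stdev

    def anomalies(samples, threshold=2.5):
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            return []
        return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

    cpu_pct = [22, 25, 24, 23, 26, 24, 25, 96, 23, 24]   # hypothetical hourly CPU readings
    print(anomalies(cpu_pct))    # -> [7], the spike worth a closer look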

Gardner: The timing in some respects couldn’t be better, right? Just as we are facing this massive problem of complexity of volume and velocity in managing IT across a hybrid environment, we have some of the most powerful and cost-effective means to deal with big data problems just like that.

Life in the infrastructure

Paul, before we go further let’s hear about you and your organization, and tell us, if you would, what a typical day is like in the life of Paul Teich?

Teich: At TIRIAS Research we are boutique industry analysts. By boutique we mean there are three of us — three principal analysts; we have just added a few senior analysts. We are close to the metal. We live in the infrastructure. We are all former engineers and/or product managers. We are very familiar with deep technology.

My day tends to be first, a lot of reading. We look at a lot of chips, we look at a lot of service-level information, and our job is to, at a very fundamental level, take very complex products and technologies and surface them to business decision-makers, IT decision-makers, folks who are trying to run lines of business (LOB) and make a profit. So we do the heavy lifting on why new technology is important, disruptive, and transformative.

Gardner: Thanks. Let’s go back to this idea of data-driven and analytical values as applied to hybrid IT management and complexity. If we can apply AI and machine learning to solve business problems outside of IT — in such verticals as retail, pharmaceutical, transportation — with the same characteristics of data volume, velocity, and variety, why not apply that to IT? Is this a case of the cobbler’s kids having no shoes? You would think that IT would be among the first to do this.

Dig deep, gain insight

Teich: The cloud giants have already implemented systems like this because of necessity. So they have been at the front-end of that big data mantra of volume, velocity — and all of that.

To successfully train for the new pattern recognition analytics, especially the deep learning stuff, you need a lot of data. You can’t actually train a system usefully without presenting it with a lot of use cases.

The public clouds have this data. They are operating social media services, large retail storefronts, and e-tail, for example. As the public clouds became available to enterprises, the IT management problem ballooned into a big data problem. I don’t think it was a big data problem five or 10 years ago, but it is now.

That’s a big transformation. We haven’t actually internalized what that means operationally when your internal IT department no longer runs all of your IT jobs anymore.

We are generating big data and that means we need big data tools to go analyze it and to get that relevant insight.

That’s the biggest sea change — we are generating big data in the course of managing our IT infrastructure now, and that means we need big data tools to go analyze it, and to get that relevant insight. It’s too much data flowing by for humans to comprehend in real time.

Gardner: And, of course, we are also talking about islands of such operational data. You might have a lot of data in your legacy operations. You might have tier 1 apps that you are running on older infrastructure, and you are probably happy to do that. It might be very difficult to transition those specific apps into newer operating environments.

You also have multiple SaaS and cloud data repositories and logs. There’s also not only the data within those apps, but there’s the metadata as to how those apps are running in clusters and what they are doing as a whole. It seems to me that not only would you benefit from having a comprehensive data and analytics approach for your IT operations, but you might also have a workflow and process business benefit by being an uber analyst, by being on top of all of these islands of operational data.

To me, moving toward a comprehensive intelligence and data analysis capability for IT is the gift that keeps giving. You would then be able to also provide insight for an uber approach to processes across your entire organization — across the supply chains, across partner networks, and back to your customers. Paul, do you also see that there’s an ancillary business benefit to having that data analysis capability, and not ceding it to your cloud providers?

Manage data, improve workflow

Teich: I do. At one end of the spectrum it’s simply what do you need to do to keep the lights on, where is your data, all of it, in the various islands and collections and the data you are sharing with your supply chain as well. Where is the processing that you can apply to that data? Increasingly, I think, we are looking at a world in which the location of the stored data is more important than the processing power.

The management of all the data you have needs to segue into visible workflows.

We have processing power pretty much everywhere now. What’s key is moving data from place to place and setting up the connections to acquire it. It means that the management of all the data you have needs to segue into visible workflows.

Once I know what I have, and I am managing it at a baseline effectively, then I can start to improve my processes. Then I can start to get better workflows, internally as well as across my supply chain. But I think at first it’s simply, “What do I have going on right now?”

As an IT manager, how can I rein in some of these credit-card instances and credit-card storage on the public clouds, and put that all into the right mix? I have to know what I have first — then I can start to streamline. Then I can start to control my costs. Does that make sense?
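To make “know what you have first” concrete, here is a minimal inventory sketch using the AWS boto3 SDK that lists running instances missing an owner tag; the tag name and single-region scope are assumptions for the example, and equivalent calls exist for other providers.

    # List running EC2 instances that have no "Owner" tag (illustrative assumptions)
    import boto3

    def untagged_running_instances(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        paginator = ec2.get_paginator("describe_instances")
        orphans = []
        for page in paginator.paginate(
                Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    if "Owner" not in tags:
                        orphans.append(instance["InstanceId"])
        return orphans

    print(untagged_running_instances())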

Gardner: Yes, absolutely. And how can you know which people you want to give even more credit to on their credit cards – and let them do more of what they are doing? It might be very innovative, and it might be very cost-effective. There might also be those wasting money, spinning their wheels, repaving cow paths, over and over again.

Having the insight and the visibility to make those decisions, and then to further analyze how best to go about it, seems to me a no-brainer.

It also comes at an auspicious time as IT is trying to re-factor its value to the organization. If in fact they are no longer running servers and networks and keeping the trains running on time, they have to start being more in the business of defining what trains should be running and then how to make them the best business engines, if you will.

If IT departments need to rethink their role and step up their game, then they need to use technologies like advanced hybrid IT management from vendors with a neutral perspective. Then they become the overseers of operations at a fundamentally different level.

Data revelation, not revolution

Teich: I think that’s right. It’s evolutionary stuff. I don’t think it’s revolutionary. I think that in the same way you add servers to a virtual machine farm, as your demand increases, as your baseline demand increases, IT needs to keep a handle on costs — so you can understand which jobs are running where and how much more capacity you need.

One of the things they are missing with random access to the cloud is bulk purchasing. So, at a very fundamental level, there is value in helping your organization manage which clouds you are spending on: aggregating the purchase of storage and of compute instances to get better buying power, and doing price arbitrage when you can. To me, those are fundamental capabilities of IT going forward in a multi-cloud environment.
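A minimal sketch of that kind of price arbitrage follows; the provider names are real, but the hourly rates are made-up placeholders rather than quoted prices.

    # Pick the cheapest home for an interchangeable workload (placeholder prices)
    hourly_price = {
        "aws": 0.096,
        "azure": 0.099,
        "gcp": 0.089,
        "private_cloud": 0.070,   # fully loaded internal cost estimate
    }

    def cheapest(prices, hours_per_month=730):
        name = min(prices, key=prices.get)
        return name, round(prices[name] * hours_per_month, 2)

    print(cheapest(hourly_price))    # e.g. ('private_cloud', 51.1)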

They are extensions of where we are today; it just doesn’t seem like it yet. IT departments have always added new servers to increase internal capacity, and this is just the next evolutionary step.

Gardner: It certainly makes sense that you would move as maturity occurs in any business function toward that orchestration, automation and optimization – rather than simply getting the parts in place. What you are describing is that IT is becoming more like a procurement function and less like a building, architecture, or construction function, which is just as powerful.

Not many people can make those hybrid IT procurement decisions without knowing a lot about the technology. Someone with just business acumen can’t walk in and make these decisions. I think this is an opportunity for IT to elevate itself and become even more essential to the businesses.

Teich: The opportunity is a lot like the Sabre airline scheduling system that nearly every airline uses now. That’s a fundamental capability for doing business, and it’s separate from the technology of Sabre. It’s the ability to schedule — people and airplanes — and it’s a lot like scheduling storage and jobs on compute instances. So I think there will be this step.

But to go back to the technology versus procurement, I think some element of that has always existed in IT in terms of dealing with vendors and doing the volume purchases on one side, but also having some architect know how to compose the hardware and the software infrastructure to serve those applications.

Connect the clouds

We’re simply translating that now into a multi-cloud architecture. How do I connect those pieces? What network capacity do I need to buy? What kind of storage architectures do I need? I don’t think that all goes away. It becomes far more important as you look at, for example, AWS as a very large bag of services. It’s very powerful. You can assemble it in any way you want, but in some respects, that’s like programming in C. You have all the power of assembly language and all the danger of assembly language, because you can wander into memory and delete stuff. And so, you have to have architects who know how to build a service that’s robust, that won’t go down, that serves your application most efficiently, and all of those things are still hard to do.

So, architecture and purchasing are both still necessary. They don’t go away. I think the important part is that the orchestration part now becomes as important as deploying a service on the side of infrastructure because you’ve got multiple sets of infrastructure.

Gardner: For hybrid IT, it really has to be an enlightened procurement, not just blind procurement. And the people in the trenches that are just buying these services — whether the developers or operations folks — they don’t have that oversight, that view of the big picture to make those larger decisions about optimization of purchasing and business processes.

That gets us back to some of our earlier points of, what are the tools, what are the management insights that these individuals need in order to make those decisions? Like with Sabre, where they are optimizing to fill every hotel room or every airplane seat, we’re going to want in hybrid IT to fill every socket, right? We’re going to want all that bare metal and all those virtualization instances to be fully optimized — whether it’s your cloud or somebody else’s.

It seems to me that there is an algorithmic approach eventually, right? Somebody is going to need to be the keeper of that algorithm as to how this all operates — but you can’t program that algorithm if you don’t have the uber insights into what’s going on, and what works and what doesn’t.

What’s the next step, Paul, in terms of the technology catching up to the management requirements in this new hybrid IT complex environment?

Teich: People can develop some of that experience on a small scale, but there are so many dimensions to managing a multi-cloud, hybrid IT infrastructure business model. It’s throwing off all of this metadata for performance and efficiency. It’s ripe for machine learning.

We’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale.

In a strong sense, we’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale. It’s just going to be looking at a bigger picture, it’s going to be managing more variables, and learning across a lot more data points than a human can possibly comprehend.

We are at this really interesting point in the industry where we are getting deep-learning approaches that are coming online cost effectively; they can help us do that. They have a little while to go before they are fully mature. But IT organizations that learn to take advantage of these systems now are going to have a head start, and they are going to be more efficient than their competitors.

Gardner: At the end of the day, if you’re all using similar cloud services then that differentiation between your company and your competitor is in how well you utilize and optimize those services. If the baseline technologies are becoming commoditized, then optimization — that algorithm-like approach to smartly moving workloads and data, and providing consumption models that are efficiency-driven — that’s going to be the difference between a 1 percent margin and a 5 percent margin over time.

The deep-learning difference

Teich: The important part to remember is that these machine-training algorithms are somewhat new, so there are several challenges with deploying them. First is the transparency issue. We don’t quite yet know how a deep-learning model makes specific decisions. We can’t point to one aspect and say that aspect is managing the quality of our AWS services, for example. It’s a black box model.

We can’t yet verify the results of these models. We know they are being efficient and fast but we can’t verify that the model is as efficient as it could possibly be. There is room for improvement over the next few years. As the models get better, they’ll leave less money on the table.

There is also the challenge of validating that, when you build a machine-learning model, it covers all the situations you want it to cover. You need an audit trail for specific sets of decisions, especially with data that is subject to regulatory constraints. You need to know why you made decisions.

So the net is, once you are training a machine-learning model, you have to keep retraining it over time. Your model is not going to do the same thing as your competitor’s model. There is a lot of room for differentiation, a lot of room for learning. You just have to go into it with your eyes open that, yeah, occasionally things will go sideways. Your model might do something unexpected, and you just have to be prepared for that. We’re still in the early days of machine learning.

Gardner: You raise an interesting point, Paul, because even as the baseline technology services in the multi-cloud era become commoditized, you’re going to have specific, unique, and custom approaches to your own business’ management.

Your hybrid IT optimization is not going to be like that of any other company. I think getting that machine-learning capability attuned to your specific hybrid IT panoply of resources and assets is going to be a gift that keeps giving. Not only will you run your IT better, you will run your business better. You’ll be fleet and agile.

If some risk arises — whether it’s a cyber security risk, a natural disaster risk, a business risk of unintended or unexpected changes in your supply chain or in your business environment — you’re going to be in a better position to react. You’re going to have your eyes to the ground, you’re going to be well tuned to your specific global infrastructure, and you’ll be able to make good choices. So I am with you. I think machine learning is essential, and the sooner you get involved with it, the better.

Before we sign off, who are the vendors and some of the technologies that we will look to in order to fill this apparent vacuum on advanced hybrid IT management? It seems to me that traditional IT management vendors would be a likely place to start.

Who’s in?

Teich: They are a likely place to start. All of them are starting to say something about being in a multi-cloud environment, about being in a multi-cloud-vendor environment. They are already finding themselves there with virtualization, and the key is they have recognized that they are in a multi-vendor world.

There are some start-ups, and I can’t name them specifically right now. But a lot of folks are working on this problem of how do I manage hybrid IT: In-house IT, and multi-cloud orchestration, a lot of work going on there. We haven’t seen a lot of it publicly yet, but there is a lot of venture capital being placed.

I think this is the next step. Just as PCs came into the office, and then smartphones, we are moving from server farms to clouds, and from cloud to multi-cloud, and it’s attracting a lot of attention. The hard part right now is nailing whom to place your faith in. The name brands that people are buying their internal IT from right now are probably good near-term bets. As the industry gets more mature, we’ll have to see what happens.

Gardner: We did hear a vision described on this from Hewlett Packard Enterprise (HPE) back in June at their Discover event in Las Vegas. I’m expecting to hear quite a bit more on something they’ve been calling New Hybrid IT Stack that seems to possess some of the characteristics we’ve been describing, such as broad visibility and management.

So at least one of the long-term IT management vendors is looking in this direction. That’s a place I’m going to be focusing on, wondering what the competitive landscape is going to be, and if HPE is going to be in the leadership position on hybrid IT management.

Teich: Actually, I think HPE is the only company I’ve heard from so far talking at that level. Everybody is voicing some opinion about it, but from what I’ve heard, it does sound like a very interesting approach to the problem.

Microsoft actually constrained their view on Azure Stack to a very small set of problems, and is actively saying “no” to uses beyond that. If you’re looking at doing virtual machine migration and taking advantage of multi-cloud for general-purpose solutions, it’s probably not something that you want to do there yet. It was very interesting for me, then, to hear about the HPE Project New Hybrid IT Stack and what HPE is planning to do there.

Gardner: For Microsoft, the more automated and constrained they can make it, the more likely you’d be susceptible or tempted to want to just stay within an Azure and/or Azure Stack environment. So I can appreciate why they would do that.

Before we sign off, one other area I’m going to be keeping my eyes on is around orchestration of containers, Kubernetes, in particular. If you follow orchestration of containers and container usage in multi-cloud environments, that’s going to be a harbinger of how the larger hybrid IT management demands are going to go as well. So a canary in the coal mine, if you will, as to where things could get very interesting very quickly.
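One reason container orchestration is such a useful bellwether is that the same Kubernetes API can be queried across clusters running in different clouds. The sketch below, using the official Python client, assumes kubeconfig contexts with hypothetical names for an on-premises cluster and two public-cloud clusters.

    # Count nodes per cluster across multiple kubeconfig contexts (hypothetical context names)
    from kubernetes import client, config

    def node_counts(contexts):
        counts = {}
        for ctx in contexts:
            config.load_kube_config(context=ctx)   # switch to that cluster's credentials
            counts[ctx] = len(client.CoreV1Api().list_node().items)
        return counts

    print(node_counts(["onprem-cluster", "aws-cluster", "azure-cluster"]))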

The place to be

Teich: Absolutely. And I point out that the Linux Foundation’s CloudNativeCon in early December 2017 looks like the place to be — with nearly everyone in the server infrastructure and cloud infrastructure communities signing on. Part of the interest is in basically interchangeable container services. We’ll see that become much more important. So that sleepy little technical show is going to be invaded by “suits” this year, and we’re paying a lot of attention to it.

Gardner: Yes, I agree. I’m afraid we’ll have to leave it there. Paul, how can our listeners and readers best follow you to gain more of your excellent insights?

Teich: You can follow us at www.tiriasresearch.com, and also we have a page on Forbes Tech, and you can find us there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT’s ability to grow and thrive

The next BriefingsDirect Voice of the Analyst interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice.

We’ll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order to have businesses grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles, joins us to report on how companies are managing an increasingly complex transition to sustainable hybrid IT. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.

As someone who regularly advises chief information officers (CIOs), who or which group is surfacing that is tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?

Crawford: For the short-term, I would say everyone. It’s not as simple as it has been in the past where we look to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models — and cloud is a relatively new consumption model for technology — it changes the dynamics of it. It’s the combination of changing that consumption model — but then there’s another factor that comes into this. There is also the consumerization of technology, right? We are “democratizing” technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.

It’s not as it used to be, where we would say, “Okay, I’m not sure how to turn on a computer.” Now, businesses may be more familiar outside of the IT organization with certain technologies. Bringing that full-circle, the answer is that we have to look beyond just IT. Cloud is something that is consumed by IT organizations. It’s consumed by different lines of business, too. It’s consumed even by end-consumers of the products and services. I would say it’s all of the above.

Gardner: The good news is that more and more people are able to — on their own – innovate, to acquire cloud services, and they can factor those into how they obtain business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation become something that spins out of control, or becomes a negative over time?

Crawford: To some degree, we’ve already hit that inflection-point where technology is being used in inappropriate ways. A great example of this — and it’s something that just kind of raises the hair on the back of my neck — is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “Go cloud.”

The board should be very business-focused and instead they’re dictating specific technology — whether it’s the right technology or not. That’s really what this comes down to.

What’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application?

Another example is folks that try and go all-in on cloud but aren’t necessarily thinking about what’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application? It’s not a one-size-fits-all answer.

We in the enterprise IT space haven’t really done enough work to truly understand how best to leverage these new sets of tools. We need to both wrap our head around it but also get in the right frame of mind and thought process around how to take advantage of them in the best way possible.

Another example that I’ve worked through from an economic standpoint — and I have done this math a number of times with clients — is to figure out the comparison between the IT you’re doing on-premises in your corporate data center with any given application versus doing it in a public cloud.

Think differently

If you do the math, taking an application from a corporate data center and moving it as-is to public cloud will cost you four times as much money. Four times as much money to go to cloud! Yet we hear the cloud is a lot cheaper. Why is that?

When you begin to tease apart the pieces, the bottom line is that we get that four-times-as-much number because we’re using the same traditional mindset, treating cloud as if it were the same kind of solution, delivery mechanism, and tool we have always had. The reality is that it’s a different delivery mechanism, and it’s a different kind of tool.

When used appropriately, in some cases, yes, it can be less expensive. The challenge is you have to get yourself out of your traditional thinking and think differently about the how and why of leveraging cloud. And when you do that, then things begin to fall into place and make a lot more sense both organizationally — from a process standpoint, and from a delivery standpoint — and also economically.
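
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It is not Crawford’s actual client model; the hardware cost, hourly rate, and utilization figures are invented for illustration. The point is only that an always-on, lift-and-shift workload priced at on-demand rates can come out at a multiple of its amortized on-premises cost, while a right-sized, autoscaled redesign may not.

```python
# Illustrative back-of-the-envelope comparison (all numbers are assumptions,
# not vendor pricing): amortized on-premises cost vs. a lift-and-shift to
# always-on cloud instances vs. a redesigned, right-sized deployment.

HOURS_PER_MONTH = 730

def on_prem_monthly(hardware_capex, years_amortized, monthly_opex):
    """Amortized hardware plus power/cooling/admin per month."""
    return hardware_capex / (years_amortized * 12) + monthly_opex

def cloud_monthly(instances, hourly_rate, utilization=1.0):
    """Instances billed for the fraction of the month they actually run."""
    return instances * hourly_rate * HOURS_PER_MONTH * utilization

# Hypothetical workload sized at 20 servers' worth of capacity.
on_prem = on_prem_monthly(hardware_capex=400_000, years_amortized=5,
                          monthly_opex=4_000)                    # ~$10.7k/month
lift_and_shift = cloud_monthly(instances=20, hourly_rate=3.0)    # always on
redesigned = cloud_monthly(instances=20, hourly_rate=3.0,
                           utilization=0.2)                      # autoscaled

print(f"on-prem         ${on_prem:>9,.0f}/month")
print(f"lift-and-shift  ${lift_and_shift:>9,.0f}/month "
      f"({lift_and_shift / on_prem:.1f}x on-prem)")
print(f"redesigned      ${redesigned:>9,.0f}/month "
      f"({redesigned / on_prem:.1f}x on-prem)")
```

With these made-up figures the lift-and-shift lands at roughly four times the on-premises cost and the redesigned deployment at less than it, which is the shape of the argument being made here: the multiplier comes from the mindset, not from cloud itself.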

Gardner: That “appropriate use of cloud” is the key. Of course, that could be a moving target. What’s appropriate today might not be appropriate in a month or a quarter. But before we delve into more … Tim, tell us about your organization. What’s a typical day in the life for Tim Crawford like?

It’s not tech for tech’s sake, rather it’s best to say, “How do we use technology for business advantage?”

Crawford: I love that question. AVOA stands for that position in which we sit between business and technology. If you think about the intersection of business and technology, of using technology for business advantage, that’s the space we spend our time thinking about. We think about how organizations across a myriad of different industries can leverage technology in a meaningful way. It’s not tech for tech’s sake, and I want to be really clear about that. But rather it’s best to say, “How do we use technology for business advantage?”

We spend a lot of time with large enterprises across the globe working through some of these challenges. It could be as simple as changing traditional mindsets to transformational, or it could be talking about tactical objectives. Most times, though, it’s strategic in nature. We spend quite a bit of time thinking about how to solve these big problems and to change the way that companies function, how they operate.

A day in my life could range from, if I’m lucky, being able to stay in my office and be on the phone with clients, working with folks and thinking through some of these big problems. But I do spend a lot of time on the road, on an airplane, getting out in the field, meeting with clients, and understanding what people really are contending with.

Before I began doing this, I spent well over 20 years of my career inside leading IT organizations. It’s incredibly important for me to stay relevant by being out with these folks and understanding what they’re challenged by — and then, of course, helping them through their challenges.

Any given day is something new and I love that diversity. I love hearing different ideas. I love hearing new ideas. I love people who challenge the way I think.

It’s an opportunity for me personally to learn and to grow, and I wish more of us would do that. So it does vary quite a bit, but I’m grateful that the opportunities that I’ve had to work with have been just fabulous, and the same goes for the people.

Learn More About Hybrid IT Management Solutions From HPE

Gardner: I’ve always enjoyed my conversations with you, Tim, because you always do challenge me to think a little bit differently — and I find that very valuable.

Okay, let’s get back to this idea of “appropriate use of cloud.” I wonder if we should also expand that to be “appropriate use of IT and cloud.” So including that notion of hybrid IT, which includes cloud and hybrid cloud and even multi-cloud. And let’s not forget about the legacy IT services.

How do we know if we’re appropriately using cloud in the context of hybrid IT? Are there measurements? Is there a methodology that’s been established yet? Or are we still in the opening innings of how to even measure and gain visibility into how we consume and use cloud in the context of all IT — to therefore know if we’re doing it appropriately?

The monkey-bread model

Crawford: The first thing we have to do is take a step back to provide the context of that visibility — or a compass, as I usually refer to these things. You need to provide a compass to help understand where we need to go.

If we look back for a minute, and look at how IT operates — traditionally, we did everything. We had our own data center, we built all the applications, we ran our own servers, our own storage, we had the network – we did it all. We did it all, because we had to. We, in IT, didn’t really have a reasonable alternative to running our own email systems, our own file storage systems. Those days have changed.

Fast-forward to today. Now, you have to pick apart the pieces and ask, “What is strategic?” When I say, “strategic,” it doesn’t mean critically important. Electrical power is an example. Is that strategic to your business? No. Is it important? Heck, yeah, because without it, we don’t run. But it’s not something where we’re going out and building power plants next to our office buildings just so we can have power, right? We rely on others to do it because there are mature infrastructures, mature solutions for that. The same is true with IT. We have now crossed the point where there are mature solutions at an enterprise level that we can capitalize on, or that we can leverage.

Part of the methodology I use is the monkey bread example. If you’re not familiar with monkey bread, it’s kind of a crazy thing where you have these balls of dough. When you bake it, the balls of dough congeal together and meld. What you’re essentially doing is using that as representative of, or an analogue to, your IT portfolio of services and applications. You have to pick apart the pieces of those balls of dough and figure out, “Okay. Well, these systems that support email, those could go off to Google or Microsoft 365. And these applications, well, they could go off to this SaaS-based offering. And these other applications, well, they could go off to this platform.”

And then, what you’re left with is this really squishy — but much smaller — footprint that you have to contend with. That problem in the center is much more specific — and arguably that’s what differentiates your company from your competition.

Whether you run email [on-premises] or in a cloud, that’s not differentiating to a business. It’s incredibly important, but not differentiating. When you get to that gooey center, that’s the core piece, that’s where you put your resources in, that’s what you focus on.

This example helps you work through determining what’s critical and — more importantly — what’s strategic and differentiating to your business, and what is not. And when you start to pick apart these pieces, it actually is incredibly liberating. At first, it’s a little scary, but once you get the hang of it, you realize how liberating it is. It brings focus to the things that are most critical for your business.
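
For readers who think in code, a hypothetical sketch of that triage might look like the following. The service names, tags, and sourcing targets are invented for the example; they are not a prescribed taxonomy, just an illustration of splitting the commodity “dough” from the differentiating center.

```python
# Hypothetical portfolio triage in the spirit of the monkey-bread example.
# Every service name and sourcing target here is illustrative only.

portfolio = [
    {"service": "email",                 "differentiating": False, "target": "SaaS (e.g., Microsoft 365)"},
    {"service": "file storage",          "differentiating": False, "target": "SaaS"},
    {"service": "CRM",                   "differentiating": False, "target": "SaaS"},
    {"service": "claims-pricing engine", "differentiating": True,  "target": "keep in-house"},
    {"service": "customer analytics",    "differentiating": True,  "target": "keep in-house"},
]

offload = [s for s in portfolio if not s["differentiating"]]
core    = [s for s in portfolio if s["differentiating"]]

print("Move to mature external offerings:")
for s in offload:
    print(f"  {s['service']:22} -> {s['target']}")

print("\nThe differentiating center that gets the focus and resources:")
for s in core:
    print(f"  {s['service']}")
```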

Identify opportunities where cloud makes sense – and where it doesn’t. It definitely is one of the most significant opportunities for most IT organizations today.

That’s what we have to do more of. When we do that, we identify opportunities where cloud makes sense — and where it doesn’t. Cloud is not the end-all, be-all for everything. It definitely is one of the most significant opportunities for most IT organizations today.

So it’s important to understand what is appropriate: how you leverage the right solutions for the right application or service.

Gardner: IT in many organizations is still responsible for everything around technology. And that now includes higher-level strategic undertakings of how all this technology and the businesses come together. It includes how we help our businesses transform to be more agile in new and competitive environments.

So is IT itself going to rise to this challenge, of not doing everything, but instead becoming more of that strategic broker between IT functions and business outcomes? Or will those decisions get ceded over to another group? Maybe enterprise architects, business architects, business process management (BPM) analysts? Do you think it’s important for IT to both stay in and elevate to the bigger game?

Changing IT roles and responsibilities

Crawford: It’s a great question. For every organization, the answer is going to be different. IT needs to take on a very different role and sensibility. IT needs to look different than how it looks today. Instead of being a technology-centric organization, IT really needs to be a business organization that leverages technology.

The CIO of today and moving forward is not the tech-centric CIO. There are traditional CIOs and transformational CIOs. The transformational CIO is the business leader first who happens to have responsibility for technology. IT, as a whole, needs to follow the same vein.

For example, if you were to go into a traditional IT organization today and ask people about the nature of their business, or ask an administrator or a developer to explain how their work impacts the company and the business, unfortunately most of them would have a really hard time doing that.

The IT organization of the future will articulate clearly the work they’re doing, how that impacts their customers and their business, and how making different changes and tweaks will impact their business. They will have an intimate knowledge of how their business functions, much more than of how the technology functions. That’s a very different mindset, and that’s the place we have to get to for IT on the whole. IT can’t just be this technology organization that sits in a room, separate from the rest of the company. It has to be integral, absolutely integral, to the business.

Gardner: We recognize that cloud is here to stay, but that the consumption of it needs to be appropriate — and if we’re at some sort of inflection point, we’re also at risk of consuming cloud inappropriately. If IT and leadership within IT are elevating themselves, and upping their game to be that strategic player, isn’t IT then in the best position to be managing cloud, hybrid cloud, and hybrid IT? What tools and what mechanisms will they need in order to make that possible?

Learn More About Hybrid IT Management Solutions From HPE

Crawford: Theoretically, the answer is that they really need to get to that level. We’re not there yet, on the whole. Many organizations are not prepared to adopt cloud. I don’t want to be a naysayer of IT, but in terms of where IT needs to go, on the whole we need to move into a position where we can manage the different types of delivery mechanisms — whether it’s public cloud, SaaS, private cloud, or our own data centers where appropriate — because those are all just different levers we can pull depending on the business type.

Businesses change, customers change, demand changes and revenue comes from different places. IT needs to be able to shift gears just as fast and in anticipation of where the company goes.

As you mentioned earlier, businesses change, customers change, demand changes, and revenue comes from different places. In IT, we need to be able to shift gears just as fast and be prepared to shift those gears in anticipation of where the company goes. That’s a very different mindset. It’s a very different way of thinking, but it also means we have to think of clever ways to bring these tools together so that we’re well-prepared to leverage things like cloud.

The challenge is many folks are still in that classic mindset, which unfortunately holds back companies from being able to take advantage of some of these new technologies and methodologies. But getting there is key.

Gardner: Some boards of directors, as you mentioned, are saying, “Go cloud,” or be cloud-first. People are taking them at their word, and so we are facing a sort of cloud sprawl. Developers are building microservices and spinning up cloud instances and object storage instances. Sometimes they’ll keep those running into production; sometimes they’ll shut them down. We have line of business (LOB) managers going out and acquiring services like SaaS applications, running them for a while, perhaps making them a part of their standard operating procedures. But, in many organizations, one hand doesn’t really know what the other is doing.

Are we at the inflection point now where it’s simply a matter of measurement? Would we stifle innovation if we required people to at least mention what it is that they’re doing with their credit cards or petty cash when it comes to IT and cloud services? How important is it to understand what’s going on in your organization so that you can begin a journey toward better management of this overall hybrid IT?

Why, oh why, oh why, cloud?

Crawford: It depends on how you approach it. If you’re doing it from an IT command-and-control perspective, where you want to control everything in cloud — full stop, that’s failure right out of the gate. But if you’re approaching it as an opportunity to understand why these folks are leveraging cloud, why they are not coming to IT, and how you as CIO can be better positioned to support them — then great! Go forth and conquer.

The reality is that different parts of the organization are consuming cloud-based services today. I think there’s an opportunity to bring those together where appropriate. But at the end of the day, you have to ask yourself a very important question. It’s a very simple question, but you have to ask it, and it has to do with each of the different ways that you might leverage cloud. Even when you go beyond cloud and talk about just traditional corporate data assets — especially as you start thinking about Internet of things (IoT) and start thinking about edge computing — you know that public cloud becomes problematic for some of those things.

The important question you have to ask yourself is, “Why?” A very simple question, but it can have a really complicated answer. Why are you using public cloud? Why are you using three different forms of public cloud? Why are you using private cloud and public cloud together?

Once you begin to ask those questions, and you keep asking them, it’s like that old adage: ask yourself why three times and you get to the core of the true reason. You’ll bring greater clarity to the reasons, typically the business reasons, why you’re actually going down that path. When you start to understand that, it brings clarity to which decisions are smart decisions — and which decisions you might want to think about doing differently.

Learn More About Hybrid IT Management Solutions From HPE

Gardner: Of course, you may begin doing something with cloud for a very good reason. It could be a business reason, a technology reason. You’ll recognize it, you gain value from it — but then over time you have to step back with maturity and ask, “Am I consuming this in such a way that I’m getting it at the best price-point?” You mentioned a little earlier that sometimes going to public cloud could be four times as expensive.

So even though you may have an organization where you want to foster innovation, you want people to spread their wings, try out proofs of concept, be agile and democratic in terms of their ability to use myriad IT services, at what point do you say, “Okay, we’re doing the business, but we’re not running it like a good business should be run.” How are the economic factors driven into cloud decision-making after you’ve done it for a period of time?

Cloud’s good, but is it good for business?

Crawford: That’s a tough question. You have to look at the services that you’re leveraging and how that ties into business outcomes. If you tie it back to a business outcome, it will provide greater clarity on the sourcing decisions you should make.

For example, if you’re spending $5 to make $6 in a specialty industry, that’s probably not a wise move. But if you’re spending $5 to make $500, okay, that’s a pretty good move, right? There is a trade-off that you have to understand from an economic standpoint. But you have to understand what the true cost is and whether there’s sufficient value. I don’t mean technological value, I mean business value, which is measured in dollars.

If you begin to understand the business value of the actions you take — how you leverage public cloud versus private cloud versus your corporate data center assets — and you match that against the strategic decisions of what is differentiating versus what’s not, then you get clarity around these decisions. You can properly leverage different resources and gain them at the price points that make sense. If that gets above a certain amount, well, you know that’s not necessarily the right decision to make.
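
As a quick worked example of that value-versus-cost test, here is a tiny sketch using the hypothetical figures from the conversation; the threshold for “worth it” is a business judgment, not a fixed rule.

```python
# Minimal illustration of the value-versus-cost test described above.
# The dollar figures are the hypothetical ones from the conversation.

def roi(spend, business_value):
    """Return on investment: net gain divided by spend."""
    return (business_value - spend) / spend

print(f"Spend $5 to make $6:   ROI = {roi(5, 6):.0%}")    # 20% -- a thin margin
print(f"Spend $5 to make $500: ROI = {roi(5, 500):.0%}")  # 9900% -- clearly worth it
```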

Economics plays a very significant role — but let’s not kid ourselves. IT organizations haven’t exactly been the best at economics in the past; we will need to be going forward. And so it’s just one more thing on that overflowing plate that we call demand and requirements for IT, but we have to be prepared for that.

Gardner: There might be one other big item on that plate. We can allow people to pursue business outcomes using any technology that they can get their hands on, perhaps at any price, and we can then mature that process over time by looking at price, by finding the best options.

But the other item that we need to consider at all times is risk. Sometimes we need to consider the risk of getting so far into a model like public cloud, for example, that we can’t get back out of it. Maybe we have to consider that being completely dependent on external cloud networks across a global supply chain, for example, has inherent cyber security risks. Isn’t it up to IT also to help organizations factor in some of these risks — along with compliance, regulation, and data sovereignty issues? It’s a big barrel of monkeys.

Before we sign off, as we’re almost out of time, please address for me, Tim, the idea of IT being a risk factor mitigator for a business.

Safety in numbers

Crawford: You bring up a great point, Dana. Risk — whether it’s risk from a cyber security standpoint, data sovereignty issues, or regulatory compliance — the reality is that nobody across the organization truly understands all of these pieces together.

It really is a team effort to bring it all together — where you have the privacy folks, the information security folks, and the compliance folks — that can become a united team.

It really is a team effort to bring it all together — where you have the privacy folks, the information security folks, and the compliance folks — that can become a united team. I don’t think IT is the only component of that. I really think this is a team sport. In any organization that I’ve worked with, across the industry it’s a team sport. It’s not just one group.

It’s complicated, and frankly, it’s getting more complicated every single day. When you have these huge breaches that sit on the front page of The Wall Street Journal and other publications, it’s really hard to get clarity around risk when you’re always trying to fight against the fear factor. So that’s another balancing act that these groups are going to have to contend with moving forward. You can’t ignore it. You absolutely shouldn’t. You should get proactive about it, but it is complicated and it is a team sport.

Gardner: Some take-aways for me today are that IT needs to raise its game. Yet again, they need to get more strategic, to develop some of the tools that they’ll need to address issues of sprawl, complexity, cost, and simply gaining visibility into what everyone in the organization is, or isn’t, doing appropriately with hybrid cloud and hybrid IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Posted in application transformation, big data, Cloud computing, data center, Data center transformation, Enterprise architect, enterprise architecture, Enterprise transformation, Hewlett Packard Enterprise, Software | Tagged , , , , , , , , , , | 2 Comments

Case study: How HCI-powered private clouds accelerate efficient digital transformation

The next BriefingsDirect cloud efficiency case study examines how a world-class private cloud project evolved in the financial sector.

We’ll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly on-premises model built on hyper-converged infrastructure for a risk-sensitive financial services company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Jim McKittrick joins to help explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model, and that total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn’t necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?

McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost and data access, and the ease of data egress because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, “I am not doing it.” And he doesn’t care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud or staying private, sure. But that delta is closing. So you can have the best of both worlds — and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively as — or even more cost-effectively than — you can in the public clouds. There are certainly places for both in the market.

Gardner: It’s going to vary, of course, from company to company — and even department to department within a company — but the fact is that that choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry.

We have a head start on what’s really coming down the pipe — we are one to two years ahead of the general marketplace.

Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen. Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what’s really coming down the pipe —  we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that as well as very senior engineering leadership in our organization to tell us what we are supposed to be doing.

Gardner: I know you probably can’t mention the company by name, but tell us about a recent project that seems a harbinger of things to come.

Hyper-convergent control 

McKittrick: It began as a proof of concept (POC), but now it’s in production; it’s live globally.

I have been with ACS for 18 years, and I have had this client for 17 of those years. We have been through multiple data center iterations.

When this last one came up, three things happened. Number one, they were under tremendous cost pressure — but public cloud was not an option for them.

The second thing was that they had grown by acquisition, and so they had dozens of IT fiefdoms. You can imagine culturally and technologically the challenges involved there. Nonetheless, we were told to consolidate and globalize all these operations.

Thirdly, I was brought in by a client who had run the US presence for this company. We had created a single IT infrastructure in the US for them. He said, “Do it again for the whole world, but save us a bunch of money.” The gauntlet was thrown down. The customer was put in the position of having to make some very aggressive choices. And so he effectively asked me to bring them “cool stuff.”

You could give control to anybody in the organization across the globe and they would be able to manage it.

They asked, “What’s new out there? How can we do this?” Our senior engineering staff brought a couple of ideas to the table, and hyper-converged infrastructure (HCI) was central to that. HCI provided the ability to simplify the organization, as well as the IT management for the organization. You could give control of it to anybody in the organization across the globe and they would be able to manage it, working with partners in other parts of the world.

Gardner: Remote management being very important for this.

Learn How to Transform To A Hybrid IT Environment

McKittrick: Absolutely, yes. We also gained failover capabilities, and disaster recovery within these regional data centers. We ended up going from — depending on whom you spoke to — somewhere between seven and 19 data centers globally down to three. The data center footprint shrank massively. Just in the US, we went to one data center; we got rid of the other data center completely. We went from 34 racks down to 3.5.

Gardner: Hyper-convergence being a big part of that?

McKittrick: Correct, that was really the key, hyper-convergence and virtualization.

The other key enabling technology was data de-duplication, so the ability to shrink the data and then be able to move it from place to place without crushing bandwidth requirements, because you were only moving the changes, the change blocks.
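
To illustrate the change-block idea, here is a generic sketch of block-level change detection. It hashes fixed-size blocks of a volume image and reports only the blocks that differ from the previously replicated copy; this is a conceptual illustration, not how SimpliVity’s de-duplication engine is actually implemented.

```python
# Generic illustration of block-level change detection: hash fixed-size
# blocks and replicate only the blocks whose hashes changed.
import hashlib

BLOCK_SIZE = 4096  # bytes

def block_hashes(data):
    """Return one SHA-256 digest per fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old, new):
    """Indexes of blocks that must be sent to bring 'old' up to date."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# Example: a 1 MB image where only a small region was modified.
old_image = bytes(1024 * 1024)
new_image = bytearray(old_image)
new_image[500_000:500_016] = b"x" * 16            # touch one small region
to_send = changed_blocks(old_image, bytes(new_image))

total = len(block_hashes(new_image))
print(f"replicate {len(to_send)} of {total} blocks "
      f"(~{len(to_send) * BLOCK_SIZE // 1024} KB instead of 1,024 KB)")
```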

Gardner: So more of a modern data lifecycle approach?

McKittrick: Absolutely. The backup and recovery approach was built into the solution itself. So we also deployed a separate data archive, but that’s different than backup and recovery. Backup and recovery were essentially handled by VMware and the capability to have the same machine exist in multiple places at the same time.

Gardner: There is more than just the physical approach to IT, as you described it; there is also the budgetary and financial approach. So how do they get the benefit of the OpEx approach that people are fond of in public cloud models and apply that in a private cloud setting?

Budget benefits 

McKittrick: They didn’t really take that approach. I mean we looked at it. We looked at essentially leasing. We looked at the pay-as-you-go models and it didn’t work for them. We ended up doing essentially a purchase of the equipment with a depreciation schedule and traditional support. It was analyzed, and they essentially said, “No, we are just going to buy it.”

Gardner: So total cost of ownership (TCO) is a better metric to look at. Did you have the ability to measure that? What were some of the metrics of success other than this massive consolidation of footprint and better control over management?

McKittrick: We had to justify TCO relative to what a traditional IT refresh would have cost. That’s what I was working on for the client until the cost pressure came to bear. We then needed to change our thinking. That’s when hyper-convergence came through.

What we would have spent on just hardware and infrastructure costs, not including network and bandwidth, would have been $55 million over five years, and we ended up doing it for $15 million.

The cost analysis was already done, because I was already costing it with a refresh, including compute and traditional SAN storage. The numbers I had over a five-year period (just what we would have spent on hardware and infrastructure costs, not including network and bandwidth) came to $55 million, and we ended up doing it for $15 million.

Gardner: We have mentioned HCI several times, but you were specifically using SimpliVity, which is now part of Hewlett Packard Enterprise (HPE). Tell us about why SimpliVity was a proof-point for you, and why you think that’s going to strengthen HPE’s portfolio.

Learn How to Transform To A Hybrid IT Environment

McKittrick: This thing is now built and running, and it’s been two years since inception. So that’s a long time in technology, of course. The major factors involved were the cost savings.

As for HPE going forward, the way the client looked at it — and he is a very forward-thinking technologist — he always liked to say, “It’s just VMware.” So the beauty of it from their perspective was that they could just deploy on VMware virtualization. Everyone in their organization knows how to work with VMware; they just deploy that and move things around. Everything is managed in that fashion, as virtual machines, as opposed to traditional storage and all the other layers of things that have to be involved in traditional data centers.

The HCI-based data centers also included built-in WAN optimization, built-in backup and recovery, and were largely on solid-state disks (SSDs). All of the other pieces of the hardware stack that you would traditionally have — from the server on down — folded into a little box, so to speak, a physical box. With HCI, you get all of that functionality in a much simpler and much easier to manage fashion. It just makes everything easier.

Gardner: When you bring all those HCI elements together, it really creates a solution. Are there any other aspects of HPE’s portfolio, in addition now to SimpliVity, that would be of interest for future projects?

McKittrick: HPE is able to take this further. You have to remember, at the time, SimpliVity was a widget, and they would partner with the server vendors and with VMware. That was really it.

Now with HPE, SimpliVity can really build out their roadmap. There is all kinds of innovation that’s going to come.

Now with HPE, SimpliVity has behind them one of the largest technology companies in the world. They can really build out their roadmap. There is all kinds of innovation that’s going to come. When you then pair that with things like Microsoft Azure Stack and HPE Synergy and its composable architecture — yes, all of that is going to be folded right in there.

I give HPE credit for having seen what HCI technology can bring to them and can help them springboard forward, and then also apply it back into things that they are already developing. Am I going to have more opportunity with this infrastructure now because of the SimpliVity acquisition? Yes.

Gardner: For those organizations that want to take advantage of public cloud options, also having HCI-powered hybrid clouds, composability, and automated bursting and scale-out — and soon combining that with multi-cloud options via HPE New Stack — this gives them the best of all worlds.

Learn How to Transform To A Hybrid IT Environment

McKittrick: Exactly. There you are. You have your hybrid cloud right there. And certainly one could do that with traditional IT, and still have that capability that HPE has been working on. But now, [with SimpliVity HCI] you have just consolidated all of that down to a relatively simple hardware approach. You can now quickly deploy and gain all those hybrid capabilities along with it. And you have the mobility of your applications and workloads, and all of that goodness, so that you can decide where you want to put this stuff.

Gardner: Before we sign off, let’s revisit this notion of those organizations that have to have a private cloud. What words of advice might you give them as they pursue such dramatic re-architecting of their entire IT systems?

A people-first process

McKittrick: Great question. The technology was the easy part. This was my first global HCI roll out, and I have been in the business well over 20 years. The differences come when you are messing with people — moving their cheese, and messing with their rice bowl. It’s profound. It always comes back to people.

The people and process were the hardest things to deal with, and quite frankly, still are. Make sure that everybody is on-board. They must understand what’s happening, why it’s happening, and then you try to get all those people pulling in the same direction. Otherwise, you end up in a massive morass and things don’t get done, or they become almost unmanageable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Posted in Cloud computing, Cyber security, data center, Data center transformation, Enterprise architect, enterprise architecture, Enterprise transformation, Hewlett Packard Enterprise | Tagged , , , , , , , , , | Leave a comment

Inside story on HPC’s AI role in Bridges ‘strategic reasoning’ research at CMU

The next BriefingsDirect high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable — even using imperfect information.

We’ll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intense systems architectures.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. 

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about strategic reasoning and why imperfect information is often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can’t just optimize as if you were the only actor — because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

Sandholm

That’s what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning — not all of it, but most of it until about 12 years ago — was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don’t really model real business situations very well. Most business situations are of imperfect information.

So you don’t know the other guy’s resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent’s mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don’t always know the rules. All the variables are dynamic, and we don’t know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig in to how is this being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Heads-Up No-Limit Texas Hold’em has become the leading benchmark in the AI community.

Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold’em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms have really nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold’em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the 161 different situations that a player can face. That is one followed by 161 zeros. And if you think about that, it’s not only more than the number of atoms in the universe, but even if, for every atom in the universe, you have a whole other universe and count all those atoms in those universes — it will still be more than that.

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly does the algorithmic process and platform learn? I imagine that being reactive, creating a pattern that creates better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it’s not really the only part here — or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses.

However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don’t have that many holes to exploit and they are experts at counter-exploiting. When you start to exploit opponents, you typically open yourself up for exploitation, and we didn’t want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”

One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

All three of these modules run on the Bridges supercomputer at the Pittsburgh Supercomputing Center (PSC), for which the hardware was built by Hewlett Packard Enterprise (HPE).
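
As a toy illustration of what approximating a Nash equilibrium from the rules alone looks like, here is regret matching in self-play on rock-paper-scissors, a standard textbook method. It is emphatically not the Libratus algorithm, which relies on far more sophisticated counterfactual-regret techniques at an astronomically larger scale, but it shows the basic loop: play, measure regret against what you could have done, and shift probability toward the less-regretted actions.

```python
# Toy example: approximating a Nash equilibrium by regret matching in
# self-play on rock-paper-scissors.  Purely illustrative -- not Libratus.
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
ROW_PAYOFF = [[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]]  # payoff to player 0; player 1 receives the negative

def payoff(player, my_action, opp_action):
    if player == 0:
        return ROW_PAYOFF[my_action][opp_action]
    return -ROW_PAYOFF[opp_action][my_action]

def strategy_from(regret):
    """Regret matching: play actions in proportion to their positive regret."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [x / total for x in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regret = [[0.0] * ACTIONS for _ in range(2)]
strategy_sum = [[0.0] * ACTIONS for _ in range(2)]

for _ in range(100_000):
    strategies = [strategy_from(regret[p]) for p in range(2)]
    actions = [random.choices(range(ACTIONS), weights=strategies[p])[0] for p in range(2)]
    for p in range(2):
        opp = actions[1 - p]
        realized = payoff(p, actions[p], opp)
        for a in range(ACTIONS):
            regret[p][a] += payoff(p, a, opp) - realized
            strategy_sum[p][a] += strategies[p][a]

for p in range(2):
    total = sum(strategy_sum[p])
    print(f"player {p} average strategy:",
          [round(x / total, 3) for x in strategy_sum[p]])  # approaches ~[0.333, 0.333, 0.333]
```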

HPC from HPE Overcomes Barriers To Supercomputing and Deep Learning

Gardner: Is this being used in any business settings? It certainly seems like there’s potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you’re describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we have just reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don’t want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can’t have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don’t, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we’re seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That’s the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world — but that is far in the future.

Gardner: Looking back at what is enabling this, clearly the software, the algorithms, and finding the right benchmark (in this case the poker game) are essential. But with that large a potential data set — the probability space you mentioned — the underlying computer systems need to keep up. Where are you in terms of the thresholds that hold you back? Is it a price issue? Is it a performance limit, the amount of time required? What are the limits, the governors to continuing?

Sandholm: It’s all of the above, and we are very fortunate that we had access to Bridges; otherwise this wouldn’t have been possible at all.  We spent more than a year and needed about 25 million core hours of computing and 2.6 petabytes of data storage.

This amount is necessary to conduct serious absolute superhuman research in this field — but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.

Gardner: Let’s examine the commercialization potential of this. You’re not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.

HPC from HPE Overcomes Barriers To Supercomputing and Deep Learning

Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how you price strategically, taking into account the opponents’ strategic responses in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let’s say you’re a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.
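
A toy version of that “pricing into the future” idea might look like the sketch below. The demand model, parameters, and price grid are invented for illustration and have nothing to do with Strategic Machine’s technology; the point is simply that each firm computes a best response to its rival’s current price, and the pair settles at an equilibrium rather than at the price either firm would pick if it assumed the rival were fixed.

```python
# Toy duopoly pricing game solved by iterated best response.  Demand
# parameters are invented; this only illustrates pricing that anticipates
# a rival's reply rather than treating the rival's price as fixed.

PRICES = [round(p * 0.5, 1) for p in range(2, 41)]   # candidate prices 1.0 .. 20.0

def demand(own_price, rival_price, a=100.0, b=8.0, c=4.0):
    """Linear demand: a lower own price and a higher rival price both help."""
    return max(a - b * own_price + c * rival_price, 0.0)

def profit(own_price, rival_price, unit_cost=2.0):
    return (own_price - unit_cost) * demand(own_price, rival_price)

def best_response(rival_price):
    return max(PRICES, key=lambda p: profit(p, rival_price))

p1, p2 = 20.0, 20.0              # start from naive "ignore the rival" prices
for _ in range(50):              # iterate best responses until they settle
    new_p1, new_p2 = best_response(p2), best_response(p1)
    if (new_p1, new_p2) == (p1, p2):
        break
    p1, p2 = new_p1, new_p2

print(f"equilibrium prices: firm 1 = {p1}, firm 2 = {p2}")
print(f"profits at equilibrium: {profit(p1, p2):.1f} and {profit(p2, p1):.1f}")
```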

Gardner: And these are some of the most difficult problems that businesses face. They have huge, billion-dollar investments that they need to line up behind these types of decisions. Because of that pipeline, by the time they reach a dynamic environment where they can reassess, it’s often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

If you think about machine learning traditionally, it’s about learning from the past. But strategic reasoning is all about figuring out what’s going to happen in the future.

Sandholm: Exactly! If you think about machine learning traditionally, it’s about learning from the past. But strategic reasoning is all about figuring out what’s going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. For example, cyber security has several applications, such as zero-day vulnerabilities. You can run your custom algorithms and standard algorithms to find them, and what algorithms you should run depends on what the other opposing governments run — so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that’s also a very strategic application.

And in upstairs block trading, in finance, it’s the same thing: a few players, very big, very strategic.

Gaming your own immune system

Sandholm: The most radical application is something that we are working on currently in the lab, where we are doing medical treatment planning using these types of sequential planning techniques. We’re actually testing how well one can steer a patient’s T-cell population to fight cancers, autoimmune diseases, and infections better by not just using one short treatment plan — but through sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent — but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we’ve heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you’re keeping a keen eye on?

HPC from HPE Overcomes Barriers To Supercomputing and Deep Learning

Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I’m looking at it from the outside. I’m trusting that they will continue to build the best hardware and maintain it in the best way — so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well.

Sandholm: Actually here in the live game in Las Vegas they don’t allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don’t put our AI in there; it’s against their site rules. Also, I think it’s unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Posted in artificial intelligence, big data, Business intelligence, Cloud computing, data analysis, Data center transformation, enterprise architecture, Hewlett Packard Enterprise, machine learning, Software-defined storage | Tagged , , , , , , , , , , | Leave a comment

Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcomes

The next BriefingsDirect healthcare transformation use-case discussion focuses on how an ecosystem approach to big data solutions brings about improved healthcare informatics-driven outcomes.

We’ll now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients by managing data and intelligence better.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Joining us to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?

Heemskerk: It’s exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution — or the total value proposition — for our customers.

Gardner: I’m sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?

Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar.

Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.

Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed architecture scale. Please explain how big data changes healthcare.

Big data value add

Heemskerk: What is very interesting is what happens when you combine big data with value-based care. For example, nowadays a hospital is not reimbursed for every procedure that it performs; the value is based more on the total outcome of how a patient recovers.

This means that more analytics need to be gathered across different elements of the process chain before reimbursement takes place. In that sense, analytics become very important for hospitals to measure how efficiently things are being done and to determine whether the costs are acceptable.

Gardner: The same data that can be used to become more efficient can also be used for better healthcare outcomes and for understanding the path of a disease, or the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.

Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kind of different things.

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and how does HPE fill their role on the technology side?

Healthy hardware relationships 

Heemskerk: HPE has been a highly valued supplier of Philips for quite a long time. We use their technologies for all kinds of different clinical solutions. For example, all of the hardware that we use for our back-end solutions or for advanced visualization is sourced from HPE. I am focusing very much on the commercial side of the game, so to speak, where we are really looking at how we can jointly go to market.

As I said, customers are really looking for one-stop shopping, a complete value proposition, for the challenges that they are facing. That’s why we partner with HPE on a holistic level.

Gardner: Does that involve bringing HPE into certain accounts and vice versa, and then going in to provide larger solutions together?

Heemskerk: Yes, that is exactly the case, indeed. We recognized that we should not focus only on the clinical side of the problem, and not only on HPE’s side of it — the IT infrastructure and the connectivity side of the value chain. Instead, we are really looking at the problems that the C-suite-level healthcare executives are facing.

How do you align all of your processes so that there is a more optimized process flow within the hospitals?

You can think about healthcare industry consolidation, for example, as a big topic. Many hospitals are now moving into a cluster or into a network, and that creates all kinds of challenges, both on the clinical application layer and on the IT infrastructure. How do you harmonize all of this? How do you standardize all of your different applications? How do you make sure that hospitals are going to be connected? How do you align all of your processes so that there is a more optimized process flow within the hospitals?

By addressing these kinds of questions and jointly going to our customers with HPE, we can improve the user experience for customers, create better services, optimize the solutions, and deliver a lot of time savings for the hospitals as well.

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Gardner: We have certainly seen in other industries that if you try IT modernization without including the larger organization — the people, the process, and the culture — the results just aren’t as good. It is important to go at modernization and transformation, consolidation of data centers, for example, with that full range of inputs and getting full buy-in.

Who else makes up the ecosystem? It takes more than two players to make an ecosystem.

Heemskerk: Yes, that’s very true, indeed. In this, system integrators also have a very important role. They can have an independent view on what would be the best solution to fit a specific hospital.

Of course, we think that the Philips healthcare solutions are quite often the best, jointly focused with the solutions from HPE, but from time to time you can be partnering with different vendors.

Besides that, we don’t have all of the clinical applications ourselves. By partnering with other vendors in the ecosystem, we can sometimes enhance the solutions that we offer; think of 3D solutions and 3D printing solutions.

Gardner: When you do this all correctly, when you leverage and exploit an ecosystem approach, when you cover the bases of technology, finance, culture, and clinical considerations, how much of an impressive improvement can we typically see?

Saving time, money, and people

Heemskerk: We try to look at it customer by customer, but generically what we see is that there are really a lot of savings.

First of all, addressing standardization across the clinical application layer means that a customer doesn’t have to spend a lot of money on training all of its hospital employees on different kinds of solutions. So that’s already a big savings.

Secondly, by harmonizing and making more effective use of the clinical applications, you can drive the total cost of ownership down.

Thirdly, it means that on the clinical applications layer, there are a lot of efficiency benefits possible. For example, advanced analytics make it possible to reduce the time that clinicians or radiologists are spending on analyzing different kinds of elements, which also creates time savings.

Gardner: Looking more to the future, as technologies improve, as costs go down, as they typically do, as hybrid IT models are utilized and understood better — where do you see things going next for the healthcare sector when it comes to utilizing technology, utilizing informatics, and improving their overall process and outcomes?

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Heemskerk: What would be very interesting for me to see is whether we can create some kind of patient-centric data file for each patient. You see that consumers are increasingly engaged in their own health, with all the different devices like Fitbit, Jawbone, Apple Watch, etc. coming up. This is creating a massive amount of data. But there is much more data that you can put into such a patient-centric file, such as chronic disease information, now that people are being monitored much more, and much more often.

If you can have a chronological view of all of the different touch points that the patient has in the hospital, combined with the drugs that the patient is using etc., and you have that all in this patient-centric file — it will be very interesting. And everything, of course, needs to be interconnected. Therefore, Internet of Things (IoT) technologies will become more important. And as the data is growing, you will have smarter algorithms that can also interpret that data – and so artificial intelligence (AI) will become much more important.
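
To make the idea concrete, here is a minimal sketch of a patient-centric record that merges touch points from hospital systems, pharmacies, and consumer wearables into one chronological timeline. It is illustrative only; the field names and sources are hypothetical, not part of any Philips or HPE product.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class TouchPoint:
    """One event in the patient's history: a hospital visit, a drug
    prescription, or a reading from a consumer device such as a wearable."""
    timestamp: datetime
    source: str        # e.g. "hospital-ehr", "wearable", "pharmacy"
    kind: str          # e.g. "lab-result", "heart-rate", "prescription"
    payload: dict

@dataclass
class PatientRecord:
    """A patient-centric file: every touch point kept in chronological order."""
    patient_id: str
    touch_points: List[TouchPoint] = field(default_factory=list)

    def add(self, tp: TouchPoint) -> None:
        self.touch_points.append(tp)
        self.touch_points.sort(key=lambda t: t.timestamp)

    def timeline(self) -> List[TouchPoint]:
        return list(self.touch_points)

record = PatientRecord(patient_id="patient-0001")
record.add(TouchPoint(datetime(2017, 5, 2, 9, 30), "hospital-ehr",
                      "lab-result", {"glucose_mmol_l": 6.1}))
record.add(TouchPoint(datetime(2017, 5, 1, 7, 0), "wearable",
                      "heart-rate", {"bpm": 58}))
record.add(TouchPoint(datetime(2017, 5, 3, 14, 0), "pharmacy",
                      "prescription", {"drug": "metformin", "dose_mg": 500}))

for tp in record.timeline():
    print(tp.timestamp.isoformat(), tp.source, tp.kind, tp.payload)
```

Smarter algorithms, the AI Heemskerk mentions, would then operate over exactly this kind of chronological view rather than over isolated systems.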

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe

The next BriefingsDirect cloud ecosystem strategies interview explores how a Canadian software provider delivers a hybrid cloud platform for enterprises and service providers alike.

We’ll now learn how Ormuco has identified underserved regions and has crafted a standards-based hybrid cloud platform to allow its users to attain world-class cloud services just about anywhere.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to help us explore how new breeds of hybrid cloud are coming to more providers around the globe thanks to the Cloud28+ consortium is Orlando Bayter, CEO and Founder of Ormuco in Montréal, and Xavier Poisson Gouyou Beachamps, Vice President of Worldwide Indirect Digital Services at Hewlett Packard Enterprise (HPE), based in Paris. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s begin with this notion of underserved regions. Orlando, why is it that many people think that public cloud is everywhere for everyone when there are many places around the world where it is still immature? What is the opportunity to serve those markets?

Bayter: There are many countries underserved by the hyperscale cloud providers. If you look at Russia, the United Arab Emirates (UAE), and other countries around the world, they want to comply with regulations on security and data sovereignty, and they need to have clouds locally to do that.

 

Bayter

Ormuco targets those countries that are underserved by the hyperscale providers and enables service providers and enterprises to consume cloud locally, in ways they can’t do today.

Gardner: Are you allowing them to have a private cloud on-premises as an enterprise? Or do local cloud providers offer a common platform, like yours, so that they get the best of both the private and public hybrid environment?

Bayter: That is an excellent question. There are many workloads that cannot leave the firewall of an enterprise. With that, you now need to deliver the economies, ease of use, flexibility, and orchestration of a public cloud experience in the enterprise. At Ormuco, we deliver a platform that provides the best of the two worlds. You are still leaving your data center and you don’t need to worry whether it’s on-premises or off-premises.

It’s a single pane of glass. You can move the workloads in that global network via established providers throughout the ecosystem of cloud services.


Gardner: What are the attributes of this platform that both your enterprise and service provider customers are looking for? What’s most important to them in this hybrid cloud platform?

Bayter: As I said, there are some workloads that cannot leave the data center. In the past, you couldn’t get the public cloud inside your data center. You could have built a private cloud, but you couldn’t get an Amazon Web Services (AWS)-like solution or a Microsoft Azure-like solution on-premises.

We have been running this now for two years, and what we have noticed is that enterprises want to have the ease of use, self-service, and orchestration on-premises. Now, they can connect to a public cloud based on the same platform, and they don’t have to worry about how to connect it or how it will work. They just decide where to place the workload.

They have security, can comply with regulations, and gain control — plus 40 percent savings compared with VMware, and up to 50 percent to 60 percent compared with AWS.

Gardner: I’m also interested in the openness of the platform. Do they have certain requirements as to the cloud model, such as OpenStack?  What is it that enables this to be classified as a standard cloud?

Bayter: At Ormuco, we went out and checked what are the best solutions and the best platform that we can bring together to build this experience on-premises and off-premises.

We saw OpenStack, we saw Docker, and then we saw how to take, for example, OpenStack and make it like a public cloud solution. So if you look at OpenStack, the way I see it is as concrete, or a foundation. If you want to build a house or a condo on that, you also need the attic. Ormuco builds that software to be able to deliver that cloud look and feel, that self-service, all in open tools, with the same APIs both on private and public clouds.
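
As a rough illustration of the “same APIs on private and public clouds” point, the sketch below uses the OpenStack SDK to boot an identical server against two differently placed clouds. The cloud names, image, and flavor are hypothetical entries that would live in the operator’s own clouds.yaml; this is not Ormuco’s actual code.

```python
import openstack

def boot_server(cloud_name: str, name: str) -> None:
    # The same OpenStack API call works whether "cloud_name" points at an
    # on-premises private cloud or a provider's public region; only the
    # entry in clouds.yaml differs.
    conn = openstack.connect(cloud=cloud_name)
    image = conn.compute.find_image("ubuntu-16.04")    # hypothetical image name
    flavor = conn.compute.find_flavor("m1.small")      # hypothetical flavor name
    server = conn.compute.create_server(
        name=name, image_id=image.id, flavor_id=flavor.id)
    conn.compute.wait_for_server(server)
    print(f"{name} is running on {cloud_name}")

# Identical code path, different placement decision:
boot_server("private-onprem", "app-db-01")
boot_server("public-region-eu", "app-web-01")
```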

Learn How Cloud28+ Provides an Open Community of Cloud Service Providers

Gardner: What is it about the HPE platform beneath that that supports you? How has HPE been instrumental in allowing that platform to be built?

Community collaboration

Bayter: HPE has been a great partner. Through Cloud28+ we are able to go to market in places where HPE has a presence. They basically support that through marketing and through sales. They have been able to bring deals to us and help us grow our business.

From a technology perspective, we are using HPE Synergy. With Synergy, we can provide composability, and we can combine storage and compute into a single platform. Now we go together into a market, we win deals, and we solve the enterprise challenges around security and data sovereignty.

Gardner: Xavier, how is Cloud28+ coming to market, for those who are not familiar with it? Tell us a bit about Cloud28+ and how an organization like Ormuco is a good example of how it works.

Poisson: Cloud28+ is a community of IT players — service providers, technology partners, independent software vendors (ISVs), value added resellers, and universities — that have decided to join forces to enable digital transformation through cloud computing. To do that, we pull our resources together to have a single platform. We are allowing the enterprise to discover and consume cloud services from the different members of Cloud28+.

We launched Cloud28+ officially to the market on December 15, 2016. Today, we have more than 570 members from across the world inside Cloud28+. Roughly 18,000 distributed services may be consumed and we also have system integrators that support the platform. We cover more than 300 data centers from our partners, so we can provide choice.

In fact, we believe our customers need to have that choice. They need to know what is available for them. As an analogy, if you have your smartphone, you can have an app store and do what you want as a consumer. We wanted to do the same and provide the same ease for an enterprise globally anywhere on the planet. We respect diversity and what is happening in every single region.

Ormuco has been one of the first technology partners. Docker is another one. And Intel is another. They have been working together with HPE to really understand the needs of the customer and how we can deliver very quickly a cloud infrastructure to a service provider and to an enterprise in record time. At the same time, they can leverage all the partners from the catalog of content and services, propelled by Cloud28+, from the ISVs.

Global ecosystem, by choice

Because we are bringing together a global ecosystem, including the resellers, if a service provider builds a project through Cloud28+, with a technology partner like Ormuco, then all the ISVs are included. They can push their services onto the platform, and all the resellers that are part of the ecosystem can convey onto the market what the service providers have been building.

We have a lot of collaboration with Ormuco to help them to design their solutions. Ormuco has been helping us to design what Cloud28+ should be, because it’s a continuous improvement approach on Cloud28+ and it’s via collaboration.

If you want to join Cloud28+ to take, don’t come. If you want to give, and take a lot afterward, yes, please come, because we all receive a lot.

As I like to say, “If you want to join Cloud28+ to take, don’t come. If you want to give, and take a lot afterward, yes, please come, because we all receive a lot.”

Gardner: Orlando, when this all works well, what do your end-users gain in terms of business benefits? You mentioned reduction in costs, and that’s very important, of course. But is there more about your platform, from a development perspective and an operational perspective, that we can share to encourage people to explore it?

Bayter: So imagine yourself with an ecosystem like Cloud28+. They have 500 members. They have multiple countries, many data centers.

Now imagine that you can have the Ormuco solution on-premises in an enterprise and then be able to burst to a global network of service providers, across all those regions. You get the same performance, you get the same security, and you get the same compliance across all of that.

As an end-customer, you don’t need to think anymore about where you’re going to put your applications. They can go to the public cloud or to the private cloud; it is agnostic. You basically place them where you want them to go and decide the economies you want to get. You can compare with the hyperscale providers.

That is the key, you get one platform throughout our ecosystem of partners that can deliver to you that same functionality and experience locally. With a community such as Cloud28+, we can accomplish something that was not possible before.

Gardner: So, just hoping to delineate between the development and then the operations in production. Are you offering the developer an opportunity to develop there and seamlessly deploy, or are you more focused on the deployment after the applications are developed, or both?

Development to deployment 

Bayter: With our solution, just as AWS or Azure allows, a developer can develop their app via APIs in an automated way, use a database of choice (it could be MySQL or Oracle), and use the load balancing and the different features we have in the cloud, whether it’s Kubernetes or Docker, and build all of that — and then, when the application is ready, you can decide in which region you want to deploy the application.

So you go from development to a deployment technology of your choice, whether it’s Docker or Kubernetes, and then you can deploy to the global network that we’re building on Cloud28+. You can go to any region, and you don’t have to worry about how to get a service provider contract in Russia, or how to get a contract in Brazil, or who is going to provide the service. Now you can get that service locally through a reseller or a distributor, or have an ISV deploy the software worldwide.
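
A hedged sketch of that final placement step, using the official Kubernetes Python client: the same container image is deployed, and only the kube-context, standing in for a hypothetical Cloud28+ region, changes. Cluster and registry names are made up for illustration.

```python
from kubernetes import client, config

def deploy(context: str, image: str) -> None:
    # "context" selects which cluster/region in ~/.kube/config to target,
    # e.g. a service provider in Brazil versus one in Russia (names hypothetical).
    config.load_kube_config(context=context)
    container = client.V1Container(
        name="web", image=image,
        ports=[client.V1ContainerPort(container_port=8080)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template)
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"), spec=spec)
    client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                    body=deployment)

# Same application, two placement decisions:
deploy(context="provider-brazil", image="registry.example.com/myapp:1.0")
deploy(context="provider-russia", image="registry.example.com/myapp:1.0")
```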

Gardner: Xavier, what other sorts of organizations should be aware of the Cloud28+ network?

Learn How Cloud28+ Provides an Open Community of Cloud Service Providers

We accelerate go-to-market for startups; they gain immediate global reach with Cloud28+.

Poisson: We have the technology partners like Ormuco, and we are thankful for what they have brought to the community. We have service providers, of course, and software vendors, because you can publish your software in Cloud28+ and provision it on-premises or off-premises. We accelerate go-to-market for startups; they gain immediate global reach with Cloud28+. So to all the ISVs, I say, “Come on, come on guys, we will help you reach out to the market.”

System integrators, also, because we see this as an opportunity for large enterprises and governments with a lot of multi-cloud projects taking shape, with requirements for security. And you know what is happening with security today; it’s a hot topic. So people are thinking about how they can have a multi-cloud strategy. System integrators are now turning to Cloud28+ because they find here a reservoir of capabilities for finding the right solution to answer the right question.

Universities are another kind of member we are working with. Just to explain, we know that all the technologies are created first at the university and then they evolve. All the startups are starting at the university level. So we have some very good partnerships with some universities in several regions in Portugal, Germany, France, and the United States. These universities are designing new projects with members of Cloud28+, to answer questions of the governments, for example, or they are using Cloud28+ to propel the startups into the market.

Ormuco is also helping to change the business model of distribution. So distributors now also are joining Cloud28+. Why? Because a distributor has to make a choice for its consumers. In the past, a distributor had software inventory that they were pushing to the resellers. Now they need to have an inventory of cloud services.

There is more choice. They can purchase hyperscale services and resell them, or source from the different members of Cloud28+, according to the country they want to deliver to. Or they can own the platform using the technology of Ormuco, for example, and put that in a white-label model for the reseller to propel it into the market. This is what Azure is doing in Europe, typically. So new kinds of members and models are coming in.

Digital transformation

Lastly, an enterprise can use Cloud28+ to make their digital transformation. If they have services and software, they can become a supplier inside of Cloud28+. They source cloud services inside a platform, do digital transformation, and find a new go-to-market through the ecosystem to propel their offerings onto the global market.

Gardner: Orlando, do you have any examples that you could share with us of a service provider, ISV or enterprise that has white-labeled your software and your capabilities as Xavier has alluded to? That’s a really interesting model.

Bayter: We have been able to go to market in countries where Cloud28+ was a tremendous help. If you look at Western Europe, Xavier was just speaking about the Microsoft Azure example. They chose our platform, and we are deploying it in Europe, making it available to the resellers to help them transform their consumption models.

They provide public cloud and they serve many markets. They provide a community cloud for governments and they provide private clouds for enterprises — all from a single platform.

If you look at the Europe, Middle East and Africa (EMEA) region, we have one of the largest managed service providers. They provide public cloud and they serve many markets. They provide a community cloud for governments and they provide private clouds for enterprises — all from a single platform.

We also have several of the largest telecoms in Latin America (LATAM) and EMEA. We have a US presence, where we have Managed.com as a provider. So things are going very well and it is largely thanks to what Cloud28+ has done for us.

Gardner: While this consortium is already very powerful, we are also seeing new technologies coming to the market that should further support the model. Such things as HPE New Stack, which is still in the works, HPE Synergy’s composability and auto-bursting, along with security now driven into the firmware and the silicon — it’s almost as if HPE’s technology roadmap is designed for this very model, or very much in alignment. Tell us how new technology and the Cloud28+ model come together.

Bayter: So HPE New Stack is becoming the control point of multi-cloud. Now what happens when you want to have that same experience off-premises and on-premises? New Stack could connect to Ormuco as a resource provider, even as it connects to other multi-clouds.

With an ecosystem like Cloud28+ all working together, we can connect those hybrid models with service providers to deliver that experience to enterprises across the world.

Learn How Cloud28+ Provides an Open Community of Cloud Service Providers

Gardner: Xavier, anything more in terms of how HPE New Stack and Cloud28+ fit?

Partnership is top priority

Poisson: It’s a real collaboration. I am very happy with that because I have been working a long time at HPE, and New Stack is a project that has been driven by thinking about the go-to-market at the same time as the technology. It’s a big reward to all the Cloud28+ partners because they are now de facto considered as resource providers for our end-user customers – same as the hyperscale providers, maybe.

At HPE, we say we are in partnership first — with our partners, or ecosystem, or channel. I believe that what we are doing with Cloud28+, New Stack, and all the other projects that we are describing – this will be the reality around the world. We deliver on-premises for the channel partners.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


How Nokia refactors the video delivery business with new time-managed IT financing models

The next BriefingsDirect IT financing and technology acquisition strategies interview examines how Nokia is refactoring the video delivery business. Learn both about new video delivery architectures and the creative ways media companies are paying for the technology that supports them.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe new models of Internet Protocol (IP) video and time-managed IT financing is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary for rethinking architectures? How are pricing models and business models changing, too?

Larbey: We sit here in 2017, but let’s look back 10 years to 2007. There were a couple key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.

Larbey

It’s been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device types, leading eventually to the iPad. Not only that, Apple developed a lot of the underlying technology for how you stream video and how you protect video over IP, which we still use today. Not only did they create a new device type and a new avenue for us to watch video, they also created new underlying protocols.

It was also 10 years ago that Netflix began to first offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.

If we fast-forward and look at where that goes in the future, there are two trends we see today that will create challenges tomorrow. Video has become truly mobile. When we talk about mobile video today, most people mean watching films on an iPad or an iPhone, not on a big TV screen.

The future is personalized

When you can take your video with you, you want to take all your content with you. You can’t do that today. That has to happen in the future. When you are on an airplane, you can’t take your content with you. You need connectivity to extend so that you can take your content with you no matter where you are.

Take the simple example of a driverless car. Now, you are driving along and you are watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what are you going to be doing? You are still going to be keeping the kids quiet, but there is a void, a space that needs to be filled with activity, and clearly extending the content into the car is the natural next step.

And the final challenge is around personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are all on the same service provider, it looks the same — it’s the same color, it’s the same grid. There is no reason why that should all be the same. There is no reason why my kids shouldn’t have a different user interface.

There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch.

The user interface presented to me in the morning may be different than the user interface presented to me in the evening. There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch. Why aren’t all those channels specifically curated for me? That’s what we mean by personalization. So if you put those all together and extrapolate those 10 years into the future, then 2027 will be a very different place for video.

Gardner: It sounds like a few things need to change between the original content’s location and those mobile screens and those customized user scenarios you just described. What underlying architecture needs to change in order to get us to 2027 safely?

Larbey: It’s a journey; this is not a step-change. This is something that’s going to happen gradually.

But if you step back and look at the fundamental changes — all video will be streamed. Today, the majority of what we view is via broadcasting, from cable TV, or from a satellite. It’s a signal that’s going to everybody at the same time.

If you think about the mobile video concept, if you think about personalization, that is not going be the case. Today we watch a portion of our video streamed over IP. In the future, it will all be streamed over IP.

And that clearly creates challenges for operators in terms of how to architect the network, how to optimize the delivery, and how to recreate that broadcast experience using streaming video. This is where a lot of our innovation is focused today.

Gardner: You also mentioned in the case of an airplane, where it’s not just streaming but also bringing a video object down to the device. What will be different in terms of the boundary between the stream and a download?

IT’s all about intelligence

Larbey: It’s all about intelligence. Firstly, connectivity has to extend and become really ubiquitous via technology such as 5G. The increase in fiber technology will dramatically enable truly ubiquitous connectivity, which we don’t really have today. That will resolve some of the problems, but not all.

But, because television will be personalized, the network will know what’s in my schedule. If I have an upcoming flight, machine learning can automatically predict what I’m going to do and make sure it suggests the right content in context. It may download the content because it knows I am going to be sitting on a flight for the next 12 hours.
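
In toy form, the kind of rule Larbey describes might look like the sketch below. The calendar and watch-list inputs are hypothetical, and a real system would use learned models rather than a simple time-window test.

```python
from datetime import datetime, timedelta

# Hypothetical inputs a network-side recommender might already hold.
schedule = [{"event": "Flight LHR-JFK", "start": datetime(2017, 11, 3, 9, 0),
             "duration_hours": 8, "offline": True}]
watch_list = ["series-episode-101", "series-episode-102", "documentary-42"]

def plan_downloads(now: datetime, horizon_hours: int = 24) -> list:
    """Pick content to pre-download before a known offline period."""
    downloads = []
    for entry in schedule:
        starts_soon = now <= entry["start"] <= now + timedelta(hours=horizon_hours)
        if entry["offline"] and starts_soon:
            # Roughly one title per hour offline (an assumption for the toy).
            downloads = watch_list[: entry["duration_hours"]]
    return downloads

print(plan_downloads(datetime(2017, 11, 2, 20, 0)))
# ['series-episode-101', 'series-episode-102', 'documentary-42']
```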

Gardner: We are putting intelligence into the network to be beneficial to the user experience. But it sounds like it’s also going to give you the opportunity to be more efficient, with just-in-time utilization — minimal viable streaming, if you will.

How does the network becoming more intelligent also benefit the carriers, the deliverers of the content, and even the content creators and owners? There must be an increased benefit for them on utility as well as in the user experience?

Larbey: Absolutely. We think everything moves into the network, and the intelligence becomes the network. So what does that do immediately? That means the operators don’t have to buy set-top boxes. They are expensive. They are very costly to maintain. They stay in the network a long time. They can have a much lighter client capability, which basically just renders the user interface.

The first obvious example of all this, that we are heavily focused on, is the storage. So taking the hard drive out of the set-top box and putting that data back into the network. Some huge deployments are going on at the moment in collaboration with Hewlett Packard Enterprise (HPE) using the HPE Apollo platform to deploy high-density storage systems that remove the need to ship a set-top box with a hard drive in it.

HPE Rethinks How to Acquire, Pay For, and Use IT

Now, what are the advantages of that? Everybody thinks of cost first: you’ve taken the hard drive out and you have the storage in the network, and that’s clearly one element. But actually, if you talk to any operator, their biggest cause of subscriber churn is when somebody’s set-top box fails and they lose their personalized recordings.

The personal connection you had with your service isn’t there any longer. It’s a lot easier to then look at competing services. So if that content is in the network, then clearly you don’t have that churn issue. Not only can you access your content from any mobile device, it’s protected and it will always be with you.

Taking the CDN private

Gardner: For the past few decades, part of the solution to this problem was to employ a content delivery network (CDN) and use that in a variety of ways. It started with web pages and the downloading of flat graphic files. Now that’s extended into all sorts of objects and content. Are we going to do away with the CDN? Are we going to refactor it, is it going to evolve? How does that pan out over the next decade?

Larbey: The CDN will still exist. That still becomes the key way of optimizing video delivery — but it changes. If you go back 10 years, the only CDNs available were CDNs in the Internet. So it was a shared service, you bought capacity on the shared service.

Even today that’s how a lot of video from the content owners and broadcasters is streamed. For the past seven years, we have been taking that technology and deploying it in private networks — with both telcos and cable operators — so they can have their own private CDNs, and there are a lot of advantages to having your own private CDN.

You get complete control of the roadmap. You can start to introduce advanced features such as targeted ad insertion, blackout, and features like that to generate more revenue. You have complete control over the quality of experience, which you don’t if you outsource to a shared service.

There are a lot of advantages to having your own private CDN. You have complete control over the quality of experience which you don’t if you outsource to a shared service.

What we’re seeing now is both the programmers and broadcasters taking an interest in that private CDN because they want the control. Video is their business, so the quality they deliver is even more important to them. We’re seeing a lot of the programmers and broadcasters starting to look at adopting the private CDN model as well.

The challenge is how do you build that? You have to build for peak. Peak is generally driven by live sporting events and one-off news events. So that leaves you with a lot of capacity that’s sitting idle a lot of the time. With cloud and orchestration, we have solved that technically — we can add servers in very quickly, we can take them out very quickly, react to the traffic demands and we can technically move things around.
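
The technical half of that statement, adding and removing cache servers as demand moves, reduces to a control loop like the hedged sketch below; the capacity figure and the provisioning actions are placeholders for whatever orchestrator is actually in use.

```python
import math

SESSIONS_PER_SERVER = 5000   # assumed capacity of one cache node
MIN_SERVERS = 2              # baseline kept warm for ordinary traffic

def target_server_count(concurrent_sessions: int) -> int:
    """Size the CDN edge for current demand, e.g. a live sporting event."""
    return max(MIN_SERVERS, math.ceil(concurrent_sessions / SESSIONS_PER_SERVER))

def reconcile(current_servers: int, concurrent_sessions: int) -> int:
    target = target_server_count(concurrent_sessions)
    if target > current_servers:
        print(f"scale out: add {target - current_servers} cache node(s)")
    elif target < current_servers:
        print(f"scale in: remove {current_servers - target} cache node(s)")
    return target

# Quiet evening versus kickoff of a big match:
servers = reconcile(current_servers=2, concurrent_sessions=4000)          # stays at 2
servers = reconcile(current_servers=servers, concurrent_sessions=180000)  # bursts to 36
```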

But the commercial model has lagged behind. So we have been working with HPE Financial Services to understand how we can innovate on that commercial model as well and get that flexibility — not just from an IT perspective, but also from a commercial perspective.

Gardner:  Tell me about Private CDN technology. Is that a Nokia product? Tell us about your business unit and the commercial models.

Larbey: As a business unit, we basically help anyone who has content — be that broadcasters or programmers — who pay the operators to stream the content over IP and to launch new services. We have a product focused on video networking: how to optimize video, how it’s delivered, how it’s streamed, and how it’s personalized.

It can be a private CDN product, which we have deployed for the last seven years, and we have a cloud digital video recorder (DVR) product, which is all about moving the storage capacity into the network. We also have a systems integration part, which brings a lot of technology together and allows operators to combine vendors and partners from the ecosystem into a complete end-to-end solution.

HPE Rethinks How to Acquire, Pay For, and Use IT

Gardner: With HPE being a major supplier for a lot of the hardware and infrastructure, how does the new cost model change from the old model of pay up-front?

Flexible financial formats

Larbey: I would not classify HPE as a supplier; I think they are our partner. We work very closely together. We use HPE ProLiant DL380 Gen9 Servers, the HPE Apollo platform, and the HPE Moonshot platform, which are, as you know, world-leading compute-storage platforms that deliver these services cost-effectively. We have had a long-term technical relationship.

We are now moving toward how we advance the commercial relationship. We are working with the HPE Financial Services team to look at how we can get additional flexibility. There are a lot of pay-as-you-go-type financial IT models that have been in existence for some time — but these don’t necessarily work for my applications from a financial perspective.

Our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate.

In the private CDN and the video applications, our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate. With the traditional IT payment model for storage, my application fundamentally breaks that. So having a partner like HPE that was flexible and could understand the application is really important.

We also needed flexibility of compute scaling. We needed to be able to deploy for the peak, but not pay for that peak at all times. That’s easy from the software technology side, but we needed it from the commercial side as well.

And thirdly, we have been trying to enter a new market, focused on the programmers and broadcasters, which is not our traditional segment. We have been deploying our CDN to the largest telcos and cable operators in the world, but now we are selling to the programmer and broadcaster segment — they are used to buying a service from the Internet, they work in a different way, and they have different requirements.

So we needed a financial model that allowed us to address that, but also a partner who would take some of the risk, too, because we didn’t know if it was going to be successful. Thankfully it has, and we have grown incredibly well, but it was a risk at the start. Finding a partner like HPE Financial Services who could share some of that risk was really important.

Gardner: These video delivery organizations are increasingly operating on subscription basis, so they would like to have their costs be incurred on a similar basis, so it all makes sense across the services ecosystem.

Our tolerance just doesn’t exist anymore for buffering and we demand and expect the highest-quality video.

Larbey: Yes, absolutely. That is becoming more and more important. If you go back to the very first Internet video you watched, a cat falling off a chair on YouTube, it didn’t matter if it was buffering; that wasn’t relevant. Now, our tolerance for buffering just doesn’t exist anymore, and we demand and expect the highest-quality video.

If TV in 2027 is going to be purely IP, then clearly that has to deliver exactly the same quality of experience as the broadcasting technologies. And that creates challenges. The biggest obvious example is if you go to any IP TV operator and look at their streamed video channel that is live versus the one on broadcast, there is a big delay.

So there is a lag between the live event and what you are seeing on your IP stream, which is 30 to 40 seconds. If you are in an apartment block, watching a live sporting event, and your neighbor sees it 30 to 40 seconds before you, that creates a big issue. A lot of the innovations we’re now doing with streaming technologies are to deliver that same broadcast experience.

HPE Rethinks How to Acquire, Pay For, and Use IT

Gardner: We now also have to think about 4K resolution, the intelligent edge, no latency, and all with managed costs. Fortunately at this time HPE is also working on a lot of edge technologies, like Edgeline and Universal IoT, and so forth. There’s a lot more technology being driven to the edge for storage, for large memory processing, and so forth. How are these advances affecting your organization?

Optimal edge: functionality and storage

Larbey: There are two elements. The compute at the edge is absolutely critical. We are going to move all the intelligence into the network, and clearly you need to reduce the latency and you need to be able to scale that functionality. That functionality used to be spread across millions of households, and now it has to be delivered in the network. The only way you can effectively build the network to handle that scale is to put as much functionality as you can at the edge of the network.

The HPE platforms will allow you to deploy that compute and storage deep into the network, and they are absolutely critical for our success. We will run our CDN, our ad insertion, and all that capability as deeply into the network as an operator wants to go — and certainly the deeper, the better.

The other thing we try to optimize all of the time is storage. One of the challenges with network-based recording — especially in the US, due to content-use regulations — is that you have to store a copy per user. If, for example, both of us record the same program, there are two versions of that program in the cloud. That’s clearly very inefficient.

The question is how do you optimize that, and also support just-in-time transcoding techniques that have been talked about for some time. That would create the right quality of bitrate on the fly, so you don’t have to store all the different formats. It would dramatically reduce storage costs.
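
A minimal sketch of that just-in-time idea: keep one shared mezzanine copy and create a requested rendition on demand, caching it instead of storing every format up front. The ffmpeg flags are illustrative, and a production system would use the hardware-assisted transcoding discussed next rather than a simple subprocess call.

```python
import os
import subprocess

CACHE_DIR = "/var/cache/jit-transcode"   # hypothetical cache location

def get_rendition(mezzanine: str, height: int, kbps: int) -> str:
    """Return a path to the requested rendition, transcoding only if needed."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    base = os.path.splitext(os.path.basename(mezzanine))[0]
    out = os.path.join(CACHE_DIR, f"{base}_{height}p_{kbps}k.mp4")
    if not os.path.exists(out):
        # One shared mezzanine copy; renditions are created on the fly.
        subprocess.run(
            ["ffmpeg", "-i", mezzanine,
             "-vf", f"scale=-2:{height}",
             "-b:v", f"{kbps}k",
             "-c:a", "copy", out],
            check=True)
    return out

# Two viewers asking for different qualities of the same recording:
print(get_rendition("/store/recording123.mp4", height=720, kbps=3000))
print(get_rendition("/store/recording123.mp4", height=1080, kbps=6000))
```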

The challenge has always been the central processing unit (CPU) capacity needed to do that, and that’s where HPE and the Moonshot platform, which has great compute density, come in. We have the Intel media library for doing the transcoding. It’s a really nice storage platform. But we still wanted to get even more out of it, so at our Bell Labs research facility we developed a capability called skim storage, which, for a slight increase in storage, allows us to double the number of transcodes we can do on a single CPU.

That approach takes a really, really efficient hardware platform with nice technology and doubles the density we can get from it — and that’s a big change for the business case.

Gardner: It’s astonishing to think that that much encoding would need to happen on the fly for a mass market; that’s a tremendous amount of compute, and an intense compute requirement.

Content popularity

Larbey: Absolutely, and you have to be intelligent about it. At the end of the day, human behavior works in our favor. If you look at most programs that people record, if they do not watch within the first seven days, they are probably not going to watch that recording. That content in particular then can be optimized from a storage perspective. You still need the ability to recreate it on the fly, but it improves the scale model.

Gardner: So the more intelligent you can be about what the users’ behavior and/or their use patterns, the more efficient you can be. Intelligence seems to be the real key here.

Larbey: Yes, we have a number of algorithms, even within the CDN itself today, that predict content popularity. We want to maximize the disk usage. We want the popular content on the disk, so what’s the point of deleting a piece of popular content just because a piece of long-tail content has been requested? We run a lot of algorithms that look at and try to predict content popularity, so that we can make sure we are optimizing the hardware platform accordingly.
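
One simple way to express “do not evict popular content for a one-off long-tail request” is a frequency-weighted eviction policy. The toy below is an LFU-style illustration under that assumption, not Nokia’s actual algorithm.

```python
from collections import Counter

class PopularityCache:
    """Toy cache that evicts the least-requested item only when it is justified."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}           # content_id -> cached object (placeholder here)
        self.hits = Counter()     # request counts used as a popularity signal

    def request(self, content_id: str):
        self.hits[content_id] += 1
        if content_id in self.store:
            return self.store[content_id]             # cache hit
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda c: self.hits[c])
            # A newcomer must be strictly more popular than the victim,
            # so one long-tail request cannot push out a popular title.
            if self.hits[content_id] <= self.hits[victim]:
                return None                            # serve from origin, don't cache
            del self.store[victim]
        self.store[content_id] = f"<object for {content_id}>"
        return self.store[content_id]

cache = PopularityCache(capacity=2)
for _ in range(5):
    cache.request("popular-show")
cache.request("popular-movie")
print(cache.request("long-tail-title"))   # None: not popular enough to displace
```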

Gardner: Perhaps we can deepen our knowledge of this through some examples. Do you have some examples that demonstrate how your clients and customers are taking these new technologies and making better business decisions that help their cost structure — but also deliver a far better user experience?

In-house control

Larbey: One of our largest customers is Liberty Global, with a large number of cable operators in a variety of countries across Europe. They were enhancing an IP service. They started with an Internet-based CDN and that’s how they were delivering their service. But recognizing the importance of gaining more control over costs and the quality experience, they wanted to take that in-house and put the content on a private CDN.

We worked with them to deliver that technology. One of things that they noticed very quickly, which I don’t think they were expecting, was a dramatic reduction in the number of people calling in to complain because the stream had stopped or buffered. They enjoyed a big decrease in call-center calls as soon as they switched on our new CDN technology, which is quite an interesting use-case benefit.

When they deployed a private CDN, they reached cost payback in less than 12 months.

We do a lot with Sky in the UK, which was also looking to migrate away from an Internet-based CDN service into something in-house so they could take more control over it and improve the users’ quality of experience.

One of our customers in Canada, TELUS, reached cost payback in less than 12 months of deploying a private CDN, in terms of both the network savings and the Internet CDN cost savings.

Gardner: Before we close out, perhaps a look to the future and thinking about some of the requirements on business models as we leverage edge intelligence. What about personalization services, or even inserting ads in different ways? Can there be more of a two-way relationship, or a one-to-one interaction with the end consumers? What are the increased benefits from that high-performing, high-efficiency edge architecture?

VR vision and beyond

Larbey: All of that generates more network traffic — moving from standard definition to high definition to 4K, and beyond 4K. Then you take into account 360-degree video and virtual reality (VR) services, which are a focus for Nokia with our Ozo camera, and it’s clear that the data is just going to explode.

So being able to optimize, and to keep optimizing, with new codec technology and new streaming technologies — to constrain the growth of video demands on the network — is essential; otherwise the traffic would just explode.

There is lot of innovation going on to optimize the content experience. People may not want to watch all their TV through VR headsets. That may not become the way you want to watch the latest episode of Game of Thrones. However, maybe there will be a uniquely created piece of content that’s an add-on in 360, and the real serious fans can go and look for it. I think we will see new types of content being created to address these different use-cases.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT

The next BriefingsDirect Internet of Things (IoT) strategies insights interview focuses on how a Miami telecommunications products provider has developed new breeds of services to help manage complex edge and data scenarios.

We will now learn how IoT platforms and services help to improve network services, operations, and business goals — for carriers and end users alike.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore what is needed to build an efficient IoT support business is Andres Sanchez, CEO of Identidad IoT in Miami. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How has your business changed in the telecoms support industry and why is IoT such a big opportunity for you?

Sanchez: With new over-the-top (OTT) content technology coming into the picture and taking over part of the whole communications value chain, the telecoms business is basically getting very tough. When we began evaluating what IoT can do and seeing the possibilities, we saw that this is a new wave. We understand that it’s not about connectivity, it’s not about the 10 percent of the value chain — it’s more about the solutions.

We saw a very good opportunity to start something new: to take the experience and the technology we have in telecoms, bring in new people and new developers, and start building solutions, and that’s what we are doing right now.

Gardner: So as the voice telecoms business trails off, there is a new opportunity at the edge for data and networks to extend to a variety of use cases. What are some of the use cases you are seeing now in IoT that represent a growth opportunity for your business?

Sanchez: IoT is everywhere. The beauty of IoT is that you can find solutions everywhere you look. What we have found is that when people think about IoT, they think about connected home, they think about connected car, or the smart parking where it’s just a green or red light when the parking is occupied or not. But IoT is more than that.

There are two ways to generate revenue in IoT. One is by having new products. The second is understanding what it is on the operational level that we can do better. And it’s in this way that we are putting in sensors, measuring things, and analyzing things. You can basically reduce your operational cost, or be more effective in the way that you are doing business. It’s not only getting the information, it’s using that information to automate processes that it will make your company better.

Gardner: As organizations recognize that there are new technologies coming in that are enabling this smart edge, smart network, what is it that’s preventing them from being able to take advantage of this?

Manage Your Solutions with the HPE Universal IoT Platform

Sanchez: Companies think that they just have to connect the sensors, that they only have to digitize their information. They haven’t realized that they really have to go through a digital transformation. It’s not about connecting the sensors that are already there; it’s building a solution using that information. They have to reorganize and to reinvent their organizations.

For example, it’s not about taking a sensor, putting the sensor on the machine, and just starting to take information and watch it on a screen. It’s taking the information and being able to detect specific patterns: to predict when a machine is going to break, or when a machine at certain temperatures starts to work better or worse. It’s being able to be more productive without having to do more work. It’s letting the machines do the work by themselves.

Gardner: A big part of that is bringing more of an IT mentality to the edge, creating a standard network and standard platforms that can take advantage of the underlying technologies that are now off-the-shelf.

Sanchez: Definitely. The approach that Identidad IoT takes is we are not building solutions based on what we think is good for the customer. What we are doing is building proof of concepts (PoCs) and tailored solutions for companies that need digital transformation.

I don’t think there are two companies doing the same thing that have the same problems. One manufacturer may have one problem, and another manufacturer using the same technology has another completely different problem. So the approach we are taking is that we generate a PoC, check exactly what the problems are, and then develop that application and solution.

This is not just a change of process. This is not purely putting in new software. This is trying to solve a problem when you may not even know the problem is there. It’s really digital transformation.

But it’s important to understand that IoT is not an IT thing. When we go to a customer, we don’t just go to an IT person, we go to the CEO, because this is a change of mentality. This is not just a change of process. This is not purely putting in new software. This is trying to solve a problem when you may not even know the problem is there. It’s really digital transformation.

Gardner: Where is this being successful? Where are you finding that people really understand it and are willing to take the leap, change their culture, rethink things to gain advantages?

One solution at a time

Sanchez: Unfortunately, people are afraid of what is coming, because people don’t understand what IoT is, and everybody thinks it’s really complicated. It does need expertise. It does need to have security — that is a very big topic right now. But it’s not impossible.

When we approach a company and that CEO, CIO or CTO understands that the benefits of IoT will be shown once you have that solution built — and that probably the initial solution is not going to be the final solution, but it’s going to be based on iterations — that’s when it starts working.

If people think it’s just an out-of-the-box solution, it’s not going to work. That’s the challenge we are having right now. The opportunity is when the head of the company understands that they need to go through a digital transformation.

Manage Your Solutions with the HPE Universal IoT Platform

Gardner: When you work with a partner like Hewlett Packard Enterprise (HPE), they have made big investments and developments in edge computing, such as the Universal IoT Platform and Edgeline Systems. How does that help you as a solutions provider make that difficult transition easier for your customers, and encourage them to understand that it’s not impossible, that there are a lot of solutions already designed for their needs?

Sanchez: Our relationship with HPE has been a huge success for Identidad IoT. When we started looking at platforms, when we started this company, we couldn’t find the right platform to fulfill our needs. We were looking for a platform that we could build solutions on and then extrapolate that data with other data, and build other solutions over those solutions.

When we approached HPE, we saw that they do have a unique platform that allows us to generate whatever applications, for whatever verticals, for whatever organizations – whether a city or company. Even if you wanted to create a product just for end-users, they have the ability to do it.

Also, it’s a platform that is so robust that you know it’s going to work, it’s reliable, and it’s very secure. You can build security from the device right on up to the platform and the applications. Other platforms, they don’t have that.

We think that IoT is about relationships and partnerships — it’s about an ecosystem.

Our business model correlates a lot with the HPE business model. We think that IoT is about relationships and partnerships — it’s about an ecosystem. The approach that HPE has to IoT and to ecosystem is exactly the same approach that we have. They are building this big ecosystem of partners. They are helping each other to build relationships and in that way, they build a better and more robust platform.

Gardner: For companies and network providers looking to take advantage of IoT, what would you suggest that they do in preparation? Is there a typical on-ramp to an IoT project?

A leap of faith

Sanchez: There’s no time to be prepared right now. I think they have to take a leap of faith and start building the IoT applications. The pace of the technology transformation is incredible.

When you see the technology right now, today — probably in four months it’s going to be obsolete. You are going to have even better technology, a better sensor. So if you wait, most likely the competition is not going to wait, and they will have a very big advantage.

Our approach at Identidad IoT is about platform-as-a-service (PaaS). We are helping companies take that leap without creating big financial struggles. And the companies know that, because we use the HPE platform, they are using a state-of-the-art platform. They are not using a mom-and-pop platform built in a garage. It’s a robust PaaS — so why not take that leap of faith and start building? Now is the time.

Gardner: Once you pick up that success, perhaps via a PoC, that gives you ammunition to show economic and productivity benefits that then would lead to even more investment. It seems like there is a virtuous adoption cycle potential here.

Sanchez: Definitely! Once we start a new solution, the people seeing it usually start noticing things that they are not used to seeing. They can pinpoint problems that they have been having for years – but they didn’t understand why.

For example, there’s one manufacturer of T-shirts in Colombia. They were having issues with one specific machine. That machine used to break after two or three weeks. There was just this small piece that was broken. When we installed the sensor and we started gathering their information, after two or three breaks, we understood that it was not the amount of work — it was the temperature at which the machine was working.

So now, once the temperature reaches a certain point, fans start automatically to normalize the temperature, and they haven’t had any broken pieces for months. It was a simple solution, but it took a lot of study and gathering of information to be able to understand that break point — and that’s the beauty of IoT.
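
In code terms, the fix Sanchez describes is a very small closed-loop rule. Everything below, the threshold, the sensor read, and the fan actuator, is a hypothetical stand-in for the real devices on that machine.

```python
import random
import time

TEMP_LIMIT_C = 65.0          # assumed temperature above which the part fails early

def read_temperature() -> float:
    """Stand-in for the real sensor reading on the stitching machine."""
    return random.uniform(55.0, 75.0)

def set_fans(on: bool) -> None:
    """Stand-in for the actuator call that switches the cooling fans."""
    print("fans ON" if on else "fans OFF")

def control_loop(iterations: int = 5, interval_s: float = 1.0) -> None:
    fans_on = False
    for _ in range(iterations):
        temp = read_temperature()
        if temp > TEMP_LIMIT_C and not fans_on:
            fans_on = True
            set_fans(True)           # normalize temperature, no human intervention
        elif temp <= TEMP_LIMIT_C and fans_on:
            fans_on = False
            set_fans(False)
        print(f"temperature: {temp:.1f} C")
        time.sleep(interval_s)

control_loop()
```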

Gardner: It’s data-driven, it’s empirical, it’s understood, but you can’t know what you don’t know until you start measuring things, right?

Listen to things

Sanchez: Exactly! I always say that the “things” are trying to say something, and we are not listening. IoT enables people, companies, and organizations to start listening to the things, and not only to listen, but to make the things work for us. We need the applications to be able to trigger something to fix the problem without any human intervention — and that’s also the beauty of IoT.

Gardner: And that IoT philosophy even extends to healthcare, manufacturing, transportation, any place where you have complexity, it is pertinent.

Manage Your Solutions with the HPE Universal IoT Platform

Sanchez: Yes, IoT solutions are everywhere. You can think about healthcare, or tracking people, or tracking guns, or building solutions for cities in which the city can understand what is triggering certain pollution levels and fix it. Or it can be in manufacturing, or even a small thing like finding your cellphone.

It’s everything that you can measure. Everything that you can put a sensor on, you can measure — that’s IoT. The idea is that IoT will help people live better lives without having to take care of the “things”; the things will take care of themselves.

Gardner: You seem quite confident that this is a growth industry. You are betting a significant amount of your future growth on it. How do you see it increasing over the next couple of years? Is this a modest change or do you really see some potential for a much larger market?

Once people understand the capability of IoT, there’s going to be an explosion of solutions.

Sanchez: That’s a really good question. I do see that IoT is the next wave of technology. There are several studies that say that by 2020 there are going to be 50 billion devices connected. I am not that futuristic, but I do see that IoT will start working now and probably within the next two or three years we are going to start seeing an incremental growth of the solutions. Once people understand the capability of IoT, there’s going to be an explosion of solutions. And I think the moment to start doing it is now. I think that next year it’s going to be too late.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment

The next BriefingsDirect inside story interview explores how a software-defined data center (SDDC)-focused systems integrator developed an ultimate open-source object storage environment.

We’re now going to learn how Key Information Systems crafted a storage capability that may have broad extensibility into such realms as hybrid cloud and multi-cloud support.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help us better understand a new approach to open-source object storage is Clayton Weise, Director of Cloud Services at Key Information Systems in Agoura Hills, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What prompted you to improve on the way that object storage is being offered as a service? How might this become a new business opportunity for you?

Weise: About a year ago, at Hewlett Packard Enterprise (HPE) Discover, I was wandering the event floor. We had just gotten out of a meeting with SwitchNAP, which is a major data center in Las Vegas. We had been talking to them about some preferred concepts and deployments for storage for their clients.

That discussion evolved into realizing that there are a number of clients inside of Switch and their ecosystem that could make use of storage that was more locally based, that needed to be closer at hand. There were cost savings that could be gained if you have a connection within the same data center, or within the same fiber network.

Pulling data in and out of a cloud

Under this model, there would be significantly less expensive ways of pulling data in and out of a cloud, since you wouldn’t have transfer fees as you normally would. There would also be advantages in privacy and lower latency, among other benefits, because everything runs on a private network operated by Switch through their fiber network. So we looked at this and thought it might be interesting.

In discussions with a number of groups within HPE while wandering the floor at Discover, we found that there were some pretty interesting ways that we could play games with the network to allow clients to not have to uproot the way they do things, or force them to do things, for lack of a better term, “our way.”

If you go to Amazon Web Services or Microsoft Azure, you do it the Amazon way or the Microsoft way. You don’t really have a choice, since you have to follow their guidelines.

They generally use object storage as an inexpensive way to store archival or less-frequently accessed data. Cloud storage became an alternative to tape and long-term storage.

Where we saw value is in the mid-market space, with clients ranging from a couple of hundred million dollars up to maybe a couple of billion dollars in annual revenue. They generally use object storage as kind of an inexpensive way to store archival, or less-frequently accessed, data. So [the cloud storage] became an alternative to tape and long-term storage.

We’ve had this massive explosion of unstructured data, files, and all sorts of things. We have a number of clients in medical and finance, and they have just seen this huge spike in data.

The challenge is: To deploy your own object storage is a fairly complex operation, and it requires a minimum number of petabytes to get started. In that mid-market, they are not typically measuring their storage at the petabyte level.

These customers are more typically in the tens to hundreds of terabytes range, and so they need an inexpensive way to offload that data and put it somewhere where it makes sense. In the medical industry particularly, there’s a lot of concern about putting any kind of patient data up in a public cloud environment — even with encryption.

We thought that if we are in the same data center, and it is a completely private operation that exists within these facilities, that will fulfill the total need — and we can encrypt the data.

But we needed a way to support such private-cloud object storage that would be multitenant. Also, we just have had better luck working with open standards. The challenge with dealing with proprietary systems is you end up locked into a standard, and if you pick wrong, you find yourself having to reinvent everything later on.

I come from a networking background; I was an Internet plumber for many years. We saw the transition then on our side when routing protocols first got introduced. There were proprietary routing protocols, and there were open standards, and that’s what we still use today.

Transition to Cloud-first: HPE Data Center Networking

So we took a similar approach in object storage as a private-cloud service. We went down the open source path in terms of how we handled the provisioning. We needed something that integrated well with that. We needed a system that had the multitenancy, that understood the tenancy, and that is provided by OpenStack. We found a solution from HPE called Distributed Cloud Networking (DCN) that allows us to carve up the network in all sorts of interesting ways, and that way we don’t have to dictate to the client how to run it.
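As a rough illustration of the multitenant provisioning piece that OpenStack supplies, the sketch below creates an isolated network and subnet for a single tenant using the openstacksdk library. The cloud name, tenant names, and CIDRs are hypothetical placeholders, and this shows only the OpenStack side; the DCN overlay and switch configuration described here are separate layers not shown.

```python
import openstack

# Connect using a named entry from clouds.yaml (hypothetical cloud name).
conn = openstack.connect(cloud="keyinfo-objectcloud")

def provision_tenant_network(tenant_name, cidr):
    """Create an isolated network and subnet for one tenant."""
    network = conn.network.create_network(name=f"{tenant_name}-net")
    subnet = conn.network.create_subnet(
        name=f"{tenant_name}-subnet",
        network_id=network.id,
        ip_version=4,
        cidr=cidr,
    )
    return network, subnet

# Example: two tenants that must never see each other's traffic.
provision_tenant_network("coke", "10.10.0.0/24")
provision_tenant_network("pepsi", "10.20.0.0/24")
```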

Many clients are still running traditional networks. The adoption of Virtual Extensible LAN (VXLAN) and other types of SDDC within the network is still pretty low, especially in the mid-market space. So going to a client and dictating that they have to change how they run the network is not going to work.

And we wanted it to be as simple as possible. We wanted to treat this as much as we could as a flat network. By using a combination of DCN, Altoline switches from HPE, and some of other software, we were able to give clients a complete network carrying regular Virtual Local Area Networks (VLANs) across it. We then could tie this together in a hybrid fashion, whereby the customers can actually treat our cloud environment as a natural extension of their existing networks, of their existing data centers.

Gardner: You are calling this hybrid storage as a service. It’s focused on object storage at this point, and you can take this into different data center environments. What are some of the sweet spots in the market?

The object service becomes a very inexpensive way to store large amounts of data, and unlike tape — with object as a service, everything is accessible easily.

Weise: The areas where we are seeing the most interest have been backup and archive. It’s an alternative to tape. The object service becomes a very inexpensive way to store large amounts of data, and unlike tape — where it’s inconvenient to access the data — with object as a service everything is accessible very, very easily.

For customers whose backup software cannot directly integrate with that object service, we can use object gateways to provide a method that’s more like traditional access. The gateway looks like a file, or file share, and anything written to the file share is written to the object storage, so it acts as a go-between. For backup and archive, it makes a really, really great solution.
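For backup software that can talk to an S3-compatible object API directly, writing to this kind of service looks roughly like the following. The endpoint URL, credentials, bucket, and file names are hypothetical placeholders for whatever the provider issues; the interview does not specify which object API is exposed here, so treat this as a generic S3-compatible sketch.

```python
import boto3

# Hypothetical endpoint and credentials issued by the object-storage provider.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-provider.net",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Land a nightly backup in the archive bucket; unlike tape, the object
# stays directly accessible whenever it is needed.
s3.upload_file(
    "/backups/nightly-2017-11-30.tar.gz",   # local file produced by the backup job
    "nightly-backups",                      # bucket name
    "2017/11/30/nightly.tar.gz",            # object key
)
```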

The other two areas where we’ve seen the most interest have been in the medical space, specifically for large medical image files and archival. We’re working now specifically to build that type of solution, with HIPAA compliance. We have gone through the audits and compliance verification.

The second use-case has been in the media and entertainment industry. In fact, they are the very first to consume this new system and put in hundreds of terabytes worth of storage — they are an entertainment industry client in Burbank, California. A lot of these guys are just shuffling along on external drives.

For them it’s often external arrays, and it’s a lot more Mac OS users. They needed something that was better, and so hybrid object storage as a service has created a great opportunity for them and allows them to collaborate.

They have a location in Burbank, and then they brought up another office in the UK. There is yet another office for them coming up in Europe. The object storage approach allows a kind of central repository, an inexpensive place to place the data — but it also allows them to be more collaborative as well.

Gardner: We have had a weak link in cloud computing storage, which has been the network — and you solved some of those issues. You found a prime use-case with backup and archival, but it seems to me, given the storage capabilities that we’ve seen, that this has extensibility. So where might it go next in terms of a storage-as-a-service that hybrid cloud providers would use? Where can this go?

Carving up the network

Weise: It’s an interesting question because one of the challenges we have all faced in the world of cloud is we have virtualized servers and virtualized storage, meaning there is disaggregation; there is a separation between the workload that’s running and the actual hardware it’s running on.

In many cases, and for almost all clients in the mid-market, that level of virtualization has not occurred at the network level. We are still nailed to things. We are all tied down to the cable, to the switch port, and to the human that can figure those things out. It’s not as flexible or as extensible as some of the other solutions that are out there.

In our case, when we build this out, the real magic is in the network. That improved connection might be a cost savings for a client — especially from a bandwidth standpoint. But once you get a private cross-connect into that environment to make use of, in this case, storage as a service, we can now carve that up in a number of different ways and allow the client to use it for other things.

For example, if they want to have burst capability within the environment, they can have it — and it’s on the same network as their existing systems. So that’s where it gets really interesting: Instead of having to have complex virtual guest package configurations, and tiny networks, and dealing with some of the routing of other pieces, you can literally treat our cloud environment as if it’s a network cable thrown over the wall — and it becomes just an extension of the existing network.

We can secure that traffic and ensure that there is high-performance, low-latency and complete separation of tenancy. If you have Coke and Pepsi as clients, they will never see each other.

That opens up some additional possibilities. Some things to work on eventually would be block storage, file storage, right there existing on the same network. We can secure that traffic and ensure that there is high-performance, low-latency and complete separation of tenancy. So if you have Coke and Pepsi as clients, they will never see each other.

Gardner: Very cool. You can take this object storage benefit — and by the way, the cost of that can be significantly lower because you don’t have egress charges and some of the other unfriendly aspects of economics of public cloud providers. But you also have an avenue into a true hybrid cloud environment, where you can move data but also burst workloads and manage that accordingly. Now, what about making this work toward a multi-cloud capability?

Transition to Cloud-first: HPE Data Center Networking

Weise: Right. So this is where HPE’s DCN software-defined networking (SDN) really starts to shine and separates itself from the pack. We can tie environments together regardless of where they are. Whether it’s a virtual endpoint or a physical appliance, it can be deployed at a remote location to act as a gateway — and that links everything together.

We can take a client network that’s going from their environment into our environment, we can deploy a small virtual machine inside of a public cloud, and it will tie the networks together and allow them to treat it all as the same. The same policy enforcement engine and things that they use to segregate traffic in microsegmentation and service chaining can be done just as easily in the public cloud environment.

One of the reasons we went to Switch was because they have multiple locations. In the case of our object storage, we deployed the objects across all three of their data center sites. So data written to a single repository is distributed among three different regions. This protects against a possible regional outage that could make data inaccessible, which is the kind of thing we in the US have seen recently, where clients were down anywhere from 6 to 16 hours.

One big network, wherever you are

This eliminates that. But the nice thing is, because of the network technology that they were using from HPE, it allowed us to treat that all as one big network — and we can carve that up and virtualize it. So clients inside of the data center — maybe they need resources for disaster recovery or for additional backups or those things — it’s all part of that. We can tie in from a network standpoint regardless of where you want to exist — if you are in Vegas, you may want to recover in Reno, or you may want to recover in Grand Rapids. We can make that network look exactly the same in your location.

You want to recover in AWS? You want to recover in Azure? We can tie it in that way, too. So it opens up these great possibilities that allow a true hybrid cloud — and not as a completely separate entity.

Gardner: Very cool. Now there’s nothing wrong, of course, with Switch, but there are other fiber and data center folks out there. Some names that begin with “E” come to mind that you might want to drop in this and that should even increase the opportunity for distribution.

Weise: That’s right. So this initial deployment is focused on Switch, but we do have a grand scheme to work this into other data centers. There are a handful of major data center operators out there, including the one that starts with an “E” along with another that starts with a “D.” We do have plans to expand this, or use this as a success use-case.

As this continues to grow, and we get some additional momentum and some good feedback, and really refine the offering to make sure we know exactly what everything needs to be, then we can work with those other data center providers.

Whenever clients deploy their workloads in those public clouds, that means there is equipment that has not been collocated inside one of your facilities.

From the data center operators’ perspective, if you’re one of those facilities, you are at war with AWS or with Azure. Because whenever clients deploy their workloads in those public clouds, that means there is equipment that has not been collocated inside one of your facilities.

So they have a vested interest in doing this, and there is a benefit to the clients inside of those facilities too because they get to live inside of the ecosystem that exists within those data centers, and the private networks that they carry in there deliver the same benefits to all in that ecosystem.

We do plan to use this hybrid cloud object storage as a service capability as a model to deploy in several other data center environments. And it’s not only a multitenant private cloud: for clients that have a large enough need, at multi-petabyte scale or thousands of virtual machines, it becomes a question of whether to do a private cloud deployment just for them. The same technology, fulfilling the same requirements, and the same solutions could still be used.

Partners in time

Gardner: It sounds like it makes sense, on the back of a napkin basis, for you and HPE to get together and brand something along these lines and go to market together with it.

Weise: It certainly does. We’ve had some great discussions with them. Actually, there is a group that was popular in Europe and is now starting to grow here in the US, called Cloud28+.

We are going to be joining that, and it’s a great thing as well.

The goal is building out this sort of partner network, and HPE has been extremely supportive in working with us to do that. In addition to these crazy ideas, I also had a really crazy timeline for deployment. When we initially met with HPE and talked about what we wanted to do, they estimated that I should reserve about 6 to 8 weeks for planning and then another 1.5 months for deployment.

Transition to Cloud-first: HPE Data Center Networking

I said, “Great we have 3 weeks to do the whole thing,” and everyone thought we were crazy. But we actually had it completed in a little over 2.5 weeks. So we have a huge amount of thanks to HPE, and to their technical services group who were able to assist us in getting this going extremely quickly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



How IoT and OT collaborate to usher in the data-driven factory of the future

The next BriefingsDirect Internet of Things (IoT) technology trends interview explores how innovation is impacting modern factories and supply chains.

We’ll now learn how a leading-edge manufacturer, Hirotec, in the global automotive industry, takes advantage of IoT and Operational Technology (OT) combined to deliver dependable, managed, and continuous operations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us to find the best factory of the future attributes is Justin Hester, Senior Researcher in the IoT Lab at Hirotec Corp. in Hiroshima, Japan. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s happening in the market with business and technology trends that’s driving this need for more modern factories and more responsive supply chains?

Hester: Our customers are demanding shorter lead times. There is a drive for even higher quality, especially in automotive manufacturing. We’re also seeing a much higher level of customization requests coming from our customers. So how can we create products that better match the unique needs of each customer?

As we look at how we can continue to compete in an ever-competitive environment, we are starting to see how the solutions from IoT can help us.

Gardner: What is it about IoT and Industrial IoT (IIoT) that allows you to do things that you could not have done before?

Hester: Within the manufacturing space, a lot of data has been there for years; for decades. Manufacturing has been very good at collecting data. The challenge we’ve had, though, is bringing in that data in real time, because the amount of data is so large. How can we act on that data quicker, not on a day-by-day or week-by-week basis, but actually on a minute-by-minute basis, or a second-by-second basis? And how do we take that data and contextualize it?

Justin Hester

Hester

It’s one thing in a manufacturing environment to say, “Okay, this machine is having a challenge.” But it’s another thing if I can say, “This machine is having a challenge, and in the context of the factory, here’s how it’s affecting downstream processes, and here’s what we can do to mitigate those downstream challenges that we’re going to have.” That’s where IoT starts bringing us a lot of value.

The analytics, the real-time contextualization of that data that we’ve already had in the manufacturing area, is very helpful.

Gardner: So moving from what may have been a gather, batch, analyze, report process — we’re now taking more discrete analysis opportunities and injecting that into a wider context of efficiency and productivity. So this is a fairly big change. This is not incremental; this is a step-change advancement, right?

A huge step-change 

Hester: It’s a huge change for the market. It’s a huge change for us at Hirotec. One of the things we like to talk about is what we jokingly call the Tuesday Morning Meeting. We talk about this idea that in the morning at a manufacturing facility, everyone gets together and talks about what happened yesterday, and what we can do today to make up for what happened yesterday.

Why don’t we get the data to the right people with the right context and let them make a decision so they can affect what’s going on, instead of waiting until tomorrow to react?

Instead, now we’re making that huge step-change to say,  “Why don’t we get the data to the right people with the right context and let them make a decision so they can affect what’s going on, instead of waiting until tomorrow to react to what’s going on?” It’s a huge step-change. We’re really looking at it as how can we take small steps right away to get to that larger goal.

In manufacturing areas, there’s been a lot of delay, confusion, and hesitancy to move forward because everyone sees the value, but it’s this huge change, this huge project. At Hirotec, we’re taking more of a scaled approach, and saying let’s start small, let’s scale up, let’s learn along the way, let’s bring value back to the organization — and that’s helped us move very quickly.

Gardner: We’d like to hear more about that success story but in the meantime, tell us about Hirotec for those who don’t know of it. What role do you play in the automotive industry, and how are you succeeding in your markets?

Hester: Hirotec is a large, tier-1 automotive supplier. What that means is we supply parts and systems directly to the automotive original equipment manufacturers (OEMs), like Mazda, General Motors, FCA, Ford, and we specialize in door manufacturing, as well as exhaust system manufacturing. So every year we make about 8 million doors, 1.8 million exhaust systems, and we provide those systems mainly to Mazda and General Motors, but also we provide that expertise through tooling.

For example, if an automotive OEM would like Hirotec’s expertise in producing these parts, but they would like to produce them in-house, Hirotec has a tooling arm where we can provide that tooling for automotive manufacturing. It’s an interesting strategy that allows us to take advantage of data both in our facilities, but then also work with our customers on the tooling side to provide those lessons learned and bring them value there as well.

Gardner: How big of a distribution are we talking about? How many factories, how many countries; what’s the scale here?

Hester: We are based in Hiroshima, Japan, but we’re actually in nine countries around the world, currently with 27 facilities. We have reached into all the major continents with automotive manufacturing: we’re in North America, we’re in Europe, we’re all throughout Asia, in China and India. We have a large global presence. Anywhere you find automotive manufacturing, we’re there supporting it.

Discover How the IoT Advantage Works in Multiple Industries

Gardner: With that massive scale, very small improvements can turn into very big benefits. Tell us why the opportunity in a manufacturing environment to eke out efficiency and productivity has such big payoffs.

Hester: So especially in manufacturing, what we find when we get to those large scales like you’re alluding to is that a 1 percent or 2 percent improvement has huge financial benefits. And so the other thing is in manufacturing, especially automotive manufacturing, we tend to standardize our processes, and within Hirotec, we’ve done a great job of standardizing that world-class leadership in door manufacturing.

And so what we find is when we get improvements not only in IoT but anywhere in manufacturing, if we can get 1 percent or 2 percent, not only is that a huge financial benefit but because we standardized globally, we can move that to our other facilities very quickly, doubling down on that benefit.

Gardner: Well, clearly Hirotec sees this as something to really invest in, they’ve created the IoT Lab. Tell me a little bit about that and how that fits into this?

The IoT Lab works

Hester: The IoT Lab is a very exciting new group; it’s part of our Advanced Engineering Center (AEC). The AEC is a group out of our global headquarters, and this group is tasked with the five- to 10-year horizon. So they’re able to work across all of our global organizations with tooling, with engineering, with production, with sales, and even our global operations groups. Our IoT group goes and finds solutions that can bring value anywhere in the organization through bringing in new technologies, new ideas, and new solutions.

And so we formed the IoT Lab to find how can we bring IoT-based solutions into the manufacturing space, into the tooling space, and how actually can those solutions not only help our manufacturing and tooling teams but also help our IT teams, our finance teams, and our sales teams.

Gardner: Let’s dig back down a little bit into why IT, IoT, and Operational Technology (OT) are in this step-change opportunity, looking for some significant benefits but being careful in how to institute them. What is required when you move to a more IT-focused, standard-platform approach — across all the different systems — that allows you to eke out these great benefits?

Tell us about how IoT as a concept is working its way into the very edge of the factory floor.

Discover How the IoT Advantage Works in Multiple Industries

Hester: One of the things we’re seeing is that IT is beginning to meld, like you alluded to, with OT — and there really isn’t a distinction between OT and IT anymore. What we’re finding is that we’re starting to get to these solution levels by working with partners such as PTC and Hewlett Packard Enterprise (HPE) to bring our IT group and our OT group all together within Hirotec and bring value to the organization.

What we find is that it’s no longer a case of OT having a need that becomes a request for IT to support, or of IT having a need and going to OT for support. What we are finding is that we have organizational needs, and we’re coming to the table together to make these changes. And that in itself is bringing even more value to the organization.

Instead of coming last-minute to the IT group and saying, “Hey, we need your support for all these different solutions, and we’ve already got everything set, and you are just here to put it in,” what we are seeing is that they bring the expertise in and help us out upfront, and we’re finding better solutions because we are getting experts from both OT and IT together.

We are seeing this convergence of these two teams working on solutions to bring value. And they’re really moving everything to the edge. So where everyone talks about cloud-based computing — or maybe it’s in their data center — where we are finding value is in bringing all of these solutions right out to the production line.

We are doing data collection right there, but we are also starting to do data analytics right at the production line level, where it can bring the best value in the fastest way.

Gardner: So it’s an auspicious time because just as you are seeking to do this, the providers of technology are creating micro data centers, and they are creating Edgeline converged systems, and they are looking at energy conservation so that they can do this in an affordable way — and with storage models that can support this at a competitive price.

What is it about the way that IT is evolving and providing platforms and systems that has gotten you and the IoT Lab so excited?

Excitement at the edge  

Hester: With IoT and IT platforms, originally to do the analytics, we had to go up to the cloud — that was the only place where the compute power existed. Solution providers now are bringing that level of intelligence down to the edge. We’re hearing some exciting things from HPE on memory-driven computing, and that’s huge for us because as we start doing these very complex analytics at the edge, we need that power, that horsepower, to run different applications at the same time at the production line. And something like memory-driven solutions helps us accomplish that.

It’s one thing to have higher-performance computing, but another to gain edge computing that’s proper for the factory environment.

It’s one thing to have higher-performance computing, but another thing to gain edge computing that’s proper for the factory environment. A manufacturing environment is not conducive to standard servers and standard racks, which need dust protection and heat protection — and that doesn’t exist in a manufacturing environment.

The other thing we’re beginning to see with edge computing, which HPE provides with Edgeline products, is that we have computers with high power and a high ability to perform the analytics and data collection — but they’re also proper for the environment.

I don’t need to build out a special protection unit with special temperature control, humidity control – all of which drives up energy costs, which drives up total costs. Instead, we’re able to run edge computing in the environment as it should be on its own, protected from what comes in a manufacturing environment — and that’s huge for us.

Gardner: They are engineering these systems now with such ruggedized micro facilities in mind. It’s quite impressive that the very best of what a data center can do, can now be brought to the very worst types of environments. I’m sure we’ll see more of that, and I am sure we’ll see it get even smaller and more powerful.

Do you have any examples of where you have already been able to take IoT in the confluence of OT and IT to a point where you can demonstrate entirely new types of benefits? I know this is still early in the game, but it helps to demonstrate what you can do in terms of efficiency, productivity, and analytics. What are you getting when you do this well?

IoT insights save time and money

Hester: Taking the stepped strategy that we have, we actually started very small at Hirotec, with only eight machines in North America, and we were just looking to see whether the machines were on and running. Even from there, we saw value, because all of a sudden we were getting that real-time, contextualized insight into the whole facility. We then quickly moved over to one of our production facilities in Japan, where we have a brand-new robotic inspection system, and this system uses vision sensors, laser sensors, force sensors — and it’s actually inspecting exhaust systems before they leave the facility.

We very quickly implemented an IoT solution in that area, and all we did was we said, “Hey, we just want to get insight into the data, so we want to be able to see all these data points. Over 400 data points are created every inspection. We want to be able to see this data, compared in historical ways — so let’s bring context to that data, and we want to provide it in real-time.”
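As a rough sketch of what “comparing data points in historical ways” can look like in code, the snippet below checks each new inspection reading against the mean and standard deviation of previous runs and flags outliers. The data-point names, the z-score threshold, and the in-memory history are illustrative assumptions, not Hirotec’s actual implementation.

```python
from collections import defaultdict
from statistics import mean, stdev

# Rolling history of readings per data point, e.g. "weld_seam_force_N".
history = defaultdict(list)

def contextualize(inspection):
    """Compare one inspection's readings against historical values.

    `inspection` maps data-point names to numeric readings (a real run
    produces 400+ of them). Returns the readings that look unusual
    relative to past inspections.
    """
    flagged = {}
    for point, value in inspection.items():
        past = history[point]
        if len(past) >= 10:                      # need some history first
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(value - mu) > 3 * sigma:
                flagged[point] = (value, mu)     # reading and its historical mean
        past.append(value)
    return flagged

# Example with two hypothetical data points from one exhaust-system inspection.
anomalies = contextualize({"weld_seam_force_N": 812.5, "leak_rate_ccm": 0.4})
```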

Discover How the IoT Advantage Works in Multiple Industries

What we found from just those two projects very quickly is that we’re bringing value to the organization because now our teams can go in and say, “Okay, the system is doing its job, it’s inspecting things before they leave our facility to make sure our customers always get a high-quality product.” But now, we’re able to dive in and find different trends that we weren’t able to see before because all we were doing is saying, “Okay, this system leaves the facility or this system doesn’t.”

And so already just from that application, we’ve been able to find ways that our engineers can even increase the throughput and the reliability of the system because now they have these historical trends. They were able to do a root-cause analysis on some improvements that would have taken months of investigation; it was completed in less than a week for us.

And so that’s a huge value — not only in that my project costs go down but now I am able to impact the organization quicker, and that’s the big thing that Hirotec is seeing. It’s one thing to talk about the financial cost of a project, or I can say, “Okay, here is the financial impact,” but what we are seeing is that we’re moving quicker.

And so, we’re having long-term financial benefits because we’re able to react to things much faster. In this case, we’re able to reduce months of investigation down to a week. That means that when I implement my solution quicker, I’m now bringing that impact to the organization even faster, which has long-term benefits. We are already seeing those benefits today.

Gardner: You’ll obviously be able to improve quality, you’ll be able to reduce the time to improving that quality, gain predictive analytics in your operations, but also it sounds like you are going to gain metadata insights that you can take back into design for the next iteration of not only the design for the parts but the design for the tooling as well and even the operations around that. So that intelligence at the edge can be something that is a full lifecycle process, it goes right back to the very initiation of both the design and the tooling.

Data-driven design, decisions

As you loop this data back to our engineering teams — what kind of benefits can we see, how can we improve our processes, how can we drive out into the organization?

Hester: Absolutely. These solutions can’t live in a silo. We’re really starting to look at these ideas of what some people call the Digital Thread, the Digital Twin. We’re starting to understand what that means as we loop this data back to our engineering teams — what kind of benefits can we see, how can we improve our processes, how can we drive out into the organization?

And one of the biggest things with IoT-based solutions is that they can’t stay inside a box, whether we’re talking about OT and IT, or manufacturing and engineering. At their best, these IoT solutions bring these groups together and bring a whole organization together with more contextualized data to make better decisions faster.

And so, exactly to your point, as we are looping back, we’re able to start understanding the benefit we’re going to be seeing from bringing these teams together.

Gardner: One last point before we close out. It seems to me as well that at a macro level, this type of data insight and efficiency can be brought into the entire supply chain. As you’re providing certain elements of an automobile, other suppliers are providing what they specialize in, too, and having that quality control and integration and reduced time-to-value or mean-time-to-resolution of the production issues, and so forth, can be applied at a macro level.

So how does the automotive supplier itself look at this when it can take into consideration all of its suppliers like Hirotec are doing?

Start small 

Hester: It’s a very early phase, so a lot of the suppliers are starting to understand what this means for them. There is definitely a macro benefit that the industry is going to see in five to 10 years. Suppliers now need to start small. One of my favorite pictures is a picture of the ocean and a guy holding a lighter. It [boiling the ocean] is not going to happen. So we see these huge macro benefits of where we’re going, but we have to start out somewhere.

Discover How the IoT Advantage Works in Multiple Industries

A lot of suppliers, what we’re recommending to them, is to do the same thing we did, just start small with a couple of machines, start getting that data visualized, start pulling that data into the organization. Once you do that, you start benefiting from the data, and then start finding new use-cases.

As these suppliers all start doing their own small projects and working together, I think that’s when we are going to start to see the macro benefits but in about five to 10 years out in the industry.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



DreamWorks Animation crafts its next era of dynamic IT infrastructure

The next BriefingsDirect Voice of the Customer thought leader interview examines how DreamWorks Animation is building a multipurpose, all-inclusive, and agile data center capability.

Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn’t cut it anymore. What has made that run out of gas?

Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data, the capability of the hardware, the intelligence of the infrastructure are coming. In order for any business to stay ahead of the curve — to really drive value into the business – it has to continue to innovate.

Gardner: IT has become more pervasive in what we do. I have heard you all refer to yourselves as digital manufacturing. Are the demands of your industry also a factor in making it difficult for IT to keep up?

Wike: When I say we are a digital manufacturer, it’s because we are a place that manufactures content, whether it’s animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor and is basically building these digital assets that we put through simulations and rendering, so in the end it all comes together to produce a movie.

Jeff Wike (1)

Wike

That’s all about manufacturing, and we actually have a pipeline, but it’s really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it’s exactly what we are doing, except instead of adding a car part, we are adding a character, we’re adding a hair to a character, we’re adding clothes, we’re adding an environment, and we’re putting things into that environment.

We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that process of three to four years to create one animated film.

Gardner: Well, it also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want to have an assembly line that doesn’t just make cars — it can make cars and trains and submarines and helicopters, but you don’t have to change the assembly line, you have to adjust and you have to utilize it properly.

So it seems to me that we are at perhaps a cusp in IT where the agility of the infrastructure and its responsiveness to your workloads and demands is better than ever.

Greater creativity, increased efficiency

Wike: That’s true. If you think about this animation process or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats — all these things are inhibitors to innovation. There are a lot of tools. We actually write our own software, and we’re very involved in projects related to computer science at the studio.

We’ll ask ourselves, “How do you innovate? How can you change your environment to be able to move forward and innovate and still carry around some of those legacy systems?”

How HPE Synergy Automates Infrastructure Operations

And one of the things we’ve done over the past couple of years is start to re-architect all of our software tools in order to take advantage of massive multi-core processing, to try to give artists interactivity in their creative process. It’s about iterations: How many things can I show a director? How quickly can I create the scene and get it approved so that I can hand it off to the next person? There are two things that you get out of that.

One, you can explore more and you can add more creativity. Two, you can drive efficiency, because it’s all about how much time, how many people are working on a particular project, and how long it takes, all of which drives up the costs. So you now have these choices where you can add more creativity or — because of the compute infrastructure — you can drive efficiency into the operation.

So where does the infrastructure fit into that, because we talk about tools and the ability to make those tools quicker, faster, more real-time? We conducted a project where we tried to create a middleware layer between running applications and the hardware, so that we can start to do data abstraction. We can get more mobile as to where the data is, where the processing is, and what the systems underneath it all are. Until we could separate the applications through that layer, we weren’t really able to do anything down at the core.

Core flexibility, fast

We want to be able to change how we are using that infrastructure — examine usage patterns, the workflows — and be able to optimize.

Now that we have done that, we are attacking the core. When we look at our ability to replace that with new compute, and add the new templates with all the security in it — we want that in our infrastructure. We want to be able to change how we are using that infrastructure — examine usage patterns, the workflows — and be able to optimize.

Before, if we wanted to do a new project, we’d say, “Well, we know that this project takes x amount of infrastructure. So if we want to add a project, we need 2x,” and that makes a lot of sense. So we would build to peak. If at some point in the last six months of a show, we are going to need 30,000 cores to be able to finish it in six months, we say, “Well, we better have 30,000 cores available, even though there might be times when we are only using 12,000 cores.” So we were buying to peak, and that’s wasteful.

What we wanted was to be able to take advantage of those valleys, if you will, as an opportunity — the opportunity to do other types of projects. But because our infrastructure was so homogeneous, we really didn’t have the ability to do a different type of project. We could create another movie if it was very much the same as a previous film from an infrastructure-usage standpoint.

By now having composable, or software-defined infrastructure, and being able to understand what the requirements are for those particular projects, we can recompose our infrastructure — parts of it or all of it — and we can vary that. We can horizontally scale and redefine it to get maximum use of our infrastructure — and do it quickly.
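The scheduling idea behind that — sharing one pool across projects instead of buying dedicated hardware to each project’s peak — can be sketched in a few lines. This is a conceptual illustration only; the project names, core counts, and the simple proportional policy are hypothetical, and HPE’s actual composable tooling (Synergy, OneView) works through its own templates and APIs rather than anything shown here.

```python
def compose_allocation(total_cores, project_demands):
    """Split a shared core pool across projects for the current period.

    `project_demands` maps project name -> cores it wants right now.
    If the pool covers every request, grant them all; otherwise scale
    each grant proportionally so the pool is fully used but never
    over-committed.
    """
    requested = sum(project_demands.values())
    if requested <= total_cores:
        return dict(project_demands)
    scale = total_cores / requested
    return {name: int(cores * scale) for name, cores in project_demands.items()}

# A film in final rendering, a TV series, and a short sharing one 30,000-core pool.
allocation = compose_allocation(
    30_000,
    {"feature_film_final": 24_000, "tv_series": 9_000, "short_project": 3_000},
)
# -> roughly {'feature_film_final': 20000, 'tv_series': 7500, 'short_project': 2500}
```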

Gardner: It sounds like you have an assembly line that’s very agile, able to do different things without ripping and replacing the whole thing. It also sounds like you gain infrastructure agility to allow your business leaders to make decisions such as bringing in new types of businesses. And in IT, you will be responsive, able to put in the apps, manage those peaks and troughs.

Does having that agility not only give you the ability to make more and better movies with higher utilization, but also gives perhaps more wings to your leaders to go and find the right business models for the future?

Wike: That’s absolutely true. We certainly don’t want to ever have a reason to turn down some exciting project because our digital infrastructure can’t support it. I would feel really bad if that were the case.

In fact, that was the case at one time, way back when we produced Spirit: Stallion of the Cimarron. Because it was such a big movie from a consumer products standpoint, we were asked to make another movie for direct-to-video. But we couldn’t do it; we just didn’t have the capacity, so we had to just say, “No.” We turned away a project because we weren’t capable of doing it. The time it would take us to spin up a project like that would have been six months.

The world is great for us today, because people want content — they want to consume it on their phone, on their laptop, on the side of buildings and in theaters. People are looking for more content everywhere.

Yet projects for varied content platforms require different amounts of compute and infrastructure, so we want to be able to create content quickly and avoid building to peak, which is too expensive. We want to be able to be flexible with infrastructure in order to take advantage of those opportunities.

HPE Synergy Automates Infrastructure Operations

Gardner: How is the agility in your infrastructure helping you reach the right creative balance? I suppose it’s similar to what we did 30 years ago with simultaneous engineering, where we would design a physical product for manufacturing, knowing that if it didn’t work on the factory floor, then what’s the point of the design? Are we doing that with digital manufacturing now?

Artifact analytics improve usage, rendering

We always look at budgets, and budgets can be money budgets, they can be rendering budgets, they can be storage budgets, and networking — all of those things are commodities that are required to create a project.

Wike: It’s interesting that you mention that. We always look at budgets, and budgets can be money budgets, they can be rendering budgets, they can be storage budgets, and networking — all of those things are commodities that are required to create a project.

Artists, managers, production managers, directors, and producers are all really good at managing those projects if they understand what the commodity is. Years ago we used to complain about disk space: “You guys are using too much disk space.” And our production department would say, “Well, give me a tool to help me manage my disk space, and then I can clean it up. Don’t just tell me it’s too much.”

One of the initiatives that we have incorporated in recent years is in the area of data analytics. We re-architected our software and we decided we would re-instrument everything. So we started collecting artifacts about rendering and usage. Every night we ran every digital asset that had been created through our rendering, and we also collected analytics about it. We now collect 1.2 billion artifacts a night.

And we correlate that information to a specific asset, such as a character, basket, or chair — whatever it is that I am rendering — as well as where it’s located, which shot it’s in, which sequence it’s in, and which characters are connected to it. So, when an artist wants to render a particular shot, we know what digital resources are required to be able to do that.

One of the things that’s wasteful of digital resources is either having a job that doesn’t fit the allocation that you assign to it, or not knowing when a job is complete. Some of these rendering jobs and simulations will take hours and hours — it could take 10 hours to run.

At what point is it stuck? At what point do you kill that job and restart it because something got wedged and it was a dependency? And you don’t really know, you are just watching it run. Do I pull the plug now? Is it two minutes away from finishing, or is it never going to finish?

Just the facts

Before, an artist would go in every night and conduct a test render. And they would say, “I think this is going to take this much memory, and I think it’s going to take this long.” And then we would add a margin of error, because people are not great judges, as opposed to a computer. This is where we talk about going from feeling to facts.

So now we don’t have artists do that anymore, because we are collecting all that information every night. We have machine learning that then goes in and determines requirements. Even though a certain shot has never been run before, it is very similar to another previous shot, and so we can predict what it is going to need to run.
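A hedged sketch of that prediction step: find shots similar to the new one and use their measured requirements to estimate the new shot’s needs. The feature names, the k-nearest-neighbors choice, and the numbers are illustrative assumptions; the interview only says that machine learning predicts requirements from similar shots, not which model or features are actually used.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Historical shots: hypothetical features per shot
# [asset_count, total_polygons_millions, light_count, frame_count]
X = np.array([
    [120,  45, 30,  96],
    [300, 210, 80, 120],
    [ 80,  12, 15,  48],
    [250, 180, 60, 110],
    [150,  60, 25, 100],
])
# Measured outcomes per shot: [peak_memory_GB, render_hours]
y = np.array([
    [ 64,  3.5],
    [256, 11.0],
    [ 32,  1.2],
    [192,  9.0],
    [ 96,  4.0],
])

# Predict a new shot's needs from its most similar historical shots.
model = KNeighborsRegressor(n_neighbors=3)
model.fit(X, y)

new_shot = np.array([[220, 150, 55, 105]])
predicted_memory_gb, predicted_hours = model.predict(new_shot)[0]
# Allocate resources (and a kill threshold for stuck jobs) from the
# prediction instead of an artist's test render and guesswork.
```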

By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.

Now, if a job is stuck, we can kill it with confidence. By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.

I recently listened to a gentleman talk about what a difference a 1 percent improvement can make. So 15 percent is huge: that’s 15 percent less money you have to spend. It’s 15 percent faster for a director to be able to see something. It’s 15 percent more iterations. So that was really huge for us.

Gardner: It sounds like you are in the digital manufacturing equivalent of working smarter and not harder. With more intelligence, you can free up the art, because you have nailed the science when it comes to creating something.

Creative intelligence at the edge

Wike: It’s interesting; we talk about intelligence at the edge and the Internet of Things (IoT), and that sort of thing. In my world, the edge is actually an artist. If we can take intelligence about their work, the computational requirements that they have, and if we can push that data — that intelligence — to an artist, then they are actually really, really good at managing their own work.

It’s only a problem when they don’t have any idea that six months from now it’s going to cause a huge increase in memory usage or render time. When they don’t know that, it’s hard for them to be able to self-manage. But now we have artists who can access Tableau reports everyday and see exactly what the memory usage was or the compute usage of any of the assets they’ve created, and they can correct it immediately.

Megamind, a film DreamWorks Animation released several years ago, was made prior to having the data analytics in place, and the studio encountered massive rendering spikes on certain shots. We really didn’t understand why.

After the movie was complete, when we could go back and get printouts of logs to analyze, we determined that these peaks in rendering resources were caused by the main character’s watch. Whenever the watch was in a frame, the render times went up. We looked at the models, and well-intended artists had taken a model of a watch and modeled every gear, so it was just a huge, heavy asset to render.

By then it was too late to do anything about it. But now, if an artist were to create that watch today, they would quickly find out that they had really over-modeled it. We would then go in and reduce that asset down, because it’s really not a key element to the story. And they can do that today, which is really great.

HPE Synergy Automates Infrastructure Operations

Gardner: I am a big fan of animated films, and I am so happy that my kids take me to see them because I enjoy them as much as they do. When you mention an artist at the edge, it seems to me it’s more like an army at the edge, because I wait through the end of the movie, and I look at the credits scroll — hundreds and hundreds of people at work putting this together.

So you are dealing with not just one artist making a decision, you have an army of people. It’s astounding that you can bring this level of data-driven efficiency to it.

Movie-making’s mobile workforce

If you capture information, you can find so many things that we can really understand better about our creative process to be able to drive efficiency and value into the entire business.

Wike: It becomes so much more important, too, as we become a more mobile workforce.

Now it becomes imperative to be able to obtain the information about what those artists are doing so that they can collaborate. We know what value we are really getting from that, and so much information is available now. If you capture it, you can find so many things that we can really understand better about our creative process to be able to drive efficiency and value into the entire business.

Gardner: Before we close out, maybe a look into the crystal ball. With things like auto-scaling and composable infrastructure, where do we go next with computing infrastructure? As you say, it’s now all these great screens in people’s hands, handling high-definition, all the networks are able to deliver that, clearly almost an unlimited opportunity to bring entertainment to people. What can you now do with the flexible, efficient, optimized infrastructure? What should we expect?

Wike: There’s an explosion in content and explosion in delivery platforms. We are exploring all kinds of different mediums. I mean, there’s really no limit to where and how one can create great imagery. The ability to do that, the ability to not say “No” to any project that comes along is going to be a great asset.

We always say that we don’t know in the future how audiences are going to consume our content. We just know that we want to be able to supply that content and ensure that it’s the highest quality that we can deliver to audiences worldwide.

Gardner: It sounds like you feel confident that the infrastructure you have in place is going to be able to accommodate whatever those demands are. The art and the economics are the variables, but the infrastructure is not.

Wike: Having a software-defined environment is essential. I came from the software side; I started as a programmer, so I am coming back into my element. I really believe that now that you can compose infrastructure, you can change things with software without having to have people go in and rewire or re-stack, but instead change on-demand. And with machine learning, we’re able to learn what those demands are.

I want the computers to actually optimize and compose themselves so that I can rest knowing that my infrastructure is changing, scaling, and flexing in order to meet the demands of whatever we throw at it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Enterprises look for partners to make the most of Microsoft Azure Stack apps

The next BriefingsDirect Voice of the Customer hybrid cloud advancements discussion explores the application development and platform-as-a-service (PaaS) benefits from Microsoft Azure Stack.

We’ll now learn how ecosystems of solutions partners are teaming to provide specific vertical industries with applications and services that target private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the latest in successful cloud-based applications development and deployment is our panel, Martin van den Berg, Vice President and Cloud Evangelist at Sogeti USA, based in Cleveland, and Ken Won, Director of Cloud Solutions Marketing at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what are some of the trends that are driving the adoption of hybrid cloud applications specifically around the Azure Stack platform?

Van den Berg: What our clients are dealing with on a daily basis is an ever-expanding data center; they see ever-expanding private clouds in their data centers. They are trying to get into the hybrid cloud space to reap all the benefits from both an agility and a compute perspective.

Martin van den Berg

van den Berg

They are trying to get out of the data center space, to see how the ever-growing demand can leverage the cloud. What we see is that Azure Stack will bridge the gap between the cloud that they have on-premises, and the public cloud that they want to leverage — and basically integrate the two in a true hybrid cloud scenario.

Gardner: What sorts of applications are your clients calling for in these clouds? Are these cloud-native apps, greenfield apps? What are they hoping to do first and foremost when they have that hybrid cloud capability?

Van den Berg: We see a couple of different streams there. One is cloud-native development. More and more of our clients are going into cloud-native development. We recently published a white paper showing that 30 percent of applications being built today are already cloud-native. We expect that to grow to more than 60 percent of new applications over the next three years.

HPE Partnership Case Studies Show Power of Flex Capacity Financing

The issue that some of our clients have has to do with some of the data consumed by these applications. Either due to compliance issues, or because their information security divisions are not comfortable with it, they don't want to put this data in the public cloud. Azure Stack bridges that gap as well.

They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That’s a unique capability.

Microsoft Azure Stack can bridge the gap between the on-premises data center and what they do in the cloud. They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That’s a unique capability.

On the other hand, we also see some of our clients looking at Azure Stack as a bridge in the infrastructure-as-a-service (IaaS) space. Even there, where clients are not willing to expand their own data center footprint, they can use Azure Stack as a means to move seamlessly to the Azure public IaaS cloud.

Gardner: Ken, does this jibe with what you are seeing at HPE, that people are starting to creatively leverage hybrid models? For example, are they putting apps in one type of cloud and data in another, and then also using their data center and expanding capacity via public cloud means?

Won: We see a lot of it. The customers are interested in using both private clouds and public clouds. In fact, many of the customers we talk to use multiple private clouds and multiple public clouds. They want to figure out how they can use these together — rather than as separate, siloed environments. The great thing about Azure Stack is the compatibility between what’s available through Microsoft Azure public cloud and what can be run in their own data centers.

Ken Won

The customer concerns are data privacy, data sovereignty, and security. In some cases, there are concerns about application performance. In all these cases, it’s a great situation to be able to run part or all of the application on-premises, or on an Azure Stack environment, and have some sort of direct connectivity to a public cloud like Microsoft Azure.

Because you can get full API compatibility, the applications that are developed in the Azure public cloud can be deployed in a private cloud — with no change to the application at all.
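
To make that portability concrete, here is a minimal, hedged Python sketch of the idea: the same Azure Resource Manager (ARM) template deployment call can target either the Azure public cloud or an Azure Stack stamp simply by switching the management endpoint. The endpoint URLs, API version, resource names, and token handling below are placeholders for illustration, not details taken from this discussion.

```python
import json
import requests

# Illustrative only: endpoints, api-version, and auth are placeholders.
AZURE_PUBLIC = "https://management.azure.com"
AZURE_STACK = "https://management.local.azurestack.external"  # example stamp endpoint

def deploy_template(arm_endpoint, subscription_id, resource_group,
                    deployment_name, template, token,
                    api_version="2021-04-01"):
    """PUT the same ARM template to whichever cloud the endpoint points at."""
    url = (f"{arm_endpoint}/subscriptions/{subscription_id}"
           f"/resourcegroups/{resource_group}"
           f"/providers/Microsoft.Resources/deployments/{deployment_name}")
    body = {"properties": {"mode": "Incremental", "template": template}}
    resp = requests.put(url, params={"api-version": api_version},
                        headers={"Authorization": f"Bearer {token}",
                                 "Content-Type": "application/json"},
                        data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()

# The application template itself does not change; only the target endpoint does.
# deploy_template(AZURE_PUBLIC, sub_id, "rg-app", "release-42", template, token)
# deploy_template(AZURE_STACK,  sub_id, "rg-app", "release-42", template, token)
```

The point of the sketch is only that the deployment artifact stays identical; the business decision about where it lands is reduced to a single parameter.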

Gardner: Martin, are there specific vertical industries gearing up for this more than others? What are the low-lying fruit in terms of types of apps?

Hybrid healthcare files

Van den Berg: I would say that hybrid cloud is of interest across the board, but I can name a couple of examples of industries where we truly see a business case for Azure Stack.

One of them is a client of ours in the healthcare industry. They wanted to standardize on the Microsoft Azure platform. One of the things that they were trying to do is deal with very large files, such as magnetic resonance imaging (MRI) files. What they found is that in their environment such large files just do not work from a latency and bandwidth perspective in a cloud.

With Microsoft Azure Stack, they can keep these larger files on-premises, very close to where they do their job, and they can still leverage the entire platform and still do analytics from a cloud perspective, because that doesn’t require the bandwidth to interact with things right away. So this is a perfect example where Azure Stack bridges the gap between on-premises and cloud requirements while leveraging the entire platform.
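
One simple way to picture that split is sketched below: bulky imaging files stay on local, on-premises storage, and only small, derived results go to the public cloud for analytics. The size threshold, paths, and routing labels are placeholders invented for illustration, not the client's actual rules.

```python
import os

LOCAL_THRESHOLD_BYTES = 500 * 1024 * 1024  # e.g., keep anything over 500 MB on-premises

def route_artifact(path: str) -> str:
    """Decide where an artifact lives: large source files stay local,
    small derived results can go to the public cloud for analytics."""
    size = os.path.getsize(path)
    if size > LOCAL_THRESHOLD_BYTES:
        return "on-premises (Azure Stack storage)"
    return "public cloud (analytics workspace)"

# Example: the MRI study stays local; the extracted measurements travel.
# route_artifact("/data/mri/study-001.dcm")          -> on-premises
# route_artifact("/data/mri/study-001-metrics.json") -> public cloud
```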

Gardner: What are some of the challenges that these organizations are having as they move to this model? I assume that it’s a little easier said than done. What’s holding people back when it comes to taking full advantage of hybrid models such as Azure Stack?

Van den Berg: The level of cloud adoption is not really yet where it should be. A lot of our clients have cloud strategies that they are implementing, but they don’t have a lot of expertise yet on using the power that the platform brings.

Some of the basic challenges we need to solve with clients come from just getting started with the Microsoft Azure cloud and public cloud services. Azure Stack simplifies that, because they now have the cloud on-premises. With that, it's going to be easier for them to spin up workload environments and try all of this in a secure environment within their own walls, their own data centers.

Should a specific workload go in a private cloud, or should another workload go in a public cloud?

Won: We see a similar thing with our client base as customers look to adopt hybrid IT environments, a mix of private and public clouds. Some of the challenges they have include how to determine which workload should go where. Should a specific workload go in a private cloud, or should another workload go in a public cloud?

We also see some challenges around processes, organizational process and business process. How do you facilitate and manage an environment that has both private and public clouds? How do you put the business processes in place to ensure that they are being used in the proper way? With Azure Stack — because of that full compatibility with Azure — it simplifies the ability to move applications across different environments.

Gardner: Now that we know there are challenges, and that we are not seeing the expected adoption rate, how are organizations like Sogeti working in collaboration with HPE to give a boost to hybrid cloud adoption?

Strategic, secure, scalable cloud migration 

Van den Berg: As the Cloud Evangelist with Sogeti, for the past couple of years I have been telling my clients that they don't need a data center. The truth is, they probably still need some form of on-premises. But the future is in the clouds from a scalability and agility perspective — and given the hyperscale at which Microsoft is building out its Azure cloud capabilities, no enterprise client can keep up with that on its own.

We try to help our clients define a strategy and governance — how they approach cloud, and which workloads they can put where based on their internal regulations and compliance requirements — and then we run migration projects.

The future is in the clouds, from a scalability and agility perspective.

We have a service offering called the Sogeti Cloud Assessment, where we go in and evaluate their application portfolio for cloud readiness. At the end of this engagement, we start moving things right away. We have been really successful with many of our clients in starting to move workloads to the cloud.

Having Azure Stack will make that even easier. Now, when a cloud assessment turns up issues with moving to the Microsoft Azure public cloud — because of compliance or privacy concerns, or just comfort (sometimes the information security departments don't feel comfortable moving certain types of data to a public cloud setting) — we can still move those applications to the cloud and leverage its full power and scalability while keeping everything within the walls of our clients' data centers. That's how we are trying to accelerate cloud adoption, and we truly feel that Azure Stack bridges that gap.

HPE Partnership Case Studies Show Power of Flex Capacity Financing

Gardner: Ken, same question, how are you and Sogeti working together to help foster more hybrid cloud adoption?

Won: The cloud market has been maturing and growing. In the past, it’s been somewhat complicated to implement private clouds. Sometimes these private clouds have been incompatible with each other, and with the public clouds.

With Azure Stack, we now have almost an appliance-like experience: systems that we build in our factories, pre-configure, and pretest, and then get into the customer's environment so they can quickly get their private cloud up and running. We can help them with the implementation and set it up so that Sogeti can then help with the cloud-native applications work.

With Sogeti and HPE working together, we make it much simpler for companies to adopt the hybrid cloud models and to quickly see the benefit of moving into a hybrid environment.

Sogeti and HPE work together to make it much simpler for companies to adopt the hybrid cloud models.

Van den Berg: In talking to many of our clients, when we see the adoption of private cloud in their organizations — if they are really honest — it doesn’t go very far past just virtualization. They truly haven’t leveraged what cloud could bring, not even in a private cloud setting.

So talking about hybrid cloud, it is very hard for them to leverage the power of hybrid clouds when their own private cloud is just virtualization. Azure Stack can help them to have a true private cloud within the walls of their own data centers and so then also leverage everything that Microsoft Azure public cloud has to offer.

Won: I agree. When they talk about a private cloud, they are really talking about virtual machines, or virtualization. But because the Microsoft Azure Stack solution provides built-in services that are fully compatible with what's available through the Microsoft Azure public cloud, it truly provides the full cloud experience. These are services that go beyond just virtualization running within the customer's data center.

Keep IT simple

I think Azure Stack adoption will be a huge boost to organizations looking to implement private clouds in their data centers.

Gardner: Of course, your typical end-user worker is interested primarily in their apps; they don't really care where those apps are running. But when it comes to new application development, rapid application development (RAD), these are some of the pressing issues that most businesses tell us concern them.

So how does RAD, along with some DevOps benefits, play into this, Martin? How are the development people going to help usher in cloud and hybrid cloud models because it helps them satisfy the needs of the end-users in terms of rapid application updates and development?

Van den Berg: This is also where we are talking about the difference between virtualization, private cloud, hybrid clouds, and, definitely, cloud services. Application development staff still working in the traditional model run into issues provisioning their development environments, and sometimes their test environments.

A lot of cloud-native application development projects are much easier because you can spin-up environments on the go. What Azure Stack is going to help with is having that environment within the client’s data center; it’s going to help the developers to spin up their own resources.

There is going to be on-demand orchestration and provisioning, which is truly beneficial to application development — and it’s really beneficial to the whole DevOps suite.

There is going to be on-demand orchestration and provisioning, which is truly beneficial to application development — and it’s really beneficial to the whole DevOps suite.

We need to integrate the business, development, and IT operations to deliver value to our clients. Waiting multiple weeks for a development or test environment to spin up is an issue our clients are still dealing with today. That's where Azure Stack is going to bridge the gap, too.

Won: There are a couple of things that we see happening that will make developers much more productive and able to bring new applications or updates quicker than ever before. One is the ability to get access to these services very, very quickly. Instead of going to the IT department and asking them to spin up services, they will be able to access these services on their own.

The other big thing that Azure Stack offers is compatibility between private and public cloud environments. For the first time, the developer doesn’t have to worry about what the underlying environment is going to be. They don’t have to worry about deciding, is this application going to run in a private cloud or a public cloud, and based on where it’s going, do they have to use a certain set of tools for that particular environment.

Now that we have compatibility between the private cloud and the public cloud, the developer can just focus on writing code, focus on the functionality of the application they are developing, knowing that that application now can easily be deployed into a private cloud or a public cloud depending on the business situation, the security requirements, and compliance requirements.

So it’s really about helping the developers become more effective and helping them focus more on code development and applications rather than having them worry about the infrastructure, or waiting for infrastructure to come from the IT department.

HPE Partnership Case Studies Show Power of Flex Capacity Financing

Gardner: Martin, for those organizations interested in this and want to get on a fast track, how does an organization like Sogeti working in collaboration with HPE help them accelerate adoption?

Van den Berg: This is where we partner heavily with HPE to bring the best solutions to our clients. We have all kinds of proofs of concept, we have accelerators, and one of the things we talked about already is getting developers up to speed faster. We can truly leverage those accelerators and help our clients adopt cloud, and adopt all the services available on the hybrid platform.

We have all heard the stories about standardizing on microservices, on a service fabric, or on serverless computing, but developers have not had access to this until now, and IT departments have been slow to push it to them.

The accelerators that we have, the approaches that we have, and the proofs of concept that we can do with our client — together with HPE —  are going to accelerate cloud adoption with our clientele.

Gardner: Any specific examples, some specific vertical industry use-cases where this really demonstrates the power of the true hybrid model?

When the ship comes in

Won: I can share a couple of examples of the types of companies we are working with in the hybrid area, and the places where we see typical customers using Azure Stack.

People want to implement disconnected applications or edge applications. These are situations where you may have a data center or an environment running an application that you may either want to run in a disconnected fashion or run to do some local processing, and then move that data to the central data center.

One example of this is the cruise ship industry. All large cruise ships essentially have data centers running the ship and supporting the thousands of customers on board. The cruise lines want to put an application on their many ships and run the same application on all of them. They want to be able to disconnect from the central data center while the ship is out at sea, do a lot of processing and analytics in the shipboard data center, and then, when the ship comes into port and reconnects to the central data center, send back only the results of the analysis.

This is a great example of an application that can be developed once and deployed in many different environments; you can do that with Azure Stack. It's ideal for running that same application in multiple environments, in either disconnected or connected situations.

Van den Berg: In the financial services industry, we know they are heavily regulated. We need to make sure that they are always in compliance.

So one of the things we did in the financial services industry involved one of our accelerators, a tool called Sogeti OneShare. It's a portal solution on top of Microsoft Azure that helps with orchestration and with the whole DevOps concept. We were able to have the edge node be Azure Stack — building applications, keeping some of the data within the data center on the Azure Stack appliance, but still leveraging the power of the cloud and all the analytics performance available there.

That’s what DevOps is supposed to deliver — faster value to the business, leveraging the power of clouds.

Van den Berg: We just did a project in this space, and we were able to deliver functionality to the business within eight weeks of the project's start. They had never seen that before — a project that lasts just eight weeks and truly delivers business value. That's the direction we should be taking. That's what DevOps is supposed to deliver — faster value to the business, leveraging the power of clouds.

Gardner: Perhaps we could now help organizations understand how to prepare from a people, process, and technology perspective to be able to best leverage hybrid cloud models like Microsoft Azure Stack.

Martin, what do you suggest organizations do now in order to be in the best position to make this successful when they adopt?

Be prepared

Van den Berg: Make sure that the cloud strategy and governance are in place. That is where this should always start.

Then start training developers, and make sure that the IT department is the broker of cloud services. Traditionally, the IT department is the broker for everything that happens on-premises within the data center. In the cloud space, that doesn't always hold; because it is so easy to spin things up, sometimes the line of business does the deploying itself.

We try to enable IT departments and operators within our clients to be the broker of cloud services and to help with the adoption of Microsoft Azure cloud and Azure Stack. That will help bridge the gap between the clouds and the on-premises data centers.

Gardner: Ken, how should organizations get ready to be in the best position to take advantage of this successfully?

Mapping the way

Won: As IT organizations look at this transformation to hybrid IT, one of the most important things is to have a strong connection to the line of business and to the business goals, and to be able to map those goals to strategic IT priorities.

Once you have done this mapping, the IT department can look at these goals and determine which projects should be implemented and how they should be implemented. In some cases, they should be implemented in private clouds, in some cases public clouds, and in some cases across both private and public cloud.

The task then changes to understanding the workloads, the characterization of the workloads, and looking at things such as performance, security, compliance, risk, and determining the best place for that workload.
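
As a rough illustration of that kind of workload characterization, the sketch below scores a workload on a few of the dimensions mentioned here and suggests a venue. The attributes, heuristics, and example workloads are invented for the illustration; a real assessment would weigh many more factors.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool     # needs to sit near users or data
    regulated_data: bool        # PII, health, or financial data
    bursty_demand: bool         # benefits from elastic public capacity
    steady_state_heavy: bool    # large, predictable baseline load

def suggest_placement(w: Workload) -> str:
    """Tiny heuristic: keep sensitive or latency-bound work private,
    push elastic, non-sensitive work to public cloud, otherwise go hybrid."""
    if w.regulated_data or w.latency_sensitive:
        return "private cloud / Azure Stack"
    if w.bursty_demand and not w.steady_state_heavy:
        return "public cloud"
    return "hybrid (split tiers across private and public)"

for wl in [
    Workload("patient-imaging", latency_sensitive=True, regulated_data=True,
             bursty_demand=False, steady_state_heavy=True),
    Workload("marketing-site", latency_sensitive=False, regulated_data=False,
             bursty_demand=True, steady_state_heavy=False),
]:
    print(wl.name, "->", suggest_placement(wl))
```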

Then, it’s finding the right platform to enable developers to be as successful and as impactful as possible, because we know ultimately the big game changer here is enabling the developers to be much more productive, to bring applications out much faster than we have ever seen in the past.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



How a Florida school district tames the wild west of education security at scale and on budget

Bringing a central IT focus to large public school systems has always been a challenge, but bringing a security focus to thousands of PCs and devices has been compared to bringing law and order to the Wild West.

For the Clay County School District in Florida, a team of IT administrators is grabbing the bull by the horns nonetheless to create a new culture of computing safety — without breaking the bank.

The next BriefingsDirect security insights discussion examines how Clay County is building a secure posture for their edge, network, and data centers while allowing the right mix of access for the exploration necessary in an educational environment.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn how to ensure that schools are technically advanced and secure at low cost and at high scale, we're joined by Jeremy Bunkley, Supervisor of the Clay County School District Information and Technology Services Department; Jon Skipper, Network Security Specialist at the Clay County School District; and Rich Perkins, Coordinator for Information Services at the Clay County School District. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the biggest challenges to improving security, compliance, and risk reduction at a large school district?

Bunkley: I think the answer actually scales across the board. The problem even bridges into businesses. It’s the culture of change — of making people recognize security as a forethought, instead of an afterthought. It has been a challenge in education, which can be a technology laggard.

Getting people to start the recognition process of making sure that they are security-aware has been quite the battle for us. I don’t think it’s going to end anytime soon. But we are starting to get our key players on board with understanding that you can’t clear-text Social Security numbers and credit card numbers and personally identifiable information (PII). It has been an interesting ride for us, let’s put it that way.

Gardner: Jon, culture is such an important part of this, but you also have to have tools and platforms in place to help give reinforcement for people when they do the right thing. Tell us about what you have needed on your network, and what your technology approach has been?

Skipper: Education is one of those weird areas where software development has always been lacking on the security side of the house. Security has never even been inside the room. So one of the things we have tried to do in education, at least with the Clay County School District, is to modify that view through change management. We are trying to introduce a security focus. We try to insert ourselves and highlight areas that might be a bad practice.

Jonathan Skipper

One of our vendors uses plain text for passwords, and so we went through with them and showed them how that’s a bad practice, and we made a little bit of improvement with that.

I evaluate our policies and how we manage the domains, finding things that date from a long time ago and are no longer needed. We can pull that information out — for instance, where Social Security numbers had been put into a document that was no longer needed. We have been trying really hard to find that stuff and then knock it down as much as we can.

Access for all, but not all-access

Gardner: Whenever you are trying to change people’s perceptions, behaviors, culture, it’s useful to have both the carrot and a stick approach.

So to you Rich, what’s been working in terms of a carrot? How do you incentivize people? What works in practice there?

Perkins: That’s a tough one. We don’t really have a carrot that we use. We basically say, “If you are doing the wrong things, you are not going to be able to use our network.”  So we focus more on negatives.

The positives would be you get to do your job. You get to use the Internet. We don’t really give them something more. We see security as directly intertwined with our customer service. Every person we have is our customer and our job is to protect them — and sometimes that’s from themselves.

Rich Perkins

So we don't really have a carrot-type system. We don't, for example, let students play games as a reward for staying out of trouble. We give everybody the same access and treat everybody the same: either you are a student and you get this level of access, or you are a staff member and you get that level of access, or you don't get access.

Gardner: Let’s get background on the Clay County School District. Tell us how many students you have, how many staff administrators, the size and scope of your school district?

Bunkley: Our school district is the 22nd largest in Florida. We are right on the edge of small and medium for Florida, which in most places would be a very large school district. We run about 38,500 students.

And as far as our IT team goes — covering our student information system, our Enterprise Resource Planning (ERP) system, security, desktop support, network infrastructure support, and our web services — we have about 48 people total in our department.

Our scope is literally everything. For some reason IT means that if it plugs into a wall, we are responsible for it. That’s generally a true statement in education across the board, where the IT staff tends to be a Jack-of-all-trades, and we fix everything.

Practical IT

Gardner: Where you are headed in terms of technology? Is there a one-to-one student-to-device ratio in the works? What sort of technology do you enable for them?

Bunkley: I am extremely passionate about this, because the one-to-one scenario seems to be the buzzword, and we generally despise buzzwords in this office and we prefer a more practical approach.

The idea of one-to-one is itself to me flawed, because if I just throw a device in a student’s hand, what am I actually doing besides throwing a device in a student’s hand? We haven’t trained them. We haven’t given them the proper platform. All we have done is thrown technology.

And when I hear the claim that kids inherently know how to use technology today, it bothers me, because kids inherently know how to use social media, not technology. They are not production-driven, they are socially driven, and that is a sticking point with me.

We are in fact moving to one-to-one, but in a nontraditional sense. We have established a one-to-one platform so we can introduce a unified platform that all students and employees reach through a portal system. We happen to use ClassLink; there are various other vendors out there, that's just the one we happen to use.

We have integrated that in moving to Google Apps for Education and we have a very close relationship with Google. It’s pretty awesome, to be quite honest with you.

So we are moving in the direction of Chromebooks, because it’s just a fiscally more responsible move for us.

I know Microsoft is coming out with Windows 10 S, it’s kind of a strong move on their part. But for us, just because we have the expertise on the Google Apps for Education, or G Suite, it just made a lot of sense for us to go that direction.

So we are moving in one-to-one now with the devices, but the device is literally the least important — and the last — step in our project.

Non-stop security, no shenanigans

Gardner: Tell us about the requirements now for securing the current level of devices, and then for the new one. It seems like you are going to have to keep the airplane flying while changing the wings, right? So what is the security approach that works for you that allows for that?

Skipper: Clay County School District has always followed trends as far as devices go. So we actually have a good mixture of devices in our network, which means that no one solution is ever the right solution.

So, for example, we still have some iPads out in our networks, we still have some older Apple products, and then we have a mixture of Chromebooks and also Windows devices. We really need to make sure that we are running the right security platform for the full environment.

We are transitioning more and more to a take-home philosophy — that's where we as an IT department see this going — so that if the decision is made to send the entire student population home with their devices, we are going to be ready.

We have coordinated with our content filter company, and they have some extensions we can deploy that lock the Chromebooks into a filtered state regardless of what network they are on. That's been really successful in identifying, and sometimes blocking, students' late-night searches. We have also been able to identify some shenanigans that might be taking place, based on some interesting web searches they do over YouTube, for example. That's worked really well.

Our next objective is to figure out how to secure our Windows devices and possibly even the Mac devices. While our content filter does a good job as far as securing the content on the Internet, it’s a little bit more difficult to deploy into a Windows device, because users have the option of downloading different Internet browsers. So, content filtering doesn’t really work as well on those.

I have deployed Bitdefender to my laptops, and also to take-home Apple products. That allows me to put in more content filtering, and use that to block people from malicious websites that maybe the content filter didn’t see or was unable to see due to a different browser being used.

In those aspects we definitely are securing our network down further than it ever has been before.

Block and Lock

Perkins: With Bitdefender, one of the things we like is that if we have those devices go off network, we can actually have it turn on the Bitdefender Firewall that allows us to further lock down those machines or protect them if they are in an open environment, like at a hotel or whatever, from possible malicious activity.

And it allows us to block executables at some point. So we can actually go in and say, "No, I don't want you to be able to run this browser, because I can't do anything to protect you, I can't watch what you do, and I can't keep you from doing things you shouldn't do." Those are all very useful tools in a single pane of glass, where we can see all of those devices at one time and monitor and manage them. It saves us a lot of time.

Bunkley: I would follow up on that with a base concept, Dana, and our base concept is an external network. We start from the idea that we are an everywhere network. We are not only aiming to defend our internal network while you are here on campus; we are literally an externally built network, where our network extends directly down into the student's and teacher's home.

We have gone as far as moving everything we physically can out of this network, right down to our firewall. We are moving our domain controllers external to the network to create literally an everywhere network. So our security focus is not just internal; it is focused on external first, then internal.

Gardner: With security products, what have you been using, what wasn’t working, and where do you expect to go next given those constraints?

No free lunch

Perkins: Well, we can tell you that “free” is not always the best option; as a matter of fact, it’s almost never a good option, but we have had to deal with it.

We were previously using an antivirus called Avast, and it’s a great home product. We found out that it has not been the best business-level product. It’s very much marketed to education, and there are some really good things about it. Transferring away from it hasn’t been the easiest because it’s next to impossible to uninstall. So we have been having some problems with that.

We have also tested some other security measures and programs along the way that haven't been so successful. And we are always in the process of evaluating where we are. We are never okay with the status quo. Even if we get to where we want to be, I don't think any of us will be satisfied, and a lot of this is built on that — we always want to go a step further. I know that's cliché, but for an institution of this size, the reason we are able to do some of this is that the staff assembled here is second to none for an educational institution.

So even in the processes that we have identified, which were helter-skelter before we got here, we have some more issues to continue working out, but we won’t be satisfied with where we are even if we achieve the task.

Skipper: One of the things that our office actually hates is just checking the box on a security audit. I mean, we are very vocal with the auditors when they come in. We don't do things just to satisfy their audit. We actually look at the audit, we look at the intent of the question, and if we find merit in it, we are going to go and meet that expectation and then make it better. Audits are general. We are going to exceed them and make it a better-functioning process than just saying, "Yes, I have purchased an antivirus product," or "I have purchased X." To us, that's unacceptable.

Bunkley: Audits are a good thing, and nobody likes to do them because they are time-consuming, but you do them because they are required by law, at least for our institution. So instead of treating it as a generic audit that we ignore, we have adopted the audit as a very useful self-reflection tool. It's nice to not have the same set of eyes on your work all the time. And instead of taking offense when someone comes in and says, "You are not doing this well enough," we have literally changed our internal culture here: audits are not a bad thing; audits are a desired thing.

Gardner: Let’s go around the table and hear how you began your journey into IT and security, and how the transition to an educational environment went.

IT’s the curriculum

Bunkley: I started in the banking industry. Those hours were crazy and the pressure was pretty high. So as soon as I left that after a year, I entered education, and honestly, I entered education because I thought the schedule was really easy and I kind of copped out on that. Come to find out, I am working almost as many hours, but that’s because I have come to love it.

This is my 17th year in education, so I have been in a few districts now. Wholesale change is what I have been hired to do before, and that's also what I was hired to do here in Clay. We want to change the culture and make IT part of the instruction instead of a separate segment of education.

We have to be interwoven into everything; otherwise we are going to be on an island, and the last time I checked, the definition of education is to educate children. So IT by itself can never be a high-functioning department in education. We have decided instead to go to instruction, to professional development, and to administration, and involve ourselves there.

Gardner: Jon, tell us about your background and how the transition has been for you.

Skipper: I was at active-duty Air Force until 2014 when I retired after 20 years. And then I came into education on the side. I didn’t really expect this job, wasn’t mentally searching for it. I tried it out, and that was three years ago.

It's been an interesting environment. Education, and especially a small IT department like this one, is one of those interesting places where you can come in and really expand on your weak areas. That's what I actually like about this. If I need to practice my group policy knowledge, I can dive in there and effect that change. Overall this has been a good change: totally different from the military, a lot looser in many ways, but really interesting.

Gardner: Rick, same question to you, your background and how did the transition go?

Perkins: I spent 21 years in the military; I was Navy. When I retired in 2010, I went to work for a smaller district in education, mainly because they were the first one to offer me a job. In that smaller district, unlike here where we have eight people doing operations and a bigger department (Jeremy understands, given where he came from), it was pretty much me doing every aspect of it. So you do a little security, you do a little bit of everything, which I enjoyed, because you are your own boss, but you are not your own boss.

You still have people residing over you and dictating how you are going to work, but I really enjoyed the challenge. Coming from IT security in the military and then coming into education, it’s almost a role reversal where we came in and found next to no policies.

I am used to a black-and-white world. So we are trying to interject some of that and some of the security best practices into education. You have to be flexible because education is not the military, so you can’t be that stringent. So that’s a challenge.

Gardner: What are you using to put policies in place enforce them? How does that work?

Policy plans

Perkins: On the [Microsoft] Active Directory side, we use group policy like most people do, and we try to automate it as much as we can. On the student side we are switching over very heavily to Google, which effectively has its own version of Active Directory with group policy. I will let Jon speak more to the security side, though we have used various programs like PDQ for patch management, which allows us to push out software. We use some logging systems from ManageEngine. And then, as we have said before, we use Bitdefender to push a lot of policy and security out as well, and we've been reevaluating some other things.

We also use SolarWinds to monitor our network and we actually manage changes to our network and switching using SolarWinds, but on the actual security side, I will let Jon get more specific for you.

Skipper: When we came in … there was a fear that having too much in policy equated to too much auditing overhead. One of the first things we did was identify what we could lock down, and the easiest one was the content filter.

The content filter met such stipulations as making sure adult material is not acceptable on the network. We had that down. But it didn't really take into account the dynamic nature of the Internet — sites popping up every minute or second — and how you maintain that for unclassified and uncategorized sites.

So one of the things we did was look at whether another vendor had a better product for that aspect of it, and we got that working; I think that's been working a lot better. Then we moved on to the next area — okay, cool, now we have content filtering down, let's move on to the rest of the network. A lot of that is about finding someone else who is already doing it, borrowing their work, and making it your own.

We look at some of the bigger school districts and see how they are doing it — Chicago and Los Angeles, I think. We both looked at some of their policies where we could find them. I also found a lot from higher education, from some of the universities; their policies are much more along the lines of where we want to be. I think they have it better than some of the K-12 districts do.

So we have been going through those, and we are going to have to rewrite policy — we are in an active rewrite of our policies right now. We are taking all of those in, looking at them, figuring out which ones work in our environment, and then making sure we do a really good search and replace.

Gardner: We have talked about people, process and technology. We have heard that you are on a security journey and that it’s long-term and culturally oriented.

Let's look at what you get when you do it right, particularly vis-à-vis education. Do you have any examples of where you have been able to put in the right technology, add policy and process improvements, and then culturally attune the people? What does that get you? How do you turn a problem student into a computer scientist at some point? Tell us some examples of when it works and what it gets you.

Positive results

Skipper: When we first got in here, we were a Microsoft district. We had some policies in place to help prevent data loss, and stuff like that.

One of the first things we did was review those policies and activate them, and we started getting some hits. We were surprised at some of the hits that we saw, and at what we saw going out. We already knew we were moving to the Google environment, so we continued that process.

We researched a lot, and one of the things we discovered is that with just a minor tweak in a user's procedures, we could introduce that user to email encryption, for example, and get them used to using it. With the Gmail solution, we are able to add an extension that looks at email as it goes out, finds keywords — or what may be PII — and automatically encrypts the email, preventing those kinds of breaches from going out there. So that's really been helpful.
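
The extension itself isn't described in detail here, but the underlying check is easy to picture. The hedged sketch below shows the general pattern of scanning an outgoing message for likely PII and flagging it for encryption before it leaves; the regular expressions and keyword list are illustrative, not the district's actual rules.

```python
import re

# Illustrative patterns; a production filter would use a vetted PII library.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
KEYWORDS = ("social security", "date of birth", "student id")

def needs_encryption(body: str) -> bool:
    """Return True if the outgoing message appears to contain PII."""
    lowered = body.lower()
    if SSN_PATTERN.search(body) or CARD_PATTERN.search(body):
        return True
    return any(k in lowered for k in KEYWORDS)

message = "Per your request, the student's SSN is 123-45-6789."
if needs_encryption(message):
    print("Encrypt before sending")  # hand off to the mail gateway's encryption step
```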

As far as taking a student who may be on the wrong path and reeducating them and bringing them back into the fold, Bitdefender has actually helped out on that one.

We had a student a while back who went out to YouTube and did a simple search on how to crash the school network, and he found about five links. He researched those links and found that a batch file of a certain type would crash a school server.

He implemented it and started trying to launch that attack, and Bitdefender was able to see the batch file, see what it did, and prevent it. By quarantining the file, it reported the incident to me very quickly from the moment he introduced the attack, and it identified the student. We were able to sit down with the administrators, talk to the student about the process, and educate him on the dangers of attacking a school network and the possible repercussions of it.

Gardner: It certainly helps when you can let them know that you are able to track and identify those issues, and then trace them back to an individual. Any other anecdotes about where the technology process and people have come together for a positive result?

Applied IT knowledge for the next generation

Skipper: One of the things that’s really worked well for the school district is what we call Network Academy. It’s taught by one of our local retired master chiefs, and he is actually going in there and teaching students at the high school level how to go as far as earning a Cisco Certified Network Associate (CCNA)-level IT certificate.

If a student comes in and they try hard enough, they will actually figure it out and they can leave when they graduate with a CCNA, which is pretty awesome. A high school student can walk away with a pretty major industry certification.

We like to try and grab these kids as soon as they leave high school, or even before they leave high school, and start introducing them to our network. They may have a different viewpoint on how to do something that’s revolutionary to us.

But we like having that aspect of it: we can educate those kids who are coming in and getting their industry certifications, and we are able to utilize them before they move on to college or another job that pays more than we do.

Bunkley: Charlie Thompson leads this program that Jon is speaking of, and actually over half of our team has been through the program. We didn't create it; we have just taken advantage of the opportunity. We even tailor the classes to some of the specific things that we need. We have effectively created our own IT hiring pipeline out of this program.

Gardner: Next let’s take a look to the future. Where do you see things going, such as more use of cloud services, interest in unified consoles and controls from the cloud as APIs come into play more for your overall IT management? Encryption? Where do you take it from here?

Holistic solutions in the cloud

Bunkley: Those are some of the areas we are focusing on heavily as we move to that "anywhere network." The unified platform for management is going to be a big deal to us — it is a big deal to us already. Encryption is something we take very seriously because we have a team of eight protecting the data of about 42,000 users.

Consider the perfect cybercrime: reaching down into a 7th or an 8th grader's records, stealing all of their personal information, and taking and using that kid's identity. That kid won't even know that their identity has been stolen.

We consider that a very serious charge of ours to take on. So we will continue to improve our protection of the students’ and teachers’ PII — even if it sometimes means protecting them from themselves. We take it very seriously.

As we move to the cloud, that unified management platform leads to a more unified security platform. As the operating systems continue to mature, they seem to be going different ways, and what's good for Mac is not always good for Chrome, and not always good for Windows. But as we move forward with our projects we bring everything back to that central point — can the three be operated from a single point of connection, so that we can save money moving forward? Just because it's a cool technology and we want to do it doesn't mean it's the right thing for us.

Sometimes we have to choose an option that we don’t necessarily like as much, but pick it because it is better for the whole. As we continue to move forward, everything will be focused on that centralization. We can remain a small and flexible department to continue making sure that we are able to provide the services needed internally as well as protect our users.

Skipper: I think Jeremy hit it pretty solid on that one. As we integrate more with cloud services — Google and others — we are utilizing those APIs, and we are leading the vendors we use and pushing them into new areas. Lightspeed, for instance, is integrating more and more with Google and utilizing their API for content filtering, and even for mobile device management (MDM) that is more integrated into the Google and Apple platforms, to make sure that students are well protected and that we have all the tools they need available at any given time.

We are really leaning heavily on more cloud services, and also the interoperability between APIs and vendors.

Perkins: Public education is moving more toward the realm of college education, where the classroom is not a physical classroom — a classroom is anywhere in the world. We are tasked with supporting and protecting students no matter where they are located. We have to take care of our customers either way.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.



How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge — and closer to consumers — to support live, multi-screen Internet Protocol (IP) entertainment delivery.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected — with significant benefits to the end-user experience, as well as with new monetization values to the content providers.

Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that rely primarily on linear programming, with people watching TV at home.

Glodina Connan-Lostanlen

And guess what? Now they are watching it on the go — on their telephones, on their iPads, on their laptops, anywhere. So they have to find a way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it. So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. That means they can try new programs, new channels, and they don’t have to keep them forever if they don’t work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences that are either static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change — when you go to fully digital, IP-delivered content.

As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now — with the new technology on the back-end — to have much more of a relationship with the customer, a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That's another big upside of the fragmentation, and of the advent of IP technology — all the way from content creation to making a program and distributing it. It gives the content creators access to the unique viewers, and the ability to really engage with them — knowing what they like — and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model and how to value the targeted advertising; there are different opinions on this, and there is also the unknown of whether several generations of viewers are willing to accept such advertising.

That is a great topic right now, and very relevant when we talk about linear advertising and dynamic ad insertion (DAI). Now we are able to — at the very edge of the signal distribution, the video signal distribution — insert an ad that is relevant to each viewer, because you know their preferences, you know who they are, and you know what they are watching, and so you can determine that an ad is going to be relevant to them.
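
As a toy illustration of that dynamic ad insertion decision, the sketch below picks the most relevant ad for a viewer at the point of delivery by matching the viewer's known interests against each ad's targeting tags. The profile fields, tags, and scoring are invented for the example; a production DAI system also involves ad decision servers, frequency capping, and measurement.

```python
from dataclasses import dataclass, field

@dataclass
class Ad:
    ad_id: str
    tags: set = field(default_factory=set)

@dataclass
class Viewer:
    viewer_id: str
    interests: set = field(default_factory=set)

def choose_ad(viewer: Viewer, ads: list, default_id: str = "house-ad") -> str:
    """Score each ad by overlap with the viewer's interests; fall back to a house ad."""
    best_id, best_score = default_id, 0
    for ad in ads:
        score = len(ad.tags & viewer.interests)
        if score > best_score:
            best_id, best_score = ad.ad_id, score
    return best_id

ads = [Ad("suv-spot", {"autos", "outdoors"}), Ad("cruise-spot", {"travel", "family"})]
viewer = Viewer("v123", {"travel", "cooking"})
print(choose_ad(viewer, ads))  # -> cruise-spot, spliced into this viewer's stream
```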

But that means media and entertainment customers have to revisit the whole infrastructure. It's not necessarily a rebuild; they can put in add-ons. They don't have to throw away what they had; they can maintain the legacy infrastructure and add the IP-enabled infrastructure on top of it to take advantage of these capabilities.

Gardner: This change has happened from the web now all the way to multi-screen. With the web there was a model where you would use a content delivery network (CDN) to take the object, the media object, and place it as close to the edge as you could. What’s changed and why doesn’t that model work as well?

Connan-Lostanlen: I don't know yet if I want to say that model doesn't work anymore. Let's let the CDN providers enhance their technology. But for sure, the volume of video that we are consuming every day is growing exponentially. That definitely creates pressure in the pipe. Our role at the front-end and the back-end is to make sure that videos are being created in different formats, with different ads, and everything else, in the most effective way so that it doesn't put an undue strain on the pipe that is distributing the videos.

We are being pushed to innovate further on the type of workflows that we are implementing at our customers' sites today — to make them efficient, to decide where storage should sit, at the edge or centrally, and to do transcoding just-in-time. These are the things that are being worked on. It's a balance between available capacity and the number of programs that you want to send across to your viewers — and how big your target market is.

The task for us on the back-end is to rethink the workflows in a much more efficient way. This is what we call the digital-first approach, or unified distribution. Instead of planning a linear channel that goes out the traditional way and then adding another infrastructure for multi-screen on all those different platforms — and then cable, satellite, IPTV, and so on — why not design the whole workflow digital-first? This frees the content distributor or provider to hold off on committing to specific platforms until the video has reached the edge, and it's there that the end-user requirements determine how they get the signal.
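
A hedged sketch of that digital-first idea: the source asset stays platform-neutral, and only when a request reaches the edge does the workflow pick the rendition the requesting device and connection actually need. The device classes, codecs, and bitrates here are invented for illustration, not drawn from any real encoding ladder.

```python
# Illustrative rendition table; a real ladder would come from the encoder config.
RENDITIONS = {
    "phone":    {"codec": "h264", "resolution": "720p",  "bitrate_kbps": 1500},
    "tablet":   {"codec": "h264", "resolution": "1080p", "bitrate_kbps": 3000},
    "smart_tv": {"codec": "hevc", "resolution": "2160p", "bitrate_kbps": 12000},
}

def rendition_for(device_class: str, bandwidth_kbps: int) -> dict:
    """Pick the target rendition just-in-time, capped by the measured bandwidth."""
    target = RENDITIONS.get(device_class, RENDITIONS["phone"]).copy()
    target["bitrate_kbps"] = min(target["bitrate_kbps"], bandwidth_kbps)
    return target

# The same source asset serves every platform; the decision happens at the edge.
print(rendition_for("smart_tv", bandwidth_kbps=8000))
```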

This is where we are going — to see the efficiencies happen and so remove the pressure on the CDNs and other distribution mechanisms, like over-the-air.

Explore High-Performance Computing Solutions from HPE

Gardner: It means an intelligent edge capability, whereas we had an intelligent core up until now. We’ll also seek a hybrid capability between them, growing more sophisticated over time.

We have a whole new generation of technology for video delivery. Tell us about Imagine Communications. How do you go to market? How do you help your customers?

Education for future generations

Connan-Lostanlen: Two months ago we were in Las Vegas for our biggest tradeshow of the year, the NAB Show. At the event, our customers first wanted to understand what it takes to move to IP — so the “how.” They understand the need to move to IP, to take advantage of the benefits that it brings. But how do they do this, while they are still navigating the traditional world?

It's not only the "how," it's also needing examples of best practices. So we instructed them in a panel discussion, for example, on over-the-top (OTT) technology, which is another way of saying IP-delivered, and what it takes to create a successful multi-screen service. Part of the panel explained what OTT is, so there's a lot of education.

There is also another level of education that we have to provide, which is moving from the traditional world of serial digital interfaces (SDIs) in the broadcast industry to IP. It’s basically saying analog video signals can be moved into digital. Then not only is there a digitally sharp signal, it’s an IP stream. The whole knowledge about how to handle IP is new to our own industry, to our own engineers, to our own customers. We also have to educate on what it takes to do this properly.

One of the key things in the media and entertainment industry is that there’s a little bit of fear about IP, because no one really believed that IP could handle live signals. And you know how important live television is in this industry – real-time sports and news — this is where the money comes from. That’s why the most expensive ads are run during the Super Bowl.

It’s essential to be able to do live with IP – it’s critical. That’s why we are sharing with our customers the real-life implementations that we are doing today.

We are also pushing multiple standards forward. We work with our competitors on these standards. We have set up a trade association to accelerate the standards work. We did all of that. And as we do this, it forces us to innovate in partnership with customers and bring them on board. They are part of that trade association, they are part of the proof-of-concept trials, and they are gladly sharing their experiences with others so that the transition can be accelerated.

Gardner: Imagine Communications is then a technology and solutions provider to the media content companies, and you provide the means to do this. You are also doing a lot with ad insertion, billing, in understanding more about the end-user and allowing that data flow from the edge back to the core, and then back to the edge to happen.

At the heart of it all

Connan-Lostanlen: We do everything that happens behind the camera — from content creation all the way to making a program and distributing it. And also, to your point, on monetizing all that with a management system. We have a long history of powering all the key customers in the world for their advertising system. It’s basically an automated system that allows the selling of advertising spots, and then to bill them — and this is the engine of where our customers make money. So we are at the heart of this.

We are in the prime position to help them take advantage of the new advertising solutions that exist today, including dynamic ad insertion. In other words, how you target ads to the single viewer. And the challenge for them is now that they have a campaign, how do they design it to cater both to the linear traditional advertising system as well as the multi-screen or web mobile application? That’s what we are working on. We have a whole set of next-generation platforms that allow them to take advantage of both in a more effective manner.
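
To show the shape of that problem, here is a deliberately simplified, hypothetical ad-decision function. It is not Imagine’s ad platform; it only illustrates the core idea of dynamic ad insertion: the linear feed keeps one scheduled spot, while the multi-screen path fills the same break per viewer.

```python
# Hypothetical dynamic ad insertion decision; campaigns and names are invented.
from dataclasses import dataclass

@dataclass
class Viewer:
    viewer_id: str
    region: str
    interests: tuple

CAMPAIGNS = [
    {"ad_id": "auto-suv", "regions": {"us-west"}, "interest": "cars"},
    {"ad_id": "streaming-promo", "regions": {"us-west", "us-east"}, "interest": "tv"},
]
LINEAR_DEFAULT = "national-spot-001"   # what the traditional linear feed airs

def pick_ad(viewer: Viewer, delivery: str) -> str:
    """Return the ad to splice into this viewer's break."""
    if delivery == "linear":
        return LINEAR_DEFAULT              # one spot for every linear viewer
    for campaign in CAMPAIGNS:             # per-viewer targeting on multi-screen
        if viewer.region in campaign["regions"] and campaign["interest"] in viewer.interests:
            return campaign["ad_id"]
    return LINEAR_DEFAULT                  # fall back to the scheduled spot

if __name__ == "__main__":
    v = Viewer("abc123", "us-west", ("cars", "news"))
    print(pick_ad(v, "multiscreen"))   # auto-suv
    print(pick_ad(v, "linear"))        # national-spot-001
```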

Gardner: The technology is there, you are a solutions provider. You need to find the best ways of storing and crunching data, close to the edge, and optimizing networks. Tell us why you choose certain partners and what are some of the major concerns you have when you go to the technology marketplace?

Connan-Lostanlen: One fundamental driver here, as we drive the transition to IP in this industry, is being able to rely on commercial off-the-shelf (COTS) platforms. But even so, not all COTS platforms are born equal, right?

For compute, for storage, for networking, you need to rely on top-scale hardware platforms, and that’s why about two years ago we started to work very closely with Hewlett Packard Enterprise (HPE) for both our compute and storage technology.


We develop the software appliances that run on those platforms, and we sell this as a package with HPE. It’s been a key value proposition of ours as we began this journey to move to IP. We can say, by the way, our solutions run on HPE hardware. That’s very important because having high-performance computing (HPC) that scales is critical to the broadcast and media industry. Having storage that is highly reliable is fundamental because going off the air is not acceptable. So it’s 99.9999 percent reliable, and that’s what we want, right?
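
To put that availability figure in perspective, the arithmetic below shows how much downtime per year each extra “nine” allows. It is generic math, not a measured figure for any particular HPE or Imagine system.

```python
# Allowed downtime per year at a given availability level.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for availability in (0.999, 0.9999, 0.99999, 0.999999):
    downtime_minutes = SECONDS_PER_YEAR * (1 - availability) / 60
    print(f"{availability:.6f} availability -> {downtime_minutes:8.2f} minutes down per year")

# 99.9999 percent ("six nines") works out to roughly half a minute per year,
# which is why "going off the air is not acceptable" translates into such
# strict expectations for broadcast storage and compute.
```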

It’s a fundamental part of our message to our customers to say, “In your network, put Imagine solutions, which are powered by one of the top compute and storage technologies.”

Gardner: Another part of the change in the marketplace is this move to the edge. It’s auspicious that just as you need to have more storage and compute efficiency at the edge of the network, close to the consumer, the infrastructure providers are also designing new hardware and solutions to do just that. That’s also for the Internet of Things (IoT) requirements, and there are other drivers. Nonetheless, it’s an industry standard approach.

What is it about HPE Edgeline, for example, and the architecture that HPE is using, that makes that edge more powerful for your requirements? How do you view this architectural shift from core data center to the edge?

Optimize the global edge

Connan-Lostanlen: It’s a big deal because we are going to be in a hybrid world. When most of our customers hear about cloud, we have to explain what it means for them. We explain that they can have their private cloud, where they run virtualized applications on-premises, or they can take advantage of public clouds.

Being able to have a hybrid model of deployment for their applications is critical, especially for large customers who have operations in several places around the globe. For example, such big names as Disney and Turner have operations everywhere. For them, being able to optimize at the edge means that you have to create an architecture that is geographically distributed — but is highly efficient where they have those operations. This type of technology helps us deliver more value to those key customers.

Gardner: The other part of that intelligent edge technology is that it has the ability to be adaptive and customized. Each region has its own networks, its own regulation, and its own compliance, security, and privacy issues. When you can be programmatic as to how you design your edge infrastructure, then a custom-applications-orientation becomes possible.

Is there something about the edge architecture that you would like to see more of? Where do you see this going in terms of the capabilities of customization added-on to your services?

Connan-Lostanlen: One of the typical use-cases that we see for those big customers who have distributed operations is that they want to run their disaster recovery (DR) site in a more cost-effective manner. The flexibility that an edge architecture provides is that they don’t have to rely on central operations running DR for everybody. They can do it on their own, and they can do it cost-effectively. They don’t have to recreate the entire infrastructure, and so they do DR at the edge as well.

We especially see this a lot in the process of putting the pieces of the program together, what we call “play out,” before it’s distributed. When you create a TV channel, if you will, it’s important to have end-to-end redundancy — and DR is a key driver for this type of application.
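
As a sketch of that idea, with entirely hypothetical component names, edge-based DR for playout can be as simple as each site monitoring its primary chain and switching to a standby hosted at the same edge location, instead of depending on a central DR facility.

```python
# Hypothetical failover logic for an edge playout chain; the health probe is a stub.
import random

def chain_healthy(chain_name: str) -> bool:
    """Stand-in health probe; a real system would check signal, telemetry, and alarms."""
    return random.random() > 0.1   # simulate an occasional failure

def select_active(primary: str, local_standby: str) -> str:
    """Prefer the primary playout chain, fail over to the standby at the same edge site."""
    return primary if chain_healthy(primary) else local_standby

if __name__ == "__main__":
    print("On air:", select_active("playout-primary", "playout-dr-edge"))
```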

Gardner: Are there some examples of your cutting-edge clients that have adopted these solutions? What are the outcomes? What are they able to do with it?

Pop-up power

Connan-Lostanlen: Well, it’s always sensitive to name those big brand names. They are very protective of their brands. However, one of the top ones in the world of media and entertainment has decided to move all of their operations — from content creation, planning, and distribution — to their own cloud, to their own data center.

They are at the forefront of playing live and recorded material on TV — all from their cloud. They needed strong partners in data centers, so obviously we work with them closely. The reason they do this is simply to take advantage of the flexibility. They don’t want to be tied to a restricted channel count; they want to try new things. They want to try pop-up channels. The Oscars, for example, is one night. Why recreate the whole infrastructure when you can just switch a channel on and off, if you will, out of your own data center capacity? So that’s the key application: pop-up channels and the ability to easily try new programs.
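
The pop-up channel idea is essentially a capacity-pool problem: reserve playout and encoding capacity from the shared data center pool for one night, then hand it back. The sketch below is a hypothetical illustration with invented numbers, not the customer’s actual system.

```python
# Hypothetical pop-up channel lifecycle against a shared capacity pool.
class CapacityPool:
    def __init__(self, total_units: int):
        self.free = total_units

    def allocate(self, units: int) -> bool:
        """Reserve capacity if enough is free."""
        if units > self.free:
            return False
        self.free -= units
        return True

    def release(self, units: int) -> None:
        """Hand the capacity back to the pool after the event."""
        self.free += units

pool = CapacityPool(total_units=100)   # invented pool size
EVENT_CHANNEL_UNITS = 12               # playout, graphics, encoding for one event

if pool.allocate(EVENT_CHANNEL_UNITS):
    print("Pop-up channel on air; free capacity:", pool.free)
    # ... one night of programming, e.g. an awards show ...
    pool.release(EVENT_CHANNEL_UNITS)
    print("Event over, capacity returned:", pool.free)
```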

Gardner: It sounds like they are thinking of themselves as an IT company, rather than a media and entertainment company that consumes IT. Is that shift happening?

Connan-Lostanlen: Oh yes, that’s an interesting topic, because I think you cannot really do this successfully if you don’t start to think IT a little bit. What we are seeing, interestingly, is that our customers typically used to have the IT department on one side, the broadcast engineers on the other side — these were two groups that didn’t speak the same language. Now they get together, and they have to, because they have to design together the solution that will make them more successful. We are seeing this happening.

I wouldn’t say yet that they are IT companies. The core strength is content, that is their brand, that’s what they are good at — creating amazing content and making it available to as many people as possible.

They have to understand IT, but they can’t lose concentration on their core business. I think the IT providers still have a very strong play there. It’s always happening that way.

In addition to disaster recovery being a key application, multi-screen delivery is taking advantage of that technology, for sure.


Gardner: These companies are making this cultural shift to being much more technically oriented. They think about standard processes across all of what they do, and they have their own core data center that’s dynamic, flexible, agile and cost-efficient. What does that get for them? Is it too soon, or do we have some metrics of success for companies that make this move toward a full digitally transformed organization?

Connan-Lostanlen: They are very protective about the math. It is fair to say that the up-front investments may be higher, but when you do the math over time, looking at the total cost of ownership for the next 5 to 10 years — because that’s typically the life cycle of those infrastructures — then they definitely do save money. On the operational expenditure (OPEX) side [of private cloud economics] it’s much more efficient, and they also have upside from additional revenue. So net-net, the return on investment (ROI) is much better. It’s hard to quantify precisely because we are still in the early days, but it’s bound to be a much greater ROI.
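
Since the actual numbers are confidential, here is a purely hypothetical total-cost-of-ownership comparison over a typical 10-year life cycle, just to show the shape of the calculation. Every figure is invented.

```python
# Hypothetical 10-year TCO comparison; all figures below are invented
# solely to illustrate the structure of the calculation (units: $M).
YEARS = 10

traditional   = {"capex": 4.0, "annual_opex": 1.2, "annual_upside": 0.0}
private_cloud = {"capex": 6.0, "annual_opex": 0.8, "annual_upside": 0.3}

def net_cost(model: dict, years: int) -> float:
    """Up-front investment plus yearly running cost, minus new revenue upside."""
    return model["capex"] + years * (model["annual_opex"] - model["annual_upside"])

print("Traditional  :", net_cost(traditional, YEARS), "over", YEARS, "years")
print("Private cloud:", net_cost(private_cloud, YEARS), "over", YEARS, "years")
# Higher up-front investment, lower OPEX plus revenue upside: the crossover
# point depends entirely on each customer's real numbers.
```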

Another specific DR example is in the Middle East. We have a customer there who decided to operate the DR and IP in the cloud, instead of having a replicated system with satellite links in between. They were able to save $2 million worth of satellite links, and that data center investment, trust me, was not that high. So it shows that the ROI is there.

My satellite customers might say, “Well, what are you trying to do?” The good news is that they are looking at us to help them transform their businesses, too. So big satellite providers are thinking broadly about how this world of IP is changing their game. They are examining what they need to do differently. I think it’s going to create even more opportunities to reduce costs for all of our customers.

IT enters a hybrid world

Gardner: That’s one of the intrinsic values of a hybrid IT approach — you can use many different ways to do something, and then optimize which of those methods works best, and also alternate between them for best economics. That’s a very powerful concept.

Connan-Lostanlen: The world will be a hybrid IT world, and we will take advantage of that. But, of course, that will come with some challenges. What I think is next is the number-one question that I get asked.

Three years ago customers would tell us, “Hey, IP is not going to work for live TV.” We convinced them otherwise, and now they know it’s working; it’s happening for real.

Secondly, they are thinking, “Okay, now I get it, so how do I do this?” We showed them, this is how you do it, the education piece.

Now, this year, the number-one question is security. “Okay, this is my content, the most valuable asset I have in my company. I am not putting this in the cloud,” they say. And this is where another piece of education has to start, which is: Actually, when you put your content in the cloud properly, it can be more secure.

And we are working with our technology providers on this. As I said earlier, not all COTS providers are equal. We take it seriously. Cyber attacks on content and media are a critical concern, and they are bound to happen more often.

Initially there was a lack of understanding that you need to separate your corporate network, such as email and VPNs, from your broadcast operations network. Okay, that’s easy to explain and it can be implemented, and that’s where most of the attacks over the last five years have happened. This is solved.

However, the cyber attackers are becoming more clever, so they will overcome these initial defenses. They are going to get right into the servers, into the storage, and try to mess with it over there. So I think it’s super important to be able to say, “Not only at the software level, but at the hardware firmware level, we are adding protection against your number-one issue, security, which everybody can see is so important.”

Gardner: Sure, the next domino to fall after you have the data center concept, the implementation, the execution, even the optimization, is then to remove risk, whether it’s disaster recovery, security, right down to the silicon and so forth. So that’s the next thing we will look for, and I hope I can get a chance to talk to you about how you are all lowering risk for your clients the next time we speak.


Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How The Open Group Healthcare Forum and Health Enterprise Reference Architecture cures process and IT ills

The next BriefingsDirect healthcare thought leadership panel discussion examines how a global standards body, The Open Group, is working to improve how the healthcare industry functions.

We’ll now learn how The Open Group Healthcare Forum (HCF) is advancing best practices and methods for better leveraging IT in healthcare ecosystems. And we’ll examine the forum’s Health Enterprise Reference Architecture (HERA) initiative and its role in standardizing IT architectures. The goal is to foster better boundaryless interoperability within and between healthcare public and private sector organizations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about improving the processes and IT that better supports healthcare, please welcome our panel of experts: Oliver Kipf, The Open Group Healthcare Forum Chairman and Business Process and Solution Architect at Philips, based in Germany; Dr. Jason Lee, Director of the Healthcare Forum at The Open Group, in Boston, and Gail Kalbfleisch, Director of the Federal Health Architecture at the US Department of Health and Human Services in Washington, D.C. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For those who might not be that familiar with the Healthcare Forum and The Open Group in general, tell us about why the Healthcare Forum exists, what its mission is, and what you hope to achieve through your work.

Lee: The Healthcare Forum exists because there is a huge need to architect the healthcare enterprise, which is approaching 20 percent of gross domestic product (GDP) in the US, and is approaching that level in other developed countries in Europe.


There is a general feeling that enterprise architecture is somewhat behind in this industry, relative to other industries. There are important gaps to fill that will help those stakeholders in healthcare — whether they are in hospitals or healthcare delivery systems or innovation hubs in organizations of different sorts, such as consulting firms. They can better leverage IT to achieve business goals, through the use of best practices, lessons learned, and the accumulated wisdom of the various Forum members over many years of work. We want them to understand the value of our work so they can use it to address their needs.

Our mission, simply, is to help make healthcare information available when and where it’s needed and to accomplish that goal through architecting the healthcare enterprise. That’s what we hope to achieve.

Gardner: As the chairman of the HCF, could you explain what a forum is, Oliver? What does it consist of, how many organizations are involved?

Kipf: The HCF is made up of its members and I am really proud of this team. We are very passionate about healthcare. We are in the technology business, so we are more than just the governing bodies; we also have participation from the provider community. That makes the Forum true to the nature of The Open Group, in that we are global in nature, we are vendor-neutral, and we are business-oriented. We go from strategy to execution, and we want to bridge from business to technology. We take the foundation of The Open Group, and then we apply this to the HCF.


As we have many health standards out there, we really want to leverage [experience] from our 30 members to make standards work by providing the right type of tools, frameworks, and approaches. We partner a lot in the industry.

The healthcare industry is really a crowded place and there are many standard development organizations. There are many players. It’s quite vital as a forum that we reach out, collaborate, and engage with others to reach where we want to be.

Gardner: Gail, why is the role of the enterprise architecture function an important ingredient to help bring this together? What’s important about EA when we think about the healthcare industry?

Kalbfleisch: From an EA perspective, I don’t really think that it matters whether you are talking about the healthcare industry or the finance industry or the personnel industry or the gas and electric industry. If you look at any of those, the organizations and companies that tend to be highly functioning don’t just have architecture — because everyone has architecture for what they do — they have architecture that is documented and available for use by decision-makers, and by developers across the system, so that each part can work well together.


We know that within the healthcare industry it is exceedingly complicated, and it’s a mixture of a lot of different things. It’s not just your body and your doctor, it’s also your insurance, your payers, research, academia — and putting all of those together.

If we don’t have EA, people new to the system — or people who are deeply embedded in their parts of the system — can’t see how that system all works together usefully. For example, there are a lot of different standards organizations. If we don’t see how all of that works together — where everybody else is working, and how to make it fit together — then we’re going to have a hard time getting to interoperability quickly and efficiently.

It’s important that we get to individual solution building blocks to attain a more integrated approach.

Kipf: If you think of the healthcare industry, we’ve been very good at developing individual solutions to specific problems. There’s a lot of innovation and a lot of technology that we use. But there is an inherent risk of producing silos among the many stakeholders who, ultimately, work for the good of the patient. It’s important that we get to individual solution building blocks to attain a more integrated approach based on architecture building blocks, and based on common frameworks, tools and approaches.

Gardner: Healthcare is a very complex environment and IT is very fast-paced. Can you give us an update on what the Healthcare Forum has been doing, given the difficulty of managing such complexity?

Bird’s-eye view mapping

Lee: The Healthcare Forum began with a series of white papers, initially focusing on an information model that has a long history in the federal government. We used enterprise architecture to evaluate the Federal Health Information Model (FHIM).  People began listening and we started to talk to people outside of The Open Group, and outside of the normal channels of The Open Group. We talked to different types of architects, such as information architects, solution architects, engineers, and initially settled on the problem that is essential to The Open Group — and that is the problem of boundaryless information flow.

It can be difficult to achieve boundaryless information flow to enable information to travel digitally, securely and quickly.

We need to get beyond the silos that Oliver mentioned and that Gail alluded to. As I mentioned in my opening comments, this is a huge industry, and Gail illustrated it by naming some of the stakeholders within the health, healthcare and wellness enterprises. If you think of your hospital, it can be difficult to achieve boundaryless information flow to enable your information to travel digitally, securely, quickly, and in a way that’s valid, reliable and understandable by those who send it and by those who receive it.  But if that is possible, it’s all to the betterment of the patient.

Initially, in our focus on what healthcare folks call interoperability — what we refer to as boundaryless information flow — we came to realize through discussions with stakeholders in the public sector, as well as the private sector and globally, that understanding how the different pieces are linked together is critical. Anybody who works in an organization or belongs to a church, school or family understands that sometimes getting the right message communicated from point A to point B can be difficult.

To address that issue, the HCF members have decided to create a Health Enterprise Reference Architecture (HERA) that is essentially a framework and a map at the highest level. It helps people see that what they do relates to what others do, regardless of their position in their company. You want to deliver value to those people, to help them understand how their work is interconnected, and how IT can help them achieve their goals.

Gardner: Oliver, who should be aware of and explore engaging with the HCF?

Kipf: The members of The Open Group themselves, many of them are players in the field of healthcare, and so they are the natural candidates to really engage with. In that healthcare ecosystem we have providers, payers, governing bodies, pharmaceuticals, and IT companies.

Those who deeply need planning, management and architecting — to make big thinking a reality out there — those decision-makers are the prime candidates for engagement in the Healthcare Forum. They can benefit from the kinds of products we produce, the reference architecture, and the white papers that we offer. In a nutshell, it’s the members, and it’s the healthcare industry, and the healthcare ecosystem that we are targeting.

Gardner: Gail, perhaps you could address the reference architecture initiative? Why do you see that as important? Who do you think should be aware of it and contribute to it?

Shared reference points

Kalbfleisch: Reference architecture is one of those building block pieces that should be used. You can call it a template. You can have words that other people can relate to, maybe easier than the architecture-speak.

If you take that template, you can make it available to other people so that we can all be designing our processes and systems with a common understanding of our information exchange — so that it crosses boundaries easily and securely. If we are all running on the same template, that’s going to enable us to identify how to start, what has to be included, and what standards we are going to use.

A reference architecture is one of those very important pieces that not only forms a list of how we want to do things, and what we agreed to, but it also makes it so that every organization doesn’t have to start from scratch. It can be reused and improved upon as we go through the work. If someone improves the architecture, that can come back into the reference architecture.
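
In practice, a “reusable template” can be as literal as a machine-readable catalog of agreed building blocks that each project copies as its starting point and then feeds improvements back into. The structure below is an invented illustration of that idea, not the HERA itself.

```python
# Invented illustration of reference-architecture building blocks as a template.
reference_architecture = {
    "version": "0.1-draft",
    "capabilities": {
        "patient_registration": {
            "standards": ["HL7"],   # agreed information standards
            "building_blocks": ["identity", "consent", "scheduling"],
        },
        "results_reporting": {
            "standards": ["HL7"],
            "building_blocks": ["terminology", "secure_messaging"],
        },
    },
}

def start_project(capability: str) -> dict:
    """A new project copies the agreed template instead of starting from scratch."""
    return dict(reference_architecture["capabilities"][capability])

if __name__ == "__main__":
    print(start_project("patient_registration"))
```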

Who should know about it? Decision makers, developers, medical device innovators, people who are looking to improve the way information flows within any health sector.

Who should know about it? Decision makers, developers, medical device innovators, people who are looking to improve the way information flows within any health sector — whether it’s Oliver in Europe, whether it’s someone over in California, Australia, it really doesn’t matter. Anyone who wants to make interoperability better should know about it.

My focus is on decision-makers, policymakers, process developers, and other people who look at it from a device-design perspective. One of the things that has been discussed within the HCF’s reference architecture work is the need to make sure that it’s all at a high-enough level, where we can agree on what it looks like. Yet it also must go down deeply enough so that people can apply it to what they are doing — whether it’s designing a piece of software or designing a medical device.

Gardner: Jason, The Open Group has been involved with standards and reference architectures for decades, with such recent initiatives as the IT4IT approach, as well as the longstanding TOGAF reference architecture. How does the HERA relate to some of these other architectural initiatives?

Building on a strong foundation

Lee: The HERA starts by using the essential components and insights that are built into the TOGAF Architecture Development Method (ADM) and builds from there. It also uses the ArchiMate language, but we have never felt restricted to using only those existing Open Group models that have been around for some time and are currently being developed further.

We are a big organization in terms of our approach, our forum, and so we want to draw from the best there is in order to fill in the gaps. Over the last few decades, an incredible amount of talent has joined The Open Group to develop architectural models and standards that apply across multiple industries, including healthcare. We reuse and build from this important work.

In addition, as we have dug deeper into the healthcare industry, we have found other issues — gaps — that need filling. There are related topics that would benefit. To do that, we have been working hard to establish relationships with other organizations in the healthcare space, to bring them in, and to collaborate. We have done this with Health Level Seven (HL7), which is one of the best-known standards organizations in the world.

We are also doing this now with an organization called Healthcare Services Platform Consortium (HSPC), which involves academic, government and hospital organizations, as well as people who are focused on developing standards around terminology.

IT’s getting better all the time

Kipf: If you think about reference architecture in a specific domain, such as in the healthcare industry, you look at your customers and the enterprises — those really concerned with the delivery of health services. You need to ask yourself the question: What are their needs?

And the need in this industry is a focus on the person and on the service. It’s also highly regulatory, so being compliant is a big thing. Quality is a big thing. The idea of lifetime evolution — that you become better and better all the time — that is very important, very intrinsic to the healthcare industry.

When we look at the customers for whom we believe the HERA could be of value, you have to think of small, mid-sized, and large enterprises alike, and really across the globe. That’s why we believe that the HERA is something that is tuned to the needs of our industry.

And as Jason mentioned, we build on open standards and we leverage them where we can. ArchiMate is one of the big ones — not only the business language, but also a lot of the concepts are based on ArchiMate. But we need to include other standards as well, obviously those from the healthcare industry, and we need to deviate from specific standards where this is of value to our industry.

Gardner: Oliver, in order to get this standard to be something that’s used, that’s very practical, people look to results. So if you were to take advantage of such reference architectures as HERA, what should you expect to get back? If you do it right, what are the payoffs?

Capacity for change and collaboration

Kipf: It should enable you to do a better job, to become more efficient, and to make better use of technology. Those are the kinds of benefits that you see realized. It’s not only that you have a place where you can model all the elements of your enterprise, where you can put and manage your processes and your services, but it’s also in the way you are architecting your enterprise.

The HERA gives you the tools to get where you want to be, to define where you want to be — and also how to get there.

It gives you the ability to change. From a transformation management perspective, we know that many healthcare systems have great challenges and there is this need to change. The HERA gives you the tools to get where you want to be, to define where you want to be — and also how to get there. This is where we believe it provides a lot of benefits.

Gardner: Gail, similar question, for those organizations, both public and private sector, that do this well, that embrace HERA, what should they hope to get in return?

Kalbfleisch: I completely agree with what Oliver said. To add, one of the benefits that you get from using EA is a chance to have a perspective from outside your own narrow silos. The HERA should be able to help a person see other areas that they have to take into consideration, that maybe they wouldn’t have before.

Another value is to engage with other people who are doing similar work, who may have either learned lessons, or are doing similar things at the same time. So that’s one of the ways I see the effectiveness and of doing our jobs better, quicker, and faster.

Also, it can help us identify where we have gaps and where we need to focus our efforts. We can focus our limited resources in much better ways on specific issues — where we can accomplish what we are looking to — and to gain that boundaryless information flow.

Reaching your goals

We show them how they can follow a roadmap to accomplish their self-defined goals more effectively.

Lee: Essentially, the HERA will provide a framework that enables companies to leverage IT to achieve their goals. The wonderful thing about it is that we are not telling organizations what their goals should be. We show them how they can follow a roadmap to accomplish their self-defined goals more effectively. Often this involves communicating the big picture, as Gail said, to those who are in siloed positions within their organizations.

There is an old saying: “What you see depends on where you sit.” The HERA helps stakeholders gain this perspective by helping key players understand the relationships, for example, between business processes and engineering. So whether a stakeholder’s interest is increasing patient satisfaction, reducing errors, improving quality, achieving better patient outcomes, or gaining more reimbursement where reimbursement is tied to outcomes — using the product and the architecture that we are developing helps with all of these goals.

Gardner: Jason, for those who are intrigued by what you are doing with the HERA, tell us about its trajectory, its evolution, and how that journey unfolds. Where can they learn more or get involved?

Lee: We have only been working on the HERA per se for the last year, although its underpinnings go back 20 years or more. Its trajectory is not to a single point, but an evolutionary process. We will be producing white papers, as well as products that others can use in a modular fashion to leverage what they already have within their legacy systems.

We encourage anyone out there, particularly in the health system delivery space, to join us. That can be done by contacting me at j.lee@opengroup.org and at www.opengroup.org/healthcare.

It’s an incredible time, a very opportune time, for key players to be involved because we are making very important decisions that lay the foundation for the HERA. We collaborate with key players, and we lay down the tracks from which we will build increasing levels of complexity.

But we start at the top, using non-architectural language to be able to talk to decision-makers, whether they are in the public sector or private sector. So we invite any of these organizations to join us.

Learn from others’ mistakes

Kalbfleisch: My first foray into working with The Open Group was long before I was in the health IT sector. I was with the US Air Force and we were doing very non-health architectural work in conjunction with The Open Group.

The interesting part to me is in ensuring boundaryless information flow in a manner that is consistent with the information flowing where it needs to go and who has access to it. How does it get from place to place across distinct mission areas, or distinct business areas where the information is not used the same way or stored in the same way? Such dissonance between those business areas is not a problem that is isolated just to healthcare; it’s across all business areas.

We don’t have to make the same mistakes. We can take what people have learned and extend it much further.

That was exciting. I was able to take awareness of The Open Group from a previous life, so to speak, and engage with them to get involved in the Healthcare Forum from my current position.

A lot of the technical problems that we have in exchanging information, regardless of what industry you are in, have been addressed by other people, and have already been worked on. By leveraging the way organizations have already worked on it for 20 years, we can leverage that work within the healthcare industry. We don’t have to make the same mistakes that were made before. We can take what people have learned and extend it much further. We can do that best by working together in areas like The Open Group HCF.

Kipf: On that evolutionary approach, I also see this as a long-term journey. Yes, there will be releases when we have a specification, and there will be guidelines. But it’s important that this is an engagement, and we have ongoing collaboration with customers in the future, even after it is released. The coming together of a team is what really makes a great reference architecture, a team that keeps the architecture at a high level.

We can also develop distinct flavors of the specification. We should expect much more detail. Those implementation architectures then become spin-offs of reference architectures such as the HERA.

Lee: I can give some concrete examples, to bookend the kinds of problems that can be addressed using the HERA. At the micro end, a hospital can use the HERA structure to implement a patient check-in to the hospital for patients who would like to bypass the usual process and check themselves in. This has a number of positive value outcomes for the hospital in terms of staffing and in terms of patient satisfaction and cost savings.

At the other extreme, a large hospital system in Philadelphia or Stuttgart or Oslo or in India finds itself with patients appearing at the emergency room or in ambulatory settings unaffiliated with that particular hospital. Rather than have that patient arrive as a blank sheet of paper, and redo all the tests that had been done prior, the HERA will help these healthcare organizations figure out how to exchange data in a meaningful way. So the information can flow digitally and securely, and it means the same thing to those who send it as it does to those who receive it, and everything is patient-focused, patient-centric.

Gardner: Oliver, we have seen with other Open Group standards and reference architectures, a certification process often comes to bear that helps people be recognized for being adept and properly trained. Do you expect to have a certification process with HERA at some point?

Certifiable enterprise expertise

Kipf: Yes, the more we mature with the HERA, along with the defined guidelines, the specifications, and the HERA model, the more there will be a need and demand in the marketplace for people focused on the health enterprise, and for consulting services that can show how they use the HERA.

And that’s a perfect place when you think of certification. It helps make sure that the quality of the workforce is strong, whether it’s internal or in the form of a professional services role. They can comply with the HERA.

Gardner: Clearly, this has applicability to healthcare payer organizations, provider organizations, government agencies, and the vendors who supply pharmaceuticals or medical instruments. There are a great deal of process benefits when done properly, so that enterprise architects could become certified eventually.

My question then is how do we take the HERA, with such a potential for being beneficial across the board, and make it well-known? Jason, how do we get the word out? How can people who are listening to this or reading this, help with that?

Spread the word, around the world

Lee: It’s a question that has to be considered every time we meet. I think the answer is straightforward. First, we build a product [the HERA] that has clear value for stakeholders in the healthcare system. That’s the internal part.

Second—and often, simultaneously—we develop a very important marketing/collaboration/socialization capability. That’s the external part. I’ve worked in healthcare for more than 30 years, and whether it’s public or private sector decision-making, there are many stakeholders, and everybody’s focused on the same few things: improving value, enhancing quality, expanding access, and providing security.

All companies must plan, build, operate and improve.

We will continue developing relationships with key players to ensure them that what they’re doing is key to the HERA. At the broadest level, all companies must plan, build, operate and improve.

There are immense opportunities for business development. There are innumerable ways to use the HERA to help health enterprise systems operate efficiently and effectively. There are opportunities to demonstrate to key movers and shakers in the healthcare system how what we’re doing integrates with what they’re doing. This will maximize the uptake of the HERA and minimize the chances it sits on a shelf after it’s been developed.

Gardner: Oliver, there are also a variety of regional conferences and events around the world. Some of them are from The Open Group. How important is it for people to be aware of these events, maybe by taking part virtually online or in person? Tell us about the face-time opportunities, if you will, of these events, and how that can foster awareness and improvement of HERA uptake.

Kipf: We began with the last Open Group event. I was in Berlin, presenting the HERA. As we see more development, more maturity, we can then show more. The uptake will be there and we also need to include things like cyber security, things like risk compliance. So we can bring in a lot of what we have been doing in various other initiatives within The Open Group. We can show how it can be a fusion, and make this something that is really of value.

I am confident that through face-to-face events, such as The Open Group events, we can further spread the message.

Lee: And a real shout-out to Gail and Oliver who have been critical in making introductions and helping to share The Open Group Healthcare Forum’s work broadly. The most recent example is the 2016 HIMSS conference, a meeting that brings together more than 40,000 people every year. There is a federal interoperability showcase there, and we have been able to introduce and discuss our HERA work there.

We’ve collaborated with the Office of the National Coordinator, where the Federal Health Architecture sits, with the US Veterans Administration, with the US Department of Defense, and with the Centers for Medicare and Medicaid Services (CMS). This is all US-centered, but there are lots of opportunities globally to not just spread the word in public forums and public venues, but also to go to those key players who are moving the industry forward, and in some cases convince them that enterprise architecture does provide that structure, that template, that can help them achieve their goals.

Future forecast

Gardner: I’m afraid we are almost out of time. Gail, perhaps a look into the crystal ball. What do you expect and hope to see in the next few years from initiatives like the HERA that The Open Group Healthcare Forum can provide?

Kalbfleisch: What I would like to see happen in the next couple of years as it relates to the HERA, is the ability to have a place where we can go from anywhere and get a glimpse of the landscape. Right now, it’s hard to find anywhere where someone in the US can see the great work that Oliver is doing, or the people in Norway, or the people in Australia are doing.

Reference architecture is great to have, but it has no power until it’s used.

It’s really important that we have opportunities to communicate as large groups, but also the one-on-one. Yet when we are not able to communicate personally, I would like to see a resource or a tool where people can go and get the information they need on the HERA on their own time, or as they have a question. Reference architecture is great to have, but it has no power until it’s used.

My hope for the future is for the HERA to be used by decision-makers, developers, and even patients. So when an organization such as a hospital wants to develop a new electronic health record (EHR) system, it has a place to go and get started, without having to contact Jason or wait for a vendor to come along and tell them how to solve a problem. That would be my hope for the future.

Lee: You can think of the HERA as a soup with three key ingredients. First is the involvement and commitment of very bright people and top-notch organizations. Second, we leverage the deep experience and products of other forums of The Open Group. Third, we build on external relationships. Together, these three things will help make the HERA successful as a certifiable product that people can use to get their work done and do better.

Gardner: Jason, perhaps you could also tee-up the next Open Group event in Amsterdam. Can you tell us more about that and how to get involved?

Lee: We are very excited about our next event in Amsterdam in October. You can go to www.opengroup.org and look under Events, read about the agendas, and sign up there. We will have involvement from experts from the US, UK, Germany, Australia, Norway, and this is just in the Healthcare Forum!

The Open Group membership will be giving papers, having discussions, moving the ball forward. It will be a very productive and fun time and we are looking forward to it. Again, anyone who has a question or is interested in joining the Healthcare Forum can please send me, Jason Lee, an email at j.lee@opengroup.org.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.


Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack

The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum.

We’ll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft’s solutions for balancing the boundaries between public and private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore the latest approaches for successful hybrid IT, we’re joined by Rohit “Ro” Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey there are a couple of things that are becoming really loud and clear to them.

Journey to the cloud

Number one is that there will always be some sort of a private data center footprint. There are certain workloads that are not appropriate for the public cloud; there are certain workloads that perform better in the private data center. And so the first acknowledgment is that there is going to be that private, as well as public, side of how they deliver IT services.

Now, that being said, they have to begin building the capabilities and the mechanisms to be able to manage these different environments seamlessly. As they go down this path, that’s where we are seeing a lot of traction and focus.

The other trend in conjunction with that is in the public cloud space where we see a lot of traction around Azure. They have come on strong. They have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what’s driving a lot of the demand.

Won: We at HPE are seeing that very similarly, as well. We call that “hybrid IT,” and we talk about how customers need to find the right mix of private and public — and managed services — to fit their businesses. They may put some services in a public cloud, some services in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.


We have these conversations with many of our customers about how do you determine the right placement for these different workloads — taking into account things like security, performance, compliance, and cost — and helping them evaluate this hybrid IT environment that they now need to manage.

Gardner: Ro, a lot of what people have used public cloud for is greenfield apps — beginning in the cloud, developing in the cloud, deploying in the cloud — but there’s also an interest in many enterprises about legacy applications and datasets. Is Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it’s no longer the 5- and 10-year strategy. They are thinking about how to be relevant in the market this year, today, this quarter. That requires a lot of flexibility in their business model; that requires a lot of variability in their cost structure.

When you look at it from that viewpoint, a lot of our clients look at the public cloud as more than, “Is the app suitable for the public cloud?” They are also seeking certain cost advantages in terms of variability in that cost structure that they can take advantage of. And that’s where we are seeing them look at the public cloud beyond just applications in terms that are suitable for public cloud.

Public and/or private power

Won: We help a lot of companies think about where the best place is for their traditional apps. Often they don’t want to restructure them, they don’t want to rewrite them, because they are already an investment; they don’t want to spend a lot of time refactoring them.

If you look at these traditional applications, a lot of times when they are dealing with data – especially if they are dealing with sensitive data — those are better placed in a private cloud.

Antao: One of the great things about Microsoft Azure Stack is it gives the data center that public cloud experience — where developers have the similar experience as they would in a public cloud. The only difference is that you are now controlling the costs as well. So that’s another big advantage we see.


Won: Yeah, absolutely, it’s giving the developers the experience of a public cloud, but from the IT standpoint of also providing the compliance, the control, and the security of a private cloud. Allowing applications to be deployed in either a public or private cloud — depending on its requirements — is incredibly powerful. There’s no other environment out there that provides that API-compatibility between private and public cloud deployments like Azure Stack does. 
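
A rough way to picture that API consistency: the same deployment routine can be pointed at either the Azure public cloud or a local Azure Stack instance, with only the management endpoint changing. The sketch below is conceptual, with placeholder endpoints and a stubbed deploy call; it is not the actual Azure SDK or Resource Manager API.

```python
# Conceptual sketch only: one deployment routine, two targets.
# Endpoints are placeholders, and deploy() is a stub rather than a real API call.
from dataclasses import dataclass

@dataclass
class CloudTarget:
    name: str
    resource_manager_endpoint: str   # placeholder URL, not a real endpoint

AZURE_PUBLIC = CloudTarget("Azure public cloud", "https://management.example-public.invalid")
AZURE_STACK = CloudTarget("Azure Stack (on-premises)", "https://management.example-stack.invalid")

def deploy(template: dict, target: CloudTarget) -> None:
    """Pretend to submit the same template to whichever environment is chosen."""
    print(f"Deploying '{template['name']}' to {target.name} "
          f"via {target.resource_manager_endpoint}")

app_template = {"name": "line-of-business-app", "resources": ["web-tier", "database"]}

deploy(app_template, AZURE_PUBLIC)   # public cloud deployment
deploy(app_template, AZURE_STACK)    # same template, private data center
```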

Gardner: Clearly Microsoft is interested in recognizing that skill sets, platform affinity, and processes are all really important. If they are able to provide a private cloud and public cloud experience that’s common to the IT operators that are used to using Microsoft platforms and frameworks — that’s a boon. It’s also important for enterprises to be able to continue with the skills they have.

Ro, is such a commonality of skills and processes not top of mind for many organizations? 

Antao: Absolutely! I think there is always the risk when you have different environments having that “swivel chair” approach. You have a certain set of skills and processes for your private data center. Then you now have a certain set of skills and processes to manage your public cloud footprint.

One of the big problems and challenges that this solves is being able to drive more of that commonality across consistent sets of processes. You can have a similar talent pool, and you have similar kinds of training and awareness that you are trying to drive within the organization — because you now can have similar stacks on both ends.

Won: That’s a great point. We know that the biggest challenge to adopting new concepts is not the technology; it’s really the people and process issues. So if you can address that, which is what Azure Stack does, it makes it so much easier for enterprises to bring on new capabilities, because they are leveraging the experience that they already have using the Azure public cloud.

Gardner: Many IT organizations are familiar with Microsoft Azure Stack. It’s been in technical preview for quite some time. As it hits the market in September 2017, in seeking that total-solution, people-and-process approach, what is PwC bringing to the table to help organizations get the best value and advantage out of Azure Stack?

Hybrid: a tectonic IT shift

Antao: Ken made the point earlier in this discussion about hybrid IT. When you look at IT pivoting to more of a hybrid delivery model, it’s a tectonic shift in IT’s operating model, in its architecture, its culture, in its roles and responsibilities — in the fundamental value proposition of IT to the enterprise.

When we partner with HPE in helping organizations drive through this transformation, we work with HPE in rethinking the operating model, in understanding the new kinds of roles and skills, of being able to apply these changes in the context of the business drivers that are leading it. That’s one of the typical ways that we work with HPE in this space.

Won: It’s a great complement. HPE understands the technology, understands the infrastructure, combined with the business processes, and then the higher level of thinking and the strategy knowledge that PwC has. It’s a great partnership.

Gardner: Attaining hybrid IT efficiency and doing it with security and control is not something you buy off the shelf. It’s not a license. It seems to me that an ecosystem is essential. But how do IT organizations manage that ecosystem? Are there ways that you all are working together, HPE in this case with PwC, and with Microsoft to make that consumption of an ecosystem solution much more attainable?

Won: One of the things that we are doing is working with Microsoft on their partnerships so that we can look at all these companies that have their offerings running on Azure public cloud and ensuring that those are all available and supported in Azure Stack, as well as running in the data center.

We are spending a lot of time with Microsoft on their ecosystem to make sure those services, those companies, or those products are available on Azure Stack — as well fully supported on Azure Stack that’s running on HPE gear.

Gardner: They might not be concerned about the hardware, but they are concerned about the total value — and the total solution. If the hardware players aren’t collaborating well with the service providers and with the cloud providers — then that’s not going to work.

Quick collaboration is key

Won: Exactly! I think of it like a washing machine. No one wants to own a washing machine, but everyone wants clean clothes. So it’s the necessary evil, it’s super important, but you just as soon not have to do it.

Gardner: I just don’t know what to take to the dry cleaner or not, right?

Won: Yeah, there you go!


Antao: From a consulting standpoint, clients no longer have the appetite for five- to six-year transformations. Their businesses are changing at a much faster pace. One of the ways we are working toward an ecosystem-level solution — again, much like the deep and longstanding relationship we have had with HPE — is that we have also been working with Microsoft in the same context.

And in a three-way fashion, we have focused on being able to define accelerators to deploying these solutions. So codifying a lot of our experiences, the lessons learned, a deep understanding of both the public and the private stack to be able to accelerate value for our customers — because that’s what they expect today.

Won: One of the things, Ro, that you brought up, and I think is very relevant here, is these three-way relationships. Customers don’t want to have to deal with all of these different vendors, these different pieces of stack or different aspects of the value chain. They instead expect us as vendors to be working together. So HPE, PwC, Microsoft are all working together to make it easier for the customers to ultimately deliver the services they need to drive their business.

Low risk, all reward

Gardner: So speed-to-value, super important; common solution cooperation and collaboration synergy among the partners, super important. But another part of this is doing it at low risk, because no one wants to make the transition from public to private, or across a full hybrid spectrum, and then suffer performance issues, lost data, and unhappy end customers.

PwC has been focused on governance, risk management and compliance (GRC) in trying to bring about better end-to-end hybrid IT control. What is it that you bring to this particular problem that is unique? It seems that each enterprise is doing this anew, but you have done it for a lot of others and experience can be very powerful that way.

Antao: Absolutely! The move to hybrid IT is a fundamental shift in governance models, in how you address certain risks, the emergence of new risks, and new security challenges. A lot of what we have been doing in this space has been helping IT organizations accelerate that shift — that paradigm shift — that they have to make.

In that context, we have been working very closely with HPE to understand what the requirements of that new world are going to look like. We can build and bring to the table solutions that support those needs.

Won: It’s absolutely critical — this experience that PwC has is huge. We always come up with new technologies; every few years you have something new. But it’s that experience that PwC has to bring to the table that’s incredibly helpful to our customer base.

There’s this whole journey getting to that hybrid IT state and having the governing mechanisms around it.

Antao: So often when we think of governance, it's more in terms of the steady state and the runtime. But there's this whole journey of getting from where we are today to that hybrid IT state — and having the governing mechanisms around it — so that they can do it in a way that doesn't expose their business to too much risk. There is always risk involved in these large-scale transformations, but how do you manage and govern that process of getting to that hybrid IT state? That's where we also spend a lot of time as we help clients through this transformation.

Gardner: For IT shops that are heavily Microsoft-focused, is there a way for them to master Azure Stack, the people, process and technology that will then be an accelerant for them to go to a broader hybrid IT capability? I’m thinking of multi-cloud, and even being able to develop with DevOps and SecOps across a multiple cloud continuum as a core competency.

Is Azure Stack for many companies a stepping-stone to a wider hybrid capability, Ro?

Managed multi-cloud continuum

Antao: Yes. And I think in many cases that's inevitable. When you look at most organizations today, generally speaking, they have at least two public cloud providers that they use. They consume several SaaS applications. They have multiple data center locations. The role of IT now is to become the broker and integrator of multi-cloud environments, spanning on-premises and public clouds. That's where we see a lot of them evolve their management practices, their processes, and their talent — to be able to abstract these different pools and focus on the business. That's where we see a lot of the talent development.

Won: We see that as well at HPE as this whole multi-cloud strategy is being implemented. More and more, the challenge that organizations are having is that they have these multiple clouds, each of which is managed by a different team or via different technologies with different processes.

So there is huge value to the customer in bringing these together — for example, Azure Stack and Azure [public cloud]. They may have multiple Azure Stack environments, perhaps in different data centers, in different countries, in different locales. We need to help them align their processes to run much more efficiently and more effectively. We need to engage with them not only from an IT standpoint, but also from the developer standpoint, so they can use those common services to develop an application and deploy it in multiple places in the same way.

Antao: What's making this whole environment even more complex these days is that a couple of years ago, when we talked about multi-cloud, it was really about the capability to deploy in one public cloud versus another.

Within a given business workflow, how do you leverage different clouds, given their unique strengths and weaknesses?

A few years later, it evolved into being able to port workloads seamlessly from one cloud to another. Today, the multi-cloud strategy that a lot of our clients are exploring is this: Within a given business workflow, depending on the unique characteristics of different parts of that business process, how do you leverage different clouds given their unique strengths and weaknesses?

There might be portions of a business process that, to your point earlier, Ken, are highly confidential. You are dealing with a lot of compliance requirements, so you may want to consume those from an internal private cloud. For other parts you are looking for immense scale, to deal with the peaks when that particular business process gets hit — and that is where the public cloud has a strong track record. In a third case, it might be enterprise-grade workloads.

So that's where we are seeing multi-cloud evolve — to where one business process could have multiple sources — and so how does an IT organization manage that in a seamless way?
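
To make the idea concrete, here is a minimal sketch of that per-step placement logic in Python. The step names, attributes, and rules are hypothetical, invented purely for illustration — a real placement decision would draw on far richer context.

    # Hypothetical per-step cloud placement within one business process.
    # Step names, attributes, and rules are invented for this sketch.
    STEPS = [
        {"name": "ingest-orders",      "confidential": False, "bursty": True},
        {"name": "credit-check",       "confidential": True,  "bursty": False},
        {"name": "invoice-generation", "confidential": False, "bursty": False},
    ]

    def place(step):
        """Pick a venue for one step: compliance first, then elasticity, then cost."""
        if step["confidential"]:
            return "private cloud (for example, Azure Stack on-premises)"
        if step["bursty"]:
            return "public cloud (for example, Azure)"
        return "either venue (decide on cost)"

    for step in STEPS:
        print(f"{step['name']:>20} -> {place(step)}")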

Gardner: It certainly seems inevitable that the choice of such a cloud continuum configuration model will vary and change. It could be one definition in one country or region, another definition in another country and region. It could even be contextual, such as by the type of end user who’s banging on the app. As the Internet of Things (IoT) kicks in, we might be thinking about not just individuals, but machine-to-machine (M2M), app-to-app types of interactions.

So quite a bit of complexity, but dealt with in such a way that the payoff could be monumental. If you do hybrid cloud and hybrid IT well, what could that mean for your business in three to five years, Ro?

Nimble, quick and cost-efficient

Antao: Clearly there is the agility aspect, of being able to seamlessly leverage these different clouds to allow IT organizations to be much more nimble in how they respond to the business.

From a cost standpoint, there is a great example from a large-scale migration to the public cloud that we are currently doing. What the IT organization found was that they had consumed close to 70 percent of their migration budget for only 30 percent of the progress they had made.

And a large part of that was because the minute you have your workloads sitting in a public cloud — whether it is a development workload or something you are still working through that technically isn't yet providing value — the clock is ticking. Being able to allow for a hybrid environment, where you do a lot of that development and get it almost production-ready, and then move to the public cloud only when the time is right to drive value from that application — those are huge cost savings right there.

Clients that have managed to balance those two paradigms are the ones who are also seeing a lot of economic efficiencies.

Won: The most important thing that people see value in is that agility. The ability to respond much faster to competitive actions or to new changes in the market, the ability to bring applications out faster, to be able to update applications in months — or sometimes even weeks — rather than the two years that it used to take.

It’s that agility to allow people to move faster and to shift their capabilities so much quicker than they have ever been able to do – that is the top reason why we’re seeing people moving to this hybrid model. The cost factor is also really critical as they look at whether they are doing CAPEX or OPEX and private cloud or public cloud.

One of the things that we have been doing at HPE through our Flexible Capacity program is enabling customers who are acquiring hardware to run these private clouds to pay for it on a pay-as-you-go basis. This allows them to better align cost to usage — taking that whole concept of pay-as-you-go that we see in the public cloud and bringing it into a private cloud environment.
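
A back-of-the-envelope comparison shows why that matters. The figures below are invented for illustration only; they are not HPE Flexible Capacity pricing.

    # Invented figures, only to show how pay-as-you-go tracks cost to usage.
    provisioned_capacity = 100                    # units bought up front for peak
    monthly_usage = [40, 45, 55, 70, 90, 60]      # hypothetical units actually used

    fixed_unit_cost = 10.0    # cost per provisioned unit per month (CAPEX-like)
    payg_unit_cost = 12.0     # higher per-unit rate, but billed only on usage

    fixed_total = len(monthly_usage) * provisioned_capacity * fixed_unit_cost
    payg_total = sum(units * payg_unit_cost for units in monthly_usage)

    print(f"Fixed provisioning: {fixed_total:,.0f}")   # 6,000
    print(f"Pay-as-you-go:      {payg_total:,.0f}")    # 4,320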

Antao: That's a great point. From a cost standpoint, there is an efficiency discussion. But we are also seeing in today's world that we are depending on edge computing a lot more. I was talking to the CIO of a large park the other day, and his comment to me was that, yes, they would love to use the public cloud, but they cannot afford any kind of latency or disruption of services. He has thousands of visitors and guests in his park, and given the amount of dependency on technology, he cannot afford that kind of latency.

And so part of it is also the revenue-impact discussion — using the public cloud in a way that allows you to manage some of those risks, while keeping the analytical power and computing power you need closer to the edge — closer to your internal systems.

Gardner: Microsoft Azure Stack is reinforcing the power and capability of hybrid cloud models, but Azure Stack is not going to be the same for each individual enterprise. How they differentiate, how they use and take advantage of a hybrid continuum will give them competitive advantages and give them a one-up in terms of skills.

It seems to me that the continuum of Azure Stack, of a hybrid cloud, is super-important. But how your organization specifically takes advantage of that is going to be the key differentiator. And that’s where an ecosystem solutions approach can be a huge benefit.

Let’s look at what comes next. What might we be talking about a year from now when we think about Microsoft Azure Stack in the market and the impact of hybrid cloud on businesses, Ken?

Look at clouds from both sides now

You will see that as a break in the boundary of private cloud versus public cloud, so think of it as a continuum.

Won: You will see organizations shifting from a world of using multiple clouds, with different applications or services on different clouds, to having an environment where services are based on multiple clouds. With the new cloud-native applications, you'll be running different aspects of those services in different locations based on the requirements of each particular microservice.

So a service may be partially running in Azure, part of it may be running in Azure Stack. You will certainly see that as a kind of break in the boundary of private cloud versus public cloud, and so think of it as a continuum, if you will, of different environments able to support whatever applications they need.

Gardner: Ro, as people get more into the weeds with hybrid cloud, maybe using Azure Stack, how will the market adjust?

Antao: I completely agree with Ken in terms of how organizations are going to evolve their architecture. At PwC we have this term called the Configurable Enterprise, which essentially focuses on how the IT organization consumes services from all of these different sources to be able to ultimately solve business problems.

To that point, where we see the market trends is in the hybrid IT space, the adoption of that continuum. One of the big pressures IT organizations face is how they are going to evolve their operating model to be successful in this new world. CIOs, especially the forward-thinking ones, are starting to ask that question. We are going to see in the next 12 months a lot more pressure in that space.

Gardner: These are, after all, still early days of hybrid cloud and hybrid IT. Before we sign off, how should organizations that might not yet be deep into this prepare themselves? Are there operations, culture, and skills considerations? How might you get into a good position to take advantage of this when you do take the plunge?

Plan to succeed with IT on board

Won: One of the things we recommend is a workshop where we sit down with the customer and think through their company strategy. What is their IT strategy? How does that relate or map to the infrastructure that they need in order to be successful?

This makes the connection between the value they want to offer as a company, as a business, to the infrastructure. It puts a plan in place so that they can see that direct linkage. That workshop is one of the things that we help a lot of customers with.

We also have innovation centers that we’ve built with Microsoft where customers can come in and experience Azure Stack firsthand. They can see the latest versions of Azure Stack, they can see the hardware, and they can meet with experts. We bring in partners such as PwC to have a conversation in these innovation centers with experts.

Gardner: Ro, how do you get ready when you want to take the plunge and make the most of it?

Antao: We are at a stage right now where these transformations can no longer be done to the IT organization; the IT organization has to come along on this journey. What we have seen work, especially in the early stages, is running pilot projects — involving the developers, the infrastructure architects, and the operations folks in pilot workloads — and learning how to manage them going forward in this new model.

You want to drive that from a top-down perspective, tying it to where this adds the most value to the business. From a grassroots perspective, you also need to create champions in the trenches who are going to manage this new environment. Combining those two efforts has been very successful for organizations as they embark on this journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Advanced IoT systems provide analysis catalyst for the petrochemical refinery of the future

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) technology trends interview explores how IT combines with IoT to help create the refinery of the future.

We’ll now learn how a leading-edge petrochemical company in Texas is rethinking data gathering and analysis to foster safer environments and greater overall efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To help us define the best of the refinery of the future vision is Doug Smith, CEO of Texmark Chemicals in Galena Park, Texas, and JR Fuller, Worldwide Business Development Manager for Edgeline IoT at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends driving this need for a new refinery of the future? Doug, why aren’t the refinery practices of the past good enough?

Smith: First of all, I want to talk about people. People are the catalysts who make this refinery of the future possible. At Texmark Chemicals, we spent the last 20 years making capital investments in our infrastructure, in our physical plant, and in the last four years we have put together a roadmap for our IT needs.

Through our introduction to HPE, we have entered into a partnership that is not just a typical vendor-customer relationship. It's more than that, and it allows us to work together to discover IoT solutions that we can bring to bear on our IT challenges at Texmark. So, we are on the voyage of discovery together — and we are sailing out to sea. It's going great.

Gardner: JR, it's always impressive when a new technology trend aids and abets a traditional business, and then that business can show through innovation what should come next in the technology. How is that back and forth working? Where should we expect IoT to go in terms of business benefits in the not-too-distant future?

Fuller: One of the powerful things about the partnership and relationship we have is that we each respect and understand each other's "swim lanes." I'm not trying to be a chemical company. I'm trying to understand what they do and how I can help them.

And they're not trying to become an IT or IoT company. Their job is to make chemicals; our job is to figure out the IT. We're seeing in Texmark the transformation from an Old World economy-type business to a New World economy-type business.

This is huge, this is transformational. As Doug said, they’ve made huge investments in their physical assets and what we call Operational Technology (OT). They have done that for the past 20 years. The people they have at Texmark who are using these assets are phenomenal. They possess decades of experience.

Yet IoT is really new for them. How to leverage that? They have said, “You know what? We squeezed as much as we can out of OT technology, out of our people, and our processes. Now, let’s see what else is out there.”

And through introductions to us and our ecosystem partners, we’ve been able to show them how we can help squeeze even more out of those OT assets using this new technology. So, it’s really exciting.

Gardner: Doug, let’s level-set this a little bit for our audience. They might not all be familiar with the refinery business, or even the petrochemical industry. You’re in the process of processing. You’re making one material into another and you’re doing that in bulk, and you need to do it on a just-in-time basis, given the demands of supply chains these days.

You need to make your business processes and your IT network mesh, to reach every corner. How does a wireless network become an enabler for your requirements?

The heart of IT 

Smith: In a large plant facility, we have different pieces of equipment. One piece of equipment is a pump — the analogy would be the heart of the process facility of the plant.

To your question regarding the wireless network: if we can sensor a pump and tie it into a mesh network, there are incredible cost savings for us. The physical wiring of a pump runs anywhere from $3,000 to $5,000 per pump. So, we see a savings in that.

Being able to have the information wirelessly right away — that gives us knowledge immediately that we wouldn't have otherwise. We have workers and millwrights at the plant that physically go out and inspect every single pump in our plant, and we have 133 pumps. If we can utilize our sensors through the wireless network, our millwrights can concentrate on the pumps that they know are having problems.

To have the information wirelessly right away — that gives us knowledge immediately that we wouldn’t have otherwise.
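
Those two figures — $3,000 to $5,000 of wiring per pump and 133 pumps — imply a plant-wide range that is easy to check:

    # Rough plant-wide wiring cost avoided, using the figures quoted above.
    pumps = 133
    low, high = 3_000, 5_000    # per-pump hard-wiring cost, USD

    print(f"Avoided wiring cost: ${pumps * low:,} to ${pumps * high:,}")
    # roughly $399,000 to $665,000 if every pump is sensored wirelessly instead of wired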

Gardner: You're also able to track those individuals, those workers, so if there's a need to communicate, to locate, to make sure that they are hearing the policy — that's another big part of IoT and people coming together.

Safety is good business

Smith: The tracking of workers is more of a safety issue — and safety is critical, absolutely critical in a petrochemical facility. We must account for all our people and know where they are in the event of any type of emergency situation.

Gardner: We have the sensors, we can link things up, we can begin to analyze devices and bring that data analytics to the edge, perhaps within a mini data center facility, something that’s ruggedized and tough and able to handle a plant environment.

Given this scenario, JR, what sorts of efficiencies are organizations like Texmark seeing? I know in some businesses, they talk about double digit increases, but in a mature industry, how does this all translate into dollars?

Fuller: We talk about the power of one percent. A one percent improvement in one of the major companies is multi-billions of dollars saved. A one percent change is huge, and, yes, at Texmark we’re able to see some larger percentage-wise efficiency, because they’re actually very nimble.

It’s hard to turn a big titanic ship, but the smaller boat is actually much better at it. We’re able to do things at Texmark that we are not able to do at other places, but we’re then able to create that blueprint of how they do it.

You’re absolutely right, doing edge computing, with our HPE Edgeline products, and gathering the micro-data from the extra compute power we have installed, provides a lot of opportunities for us to go into the predictive part of this. It’s really where you see the new efficiencies.

Recently I was with the engineers out there, and we’re walking through the facility, and they’re showing us all the equipment that we’re looking at sensoring up, and adding all these analytics. I noticed something on one of the pumps. I’ve been around pumps, I know pumps very well.

I saw this thing, and I said, “What is that?”

“So that’s a filter,” they said.

I said, “What happens if the filter gets clogged?”

“It shuts down the whole pump,” they said.

“What happens if you lose this pump?” I asked.

“We lose the whole chemical process,” they explained.

“Okay, are there sensors on this filter?”

“No, there are only sensors on the pump,” they said.

There weren’t any sensors on the filter. Now, that’s just something that we haven’t thought of, right? But again, I’m not a chemical guy. So I can ask questions that maybe they didn’t ask before.

So I said, “How do you solve this problem today?”

“Well, we have a scheduled maintenance plan,” they said.

They don’t have a problem, but based on the scheduled maintenance plan that filter gets changed whether it needs to or not. It just gets changed on a regular basis. Using IoT technology, we can tell them exactly when to change that filter. Therefore IoT saves on the cost of the filter and the cost of the manpower — and those types of potential efficiencies and savings are just one small example of the things that we’re trying to accomplish.
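
A minimal sketch of that condition-based idea — replacing a fixed maintenance calendar with a sensor threshold. The sensor type, threshold, and readings below are hypothetical, not Texmark values.

    # Hypothetical condition-based check: change the filter only when the pressure
    # drop across it suggests clogging, rather than on a fixed calendar.
    CLOG_THRESHOLD_PSI = 15.0    # invented alarm level for illustration

    def filter_needs_service(differential_pressure_psi):
        """True when the pressure drop across the filter indicates clogging."""
        return differential_pressure_psi >= CLOG_THRESHOLD_PSI

    readings = [4.2, 5.1, 9.8, 14.7, 16.3]    # simulated hourly readings
    for hour, psi in enumerate(readings):
        if filter_needs_service(psi):
            print(f"hour {hour}: schedule a filter change (dP = {psi} psi)")
            break
    else:
        print("filter still healthy; skip the scheduled change")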

Continuous functionality

Smith: It points to the uniqueness of the people-level relationship between the HPE team, our partners, and the Texmark team. We are able to have these conversations to identify things that we haven’t even thought of before. I could give you 25 examples of things just like this, where we say, “Oh, wow, I hadn’t thought about that.” And yet it makes people safer and it all becomes more efficient.

Gardner: You don’t know until you have that network in place and the data analytics to utilize what the potential use-cases can be. The name of the game is utilization efficiency, but also continuous operations.

How do you increase your likelihood or reduce the risk of disruption and enhance your continuous operations using these analytics?

Smith: To answer, I’m going to use the example of toll processing. Toll processing is when we would have a customer come to us and ask us to run a process on the equipment that we have at Texmark.

Normally, they would give us a recipe, and we would process a material. We take samples throughout the process, the production, and deliver a finished product to them. With this new level of analytics, with the sensoring of all these components in the refinery of the future vision, we can provide a value-add to the customers by giving them more data than they could ever want. We can document and verify the manufacture and production of the particular chemical that we’re toll processing for them.

Fuller: To add to that, as part of the process, sometimes you may have to do multiple runs when you’re tolling, because of your feed stock and the way it works.

By using advanced analytics and the predictive benefits of having all that data, we’re looking to gain efficiencies.

By using advanced analytics, and some of the predictive benefits of having all of that data available, we're looking to gain efficiencies and cut down the number of additional runs needed. If you take a process that would have taken three runs and we can knock that down to two runs — that's roughly a 30 percent decrease in total cost and expense. It also allows them to produce more product, and to get it out to people a lot faster.

Smith: Exactly. Exactly!
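
For the record, the arithmetic behind that example is simple:

    runs_before, runs_after = 3, 2
    reduction = (runs_before - runs_after) / runs_before
    print(f"{reduction:.0%} fewer runs")    # about 33% -- the "roughly 30 percent" cited above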

Gardner: Of course, the more insight that you can obtain from a pump, and the more resulting data analysis, that gives you insight into the larger processes. You can extend that data and information back into your supply chain. So there’s no guesswork. There’s no gap. You have complete visibility — and that’s a big plus when it comes to reducing risk in any large, complex, multi-supplier undertaking.

Beyond data gathering, data sharing

Smith: It goes back to relationships at Texmark. We have relationships with our neighbors that are unique in the industry, and so we would be able to share the data that we have.

Fuller: With suppliers.

Smith: Exactly, with suppliers and vendors. It’s transformational.

Gardner: So you're extending a common, standard, industry-accepted platform approach locally into an extended process benefit. And you can share that because you are using common, IT-industry-wide infrastructure from HPE.

Fuller: And that’s very important. We have a three-phase project, and we’ve just finished the first two phases. Phase 1 was to put ubiquitous WiFi infrastructure in there, with the location-based services, and all of the things to enable that. The second phase was to upgrade the compute infrastructure with our Edgeline compute and put in our HPE Micro Datacenter in there. So now they have some very robust compute.

With that infrastructure in place, it now allows us to do that third phase, where we’re bringing in additional IoT projects. We will create a data infrastructure with data storage, and application programming interfaces (APIs), and things like that. That will allow us to bring in a specialty video analytic capability that will overlay on top of the physical and logical infrastructure. And it makes it so much easier to integrate all that.

Gardner: You get a chance to customize the apps much better when you have a standard IT architecture underneath that, right?

Trailblazing standards for a new workforce

Smith: Well, exactly. What you are saying, Dana — and it gives me chills when I start thinking about what we're doing at Texmark within our industry — is that we are setting standards, blazing a new trail. When we talk to our customers and our suppliers and we tell them about this refinery of the future project that we're initiating, all other business goes out the window. They want to know more about what we're doing with the IoT — and that's incredibly encouraging.

Gardner: I imagine that there are competitive advantages when you can get out in front and you’re blazing that trail. If you have the experience, the skills of understanding how to leverage an IoT environment, and an edge computing capability, then you’re going to continue to be a step ahead of the competition on many levels: efficiency, safety, ability to customize, and supply chain visibility.

Smith: It surely allows our Texmark team to do their jobs better. I use the example of the millwrights going out and inspecting pumps; they do that every day, and they do it very well. If we can give them the tools to focus on what they do best over a lifetime of working with pumps, and only work on the pumps that they need to, that's a great example.

I am extremely excited about the opportunities at the refinery of the future to bring new workers into the petrochemical industry. We have a large number of people within our industry who are retiring; they’re taking intellectual capital with them. So to be able to show young people that we are using advanced technology in new and exciting ways is a real draw and it would bring more young people into our industry.

Gardner: By empowering that facilities edge and standardizing IT around it, that also gives us an opportunity to think about the other part of this spectrum — and that’s the cloud. There are cloud services and larger data sets that could be brought to bear.

How does the linking of the edge to the cloud have a benefit?

Cloud watching

Fuller: Texmark Chemicals has one location, and they service the world from that location as a global leader in dicyclopentadiene (DCPD) production. So the cloud doesn’t have the same impact as it would for maybe one of the other big oil or big petrochemical companies. But there are ways that we’re going to use the cloud at Texmark and rally around it for safety and security.

Utilizing our location-based services and our compute, if there is an emergency — whether it's at Texmark or at a neighbor — we can draw on cloud-based information like weather, humidity, and wind direction — all of these things that are constantly changing — to provide better-directed responses. That's one way we would be using the cloud at Texmark.

When we start talking about the larger industry — and connecting multiple refineries together or upstream, downstream and midstream kinds of assets together with a petrochemical company — cloud becomes critical. And you have to have hybrid infrastructure support.

You don’t want to send all your video to the cloud to get analyzed. You want to do that at the edge. You don’t want to send all of your vibration data to the cloud, you want to do that at the edge. But, yes, you do want to know when a pump fails, or when something happens so you can educate and train and learn and share that information and institutional knowledge throughout the rest of the organization.
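
A minimal sketch of that split: analyze the high-rate data at the edge, and forward only significant events upstream. The threshold and the cloud-notification stub are placeholders, not HPE Edgeline APIs.

    # Analyze high-rate sensor data locally; escalate only noteworthy events.
    VIBRATION_ALERT_MM_S = 7.1    # hypothetical alarm level, mm/s RMS

    def notify_cloud(event):
        # Stand-in for whatever cloud API a real deployment would call.
        print(f"forwarding to cloud: {event}")

    def process_at_edge(samples):
        """Keep raw samples at the edge; send upstream only threshold crossings."""
        for index, value in enumerate(samples):
            if value >= VIBRATION_ALERT_MM_S:
                notify_cloud({"sample_index": index, "vibration_mm_s": value})

    process_at_edge([1.2, 2.4, 3.1, 7.5, 2.0])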

Gardner: Before we sign off, let’s take a quick look into the crystal ball. Refinery of the future, five years from now, Doug, where do you see this going?

Smith: The crystal ball is often kind of foggy, but it’s fun to look into it. I had mentioned earlier opportunities for education of a new workforce. Certainly, I am focused on the solutions that IoT brings to efficiencies, safety, and profitability of Texmark as a company. But I am definitely interested in giving people opportunities to find a job to work in a good industry that can be a career.

Gardner: JR, I know HPE has a lot going on with edge computing, making these data centers more efficient, more capable, and more rugged. Where do you see the potential here for IoT capability in refineries of the future?

Future forecast: safe, efficient edge

Fuller: You’re going to see the pace pick up. I have to give kudos to Doug. He is a visionary. Whether he admits that or not, he is actually showing an industry that has been around for many years how to do this and be successful at it. So that’s incredible. In that crystal ball look, that five-year look, he’s going to be recognized as someone who helped really transform this industry from old to new economy.

As far as edge computing goes, our converged Edgeline systems are our first generation, and with them we've created the market space for hardened, converged edge systems. Now, we're working on generation 2. We're going to get faster, smaller, and cheaper, and become more ubiquitous. I see our IoT infrastructure having a dramatic impact on what we can actually accomplish, and on the workforce, in five years. It will be more virtual and augmented and have all of these capabilities. It's going to be a lot safer for people, and it's going to be a lot more efficient.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Get ready for the post-cloud world

Just when cloud computing seems inevitable as the dominant force in IT, it’s time to move on because we’re not quite at the end-state of digital transformation. Far from it.

Now’s the time to prepare for the post-cloud world.

It's not that cloud computing is going away. It's that we need to be ready to make the best of IT productivity once cloud, in its many forms, becomes so pervasive as to be mundane — the place where all great IT innovations must go.

India Smart Cities Mission shows IoT potential for improving quality of life at vast scale

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) transformation discussion examines the potential impact and improvement of low-power edge computing benefits on rapidly modernizing cities.

These so-called smart city initiatives are exploiting open, wide area networking (WAN) technologies to make urban life richer in services, safer, and far more responsive to residents' needs. We will now learn how such pervasively connected and data-driven IoT architectures are helping cities in India vastly improve the quality of life there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how communication service providers have become agents of digital urban transformation are VS Shridhar, Senior Vice President and Head of the Internet-of-Things Business Unit at Tata Communications, in the Chennai area of India, and Nigel Upton, General Manager of the Universal IoT Platform and Global Connectivity Platform and Communications Solutions Business at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about India’s Smart Cities mission. What are you up to and how are these new technologies coming to bear on improving urban quality of life?

Shridhar: The government is clearly focusing on Smart Cities as part of their urbanization plan, as they believe Smart Cities will not only improve the quality of living, but also generate employment and take the whole country forward in terms of embracing technology and improving the quality of life.

So with that in mind, the Government of India has launched 100 Smart Cities initiatives. It’s quite interesting because each of the cities that aspire to belong had to make a plan and their own strategy around how they are going to evolve and how they are going to execute it, present it, and get selected. There was a proper selection process.

Many of the cities made it, and of course some of them didn’t make it. Interestingly, some of the cities that didn’t make it are developing their own plans.

There is a lot of excitement and curiosity, as well as action, in the Smart Cities project. Admittedly, it's a slow process — it's not something you can do in the blink of an eye, and Rome wasn't built in a day — but I definitely see a lot of progress.

Gardner: Nigel, it seems that the timing for this is auspicious, given that there are some foundational technologies that are now available at very low cost compared to the past, and that have much more of a pervasive opportunity to gather information and make a two-way street, if you will, between the edge and central administration. How is the technology evolution synching up with these Smart Cities initiatives in India?

Upton: I am not sure whether it’s timing or luck, or whatever it happens to be, but adoption of the digitization of city infrastructure and services is to some extent driven by economics. While I like to tease my colleagues in India about their sensitivity to price, the truth of the matter is that the economics of digitization — and therefore IoT in smart cities — needs to be at the right price, depending on where it is in the world, and India has some very specific price points to hit. That will drive the rate of adoption.

And so, we're very encouraged that innovation is continuing to drive price points down to the point that mass adoption can take hold, and the benefits can be realized by a much broader spectrum of the population. Working with Tata Communications has really helped HPE understand this, continue to evolve the technology, and be part of the partner ecosystem, because it does take a village to raise an IoT smart city. You need a lot of partners to make this happen, and that combination of partnership, willingness to work together, and driving the economic price points to the point of adoption has been absolutely critical in getting us to where we are today.

Balanced Bandwidth

Gardner: Shridhar, we have some very important optimization opportunities around things like street lighting, waste removal, public safety, water quality; of course, the pervasive need for traffic and parking, monitoring and improvement.

How do things like low-power specifications, Internet and network gateways, and low-power WANs (LPWANs) create a new technical foundation to improve these services? How do we connect the services and the technology for an improved outcome?

Shridhar: If you look at human interaction with the Internet, we have a lot of technology coming our way. We used to have 2G; that has moved to 3G and then 4G, and that is a lot of bandwidth coming our way. We would like to have a tremendous amount of access and bandwidth speeds and so on, right?

So the human interaction and experience is improving vastly, given the networks that are growing. On the machine-to-machine (M2M) side, it's going to be different. They don't need oodles of bandwidth. About 80 to 90 percent of all machine interactions are going to be very, very low bandwidth — and, of course, low power. I will come to the low power in a moment, but it's going to be a very low bandwidth requirement.

In order to switch off a streetlight, how much bandwidth do you actually require? Or, in order to sense temperature or air quality or water and water quality, how much bandwidth do you actually require?

When you ask these questions, you get an answer that the machines don’t require that much bandwidth. More importantly, when there are millions — or possibly billions — of devices to be deployed in the years to come, how are you going to service a piece of equipment that is telling a streetlight to switch on and switch off if the battery runs out?

Machines are different from humans in terms of interactions. When we deploy machines that require low bandwidth and low power consumption, a battery can enable such a machine to communicate for years.

Aside from heavy video streaming applications or constant security monitoring, where low-bandwidth, low-power technology doesn’t work, the majority of the cases are all about low bandwidth and low power. And these machines can communicate with the quality of service that is required.

When it communicates, the network has to be available. You then need to establish a network that is highly available, which consumes very little power and provides the right amount of bandwidth. So studies show that less than 50 kbps connectivity should suffice for the majority of these requirements.
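
To make the bandwidth point concrete, consider a hypothetical sensor that sends a small reading every 15 minutes (payload size and interval are invented for illustration):

    payload_bytes = 24           # hypothetical sensor reading plus header
    messages_per_day = 24 * 4    # one message every 15 minutes

    daily_bytes = payload_bytes * messages_per_day
    average_bps = daily_bytes * 8 / 86_400    # seconds in a day

    print(f"{daily_bytes} bytes/day, average {average_bps:.3f} bit/s")
    # Orders of magnitude below the sub-50 kbps figure cited above, which is why
    # low-power, low-bandwidth links are a comfortable fit for this traffic.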

Now the machine interaction also means that you collect all of them into a platform and basically act on them. It’s not about just sensing it, it’s measuring it, analyzing it, and acting on it.

Low-power to the people

The whole stack consists of more than connectivity alone. But it's LPWAN technology that is emerging now and becoming a de facto standard as more and more countries start embracing it.

At Tata Communications we have embraced the LPWAN technology from the LoRa Alliance, a consortium of more than 400 partners who have gotten together and are driving standards. We are creating this network over the next 18 to 24 months across India. We have made these networks available right now in four cities. By the end of the year, it will be many more cities — almost 60 cities across India by March 2018.

Gardner: Nigel, how do you see the opportunity, the market, for a standard architecture around this sort of low-power, low-bandwidth network? This is a proof of concept in India, but what’s the potential here for taking this even further? Is this something that has global potential?

Upton: The global potential is undoubtedly there, and there is an additional element we didn't talk about, which is that not all devices require the same amount of bandwidth. We have talked about video surveillance requiring higher bandwidth, and we have talked about low-power, low-bandwidth devices that will essentially be deployed once and forgotten, yet expected to last 5 or 10 years.

We also need to add in the aspect of security, and that really gave HPE and Tata the common ground of understanding that the world is made up of a variety of network requirements, some of which will be met by LPWAN, some of which will require more bandwidth, maybe as high as 5G.

The real advantage of using a common architecture to take the data from these devices is having things like common management, common security, and a common data model, so that you really have the power to take information and data from all of these different types of devices and pull it into a common platform that is based on a standard.

In our case, we selected the oneM2M standard; it's the best standard available for building that common data model. That's the reason we deployed the oneM2M model within the Universal IoT Platform — to get that consistency no matter what type of device, over no matter what type of network.
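
A minimal sketch of what a common data model buys you: readings from very different devices normalized into one envelope before they reach the platform. The field names here are invented and are not the actual oneM2M resource structure.

    from datetime import datetime, timezone

    def normalize(device_id, device_type, metric, value, unit):
        """Wrap any device reading in one common shape the platform can store and query."""
        return {
            "device_id": device_id,
            "device_type": device_type,
            "metric": metric,
            "value": value,
            "unit": unit,
            "observed_at": datetime.now(timezone.utc).isoformat(),
        }

    readings = [
        normalize("lamp-0042", "streetlight", "power_state", "off", "enum"),
        normalize("aq-0007", "air-quality-sensor", "pm2_5", 38.2, "ug/m3"),
    ]
    for reading in readings:
        print(reading)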

Gardner: It certainly sounds like this is an unprecedented opportunity to gather insight and analysis into areas that you just really couldn’t have measured before. So going back to the economics of this, Shridhar, have you had any opportunity through these pilot projects in such cities as Jamshedpur to demonstrate a return on investment, perhaps on street lighting, perhaps on quality of utilization and efficiency? Is there a strong financial incentive to do this once the initial hurdle of upfront costs is met?

Data-driven cost reduction lights up India

Unless the customer sees that there is a scope for either reducing the cost or increasing the customer experience, they are not going to buy these kinds of solutions.

Shridhar: Unless the customer sees that there is scope for either reducing the cost or increasing the customer experience, they are not going to buy these kinds of solutions. So if you look at how things have been progressing, I will give you a few examples of how the costs have started playing out. One, of course, is having devices that meet a certain price point. Nigel was remarking how cost-conscious the Indian market is, and that's important: once we deliver to a certain cost here, we believe we can deliver globally at scale. So if we build something in India, it will deliver to the global market as well.

Let's take the streetlight example specifically and see what kind of benefits it would give. When a streetlight operates for about 12 hours a day, it costs about Rs. 12, which is about $0.15. When you start optimizing it — say this is a streetlight currently running on halogen and you move it to LED — that brings some cost savings, in some cases significant ones. India is going through an LED revolution, as you may have read in the newspapers; those streetlights are being converted, and that's one distinct cost advantage.

Now they are looking at driving the usage and the electricity bills even lower by optimizing it. Say you sync it with the astronomical clock, so that at 6:30 in the evening it comes on and at 6:30 in the morning it shuts down — linked to the astronomical clock, because now you are connecting this controller to the Internet.

The second thing you would do is keep it at its brightest during busy hours — let's say between 7:00 and 10:00 — and after that you start dimming it. You can control it down in 10 percent increments.

The point I am making is that you deliver light intensity to match the requirement you have. If it is busy, if there is nobody on the street, or if there is a safety requirement, a sensor will trigger a series of lights, and so on.

So your ability to play around with delivering streetlight only to the requirement is so great that it brings down the total cost. Whereas I told you about the $0.15 you would spend per streetlight, that could be brought down to $0.05. That's the kind of advantage you get by better controlling the streetlights. The business case builds up, and a customer can save 60 to 70 percent just by doing this. Obviously, then the business case stands out.
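
Using the figures quoted above — about $0.15 for a 12-hour night at full brightness, dimmable in 10 percent steps — a rough model of that schedule bears out the claim. The evening/off-peak split below is an illustrative assumption, and LED conversion savings are not included.

    full_night_cost = 0.15      # USD for 12 hours at full brightness, as quoted above
    hours_per_night = 12
    cost_per_hour_full = full_night_cost / hours_per_night

    # Illustrative schedule: full brightness in the busy evening, dimmed afterwards.
    schedule = [
        (3, 1.0),    # 3 busy hours at 100% brightness
        (9, 0.2),    # 9 off-peak hours dimmed to 20%
    ]

    dimmed_cost = sum(hours * level * cost_per_hour_full for hours, level in schedule)
    savings = 1 - dimmed_cost / full_night_cost
    print(f"nightly cost ${dimmed_cost:.3f}, about {savings:.0%} lower")    # $0.060, 60%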

The question you are asking is an interesting one, because each of the applications has its own way of returning the investment while resources are being optimized. There is also a collateral positive benefit of helping the environment. So not only do I gain business savings and business optimization, but I also pass on a bigger, general message of a green environment. Environment and safety are the two biggest benefits of implementing this, and they really appeal to our customers.

Gardner: It’s always great to put hard economic metrics on these things, but Shridhar just mentioned safety. Even when you can’t measure in direct economics, it’s invaluable when you can bring a higher degree of safety to an urban environment.

It opens things up for more foot traffic, which can lead to greater economic development, which can then provide more tax revenue. It seems to me that there is a multiplier effect when you have this sort of intelligent urban landscape that creates a cascading set of benefits: the more data, the more efficiency; the more efficiency, the more economic development; the more revenue, the more data, and so on. So tell us a little bit about this ongoing multiplier and virtuous adoption benefit when you go to intelligent urban environments.

Quality of life, under control

Upton: Yes, and it's also important to note that it differs country to country, and almost region to region within countries. The interesting challenge with smart cities is that often you're dealing with elected officials rather than hard-nosed businesspeople who are only interested in the financial return. And because you're dealing with politicians who represent the citizens in their area — their city, their town, or their region — their priorities are not always the same.

There is quite a variation in the particular social challenges, as well as the quality-of-life challenges, in each of the areas they work in. Things like personal safety are a very big deal in some regions. I am currently in Tokyo, and here there is much more concern around quality of life and mobility for a rapidly aging population, so their challenges are somewhat different.

But in India, the set of opportunities and challenges is a combination of the economic and the social. If you solve them, you essentially give citizens more peace of mind, more ability to move freely and to take part in the economic interaction within that area, and that undoubtedly leads to greater growth. But it is worth bearing in mind that it does vary almost city by city and region by region.

Gardner: Shridhar, do you have any other input on the cascading, ongoing set of benefits when you get more data and more network opportunity? For the longer term, being intelligent and data-driven has an ongoing set of benefits — what might those be? How can this be a long-term data and analytics treasure trove when you think about it in terms of how to provide better urban experiences?

Home/work help

Shridhar: From our perspective, when we looked at the customer benefits there is a huge amount of focus around the smart cities and how smart cities are benefiting from a network. If you look at the enterprise customers, they are also looking at safety, which is an overlapping application that a smart city would have.

So the enterprise wants to provide safety to its workers — for example, in mines or in difficult terrain, environments where they are focused on helping them. Or women's safety, which, as you know, is a big thing in India as well — how do you provide a device that is not very obvious and gives women the safety they need?

So all of this, in some form, is providing data. One of the things that comes to my mind when you ask how data-driven these resources can be, and what kind of quality that would give, is customer-service devices. There could be applications where, let's say, a housewife has a multiple-button kind of device from which she can order a service.

Depending on which service she presses, aggregated across households in India, you would know the trends and direction of a certain service. And mind you, it could be as simple as a three-button device that says Service A, Service B, Service C — a consumer service that gets extended to a particular household and that we sell as a service.

So you could get lots of trends and patterns emerging from that, and we believe the customer experience is going to change, because no longer does a customer have to remember phone numbers or apps in order to place an order; you give them the convenience of a simple button-press service. That immediately comes to my mind.

Feedback fosters change

The second one is feedback. You can use the same three-button device to rate the quality of service of the multiple utilities you are using. There is a toilet revolution going on in India, for example; you put these buttons out there, and they will tell you at any given point in time what the user satisfaction is, and so on.

So all of this data is getting gathered, and while it is early days for us to put out analytics and point to distinct benefits, some of the things customers are already looking at are which geographies, which segments, and the profile of the customers using this, and so on. That kind of information is going to come out very, very distinctly.

Smart Cities are all about experience. Enterprises are now looking at the data that is coming out and seeing how they can use it to better segment and provide a better customer experience — which would obviously mean both adding to their top line and helping them manage their bottom line. So it's beyond safety; it's getting into the realm of managing customer experience.

Gardner: From a go-to-market perspective, or a go-to-city perspective, these are very complex undertakings — lots of moving parts, lots of different technologies and standards. How are Tata and HPE coming together — along with other service providers, Pointnext for example? How do you put this into a package that can actually be managed and put in place? How do we make this appealing not only in terms of its potential but as actionable when it comes to different cities and regions?

Upton: The concept of Smart Cities has been around for a while and various governments around the world have pumped money into their cities over an extended period of time.

We now have the infrastructure in place, we have the price points and we have IoT becoming mainstream.

As usual, these things always take more time than you think, and I do not believe today that we have a technology challenge on our hands; we have much more of a business model challenge. Deploying technology to bring benefits to citizens is finally getting to the point where it is much better understood. There has been very rapid innovation at the device level — whether it's streetlights or the ability to measure water quality, sound quality, humidity, all of these metrics we now have available — and in the economics of how to produce those devices at a price that will enable widespread deployment.

All that has been happening rapidly over the last few years getting us to the point where we now have the infrastructure in place, we have the price points in place, and we have IoT becoming mainstream enough that it is entering into the manufacturing process of all sorts of different devices, as I said, ranging from streetlights to personal security devices through to track and trace devices that are built into the manufacturing process of goods.

That is now reaching mainstream and we are now able to take advantage of this massive data that’s now being produced to be able to produce even more efficient and smarter cities, and make them safer places for our citizens.

Gardner: Last word to you, Shridhar. If people wanted to learn more about the pilot proof of concept (PoC) that you are doing there at Jamshedpur and other cities, through the Smart Cities Mission, where might they go, are there any resources, how would you provide more information to those interested in pursuing more of these technologies?

Pilot projects take flight

Shridhar: I would be very happy to help them look at the PoCs that we are doing. I would classify the PoCs we are doing into safety, energy management as one big bucket, the customer service I spoke about, and a fourth, which is more on the utility side. Gas and water are two big applications where customers are looking at these PoCs very seriously.

And there is one very interesting application a customer wanted for pest control: he wanted his mousetraps to have sensors so that, at any point in time, they would know whether a rat had been trapped — which I thought was a very interesting thing.

We have multiple streams, and we have done multiple PoCs. As the Tata Communications team, we would be very happy [to provide more information], and the HPE folks are in touch with us.

You could write to us — to me in particular. We are also putting information on our website. We have marketing collateral that describes this. We will do some joint workshops with HPE as well.

So there are multiple ways to reach us, and one of the best ways obviously is through our website. We are always there to provide more information and help, and we believe that we can't do it all alone; it's about the ecosystem getting to know this and getting to work on it.

While we have partners like HPE at the platform level, we also have partners such as Semtech, who established a Center of Excellence in Mumbai along with us. So access to the ecosystem — from the HPE side as well as from our other partners — is available, and we are happy to work together and co-create solutions going forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How confluence of cloud, UC and data-driven insights newly empowers contact center agents

The next BriefingsDirect customer experience insights discussion explores how Contact center-as-a-service (CCaaS) capabilities are becoming more powerful as a result of leveraging cloud computing, multi-mode communications channels, and the ability to provide optimized and contextual user experiences.

More than ever, businesses have to make difficult and complex decisions about how to best source their customer-facing services. Which apps and services, what data and resources should be in the cloud or on-premises — or in some combination — are among the most consequential choices business leaders now face. As the confluence of cloud and unified communications (UC) — along with data-driven analytics — gain traction, the contact center function stands out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. 

We’ll now hear why traditional contact center technology has become outdated, inflexible and cumbersome, and why CCaaS is becoming more popular in meeting the heightened user experience requirements of today.

Here to share more on the next chapter of contact center and customer service enhancements, is Vasili Triant, CEO of Serenova in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the new trends reshaping the contact center function?

Triant: What’s changed in the world of contact center and customer service is that we’re seeing a generational spread — everything from baby boomers all the way now to Gen Z.

With the proliferation of smartphones through the early 2000s, and new technologies and new channels — things like WeChat and Viber — all these customers are now potential inbound discussions with brands. And they all have different mediums that they want to communicate on. It’s no longer just phone or e-mail: It’s phone, e-mail, web chat, SMS, WeChat, Facebook, Twitter, LinkedIn, and there are other channels coming around the corner that we don’t even know about yet.

Vasili Triant

When you take all of these folks — customers or brands — and you take all of these technologies that consumers want to engage with across all of these different channels – it’s simple, they want to be heard. It’s now the responsibility of brands to determine what is the best way to respond and it’s not always one-to-one.

So it’s not a phone call for a phone call, it’s maybe an SMS to a phone call, or a phone call to a web chat — whatever those [multi-channels] may be. The complexity of how we communicate with customers has increased. The needs have changed dramatically. And the legacy types of technologies out there, they can’t keep up — that’s what’s really driven the shift, the paradigm shift, within the contact center space.

Gardner: It’s interesting that the new business channels for marketing and capturing business are growing more complex. They still have to then match on the back end how they support those users, interact with them, and carry them through any sort of process — whether it’s on-boarding and engaging, or it’s supporting and servicing them.

What we’re requiring then is a different architecture to support all of that. It seems very auspicious that we have architectural improvements right along with these new requirements.

Triant: We have two things that have collided at the same time – cloud technologies and the growth of truly global companies.

Most of the new channels that have rolled out are in the cloud. I mean, think about it — Facebook is a cloud technology, Twitter is a cloud technology. WeChat, Viber, all these things, they are all cloud technologies. It’s becoming a Software-as-a-Service (SaaS)-based world. The easiest and best way to integrate with these other cloud technologies is via the cloud — versus on-premises. So what began as the shift of on-premises technology to cloud contact center — and that really began in 2011-2012 – has rapidly picked up speed with the adoption of multi-channels as a primary method of communication.

The only way to keep up with the pace of development of all these channels is through cloud technologies because you need to develop an agile world, you need to be able to get the upgrades out to customers in a quick fashion, in an easy fashion, and in an inexpensive fashion. That’s the core difference between the on-premises world and the cloud world.

At the same time, we are no longer talking about a United States company, an Australia company, or a UK company — we are talking about everything as global brands, or global businesses. Customer service is global now, and no one cares about borders or countries when it comes to communication with a brand.

Customer service is global now, and no one cares about borders or countries when it comes to communications with a brand.

Gardner: We have been speaking about this through the context of the end-user, the consumer. But this architecture and its ability to leverage cloud also benefits the agent, the person who is responsible for keeping that end-user happy and providing them with the utmost in intelligent services. So how does the new architecture also aid and abet the agent?

Triant: The agent is frankly one of the most important pieces to this entire puzzle. We talk a lot about channels and how to engage with the customer, but that’s really what we call listening. But even in just simple day-to-day human interactions, one of the most important things is how you communicate back. There has been a series of time-and-motion studies done within contact centers, within brands — and you can even look at your personal experiences. You don’t have to read reports to understand this.

The baseline for how an interaction will begin and end, and whether that will be a happy or a poor interaction with the brand, depends on the agent's state of mind. If I call up and speak to "Joe," and he starts the conversation in a great mood, having a great day, then my conversation will most likely end as a positive interaction because it started that way.

But if someone is frustrated, they had a rough day, they can’t find their information, their computers have been crashing or rebooting, then the interaction is guaranteed to end up poor. You hear this all the time, “Oh, can you wait a moment, my systems are loading. Oh, I can’t get you an answer, that screen is not coming up. I can’t see your account information.” The agents are frustrated because they can’t do their job, and that frustration then blends into your conversation.

So using the technology to make it easy for the agent to do their job is essential. If they have to go from one screen to another screen to conduct one interaction with the customer — they are going to be frustrated, and that will lead to a poor experience with the customer.

Cloud technologies like Serenova's, which are web-based, can bring all of those technologies into one screen. The agent can have all the information brought to them easily, all in one click, and then be able to answer all the customer's needs. The agent is happy, and that adds to the customer's satisfaction. The conclusion of the call is a happy customer, which is what we all want. That's a great scenario, and you need cloud technology to do it because the on-premises world does not deliver a great agent experience.

One-stop service

Gardner: Another thing that the older technologies don’t provide is the ability to have a flexible spectrum to move across these channels. Many times when I engage with an organization I might start with an SMS or a text chat, but then if that can’t satisfy my needs, I want to get a deeper level of satisfaction. So it might end up going to a phone call or an interaction on the web, or even a shared desktop, if I’m in IT support, for example.

The newer cloud technology allows you to intercept via different types of channels, but you can also escalate and vary between and among them seamlessly. Why is that flexibility both of benefit to the end-user as well as the agent?

Triant: I always tell companies and customers of ours that you don't have to over-think this; all you have to do is look at your personal life. The most common things that we as users deal with — cell phone companies, cable companies, airlines — you can get onto any of their websites and begin chatting, but you can find that your interaction isn't going well. Before I started at Serenova, I had these experiences where I was dealing with the cable company and — chat, chat, chat — trying to solve my problem. But we couldn't get there, and so then we needed to get on the phone. But they said, "Here is our 800 number, call in." I'd call in, but I'd have to start a whole new interaction.

Basically, I’d have to re-explain my entire situation. Then, I am talking with one person, and they have to turn around and send me an email, but I am not going to get that email for 30 to 45 minutes because they have to get off the phone, and get into another system and send it off. In the meantime, I am frustrated, I am ticked off — and guess what I have done now? I have left that brand. This happens across the board. I can even have two totally different types of interactions with the company.

You can use a major airline brand as an example. One of our employees called on the phone trying to resolve an issue that was caused by the airline. They basically said, "No, no, no." It made her very frustrated, and she decided she is going to fly with a different airline now. She then sent a social post [to that effect], and the airline's VP of Customer Service answered it, and within minutes they had resolved her issue. But she had already spent three hours on the phone being pushed off to yet another channel, because it was a totally different group and a totally different experience.

By leveraging technologies where you can pivot from one channel to another, everyone gets answers quicker. I can be chatting with you, Dana, and realize that we need to escalate to a voice conversation, for example, and as the agent I can then turn that conversation into a voice call. You don't have to re-explain yourself, and you're like, "Wow, that's cool! Now I'm on the phone," and we are able to handle our business.

As the agent, I can also pivot simultaneously to an email channel to send you something as simple as a user guide or a series of knowledge-base articles that I may have at my fingertips as an agent. But you and I are still on the phone call. Even better, after the fact, as a business, I have all the analytics and the business intelligence to say that I had one interaction with Dana that started out as a web chat, pivoted to a phone call, and that I simultaneously sent a knowledge-base article of "X" around this issue, and I can report on it all at once. Not three separate interactions, not three separate events — and I have made you a happy customer.
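
To make that idea concrete, here is a minimal sketch of how a single cross-channel interaction could be modeled so that it reports as one event rather than three. It is illustrative only: the class names and fields are hypothetical and do not represent Serenova's CxEngage API.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ChannelEvent:
    channel: str            # "chat", "voice", "email", "sms", ...
    started_at: datetime
    note: str = ""

@dataclass
class Interaction:
    # One customer interaction, even when it pivots across channels.
    customer_id: str
    agent_id: str
    events: List[ChannelEvent] = field(default_factory=list)

    def pivot(self, channel: str, note: str = "") -> None:
        # Escalate or add a channel without opening a new case.
        self.events.append(ChannelEvent(channel, datetime.now(), note))

    def summary(self) -> dict:
        # A single analytics record spanning every channel used.
        return {
            "customer": self.customer_id,
            "agent": self.agent_id,
            "channels": [e.channel for e in self.events],
            "touches": len(self.events),
        }

# Example: a web chat escalates to voice, with a knowledge-base article emailed mid-call.
ix = Interaction(customer_id="dana", agent_id="agent-42")
ix.pivot("chat", "self-service did not resolve the issue")
ix.pivot("voice", "escalated from chat")
ix.pivot("email", "sent knowledge-base article")
print(ix.summary())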

Gardner: We are clearly talking about enabling the agent to be a super-agent, and they can, of course, be anywhere. I think this is really important now because the function of an agent — we are already seeing the beginnings of this — but it’s going to certainly include and increase having more artificial intelligence (AI) and machine learning and associated data analytics benefits. The agent then might be a combination of human and AI functions and services.

So we need to be able to integrate at a core communications basis. Without going too far down this futuristic route, isn’t it important for that agent to be an assimilation of more assets and more services over time?

Artificial Intelligence plus human support

Triant: I'm glad you brought up AI and these other technologies. The reality is that we've been through a number of cycles around what this technology is going to do and how it is going to interact with an agent. In my view, and I have been in this world for a while, the agent is the most important piece of customer service and brand engagement. But you have to be able to bring information to them, and you have to be able to give information to your customers so that if there is something simple, you get it to them as quickly as possible — but you also bring all the relevant information to the agent.

AI has had multiple forms; it has existed for a long time. Sometimes people get confused because of marketing schemes and sales tactics [and view AI] as a way for cost avoidance, to reduce agents and eliminate staff by implementing these technologies. Really the focus is how to create a better customer experience, how to create a better agent experience.

We have had AI in our product for the last three years, and we are re-releasing some components that will bring business intelligence to the forefront around the end of the year. What it essentially does is allow you to see what a user is doing out on the Internet and within these technologies. I can see that you have been looking for knowledge-base articles around, for example, "why my refrigerator keeps freezing up and how can I defrost it." You can see such things on Twitter and you can see them on Facebook. The amount of information that exists out there is phenomenal, and it is available in real time. I can now gather that information … and I can proactively, as a business, make decisions about what I want to do with you as a potential consumer.

I can even identify you as a consumer within my business, know how many products you have acquired from me, and whether you’re a “platinum” customer or even a basic customer, and then make a decision.

For example, I have TVs, refrigerators, washer-dryers and other appliances all from the same manufacturer. So I am a large consumer to that one manufacturer because all of my components are there. But I may be searching a knowledge-based article on why the refrigerator continues to freeze up.

Now I may call in about just the refrigerator, but wouldn’t it be great for that agent to know that I own 22 other products from that same company? I’m not just calling about the refrigerator; I am technically calling about the entire brand. My experience around the refrigerator freaking out may change my entire brand decision going forward. That information may prompt me to decide that I want to route that customer to a different pool of agents, based on what their total lifetime value is as a brand-level consumer.

Through AI, by leveraging all this information, I can be a better steward to my customer and to the agent, because, I will tell you, an agent will act differently if they understand the importance of that customer, or if they know that I, Vasili, have spent the last two hours searching online for information and have posted about it on Facebook and Twitter.

Through AI, by leveraging all this information, I can be a better steward to the customer and to the agent.

At that point, my level of frustration has already reached a certain height on the scale. As an agent, if you knew that, you might treat me differently because you already know that I am frustrated. The agent may realize that you have been looking for information on this, and that you have been on Facebook and Twitter. They can then say: "I am really sorry you haven't been able to get answers. Let me see how I can help you; it seems that you have been looking online for how to keep the refrigerator from freezing up."

If I start the conversation that way, I have now defused a lot of the customer's frustration. The agent has started that interaction better. Bringing that information to that person, that's powerful, that's business intelligence — and that's creating action from all that information.
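
As a rough illustration of that kind of decisioning, the sketch below routes an inbound contact using an AI-estimated frustration score and the customer's lifetime value. The thresholds, field names, and routing tiers are assumptions for illustration, not any vendor's actual logic.

def route_contact(lifetime_value: float, frustration: float) -> str:
    # Pick an agent pool from customer value and an estimated frustration score (0-1).
    # Both inputs are assumed to come from upstream analytics, such as sentiment
    # gathered from social posts and knowledge-base searches.
    if frustration >= 0.7 and lifetime_value >= 10_000:
        return "retention-specialists"   # high value and already upset
    if lifetime_value >= 10_000:
        return "premium-support"
    if frustration >= 0.7:
        return "senior-agents"
    return "general-queue"

# A customer who owns many products and has spent hours searching online:
print(route_contact(lifetime_value=22_000, frustration=0.8))  # -> retention-specialists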

Keep your cool

Gardner: It’s fascinating that that level of sentiment analysis brings together the best of what AI and machine learning can do, which is to analyze all of these threads of data and information and determine a temperature, if you will, of a person’s mood and pass that on to a human agent who can then have the emotional capacity to be ready to help that person get to a lower temperature, be more able to help them overall.

It’s becoming clear to me, Vasili, that this contact center function and CCaaS architectural benefits are far more strategic to an organization than we may have thought, that it is about more than just customer service. This really is the best interface between a company — and all the resources and assets it has across customer service, marketing, and sales interactions. Do you agree that this has become far more strategic because of these new capabilities?

Triant: Absolutely, and as brands begin to realize the power of what the technology can do for their overall business, it will continue to evolve, and gain pace around global adoption.

As brands begin to realize the power of what the technology can do for their overall businesses, it will continue to evolve and gain global adoption.

We have only scratched the surface on adoption of these cloud technologies within organizations. A majority of brands out there look at these interactions as a cost of doing business. They still seek to reduce that cost rather than weigh it against the lifetime value of the consumer, as well as the agent experience. This will shift, it is shifting, and there are companies that are thriving by recognizing that entire equation and knowing how to leverage the technologies.

Technology is nothing without action and result. There have been some really cool things that have existed for a while, but they don’t ever produce any result that’s meaningful to the customer so they never get adopted and deployed and ultimately reach some type of a mass proliferation of results.

Gardner: You mentioned cost. Let's dig into that. For organizations that are attracted to the capabilities and the strategic implications of CCaaS, how do we evaluate it in terms of cost? The old CapEx approach often had a high upfront cost, and then high operating costs if you had an inefficient call center. Other costs involve losing your customers, losing brand affinity, and losing your perception in the market. So when you talk to a prospect or customer, how do you help them understand a pay-as-you-go service as highly efficient? Does the highly empowered agent approach save money, or even make money, so that CCaaS becomes not a cost center but a revenue generator?

Cost consciousness

Triant: Interesting point, Dana. When I started at Serenova about five years ago, customers would say all the time, "What's the cost of owning the technology?" And, "Oh, my on-premises stuff has already depreciated and I already own it, so it's cheaper for me to keep it." That was the conversation pretty much every day. Beginning in 2013, that rapidly started shifting. The shift was mainly driven by the fact that organizations started realizing that consumers want to engage on different channels, and the on-premises guys couldn't keep up with this demand.

The cost of ownership no longer matters. What matters is that the on-premises guys just literally could not deliver the functionality. And so, whether that's Cisco, Avaya, or ShoreTel, they quickly started falling out of consideration for companies that were looking to deploy applications for their business to meet these needs.

The cost of ownership quickly disappeared as the main discussion point. Instead it came around to, “What is the solution that you’re going to deliver?” Customers that are looking for contact center technologies are beginning to take a cloud-first approach. And once they see the power of CCaaS through demonstration and through some trials of what an agent can do – and it’s all browser-based, there is no client install, there is no equipment on-premises – then it takes on a life of its own. It’s about, “What is the experience going to be? Are these channels all integrated? Can I get it all from one manufacturer?”

Following that, organizations focus on other intricacies: Can it scale? Can it be redundant? Is it global? But those become architectural concerns for the brands themselves. There is a chunk of the industry that is not looking at these technologies; they are stuck in brand euphoria, or they have to stay with on-premises infrastructure or a certain vendor because of the name, or because they believe that vendor will get there someday.

As we have seen, Avaya has declared bankruptcy. Avaya does not have cloud technologies despite their marketing message. So the customers that are in those technologies now realize they have to find a path to keep up with the basic customer service at a global scale. Unfortunately, those customers have to find a path forward and they don’t have one right now.

It’s less about cost of ownership and it’s more about the high cost of not doing anything. If I don’t do anything, what’s going to be the cost? That cost ultimately becomes – I’m not going to be able to have engagement with my customers because the consumers are changing.

It’s less about cost of ownership and it’s more about the high cost of not doing anything.

Gardner: What about this idea of considering your contact center function not just as a cost center, but also as a business development function? Am I being too optimistic?

It seems to me that as AI and the best of what human interactions can do combine across multichannels, that this becomes no longer just a cost center for support, a check-off box, but a strategic must-do for any business.

Multi-channel customer interaction

Triant: When an organization reaches the pinnacle of happiness within what these technologies can do, they will realize that no longer do you need to have delineation between a marketing department that answers social media posts, an inside sales department that is only taking calls for upgrades and renewals, and a customer service department that’s dealing with complaints or inbound questions. They will see that you can leverage all the applications across a pool of agents with different skills.

I may have a higher skill around social media than around voice, or I may have a higher skill level around a sales or renewal activity than around customer service problems. I should be able to do any interaction. And potentially one day it'll just be a customer interaction department, and the channels will just be a medium of inbound and outbound choice for a brand.

But you can now take information from whatever you see the customer doing. Each of their actions has a leading indicator; everything has a predictive action prior to the inbound touch, everything does. Now that a brand can see that, it will be able to have "consumer interaction departments," and each contact will be properly routed to the right person based on that information. You'll be able to bring information to that agent that allows them to answer the customer's questions.

Gardner: I can see how that agent’s job would be very satisfying and fulfilling when you are that important, when you have that sort of a key role in your organization that empowers people. That’s good news for people that are trying to find those skills and fill those positions.

Vasili, we only have a few minutes left, but I’d love to hear about a couple of examples. It’s one thing to tell, it’s another thing to show. Do we have some examples of organizations that have embraced this concept of a strategic contact center, taken advantage of those multi-channels, added perhaps some intelligence and improved the status and capability of the agents — all to some business benefit? Walk us through a couple of actual use cases where this has all come together.

Cloud communication culture shift

Triant: No one has reached that level of euphoria per se, but there are definitely companies that are moving in that direction.

It is a culture change, so it takes time. I know as well as anybody what it takes to shift a culture, and it doesn't happen overnight. As an example, there is a ride-hailing company that engages in a different way with their consumer, and their consumer might be different than what you would think from the way I am describing it. They use voice systems and SMS and often want to pivot between the two. Our technology actually allows the agent to make that decision even if they aren't physically in the same country. The agents are dynamically spread across multiple countries to answer any question they may need to, based on time and day.

But they can pivot from what’s predominantly an SMS inbound and outbound communication into a voice interaction, and then they can also follow up with an e-mail, and that’s already happened. Now, it initially started with some SMS inbound and outbound, then they added voice – an interesting move as most people think adding voice is what people are getting away from. What everyone has begun to realize is that live communication ultimately is what everybody looks for in the end to solve the more complex problems.

What everyone has begun to realize is that live communication ultimately is what everybody looks for in the end to solve the more complex problems.

That’s one example. Another company that provides the latest technology in food order and delivery initially started with voice-only to order and deliver food. Now they’ve added SMS confirmations automatically, and e-mail as well for confirmation or for more information from the inbound voice call. And now, once they are an existing customer, they can even start an order from an SMS, and pivot back to a voice call for confirmation — all within one interaction. They are literally one of the fastest growing alternative food delivery companies, growing at a global scale.

They are deploying agents globally across one technology. They would not be able to do this with legacy technologies because of the expense. When you get into these kinds of high-volume, low-margin businesses, cost matters. When you have an OpEx model that will scale, you are adding better customer service to the applications, and you allow them to build a profitable model because you are not burdening them with high CapEx costs.

Gardner: Before we sign off, you had mentioned your pipeline of products and services, such as engaging more with AI capabilities toward the end of the year. Could you give us a level-set on your roadmap? Where are your products and services now? Where do you go next?

A customer journey begins with insight

Triant: We have been building cloud technologies for 16 years in the contact center space. We released our latest CCaaS platform in March 2016 called CxEngage. We then had a major upgrade to the platform in March of this year, where we take that agent experience to the next level. It’s really our leapfrog in the agent interface and making it easier, bringing in more information to them.

Where we are going next is around the customer journey — predictive interactions. Some people call it AI, but I will call it "customer journey mapping with predictive action insights." That's going to be a big cornerstone of our product, including business analytics. It's focused on looking at a combination of speech, data, and text, all simultaneously creating predictive actions. This is another core area we are going into as we continue to expand the reach of our platform on a global scale.

At this point, we are a global company. We have the only global cloud platform built on a single software stack with one data pipeline. We now have more users on a pure cloud platform than any of our competitors globally. I know that's a big statement, but when you look at a pure cloud infrastructure, you're talking in a whole different realm of what services you are able to offer to customers. In our ability to provide broad reach — including to Europe, South Africa, Australia, India, and Singapore — and still deliver good cloud quality at a reasonable cost and in a redundant fashion, we are second to none in that space.

Gardner: I’m afraid we will have to leave it there. We have been listening to a sponsored BriefingsDirect discussion on how CCaaS capabilities are becoming more powerful as a result of cloud computing, multimode communications channels, and the ability to provide optimized and contextual user experiences.

And we’ve learned how new levels of insight and intelligence are now making CCaaS approaches able to meet the highest user experience requirements of today and tomorrow. So please join me now in thanking our guest, Vasili Triant, CEO of Serenova in Austin, Texas.

Triant: Thank you very much, Dana. I appreciate you having me today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor, Serenova, as well as to you, our audience. Do come back next time and thanks for listening.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Serenova.

Transcript of a discussion on how contact center-as-a-service capabilities are becoming more powerful to provide optimized and contextual user experiences for agents and customers. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

You may also be interested in:


Women in business leadership — networking their way to success

The next BriefingsDirect digital business insights panel discussion focuses on the evolving role of women in business leadership. We’ll explore how pervasive business networks are impacting relationships and changes in business leadership requirements and opportunities for women.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

To learn more about the transformation of talent management strategies as a result of digital business and innovation, please join me in welcoming our guests, Alicia Tillman, Chief Marketing Officer at SAP Ariba, and Lisa Skeete Tatum, Co-founder and CEO of Landit in New York. The panel was recorded in association with the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Alicia, looking at a confluence of trends, we have the rise of business networks and we have an advancing number of women in business leadership roles. Do they have anything to do with one another? What’s the relationship?

Tillman: It is certainly safe to say that there is a relationship between the two. Networks historically connected businesses mostly from a transactional standpoint. But networks today are so much more about connecting people. And not only connecting them in a business context, but also from a relationship standpoint.

Alicia Tillman

Tillman

There is as much networking and influence happening in a digital network as there is from meeting somebody at an event, conference, or forum. It has really taken off in recent years as a way to connect quickly and broadly — across geographies and industries. There is nothing that brings you speed like a network, and that's why I think there is such a strong correlation between how digital networking has taken off and what a true technical network platform can allow.

Gardner: When people first hear “business networks,” they might think about transactions and applications talking to applications. But, as you say, this has become much broader in the last few years; business networks are really about social interactions, collaboration, and even joining companies culturally.

How has that been going? Has this been something that’s been powerful and beneficial to companies?

Tillman: It’s incredibly powerful and beneficial. If you think about how buying habits are these days, buyers are very particular about the goods that they are interested in, and, frankly, the people that they source from.

If I look at my buying population in particular at SAP Ariba, there is a tremendous movement toward sustainability goals or fair-trade types of responsibilities: wanting to source goods from minority-owned businesses, wanting to source only organic or fair-trade products, and wanting to partner only with organizations where they know that, within their supply chain, the distribution of their product comes from locations in the world where the working conditions are safe and employees are paid fairly.

A network allows for that; the SAP Ariba Network certainly allows for that, as we can match suppliers directly with what those incredibly diverse buyer needs are in today’s environment.

Gardner: Lisa, we just heard from Alicia about how it’s more important that companies have a relationship with one another and that they actually look for culture and character in new ways. Tell us about Landit, and how you’re viewing this idea of business networks changing the way people relate to their companies and even each other?

Skeete Tatum: Our goal at Landit is to democratize career success for women around the globe. We have created a technology platform that not only increases the success and engagement of women in the workplace, but it also enables companies in this new environment to attract, develop, and retain high-potential diverse talent.

Our goal at Landit is to democratize career success for women around the globe.

Lisa Skeete Tatum

Skeete Tatum

We do that by providing each woman with a personalized playbook in the spirit of one-size-fits-one. That empowers them with access to the tools, the resources, the know-how and, yes, the human connections that they need to more successfully navigate their paths.

It’s really in response to the millions of women who will find themselves at an inflection point; whether they are in a company that they love but are just trying to figure out how to more successfully navigate there, or they may be feeling a little stuck and are not sure how to get out. The challenge is: “I am motivated, I have the skills, I just don’t know where to start.”

We have really focused on knitting what we believe are those key elements together — leveraged by technology that actually guides them. But we find that companies in this new environment are often overwhelmed and trying to figure out a way to manage this new diverse workforce in this era of connectedness. So we give them a turnkey, one-size-fits-one solution, too.

As Alicia mentioned, in this next stage of collaborative business, there are really two things. One, we are more networked and more visible than ever before, which is great, because it has created more opportunities and flexibility than we have ever seen — not to mention more access. However, those opportunities are highly dependent on how someone showcases their value, their contribution, and their credibility, which makes it even more important to cultivate not only your brand but also your network. It goes beyond individual capabilities to gaining the sponsorship and support of a strong network.

The second thing I would say, that Alicia also mentioned, is that today’s business environment — which is more global, more diverse in its tapestry — requires businesses to create an environment where everyone feels valued. People need to feel like they can bring the full measure of their talent and passion to the workplace. Companies want amazing talent to find a place at their company.

Gardner: If I’m at a company looking to be more diverse, how would I use Landit to accomplish that? Also, if I were an individual looking to get into the type of company that I want to be involved with, how would I use Landit?

Connecting supply and demand for talent

Skeete Tatum: As an individual, when you come on to Landit, we actually give you one of the key ingredients for success. Because we often don’t know what we don’t know, we knit together the first step, of “Where do I fit?” If you are not in a place that fits with your values, it’s not sustainable.

So we help you figure out what is it that fits with “all of me,” and we then connect you to those opportunities. Many times with diversity programs, they are focused just on the intake, which is just one component. But you want people to thrive when they get there.

Many times with diversity programs, they are focused just on the intake, which is just one component. But you want people to thrive when they get there.

And so, whether it is building your personal brand, or building your board of advisors, or continuing your skill development in a personalized, relevant way — or access to coaching, because many of us don't have that unless we are in the C-suite or on the way there — we are able to knit that together in a way that is relevant and right-sized for the individual.

For the company, we give them a turnkey solution to invest in a scalable way, to touch more lives across their company, particularly in a more global environment. Rather than having to place multiple bets, they place one bet with Landit. We leverage that one-size-fits-one capability with things that we all know are keys to success. We are then able to have them deliver that again, whether it is to the newly minted managers or people they have just acquired or maybe they are leaders that they want to continue to invest in. We enable them to do that in a measurable way, so that they can see the engagement and the success and the productivity.

Gardner: Alicia, I know that SAP Ariba is already working to provide services to those organizations that are trying to create diversity and inclusion within their supply chains. How do you see Landit fitting into the business network that SAP Ariba is building around diversity?

Tillman: First, the SAP Ariba Network is the largest business-to-business (B2B) network on the planet. We connect more than 2.5 million companies that transact over $1 trillion in commerce annually. As you can imagine, there is incredible diversity in the buying requirements that exist among those companies, which are located in all parts of the world and work in virtually every industry.

One of the things that we offer as an organization is a Discovery tool. When you have a network that is so large, it can be difficult and a bit daunting for a buyer to find the supplier that meets their business requirements, and for a supplier to find their ideal buyer. So our SAP Ariba Discovery application is a matching service, if you will, that enables a buyer to list their requirements. You then let the tool work for you to match you to the suppliers that best meet your requirements, whatever they may be.
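
As a simple sketch of how requirement-to-supplier matching can work in principle, the example below scores suppliers against a buyer's stated attributes. The attribute names and scoring function are hypothetical and are not how SAP Ariba Discovery is actually implemented.

def match_suppliers(buyer_needs: set, suppliers: dict, top_n: int = 3):
    # Rank suppliers by the share of the buyer's required attributes they satisfy.
    scored = [
        (sum(1 for need in buyer_needs if need in attrs) / len(buyer_needs), name)
        for name, attrs in suppliers.items()
    ]
    return sorted(scored, reverse=True)[:top_n]

# Hypothetical buyer requirements and supplier attributes.
needs = {"minority-owned", "fair-trade", "ships-to-eu"}
catalog = {
    "Supplier A": {"fair-trade", "organic", "ships-to-eu"},
    "Supplier B": {"minority-owned", "fair-trade", "ships-to-eu"},
    "Supplier C": {"organic"},
}
print(match_suppliers(needs, catalog))  # Supplier B scores highest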

I'm very proud to have Lisa present at our Women in Leadership Forum at SAP Ariba LIVE 2017. I am showcasing Lisa not only because of her entrepreneurial spirit and the success that she has had in her career — which I think will be very inspirational and motivational to women who are looking to continue to develop their careers — but also because she has created a powerful platform with Landit. For women, it helps provide a digital environment that allows them to harness precisely what is important to them when it comes to career development, and then offers the coaching in the Landit environment to enable that.

For women, it helps provide a digital environment that allows them to harness precisely what it is that’s important to them when it comes to career development.

Landit also offers companies an ability to support their goals around gender diversity. They can look at the Landit platform and source talent that is not only very focused on careers — but also supports a company in their diversity goals. It’s a tremendous capability that’s necessary and needed in today’s environment.

Gardner: Lisa, what has changed in the past several years that has prompted this changed workforce? We have talked about the business network as an enabler, and we have talked about social networks connecting people. But what’s going to be different about the workforce going forward?

Collaborative visibility via networking

Skeete Tatum: There are three main things. First, there is a recognition that diversity is not a "nice to have"; it's a "must-have" from a competitive standpoint, to acquire the best ideas and gain a better return on capital. So it's a business imperative to invest in and value diversity within one's workforce. Second, businesses are continuing to shift toward matching opportunities with the people who are best able to do the job, but in a less-biased way. Third, business-as-usual isn't going to work in this new reality of career management.

Business-as-usual isn’t going to work in this new reality of career management.

It’s no longer one- or bi-directional, where it’s just the manager or the employee. It’s much more collaborative and driven by the individual. And so all of these things … where there is much more opportunity, much more freedom. But how do you anchor that with a problem and a framework and a connectivity that enables someone to more successfully navigate the new environment and new opportunities? How do you leverage and build your network?  Everyone knows they need to do it, but many people don’t know how to do it. Or when your brand is even more important, visibility is more important, how do you develop and communicate your accomplishments and your value? It is the confluence of those things coming together that creates this new world order.

Gardner: Alicia, one of the biggest challenges for most businesses is getting the skills that they need in a timely fashion. How do we get past the difficulty of matching hiring to needs? How do we use business networks to help solve that?

Tillman: This is the beauty of technology. Technology is an enabler in business to form relationships more quickly, and to transact more quickly. Similarly, technology also provides a network to help you grow from a development standpoint. Lisa’s organization, Landit, is one example of that.

Within SAP Ariba we are very focused on closing the gap in gaining the skills that are necessary to be successful in today's business environment. I look at the offering of SAP SuccessFactors, which is focused on empowering the human capital management (HCM) organization to lead performance management and career development. And SAP Fieldglass helps companies find and source the right temporary labor to service their most pressing projects. Combine all that with a business network, and there is no better place in today's environment to find something you need — and find it quickly.

But it all comes down to the individual’s desire to want to grow their skills, or find new skills, to get out of their comfort zone and try something new. I don’t believe there is a shortage of tools or applications to help enable that growth and talent. It comes down to the individual’s desire to want to grab it and go after it.

Maximize your potential with technology

Skeete Tatum: I couldn’t agree more. The technology and the network are what create the opportunity. In the past, there may have been a skills gap, but you have to be able to label it, you have to be able to identify it in a way that is relevant to the individual. As Alicia said, there are many opportunities out there for development, but how do you parse that down and deliver it to the individual in a way that is relevant — and that’s actionable? That’s where a network comes in and where the power of one can be leveraged in a scalable way.

Now is probably one of the best times to invest in and have an individual grow to reach their full potential. The desire to meet their goals can be leveraged by technology in a very personal way.

Gardner: As we have been hearing here at SAP Ariba LIVE 2017, more and more technologies along the lines of artificial intelligence (AI) and machine learning (ML) — which take advantage of all the data, analyze it, and make it actionable — can now be brought to bear on this set of issues of matching workforce requirements with skill sets.

Where should we expect to see these technologies reduce the complexity and help companies identify the right workforce, and the workforce identify the right companies?

Skeete Tatum: Having the data and being able to quantify and qualify it gives you the power to set a path forward. The beauty is that it actually enables everyone to have the opportunity to contribute, the opportunity to grow, and to create a path and a sense of belonging by having a way to get there. From our perspective, it is that empowerment and that ownership — but with the support of the network from the overall organization — that enables someone to move forward. And it enables the organization to be more successful and more embracing of this new workforce, this diverse talent.

Tillman: Individuals should feel more empowered today than ever before to really take their career development to unprecedented levels. There are so many technologies, so many applications out there to help coach you on every level. It’s up to the individual to truly harness what is standing in front of them and to really grab hold of it — and use it to their advantage to reach their career goal.

Gardner: Lisa, what should you be thinking about from a personal branding perspective when it comes to making the best use of tools like Landit and business networks?

Skeete Tatum: The first thing is that people actually have to think of themselves as a brand, as opposed to thinking that they are bragging or being boastful. The most important brand you have is the brand of you.

Second, people have to realize that this notion of building your brand is something that you nurture and it develops over time. What we believe is important is that we have to make it tangible, we have to make it actionable, and we have to make it bite-size, otherwise it seems overwhelming.

So we have defined what we believe are the 12 key elements for anyone to have a successful brand, such as have you been visible, do you have a strategic plan of you, are you seeking feedback, do you have a regular cadence of interaction with your network, et cetera. Knowing what to do and how to do it and at what cadence and at what level is what enables someone to move forward. And in today’s environment, again, it’s even more important.

Pique their curiosity by promoting your own

Tillman: Employers want to be sure that they are attracting candidates and employing candidates that are really invested in their own development. An employer operates in the best interest of the employee in terms of helping to enable tools and allow for that development to occur. At the same time, where candidates can really differentiate themselves in today’s work environment is when they are sitting across the table and they are in that interview. It’s really important for a candidate to talk about his or her own development and what are they doing to constantly learn and support their curiosity.

Employers want curious people. They want those that are taking advantage of development and tools and learning, and these are the things that I think set people apart from one another when they know that individually they are going to go after learning opportunities and push themselves out of their comfort zone to take themselves – and ultimately the companies that employ them – to the next level.

Gardner: Before we close out, let’s take a peek into the crystal ball. What, Alicia, would be your top two predictions given that we are just on sort of an inflection point with this new network, with this new workforce and the networking effect for it?

Tillman: First, technology is only going to continue to improve. Networks have historically enabled buyers and sellers to come together and transact to build their organizations and support growth, but networks are taking on a different form.

Technology is going to continue to enable priorities professionally and priorities personally. Technology is going to become a leading enabler of a person’s professional development.

Second, individuals are going to set themselves apart from others by their desire and their hunger to really grab hold of that technology. When you think about decision-making among companies in terms of candidates they hire and candidates they don’t, employers are going to report back and say, “One of the leading reasons why I selected one candidate over another is because of their desire to learn and their desire to grab hold of technologies and networks that were standing in front of them to bring their careers to an unprecedented level.”

Gardner: Lisa, what are your top two predictions for the new workforce and particularly for diversity playing a bigger role?

Skeete Tatum: Technology is the ultimate leveler of the playing field. It enables companies as well as the individual to make decisions based on things that matter. That is what enables people to bring their full selves, the full measure of their talent, to the workplace.

In terms of networks in particular, they have always been a key element to success but now they are even more important. It actually poses a special challenge for diverse talent. They are often not part of the network, and they may have competing personal responsibilities that make the investment of the time and the frequency in those relationships a challenge.

Sometimes there is a discomfort with how to do it. We believe that through technology people will have to get comfortable with being uncomfortable. They need to learn not only how to codify their network, but also how to gain the right access to the right person at the right cadence; and access to that know-how, that guidance, can be delivered through technology.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:


The next line of defense—How new security leverages virtualization to counter sophisticated threats

When it comes to securing systems and data, the bad guys are constantly upping their game — finding new ways to infiltrate businesses and users. Those who protect systems from these cascading threats must be ever vigilant for new technical advances in detection and protection. In fact, they must out-innovate their assailants.

The next BriefingsDirect security insights discussion examines the relationship between security and virtualization. We will now delve into how adaptive companies are finding ways to leverage their virtualization environments to become more resilient, more intelligent, and how they can protect themselves in new ways.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how to ensure that virtualized data centers do not pose risks — but in fact prove more defensible — we are joined by two security-focused executives, Kurt Roemer, Chief Security Strategist at Citrix, and Harish Agastya, Vice President for Enterprise Solutions at Bitdefender. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Kurt, virtualization has become widespread and dominant within data centers over the past decade. At that same time, security has risen to the very top of IT leadership’s concerns. What is it about the simultaneous rise of virtualization and the rise of security concerns? Is there any intersection? Is there any relationship that most people may miss?

Roemer: The rise of virtualization and security has been concurrent. A lot of original deployments for virtualization technologies were for remote access, but they were also for secure remote access. The apps that people needed to get access to remotely were usually very substantial applications for the organization —  things like order processing or partner systems; they might have been employee access to email or internal timecard systems. These were things that you didn’t really want an attacker messing with — or arbitrary people getting access to.

Roemer.Kurt (1)

Roemer

Security has grown from just providing basic access to virtualization to really meeting a lot of the risks of these virtualized applications being exposed to the Internet in general, as well as now expanding out into the cloud. So, we have had to grow security capabilities to be able to not only keep up with the threat, but try to keep ahead of it as well.

Gardner: Hasn’t it historically been true that most security prevention technologies have been still focused at the operating system (OS)-level, not so much at the virtualization level? How has that changed over the past several years?

Roemer: That’s a good question. There have been a lot of technologies that are associated with virtualization, and as you go through and secure and harden your virtual environments, you really need to do it from the hardware level, through the hypervisor, through the operating system level, and up into the virtualization system and the applications themselves.

We are now seeing people take a much more rigorous approach at each of those layers, hardening the virtualization system and the OS and integrating in all the familiar security technologies that we’re used to, like antivirus, but also going through and providing for application-specific security.

So if you have an SAP system, or something else where you need to protect very sensitive company data and you don't want that data accessed outside the office arbitrarily, you can provide very set interfaces into that system: being able to control the clipboard or copy and paste; what peripherals the application can interface with, i.e., turn off the camera, and turn off the microphone if it's not needed; and even get down to the level of the browser, whether things like JavaScript are enabled or Flash is available.

So it helps to harden the overall environment and cut down on a lot of the vulnerabilities that would be inherent in just leaving things completely wide open. One of the benefits of virtualization is that you can get security to be very specific to the application.
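
To picture the kind of per-application hardening described here, consider a minimal sketch of a locked-down policy and a check against it. The policy keys and the enforce_policy function are hypothetical; they do not represent Citrix's actual policy engine or APIs.

# Hypothetical per-application security policy: locked down by default,
# with only the capabilities this application genuinely needs switched on.
SAP_APP_POLICY = {
    "clipboard": False,           # block copy and paste out of the session
    "camera": False,
    "microphone": False,
    "local_printing": False,
    "browser_javascript": True,
    "browser_flash": False,
}

def enforce_policy(policy: dict, requested_capability: str) -> bool:
    # Deny anything the policy does not explicitly allow.
    allowed = policy.get(requested_capability, False)
    if not allowed:
        print(f"Blocked: {requested_capability} is disabled for this application")
    return allowed

enforce_policy(SAP_APP_POLICY, "clipboard")           # blocked
enforce_policy(SAP_APP_POLICY, "browser_javascript")  # allowed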

Gardner: Harish, now that we are seeing this need for comprehensive security, what else is it that people perhaps don’t understand that they can do in the virtualization layer? Why is virtualization still uncharted territory as we seek to get even better security across the board?

Let’s get better than physical

Agastya: Customers often don't realize the difference when they are dealing with security in physical versus virtual environments. The opportunity that virtual environments provide is the ability to take security to a higher level than physical-only. So "better than physical" is, I think, a key value proposition that they can benefit from — and the technology innovation of today has enabled that.

Harish Agastya

Agastya

There is a wave of innovation among security vendors in this space. How do we run resource-intensive security workloads in a way that does not compromise the service-level agreements (SLAs) that those information technology operations (IT Ops) administrators need to deliver?

There is a lot of work happening to offload security-scanning mechanisms onto dedicated security virtual appliances, for example. Bitdefender has been working with partners like Citrix to enable that.

Now, the huge opportunity is to take that story further in terms of being able to provide higher levels of visibility, detection, and prevention from the attacks of today, which are advanced persistent threats. We seek to detect how they manifest in the data center and — in a virtual environment — what you have the opportunity to do, and how you can respond. That game is really changing now.

Gardner: Kurt, is there something about the ability to spin up virtualized environments, and then take them down that provides a risk that the bad guys can target or does that also provide an opportunity to start fresh: To eliminate vulnerabilities, or learn quickly and adapt quickly? Is there something about the rapid change that virtualization enables that is a security plus?

Persistent protection anywhere

Roemer: You really hit on the two sides of the coin. On one side, virtualization does oftentimes provide an image of the application or the applications plus OS that could be fairly easy for a hacker to steal and be able to spin up offline and be able to get access to secrets. So you want to be able to protect your images, to make sure that they are not something that can be easily stolen.

On the other side, having the ability to define persistence — what you want to persist between reboots versus what is non-persistent — allows you to have a constantly refreshed system. So when you reboot it, it's exactly back to the golden image, and everything is as it should be. As you patch and update, you are working with a known quantity, as opposed to the endpoint, where somebody might have administrative access and has installed personal applications and browser plug-ins and other things that you may or may not want to have in place.

The nice thing with virtualization is that it’s independent of the OS, the applications, the endpoints, and the varied situations that we all access our apps and data from.

Layering also comes into play and helps to make sure that you can dynamically layer in applications or components of the OS, depending on what’s needed. So if somebody is accessing a certain set of functionality in the office, maybe they have 100% functionality. But when they go home, because they are no longer in a trusted environment or maybe not working on a trusted PC from their home system, they get a degraded experience, seeing fewer applications and having less functionality layered onto the OS. Maybe they can’t save to local drives or print to local printers. All of that’s defined by policy. The nice thing with virtualization is that it’s independent of the OS, the applications, the endpoints, and the varied situations that we all access our apps and data from.

Gardner: Harish, with virtualization there is a certain level of granularity as to how one can manage their security environment parameters. Can you expand on why having that granular capability to manage parameters is such a strong suit, and why virtualization is a great place to make that happen?

On the move, virtually

Agastya: That is one of the opportunities and challenges that security solutions need to be able to cope with.

As workloads move across different subgroups and sub-networks, the virtual machine (VM) needs a security policy that moves with it. That policy depends on what type of application is running; it is not specific to the region or sub-network that the particular VM resides on. That is something that security solutions designed to operate in virtual environments have the ability to do.

Security moves with the workload, as the workload is spawned off and new VMs are created. The same set of security policies associated with that workload now can protect that workload without needing to have a human step in and determine what security posture needs to belong to that VM.
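To make the idea of workload-bound policy concrete, here is a minimal sketch with hypothetical workload tags and profile names. The point is only that the security posture is resolved from what the VM runs, not from the subnet it lands on, so a freshly spawned clone inherits the same policy automatically.

```python
# Hypothetical mapping of workload tags to security profiles. A VM's posture
# is derived from what it runs, not from the subnet it happens to sit on.
POLICY_BY_WORKLOAD = {
    "web-frontend": {"memory_introspection": True, "ids_profile": "web"},
    "database":     {"memory_introspection": True, "ids_profile": "strict"},
    "batch-report": {"memory_introspection": False, "ids_profile": "baseline"},
}

def policy_for_vm(vm: dict) -> dict:
    """Resolve the security profile for a VM from its workload tag alone."""
    return POLICY_BY_WORKLOAD[vm["workload"]]

if __name__ == "__main__":
    original = {"name": "db-01", "workload": "database", "subnet": "10.1.0.0/24"}
    clone = {"name": "db-02", "workload": "database", "subnet": "10.9.0.0/24"}
    # Both VMs get the same policy, even though the clone landed on a new
    # subnet and no administrator assigned anything by hand.
    assert policy_for_vm(original) == policy_for_vm(clone)
    print(policy_for_vm(clone))
```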

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

That is the opportunity that virtualization provides. But it’s also a challenge, because previous generations of security solutions predated all of this. We now need to address that.

We love the fact that virtualization is happening and that it has become a very elastic, software-defined mechanism that moves around and gives the IT operations people so much more control. It gives us the opportunity to sit very well in that environment and provide security that works tightly integrated with the virtualization layer.

Gardner: I hear this so much these days that IT operations people are looking for more automation, and more control.

Kurt, I think it’s important to understand that when we talk about security within a virtualization layer, that doesn’t obviate the value of security that other technologies provide at the OS level or network level. So this isn’t either-or, this is an augmentation, isn’t that correct, when we talk about virtualization and security?

The virtual focus

Roemer: Yes, that’s correct. Virtualization provides some very unique assets that help extend security, but there are some other things that we want to be sure to focus on in terms of virtualization. One of them is Bitdefender Hypervisor Introspection (HVI). It’s the ability for the hypervisor to provide a set of direct inspect application programming interfaces (APIs) that allow for inspection of guest memory from outside the guest.

When you look at Windows or Linux guests running on a hypervisor, typically when you have tried to secure them, it’s been through technology installed in the guest. So you have a guest that’s self-protecting, relying on OS APIs to effect security. Sometimes that works really well, and sometimes the attackers get around OS privileges and are successful, even with security solutions in place.

One of the things that HVI does is it looks for the techniques that would be associated with an attack against the memory of the guest from outside the guest. It’s not relying on the OS APIs and can therefore catch attacks that otherwise would have slipped past the OS-based security functionality.
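As a purely conceptual sketch of that out-of-guest approach, the toy detector below receives a copy of guest memory from the hypervisor side and looks for a technique-level signal (a crude heap-spray-like pattern) without calling anything inside the guest OS. It is not the Xen direct-inspect API and not Bitdefender’s detection logic; it only illustrates the shape of the idea.

```python
# Purely illustrative sketch of out-of-guest memory inspection. This is NOT
# the Xen direct-inspect API or Bitdefender's detector; it only shows the
# shape of the approach: the inspector sees guest memory from outside and
# looks for technique-level signals rather than relying on in-guest OS APIs.

def looks_like_heap_spray(memory: bytes, window: int = 4096,
                          threshold: float = 0.9) -> bool:
    """Flag regions dominated by a single repeated byte (a crude spray signal)."""
    for offset in range(0, len(memory) - window, window):
        chunk = memory[offset:offset + window]
        most_common = max(chunk.count(b) for b in set(chunk))
        if most_common / window >= threshold:
            return True
    return False

if __name__ == "__main__":
    clean_guest = bytes(range(256)) * 64                # varied contents
    sprayed_guest = clean_guest + b"\x0c" * 16384       # repeated filler bytes
    print("clean:",   looks_like_heap_spray(clean_guest))    # False
    print("sprayed:", looks_like_heap_spray(sprayed_guest))  # True
```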

Gardner: Harish, maybe you can tell us about how Citrix and Bitdefender are working together?

Step into the breach, together

Agastya: The solution is Bitdefender HVI. It works tightly with Citrix’s XenServer hypervisor, and it has been available in a controlled release for the last several months. We have had some great customer traction on it. At Citrix Synergy this year we will be making that solution generally available.

We have been working together for the last four years to bring this groundbreaking technology to the market.

What is the problem we are trying to solve? It is the issue of advanced attacks that hit the data center when, as Kurt mentioned, advanced attackers are able to skirt past endpoint security defense mechanisms by having root access and operating at the same level of privilege as the endpoint security that may be running within the VM.

They can then essentially create a blind spot where the attackers can do anything they want while the endpoint security solution continues to run.

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

These types of attacks stay in the environment and the customer suffers on average 200 days before a breach is discovered. The marketplace is filled with stories like this and it’s something that we have been working together with Citrix to address.

The fundamental solution leverages the power of the hypervisor to be able to monitor attacks that modify memory. It does that by looking for the common attack mechanisms that all these attackers use, whether it’s buffer overflows or it’s heap spraying, the list goes on.

They all result in memory modification that the endpoint security solution within the VM is blinded to. However, if you are leveraging the direct inspect APIs that Kurt talked about — available as part of Citrix’s XenServer solution — then we have the ability to look into that VM without having a footprint in there. It is a completely agentless solution that runs from the security virtual appliance, outside the guest VMs. It monitors all of the VMs in the data center against these types of attacks. It allows you to take action immediately, reduces the time to detection, and blocks the attack.

Gardner: Kurt, what are some of the major benefits for the end-user organization in deploying something like HVI? What is the payback in business terms?

Performance gains

Roemer: Hypervisor Introspection, which we introduced in XenServer 7.1, allows an organization to deploy virtualization with security technologies behind it at the hypervisor level. What that means for the business is that every guest you bring up has protection associated with it. Even if it’s a new version of Linux that you haven’t previously tested and you really don’t know which antivirus you would have integrated with it; or something that you are working on from an appliance perspective — anything that can run on XenServer would be protected through these direct inspect APIs, and the Bitdefender HVI solution. That’s really exciting.

It also has performance benefits because you don’t have to run antivirus in every guest at the same level. By knowing what’s being protected at the hypervisor level, you can configure for a higher level of performance.

Now, of course, we always recommend having antivirus in guests, as you still have file-based access and so you need to look for malware; sometimes files get emailed in or out or produced, so having access to the files from an anti-malware perspective is very valuable. But you may need to cut down some of the scanning functionality to meet much higher performance objectives.

So for the business, HVI gives you higher security, better performance, and the assurance that you are covered.

Gardner: Harish, it sounds like this ability to gain introspection into that hypervisor is wonderful for security and does it in such a way that it doesn’t degrade performance. But it seems to me that there are also other ancillary benefits in addition to security, when you have that ability to introspect and act quickly. Is there more than just a security benefit, that the value could go quite a bit further?

The benefits of introspection

Agastya: That’s true. The ability to introspect into memory has huge potential in the market. First of all, with this solution right now, we address the ability to detect advanced attacks, which is a very big problem in the industry — where you have everything from nation-sponsored attacks to the deep, dark web making malicious attack components available to common citizens who can do bad things with them.

The capability to reduce that window to advanced attack detection is huge. But now, with the power of introspection, we also have the ability to inject additional tools into the VM on the fly, tools that can do deep forensics and measure network operations, and the technology can expand to cover more. The future is bright for where we can take this between our companies.

Gardner: Kurt, anything to add on the potential for this memory introspection capability?

Specific, secure browsers

Roemer: There are a couple of things to add. One is to take a look at the technologies and just roll back through a lot of the exploits that we have seen, even throughout the last three months. There have been exploits against Microsoft Windows, exploits against Internet Explorer and Edge, exploits against hypervisors, and there have been EternalBlue and the Server Message Block (SMB) exploits. You can go back and try these out against the solution, see exactly how it would catch them, and see what would have happened to your system had those exploits actually taken effect.

If you have a team that is doing forensics and trying to go through and determine whether systems had previously been exploited, you are giving that team additional functionality to be able to look back and see exactly how the exploits would have worked. Then they can understand better how things would have happened within their environment. Because you are doing that outside of the guest, you have a lot of visibility and a lot of information you otherwise wouldn’t have had.

One big expanded use-case here is to get the capability for HVI between Citrix and Bitdefender in the hands of your security teams, in the hands of your forensics teams, and in the hands of your auditors — so that they can see exactly what this tool brings to the table.

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

Something else you want to look at is the use-case that allows users to expand what they are doing and makes their lives easier — and that’s secured browsing.

Today, when people go out and browse the Internet or hit a popular application like Facebook or Outlook Web Access — or if you have an administrator who is hitting an administrative console for your Domain Name System (DNS) environment, your routers, your Cisco, Microsoft environments, et cetera, oftentimes they are doing that via a web browser.

One big expanded use-case here is to get the capability for HVI between Citrix and Bitdefender in the hands of your security teams.

Well, if that’s the same web browser that they use to do everything else on their PC, it’s over-configured, it presents excessive risk, and you now have the opportunity with this solution to publish browsers that are very specific to each use.

For example, you publish one browser specifically for administrative access, and you know that you have advanced malware detection. Even if somebody is trying to target your administrators, you are able to thwart their ability to get in and take over the environments that the administrators are accessing.

As more things move to the browser — and more very sensitive and critical applications move to the cloud — it’s extremely important to set up secured browsing. We strongly recommend doing this with XenServer and HVI along with Bitdefender providing security.

Agastya: The problem of the human sitting in front of the browser being the weakest link in the chain is a very important one in the market. Many different technology approaches have been taken to address this problem — and most of them have struggled to make it work.

The value of XenApp coming in with its secured browser model is this: You can stream your browser so you are just presenting and rendering an interface on the client device, but the browser is actually running in the backend, in the data center, on XenServer, protected by Bitdefender HVI. This model not only allows you to shift the threat away from the client device, but also to kill it completely, because that exploit which previously would have run on the client device is not on the client device anymore. It’s not even on the server anymore, because HVI has gotten to it and stopped it.

Roemer: I bring up the browser benefit as an example because when you think of the lonely browser today, it is the interface to some of your most critical applications. A browser, at the same time, is also connected to your file system, your network, your Windows registry, your certificate chain and keys — it’s basically connected to everything you do and everything you have access to in most OSes.

What we are talking about here is publishing a browser that is very specific to purpose and configured for an individual application. Just put an icon out there, users click on it and everything works for them silently in the background. By being able to redirect hyperlinks over to the new joint XenServer-Bitdefender solution, you are not only protecting against known applications and things that you would utilize — but you can also redirect arbitrary links.

Even if you tell people, “don’t click on any links”, you know every once in a while it’s going to happen. When that one person clicks on the link and takes down the entire network, it’s awful. Ransomware attacks happen like that all the time. With this solution, that arbitrary link would be redirected over to a one-time use browser. Bitdefender would come up and say, “Hey, yup, there’s definitely a problem here, we are going to shut this down,” and the attack never would have had a chance to get anywhere.

What we are talking about here is publishing a browser that is very specific to purpose and configured for an individual application.

The organization is notified and can take additional remediation actions. It’s a great opportunity to really change how people are working, and to take this arbitrary link problem and the ransomware problem and neutralize them.
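A minimal sketch of that redirection flow, with hypothetical host names and no real Citrix or Bitdefender configuration: any hyperlink that is not explicitly trusted is handed to a disposable, published browser session rather than the local default browser.

```python
# Minimal sketch of arbitrary-link redirection, with hypothetical names.
# Untrusted links never open in the local default browser; they are handed to
# a disposable, published browser session that is destroyed after use.
import uuid
from urllib.parse import urlparse

TRUSTED_HOSTS = {"intranet.example.com", "mail.example.com"}  # illustrative

def open_link(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in TRUSTED_HOSTS:
        return f"open locally: {url}"
    # Anything else goes to a one-time-use published browser in the data
    # center, where HVI-style monitoring can observe and block an exploit.
    session_id = uuid.uuid4().hex
    return f"redirect to published browser session {session_id[:8]} for {url}"

if __name__ == "__main__":
    print(open_link("https://mail.example.com/inbox"))
    print(open_link("http://suspicious-download.example.net/invoice.doc"))
```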

Gardner: It sounds revolutionary rather than evolutionary when it comes to security. It’s quite impressive. I have learned a lot in just the last week or two in looking into this. Harish, you mentioned earlier that before the general availability being announced in May, Bitdefender HVI on XenServer was in beta. Do you have any results from that? Can you offer any metrics on what’s happened in the real world when people deploy this? Are the results as revolutionary as they sound?

Real-world rollout

Agastya: The product was first in beta and then released in controlled availability mode, so the product is actually in production deployment at several companies in both North America and Europe. We have a few financial services companies, and we have some hospitals. We have put the product to use in production deployments for virtual desktop infrastructure (VDI) deployments where the customers are running XenApp and XenDesktop on top of XenServer with Bitdefender HVI.

We have server workloads running straight on XenServer, too. These are typically application workloads that the financial services companies or the hospitals need to run. We have had some great feedback from them. Some of them have become references as well, and we will be talking more about it at Citrix Synergy 2017, so stay tuned. We are very excited about the fact that the product is able to provide value in the real world.

Roemer: We have a very detailed white paper on how to set up the secured browsing solution, the joint solution between Citrix and Bitdefender. Even if you are running other hypervisors in your environment, I would recommend that you set up this solution and try redirecting some arbitrary hyperlinks over to it, to see what value you are going to get in your organization. It’s really straightforward to set up and provides a considerable amount of additional security visibility.

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

Bitdefender also has some really amazing videos that show exactly how the solution can block some of the more popular exploits from this year. They are really impressive to watch.

Gardner: Kurt, we are about out of time, but I was curious, what’s the low-hanging fruit? Harish mentioned government, VDI, healthcare. Is it the usual suspects with compliance issues hanging over their heads that are the low-hanging fruit, or are there other organizations that would be ripe to enjoy the benefits?

Roemer: I would say compliance environments and anybody with regulatory requirements would very much be low-hanging fruit for this, but also anybody who has sensitive applications or very sensitive use-cases. Oftentimes, we hear of outsourcing as being one of the more sensitive use-cases, because you have external third parties who are getting in and either developing code for you, administering part of the operating environment, or something else.

We have also seen a pretty big uptick in terms of people being interested in this for administering the cloud. As you move up to cloud environments and you are defining new operating environments in the cloud while putting new applications up in the cloud, you need to make sure that your administrative model is protected.

Oftentimes, you use a browser directly to provide all of the security interfaces for the cloud, and by publishing that browser and putting this solution in front of it, you can make sure that malware is not interrupting your ability to securely administer the cloud environment.

Gardner: Last question to you, Harish. What should organizations do to get ready for this? I hope we have enticed them to learn more about it. For those organizations that actually might want to deploy, what do they need to think about in order to be in the best position to do that?

A new way of life

Agastya: Organizations need to think about secure virtualization as a way of life within organizational behavior. As a result, I think we will start to see more people with titles like Security DevOps (SecDevOps).

As far as specifically using HVI, organizations should be worried about how advanced attacks could enter their data center and potentially result in a very, very dangerous breach and the loss of confidential intellectual property.

If you are worried about that, you are worried about ransomware, because an end-user sitting in front of a client browser is potentially exposing your environment. You will want to think about a technology like HVI. The first step is to talk to us; there is a lot of information on the Bitdefender website, as well as on Citrix’s website.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.


SAP Ariba and MercadoLibre to consumerize business commerce in Latin America

The next BriefingsDirect global digital business panel discussion explores how the expansion of automated tactical buying for business commerce is impacting global markets, and what’s in store next for Latin America.

We’ll specifically examine how “spot buy” approaches enable companies to make time-sensitive and often mission-critical purchases, even in complex and dynamic settings, like Latin America.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the rising tide of such tactical business buying improvements, please join our guests, Karen Bruck, Corporate Sales Director at MercadoLibre.com in Buenos Aires, Argentina; Diego Cabrera Canay, Director of Financial Planning at MercadoLibre, and Tony Alvarez, General Manager of SAP Ariba‘s Spot Buy Business. The panel was recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: SAP Ariba Spot Buy has been in the market a few years. Tell us about where it has rolled out so far, why certain markets are being approached, and then about Latin America specifically.

Alvarez: The concept is a few years old, but we’ve been delivering SAP Ariba Spot Buy for about a year. We began in the US, and over the past 12 months the concept of Spot Buy has progressed because of our customer base. Our customer base has pushed us in a direction that is, quite frankly, even beyond Spot Buy — and it’s getting into trusted, vetted content.

Tony Alvarez

Alvarez

We are approaching the market with a two-pronged strategy of, yes, we have the breadth of content so that when somebody goes into an SAP Ariba application they can find what they are looking for, but we also now have parameters and controls that allow them to vet that content and to put a filter on it.

Over the last 12 months, we’ve come a long way. We are live in the US, and with early access in the UK and Germany. We just went live in Australia, and now we are very much looking forward to going live and moving fast into Latin America with MercadoLibre.

Gardner: Spot buying, or tactical buying, is different from strategic or more organized long-term buying. Tell us about this subset of procurement.

Alvarez: SAP Ariba is a 20-year-old company, and its roots are in that rigorous, sourced approach. We do hundreds of billions of dollars through contract catalogs on the Ariba Network, but there’s a segment — and we believe it’s upward of 15% of spend — that is spot buy spend. The procurement professional often has no idea what’s being bought. And I think there are two approaches to that — either ignorance is bliss and they are glad that it’s out of their purview, or it keeps them up at night.

SAP Ariba Spot Buy allows them to have visibility into that spend. By partnering with providers like MercadoLibre, they have content from trusted and vetted sellers to bring to the table – so it’s a really nice match for procurement.

Liberating limits

Gardner: The trick is to allow for flexibility and being dynamic, but also putting in enough rules and policies so that things don’t go off-track.

Alvarez: Exactly. For example, it’s like putting a filter on your kids’ smartphone. You want them to be able to be liberated so they can go and do as they please with phone calls — but not to go off the guardrails.

Gardner: Karen, tell us about MercadoLibre and why Latin America might be a really interesting market for this type of Spot Buy service.

Bruck: MercadoLibre is a leading e-commerce platform in Latin America, where we provide the largest marketplaces in 16 different countries. Our main markets are Brazil, Mexico, and Argentina, and that’s where we are going to start this partnership with SAP Ariba.

Karen Bruck

Bruck

We have upward of 60 million items listed on our platform, and this breadth of supply will make purchasing very exciting. Latin America is a complicated market — and we like this complexity. We do very well.

It’s complicated because there are different rates of inflation in different countries, and so contracts can be hard to complete. What we bring to the table is an assortment of great payment and shipping solutions that make it easy for companies to purchase items. As Tony was saying, these are not under long-term contracts, but we still get to make use of this vast supply.

Gardner: Tony mentioned that maybe 15% of spend is in this category. Diego, do you think that that number might be higher in some of the markets that you serve?

Cabrera Canay: That’s probably the number — but that is a big number in terms of the spend within companies. So we have to get there and see what happens.

Progressive partnership

Gardner: Tony, tell us about the partnership. What is MercadoLibre.com bringing to the table? What is Ariba bringing to the table? How does this fit together for a whole that is greater than the sum of its parts?

Alvarez: It really is a well-matched partnership. SAP Ariba is the leading cloud procurement platform, period. When you look in Latin America, our penetration with SAP Enterprise Resource Planning (ERP) is even greater. We have a very strong installed base with SAP ERP.

Our plan is to take the SAP Ariba Spot Buy content and make it available to the SAP installed base. So this goes way beyond just SAP Ariba. And when you think about what Karen mentioned — difficulties in Latin America with high inflation — the catalog approach is not used as much in Latin America because everything is so dynamic.

For example, you might sign a contract, but in just a couple of weeks that contract may be obsolete, or unfavorable because of a change in pricing. But once we build controls and parameters into SAP Ariba Spot Buy, you can layer that on top of MercadoLibre content, which is super-broad. If you’re looking for something, you’re going to find it, and that content is constantly updated. You gain real-time access to the latest information, and then the procurement person gets the benefit of control.

So I’m very optimistic. As Diego mentioned, I think 15% is really on the low-end in Latin America for this type of spend. I think this will be a really nice way to put digital catalog buying in the hands of large enterprise buyers.

Gardner: Speaking of large enterprise buyers, if I’m a purchasing official in one of your new markets, how should I be thinking about how this is going to benefit me?

Transparent, trusted transactions

It saves a lot of time, it makes the comparison very transparent, and you are able to control the different options. Overall, it’s a win-win … a partnership, a match made in heaven.

Bruck: Let me talk about this from experience. As a country manager at MercadoLibre, I had to do a lot of the procurement, together with our procurement officers. It was really frustrating at times because all of these purchases had to be one-off engagements, with a different vendor every time. That takes a lot of time. You also have to bring in price comparisons, and that’s not always a simple process.

So what this platform gives you is the ability to be very transparent about prices among different suppliers. That makes it very easy to buy every time without having to call each vendor and bring them into your own buying platform.

It saves a lot of time, it makes the comparison very transparent, and you are able to control the different options. Overall, it’s a win-win. So I do believe this is a partnership, a match made in heaven.

We were also very interested in business-to-business (B2B) industries. When Tony and SAP Ariba came to our offices to offer this partnership, we thought this would be a great way to leverage their needs with our supply and make it work.

Gardner: For sellers, this enables them to do repeated business more easily, more automated and so at scale. For buyers, with transparency they have more insight into getting the best prices, the best terms of delivery. Let’s expand on that win-win. Diego, tell us about the business benefits for all parties.

Big and small, meet at the mall 

Cabrera Canay: In the past few years, we have been working to make MercadoLibre the biggest “mall” in e-commerce. We have the most important brands and the most important retailers selling through MercadoLibre.

Diego Cabrera Canay

Cabrera Canay

What differentiates us is that we are confident we have the best prices — and also other great services such as free shipping, easy payments, and financing. We are sure that we can offer the buyers better purchasing.

Obviously, from the side of sellers, this all provides higher demand, it raises the bar in terms of having qualified buyers, and then giving the best services. That’s very exciting for us.

Gardner: Tony, we mentioned large enterprises, but this cuts across a great deal more of the economy, such as small- to medium-sized businesses (SMBs). Tell us how this works across diverse economies where there are large players but lots of small ones, too.

Alvarez: On the sales side, this gives really small businesses opportunity to reach large enterprise buyers that probably weren’t there before.

Diego was being modest, but MercadoLibre’s payment structure, MercadoPago, is incredibly robust, and it’s incredibly valuable to that end-seller, and also to the buyer.

Just having that platform and then connecting — you are basically taking two populations, the large and small sellers, and the large and small buyers, and allowing them to commingle more than they ever had in the past.

Gardner: Karen, as you mentioned from your own experience, when you’re dealing with paper, and you are dealing with one-offs, it’s hard to just keep track of the process, never mind to analyze it. But when we go digital, when we have a platform, when we have business networks at work, then we can start to analyze things for companies — and more broadly into markets.

How do you see this partnership accelerating the ability to leverage analytics, leverage some of the back-end platform technologies with SAP HANA and SAP Ariba, and making more strides toward productivity for your customers?

Data discoveries

Bruck: Right. When everything is tracked, as this will be, because every single purchase will be inside the SAP Ariba platform, it is all part of your “big data.” So then you can actually track it, control it, analyze it, and say, “Hey, maybe these particular purchases mean that we should have long-term contracts, or that our long-term contracts were not priced correctly,” and maybe that’s an opportunity to save money and lower costs.

So once you can track data, you can do a lot of things, and discover new opportunities for either being more efficient or reducing costs – and that’s ultimately what we all want in all the departments of our companies.
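As a small, hypothetical example of the kind of analysis Bruck describes, the sketch below groups tracked purchases by category and flags repeat, meaningful spend as candidates for a negotiated long-term contract. The table, columns, and thresholds are invented for illustration and are not an SAP Ariba data model.

```python
# Illustrative spend analysis over tracked purchase data. Column names and
# thresholds are assumptions for the sketch, not an SAP Ariba data model.
import pandas as pd

purchases = pd.DataFrame({
    "category":  ["office supplies", "office supplies", "lab equipment",
                  "office supplies", "lab equipment", "travel"],
    "supplier":  ["A", "A", "B", "C", "B", "D"],
    "spend_usd": [1200, 950, 30000, 1100, 28000, 4000],
})

by_category = purchases.groupby("category").agg(
    total_spend=("spend_usd", "sum"),
    purchase_count=("spend_usd", "size"),
)

# Categories bought repeatedly with meaningful spend may deserve a negotiated
# long-term contract instead of repeated one-off (spot) purchases.
contract_candidates = by_category[
    (by_category["purchase_count"] >= 3) & (by_category["total_spend"] > 2000)
]
print(contract_candidates)
```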

Gardner: And for those listeners and readers who are interested in taking advantage of these services, and ultimately that great ability to analyze, what should they be doing now to get ready? Are there some things they could do culturally, organizationally, in order to become that more digital business when these services are available to them?

Paper is terrible for companies; you have to rethink your purchase processing in a digital way.

Cabrera Canay: I can talk about our own case, where we are rebuilding our purchase processes. Paper is terrible for companies; you have to rethink your purchase processing in a digital way. Once you do, SAP Ariba is a great solution, and with SAP Ariba Spot Buy we will have the best conditions for the buyers.

Bruck: It’s a natural process. People are going digital and embracing these new trends and technologies. It will make them more efficient. If they get up to speed quickly, it will become less about controlling stuff that they don’t need to control. They will really understand the benefits, so it will be a natural adoption.

Gardner: Tony, coming back full circle, as you have rolled SAP Ariba Spot Buy out from North America to Europe to Asia-Pacific, and now to Latin America — what have you learned in the way people use it?

Alvarez: First, at a macro level, people have found this to be a useful tool to replace some of the contracts that were less important, and so they can rely on marketplaces.

Second, we’ve really found as we’ve deployed in the US that a lot of times multinational companies are like, “Hey, that’s great, I love this, but I really want to use this in Latin America.” So they want to go and get visibility elsewhere.

Turn-key technique

Third, they want a tool that doesn’t require any training. If I’m a procurement professional, I want my users to already be expert at using the tool. We’ve designed this in the process context, and in concert with the content partners. You can just walk up and start using it. You don’t have to be an expert, and it keeps you within the guardrails without even thinking about it.

Gardner: And being a cloud-based, software-as-a-service (SaaS) solution you’re always analyzing how it’s being used — going after that ultimate optimized user experience — and then building those improvements back in on a constant basis?

Alvarez: Exactly. Always.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Awesome Procurement — Survey shows how business networks fuel innovation and business transformation

The next BriefingsDirect digital business insights interview explores the successful habits, practices, and culture that define highly effective procurement organizations.

We’ll uncover unique new research that identifies and measures how innovative companies have optimized their practices to overcome the many challenges facing business-to-business (B2B) commerce.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the traits and best practices of the most successful procurement organizations, please join Kay Ree Lee, Director of Business Analytics and Insights at SAP Ariba. The interview was recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Procurement is more complex than ever: supply chains stretch around the globe, regulation is on the rise, and risk is heightened across many fronts. Despite these challenges, innovative companies have figured out how to overcome them, and you have uncovered some of their secrets through your Annual Benchmarking Survey. Tell us about your research and your findings.

Lee: Every year we conduct a large benchmark program benefiting our customers. It combines a traditional survey with data from the procurement applications, as well as from the business network.

Kay Ree Lee

Lee

This past year, more than 200 customers participated, covering more than $400 billion in spend. We analyzed the quantitative and qualitative responses of the survey and identified the intersection between those responses for top performers compared to average performers. This has allowed us to draw correlations between what top performers did well and the practices that drove those achievements.

Gardner: What’s changed from the past, what are you seeing as long-term trends?

Lee: There are three things that are quite different from when we last talked about this a year ago.

The number one trend that we see is that digital procurement is gaining momentum quickly. A lot of organizations are now offering self-service tools to their internal stakeholders. These self-service tools enable the user to evaluate and compare item specifications and purchase items in an electronic marketplace, which allows them to operate 24×7, around-the-clock. They are also utilizing digital networks to reach and collaborate with others on a larger scale.

The second trend that we see is that while risk management is generally acknowledged as important and critical, for the average company, a large proportion of their spend is not managed. Our benchmark data indicates that an average company manages 68% of their spend. This leaves 32% of spend that is unmanaged. If this spend is not managed, the average company is also probably not managing their risk. So, what happens when something unexpected occurs to that non-managed spend?

The third trend that we see is related to compliance management. We see compliance management as a way for organizations to deliver savings to the bottom line. Capturing savings through sourcing and negotiation is a good start, but at the end of the day, eliminating loopholes through a focus on implementation and compliance management is how organizations deliver and realize negotiated savings.

Gardner: You have uncovered some essential secrets — or the secret sauce — behind procurement success in a digital economy. Please describe those.

Five elements driving procurement processes

Lee: From the data, we identified five key takeaways. First, we see that procurement organizations continue to expand their sphere of influence to greater depth and quality within their organizations. This is important because it shows that the procurement organization and the work that procurement professionals are involved in matters and is appreciated within the organization.

The second takeaway is that – while cost reduction savings is near and dear to the heart of most procurement professionals — leading organizations are focused on capturing value beyond basic cost reduction. They are focused on capturing value in other areas and tracking that value better.

The third takeaway is that digital procurement is firing on all cylinders and is front and center in people’s minds. This was reflected in the transactional data that we extracted.

The fourth takeaway is related to risk management. This is a key focus area that goes well beyond just tracking news related to your suppliers.

The fifth takeaway is that compliance management and closing purchasing loopholes are what will help procurement deliver bottom-line savings.

Gardner: What, then, are some of the best practices that are driving procurement organizations to have a strategic impact at their companies, culturally?

Lee: To have a strategic impact in the business, procurement needs to be proactive in engaging the business. They should have a mentality of helping the business solve business problems as opposed to asking stakeholders to follow a prescribed procurement process. Playing a strategic role is a key practice that drives impact.

Another practice that drives strategic impact is the ability to utilize and adopt technology to your advantage through the use of digital networks.

They should also focus on broadening the value proposition of procurement. We see leading organizations placing emphasis on contributing to revenue growth, or increasing their involvement in product development, or co-innovation that contributes to a more efficient and effective process.

Another practice that drives strategic impact is the ability to utilize and adopt technology to your advantage through the use of digital networks, system controls to direct compliance, automation through workflow, et cetera.

These are examples of practices and focus areas that are becoming more important to organizations.

Using technology to track technology usage

Gardner: In many cases, we see the use of technology having a virtuous adoption cycle in procurement. So the more technology used, the better they become at it, and the more technology can be exploited, and so on. Where are we seeing that? How are leading organizations becoming highly technical to gain an advantage?

Lee: Companies that adopt new technology capabilities are able to elevate their performance and differentiate themselves through their capabilities. This is also just a start. Procurement organizations are pivoting towards advanced and futuristic concepts, and leaving behind the single-minded focus on cost reduction and cost efficiency.

Digital procurement utilizing electronic marketplaces, virtual catalogs, gaining visibility into the lifecycle of purchase transactions, predictive risk management, and utilizing large volumes of data to improve decision-making – these are key capabilities that benefit the bold and the future-minded. This enables the transformation of procurement, and forms new roles and requirements for the future procurement organization.

Gardner: We are also seeing more analytics become available as we have more data-driven and digital processes. Is there any indication from your research that procurement people are adopting data-scientist-ways of thinking? How are they using analysis more now that the data and analysis are available through the technology?

If you extract all of that data, cleanse it, mine it, and make sense out of it, you can then make informed business decisions and create valuable insights.

Lee: You are right. The users of procurement data want insights. We are working with a couple of organizations on co-innovation projects. These organizations actively research, analyze, and use their data to answer questions such as:

  • How does an organization validate that the prices they are paying are competitive in the marketplace?
  • After an organization conducts a sourcing event and implements the categories, how do they actually validate that the price paid is what was negotiated?
  • How do we categorize spend accurately, particularly if a majority of spend is services spend where the descriptions are non-standard?
  • Are we using the right contracts with the right pricing?

As you can imagine, when people enter transactions in a system, not all of it is contract-based or catalog-based. There is still a lot of free-form text. But if you extract all of that data, cleanse it, mine it, and make sense out of it, you can then make informed business decisions and create valuable insights. This goes back to the managing compliance practice we talked about earlier.
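One possible way to attack that free-form-text problem is a simple supervised text classifier, sketched below. The tiny training set and category labels are invented for illustration; a production system would need far more data, review, and category governance.

```python
# Illustrative classifier for categorizing free-form spend descriptions.
# The tiny training set and labels are invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "contract engineering services for Q3 site build-out",
    "temporary staffing for warehouse operations",
    "laptop docking stations and monitors",
    "annual software license renewal",
    "consulting services statement of work phase 2",
    "desktop printers and toner cartridges",
]
categories = ["services", "services", "IT hardware",
              "IT software", "services", "IT hardware"]

# TF-IDF features over unigrams and bigrams feeding a simple linear model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(descriptions, categories)

new_items = ["staffing services for distribution center",
             "replacement monitors for finance team"]
print(list(zip(new_items, model.predict(new_items))))
```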

They are also looking to answer questions like, how do we scale supplier risk management to manage all of our suppliers systematically, as opposed to just managing the top-tier suppliers?

These two organizations are taking data analysis further in terms of creating advantages that begin to imbue excellence into modern procurement and across all of their operations.

Gardner: Kay Ree, now that you have been tracking this Benchmark Survey for a few years, and looking at this year’s results, what would you recommend that people do based on your findings?

Future focus: Cost-reduction savings and beyond

Lee: There are several recommendations that we have. One is that procurement should continue to expand their span of influence across the organization. There are different ways to do this but it starts with an understanding of the stakeholder requirements.

The second is about capturing value beyond cost-reduction savings. From a savings perspective, the recommendation we have is to continue to track sourcing savings — because cost-reduction savings are important. But there are other measures of value to track beyond cost savings. That includes things like contribution to revenue, involvement in product development, et cetera.

The third recommendation relates to adopting digital procurement by embracing technology. For example, SAP Ariba has recently introduced some innovations. I think the user really has an advantage in terms of going out there, evaluating what is out there, trying it out, and then seeing what works for them and their organization.

As organizations expand their footprint globally, the fourth recommendation focuses on transaction efficiency. The way procurement can support organizations operating globally is by offering self-service technology so that they can do more with less. With self-service technology, no one in procurement needs to be there to help a user buy. The user goes on the procurement system and creates transactions while their counterparts in other parts of the world may be offline.

The fifth recommendation is related to risk management. When a lot of organizations say “risk management,” they are really only tracking news related to their suppliers. But risk management includes things like predictive analytics, predictive risk measures beyond your strategic suppliers, looking deeper into supply chains, and across all your vendors. If you can measure risk for your suppliers, why not make it systematic? We now have the ability to manage a larger volume of suppliers, in fact to manage all of them. The ones that bubble to the top, the ones that are the most risky, those are the ones that you create contingency plans for. That helps organizations really prepare to respond to disruptions in their business.
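A minimal sketch of what systematic, whole-vendor-base risk scoring might look like; the factors, weights, and data are invented for illustration.

```python
# Illustrative supplier risk scoring across the full vendor base. The factors,
# weights, and scores are invented for the sketch.
WEIGHTS = {"financial_stress": 0.4, "geo_exposure": 0.3, "single_source": 0.3}

suppliers = [
    {"name": "Alpha Metals",   "financial_stress": 0.8, "geo_exposure": 0.6, "single_source": 1.0},
    {"name": "Beta Logistics", "financial_stress": 0.2, "geo_exposure": 0.3, "single_source": 0.0},
    {"name": "Gamma Plastics", "financial_stress": 0.5, "geo_exposure": 0.9, "single_source": 0.0},
]

def risk_score(supplier: dict) -> float:
    """Weighted sum of normalized risk factors (0 = low risk, 1 = high risk)."""
    return sum(WEIGHTS[factor] * supplier[factor] for factor in WEIGHTS)

# The riskiest suppliers "bubble to the top" and get contingency plans first.
ranked = sorted(suppliers, key=risk_score, reverse=True)
for supplier in ranked:
    print(f"{supplier['name']:15s} risk={risk_score(supplier):.2f}")
```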

The last recommendation is around compliance management, which includes internal and external compliance. So, internal adherence to procurement policies and procedures, and then also external following of governmental regulations. This helps the organization close all the loopholes and ensure that sourcing savings get to the bottom line.

Be a leader, not a laggard

Gardner: When we examine and benchmark companies through this data, we identify leaders, and perhaps laggards — and there is a delta between them. In trying to encourage laggards to transform — to be more digital, to take upon themselves these recommendations that you have — how can we entice them? What do you get when you are a leader? What defines the business value that you can deliver when you are taking advantage of these technologies, following these best practices?

Lee: Leading organizations see higher cost reduction savings, process efficiency savings and better collaboration internally and externally. These benefits should speak for themselves and entice both the average and the laggards to strive for improvements and transformation.

From a numbers perspective, top performers achieve 9.7% savings as a percent of sourced spend. This translates to approximately $20M higher savings per $B in spend compared to the average organization.

We talked about compliance management earlier. A 5% increase in compliance increases realized savings of $4.4M per $1B in spend. These are real hard dollar savings that top performers are able to achieve.
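In back-of-the-envelope terms, assuming a $1 billion sourced-spend base, the benchmark figures work out as follows; the implied average savings rate is derived from the stated $20M gap rather than reported directly.

```python
# Back-of-the-envelope math behind the benchmark figures, assuming a $1B
# sourced-spend base. The implied average savings rate is derived from the
# stated $20M-per-$1B gap, not reported directly in the survey.
spend = 1_000_000_000

top_performer_rate = 0.097                            # 9.7% savings on sourced spend
top_performer_savings = spend * top_performer_rate    # $97M
average_savings = top_performer_savings - 20_000_000  # ~$77M implied for the average org
print(f"implied average rate: {average_savings / spend:.1%}")  # ~7.7%

# A 5-point compliance improvement is worth roughly $4.4M per $1B of spend.
compliance_gain = 4_400_000
print(f"realized savings from +5% compliance: ${compliance_gain:,}")
```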

In addition, top performers are able to attract a talent pool that will help the procurement organization perform even better. If you look at some of the procurement research, industry analysts and leaders are predicting that there may be a talent shortage in procurement. But, as a top performer, if you go out and recruit, it is easier to entice talent to the organization. People want to do cool things and they want to use new technology in their roles.

Gardner: Wrapping up, we are seeing some new and compelling technologies here at Ariba LIVE 2017 — more use of artificial intelligence (AI), and increased use of predictive tools brought into context so that they can be of value to procurement during the lifecycle of a process.

As we think about the future, and more of these technologies become available, what is it that companies should be doing now to put themselves in the best position to take advantage of all of that?

Curious org

Lee: It’s important to be curious about the technology available in the market and perhaps structure the organization in such a way that there is a team of people on the procurement team who are continuously evaluating the different procurement technologies from different vendors out there. Then they can make decisions on what best fits their organization.

It takes people who can look ahead, evaluate, and then talk about the requirements, understand the architecture, and evaluate what’s out there and what would make sense for them in the future. This is a complex role. He or she has to understand the current architecture of the business, the requirements from the stakeholders, and then evaluate what technology is available. They must then determine if it will assist the organization in the future, and if adopting these solutions provides a return on investment and ongoing payback.

So I think being curious, understanding the business really well, and then wearing a technology hat to understand what’s out there are key. You can then be helpful to the organization and envision how adopting these newer technologies will play out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Experts define new ways to manage supply chain risk in a digital economy

The next BriefingsDirect digital business thought leadership panel discussion explores new ways that companies can gain improved visibility, analytics, and predictive responses to better manage supply chain risk in the digital economy.

The panel examines how companies such as Nielsen are using cognitive computing search engines, and even machine learning and artificial intelligence (AI), to reduce risk in their overall buying and acquisitions.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the exploding sophistication around gaining insights into advanced business commerce, we welcome James Edward Johnson, Director of Supply Chain Risk Management and Analysis at Nielsen; Dan Adamson, Founder and CEO of OutsideIQ in Toronto, and Padmini Ranganathan, Vice President of Products and Innovation at SAP Ariba.

The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Padmini, we heard at SAP Ariba LIVE that risk is opportunity. That stuck with me. Are the technologies really now sufficient that we can fully examine risks to such a degree that we can turn that into a significant business competitive advantage? That is to say, can those who take risk seriously really get a big jump over their competitors?